Adding PowerBI to SCCM

Found this article; it will be helpful in the future:

How to integrate Power BI Report Server with Configuration Manager reporting


Disabling Dropbox from Installing or Running if Installed

Recently I was on a quest to disable the Dropbox program from running on company-owned (domain-joined) machines. There were lots of hacks to make it work, but I finally found a solution, worded rather cryptically, on Experts Exchange by a user named McKnife. Long story short, you can use Software Restriction Policies to do this, but his solution was more elegant: it blocks Dropbox programs based on the certificate used to sign them rather than on a file path or other things that might change often. This not only blocks the Dropbox program if it’s already installed but also prevents a user from installing it in the first place. Here is my expanded version of his instructions.

First, download the Dropbox installer. Right-click it, select Properties, and go to the Digital Signatures tab. Select the first signature (SHA1) and click “Details”. Click “View Certificate”, then the Details tab, then “Copy to File…”. This lets you export the certificate. Click Next, choose “Base-64 encoded X.509 (.CER)”, and click Next again. Save the certificate as something like “Dropbox SHA1 Cert.CER”. Once that one is exported, repeat the procedure for the SHA256 certificate.

Once you have both certificates, open Group Policy Management. If you already have a software restriction policy, edit it; if not, I suggest you create a new one. Navigate to Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Software Restriction Policies -> Additional Rules. Right-click and create a “New Certificate Rule”. Browse for the SHA1 cert and make sure the Security Level is set to Disallow. Give it a description such as “Dropbox SHA1 Certificate”. When you click OK, if you didn’t have any certificate rules before, it will prompt you to turn them on and display the “Enforcement Properties” page. At the bottom, check “Enforce certificate rules”, then click “OK”. Repeat for the SHA256 certificate.

Once Group Policy updates, Dropbox will no longer start, and executing the exe or installer directly will give you a nice error message.

Side note: once this policy is in place you will also not be able to uninstall Dropbox, since the same certificate is used to sign the uninstaller. Keep that in mind: you would have to temporarily stop enforcing certificate rules to get it uninstalled.


Why the Cloud?

A decision that I have been seeing more and more recently is companies moving their entire infrastructure into the cloud.  Personally, I see this as a recipe for disaster!

Companies set themselves up so that their entire infrastructure is cloud based, but they only purchase a single circuit to the internet.  What happens if/when that circuit fails?  I’ll tell you what!  You have an entire company sitting around playing solitaire because all their files are internet based.  The networking team is scrambling because the network is down, but there is not a whole lot that can be done if the link was cut by a backhoe operator who misread the plans about where he was supposed to start digging.  Don’t laugh, it happens.

My solution to this is a hybrid configuration.  Keep a third or so of your processing power and the majority of your file servers on premises.  Use OneDrive, or whatever your file storage solution of choice is, strictly as a backup.  That way, if your link is down you can still work from local storage and then back up to OneDrive when the link is restored.



Ultimate Audit Policy Guide

This is the ultimate guide to Windows audit and security policy settings.

In this guide, I will share my tips for audit policy settings, password and account policy settings, monitoring events, benchmarks and much more.

Table of contents:

  • What is Windows Auditing?
  • Use The Advanced Audit Policy Configuration
  • Configure Audit Policy for Active Directory
  • Configure Audit Policy for Workstations and Servers
  • Configure Event Log Size and Retention Settings
  • Recommended Password & Account Lockout Policy
  • Recommended Audit Policy Settings
  • Monitor These Events for Compromise
  • Centralize Event Logs
  • Audit Policy Benchmarks
  • Planning Your Audit Policy


What is Windows Auditing?
A Windows audit policy defines what types of events you want to keep track of in a Windows environment. For example, when a user account gets locked out or a user enters a bad password, these events will generate a log entry when auditing is turned on. An audit policy is important for maintaining security, detecting security incidents, and meeting compliance requirements.

Use the Advanced Audit Policy Configuration
When you look at the audit policies you will notice two sections: the basic audit policy and the advanced audit policy. When possible, you should use only the advanced audit policy settings, located under Security Settings\Advanced Audit Policy Configuration.

The advanced audit policy settings were introduced in Windows Server 2008 and expanded the audit policy settings from nine to 53. The advanced settings allow you to define a more granular audit policy and log only the events you need. This is helpful because some audit settings can generate a massive amount of logs.

Important: Don’t use both the basic audit policy settings and the advanced settings located under Security Settings\Advanced Audit Policy Configuration. Using both can cause issues and is not recommended.

Microsoft groups the advanced audit policy into the following categories; each category contains a set of policies.

  • Account Logon
  • Account Management
  • Detailed Tracking
  • DS Access
  • Logon/Logoff
  • Object Access
  • Policy Change
  • Privilege Use
  • System
  • Global Object Access Auditing



Threats and Countermeasures Guide: Advanced Security Audit Policy

Configure Audit Policy for Active Directory (For all Domain Controllers)
By default, there is a bare minimum audit policy configured for Active Directory. You will need to modify the default domain controller policy or create a new one.

Follow these steps to enable an audit policy for Active Directory.

Step 1: Open the Group Policy Management Console
Step 2: Edit the Default Domain Controllers Policy
Right-click the policy and select Edit.

Step 3: Browse to the Advanced Audit Policy Configuration
Now browse to the Advanced Audit Policy Configuration

Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Advanced Audit Policy Configuration

Step 4: Define Audit Settings
Now you just need to go through each audit policy category and define the events you want to audit. See the recommended audit policy section for the recommended settings.

Configure Audit Policy on Workstations and Servers
It is highly recommended that you enable an audit policy on all workstations and servers. Most incidents start at the client device; if you are not monitoring these systems, you could be missing important information.

To configure an audit policy for workstations and servers, you will need to create a new audit policy, separate from your domain controllers’ policy. I would not apply this policy to the root of the domain; it is best to have all your workstations and servers in a separate organizational unit and apply the audit policy to that OU.

You can see below that I have an organizational unit called ADPRO Computers. This organizational unit contains sub-OUs for department workstations and a server OU for all the servers. I will create a new audit policy on the ADPRO Computers OU; this policy will target all devices in that OU.

Configure Event Log Size and Retention Settings
It is important to define the security event log size and retention settings. If these settings are not defined you may overwrite and lose important audit data.

Important: The logs generated on servers and workstations from the audit policy are intended for short term retention. To keep historical audit logs for weeks, months or years you will need to set up a centralized logging system. See the section below for recommendations.

In your audit policy, you can define the event log settings at Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Event Log

Here are the recommended settings:

Maximum application log size: 4,194,240 KB
Maximum security log size: 4,194,240 KB
Maximum system log size: 4,194,240 KB
Even with these log settings configured you could still overwrite events in a short period of time. It all depends on your audit policy and how many users you have: tracking bad password attempts for 2,000 users will generate far more events than for 20.
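To gauge whether these sizes are adequate, you can do some rough arithmetic on event volume. A minimal sketch, assuming a 500-byte average event size (an illustrative figure, not a measurement; check your own environment):

```python
# Rough estimate of how long a fixed-size event log lasts before it
# wraps. The 500-byte average event size is an assumed value for
# illustration only; real event sizes vary by audit category.

def retention_hours(log_size_kb, events_per_hour, avg_event_bytes=500):
    """Hours of history a circular event log can hold before wrapping."""
    capacity_events = (log_size_kb * 1024) / avg_event_bytes
    return capacity_events / events_per_hour

# A 4,194,240 KB (~4 GB) Security log at 100,000 events/hour holds
# roughly 86 hours (about 3.5 days) of events.
print(round(retention_hours(4_194_240, 100_000), 1))
```

Under those assumptions a busy domain controller wraps its Security log in a few days, which is the argument for the centralized logging section later in this guide.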


Recommended settings for event log sizes in Windows

Recommended Password and Account Lockout Policy
To successfully audit user accounts, you need to ensure the password and account lockout policies are configured. If you are auditing for account lockouts but don’t have a lockout threshold set, you will never see those events.

These settings are from the MS Security baseline Windows 10 and Server 2016 document.

Password Policy
GPO location: Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Account Policies -> Password Policy

Enforce password history: 24 passwords remembered
Maximum password age: 60 days
Minimum password age: 1 day
Minimum password length: 14 characters
Password must meet complexity requirements: Enabled
Store passwords using reversible encryption: Disabled

Account Lockout Policy
GPO location: Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Account Policies -> Account Lockout Policy

Account lockout duration: 15 minutes
Account lockout threshold: 10 invalid logon attempts
Reset lockout counter after: 15 minutes
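The point about the lockout threshold can be made concrete with a toy model: with a threshold of 0, no lockout ever occurs, so event 4740 never fires no matter what you audit. This is only an illustration of the threshold/reset-counter interaction; the numbers in it are hypothetical, not baseline values.

```python
# Toy model of the lockout threshold / reset-counter interaction. A
# threshold of 0 disables lockout entirely, which is why auditing for
# lockouts without a threshold yields nothing. Times are in minutes.

def is_locked_out(attempt_times, threshold, reset_after):
    """True if bad-password attempts ever trip the lockout threshold."""
    if threshold == 0:          # 0 means account lockout is disabled
        return False
    window = []
    for t in sorted(attempt_times):
        # keep only attempts newer than the reset window, then add this one
        window = [w for w in window if t - w < reset_after] + [t]
        if len(window) >= threshold:
            return True
    return False

print(is_locked_out([0, 1, 2], threshold=3, reset_after=15))    # True
print(is_locked_out([0, 20, 40], threshold=3, reset_after=15))  # False
print(is_locked_out([0, 1, 2], threshold=0, reset_after=15))    # False
```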

Microsoft Security compliance toolkit

Recommended Audit Policy Settings
These settings are from the MS Security baseline Windows 10 and Server 2016 document.

Recommended domain controller security and audit policy settings.

GPO Policy location: Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Advanced Audit Policy Configuration

Account Logon
  Audit Credential Validation: Success and Failure
  Audit Kerberos Authentication Service: Not configured
  Audit Kerberos Service Ticket Operations: Not configured
  Audit Other Account Logon Events: Not configured

Account Management
  Audit Application Group Management: Not configured
  Audit Computer Account Management: Success
  Audit Distribution Group Management: Not configured
  Audit Other Account Management Events: Success and Failure
  Audit Security Group Management: Success and Failure
  Audit User Account Management: Success and Failure

Detailed Tracking
  Audit DPAPI Activity: Not configured
  Audit Plug and Play Events: Success
  Audit Process Creation: Success
  Audit Process Termination: Not configured
  Audit RPC Events: Not configured
  Audit Token Right Adjusted: Not configured

DS Access
  Audit Detailed Directory Service Replication: Not configured
  Audit Directory Service Access: Success and Failure
  Audit Directory Service Changes: Success and Failure
  Audit Directory Service Replication: Not configured

Logon/Logoff
  Audit Account Lockout: Success and Failure
  Audit User / Device Claims: Not configured
  Audit Group Membership: Success
  Audit IPsec Extended Mode: Not configured
  Audit IPsec Main Mode: Not configured
  Audit Logoff: Success
  Audit Logon: Success and Failure
  Audit Network Policy Server: Not configured
  Audit Other Logon/Logoff Events: Not configured
  Audit Special Logon: Success

Object Access
  Audit Application Generated: Not configured
  Audit Certification Services: Not configured
  Audit Detailed File Share: Not configured
  Audit File Share: Not configured
  Audit File System: Not configured
  Audit Filtering Platform Connection: Not configured
  Audit Filtering Platform Packet Drop: Not configured
  Audit Handle Manipulation: Not configured
  Audit Kernel Object: Not configured
  Audit Other Object Access Events: Not configured
  Audit Registry: Not configured
  Audit Removable Storage: Success and Failure
  Audit SAM: Not configured
  Audit Central Access Policy Staging: Not configured

Policy Change
  Audit Audit Policy Change: Success and Failure
  Audit Authentication Policy Change: Success
  Audit Authorization Policy Change: Success
  Audit Filtering Platform Policy Change: Not configured
  Audit MPSSVC Rule-Level Policy Change: Not configured
  Audit Other Policy Change Events: Not configured

Privilege Use
  Audit Non Sensitive Privilege Use: Not configured
  Audit Other Privilege Use Events: Not configured
  Audit Sensitive Privilege Use: Success and Failure

System
  Audit IPsec Driver: Success and Failure
  Audit Other System Events: Success and Failure
  Audit Security State Change: Success
  Audit Security System Extension: Success and Failure
  Audit System Integrity: Success and Failure

Global Object Access Auditing
  File System: Not configured
  Registry: Not configured

I recommend you download the Microsoft Security Compliance Toolkit. It has an Excel document with recommended security and audit settings for Windows 10, member servers, and domain controllers. In addition, the toolkit has additional documents and files to help you apply security and audit settings.

Centralize Windows Event Logs
When you enable a security and audit policy on all systems, those event logs are stored locally on each system. When you need to investigate an incident or run audit reports, you will have to go through each log individually on each computer. Another concern: what if a system crashes and you are unable to access its logs?

Also, don’t forget those local logs are intended for short-term storage. In large environments, local logs will be overwritten by new events in a short period of time.

Centralizing your logs will save you time, ensure logs are available and make it easier to report and troubleshoot security incidents. There are many tools out there that can centralize windows event logs.

Below is a list of free and premium tools that will centralize Windows event logs. Some of the free tools require a bit of work and may need additional software to visualize and report on the logs. If you have the budget, I recommend a premium tool; they are much easier to set up and save you a ton of time.

SolarWinds Log Analyzer (Premium tool, 30-day FREE trial)
Windows Event Collector (Free, requires additional tools to visualize and report on data)
ManageEngine Audit Plus – (Premium tool)
Splunk – (Premium tool, a popular tool for analyzing various log files)
Elastic Stack – (Free download)
SolarWinds Event Log Consolidator (Free Download)
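For the built-in Windows Event Collector route, forwarding is driven by a subscription whose event filter is an XPath query over the Security log. A minimal sketch of such a filter, using a few of the event IDs from the monitoring list in this guide (trim or extend the ID list to suit your environment):

```xml
<QueryList>
  <Query Id="0" Path="Security">
    <!-- Account lockouts (4740), security log cleared (1102), and
         members added to privileged groups (4728, 4732, 4756) -->
    <Select Path="Security">
      *[System[(EventID=4740 or EventID=1102 or EventID=4728
                or EventID=4732 or EventID=4756)]]
    </Select>
  </Query>
</QueryList>
```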
Monitor These Events for Compromise
Here is a list of events you should be monitoring and reporting on.

Logon failures – Event ID 4625, 4771
Successful logons – Event ID 4624
Failures due to bad passwords – Event ID 4625
User Account Locked out – Event ID 4740
User Account Unlocked – Event ID 4767
User changed password – Event ID 4723
User Added to Privileged Group – Event ID 4728, 4732, 4756
Member added to a group – Event ID 4728, 4732, 4756 , 4761, 4746, 4751
Member removed from group – Event ID 4729, 4733, 4757, 4762, 4747, 4752
Security log cleared – Event ID 1102
Computer Deleted – Event ID 4743
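If you export or collect raw events, triage can be as simple as matching against the IDs above. A minimal sketch, assuming records have already been parsed into dictionaries (in practice you would pull them with a SIEM or Get-WinEvent; the sample records here are hand-made):

```python
# Flag security events worth alerting on, using a subset of the
# monitored event IDs listed above.

MONITORED = {
    4625: "Logon failure",
    4740: "User account locked out",
    4767: "User account unlocked",
    4723: "User changed password",
    1102: "Security log cleared",
}

def flag_events(records):
    """Return (event_id, description) pairs for records of interest."""
    return [(r["EventID"], MONITORED[r["EventID"]])
            for r in records if r["EventID"] in MONITORED]

sample = [{"EventID": 4624}, {"EventID": 4740}, {"EventID": 1102}]
print(flag_events(sample))
```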
Audit Policy Benchmarks
How do you know for sure if your audit policy is getting applied to your systems? How does your audit policy compare to industry best practices? In this section, I’ll show you a few ways you can audit your own systems.

Using auditpol
auditpol is a built-in command that can set and query the audit policy on a system. To view the current audit policy, run this command on your local computer:

auditpol /get /category:*

You can check these settings against what is set in your group policy to verify everything is working.
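One way to automate that check is to parse the auditpol output into a name/setting map and diff it against what your GPO should produce. A sketch; the sample text below imitates auditpol’s two-column layout rather than being captured from a real machine:

```python
import re

# Parse `auditpol /get /category:*` style output: subcategory lines are
# indented and separated from their setting by a run of spaces;
# category lines start at column 0 and are skipped.

def parse_auditpol(text):
    """Map subcategory name -> configured setting."""
    settings = {}
    for line in text.splitlines():
        m = re.match(r"\s{2,}(\S.*?)\s{2,}(\S.*)$", line)
        if m:
            settings[m.group(1)] = m.group(2)
    return settings

sample = """Category/Subcategory                    Setting
Account Logon
  Credential Validation                   Success and Failure
  Kerberos Authentication Service         No Auditing
"""
policy = parse_auditpol(sample)
print(policy["Credential Validation"])
```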

Microsoft Security Toolkit
I mentioned this toolkit in the recommended settings section, but it is worth mentioning again. It contains a spreadsheet with the Microsoft-recommended audit and security policy settings. It also includes GPO settings, an installation script, and GPO reports. It is a great reference for comparing how your audit policy stacks up against Microsoft’s recommendations.

CIS Benchmarks
CIS Benchmarks provide configuration guidelines for 140+ systems, including browsers, operating systems, and applications.

CIS Benchmarks

CIS provides a tool that can automatically check your systems’ settings against its benchmarks. This is by far the best method for testing your audit policy against industry benchmarks. The pro version requires a membership; there is a free version with limited features.


Planning Your Audit Policy
Here are some tips for an effective audit policy deployment.

Identify your Windows audit goals
Don’t just go and enable all the audit settings; first understand your organization’s overall security goals. Enabling every audit rule can generate lots of noise and make your security efforts harder than they need to be.

Know your Network Environment
Knowing your network, Active Directory architecture, OU design, and security groups is fundamental to a good audit policy. Deploying an audit policy to specific users or assets will be challenging if you do not understand your environment or have a poor logical grouping of your resources.

Group Policy
It is best to deploy your audit policy with group policy. Group policy gives you a centralized location to manage and deploy your audit settings to users and assets within the domain.

How Will You Obtain Event Data?
You will need to decide how event data will be reviewed.

Will the data be kept on local computers?
Will the logs be collected from each system and put into a centralized logging system?

Planning and deploying advanced security audit policies


SCCM Maintenance

Daily Maintenance Tasks

  1. Verify that the nightly backup was successful
  2. Check free disk space on all volumes on all site systems (use a PowerShell script for that).
  3. Check the ConfigMgr database size
  4. Check Site Database Status (Monitoring workspace)
  5. Check ConfigMgr inboxes for backlogs (again, PowerShell is useful, or simply tools like WinDirStat)
  6. Review Windows Event logs on site systems
  7. Check for and remove obsolete clients, and check for client errors
  8. Check the Content Distribution Report (script or dbjobmgr)
  9. Check that ADRs have run successfully (definition updates run daily)
  10. Back up task sequences and endpoint protection policies (six copies kept)
  11. Clean up old IIS logs so they don’t build up
  12. Back up custom SCCM reports
  13. Clean up any systems still in collections with OSD task sequence deployments
  14. Clean up old SCCM users 60 days after they disappear from Active Directory
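For daily task 2, the disk-space check is easy to script. The original note suggests a PowerShell script; here is an equivalent sketch in Python (the 10% free-space threshold is an example value, not a recommendation):

```python
import shutil

# Flag volumes whose free space falls below a percentage threshold.
# low_on_space() is the pure decision helper; check_volumes() applies
# it to real paths via shutil.disk_usage.

def low_on_space(total_bytes, free_bytes, min_free_pct=10.0):
    """True when free space falls below min_free_pct of the volume."""
    return (free_bytes / total_bytes) * 100 < min_free_pct

def check_volumes(paths, min_free_pct=10.0):
    flagged = []
    for p in paths:
        usage = shutil.disk_usage(p)
        if low_on_space(usage.total, usage.free, min_free_pct):
            flagged.append(p)
    return flagged

print(low_on_space(100 * 2**30, 5 * 2**30))   # 5% free on a 100 GB volume
```

Schedule it with Task Scheduler (or cron on Linux site systems) and email yourself the flagged list.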

Weekly Maintenance Tasks

  1. Review all daily tasks
  2. Review disk space usage on all site systems and compare to the previous week (to spot trends)
  3. Verify that predefined weekly maintenance tasks are running successfully
  4. Review collection evaluation runtimes
  5. Review software updates compliance reports
  6. Review client health (again to see trends)
  7. Check SQL Maintenance, re-indexing etc.
  8. Verify that networks haven’t changed (boundaries etc.)
  9. Verify that old IIS Log files have been deleted

Monthly Maintenance

To be added; these are for preparing for upgrades and establishing long-term trends. Usually scheduled meetings with workplace managers and other team members.

  1. Update, test, and deploy OSD reference images. Delete inactive computer accounts.

Quarterly to semi-annual Maintenance Tasks

  1. Review the security plan for any needed changes
  2. Change accounts and passwords if necessary according to your security plan
  3. Review the maintenance schedule for upgrades to the ConfigMgr platform
  4. Review the Configuration Manager hierarchy design for any needed changes
  5. Check network performance to ensure changes have not been made that affect site operations
  6. Review the disaster recovery plan for any needed changes
  7. Perform a site recovery according to the disaster recovery plan in a test lab
  8. Check hardware for any errors or hardware updates available
  9. Check overall health of site

How do you reconfigure a machine’s time configuration to sync from the domain hierarchy?

Normally the PDC FSMO at the forest root domain will synchronize from an external time server. All other domain controllers and domain members should synchronize from the domain hierarchy. To configure this on every machine (except the forest root PDC FSMO):

Open an elevated command prompt
Run commands:
w32tm /config /syncfromflags:DOMHIER /update
w32tm /resync /nowait
net stop w32time
net start w32time
If this does not work, try again, but this time add /rediscover to the resync command: w32tm /resync /rediscover

You can check the time source and state using:

w32tm /query /source
w32tm /monitor


Layer 2 of the OSI Model – Data Link Layer

The 2nd layer of the OSI model is called the Data Link Layer.  This is where the method of networking is determined (wired, wireless, Token Ring, etc.).
Data Link Layer (Layer 2)

The second-lowest layer (layer 2) in the OSI Reference Model stack is the data link layer, often abbreviated “DLL” (though that abbreviation has other meanings as well in the computer world). The data link layer, also sometimes just called the link layer, is where many wired and wireless local area networking (LAN) technologies primarily function. For example, Ethernet, Token Ring, FDDI and 802.11 (“wireless Ethernet” or “Wi-Fi”) are all sometimes called “data link layer technologies”. The set of devices connected at the data link layer is what is commonly considered a simple “network”, as opposed to an internetwork.

Data Link Layer Sublayers: Logical Link Control (LLC) and Media Access Control (MAC)

The data link layer is often conceptually divided into two sublayers: logical link control (LLC) and media access control (MAC). This split is based on the architecture used in the IEEE 802 Project, the IEEE working group responsible for creating the standards that define many networking technologies (including all of the ones mentioned above except FDDI). By separating LLC and MAC functions, interoperability of different network technologies is made easier, as explained in the earlier discussion of networking model concepts.

Data Link Layer Functions

The following are the key tasks performed at the data link layer:

Logical Link Control (LLC): Logical link control refers to the functions required for the establishment and control of logical links between local devices on a network. As mentioned above, this is usually considered a DLL sublayer; it provides services to the network layer above it and hides the rest of the details of the data link layer to allow different technologies to work seamlessly with the higher layers. Most local area networking technologies use the IEEE 802.2 LLC protocol.

Media Access Control (MAC): This refers to the procedures used by devices to control access to the network medium. Since many networks use a shared medium (such as a single network cable, or a series of cables that are electrically connected into a single virtual medium) it is necessary to have rules for managing the medium to avoid conflicts. For example, Ethernet uses the CSMA/CD method of media access control, while Token Ring uses token passing.

Data Framing: The data link layer is responsible for the final encapsulation of higher-level messages into frames that are sent over the network at the physical layer.

Addressing: The data link layer is the lowest layer in the OSI model that is concerned with addressing: labeling information with a particular destination location. Each device on a network has a unique number, usually called a hardware address or MAC address, that is used by the data link layer protocol to ensure that data intended for a specific machine gets to it properly.

Error Detection and Handling: The data link layer handles errors that occur at the lower levels of the network stack. For example, a cyclic redundancy check (CRC) field is often employed to allow the station receiving data to detect if it was received correctly.
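The CRC mechanism described here is easy to demonstrate: compute a checksum over the payload, flip one bit “in transit”, and the receiver’s recomputed checksum no longer matches. zlib’s CRC-32 stands in for the link-layer frame check sequence in this sketch:

```python
import zlib

# The sender appends a checksum (frame check sequence) computed over the
# payload; the receiver recomputes it and discards the frame on mismatch.

payload = b"example frame payload"
fcs = zlib.crc32(payload)                  # sender's frame check sequence

received = bytearray(payload)
received[3] ^= 0x01                        # flip one bit "in transit"

print(zlib.crc32(payload) == fcs)          # intact frame passes
print(zlib.crc32(bytes(received)) == fcs)  # corruption is detected
```

CRC-32 is guaranteed to catch any single-bit error like the one simulated here, which is exactly the property the data link layer relies on.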

Layer 1 of the OSI Model – Physical Layer

Today’s post is going to be fairly short, as the physical layer is the easiest to understand.
The 1st layer of the OSI model is the Physical layer.  This layer covers the actual physical infrastructure: the cable, the jack, the connection.

Networking OSI Layers

The part of networking that I always have problems with is the OSI model.  Because of this, I am documenting my study of those layers here.  To help with this, I have copied the Dummies guide description of the explanation.  They always say that typing things out helps in memorization so over the next week or so I am going to translate the “Dummies'” definition to the complete Idiot’s definition that I need to finally understand this stuff.

Wish me luck

The layers of the OSI model

The OSI model, known officially as the Open Systems Interconnection Reference Model, was developed by the International Organization for Standardization, which uses the abbreviation ISO. And, yes, the full acronym of the OSI model is ISO OSI.
The OSI model is a layered model that describes how information moves from an application program running on one networked computer to an application program running on another networked computer. In essence, the OSI model prescribes the steps to be used to transfer data over a transmission medium from one networked device to another. The OSI model is a seven-layer model developed around five specific design principles:
  1. Whenever a discrete level of abstraction is required, a new layer should be created.
  2. Each layer of the model should carry out a well-defined function.
  3. The function of each layer should define internationally standardized protocols.
  4. The boundaries of the layers should be placed to minimize the flow of information across interfaces.
  5. There should be enough layers to prevent unnecessary grouping of functions, but few enough that the model remains manageable.

Moving down through the layers

The OSI model breaks the network communications process into seven separate layers. From the top, or the layer closest to the user, down, these layers are:
Layer 7, Application: The Application layer provides services to the software through which the user requests network services. Your computer application software is not on the Application layer. This layer isn’t about applications and doesn’t contain any applications. In other words, programs such as Microsoft Word or Corel are not at this layer, but browsers, FTP clients, and mail clients are.
Layer 6, Presentation: This layer is concerned with data representation and code formatting.
Layer 5, Session: The Session layer establishes, maintains, and manages the communication session between computers.
Layer 4, Transport: The functions defined in this layer provide for the reliable transmission of data segments, as well as the disassembly and assembly of the data before and after transmission.
Layer 3, Network: This is the layer on which routing takes place, and, as a result, is perhaps the most important OSI layer to study for the CCNA test. The Network layer defines the processes used to route data across the network and the structure and use of logical addressing.
Layer 2, Data Link: As its name suggests, this layer is concerned with the linkages and mechanisms used to move data about the network, including the topology, such as Ethernet or Token Ring, and deals with the ways in which data is reliably transmitted.
Layer 1, Physical: The Physical layer’s name says it all. This layer defines the electrical and physical specifications for the networking media that carry the data bits across a network.

Other interesting OSI layer stuff

Layers 5 through 7 are generally referred to as the upper layers. Conversely, Layers 1 through 4 are collectively called the lower layers. Seems obvious, but you’ll see these references on the test.
You need to know the seven layers in sequence, either top-to-bottom or bottom-to-top. Here are some mnemonic phrases to help you remember the layers of the OSI model:
“Please Do Not Throw Salami Pizza Away” — this works for bottom-to-top. If you don’t like salami pizza, then how about seafood or spinach pizza instead?
“All People Seem To Need Data Processing” — a top-to-bottom reminder.
“APS Transports Network Data Physically” — APS refers to Application, Presentation, and Session. This one separates the upper and lower layer groups.
“Please Do Not Tell Secret Passwords Anytime” — Shh! Another bottom-to-top phrase.

Packaging the data

Each layer of the OSI model formats the data it receives to suit the functions to be performed on that layer. In general, the package of data that moves through the layers is called a Protocol Data Unit (PDU). However, as the data is reformatted and repackaged, it takes on unique names on certain layers. Table 1 lists the name each layer uses to refer to a message.
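The per-layer message names that Table 1 refers to are standard and worth memorizing alongside the layers themselves. A quick sketch listing them bottom-to-top (layers 5 through 7 have no special PDU name; their unit is simply “data”):

```python
# The per-layer PDU names: bits at the Physical layer, frames at the
# Data Link layer, packets at the Network layer, segments at the
# Transport layer, and plain "data" at the upper layers.

PDU_NAMES = {
    1: ("Physical", "bits"),
    2: ("Data Link", "frames"),
    3: ("Network", "packets"),
    4: ("Transport", "segments"),
    5: ("Session", "data"),
    6: ("Presentation", "data"),
    7: ("Application", "data"),
}

for layer in sorted(PDU_NAMES, reverse=True):
    name, pdu = PDU_NAMES[layer]
    print(f"Layer {layer} ({name}): {pdu}")
```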

vMotion Without Shared Storage

I have been running into issues on my home lab when it comes to load balancing. Apparently, with the release of vSphere 5.1 there is a new feature that allows you to migrate running virtual machines between hosts without shutting them down first, even without shared storage.

The requirements are as follows:

Requirements and Limitations for vMotion Without Shared Storage

A virtual machine and its host must meet resource and configuration requirements for the virtual machine files and disks to be migrated with vMotion in the absence of shared storage.

vMotion in an environment without shared storage is subject to the following requirements and limitations:

The hosts must be licensed for vMotion.

The hosts must be running ESXi 5.1 or later.

The hosts must meet the networking requirement for vMotion. See vSphere vMotion Networking Requirements.

The virtual machines must be properly configured for vMotion. See Virtual Machine Conditions and Limitations for vMotion in the vSphere Web Client

Virtual machine disks must be in persistent mode or be raw device mappings (RDMs). See Storage vMotion Requirements and Limitations.

The destination host must have access to the destination storage.

When you move a virtual machine with RDMs and do not convert those RDMs to VMDKs, the destination host must have access to the RDM LUNs.

Consider the limits for simultaneous migrations when you perform a vMotion migration without shared storage. This type of vMotion counts against the limits for both vMotion and Storage vMotion, so it consumes both a network resource and 16 datastore resources. See Limits on Simultaneous Migrations in the vSphere Web Client.


Migration with vMotion in Environments Without Shared Storage

You can use vMotion to migrate virtual machines to a different host and datastore simultaneously. In addition, unlike Storage vMotion, which requires a single host to have access to both the source and destination datastore, you can migrate virtual machines across storage accessibility boundaries.

In vSphere 5.1 and later, vMotion does not require environments with shared storage. This is useful for performing cross-cluster migrations, when the target cluster machines might not have access to the source cluster’s storage. Processes that are working on the virtual machine continue to run during the migration with vMotion.

You can place the virtual machine and all of its disks in a single location or select separate locations for the virtual machine configuration file and each virtual disk. In addition, you can change virtual disks from thick-provisioned to thin-provisioned or from thin-provisioned to thick-provisioned. For virtual compatibility mode RDMs, you can migrate the mapping file or convert from RDM to VMDK.

vMotion without shared storage is useful for virtual infrastructure administration tasks similar to vMotion with shared storage or Storage vMotion tasks.

Host maintenance. You can move virtual machines off of a host to allow maintenance of the host.

Storage maintenance and reconfiguration. You can move virtual machines off of a storage device to allow maintenance or reconfiguration of the storage device without virtual machine downtime.

Storage load redistribution. You can manually redistribute virtual machines or virtual disks to different storage volumes to balance capacity or improve performance.



Migrate a Virtual Machine to a New Host and Datastore by Using vMotion in the vSphere Web Client

You can move a virtual machine to another host and move its disks or virtual machine folder to another datastore. With vMotion, you can migrate a virtual machine and its disks and files while the virtual machine is powered on.

You can perform vMotion in environments without shared storage. Virtual machine disks or contents of the virtual machine folder are transferred over the vMotion network to reach the destination host and datastores.

To make disk format changes and preserve them, you must select a different datastore for the virtual machine files and disks. You cannot preserve disk format changes if you select the same datastore on which the virtual machine currently resides.


Verify that your hosts and virtual machines meet the necessary requirements. See Requirements and Limitations for vMotion Without Shared Storage.

Required privilege: Resource.HotMigrate



Right-click the virtual machine and select Migrate.


To locate a virtual machine, select a datacenter, folder, cluster, resource pool, host, or vApp.


Click the Related Objects tab and click Virtual Machines.


Select Change both host and datastore and click Next.


Select the destination resource for the virtual machine migration.


Select a destination host or cluster for the virtual machine, and click Next.

Any compatibility problems appear in the Compatibility panel. Fix the problem, or select another host or cluster.

Possible targets include hosts and fully automated DRS clusters. You can also select a non-automated cluster as a target; in that case, you are prompted to select a host within the cluster.


Select the format for the virtual machine’s disks.



Same format as source

Use the same format as the source virtual machine.

Thick Provision Lazy Zeroed

Create a virtual disk in a default thick format. Space required for the virtual disk is allocated during creation. Any data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time on first write from the virtual machine.

Thick Provision Eager Zeroed

Create a thick disk that supports clustering features such as Fault Tolerance. Space required for the virtual disk is allocated at creation time. In contrast to the thick provision lazy zeroed format, the data remaining on the physical device is zeroed out during creation. It might take longer to create disks in this format than to create other types of disks.

Thin Provision

Use the thin provisioned format. At first, a thin provisioned disk uses only as much datastore space as the disk initially needs. If the thin disk needs more space later, it can grow to the maximum capacity allocated to it.
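The three disk formats above differ mainly in when space is allocated and when old data on the device is zeroed out. The following is a minimal illustrative model of those differences, based only on the descriptions in this section — the function names and format labels are this sketch's own, not part of any VMware API.

```python
# Illustrative model of the three disk formats described above.
# "thick-lazy"  = Thick Provision Lazy Zeroed
# "thick-eager" = Thick Provision Eager Zeroed
# "thin"        = Thin Provision

def allocated_space(fmt, capacity_gb, used_gb):
    """Datastore space consumed right after the disk is created/migrated."""
    if fmt in ("thick-lazy", "thick-eager"):
        return capacity_gb          # full capacity allocated up front
    if fmt == "thin":
        return used_gb              # grows on demand up to capacity
    raise ValueError(fmt)

def zeroed_at_creation(fmt):
    """Eager-zeroed thick wipes old data at creation; lazy zeroes on first write."""
    return fmt == "thick-eager"

# A 100 GB disk currently holding 30 GB of data:
print(allocated_space("thick-lazy", 100, 30))   # 100
print(allocated_space("thin", 100, 30))         # 30
print(zeroed_at_creation("thick-eager"))        # True
```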


Assign a storage profile from the VM Storage Profile drop-down menu.

Storage profiles define the storage capabilities that are required by the applications running on the virtual machine.


Select the datastore location where you want to store the virtual machine files.



Store all virtual machine files in the same location on a datastore.

Select a datastore and click Next.

Store all virtual machine files in the same Storage DRS cluster.


Select a Storage DRS cluster.


(Optional) If you do not want to use Storage DRS with this virtual machine, select Disable Storage DRS for this virtual machine and select a datastore within the Storage DRS cluster.


Click Next.

Store virtual machine configuration files and disks in separate locations.


Click Advanced.


For the virtual machine configuration file and for each virtual disk, select Browse, and select a datastore or Storage DRS cluster.


(Optional) If you selected a Storage DRS cluster and do not want to use Storage DRS with this virtual machine, select Disable Storage DRS for this virtual machine and select a datastore within the Storage DRS cluster.


Click Next.


Select the migration priority level and click Next.



Reserve CPU for optimal vMotion performance

vCenter Server attempts to reserve resources on both the source and destination hosts to be shared among all concurrent migrations with vMotion. vCenter Server grants a larger share of host CPU resources. If sufficient CPU resources are not immediately available, vMotion is not initiated.

Perform with available CPU resources

vCenter Server reserves resources on both the source and destination hosts to be shared among all concurrent migrations with vMotion. vCenter Server grants a smaller share of host CPU resources. If there is a lack of CPU resources, the duration of vMotion can be extended.
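In the vSphere API, these two wizard choices correspond to the move-priority value passed with the migration request (the API's VirtualMachineMovePriority type). The mapping below is a sketch based on the option descriptions above — the string values mirror the API enum names, but plain strings are used here so the sketch runs without pyvmomi installed, and the exact mapping is an assumption.

```python
# Sketch: mapping the wizard's two priority choices to move-priority values.
# Strings mirror the VirtualMachineMovePriority enum names; the mapping is
# an assumption drawn from the option descriptions in this section.

PRIORITY_MAP = {
    # "Reserve CPU for optimal vMotion performance": vMotion is not
    # initiated unless CPU resources can be reserved.
    "reserve_cpu": "highPriority",
    # "Perform with available CPU resources": migration proceeds, but its
    # duration can be extended under CPU contention.
    "available_cpu": "defaultPriority",
}

def move_priority(choice):
    return PRIORITY_MAP[choice]

print(move_priority("reserve_cpu"))    # highPriority
```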


Review the information on the Review Selections page and click Finish.

vCenter Server moves the virtual machine to the new host and storage location. Event messages appear in the Events tab. The data that appears in the Summary tab shows the status and state throughout the migration. If errors occur during migration, the virtual machine reverts to its original state and location.
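The selections made in the wizard above ultimately become a single relocation request. In the vSphere API this is a VirtualMachineRelocateSpec passed to RelocateVM_Task, with a per-disk locator list for the Advanced placement option. The sketch below models that request shape with plain dataclasses so it runs without pyvmomi; the field names mirror the API's, but the types, host names, and disk key are illustrative assumptions, not a working client.

```python
# Sketch of the relocation request the wizard builds. Stand-in dataclasses
# model the API's VirtualMachineRelocateSpec; no vCenter connection is made.

from dataclasses import dataclass, field

@dataclass
class DiskLocator:
    """Per-disk placement, as in the wizard's Advanced option."""
    disk_id: int        # the virtual disk's device key (illustrative value)
    datastore: str

@dataclass
class RelocateSpec:
    """Host and datastore move combined in one spec."""
    host: str           # destination host
    datastore: str      # location for the VM configuration files
    disks: list = field(default_factory=list)

# Hypothetical destination names for illustration:
spec = RelocateSpec(host="esx02.example.com", datastore="datastore2")
# Advanced option: place one virtual disk on a different datastore.
spec.disks.append(DiskLocator(disk_id=2000, datastore="datastore3"))

print(spec.host, spec.datastore, len(spec.disks))
```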