Archive for the ‘Uncategorized’ Category

Free eBook – The Backup Bible

As the old adage goes: fail to prepare, prepare to fail.

It’s the perfect description for backup, yet to this day many companies don’t have an adequate backup and disaster recovery strategy in place for when the worst-case scenario happens. Just how well are you protecting your vital data?


The Backup Bible, a free eBook from Altaro, guides you through the stages of preparing for, responding to, and recovering from a substantial data loss event, which can otherwise be disastrous for a company of any size.

This eBook is the first instalment of a 3-part series comprising the complete guide to data protection. It covers how to get started with disaster recovery planning; how to set recovery objectives and loss tolerances; how to translate your business plan into a technically oriented outlook; how to create a customized agenda for obtaining key stakeholder support; and the essentials to setting up a critical backup checklist.

The second and third parts in the series will be released later this year but by downloading the first part, you’ll automatically receive the other eBooks in your inbox as soon as they become available!

Whether you’re just starting to put together a backup and disaster strategy or you want to make sure your current plan is up to scratch, every IT admin needs to read The Backup Bible.

Download your free copy today

Special offer for Halloween on Vembu BDR Suite

Special offer for this Halloween season! Save up to 10% on Vembu BDR Suite!

Build a virtual S2D cluster with Windows Server 2019 build 17744

Windows Server 2016 and 2019 Storage Spaces Direct (S2D) lets you build highly available storage systems using storage nodes with local storage, such as SATA, SAS, or NVMe disks.

In this blogpost, I’ll deploy a two-node S2D cluster based on Windows Server 2019 build 17744. The main machine is an HP ProBook 450 G5 with Windows 10, 16 GB memory, a 512 GB SSD disk, and Hyper-V enabled.

First of all, I’ve deployed the following virtual machines:

  • S2D-W2019-DC01 (Domain Controller, DNS, Group Policies)
    IP address: 172.16.0.100
  • S2D-W2019-HV01 (Hyper-V host, S2D node)
    IP address: 172.16.0.101 (LAN)
    IP address: 10.10.0.101 (Live Migration)
  • S2D-W2019-HV02 (Hyper-V host, S2D node)
    IP address: 172.16.0.102 (LAN)
    IP address: 10.10.0.102 (Live Migration)

All the servers are installed with Windows Server 2019 build 17744. The first server I’ve configured is the domain controller. My internal domain is s2dlab.local.

For both S2D nodes (S2D-W2019-HV01 and S2D-W2019-HV02), you have to configure some additional settings, because these servers are virtual. So we’re going to run Hyper-V in Hyper-V, and on that Hyper-V host there are some guest virtual machines (nested virtualization) 😀 Cool stuff!!!


$S2DHOST1 = 'S2D-W2019-HV01'
$S2DHOST2 = 'S2D-W2019-HV02'

# List all virtual machines
Get-VM

# Enable nested virtualization on virtual machines
Set-VMProcessor -VMName $S2DHOST1 -ExposeVirtualizationExtensions $true
Set-VMProcessor -VMName $S2DHOST2 -ExposeVirtualizationExtensions $true

Next, you have to configure the following settings within the VM configuration:

  • Disable dynamic memory;
  • Set the number of virtual processors to 2 or 4;
  • Turn on MAC address spoofing on your network interface(s);
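
These settings can also be scripted from the Windows 10 host. Here’s a minimal PowerShell sketch (the VMs must be powered off; the memory and processor values are just examples, so adjust them to your hardware):

```powershell
# Run on the physical Hyper-V host while the S2D node VMs are powered off
foreach ($VMName in 'S2D-W2019-HV01','S2D-W2019-HV02') {
    # Disable dynamic memory and assign a fixed amount
    Set-VMMemory -VMName $VMName -DynamicMemoryEnabled $false -StartupBytes 4GB

    # Set the number of virtual processors
    Set-VMProcessor -VMName $VMName -Count 4

    # Turn on MAC address spoofing on all network adapters
    Get-VMNetworkAdapter -VMName $VMName | Set-VMNetworkAdapter -MacAddressSpoofing On
}
```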


Now that the domain controller is up and running and both S2D nodes are installed and configured with Windows Server 2019, it’s time to add some storage. Both servers have 3 x 50 GB virtual disks attached! Note!! This is only for testing and demo purposes!!
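
Attaching the virtual disks can be done with PowerShell as well. Here’s a sketch run on the Windows 10 host; the VHDX folder path is an assumption, so change it to your own storage layout:

```powershell
# Run on the physical Hyper-V host; the folder path below is just an example
$VHDFolder = 'D:\Hyper-V\S2D-Disks'

foreach ($VMName in 'S2D-W2019-HV01','S2D-W2019-HV02') {
    foreach ($i in 1..3) {
        $VHDPath = Join-Path $VHDFolder "$VMName-Data$i.vhdx"

        # Create a 50 GB dynamically expanding virtual disk
        New-VHD -Path $VHDPath -SizeBytes 50GB -Dynamic | Out-Null

        # Attach the disk to the S2D node
        Add-VMHardDiskDrive -VMName $VMName -Path $VHDPath
    }
}
```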

So we have 300 GB of storage available for our S2D cluster. After this is done, you can install the following roles and features within Windows Server:

  • (Role) File and Storage Services;
  • (Role) Hyper-V;
  • (Feature) Failover Clustering;
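
These roles and features can also be installed remotely with PowerShell; a minimal sketch (the nodes reboot automatically when needed):

```powershell
# Install the required roles and features on both S2D nodes
foreach ($Node in 'S2D-W2019-HV01','S2D-W2019-HV02') {
    Install-WindowsFeature -ComputerName $Node `
        -Name File-Services, Hyper-V, Failover-Clustering `
        -IncludeManagementTools -Restart
}
```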


Now all the components are ready to build the cluster. It’s recommended to run the cluster validation before building your cluster! The name of my cluster is ‘S2D-CL01’ with IP address 172.16.0.200/16. Note!! Uncheck the option ‘Add all eligible storage to the cluster’!!
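
Validation and cluster creation can be done with PowerShell too; the -NoStorage switch corresponds to unchecking ‘Add all eligible storage to the cluster’:

```powershell
# Validate the cluster configuration first
Test-Cluster -Node 'S2D-W2019-HV01','S2D-W2019-HV02' `
    -Include 'Storage Spaces Direct','Inventory','Network','System Configuration'

# Create the cluster without adding eligible storage
New-Cluster -Name 'S2D-CL01' -Node 'S2D-W2019-HV01','S2D-W2019-HV02' `
    -StaticAddress 172.16.0.200 -NoStorage
```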

The cluster is up and running. As you can see within your Active Directory and DNS configuration, there are three computer objects (two cluster nodes and one Failover Cluster object).


The last step before enabling ‘S2D’ on our cluster is checking the disk configuration.


# List all available disks within the cluster nodes
Get-PhysicalDisk

# Enable Storage Spaces Direct on the cluster
Enable-ClusterS2D

# List all Storage Pools within the S2D cluster
Get-StoragePool S2D*


Now our cluster is Storage Spaces Direct (S2D) enabled. The last step is to create a virtual disk within our Storage Pool and add it as a Cluster Shared Volume (CSV) to the cluster, so we can store workloads on it! Because we have a two-node cluster, the only resiliency type is Two-Way Mirror.
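
Creating the volume takes a single PowerShell cmdlet; the friendly name and size below are just examples:

```powershell
# Create a two-way mirrored volume and add it to the cluster as a CSV
New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'CSV01' `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 100GB
```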


Wrap Up:

In this blogpost we’ve built a two-node virtual Storage Spaces Direct cluster in Hyper-V (Windows 10). The S2D nodes are running Windows Server 2019. It’s really a nice opportunity to run this configuration virtually on your laptop or desktop, since nested virtualization is supported and it works great!!

In the next blogpost I’ll show you how to install and configure a virtual machine within our S2D cluster. I’ll also perform some live migrations to show the high availability and resiliency of our setup!

WIN – SysAdmin Day Contest – WIN

Feeling lucky? #WIN one of 101 fantastic prizes with #Altaro VM Backup, including a PlayStation 4 Pro, Xbox One X, Amazon eGift Cards and more! Enter our #SysAdminDay Contest today and don’t forget to share this post with your friends!

Experts Live NL 2018 Intro Movie

Last Tuesday, it was a great day again! Experts Live 2018 NL, the biggest community event in the Netherlands. Great sessions, great speakers, a very high level of content, great demos and of course, as always, a great intro movie!! 🙂 Many thanks to the organization of Experts Live! See you next year!!

http://www.expertslive.nl

Download Microsoft Ignite 2017 session content

This script can download all the Microsoft Ignite slide decks and videos that are available from the Ignite portal. Very useful if you want to watch the sessions once again!! In this example I’ll download only the sessions with the keyword “IaaS”, but you can also download all the content, or filter by title, speaker, etc.

Download the script from Microsoft Gallery here.


.\Get-IgniteSession.ps1 -DownloadFolder C:\Ignite2017 -Keyword "IaaS"

Happy New Year!!

With a few days to go, 2017 is almost here!! 2016 was a really great year. The number of visitors on my blog is growing every day, so hopefully this will continue in 2017.

I want to thank all the sponsors for the support on my blog!! Also special thanks to all the visitors on my blog!

I wish all of you a happy new year and a great 2017!!!


How to: Resize hard disk in Azure Resource Manager (ARM)

Resizing a virtual hard disk in Azure Resource Manager is really easy to do through the Azure Management Portal. In a few clicks you can extend the virtual hard disk size. Note that the VM has to be turned off, so you need to plan a maintenance window!!
You can also extend the virtual hard disk with PowerShell. In this example I’ve extended the data disk from 25 to 30 GB.


# Specify the VM
$VM = Get-AzureRmVM -ResourceGroupName MSS-DEMO -VMName MSS-DEMO-DC01
# Set the new size of the data disk
Set-AzureRmVMDataDisk -VM $VM -Name MSS-DEMO-DC01-20160801-100246 -DiskSizeInGB 30
# View the new size of the data disk(s)
$VM.StorageProfile.DataDisks
# Update the configuration in Azure
Update-AzureRmVM -VM $VM -ResourceGroupName MSS-DEMO


1.) Login to the Azure Management Portal
2.) Check the current size of the data disk. In my example 25 GB
3.) Start PowerShell and login to your Azure subscription
4.) Change the data disk to the new value
5.) Update the configuration to Azure
6.) Check the new size of the data disk with PowerShell or within the Azure Management Portal.
In my example the new size is 30 GB.