The Home Lab Update 2024

Back in August 2023, I had the pleasure of presenting two VMware Explore sessions about home labs.  While preparing for those sessions, I realized that I hadn’t done a home lab update post in a long time.  In fact, my last update post was four years ago in February 2020.

And a lot has changed in my lab. The use cases, architecture, the hardware, and even my areas of focus have changed significantly in the last four years.  With VMware being acquired by Broadcom and my desire to retool and expand my skillsets, my home lab will be more important than ever.  I will be using it as a tool to help achieve my goals and find my next path.

And while I was originally going to write about the lab infrastructure changes, I decided that my original post just wasn’t right.  My home lab is practically a private cloud, and the tone of the post unintentionally came off as bragging.

That didn’t sit right with me, so I decided to scrap that post and start over. I want to focus on workloads and applications instead of hardware and infrastructure solutions, and I want to elevate some of the open-source projects that I’m using as learning tools.

And when I talk about hardware or infrastructure, it’s going to be about how that hardware supports the specific application or workload.

Home Lab Use Cases

I think it’s very important to talk about my lab use cases. 

I had a slide that I used in two of my VMware Explore sessions that summed up my home lab use cases:

I really want to focus on the last two use cases on that slide: self-hosting open-source solutions and Minecraft.  The latter has really driven the “roadmap” for my lab.  I don’t have a budget, so I’ve had to rely on open-source solutions to support my kids’ Minecraft servers.

Minecraft isn’t the only thing I’m self-hosting, though.  I’ve found some awesome tools thanks to the r/selfhosted subreddit, and I’ve used some of them to fill in the gaps in my infrastructure.

Most of these solutions are containerized or offer a container-based option.  I’m using containers whenever possible because it makes deploying and maintaining the application and its dependencies much easier than managing binary installs. Each application stack gets its own Debian-based VM, and I am using Traefik as my reverse proxy and SSL offload manager of choice.
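
To give a concrete sense of the pattern, here is a minimal docker-compose sketch of one of those stacks with Traefik in front; the hostname is a placeholder assumption, and this is the general shape rather than my exact config:

    version: "3"
    services:
      traefik:
        image: traefik:v2.10
        command:
          # Only route to containers that explicitly opt in via labels.
          - --providers.docker=true
          - --providers.docker.exposedbydefault=false
          - --entrypoints.websecure.address=:443
        ports:
          - "443:443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      whoami:
        image: traefik/whoami
        labels:
          # Traefik builds the route and TLS offload from these labels.
          - traefik.enable=true
          - traefik.http.routers.whoami.rule=Host(`whoami.lab.example`)
          - traefik.http.routers.whoami.entrypoints=websecure
          - traefik.http.routers.whoami.tls=true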

I haven’t jumped into Kubernetes yet as I’m still getting comfortable with containers, and self-hosting Kubernetes adds another layer of complexity to my lab.  It is on my to-do list.

All the solutions I’m using deserve their own posts, but in the interest of time and word count, I’ll keep things fairly high level today.

Vaultwarden

There was a time, a long time ago, when I was a LastPass family customer.  It got harder to justify the yearly cost of LastPass when self-hosted alternatives like Bitwarden were available (and…if I’m being honest…my family was not using the service).  The LastPass breach and security issues came to light about six months after I had cancelled my subscription and migrated my vault out, which only validated my decision to move on.

I was originally using the self-hosted Bitwarden container, but I recently switched to Vaultwarden so I could start offering password vaults to the rest of my family as they see the need for one.

Vaultwarden is one of the most important services in my lab.  This service contains critical data, and I need to make sure it is backed up. I’m using a combination of this Vaultwarden backup container and Restic to protect the data in this application.
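
I won’t document the backup container’s own settings here, but the underlying idea looks roughly like this sketch; the paths, repository location, and retention values are placeholder assumptions:

    #!/bin/sh
    # Sketch: take a consistent SQLite copy first, then a Restic snapshot.
    export RESTIC_REPOSITORY=/mnt/backup/vaultwarden-repo
    export RESTIC_PASSWORD_FILE=/root/.restic-password

    # Use SQLite's online backup so the live database is copied consistently.
    sqlite3 /srv/vaultwarden/data/db.sqlite3 ".backup '/tmp/db-backup.sqlite3'"

    # Snapshot the database copy and the attachments directory.
    restic backup /tmp/db-backup.sqlite3 /srv/vaultwarden/data/attachments

    # Keep two weeks of daily snapshots.
    restic forget --keep-daily 14 --prune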

MinIO

MinIO is one of the few applications that I’ve deployed with infrastructure dependencies.  I originally deployed MinIO in my lab when I was testing VMware Data Services Manager (DSM) as that product required S3-compatible storage. 

I have a 3-node MinIO cluster in my lab.  Each MinIO node has two data volumes, so I have a total of 6 data disks across my 3 nodes. 

This is one of the few applications in my lab that is tied to specific hosts and hardware.  Each MinIO node data volume is sitting on a dedicated local SSD, so each node is tied to an ESXi host in a workload cluster.  This setup allows me to use erasure coding and provides some degree of data redundancy, but it makes host management operations a little more complex because I must shut down the MinIO node on a host before I can perform any maintenance operations. 
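
For reference, a distributed MinIO layout like this is started with a single command on each node.  The hostnames and mount points below are assumptions for illustration, but the {1...n} expansion is MinIO’s own syntax for describing the node and drive layout:

    # Run the same command on each of the three nodes; MinIO stripes objects
    # across all six drives (3 nodes x 2 disks) with erasure coding.
    minio server http://minio{1...3}.lab.local/mnt/disk{1...2}/minio \
      --console-address ":9001"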

Even though I’m no longer testing DSM in my lab, I still have MinIO deployed.  I’m using it as the backend for other services in my lab that I will be talking about later in this post. 

Wiki.js

Home labs are rarely documented.  I’m trying to improve on that with my lab, as I’ve had to figure out or reverse engineer a few of my own processes from my shell/command history.  I used to use the Confluence SaaS free tier for documenting my home network, but SSO was a $30 per month add-on.

I also wanted a self-hosted option.  I looked at a few wiki options, including BookStack, DokuWiki, and a few others.  But I’m also kind of picky about my requirements and wanted something that supported SSO out of the box and used PostgreSQL.

So I settled on Wiki.js as my solution because it is open source, met all my technical requirements, and fit my budget.

I’m not taking full advantage of Wiki.js yet.  My focus has been importing content from Confluence and testing out the SSO features.  But I plan to add more lab documentation and use it for some of my programming and lab side projects in the future.

Grafana Loki and Promtail

I’ve needed a log management solution for my fleet of Minecraft servers for a while now.  Log management has been a challenge on those servers, and an easy way to search the logs is a requirement before I let my kids share the servers with their friends.

There are a lot of open-source solutions in this space, but I am settling on the Grafana stack.  I’m starting with this stack because it seems to be a well-integrated stack for performance monitoring, log aggregation, and creating dashboards. Time will tell on that as I am just getting started with Grafana Loki.  I have a small instance deployed today to get my Promtail config ironed out, and I will be redeploying it as I roll it out to the rest of my lab.

One thing I like about some of the newer log management systems is that they can use S3-compatible storage for log data.  Loki isn’t the only solution that can do this but being a part of the Grafana stack set it apart in my mind and helped make it my first choice.

I’m using the Promtail binary for my Minecraft servers, and getting that config set up properly has been a pain.  The documentation is very high level and, as far as I can tell, doesn’t include very many example configs to start from.  Some of the issues I had to work through were scraping the systemd journal, which required adding the Promtail service user to the systemd-journal group, and getting the hostname and IP address added to all forwarded logs.  The documentation covered some of what I needed, but there were some significant gaps in my opinion.  It took a lot of trial, error, and experimentation to get where I wanted to be.
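
For anyone fighting the same battle, here is a minimal sketch of a journal scrape config along those lines; the Loki URL and the host label value are placeholders for illustration (and remember to add the Promtail user to the systemd-journal group first):

    server:
      http_listen_port: 9080

    positions:
      filename: /var/lib/promtail/positions.yaml

    clients:
      - url: http://loki.lab.local:3100/loki/api/v1/push

    scrape_configs:
      - job_name: journal
        journal:
          max_age: 12h
          labels:
            job: systemd-journal
            host: mc-server-01
        relabel_configs:
          # Surface the systemd unit name as a queryable label.
          - source_labels: ['__journal__systemd_unit']
            target_label: unit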

Once I get this rolled out into “production” and get some dashboards built, I plan to write a longer post about my setup and how I worked around some of the issues I faced.  I will also be looking at Grafana’s version of Prometheus for performance monitoring in a later phase.

OwnCloud Infinite Scale

Have you ever exceeded the limits of the free tiers that Microsoft and Google offer on OneDrive or Google Drive?  Or needed a Dropbox-like service that was self-hosted to meet data sovereignty or compliance requirements? 

OwnCloud Infinite Scale (OCIS) is an open-source, ground-up rewrite of OwnCloud Server in the Go programming language.  It is a drop-in replacement for OneDrive, Google Drive, Dropbox, and similar solutions.  The client app supports files-on-demand (although this feature is experimental on macOS).  The server supports integration with multiple web-based office suites, OpenID Connect for SSO, and S3-compatible storage.

I use it for some of my file storage needs, especially files that I don’t want to put on OneDrive, and for transferring data from my laptop to my lab. I expect to use the Spaces feature to replace some of the file servers and QNAP virtual appliances in my lab.

DDNS-Route53

DDNS-Route53 is a Go application that allows you to build your own dynamic DNS service on top of AWS Route 53. I was getting tired of having multiple dynamic DNS services tied to different domains, so I’ve started standardizing all my domains on Route 53 and using this application to replace the few dynamic DNS services that I currently use.
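
This is not ddns-route53’s own config format; it is just a sketch of the Route 53 update that the tool automates, with the hosted zone ID and record name as placeholders:

    #!/bin/sh
    # Look up the current public IP and upsert an A record in Route 53.
    IP=$(curl -s https://checkip.amazonaws.com)
    aws route53 change-resource-record-sets \
      --hosted-zone-id Z0123456789EXAMPLE \
      --change-batch "{
        \"Changes\": [{
          \"Action\": \"UPSERT\",
          \"ResourceRecordSet\": {
            \"Name\": \"home.example.com\",
            \"Type\": \"A\",
            \"TTL\": 300,
            \"ResourceRecords\": [{\"Value\": \"$IP\"}]
          }
        }]
      }"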

Conclusion

These are just a few of the open-source projects I’ve been using in my lab.  I have a few more that I’ve been testing out that I will talk about in future posts. 

Open-source solutions are a great way to get more utilization out of your home lab while building or enhancing your technical skills.  I’ll be talking more about this topic at the Wisconsin VMUG UserCon in April 2024.  If you’re going to be there, please stop by my session.

How I Automated Minecraft Server Builds

If you have kids that are old enough to game on any sort of device with a screen, you’ve probably been asked about virtual Lego kits. And I don’t mean the various branded video games like LEGO Worlds or the LEGO Star Wars games. No, I’m talking about something far more addictive – Minecraft.

My kids are Minecraft fanatics. They could play for hours on end while creative how-tos and “Let’s Play” YouTube videos loop non-stop in the background. And they claim they want to play Minecraft together, although that’s more theory than actual practice in the end. They also like to experiment and try to build the different things they see on YouTube. They wanted multiple worlds to use as playgrounds for their different ideas.

And they even got me to play a few times.

So during the summer of 2020, I started looking into how I could build Minecraft server appliances. I had built a few Minecraft servers by hand before that, but they were difficult to maintain and keep up-to-date with Minecraft Server dot releases and general operating system maintenance.

I thought a virtual appliance would be the best way to do this, and this is my opinionated way of building a Minecraft server.

TL;DR: Here is the link to my GitHub repo with the Packer files and scripts that I use.

A little bit of history

The initial version of the virtual appliance was built on Photon. Photon is a stripped-down version of Linux created by my employer for virtual appliances and container workloads. William Lam has some great content on how to create a Photon-based virtual appliance using Packer.

This setup worked pretty well until Minecraft released version 1.17, also known as the Caves and Cliffs version, in the summer of 2021.

There are a couple of versions of Minecraft. The two main ones are Bedrock, which is geared towards Windows, mobile devices, and video game consoles, and Java, which runs on Windows, Linux, and macOS.

My kids play the Java edition, and up until this point, Minecraft Java edition servers used the Java 8 JDK. Minecraft 1.17, however, required the Java 16 JDK. And that led to a second problem: the only JDK in the Photon repositories at the time was Java 8.

Now this doesn’t seem like a problem, or at least it isn’t on a small scale. There are a few open-source OpenJDK implementations that I could adopt, and I ended up going with Adoptium’s Temurin OpenJDK. But after building a server or two, I didn’t really feel like maintaining a manual install process. I wanted the ease of use that comes with installing and updating from a package repository, and that wasn’t available for Photon.

So I needed a different Linux distribution. CentOS would have been my first choice, but I didn’t want something that was basically a rolling release candidate. My colleague Timo Sugliani spoke very highly of Debian, and he released a set of Packer templates for building lightweight Debian virtual appliances on GitHub. I modified these templates to use the Packer vSphere-ISO plugin and started porting over my appliance build process.

Customizing the Minecraft Experience

Do you want a flat world or something without mob spawns? Or to try out a custom world seed? You can set that during the appliance deployment. I wanted the appliance to be self-configuring, so I spent some time extending William Lam’s OVF properties XML file to include all of the Minecraft server attributes that you can configure in the server.properties file. This allows you to deploy the appliance and configure the Minecraft environment without having to SSH into it and manually edit the file.
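
To give a flavor of what those properties look like, here is a single illustrative entry in the standard OVF property format; the key name is an assumption for the sketch, not necessarily what my XML file uses:

    <!-- One user-configurable property; the value entered at deploy time
         ends up in the Minecraft server.properties file on first boot. -->
    <Property ovf:key="guestinfo.minecraft.level-seed" ovf:type="string"
              ovf:userConfigurable="true" ovf:value="">
      <Label>World Seed</Label>
      <Description>Seed used to generate the Minecraft world</Description>
    </Property>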

One day, I may trust my kids enough to give them limited access to vCenter to deploy their own servers. This would make it easier for them.

Unfortunately, that day is not today. But this still makes my life easier.

Installing and Configuring Minecraft

The OVA file does not contain the Minecraft Server binaries. The server actually gets installed during the appliance’s first boot. There are a few reasons for this. First, the Minecraft EULA does not allow you to distribute the binaries. At least that was my understanding of it.

Second, and more importantly, you may not always want the latest and greatest server version, especially if you’re planning to develop or use mods. Mods are often developed against specific Minecraft versions, and they have a lengthy interoperability chart.

The appliance is not built to utilize mods out of the box, but there is nothing stopping someone from installing Forge, Fabric, or other modified binaries. I just don’t feel like taking on that level of effort, and my kids have so far resisted learning important life skills like the Bash CLI.

And finally, there isn’t much difference between downloading and installing the server binary on first boot and downloading and installing an updated binary. Minecraft Java edition is distributed as a JAR file, so I only really need to download it and place it in the correct folder.

I have a pair of PowerShell scripts that make these processes pretty easy. Both scripts have the same core function – query an online version manifest that is used by the Minecraft client and download the specified version to the local machine. The update script also has some extra logic in it to check if the service is running and gracefully stop it before downloading the updated server.jar file.
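
The core of that logic looks something like the following sketch, which assumes Mojang’s public version manifest; the version number and install path are illustrative:

    # Fetch the version manifest that the Minecraft launcher uses.
    $manifest = Invoke-RestMethod -Uri 'https://launchermeta.mojang.com/mc/game/version_manifest.json'

    # Find the requested version (or use $manifest.latest.release).
    $version = $manifest.versions | Where-Object { $_.id -eq '1.20.4' }

    # Each version entry links to a second JSON document with the download URLs.
    $meta = Invoke-RestMethod -Uri $version.url
    Invoke-WebRequest -Uri $meta.downloads.server.url -OutFile '/opt/Minecraft/bin/server.jar'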

You can find these scripts in the files directory on GitHub.

Running Minecraft as a systemd Service

Finally, I didn’t want to have to deal with manually starting or restarting the Minecraft service. So I Googled around, and I found a bunch of systemd sample files. I did a lot of testing with these samples (and I apologize, I did not keep track of the links I used when creating my service file) to cobble together one of my own.

My service file has an external dependency. The MCRCON tool is required to shut down the service. While I was testing this, I ran into a number of issues where I could stop Minecraft, but it wouldn’t kill the Java process that spawned with it. It also didn’t guarantee that the world was properly saved or that users were alerted to the shutdown.

By using MCRCON, we can alert users to the shutdown, save the world, and gracefully exit all of the processes through a server shutdown command.

I also have the Minecraft service set to restart on failure. My kids have a tendency to crash the server by blowing up large stacks of TNT in a cave or other crazy things they see on YouTube, and that tends to crash the binary. This saves me a little headache by restarting the process.
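
Here is an illustrative unit file in that spirit; it is not the exact file from my repo, and the memory sizes, RCON port, and password placeholder are assumptions:

    [Unit]
    Description=Minecraft Server
    After=network-online.target

    [Service]
    User=minecraft
    Group=minecraft
    WorkingDirectory=/opt/Minecraft
    ExecStart=/usr/bin/java -Xms2G -Xmx4G -jar /opt/Minecraft/bin/server.jar nogui
    # Warn players, save the world, then stop cleanly via RCON.
    ExecStop=/opt/Minecraft/tools/mcrcon/mcrcon -H 127.0.0.1 -P 25575 -p CHANGEME "say Server shutting down" "save-all" "stop"
    Restart=on-failure
    RestartSec=30

    [Install]
    WantedBy=multi-user.target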

Prerequisites

Before we begin, you’ll need a few prerequisites. These are:

  • The latest copy of HashiCorp’s Packer tool installed on your build machine
  • The latest copy of the Debian 11 NetInstall ISO
  • OVFTool

There are a couple of files that you should edit to match your environment before you attempt the build process. These are:

  • debian.auto.pkrvars.hcl – variables for the build process
  • debian-minecraft.pkr.hcl file – the iso_paths line includes part of a hard-coded path that may not reflect your environment, and you may want to change the CPUs or RAM allocated to the VM.
  • preseed.cfg file located in the HTTP folder: localization information and root password (see the fragment below)
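
For reference, the relevant preseed fragments look something like this; the locale and password values are placeholders you should change:

    # Localization settings used by the automated Debian installer.
    d-i debian-installer/locale string en_US.UTF-8
    d-i keyboard-configuration/xkb-keymap select us
    # Root password for the freshly installed system.
    d-i passwd/root-password password CHANGEME
    d-i passwd/root-password-again password CHANGEME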

This build process uses the Packer vsphere-iso build process, so it talks to vCenter. It does not use the older vmware-iso build process.

The Appliance Build Process

As I mentioned above, I use Packer to orchestrate this build process. There is a Linux shell script in the public GitHub repo called build.sh that will kick off this build process.

The first step, obviously, is to install Debian. This step is fully automated and controlled by the preseed.cfg file that is referenced in the packer file.

Once Debian is installed, we copy over a default Bash configuration and our init-script that will run when the appliance boots for the first time to configure the hostname and networking stack.

After these files are copied over, the Packer build begins to configure the appliance. The steps that it takes are:

  • Run apt-get update && apt-get upgrade to upgrade any outdated installed packages
  • Install our system utilities, including UFW
  • Configure UFW to allow SSH and enable it
  • Install VMware Tools
  • Set up the repos for, and install, PowerShell and the Temurin OpenJDK (see the sketch after this list)
  • Configure the rc.local file that runs on first boot
  • Disable IPv6 because Java will default to communicating over IPv6 if it is enabled
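
For the Temurin step, the repo setup follows Adoptium’s documented Debian instructions; this is a sketch, with the JDK version as an assumption:

    # Add the Adoptium signing key and apt repo, then install the JDK.
    wget -qO - https://packages.adoptium.net/artifactory/api/gpg/key/public \
      | gpg --dearmor -o /usr/share/keyrings/adoptium.gpg
    echo "deb [signed-by=/usr/share/keyrings/adoptium.gpg] https://packages.adoptium.net/artifactory/deb $(lsb_release -cs) main" \
      > /etc/apt/sources.list.d/adoptium.list
    apt-get update && apt-get install -y temurin-17-jdk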

After this, we do our basic Minecraft setup. This step does the following:

  • Creates our Minecraft service user and group
  • Sets up our basic folder structure in /opt/Minecraft
  • Downloads MCRCON into the /opt/Minecraft/tools/mcrcon directory
  • Copies over the service file and scripts that will run on first boot

The last three steps of the build are to run a cleanup script, export the appliance to OVF, and create the OVA file with the configurable OVF properties. The cleanup script cleans out the local apt cache and log files and zeroes out the free space to reduce the size of the disks on export.

The configurable OVF properties include all of the networking settings, the root password and SSH key, and, as mentioned above, the configurable options in the Minecraft server.properties file. OVFTool and William Lam’s script are required to create the OVA file and inject the OVF properties, and the process is outlined in this blog post.

The XML file with the OVF Properties is located in the postprocess-ova-properties folder in my GitHub repo.

The outcome of this process is a ready-to-deploy OVA file that can be uploaded to a content library.

First Boot

So what happens after you deploy the appliance and boot it for the first time?

First, the debian-init.py script will run to configure the basic system identity. This includes the IP address and network settings, root password, and SSH public key for passwordless login.

Second, we will regenerate the host SSH keys so each appliance will have a unique key. If we don’t do this step, every appliance we deploy will have the same SSH host keys as the original template. This is handled by the debian-regeneratesshkeys.sh script that is based on various scripts that I found on other sites.
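
The standard Debian approach that those scripts build on is only a few lines:

    # Remove the template's host keys and generate fresh ones.
    rm -f /etc/ssh/ssh_host_*
    dpkg-reconfigure openssh-server
    systemctl restart ssh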

Our third step is to install and configure the Minecraft server using the debian-minecraftinstall.sh script. This has a couple of sub-steps (illustrated in the sketch after this list). These are:

  • Retrieve our Minecraft-specific OVF Properties
  • Call our PowerShell script to download the correct Minecraft server version to /opt/Minecraft/bin
  • Initialize the Minecraft server to create all of the required folders and files
  • Edit eula.txt to accept the EULA. The server will not run and let users connect without this step
  • Edit the server.properties file and replace any default values with the OVFProperties values
  • Edit the systemd file and configure the firewall to use the Minecraft and RCON ports
  • Reset permissions and ownership on the /opt/Minecraft folders
  • Enable and start Minecraft
  • Configure our Cron job to automatically install system and Minecraft service updates
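
A few illustrative fragments of that script’s work are shown below; the OVF property variable name is hypothetical, and the ports are Minecraft’s defaults:

    # Accept the EULA so the server will start.
    sed -i 's/^eula=false/eula=true/' /opt/Minecraft/eula.txt

    # Replace a default with a value retrieved from the OVF properties
    # (OVF_LEVEL_SEED is a made-up variable name for illustration).
    sed -i "s/^level-seed=.*/level-seed=${OVF_LEVEL_SEED}/" /opt/Minecraft/server.properties

    # Open the default Minecraft (25565) and RCON (25575) ports.
    ufw allow 25565/tcp
    ufw allow 25575/tcp

    # Reset ownership and start the service.
    chown -R minecraft:minecraft /opt/Minecraft
    systemctl enable --now minecraft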

The end result is a ready-to-play Minecraft VM.

All of the Packer files and scripts are available in my GitHub repository. Feel free to check it out and adapt it to your needs.

Getting Back to Blogging

As you’ve probably noticed, I’ve been pretty quiet lately. I haven’t actually posted anything in about two years. So I decided to write a quick update, especially since I’m not as active on social media as I used to be.

Yes, I am still alive. I’m not on Twitter anymore. I’ve moved to Mastodon, and if you’re looking for a Mastodon instance, I would highly recommend vmst.io.

COVID was hard. Burnout kind of kicked in during the pandemic, and it hit home shortly after I decided to start a YouTube channel. I had a few things in the production pipeline before I went on vacation, and then I decided to take a longer break. I even took a bit of a break from creating content for VMUG.

I played a lot of Pokemon instead.

But I’m trying to get back into the swing of things. Recently, I helped my team launch a Multi-Cloud page on the VMware Cloud Techzone site, and that has me back in the content-creating mood.

I will be posting stuff again soon. I have some things in the pipeline that I’ve been kicking around for a while. I’ll be putting out a home lab update, and I owe everyone a post about the Minecraft appliance OVA template that I built for my kids. And some other stuff around identity management in a multi-cloud world.

So hit that subscribe button and ring the bell to be notified when new content is posted.

Wait…that only works for YouTube.

What’s In The Studio – Pivoting Community Involvement to Video

As we all start off 2021, I wanted to talk a little about video.

As we all know, 2020 put the kibosh on large, in-person events. This included all of the vendor conferences, internal conferences, and community events like the VMware User Group UserCons and other user groups. Most of these events transitioned to online events with presenters delivering recorded sessions. It also meant more webinars, Zoom meetings, and video conferences.

And it doesn’t look like this will be changing for at least the first half of 2021.

I’ve seen a number of blog and Twitter posts recently about home studios (for example, this great post by Johan van Amersfoort or this Twitter thread from John Nicholson), and I thought I would share my setup.

Background

I was not entirely unprepared to transition to video last year. I had been a photographer since high school, and I made the jump to digital photography in college when Canon released the Digital Rebel. I mainly focused on sports that are played in venues that are a step or two above dimly lit caves. After college, I kind of put the camera down (except for a couple of vacations and trying my hand at a wedding or two, which was not my thing). At the beginning of 2020, I decided it was time to get back into photography, figuring I might as well since I was traveling ( 😂 ), and picked up a used Canon 6D that was opportunistically priced. It also could record video in 1080p.

Slideshow: Some of my photos from years past.

Video was new ground for me, and it resulted in a lot of experimentation and purchasing in order to get things right. This was also happening at the beginning of the lockdowns, when my whole family was at home all day and almost everything I needed was delayed or backordered. Some of this was driven by equipment limitations, which I will cover below, and some of it was driven by other factors.

And as I went through this, I spent a lot of time learning what worked and what didn’t work for me. For example, I found that sitting in front of my laptop trying to record in Zoom didn’t work for me. When recording for a VMUG or VMworld, I wanted to stand and have room to move around because that was what felt natural to me.

Before I go into my setup, I want to echo one point that Johan made in his post. The audio and video gear is there to support the message and enable remote delivery. If you are new to presenting, spend some time learning the craft of storytelling and presentation design. Johan recommended two books by Nancy Duarte – resonate and Slide:ology. I highly recommend these books as well. If you’re new to presenting in general, I also recommend finding a mentor and learning how to use PowerPoint as the graphics capabilities are powerful but intimidating. There are a number of good YouTube videos, for example, on how to do different things in PowerPoint.

Requirements and Constraints

I have primarily used my video gear in two different ways. The first was for video conferencing. Whether it was Zoom, Teams, or “Other,” video became a major part of meetings, replacing in-person meetings and workshops. The second use case was the one I probably focused on more – producing recorded content for user groups and conferences. My goal here was to try to replicate some of the feel of presenting live while taking advantage of the capabilities that video offers.

Most of the recorded video content was for VMUG UserCons. These sessions were 40 minutes long, and they wanted to have presenters on camera along with the slides.

There is a third use case, which didn’t really apply for 2020. This use case was live events such as webinars and video podcast recordings, although my studio kit can be used for this.

I had a few things I needed to consider when planning out my setup. The first was space. I had a few limiting factors when it came to having a space to record. My office was not suited to keeping the gear set up permanently, and the furniture arrangement was dictated by where the one outlet was located. (I have since installed additional outlets in my office and rearranged.) I also wanted a space where I could record while standing. Both of these factors meant that I would be using common areas to record, so my gear selections would have to be portable and easy to assemble.

Most of my recording was originally done in my kids’ playroom in my basement.

The second consideration was trying to keep this budget friendly. The key word here is trying. I may have failed there.

I already had a lot of Canon gear from my photography days, so I wanted to reuse it as much as possible. I already had a Canon EOS 6D, and that could record 1080p HD video. Although I did upgrade my camera bodies by trading in old gear, I stayed in the Canon ecosystem as I didn’t want to invest in all new lenses.

I had a copy of Camtasia for screen recording, but combining the Camtasia capture with video recorded in camera would require an additional workflow to get the final video together. This would require some sort of video editing software. And I would also need audio and lighting gear. This gear had to fit the requirements and constraints laid out above and be both cost effective and portable.

Studio Gear

My studio setup in my office.

Note: I will be linking to Amazon, Adorama, and other sites in this section. These are NOT affiliate links. I have not monetized my site, and I make no money off of any purchases you choose to make.

Cameras and Lenses

Canon EOS R6 with Canon EF 50mm F/1.4 USM Lens and DC Adapter – Primary Camera

Canon EOS Rebel SL3 with Canon EF 40mm F/2.8 STM Lens and DC Adapter – Secondary/Backup Camera

Note: Both cameras use DC Adapters when set up in the studio because these cameras will eat through their batteries when doing video. Yes, I’ve lost a few hours while waiting for all of my battery packs to recharge.

Audio

Synco Audio WMic-T1 Wireless Lavalier Microphone System (x2) – primary audio

Comica CVM-V30 Pro Shotgun Microphone – Secondary audio

Blue Yeti USB Mic (Note: This is at my desk, but I only use it for recording voiceovers or while on Zoom/Teams/etc. calls. If I ever restart my podcast, I will use this for that as well.)

Lighting

Neewer 288 Large LED Video Light Panel (x2)

Viltrox VL-162T Video Light (x2)

Amazon Basics Light Stands (x2)

Other Hardware and Software

Blackmagic Design ATEM Mini Pro ISO – See Below

DaVinci Resolve (Note: DaVinci Resolve is a free, full-featured video editing suite. There is also a paid version, DaVinci Resolve Studio, that has a one-time cost of $299. Yes. It’s a perpetual license.)

Camtasia

A note on why I’m using the ATEM Mini Pro ISO

When I started, I was using Camtasia to record my screen while I recorded my presentation using my camera. Creating the final output required a lot of post-processing work to line up the audio and video across multiple sources.

The ATEM Mini Pro ISO allows me to bring together all my audio and video sources into a single device and record each input. So I can bring both cameras, my microphones, and any computers that I’m displaying content on (such as slides or demos) and record all of these inputs to disk. This allows me to record everything on one disk, so I don’t have to worry about managing data on multiple memory cards, and it simplifies my post-production workflow because I don’t have to synchronize everything manually.

There is a second benefit that I haven’t covered. It also allows me to get around a video recording limit built into modern cameras.

Most DSLRs and mirrorless cameras have a video recording time limit when recording to internal cards. Video segments are limited to approximately 29 minutes and 59 seconds. This limit isn’t due to file size or hardware limitations (although some cameras have shorter time limits due to heat dissipation issues). It’s an artificial limit due to import-duty restrictions that the European Union put on video cameras.

VMUG UserCon sessions are 40 minutes, and I was burned by the 30-minute time limit on a couple of occasions.

That recording time limit only applies when recording to the internal card, though. It does not apply to external devices like the ATEM Mini. In order to use this with a DSLR or mirrorless camera, you need one that supports sending a clean video feed over HDMI (Clean HDMI Out). Canon has a good video that explains it here. (Note: There are also USB webcam drivers for many modern DSLR and mirrorless cameras that allow you to do the same type of thing with tools like OBS.)

Horizon 8.0 Part 10: Deploying the Unified Access Gateway

And we’re back…this week with the final part of deploying a Horizon 2006 environment – deploying the Unified Access Gateway to enable remote access to desktops.

Before we go into the deployment process, let’s dive into the background on the appliance.

The Unified Access Gateway (also abbreviated as UAG) is a purpose-built virtual appliance that is designed to be the remote access component for VMware Horizon and Workspace ONE.  The appliance is hardened for deployment in a DMZ scenario, and it is designed to only pass authorized traffic from authenticated users into a secure network.

As of Horizon 2006, the UAG is the primary remote access component for Horizon.  This wasn’t always the case – previous Horizon releases used the Horizon Security Server.  The Security Server was a Windows Server running a stripped-down version of the Horizon Connection Server, and this component was deprecated and removed with Horizon 2006.

The UAG has some benefits over the Security Server.  First, it does not require a Windows license.  The UAG is built on Photon, VMware’s lightweight Linux distribution, and it is distributed as an appliance.  Second, the UAG is not tightly coupled to a connection server, so you can use a load balancer between the UAG and the Connection Server to eliminate single points of failure.

And finally, multifactor authentication is validated on the UAG in the DMZ.  When multi-factor authentication is enabled, users are prompted for that second factor first, and they are only prompted for their Active Directory credentials if this authentication is successful.  The UAG can utilize multiple forms of MFA, including RSA, RADIUS, and SAML-based solutions, and setting up MFA on the UAG does not require any changes to the connection servers.

There have also been a couple of 3rd-party options that could be used with Horizon. I won’t be covering any of the other options in this post.

If you want to learn more about the Unified Access Gateway, including a deeper dive on its capabilities, sizing, and deployment architectures, please check out the Unified Access Gateway Architecture guide on VMware Techzone.

Deploying the Unified Access Gateway

There are two main ways to deploy the UAG.  The first is a manual deployment where the UAG’s OVA file is manually deployed through vCenter, and then the appliance is configured through the built-in Admin interface.  The second option is the PowerShell deployment method, where a PowerShell script and OVFTool are used to automatically deploy the OVA file, and the appliance’s configuration is injected from an INI file during deployment.

Typically, I prefer using the PowerShell deployment method.  This method consists of a PowerShell deployment script and an INI file that contains the configuration for each appliance that you’re deploying.  I like the PowerShell script over deploying the appliance through vCenter because the appliance is ready to use on first boot.  It also allows administrators to track all configurations in a source control system such as GitHub, which provides both documentation for the configuration and change tracking.  This method makes it easy to redeploy or upgrade the Unified Access Gateway because I can rerun the script with my config file and the new OVA file.

The PowerShell script requires the OVF Tool to be installed on the server or desktop where the PowerShell script will be executed.  The latest version of the OVF Tool can be downloaded from MyVMware.  PowerCLI is not required when deploying the UAG as OVF Tool will be deploying the appliance and injecting the configuration.

The zip file that contains the PowerShell scripts includes sample templates for different use cases.  This includes Horizon use cases with RADIUS and RSA-based multifactor authentication.  You can also find the reference guide for all options here.
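
To give a sense of the format, here is a trimmed, illustrative INI based on the documented key names; the hostnames, datastore, and network names are assumptions for this sketch:

    [General]
    name=UAG1
    source=C:\UAG\euc-unified-access-gateway.ova
    target=vi://administrator@vsphere.local:PASSWORD@vcenter.lab.local/Datacenter/host/Cluster1/
    ds=Datastore1
    deploymentOption=onenic
    netInternet=DMZ-Network
    netManagementNetwork=DMZ-Network
    netBackendNetwork=DMZ-Network

    [Horizon]
    proxyDestinationUrl=https://horizon.lab.local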

If you haven’t deployed a UAG before, are implementing a new feature on the UAG, or you’re not comfortable creating the INI configuration file from scratch, then you can use the manual deployment method to configure your appliance and then export the configuration in the INI file format that the PowerShell deployment method can consume.  This exported configuration only contains the appliance’s Workspace ONE or Horizon configuration – you would still have to add in your vSphere and SSL Certificate configuration.

You can export the configuration from the UAG admin interface.  It is the last item in the Support Settings section.

UAG-Config-Export

One other thing that can trip people up when creating their first UAG deployment file is the deployment path used by OVFTool.  This is not always straightforward, and vCenter has some “hidden” objects that need to be included in the path.  OVFTool can be used to discover the path where the appliance will be deployed.

You can use OVFTool to connect to your vCenter with a partial path, and then view the objects in that location.  It may require multiple connection attempts with OVFTool to build out the path.  You can see an example of this over at the VMwareArena blog on how to export a VM with OVFTool or in question 8 in the troubleshooting section of the Using PowerShell to Deploy the Unified Access Gateway guide.

Before deploying the UAG, we need to get some prerequisites in place.  These are:

  1. Download the Unified Access Gateway OVA file, PowerShell deployment script zip file, and the latest version of OVFTool from MyVMware.
  2. Right click on the PowerShell zip file and select Properties.
  3. Click Unblock.  This step is required because the file was downloaded from the Internet and is untrusted by default, which can prevent the scripts from executing after we unzip them.
  4. Extract the contents of the downloaded ZIP file to a folder on the system where the deployment script will be run.  The ZIP file contains multiple files, but we will only be using the uagdeploy.ps1 script file and the uagdeploy.psm1 module file.  The other scripts are used to deploy the UAG to Hyper-V, Azure, and AWS EC2.  The zip file will also contain a number of default templates.  When deploying the access points for Horizon, I recommend starting with the UAG2-Advanced.ini template.  This template provides the most options for configuring Horizon remote access and networking.  Once you have the UAG deployed successfully, I recommend copying the relevant portions of the SecurID or RADIUS auth templates into your working AP template.  This allows you to test remote access and your DMZ networking and routing before adding in MFA.
  5. Before we start filling out the template for our first access point, there are some things we’ll need to do to ensure a successful deployment. These steps are:
    1. Ensure that the OVF Tool is installed on your deployment machine.
    2. Locate the UAG’s OVA file and record the full file path.  The OVA file can be placed on a network share.
    3. We will need a copy of the certificate, including any intermediate and root CA certificates, and the private key in PFX or PEM format.  Place these files into a local or network folder and record the full path.  If you are using PEM files, the certificate files should be concatenated so that the certificate and any CA certificates in the chain are in one file, and the private key should not have a password on it.  If you are using PFX files, you will be prompted for a password when deploying the UAG.
    4. We need to create the path to the vSphere resources that OVF Tool will use when deploying the appliance.  This path looks like: vi://user:PASSWORD@vcenter.fqdn.or.ip/DataCenter Name/host/Host or Cluster Name/  OVF Tool is case sensitive, so make sure that the datacenter name and host or cluster names are entered as they are displayed in vCenter.

      The uppercase PASSWORD in the OVFTool string is a variable that prompts the user for a password before deploying the appliance.  If you are automating your deployment, you can replace this with the password for the service account that will be used for deploying the UAG.

      Note: I don’t recommend saving the service account password in the INI files. If you plan to do this, remember best practices around saving passwords in plaintext files and ensure that your service account only has the required permissions for deploying the UAG appliances.


    5. Generate the passwords that you will use for the appliance root and admin passwords.
    6. Get the SSL Thumbprint for the certificate on your Connection Server or load balancer that is in front of the connection servers.
  6. Fill out the template file.  The file has comments for documentation, so it should be pretty easy to fill out. You will need to have a valid port group for all three networks, even if you are only using the OneNic deployment option.
  7. Save your INI file as <UAGName>.ini in the same directory as the deployment scripts.

There is one change that we will need to configure on our Connection Servers before we deploy the UAGs – disabling the Blast and PCoIP secure gateways.  If these are not disabled, the UAG will attempt to tunnel the user protocol session traffic through the Connection Server, and users will get a black screen instead of a desktop.

The steps for disabling the gateways are:

  1. Log into your Connection Server admin interface.
  2. Go to Settings -> Servers -> Connection Servers
  3. Select your Connection Server and then click Edit.
  4. Uncheck the following options:
    1. Use Secure Tunnel to Connect to machine
    2. Use PCoIP Secure Gateway for PCoIP connections to machine
  5. Under Blast Secure Gateway, select Use Blast Secure Gateway for only HTML Access connections to machine.  This option may reduce the number of certificate prompts that users receive if using the HTML5 client to access their desktop.
  6. Click OK.

Connection Server Settings

Once all of these tasks are done, we can start deploying the UAGs.  The steps are:

  1. Open PowerShell and change to the directory where the deployment scripts are stored.
  2. Run the deployment script.  The syntax is .\uagdeploy.ps1 -iniFile <apname>.ini
  3. Enter the appliance root password twice.
  4. Enter the admin password twice.  This password is optional; however, if one is not configured, the REST API and Admin interface will not be available.
    Note: The UAG Deploy script has parameters for the root and admin passwords.  These can be used to reduce the number of prompts after running the script.
  5. If RADIUS is configured in the INI file, you will be prompted for the RADIUS shared secret.
  6. After the script opens the OVA and validates the manifest, it will prompt you for the password for accessing vCenter.  Enter it here.
  7. If a UAG with the same name is already deployed, it will be powered off and deleted.
  8. The appliance OVA will be deployed.  When the deployment is complete, the appliance will be powered on and get an IP address from DHCP.
  9. The appliance configuration defined in the INI file will be injected into the appliance and applied during the bootup.  It may take a few minutes for configuration to be completed.


Testing the Unified Access Gateway

Once the appliance has finished its deployment and self-configuration, it needs to be tested to ensure that it is operating properly. The best way that I’ve found to do this is to use a mobile device, such as a smartphone or cellular-enabled tablet, to access the environment using the Horizon mobile app.  If everything is working properly, you should be prompted to sign in, and desktop pool connections should be successful.

If you are not able to sign in, or you can sign in but not connect to a desktop pool, the first thing to check is your firewall rules.  Validate that TCP and UDP ports 443, 8443 and 4172 are open between the Internet and your Unified Access Gateway.  You may also want to check your Connection Server configuration and ensure that HTTP Secure Gateway, PCoIP Secure Gateway, and Blast Secure Gateway are disabled.

If you’re deploying your UAGs with multiple NICs and your desktops live in a different subnet than your UAGs and/or your Connection Servers, you may need to statically define routes.  The UAG typically has the default route set on the Internet or external interface, so it may not have routes to the desktop subnets unless they are statically defined.  An example of a route configuration may look like the following:

routes1 = 192.168.2.0/24 192.168.1.1,192.168.3.0/24 192.168.1.1

If you need to make a routing change, the best way to handle it is to update the ini file and then redeploy the appliance.

Once deployed and tested, your Horizon infrastructure is configured, and you’re ready to start having users connect to the environment.

Horizon 8.0 Part 9: Creating Your First Desktop Pool

This week, we’re going to talk about desktop pools and how to create your first desktop pool in your new Horizon environment.

Desktop Pools – Explained

So what is a desktop pool?

Desktop pools are a logical grouping of virtual machines that users can access, and these groupings control specific settings about the pool. This includes how the desktops are provisioned and named, protocols that are available for connectivity, and what physical infrastructure they are deployed on.

Horizon has a few different types of desktop pools.  Each pool handles desktops in different ways, and they each have different purposes.  The type of pool that you select will be determined by a number of factors, including the use case, the storage infrastructure, and application requirements.

The types of desktop pools are:

  • Full Clone Pools – Each virtual desktop is a full virtual machine cloned from a template in vCenter.  The virtual machines require a desktop management tool for post-deployment management.  VMs are customized using existing Guest Customization Specifications. These desktops usually persist after the user logs out.
  • Linked Clone Pools – Each virtual desktop is based on a parent VM snapshot and shares its disk with the parent virtual machine.  Changes to the linked clone are written to a delta disk.  The virtual machines are managed by View Composer.   Linked Clone desktops can be Floating or Dedicated assignment, and they can be configured to be refreshed (or rolled back to a known good snapshot) or deleted on logoff. Linked Clone desktops are officially deprecated in Horizon 2006, and they will be removed in a future release.
  • Instant Clone Pools – Each virtual desktop is based on a parent VM snapshot. The snapshot is cloned to a VM that is deployed to each host, powered up, and then stunned. All guest VMs are then “forked” from this VM and quickly customized. Guest VMs share virtual disks and initial memory maps with the parent VMs.  VMs are managed by vCenter and a “next generation” Composer that is built into the Connection Servers.
  • Manual Pools – The machines that make up the manual pool consist of virtual and/or physical machines that have had the View Agent installed.  These machines are not managed by Horizon.
  • Remote Desktop Session Host Pool – The machines that make up these pools are Windows Servers with the Remote Desktop Session Host Role installed.  They can be provisioned as linked clones or manually, and they are used for published desktops and published applications.

There is one other choice that needs to be selected when creating a desktop pool, and that is the desktop assignment type.  There are two desktop assignment types:

  • Floating Assignment – Desktops are assigned to users at login and are returned to the pool of available desktops when the user signs out.
  • Dedicated Assignment – Desktops are assigned to a user, and the user gets the same desktop at each login.  Desktops can be assigned automatically at first login or manually by an administrator.

Creating Your Desktop Image

Before you can create a desktop pool, you need to have configured a desktop virtual machine with all of your applications and optimizations configured.  This virtual machine will be the template or gold pattern for all of the virtual machines that Horizon will deploy as part of the pool.

The virtual desktop template details, including the virtual machine specifications and installed applications, will depend on the results of any use case definition and desktop assessment exercises that are performed during the project’s design phase.  

I won’t cover how to create a desktop or RDSH template in this series.  Instead, I recommend you check out the Building an Optimized Windows Image guide on VMware Techzone or Graeme Gordon‘s session from VMworld – DWHV1823 Creating and Optimizing a Windows Image for VDI and Published Applications.

Creating A Desktop Pool

For this walkthrough, I will be doing an Automatic Floating Assignment Instant-Clone desktop pool.  These are otherwise known as Non-Persistent desktops because the desktop is destroyed when the user signs out.

If you’re familiar with previous versions of the series, you’ll notice that there are more screens and the order that some steps are performed in has changed.  Please note that some of the menu options will change depending on the type of desktop pool you’re provisioning.

1. Log into the Horizon Console.  Under Inventory, select Desktops.

2.  Click Add to add a new pool.

3. Select the Pool Type that you want to create.  For this, we’ll select Automated Pool and click Next.

Note: In some environments, you may see an error at this step when using Instant Clones if View Storage Accelerator is disabled.

4.  Choose the type of virtual machines that will be deployed in the environment. For this walkthrough, select Instant Clone. If you have multiple vCenter Servers in your environment, select the vCenter where the desktops will be deployed. Click Next.

5. Select whether you want to have Floating or Dedicated Desktops. For this walkthrough, we’ll select Floating and click Next.

Note: The Enable Automatic Assignment option is only available if you select Dedicated. If this option is selected, View automatically assigns a desktop to a user when they log in to the dedicated pool for the first time.

6. Select whether VSAN will be used to store desktops that are provisioned by Horizon.  If VSAN is not being used, select the second option – “Do Not Use VSAN.”

If you want to store the Instant Clone replica disks that all VMs are provisioned from on different datastores than the VMs, and you are not using VSAN, select the Use Separate Datastores for Replica and Data Disks option.

7. Each desktop pool needs an ID and, optionally, a Display Name.  The ID field is the official name of the pool, and it cannot contain any spaces.  The Display Name is the “friendly” name that users will see when they select a desktop pool to log into.  You can also add a description to the pool.

8. Configure the provisioning settings for the pool.  This screen allows you to control provisioning behavior, computer names, and the number of desktops provisioned in the pool.

9. After configuring the pool’s provisioning settings, you need to configure the pool’s vCenter settings.  This covers the Parent VM and the snapshot that the Instant Clones will be based on, the folder that they will be stored in within vCenter, and the cluster, datastores, and, optionally, the networks that will be used when the desktops are deployed.

In order to configure each setting, you will need to click the Browse button on the right hand side of the screen.  These steps must be completed in order.

9-A. First, select the parent VM that the Instant Clone desktops will be based on.  Select the VM that you want to use and click Submit.

9-B. The next step is to select the Parent VM snapshot that the Instant Clone desktops will be based on.  Select the snapshot that you want to use and click OK.

9-C. After you have selected a Parent VM and a snapshot, you need to select the vCenter folder in the VMs and Templates view that the VMs will be placed in.  Select the folder and click OK.

9-D. The next step is to place the pool on a vSphere cluster.  The virtual machines that make up the desktop pool will be run on this cluster, and the remaining choices will be based on this selection.  Select the cluster that they should be run on and click OK.

9-E. The next step is to place the desktops into a Resource Pool.  In this example, I have no resource pools configured, so the desktops will be placed in the Cluster Root.

9-F. Next, you will need to pick the datastores that the desktops will be stored on. 

9-G. When using Instant Clone desktops, you will have the option to configure the network or networks that the desktops are deployed onto. By default, all desktops are deployed to the same network as the parent VM, but administrators can optionally deploy virtual desktops to different networks.

10. After configuring the vCenter settings, you need to configure the Desktop Pool settings. These settings include:

  • Desktop Pool State – Enabled or Disabled
  • Connection Server Restrictions
  • Pool Session Types – Desktop only, Published Applications, or Both
  • Disconnect Policy
  • Cloud Management – Enable the pool to be consumed by the Universal Broker service and entitled from the Horizon Cloud Service

11. Configure the remote display settings. This includes choosing the default display protocol, allowing users to select a different protocol, and configuring the 3D rendering settings such as enabling the pool to use NVIDIA GRID vGPU. Administrators can also choose to enable Session Collaboration on the pool.

12. Configure Guest Customization settings by selecting the domain that the provisioned desktops will join, the OU where the accounts will be placed, and any scripts that will be run after provisioning.

13. Review the settings for the pool and verify that everything is correct.  Before you click Finish, check the Entitle Users checkbox in the upper right.  This will allow you to select the users and/or groups who have permission to log into the desktops.

14. After you click Finish, you will need to grant access to the pool.  View allows you to entitle Active Directory users and groups.  Click Add to entitle users and groups.

15. Search for the user or group that you want to entitle.  If you are in a multi-domain environment, you can change domains by selecting the domain from the Domains box.  Click on the users or groups that you want to grant access to and click OK.

Note:  I recommend that you create Active Directory security groups and entitle those to desktop pools.  This makes it easier to manage a user’s pool assignments without having to log into View Administrator whenever you want to make a change.

16. Review the users or groups that will be entitled to the pool, and click OK.

17. You can check the status of your desktop pool creation in vCenter.  If this is a new pool, it will need to complete the Instant Clone provisioning process. To learn more about the parent VMs that are provisioned when Instant Clone pools are created, please see this article for traditional instant clones or this video for Instant Clone pools using Smart Provisioning.

Once the desktops have finished deploying, you will be able to log into them through the Horizon HTML5 Client or the Horizon Client for your endpoint’s platform.

I realize that there are a lot of steps in the process of creating a desktop pool.  It doesn’t take nearly as long as it seems once you get the hang of it, and you will be able to fly through it pretty quickly.

Applying the DaaS 9.0.1 Update

Earlier this week, VMware released the first major update bundle for the Horizon DaaS platform. This update applies some fixes to the platform and updates the desktop agent bundle to include the Microsoft Teams support that was released with Horizon 8. You can find the release notes here.

Background

Today, I will walk through how to apply the update in your environment. But before I do that, I want to give a little background on DaaS.

Horizon DaaS is VMware’s Desktop-as-a-Service Platform. It is typically used by organizations that want to provide a multi-tenant platform for hosting desktops and published applications. While the Horizon Client and the desktop agent are shared with Horizon 7, the management plane for the service providers and the tenants is built from the ground up to support multitenancy.

DaaS 9.0 was released back in May, and it contained some major enhancements to the platform. One of these enhancements was automating the lifecycle of the service provider and tenant appliances, including applying hotfixes, using a new set of appliances and components called Horizon Version Manager and Horizon Air Link.

DaaS utilizes virtual appliances, deployed in pairs for high availability, for service provider and tenant operations. Prior to DaaS 9.0, all of the deployment and update operations had to be performed manually, and this could take hours in large environments with a lot of customers as the updates had to be deployed and installed on two appliances for each tenant.

Checking the Environment’s Health

The first thing that should be done before deploying any hotfixes or patches in the DaaS environment is evaluating the environment’s health. The patching operation for the service provider or tenant management infrastructure will fail if one of the appliances in the pair is in an unhealthy state.

The steps to perform a quick health check are:

  1. Log into the DaaS Service Center.
  2. Go to the Appliances menu and select Browse Appliances.
  3. Validate that all appliances have a green Up arrow next to their name as shown in the picture below.

Any appliances in an unhealthy state will need to be investigated and troubleshot. If basic troubleshooting does not resolve the issue, you can open a ticket with GSS to investigate further.

GSS may have you redeploy the appliance if the issues are not easily resolved. You can redeploy appliances by clicking the Actions menu for the appliance and selecting the Restore option. This will deploy a new appliance and sync it with the other appliance in the HA pair.

Preparing for the DaaS Update

The process for applying hotfixes and upgrades has changed in DaaS 9. The process is automated, and it is managed through the Horizon Version Manager interface. Before we go into this interface, there are a few tasks that you need to perform to prepare to deploy the hotfix.

Before we begin, there are some components that need to be downloaded from My VMware. The DaaS update has four components. Two are the hotfixes that will be installed on appliances that have already been deployed, and two files will be used when deploying new appliances.

Note: There is also an updated version of the Horizon Version Manager appliance that was released as part of DaaS 9.0.1. This post is just about deploying the hotfix components to the tenant and service provider appliances, so we will not be talking about the new HVM.

These files are:

  • dt-platform-20_2_0-update01_SP-RM.tgz – Cumulative update to Horizon DaaS 9.0.0 for Service Provider appliances
  • dt-platform-20_2_0-update01_TA-DM.tgz – Cumulative update to Horizon DaaS 9.0.0 for Tenant appliances
  • dt-aux-20_2_0.deb – Horizon DaaS 9.0.1 core platform Debian package
  • node-manifest.json – Updated node manifest file used to validate deb component checksums

Update: The dt-aux-20_2_0.deb and node-manifest.json files (listed above) are not used when applying a hotfix to a DaaS environment. These files are only used when performing an upgrade from 8.0.1 to 9.0.1 or when deploying new management appliances and new tenants in an existing DaaS 9.0.1 environment. It is still important to update the deb files in the install cache so that new management appliances and tenants do not need to have the updates applied after deployment. This was not clear when this post was originally published.

Once you have these files downloaded, you will need to upload them to the Horizon Version Manager appliance. If you are using a Windows machine, you will need to use a tool like WinSCP to complete this task.

These files need to be uploaded to the following folders on the Horizon Version Manager appliance:

  • dt-platform-20_2_0-update01_SP-RM.tgz – /opt/vmware/hvm/hotfixes
  • dt-platform-20_2_0-update01_TA-DM.tgz – /opt/vmware/hvm/hotfixes
  • dt-aux-20_2_0.deb – /opt/vmware/hvm/install-upgrade
  • node-manifest.json – /opt/vmware/hvm/install-upgrade

Note: When you upload the dt-aux and node-manifest files in a deployed environment, you will be prompted to replace the existing files. The old files will need to be replaced.
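
If you’d rather script this step than click through WinSCP, the upload can also be done with a few lines of Python over SFTP. Treat this as a minimal sketch – the appliance hostname, the credentials, and the assumption that the files sit in your working directory are all placeholders for your environment:

```python
import paramiko

# Upload destinations for the update components (paths from the table above).
uploads = {
    "dt-platform-20_2_0-update01_SP-RM.tgz": "/opt/vmware/hvm/hotfixes/",
    "dt-platform-20_2_0-update01_TA-DM.tgz": "/opt/vmware/hvm/hotfixes/",
    "dt-aux-20_2_0.deb": "/opt/vmware/hvm/install-upgrade/",
    "node-manifest.json": "/opt/vmware/hvm/install-upgrade/",
}

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab convenience only
client.connect("hvm.example.com", username="root", password="your-password")

sftp = client.open_sftp()
for filename, folder in uploads.items():
    # Assumes the files are in the current working directory.
    sftp.put(filename, folder + filename)

sftp.close()
client.close()
```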

File ownership and permissions will need to be updated on the new files after they have been uploaded. The files should be group-owned by the Rundeck group, and the permissions should be set to an octal value of 644. If you are using WinSCP on Windows, you can set the permissions for these files through the Properties menu.
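
If you have shell access to the appliance, you can also fix up the ownership and permissions in one pass with a short script like the one below. This is a sketch – the group name "rundeck" is my assumption based on the requirement above, so verify the actual group name on your appliance before running it:

```python
import os
import shutil

# Run on the HVM appliance after uploading. Sets group ownership and
# 644 (rw-r--r--) permissions on the staged update files.
# The "rundeck" group name is an assumption; verify it on your appliance.
files = [
    "/opt/vmware/hvm/hotfixes/dt-platform-20_2_0-update01_SP-RM.tgz",
    "/opt/vmware/hvm/hotfixes/dt-platform-20_2_0-update01_TA-DM.tgz",
    "/opt/vmware/hvm/install-upgrade/dt-aux-20_2_0.deb",
    "/opt/vmware/hvm/install-upgrade/node-manifest.json",
]
for path in files:
    shutil.chown(path, group="rundeck")  # change group ownership only
    os.chmod(path, 0o644)                # rw-r--r--
```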

Deploying the Hotfix

Once the health of the environment has been validated and the update components downloaded and staged, it’s time to deploy the hotfix.

As mentioned above, the hotfix will be deployed to all appliances using the Horizon Version Manager interface. At a high level, the update process looks like this:

  1. Refresh the list of available hotfixes – this job looks at the hotfixes available in the /opt/vmware/hvm/hotfixes folder and updates the list of available hotfixes to include any new files that have been added.
  2. Apply hotfixes to DaaS appliances – Deploys hotfixes to DaaS appliances in the specified tenants

The steps for deploying the DaaS hotfixes are:

  1. Log into your Horizon Version Manager interface.
  2. Select Horizon DaaS Hotfix Management.
  3. Click 1. Refresh Hotfix List.
  4. Click Run Job Now.
  5. Wait for the job to complete. After the job completes successfully, return to the Horizon DaaS Hotfix Management job list by clicking Jobs on the left-hand menu.
  6. Click 3. Apply Hotfix to DaaS Appliances.
  7. Fill in the following details:

  • ServiceProvider-IP: IP address or FQDN of one of the Service Provider appliances
  • ServiceProvider-Appliance-Password: The password for the desktone user on the appliance
  • Domain-Name: NetBIOS name for the Service Provider Active Directory environment
  • Domain-User: Active Directory User with administrator rights in the Service Provider admin console
  • Domain-Password: Password for the administrator user
  • Org-DaaS-Version: Select the version of your DaaS organization from the dropdown box
  • Hotfix: Select the hotfixes that you wish to apply in the environment. For DaaS 9.0 Update 1, there is an update bundle for the Service Provider appliances, and there is an update bundle for the Tenant appliances
  • Org-IDs: Enter the DaaS organization IDs for the tenants you want to apply the hotfix to. You do not have to apply the hotfix to every tenant in the environment – only the tenants whose IDs you enter will receive the update. If you leave this field blank, the selected hotfixes will be applied to all appliances in all tenants.

The hotfix should be applied to the service provider org, Tenant 1000, before applying the hotfix to any customer tenants in the environment.

Note: The Service Provider and Resource Manager appliances are part of Tenant 1000. If you want to just upgrade these appliances, enter 1000 in the Org-ID field.

Note: You can find a tenant’s ID by logging into the Service Center and selecting the Tenants tab.

8. Click Run Job.

9. Horizon Version Manager will execute a workflow that completes the following steps for each appliance that will receive the hotfix:

  • Take a snapshot of the appliance virtual machines
  • Push the hotfix to the appliance
  • Install the hotfix
  • Restart the DaaS services on the appliance

The job log will list all of the appliances that the update was attempted on, the status of the appliance, and the status of each tenant organization where the update was attempted.

As you can see, the process for deploying and managing hotfixes in Horizon DaaS 9 is fairly straightforward. There is only one manual step – uploading the hotfix files to the HVM appliance.

Next week, we’ll return to our Horizon 8/Horizon 2006 series where we talk about building a desktop pool in the new environment.

Horizon 8.0 Part 8: Configuring Horizon for the First Time

The Horizon series took a hiatus over the last few weeks so I could prepare for VMworld.  If you haven’t done so, you can check out the VMworld content in the VMworld Content Library.  I highly recommend you do – there is a lot of good Horizon content in there.

We’re going to pick up right where we left off after Part 7 and start configuring our deployed connection servers.

Now that the Connection Server has been set up, it’s time to configure it to work with vCenter to provision and manage desktops and RDSH servers.

Logging into the Horizon Administrator

Before anything can be configured, though, we need to first log into the Horizon Administrator management interface.  Horizon now uses an HTML5-based management interface, so it can be accessed from any modern web browser.

Prior to Horizon 2006, the main interface was built on Adobe Flex, which required Adobe Flash to be installed on any machine that you planned to use to administer Horizon. The HTML5 interface was introduced during the Horizon 7 lifecycle, and it reached feature parity within the last year.

In Horizon 2006, the Flash-based console has been removed, and the HTML5 console is now the only administrator console.  This makes it easier to perform administrative tasks in Horizon as you don’t need to install Flash or jump through hoops to get it temporarily enabled for a website.

To log in, take the following steps:

1. Open your web browser.

2. Navigate to https://<FQDN of connection server>/admin

3. Log in with the Administrator Account you designated (or with an account that is a member of the administrator group you selected) when you installed the Connection Server.

4. After you log in, you will be prompted for a license key.

Note:  The license keys are retrieved from your MyVMware site.  If you do not input a license key, you will not be able to connect to desktops or published applications after they are provisioned.  You can add or change a license key later under View Configuration –> Product Licensing and Usage. If you are using a Horizon Universal or Horizon Subscription license, you will not have a license key. Licensing is handled by a cloud service through the Cloud Connector appliance.

5. Click Edit License.  Paste your license key from the MyVMware site into the license key box and click OK.

6. After your license key is installed, the Licensing area will show when your license expires and the features that are licensed in your deployment.

Configuring Horizon for the First Time

Once you’ve logged in and configured your license, you can start setting up the Horizon environment.  In this step, the Connection Server will be configured to talk to vCenter and Composer.

1.   Expand View Configuration and select Servers.

2.  Select the vCenter Servers tab and select Add…

3. Enter your vCenter server information.  The service account that you use in this section should be the vCenter Service Account that you created in Part 6.  Do not change anything in the Advanced Settings section.

Note: If you are using vCenter 5.5 or later, the username should be entered in User Principal Name format – username@fqdn.

4. If you have not updated the certificates on your vCenter Server, you will receive an Invalid Certificate Warning.  Click View Certificate to view and accept the certificate. 

Note: Steps 5-8 refer to Horizon Composer. Composer is deprecated in Horizon 2006, and it will be removed in a future version. It is mainly here to support migrations from Horizon 7 to Horizon 8. If you are starting a new project, please use Instant Clones instead of Composer and Linked Clones, and do not configure Composer when integrating Horizon with vCenter.

These steps are included for completeness, and they may be required in some instances where you are adding a new vCenter to an existing environment. I will be using old screenshots for this section.

5.  Select the View Composer option that you plan to use with this vCenter.  The options are:

A. Do not use View Composer – View Composer and Linked Clones will not be available for desktop pools that use this vCenter.

B. View Composer is co-installed with vCenter Server – View Composer is installed on the vCenter Server, and the vCenter Server credentials entered on the previous screen will be used for connecting.  This option is only available with the Windows vCenter Server. (Note: This option should not be used as vCenter is now distributed as a virtual appliance and Composer runs on Windows Server.)

C. Standalone View Composer Server – View Composer is installed on a standalone Windows Server, and credentials will be required to connect to the Composer instance.  This option will work with both the Windows vCenter Server and the vCenter Server virtual appliance.

Note: The account credentials used to connect to the View Composer server must have local administrator rights on the machine where Composer is installed.  If the account does not have local administrator rights, you will get an error that you cannot connect.

6. If Composer is using an untrusted SSL certificate, you will receive a prompt that the certificate is invalid.  Click View Certificate and then accept.

For more information on installing a trusted certificate on your Composer server, please see Part 5.

7. The next step is to set up the Active Directory domains that Composer will connect to when provisioning desktops.  Click Add to add a new domain.

8. Enter the domain name, user account with rights to Active Directory, and the password and click OK.  The user account used for this step should be the account that was set up in Part 6.

Once all the domains have been added, click Next to continue.

9. The next step is to configure the advanced storage settings used by Horizon.  The two options to select on this screen are:

  • Reclaim VM Disk Space – Allows Horizon to reclaim disk space allocated to linked-clone virtual machines.
  • Enable View Storage Accelerator – View Storage Accelerator is a RAM-based cache that can be used to offload some storage requests to the local system.  Regenerating the cache can impact IO operations on the storage array, so maintenance blackout windows can be configured to prevent the cache from being regenerated during peak periods.  The max cache size is 2GB.

After you have made your selections, click Next to continue.

10. Review the settings and click Finish.

Configuring the Horizon Events Database

The last thing that we need to configure is the Horizon Events Database.  As the name implies, the Events Database is a repository for events that happen within the View environment.  Some examples of recorded events include logon and logoff activity and Composer errors.

Part 6 described the steps for creating the database and the database user account.

1. In the View Configuration section, select Event Configuration.

2. In the Event Database section, click Edit.

3. Enter the following information to set up the connection:

  • Database Server (if not installed to the default instance, enter as servername\instance)
  • Database Type
  • Port
  • Database name
  • Username
  • Password
  • Table Prefix (not needed unless you have multiple Connection Server environments that use the same events database – i.e., large “pod” environments)

Note: The only SQL Server instance that uses port 1433 is the default instance.  Named instances use dynamic port assignment, which assigns a random port number to the service upon startup.  If the Events database is installed to a named instance, the instance will need a static port number.  You can configure SQL Server to listen on a static port by following this TechNet article.  For the above example, I assigned port 1433 to the Composer instance since I will not have a named instance on that server.

If you do not configure a static port assignment and try to connect to a named instance on port 1433, you may receive an error that the server is not reachable.
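
A quick sanity check before saving the configuration is to confirm that the port actually answers from the network. Here is a minimal example, assuming a placeholder server name and port – substitute your SQL Server FQDN and the static port you assigned:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholders: replace with your SQL Server FQDN and static port.
print(port_open("sql01.example.com", 1433))
```

If this prints False for your named instance, revisit the static port assignment before troubleshooting anything else.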

4. If setup is successful, you should see a screen similar to the one below.  At this point, you can change your event retention settings by editing the event settings.

5. To edit the event retention settings, click Edit.  Select the length of time that you want events to be shown in View Administrator and classified as new. Then click OK for the change to take effect.

After completing these steps, your Horizon environment should be licensed, connected to your vCenter, and the event database should be configured. At this point, you are ready to create your parent image and deploy your first desktop pool. We’ll cover those steps in the next post.

Horizon 8.0 Part 7: Deploying Horizon Connection Servers

Connection Servers are one of the most important components in any Horizon environment, and they come in two flavors – the standard connection server and the replica connection server.

Connection Servers handle multiple roles in the Horizon infrastructure.  They handle primary user authentication against Active Directory, manage desktop pools, provide a portal for accessing desktop pools and published applications, and broker connections to desktops, shared hosted desktops, and published applications.

When you run the Horizon installer, you will see two connection server types – the standard connection server and the replica connection server.  The Standard and Replica Connection Servers have the same feature set and perform identically after installation has completed.  The only difference between the two is that the standard connection server is the first server installed in the pod and initializes a new environment, while a replica server copies data from a server in an existing environment.

Note: When you run the Connection Server installer, you will see a third role – the TrueSSO Enrollment Server role. This role is used to enable TrueSSO, which provides passwordless authentication to desktops in conjunction with VMware Workspace ONE Access.  That role will not be covered at this time.

In previous versions of Horizon, there was another connection server role – the Security Server.  The Security Server was a stripped-down version of the regular Connection Server designed to provide secure remote access.  The Security Server was discontinued in Horizon 8, and it has been replaced by the Unified Access Gateway.

Architecture and Concepts

Before we go through the steps to deploy a Horizon Connection Server, let’s cover some basic architecture and concepts around Connection Servers.

VMware uses the “pod and block” concept to describe the basic Horizon architecture. A pod is a cluster of Horizon connection servers. Pods can scale out to 7 connection servers supporting up to 10,000 user sessions. Horizon uses Microsoft Active Directory Lightweight Directory Services (AD LDS, formerly ADAM) as its primary datastore. The AD LDS instance contains all of the common configuration for the Horizon pod, such as vCenter information, Instant Clone AD provisioning accounts, and pool configuration and entitlements. Each connection server in the pod has a local copy of the AD LDS instance.

All connection servers in the pod must be in the same physical datacenter, and Cloud Pod Architecture must be used if a multi-site deployment is required. VMware does not support Horizon deployments where connection servers in the same pod are spread across sites. This includes sites where active-active storage technologies are used, such as VPLEX and stretched vSAN. If active-active storage technologies are used, the Horizon management components must all be pinned to one site. They can manage desktops that are deployed into the other site, but all of the connection servers must be located together.

There are two reasons for this – the AD LDS instance mentioned above and the message bus technology used for communications between the connection servers. AD LDS can be prone to split-brain and USN rollback issues in the event of a network or server outage, especially if a server is operating on its own or restored from a backup or snapshot. These issues can prevent servers from replicating data and resolving replication conflicts when two connection servers have different copies of the data, which will impact the stability and operation of the environment.

Horizon uses a Java Message Service (JMS) message bus for communications between the connection servers, and this requires extremely low latency (sub-millisecond) to ensure that all connection servers stay in sync. If the connection servers are not in sync, you may see problems such as two users being assigned the same floating desktop. Simon Long has a great blog post explaining this in more detail.

In the Horizon pod and block architecture, blocks are the vSphere resources that desktops and RDSH servers will run on. A block is a vCenter server for management and one or more ESXi hosts or clusters for running virtual machines. A Horizon deployment can have one or more resource blocks. The management components can run alongside desktop or RDSH resources, but this is not recommended for most deployments.

Note: While the Horizon connection servers must be located together in the same site, the desktop resources do not need to be located in the same site as the Connection Servers.

Installing the First Connection Server

Before you can begin installing the Horizon Connection Server, you will need to have a server prepared that meets the minimum requirements. The basic requirements, which are described in Part 2, are a server running Server 2012 R2 or later with 2 CPUs and at least 4GB of RAM.

Note:  If you are going to have more than 50 virtual desktop sessions on a Connection Server, it should be provisioned with at least 10GB of RAM.

Once the server is provisioned, and the Connection Server installer has been copied over, the steps for configuring the first Connection Server are:

1. Launch the Connection Server installation wizard by double-clicking on VMware-viewconnectionserver-x86_64-8.x.x-xxxxxxx.exe.

2. Click Next on the first screen to continue.

3.  Accept the license agreement and click Next to continue.

4.  If required, change the location where the Connection Server files will be installed and click Next.

5. Select the type of Connection Server that you’ll be installing.  For this section, we’ll select the Horizon Standard Server.  If you plan on allowing access to desktops through an HTML5-compatible web browser, make sure that “Install HTML Access” is selected; this option should be enabled by default.  Select the IP protocol that will be used to configure the Horizon environment.  Click Next to continue.

If you are installing a standard server to create a new pod, please skip to step 6. If this is a replica server to expand an existing pod, please see Step 5a.

5a – Applies only to Replica Servers. If you are installing a Horizon Replica Server, you will be prompted to provide the IP address or host name of another server in the pod. This is the server that the AD LDS instance will be replicated from.

6. Enter a strong password for data recovery.  This will be used if you need to restore the Connection Server’s LDAP database from backup.  Make sure you store this password in a secure place.  You can also enter a password reminder or hint, but this is not required.

7. Horizon requires a number of ports to be opened on the local Windows Server firewall, and the installer will prompt you to configure these ports as part of the installation.  Select the “Configure Windows Firewall Automatically” option to have this done as part of the installation.

Note: Disabling the Windows Firewall is not recommended. 

8. The installer will prompt you to select the default Horizon environment administrator.  The options are the local server Administrators group, which will grant administrator privileges to all local admins on the server, or a specific domain user or group.  The option you select will depend on your environment, your security policies, and/or other requirements.

If you plan to use a specific domain user or group, select the “Authorize a specific domain user or domain group” option and enter the user or group name in the “domainname\usergroupname” format.

Note: If you plan to use a custom domain group as the default Horizon administrator group, make sure you create it and allow it to replicate before you start the installation. 

9.  Choose whether you want to participate in the User Experience Improvement program.  If you do not wish to participate, just click Next to continue.

10. If you are installing into an SDDC on a public cloud, such as VMware Cloud on AWS, select the cloud service from the drop-down box. Otherwise, select General. Click Install to begin the installation.

11. The installer will install and configure the application and any additional Windows roles or features that are needed to support Horizon.

12. The installer will wait for the Horizon web apps to start.

13. Once the install completes, click Finish.  You may be prompted to reboot the server after the installation completes.

Configuring the locked.properties File

Before you sign into Horizon Administrator to configure the environment, or allow users to connect to a new connection server in an existing pod, a configuration change needs to be made.

Horizon is configured to perform origin checking on all incoming connections. This can prevent users from accessing the HTML5 client and admin interface through a load balanced URL or the Unified Access Gateway. This KB explains the issue in more detail.

In order to properly configure Horizon, we will need to create a file called locked.properties on the connection server. The locked.properties file is a simple text file, and it needs to be created in C:\Program Files\VMware\VMware View\Server\sslgateway\conf.

The steps for configuring HTML client access when using load balanced connection servers can be found in the VMware documentation here, and the steps for configuring HTML client access when using the Unified Access Gateway can be found here. Depending on your environment, one or both of the articles may apply, and these changes must be applied to each connection server in the pod.
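
To give you an idea of what the finished file looks like, here is a sketch of a locked.properties for an environment with a load-balanced URL and two Unified Access Gateways. The property names follow the VMware documentation linked above, but verify the exact syntax against the docs for your version – the FQDNs below are placeholders:

```
# Placeholder FQDNs - replace with your own values
balancedHost=horizon.example.com
portalHost.1=uag1.example.com
portalHost.2=uag2.example.com
```

Origin checking can also be disabled outright by adding checkOrigin=false, but keeping it enabled and listing your known front-end names is the safer option.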

The connection server services will need to be restarted after the changes have been made.

Origin checking is not the only thing the locked.properties file is used for. This file is used to control the behavior of the web server used with Horizon. You can learn more about the Cross-Origin Resource Sharing options used in the locked.properties file here.

Note: The locked.properties file is also used for configuring HTTP settings, Smart Card Access, and other features of the web server used by Horizon. Please see the Horizon documentation at docs.vmware.com for more details.

Now that the Connection Server is installed, it’s time to begin configuring the Horizon application so the Connection Server can communicate with vCenter as well as setting up any required license keys and the events database.  Those steps will be covered in the next part.

Horizon REST API Library for .Net

So…there is a surprise second post for today. This one is a short one, and if you’ve been interested in automating your Horizon deployment, you will like this.

One of the new features of Horizon 2006 is an expanded REST API. The expanded REST API provides administrators with the ability to easily automate pool management and entitlement tasks. You can learn more about the REST API by reviewing the Getting Started guide on Tech Zone, and you can browse the entire API on VMware Code.
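
To give you a feel for the API before we get to the library, here is roughly what a raw interaction looks like in Python – logging in and listing desktop pools. The endpoints below match my reading of the Getting Started guide, but verify them against the API explorer for your version; the server name and credentials are placeholders, and certificate verification is disabled for lab use only:

```python
import requests

HORIZON = "https://horizon.example.com"  # placeholder Connection Server FQDN

# Log in and grab a bearer token. verify=False is for lab use only -
# use a trusted certificate in production.
credentials = {"domain": "example", "username": "svc-horizon", "password": "secret"}
response = requests.post(f"{HORIZON}/rest/login", json=credentials, verify=False)
response.raise_for_status()
token = response.json()["access_token"]

# List the desktop pools in the environment.
headers = {"Authorization": f"Bearer {token}"}
pools = requests.get(f"{HORIZON}/rest/inventory/v1/desktop-pools",
                     headers=headers, verify=False).json()
for pool in pools:
    print(pool.get("name"))
```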

Adopting this API to develop applications for Horizon has gotten easier. Andrew Morgan from the VMware End User Computing business unit has developed a .Net library around the Horizon Server REST API and released it on GitHub. This library supports both .Net 4.7 and .Net Core. The GitHub repo includes code samples for using the library with C# and Visual Basic.

I’m excited to see investment in this REST API as it will help customers, partners, and the community build applications to enhance and extend their Horizon deployments.