“I would like my lab to have web access” seems like such a straightforward request. You want to be able to browse websites, download files, and generally be as connected as you would be at your own workstation. The reality, however, is vastly different. Providing good, reliable Internet access in on-demand, resetting, isolated environments is an interesting engineering challenge. In this post, I’m going to lay out the challenges, the options we provide, and the significant changes we have recently made to improve the overall experience for users.

More and more labs demand faster web access and higher throughput. This has necessitated some changes, which we will discuss in this post.

What web access options are available?

In OneLearn Lab on Demand, we give you a number of options for providing Internet access to lab environments. In practice, you can choose any method, with the caveat that we don’t grant just any user the rights to do so. Normally, a request for web access is vetted by our content team and the appropriate access is added, but in some cases our more advanced IDL Studio users can be granted permission to configure web access themselves.

Web Access (NAT)

This is our simplest method of web access. It lets a lab developer add a simple Linux-based host that provides NAT, DNS, and DHCP services. You can set your own ranges and IP schemes. You never see or interact with it; it’s just there on your network.
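
To make “set your own ranges and IP schemes” a little more concrete, the information involved boils down to something like the sketch below. The field names are my own illustration, not actual Lab on Demand settings.

```python
# Purely illustrative -- these field names are hypothetical, not the
# actual Lab on Demand configuration options.
nat_settings = {
    "lab_subnet": "192.168.50.0/24",                     # isolated lab-side network
    "gateway_ip": "192.168.50.1",                        # address the NAT appliance takes
    "dhcp_range": ("192.168.50.100", "192.168.50.200"),  # leases handed to lab VMs
    "dns_forwarding": True,                              # appliance forwards DNS upstream
}
```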

Historically, this has been rock solid, fast, and reliable. In recent months, however, we’ve had some challenges. The lightweight Linux configuration (and by lightweight, I mean lightweight: it fits on a floppy disk and uses 4 MB of memory) has struggled to keep up with some of the more demanding lab profiles. Being based on an older virtualization technology (specifically, emulated network adapters) has contributed to this, as those adapters have limited throughput. Keep in mind that at the time this was designed, web access in labs was rare, and even when it was in place, it was sporadic and carried very little traffic.

While the simplicity of the configuration hasn’t changed, we have just rolled out a full replacement stack for our beloved “floppy firewall.” More on this later… suffice it to say, we are future-proofed for years to come.

Web Access (Preconfigured)

There are scenarios where folks want to bypass our lightweight NAT device and inject their own, such as a custom firewall or a Windows RRAS server. For those scenarios, you may configure your VM’s network adapter as follows:

With this configuration, you assume much more responsibility. Any interface in a VM that you configure as shown above will simply pull a DHCP address from a special network that sits behind our firewalls and give you (mostly) unrestricted web access. Among other things, you must now:

  1. Ensure you use dynamic MAC addresses.
  2. Ensure you correctly handle resuming from a saved state.
  3. Firewall your interface.
  4. Configure an internal/external interface.
  5. Configure NAT/Routing as needed.
  6. Hide/protect the NAT VM from students (if desired).

So you have a lot more power, but with that power comes responsibility.
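
For example, if the NAT device you inject is a small Linux VM (an assumption for this sketch; a Windows RRAS server works just as well), items 3 through 5 might look roughly like the following. The interface names and address range are placeholders.

```python
import subprocess

# Hypothetical interface layout: eth0 faces the special web-access network
# (external), eth1 faces the isolated lab network (internal).
EXTERNAL_IF = "eth0"
INTERNAL_IF = "eth1"
LAB_SUBNET = "192.168.100.0/24"  # placeholder lab-side network


def sh(cmd):
    """Run a shell command and raise if it fails."""
    subprocess.run(cmd, shell=True, check=True)


# Item 5: enable routing and NAT lab traffic out the external interface.
sh("sysctl -w net.ipv4.ip_forward=1")
sh(f"iptables -t nat -A POSTROUTING -o {EXTERNAL_IF} -s {LAB_SUBNET} -j MASQUERADE")

# Items 3 and 4: basic firewalling -- allow the lab network out, allow replies
# back in, and drop anything else arriving on the external interface.
sh(f"iptables -A FORWARD -i {INTERNAL_IF} -o {EXTERNAL_IF} -j ACCEPT")
sh(f"iptables -A FORWARD -i {EXTERNAL_IF} -o {INTERNAL_IF} "
   "-m state --state ESTABLISHED,RELATED -j ACCEPT")
sh(f"iptables -A FORWARD -i {EXTERNAL_IF} -j DROP")
```

The Windows equivalent would be enabling NAT in RRAS and adding the corresponding Windows Firewall rules; the responsibilities are the same either way.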

Public IP

The last major option, and the one with the most risks and tradeoffs, is Public IP. This method uses the same configuration as Web Access (Preconfigured) in terms of your responsibilities; however, the IP you get is a (mostly) unrestricted public IP address.

So why would you want to use this? Simple: you need to access your lab instance from the Internet, such as when you are federating two students’ labs, allowing cloud services to connect to your lab, or any of a dozen other scenarios.

Labs with public IPs do not benefit from being behind our datacenter firewall and NAT devices, so those VMs are exposed directly to the Internet. It is especially important that lab authors recognize this and take steps to protect the lab instance from attacks. This includes disabling unnecessary services, using complex passwords, and other normal hardening activities.
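
As a starting point (not a complete hardening guide), a Windows lab VM that will sit on a public IP might, at a minimum, block all unsolicited inbound traffic and open only the ports the lab actually needs. The sketch below does that via netsh; the single allowed port (RDP on 3389) is just an example.

```python
import subprocess


def netsh(args):
    """Invoke netsh and raise if the command fails."""
    subprocess.run(["netsh"] + args, check=True)


# Turn the Windows Firewall on for every profile and default to blocking
# unsolicited inbound traffic while still allowing outbound traffic.
netsh(["advfirewall", "set", "allprofiles", "state", "on"])
netsh(["advfirewall", "set", "allprofiles", "firewallpolicy",
       "blockinbound,allowoutbound"])

# Open only what the lab actually needs -- here, RDP as an example.
netsh(["advfirewall", "firewall", "add", "rule", "name=LabRDP",
       "dir=in", "action=allow", "protocol=TCP", "localport=3389"])
```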

“Mostly” Unrestricted

All web traffic, regardless of source, inbound and outbound, NAT and Public IP, passes through a set of filtering devices that do the sorts of things you would expect. They perform basic intrusion detection, block malicious and undesirable sites (e.g., hacking, pirated software, phishing, porn), scan for attacks, and perform dynamic port and IP filtering. This is all based on industry-standard practices and updated several times a day. For both Web Access (Preconfigured) and Web Access (NAT), we apply a more aggressive set of tests than we do for the Public IP option.

For NAT web access scenarios, we can monitor outbound web traffic and dynamically block harmful locations.

For Public IP web access, we do not monitor web traffic, but we do monitor inbound activity patterns and identify lab VMs that are being used for malicious activity.

Updates

One of the single biggest challenges with web access in labs is Windows Update. Labs are built and then used for years. If those labs are set to allow automatic software updates, every time they are launched they begin to process hundreds of updates, sometimes totaling gigabytes of data. This consumes network bandwidth, generates disk I/O in the VM, and uses CPU and memory. One of the biggest causes of “slow performance” in a lab is the immediate processing of hundreds of updates when the lab launches.

To combat this, we severely limit the bandwidth available to the most common update sources, and some update sources we block entirely. These labs are not infrastructure, they are training; unless your training is about installing updates, allowing updates is a very bad idea. Not only does it hurt lab performance, but it may also cause your lab instructions to fall out of sync with the environment. We have had cases where Windows upgraded itself from Windows 7 to Windows 10. Imagine reaching exercise 5 and discovering your Windows 7 VM is now Windows 10.

We publish best practices for this, and they boil down to “Don’t allow Windows Update.” We don’t encourage it, we actively try to reduce it, and in some cases we block it. Nothing productive comes from installing the same updates hundreds or thousands of times over the course of a lab’s life.
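
One common way to follow that advice when building a Windows lab VM is the standard Windows Update policy value; the sketch below sets it from Python’s winreg module, though Group Policy or a .reg file works just as well.

```python
import winreg

# Standard Windows Update policy location; NoAutoUpdate=1 corresponds to
# disabling automatic updates via policy.
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

# Create (or open) the policy key and disable automatic updates.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "NoAutoUpdate", 0, winreg.REG_DWORD, 1)
```

Bake this into the lab image before saving it, so every launched instance starts with automatic updates already turned off.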

Floppy Firewall Grows Up – Announcing Enhanced Firewall

One major step we have taken in the past few weeks is to rip and replace our beloved Floppy Firewall with what we are calling our Enhanced Firewall. This is a full-featured Linux-based firewall that we inject into every NAT lab environment. It is based on modern distributions, fully integrated with both VMware and Hyper-V, runs on Azure, and supports features such as traffic shaping, diagnostics, performance monitoring, port forwarding, and more. For lab authors, nothing changes: you simply choose the Web Access (NAT) option, enter your IP preferences, and we configure and deploy it for you in your lab environment. Nothing else to think about.

Evolving with Enhanced Firewall

Our newly released Enhanced Firewall is the foundation for a new suite of networking services that will allow customers to build rich network scenarios and introduce protection on labs with Public IPs. Specifically, we are building on two key areas:

  1. Rule Injection – Enhanced Firewall allows us to inject very complex configurations. In the near future, we will expose a UI in Lab on Demand that allows you to define standard 5-tuple rules for your lab’s web access (see the sketch after this list).
  2. Port Publishing – We will expose the ability to create publishing rules that let you place an Enhanced Firewall in front of a public IP address, thereby removing much of the work you need to do to protect it and allowing you to select specific ports that pass through to your lab instance. This will be done directly in the lab UI.
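
To make the terminology concrete: a 5-tuple rule matches traffic on source IP, destination IP, source port, destination port, and protocol, and a publishing rule maps a port on the public address to a port on a lab VM. The structures below are purely illustrative; the actual Lab on Demand UI and field names may differ.

```python
from dataclasses import dataclass


@dataclass
class FiveTupleRule:
    """Classic 5-tuple firewall rule: match on addresses, ports, and protocol."""
    src_ip: str    # "any" or a CIDR such as "10.0.0.0/24"
    dst_ip: str
    src_port: str  # "any" or a port/range such as "1024-65535"
    dst_port: str
    protocol: str  # "tcp", "udp", or "any"
    action: str    # "allow" or "deny"


@dataclass
class PortPublishingRule:
    """Forward a port on the public address through to a specific lab VM."""
    public_port: int
    target_vm: str
    target_port: int
    protocol: str = "tcp"


# Illustrative examples only -- not real Lab on Demand syntax.
allow_https_out = FiveTupleRule("any", "any", "any", "443", "tcp", "allow")
publish_web = PortPublishingRule(public_port=443, target_vm="LabWebServer",
                                 target_port=443)
```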

Finally, our Enhanced Firewall has a web management console, and we will expose this console to lab sessions, allowing students to make firewall configuration changes in the lab to test scenarios or learn.

Hardware Upgrades

We have also made significant investments in our datacenter physical infrastructure and upgraded our firewall/filtering appliances, increasing capacity and throughput significantly. We have deployed new TCP/IP configurations and implemented a more fault-tolerant network design that isolates web traffic for better performance, monitoring, and quality of service.