
Tuesday, 22 September 2009

Installing Microsoft Application Virtualization (Part 1)

Introduction

After Microsoft acquired SoftGrid, App-V 4.5 became the first version fully branded under the Microsoft umbrella. The most notable change is, logically, the new name. After some interim names such as Microsoft Application Virtualization (you will still find this name when researching the product), the final name became App-V, in line with the naming of Microsoft's hypervisor, Hyper-V. You will also see the name System Center Application Virtualization Management Server used for this product.

Besides the name change, several new features were introduced, such as Dynamic Suite Composition and the Lightweight Streaming Server. This article will not go into the details of these new features, since the goal here is a step-by-step installation guide of App-V rather than a feature overview.

The App-V architecture is composed of three components:

  • App-V Server
  • App-V Client
  • App-V Sequencer

Installation of the App-V Server

Before starting the installation of the App-V Server component you need to decide whether you will be using the full App-V environment or the streaming-only option. The full App-V environment is essentially the same as the previous SoftGrid version, most importantly in that it includes the database and the full management console. The management console provides options such as assigning applications to users based on group membership and software license metering. The streaming-only option merely arranges that the sequences can be started by the client; authorization (by default everyone can start applications with the streaming-only feature), delivering application shortcuts to the end user and software license metering must be arranged via other software products or scripts. Which option you choose for your infrastructure depends on several factors. In this article I will describe the installation of the full environment option.

The software installation is started using the supplied setup.exe. For the full environment you need to start this executable from the management installation folder; for the streaming-only option you start the identically named executable in the streaming folder.

Microsoft (logically) uses the MSI installer for the installation of the App-V product. The first window shows the welcome message with information about the installation. Nothing of interest is mentioned here, so we will quickly continue with the next steps.


Figure 1

Of course there is also a license agreement that should be accepted before the installation can be carried out.


Figure 2

In the next screen the user name and organization information need to be filled in.


Figure 3

In the Setup Type window the option appears to select the installation methodology. I select the custom option to show all the possibilities and explain what these options mean for your infrastructure.


Figure 4

Because of the custom setup, the available installation options appear. The first, App Virt Management Server, is the actual streaming component; it serves client requests for streamed applications. This component requires MS Core XML Services 6 to be installed on the server.

The second option is the Management Console of the suite. The console makes a connection with the App Virt Management Service.

The third option is that App Virt Management Service, a web-service-based component that the console connects to. To install this option you need Internet Information Services (IIS) and .NET Framework 2.0.

You can install the components on one server or on separate servers, for example the Management Service on an existing IIS server and the console on a dedicated management server. For this article I will install all components on the same server.


Figure 5

If you did not install the required supporting software the following screen will be displayed. You need to cancel the installation and install that software first.


Figure 6

As mentioned before the full App-V environment requires a database. The installation wizard automatically searches for available MS SQL servers and you need to choose on which server you would like to host the database.


Figure 7

Next you can use an existing database (useful if you have more than one App-V Server; I will discuss this later) or create a new one. By default the database will be created using the default paths defined on the SQL server, but you can change that with the option “Use the following location when creating the database”.


Figure 8

If you would like to use the secure communication option available within App-V, you need to have a server certificate installed before you start the installation. That certificate can be configured for App-V in the Connection Security window. In this article I will not use secure connections, so I did not install a certificate.


Figure 9

By default App-V uses port 554 to stream the applications, but here you have the possibility to use a different port.


Figure 10

App-V has only one type of permission within the console. There is no delegation of control possible. You can only give a group Full Control within the App-V infrastructure. In this case I will give the administrator role to the Domain Admins.


Figure 11

Secondly, you also need to specify which users are allowed to access the App-V infrastructure. This setting only allows users to set up a connection to the server; it does not specify which applications the users can use. In this case I will use Domain Users.


Figure 12

The next step is to specify the content path. This content path is the folder in which you will store the sequences so they can be streamed to the clients. If you change the default path you need to create the directory in advance. You can always change the path later in the console.


Figure 13

The installation wizard has now collected all the necessary information and will install the App-V server locally on the disks.


Figure 14

After the installation a final window will appear mentioning that the installation has completed.


Figure 15

A restart is required before the App-V server can be used.


Figure 16

After the installation there are a few settings you should configure before starting to use the server. This can be accomplished by using the App-V Management Console. As mentioned before, this console can be installed on the same server or a separate machine. The shortcut to the console can be found within the Administrative Tools folder on the machine you installed the console. The first time you start the console you need to connect to the App-V server.


Figure 17

If the console and the web service are installed on the same server you can use localhost; otherwise you have to fill in the name of the server on which the App-V web service role is installed. You can use your current credentials or specify a dedicated administrator account.

After logging on you need to specify some settings to finalize the installation and optimize the App-V server configuration.

The first step is to configure the Default Content Path which can be set within System Options below the web service server (the server you connect the management console to).


Figure 18

In the second location you can set some important options about the way memory and processor resources are used on the App-V server. These should be configured on a per-server basis within Server Groups - <Server Group Name> (the default is Default Server Group).

  • Max Memory Allocation: The Max Memory Allocation option specifies how much memory the SoftGrid Streaming Server can use for the SFT file cache to support user sessions. The default value can be rather small for busy SoftGrid Streaming Server systems. This value should be raised to the amount of RAM in the SoftGrid Streaming Server minus the amount of RAM needed for the operating system and other components (a worked example follows this list).
  • Warn Memory Allocation: The Warn Memory Allocation value is the threshold at which the server starts logging warnings to the ‘sft-server.log’ file. This value is typically around 80% of the Max Memory Allocation value.
  • Max Block Size: The Max Block Size depicts the size in kilobytes of the buffer in RAM used to cache the largest contiguous block of data from an SFT file for a user session. This value is ignored in SoftGrid 4.0 and higher, as the Max Block Size is dynamically determined based on information within the SFT file.
  • Number of Core Processes: The number of core processes (default is 3) specifies the number of ‘SFTCore.exe’ processes that can run simultaneously on this server. Each process can handle up to 1.5GB of memory so in general there is no need to increase this number.
  • Max Chunk Size: The Max Chunk Size specifies the size in kilobytes of the largest block of code in any SFT file that may be streamed from this SoftGrid Streaming Server. The default is 64KB and it is recommended to leave it like that.
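
As a quick back-of-the-envelope sketch of the first two values (plain shell arithmetic; the RAM figures are assumptions for illustration only, not recommendations):

  TOTAL_RAM_MB=8192                                # total RAM in the streaming server (assumed figure)
  OS_RESERVE_MB=2048                               # RAM reserved for the OS and other components (assumed figure)
  MAX_ALLOC_MB=$((TOTAL_RAM_MB - OS_RESERVE_MB))   # Max Memory Allocation: total RAM minus the OS reserve
  WARN_ALLOC_MB=$((MAX_ALLOC_MB * 80 / 100))       # Warn Memory Allocation: roughly 80% of the maximum
  echo "Max: ${MAX_ALLOC_MB} MB  Warn: ${WARN_ALLOC_MB} MB"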


Figure 19

In the default provider properties several settings can also be configured, such as the way clients refresh against the server and the authentication and logging options, to suit your needs.


Figure 20

Now the App-V server is ready to stream applications. Remember that the default test application is configured to use secure streaming (via port 322), so if you chose to run the App-V infrastructure on the default non-secure port you should reconfigure the default test application or add your own sequenced application to the console.

Conclusion

In this first part I described the necessary steps to install an App-V server. In the upcoming article the installation and basic configuration of the App-V Sequencer and the App-V (TS) Client will be described.

Thursday, 17 September 2009

Maintaining VMware: Three Common Virtual Machine Tasks

Introduction

The line between PCs and VMs is beginning to blur – seemingly separated only by how physical components are arranged and utilized. Many of the same maintenance tasks you would perform on a PC now apply to a VM. For example, you still need to install software and deploy desktops. So, how is this done virtually? In this article, we will look at some common administrative tasks you will perform as a VM administrator.

Making new Virtual Machines Quickly

VMware VirtualCenter (or VC for short) is used for centralized configuration and management of your VM infrastructure. Many times, as an administrator, you are asked to ‘build a new VM’. What this means is that someone is asking you to create a new system for them to utilize. To the person who will use this system, how it is deployed is transparent – let us take a closer look at what happens behind the scenes.

Clones, templates and ISOs are an administrator's savior when working with VMware. To those who have spent years installing applications and operating systems on bare metal hardware, this could not be any easier. For those using Symantec Ghost, Sysprep or any other form of cloning software, you will find this even easier. You no longer need to image a system with VMware.

Now, you can use ISOs to set up your initial VM. By mapping an ISO image to your newly minted VM, you can pull just about any operating system imaginable into your VM inventory. Yes, you still have to be wary of 32- vs 64-bit operating systems and the fact that some OSs still have issues during installation, but it could not get any easier. Once you have mapped and set up your VM, you can easily ‘duplicate’ it for further rollouts. For example, if you need 30 Windows Vista Ultimate VMs, you could create one of them, create a template out of it and use that template for cloning.

When cloning, you can easily deploy a new VM from a template via ‘Virtual Machines and Templates’, as seen in Figure 1.

Figure 1

Once you have a completed set of templates (for Linux, Windows or other) you can then deploy whichever you need quickly and easily.

Keeping your Virtual Machines Secure and Healthy

When working on your new VMs, you will have the same configuration steps to perform as on a desktop system. Hotfixes, service packs and updates need to be downloaded and installed. You will still have to customize your systems and set up networking, domain connections and other advanced configurations. Figure 2 shows Windows Vista being updated via ‘Windows Update’ in the VC console.

Figure 2

Next, it is wise to configure your firewall, automatic updates, spyware protection and UAC (for example). Make sure you also install antivirus software!

Figure 3

As you can see, from defragging your hard disk to retooling your security, it is important that when working with a VM you follow the same steps you would with any system. These are often forgotten because new administrators sometimes assume that, since VMs are contained within VMware, they are somehow accessed or secured differently – yet you can, for example, still use Remote Desktop for remote administration of your VM. Make sure you check your VirtualCenter logs and reports for issues with how VMs are operating within the ESX environment, and also use Vista's performance statistics. By checking both, you will know if you are running out of resources too quickly.

Using VMware Tools

To create the ultimate experience, install VMware Tools in your new VMs. This will help you work with the VM while using the VC. Figure 4 shows the VMware Tools properties, which can be invoked from the System Tray (systray) icon. Here you can configure many options which will give you a better experience when working in the VC.

Figure 4

In the figure you can see that you can configure specific options, such as how time synchronization is performed on the guest VM and which devices should be connected. You can enable or disable removable devices in the Devices tab. When synchronizing the time, you specify whether you want the guest OS (the VM) to keep the same time as the host.

Figure 5

You can also select which removable devices can be connected when starting the VM. In Figure 6, the IDE (hard disk), and NIC are selected.

Figure 6

Custom scripts may also be used here. You can write a script and invoke it from this tab, which can be used to run commands, map drives and so on. These scripts are tied to specific power states. A default script for each power state is included in VMware Tools. These scripts are located in the guest operating system in C:\Program Files\VMware.

For example, if you wanted to suspend the guest operating system via a script, you can use the suspend-vm-default.bat file.

Next, if you want to shrink your virtual disks with VMware Tools, you can use the Shrink tab. The Shrink tab lets you prepare to export a virtual disk to another system using the smallest disk file size. Use the shrink option to save space that will be eaten up by your VM files.

Figure 7

Lastly, the About tab basically gives you some information about the product, as well as alerting you to the fact that the service is running. Here you can also find the VMware Tools build number, which helps you verify the VMware Tools version in use.

Summary

In this article we reviewed some of the most basic configuration steps you will take when working with a new VM. You should now be familiar with updating systems, performing maintenance and installing and configuring VMware Tools.

10 Basics of Linux that apply to managing VMware ESX through the service console

How the management of VMware ESX, using the service console OS (COS), is the same as the management of the Linux OS and Linux servers.

Introduction

If you are using the full version of VMware ESX you have the option to manage it, from the command line, using the service console operating system (called the COS). The service console, in VMware ESX, is really a modified version of Red Hat Enterprise Linux. Thus, basic Linux administration knowledge is very valuable when you go to manage VMware ESX from the command line.

On the other hand, if you are using VMware ESXi you likely do not access any CLI console on the server itself. Two command line options for managing ESXi are:

  1. The hidden ESXi service console – for information on this tiny Linux console, with very limited features, and how to access it see my article How to access the VMware ESXi Hidden Console.
  2. The VMware remote command line interface (RCLI) – for information on RCLI, see my article; Using VMware’s remote command line interface (RCLI) with VMware ESXi.

Now, here are my 10 basics of Linux administration that apply to managing VMware ESX:

1. Understanding file structure and navigation is critical

Just like navigating Linux or Windows from the command line, it is critical in ESX that you know how to navigate the file structure. Here are some common Linux & ESX commands you will use to get around:

  • ls - to list out files in a directory, just like the DOS dir command. Although, the DOS dir command actually does work in ESX as well. I prefer the long format of the ls command, ls -l


Figure 1: the ls, dir, and ls -l commands in VMware ESX

  • cd – change directory
  • rm – to remove files
  • cp – to copy files
  • rename – to rename files
  • pwd – to show the current directory
One of the best Linux commands I ever learned was the one that lets me find a file anywhere on a filesystem:
find ./ -print | grep {what you are looking for}

Yes, this works great in ESX and it allows me to find the location of log files or executables when they are not in my path or I forget where they are stored. Here is an example of how I used this to find the location of the esxcfg-firewall command:


Figure 2: using the find command
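
For reference, here is a rough sketch of the kind of command shown in Figure 2 (searching from the root of the filesystem; the exact output on your host may differ):

  find / -print 2>/dev/null | grep esxcfg-firewall   # walk the whole filesystem, discard permission errors, filter for the name
  # expected hit: /usr/sbin/esxcfg-firewall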

2. Remote access is usually via SSH when using the CLI

Just as I connect to a Linux server using an SSH client like PuTTY, I also connect to my ESX server. In fact, all the command line examples in this article were done with PuTTY through SSH.

You should know that SSH access to the ESX service console is not allowed for root by default. To enable it, you need to go to the server's console, edit /etc/ssh/sshd_config, set PermitRootLogin to yes, save the file, and restart the SSH daemon with service sshd restart.
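
A minimal sketch of that change, performed at the server's local console:

  nano /etc/ssh/sshd_config     # open the SSH daemon configuration in a text editor
  # change the line:   PermitRootLogin no
  # to:                PermitRootLogin yes
  service sshd restart          # restart the SSH daemon so the new setting takes effect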

3. Local user administration is in /etc/passwd

Just as in Linux, it is best practice in ESX to create yourself a local user that can be used to su to the root account when local root privileges are needed (yes, even if you are using vCenter and likely will not use this a lot).

You could edit the /etc/passwd file, sure, but you should, instead, use useradd to add local users from the command line (but this is also easily done in the VI client if you connect directly to an ESX host). You can change passwords using passwd, just like in Linux.

One thing that is different is that you can set just about all of the ESX authorization settings by using esxcfg-auth.
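
For example, a brief sketch of creating a local account and elevating to root when needed (the user name is purely illustrative):

  useradd jdoe          # create a local service console user (hypothetical name)
  passwd jdoe           # set that account's password, just like in Linux
  su -                  # from that account, switch to root when local root privileges are required
  esxcfg-auth --help    # review the ESX-specific authentication options mentioned above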

4. Critical administration commands can be found in /usr/sbin

As we learned back in #1 with the find command, the esxcfg-XXXX commands are located in /usr/sbin. These are ESX specific commands that you will need to use if administering the server from CLI.

Here is what they look like:


Figure 3: esxcfg commands located in /usr/sbin
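
You can produce the same listing yourself with a simple command:

  ls -l /usr/sbin/esxcfg-*    # long listing of the ESX-specific esxcfg commands in /usr/sbin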

5. Text file editing with vi and nano is a must

How are you going to edit text files like sshd_config to enable SSH remote access without a text editor? Well, you can’t. You must know how to use one of the Linux / ESX text file editors – vi or nano.

Like whiskey, vi is “an acquired taste” and takes some getting used to. If you are a Linux admin, you already know vi. For those who don’t, I encourage you to use nano as it works much like the Windows notepad.

Here is a look at nano:


Figure 4: Using nano to edit text files in VMware ESX

6. You will need to patch it using RPMs, but with different tools – rpm and esxupdate

Just like any OS, you will need to patch ESX. In Linux, this is typically done at the command line using rpm. While rpm is available in ESX, you should instead use esxupdate to apply ESX patches.

Still, the concept is the same and the applications are almost identical.
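
As a hedged comparison (the esxupdate subcommands shown are assumptions; the exact syntax differs between ESX releases, so check the patch management guide referenced below):

  rpm -qa | grep -i vmware    # the classic Linux way: query all installed RPM packages and filter for VMware ones
  esxupdate query             # the ESX way: list patches/bulletins already installed on the host (subcommand assumed)
  esxupdate update            # apply the patches staged in the current patch depot (subcommand assumed)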

For more information on using esxupdate and patching in ESX see:

  • ESX Patch Management Guide
  • My other article, Using ESXUPDATE to update VMware ESX Server

7. Common network tools like ping, ifconfig and traceroute, and proper network configuration, are all crucial

Just as in configuring Linux or even Windows from the command line, critical pieces of ESX Server aren’t going to work without the proper network configuration. The easiest way to do that in ESX is to use the VI client but you can do it at the command line using commands like esxcfg-nics, esxcfg-route, esxcfg-vmknic, esxcfg-vswif.

About half of what these commands do is to edit traditional Linux text configuration files like /etc/hosts, /etc/resolv.conf, /etc/sysconfig/network, /etc/vmware/esx.conf.

Just like any Linux host, an ESX host must have an IP address, a proper subnet mask, a default gateway (if you want to get outside your subnet), DNS servers (unless you are staying local), a host name that can be resolved as an FQDN, and full network communication. That full network communication can be tested with traditional Linux commands like ping, traceroute, nslookup, and ifconfig.
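
A short sketch of those checks from the service console (the esxcfg flags are assumptions based on the ESX 3.x tools; the address and host name are hypothetical):

  esxcfg-nics -l                    # list the physical NICs the host sees (-l assumed to mean "list")
  esxcfg-vswif -l                   # list the service console network interfaces (-l assumed)
  cat /etc/resolv.conf              # confirm the DNS servers the host will use
  ping 192.168.1.1                  # test reachability of the default gateway (hypothetical address)
  nslookup esxhost01.example.com    # verify the host's FQDN resolves (hypothetical name)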

8. Process administration, at times, is necessary – ps, kill

Just as in Linux, at times, process administration is required. In ESX, you can view running processes with the ps (or process list) command. You can kill processes with the kill command.

Unlike Linux, ESX has some critical processes such as vmware-watchdog, vmware-hostd, vmklogger, and others.
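
A minimal sketch (PID is a placeholder for the process ID you identify):

  ps -ef | grep vmware    # list running processes and filter for the VMware-related ones
  kill PID                # send the default TERM signal to a misbehaving process
  kill -9 PID             # force-kill it only if it ignores the normal signal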

9. Performance management from the CLI is quickly handled with top and esxtop

Eventually in any OS you will have a performance management issue. You can quickly resolve performance issues in Linux with top. In ESX, top also works but you should, instead, use esxtop.


Figure 5: VMware ESXTOP

For more information on understanding performance statistics with esxtop, see the VMware ESX Resource Management Guide and Interpreting ESXTOP Statistics in the VMware Community.
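
Beyond the interactive view, esxtop can also log statistics for later analysis; a hedged sketch (the batch-mode flags are assumptions drawn from the ESX 3.x tool, so verify them against the Resource Management Guide mentioned above):

  esxtop                                # interactive, top-like view of host and VM resource usage
  esxtop -b -d 10 -n 60 > esxtop.csv    # batch mode: 60 samples, 10 seconds apart, written to a CSV file (flags assumed)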

10. Getting help with --help and man

And finally, getting help in Linux and in ESX is the same. To learn more about a command you can use that command and add “--help” after it. Even better, you can get fuller instructions using man, which stands for manual pages. For example, if I wanted to learn about esxcfg-firewall, I can just type man esxcfg-firewall and I see a screen like this:


Figure 6: VMware ESX man pages
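
Both forms, as a quick recap:

  esxcfg-firewall --help    # short built-in usage summary
  man esxcfg-firewall       # full manual page, as shown in Figure 6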

Conclusion

Some would say “of course the VMware ESX service console and Linux are the same – the ESX service console IS Linux”. That is not exactly true, as it is a modified version of Red Hat Enterprise Linux. What libraries and packages are loaded in it? What extra commands are added? What commands are removed? There are many differences. Also, the ESX service console may still be based on Linux but can be very different from other flavors of Linux like Ubuntu, SUSE, or Fedora.


From this article, you learned 10 Linux system administration tasks / commands that you can perform in VMware ESX Server and, trust me, if you are not familiar with Linux already, this basic knowledge will be extremely helpful when you get to the ESX service console and need to, say, find and edit a configuration file.

Friday, 28 August 2009

Introduction to server virtualization

What is virtualization and why use it

Virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of making the most of physical resources and of the investment in hardware. Since Moore's law has accurately predicted the exponential growth of computing power, and hardware requirements for the most part have not changed to accomplish the same computing tasks, it is now feasible to turn a very inexpensive 1U dual-socket dual-core commodity server into eight or even 16 virtual servers that run 16 virtual operating systems. Virtualization technology is a way of achieving higher server density. However, it does not actually increase total computing power; it decreases it slightly because of overhead. But since a modern 2-socket 4-core server is more powerful than an 8-socket 8-core server was four years ago, we can exploit this newly found hardware power by increasing the number of logical operating systems it hosts. This slashes the majority of hardware acquisition and maintenance costs, which can result in significant savings for any company or organization.

When to use virtualization

Virtualization is the perfect solution for applications that are meant for small- to medium-scale usage. Virtualization should not be used for high-performance applications where one or more servers need to be clustered together to meet the performance requirements of a single application, because the added overhead and complexity would only reduce performance. We're essentially taking a 12 GHz server (four cores times 3 GHz) and chopping it up into sixteen 750 MHz servers. But if eight of those servers are in off-peak or idle mode, the remaining eight servers will have nearly 1.5 GHz available to them.

While some in the virtualization industry like to tout high CPU utilization numbers as an indication of optimum hardware usage, this advice should not be taken to the extreme where application response times become excessive. A simple rule of thumb is to never let a server exceed 50% CPU utilization during peak loads and, more importantly, never let application response times exceed a reasonable SLA (Service Level Agreement). Most modern servers being used for in-house server duties run at 1 to 5% CPU utilization. Running eight operating systems on a single physical server would raise the peak CPU utilization to around 50%, but the average would be much lower since the peaks and valleys of the virtual operating systems tend to more or less cancel each other out.

While CPU overhead in most of the virtualization solutions available today is minimal, I/O (Input/Output) overhead for storage and networking throughput is another story. For servers with extremely high storage or network I/O requirements, it would be wise to run them on bare metal even if their CPU requirements can be met inside a virtual environment. Both XenSource and Virtual Iron (which will soon be Xen hypervisor based) promise to minimize I/O overhead, yet they're both in beta at this point, so there haven't been any major independent benchmarks to verify this.

How to avoid the "all your eggs in one basket" syndrome

One of the big concerns with virtualization is the "all your eggs in one basket" syndrome. Is it really wise to put all of your critical servers into a single physical server? The answer is absolutely not! The easiest way to avoid this liability is to make sure that a single service isn't only residing on a single server. Let's take for example the following server types:

  • HTTP
  • FTP
  • DNS
  • DHCP
  • RADIUS
  • LDAP
  • File Services using Fiber Channel or iSCSI storage
  • Active Directory services

We can put each of these types of servers on at least two physical servers and gain complete redundancy. These types of services are relatively easy to cluster because they're easy to switch over when a single server fails. When a single physical server fails or needs servicing, the other virtual server on the other physical server would automatically pick up the slack. By straddling multiple physical servers, these critical services never need to be down because of a single hardware failure.

For more complex services such as an Exchange Server, Microsoft SQL, MySQL, or Oracle, clustering technologies could be used to synchronize two logical servers hosted across two physical servers; this method would generally cause some downtime during the transition, which could take up to five minutes. This isn't due to virtualization but rather the complexity of clustering which tends to require time for transitioning. An alternate method for handling these complex services is to migrate the virtual server from the primary physical server to the secondary physical server. In order for this to work, something has to constantly synchronize memory from one physical server to the other so that a failover could be done in milliseconds while all services can remain functional.

Physical to virtual server migration

Any respectable virtualization solution will offer some kind of P2V (Physical to Virtual) migration tool. The P2V tool will take an existing physical server and make a virtual hard drive image of that server with the necessary modifications to the driver stack so that the server will boot up and run as a virtual server. The benefit of this is that you don't need to rebuild your servers and manually reconfigure them as a virtual server—you simply suck them in with the entire server configuration intact!

So if you have a data centre full of aging servers running on sub-GHz servers, these are the perfect candidates for P2V migration. You don't even need to worry about license acquisition costs because the licenses are already paid for. You could literally take a room with 128 sub-GHz legacy servers and put them into eight 1U dual-socket quad-core servers with dual-Gigabit Ethernet and two independent iSCSI storage arrays all connected via a Gigabit Ethernet switch. The annual hardware maintenance costs alone on the old server hardware would be enough to pay for all of the new hardware! Just imagine how clean your server room would look after such a migration. It would all fit inside of one rack and give you lots of room to grow.

As an added bonus of virtualization, you get a disaster recovery plan because the virtualized images can be used to instantly recover all your servers. Ask yourself what would happen now if your legacy server died. Do you even remember how to rebuild and reconfigure all of your servers from scratch? (I'm guessing you're cringing right about now.) With virtualization, you can recover that Active Directory and Exchange Server in less than an hour by rebuilding the virtual server from the P2V image.

Patch management for virtualized servers

Patch management of virtualized servers isn't all that different from that of regular servers because each virtual operating system is its own independent virtual hard drive. You still need a patch management system that patches all of your servers, but there may be interesting developments in the future where you may be able to patch multiple operating systems at the same time if they share some common operating system or application binaries. Ideally, you would be able to assign a patch level to an individual server or a group of similar servers. For now, you will need to patch virtual operating systems as you would any other system, but there will be some innovations in the virtualization sector that you won't be able to do with physical servers.

Licensing and support considerations

A big concern with virtualization is software licensing. The last thing anyone wants to do is pay for 16 copies of a license for 16 virtual sessions running on a single computer. Software licensing often dwarfs hardware costs, so it would be foolish to run an expensive software license on a machine on a shared piece of hardware. In this situation, it's best to run that license on the fastest physical server possible without any virtualization layer adding overhead.

For something like Windows Server 2003 Standard Edition, you would need to pay for each virtual session running on a physical box. The exception to this rule is if you have the Enterprise Edition of Windows Server 2003, which allows you to run four virtual copies of Windows Server 2003 on a single machine with only one license. This Microsoft licensing policy applies to any type of virtualization technology that is hosting the Windows Server 2003 guest operating systems.

If you're running open source software, you don't have to worry about licensing because that's always free—what you do need to be concerned about is the support contracts. If you're considering virtualizing open source operating systems or open source software, make sure you calculate the support costs. If the support costs are substantial for each virtual instance of the software you're going to run, it's best to squeeze the most out of your software costs by putting it on its own dedicated server. It's important to remember that hardware is often dwarfed by software licensing and/or support costs. The trick is to find the right ratio of hardware to licensing/support costs. When calculating hardware costs, be sure to calculate the costs of hardware maintenance, power usage, cooling, and rack space.

There are licensing and support considerations for the virtualization technology itself. The good news is that all the major virtualization players have some kind of free solution to get you started. Even one year ago, when VMware was pretty much the only player in town, free virtualization was not possible, but there are now free solutions from VMware, Microsoft, XenSource, and Virtual Iron. In the next virtualization article, we'll go more in-depth about the various virtualization players.

Tuesday, 21 July 2009

An SME’s Guide to Virtualisation

Virtualisation is now seen as essential in enabling organisations to manage their vital IT resources more flexibly and efficiently. Yet how challenging is it to successfully deploy virtualisation, especially at an SME? This guide, produced by Computer Weekly in association with IBM and Intel, covers the salient issues for an SME seeking to implement a virtualisation strategy.

Overview

Virtualisation is a growing trend in computing as organisations address the challenge of harnessing more processing power for more users, while reining in costs during the recession.

Surveys of SMEs conducted by IDC have revealed these businesses view virtualisation as presenting immediate cost advantages and opportunities to build and grow highly flexible IT environments.

IDC analyst Chris Ingle stresses that virtualisation is nothing new in the IT world, but the increased number of solutions now available for common, x86 servers means SMEs can do a lot of the things that previously only mainframe and Unix users could do.

"It democratises virtualisation and brings it within SME budgets and lets them do things that previously only larger companies could do," he says.

Choose the Right System

This presents valuable opportunities for SMEs to improve how they use resources and develop strategies for business continuity and disaster recovery, among other benefits.

But organisations need to consider carefully what they hope to achieve with virtualisation and choose the solution that best suits their needs. There are a number of different techniques for virtualising a server or building a virtual machine (VM).

Hypervisor Virtualisation

The most common is hypervisor virtualisation, where the VM emulates the actual hardware of an x86 server. This requires real resources from the host (the machine running the VMs).

A thin layer of software inserted directly on the computer hardware, or on a host operating system, allocates hardware resources dynamically and transparently, using a hypervisor or virtual machine monitor (VMM).

Each virtual machine contains a complete system (BIOS, CPU, RAM, hard disks, network cards), eliminating potential conflicts.

Common VM products include Microsoft’s Virtual Server and Virtual PC, along with EMC VMware’s range of products, such as VMware ACE, VMware Workstation and its two server products, ESX and GSX Server.

Risks & Benefits

For medium-sized organisations, virtualisation can lead to significant savings on equipment as well as more centralised management of what they have. It also allows them to harness and distribute greatly increased processing power very quickly.

The process of creating VMs is expected to get even easier for organisations, with Intel integrating improved virtualisation technology into its business-class processors. But this can be a double-edged sword. For instance, analysts warn that, because virtual environments are so cheap and easy to build, many organisations risk losing track of them.

New practices have to be put in place, responding to the increasing overlap in the internal areas of responsibility of the IT staff, as storage, server, and network administrators will need to co-operate more closely to tackle interconnected issues.

Virtualising at Operating System Level

One of the more commonly cited pitfalls of virtualisation is that companies can risk breaching software-licensing agreements as a virtual environment expands.

Without a method to control the mass duplication and deployment of virtual machines, administrators will have a license compliance nightmare on their hands. Virtualising at the operating system (OS) level avoids this problem. Most applications running on a server can easily share a machine with others, provided they can be isolated and secured. In most situations, different operating systems are not required on the same server, merely multiple instances of a single OS.

OS-level virtualisation systems provide the required isolation and security to run multiple applications or copies of the same OS on the same server. Products available include OpenVZ, Linux-VServer, Solaris Zones and FreeBSD Jails. SWsoft, whose technology was at first Linux-only, recently launched its virtualisation technology for Windows as well. Called Virtuozzo, it virtualises the OS so multiple virtual private servers can run on a single physical server. Virtuozzo works by building on top of the operating system, supporting all hardware underneath. The VM does not need pre-allocated memory, as it is a process within the host OS rather than being encapsulated within a virtualisation wrapper.

The upside of OS-based virtualisation is that only one OS licence is required to support multiple virtual private servers. The downside of this option is less choice, because each VM is locked to the underlying OS. In the case of Virtuozzo, only Windows and Red Hat Linux are guaranteed to be supported.

Paravirtualisation

Another approach to virtualisation gaining in popularity is paravirtualisation. This technique also requires a VMM, but most of its work is performed in the guest OS code, which in turn is modified to support the VMM and avoid unnecessary use of privileged instructions.

The paravirtualisation technique allows different OSs to be run on a single server, but requires them to be ported; that is, they must be aware that they are running under the hypervisor. Products such as UML and Xen use the paravirtualisation approach. Xen is the open source virtualisation technology that Novell ships with its own Linux distribution, SUSE, and it also appears in the latest Red Hat development release, Fedora Core 4.

Server Sales Reach Tipping Point

IDC predicts something of an exodus towards virtualised server configurations over the next few years. The market analyst reported recently that the number of servers containing a virtualisation component shipped in Western Europe rose 26.5% to 358,000 units throughout 2008. IDC said these servers made up 18.3% of the market compared to 14.6% in 2007.

For the first time, last year the number of purely physical machines sold was eclipsed by sales of virtual-capable machines, which topped 2 million. IDC predicts declining IT hardware spending will result in VM sales exceeding physical machines by around 10% at some time during the year, and that the ratio of the two could be 3:2 by 2013.

In line with this trend, logical machines, or those with physical and virtual components, will realise a 15.7% increase over the same period. IDC notes that this highlights the importance to organisations of deploying the right tools to manage expanding virtual environments, seeing as both virtual and physical servers have to be operated, monitored and patched.

The research company also advises organisations to ensure they have the right level of education if they are to properly exploit this new and potentially rewarding approach to corporate IT.

Friday, 17 July 2009

Virtualization is Changing the way IT Delivers Applications

Virtualization has rapidly become the hottest technology in IT, driven largely by trends such as server consolidation, green computing and the desire to cut desktop costs and manage IT complexity. While these issues are important, the rise of virtualization as a mainstream technology is having a far more profound impact on IT beyond just saving a few dollars in the data centre. The benefits and impact of virtualization on the business will be directly correlated to the strength of an organization’s application delivery infrastructure. Application delivery is the key to unlocking the power of virtualization, and organizations that embrace virtualization wrapped around application delivery will thrive and prosper, while those that do not will flounder. As virtualization takes centre stage, shifting roles in IT will require a new breed of professionals with broader skill sets to bridge IT silos and optimize business processes around the delivery of applications.

Going Mainstream
We are moving into a new era where virtualization will permeate every aspect of computing. Every processor, server, application and desktop will have virtualization capabilities built into its core. This will give IT a far more flexible infrastructure where the components of computing become dynamic building blocks that can be connected and reassembled on the fly in response to changing business needs. In fact, three years from now, we will no longer be talking about virtualization as the next frontier in enterprise technology. It will simply be assumed. For example, today we normally assume that our friends, family and neighbours have high-speed Internet access from their homes. This was not the case a few years ago, when many were using sluggish dialup lines to access the Internet or had no access at all. High-speed Internet is now mainstream, as virtualization will be. Virtualization will be expected; it will be a given within the enterprise. As this occurs, the conversation within IT circles will shift from the question of how to virtualise everything to the question of what business problems can be solved now that everything is virtualised.

Virtualization and Application Delivery
The most profound impact of virtualization will be in the way organizations deliver applications and desktops to end users. In many ways, applications represent the closest intersection between IT and the business. Your organization’s business is increasingly represented by the quality of its user-facing applications. Be it large ERP solutions, custom web applications, e-mail, e-commerce, client-server applications or SOA, your success in IT today depends on ensuring that these applications meet the business goals. Unfortunately, trends such as mobility, globalization, off-shoring and e-commerce are moving users further away from headquarters, while issues like data centre consolidation, security and regulatory compliance are making applications less accessible to users.

These opposing forces are pushing the topic of application delivery into the limelight. It is forcing IT executives to consider how their infrastructures get mission-critical, data centre-based applications out to users to lower costs, reduce risk and improve IT agility. Virtualization is now the key to application delivery. Today’s leading companies are employing virtualization technology to connect users and applications to propel their businesses forward.

Virtualization in the Enterprise
The seeds of virtualization were first planted over a decade ago, as enterprises began applying mainframe virtualization techniques to deliver Windows applications more efficiently with products such as Citrix® Presentation Server™. These solutions enabled IT to consolidate corporate applications and data centrally, while allowing users the freedom to operate from any location and on any network or device, where only screen displays, keyboard entry and mouse movement traversed the network. Today, products like Citrix® XenApp™ (the successor to Presentation Server) allow companies to create single master stores of all Windows application clients in the data centre and virtualise them either on the server or at the point of the end user. Application streaming technology within Citrix XenApp allows Windows-based applications to be cached locally in an isolation environment, rather than to be installed on the device. This approach improves security and saves companies millions of dollars when compared to traditional application installation and management methods.

Virtualization is also impacting the back-end data and logic tier of applications with data centre products such as Citrix® XenServer™ and VMware ESX that virtualise application workloads on data centre servers. While these products are largely being deployed to reduce the number of physical servers in data centres, the more strategic impact will be found in their ability to dynamically provision and shift application workloads on the fly to meet end user requirements. The third major area concerning the impact of virtualization will be the corporate desktop, enabled by products such as Citrix® XenDesktop™. The benefits of such solutions include cost savings, but they also enable organizations to simplify how desktops are delivered to end users in a way that dramatically improves security and the end user experience (compared to traditional PC desktops). From virtualized servers in the data centres to virtualized end-user desktops, the biggest impact of virtualization in the enterprise will be found within an organization’s application delivery infrastructure.

Seeing the Big Picture
The mass adoption of virtualization technology will certainly require new skills, roles and areas of expertise within organizations and IT departments. Yet the real impact of virtualization will not hinge on the proper acquisition of new technical skills. Rather, to make the most of the virtualization opportunity, organizations will have to focus on breaking down traditional IT silos and adopt end-to-end virtualization strategies. Most IT departments today are organized primarily around technology silos. In many organizations, we find highly technical employees who operate on separate IT “islands,” such as servers, networks, security and desktops. Each group focuses on the health and well-being of its island, making sure that it runs with efficiency and precision. Unfortunately, this stand-alone approach is hampering IT responsiveness, causing pundits like bestselling author Nicholas Carr to ask whether IT even matters to business anymore. To break this destructive cycle, IT employees must take responsibility for understanding and owning business processes that are focused horizontally (from the point of origin in the data centre all the way to the end users they are serving), building bridges from island to island. IT roles will increasingly require a wider, more comprehensive portfolio of expertise around servers, networking, security and systems management. IT personnel will need a broad understanding of all these technologies and how they work together as the focus on IT specialization gives way to a more holistic IT mindset.

Seeking Experts in Delivery
The new IT roles will require an expertise in delivery. IT will need to know how to use a company’s delivery infrastructure to quickly respond to new requirements coming from business owners and end users alike. IT specialization will not completely disappear, but it will not look anything like the silo entrenchment and technical specialization we see today. From this point forward, IT professionals will increasingly be organized around business process optimization to serve end users and line of business owners, rather than around independent technologies sitting in relative isolation. Across the board, the primary organizing principle in IT will shift from grouping people around technology silos to organizing them around common delivery processes. The companies that make this transition successfully will thrive, while those that do not will struggle to compete in an increasingly demanding and dynamic business world. IT organizations of the future will need to develop professionals who can see the parts as a whole and continually assess the overall health of the delivery system, responding quickly to changing business requirements. Employee work groups will continue to form around common processes, but the focus will be less about highly specialized knowledge and more about the efficiency of frequently repeated processes. IT professionals who understand the deep technical intricacies of IP network design, for example, will be in less demand than those who understand best practices in application delivery.

Guidelines for Staying In and Ahead of the Game
If you are not testing the waters of virtualization, you may already be behind. Experiment with virtualization now. Acquire applications and consider how to deliver them as part of your IT strategy. Three key recommendations are:

  • Change the mindset of your IT organization to focus on delivery of applications rather than installing or deploying them. Think about “delivery centres” rather than data centres. Most IT organizations today continue to deploy and install applications, although industry analysts advise that traditional application deployment is too complex, too static and costs too much to maintain, let alone to try to keep up with changes in the business. Delivering on the vision of an IT organization that is aligned with business goals requires an end-to-end strategy of efficiently delivering business applications to users.

  • Place a premium on knowledge of applications and business processes when hiring and training IT employees. IT will always be about technology, but do not perpetuate today’s “island” problem by continuing to hire and train around deep technical expertise in a given silo. If that happens, IT will continue to foster biased mindsets that perceive the world through a technologically biased silo lens, the opposite of what is needed today. IT leaders will increasingly need to be people who understand business processes. Like today’s automotive technicians, they will have to be able to view and optimize the overall health of the system, not the underlying gears and valves – or bits and bytes.
  • Select strategic infrastructure vendors who specialize in application delivery. Industry experts agree that the time is right to make the move from static application deployment to dynamic application delivery. IT will continue to use vendors that specialize in technical solutions that fit into various areas, such as networking, security, management and even virtualization. What is important, however, is forming a strategic relationship with a vendor that focuses not on technology silos, but on application delivery solutions. The vendor should be able to supply integrated solutions to incorporate virtualization, optimization and delivery systems that inherently work with one another, as well as the rest of your IT environment.

Thursday, 16 July 2009

Introduction to Server Virtualization

What is virtualization and why use it
Virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of maximizing physical resources to maximize the investment in hardware. Since Moore's law has accurately predicted the exponential growth of computing power and hardware requirements for the most part have not changed to accomplish the same computing tasks, it is now feasible to turn a very inexpensive 1U dual-socket dual-core commodity server into eight or even 16 virtual servers that run 16 virtual operating systems. Virtualization technology is a way of achieving higher server density. However, it does not actually increase total computing power; it decreases it slightly because of overhead. But since a modern $3,000 2-socket 4-core server is more powerful than a $30,000 8-socket 8-core server was four years ago, we can exploit this newly found hardware power by increasing the number of logical operating systems it hosts. This slashes the majority of hardware acquisition and maintenance costs that can result in significant savings for any company or organization.

When to use virtualization
Virtualization is the perfect solution for applications that are meant for small- to medium-scale usage. Virtualization should not be used for high-performance applications where one or more servers need to be clustered together to meet performance requirements of a single application because the added overhead and complexity would only reduce performance. We're essentially taking a 12 GHz server (four cores times three GHz) and chopping it up into 16 750 MHz servers. But if eight of those servers are in off-peak or idle mode, the remaining eight servers will have nearly 1.5 GHz available to them.

While some in the virtualization industry like to tout high CPU utilization numbers as an indication of optimum hardware usage, this advice should not be taken to the extreme where application responsiveness gets excessive. A simple rule of thumb is to never let a server exceed 50% CPU utilization during peak loads; and more importantly, never let the application response times exceed a reasonable SLA (Service Level Agreement). Most modern servers being used for in-house server duties are utilized from 1 to 5% CPU. Running eight operating systems on a single physical server would elevate the peak CPU utilization to around 50%, but it would average much lower since the peaks and valleys of the virtual operating systems will tend to cancel each other out more or less.

While CPU overhead in most of the virtualization solutions available today are minimal, I/O (Input/Output) overhead for storage and networking throughput is another story. For servers with extremely high storage or hardware I/O requirements, it would be wise to run them on bare metal even if their CPU requirements can be met inside a Virtual environment. Both XenSource and Virtual Iron (which will soon be Xen Hypervisor based) promise to minimize I/O overhead, yet they're both in beta at this point, so there haven't been any major independent benchmarks to verify this.

How to avoid the "all your eggs in one basket" syndrome
One of the big concerns with virtualization is the "all your eggs in one basket" syndrome. Is it really wise to put all of your critical servers into a single physical server? The answer is absolutely not! The easiest way to avoid this liability is to make sure that a single service isn't only residing on a single server. Let's take for example the following server types:

  • HTTP
  • FTP
  • DNS
  • DHCP
  • RADIUS
  • LDAP
  • File Services using Fiber Channel or iSCSI storage
  • Active Directory services

We can put each of these types of servers on at least two physical servers and gain complete redundancy. These types of services are relatively easy to cluster because they're easy to switch over when a single server fails. When a single physical server fails or needs servicing, the other virtual server on the other physical server would automatically pick up the slack. By straddling multiple physical servers, these critical services never need to be down because of a single hardware failure.

For more complex services such as an Exchange Server, Microsoft SQL, MySQL, or Oracle, clustering technologies could be used to synchronize two logical servers hosted across two physical servers; this method would generally cause some downtime during the transition, which could take up to five minutes. This isn't due to virtualization but rather the complexity of clustering which tends to require time for transitioning. An alternate method for handling these complex services is to migrate the virtual server from the primary physical server to the secondary physical server. In order for this to work, something has to constantly synchronize memory from one physical server to the other so that a failover could be done in milliseconds while all services can remain functional.

Physical to virtual server migration
Any respectable virtualization solution will offer some kind of P2V (Physical to Virtual) migration tool. The P2V tool will take an existing physical server and make a virtual hard drive image of that server with the necessary modifications to the driver stack so that the server will boot up and run as a virtual server. The benefit of this is that you don't need to rebuild your servers and manually reconfigure them as a virtual server—you simply suck them in with the entire server configuration intact!

So if you have a data center full of aging servers running on sub-GHz processors, these are perfect candidates for P2V migration. You don't even need to worry about license acquisition costs because the licenses are already paid for. You could literally take a room with 128 sub-GHz legacy servers and put them onto eight 1U dual-socket quad-core servers with dual Gigabit Ethernet and two independent iSCSI storage arrays, all connected via a Gigabit Ethernet switch. The annual hardware maintenance costs on the old server hardware alone would be enough to pay for all of the new hardware! Just imagine how clean your server room would look after such a migration. It would all fit inside one rack and leave you plenty of room to grow.
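Here is a hedged sketch of that payback arithmetic in Python; every dollar figure is an illustrative assumption you would replace with your own maintenance contracts and hardware quotes:

```python
# Back-of-the-envelope consolidation payback: annual maintenance on a room of
# legacy servers vs. the one-time cost of new virtualization hosts.
# All figures below are assumptions, not vendor quotes.

legacy_servers = 128
annual_maintenance_per_legacy = 500    # assumed $/year per aging server
new_hosts = 8
cost_per_new_host = 6000               # assumed $ per 1U dual-socket server
shared_storage_and_switch = 16000      # assumed $ for iSCSI arrays + GigE switch

annual_savings = legacy_servers * annual_maintenance_per_legacy
new_hardware_cost = new_hosts * cost_per_new_host + shared_storage_and_switch

print(f"Annual legacy maintenance: ${annual_savings:,}")
print(f"New hardware outlay:       ${new_hardware_cost:,}")
print(f"Payback in ~{new_hardware_cost / annual_savings:.1f} years")
```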

As an added bonus of virtualization, you get a disaster recovery plan because the virtualized images can be used to instantly recover all your servers. Ask yourself what would happen now if your legacy server died. Do you even remember how to rebuild and reconfigure all of your servers from scratch? (I'm guessing you're cringing right about now.) With virtualization, you can recover that Active Directory and Exchange Server in less than an hour by rebuilding the virtual server from the P2V image.

Patch management for virtualized servers
Patch management of virtualized servers isn't all that different from regular servers, because each virtual operating system is its own independent virtual hard drive. You still need a patch management system that patches all of your servers, although there may be interesting developments in the future where you can patch multiple operating systems at the same time if they share common operating system or application binaries. Ideally, you would be able to assign a patch level to an individual server or to a group of similar servers. For now, you will need to patch virtual operating systems as you would any other system, but expect innovations in the virtualization sector that won't be possible with physical servers.

Licensing and support considerations
A big concern with virtualization is software licensing. The last thing anyone wants to do is pay for 16 copies of a license for 16 virtual sessions running on a single computer. Software licensing often dwarfs hardware costs, so it would be foolish to run a $20,000 software license on a shared piece of hardware. In this situation, it's best to run that license on the fastest physical server possible, without any virtualization layer adding overhead.

For something like Windows Server 2003 Standard Edition, you would need to pay for each virtual session running on a physical box. The exception to this rule is if you have the Enterprise Edition of Windows Server 2003, which allows you to run four virtual copies of Windows Server 2003 on a single machine with only one license. This Microsoft licensing policy applies to any type of virtualization technology that is hosting the Windows Server 2003 guest operating systems.
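A quick sketch of what that licensing rule means in practice, assuming eight Windows Server 2003 guests on one host; the guest count is illustrative, while the four-guests-per-licence ratio is the Enterprise Edition rule described above:

```python
# Compare guest-licence counts under the two Windows Server 2003 models
# described in the text: Standard (one licence per virtual session) vs.
# Enterprise (up to four guests per licence on one machine).

import math

def licences_needed(guests: int, guests_per_licence: int) -> int:
    """Round up: a partially used licence still has to be purchased."""
    return math.ceil(guests / guests_per_licence)

guests = 8  # assumed number of Windows guests on the host
standard = licences_needed(guests, 1)
enterprise = licences_needed(guests, 4)

print(f"{guests} guests: {standard} Standard licences or {enterprise} Enterprise licences")
```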

If you're running open source software, you don't have to worry about licensing because that's always free; what you do need to be concerned about is the support contracts. If you're considering virtualizing open source operating systems or open source software, make sure you calculate the support costs. If the support costs are substantial for each virtual instance of the software you're going to run, it's best to squeeze the most out of your software costs by putting it on its own dedicated server. It's important to remember that hardware costs are often dwarfed by software licensing and/or support costs. The trick is to find the right ratio of hardware to licensing/support costs. When calculating hardware costs, be sure to include the costs of hardware maintenance, power usage, cooling, and rack space.

There are also licensing and support considerations for the virtualization technology itself. The good news is that all the major virtualization players have some kind of free solution to get you started. Even a year ago, when VMware was pretty much the only player in town, free virtualization was not an option, but there are now free solutions from VMware, Microsoft, XenSource, and Virtual Iron. In the next virtualization article, we'll go more in-depth on the various virtualization players.

The pros and cons of server virtualization

ISPs use server virtualization to share one physical server among multiple customers in a way that gives the illusion that each customer has its own dedicated server. Typically, an ISP will use server virtualization for IIS (Internet Information Server) and/or Microsoft Exchange Server. I've also seen administrators use server virtualization on a file and print server, but this isn't nearly as common. Server virtualization on an IIS server allows that server to host multiple Web sites, while employing it on an Exchange Server allows the server to manage e-mail for several companies. Let's look at the advantages and disadvantages of a virtualized ISP environment.

The money issue
Without a doubt, the greatest advantage of server virtualization is cost. For example, suppose that an ISP purchased a high-end server for $30,000. In addition, it needs an operating system for the server. A copy of Windows Server 2003 Enterprise Edition goes for about $8,000. Add in other components and the ISP could easily drop over $40,000 on a single server. Can you imagine if the server could only host a single Web site? The cost to the subscriber would be astronomical. On top of having to recoup a $40,000-plus investment in hardware, the ISP must also pay for bandwidth, salaries, building rental, and other business expenses before it can start turning a profit.

Although there are large companies such as Microsoft and Amazon that require multiple, dedicated Web servers, most of the time Web sites are small enough that quite a few sites can be hosted on a single server. This allows the ISP’s clients to share the hosting expense, driving down the price considerably.
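To see why sharing matters, here is a small Python sketch that amortizes the $40,000 example above across different numbers of hosted sites; the 36-month recovery window is an assumption:

```python
# Amortize an assumed $40,000 server-and-OS investment across an increasing
# number of hosted sites. The recovery window and site counts are illustrative.

server_and_software_cost = 40_000   # from the example above
recovery_period_months = 36         # assumed amortization window

for sites in (1, 10, 100, 500):
    monthly_per_site = server_and_software_cost / recovery_period_months / sites
    print(f"{sites:>4} sites -> ${monthly_per_site:,.2f} per site per month, hardware and OS only")
```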

Developing
Server virtualization is also great for development environments. An example is my own personal network. I own three Web sites, and have done every bit of the coding for these sites myself. To assist in the development process, I'm using a virtualized IIS Server.

My development server is a single computer running Windows 2000 Advanced Server and IIS. The server has been assigned seven IP addresses, each corresponding to one of seven sites. The first three sites are the production versions of my Web sites. Although I don’t actually host the sites from this server, I like to maintain a known good copy of each of my sites locally. The next three sites on the server are also copies of my three Web sites, but these are used for development. Every time I make a change to one of my sites, I make the change in this location. This allows me to test my changes without tampering with a production version of the site. The last site that the server hosts is for a new Web site that I'm working on that won’t go into production until the end of the year.

Problems with server virtualization
You must also watch out for pitfalls in server virtualization, particularly scalability (and its close relative, availability) and security.

Scalability
Often, the terms scalability and availability are intertwined when people talk about networking, and both are relevant to server virtualization. Availability becomes an issue because if the virtualized server were to go offline, every site that the server hosts would go down with it. Most ISPs use a cluster or some other failover technique to prevent such outages.

Scalability is trickier. As I said, server virtualization provides a way for several small companies to share the costs associated with Web hosting. The problem is that while a company may start out small, it could grow quite large. A large company can easily dominate a virtualized server and begin robbing resources from the other sites.

For example, I own an e-commerce site that sells software. When I launched the site, it received very little traffic and wasn't consuming much disk space. But now the site gets thousands of visitors every day. On average, a couple of hundred people a day download trial software, and the smallest download on the site is 15 MB. If 200 people download a 15-MB file, that's almost 3 GB of transfers every day.

Additionally, the site is designed so that when someone purchases software, the site creates a directory with a unique name and places the software into that directory. The idea is that the users can’t use the download location to figure out the path for downloading software that they haven't paid for. These temporary directories are stored for seven days.
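A minimal Python sketch of that scheme, assuming a hypothetical download root and the seven-day retention window; the real site's implementation may well differ:

```python
# Sketch of the one-off download-directory scheme described above: create a
# directory with an unguessable name for each purchase, and purge directories
# older than seven days. The root path is an illustrative assumption.

import os
import shutil
import time
import uuid

DOWNLOAD_ROOT = r"C:\inetpub\downloads"   # assumed location of purchase directories
RETENTION_SECONDS = 7 * 24 * 3600         # keep each purchase for seven days

def create_purchase_dir() -> str:
    """Make a uniquely named directory for the buyer's download link."""
    path = os.path.join(DOWNLOAD_ROOT, uuid.uuid4().hex)
    os.makedirs(path)
    return path

def purge_expired() -> None:
    """Delete any purchase directory older than the retention window."""
    cutoff = time.time() - RETENTION_SECONDS
    for name in os.listdir(DOWNLOAD_ROOT):
        path = os.path.join(DOWNLOAD_ROOT, name)
        if os.path.isdir(path) and os.path.getmtime(path) < cutoff:
            shutil.rmtree(path)
```

The downside described next follows directly from this design: the faster sales grow, the more of these directories pile up between purges.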

The problem is that the more software I sell, the more temporary directories are created. Each of these directories contains anywhere from 15 MB to a couple of GB of data. I actually received a phone call from my ISP recently because I was consuming too much disk space and bandwidth. The ISP was using server virtualization and I was taking resources from other customers.

Obviously, Windows does provide mechanisms that you can use to minimize the effect of excessive use. For example, you could place disk quotas on each site, and you could use QoS to limit bandwidth consumption. However, these are issues that you need to consider before implementing your server, not after it begins to run low on resources.

Security
The virtualization process is designed to keep virtualized resources separate. I've seen a couple of cases, though, in which a virtualized server was accidentally visible to someone who wasn’t supposed to be able to see it. The unauthorized access problem happened a few months ago to one of my Web sites. My ISP uses the directory structure \CUSTOMERS\customer name\ to store each individual Web site. When you're in the Customers directory, you're supposed to see only Web sites that you own. However, one Sunday morning I was about to update one of my Web sites and I was able to see someone else’s site. Apparently, a permission entry had been set incorrectly. I made a quick phone call to my ISP and the permission was changed before any security breaches occurred.

Be careful with bleed over
Finally, bleed over is another issue to watch out for when subscribing to a virtualized server. Bleed over occurs when the contents of one virtual server affect other virtual servers. One of my Web sites has a chat room where I occasionally host live discussions with people in the IT industry. During the middle of a recent live chat, everyone involved in the chat received a pop-up window saying that the total bandwidth allocation had been exceeded. Everyone was booted out of the chat.

Needless to say, this was very embarrassing. I called my ISP and asked why this happened when I'd never experienced chat problems in the past. As it turns out, my ISP was not limiting bandwidth consumption. Instead, another site hosted on the same server had implemented a shareware bandwidth limitation program. Unfortunately, this utility limited bandwidth for the server as a whole, not just for the intended site. The ISP removed this component and the server returned to normal behaviour.

Tuesday, 16 June 2009

Virtualization is Changing the way IT Delivers Applications

Virtualization has rapidly become the hottest technology in IT, driven largely by trends such as server consolidation, green computing and the desire to cut desktop costs and manage IT complexity. While these issues are important, the rise of virtualization as a mainstream technology is having a far more profound impact on IT beyond just saving a few dollars in the data centre. The benefits and impact of virtualization on the business will be directly correlated to the strength of an organization’s application delivery infrastructure. Application delivery is the key to unlocking the power of virtualization, and organizations that embrace virtualization wrapped around application delivery will thrive and prosper, while those that do not will flounder. As virtualization takes centre stage, shifting roles in IT will require a new breed of professionals with broader skill sets to bridge IT silos and optimize business processes around the delivery of applications.

Going Mainstream
We are moving into a new era where virtualization will permeate every aspect of computing. Every processor, server, application and desktop will have virtualization capabilities built into its core. This will give IT a far more flexible infrastructure where the components of computing become dynamic building blocks that can be connected and reassembled on the fly in response to changing business needs. In fact, three years from now, we will no longer be talking about virtualization as the next frontier in enterprise technology. It will simply be assumed. For example, today we normally assume that our friends, family and neighbours have high-speed Internet access from their homes. This was not the case a few years ago, when many were using sluggish dial-up lines to access the Internet or had no access at all. High-speed Internet is now mainstream, and the same will soon be true of virtualization. Virtualization will be expected; it will be a given within the enterprise. As this occurs, the conversation within IT circles will shift from how to virtualize everything to what business problems can be solved now that everything is virtualized.

Virtualization and Application Delivery
The most profound impact of virtualization will be in the way organizations deliver applications and desktops to end users. In many ways, applications represent the closest intersection between IT and the business. Your organization's business is increasingly represented by the quality of its user-facing applications. Whether large ERP solutions, custom web applications, e-mail, e-commerce, client-server applications or SOA, your success in IT today depends on ensuring that these applications meet business goals. Unfortunately, trends such as mobility, globalization, offshoring and e-commerce are moving users further away from headquarters, while issues like data centre consolidation, security and regulatory compliance are making applications less accessible to users.
These opposing forces are pushing the topic of application delivery into the limelight. They are forcing IT executives to consider how their infrastructures get mission-critical, data centre-based applications out to users in a way that lowers costs, reduces risk and improves IT agility. Virtualization is now the key to application delivery. Today's leading companies are employing virtualization technology to connect users and applications and propel their businesses forward.

Virtualization in the Enterprise
The seeds of virtualization were first planted over a decade ago, as enterprises began applying mainframe virtualization techniques to deliver Windows applications more efficiently with products such as Citrix® Presentation Server™. These solutions enabled IT to consolidate corporate applications and data centrally while allowing users the freedom to operate from any location and on any network or device, with only screen displays, keyboard entry and mouse movement traversing the network. Today, products like Citrix® XenApp™ (the successor to Presentation Server) allow companies to create single master stores of all Windows application clients in the data centre and virtualize them either on the server or at the end user's device. Application streaming technology within Citrix XenApp allows Windows-based applications to be cached locally in an isolation environment rather than installed on the device. This approach improves security and saves companies millions of dollars compared to traditional application installation and management methods.
Virtualization is also impacting the back-end data and logic tier of applications with data centre products such as Citrix® XenServer™ and VMware ESX that virtualize application workloads on data centre servers. While these products are largely being deployed to reduce the number of physical servers in the data centre, the more strategic impact will be found in their ability to dynamically provision and shift application workloads on the fly to meet end user requirements. The third major area of impact will be the corporate desktop, enabled by products such as Citrix® XenDesktop™. The benefits of such solutions include cost savings, but they also let organizations simplify how desktops are delivered to end users in a way that dramatically improves security and the end user experience compared to traditional PC desktops. From virtualized servers in the data centre to virtualized end-user desktops, the biggest impact of virtualization in the enterprise will be found within an organization's application delivery infrastructure.

Seeing the Big Picture
The mass adoption of virtualization technology will certainly require new skills, roles and areas of expertise within organizations and IT departments. Yet the real impact of virtualization will not hinge on the acquisition of new technical skills. Rather, to make the most of the virtualization opportunity, organizations will have to focus on breaking down traditional IT silos and adopt end-to-end virtualization strategies. Most IT departments today are organized primarily around technology silos. In many organizations, we find highly technical employees who operate on separate IT "islands," such as servers, networks, security and desktops. Each group focuses on the health and well-being of its island, making sure that it runs with efficiency and precision. Unfortunately, this stand-alone approach is crippling IT responsiveness, causing pundits like bestselling author Nicholas Carr to ask whether IT even matters to business anymore. To break this destructive cycle, IT employees must take responsibility for understanding and owning business processes that run horizontally, from the point of origin in the data centre all the way to the end users they serve, building bridges from island to island. IT roles will increasingly require a wider, more comprehensive portfolio of expertise spanning servers, networking, security and systems management. IT personnel will need a broad understanding of all these technologies and how they work together as the focus on IT specialization gives way to a more holistic IT mindset.

Seeking Experts in Delivery
The new IT roles will require an expertise in delivery. IT will need to know how to use a company’s delivery infrastructure to quickly respond to new requirements coming from business owners and end users alike. IT specialization will not completely disappear, but it will not look anything like the silo entrenchment and technical specialization we see today. From this point forward, IT professionals will increasingly be organized around business process optimization to serve end users and line of business owners, rather than around independent technologies sitting in relative isolation. Across the board, the primary organizing principle in IT will shift from grouping people around technology silos to organizing them around common delivery processes. The companies that make this transition successfully will thrive, while those that do not will struggle to compete in an increasingly demanding and dynamic business world. IT organizations of the future will need to develop professionals who can see the parts as a whole and continually assess the overall health of the delivery system, responding quickly to changing business requirements. Employee work groups will continue to form around common processes, but the focus will be less about highly specialized knowledge and more about the efficiency of frequently repeated processes. IT professionals who understand the deep technical intricacies of IP network design, for example, will be in less demand than those who understand best practices in application delivery.

Guidelines for Staying in and Ahead of the Game
If you are not testing the waters of virtualization, you may already be behind. Experiment with virtualization now. Acquire applications and consider how to deliver them as part of your IT strategy. Three key recommendations are:
  • Change the mindset of your IT organization to focus on delivery of applications rather than installing or deploying them. Think about “delivery centres” rather than data centres. Most IT organizations today continue to deploy and install applications, although industry analysts advise that traditional application deployment is too complex, too static and costs too much to maintain, let alone to try to keep up with changes in the business. Delivering on the vision of an IT organization that is aligned with business goals requires an end-to-end strategy of efficiently delivering business applications to users.
  • Place a premium on knowledge of applications and business processes when hiring and training IT employees. IT will always be about technology, but do not perpetuate today’s “island” problem by continuing to hire and train around deep technical expertise in a given silo. If that happens, IT will continue to foster biased mindsets that perceive the world through a technologically biased silo lens, the opposite of what is needed today. IT leaders will increasingly need to be people who understand business processes. Like today’s automotive technicians, they will have to be able to view and optimize the overall health of the system, not the underlying gears and valves - or bits and bytes.
  • Select strategic infrastructure vendors who specialize in application delivery. Industry experts agree that the time is right to make the move from static application deployment to dynamic application delivery. IT will continue to use vendors that specialize in technical solutions that fit into various areas, such as networking, security, management and even virtualization. What is important, however, is forming a strategic relationship with a vendor that focuses not on technology silos, but on application delivery solutions. The vendor should be able to supply integrated solutions to incorporate virtualization, optimization and delivery systems that inherently work with one another, as well as the rest of your IT environment.