- Chat clients such as IRC, AIM, and Yahoo IM are strictly forbidden, as they can transfer files.
- Accessing external mail servers is forbidden (antivirus policy); only use the internal server to send or receive mail.
- Network games, such as Doom or Quake, are forbidden, except between 8 a.m. and 6 p.m. on weekdays for members of management.
- Websites such as playboy.com are forbidden for legal reasons.
Wednesday, 23 September 2009
Firewall, Why?
Tuesday, 22 September 2009
Offline P2V Migrations using SCVMM 2008
Introduction
SCVMM 2008 supports online migrations for Windows operating systems that have Volume Shadow Copy Service (VSS) support and requires offline migrations for Windows operating systems that do not. The following article provides the prerequisites, procedures and considerations for offline P2V migrations. In addition, I will provide recommendations for when you should consider using an offline P2V migration even if the Windows operating system supports online migration.
P2V migrations are all performed using a wizard in SCVMM 2008; the same wizard is used for online and offline migrations. The only components required are the Virtual Machine Manager server and a library server, both of which are installed on the SCVMM server by default.
The offline P2V migration wizard gathers the required information, makes decisions based on that information, and finally creates the job that performs the migration. An offline P2V migration involves the following wizard steps:
- Launching the Wizard
- Specifying the source physical server
- Naming the virtual machine
- Gathering system information from the source physical server
- Modifying the volume configuration
- Modifying the IP address used for migration
- Modifying the processor and memory configuration of the migrated virtual machine
- Selecting the Hyper-V host for placement
- Selecting the path to place the virtual machine files on the target Hyper-V host
- Selecting the virtual network mapping for each network adapter
- Selecting additional properties like startup and shutdown actions
- Resolving any potential conversion issues
- Launching the migration process
Offline P2V Prerequisites
Before you attempt a P2V migration, there are a host of prerequisites that need to be validated. During the information-gathering phase of a P2V migration, SCVMM deploys a P2V agent (VMMP2VAGENT.EXE) to the source server to gather information. The source server must have at least Windows Installer 3.1 installed for the agent to install successfully. If it does not, download Windows Installer 3.1 from Microsoft Downloads (search for KB893803) and install it before starting the migration.
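If you want to confirm that prerequisite yourself before launching the wizard, a quick check is to inspect the version of msi.dll on the source server. The Python sketch below does this with the pywin32 package; it is only an illustration to be run locally on the source machine, not part of SCVMM's own checks.

```python
"""Hypothetical local check for the Windows Installer 3.1 prerequisite.

Assumes the pywin32 package is installed and the script runs on the source server.
"""
import os
import win32api

def windows_installer_version():
    """Return the Windows Installer version as a (major, minor, build, rev) tuple."""
    msi_dll = os.path.join(os.environ["SystemRoot"], "system32", "msi.dll")
    info = win32api.GetFileVersionInfo(msi_dll, "\\")
    ms, ls = info["FileVersionMS"], info["FileVersionLS"]
    return (ms >> 16, ms & 0xFFFF, ls >> 16, ls & 0xFFFF)

if __name__ == "__main__":
    version = windows_installer_version()
    print("Windows Installer version: %d.%d.%d.%d" % version)
    if version[:2] < (3, 1):
        print("Install KB893803 (Windows Installer 3.1) before running the P2V wizard.")
```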
During an offline P2V migration the source server is powered down and booted from a WinPE image so that the source server's disks can be accessed without any locks on the files. The WinPE image needs access to the disk drives and the network interfaces in order to read the data from the source server's disks and transfer it across the network to the host where the target virtual machine resides. During the information-gathering phase, the P2V agent identifies any additional drivers or updates that are required and presents those requirements to you in the wizard. You will need to copy these files into the SCVMM folder structure using the following steps:
- Download the required update packages or drivers, as indicated by the wizard, from the Microsoft website. The update packages need to be renamed to an 8.3 naming format and may need to be extracted. It may also be necessary to extract the drivers to obtain the raw driver files.
- Copy the renamed update packages into the folder C:\Program Files\Microsoft System Center Virtual Machine Manager 2008\P2V Patch Import.
- Copy the raw driver files into the folder C:\Program Files\Microsoft System Center Virtual Machine Manager 2008\Driver Import.
Make sure to use drivers designed for Windows Vista or later, because the version of WinPE used by SCVMM 2008 is based on Windows Vista. A small script illustrating the staging steps above follows.
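If you have a batch of updates and drivers to stage, a helper like the following Python sketch can do the copying. The two folder paths come from the steps above; the crude 8.3-style rename and the local staging folders are assumptions for illustration, so verify the exact file names the wizard expects.

```python
"""Minimal sketch of staging P2V patches and drivers on the SCVMM server.

The import folder paths come from the article; the simple 8.3-style rename and
the C:\P2V staging folders are simplifying assumptions.
"""
import os
import shutil

SCVMM_ROOT = r"C:\Program Files\Microsoft System Center Virtual Machine Manager 2008"
PATCH_IMPORT = os.path.join(SCVMM_ROOT, "P2V Patch Import")
DRIVER_IMPORT = os.path.join(SCVMM_ROOT, "Driver Import")

def short_name(filename):
    """Crude 8.3-style rename: 8-character stem plus the original extension."""
    stem, ext = os.path.splitext(filename)
    return (stem[:8] + ext[:4]).upper()

def stage_patches(patch_dir):
    for name in os.listdir(patch_dir):
        shutil.copy(os.path.join(patch_dir, name),
                    os.path.join(PATCH_IMPORT, short_name(name)))

def stage_drivers(driver_dir):
    # Raw driver files (.inf, .sys, .cat, ...) keep their original names
    for name in os.listdir(driver_dir):
        shutil.copy(os.path.join(driver_dir, name),
                    os.path.join(DRIVER_IMPORT, name))

if __name__ == "__main__":
    stage_patches(r"C:\P2V\patches")    # hypothetical staging folders
    stage_drivers(r"C:\P2V\drivers")
```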
Offline P2V Considerations
For some servers you should use an offline P2V migration to make sure that transactional data is properly flushed to disk and to avoid corruption. Active Directory domain controllers are a prime example. An online migration of a domain controller could potentially cause a USN rollback situation, as described in Appendix A of the white paper "Running Domain Controllers in Hyper-V".
You should perform the P2V conversion of a domain controller in offline mode so that the directory data is consistent when the domain controller is turned back on. At no time during the P2V migration should the physical DC and the new virtual DC both be running and connected to the production corporate network. Once the P2V migration is complete, power on the virtual machine while it is connected to an isolated network and verify that the migration was successful. Once the migration is verified, the physical DC should never be turned back on.
Offline P2V Migration Performance
The speed of the migration process is affected by the network, processor, and disk I/O capabilities of the source server and the target host. To minimize the duration of a P2V migration, consider doing the following:
- Dedicate a Hyper-V host for P2V migrations
- Place the Hyper-V host as close to the source server as possible
- Do not perform migrations over slow WAN links
- Place the source server and the target Hyper-V host on gigabit Ethernet networks
Step-By-Step Offline Migration of a Windows 2000 Server
Windows 2000 Server is the only supported Windows operating system that requires an offline P2V migration, because it does not have VSS support. To demonstrate an offline P2V migration, we will migrate a Windows 2000 Server called PHYSICAL1 to a virtual machine called VIRTUAL1. These procedures assume that you already have SCVMM 2008 installed with managed Hyper-V hosts that have the capacity to host the migrated PHYSICAL1.
Use the following procedures to perform a physical to virtual migration:
- In the SCVMM console Actions menu, click Convert Physical Server.
- On the Select Source page, enter PHYSICAL1 as the name of the physical computer that you would like to convert, enter the credentials that have local administrative rights on the server, and then click Next.
Figure 1
- On the Virtual Machine Identity page, enter VIRTUAL1 for the virtual machine name, modify the virtual machine owner, enter a description if desired, and then click Next.
- On the System Information page, click the Scan System button to scan PHYSICAL1. SCVMM will use the credentials you provided in step 2 and remotely connect to the server, transfer the P2V agent, install the agent, and then scan the server. When the scan is complete, the system information box at the bottom will display the operating system, the processor count, the hard drive information, and the network adapters. When you are done reviewing the system information, click Next.
Figure 2
- On the Volume Configuration page, you can modify the original source server volume configuration and define the disk controller to which each volume of the new virtual machine should be connected. Make any modifications to the hard drive type, size, and controller, and then click Next.
Figure 3
- On the Offline Conversion Options page, select the method that the WinPE image should use to obtain an IP address (DHCP or static IPv4 or IPv6 address). If you select a static IP address you will need to provide the IP address information. Once the required information is provided, click Next.
Figure 4
- On the Virtual Machine Configuration page, you can modify the number of processors and the amount of memory to assign to the new virtual machine. For a Windows 2000 server, only a single-processor configuration is supported. Make any required modifications to the memory, and then click Next.
Figure 5
- On the Select Host page, the available hosts are rated based on performance and available capacity for this virtual machine. The hosts are rated using a star ranking, with the recommended host at the top of the list. Select the Hyper-V host on which you want to place the new virtual machine, and then click Next.
Figure 6
- On the Select Path page, modify the path to store the new virtual machine in the desired location on the Hyper-V server and then click Next.
- On the Select Networks page, select the virtual network binding that you want for each network adapter of the virtual machine. In the case of a DC migration, leave the adapter in the Not Connected state to prevent unwanted network communication until you have verified that everything is working correctly. Click Next.
Figure 7
- On the Additional Properties page, select the Automatic Stop and Start actions you prefer and then click Next.
Figure 8
- On the Conversion Information page, review any open issues for PHYSICAL1; once you have reviewed and resolved them, click Next.
- On the Summary page, review the conversion settings and click Create to start the physical to virtual migration process.
Figure 9
- During the conversion process, the Jobs window is displayed. You can use it to track the progress of the conversion.
Conclusion
System Center Virtual Machine Manager 2008 is Microsoft’s solution for performing online and offline physical to virtual migrations and placing the results on Virtual Server 2005 R2 and Hyper-V. SCVMM’s support for offline migration is primarily focused on Windows 2000 Server, but offline migration should also be used in situations where the online VSS approach could result in data corruption. This article provided information about SCVMM’s offline migration options and the prerequisites for a successful migration. The wizard-based, step-by-step approach makes a P2V migration a simple and quick process.
Tuesday, 1 September 2009
Help Your Help Desk
Your IT staff has been cut in half. You're in the process of upgrading your server configuration to leverage virtualization and data encryption. You're comparing proposals from different email management service vendors. And, in the middle of all this action, you get a phone call from someone in payroll who is in a panic because he's unable to enter his password and can't access his accounting software. You spend five minutes calming the guy down and another five minutes diagnosing his machine remotely.
Granted, the aforementioned scenario is extreme, but your help desk should be able to help you as much as it purports to help your employees. Ideally, your payroll employee could have found information about his password problem in an easy-to-reference knowledge base and accessed a process that would have walked him through the steps to automatically reset his password.
So, what can you do to improve your help desk efficiencies, particularly in this economic climate? Here are a few tips to get you started.
Check Your Current Help Desk Solution
Time- and cost-cutting tools may already be part of your current help desk solution. For example, you might have a service desk tool that offers an automatic password reset option integrated with Active Directory, or a self-service portal where users can search a knowledge base to see whether anything already published resembles their problem.
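As a rough illustration of the automation that sits behind such a password reset option, the following Python sketch performs an administrative reset against Active Directory using the ldap3 library over LDAPS. The server name, service account, and user DN are placeholders, and a real portal would verify the user's identity before ever reaching this step.

```python
"""Hypothetical sketch of a self-service AD password reset behind a help desk portal.

The domain controller name, service account, and user DN below are placeholders.
"""
from ldap3 import Server, Connection, NTLM

def reset_password(user_dn, new_password):
    server = Server("ldaps://dc01.example.local", use_ssl=True)
    conn = Connection(server,
                      user="EXAMPLE\\svc_helpdesk",   # account delegated reset rights
                      password="service-account-secret",
                      authentication=NTLM,
                      auto_bind=True)
    # Administrative reset of the user's password, then unlock the account
    ok = conn.extend.microsoft.modify_password(user_dn, new_password=new_password)
    conn.extend.microsoft.unlock_account(user_dn)
    conn.unbind()
    return ok

if __name__ == "__main__":
    reset_password("CN=Payroll User,OU=Payroll,DC=example,DC=local", "N3w-P@ssw0rd!")
```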
Companies have often eschewed such applications because they think users won't know how to use them, and IT doesn't have time to train them. "But when you're doing double the work, you really want to spend that little bit of time to put something in place there, so that next time a customer calls, you walk them through that self-service, walk them through that password reset so that they can do it again the next time."
Set Up Knowledge Base Processes
If you haven't already, implement a user knowledge base: support calls drop by around 15% when users can solve issues themselves.
Putting together a knowledge base doesn't necessarily mean you have to hire a full-time knowledge architect to build it. When you solve a problem and realize, "Hey, wait a second, we've run into that before," it should be easy to publish that solution and make it available to users.
Have An Escalation Process In Place
One help desk solutions provider says that a help desk solution that integrates with your ticketing system means your help desk can solve a high percentage of user questions, and when a level 2 answer is needed, the IT staff can get right on it.
"It's those 10 to 15 minute questions that knock an in-house staff off track and dilute their effectiveness. A ticketing system really gives power to your help desk."
Log Every Help Desk Call
Make sure you log each of your help desk calls in both a descriptive fashion, to capture the nature of the problem, and a coded one, so that you know which product or process is causing the difficulty. This gives you a way to prioritize the critical issues and determine the root cause behind each call.
Too often, in-house IT staffs run from fire to fire because no one has put together a big-picture overview about why problems are occurring. For example, if people are having trouble with PowerPoint, maybe it's worth holding a series of classes for the departments that are trying to use it but aren't well-trained. That might be a much lower-cost solution instead of reactively handling 100 questions a month.
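To make the coded half of that scheme concrete, here is a toy Python sketch that tallies calls by their product or process code to surface recurring problems, such as the PowerPoint example above. The ticket fields and codes are invented for illustration.

```python
"""Toy illustration of the descriptive-plus-coded call logging suggested above.

The ticket fields and codes are invented; the point is that a consistent code per
product or process makes root-cause reporting a one-liner.
"""
from collections import Counter
from datetime import datetime

tickets = [
    {"when": datetime(2009, 9, 1, 9, 5),   "code": "PWD-RESET", "desc": "Payroll user locked out"},
    {"when": datetime(2009, 9, 1, 10, 12), "code": "PPT-USAGE", "desc": "Cannot embed video in slides"},
    {"when": datetime(2009, 9, 2, 14, 40), "code": "PPT-USAGE", "desc": "Template keeps resetting fonts"},
]

def top_issues(tickets, n=5):
    """Rank the coded categories by call volume to find training or fix candidates."""
    return Counter(t["code"] for t in tickets).most_common(n)

if __name__ == "__main__":
    for code, count in top_issues(tickets):
        print(f"{code}: {count} calls")
```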
Wednesday, 22 July 2009
Wireless Firewall Gateway White Paper
Introduction
With the deployment of wireless network access in the workplace, the requirement for an enhanced security design emerges. Wireless technology offers a more accessible means of connectivity but does not address the security concerns involved in offering this less restrained service. In order to facilitate management of this network, maintain a secure network model, and keep a high level of usability, a multi-functional device to perform these tasks must be placed in the wireless environment.
Design Objectives
The WFG (Wireless Firewall Gateway) is designed to take on several different roles so that the process is nearly transparent to the user. Since the wireless network is considered an untrusted environment, access is restricted in order to limit the amount of damage that can be inflicted on internal systems and the Internet if an intruder launches an attack. This impedes the convenience of the wireless service for users who wish to access external sites on the Internet. Since unknown users are difficult to identify and hold accountable for damages, a method of user authentication is needed to ensure that users take responsibility for their actions and can be tracked for security purposes. A trusted user can then gain access to services and the commodity Internet from which unauthenticated users are blocked.
Keeping simplicity in mind, the WFG acts as a router between a wireless and external network with the ability to dynamically change firewall filters as users authenticate themselves for authorized access. It is also a server responsible for handing out IP addresses to users, running a website in which users can authenticate, and maintaining a recorded account of who is on the network and when.
Users of the wireless network need only a web browser, if they wish to authenticate, and Dynamic Host Configuration Protocol (DHCP) client software, which comes standard with most operating systems. Minimal configuration is required of the user, allowing support for a variety of computer platforms with no additional software. The idea is to keep the wireless network as user-friendly as possible while maintaining security for everyone.
Internals
Given the multiple functionalities and enhanced security required for this device, a PC running OpenBSD Unix was chosen with three interfaces on different networks: wireless, external (gateway), and internal (management). The following sections elaborate upon the services that constitute the device's various roles:
- Dynamic Host Configuration Protocol (DHCP) Server is used to lease out individual IP addresses to anyone who configures their system to request one. Other vital information, such as the subnet mask, default gateway, and name server, is also given to the client at this time. The WFG uses a beta DHCPv3 open-source server from the Internet Software Consortium, with the additional ability to dynamically remove hosts from the firewall access list when DHCP releases a lease for any reason (client request, time-out, lease expiration, and so on). Configuration files for the server are located in /etc and follow the ISC standard (RFC) format. However, the server executable is customized and does not follow these standards, so if the server needs to be upgraded, the source code will need to be re-customized as well.
The DHCP server is configured to listen only on the wireless network interface. This prevents anyone on the wired network from obtaining a wireless IP address from this server. As an added security measure, packet filters block any DHCP requests arriving on the other interfaces.
- Filtering - Stateful filtering is accomplished using OpenBSD's IPF software. IP routing is enabled in the kernel, allowing packet filtering to occur between the wireless and external network interfaces. Static filters are configured at boot from the /etc/ipf.rules file and are designed to minimize remote access to the WFG. Only essential protocols such as NTP, DNS, DHCP, and ICMP are allowed to reach the system. This builds a secure foundation for the restricted environment. For users who do not require an authenticated session, access is granted to selected servers for email, VPN, and web. Where applicable, packet filtering is done at the transport layer (UDP or TCP) to allow stateful inspection of the traffic. This adds a higher level of security by not having to explicitly permit dynamic or private port sessions into the wireless network.
The same script that authenticates a user over the web also enables their access to the unrestricted environment. When a user connects to the web server, their IP address is recorded and, upon successful login, is pushed to the top of the firewall filter list, permitting all TCP and UDP connections out of the wireless network for that IP address.
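As a rough illustration of that step, the Python sketch below builds pass rules for an authenticated client address and loads them with OpenBSD's ipf utility. The interface name is a placeholder, the real WFG does this from its Perl/CGI login script, and exactly where the rules land relative to the static ruleset is glossed over here.

```python
#!/usr/bin/env python
"""Rough sketch of granting an authenticated wireless client full outbound access.

The interface name is an assumption, and rule ordering within the static
ruleset is not handled; treat this purely as an illustration.
"""
import subprocess

WIRELESS_IF = "wi0"   # assumed wireless-facing interface

def permit_client(ip):
    """Build pass rules for the client's IP and load them into the active rule set."""
    rules = (
        f"pass in quick on {WIRELESS_IF} proto tcp from {ip} to any keep state\n"
        f"pass in quick on {WIRELESS_IF} proto udp from {ip} to any keep state\n"
    )
    # ipf -f - reads filter rules from standard input and adds them to the active list
    subprocess.run(["ipf", "-f", "-"], input=rules, text=True, check=True)

if __name__ == "__main__":
    permit_client("192.168.10.42")   # example client address
```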
In order to prevent subsequent users from being granted trusted access when an IP address is recycled, the in-memory database software removes the firewall permit rule whenever the user's next lease binding state is set to free, expired, abandoned, released, or reset. The DHCP server will not issue the same IP address until it frees the lease of the last client. This helps avoid the security issue of someone hijacking an IP address that has been authenticated and using it after the valid user is no longer using the wireless service.
- Web Authentication - Web-based authentication is used so that any user running any platform can gain access to the wireless network. The open-source Apache web server is configured to handle this task securely. The server implements Secure Socket Layer (SSL) for client/server public- and private-key RSA encryption. Connecting to the web server via HTTP automatically redirects the client browser to HTTPS. This ensures that the username and password entered by a user are not sent in clear text. To further increase security, the SSL certificate is signed by Verisign, a trusted Certificate Authority (CA), which assures users that an attacker is not impersonating the web server to retrieve their password information.
A website is set up where a user can enter their username and password. The site displays the standard government system access warning and shows the IP address of the user's system (using PHP). Once the user has entered their username and password, a Perl/CGI script communicates with a Radius server, using RSA's MD5 digest algorithm, to determine whether the information submitted is correct. If the account information matches what is in the Radius database, commands to allow the user's IP address, obtained through the Apache environment variables, are added to the IPF access rules. If the user is not found in the Radius database, or if the password entered is incorrect, a web page stating "Invalid Username and Password" is displayed. If everything is successful, the user is notified of their privileged access.
- Security - Every step is taken to ensure that a desirable security level is maintained both on the WFG system and on the wireless network without hindering functionality and usability. Only hosts connecting from the wireless network can access the web server. For system management purposes, Secure Shell (OpenSSH) connections are permitted from a single, secured host. All other methods of direct connection are either blocked by the firewall filters or denied access through the use of application-based TCP wrappers.
Users' authentication information is encrypted throughout the process: SSL encryption with a certificate signed by a trusted CA between the client's web browser and the server, and MD5 digest encryption between the web server and the Radius system for account verification.
Logs are kept for all systems that gain access to either the restricted or the authorized network. The DHCP server keeps a record of which MAC address (NIC address) requests an IP address and when the lease is released, then passes that information to syslog. Syslog identifies all logging information from DHCP and writes it to /var/log/dhcpd. Additionally, any user who attempts to authenticate via the web interface has their typed username and source IP address logged with the current time, along with whether or not they were successful. When a lease on an IP address expires and is removed from the firewall filters, it is noted with the authentication information in /var/log/wireless. These logs are maintained by the website script and the DHCP server software, not syslog. Combined, they make it possible to identify who is on the network at a given time - either by userid or by burned-in physical address - for auditing purposes.
With the DHCP server managing the firewall filters, it is possible for a user to manually enter a static IP address and authenticate, with the permit rule never being removed. To prevent this, the CGI script reads in the dhcpd.leases file and determines if the source IP address, obtained through the environment variable $ENV{'REMOTE_ADDR'}, has an active lease. If no lease is found, or if the lease is expired or abandoned, authentication is denied.
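To make that check concrete, here is a minimal Python rendition of the lease lookup. The real WFG performs it inside the Perl/CGI login script; the lease-file path and the slice of the dhcpd.leases grammar handled below are simplifying assumptions.

```python
#!/usr/bin/env python
"""Minimal sketch of the lease check described above.

The lease-file location and the subset of dhcpd.leases syntax parsed here are
simplifying assumptions; the actual check runs in the WFG's Perl/CGI script.
"""
import os
import re

LEASES_FILE = "/var/db/dhcpd.leases"   # assumed location

def has_active_lease(ip, leases_path=LEASES_FILE):
    """Return True if the newest lease entry for `ip` is in the active binding state."""
    with open(leases_path) as f:
        text = f.read()
    state = None
    # dhcpd.leases is append-only, so the last entry for an IP reflects its current state
    for block in re.findall(r"lease %s \{(.*?)\}" % re.escape(ip), text, re.S):
        m = re.search(r"binding state (\w+);", block)
        if m:
            state = m.group(1)
    return state == "active"

if __name__ == "__main__":
    client_ip = os.environ.get("REMOTE_ADDR", "")
    if client_ip and has_active_lease(client_ip):
        print("lease OK - proceed with authentication")
    else:
        print("no active lease - deny authentication")
```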
Tuesday, 21 July 2009
Kaspersky Anti-Virus 2010: Advanced Web and PC Protection by Kaspersky
Kaspersky is known as one of the best anti-virus products. The new 2010 version provides new and improved features. Read this article to find out what KAV 2010 has to offer.
Introduction
Kaspersky has released the new version of its top-rated anti-virus, Kaspersky 2010. The new version boasts new features. This article will discuss what to expect from Kaspersky Anti-Virus 2010 (KAV).
Installation and System Requirements
The new version of Kaspersky Anti-Virus requires Windows XP, Vista, or Windows 7 and works on both 32-bit and 64-bit systems. The vendor notes that the program will not work in Safe Mode on 64-bit XP and has limitations when run in Safe Mode on 64-bit Vista.
During installation of KAV 2010, the installer displays the standard End-User License Agreement. After you agree to the EULA, another agreement is displayed: the Kaspersky Security Network Data Collection Statement. If you agree to it, you will participate in the Kaspersky Security Network by allowing the program to collect selected security and application data. This is similar to SpyNet in Windows Defender or the threat centres run by many anti-malware vendors, which help them provide protection signatures for risks in the wild. Agreeing is not required: the installation will proceed even if you uncheck the box accepting the terms of participation in the security network.
You can customize the components that the KAV installer will install: Virtual Keyboard, Proactive Defense, Anti-Virus for Files, IM, Web, and E-mail, the Program Kernel, and scan tasks. A system restart is not required to start using KAV 2010.
Features and Options
The features and options in KAV 2010 are quite extensive, but don’t let that stop you from trying the program. Most of the options are very useful and provide the protection most computers need:
- File Anti-Virus: Protection by KAV against known malware
- Mail Anti-Virus: E-mail protection
- Web Anti-Virus: Network protection to scan web traffic
- IM Anti-Virus: Scans instant messaging for malicious objects
- Proactive Defense: Heuristic protection
- Anti-Phishing: Fraud Protection
Extra tools and features in KAV 2010 are displayed in its Security+ window, which lets you use the following tools:
- Virtual Keyboard
- Rescue Disk
- Browser Tune-up
- Privacy Cleaner
- Windows Settings Troubleshooting Utility: checks the security settings in Windows, e.g. whether Autorun is enabled and whether Windows Update is disabled
- Special Game mode
- Browser Helper Object to identify unsafe websites - I noticed that this feature is not working. I visited a few unsafe websites and even searched the internet for known malware links, but I could not see any of the colour coding it is supposed to display as a warning.
Performance, Tasks and Update
KAV 2010 lets you roll back to a previous signature database if the new database is corrupted or producing false positives. This is quite useful, since false positives and corrupted downloads can happen. It is always recommended to configure the anti-malware program to keep a copy in quarantine of any threat it detects.
The scan tasks in KAV 2010 are similar to what we expect from an advanced anti-virus, except that KAV 2010 also offers a Vulnerability Scan. KAV runs with acceptable memory usage during normal PC use and during scanning.
Protection and Detection
I used 201 confirmed malware samples to test KAV 2010’s resident protection. The malware samples were located in a Virtual PC virtual machine. To run the test, I transferred the directory containing the 201 malware files from the Virtual PC guest to the host system where Kaspersky is installed. The resident protection was able to detect only 159 of the samples and left 42 undetected. Running an on-demand scan also failed to detect the remaining 42. I changed the settings to the highest protection level, but the result was the same.
An SME’s Guide to Virtualisation
Virtualisation is now seen as essential in enabling organisations to manage their vital IT resources more flexibly and efficiently. Yet how challenging is it to successfully deploy virtualisation, especially at an SME? This guide, produced by Computer Weekly in association with IBM and Intel, covers the salient issues for an SME seeking to implement a virtualisation strategy.
Overview
Virtualisation is a growing trend in computing as organisations address the challenge of harnessing more processing power for more users, while reining in costs during the recession.
Surveys of SMEs conducted by IDC have revealed these businesses view virtualisation as presenting immediate cost advantages and opportunities to build and grow highly flexible IT environments.
IDC analyst Chris Ingle stresses that virtualisation is nothing new in the IT world, but the increased number of solutions now available for common, x86 servers means SMEs can do a lot of the things that previously only mainframe and Unix users could do.
"It democratises virtualisation and brings it within SME budgets and lets them do things that previously only larger companies could do," he says.
Choose the Right System
This presents valuable opportunities for SMEs to improve how they use resources and develop strategies for business continuity and disaster recovery, among other benefits.
But organisations need to consider carefully what they hope to achieve with virtualisation and choose the solution that best suits their needs. There are a number of different techniques for virtualising a server or building a virtual machine (VM).
Hypervisor Virtualisation
The most common is hypervisor virtualisation, where the VM emulates the actual hardware of an x86 server. This requires real resources from the host (the machine running the VMs).
A thin layer of software inserted directly on the computer hardware, or on a host operating system, allocates hardware resources dynamically and transparently, using a hypervisor or virtual machine monitor (VMM).
Each virtual machine contains a complete system (BIOS, CPU, RAM, hard disks, network cards), eliminating potential conflicts.
Common VM products include Microsoft’s Virtual Server and Virtual PC, along with EMC VMware’s range of products, such as VMware ACE, VMware Workstation and its two server products, ESX and GSX Server.
Risks & Benefits
For medium-sized organisations, virtualisation can lead to significant savings on equipment as well as more centralised management of what they have. It also allows them to harness and distribute greatly increased processing power very quickly.
The process of creating VMs is expected to get even easier for organisations, with Intel integrating improved virtualisation technology into its business-class processors. But this can be a double-edged sword. For instance, analysts warn that, because virtual environments are so cheap and easy to build, many organisations risk losing track of them.
New practices have to be put in place, responding to the increasing overlap in the internal areas of responsibility of the IT staff, as storage, server, and network administrators will need to co-operate more closely to tackle interconnected issues.
Virtualising at Operating System Level
One of the more commonly cited pitfalls of virtualisation is that companies can risk breaching software-licensing agreements as a virtual environment expands.
Without a method to control the mass duplication and deployment of virtual machines, administrators will have a licence-compliance nightmare on their hands. Virtualising at the operating system (OS) level avoids this problem. Most applications running on a server can easily share a machine with others, if they can be isolated and secured. In most situations, different operating systems are not required on the same server, merely multiple instances of a single OS.
OS-level virtualisation systems provide the required isolation and security to run multiple applications or copies of the same OS on the same server. Products available include OpenVZ, Linux-VServer, Solaris Zones and FreeBSD Jails. SWsoft, whose technology was at first Linux-only, recently launched its virtualisation technology for Windows. Called Virtuozzo, it virtualises the OS so multiple virtual private servers can run on a single physical server. Virtuozzo works by building on top of the operating system, supporting all hardware underneath. The VM does not need pre-allocated memory, as it is a process within the host OS rather than being encapsulated within a virtualisation wrapper.
The upside of OS-based virtualisation is that only one OS licence is required to support multiple virtual private servers. The downside is less choice, because each VM is locked to the underlying OS; Virtuozzo, for example, guarantees support only for Windows and Red Hat Linux.
Paravirtualisation
Another approach to virtualisation gaining in popularity is paravirtualisation. This technique also requires a VMM, but most of its work is performed in the guest OS code, which in turn is modified to support the VMM and avoid unnecessary use of privileged instructions.
The paravirtualisation technique allows different OSs to be run on a single server, but requires them to be ported; that is, they must know they are running under the hypervisor. Products such as UML and Xen use the paravirtualisation approach. Xen is the open-source virtualisation technology that Novell ships with its own Linux distribution, SuSE, and that also appears in the latest Red Hat Fedora development release, Fedora Core 4.
Server Sales Reach Tipping Point
IDC predicts something of an exodus towards virtualised server configurations over the next few years. The market analyst reported recently that the number of servers containing a virtualisation component shipped in Western Europe rose 26.5% to 358,000 units throughout 2008. IDC said these servers made up 18.3% of the market compared to 14.6% in 2007.
For the first time, last year the number of purely physical machines sold was eclipsed by sales of virtual-capable machines, which topped 2 million. IDC predicts declining IT hardware spending will result in VM sales exceeding physical machines by around 10% at some time during the year, and that the ratio of the two could be 3:2 by 2013.
In line with this trend, logical machines, or those with physical and virtual components, will realise a 15.7% increase over the same period. IDC notes that this highlights the importance to organisations of deploying the right tools to manage expanding virtual environments, seeing as both virtual and physical servers have to be operated, monitored and patched.
The research company also advises organisations to ensure they have the right level of education if they are to properly exploit this new and potentially rewarding approach to corporate IT.
Friday, 17 July 2009
Putting Security in its Place
We have been doing security wrong for a number of years. This is a poorly kept secret, as everybody knows that technologies invented in the days of floppy disks are woefully inadequate for protecting today’s business. The industry pours huge amounts of resources into extending the life of schemes that try to identify attacks or deviations from corporate security policies in order to protect the business against service disruptions or loss of confidential data. The mistake is the misunderstanding that security itself is a business solution; security is a critical feature of a successful business solution. Today’s best-practice security approaches not only fail to secure the business effectively, they also impede new business initiatives. The answer to reducing runaway security investments lies in virtualization-based application delivery infrastructures that bypass traditional security problems and focus on delivering business services securely.
Defence-in-depth is a broadly accepted concept built on the premise that existing security technologies will fail to do the job. For example, an antivirus product in the network may catch 70 percent of known attacks, but that means it will still miss 30 percent. It is common for larger enterprises to have different vendors scanning e-mail at the network edge, on the e-mail servers and on user endpoints under the theory that the arithmetic will be on their side and one of these products will block an attack.
However, practice shows that the effectiveness of defence-in-depth falls well short of theory, and operating duplicate products comes at a great cost to the business. IT can continue to layer on traditional technologies with consistently dismal results. What is needed is an approach that fundamentally changes the business operations to avoid many of the existing security traps, and positions IT to deliver the business to any user, anywhere, using any device.
Virtualised application delivery brings significant security contributions of its own, without requiring the purchase and operation of additional security products. The new approaches are made possible by advances in data-centre virtualization, the availability of high-speed bandwidth and innovations in endpoint sophistication. Now, it is entirely possible to execute browsers in the secure data-centre for the end-user, remotely project displays and manage user interfaces, and have all of this done transparently to the end-user. The security characteristics of virtualised application delivery are worth noting:
- Keep executables and data in a controlled data-centre environment. IT can better maintain compliant copies of applications and can better protect confidential data within the managed confines of a virtual data-centre. Most malicious attacks enter the enterprise through remote endpoints. Processing desktop applications in the data-centre reduces the exposure of business disruptions due to malicious code infections and data loss. IT operating costs are also reduced, as IT spends less time and resources maintaining employee endpoints with easy access to hosted applications.
- Minimize the time window of vulnerability during which desktop applications and data can get into trouble. Virtualising desktop applications - either by hosting the application or desktop via remote display protocols in the data-centre, or by streaming application images from the IT-managed application delivery centre for local execution at the endpoint - reduces the amount of time an application is exposed to potential infections. Application delivery starts the end-user with a clean copy of the application, and the application copy is erased when the end-user is done. Any infection that is picked up disappears, since the user launches a clean copy the next time the application is requested.
- Remove the end-user from the security equation. Traditional approaches place too much of the security burden on the end-user, who is responsible for maintaining software, respecting confidential data and being knowledgeable of dangers lurking in the Internet. IT should be managing corporate security, and virtualised application delivery makes it much easier for the user to do the right thing.
IT is challenged with making it easier for the business to attract new customers, while continuing to meet high security standards. The burden needs to be reduced for end-users who presently are expected to install software agents, upgrade software regularly and take special action when informed of security events. Not only that, but users are limited in their choice of endpoint devices, operating systems and connectivity. These disconnects between end-users and the organization, and between end-users and IT, inhibit productivity and business growth. Fortunately, IT is implementing new infrastructure models from the data-centre to the endpoint that more readily serve applications to users with intrinsic security at reduced operating costs.
IT is orchestrating the power of virtualised data-centres, high-speed bandwidth availability and high performance endpoints to offer end-users a true IT service with consistent secure access from anywhere at any time with any device. The ability to provide an integrated application delivery system, where applications are served on demand instead of deployed ahead of time, is the new model that has put security in its place. Application executables and sensitive data need not reside at the endpoint, where security becomes the responsibility of the end-user. A dynamic service approach enables IT to extend control of the technical infrastructure to the endpoint with resultant gains in security, application availability and cost reduction.
- Virtualised data-centres, which deliver cost savings in server utilization, are also delivering cost savings in dynamic desktop and application provisioning. As users request applications, the IT service can transparently launch a virtual desktop in the data-centre or stream a copy of the executable from the application delivery centre for local execution. Authenticated end-users have easy, secure access to business applications.
- The availability of high-speed bandwidth allows IT to effectively service end-users’ application requests over the Internet. Remote display protocols drive end-user interactions, allowing the application to execute in the safe confines of the data-centre with the look and feel of a locally executing application; application streaming protocols allow copies of executables to be efficiently downloaded and launched on demand for local execution when the network is unavailable. In both cases, IT ensures the user runs only the most recent compliant copy of the application. Security issues are significantly reduced simply by allowing the IT service to ensure that the user starts with clean copies of application images each and every time.
- The enterprise needs to support a wide variety of endpoint devices to make it easy for new customers to access applications. Thus, IT is required to make legacy Windows applications available not only to desktops and laptops running various operating systems, but also to intelligent handhelds such as phones and PDAs. The most expeditious way of providing this service is also the most secure – virtualise the application in the data-centre, giving the user a choice of browser, remote display or streamed application access. In each scenario, IT reduces security exposures through heightened application control while the end-users can more readily get their business done.
It is time to start meeting security requirements the right way – by fundamentally changing the way applications are provided to end-users. The traditional model of executing applications by installing software directly on isolated PCs is well over 30 years old – well before the Internet connected users. It is not a surprise that this approach fails dramatically to meet today’s security requirements. An integrated approach that takes advantage of virtualization, Web based connectivity and power of endpoints to minimize security risks is essential. The direction of an integrated application delivery service enables IT to use ubiquitous Web-based technology to support new users and drive the costs out of supporting existing users. The business benefits of increased availability combined with the security benefits of greater IT control make the evolution to a cloud-based application delivery service inevitable.
A simple example shows the power of a virtualised application delivery system. A merger to create a stronger international presence for the enterprise creates a need to quickly grant access to corporate applications for the new employees. IT provides a securely configured browser that executes virtually in the data-centre. The new business offices easily transition to corporate applications without having to reconfigure internal systems such as firewalls or endpoints that may not be compliant with corporate security policies. The new offices are more quickly indoctrinated into the new organization, and the security risks of non-compliant configurations are simply bypassed. The virtual application delivery capability has put security in its place, removing additional costs and showing the agility to streamline IT alignment with business needs.
In an ideal world, security just wouldn’t matter. Organizations would go about the business of satisfying customers without concern for malicious attacks or painful losses of confidential data. Unfortunately, we’re not there yet. However, by implementing virtualised application delivery approaches, IT can simply avoid many insecure situations while gaining the desired agility to keep IT services aligned with the business. This is putting security in its place – a feature to enable the success of business operations.
- Extend the virtualised data-centre to accommodate end-user desktops and applications. Application delivery using remote display technologies is a good way to deliver business value to remote offices where IT does not have to deploy applications on local endpoints and confidential data remains controlled in the data-centre. Put metrics in place to measure the IT time savings of only applying patches and software upgrades to applications in the data-centre.
- Test out the user experience of streamed applications. For example, employees working from home can improve security by executing a fresh copy of a browser or e-mail client from the corporate application delivery centre. Similarly, employees can work on an airplane totally disconnected from the network with applications that have been streamed to their laptop for the business trip. Let application delivery transparently stream compliant images from the data-centre to the desktop, and reduce the risk of malicious code lingering on corporate endpoints. Check out user satisfaction with performance while knowing that each user session begins with the most secure application that IT can deliver, and IT can deliver applications at the speed of business.
- Have your IT architects report back on delivering compliant end-user desktops and applications as an IT service. Once IT is comfortable in the cost savings and increased control of end-user environments in the data-centre, the next logical step is to enhance the application delivery service so IT can have the same procedures for both local and remote users. Look at additional cost savings by consolidating network security into the data-centre and achieve greater scale with network traffic accelerators.
The way to run a more secure business is to run a more secure application environment, where IT effectively controls executables and virtualization shrinks the vulnerability of desktop applications. IT managers should question why we keep putting applications in harm’s way on end-user desktops. Start moving towards virtualised application delivery – you will gain flexibility in running your business, you will gain tighter control and security of critical applications and confidential data, and you will lose a big expense bill from administering obsolete security technologies.
Virtualization is Changing the way IT Delivers Applications
Virtualization has rapidly become the hottest technology in IT, driven largely by trends such as server consolidation, green computing and the desire to cut desktop costs and manage IT complexity. While these issues are important, the rise of virtualization as a mainstream technology is having a far more profound impact on IT beyond just saving a few dollars in the data centre. The benefits and impact of virtualization on the business will be directly correlated to the strength of an organization’s application delivery infrastructure. Application delivery is the key to unlocking the power of virtualization, and organizations that embrace virtualization wrapped around application delivery will thrive and prosper, while those that do not will flounder. As virtualization takes centre stage, shifting roles in IT will require a new breed of professionals with broader skill sets to bridge IT silos and optimize business processes around the delivery of applications.
Going Mainstream
We are moving into a new era where virtualization will permeate every aspect of computing. Every processor, server, application and desktop will have virtualization capabilities built into its core. This will give IT a far more flexible infrastructure where the components of computing become dynamic building blocks that can be connected and reassembled on the fly in response to changing business needs. In fact, three years from now, we will no longer be talking about virtualization as the next frontier in enterprise technology. It will simply be assumed. For example, today we normally assume that our friends, family and neighbours have high-speed Internet access from their homes. This was not the case a few years ago, when many were using sluggish dialup lines to access the Internet or had no access at all. High-speed Internet is now mainstream, and so it will be for virtualization. Virtualization will be expected; it will be a given within the enterprise. As this occurs, the conversation within IT circles will shift from the question of how to virtualise everything to the question of what business problems can be solved now that everything is virtualised.
Virtualization and Application Delivery
The most profound impact of virtualization will be in the way organizations deliver applications and desktops to end users. In many ways, applications represent the closest intersection between IT and the business. Your organization’s business is increasingly represented by the quality of its user-facing applications. Whether it is a large ERP solution, custom web applications, e-mail, e-commerce, client-server applications or SOA, your success in IT today depends on ensuring that these applications meet business goals. Unfortunately, trends such as mobility, globalization, off-shoring, and e-commerce are moving users further away from headquarters, while issues like data centre consolidation, security and regulatory compliance are making applications less accessible to users.
These opposing forces are pushing the topic of application delivery into the limelight. It is forcing IT executives to consider how their infrastructures get mission-critical, data centre-based applications out to users to lower costs, reduce risk and improve IT agility. Virtualization is now the key to application delivery. Today’s leading companies are employing virtualization technology to connect users and applications to propel their businesses forward.
Virtualization in the Enterprise
The seeds of virtualization were first planted over a decade ago, as enterprises began applying mainframe virtualization techniques to deliver Windows applications more efficiently with products such as Citrix® Presentation Server™. These solutions enabled IT to consolidate corporate applications and data centrally, while allowing users the freedom to operate from any location and on any network or device, where only screen displays, keyboard entry and mouse movement traversed the network. Today, products like Citrix® XenApp™ (the successor to Presentation Server) allow companies to create single master stores of all Windows application clients in the data centre and virtualise them either on the server or at the point of the end user. Application streaming technology within Citrix XenApp allows Windows-based applications to be cached locally in an isolation environment, rather than to be installed on the device. This approach improves security and saves companies millions of dollars when compared to traditional application installation and management methods.
Virtualization is also impacting the back-end data and logic tier of applications with data centre products such as Citrix® XenServer™ and VMware ESX that virtualise application workloads on data centre servers. While these products are largely being deployed to reduce the number of physical servers in the data centres, the more strategic impact will be found in their ability to dynamically provision and shift application workloads on the fly to meet end user requirements. The third major area concerning the impact of virtualization will be the corporate desktop, enabled by products such as Citrix® XenDesktop™. The benefits of such solutions include cost savings, but they also enable organizations to simplify how desktops are delivered to end users in a way that dramatically improves security and the end user experience (compared to traditional PC desktops). From virtualized servers in the data centres to virtualized end-user desktops, the biggest impact of virtualization in the enterprise will be found within an organization’s application delivery infrastructure.
Seeing the Big Picture
The mass adoption of virtualization technology will certainly require new skills, roles and areas of expertise within organizations and IT departments. Yet the real impact of virtualization will not hinge on the proper acquisition of new technical skills. Rather, to make the most of the virtualization opportunity, organizations will have to focus on breaking down traditional IT silos and adopting end-to-end virtualization strategies. Most IT departments today are organized primarily around technology silos. In many organizations, we find highly technical employees who operate on separate IT “islands,” such as servers, networks, security and desktops. Each group focuses on the health and well-being of its island, making sure that it runs with efficiency and precision. Unfortunately, this stand-alone approach is debilitating IT responsiveness, causing pundits like bestselling author Nicholas Carr to ask whether IT even matters to business anymore. To break this destructive cycle, IT employees must take responsibility for understanding and owning business processes that are focused horizontally (from the point of origin in the data centre all the way to the end users they are serving), building bridges from island to island. IT roles will increasingly require a wider, more comprehensive portfolio of expertise around servers, networking, security and systems management. IT personnel will need to have a broad understanding of all these technologies and how they work together as the focus on IT specialization gives way to a more holistic IT mindset.
Seeking Experts in Delivery
The new IT roles will require an expertise in delivery. IT will need to know how to use a company’s delivery infrastructure to quickly respond to new requirements coming from business owners and end users alike. IT specialization will not completely disappear, but it will not look anything like the silo entrenchment and technical specialization we see today. From this point forward, IT professionals will increasingly be organized around business process optimization to serve end users and line of business owners, rather than around independent technologies sitting in relative isolation. Across the board, the primary organizing principle in IT will shift from grouping people around technology silos to organizing them around common delivery processes. The companies that make this transition successfully will thrive, while those that do not will struggle to compete in an increasingly demanding and dynamic business world. IT organizations of the future will need to develop professionals who can see the parts as a whole and continually assess the overall health of the delivery system, responding quickly to changing business requirements. Employee work groups will continue to form around common processes, but the focus will be less about highly specialized knowledge and more about the efficiency of frequently repeated processes. IT professionals who understand the deep technical intricacies of IP network design, for example, will be in less demand than those who understand best practices in application delivery.
Guidelines for Staying In and Ahead of the Game
If you are not testing the waters of virtualization, you may already be behind. Experiment with virtualization now. Acquire applications and consider how to deliver them as part of your IT strategy. Three key recommendations are:
- Change the mindset of your IT organization to focus on delivery of applications rather than installing or deploying them. Think about “delivery centres” rather than data centres. Most IT organizations today continue to deploy and install applications, although industry analysts advise that traditional application deployment is too complex, too static and costs too much to maintain, let alone to try to keep up with changes in the business. Delivering on the vision of an IT organization that is aligned with business goals requires an end-to-end strategy of efficiently delivering business applications to users.
- Place a premium on knowledge of applications and business processes when hiring and training IT employees. IT will always be about technology, but do not perpetuate today’s “island” problem by continuing to hire and train around deep technical expertise in a given silo. If that happens, IT will continue to foster biased mindsets that perceive the world through a technologically biased silo lens, the opposite of what is needed today. IT leaders will increasingly need to be people who understand business processes. Like today’s automotive technicians, they will have to be able to view and optimize the overall health of the system, not the underlying gears and valves – or bits and bytes.
- Select strategic infrastructure vendors who specialize in application delivery. Industry experts agree that the time is right to make the move from static application deployment to dynamic application delivery. IT will continue to use vendors that specialize in technical solutions that fit into various areas, such as networking, security, management and even virtualization. What is important, however, is forming a strategic relationship with a vendor that focuses not on technology silos, but on application delivery solutions. The vendor should be able to supply integrated solutions to incorporate virtualization, optimization and delivery systems that inherently work with one another, as well as the rest of your IT environment.
Thursday, 16 July 2009
High Availability and Disaster Recovery for Virtual Environments
Virtualization is increasingly being used by IT departments for server consolidation and testing purposes. Virtual servers offer flexibility, but if a single physical server containing multiple virtual servers fails, the impact of data loss is enormous.
Introduction
Virtual servers are used to reduce operational costs and to improve system efficiency, but their growth has created high availability and data protection challenges for IT departments. It is not enough to protect only physical servers; virtual servers must be protected as well, since they contain business-critical data and information. Virtual servers offer flexibility, but if a single physical server containing multiple virtual servers fails, the impact of data loss is enormous.
Virtualization Benefits
Companies are adopting virtualization at a rapid pace because of the tremendous benefits it offers, some of which include:
- Server Consolidation: Virtualization helps consolidate multiple servers onto a single physical server, thus offering improved operational performance.
- Reduced Hardware Costs: As the number of physical servers goes down, the cost of the servers and associated costs such as IT infrastructure, space, etc. also decrease.
- Improved Application Security: By running each application in its own virtual machine, any vulnerability is segregated and does not affect other applications.
- Reduced Maintenance: Since virtual servers can easily be relocated and migrated, maintenance of hardware and software can be done with minimal downtime.
- Enhanced Scalability: The ease with which virtual servers can be deployed results in improved scalability of the IT implementation.
File or Block Level Replication
Different kinds of replication techniques can be used to replicate data between two servers, both locally and remotely. In block-level replication, the replication is performed by the storage controllers or by mirroring software; in file-system-level replication (replication of file system changes), host software performs the replication. In both block- and file-level replication it does not matter what type of application is being replicated: these techniques are essentially application agnostic. Some vendors do offer solutions with a degree of application awareness, but these cannot provide the automation, granularity and other advantages that come with a truly application-specific solution. One also needs to be aware of the following (a simple file-level sketch follows the list):
- The replicated server is always in passive mode and cannot be accessed for reporting or monitoring purposes.
- A virus or corruption can be propagated from the production server to the replicated server.
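To make the file-level approach concrete, here is a minimal, purely illustrative Python sketch of host-based file-level replication: it periodically walks a source tree and copies any file that has changed since its replica copy was last updated. The paths and interval are hypothetical, and a real product would transfer the data to a remote host over the network rather than to a local path.

import os
import shutil
import time

# Hypothetical paths and interval; illustrative only.
SOURCE_ROOT = "/data/production"
REPLICA_ROOT = "/data/replica"
INTERVAL_SECONDS = 300  # one replication pass every 5 minutes

def replicate_changed_files(source_root, replica_root):
    """Copy any file whose modification time is newer than its replica copy."""
    for dirpath, _, filenames in os.walk(source_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(replica_root, os.path.relpath(src, source_root))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            # Replicate only files that are missing from, or newer than, the replica.
            if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                shutil.copy2(src, dst)

if __name__ == "__main__":
    while True:
        replicate_changed_files(SOURCE_ROOT, REPLICA_ROOT)
        time.sleep(INTERVAL_SECONDS)

Note that, exactly as described above, the sketch is application agnostic: it has no idea whether it is copying Exchange databases or plain documents, which is why it cannot offer application-level granularity.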
Application Specific Replication Approach
In this approach, the replication is done at a mailbox or database level, so it is very application specific. One can pick and choose the mailboxes or databases that need to be replicated. In the case of Exchange Server, one can set up a granular plan for key executives, sales and IT people, in which replication occurs more frequently to achieve the required Recovery Point Objective (RPO) and Recovery Time Objective (RTO). For everyone else in the company, another plan can be set up where the replication intervals are less frequent.
Another advantage of this approach is that the replicated or failover server is in an Active mode. The failover server can be accessed for reporting and monitoring purposes. With other replication approaches, the failover server is in a Passive mode and cannot be used for maintenance, monitoring or reporting purposes.
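As a purely hypothetical illustration of such granular plans (the group names, mailboxes and intervals below are invented and do not reflect any particular product’s API), the idea can be reduced to a simple schedule in which each group’s replication interval bounds its worst-case RPO:

from dataclasses import dataclass

@dataclass
class ReplicationPlan:
    name: str
    mailboxes: list
    interval_minutes: int  # drives the achievable Recovery Point Objective (RPO)

# Hypothetical plans: frequent replication for key people, relaxed for the rest.
plans = [
    ReplicationPlan("priority", ["ceo", "cfo", "sales01", "itadmin"], interval_minutes=15),
    ReplicationPlan("standard", ["staff01", "staff02", "staff03"], interval_minutes=240),
]

for plan in plans:
    print(f"Plan '{plan.name}': replicate {len(plan.mailboxes)} mailboxes "
          f"every {plan.interval_minutes} minutes (worst-case RPO about {plan.interval_minutes} minutes)")

The point of the sketch is simply that the replication frequency, and therefore the RPO, can be chosen per mailbox or database group rather than for the server as a whole.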
Backup and Replication
Some solutions offer both backup and replication as part of a single solution. In this case, backup is integrated with replication and users get a two-in-one solution. Considered a two-tier architecture, these solutions consist of an application server and an agent environment. The application server also hosts the network share that stores all the backup files. The files are stored on this network share rather than on any particular target server, to prevent loss of backup files: if a target server goes down, users can still access their backup files and rebuild the target server with as little downtime as possible.
The mailboxes and databases are backed up to the backup server and then replicated to the remote failover server. A full backup and restore is done first, and after that only the changes are applied through incrementals. For restoring emails, mailboxes and databases, the local backup data can be used; for disaster recovery purposes, the remote failover server can be utilized.
Virtual Environments
Many high availability solutions protect data that resides on virtual servers. Customers can have multiple physical servers at the primary location, while at the offsite disaster recovery location they can have a single physical server hosting multiple virtual servers. Multiple virtual servers from the primary site can also be easily backed up and replicated to the disaster recovery site.
With some disaster recovery solutions, the appropriate agents are installed on both the physical and the virtual servers, and these agents have a very small footprint. Because of this limited footprint, the performance impact on these servers is minimal. With other replication solutions, the entire application has to be installed on the virtual servers, which takes a far greater toll on performance.
Physical to Virtual Servers
In this scenario, the production environment has physical servers and the disaster recovery site is deployed in a virtual environment. Both the physical and virtual servers are controlled by the application (the backup and replication software), which can be located either at the production site or at the remote site.
Figure 1
Virtual to Virtual Environments
In order to achieve significant cost savings, some companies not only virtualize their disaster recovery site but also use virtual servers in the production environment. One can have one or more physical servers housing many virtual servers at both the production and remote sites.
Figure 2
Failover/Failback
When a disaster strikes the primary site, all users are failed over to the remote site. Once the primary site is rebuilt, one can easily go through the failback process to the original primary servers. Also, a single virtual server containing Exchange or SQL Server can be failed over on its own without affecting other physical or virtual servers.
The only way to make sure that your disaster recovery solution works is to test it periodically. Unfortunately, doing so usually means failing over the entire Exchange or SQL Server environment, and administrators will be leery about doing this for fear of crashing the production Exchange or SQL Server. Some solutions can instead create a test mailbox or database and use it for periodic failover/failback testing. Through this approach, customers can be confident that their disaster recovery solution will work when it is most needed, and have peace of mind.
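As an illustration only, the shape of such a periodic drill might look like the following Python sketch. The host name, port, test database name and the fail_over/fail_back helpers are all hypothetical stand-ins for whatever the replication product actually provides.

import socket
import time

FAILOVER_HOST = "dr-site.example.com"   # hypothetical DR site address
SQL_PORT = 1433                         # default SQL Server port
TEST_DATABASE = "dr_drill_db"           # hypothetical test database

def failover_reachable(host, port, timeout=5):
    """Basic reachability check against the failover server."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_dr_drill(fail_over, fail_back):
    """Fail over only the test database, verify the DR site responds, then fail back."""
    if not failover_reachable(FAILOVER_HOST, SQL_PORT):
        print("DR site unreachable - investigate before relying on failover")
        return False
    fail_over(TEST_DATABASE)   # hypothetical call into the replication product
    time.sleep(60)             # allow the failover to settle
    ok = failover_reachable(FAILOVER_HOST, SQL_PORT)
    fail_back(TEST_DATABASE)   # return the test database to the primary site
    print("DR drill", "passed" if ok else "failed")
    return ok

Because only the test database is involved, such a drill can be scheduled regularly without any risk to the production Exchange or SQL Server workloads.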
Migration
Virtual servers, in conjunction with certain disaster recovery solutions, can also be used as a migration tool. If a physical server goes bad, one can fail over to the remote failover virtual server; once the primary site is rebuilt, the failback can be achieved easily. With some applications, there is no need to have identical versions of Exchange on the primary and failover servers: one can run Exchange 2003 on the primary server and Exchange 2007 on the failover server. This can be used as a migration path. For example, fail over to the failover server running Exchange 2007, upgrade the original primary to Exchange 2007, and then fail back. The same scenario applies to SQL Server 2000, 2005 and 2008 as well.
Conclusion
Companies are increasingly adopting virtual servers because virtualization offers many compelling benefits. This growth in virtualization poses significant disaster recovery and data protection challenges for IT administrators, so there is a greater need than ever to implement appropriate high availability and failover solutions to protect these servers.
Can Terminal Services be considered Virtualization?
Virtualization is a hot topic and, at the moment, very hyped. Manufacturers would like to use that hype to boost their products by linking them to the virtualization market. In this craze, Terminal Services has also been labeled a "virtualization product". In this article, let’s look at the facts, and I will also give my opinion about this virtualization label.
Introduction
Although virtualization techniques were mentioned a long time ago (around 1960), within the ICT market it was the launch of VMware that made virtualization a big success. Its server virtualization product, which made it possible to run multiple servers on one physical system, kick-started the virtualization space. After server virtualization, other virtualization products and fields followed quickly, such as application virtualization, operating system virtualization and desktop virtualization. Products that were already available before this boom now want to hitch a ride on the virtualization craze, so I was a bit surprised when both Microsoft and Citrix determined that Terminal Services and Citrix Presentation Server are virtualization products.
What is…?
Before we can determine whether Terminal Services can be labeled as a virtualization product, we first need to establish the definitions of virtualization and Terminal Services.
Virtualization:
Virtualization is a broad term that refers to the abstraction of computer resources. Virtualization hides the physical characteristics of computing resources from their users, be they applications, or end users. This includes making a single physical resource (such as a server, an operating system, an application, or storage device) appear to function as multiple virtual resources; it can also include making multiple physical resources (such as storage devices or servers) appear as a single virtual resource.
Terminal Services:
Terminal Services is one of the components of Microsoft Windows (both server and client versions) that allows a user to access applications and data on a remote computer over any type of network, although normally best used when dealing with either a Wide Area Network (WAN) or Local Area Network (LAN), as ease and compatibility with other types of networks may differ. Terminal Services is Microsoft's implementation of thin-client terminal server computing, where Windows applications, or even the entire desktop of the computer running terminal services, are made accessible to a remote client machine.
Terminal Services Virtualization?
Both Microsoft and Citrix are using the virtualization space to position their Terminal Services/Citrix Presentation Server/XenApp product features. Microsoft calls it presentation virtualization, while Citrix uses the term session virtualization. Microsoft describes Terminal Services virtualization as follows:
Microsoft Terminal Services virtualizes the presentation of entire desktops or specific applications, enabling your customers to consolidate applications and data in the data center while providing broad access to local and remote users. It lets an ordinary Windows desktop application run on a shared server machine yet present its user interface on a remote system, such as a desktop computer or thin client.
If we go a bit deeper, Microsoft is describing their interpretation of presentation virtualization as follows: Presentation virtualization isolates processing from the graphics and I/O, making it possible to run an application in one location but have it controlled in another. It creates virtual sessions, in which the executing applications project their user interfaces remotely. Each session might run only a single application, or it might present its user with a complete desktop offering multiple applications. In either case, several virtual sessions can use the same installed copy of an application.
OK, now that we have the definitions of virtualization and Terminal Services, and the way Microsoft explains why Terminal Services is a virtualization technique, it is time to determine whether Microsoft is right in this assumption.
Terminal Services is virtualization!
Reading the explanation of virtualization, two important elements stand out: abstraction and hiding the physical characteristics.
From the user’s perspective, the application is not available on the workstation or thin client but is running somewhere else. Using the definition of hiding physical characteristics, Terminal Services can therefore be seen, from a user perspective, as virtualization: because the application is not installed locally, the user has no physical association with the application.
With the IT perspective in mind, Terminal Services can also be seen as virtualization based on the definition that (physical) resources can function as multiple virtual resources. Traditionally, applications installed on a local workstation can be used by one user at a time. By installing the application on a Terminal Server (in combination with a third-party SBC add-on), the application can be used by several users at the same time. Although an application cannot be seen as a purely physical resource, you can see Terminal Services as a way of offering a single resource that appears as multiple virtual resources.
In summary, Terminal Services can be seen as virtualization because the application is abstracted from the local workstation and the application appears to function as multiple virtual resources.
Terminal Services is not virtualization!
However, let’s take a closer look at the physical resources. Hardware virtualization, application virtualization and OS virtualization really do separate the workload from the physical resource: with application virtualization the application is not physically installed on the system, OS virtualization does not need a hard disk to operate, and with hardware virtualization the virtual machine does not communicate (directly) with real hardware. Terminal Services, from an IT perspective, still needs the physical resources. It is not really virtualizing anything; only the location where the application or session runs and the way the application is displayed to the user are different. In other words, as Microsoft describes in its own explanation, Terminal Services isolates processing from the graphics and I/O, but this is still done on another device without an additional layer in between.
Conclusion
Back to the main question: is Terminal Services virtualization? The answer is... it depends. It depends on how you look at the concept of virtualization and on your view of Terminal Services. Terminal Services can be seen as virtualization if you look at it from the user’s perspective (the application is not physically running on the workstation or thin client), or from the view that a single application or session can be used by more than one user at once. If you look at how other virtualization techniques work, however, Terminal Services does not function the same way, and physically nothing is running in a separate layer.
So there is no clear answer; it is subjective, depending on how you look at virtualization and Terminal Services. My personal opinion is that Terminal Services cannot be labeled as virtualization, because it is not comparable with other virtualization techniques. In my eyes, Terminal Services does not add an additional (virtualization) layer; it only divides the processing between two systems. I think both Microsoft and Citrix are using the term "virtualization" to take advantage of the current boom in the virtualization market, but both know that, if you look at the underlying IT techniques, it is not "real" virtualization.