Wednesday, 22 July 2009

Wireless Firewall Gateway White Paper

Introduction

With the deployment of wireless network access in the workplace, the requirement for an enhanced security design emerges. Wireless technology offers a more accessible means of connectivity but does not, by itself, address the security concerns that come with offering this less restrained service. In order to facilitate management of this network, maintain a secure network model, and keep a high level of usability, a multi-functional device must be placed in the wireless environment to perform these tasks.

Design Objectives

The WFG (Wireless Firewall Gateway) is designed to take on several different roles so that the process is nearly transparent to the user. Since the wireless network is considered an untrusted environment, access is restricted in order to limit the damage that an intruder could inflict on internal systems and the Internet. This restriction, however, reduces the convenience of the wireless service for users who wish to reach external sites on the Internet. Since unknown users are difficult to identify and hold accountable for damages, a method of user authentication is needed to ensure that users take responsibility for their actions and can be tracked for security purposes. A trusted user can then gain access to services and the commodity Internet from which unauthenticated users are blocked.

Keeping simplicity in mind, the WFG acts as a router between the wireless and external networks, with the ability to dynamically change firewall filters as users authenticate themselves for authorized access. It is also a server responsible for handing out IP addresses to users, running a website through which users can authenticate, and maintaining a record of who is on the network and when.

Users of the wireless network are only required to have a web browser (if they wish to authenticate) and Dynamic Host Configuration Protocol (DHCP) client software, which comes standard with most operating systems. Minimal configuration is required of the user, allowing a variety of computer platforms to be supported with no additional software. The idea is to keep the wireless network as user-friendly as possible while maintaining security for everyone.

Internals

Given the multiple functionalities and enhanced security required for this device, a PC running OpenBSD Unix was chosen with three interfaces on different networks: wireless, external (gateway), and internal (management). The following sections elaborate upon the services that constitute the device's various roles:

  1. Dynamic Host Configuration Protocol (DHCP) - The DHCP server leases individual IP addresses to anyone who configures their system to request one. Other vital information, such as the subnet mask, default gateway, and name server, is also given to the client at this time. The WFG uses a beta DHCPv3 open-source server from the Internet Software Consortium, customized with the additional ability to dynamically remove hosts from the firewall access list when DHCP releases a lease for any reason (client request, time-out, lease expiration, and so on). Configuration files for the server are located in /etc and follow the ISC standard (RFC) format. The server executable itself, however, is customized and does not follow these standards; if the server is upgraded, the source code must be re-customized as well.

    The DHCP server is configured to listen only on the interface of the wireless subnet. This prevents anyone on the wired network from obtaining a wireless IP address from this server. As an added security measure, packet filters block DHCP requests arriving on any other interface.

  2. Filtering - Stateful filtering is accomplished using OpenBSD's IPF software. IP routing is enabled in the kernel, allowing packet filtering to occur between the wireless and external network interfaces. Static filters are configured at boot time in the /etc/ipf.rules file and are designed to minimize remote access to the WFG. Only essential protocols such as NTP, DNS, DHCP, and ICMP are allowed to reach the system, which builds a secure foundation for the restricted environment. For users who do not require an authenticated session, access is granted to selected servers for email, VPN, and web. Where applicable, packet filtering is done at the transport layer (UDP or TCP) to allow stateful inspection of the traffic. This adds a higher level of security, since dynamic or private port sessions do not have to be explicitly permitted into the wireless network.

    The same script that authenticates a user over the web also enables that user's access to the unrestricted environment. When a user connects to the web server, their IP address is recorded; upon successful login, a permit rule for that address is pushed to the top of the firewall filter list, allowing all TCP and UDP connections out of the wireless network for that IP address.

    In order to prevent subsequent users from being granted trusted access when an IP address is recycled, the in-memory database software removes the firewall permit rule whenever the lease's next binding state is set to free, expired, abandoned, released, or reset (a sketch of this release hook follows the list). The DHCP server will not issue the same IP address until it frees the last client's lease. This helps avoid the security problem of someone hijacking an authenticated IP address and using it after the valid user has stopped using the wireless service.

  3. Web Authentication - Web-based authentication is used so that a user on any platform can gain access to the wireless network. The open-source Apache web server handles this task securely. The server implements Secure Sockets Layer (SSL) for client/server public- and private-key RSA encryption. Connecting to the web server via HTTP automatically redirects the client browser to HTTPS. This ensures that the username and password entered by a user are not sent in clear text. To further increase security, the SSL certificate is signed by Verisign, a trusted Certificate Authority (CA), which assures users that an attacker is not impersonating the web server to harvest their password information.

    A website is set up where users can enter their username and password. This site displays the standard government system-access warning and shows the IP address of the user's system (using PHP). Once a user has submitted their username and password, a Perl CGI script communicates with a RADIUS server, protecting the password with an MD5 digest, to determine whether the submitted credentials are correct. If the account information matches what is in the RADIUS database, commands to permit the user's IP address, obtained through the Apache environment variables, are added to the IPF access rules. If the user is not found in the RADIUS database, or if the password entered is incorrect, a web page stating "Invalid Username and Password" is displayed. If everything succeeds, the user is notified of their privileged access. (A sketch of this flow follows the list.)

  4. Security - Every step is taken to ensure that a desirable security level is maintained both on the WFG system and the wireless network while not hindering functionality and usability. Only hosts connecting from the wireless network can access the web server. For system management purposes, Secure Shell (OpenSSH) connections are permitted from a single, secured host. All other methods of direct connection are either blocked by the firewall filters or denied access through the use of application-based TCP wrappers.

    Users' authentication information is protected throughout the process: SSL encryption, with a certificate signed by a trusted CA, between the client's web browser and the server, and MD5 digest protection between the web server and the RADIUS system for account verification.

    Logs are kept for all systems that gain access to either the restricted or the authorized network. The DHCP server records which MAC (NIC) address requests an IP address and when the address is released, then passes that information to syslog. Syslog identifies all logging information from DHCP and writes it to /var/log/dhcpd. Additionally, any user who attempts to authenticate via the web interface has their typed username and source IP address logged, along with the current time and whether or not the attempt was successful. When a lease on an IP address expires and the address is removed from the firewall filters, this is noted with the authentication information in /var/log/wireless. These logs are maintained by the website script and the DHCP server software, not by syslog. Combined, these logs make it possible to identify who is on the network at a given time, either by userid or by burned-in physical (MAC) address, for auditing purposes.

    With the DHCP server managing the firewall filters, it would otherwise be possible for a user to manually configure a static IP address, authenticate, and have the permit rule never removed. To prevent this, the CGI script reads the dhcpd.leases file and determines whether the source IP address, obtained through the environment variable $ENV{'REMOTE_ADDR'}, has an active lease. If no lease is found, or if the lease is expired or abandoned, authentication is denied.
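To make the interaction between the web login, the RADIUS check, the lease validation, and the IPF rules concrete, the following is a minimal sketch of that flow. The WFG's actual implementation is a Perl/CGI script; this Python version, the file paths, the rule syntax, the ipf invocation, and the helper names are illustrative assumptions rather than the original code.

    import os
    import re
    import subprocess
    import time

    LEASE_FILE = "/var/db/dhcpd.leases"   # assumed location of the ISC lease database
    LOG_FILE = "/var/log/wireless"        # authentication log described above

    def has_active_lease(ip):
        # Reject statically configured addresses: only an IP with an active
        # DHCP lease ("binding state active") may authenticate.
        try:
            with open(LEASE_FILE) as fh:
                text = fh.read()
        except IOError:
            return False
        for block in re.findall(r"lease %s \{(.*?)\}" % re.escape(ip), text, re.S):
            if "binding state active" in block:
                return True
        return False

    def radius_authenticate(username, password):
        # Placeholder: the WFG queries a RADIUS server, with the password
        # protected by an MD5 digest as defined by the RADIUS protocol.
        # A real implementation would call a RADIUS client library here.
        return False

    def permit(ip):
        # Push pass rules for this address into the IPF list.  The rule
        # syntax and the ipf -f - invocation are illustrative only.
        rules = ("pass out quick proto tcp from %s to any keep state\n"
                 "pass out quick proto udp from %s to any keep state\n") % (ip, ip)
        subprocess.run(["ipf", "-f", "-"], input=rules, text=True, check=True)

    def handle_login(username, password):
        ip = os.environ.get("REMOTE_ADDR", "")
        ok = has_active_lease(ip) and radius_authenticate(username, password)
        with open(LOG_FILE, "a") as log:
            log.write("%s %s %s %s\n" %
                      (time.ctime(), username, ip, "accept" if ok else "reject"))
        if ok:
            permit(ip)
        return ok

In the real gateway this logic runs as a CGI handler under Apache; the sketch omits the HTML form handling and the HTTP-to-HTTPS redirect.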
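The corresponding removal of the permit rule is compiled into the customized ISC dhcpd itself. The sketch below only illustrates the effect of that hook as a standalone function invoked on a lease state change; the state names come from the description above, while the rule syntax and the ipf -r invocation are assumptions.

    import subprocess

    RELEASED_STATES = {"free", "expired", "abandoned", "released", "reset"}

    def on_lease_state_change(ip, new_state):
        # When a lease is no longer active, remove the per-address permit
        # rules so that the next client to receive this IP does not inherit
        # the previous user's trusted access.
        if new_state not in RELEASED_STATES:
            return
        rules = ("pass out quick proto tcp from %s to any keep state\n"
                 "pass out quick proto udp from %s to any keep state\n") % (ip, ip)
        # ipf -r removes rules matching those read from standard input.
        subprocess.run(["ipf", "-r", "-f", "-"], input=rules, text=True, check=True)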

Tuesday, 21 July 2009

Kaspersky Anti-Virus 2010: Advanced Web and PC Protection by Kaspersky

Kaspersky is known as one of the best anti-virus vendors. The new version, 2010, provides new and improved features. Read this article to find out what KAV 2010 has to offer.

Introduction

Kaspersky has released the new version of its top-rated anti-virus, Kaspersky Anti-Virus 2010. The new version boasts several new features. This article discusses what to expect from Kaspersky Anti-Virus 2010 (KAV).

Installation and System Requirements

The new version of Kaspersky Anti-Virus requires Windows XP, Vista or Windows 7 and works on both 32-bit and 64-bit systems. The vendor notes that the program will not work in safe mode on 64-bit XP and has limited functionality when run in safe mode on 64-bit Vista.

During installation of KAV 2010, the installer displays the standard End-User License Agreement (EULA). After you agree to the EULA, a second agreement is displayed: the Kaspersky Security Network Data Collection Statement. If you agree to it, you participate in the Kaspersky Security Network by allowing the program to collect selected security and application data. This is similar to SpyNet in Windows Defender and the threat centres run by many other anti-malware vendors, which help them build protection signatures for risks in the wild. Agreeing is not required: the installation proceeds even if you leave the participation box unchecked.

You can customize which components the KAV installer will install: Virtual Keyboard, Proactive Defense, scan tasks, and anti-virus protection for files, IM, web, e-mail and the program kernel. A system restart is not required to start using KAV 2010.

Features and Options

The features and options in KAV 2010 are quite extensive, but don't let that stop you from trying the program. Most of the options are very useful and cover the protection most computers need:

  • File Anti-Virus: Protection by KAV against known malware
  • Mail Anti-Virus: E-mail protection
  • Web Anti-Virus: Network protection to scan web traffic
  • IM Anti-Virus: Scans instant messaging for malicious objects
  • Proactive Defense: Heuristic protection
  • Anti-Phishing: Fraud Protection

Extra tools and features in KAV 2010 are displayed in its Security + window, which lets you use the following tools:

  • Virtual Keyboard
  • Rescue Disk
  • Browser Tune-up
  • Privacy Cleaner

  • Windows Settings Troubleshooting Utility: checks security-related settings in Windows, e.g. whether Autorun is enabled or Windows Update is disabled.
  • Special Game mode.
  • Browser Helper Object to identify unsafe websites: I noticed that this feature was not working. I visited a few known-unsafe websites and even searched the internet for known malware links, but I did not see the color coding or warning it is supposed to display.

Performance, Tasks and Update

KAV 2010 lets you roll back to the previous database if the new database is corrupted or producing false positives. This is quite useful, since false positives and corrupted downloads do happen. It is always recommended to configure the anti-malware program to keep a copy of any detected threat in quarantine.

The scan tasks in KAV 2010 are similar to what we expect from an advanced anti-virus, except that KAV 2010 also offers a Vulnerability Scan. KAV runs with acceptable memory usage during normal PC use and during scanning.

Protection and Detection

I used 201 confirmed malware samples to test KAV 2010's resident protection. The malware samples were located in a Virtual PC machine. To run the test, I transferred the directory containing the 201 malware files from the Virtual PC to the host system where Kaspersky was installed. The resident protection detected only 159 of the samples, leaving 42 undetected. An on-demand scan also failed to detect the remaining 42. I changed the settings to the highest protection level, but the result was the same.

An SME’s Guide to Virtualisation

Virtualisation is now seen as essential in enabling organisations to manage their vital IT resources more flexibly and efficiently. Yet how challenging is it to successfully deploy virtualisation, especially at an SME? This guide, produced by Computer Weekly in association with IBM and Intel, covers the salient issues for an SME seeking to implement a virtualisation strategy.

Overview

Virtualisation is a growing trend in computing as organisations address the challenge of harnessing more processing power for more users, while reining in costs during the recession.

Surveys of SMEs conducted by IDC have revealed these businesses view virtualisation as presenting immediate cost advantages and opportunities to build and grow highly flexible IT environments.

IDC analyst Chris Ingle stresses that virtualisation is nothing new in the IT world, but the increased number of solutions now available for common x86 servers means SMEs can do a lot of the things that previously only mainframe and Unix users could do.

"It democratises virtualisation and brings it within SME budgets and lets them do things that previously only larger companies could do," he says.

Choose the Right System

This presents valuable opportunities for SMEs to improve how they use resources and develop strategies for business continuity and disaster recovery, among other benefits.

But organisations need to consider carefully what they hope to achieve with virtualisation and choose the solution that best suits their needs. There are a number of different techniques for virtualising a server or building a virtual machine (VM).

Hypervisor Virtualisation

The most common is hypervisor virtualisation, where the VM emulates the actual hardware of an x86 server. This requires real resources from the host (the machine running the VMs).

A thin layer of software inserted directly on the computer hardware, or on a host operating system, allocates hardware resources dynamically and transparently, using a hypervisor or virtual machine monitor (VMM).

Each virtual machine contains a complete system (BIOS, CPU, RAM, hard disks, network cards), eliminating potential conflicts.

Common VM products include Microsoft’s Virtual Server and Virtual PC, along with EMC VMware’s range of products, such as VMware ACE, VMware Workstation and its two server products, ESX and GSX Server.

Risks & Benefits

For medium-sized organisations, virtualisation can lead to significant savings on equipment as well as more centralised management of what they have. It also allows them to harness and distribute greatly increased processing power very quickly.

The process of creating VMs is expected to get even easier for organisations, with Intel integrating improved virtualisation technology into its business-class processors. But this can be a double-edged sword. For instance, analysts warn that, because virtual environments are so cheap and easy to build, many organisations risk losing track of them.

New practices have to be put in place, responding to the increasing overlap in the internal areas of responsibility of the IT staff, as storage, server, and network administrators will need to co-operate more closely to tackle interconnected issues.

Virtualising at Operating System Level

One of the more commonly cited pitfalls of virtualisation is that companies can risk breaching software-licensing agreements as a virtual environment expands.

Without a method to control the mass duplication and deployment of virtual machines, administrators will have a licence-compliance nightmare on their hands. Virtualising at the operating system (OS) level avoids this problem. Most applications running on a server can easily share a machine with others, provided they can be isolated and secured. In most situations, different operating systems are not required on the same server, merely multiple instances of a single OS.

OS-level virtualisation systems provide the required isolation and security to run multiple applications or copies of the same OS on the same server. Products available include OpenVZ, Linux-VServer, Solaris Zones and FreeBSD Jails. SWsoft's virtualisation technology, at first Linux-only, has recently been launched for Windows as well. Called Virtuozzo, it virtualises the OS so that multiple virtual private servers can run on a single physical server. Virtuozzo works by building on top of the operating system, supporting all hardware underneath. The VM does not need pre-allocated memory, as it is a process within the host OS rather than being encapsulated within a virtualisation wrapper.

The upside of OS-based virtualisation is that only one OS licence is required to support multiple virtual private servers. The downside is less choice, because each VM is locked to the underlying OS; Virtuozzo, for example, only guarantees support for Windows and Red Hat Linux.

Paravirtualisation

Another approach to virtualisation gaining in popularity is paravirtualisation. This technique also requires a VMM, but most of its work is performed in the guest OS code, which in turn is modified to support the VMM and avoid unnecessary use of privileged instructions.

The paravirtualisation technique allows different OSs to be run on a single server, but requires them to be ported, that is, made aware that they are running under the hypervisor. Products such as UML and Xen use the paravirtualisation approach. Xen is the open-source virtualisation technology that Novell ships with its SUSE Linux distribution and that also appears in the latest Red Hat Fedora development release, Fedora Core 4.

Server Sales Reach Tipping Point

IDC predicts something of an exodus towards virtualised server configurations over the next few years. The market analyst reported recently that the number of servers containing a virtualisation component shipped in Western Europe rose 26.5% to 358,000 units throughout 2008. IDC said these servers made up 18.3% of the market compared to 14.6% in 2007.

For the first time, last year the number of purely physical machines sold was eclipsed by sales of virtual-capable machines, which topped 2 million. IDC predicts declining IT hardware spending will result in VM sales exceeding physical machines by around 10% at some time during the year, and that the ratio of the two could be 3:2 by 2013.

In line with this trend, logical machines, or those with physical and virtual components, will realise a 15.7% increase over the same period. IDC notes that this highlights the importance to organisations of deploying the right tools to manage expanding virtual environments, seeing as both virtual and physical servers have to be operated, monitored and patched.

The research company also advises organisations to ensure they have the right level of education if they are to properly exploit this new and potentially rewarding approach to corporate IT.

Monday, 20 July 2009

Thin Client Computing Meets Companies' Energy Reduction Requirements

Introduction: What is thin client computing?
Thin client computing is a technology whereby applications are deployed, managed, supported and executed on the server and not on the client; only the screen information is transmitted between the server and the client. This architecture solves many of the fundamental problems that occur when applications are executed on the client itself.

In server-based computing environments, hardware and software upgrades, application deployment, technical support, and data storage and backup are simplified because only the servers need to be managed. Data and applications reside on a few centrally managed servers rather than on hundreds or thousands of clients. PCs become terminals and can be replaced by simpler, less expensive and, more importantly, easier to manage devices called "thin clients".

A thin client mainly focuses on conveying input and output between the user and the remote server. A thin client does not have local storage and requires little processing resources. In contrast, a thick or fat client does as much processing as possible and passes only data for communications and storage to the server.

Meeting the emission commitments
Today, environmental awareness is not just a marketing tool; reducing emissions is a political issue. There is little agreement on how nations should actually go about achieving a lower-carbon environment, and conflicting debates over cap-and-trade emission schemes versus the introduction of a carbon tax on all users are held worldwide.

In fact, industries and governments are noticeably under political pressure to meet their emission commitments under the Kyoto Protocol. A company's emissions have to comply with its commitments, and if they do not, the company will be fined. A carbon tax set on the consumption of carbon in any form would likewise encourage industries to consume less in order to save costs. In either case, the result must be investment in technological innovations with which companies can reduce their greenhouse gas emissions.

According to Gartner, IT contributes two percent of global carbon dioxide emissions, and by 2010 environmental issues will be among the top five IT management concerns in North America, Europe and Australia. In the USA today, about 1% of national electricity consumption is caused by PCs, and in Germany IT generates 110,000 t of electronic waste per year.

Consequently, CIOs need to be aware of what contributes to the environmental impact of the whole organisation and to what extent IT can be a liability in this respect. This paper focuses on the client and points out some of the ways to reduce a company's environmental impact by moving to server-based computing (SBC). Environmental impact occurs both directly and indirectly during all phases of PC production and use. We do not focus in detail on production chains, in-house use or the recycling process of thin clients versus PCs; this is a summary that points out the advantages of SBC with regard to reducing emissions and looks at the direct and indirect impact of SBC in general.

View on impact:
First Degree Impact: The most direct form of impact, which Gartner classifies as 'First Degree Impact', is the impact of IT itself, including electronic waste and the consumption of energy in the data centre.
Second Degree Impact: Besides this, we have to consider the 'Second Degree Impact', which is the impact of IT on business operations and supply chains.
Third Degree Impact: Moreover, the 'Third Degree Impact', which describes the 'in use' phase of the enterprise's products or services, plays a relevant role and can contribute to reducing CO2 emissions.

Operating figures and key data:

A thin client consists of fewer electronic components and spare and wear parts than a PC, which reduces its:

  • weight: thin clients weigh about 30% as much as PCs.
  • volume: their volume is about 20% of a PC's.
  • electricity use: thin clients consume only about 30% of the electricity.

Evidence:
Moving IT to thin client technology has a direct first-degree impact of roughly 70% less energy consumption, together with a significant cutback in electronic waste and asset disposition. Moreover, the second- and third-degree impacts contribute meaningfully to the reduction of CO2 emissions.

Ten Arguments:

  • Thin client hardware consumes 20 W to 40 W, compared with 60 W to 110 W for an average PC in operation. Even allowing for the fact that a PC is not replaced by a thin client alone (one terminal server is needed for every 20 to 50 users, since executables are processed on the terminal server), electricity consumption is still roughly 70% lower; a rough worked example follows this list.
  • Fewer components mean less electronic waste, a direct impact for the organisation.
  • A thin client can be used longer and has a longer life cycle, since it contains fewer removable components and processing is executed on the server. A longer life cycle reduces electronic waste.
  • Fewer components reduce manufacturing complexity, and the supply chain in general is less complex.
  • Thin clients need less maintenance during operation because they contain fewer removable components, which again reduces the impact of the supply chain.
  • Thin clients occupy only about one fifth of the volume of PCs, so transportation and shipment consume less space and emissions are reduced as a second-degree impact. Both PCs and thin clients are produced in Asia, while the raw materials are shipped from Africa or South America.
  • A thin client emits less heat, since no hard disk is included. This plays a significant role for organisations in hot regions of the world and results in reduced cooling-system usage.
  • Converting an inventory of fat clients into thin clients extends the life cycle of the fat clients; as a result, the annual amount of electronic waste is reduced.
  • Publishing applications to home workspaces reduces the need for workforce mobility and so reduces the emissions caused by travel.
  • Centrally managing the shutdown of thin clients during off hours reduces electricity consumption and CO2 emissions; a machine left in sleep mode still consumes around 35 W.
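As a rough sanity check on the electricity figure in the first argument, the short calculation below combines the wattages listed above with an assumed terminal-server draw. The 300 W server figure and the 35-users-per-server midpoint are illustrative assumptions, not numbers from this paper.

    # Per-seat electricity, using the client wattages from the list above plus
    # an assumed 300 W terminal server shared by an assumed 35 users.
    pc_watts = (60 + 110) / 2            # average PC draw in operation
    tc_watts = (20 + 40) / 2             # average thin client draw
    server_watts = 300.0                 # assumed terminal server draw
    users_per_server = 35                # midpoint of the 20-50 range

    per_seat = tc_watts + server_watts / users_per_server
    saving = 1 - per_seat / pc_watts
    print("Per seat: %.1f W vs %.1f W PC, saving %.0f%%"
          % (per_seat, pc_watts, saving * 100))
    # Roughly 38.6 W versus 85 W, a saving of about 55% under these assumptions;
    # with a 20 W client and 50 users per server the saving approaches 70%.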

Conclusion
This paper has focused on the effects that thin client computing has on environmental affairs, and a number of direct and indirect effects of moving to server-based computing (SBC) have been discussed.

Every company experiences these potential impacts in different ways within its own organisation. The individual challenge is therefore to understand where the organisation offers the most opportunities to decrease CO2 emissions, and to understand the SBC products and where the most impact can be realised by implementing and using them.

A good start is to look at the relative weight of each department's contribution to the company's overall environmental impact; the situation is certainly different for the manufacturing industry than for the service sector or for governmental institutions. The next step is to find the right vendor who can provide the products and/or services to reduce pollution and energy consumption.

The environmental value of IT has become an important matter for running an organisation, and SBC can definitely contribute to improving a company's carbon footprint.


Friday, 17 July 2009

A Network Architecture for Business Value Acceleration By Cisco

Introduction
Nearly every enterprise today is affected by globalization, outsourcing, private equity competition, increased regulation, Web 2.0 or all of the above, placing increased demands on enterprise computing requirements. To survive and prosper, companies must reduce operating costs, increase automation and control, and prepare to scale the number of business relationships they can support.

The platform to facilitate this transformation is common across the enterprise – the network. The transport-centric vision of the network is now giving way to a converged vision in which business objectives and network architecture meet. But what does this really mean?

Agility and efficiency are no longer a matter of building solutions to support a specific business model. Rather, the ability to rapidly evolve to support innovation in business models must be part of the enterprise architecture strategy from the beginning of business process change. A service-oriented network architecture (SONA) can create a platform that enables change and accelerates business value. The SONA framework was developed by Cisco® and is being used successfully within its own IT organization and by many of its customers to align business goals with enterprise architecture.

The Transformation Process
Everything starts with enterprise architecture, the global plan for how all processes in a company will be implemented. But many enterprise architecture initiatives fail to engage the business. A successful transformation using a SONA framework addresses both processes and business goals:

  1. The business context is created, providing the foundational assumptions for the future-state architecture.
  2. Strategic requirements are analyzed, while articulating a set of architecture principles.
  3. Key business functions to fulfil the business strategy are evaluated.

As these requirements are articulated, enterprise architecture teams can identify the IT services that support the business functions and processes needed to achieve the business strategy.

As technology influences – including Web 2.0 and service oriented applications – gain momentum, companies realize that traditional definitions of enterprise architecture are too small to contain the scope of the solution. Many of the services that are crucial in these implementations find their natural home in the network, not in the application.

Supporting these new technologies requires that network architecture become part of the design process, not just an invisible transport layer. If business transformation is not supported by the right network design, the efforts will most likely not deliver on performance requirements.

The Network in a Service Oriented World
The architectural complexity of the information highway is changing dramatically – from a simple two-lane road connected by switches and routers to one with a much more complex structure, featuring a variety of special-purpose checkpoints along the way.

Checkpoints include well-established functions such as firewalls and encryption functionality like Secure Sockets Layer (SSL). But upon closer inspection, it becomes clear that many common services for security and identity management work identically in every application, making them perfect candidates to be provisioned in the network.

The core functions of many types of applications (governance, risk and compliance, or GRC, in particular) can be enhanced by adding checkpoints that look inside the packets flowing through the network and recognize important events, which are then sent to applications. Radio frequency identification (RFID) and other real-world awareness services that report on the location of people and things feed their information into the network, where it is consumed by the applications that need it. Virtualization allows one point on a network to imitate many different devices and services.

Yet, in the face of all of these demands and opportunities, the shape of the network architecture has changed very little. In spite of bigger pathways and more complicated topology, loading more packet volume and IP-based services on today’s network will eventually lead to a traffic jam and prevent enterprises from cost-effective transformations.

Which Types of Applications as Services?
Enterprise systems today can improve their performance using a feedback loop based on data collected or through other means such as location-based services. Services that capture events are particularly important and are ideally suited to move into the network. In an extended event-driven business network, a supply chain event indicating a primary material shortage might have tremendous downstream implications. But it is useless unless that event is communicated to all the networks and people that need to be aware of it.

The services used in these contexts must have the operational characteristics of production systems to succeed. Many of the most valuable services are extensions of core systems at the hub of the enterprise, such as enterprise resource planning (ERP) and customer relationship management (CRM).

As hub systems become available through services in ways that protect the transactional integrity of the data, the value of these systems extends to the edge of the enterprise where the information may be used in looser, more collaborative processes.

Which Services in the Network
What does it mean, exactly, for services to migrate to the network? Essentially, it means that code that was running in an application server now runs on routers, switches and other special-purpose devices used to run and manage the network.

This gives applications a simpler architecture and extended reach. Applications can offload functions that are better performed by network-based services, and gain enhanced functionality from network services that recognize important events and feed them to the applications. Applications remain the brain; the network becomes an extended nervous system.

The network is the natural platform for a certain class of generic services for unified communications, authentication, virtualization, mobility and voice. Because the network is the only ubiquitous component in the IT landscape, it is the natural home for the most generic services. Services likely to migrate into the network include backup, identity management, location-based services, caching and GRC-related events, which are all generic and operate in the same way regardless of the application context. Provisioning services that are used by every application in the same way from the network is less costly, faster and easier, and is the only way to help ensure consistency and compliance.

The network is also the natural platform for collaboration-related services that provide location awareness, instant messaging, telepresence and voice conferencing. For example, such services would allow a hospital to deliver a multi-gigabyte digital X-ray image to the reader closest to a doctor whose location is known to the network. The same intelligence could avoid delivering such a large file over a less optimal network path if the doctor were using a mobile device.

Architectural Implications
One major implication of providing these services through the network is a convergence of enterprise architectures and network architectures. Moving services into the network requires tight coordination of business planning, enterprise architecture and network architecture. Organizations must consider how to architect the network around the business strategy before moving to the provisioning phases.

Network topology will be influenced and network device capacities and capabilities must change as they are asked to do more. There must be an optimal number of points for information collection to recognize and capture application-oriented events and deliver other services. Each collection point must have appropriate access to traffic and processing capability.

To create such a platform means examining networks that were designed sometimes decades ago and then incrementally enhanced. A vision will be required, followed by a roadmap to achieve that vision. The two most likely first steps after establishing the vision involve network provisioning of event recognition for applications as described earlier, and security.

Traditional network security functions such as firewalls, SSL encryption and virtual private networks (VPNs), along with newer message-level and application-level security and the reduction of unwanted traffic, provide an example of how services can migrate to the network for added functionality and cost savings.

Security and Web 2.0
As companies pursue Web 2.0 business models and implement Web services-based application programming interfaces (APIs), fresh security challenges arise that require a more flexible, responsive architecture. Web services that enable e-commerce transactions or update supply chain information carry significant security risks. Misuse of these services can be incredibly damaging and the protection provided by the network and other security mechanisms must be an order of magnitude more robust than before.

Crafting a SONA
Applications can be made immensely more effective using Cisco’s SONA framework. When generic services are migrated to the network, along with specialized services for location or unified communications, the character of an IT infrastructure changes and becomes more flexible and supple.

As more and more services are added to the IP network, however, network architecture and capacity planning become more complex than just adding “more network.” For example, prioritization must be available for voice packets within the IP stream. And while security, identity management and similar services might have predictable growth curves, data centres supporting various virtualized services may face extremely irregular growth patterns.

Application-oriented networking and virtualization add their own requirements for topology. Melding the network with enterprise architecture makes getting to the right architecture for network-based services more difficult still. A successful approach to implementing a SONA framework is an incremental journey of several coordinated steps. Pursuing a business strategy without incorporating network-centric principles at the origination of the idea may cause business value to be lost amid the vast functional potential.

Value of Getting It Right from the Start
The primary benefit of identifying enterprise architecture strategies early in the IT planning process is the ability to create more business value to keep pace with the ever-changing global marketplace. It requires working closely with valued technology partners well versed in the implementation of service-oriented networks and applications, which will help accelerate business value creation by: increasing internal process flexibility; reducing costs through standardization; fostering innovation inside and outside a company; improving the value created by enterprise applications; and boosting adoption of Web 2.0-enabled business models.

Who Stands to Gain from SONA?
As with any significant technology shift, companies embrace new concepts and new ways of doing business at different rates. Forward-thinking organizations will recognize the benefits of constructing a SONA to harness the benefits of Web 2.0 and other emerging technologies, and will reap those benefits more quickly than their more cautious competitors. Companies that move quickly to prepare a scalable and robust infrastructure for service delivery inside and outside the firewall stand to increase business value and gain competitive advantage. The entire foundation of this new wave of business value is reliable, manageable and operationally robust dynamic services, which can only be delivered by a strategically architected network.

Putting Security in its Place

We have been doing security wrong for a number of years. This is a poorly kept secret, as everybody knows that technologies invented in the days of floppy disks are woefully inadequate for protecting today’s business. The industry pours huge amounts of resources into extending the life of schemes that try to identify attacks or deviations from corporate security policies in order to protect the business against service disruptions or loss of confidential data. The mistake is the misunderstanding that security itself is a business solution; security is a critical feature of a successful business solution. Today’s best-practice security approaches not only ineffectively secure the business, they impede new business initiatives. The answer to reducing runaway security investments lies in virtualization-based application delivery infrastructures that bypass traditional security problems and focus on delivering business services securely.

Defence-in-depth is a broadly accepted concept built on the premise that existing security technologies will fail to do the job. For example, an antivirus product in the network may catch 70 percent of known attacks, but that means it will still miss 30 percent. It is common for larger enterprises to have different vendors scanning e-mail at the network edge, on the e-mail servers and on user endpoints under the theory that the arithmetic will be on their side and one of these products will block an attack.
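The "arithmetic" referred to here can be made explicit. The short sketch below assumes each scanning layer independently misses 30 percent of attacks, which is exactly the idealisation that, as the next paragraph notes, practice rarely bears out.

    # Theoretical defence-in-depth arithmetic: if each of n independent layers
    # misses 30% of attacks, all n layers together miss only 0.3**n of them.
    miss_rate = 0.30
    for layers in (1, 2, 3):
        combined_miss = miss_rate ** layers
        print("%d layer(s): %4.1f%% of attacks slip through"
              % (layers, combined_miss * 100))
    # 1 layer: 30.0%, 2 layers: 9.0%, 3 layers: 2.7%, in theory only.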

However, practice shows that the effectiveness of defence-in-depth falls well short of theory, and operating duplicate products comes at a great cost to the business. IT can continue to layer on traditional technologies with consistently dismal results. What is needed is an approach that fundamentally changes the business operations to avoid many of the existing security traps, and positions IT to deliver the business to any user, anywhere, using any device.

The building blocks of virtualised application delivery make significant security contributions, without requiring the purchase and operation of additional security products. The new approaches are made possible by advances in data-centre virtualization, the availability of high-speed bandwidth and innovations in endpoint sophistication. It is now entirely possible to execute browsers in the secure data-centre for the end-user, remotely project displays and manage user interfaces, and have all of this done transparently to the end-user. The security characteristics of virtualised application delivery are worth noting:

  • Keep executables and data in a controlled data-centre environment. IT can better maintain compliant copies of applications and can better protect confidential data within the managed confines of a virtual data-centre. Most malicious attacks enter the enterprise through remote endpoints. Processing desktop applications in the data-centre reduces the exposure of business disruptions due to malicious code infections and data loss. IT operating costs are also reduced, as IT spends less time and resources maintaining employee endpoints with easy access to hosted applications.
  • Minimize the time window of vulnerability during which desktop applications and data can get into trouble. Virtualising desktop applications (either by hosting the application or desktop via remote display protocols in the data-centre, or by streaming application images from the IT-managed application delivery centre for local execution at the endpoint) reduces the amount of time an application is exposed to potential infections. Application delivery starts the end-user with a clean copy of the application, and that copy is erased when the end-user is done. Any infection that is picked up disappears, as the user launches a clean copy the next time the application is requested.
  • Remove the end-user from the security equation. Traditional approaches place too much of the security burden on the end-user, who is responsible for maintaining software, respecting confidential data and being knowledgeable of dangers lurking in the Internet. IT should be managing corporate security, and virtualised application delivery makes it much easier for the user to do the right thing.

IT is challenged with making it easier for the business to attract new customers while continuing to meet high security standards. The burden needs to be reduced for end-users, who at present are expected to install software agents, upgrade software regularly and take special action when informed of security events. On top of that, users are limited in their choice of endpoint devices, operating systems and connectivity. These disconnects between end-users and the organization, and between end-users and IT, inhibit productivity and business growth. Fortunately, IT is implementing new infrastructure models, from the data-centre to the endpoint, that more readily serve applications to users with intrinsic security at reduced operating costs.

IT is orchestrating the power of virtualised data-centres, high-speed bandwidth availability and high performance endpoints to offer end-users a true IT service with consistent secure access from anywhere at any time with any device. The ability to provide an integrated application delivery system, where applications are served on demand instead of deployed ahead of time, is the new model that has put security in its place. Application executables and sensitive data need not reside at the endpoint, where security becomes the responsibility of the end-user. A dynamic service approach enables IT to extend control of the technical infrastructure to the endpoint with resultant gains in security, application availability and cost reduction.

  • Virtualised data-centres that deliver cost savings in server utilization are also delivering cost savings in dynamic desktop and application provisioning. As users request applications, the IT service can transparently launch a virtual desktop in the data-centre or stream a copy of the executable from the application delivery centre for local execution. Authenticated end-users have easy, secure access to business applications.
  • The availability of high-speed bandwidth allows IT to effectively service end-users’ application requests over the Internet. Remote display protocols drive end-user interactions, allowing the application to execute in the safe confines of the data-centre with the look and feel of a locally executing application; application streaming protocols allow copies of executables to be efficiently downloaded and launched on demand for local execution when the network is unavailable. In both cases, IT ensures the user runs only the most recent compliant copy of the application. Security issues are significantly reduced simply by allowing the IT service to ensure that the user starts with clean copies of application images each and every time.
  • The enterprise needs to support a wide variety of endpoint devices to make it easy for new customers to access applications. Thus, IT is required to make legacy Windows applications available not only to desktops and laptops running various operating systems, but also intelligent handhelds such as phones and PDAs. The most expeditious way of providing this service is also the most secure – virtualise the application in the data-centre, giving the user a choice of browser, remote display or streamed application access. In each scenario, IT reduces security exposures through heightened application control while the end-users can more readily get their business done.

It is time to start meeting security requirements the right way – by fundamentally changing the way applications are provided to end-users. The traditional model of executing applications by installing software directly on isolated PCs is well over 30 years old – well before the Internet connected users. It is not a surprise that this approach fails dramatically to meet today’s security requirements. An integrated approach that takes advantage of virtualization, Web based connectivity and power of endpoints to minimize security risks is essential. The direction of an integrated application delivery service enables IT to use ubiquitous Web-based technology to support new users and drive the costs out of supporting existing users. The business benefits of increased availability combined with the security benefits of greater IT control make the evolution to a cloud-based application delivery service inevitable.

A simple example shows the power of a virtualised application delivery system. A merger to create a stronger international presence for the enterprise creates a need to quickly grant access to corporate applications for the new employees. IT provides a securely configured browser that executes virtually in the data-centre. The new business offices easily transition to corporate applications without having to reconfigure internal systems such as firewalls or endpoints that may not be compliant with corporate security policies. The new offices are more quickly indoctrinated into the new organization, and the security risks of non-compliant configurations are simply bypassed. The virtual application delivery capability has put security in its place, removing additional costs and showing the agility to streamline IT alignment with business needs.

In an ideal world, security just wouldn't matter. Organizations would go about the business of satisfying customers without concern for malicious attacks or painful losses of confidential data. Unfortunately, we're not there yet. However, by implementing virtualised application delivery approaches, IT can simply avoid many insecure situations while gaining the desired agility to keep IT services aligned with the business. This is putting security in its place: a feature to enable the success of business operations.

  • Extend the virtualised data-centre to accommodate end-user desktops and applications. Application delivery using remote display technologies is a good way to deliver business value to remote offices where IT does not have to deploy applications on local endpoints and confidential data remains controlled in the data-centre. Put metrics in place to measure the IT time savings of only applying patches and software upgrades to applications in the data-centre.
  • Test out the user experience of streamed applications. For example, employees working from home can improve security by executing a fresh copy of a browser or e-mail client from the corporate application delivery centre. Similarly, employees can work on an airplane totally disconnected from the network with applications that have been streamed to their laptop for the business trip. Let application delivery transparently stream compliant images from the data-centre to the desktop, and reduce the risk of malicious code lingering on corporate endpoints. Check out user satisfaction with performance while knowing that each user session begins with the most secure application that IT can deliver, and IT can deliver applications at the speed of business.
  • Have your IT architects report back on delivering compliant end-user desktops and applications as an IT service. Once IT is comfortable in the cost savings and increased control of end-user environments in the data-centre, the next logical step is to enhance the application delivery service so IT can have the same procedures for both local and remote users. Look at additional cost savings by consolidating network security into the data-centre and achieve greater scale with network traffic accelerators.

The way to run a more secure business is to run a more secure application environment, where IT effectively controls executables and virtualization shrinks the vulnerability of desktop applications. IT managers should question why we keep putting applications in harm’s way on end-user desktops. Start moving towards virtualised application delivery – you will gain flexibility in running your business, you will gain tighter control and security of critical applications and confidential data, and you will lose a big expense bill from administering obsolete security technologies.

Virtualization is Changing the way IT Delivers Applications

Virtualization has rapidly become the hottest technology in IT, driven largely by trends such as server consolidation, green computing and the desire to cut desktop costs and manage IT complexity. While these issues are important, the rise of virtualization as a mainstream technology is having a far more profound impact on IT beyond just saving a few dollars in the data centre. The benefits and impact of virtualization on the business will be directly correlated to the strength of an organization’s application delivery infrastructure. Application delivery is the key to unlocking the power of virtualization, and organizations that embrace virtualization wrapped around application delivery will thrive and prosper, while those that do not will flounder. As virtualization takes centre stage, shifting roles in IT will require a new breed of professionals with broader skill sets to bridge IT silos and optimize business processes around the delivery of applications.

Going Mainstream
We are moving into a new era in which virtualization will permeate every aspect of computing. Every processor, server, application and desktop will have virtualization capabilities built into its core. This will give IT a far more flexible infrastructure in which the components of computing become dynamic building blocks that can be connected and reassembled on the fly in response to changing business needs. In fact, three years from now we will no longer be talking about virtualization as the next frontier in enterprise technology; it will simply be assumed. For example, today we normally assume that our friends, family and neighbours have high-speed Internet access at home. This was not the case a few years ago, when many were using sluggish dial-up lines to access the Internet or had no access at all. High-speed Internet is now mainstream, and so it will be for virtualization. Virtualization will be expected; it will be a given within the enterprise. As this occurs, the conversation within IT circles will shift from the question of how to virtualise everything to the question of what business problems can be solved now that everything is virtualised.

Virtualization and Application Delivery
The most profound impact of virtualization will be in the way organizations deliver applications and desktops to end users. In many ways, applications represent the closest intersection between IT and the business. Your organization's business is increasingly represented by the quality of its user-facing applications. Whether it is a large ERP solution, custom web applications, e-mail, e-commerce, client-server applications or SOA, your success in IT today depends on ensuring that these applications meet business goals. Unfortunately, trends such as mobility, globalization, off-shoring and e-commerce are moving users further away from headquarters, while issues like data centre consolidation, security and regulatory compliance are making applications less accessible to users.

These opposing forces are pushing the topic of application delivery into the limelight. It is forcing IT executives to consider how their infrastructures get mission-critical, data centre-based applications out to users to lower costs, reduce risk and improve IT agility. Virtualization is now the key to application delivery. Today’s leading companies are employing virtualization technology to connect users and applications to propel their businesses forward.

Virtualization in the Enterprise
The seeds of virtualization were first planted over a decade ago, as enterprises began applying mainframe virtualization techniques to deliver Windows applications more efficiently with products such as Citrix® Presentation Server™. These solutions enabled IT to consolidate corporate applications and data centrally, while allowing users the freedom to operate from any location and on any network or device, where only screen displays, keyboard entry and mouse movement traversed the network. Today, products like Citrix® XenApp™ (the successor to Presentation Server) allow companies to create single master stores of all Windows application clients in the data centre and virtualise them either on the server or at the point of the end user. Application streaming technology within Citrix XenApp allows Windows-based applications to be cached locally in an isolation environment, rather than to be installed on the device. This approach improves security and saves companies millions of dollars when compared to traditional application installation and management methods.

Virtualization is also impacting the back-end data and logic tier of applications, with data centre products such as Citrix® XenServer™ and VMware ESX that virtualise application workloads on data centre servers. While these products are largely being deployed to reduce the number of physical servers in the data centre, the more strategic impact will be found in their ability to dynamically provision and shift application workloads on the fly to meet end-user requirements. The third major area of impact will be the corporate desktop, enabled by products such as Citrix® XenDesktop™. The benefits of such solutions include cost savings, but they also enable organizations to simplify how desktops are delivered to end users in a way that dramatically improves security and the end-user experience compared with traditional PC desktops. From virtualized servers in the data centre to virtualized end-user desktops, the biggest impact of virtualization in the enterprise will be found within an organization's application delivery infrastructure.

Seeing the Big Picture
The mass adoption of virtualization technology will certainly require new skills, roles and areas of expertise within organizations and IT departments. Yet the real impact of virtualization will not hinge on the proper acquisition of new technical skills. Rather, to make the most of the virtualization opportunity, organizations will have to focus on breaking down traditional IT silos and adopting end-to-end virtualization strategies. Most IT departments today are organized primarily around technology silos. In many organizations, we find highly technical employees who operate on separate IT “islands,” such as servers, networks, security and desktops. Each group focuses on the health and well-being of its island, making sure that it runs with efficiency and precision. Unfortunately, this stand-alone approach is crippling IT responsiveness, causing pundits like bestselling author Nicholas Carr to ask whether IT even matters to business anymore. To break this destructive cycle, IT employees must take responsibility for understanding and owning business processes that run horizontally (from the point of origin in the data centre all the way to the end users they are serving), building bridges from island to island. IT roles will increasingly require a wider, more comprehensive portfolio of expertise around servers, networking, security and systems management. IT personnel will need a broad understanding of all these technologies and how they work together as the focus on IT specialization gives way to a more holistic IT mindset.

Seeking Experts in Delivery
The new IT roles will require expertise in delivery. IT will need to know how to use a company’s delivery infrastructure to quickly respond to new requirements coming from business owners and end users alike. IT specialization will not completely disappear, but it will not look anything like the silo entrenchment and technical specialization we see today. From this point forward, IT professionals will increasingly be organized around business process optimization to serve end users and line-of-business owners, rather than around independent technologies sitting in relative isolation. Across the board, the primary organizing principle in IT will shift from grouping people around technology silos to organizing them around common delivery processes. The companies that make this transition successfully will thrive, while those that do not will struggle to compete in an increasingly demanding and dynamic business world. IT organizations of the future will need to develop professionals who can see the parts as a whole and continually assess the overall health of the delivery system, responding quickly to changing business requirements. Employee work groups will continue to form around common processes, but the focus will be less about highly specialized knowledge and more about the efficiency of frequently repeated processes. IT professionals who understand the deep technical intricacies of IP network design, for example, will be in less demand than those who understand best practices in application delivery.

Guidelines for Staying In and Ahead of the Game
If you are not testing the waters of virtualization, you may already be behind. Experiment with virtualization now. Acquire applications and consider how to deliver them as part of your IT strategy. Three key recommendations follow:

  • Change the mindset of your IT organization to focus on delivering applications rather than installing or deploying them. Think about “delivery centres” rather than data centres. Most IT organizations today continue to deploy and install applications, although industry analysts advise that traditional application deployment is too complex, too static and costs too much to maintain, let alone to keep up with changes in the business. Delivering on the vision of an IT organization that is aligned with business goals requires an end-to-end strategy for efficiently delivering business applications to users.
  • Place a premium on knowledge of applications and business processes when hiring and training IT employees. IT will always be about technology, but do not perpetuate today’s “island” problem by continuing to hire and train around deep technical expertise in a given silo. If that happens, IT will continue to foster mindsets that perceive the world through a narrow technology-silo lens, the opposite of what is needed today. IT leaders will increasingly need to be people who understand business processes. Like today’s automotive technicians, they will have to be able to view and optimize the overall health of the system, not just the underlying gears and valves – or bits and bytes.
  • Select strategic infrastructure vendors who specialize in application delivery. Industry experts agree that the time is right to make the move from static application deployment to dynamic application delivery. IT will continue to use vendors that specialize in technical solutions for various areas, such as networking, security, management and even virtualization. What is important, however, is forming a strategic relationship with a vendor that focuses not on technology silos, but on application delivery solutions. The vendor should be able to supply integrated solutions that incorporate virtualization, optimization and delivery systems which inherently work with one another, as well as with the rest of your IT environment.

Thursday, 16 July 2009

Introduction to Server Virtualization

What is virtualization and why use it
Virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of getting more out of physical resources and maximizing the investment in hardware. Since Moore's law has accurately predicted the exponential growth of computing power, and hardware requirements for the most part have not changed to accomplish the same computing tasks, it is now feasible to turn a very inexpensive 1U dual-socket dual-core commodity server into eight or even 16 virtual servers, each running its own operating system. Virtualization technology is a way of achieving higher server density. However, it does not actually increase total computing power; it decreases it slightly because of overhead. But since a modern $3,000 2-socket 4-core server is more powerful than a $30,000 8-socket 8-core server was four years ago, we can exploit this newfound hardware power by increasing the number of logical operating systems it hosts. This slashes hardware acquisition and maintenance costs, which can result in significant savings for any company or organization.

When to use virtualization
Virtualization is the perfect solution for applications that are meant for small- to medium-scale usage. Virtualization should not be used for high-performance applications where one or more servers need to be clustered together to meet the performance requirements of a single application, because the added overhead and complexity would only reduce performance. We're essentially taking a 12 GHz server (four cores times three GHz) and chopping it up into sixteen 750 MHz servers. But if eight of those servers are in off-peak or idle mode, the remaining eight servers will have nearly 1.5 GHz available to them.
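To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the core count, clock speed and guest counts are the hypothetical figures from the paragraph above, not measurements from a real deployment:

    # Back-of-the-envelope CPU-share estimate for carving one physical host
    # into equal virtual servers. Figures mirror the hypothetical example above.

    cores = 4                  # physical cores
    clock_ghz = 3.0            # clock speed per core
    total_ghz = cores * clock_ghz            # ~12 GHz of aggregate capacity

    vm_count = 16
    fair_share_ghz = total_ghz / vm_count    # 0.75 GHz if every guest is busy

    idle_vms = 8                             # guests idle during off-peak hours
    active_share_ghz = total_ghz / (vm_count - idle_vms)   # ~1.5 GHz per busy guest

    print(f"Aggregate capacity  : {total_ghz:.1f} GHz")
    print(f"Share with all busy : {fair_share_ghz:.2f} GHz per guest")
    print(f"Share with 8 idle   : {active_share_ghz:.2f} GHz per active guest")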

While some in the virtualization industry like to tout high CPU utilization numbers as an indication of optimum hardware usage, this advice should not be taken to the extreme where application response times become excessive. A simple rule of thumb is to never let a server exceed 50% CPU utilization during peak loads and, more importantly, never let application response times exceed a reasonable SLA (Service Level Agreement). Most modern servers used for in-house duties run at 1 to 5% CPU utilization. Running eight operating systems on a single physical server would elevate peak CPU utilization to around 50%, but the average would be much lower, since the peaks and valleys of the virtual operating systems tend to cancel each other out.
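A rough consolidation check against that 50% rule of thumb might look like the sketch below; the per-server peak utilization figures are illustrative assumptions, not benchmark results:

    # Conservative consolidation check against the 50%-at-peak rule of thumb.
    # Assumes the worst case, where every candidate server peaks at the same time.

    candidate_peaks = [0.05, 0.04, 0.06, 0.05, 0.07, 0.05, 0.08, 0.10]  # peak CPU fraction per server

    combined_peak = sum(candidate_peaks)      # worst-case combined peak on one host
    headroom_ok = combined_peak <= 0.50

    print(f"Worst-case combined peak: {combined_peak:.0%}")
    print("OK to consolidate onto one host" if headroom_ok
          else "Too hot - spread these workloads across more hosts")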

While CPU overhead in most of the virtualization solutions available today is minimal, I/O (Input/Output) overhead for storage and networking throughput is another story. For servers with extremely high storage or network I/O requirements, it would be wise to run them on bare metal even if their CPU requirements can be met inside a virtual environment. Both XenSource and Virtual Iron (which will soon be Xen hypervisor based) promise to minimize I/O overhead, but both are in beta at this point, so there haven't been any major independent benchmarks to verify this.

How to avoid the "all your eggs in one basket" syndrome
One of the big concerns with virtualization is the "all your eggs in one basket" syndrome. Is it really wise to put all of your critical servers into a single physical server? The answer is absolutely not! The easiest way to avoid this liability is to make sure that no critical service resides on only a single physical server. Let's take for example the following server types:

  • HTTP
  • FTP
  • DNS
  • DHCP
  • RADIUS
  • LDAP
  • File Services using Fibre Channel or iSCSI storage
  • Active Directory services

We can put each of these types of servers on at least two physical servers and gain complete redundancy. These services are relatively easy to cluster because they are easy to switch over when a single server fails. When one physical server fails or needs servicing, the virtual server on the other physical server automatically picks up the slack. Because these critical services straddle multiple physical servers, they never need to go down because of a single hardware failure.
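A trivial way to sanity-check such a layout is an anti-affinity rule: no service may have all of its instances on one physical host. The sketch below is purely illustrative; the host and guest names are made up:

    # Anti-affinity check: warn about any service whose redundant instances
    # all live on the same physical host. Host and guest names are hypothetical.
    from collections import defaultdict

    placement = {
        "host-a": ["dns-1", "dhcp-1", "ldap-1", "http-1"],
        "host-b": ["dns-2", "dhcp-2", "ldap-2", "http-2", "radius-1"],
    }

    hosts_by_service = defaultdict(set)
    for host, guests in placement.items():
        for guest in guests:
            service = guest.rsplit("-", 1)[0]      # "dns-1" -> "dns"
            hosts_by_service[service].add(host)

    for service, hosts in sorted(hosts_by_service.items()):
        if len(hosts) < 2:
            print(f"WARNING: {service} runs only on {', '.join(sorted(hosts))}")
        else:
            print(f"OK: {service} is spread across {len(hosts)} hosts")

Running this flags the lone RADIUS instance on host-b, which is exactly the single point of failure the paragraph above warns against.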

For more complex services such as an Exchange Server, Microsoft SQL Server, MySQL, or Oracle, clustering technologies can be used to synchronize two logical servers hosted across two physical servers; this method generally causes some downtime during the transition, which can take up to five minutes. This isn't due to virtualization but rather to the complexity of clustering, which tends to require time for the transition. An alternative method for handling these complex services is to migrate the virtual server from the primary physical server to the secondary physical server. For this to work, something has to continuously synchronize memory from one physical server to the other so that a failover can be completed in milliseconds while all services remain functional.
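As one purely illustrative example of moving a guest between hosts, hypervisors managed through libvirt expose a live-migration call. The sketch below assumes two KVM or Xen hosts with shared storage and working SSH access; the URIs and the guest name are hypothetical, and note that planned live migration is not the same thing as the continuous memory mirroring needed for millisecond unplanned failover:

    # Illustrative planned live migration using the libvirt Python bindings.
    # Assumes shared storage between the two hosts; all names are hypothetical.
    import libvirt

    src = libvirt.open("qemu+ssh://host-a/system")   # source hypervisor
    dst = libvirt.open("qemu+ssh://host-b/system")   # destination hypervisor

    dom = src.lookupByName("exchange-01")            # guest to move

    # VIR_MIGRATE_LIVE copies memory pages while the guest keeps running;
    # only a brief pause occurs at the final switchover.
    new_dom = dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    print(new_dom.name(), "is now running on host-b")

    src.close()
    dst.close()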

Physical to virtual server migration
Any respectable virtualization solution will offer some kind of P2V (Physical to Virtual) migration tool. The P2V tool will take an existing physical server and make a virtual hard drive image of that server with the necessary modifications to the driver stack so that the server will boot up and run as a virtual server. The benefit of this is that you don't need to rebuild your servers and manually reconfigure them as a virtual server—you simply suck them in with the entire server configuration intact!
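As a rough illustration of the disk-capture half of that process (commercial P2V tools do much more, in particular fixing up the driver stack automatically), the outline below images a physical disk and converts it into a virtual disk format; the paths and disk names are hypothetical:

    # Illustrative outline of the disk-imaging half of a P2V migration:
    # copy the physical disk block-for-block, then convert it to a virtual
    # disk format. Driver-stack fixes, which real P2V tools handle, are omitted.
    import subprocess

    SOURCE_DISK = "/dev/sda"                 # physical disk to capture (hypothetical)
    RAW_IMAGE = "/images/legacy01.raw"       # intermediate raw copy
    VM_IMAGE = "/images/legacy01.vmdk"       # disk image the hypervisor will boot

    # 1. Capture the disk block-for-block (run from a rescue or live environment).
    subprocess.run(["dd", f"if={SOURCE_DISK}", f"of={RAW_IMAGE}", "bs=4M"], check=True)

    # 2. Convert the raw copy into a virtual disk format (VMDK in this sketch).
    subprocess.run(["qemu-img", "convert", "-O", "vmdk", RAW_IMAGE, VM_IMAGE], check=True)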

So if you have a data center full of aging sub-GHz servers, these are the perfect candidates for P2V migration. You don't even need to worry about license acquisition costs because the licenses are already paid for. You could literally take a room with 128 sub-GHz legacy servers and consolidate them onto eight 1U dual-socket quad-core servers with dual Gigabit Ethernet and two independent iSCSI storage arrays, all connected via a Gigabit Ethernet switch. The annual hardware maintenance costs on the old server hardware alone would be enough to pay for all of the new hardware! Just imagine how clean your server room would look after such a migration. It would all fit inside one rack and give you lots of room to grow.

As an added bonus of virtualization, you get a disaster recovery plan because the virtualized images can be used to instantly recover all your servers. Ask yourself what would happen now if your legacy server died. Do you even remember how to rebuild and reconfigure all of your servers from scratch? (I'm guessing you're cringing right about now.) With virtualization, you can recover that Active Directory and Exchange Server in less than an hour by rebuilding the virtual server from the P2V image.

Patch management for virtualized servers
Patch management of virtualized servers isn't all that different from that of regular servers, because each virtual operating system has its own independent virtual hard drive. You still need a patch management system that patches all of your servers, though there may be interesting developments in the future where you can patch multiple operating systems at the same time if they share common operating system or application binaries. Ideally, you would be able to assign a patch level to an individual server or a group of similar servers. For now, you will need to patch virtual operating systems as you would any other system, but the virtualization sector will likely bring innovations that aren't possible with physical servers.

Licensing and support considerations
A big concern with virtualization is software licensing. The last thing anyone wants to do is pay for 16 copies of a license for 16 virtual sessions running on a single computer. Software licensing costs often dwarf hardware costs, so it would be foolish to run a $20,000 software license on a shared piece of hardware. In this situation, it's best to run that license on the fastest physical server possible, without any virtualization layer adding overhead.

For something like Windows Server 2003 Standard Edition, you would need to pay for each virtual session running on a physical box. The exception to this rule is if you have the Enterprise Edition of Windows Server 2003, which allows you to run four virtual copies of Windows Server 2003 on a single machine with only one license. This Microsoft licensing policy applies to any type of virtualization technology that is hosting the Windows Server 2003 guest operating systems.
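A quick way to compare the two licensing models is to count how many licenses a given guest density requires; the sketch below uses the four-guests-per-Enterprise-license rule described above, while the dollar figures are placeholders rather than quoted prices:

    # Compare Standard vs. Enterprise licensing for a given number of
    # Windows Server 2003 guests on one physical host. Prices are placeholders.
    import math

    guests = 16

    STD_PRICE = 1_000            # hypothetical cost per Standard Edition license
    ENT_PRICE = 4_000            # hypothetical cost per Enterprise Edition license
    GUESTS_PER_ENT_LICENSE = 4   # Enterprise covers four virtual instances

    std_licenses = guests                                  # one license per guest
    ent_licenses = math.ceil(guests / GUESTS_PER_ENT_LICENSE)

    print(f"Standard  : {std_licenses} licenses -> ${std_licenses * STD_PRICE:,}")
    print(f"Enterprise: {ent_licenses} licenses -> ${ent_licenses * ENT_PRICE:,}")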

If you're running open source software, you don't have to worry about licensing costs because the software itself is free—what you do need to be concerned about is the support contracts. If you're considering virtualizing open source operating systems or open source software, make sure you calculate the support costs. If the support costs are substantial for each virtual instance of the software you're going to run, it's best to squeeze the most out of those costs by putting the software on its own dedicated server. It's important to remember that hardware costs are often dwarfed by software licensing and/or support costs. The trick is to find the right ratio of hardware to licensing/support costs. When calculating hardware costs, be sure to include hardware maintenance, power usage, cooling, and rack space.

There are also licensing and support considerations for the virtualization technology itself. The good news is that all the major virtualization players have some kind of free solution to get you started. Even a year ago, when VMware was pretty much the only player in town, free virtualization was not an option; there are now free solutions from VMware, Microsoft, XenSource, and Virtual Iron. In the next virtualization article, we'll go more in-depth on the various virtualization players.

The pros and cons of server virtualization

ISPs use server virtualization to share one physical server among multiple customers in a way that gives each customer the illusion of having its own dedicated server. Typically, an ISP will use server virtualization for IIS (Internet Information Server) and/or Microsoft Exchange Server. I've also seen administrators use server virtualization on a file and print server, but this isn’t nearly as common. Server virtualization on an IIS server allows that server to host multiple Web sites, while employing it on an Exchange Server allows the server to manage e-mail for several companies. Let's look at the advantages and disadvantages of a virtualized ISP environment.

The money issue
Without a doubt, the greatest advantage of server virtualization is cost. For example, suppose that an ISP purchased a high-end server for $30,000. In addition, it needs an operating system for the server. A copy of Windows Server 2003 Enterprise Edition goes for about $8,000. Add in other components and the ISP could easily drop over $40,000 on a single server. Can you imagine if the server could only host a single Web site? The cost to the subscriber would be astronomical. On top of having to recoup a $40,000-plus investment in hardware, the ISP must also pay for bandwidth, salaries, building rental, and other business expenses before it can start turning a profit.

Although there are large companies such as Microsoft and Amazon that require multiple, dedicated Web servers, most of the time Web sites are small enough that quite a few sites can be hosted on a single server. This allows the ISP’s clients to share the hosting expense, driving down the price considerably.
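The economics above boil down to simple amortization; the subscriber counts in the sketch below are hypothetical, and the $40,000 figure is the rough total from the example:

    # Amortize a rough $40,000 server investment across hosted sites.
    SERVER_INVESTMENT = 40_000          # hardware plus OS, from the example above

    for subscribers in (1, 10, 100, 500):
        per_site = SERVER_INVESTMENT / subscribers
        print(f"{subscribers:>4} hosted sites -> ${per_site:,.2f} of hardware cost each")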

Developing
Server virtualization is also great for development environments. An example is my own personal network. I own three Web sites, and have done every bit of the coding for these sites myself. To assist in the development process, I'm using a virtualized IIS Server.

My development server is a single computer running Windows 2000 Advanced Server and IIS. The server has been assigned seven IP addresses, each corresponding to one of seven sites. The first three sites are the production versions of my Web sites. Although I don’t actually host the sites from this server, I like to maintain a known good copy of each of my sites locally. The next three sites on the server are also copies of my three Web sites, but these are used for development. Every time I make a change to one of my sites, I make the change in this location. This allows me to test my changes without tampering with a production version of the site. The last site the server hosts is a new Web site I'm working on that won’t go into production until the end of the year.

Problems with server virtualization
You must also watch out for two main pitfalls in server virtualization: scalability (along with availability) and security.

Scalability
Often, the terms scalability and availability are intertwined when people talk about networking, and both are relevant to server virtualization. Availability becomes an issue because if the virtualized server goes offline, every site that the server is hosting fails with it. Most ISPs use a cluster or some other failover technique to prevent such outages.

Scalability is trickier. As I said, server virtualization provides a way for several small companies to share the costs associated with Web hosting. The problem is that while a company may start out small, it could grow quite large. A large company can easily dominate a virtualized server and begin robbing resources from the other sites.

For example, I own an e-commerce site that sells software. When I launched the site, it received very little traffic and wasn’t consuming much disk space. But now the site is getting thousands of visitors every day. On average, a couple of hundred people a day are downloading trial software, and the smallest download on the site is 15 MB. If 200 people download a 15-MB file, that's almost 3 GB of transfers every day.

Additionally, the site is designed so that when someone purchases software, the site creates a directory with a unique name and places the software into that directory. The idea is that the users can’t use the download location to figure out the path for downloading software that they haven't paid for. These temporary directories are stored for seven days.
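A minimal version of that scheme might look like the sketch below: create a directory with an unguessable name for each purchase, and sweep away directories older than seven days from a scheduled job. The paths and layout are hypothetical, not the site's actual design:

    # Sketch of per-purchase download directories with a 7-day expiry.
    # Paths and layout are hypothetical.
    import os
    import shutil
    import time
    import uuid

    DOWNLOAD_ROOT = "/var/www/downloads"      # hypothetical web-served path
    EXPIRY_SECONDS = 7 * 24 * 60 * 60         # seven days

    def create_purchase_dir(software_file: str) -> str:
        """Copy the purchased file into a freshly created, unguessable directory."""
        path = os.path.join(DOWNLOAD_ROOT, uuid.uuid4().hex)
        os.makedirs(path)
        shutil.copy(software_file, path)
        return path                            # hand this location to the customer

    def purge_expired() -> None:
        """Remove download directories older than seven days (run from a scheduled job)."""
        cutoff = time.time() - EXPIRY_SECONDS
        for name in os.listdir(DOWNLOAD_ROOT):
            path = os.path.join(DOWNLOAD_ROOT, name)
            if os.path.isdir(path) and os.path.getmtime(path) < cutoff:
                shutil.rmtree(path)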

The problem is that the more software I sell, the more temporary directories are created. Each of these directories contains anywhere from 15 MB to a couple of GB of data. I actually received a phone call from my ISP recently because I was consuming too much disk space and bandwidth. The ISP was using server virtualization and I was taking resources from other customers.

Obviously, Windows does provide mechanisms that you can use to minimize the effect of excessive use. For example, you could place disk quotas on each site, and you could use QoS to limit bandwidth consumption. However, these are issues that you need to consider before implementing your server, not after it begins to run low on resources.

Security
The virtualization process is designed to keep virtualized resources separate. I've seen a couple of cases, though, in which a virtualized server was accidentally visible to someone who wasn’t supposed to be able to see it. The unauthorized access problem happened a few months ago to one of my Web sites. My ISP uses the directory structure \CUSTOMERS\customer name\ to store each individual Web site. When you're in the Customers directory, you're supposed to see only Web sites that you own. However, one Sunday morning I was about to update one of my Web sites and I was able to see someone else’s site. Apparently, a permission entry had been set incorrectly. I made a quick phone call to my ISP and the permission was changed before any security breaches occurred.

Be careful with bleed over
Finally, bleed over is another issue to watch out for when subscribing to a virtualized server. Bleed over occurs when the contents of one virtual server affect other virtual servers. One of my Web sites has a chat room where I occasionally host live discussions with people in the IT industry. During the middle of a recent live chat, everyone involved in the chat received a pop-up window saying that the total bandwidth allocation had been exceeded. Everyone was booted out of the chat.

Needless to say, this was very embarrassing. I called my ISP and asked why this happened when I'd never experienced chat problems in the past. As it turns out, my ISP was not limiting bandwidth consumption. Instead, another site hosted on the same server had implemented a shareware bandwidth limitation program. Unfortunately, this utility limited bandwidth for the server as a whole, not just for the intended site. The ISP removed this component and the server returned to normal behaviour.