Wednesday, 23 September 2009

Firewall, Why?


Firewalls are usually seen as a requirement if you are going to attach your network to other networks, especially the Internet. Unfortunately, some network administrators and managers do not understand the strengths a firewall can offer, resulting in poor product choice, deployment, configuration and management. Like any security technology, firewalls are only effective if the implementation is done properly and there is proper maintenance and response to security events.

Additionally, with the proper deployment of firewalls, other security strategies are often much easier to integrate, such as VPNs and intrusion detection systems (IDS). So what makes firewalls good, and what can you do to ensure they are used properly?

Perimeter Defence

One of firewalls' weaknesses is also one of their strengths. Firewalls are typically deployed as a perimeter defence, usually intersecting network links that connect your network to others. If the firewall is properly deployed on all paths into your network, you can control what enters and leaves your network.

Of course, as with any form of perimeter defence, if an attack is launched from inside, firewalls are not too effective. However, this deployment on your network perimeter allows you to prevent certain kinds of data from entering your network, such as scans and probes, or even malicious attacks against services you run.

Conversely, it allows you to restrict outbound information. It would be nearly impossible to configure every workstation to disallow IRC, but blocking ports 6667-7000 (the most common IRC ports) is relatively easy on your perimeter firewalls.
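To make this concrete, here is a minimal, illustrative sketch in Python of the kind of egress rule described above: block outbound TCP traffic to ports 6667-7000 at the perimeter and allow everything else. The rule format and field names are invented for the example and do not correspond to any particular firewall product.

```python
# Minimal sketch of the egress rule described above: block outbound TCP
# connections to the common IRC port range at the perimeter. The rule
# format and field names here are illustrative, not tied to any product.

from dataclasses import dataclass

@dataclass
class Packet:
    protocol: str      # "tcp" or "udp"
    dst_port: int
    direction: str     # "inbound" or "outbound"

# Ordered rule list: first match wins, default is to allow.
RULES = [
    {"action": "deny", "protocol": "tcp", "direction": "outbound",
     "ports": range(6667, 7001)},   # common IRC ports 6667-7000
]

def filter_packet(pkt: Packet) -> str:
    """Return the action ("allow"/"deny") the perimeter firewall would take."""
    for rule in RULES:
        if (pkt.protocol == rule["protocol"]
                and pkt.direction == rule["direction"]
                and pkt.dst_port in rule["ports"]):
            return rule["action"]
    return "allow"

if __name__ == "__main__":
    print(filter_packet(Packet("tcp", 6667, "outbound")))  # deny
    print(filter_packet(Packet("tcp", 443, "outbound")))   # allow
```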

While you can employ access control lists on servers internally, this still allows attackers to scan them and possibly talk to the server's network stack directly, making a number of attacks possible. This perimeter also allows you to deploy intrusion detection systems much more easily, since "chokepoints" will have already been created, and you can monitor all data entering or leaving.

VPN deployment also becomes easy. Instead of loading up VPN software on every desktop that might need it, you can simply employ VPN servers at those network access points, either as separate servers or directly on your firewall, which is becoming increasingly popular.

Concentrated Security

Controlling one, or even multiple firewalls is a much easier job than maintaining access control lists on numerous separate internal servers that are probably not all running the same operating system or services. With firewalls you can simply block all inbound mail access except for the official mail server. If someone forgets to disable email server software on a newly installed server, you do not need to worry about an external attacker connecting to it and exploiting any flaws.

Most modern firewall products are administered from a central console. You get an overall view of your network and can block or allow services as needed very quickly and efficiently.

With VPN-capable firewalls you can easily specify that access to certain networks must be done via encrypted tunnels, or otherwise blocked. With VPN software on each client, you would have more to worry about in terms of misconfiguration or user interference, which can result in sensitive data being accidentally sent out unencrypted. If your firewall is set up to block all but a few specific outbound services, then no matter what a user does, even bringing in their own laptop, they will probably not be able to access the blocked services. Enforcing this on each client machine instead, without firewalls, is nearly impossible.

Enforcement of Security Policies

You may have a set of corporate guidelines for network usage that include such items as:
  1. Chat clients such as IRC, AIM, and Yahoo IM are strictly forbidden, as they can transfer files.
  2. Accessing external mail servers is forbidden (antivirus policy); only use the internal server to send or receive.
  3. Network games, such as Doom or Quake, are forbidden except between 6 p.m. and 8 a.m. on weekdays, and at weekends, for members of management.
  4. Websites such as playboy.com are forbidden for legal reasons.
Enforcing the first policy without a firewall would be difficult. In theory, if you managed to secure every single desktop machine and prevent users from installing software, it could be done. You would then also need to prevent people from attaching "rogue" laptops and other machines with the software preinstalled to the internal LAN. While possible, this is a Herculean task compared to configuring a dozen rules (or even a hundred rules) on your firewalls to prevent access to the ports and servers that IRC, AIM and the rest use.

The second policy would be very difficult to enforce without a firewall. You would need to take the above steps to prevent people from installing their own email software or using rogue machines such as laptops with it preinstalled. Moreover, any email software you do use (such as Outlook or Eudora) would need to be configured so that users could not modify any preferences, add new accounts and so on, which most email clients do not support.

The third policy is virtually impossible to enforce without a firewall. You would need to take the above steps to prevent any user except management from installing the software. One possibility would be to place the software on a network share and only make it available from 6 p.m. to 8 a.m. and at weekends to members of the management group. However, many network games would not function properly this way, and you would still have to prevent the software from being copied off the share.

Even with all this, the software may still continue to function after 8 a.m. if it is already running on the client machine (or it might crash horribly). In any event, this is much easier to enforce with a firewall such as FW-1 (Check Point FireWall-1): enable user authentication, then define a policy that allows members of the management group access to the ports used by these games at the appropriate times.
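To illustrate the idea (not FW-1's actual rule syntax), here is a small Python sketch of such a time- and group-based rule: members of a hypothetical management group may reach a set of example game ports only outside business hours on weekdays and at any time at weekends. The group membership, port numbers and hours are assumptions made for the example.

```python
# Illustrative sketch of a time- and group-based rule: members of
# "management" may reach the game ports only outside weekday business hours
# and at any time at weekends. Group names, ports and hours are assumptions.

from datetime import datetime

GAME_PORTS = {666, 26000, 27910, 27960}    # example ports for Doom/Quake-era games
MANAGEMENT = {"alice", "bob"}              # hypothetical management group members

def game_access_allowed(user: str, dst_port: int, when: datetime) -> bool:
    """Mirror the policy: games only for management, outside weekday business hours."""
    if dst_port not in GAME_PORTS:
        return True                        # this rule only governs game traffic
    if user not in MANAGEMENT:
        return False                       # everyone else: always blocked
    if when.weekday() >= 5:                # Saturday (5) or Sunday (6)
        return True
    return when.hour >= 18 or when.hour < 8    # weekdays: 6 p.m. to 8 a.m. only

if __name__ == "__main__":
    evening = datetime(2009, 9, 23, 20, 0)               # a Wednesday evening
    print(game_access_allowed("alice", 27960, evening))  # True: management, after hours
    print(game_access_allowed("carol", 27960, evening))  # False: not in management
```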

Enforcing the fourth policy is basically impossible as well without a firewall. While some web browsers do allow you to list sites that are off limits, keeping the browsers on multiple workstations up to date would be a virtually impossible task. Compare that with configuring the firewall to force all WWW access through an application-level proxy that can filter requests by site.

A Secure Network Is a Healthy Network

Generally speaking, any security implementation done in a network will help with its overall health. Cataloguing systems and software versions to decide what needs upgrading first, implementing automated software upgrade procedures, and so on all help with the overall health of your network and its systems.

A network configuration that creates chokepoints for firewall deployment also means you can easily implement a DMZ, a zone with servers to handle inbound and outbound information with the public. These servers can typically run a hardened and stripped down OS and application software. A proxy email server, for example, only needs to be able to accept and send email. There is no need for user accounts, POP or IMAP services, or GroupWare software integration.
Usually the simpler a system is, the easier it is to secure, and hence the harder it is for an attacker to break into. Securing a messy network is almost impossible: you must first find out what you have, which versions are running, where the servers are deployed, what network links exist, and so on.

Secure Socket Tunneling Protocol

SSTP (Secure Socket Tunneling Protocol) and the VPN capabilities it will offer in future.

This article gives a clear understanding of SSTP and compares a standard VPN with an SSTP VPN. It also covers the advantages of utilizing SSTP and VPN together and the benefits that using SSTP will bring.

VPN

A virtual private network, also referred to as a VPN, is a network constructed over public wires to join nodes, enabling the user to create networks for the transfer of data. These systems use encryption and various other security measures to ensure that the data is not intercepted by unauthorized users. For years VPNs have been used successfully, but they have recently become problematic due to the increase in the number of organizations encouraging roaming user access. Alternative measures have been explored to enable this type of access, and many organizations have begun to utilize IPSec and SSL VPNs as alternatives. The other new alternative is SSTP, also referred to as ‘Microsoft’s SSL VPN’.

Problems with typical VPN

VPNs typically use an encrypted tunnel that keeps the tunneled data confidential. However, when the tunnel is routed through a typical NATed path, the VPN tunnel often stops working. VPNs typically connect a node to an endpoint; it may happen that both the node and the endpoint have the same internal LAN address and, if NAT is involved, all sorts of complications can arise.

SSL VPN

Secure Socket Layer, also referred to as SSL, uses a cryptographic system with two keys to encrypt data: a public key and a private key. The public key is known to everyone, while the private key is known only to the recipient. Using these keys, SSL creates a secure connection between a client and a server. SSL VPN allows users to establish secure remote access from virtually any internet-connected web browser, unlike a traditional VPN, so the hurdle of unstable connectivity is removed. With SSL VPN an entire session is secured, whereas with SSL alone this is not accomplished.
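As an aside, the public/private key idea can be illustrated with a few lines of Python. The sketch below assumes the third-party cryptography package and simply shows that data encrypted with a recipient's public key can only be recovered with the matching private key; it is a conceptual illustration, not a reproduction of the full SSL handshake.

```python
# Conceptual sketch of public/private key encryption using the third-party
# "cryptography" package: anyone may encrypt with the public key, but only
# the holder of the private key can decrypt. SSL/TLS builds on this idea
# (plus certificates and symmetric session keys) to secure a connection.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()          # safe to publish

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"confidential data", oaep)   # anyone can do this
plaintext = private_key.decrypt(ciphertext, oaep)             # only the recipient can

assert plaintext == b"confidential data"
```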

SSTP

Secure Socket Tunneling Protocol, also referred to as SSTP, is by definition an application-layer protocol. It is designed for synchronous, back-and-forth communication between two programs, and it allows many application endpoints over one network connection between peer nodes, thereby enabling efficient usage of the communication resources available to that network.

The SSTP protocol is based on SSL rather than PPTP or IPSec and uses TCP port 443 for relaying SSTP traffic. Although it is closely related to SSL, a direct comparison cannot be made between SSL and SSTP, as SSTP is only a tunneling protocol, unlike SSL. There are many reasons for choosing SSL rather than IPSec as the basis for SSTP. IPSec is aimed at supporting site-to-site VPN connectivity, so SSL was a better base for SSTP development, as it supports roaming. Other reasons for not basing it on IPSec are:

  • It does not force strong authentication,
  • User clients must be installed,
  • The quality and coding of user clients differs from vendor to vendor,
  • Non-IP protocols are not supported by default,
  • Because IPSec was developed for site-to-site secure connections, it is likely to present problems for remote users attempting to connect from a location with a limited number of IP addresses.
SSL VPN proved to be a more compatible basis for the development of SSTP.

SSL VPN addresses these issues and more. Unlike basic SSL, SSL VPN secures an entire session. No static IPs are required, and a client is unnecessary in most cases. Since connections are made via a browser over the Internet, the default connection protocol is TCP/IP. Clients connecting via SSL VPN can be presented with a desktop for accessing network resources. Transparent to the user, traffic from their laptop can be restricted to specific resources based on business defined criteria.

SSTP - an extension of VPN

The development of SSTP was brought about by the shortcomings of traditional VPN, the main one being unstable connectivity, which is a consequence of insufficient coverage. SSTP extends VPN connectivity to virtually anywhere, effectively eliminating this problem. SSTP establishes a connection over secure HTTPS; this allows clients to securely access networks behind NAT routers, firewalls and web proxies, without the typical port blocking issues.

SSTP is not designed for site to site VPN connections but is intended to be used for client to site VPN connections.

The success of SSTP can be found in the following features:

  • SSTP uses HTTPS to establish a secure connection. The SSTP (VPN) tunnel functions over Secure HTTP, eliminating the problems associated with VPN connections based on the Point-to-Point Tunneling Protocol (PPTP) or Layer 2 Tunneling Protocol (L2TP). Web proxies, firewalls and Network Address Translation (NAT) routers located on the path between clients and servers will no longer block VPN connections.
  • Typical port blocking is decreased. Blocking issues caused by PPTP GRE port blocking or L2TP ESP port blocking by a firewall or NAT router, which prevent the client from reaching the server, will no longer be a problem, as ubiquitous connectivity is achieved. Clients will be able to connect from anywhere on the internet.
  • SSTP will be built into Longhorn Server, and the SSTP client will be built into Windows Vista SP1.
  • SSTP will not introduce retraining issues, as the end-user VPN controls remain unchanged. The SSTP-based VPN tunnel plugs directly into the current interfaces for Microsoft VPN client and server software.
  • Full support for IPv6: the SSTP VPN tunnel can be established across the IPv6 internet.
  • Integrated Network Access Protection (NAP) support for client health checks.
  • Strong integration into the MS RRAS client and server, with two-factor authentication capabilities.
  • Increases VPN coverage from just a few points to almost any internet connection.
  • SSL encapsulation for traversal over port 443.
  • Can be controlled and managed using application-layer firewalls such as ISA Server.
  • A full network VPN solution, not just an application tunnel for one application.
  • Integration with NAP.
  • Policy integration and configuration possible to help with client health checks.
  • A single session is created for the SSL tunnel.
  • Application independent.
  • Stronger enforced authentication than IPSec.
  • Support for non-IP protocols, a major improvement over IPSec.
  • No need to buy expensive, hard-to-configure hardware firewalls that do not support Active Directory integration and integrated two-factor authentication.

How SSTP based VPN connection works in seven steps

  1. The SSTP client needs internet connectivity. Once this internet connectivity is verified by the protocol, a TCP connection is established to the server on port 443.
  2. SSL negotiation now takes place on top of the already established TCP connection, whereby the server certificate is validated. If the certificate is valid the connection is established; if not, the connection is torn down.
  3. The client sends an HTTPS request on top of the encrypted SSL session to the server.
  4. The client now sends SSTP control packets within the HTTPS session. This in turn establishes the SSTP state machine on both sides for control purposes, and both sides now initiate the PPP layer communication.
  5. PPP negotiation using SSTP over HTTPS now takes place at both ends. The client is now required to authenticate itself to the server.
  6. The session now binds to the IP interface on both sides and an IP address is assigned for routing of traffic.
  7. Traffic, IP or otherwise, can now traverse the connection.

Microsoft is confident that this protocol will help alleviate VPN connection issues. The RRAS team is now readying RRAS for SSTP integration, and the protocol will be part of the solution going forward. The only prerequisite at present is that the client runs Vista and the server runs Longhorn Server. The feature set provided by this little protocol is both rich and flexible, and the protocol will enhance the user and administrator experience. I predict that devices will start to incorporate this protocol into their stacks for secure communication, and the headaches of NAT will soon be forgotten as we move towards a 443/SSL incorporated solution.
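For illustration, the first three steps map directly onto a few lines of standard-library Python. The sketch below opens a TCP connection to port 443, performs the SSL negotiation with server certificate validation, and sends an HTTPS request over the encrypted session; the SSTP control packets and PPP negotiation of steps 4 to 7 are handled by the SSTP client and server themselves and are not reproduced here. The host name is a placeholder.

```python
# A minimal sketch of the first three steps above using Python's standard
# library: TCP to port 443, SSL/TLS negotiation with certificate validation,
# then an HTTPS request over the encrypted session.

import socket
import ssl

HOST = "www.example.com"   # placeholder endpoint for illustration

context = ssl.create_default_context()          # validates the server certificate

with socket.create_connection((HOST, 443)) as tcp:               # step 1: TCP to 443
    with context.wrap_socket(tcp, server_hostname=HOST) as tls:  # step 2: SSL negotiation
        # If the certificate were invalid, wrap_socket would raise and the
        # connection would be torn down, as described in step 2.
        request = (f"GET / HTTP/1.1\r\nHost: {HOST}\r\n"
                   "Connection: close\r\n\r\n")
        tls.sendall(request.encode("ascii"))                     # step 3: HTTPS request
        print(tls.recv(200).decode("ascii", errors="replace"))
```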

Conclusion

SSTP is a great addition to the VPN toolkit, enabling users to remotely and securely connect to the corporate network. Remote access blocking and NAT issues are largely forgotten when using this protocol, and the technology is stable, well documented and working. It is a very welcome addition in this age of remote access.

Green Computing - The Future



What is Green Computing?

Global warming and environmental change have become big issues, with governments, corporations and your average Joe alike all seeking out new ways to green up their daily activities. Computers make up a large part of many people's lives and traditionally are extremely damaging to the environment, which begs the question: what is green computing?

Green computing is the study and practice of minimising the environmental impact of computers through efficient manufacturing, use, and disposal.

Problems of Electronic Waste

Electronic waste is an increasing problem globally due to the quick obsolescence of electronics, which make up a staggering 70% of all hazardous waste. Computer waste is high in many toxic materials such as heavy metals and flame-retardant plastics, which easily leach into ground water and bio-accumulate. In addition, chip manufacturing uses some of the deadliest gases and chemicals known to man and requires huge amounts of resources.

In an average year, 24 million computers in the United States become obsolete. Only about 14% (or 3.3 million) of these will be recycled or donated. The rest, more than 20 million computers in the U.S., will be dumped, incinerated, shipped as waste exports or put into temporary storage to be dealt with later. We never stop to consider what happens when our laptop dies and we toss it. The reality is that it either rots in a landfill or children in developing countries end up wrestling its components apart by hand, melting toxic bits to recover traces of valuable metals like gold.

Wasting Electricity

The manufacturing of a computer consumes about 1,818 kWh of electricity before it even gets turned on, and when running, a typical computer uses 120 watts. Research shows that most PCs are left idle all day, and many of them are left on continuously. Every time we leave computers on we waste electricity, without considering where that electricity comes from. The majority of the world’s electricity is generated by burning fossil fuels, which emit pollutants such as sulphur dioxide and carbon dioxide into the air. These emissions can cause respiratory disease, smog, acid rain and global climate change.
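A quick back-of-the-envelope calculation shows how those 120 watts add up over a year. The sketch below compares a PC left on around the clock with one switched off outside an assumed 8-hour working day; the electricity price is an assumption for illustration only.

```python
# Rough, back-of-the-envelope sketch: annual consumption of a 120-watt PC
# left running around the clock versus one used 8 hours a day on workdays.
# The electricity price is an assumed figure for illustration.

WATTS = 120
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12        # assumed price per kWh in your local currency

always_on_kwh = WATTS * HOURS_PER_YEAR / 1000          # ~1,051 kWh per year
work_hours_kwh = WATTS * 8 * 260 / 1000                # ~250 kWh for 260 workdays

print(f"Always on  : {always_on_kwh:,.0f} kWh/year, cost ~{always_on_kwh * PRICE_PER_KWH:,.2f}")
print(f"8h weekdays: {work_hours_kwh:,.0f} kWh/year, cost ~{work_hours_kwh * PRICE_PER_KWH:,.2f}")
print(f"Saving     : {always_on_kwh - work_hours_kwh:,.0f} kWh per year per PC")
```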

The Future of Green Computing

A Canadian company, Userful Inc. (www.userful.com), has come up with a solution that turns 1 computer into 10: DiscoverStation. Quickly becoming a standard for green computing worldwide, DiscoverStation leverages the unused computing power of modern PCs to create an environmentally efficient alternative to traditional desktop computing. Multiple users can work on a single computer by simply attaching up to 10 monitors, mice and keyboards. This makes it possible to reduce CO2 emissions by up to 15 tons per year per system and reduce electronic waste by up to 80%. Userful has recently stated that in the last year their software has saved over 13,250 tons of CO2 emissions, the equivalent of taking 2,300 cars off the road. (More info at: http://userful.com/greenpc)

The European Union

The European Union is tackling the problem twofold. Companies are now required to produce computers free of the worst toxic materials and are responsible for taking back their old products. Faced with disassembling parts and cycling them back into the fabrication process, companies are making more careful decisions about how those parts are assembled in the first place. In 2002 NEC came out with the first computer to use lead-free solder, a fully recyclable plastic case, and which contained no toxic flame-retardants. Since then many computer companies worldwide have started selling lead-free PCs and it is becoming common practice for companies to offer their customers free recycling of their old computers.

Go Green

Here are some suggestions that will help you reduce your computer's energy consumption:

  • Don't use screen savers. They waste energy, not save it.
  • Buy computers and monitors labelled “Energy Star”, which can be programmed to automatically “power down” or “sleep” when not in use.
  • If you are using more than one PC, Userful's 10-to-1 approach can save electricity and money.
  • Turn your computer and peripherals off when not in use. This will not harm the equipment.
  • Use flat panel monitors, which use about half of the electricity of a cathode-ray tube (CRT) display.
  • Buy ink jet printers, not laser printers. Ink jet printers use 80 to 90 percent less energy than laser printers and print quality can be excellent.
If all of us did this every day, we could make a small difference. We only have one earth; let's treat it right.

Tuesday, 22 September 2009

Can Terminal Services be considered Virtualization?

Virtualization is a hot topic and at the moment very hyped up. Manufacturers would like to use that hype to boost their products by linking it to the virtualization market. In this craze Terminal Services was also labeled as a “Virtualization product”. In this article let’s look at the facts and I’ll also give my opinion about this virtualization label.

Introduction

Although virtualization techniques were described a long time ago (around 1960), within the ICT market it was the launch of VMware that made virtualization a big success. Their server virtualization product, which made it possible to run multiple servers on one physical system, started the virtualization space. After server virtualization, other virtualization products and fields followed quickly, such as application virtualization, operating system virtualization and desktop virtualization. Products that were already available before this boom now want to hitch a ride on the virtualization craze. I was a bit surprised when both Microsoft and Citrix determined that Terminal Services and Citrix Presentation Server are virtualization products.

What is…?

Before we can start determining whether Terminal Services can be labeled as a virtualization product, we need to first find out what the definitions of virtualization and terminal services are.

Virtualization: Virtualization is a broad term that refers to the abstraction of computer resources. Virtualization hides the physical characteristics of computing resources from their users, be they applications, or end users. This includes making a single physical resource (such as a server, an operating system, an application, or storage device) appear to function as multiple virtual resources; it can also include making multiple physical resources (such as storage devices or servers) appear as a single virtual resource.

Terminal Services: Terminal Services is one of the components of Microsoft Windows (both server and client versions) that allows a user to access applications and data on a remote computer over any type of network, although normally best used when dealing with either a Wide Area Network (WAN) or Local Area Network (LAN), as ease and compatibility with other types of networks may differ. Terminal Services is Microsoft's implementation of thin-client terminal server computing, where Windows applications, or even the entire desktop of the computer running terminal services, are made accessible to a remote client machine.

Terminal Services Virtualization?

Both Microsoft and Citrix are using the virtualization space to position their Terminal Services/Citrix Presentation Server/XenApp product features. Microsoft calls it presentation virtualization, while Citrix uses the term session virtualization. Microsoft describes Terminal Services virtualization as follows:

Microsoft Terminal Services virtualizes the presentation of entire desktops or specific applications, enabling your customers to consolidate applications and data in the data center while providing broad access to local and remote users. It lets an ordinary Windows desktop application run on a shared server machine yet present its user interface on a remote system, such as a desktop computer or thin client.

If we go a bit deeper, Microsoft is describing their interpretation of presentation virtualization as follows: Presentation virtualization isolates processing from the graphics and I/O, making it possible to run an application in one location but have it controlled in another. It creates virtual sessions, in which the executing applications project their user interfaces remotely. Each session might run only a single application, or it might present its user with a complete desktop offering multiple applications. In either case, several virtual sessions can use the same installed copy of an application.

OK, now that we have the definitions of virtualization and Terminal Services, and the way Microsoft explains why Terminal Services is a virtualization technique, it is time to determine whether Microsoft is right in this assumption.

Terminal Services is virtualization!

Reading the explanation of virtualization, two important definitions are mentioned: abstraction and hiding the physical characteristics.

From the user's perspective the application is not available on their workstation or thin client, but is running somewhere else. Using the definition of hiding physical characteristics, Terminal Services can therefore be seen, from a user perspective, as virtualization: because the application is not installed locally, the user has no physical association with the application.

With the IT perspective in mind, Terminal Services can also be seen as virtualization based on the definition that (physical) resources can function as multiple virtual resources. Traditionally, an application installed on a local workstation can be started by one user at a time. By installing the application on a Terminal Server (possibly in combination with a third-party SBC add-on), the application can be used by several users at the same time. Although an application cannot be seen as a 100% physical resource, you can see Terminal Services as a way of offering a single resource that appears as multiple virtual resources.

In summary, Terminal Services can be seen as virtualization because the application is abstracted from the local workstation and the application appears to function as multiple virtual resources.

Terminal Services is not virtualization!

However, let’s take a closer look at the physical resources. Hardware virtualization, application virtualization and OS virtualization really do separate from the physical resource. With application virtualization the application is not physically available on the system, OS virtualization does not need a hard disk to operate, and with hardware virtualization the virtual machine does not communicate (directly) with real hardware. However Terminal Services, from an IT perspective, still needs physical resources. Terminal Services is not really virtualizing anything, only the location where the application/session is started and the methodology of displaying the application to the user are different. In other words, as Microsoft describes in their explanation, Terminal Services isolates processing from the graphics and I/O, but this is still done using another device without an additional layer in between.

Conclusion

Back to the main question: is Terminal Services virtualization? And the answer is... it depends. It depends on how you look at the concept of virtualization and your point of view on Terminal Services. Terminal Services can be seen as virtualization if you look at it from the user's perspective (the application is not running physically on the workstation or thin client) or from the view that a single application/session can be used by more than one user at once. If you look at how other virtualization techniques work, however, Terminal Services does not function the same way, and physically nothing is running in a separate layer.

So there is no clear answer; it is subjective, depending on how you look at virtualization and Terminal Services. My personal opinion is that Terminal Services cannot be labeled as virtualization, because it is not comparable with other virtualization techniques. In my eyes Terminal Services does not add an additional (virtualization) layer, but only divides the processing between two systems. I think both Microsoft and Citrix are using the “virtualization” term to gain advantage from the current boom of the virtualization market, but both know that if you look at the underlying techniques it is not “real” virtualization.

High Availability and Disaster Recovery for Virtual Environments

Introduction

Virtual servers are used to reduce operational costs and to improve system efficiency. The growth in virtual servers has created challenges for IT departments regarding high availability and data protection. It is not enough to protect only physical servers; virtual servers must be protected as well, as they contain business-critical data and information. Virtual servers offer flexibility, but if a single physical server containing multiple virtual servers fails, the impact of data loss is enormous.

Virtualization Benefits

Companies are adopting virtualization at a rapid pace because of the tremendous benefits it offers, some of which include:

  • Server Consolidation: Virtualization helps to consolidate multiple servers into one single physical server thus offering improved operational performance.
  • Reduced Hardware Costs: As the number of physical servers goes down, the cost of servers and associated costs like IT infrastructure, space, etc. will also decrease.
  • Improved Application Security: By having a separate application in each virtual machine, any vulnerability is segregated and it does not affect other applications.
  • Reduced Maintenance: Since virtual servers can easily be relocated and migrated, maintenance of hardware and software can be done with minimal downtime.
  • Enhanced Scalability: The ease with which virtual servers can be deployed will result in improved scalability of IT implementation.

File or Block Level Replication

Different kinds of replication techniques can be used to replicate data between two servers, both locally and remotely. In block-level replication, the work is performed by the storage controllers or by mirroring software. In file-system-level replication (replication of file system changes), host software performs the replication; a minimal illustration follows the list below. In both block- and file-level replication, it does not matter what type of applications are being replicated: they are basically application agnostic, although some vendors do offer solutions with some degree of application specificity. These solutions, however, cannot provide the automation, granularity and other advantages that come with an application-specific solution. One also needs to be concerned about the following:

  • The replicated server is always in a passive mode and cannot be accessed for reporting or monitoring purposes.
  • A virus or corruption can be propagated from the production server to the replicated server.
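As a minimal sketch of the file-level approach mentioned above, the following Python loop copies any file whose modification time has changed since the last pass from a source tree to a replica tree. Real products add journaling, consistency guarantees and bandwidth throttling; the paths and interval here are placeholders. Note that it also demonstrates the second concern in the list: whatever is on the production tree, corrupted or not, gets copied to the replica.

```python
# Deliberately naive illustration of file-level replication: copy any file
# whose modification time has changed since the last pass. Paths and the
# replication interval are placeholders; runs until interrupted.

import os
import shutil
import time

SOURCE = "/data/production"    # placeholder paths
REPLICA = "/data/replica"
INTERVAL = 60                  # seconds between replication passes

def replicate_once(last_run: float) -> None:
    for root, _dirs, files in os.walk(SOURCE):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) <= last_run:
                continue                           # unchanged since last pass
            dst = os.path.join(REPLICA, os.path.relpath(src, SOURCE))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)                 # copy data and metadata

if __name__ == "__main__":
    last = 0.0                                     # first pass copies everything
    while True:
        now = time.time()
        replicate_once(last)
        last = now
        time.sleep(INTERVAL)
```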

Application Specific Replication Approach

In this approach, the replication is done at a mailbox or database level and it is very application specific. One can pick and choose the mailboxes or databases that need to be replicated. In the case of Exchange Server, one can set up a granular plan for key executives, sales and IT people, in which the replication occurs more frequently to achieve the required Recovery Point Objective (RPO) and Recovery Time Objective (RTO). For everyone else in the company, another plan can be set up where the replication intervals are not that frequent.
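The kind of granular plan described above can be pictured as a simple mapping from mailbox groups to replication intervals. The sketch below is purely hypothetical; the group names, mailboxes and intervals are assumptions, not any vendor's actual configuration format.

```python
# Hypothetical sketch of granular replication plans: key mailboxes replicate
# every few minutes for a tight RPO, everyone else far less often. Names and
# intervals are assumptions made for the example.

REPLICATION_PLANS = {
    "executives": {"mailboxes": ["ceo", "cfo", "sales-vp"], "interval_min": 5,   "rpo_min": 5},
    "it-staff":   {"mailboxes": ["it-admin", "helpdesk"],   "interval_min": 15,  "rpo_min": 15},
    "everyone":   {"mailboxes": ["*"],                      "interval_min": 240, "rpo_min": 240},
}

def plan_for(mailbox: str) -> str:
    """Return the first plan that lists the mailbox, falling back to the catch-all."""
    for name, plan in REPLICATION_PLANS.items():
        if mailbox in plan["mailboxes"]:
            return name
    return "everyone"

print(plan_for("cfo"))        # executives -> replicated every 5 minutes
print(plan_for("intern-42"))  # everyone   -> replicated every 4 hours
```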

Another advantage of this approach is that the replicated or failover server is in an Active mode. The failover server can be accessed for reporting and monitoring purposes. With other replication approaches, the failover server is in a Passive mode and cannot be used for maintenance, monitoring or reporting purposes.

Backup and Replication

Some solutions offer both backup and replication as part of a single solution; the backup is integrated with the replication and users get a two-in-one solution. Considered a two-tier architecture, these solutions consist of an application server and an agent environment. The application server also hosts the network share that stores all the backup files. The files are stored on this network share and not on any particular target server, so as to prevent loss of backup files: if the target server goes down, users can continue to access their backup files in order to rebuild the target server with as little downtime as possible.

The mailboxes and databases are backed up to the backup server and then replicated to the remote failover server. A full backup and restore is done first, and then only the changes are applied through incremental backups. For restoring emails, mailboxes and databases the local backup data can be used, and for disaster recovery purposes the remote failover server can be utilized.

Virtual Environments

Many high availability solutions protect data that resides on virtual servers. Customers can have multiple physical servers at the primary location, while at the offsite disaster recovery location they can have one physical server hosting multiple virtual servers. Also, multiple virtual servers from the primary site can easily be backed up and replicated to the disaster recovery site.

With some disaster recovery solutions, the appropriate agents are installed on both physical and virtual servers, and these agents have a very small footprint. Because of this limited footprint, the impact on these servers is minimal from a performance perspective. With other replication solutions, one has to install the entire application on the virtual servers, which takes a huge toll on performance.

Physical to Virtual Servers

In this scenario, the production environment has physical servers and the disaster recovery site is deployed in a virtual environment. Both the physical and virtual servers are controlled by the application, which can be located either at the production site or at the remote site.


Figure 1

Virtual to Virtual Environments

In order to achieve significant cost savings, some companies not only virtualize their disaster recovery site but also use virtual servers in the production environment. One can have one or more physical servers housing many virtual servers both at production and remote sites.


Figure 2

Failover/Failback

When a disaster strikes the primary site, all the users are failed over to the remote site. Once the primary site is rebuilt, one can go through the failback process to the original primary servers very easily. Also, a particular virtual server containing Exchange or SQL Server can be failed over on its own without affecting other physical or virtual servers.

The only way to make sure that your disaster recovery solution works is to test it periodically. Unfortunately, to do that one has to failover the entire Exchange or SQL server. Administrators will be leery about doing this for fear of crashing the production Exchange or SQL server. Some solutions can create a test mailbox or database and use it for failover/failback testing periodically. Through this approach, customers can be fully assured that their disaster recovery solution will work when it is badly needed and have peace of mind.

Migration

Virtual servers, in conjunction with certain disaster recovery solutions, can be used as a migration tool. If a physical server goes bad, one can fail over to the remote failover virtual server. Once the primary site is rebuilt, the failback can easily be achieved. With some applications, there is no need to have identical versions of Exchange on the primary and failover servers; in fact, one can run Exchange 2003 on the primary server and Exchange 2007 on the failover server. This feature can be used as a migration tool: fail over to the failover server running Exchange 2007, upgrade the original primary to Exchange 2007, and fail back again. This scenario is applicable to SQL 2000, SQL 2005 and SQL 2008 servers as well.

Conclusion

Companies are increasingly adopting virtual servers as virtualization offers many compelling benefits. This increase in virtualization poses tremendous disaster recovery and data protection challenges to IT Administrators. There is a greater need to implement the appropriate high availability and failover solutions to protect these servers.

Determining Guest OS Placement - Part 2

In the previous article in this series, I began discussing some of the various techniques used for matching virtual servers to physical hardware. Although the first article in this series does a fairly good job of covering the basics, there are still a couple of other issues that you may have to consider. In this article, I want to conclude the series by giving you a couple more things to think about.

Step Three: Establish Performance Thresholds

The first thing that I want to give you to think about is individual virtual machine performance. I have already talked about resource allocation in the previous article, but in a way performance is a completely separate issue.

One of the main reasons for this is that in a virtualized environment, all of the guest operating systems share a common set of physical resources. In some cases it is possible to reserve specific resources for a particular virtual machine. For example, you can structure memory configuration in a way that guarantees that each virtual machine will receive a specific amount of physical memory. Likewise, you can use processor affinity settings to control the number of cores that each virtual machine has access to. While these are all good steps to take, they do not actually guarantee that a guest operating system will perform in the way that you might expect.

The reason for this is that sometimes there is an overlapping need for shared resources. In some cases, this can actually work in your favor, but, in other cases overlapping resource requirements can be detrimental to a guest operating system’s performance.

The reason why I say this is that Microsoft usually recommends a one-to-one mapping of virtual processors to processor cores. Having said that though, it’s possible to map multiple virtual processors to a single processor core. With that in mind, imagine what would happen if you tried to run six virtual machines on four physical CPU cores.

What would happen in this situation really just depends on how those virtual machines are being used, and how much CPU time they consume. For instance, if each virtual machine were only using about 25% of the total processing capacity of a physical core, then performance would probably not even be an issue (at least not from a CPU standpoint).

The problem is that most of the time the load that a virtual machine places on a CPU does not remain constant. If you have ever done any performance monitoring on a non-virtualized Windows server, then you know that even when a machine is running at idle, there are fluctuations in CPU utilization. Occasionally the CPU will spike to 100% utilization, but it also occasionally dips to 0% utilization.

As you will recall, earlier I said that sometimes shared resources can be beneficial to a virtual server, and sometimes they can be detrimental to it. The reason is that in situations in which the other virtual machines are underutilizing shared resources, a virtual machine may be able to borrow some of those resources to help it perform better. Of course this capability varies depending upon how the virtual servers are configured and which resources are needed. At the same time, if multiple virtual machines try to consume an abnormally large amount of resources at the same time, the physical hardware may not be able to keep up with the demand, and performance suffers until the demand for resources returns to normal.

With this example in mind, the question that you have to ask yourself is whether or not it is acceptable for multiple virtual machines to lay claim to the same set of physical resources at the same time. Of course the only way that you can answer this question is to do some performance benchmarking and find out what level of resource consumption is normal for each virtual machine.
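One simple way to approach that benchmarking question is to collect per-VM CPU samples, take a high percentile for each virtual machine, and check whether the combined figure still fits within the host's physical cores. The sketch below does exactly that with made-up sample data; the VM names and numbers are assumptions for illustration.

```python
# Sketch of the benchmarking question above: given typical per-VM CPU samples
# (percent of one core), does their combined 95th-percentile demand fit within
# the host's physical cores? Sample data is invented for illustration.

from statistics import quantiles

PHYSICAL_CORES = 4

# Hypothetical utilisation samples per VM (percent of a single core).
vm_samples = {
    "web-1":  [20, 25, 30, 22, 95, 24],
    "web-2":  [18, 21, 19, 25, 23, 20],
    "sql-1":  [40, 55, 60, 45, 90, 50],
    "mail-1": [30, 28, 35, 33, 31, 85],
    "app-1":  [15, 12, 18, 16, 14, 17],
    "app-2":  [22, 20, 25, 23, 24, 21],
}

def p95(samples):
    """95th percentile of the samples (inclusive method stays within the data range)."""
    return quantiles(samples, n=20, method="inclusive")[-1]

total_p95 = sum(p95(s) for s in vm_samples.values())
capacity = PHYSICAL_CORES * 100           # each core = 100 "percent-units"

print(f"Combined 95th-percentile demand: {total_p95:.0f}% of one core")
print(f"Host capacity                  : {capacity}%")
print("Fits comfortably" if total_p95 <= capacity * 0.8 else "Risk of contention at peak")
```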

Step Four: Perform a Juggling Act

The final step in the process is to perform a juggling act. In some ways, this is not so much a step as it is a byproduct of working in the corporate world. The reason why I say that the last step is to perform a juggling act is that oftentimes you may find that what works best from an IT perspective does not mesh with the company's business requirements. In these types of situations, you will have to find a balance between functionality and corporate mandates. Often this boils down to security concerns.

For example, one of the biggest security fears in regard to virtualization is something called an escape attack. The basic idea behind an escape attack is that an attacker is able to somehow escape from the constraints of the guest operating system, and then gain access to the host operating system. Once an attacker is able to do that, they could theoretically take control over every other guest operating system that is running on the host server.

To the best of my knowledge, nobody has ever successfully performed an escape attack in a Hyper-V environment. Even so, many organizations are still jumpy when it comes to the possibility. After all, zero day exploits do occur from time to time, and Hyper-V has not really been around long enough to warrant total confidence in its security.

Do not get me wrong. I am not saying that Hyper-V is insecure. I am just saying that like any other new product, there may be security holes that have yet to be discovered.

Given the possibility that someone might eventually figure out how to perform an escape attack against Hyper-V, some organizations have begun to mandate that only virtual machines that are specifically designed to act as front end servers can be placed on certain host servers. Front end servers typically reside at the network perimeter, and are therefore the most prone to attack. By their very nature, front end servers are designed to shield the backend servers from attack.

Grouping all of the front end servers together on a common host machine ensures that if someone ever does perform an escape attack, they would not gain access to anything other than a bunch of front end servers. Since front end servers do not typically contain any data, this approach would help to protect backend servers from being compromised through an escape attack.

So what is wrong with this approach? Nothing, from a security standpoint. From a utilization standpoint though, this approach may present a colossal waste of server resources. In smaller organizations, front end servers tend to consume very few hardware resources. If your goal was to get the most bang for your hardware buck, you would want to pair low utilization virtual servers with high utilization virtual servers. That way, the two balance each other out, and you can evenly distribute the virtual machine workload across your hardware. In this particular case, the organization’s security requirements take precedence over making the most effective use of the organization’s hardware.

Conclusion

In this article series, I have explained that while a single physical server is usually capable of hosting multiple virtual servers, it is important to group virtual servers together in a way that makes the most efficient use of hardware resources without overtaxing the hardware in the process. I then went on to explain that sometimes an organization's business needs or security policy may prevent you from making the absolute best possible use of your server hardware.