- Chat clients such as IRC, AIM, and Yahoo IM are strictly forbidden, as they can transfer files.
- Accessing external mail servers is forbidden (antivirus policy); only the internal server may be used to send or receive mail.
- Network games, such as Doom or Quake, are forbidden, except between 8 a.m. and 6 p.m. on weekdays for members of management.
- Websites such as playboy.com are forbidden for legal reasons.
Wednesday, 23 September 2009
Firewall, Why?
Secure Socket Tunneling Protocol
SSTP was designed to address several drawbacks of IPSec-based VPNs:
- IPSec does not force strong authentication.
- A dedicated client must be installed on every user machine.
- The quality and implementation of those clients differ from vendor to vendor.
- Non-IP protocols are not supported by default.
- Because IPSec was developed for site-to-site secure connections, it is likely to present problems for remote users attempting to connect from a location with a limited number of IP addresses.
- SSTP uses HTTPS to establish a secure connection.
- The SSTP (VPN) tunnel operates over HTTPS, eliminating the problems that plague VPN connections based on the Point-to-Point Tunneling Protocol (PPTP) or Layer 2 Tunneling Protocol (L2TP): web proxies, firewalls and Network Address Translation (NAT) routers located on the path between clients and servers no longer block the VPN connection.
- Typical port-blocking problems are avoided.
- Because everything rides over TCP port 443, a firewall or NAT router that blocks PPTP's GRE traffic or L2TP's ESP traffic no longer prevents the client from reaching the server; near-ubiquitous connectivity is achieved, and clients can connect from almost anywhere on the internet.
- SSTP will be built into Longhorn Server (Windows Server 2008).
- SSTP Client will be built into Windows Vista SP1
- SSTP introduces no retraining issues, as the end-user VPN controls remain unchanged: the SSTP-based VPN tunnel plugs directly into the current interfaces for Microsoft VPN client and server software.
- Full support for IPv6: the SSTP VPN tunnel can be established across the IPv6 internet.
- Integrated Network Access Protection (NAP) support for client health checks.
- Strong integration with the Microsoft RRAS client and server, with two-factor authentication capabilities.
- Increases the VPN coverage from just a few points to almost any internet connection.
- SSL encapsulation for traversal over port 443.
- Can be controlled and managed using application-layer firewalls such as ISA Server.
- Full network VPN solution, not just an application tunnel for one application.
- Integration with NAP.
- Policy integration and configuration to help with client health checks.
- Single session created for the SSL tunnel.
- Application independent.
- Stronger enforced authentication than IPSec.
- Support for non-IP protocols, a major improvement over IPSec.
- No need to buy expensive, hard-to-configure hardware firewalls that lack Active Directory integration and integrated two-factor authentication.
- The SSTP client needs internet connectivity. Once this connectivity is verified, the protocol establishes a TCP connection to the server on port 443.
- SSL negotiation takes place on top of the established TCP connection, during which the server certificate is validated. If the certificate is valid the connection is established; if not, it is torn down.
- The client sends an HTTPS request to the server on top of the encrypted SSL session.
- The client then sends SSTP control packets within the HTTPS session, establishing the SSTP state machine on both sides for control purposes; both sides then initiate PPP-layer communication.
- PPP negotiation using SSTP over HTTPS takes place at both ends, and the client is required to authenticate to the server.
- The session binds to the IP interface on both sides, and an IP address is assigned for routing traffic.
- Traffic, whether IP or otherwise, can now traverse the connection. (The opening steps of this exchange are sketched in code below.)
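To make the first steps concrete, here is a minimal sketch in Python: open a TCP connection to port 443, negotiate SSL/TLS with server-certificate validation, and send the initial HTTPS request. The server name is a placeholder, and while the unusual request line follows the HTTP framing documented in Microsoft's MS-SSTP specification, treat the details as illustrative; the SSTP control packets and PPP negotiation that follow are not reproduced.

```python
import socket
import ssl

VPN_SERVER = "vpn.example.com"  # placeholder server name

# create_default_context() enables certificate and hostname validation,
# mirroring the "validate the server certificate" step above.
context = ssl.create_default_context()

with socket.create_connection((VPN_SERVER, 443)) as tcp:
    # If validation fails, wrap_socket raises ssl.SSLCertVerificationError
    # and the connection is torn down, just as the protocol requires.
    with context.wrap_socket(tcp, server_hostname=VPN_SERVER) as tls:
        request = (
            "SSTP_DUPLEX_POST /sra_{BA195980-CD49-458b-9E23-C84EE0ADCD75}/ HTTP/1.1\r\n"
            "Host: " + VPN_SERVER + "\r\n"
            "Content-Length: 18446744073709551615\r\n"
            "\r\n"
        )
        tls.sendall(request.encode("ascii"))
        # The server's HTTP reply opens the channel for the SSTP control
        # packets that a real client would send next.
        print(tls.recv(4096).decode("ascii", errors="replace"))
```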
Green Computing - The Future
- Don't use screen savers. They waste energy rather than saving it.
- Buy computers and monitors labelled “Energy Star”, which can be programmed to automatically power down or sleep when not in use.
- If you are running more than one PC, Userful's 10-to-1 multi-seat software can save electricity and money.
- Turn your computer and peripherals off when not in use. This will not harm the equipment.
- Use flat panel monitors, which use about half of the electricity of a cathode-ray tube (CRT) display.
- Buy ink jet printers, not laser printers. Ink jet printers use 80 to 90 percent less energy than laser printers and print quality can be excellent.
Tuesday, 22 September 2009
Can Terminal Services be considered Virtualization?
Virtualization is a hot topic and, at the moment, heavily hyped. Manufacturers like to ride that hype by linking their products to the virtualization market, and in this craze Terminal Services has also been labeled a “virtualization product”. In this article let’s look at the facts; I’ll also give my opinion on this virtualization label.
Introduction
Although virtualization techniques were described long ago (around 1960), within the ICT market it was the launch of VMware that made virtualization a commercial success. Their server virtualization product, which made it possible to run multiple servers on one physical system, kick-started the virtualization space. After server virtualization, other virtualization products and fields quickly followed, such as application virtualization, operating system virtualization and desktop virtualization. Products that were already available before the virtualization boom now want to hitch a ride on the craze, and I was a bit surprised when both Microsoft and Citrix decided that Terminal Services and Citrix Presentation Server are virtualization products.
What is…?
Before we can start determining whether Terminal Services can be labeled as a virtualization product, we need to first find out what the definitions of virtualization and terminal services are.
Virtualization: Virtualization is a broad term that refers to the abstraction of computer resources. Virtualization hides the physical characteristics of computing resources from their users, be they applications, or end users. This includes making a single physical resource (such as a server, an operating system, an application, or storage device) appear to function as multiple virtual resources; it can also include making multiple physical resources (such as storage devices or servers) appear as a single virtual resource.
Terminal Services: Terminal Services is one of the components of Microsoft Windows (both server and client versions) that allows a user to access applications and data on a remote computer over any type of network, although normally best used when dealing with either a Wide Area Network (WAN) or Local Area Network (LAN), as ease and compatibility with other types of networks may differ. Terminal Services is Microsoft's implementation of thin-client terminal server computing, where Windows applications, or even the entire desktop of the computer running terminal services, are made accessible to a remote client machine.
Terminal Services Virtualization?
Both Microsoft and Citrix use the virtualization space to position their Terminal Services/Citrix Presentation Server/XenApp product features. Microsoft calls it presentation virtualization, while Citrix uses the term session virtualization. Microsoft describes Terminal Services virtualization as follows:
Microsoft Terminal Services virtualizes the presentation of entire desktops or specific applications, enabling your customers to consolidate applications and data in the data center while providing broad access to local and remote users. It lets an ordinary Windows desktop application run on a shared server machine yet present its user interface on a remote system, such as a desktop computer or thin client.
If we go a bit deeper, Microsoft is describing their interpretation of presentation virtualization as follows: Presentation virtualization isolates processing from the graphics and I/O, making it possible to run an application in one location but have it controlled in another. It creates virtual sessions, in which the executing applications project their user interfaces remotely. Each session might run only a single application, or it might present its user with a complete desktop offering multiple applications. In either case, several virtual sessions can use the same installed copy of an application.
OK: now that we have the definitions of virtualization and Terminal Services, and Microsoft's explanation of why Terminal Services is a virtualization technique, it is time to determine whether that claim holds.
Terminal Services is virtualization!
Reading the explanation of virtualization, two important definitions are mentioned: abstraction and hiding the physical characteristics.
From the user's perspective the application is not available on the workstation or thin client, but is running somewhere else. Using the definition of hiding physical characteristics, Terminal Services can therefore be seen, from a user perspective, as virtualization: because the application is not installed locally, the user has no physical association with it.
From the IT perspective, Terminal Services can also be seen as virtualization based on the definition that (physical) resources can function as multiple virtual resources. Traditionally, an application installed on a local workstation can be started by one user at a time. By installing the application on a Terminal Server (in combination with a third-party SBC add-on), it can be used by many users at the same time. Although an application cannot be regarded as a purely physical resource, you can see Terminal Services as a way of offering a single resource that appears as multiple virtual resources.
In summary, Terminal Services can be seen as virtualization because the application is abstracted from the local workstation and the application appears to function as multiple virtual resources.
Terminal Services is not virtualization!
However, let’s take a closer look at the physical resources. Hardware virtualization, application virtualization and OS virtualization genuinely decouple the workload from the physical resource: with application virtualization the application is not physically installed on the system, OS virtualization does not need its own hard disk to operate, and with hardware virtualization the virtual machine does not communicate (directly) with real hardware. Terminal Services, by contrast, still needs the physical resources. It is not really virtualizing anything; only the location where the application/session runs and the way the application is displayed to the user differ. In other words, as Microsoft’s own description says, Terminal Services isolates processing from the graphics and I/O, but this is done simply by using another device, without any additional layer in between.
Conclusion
Back to the main question: is Terminal Services virtualization? The answer is… it depends. It depends on how you look at the concept of virtualization and on your view of Terminal Services. Terminal Services can be seen as virtualization if you look at it from the user's perspective (the application is not physically running on the workstation or thin client) or from the view that a single application/session can be used by more than one user at once. But if you look at how other virtualization techniques work, Terminal Services does not function the same way, and physically nothing runs in a separate layer.
So there is no clear answer; it is subjective, depending on how you look at virtualization and Terminal Services. My personal opinion is that Terminal Services cannot be labeled as virtualization, because it is not comparable with other virtualization techniques. Through my eyes Terminal Services does not add an additional (virtualization) layer, but only divides the processing between two systems. I think both Microsoft and Citrix are using the “virtualization” term to ride the current boom of the virtualization market, while both know that, looking at the underlying techniques, it is not “real” virtualization.
High Availability and Disaster Recovery for Virtual Environments
Introduction
Virtual servers are used to reduce operational costs and to improve system efficiency, but their growth has created high availability and data protection challenges for IT departments. It is not enough to protect only physical servers; virtual servers must be protected as well, since they too contain business-critical data and information. Virtual servers offer flexibility, but if a single physical server hosting multiple virtual servers fails, the impact of the data loss is enormous.
Virtualization Benefits
Companies are adopting virtualization at a rapid pace because of the tremendous benefits it offers, including:
- Server Consolidation: Virtualization helps to consolidate multiple servers into one single physical server thus offering improved operational performance.
- Reduced Hardware Costs: As the number of physical servers goes down, the cost of servers and associated costs like IT infrastructure, space, etc. will also decrease.
- Improved Application Security: By having a separate application in each virtual machine, any vulnerability is segregated and it does not affect other applications.
- Reduced Maintenance: Since virtual servers can easily be relocated and migrated, maintenance of hardware and software can be done with minimal downtime.
- Enhanced Scalability: The ease with which virtual servers can be deployed will result in improved scalability of the IT implementation.
File or Block Level Replication
Different kinds of replication techniques can be used to replicate data between two servers, both locally and remotely. In block-level replication, the work is performed by the storage controllers or by mirroring software; in file-system-level replication (replication of file-system changes), host software performs the replication. In both block- and file-level replication it does not matter what type of application is being replicated: they are essentially application-agnostic, although some vendors offer solutions with a degree of application awareness. Even so, these solutions cannot provide the automation, granularity and other advantages that come with a truly application-specific solution. One also needs to be aware of the following (a minimal block-level sketch follows the list):
- The replicated server is always in passive mode and cannot be accessed for reporting or monitoring purposes.
- A virus or corruption can propagate from the production server to the replicated server.
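As an illustration of what “application-agnostic” means at the block level, here is a hedged sketch in Python that compares a source file and a replica in fixed-size blocks and copies only the blocks that differ. Real products operate on volumes through storage controllers or mirroring software; the file paths, block size and single-pass loop are simplifying assumptions.

```python
BLOCK_SIZE = 4096  # replication granularity; an assumption for this sketch

def replicate_blocks(source_path: str, replica_path: str) -> int:
    """Copy only the blocks that differ from source to replica.

    The replica is assumed to already exist as an earlier copy of the
    source; growth/truncation handling is omitted for brevity.
    """
    changed = 0
    with open(source_path, "rb") as src, open(replica_path, "r+b") as dst:
        offset = 0
        while True:
            src_block = src.read(BLOCK_SIZE)
            if not src_block:              # end of source reached
                break
            dst.seek(offset)
            if dst.read(BLOCK_SIZE) != src_block:
                dst.seek(offset)           # block drifted: rewrite just this block
                dst.write(src_block)
                changed += 1
            offset += BLOCK_SIZE
    return changed

# Example: replicate_blocks("/data/mail.edb", "/replica/mail.edb")
# Note that the code never inspects the data -- it is application-agnostic,
# which is also why a propagated virus or corruption goes unnoticed.
```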
Application Specific Replication Approach
In this approach, the replication is done at a mailbox or database level and it is very application specific. One can pick and choose the mailboxes or databases that need to be replicated. In the case of Exchange Server, one can set up a granular plan for key executives, sales and IT people, in which the replication occurs more frequently to achieve the required Recovery Point Objective (RPO) and Recovery Time Objective (RTO). For everyone else in the company, another plan can be set up where the replication intervals are not that frequent.
Another advantage of this approach is that the replicated or failover server is in an Active mode. The failover server can be accessed for reporting and monitoring purposes. With other replication approaches, the failover server is in a Passive mode and cannot be used for maintenance, monitoring or reporting purposes.
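A hedged sketch of how such tiered plans might be expressed, with invented group names and intervals; the replicate() function is a hypothetical stand-in for whatever operation the actual product performs per mailbox or database:

```python
import time

# Tighter intervals give key groups a tighter Recovery Point Objective.
REPLICATION_PLANS = {
    "executives": {"mailboxes": ["ceo", "cfo"],   "interval_s": 5 * 60},
    "sales_it":   {"mailboxes": ["sales", "it"],  "interval_s": 15 * 60},
    "everyone":   {"mailboxes": ["staff"],        "interval_s": 4 * 3600},
}

def replicate(mailbox: str) -> None:
    print(f"replicating {mailbox} to the failover server")  # placeholder action

def run_scheduler() -> None:
    last_run = {name: 0.0 for name in REPLICATION_PLANS}
    while True:                             # runs forever; stop with Ctrl-C
        now = time.time()
        for name, plan in REPLICATION_PLANS.items():
            if now - last_run[name] >= plan["interval_s"]:
                for mbx in plan["mailboxes"]:
                    replicate(mbx)
                last_run[name] = now
        time.sleep(30)  # coarse tick; fine for minute-level intervals
```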
Backup and Replication
Some solutions offer both backup and replication as part of a single solution; the backup is integrated with the replication and users get a two-in-one product. These solutions use a two-tier architecture consisting of an application server and an agent environment. The application server also hosts the network share that stores all the backup files. The files are stored on this network share, rather than on any particular target server, to prevent loss of the backup files: if the target server goes down, users can still access their backup files and rebuild the target server with as little downtime as possible.
The mailboxes and databases are backed up to the backup server and then replicated to the remote failover server. A full backup and restore is performed first, and afterwards only the changes are applied through incrementals. For restoring emails, mailboxes and databases, the local backup data can be used; for disaster recovery purposes, the remote failover server can be utilized.
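A toy illustration of “full first, then incrementals”: the first run copies everything, and later runs copy only files modified since the previous backup. Paths are invented, and real products track changes at the mailbox or database level rather than by file modification time.

```python
import os
import shutil
import time

STATE_FILE = "last_backup_time.txt"  # remembers when the previous run finished

def backup(source_dir: str, backup_dir: str) -> None:
    last = 0.0
    if os.path.exists(STATE_FILE):          # a prior run exists: incremental pass
        with open(STATE_FILE) as f:
            last = float(f.read())
    os.makedirs(backup_dir, exist_ok=True)
    for name in os.listdir(source_dir):
        src = os.path.join(source_dir, name)
        # On the first run last == 0.0, so every file qualifies (the full backup).
        if os.path.isfile(src) and os.path.getmtime(src) > last:
            shutil.copy2(src, os.path.join(backup_dir, name))
    with open(STATE_FILE, "w") as f:
        f.write(str(time.time()))           # mark this run for the next pass
```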
Virtual Environments
Many high availability solutions protect data that resides on virtual servers. Customers can have multiple physical servers at the primary location, while the offsite disaster recovery location runs one physical server hosting multiple virtual servers. Multiple virtual servers from the primary site can also easily be backed up and replicated to the disaster recovery site.
With some disaster recovery solutions, appropriate agents are installed on both the physical and the virtual servers, and these agents have a very small footprint, so their performance impact on the servers is minimal. With other replication solutions, one has to install the entire application on the virtual servers, which takes a huge toll on performance.
Physical to Virtual Servers
In this scenario, the production environment consists of physical servers while the disaster recovery site is deployed in a virtual environment. Both the physical and the virtual servers are controlled by the application server, which can be located either at the production site or at the remote site.
Figure 1
Virtual to Virtual Environments
In order to achieve significant cost savings, some companies not only virtualize their disaster recovery site but also use virtual servers in the production environment. One can have one or more physical servers housing many virtual servers both at production and remote sites.
Figure 2
Failover/Failback
When a disaster strikes the primary site, all users are failed over to the remote site. Once the primary site is rebuilt, one can very easily fail back to the original primary servers. A particular virtual server containing Exchange or SQL Server can also be failed over on its own, without affecting other physical or virtual servers.
The only way to make sure that your disaster recovery solution works is to test it periodically. Unfortunately, doing so normally means failing over the entire Exchange or SQL server, and administrators are leery of that for fear of crashing the production system. Some solutions can create a test mailbox or database and use it for periodic failover/failback testing. Through this approach, customers can be fully assured that their disaster recovery solution will work when it is badly needed, and have peace of mind.
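A sketch of what such a periodic drill could look like, assuming a dedicated test database so production is never touched; fail_over(), verify_access() and fail_back() are hypothetical hooks into whichever disaster recovery product is in use:

```python
import datetime

TEST_DATABASE = "dr_test_db"  # dedicated test database, never production

def fail_over(database: str) -> None:
    print(f"failing {database} over to the remote site")    # placeholder hook

def verify_access(database: str) -> bool:
    return True   # placeholder: e.g. mount the database and run a test query

def fail_back(database: str) -> None:
    print(f"failing {database} back to the primary site")   # placeholder hook

def run_dr_drill() -> bool:
    """Fail the test database over, verify it is usable, then fail back."""
    fail_over(TEST_DATABASE)
    try:
        healthy = verify_access(TEST_DATABASE)
    finally:
        fail_back(TEST_DATABASE)       # always restore the original layout
    stamp = datetime.datetime.now()
    print(f"{stamp}: DR drill {'passed' if healthy else 'FAILED'}")
    return healthy

if __name__ == "__main__":
    run_dr_drill()  # schedule this, e.g. monthly, from a task scheduler
```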
Migration
Virtual servers, in conjunction with certain disaster recovery solutions, can be used as a migration tool. If a physical server goes bad, one can fail over to the remote failover virtual server; once the primary site is rebuilt, failback is easily achieved. With some applications there is no need to run identical versions of Exchange on the primary and failover servers: one can run Exchange 2003 on the primary server and Exchange 2007 on the failover server. This capability doubles as a migration path. For example, you can fail over to the failover server running Exchange 2007, upgrade the original primary to Exchange 2007, and fail back again. The same scenario applies to SQL Server 2000, 2005 and 2008.
Conclusion
Companies are increasingly adopting virtual servers as virtualization offers many compelling benefits. This increase in virtualization poses tremendous disaster recovery and data protection challenges to IT Administrators. There is a greater need to implement the appropriate high availability and failover solutions to protect these servers.
Determining Guest OS Placement - Part 2
In the previous article in this series, I began discussing some of the various techniques used for matching virtual servers to physical hardware. Although the first article in this series does a fairly good job of covering the basics, there are still a couple of other issues that you may have to consider. In this article, I want to conclude the series by giving you a couple more things to think about.
Step Three: Establish Performance Thresholds
The first thing that I want to give you to think about is individual virtual machine performance. I have already talked about resource allocation in the previous article, but in a way performance is a completely separate issue.
One of the main reasons for this is that in a virtualized environment, all of the guest operating systems share a common set of physical resources. In some cases it is possible to reserve specific resources for a particular virtual machine. For example, you can structure the memory configuration in a way that guarantees each virtual machine a specific amount of physical memory. Likewise, you can use processor affinity settings to control which processor cores each virtual machine has access to. While these are all good steps to take, they do not actually guarantee that a guest operating system will perform the way you might expect.
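Processor affinity at the hypervisor level is configured through the virtualization product's own management tools; as a stand-in, here is the same idea applied to an ordinary process using the third-party psutil package (the choice of cores 0 and 1 is arbitrary and purely illustrative):

```python
import psutil

proc = psutil.Process()                  # handle to the current process

print("allowed cores before:", proc.cpu_affinity())
proc.cpu_affinity([0, 1])                # pin execution to cores 0 and 1
print("allowed cores after: ", proc.cpu_affinity())
```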
The reason for this is that sometimes there is an overlapping need for shared resources. In some cases, this can actually work in your favor, but, in other cases overlapping resource requirements can be detrimental to a guest operating system’s performance.
The reason I say this is that Microsoft usually recommends a one-to-one mapping of virtual processors to processor cores. Having said that, it is possible to map multiple virtual processors to a single processor core. With that in mind, imagine what would happen if you tried to run six virtual machines on four physical CPU cores.
What would happen in this situation really depends on how those virtual machines are being used and how much CPU time they consume. For instance, if each virtual machine were only using about 25% of the total processing capacity of a physical core, then performance would probably not even be an issue (at least not from a CPU standpoint).
The problem is that most of the time the load that a virtual machine places on a CPU does not remain constant. If you have ever done any performance monitoring on a non-virtualized Windows server, then you know that even when a machine is running at idle, there are fluctuations in CPU utilization. Occasionally the CPU will spike to 100% utilization, but it also occasionally dips to 0% utilization.
As you will recall, I said earlier that shared resources can sometimes be beneficial to a virtual server and sometimes detrimental. In situations where the other virtual machines are underutilizing shared resources, a virtual machine may be able to borrow some of those resources to help it perform better; this capability varies depending on how the virtual servers are configured and which resources are needed. Conversely, if multiple virtual machines try to consume an abnormally large amount of resources at the same time, the physical hardware cannot keep up with the demand, and performance suffers until the demand for resources returns to normal.
With this example in mind, the question you have to ask yourself is whether it is acceptable for multiple virtual machines to lay claim to the same set of physical resources at the same time. The only way to answer that question is to do some performance benchmarking and find out what level of resource consumption is normal for each virtual machine.
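A minimal sketch of such a baseline measurement, run inside a guest (or against the host) using the third-party psutil package; the five-minute window and one-second sampling interval are arbitrary assumptions:

```python
import psutil

SAMPLES = 300  # five minutes at one-second resolution

def baseline_cpu() -> tuple[float, float]:
    """Sample total CPU utilisation and return (average %, peak %)."""
    readings = [psutil.cpu_percent(interval=1.0) for _ in range(SAMPLES)]
    return sum(readings) / len(readings), max(readings)

if __name__ == "__main__":
    avg, peak = baseline_cpu()
    print(f"average CPU: {avg:.1f}%  peak CPU: {peak:.1f}%")
```

The average tells you a machine's steady demand, while the peak tells you how badly it can collide with neighbours; both numbers feed the placement decision.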
Step Four: Perform a Juggling Act
The final step in the process is to perform a juggling act. In some ways, this is not so much a step as it is a byproduct of working in the corporate world. The reason why I say that the last step is to perform a juggling act is that oftentimes you may find that what works best from an IT perspective does not mesh with the company's business requirements. In these types of situations, you will have to find a balance between functionality and corporate mandates. Often this boils down to security concerns.
For example, one of the biggest security fears in regard to virtualization is something called an escape attack. The basic idea behind an escape attack is that an attacker is able to somehow escape from the constraints of the guest operating system, and then gain access to the host operating system. Once an attacker is able to do that, they could theoretically take control over every other guest operating system that is running on the host server.
To the best of my knowledge, nobody has ever successfully performed an escape attack in a Hyper-V environment. Even so, many organizations are still jumpy when it comes to the possibility. After all, zero day exploits do occur from time to time, and Hyper-V has not really been around long enough to warrant total confidence in its security.
Do not get me wrong. I am not saying that Hyper-V is insecure. I am just saying that like any other new product, there may be security holes that have yet to be discovered.
Given the possibility that someone might eventually figure out how to perform an escape attack against Hyper-V, some organizations have begun to mandate that only virtual machines specifically designed to act as front end servers can be placed on certain host servers. Front end servers typically reside at the network perimeter and are therefore the most prone to attack; by their very nature, they are designed to shield the backend servers from attack.
Grouping all of the front end servers together on a common host machine ensures that if someone ever does perform an escape attack, they would not gain access to anything other than a bunch of front end servers. Since front end servers do not typically contain any data, this approach would help to protect backend servers from being compromised through an escape attack.
So what is wrong with this approach? Nothing, from a security standpoint. From a utilization standpoint, though, it can represent a colossal waste of server resources. In smaller organizations, front end servers tend to consume very few hardware resources. If your goal were to get the most bang for your hardware buck, you would pair low-utilization virtual servers with high-utilization virtual servers, so that the two balance each other out and the virtual machine workload is evenly distributed across your hardware. In this particular case, the organization's security requirements take precedence over making the most effective use of its hardware.
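For reference, the utilization-first placement that such a security mandate overrides can be sketched as a simple greedy balance; the VM names and utilization figures below are invented for illustration:

```python
# Assign each VM (heaviest first) to the host with the most headroom.
vms = {"frontend1": 10, "frontend2": 15, "sql1": 70, "exchange1": 60}  # % of a core
hosts: dict[str, list[str]] = {"hostA": [], "hostB": []}

for vm, load in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
    # Pick the host with the lowest total assigned load so far.
    target = min(hosts, key=lambda h: sum(vms[v] for v in hosts[h]))
    hosts[target].append(vm)

for host, placed in hosts.items():
    print(host, placed, f"total={sum(vms[v] for v in placed)}%")
```

The security mandate forbids exactly this mixing: the front end VMs must share one host, even though the balance above would use the hardware more evenly.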
Conclusion
In this article series, I have explained that while a single physical server is usually capable of hosting multiple virtual servers, it is important to group virtual servers together in a way that makes the most efficient use of hardware resources without overtaxing the hardware in the process. I then went on to explain that sometimes an organization's business needs or security policy may prevent you from making the absolute best possible use of your server hardware.