Tuesday, 30 June 2009

Sharepoint Data Security Risks

The challenges of securing data on Microsoft SharePoint sites, lists, pages and the information made available through data-links to backend systems (through BDC and manually created data-links).

Introduction

Microsoft Office SharePoint Server 2007 (MOSS) and Windows SharePoint Services 3.0 (WSS) give companies the opportunity to gather data from many sources and publish it in a central location for users to access. But what do SharePoint administrators need to consider to make sure confidential information is not made available to everyone?

In this article we focus on the challenges of securing data on Microsoft SharePoint sites, lists and pages, and the information made available through data links to backend systems (through BDC and manually created data connections). The audience for this article is primarily network and server administrators and SharePoint designers/publishers.

Why do we need to consider securing information?

There are several factors to consider when publishing content on SharePoint intranet sites. It can be difficult to control the information available if the structure of the content, and the access permissions to it, are not well planned from the beginning. As the intranet grows, designers and publishers learn how to use SharePoint for team collaboration, document management and dynamic reports. This content is made available to other employees, and here is the key question: what information are they allowed to search for and read? SharePoint, as the place to centralize and structure information, really supports employees' and teams' work processes, but it can also become a data security breach if it is not secured correctly.

Let’s begin with an example scenario:

Toy Company A makes key performance indicators (KPI) available on their SharePoint site to show the executives how their department performs financially on a web page. The designer creates a data-connection to the financial database to extract the data. The executives have a blog on the same site where they comment on the KPI every week.

In this example the designers have to secure the financial data connection so that only the required information is extracted from the database, and make sure that only the executives can access the data and the blog. The executives must understand the importance of this security configuration and check that the policy is followed. In the worst case, the designers use BDC with a full-access account to the financial database and make it searchable, without limiting site access to the executives. In that case every user on the intranet can search for and read the financial data and the blog comments.

Making sure that the intranet is a secure place is a task that must be well planned. If every person from the architect to the end user is aware of this and understands why and how to secure the content (and follow the policies), the intranet is a safe place for your data.

How can users get away with company data?

When we are talking about data security risks, the obvious question is: “how can we avoid people seeing, or even getting a copy of, our confidential information?” Well, today it is very hard to be one hundred percent sure that no one gets a copy and takes it outside the company. Oh yes, it can be done, but how many companies around the world have such restrictive security policies? Not many.

The SharePoint infrastructure has a very good “feature” that I like: users cannot see content that is restricted. YES, that is the way we want it! And with Information Rights Management (IRM) implemented we have great user control. But how can data get out of SharePoint and be used elsewhere? Of course a SharePoint backup contains a lot of information, so keep backups in a place with no user access. But if users have read access to the content, they could use…

Internet browser

  • Copy-paste the data to any application.
  • Export the data to an XML file via the URL protocol (owssvr.dll).

Office products

  • Connections and exports can be made to Office applications.
  • The “Connect to Outlook” feature can make data available offline, from where it can be exported.

Other programs

  • Calls can be made to the SharePoint farm, e.g. through Windows PowerShell or other applications.

Data copying could be an issue, and the tools are right in front of the employees. We can hide links and pages from users, but we need to set correct permissions on the lists, items, document libraries, etc. to avoid data copying/loss.
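
The owssvr.dll export mentioned above can be sketched as a simple URL builder. This is a minimal illustration, not a complete client: the site URL and list GUID below are hypothetical placeholders, and the `Cmd`/`List`/`XMLDATA` query parameters follow the classic owssvr.dll display syntax.

```python
# Sketch: how a user with read access could pull a SharePoint list as XML
# via the owssvr.dll URL protocol. Site URL and list GUID are hypothetical.
from urllib.parse import urlencode

def owssvr_export_url(site_url: str, list_guid: str) -> str:
    """Build the owssvr.dll request that returns list contents as XML."""
    params = {
        "Cmd": "Display",      # render the list
        "List": list_guid,     # GUID of the target list
        "XMLDATA": "TRUE",     # ask for raw XML instead of HTML
    }
    return f"{site_url.rstrip('/')}/_vti_bin/owssvr.dll?{urlencode(params)}"

url = owssvr_export_url("https://intranet.example.com/sites/finance",
                        "{11111111-2222-3333-4444-555555555555}")
print(url)
```

Any employee with read access can paste such a URL into a browser, which is exactly why list-level permissions, not hidden links, are the control that matters.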

Pinpoint where to check or tighten the security

Okay, now we know why we need to address security on the intranet. Identifying where this can go wrong is the next challenge – and a big one too – and perhaps a bit more technical. Microsoft SharePoint is kind of a large chunk to swallow at the beginning, but working with the individual parts for some time solves the puzzle one bit at a time. As you can see in Figure 2, I have divided the different data into separate sections, showing how it is connected in the SharePoint structure, with arrows indicating the communication.

Notice the question marks? These places are where the security level must be considered. I will start explaining some considerations I thought of from the top of this diagram.

  • When a user accesses the SharePoint intranet: What type of authentication must be used? Should the traffic be encrypted with SSL?
  • SharePoint data is made up of, for example, lists, pages and document libraries. Should access to some or all of these be restricted at different levels (No Access, Readers, Publishers/Editors, Administrators)? Are the administrators of the sites and the Shared Services Provider (SSP) the correct ones?
  • Custom pages can contain manually configured data connections. Ensure correct permissions are set on the pages/files. See “External data sources” below if custom connections are used.
  • Custom solutions can contain code that accesses many areas. Install only custom solutions you trust, and choose the correct security level when installing the solution (see link section). See “External data sources” below if custom connections are used.
  • Search crawls: The default content access account has full read access to all web applications in the farm. Manually configure read access for the following: SharePoint sites outside the server farm, Business Data Catalog applications, web sites, file shares, Microsoft Exchange Server public folders and Lotus Notes.
  • Access to Business Data Catalog (BDC) data: Choose the correct security access level for users to ensure confidential information is not exposed to everyone, and choose the correct authentication method for your BDC connection (see link section). If you configure search crawls, consider who has access to the crawled data.
  • Custom database connections can be made by the designers. Make sure the connections are available only to the employees that need the information.
  • External data sources: Do the data connections use other credentials? Can the used credentials access more than needed? Use Pass-through/Single Sign-On authentication if possible. If RevertToSelf is used, remember this option uses the Application Pool account to access the data source.
  • Service accounts: Only use least privilege rights for your service accounts (see link section).
  • WSS/MOSS servers: Secure the server with antivirus for the OS and SharePoint. Apply security patches when needed. Use a firewall to limit the risk of attacks. Secure the servers physically. Keep the Central Administration port a secret from non-administrators.
  • MOSS SQL database server (if not on the same server as WSS/MOSS): Secure the server with antivirus for the OS and SQL Server. Apply security patches when needed. Use a firewall to limit the risk of attacks. Secure the servers physically. Use a SQL alias and non-standard ports for communication – especially if a DMZ is used.
  • Network communication: Encrypt the communication between servers if possible.

Use the figure and list above to check the health of your intranet and discuss the decisions made for your SharePoint farm. This is not a complete checklist, but see it as a guideline for a more secure farm. Small and medium-sized companies tend to tighten security only on the box in Figure 2 called “SharePoint Data”, but of course this will vary from installation to installation.
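
One item from the checklist above, checking who can read confidential lists, can be sketched as a tiny audit helper. This is a hypothetical model, not a SharePoint API: the grant triples, group names and permission levels below are illustrative stand-ins for what you would export from your farm.

```python
# Sketch: flag confidential resources that grant access to broad groups --
# the "worst case" from the Toy Company A example, where every user can
# read the financial data. All names here are hypothetical examples.

BROAD_GROUPS = {"Everyone", "All Authenticated Users"}

def find_overexposed(grants, confidential):
    """grants: (resource, principal, level) triples; confidential: set of resources."""
    return sorted(
        resource
        for resource, principal, level in grants
        if resource in confidential
        and principal in BROAD_GROUPS
        and level != "No Access"
    )

grants = [
    ("Finance KPI list", "Everyone", "Read"),        # the breach to catch
    ("Finance KPI list", "Executives", "Contribute"),
    ("Team wiki", "Everyone", "Read"),               # fine: not confidential
]
print(find_overexposed(grants, {"Finance KPI list"}))
```

Running this prints `['Finance KPI list']`: the confidential list readable by everyone, which is exactly the configuration the checklist is meant to catch.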

Summary

In this article we covered why we should control access to data and what scenarios to avoid. We looked into how we can pinpoint security breaches and examined the security considerations for specific parts of a SharePoint intranet farm.

Thursday, 25 June 2009

The Difference Between Application and Session Layer Firewalls

In today’s application-centric, interconnected environments, the next generation of firewalls (application layer firewalls) is required to reduce the attack surface area.

The story so far: in a time long ago, man first used trees and logs to protect livestock within the village. Many potential threats, like lions and external tribesmen, were deterred but not stopped. As technology improved, the nomads became farmers and fences were developed that were made out of stone; these fences were not only superior to the wooden logs but were harder to circumvent. Eventually entire villages were at the centre of the fortress, and the high fortress walls were able to keep the livestock and population safe.

In the beginning

The same is true for firewalls; in the beginning only routers with access lists were available because that is all that was required. Managing a network using only access control lists and some basic filtering was more than enough protection for deterring unauthorised users. This was the case because routers were at the heart of every network and more specifically these devices were used to route traffic to and from WAN connections like branch offices and the Internet.

The fact is, very little has changed with regards to routers other than some slight modifications to the way they filter traffic and the organisations that manufacture these devices have focused on increasing security up to the layer that these devices are capable of performing at. What am I saying? A fence built out of logs will always be a fence made of wood, not as good as stone.

Session layer firewalls are also known as circuit level firewalls or circuit gateways. They have the following features: they operate at the TCP layer of the OSI model; they typically use NAT (Network Address Translation) to protect the internal network; and they have little or no connection to the application layer, and thus cannot filter more complicated connections. These firewalls are only able to filter traffic on a basic rule base, such as source, destination and port.

Next generation

As technology has developed, the need to govern access to the outbound networks was required. Users were able to browse the internet and take advantage of the log-built fence weakness because they could bypass the fence by just pretending to be sheep when in fact they were wolves in sheep’s clothing.

This meant that a user could easily bypass the Layer 5 device’s security by telnetting to a port that was open outbound but was not the telnet port (telnetting to port 80 instead of port 23). The router with access lists would allow the user to connect even though the port belonged to another service. This meant that the router was not inspecting the packet (the sheep) as it passed through the fence. The router was only doing a simple inspection: if it looks like a sheep and is leaving the barn to go out into the field, then I will let it through. So the wolves could easily roam amongst the sheep. This technology was implemented in both directions, and in the 90s this was the state of most firewalls.
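
The bypass can be demonstrated in a few lines: a raw TCP client can speak any protocol to any open port, and a filter that matches only on source, destination and port cannot tell the difference. The sketch below stands up a throwaway local server (in place of a real host behind "port 80") and talks to it with a bare socket, the modern equivalent of telnetting to the wrong port.

```python
# Sketch: why port-based filtering is weak. A raw TCP client ("telnet")
# can speak HTTP, or anything else, to whatever port happens to be open.
import socket
import threading

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # any free port; stands in for an open "port 80"
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

# The "telnet" side: a bare TCP connection, no HTTP library involved.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.0\r\n\r\n")
chunks = []
while True:
    data = client.recv(1024)
    if not data:
        break
    chunks.append(data)
reply = b"".join(chunks)
client.close()
t.join()
srv.close()
print(reply.split(b"\r\n")[0].decode())   # the banner the "wrong" client got
```

A Layer 5 filter sees only the TCP connection; an application layer firewall would also inspect the bytes exchanged and could refuse a session that does not match the expected protocol.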

In the late 90s mainstream proxy servers came on the scene that incorporated basic firewalling technology. These “proxy firewalls” were able to intercept the traffic between the source and the destination, subject and object and because the “proxy firewall” is in the middle it has the ability to inspect the packets against predefined rule sets that have more restrictive components.

More about the technology

Session layer firewalls operate at Layer 5 of the OSI model. This was enough protection for a network in the 90s, but as attacks developed into application level attacks, and as the internet grew and hosted code became more sophisticated, session layer firewalls are no longer adequate. With a firewall lacking an application layer protection mechanism, any misconfiguration and operating system vulnerability is directly exposed to the Internet, because all the session layer firewall is able to provide is a routing table and an access control list as a basic level of protection.

Small advances in session layer firewalls enable the firewall to inspect traffic at a deeper level for common protocols, but these measures are easily bypassed with tools like Metasploit and BackTrack. In today’s online environment the only option is to install an application layer firewall that does more than ACLs and source/destination/port matching. Deeper packet inspection, stateful connection management and application layer filtering are vital components when interacting with modern applications. For this reason, organisations that are serious about security would not consider session layer firewalls (routers with access lists) over application layer firewalls.

Third generation firewalls are known as application layer firewalls or proxy firewalls. This type of firewall has the capability to proxy in both directions, thus protecting both the subject and the object from ever coming into direct contact with each other. The proxy mediates the connection and is thus able to filter and manage the access and content to and from the object or subject. This can be enabled in various ways, with integration into already existing directories, like LDAP, for user and user group access.

The application layer firewall is also able to emulate the server that it is exposing to the internet, so that the visiting user experiences a faster, more secure connection. In fact, when the user visits the published server, the user is actually visiting the Layer 7 firewall’s published port, and the request is inspected and then parsed through the rule base for processing. Once the request passes the rule base and matches the respective rule, it is passed on to the server; the difference is that this connection can be served out of cache, improving both the performance and the security of the connection.
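
The cache-serving idea can be shown in miniature. This toy sketch (not a real proxy, just the caching logic) fronts a hypothetical backend function standing in for the published server: only cache misses ever reach it, so repeat requests are answered without touching the server at all.

```python
# Sketch: a cache-fronted proxy in miniature. The backend callable stands
# in for the real published server behind the Layer 7 firewall.
def make_proxy(backend):
    cache = {}
    calls = {"backend": 0}
    def proxy(path):
        if path not in cache:
            calls["backend"] += 1           # only cache misses hit the server
            cache[path] = backend(path)
        return cache[path], calls["backend"]
    return proxy

proxy = make_proxy(lambda p: f"<html>{p}</html>")
print(proxy("/index"))   # first request: served by the backend
print(proxy("/index"))   # repeat request: served from cache, backend untouched
```

In a real application layer firewall the same principle improves security as well as speed: the published server sees fewer direct requests, and every request it does see has already passed the rule base.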

Figure 1

The above diagram depicts the OSI model, Layer 5 is the Session layer, and Layer 7 is the Application layer. The layer above the application layer is referred to as Layer 8 and this is typically the layer that houses the Users and Policies.

OSI summary

Put simply, the OSI model is a layered model of network architecture. This model governs how two systems that are interconnected communicate.

The top layer (application layer) is typically the layer at which “proxy based firewalls” operate. Application layer firewalls are third generation firewalls; these firewalls scan down to the layers below. Compared to a session layer or circuit layer firewall, the application layer firewall incorporates the features of the session layer firewall plus other, more advanced features, like reverse proxy for secure website publishing.

In the future

Today attacks are already so advanced that most session layer firewalls do not even stop the most basic application attacks. For this reason older Layer 5 based firewalls need to be complemented or replaced with more secure “application layer firewalls”. This is also why PCI DSS only allows these types of firewalls when protecting credit card information.

Summary

No matter how much we hang on to our old habits and old technology, newer, improved methods of firewalling are here. The Internet is collapsing into ports 80 and 443, and security professionals are being challenged by users who learn how to encrypt their traffic to avoid management. The solution is to implement application layer firewalls that are able to scan within encrypted streams. On the outside, more structured application level attacks are being crafted on a daily basis; the only way to deal with such threats is to use newer, more sophisticated application layer firewalls.

Wednesday, 24 June 2009

Secure Socket Tunneling Protocol

SSTP (Secure Socket Tunneling Protocol) and the VPN capabilities it will offer in the future

The article will give a clear understanding of SSTP and compare standard VPN vs SSTP VPN. The article will also cover the advantages of utilizing both SSTP and VPN simultaneously and what the benefits of using SSTP will be.

VPN

A virtual private network, also referred to as a VPN, is a network constructed over public wires to join nodes, enabling users to create networks for the transfer of data. These systems use encryption and various other security measures to ensure that the data is not intercepted by unauthorized users. For years VPNs have been used successfully, but they have recently become problematic due to the increase in the number of organizations encouraging roaming user access, and alternative measures have been sought to enable this type of access. Many organizations have begun to utilize IPSec and SSL VPNs as alternatives. The other new alternative is SSTP, also referred to as ‘Microsoft’s SSL VPN’.

Problems with typical VPN

VPNs typically use an encrypted tunnel that keeps the tunneled data confidential, but when the tunnel routes through typical NATed paths, the VPN tunnel stops working. VPNs typically connect a node to an endpoint. It may happen that both the node and the endpoint have the same internal LAN address and, if NAT is involved, all sorts of complications can arise.

SSL VPN

Secure Socket Layer, also referred to as SSL, uses a cryptographic system with two keys to encrypt data: a public key and a private key. The public key is known to everyone, the private key only to the recipient. Through SSL, a secure connection between a client and a server is created. SSL VPN allows users to establish secure remote access from virtually any internet-connected web browser, unlike a traditional VPN, so the hurdle of unstable connectivity is removed. With SSL VPN an entire session is secured, whereas with SSL alone this is not accomplished.

SSTP

Secure Socket Tunneling Protocol, also referred to as SSTP, is by definition an application-layer protocol. It is designed for synchronous, back-and-forth communication between two programs. It allows many application endpoints over one network connection between peer nodes, thereby enabling efficient usage of the communication resources available to that network.

The SSTP protocol is based on SSL instead of PPTP or IPSec and uses TCP port 443 to relay SSTP traffic. Although it is closely related to SSL, a direct comparison cannot be made between SSL and SSTP, as SSTP is only a tunneling protocol, unlike SSL. Many reasons exist for choosing SSL rather than IPSec as the basis for SSTP. IPSec is directed at supporting site-to-site VPN connectivity, and thus SSL was a better base for SSTP development, as it supports roaming. Other reasons for not basing it on IPSec are:

  • It does not force strong authentication,
  • User clients are a must-have,
  • Differences exist in the quality and coding of user clients from vendor to vendor,
  • Non-IP protocols are not supported by default,
  • Because IPSec was developed for site-to-site secure connections, it is likely to present problems for remote users attempting to connect from a location with a limited number of IP addresses.

SSL VPN proved to be a more compatible basis for the development of SSTP.

SSL VPN addresses these issues and more. Unlike basic SSL, SSL VPN secures an entire session. No static IPs are required, and a client is unnecessary in most cases. Since connections are made via a browser over the Internet, the default connection protocol is TCP/IP. Clients connecting via SSL VPN can be presented with a desktop for accessing network resources. Transparently to the user, traffic from their laptop can be restricted to specific resources based on business-defined criteria.

SSTP - an extension of VPN

The development of SSTP was brought about by the shortcomings of VPN. The main shortcoming of VPN is its unstable connectivity, a consequence of its insufficient coverage areas. SSTP increases the coverage area of the VPN connection ubiquitously, eliminating this problem. SSTP establishes a connection over secure HTTPS; this allows clients to securely access networks behind NAT routers, firewalls and web proxies, without the concern for typical port blocking issues.

SSTP is not designed for site-to-site VPN connections but is intended for client-to-site VPN connections.

The success of SSTP can be found in the following features:
  • SSTP uses HTTPS to establish a secure connection
    • The SSTP (VPN) tunnel will function over Secure-HTTP. The problems with VPN connections based on the Point-to-Point Tunneling Protocol (PPTP) or Layer 2 Tunneling Protocol (L2TP) will be eliminated. Web proxies, firewalls and Network Address Translation (NAT) routers located on the path between clients and servers will no longer block VPN connections.
  • Typical port blocking is decreased
    • Blocking issues involving connections in relation to PPTP GRE port blocking or L2TP ESP port blocking via a firewall or NAT router preventing the client from reaching the server will no longer be a problem as ubiquitous connectivity is achieved. Clients will be able to connect from anywhere on the internet.
  • SSTP will be built into Longhorn server
  • SSTP Client will be built into Windows Vista SP1
    • SSTP won't require retraining issues as the end-user VPN controls remain unchanged. The SSTP based VPN tunnel plugs directly into current interfaces for Microsoft VPN client and server software.
  • Full support for IPv6: an SSTP VPN tunnel can be established across the IPv6 internet.

  • Integrated Network Access Protection support for client health checks.
  • Strong integration into MS RRAS client and server, with two factor authentication capabilities.
  • Increases the VPN coverage from just a few points to almost any internet connection.
  • SSL encapsulation for traversal over port 443.
  • Can be controlled and managed using application layer firewalls like ISA server.
  • Full network VPN solution, not just an application tunnel for one application.
  • Integration in NAP.
  • Policy integration and configuration possible to help with client health checks.
  • Single session created for the SSL tunnel.
  • Application independent.
  • Stronger forced authentication than IPSec.
  • Support for non-IP protocols, which is a major improvement over IPSec.
  • No need to buy expensive, hard-to-configure hardware firewalls that do not support Active Directory integration and integrated two-factor authentication.


Figure 1.1: The SSTP connection mechanism


How an SSTP-based VPN connection works in seven steps
  1. The SSTP client needs internet connectivity. Once this internet connectivity is verified by the protocol, a TCP connection is established to the server on port 443.
  2. SSL negotiation now takes place on top of the already established TCP connection, whereby the server certificate is validated. If the certificate is valid, the connection continues; if not, the connection is torn down.
  3. The client sends an HTTPS request on top of the encrypted SSL session to the server.
  4. The client now sends SSTP control packets within the HTTPS session. This in turn establishes the SSTP state machine on both sides for control purposes, and both sides now initiate the PPP layer communication.
  5. PPP negotiation using SSTP over HTTPS now takes place at both ends. The client is now required to authenticate to the server.
  6. The session now binds to the IP interface on both sides, and an IP address is assigned for the routing of traffic.
  7. Traffic can now traverse the connection being either IP traffic or otherwise.
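
The seven steps above can be sketched as a toy state machine. To be clear, this is a simplified illustration of the sequencing, not the real SSTP state machine from the protocol specification; the state names and the two failure checks (certificate validation at step 2, PPP authentication at step 5) mirror the steps as described.

```python
# Sketch: the seven SSTP connection steps as a toy state machine.
STEPS = [
    "TCP_CONNECT_443",   # step 1: TCP connection to the server on port 443
    "SSL_NEGOTIATE",     # step 2: server certificate validated here
    "HTTPS_REQUEST",     # step 3: HTTPS request inside the SSL session
    "SSTP_CONTROL",      # step 4: SSTP control packets, state machine set up
    "PPP_NEGOTIATE",     # step 5: PPP negotiation, client authenticates
    "IP_BIND",           # step 6: IP interface bound, address assigned
    "TRAFFIC",           # step 7: traffic traverses the tunnel
]

class SstpHandshake:
    def __init__(self):
        self.state = -1                       # nothing established yet
    def advance(self, cert_valid=True, auth_ok=True):
        nxt = STEPS[self.state + 1]
        if nxt == "SSL_NEGOTIATE" and not cert_valid:
            raise ConnectionError("invalid server certificate: tearing down")
        if nxt == "PPP_NEGOTIATE" and not auth_ok:
            raise ConnectionError("PPP authentication failed")
        self.state += 1
        return nxt

hs = SstpHandshake()
trace = [hs.advance() for _ in STEPS]
print(trace[-1])   # reaches "TRAFFIC" only if every earlier step succeeded
```

The point of the model is the ordering: no traffic flows until the certificate check and the PPP authentication have both passed, which is why an invalid certificate tears the connection down at step 2 rather than later.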
Microsoft is confident that this protocol will help alleviate VPN connection issues. The RRAS team is now readying RRAS for SSTP integration, and the protocol will be part of the solution going forward. The only prerequisite at present is that the client runs Windows Vista SP1 and the server runs Longhorn Server. The feature set provided by this little protocol is both rich and flexible, and the protocol will enhance the user and administrator experience. I predict that devices will start to incorporate this protocol into the stack for secure communication, and the headaches of NAT will soon be forgotten as we move into a 443/SSL incorporated solution.

Conclusion

SSTP is a great addition to the VPN toolkit to enable users to remotely and securely connect to the corporate network. Blocking of remote access and NAT issues seem to be forgotten when using this protocol and the technology is stable, well documented and working. This is a great product and it is very welcome in this time of remote access.

Biometrics and You!

Most of us will remember watching television or movies not that long ago showing a pretty neat technology: people having their identity verified via a facial scan. That looked very high tech, and also like it belonged in science fiction books. Well, what was very novel a few years ago is now very much in the realm of the possible, and one could argue commonplace. There is far more to biometrics, though, than a facial scan. Other biometrics exist, such as the now more common thumbprint scanner on some laptops. These two methods of biometric identification are not the only ones, though. You may also have seen or heard of retina scans, iris scans and voice recognition, amongst others. What do all of these methods have in common? Well, each method generates a unique identifier based on the biometric used. Everyone’s voice, retina, iris and thumbprint is actually unique, and can therefore be used as an identifier. For some high security installations a combination of biometric methods is used to identify individuals seeking access to restricted areas.

Why or how did biometrics come into existence? Well, this technology in all of its various implementations was born out of a need to have a highly secure means of identifying someone. Whether it be the military, government organizations, banks, or others, there exists a very real need to be 100% sure that the person is who they say they are. This is especially true amidst the increase of targeted computer attacks against individuals, which can result in key-loggers being installed on a person’s computer. Quite often it is easiest to target the company or government worker at his home computer rather than the hardened corporate/government network. While the attacker may have the person’s username and password, they will not, however, have their thumbprint or other biometric fail-safe.

This is already being seen in what is now called “three factor authentication” schemes that have improved upon the well known “two factor authentication” methods in use today. Having a third authentication factor that is a biometric is very much a vast improvement in safeguarding access to sometimes extremely sensitive data. It is not only the military and government that have very sensitive data, but also private sector areas such as pharmaceutical companies and banks. The time has indeed come for the advent of biometrics, as an additional form of authentication.

How does it affect me?

Well, there are vendors now actively marketing their products with onboard biometrics technology. One of those vendors is IBM with their ThinkPad laptop series. For many people and companies, laptop theft is very much a concern. The contents of the laptop can be, as mentioned earlier, highly classified data. You would not want an unknown entity to have access to it. Having a laptop with onboard biometric technology is, or can be to some, a very desirable solution. Just remember that, like any security solution, you would be best to layer your defences.

The thumbprint reader on laptops and desktops is the most visible biometric security solution in production networks today. Is it a failsafe solution though? Hardly – there have already been concerns over the simplicity of lifting someone’s thumbprint off of a glass, or a piece of gum for that matter. The funny thing is that if you ask many of the vendors of these biometric solutions, they will tell you that they are not security devices in and of themselves. Think about it now: all it really takes is for an attacker to simply crack open the laptop or desktop and extract the hard drive. With this low tech attack the biometric access control has been neatly sidestepped. You really must, as mentioned, use this biometric technology with other layered security solutions. Encrypting the entire hard drive comes readily to mind, whether through a power-on or power-off solution.

I thought it was foolproof!

Well, we can see from the above paragraph that biometrics are not the all-encompassing security control that some may think they are. In actuality they are quite weak; again, take the example of simply extracting the hard drive rather than authenticating via the thumbprint scanner. Where this technology does help is that it is another hurdle an attacker must bypass or compensate for. Attackers will always go for the low hanging fruit and typically shun hardened targets. Each layer counts! There are other biometrics, of course, beyond the talked-about thumbprint. The problem with them is that they are not that portable, or cheap for that matter. Also, there is not a mass market for them, and so they remain very much what I would call a maturing technology.

What about identity theft?

The use of biometrics also raises other privacy concerns that are, in my mind, rather well founded ones. If the use of biometrics really took off, and for some proposed technologies it might, does that mean there will be a database of digital data? By that I mean will there be a central repository of thumbprints somewhere held by some company as a means of authenticating customers? That database would be very hard to resist for malicious hackers, and you could say even worse, the government. Intelligence and police agencies would be salivating to get their hands on such a vast store of unique identification files. This is especially true in light of anti-terror legislation giving the government what would have been unthinkable powers several years ago. Again we also have that most pernicious and highly skilled threat of black hat hackers. Such a repository of information may not be of use right now, but may very well become so in the near future.

Is it worth it?

With all of the concerns over the possible security and identity issues within the field of biometrics, is it worth bothering with at all? Well, for that you really need to quantify what you consider to be manageable risk. What level of risk are you willing to accept, and are you really exercising due diligence? The answer to that really takes a bit of thinking and common sense. Firstly, you would need to see if it helps you in practicing due diligence in the effort to secure your data or your customers’ data. Secondly, it really is not a bad idea to have if you are working with sensitive data. Any step that you can take to help further safeguard your data is a good one. Remember, not every attacker is a high tech wonder boy. Quite often laptops are simply stolen for their resale value, and one of the first things done is to tear out and throw away the hard drive. It is my opinion that biometrics are definitely here to stay and are not going away any time soon. You would be well advised to see if they do indeed fit into your corporate or personal security plans. After all, as security professionals, we should objectively evaluate any new technology that crosses our path. I sincerely hope that this article was of interest to you and, as always, I welcome your feedback. Till next time!

Security Zoning for Virtualized Environments

An important consideration when assessing the security of a virtualized environment: network security zoning.
Author: Thomas Shinder

Introduction

There is a reason why virtualization was the last Big Thing (the current Big Thing is cloud computing, which has dependencies on virtualization). You can use client and server virtualization technologies to reduce the size of your datacenter and client footprints, you can consolidate clients and servers to reduce your overall power requirements and energy expenditures, and you can put a stop to rack space issues by consolidating servers from multiple physical machines onto far fewer virtualization hosts. Virtualization is a success because it solves real world problems and it works.

However, there is one area that virtualization does not address: security. Virtualization technologies are not security technologies. In fact, virtualization introduces security issues that mirror those in physical environments. Security becomes even more important in virtualized environments because of the potential multiplicative effects of compromised virtual servers.

For this reason you need to be even more mindful of core security concepts and their implementation. One of the most important concepts that you need to apply in all networks, and especially in virtualized client and server networks, is that of security zoning. A security zone is a collection of resources that share a common security exposure or security risk. There are several ways to characterize security zones, such as:

  • All members of the same security zone share a common risk exposure
  • All members of the same security zone share similar value to the organization. High value assets are never in the same security zone as low value assets
  • Internet facing hosts are always in a different security zone than non-Internet facing hosts
  • A compromise in one security zone should not be able to lead to compromise in other security zones; damage to the compromised security zone should be isolated to that zone and not impact other zones
  • Security zones must be separated by physical or logical segmentation; access control devices or software must be used to enable least privilege access between members of different security zones. You could use a firewall to create physical segmentation, or advanced software methods such as IPsec to create virtual network segmentation
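As an illustration of the least privilege principle in the last rule above, inter-zone access can be modelled as an explicit allow-list of permitted flows, with everything else denied by default. This is only a sketch; the zone names and permitted flows are hypothetical examples, not a recommendation for any particular network:

```python
# Hypothetical zone-to-zone access policy: only listed flows are permitted
# (default deny), reflecting least privilege access between security zones.
ALLOWED_FLOWS = {
    ("internet_edge", "network_services"),   # e.g. published services
    ("client_systems", "network_services"),  # clients consume services
}

def is_flow_allowed(src_zone: str, dst_zone: str) -> bool:
    """Traffic within a zone is unrestricted; traffic between zones
    must match an explicitly allowed flow."""
    if src_zone == dst_zone:
        return True
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# A firewall or IPsec policy sitting between the zones would enforce this:
assert is_flow_allowed("client_systems", "network_services")
assert not is_flow_allowed("internet_edge", "client_systems")
```

In practice the enforcement point is the physical or virtual firewall between the segments; the point of the sketch is only that the policy is an explicit, auditable list rather than an open network.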

Security zoning and segmentation should be considered in virtual as well as physical environments. To illustrate some examples of how you can do this, let’s look at how you might address segmentation in a simple three zone configuration:

  • Internet edge security zone
  • Client systems security zone
  • Network services security zone

The figure below shows an example of how a simple server consolidation focused virtualization project might be set up. There is a single virtual server, and this virtual server hosts the firewall, the domain controller, the mail server and the file server. These virtual machines are connected to the same physical network as the client systems.

This is a poor security model because:

  • Internet facing virtual machines are co-located on the same virtual server as the network services virtual machines. A break-out in the Internet facing firewall virtual machines could have negative impact on network services machines, which belong to a different security zone
  • Client systems are on the same physical network as the network services virtual machines. A break-out on the client systems segment could have an adverse effect on the network services virtual machines. Client systems need to be segmented away from the network services virtual machines and their security zone

While this is a common design in simple server consolidation projects, it’s an exceptionally poor design from a security point of view. Let’s see what we can do to improve this situation.


Fig 1

Figure 1

The figure below shows a slightly better security configuration. This design adds a second virtual server. The first virtual server hosts only edge security devices; this effectively segments the Internet facing firewalls away from non-Internet facing hosts, thus successfully segmenting the firewall security zone away from the network services security zone. Should the virtual firewalls, or the virtual server on which they run, be compromised, there is reduced risk that a break out on these devices will have an adverse impact on the virtual machines located on the second virtual server.

The second virtual server hosts only virtual machines that belong to the network services security zone. However, the client systems are still on the same physical network as the network services virtual machines. This is not an optimal configuration because if there is a break out in the client systems security zone, there are no access control or security devices in between these security zones to limit the potential impact of the breakout.

So while this design is better than the first one where virtual machines from all security zones are co-located on the same virtual server, there is still more we can do to create a secure configuration.


Fig 2

Figure 2

The figure below shows an improvement in the design, where all security zones are segregated from one another. The virtual server on the edge of the network contains only a firewall array. Notice that in order to create such a robust design, you’ll need to use a “software” firewall. With virtualization taking front stage in so many networks, it’s only a matter of time before edge firewalls are routinely virtualized on edge virtual servers. You can do this now with virtual offerings from a number of vendors, such as Check Point firewalls or Microsoft ISA or TMG firewalls. There are also a number of Linux based virtual firewalls available.

We still have a second virtual server, but notice that the network services virtual server is physically segmented from the client systems network by the edge virtual server hosting the firewalls. In this example the edge system could host virtual firewalls that are multi-homed and thus connect to the network services physical network and the client systems physical network, or you could create a more complex firewall environment on the edge virtual server, where separate firewalls connect systems on the client systems network to the network services network.

The key here is that there is some type of physical or logical (or both) segmentation between systems that belong to different security zones. The Internet is isolated from all internal networks, the network services virtual machines are isolated from the client systems and the client systems are isolated from the network services virtual machines.

With this design, virtual machines belonging to different security zones are placed on different physical virtual servers. Non-virtualized assets are physically or logically isolated using network choke points, such as the virtualized firewalls on the edge virtual server.


Fig 3

Figure 3

As virtualization technologies advance, you will likely find that client side virtualization becomes more popular. One way of virtualizing the client side is to create a “virtual desktop infrastructure” (VDI). There are a number of ways to approach VDI; one is to host multiple distinct virtual client systems on a virtual server. Users then connect with thin clients to these dedicated client virtual machines. This approach has an advantage over “presentation virtualization” (the terminal services client experience) in that users actually connect to full operating systems on virtual clients, instead of the somewhat watered down client side experience that terminal services clients often see.

In the figure below you can see that we have added a third virtual server that hosts the VDI. Many client systems are installed on this virtual server and users with thin client systems can be located anywhere, since the thin client has almost no attack surface due to the OS being located on the virtual server, not on the thin client. In this example think of the operating system contained on the client virtual machines as being “streamed” to the thin client.

To optimize the security of this infrastructure, we still need to consider security zoning. To help ensure proper segmentation of the identified security zones, you will need to deploy a third virtual server for the VDI and segment that virtual server from the other security zones, as seen in the figure below. Again, the take home message is that resources belonging to different security zones must not be hosted on the same virtual server, and that the virtual servers hosting virtual machines belonging to different security zones need to be physically or logically isolated from one another.
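The take home rule above can even be checked mechanically: given a record of which zone each virtual machine belongs to and which virtual server it runs on, flag any host that mixes zones. A minimal sketch, with hypothetical VM and host names chosen to mirror the figures:

```python
def find_zone_violations(placement, zones):
    """placement maps VM name -> virtual server; zones maps VM name -> zone.
    Returns the set of virtual servers hosting VMs from more than one zone."""
    zones_per_host = {}
    for vm, host in placement.items():
        zones_per_host.setdefault(host, set()).add(zones[vm])
    return {host for host, z in zones_per_host.items() if len(z) > 1}

zones = {"firewall": "internet_edge", "dc": "network_services",
         "mail": "network_services", "file": "network_services"}

# Figure 1's layout: everything on one virtual server -> violation flagged.
bad = find_zone_violations({vm: "vs1" for vm in zones}, zones)
# Figure 2's layout: edge device on its own virtual server -> clean.
good = find_zone_violations({"firewall": "vs1", "dc": "vs2",
                             "mail": "vs2", "file": "vs2"}, zones)
assert bad == {"vs1"} and good == set()
```

A check like this could run against an inventory export as part of a change review, so that a consolidation project does not quietly co-locate zones again.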


Fig 4

Figure 4

Summary

In this article we examined a key consideration when assessing the security of a virtualized environment: network security zoning. In many virtualization projects, administrators will focus on designing the virtualization architecture and forget that virtualization is not a security technology, and that the same principles of security employed on physical networks need to be enacted on virtual networks. We saw in this article that one way to deal with this situation is to identify the different security zones on your network and then co-locate virtual machines belonging to the same security zone on the same virtual server. In addition to avoiding mixing virtual machines that belong to different security zones on the same virtual server (or virtual cluster), we also pointed out the importance of placing network security devices that perform access controls between different security zones, so that a compromise in one zone does not impact assets in other security zones. Going forward, you’ll need to consider VDI and other client virtualization technologies, and how to isolate the virtual clients from resources that belong to higher security zones.

Tuesday, 23 June 2009

More VOIP, More Security: What needs to be done when securing VOIP

How to implement a VOIP solution whilst abiding by a security framework, and the challenges that we can expect when implementing VOIP.

VOIP adoption seems to be speeding up as bandwidth and hardware become more affordable and as legacy systems are displaced. IT professionals are now becoming telecoms experts, and in turn needing to pay more attention to VOIP. This move towards integrating voice into IT/IS systems has significant security implications that need to be understood so that reasonable countermeasures can be implemented. In this article we will cover how to implement a VOIP solution whilst abiding by a security framework, and the challenges that we can expect when implementing VOIP.

VOIP today
As technology moves forward, the prices of solutions drop and more features are bundled with them; and the more people and organisations adopt these solutions, the cheaper and easier the solutions are to implement. Unfortunately security takes time to catch up, and it is often retrofitted onto the solution as an afterthought. This is true for solutions that incorporate VOIP. For this reason it is important to work with a security framework that secures the implemented solution. Devices today are computationally powerful enough to handle present and future encryption ciphers, making encryption a real possibility.

Authentication
Before a user can use the VOIP service they will need to authenticate and identify themselves to the service. This process seems simple enough, but there are some factors that need to be understood and some challenges that need to be overcome. The authentication mechanism should be structured so that first the device is identified and authenticated, and then the user; this can be achieved by quarantining any device prior to the authentication phase. Once the device is identified and authenticated, it can be logically moved to the production communications VLAN. At this point a security policy on the switches can enforce encryption, meaning that any credentials transacted by a user remain in an encrypted, thus secure, state.

When it comes to authentication, identity management goes hand in hand. Utilising existing authentication directories like AD or other LDAP-type directories is important. This way the existing investment is leveraged and a unified, already existing mechanism is used, saving time and improving security; vendors are working hard to make this secure.

Confidentiality
To address confidentiality, the technical control is encryption; this will ensure that communications are kept secure and that unauthorised users are not able to eavesdrop on the communications. The complexity manifests when the traffic traverses multiple gateways that are not controlled by your organisation. This is why it is important to leverage a technology that complies with standards and that is easily configurable. This will ensure that the VOIP packet remains encrypted for the packet’s lifetime.

One thing to look out for is packet-size expansion, which results in latency (delays); this can happen if you select the wrong encryption type for the mode of transport.
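The expansion effect is easy to quantify. A block cipher mode that pads (such as CBC) rounds every payload up to a multiple of the block size, while a stream or counter-mode cipher adds no padding at all. A rough sketch of the arithmetic, assuming PKCS#7-style padding with a 16-byte block size and a 160-byte voice payload (both figures are illustrative assumptions):

```python
BLOCK_SIZE = 16  # AES block size in bytes

def padded_size(payload_len: int) -> int:
    """Size after PKCS#7-style padding: rounds up to a full block, and
    adds a whole extra block when the payload already fits exactly."""
    return (payload_len // BLOCK_SIZE + 1) * BLOCK_SIZE

payload = 160  # illustrative voice frame size, in bytes
cbc_overhead = padded_size(payload) - payload  # 16 extra bytes per packet
ctr_overhead = 0                               # counter mode adds none
assert (cbc_overhead, ctr_overhead) == (16, 0)
```

A few extra bytes per packet sounds trivial, but voice sends many small packets per second, so per-packet overhead adds up quickly; this is one reason counter mode is attractive for VOIP.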

Protocols
Protocols like RTP, SRTP, ZRTP and MIKEY are common in modern VOIP deployments; these protocols are secured using AES encryption (in counter mode).

SIP, or Session Initiation Protocol, has quickly been gaining adoption as the preferred VOIP protocol since early 2005. Using SIP clients, users can create an identity bound to a SIP server; this identity can then be used to log on to the server and use it as a gateway to route calls both internally and externally. The neat thing is that this can be achieved from anywhere, locally and remotely, making remote working a possibility. Leveraging the security mechanisms proposed in the NISC framework, IT professionals are able to offer these features to their user base. Users are then able to securely log on and communicate using SIP clients (soft phones) as if they were based at the office. Vendors like Cisco, Mitel, Avaya and many others have developed, and are developing, SIP based solutions.

Key management is always something to pay attention to when dealing with encryption. SRTP lightens the key management overhead, as a single master key can provide keying material for confidentiality and integrity protection, both for the SRTP stream and the corresponding SRTCP stream. In some instances a single master key can protect several SRTP streams.
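To illustrate how one master key can yield separate keying material for confidentiality and integrity, here is a simplified sketch. Real SRTP derives its session keys with AES in counter mode as a PRF (RFC 3711); the HMAC-based derivation and the label strings below are illustrative assumptions only, showing the principle rather than the actual SRTP algorithm:

```python
import hmac, hashlib

def derive_key(master_key: bytes, label: str, length: int = 16) -> bytes:
    """Derive a session key for a given purpose ('label') from one master key.
    Different labels yield independent-looking keys from the same secret."""
    return hmac.new(master_key, label.encode(), hashlib.sha256).digest()[:length]

master = b"\x00" * 16  # placeholder master key for the example only
srtp_enc  = derive_key(master, "srtp-encryption")
srtp_auth = derive_key(master, "srtp-authentication")
srtcp_enc = derive_key(master, "srtcp-encryption")
assert len({srtp_enc, srtp_auth, srtcp_enc}) == 3  # all three are distinct
```

The operational benefit is that only the single master key needs to be exchanged and stored; every per-purpose session key is recomputed on demand at both ends.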

New protocols like RSIP (Realm-Specific IP) will help in the future to resolve some of the complexities of the NAT/IPsec challenges. IP Next Layer (IPNL) is another solution that provisions a clear tunnel between the communicating hosts. These will make future communications more secure, quicker and more efficient.

Networks
Building a secure VOIP network requires insight into VOIP hardware and software, and the complexity involved lends itself to the possibility of compromise. For simplicity’s sake it is a lot easier to separate the VOIP network out into its own isolated network, only linking elements of that network to the corporate data network where necessary and through secure means like application layer firewalls. This ensures that any soft spots on the VOIP network are not easily exploitable and that strong access control mechanisms are in place to manage the traffic flow between your corporate LAN and your VOIP network.

Network-to-network VPNs are essential to ensure that inter-network traffic remains secure and retains its integrity.

Wireless
Wireless has received a lot of attention where security is concerned, and with good reason. Encryption of VOIP over wireless is mandatory; without it a compromise is virtually guaranteed. IPSec is a fair countermeasure when securing wireless.

Devices
Devices and servers need to be kept physically secure from unauthorised use. Logical security is also important as now the solution lends itself to remote exploitation. Your phone account is a resource like any other and there are already many horror stories of how accounts have been abused by remote exploitation. Because of unified identity management it is important to ensure that your users are changing their passwords on a periodic basis as described in your security policy (administrative control).

Recently a UK television programme demonstrated how office phones were modified by scammers so that small wireless cameras could be hidden inside them to capture the users’ credentials.

Another common attack is to impersonate the server in order to capture the user’s credentials; once this occurs, the user is redirected to the legitimate server. The countermeasure is to protect the legitimate server physically and logically, and to require the server to authenticate to the user or client software before the user authenticates to it.

Messaging and storage
Often overlooked is the messaging and storage component of any VOIP solution. In previous years a common attack was to log on to someone’s voice mail remotely by typing in the default network password of 1234 or 0000. This would give access to voice mail, and in fact an element of remote control is possible with this feature. The countermeasure is to force a change of password before the service is used; this will ensure secure operation of the service. The storage can also be attacked, both physically and logically, which would typically leave the solution vulnerable, so countermeasures need to be implemented to mitigate any such attacks.

Summary
When considering a VOIP security solution it is vital to comply with existing standards. VOIP technologies are still evolving; security is still being retrofitted and is not part of the solution out of the box. If VOIP solutions are properly implemented, the end result will be a more secure, more reliable, efficient and cost-effective solution that will be around for years to come.

Thursday, 18 June 2009

Virtual Private Networking

The majority of typical VPN-related documents define a VPN as the extension of a private network. However, this definition means little on its own: it only characterizes a VPN in terms of a private network, which itself remains somewhat unclear. VPN is an abbreviation for Virtual Private Network. Everybody knows what a “network” is, making explanations pointless. A private network is one where all data paths are secret to a certain extent, yet open to a limited group of persons, for example the employees of a specific company.

In theory, the simplest way to create such a network would be to isolate it from the Internet. However, for a business with remote branches it is not so simple. Leased lines could be a solution, but a costly one, which would not necessarily ensure the required degree of security. Furthermore, leased circuits suffer from the single-faulty-link syndrome: a connection that goes down from time to time may create a major or minor disaster for a company. Besides, there is sometimes a need to give external users access to part of the resources of a private network, and that would not be possible over a physically separated network. Of course, a solution might be to employ a remote access server, but that would involve additional fees for phone sessions that may also be long distance.

What about the buzzword “virtual”? At present, a VPN is far from being a physically separate structure. It uses the existing infrastructure, encompassing both LANs and WANs, where an IPv6 backbone may be supported as well as any other networking technology, provided it can be seamlessly integrated with today’s technologies. Transfer of data over a public network is accomplished using one of the available tunneling technologies, and all data can be encrypted to boost security.

After this introduction, the definition of a VPN as a dedicated private network based on the existing public network infrastructure and incorporating data encryption and tunneling techniques to provide data security is pretty straightforward.

What are the benefits of using VPN?
There are several reasons to use VPNs. Sensitive data security is undoubtedly a major issue, as well as other matters such as losing passwords, which contrary to popular belief is not the worst possible fate.

For example, if a multi-branch software development company stores its software source files in a central CVS repository, and a rival IT company manages to intercept network traffic between a branch and the CVS server, it may steal some brilliant ideas incorporated in the software. The damages involved in such a case may run to millions of dollars; however, possible lawsuits are rarely as quick and straightforward as expected.

From the economic point of view, creation of a VPN may be less expensive than maintaining leased lines, although the cost of VPN firmware may appear to be enormous. In fact, lost data may turn out to be far more expensive. Costly equipment is rarely needed to implement a VPN. In practice, any Microsoft Windows machine can be used as the VPN client and any Windows 2000 or Windows .NET computer can be configured to be the VPN server.

What protocols can be used over a VPN connection?
The VPN networking concept is based, to a certain extent, on point-to-point links. In VPN networks these are emulated using data encapsulation and tunneling, i.e. the data is wrapped with a header that provides the necessary routing information. In order to enhance data confidentiality and integrity, packets being sent may be encrypted prior to entering the tunnel, and even if intercepted (which is not difficult as they are sent over a public network), they will remain indecipherable without the encryption keys. VPN connections permit users working from home or away on a business trip to obtain a remote, dynamically set-up VPN connection to their company’s Intranet. From the user’s perspective, the VPN is a point-to-point connection with the organisation’s server, which logically operates to some extent like a leased line, or provides support for dial-in VPN connectivity.

The VPN gear can be built around various protocols depending on both the hardware and software capabilities. For Windows 2000 general use VPNs, commonly recognized protocols are PPTP and L2TP, combined with IPSec.

Between these two protocols, PPTP is the older. It is a Layer 2 (OSI) tunneling protocol that encapsulates PPP frames as IP datagrams. For tunnel creation and maintenance, PPTP uses the TCP protocol: the control connection uses an ephemeral (random) client-side port while the PPTP server listens on TCP port 1723. Data packets are encapsulated using Generic Routing Encapsulation (GRE). PPTP encapsulation of payloads proceeds as follows:

  • The payload is encapsulated with a PPP frame.
  • The PPP frame is encapsulated with a GRE header and trailer.
  • The GRE frame is sent as a new payload for a new IP datagram between the client and PPTP server.
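The three steps above amount to successive wrapping of the same payload. A toy sketch of that layering follows; the bracketed header strings are simplified placeholders, not the real PPP/GRE/IP field layouts:

```python
def wrap(payload: bytes, header: bytes, trailer: bytes = b"") -> bytes:
    """Encapsulate a payload by prepending a header (and optional trailer)."""
    return header + payload + trailer

data = b"application data"
ppp = wrap(data, b"[PPP]")               # 1. payload inside a PPP frame
gre = wrap(ppp, b"[GRE]", b"[/GRE]")     # 2. PPP frame inside GRE
ip  = wrap(gre, b"[IP client->server]")  # 3. GRE frame as a new IP payload
assert ip == b"[IP client->server][GRE][PPP]application data[/GRE]"
```

Reading the final byte string from the outside in mirrors what the receiving PPTP server does: strip the IP header, strip the GRE encapsulation, then process the PPP frame.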

Data encryption is a vital part of a VPN. PPTP uses PPP mechanisms to provide data confidentiality. In Microsoft Windows 2000 implementations, the PPP frame is encrypted with MPPE. The keys for encryption are generated from the MS-CHAP or EAP-TLS authentication process, therefore the client must use either protocol for the communications with the VPN server to be encrypted, otherwise all payloads will be sent in plaintext over the tunnel.

PPTP is documented in RFC 2637.
The L2TP protocol is a combination of Cisco Systems’ Layer 2 Forwarding (L2F) and PPTP, using the best of both. L2TP is more flexible than PPTP; its use, however, implies that a more powerful computer is needed than for a PPTP implementation. L2TP is a Layer 2 (OSI) tunneling protocol. It encapsulates PPP frames to send them between the server and the client, and it has been designed to operate directly with various non-IP WAN technologies. Like PPTP, L2TP encapsulates original IP datagrams over the network. Since encryption for L2TP is provided by IPSec, encapsulation is divided into two layers – the initial L2TP encapsulation and the IPSec encapsulation. The process is as follows:

  • The initial payload is encapsulated with a PPP frame.
  • The PPP frame is placed in a new IP datagram encapsulated with a UDP header and a L2TP header.
  • The L2TP-encapsulated payload is then protected with IPSec, i.e. an IPSec Encapsulating Security Payload (ESP) header and an IPSec Authentication trailer (AUTH) are added. In this way, integrity and authentication of messages are provided en route. At this stage, tunneled messages are not yet encrypted; IPSec ESP is the mechanism that provides encryption of the L2TP data. It is possible to have a non-encrypted L2TP connection where the PPP frame is sent in plaintext, but such an insecure solution is definitely not recommended.

L2TP is documented in RFC 2661.
The main differences between PPTP and L2TP are as follows:

  • PPTP requires an IP based network transport layer whilst L2TP only requires that the media provide point-to-point connectivity. So L2TP protocol can be used directly over IP Frame Relay, X.25 and ATM. PPTP cannot support non-IP media directly.
  • PPTP supports only a single tunnel between the VPN server and the client. With L2TP, multiple tunnels can be supported to transport payloads end-to-end. Therefore, multi-tunnel operations are possible with L2TP corresponding to various levels of the Quality of Service (QoS) and security.
  • The L2TP protocol provides header compression mechanisms. When this function is enabled, the L2TP header is smaller than the PPTP header, resulting in lower protocol overhead and better bandwidth efficiency.

PPTP is still more popular than L2TP, and is supported in Microsoft Windows 95, Windows 98, Windows NT 4.0, Windows 2000 and Windows XP/.NET systems. L2TP is supported from Windows 2000 onwards. L2TP implementations for older Windows versions may be available as third party products, though in practice it might be difficult to find such solutions, and for economic reasons upgrading the older OSes to Windows 2000 or XP could be less expensive than such third-party extensions.

Although L2TP and PPTP are the main tunneling protocols used in Windows 2000, a VPN implementation can also be built using the IPSec protocol already mentioned in the discussion of the tunneled payload confidentiality provided for L2TP. IPSec offers a Layer 3 (OSI) data tunneling model through a specific mode, ESP Tunnel mode, which provides strong encapsulation and encryption of IP datagrams sent over a public IP network. With this mode, whole IP datagrams are encapsulated and encrypted using ESP; the result is then encapsulated with a new IP header and the new datagram is sent over the network. Upon receipt of the tunneled datagram, the recipient processes the data-link frame, authenticates the content and sends the data to the destination site.

What security mechanisms are available through VPN?
Authorisation - VPN connections are only created for users and routers that have been authorised. For Windows 2000, authorisation of VPN connections is determined by the dial-in properties on the user account and by remote access policies. If a user or router is not authorised for such connections, the server will refuse them.

Authentication – This is a vital security concern. Authentication takes place at two levels:

  1. Machine-level authentication – when IPSec protocol is used for a VPN connection, machine-level authentication is performed through the exchange of machine certificates during the establishment of the IPSec connection.
  2. User-level authentication – before data can be sent over the PPTP or L2TP tunnel, the user must be authenticated. This is done through the use of a PPP authentication method.

Data encryption - the protocols used to create VPN connections allow encrypted data to be sent over a network. Although it is possible to have a non-encrypted connection, this is not recommended. Note that data encryption for a VPN connection does not provide end-to-end security (encryption), but only security between the client and the VPN server. In order to provide a secure end-to-end connection, the IPSec protocol can be used once a VPN connection has been established.

Packet filtering – in order to enhance security of the VPN server, packet filtering must be configured so that the server only performs VPN routing. To this end, appropriate RRAS filters should be used (for Windows 2000) on the Internet interface of the VPN.

Securing Your Wireless Network

Now that I have explained why it is so important to secure your wireless network, I want to spend the rest of this article explaining the steps that you should take in doing so. Unfortunately, I can’t give you the exact step-by-step procedure because every manufacturer of wireless hardware uses a different interface for configuring the device. Even so, the things that I will be discussing are nearly universal and will be valid for almost all Wi-Fi networks.

Use Encryption
By far the most important thing that you can do to secure your wireless network is to use encryption. Almost every wireless access point has some type of encryption mechanism built in. Most older access points offer WEP encryption, and newer access points offer a choice between WEP and WPA.

You are much better off using WPA than WEP. The WEP encryption method is flawed: if someone is able to capture enough data, it is possible to decipher the WEP key. Even so, it would take most home users weeks of Web surfing to generate enough traffic for WEP to be compromised.

My advice would be that if your wireless hardware doesn’t support WPA, you should upgrade to hardware that does. If an upgrade just isn’t in the budget, then you should go ahead and turn on WEP encryption. Sure, WEP is flawed, but flawed encryption is better than no encryption. Besides, there are enough people with insecure wireless networks that most of the time, if a hacker sees that your network is encrypted with WEP, they will move on to an easier target rather than spend weeks trying to capture enough data to decrypt it.

The only other drawback to using encryption on your access point is that it can be a little complicated to set up if you aren’t the technical type. If you can’t figure out how to set up wireless encryption, then invite the neighbourhood nerd over for dinner and have them enable encryption. Do whatever you have to do, but get encryption enabled.

Don’t Announce Yourself
Wi-Fi access points use a mechanism called identifier broadcasting to announce themselves. The problem with identifier broadcasting is that you already know that you have a wireless network, so there is no need to announce it to you. The only people the broadcast really benefits are hackers. Not all wireless access points allow you to disable identifier broadcasting, but if yours does, then you should.

While you are at it, you should also change your SSID or ESSID. The SSID or ESSID is basically just a name that’s assigned to the wireless access point. The reason why it is important to change the SSID or ESSID is because you don’t want your access point to have an out of the box name. Think about it for a minute. Wireless hardware manufacturers assign the same SSID or ESSID to every access point that rolls off of the assembly line. Even if you aren’t broadcasting your access point’s identification to the world, it isn’t that hard to figure out that you have an access point in your house. If the access point isn’t broadcasting an SSID or an ESSID then the first thing that a hacker will usually try is to attach to the access point by using common default SSID or ESSID names.

It is also important that you change your access point’s default password for the same reason. You don’t want a hacker to be able to take control of your access point just because it still has the default password assigned to it. If a hacker were to take control of the access point, they could actually lock you out of your own network.

Limit Access To Your Access Point
Another thing that you can do to help secure your wireless access point is to limit which computers are allowed to use it. Every network interface card (including wireless cards) has what’s known as a Media Access Control (MAC) address associated with it. Most wireless access points contain a mechanism that you can use to tell the access point that only network cards with these specific MAC addresses are allowed to use the network.

You can determine a machine’s MAC address by opening a command prompt window on the workstation and entering the command IPCONFIG /ALL. This command is designed to display the machine’s TCP/IP configuration. However, it will list the machine’s MAC address under the Physical Address heading.
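The same lookup can be done programmatically. Here is a minimal Python sketch (the formatting helper is my own, not something from the article) that reads the local machine’s MAC address using only the standard library:

```python
import uuid

# uuid.getnode() returns the MAC address of one network interface as a
# 48-bit integer. (On systems where no MAC can be discovered, Python
# falls back to a random number, so treat the result as best-effort.)
mac = uuid.getnode()

# Format the integer in the familiar colon-separated notation,
# taking one byte at a time from the most significant end.
mac_str = ":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -8, -8))
print(mac_str)
```

On Windows this should match the "Physical Address" value that IPCONFIG /ALL reports (with colons instead of dashes), which is the value you would enter into the access point’s MAC filter list.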

Limiting access to the access point by MAC address isn’t a perfect security mechanism. A hacker can use a protocol analyzer to determine which MAC addresses are in use on your network. They can then spoof a valid address and bypass your address filter. Even so, it is important to use address filtering. The reason is that none of the wireless security mechanisms that I’ve shown you is perfect on its own, but together they provide relatively good security.

Cisco's 1841 Router

Cisco's 1841 router was created with the smaller branch office in mind. It is a low-end device, making the 1841 one of the cheaper models Cisco manufactures. The 1841 has low failure rates and is enterprise-class hardware. Typical of Cisco products, this router has slots for standard Cisco cards offering network interfaces and features, and it runs the IOS software. With such a comfort level in the IT community for Cisco products and IOS, setup time and maintenance usually involve a minimal learning curve compared to competing manufacturers. The 1841 fits in rack mounts, making it suitable for data closet installation. However, the 1841 has only a single power supply, revealing its intended place in field offices rather than as central routing for a large company.
This particular model comes with these features:
  • 2 10/100 Ethernet ports (copper - RJ45)
  • 2 WAN Interface Card (WIC) slots for the ports of your choice
  • 1 internal expansion slot
  • Standard pair of console/auxiliary console ports
  • 1 USB port for console access (local device management)
  • 128 MB RAM, expandable only to 384 MB
  • 1U height

The 1841 routers come with three-speed fans controlled by a thermostat in the chassis. For noise abatement and extended life, fan speeds vary depending on cooling needs. The 1841 routers also come with internal clocks, but these depend on a non-replaceable battery. If the battery fails, the chassis must be sent back to the factory for repair, which should be covered under warranty.

For VoIP implementations a separate appliance will be needed, since the 1841's capabilities do not include VoIP or voice even though it has two WIC slots. The single power supply is another drawback for installations that require power redundancy. For installations of 300 users or fewer, the Cisco 1841 meets the needs of a field office. It is overkill for a job of fewer than 20 nodes, where a smaller router or a PIX firewall is recommended.

Whatever the router selection, Network Address Translation (NAT), a secondary Internet circuit to headquarters and a reasonable number of access control lists (ACLs) should be included in its capabilities.
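As an illustration only, a hypothetical IOS configuration fragment for a branch-office router might combine NAT overload with a simple inbound ACL. All interface names, addresses and list numbers below are placeholders, not taken from the article:

```
! Hypothetical sketch: NAT overload plus a basic inbound ACL.
interface FastEthernet0/0
 description LAN
 ip address 192.168.1.1 255.255.255.0
 ip nat inside
!
interface FastEthernet0/1
 description WAN
 ip address 203.0.113.2 255.255.255.252
 ip nat outside
 ip access-group 101 in
!
! Translate all LAN addresses to the WAN interface address (PAT).
ip nat inside source list 1 interface FastEthernet0/1 overload
access-list 1 permit 192.168.1.0 0.0.0.255
!
! Allow return traffic for established TCP sessions; log everything else.
access-list 101 permit tcp any any established
access-list 101 deny ip any any log
```

A real deployment would extend access-list 101 with the specific services the office needs, but the pattern above covers the NAT and ACL capabilities the paragraph calls for.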

Thin Client

A thin client (sometimes also called a lean or slim client) is a client computer or client software in client-server architecture networks which depends primarily on the central server for processing activities, and mainly focuses on conveying input and output between the user and the remote server. In contrast, a thick or fat client does as much processing as possible and passes only data for communications and storage to the server.

The term was coined in 1993 by Tim Negris, VP of Server Marketing at Oracle Corp., while working with company founder Larry Ellison on the launch of the landmark Oracle7 release of the company's flagship relational database management system (RDBMS). Ellison had charged Negris with finding a way to boldly differentiate Oracle's server-centric software from the decidedly desktop-oriented products of then-rival Microsoft. Thin Client became Ellison's relentless battle cry, repeated in hundreds of speeches, interviews and articles attendant to the release of Oracle7 and many other products after that.

Many thin client devices run only web browsers or remote desktop software, meaning that all significant processing occurs on the server. However, recent devices marketed as thin clients can run complete operating systems such as Debian Linux, qualifying them as diskless nodes or hybrid clients. Some thin clients are also called "access terminals." Many people who already have computers want the same functionality that a thin client has. Computers can simulate a thin client in a single window (as through a browser) or with a separate operating system boot-up. Either way, these are often called "fat clients" to differentiate them from thin clients and from computers without thin-client functionality.

As a consequence, the term "thin client", in terms of hardware, has come to encompass any device marketed as, or used as, a thin client in the original definition – even if its actual capabilities are much greater. The term is also sometimes used in an even broader sense which includes diskless nodes.

The thin client is a PC with less of everything. In designing a computer system, there are decisions to be made about processing, storage, software and user interface. With the reality of reliable high-speed networking, it is possible to change the location of any of these with respect to the others. A gigabit/s network is faster than a PCI bus and many hard drives, so each function can be in a different location. Choices will be made depending on the total cost, cost of operation, reliability, performance and usability of the system. The thin client is closely connected to the user interface.

In a thin client/server system, the only software that is installed on the thin client is the user interface, certain frequently used applications, and a networked operating system. This software can be loaded from a local drive, the server at boot, or as needed. By simplifying the load on the thin client, it can be a very small, low-powered device giving lower costs to purchase and to operate per seat. The server, or a cluster of servers has the full weight of all the applications, services, and data. By keeping a few servers busy and many thin clients lightly loaded, users can expect easier system management and lower costs, as well as all the advantages of networked computing: central storage/backup and easier security.
Because the thin client is relatively passive and low-maintenance, but numerous, the entire system is simpler and easier to install and to operate. As the cost of hardware plunges and the cost of employing a technician, buying energy, and disposing of waste rises, the advantages of thin clients grow. From the user's perspective, the interaction with monitor, keyboard, and cursor changes little from using a thick client.
A single PC can usually power five or more thin clients. A more powerful PC or server can support up to a hundred thin clients at a time. A high-end server can power over 700 clients.

Thin clients are a great investment for schools and businesses that want to maximize the number of workstations they can purchase on a budget. A simple $70 unit could replace a full computer in a school or business, and its low power consumption would also save a lot of energy in the long run.

Wednesday, 17 June 2009

Filtering HTTP over SSL connections

Web traffic poses one of the biggest security issues, and URL filtering solutions are used to overcome this. A filtering solution screens an incoming web page, checking it against a set of rules and policies to determine whether access to the page should be allowed.
Filtering solutions detect and block HTTP communication according to web filtering policies, but because enterprises keep port 443 (HTTPS) open, the filtering policy cannot be applied when a user visits secure (HTTPS) sites, as the content is encrypted.
Hence the primary circumvention method used to evade these carefully crafted web filtering policies is the use of HTTPS connections. Clearly, HTTPS connections pose a serious threat, as they provide employees with an easy way to sidestep the enterprise’s Internet usage policy and conceal their activities.
Using a secure proxy is the easiest way to exploit an HTTPS connection. The user simply points his browser at the HTTPS proxy web site and asks the proxy to access the destination (blocked) site. The HTTPS proxy initiates its own request rather than passing the user’s request through: it fetches the page on behalf of the user and responds back to the user as if it were the destination. This way the user and the destination (blocked) site never interact directly. Since the HTTPS proxy returns the encrypted content directly to the user, the gateway only sees SSL-encrypted traffic. The URL filtering solution cannot sniff the encrypted traffic to determine the actual URL, making the filtering policies ineffective.
How does Cyberoam solve this problem?
Cyberoam’s approach combines SSL certificate inspection with the filtering policies to control SSL traffic.
Cyberoam parses the SSL handshake (SSLv2, SSLv3 and TLS) and extracts the “Common Name” (CN) from the certificate. It then applies control filters on the common name. Based on the outcome of those filters, the user is either served the page or the connection is terminated.
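The CN-extraction idea can be sketched with Python’s standard library. This is a simplified model of the technique, not Cyberoam’s implementation; the function names and the blocklist are hypothetical:

```python
import socket
import ssl

# Hypothetical blocklist of certificate Common Names; a real gateway
# would derive this from its configured filtering policies.
BLOCKED_CN = {"proxy.example.com"}

def common_name_from_cert(cert):
    """Pull the Common Name out of the dict returned by getpeercert().

    The subject is a tuple of RDN tuples, e.g.
    ((("commonName", "example.com"),), ...).
    """
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None

def fetch_common_name(host, port=443, timeout=5):
    """Complete a TLS handshake and return the server certificate's CN."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return common_name_from_cert(tls.getpeercert())

def allow_connection(host):
    """Serve the page only if the certificate CN passes the filter."""
    return fetch_common_name(host) not in BLOCKED_CN
```

An inline gateway would extract the CN from the handshake it observes on the wire rather than opening its own connection, but the filtering decision — match the CN against policy, then serve or terminate — is the same.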
Apart from secure proxies, client-based proxies, HTTP proxies and open proxies are also used to evade filtering policies. Cyberoam filters the usage of these proxies with its keyword and URL filtering techniques as well as signature-based detection.
Additionally, to control rogue employees, SSL traffic filtering can be applied to an individual user or a group of users, and to a single URL, a group of URLs or an entire URL category.

Windows NTLM Vs Cyberoam Clientless Single Sign On Authentication

Single Sign On (SSO) is the ability of a user to authenticate himself to a network one time, and thereafter to have access to all authorized network resources without additional authentication.
What is NTLM?

NTLM is a suite of authentication and session security protocols used in various Microsoft network protocol implementations. It is used throughout Microsoft's systems as an integrated single sign-on mechanism.

What is CTAS?

Cyberoam introduces Clientless Single Sign On as a Cyberoam Transparent Authentication Suite (CTAS).

With Single Sign On authentication, a user automatically logs on to Cyberoam when he logs on to Windows with his Windows username and password, eliminating the need for multiple logins and multiple sets of credentials.

But Clientless Single Sign On not only eliminates the need to remember multiple passwords (Windows and Cyberoam), it also eliminates the installation of SSO clients on each workstation. It therefore delivers greater ease of use to end users and higher levels of security, while lowering the operational costs involved in client installation.

NTLM vs CTAS

OS dependency
NTLM: Yes. It can authenticate only systems running on the Windows platform.
CTAS: No. It can authenticate domain users irrespective of the operating system on their computers. It works with Windows, Macintosh and Linux.

Applications supported
NTLM: Only browser-based applications and the Microsoft implementations of SMTP, POP3 and IMAP (all part of Exchange). The user has to authenticate for each application he wants to use.
CTAS: All applications. Re-authentication is not required in order to access any application.

Processing load
NTLM: System load increases as each new session is authenticated whenever a new browser instance is opened.
CTAS: As the user is authenticated just once and the agent polls for log-off information, the system is not burdened with keep-alive messages to Cyberoam.

UTM

UTM Appliance
External threats like spyware, phishing, pharming, viruses and more are targeting the individual user, extracting corporate and personal confidential information or turning their devices into parts of massive botnets to further the attack. In addition, internal users are compromising enterprise security out of ignorance or malicious intent and are posing the single largest threat to enterprise security.

Individual security solutions, while dealing with different aspects of threats, do not give adequate and rapid response to threats. A Unified Threat Management solution provides comprehensive protection to enterprises with tightly integrated multiple security features working together on a single appliance. A single UTM appliance makes it very easy to manage an enterprise’s security strategy, with just one device to manage, one source of support and single way to set up and maintain every aspect of its security solution. A UTM solution is highly cost-effective and offers a centralized console that enables monitoring of network security at remote locations.

Identity-based controls and visibility are critical components of network security. With identity and network data combined, enterprises are able to identify patterns of behaviour by specific users or groups that can signify misuse, unauthorized intrusions, or malicious attacks from inside or outside the enterprise. Activities and security policy rules can be enforced on network segments based on identity.

Cyberoam – Unified Threat Management
Cyberoam is the leading provider of identity-based Unified Threat Management network security solutions for small, medium and large enterprises. The Cyberoam UTM appliance includes Stateful Inspection Firewall, VPN (SSL VPN & IPSec), Bandwidth Management, Multiple Link Load Balancing & Gateway Failover and Reporting module over a single platform. Cyberoam offers the subscription services of Gateway Anti-virus and Anti-spyware, Gateway Anti-spam, Intrusion Prevention System – IPS and Content & Application Filtering.

Cyberoam provides comprehensive, zero-hour protection that is cost-effective and easy to manage, through highly integrated features such as the Stateful Inspection firewall, Gateway Anti-virus and Anti-spyware, Gateway Anti-spam, Intrusion Prevention System and Content & Application Filtering. The Cyberoam firewall/VPN appliance offers secure, encrypted tunnels that give the mobile workforce remote access to the enterprise’s central network.

Cyberoam offers multi-lingual support, with the GUI available in Chinese, Hindi and English, enhancing the user experience in some of the largest and fastest-growing markets.

Cyberoam’s Active-Active High Availability provides efficient, continuous access to business-critical applications, information and services. Active-Active HA increases overall network performance by sharing the load of processing network traffic between two Cyberoam appliances, and it provides continued security by eliminating the single point of failure. The cluster appears to your network as a single device, adding performance without changing your network configuration. The primary appliance acts as the load balancer and balances all TCP communication, including TCP traffic from proxies, but it does not load-balance VPN traffic.