Showing posts with label Network Security. Show all posts

Tuesday, 22 September 2009

OS Virtualization in Practice - Part 1

Introduction

In a previous article series I explained the basics of operating system virtualization, including the techniques, advantages, disadvantages and possible scenarios. But of course you would like to see how this works in practice. In this article I will show you how to set up operating system virtualization using Citrix Provisioning Server version 5.

Preparation: Server Installation and Initial Configuration

Citrix Provisioning Server consists of a server component and a client component. The client component makes it possible to create the virtual disk (vDisk) and to start a client using the operating system streaming technique. The installation of the client component will be described later in this article; we will focus first on the installation of the server component.

The installation has some requirements. The configuration needs to be stored on a Microsoft SQL database. This can be a dedicated Microsoft SQL server or, for smaller/test environments, an SQL Express version. In a production environment, a dedicated SQL server is advisable because, ideally, you would want to install at least two Citrix Provisioning Servers for fault tolerance and you would also like to have access to the database even if one server is unavailable.

Note:
You need to have ample disk space available for storing the virtual disks.

When using more Provisioning Servers, the disk space should be claimed on a file share, SAN or NAS. With a single server, this can also be located on the local storage. A Citrix License Server is also required (the same as for using XenApp or other Citrix products).

The server component also requires .NET Framework 3.0, which will be installed automatically during the installation if it is not already available. The installation of the server component is started using PVSRV_Server.exe. The installation is pretty straightforward. First, the license agreement needs to be accepted, followed by specifying some customer information and the location where the application needs to be installed. The most important part of the installation is the question about which components you would like to install. Like most products, it offers to install all components or just a selection of them. Which components you will need depends on which roles are already available in your infrastructure and which you would like to use for this product. Remember that only one PXE role can be available per IP subnet.


Figure 1:
Selecting the components which will be installed

After this question the installation will actually start. When the installation is complete, the initial configuration will start automatically. During this initial configuration you can specify which supporting services you will use, like DHCP and PXE services, where they are located, and whether you would like to create a new farm (creating a new Provisioning Server environment) or join an existing farm (adding a Provisioning Server to the current infrastructure). The location of the virtual disk store, the network interface cards for the streaming protocol (including ports), enabling/disabling the TFTP service and the bootstrap configuration are also part of this initial configuration. This configuration can also be started later on to change these settings afterwards. Do not forget to add the DHCP options for PXE booting to your DHCP role when using the PXE role.


Figure 2: Initial Configuration


Figure 3: Initial Configuration

The server component installation automatically installs the Provisioning Server console, but a separate installer is also available for installing the console on an administrator workstation or server.

Basic Configuration of the Provisioning Server environment

After the installation, the console needs to be started and connected to the farm for the first time by specifying the name of (one of) the Provisioning Servers and the communication port. Once the connection is established, several settings can be configured, like automatic updates of virtual disks, high availability, delegation of control, tuning of the streams and much more. These topics are pretty interesting, but would go into too much detail for this article. I will keep to the most basic configuration settings, which are needed to get the system up and running.

The first thing that should be created is a so-called Site. A site is a collection of servers, devices and connected stores. A site is created simply by right-clicking the site component in the right pane. In the next dialog a (unique) name for the site needs to be specified; additionally, a description, administration security and “auto add” can be configured. After a site is configured, at least one Device Collection needs to be created. In this dialog a name, a description and a template target device (the configuration of this client will be used to configure the new clients) can be set. When the site configuration is finished, a store needs to be created. In this store the vDisks will be created, which will be used to host the operating system for the streaming technology.


Figure 4: Create a store for the vDisks

The store needs to have a name and needs to be connected to a site which “owns” the store. Logically, a path needs to be specified on which the virtual disks will be stored. Also the write cache location can be specified (the location where the streamed content on the client will be cached). Lastly, the server(s) need to be specified which will use the store for streaming the content.

Creating a Virtual Operating System

Now that the most basic configuration is done, the next step is to create a virtual disk, which can host the operating system (and the applications installed on the operating system). This is accomplished by right-clicking on the store and choosing “create vDisk”. A dialog will pop up in which you need to specify the following information: the site that will contain the vDisk, the server being used to create the vDisk, a (file)name for the vDisk, a description, the size of the vDisk and the VHD format.


Figure 5: Create a vDisk

Version 5 introduces the Microsoft VHD format, which gives the option to create a dynamic VHD. With the dynamic format, the total specified size of the disk is not reserved up front; instead, just a small file is created. As files are stored on the vDisk, its size will be adjusted automatically. A progress bar will be displayed during the creation, and when the vDisk is created a file can be found on the specified path of the store. As you can see in the figure below, the file is only about 2 MB in size.


Figure 6: The actual file of the vDisk
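A dynamic VHD behaves much like a sparse file: the configured disk size is only an apparent size, and storage is allocated as data is actually written. The following is not the VHD format itself, just a small Python sketch of the sparse-allocation idea on a POSIX filesystem:

```python
import os

def create_sparse(path, apparent_size):
    """Create a file whose apparent size is large but whose allocated size is tiny."""
    with open(path, "wb") as f:
        f.seek(apparent_size - 1)  # jump to the last byte...
        f.write(b"\0")             # ...and write it, leaving an unallocated "hole"

create_sparse("demo.img", 1024**3)   # 1 GB apparent size
st = os.stat("demo.img")
print(st.st_size)           # the full apparent size: 1073741824 bytes
print(st.st_blocks * 512)   # typically only a few KB actually allocated
os.remove("demo.img")
```

The 2 MB file in Figure 6 reflects the same principle: the file grows only as the system writes data to the vDisk.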

Now that the virtual disk is created, it is time to get the operating system, with the corresponding applications, placed on the virtual disk. For this you need to install a system with the corresponding operating system and applications in the traditional way; for example, using a CD set or electronic software deployment. When the system is completely installed, the last step is to install the Provisioning Server Device software. This installation is a single executable. During the installation wizard, only very basic dialogs are presented: accepting the license agreement, supplying customer information and specifying the destination location where the files should be installed. After the installation, the system needs to be restarted before you can use the client. Actually, the system needs to be configured within the Provisioning Server software before the system can be restarted. I will continue with this part in the next article, which will be published soon.

Conclusion

In this first article of the series I showed you the first steps in using operating system streaming. We installed the Provisioning Server and the required supporting software, configured the basic setup of the software, created a virtual disk and configured a system to create the image on the virtual disk. The next article will continue with actually creating the image, building a new system using the virtual disk and updating a virtual disk with updates.

Thursday, 17 September 2009

Understanding and Customizing VMware ESX Server Performance Charts

Why do you need VMware Performance charts?

As a VMware Administrator you must know what is going on in your virtual infrastructure. When things are not going as planned, you need to troubleshoot it. Performance charts are key in being able to troubleshoot performance issues in your virtual infrastructure. Thus, you need performance charts to:

  • Have an informal baseline of what your utilization is today, in a visual form
  • Troubleshoot performance issues when they arise
  • Optimize your virtual infrastructure to keep performance as good as it can be and to make the right decisions in the future

What do VMware’s Virtual Infrastructure Performance Charts offer you?

Whether you use VMware’s ESXi server only (just one server) or you have the Enterprise Virtual Infrastructure Suite (with vCenter and 100 ESX Servers), VMware’s performance charts are available to you and offer you many features.

With the standalone ESXi edition you will only have charts at the host and guest level. On the other hand, with vCenter, you will have performance charts available at the cluster level (if you have created a VMHA or DRS cluster).

Depending on the level of the chart that you are using, you will be offered different information. For example, on a guest you can graph CPU, memory, disk, network, and system. On an ESX host, you will have those plus “management agent”. On a cluster, you will only see CPU and memory. Keep in mind that when I say “CPU” there are many CPU-related performance settings under CPU. For example you can access average CPU usage, CPU used, CPU guaranteed, CPU extra, CPU ready, CPU system, and CPU wait. Other performance categories such as memory, network, and disk will each have their own performance criteria that you can manually add to your graph.

Figure 1: Sample VMware ESX Performance Chart

By clicking Change Chart Options, charts can be created for real-time information, past day, week, month, year, or a custom timeframe. Here is what that window looks like:

Figure 2: Changing your VMware ESX Performance Chart Options

Custom charts can be saved so that you can pull them up quickly when needed. Charts can also be saved as graphics or printed. To give your chart more screen space, you can choose to have the chart “popup” on its own window (get it out of the VI client window). This is certainly something you want to do if you are going to look at a graph for more than a few seconds. Here is what a “popup” chart looks like:

Figure 3: VMware ESX PopUp Performance Chart

To me, the most important thing about using the standard VMware ESX Performance charts is 1) knowing what level to go to look for something (cluster, host, or guest) and 2) knowing what statistic to look for (CPU, memory, disk, network, or other).

Just like using any other troubleshooting tool, your success and efficient use of it comes from your experience in troubleshooting performance issues and your knowledge of your environment and applications. For example, where do you start? I would start on the cluster (if you have one), then to the ESX host, then to the guest that is causing trouble. At each of these levels, I would look at CPU, memory, disk, and network.

After using the standard performance charts for a while, it is likely that you will want to customize them pretty quickly to find out just what you need to know. Let us find out how to do that.

How do you customize VMware ESX Performance Charts?

As you saw in Figure 2, above, it is easy to customize your performance charts and even save those customizations. Let me give you an example. Say that I wanted to create a custom chart for my ESX host (and even a group of ESX hosts) that shows CPU performance for the last month. To do this, I would go to the ESX server in the VI client and click on the Performance tab. From there, click on Change Chart Options. I would go to the CPU section and click on Past Month. To save your new chart, click on Save Chart Settings. Here is what it looks like:

Figure 4: Saving a VMware ESX Custom Performance Chart

By doing this, the next time you come into the performance chart, you can click Change Chart Options and load this saved chart by selecting it under Saved Chart Settings.

Figure 5: Loading a VMware ESX Saved Performance Chart

In fact, if you want this saved chart to load every time you bring up this graph, you can check the checkbox that says Always Load these Settings at Startup.

So, as you found out, customizing VMware ESX Performance charts is easy, but what if the VI Client and vCenter just don’t offer you enough performance information?

How do you get more VMware ESX Performance information?

If you need more performance information and more intelligent performance solutions, I can make a few recommendations:

  1. vKernel – specializing in performance appliances for VMware. When it comes to VMware performance, the most useful tool, in my opinion, is vKernel’s performance modeling tool, Modeler. With Modeler, you can find out all the “what if” answers to the performance questions, before you make performance changes. Read about it in an article by Gabrie Van Zanten at How to Model and Predict Changes to your VMware ESX Infrastructure using vKernel Modeler
  2. Veeam Monitor – recently announced in a free edition, Veeam Monitor is a powerful performance monitoring application. Read about it in my article The benefits of VMware ESX performance monitoring with Veeam Monitor free edition
  3. Solarwinds VM Monitor – I wrote an article about this free tool that uses SNMP to give you a quick dashboard view of your ESX Server and guest VM performance. Note: only works with ESX Server, not ESXi.
  4. Akorri BalancePoint – a comprehensive performance management application that even interfaces with your storage area network (SAN)

Conclusion

In conclusion, VMware ESX performance charts are very powerful but also are limited to their core functionality. With the built-in performance charts you can view VMware ESX host, guest, and cluster performance, create & save custom charts, and view performance on so many different performance objects. I hope you will spend a little more time using VMware ESX Performance Charts with the help of this article.

Tuesday, 14 July 2009

ISA Firewall Web Caching Capabilities

Introduction

ISA can act as a firewall, as a combined firewall and Web caching server (the best “bang for the buck”), or as a dedicated Web caching server. You can deploy ISA as a forward caching server or a reverse caching server. The Web proxy filter is the mechanism that ISA uses to implement caching functionality.

Note:
If you configure ISA as a caching-only server, it will lose most of its firewall features and you will need to deploy another firewall to protect the network.

ISA supports both forward caching (for outgoing requests) and reverse caching (for incoming requests). The same ISA firewall can perform both forward and reverse caching at the same time.

With forward caching the ISA firewall sits between the internal clients and the Web servers on the Internet. When an internal client sends a request for a Web object (a Web page, graphics or other Web file), it must go through the ISA firewall. Rather than forwarding the request out to the Internet Web server, the ISA firewall checks its cache to determine whether a copy of the requested object already resides there (because someone on the internal network has previously requested it from the Internet Web server).

If the object is in cache, the ISA firewall sends the object from cache, and there is no need to send traffic over the Internet. Retrieving the object from the ISA firewall’s cache on the local network is faster than downloading it from the Internet Web server, so internal users see an increase in performance.

If the object is not in the ISA firewall’s cache, the ISA firewall sends a request for it from the Internet Web server. When it is returned, the ISA firewall stores the object in cache so that the next time it is requested, that request can be fulfilled from the cache.
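The forward caching flow described above amounts to a simple lookup-then-fetch routine. A toy sketch, with `fetch_from_origin` standing in for the real HTTP request to the Internet Web server:

```python
# Toy forward cache: check the local store before going to the origin server.
cache = {}

def fetch_from_origin(url):
    # Placeholder for the real HTTP request to the Internet Web server.
    return f"<content of {url}>"

def get(url):
    if url in cache:                    # cache hit: serve locally, no Internet traffic
        return cache[url], "HIT"
    content = fetch_from_origin(url)    # cache miss: fetch from the origin...
    cache[url] = content                # ...and store it for the next request
    return content, "MISS"

print(get("http://example.com/page")[1])  # MISS: the first request goes out
print(get("http://example.com/page")[1])  # HIT: the second is served from cache
```

The speedup internal users see comes entirely from the second branch: a hit never leaves the local network.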

With reverse caching, the ISA firewall acts as an intermediary between external users and the company’s Web servers. When a request for an object on the company Web server comes in from a user over the Internet, the ISA firewall checks its cache for the object. If it’s there, the ISA firewall impersonates the internal Web server and fulfills the external user’s request without ever “bothering” the Web server. This reduces traffic on the internal network.

In either case, the cache is an area on the ISA firewall’s hard disk that is used to store the requested Web objects. You can control the amount of disk space to be allocated to the cache (and thus, the maximum size of the cache). You can also control the maximum size of objects that can be cached, to ensure that a few very large objects can’t “hog” the cache space.

Caching also uses system memory. Objects are cached to RAM as well as to disk. Objects can be retrieved from RAM more quickly than from disk. ISA allows you to determine what percentage of random access memory can be used for caching (by default, ISA uses 10 percent of the RAM, and then caches the rest of the objects to disk only). You can set the percentage at anything from 1 percent to 100 percent. The RAM allocation is set when the Firewall service starts. If you want to change the amount of RAM to be used, you have to stop and restart the Firewall service.

The ability to control the amount of RAM allocated for caching ensures that caching will not take over all of the ISA Server computer’s resources.

Note:
In keeping with the emphasis on security and firewall functionality, caching is not enabled by default when you install the ISA firewall. You must enable it before you can use the caching capabilities.

Using the Caching Feature

Configuring a cache drive enables both forward and reverse caching on your ISA firewall. There are a few requirements and recommendations for the drive that you use as the cache drive:

  • The cache drive must be a local drive. You cannot configure a network drive to hold the cache.
  • The cache drive must be on an NTFS partition. You cannot use FAT or FAT32 partitions for the cache drive.
  • It is best (but not required) that you not use the same drive on which the operating system and/or the ISA Server application are installed. Performance will be improved if the cache is on a separate drive. In fact, for best performance, not only should it be on a separate drive, but the drive should be on a separate I/O channel (that is, the cache drive should not be on a drive slaved with the drive that contains the page file, OS, or ISA program files). Furthermore, if ISA firewall performance is a consideration, note that MSDE logging consumes more disk resources than text logging. Therefore, if MSDE logging is used, the cache drive should also be on a separate spindle from the MSDE databases.

Note:
You can use the convert.exe utility to convert a FAT or FAT32 partition to NTFS, if necessary, without losing your data.

The file in which the cache objects are stored is named dir1.cdat. It is located in the urlcache folder on the drive that you have configured for caching. This file is referred to as the cache content file. If the file reaches its maximum size, older objects will be removed from the cache to make room for new objects.

A cache content file cannot be larger than 64GB (you can set a smaller maximum size, of course). If you want to use more than 64GB for cache, you must configure multiple drives for caching and spread the cache over more than one file.

You should never try to edit or delete the cache content file.
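When the cache content file fills, older objects are evicted to make room for new ones. A least-recently-used store like the one below captures that idea; this is a sketch of the eviction concept, not how dir1.cdat actually works internally:

```python
from collections import OrderedDict

class LRUCache:
    """Evict the least recently used entry once the size budget is exceeded."""
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.entries = OrderedDict()   # insertion order doubles as recency order

    def put(self, key, data):
        if key in self.entries:
            self.used -= len(self.entries.pop(key))
        self.entries[key] = data
        self.used += len(data)
        while self.used > self.max_bytes:              # over budget:
            _, old = self.entries.popitem(last=False)  # drop the oldest entry
            self.used -= len(old)

    def get(self, key):
        data = self.entries.pop(key, None)
        if data is not None:
            self.entries[key] = data   # move to the "recent" end
        return data

cache = LRUCache(max_bytes=10)
cache.put("a", b"12345")
cache.put("b", b"12345")
cache.get("a")            # touch "a" so "b" is now the oldest entry
cache.put("c", b"12345")  # over budget: "b" is evicted
print(sorted(cache.entries))  # ['a', 'c']
```

The key point is the same as for the cache content file: once the size limit is reached, new objects displace the ones least likely to be requested again.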

ISA Firewall Cache Rules

ISA uses cache rules to allow you to customize what types of content will be stored in the cache and exactly how that content will be handled when a request is made for objects stored in cache.

You can create rules to control the length of time that a cache object is considered to be valid (ensuring that objects in the cache do not get hopelessly out of date), and you can specify how cached objects are to be handled after they expire.

ISA gives you the flexibility to apply cache rules to all sites or just to specific sites. A rule can further be configured to apply to all types of content or just to specified types.

Cache Rules to Specify Content Types That Can Be Cached

A cache rule lets you specify which of the following types of content are to be cached:

  • Dynamic content This is content that changes frequently, and thus, is marked as not cacheable. If you select to cache dynamic content, retrieved objects will be cached even though they are marked as not cacheable.
  • Content for offline browsing In order for users to be able to browse while offline (disconnected from the Internet), all content needs to be stored in the cache. Thus, when you select this option, ISA will store all content, including “non-cacheable” content, in the cache.
  • Content requiring user authentication for retrieval Some sites require that users be authenticated before they can access the content. If you select this option, ISA will cache content that requires user authentication.

You can also specify a Maximum object size. By using this option, you can set limits on the size of Web objects that will be cached under a particular cache rule.

Using Cache Rules to Specify How Objects are Retrieved and Served from Cache

In addition to controlling content type and object size, a cache rule can control how ISA will handle the retrieval and service of objects from the cache. This refers to the validity of the object. An object’s validity is determined by whether its Time to Live (TTL) has expired. Expiration times are determined by the HTTP or FTP caching properties or the object’s properties. Your options include:

  • Setting ISA to retrieve only valid objects from cache (those that have not expired). If the object has expired, the ISA will send the request on to the Web server where the object is stored and retrieve it from there.
  • Setting ISA to retrieve requested objects from the cache even if they are not valid. In other words, if the object exists in the cache, ISA will retrieve and serve it from there even if it has expired. If there is no version of the object in the cache, the ISA will send the request to the Web server and retrieve it from there.
  • Setting ISA to never route the request. In this case, the ISA relies only upon the cache to retrieve the object. Objects will be returned from cache whether or not they are valid. If there is no version of the object in the cache, the ISA will return an error. It will not send the request to the Web server.
  • Setting ISA to never save the object to cache. If you configure the rule this way, the requested object will never be saved to the cache.

Note:
The default TTL for FTP objects is one day. TTL boundaries for cached HTTP objects (which are defined in the cache rule) consist of a percentage of the age of the content, based on when it was created or last changed.
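The percentage-of-age rule in the note above is the usual heuristic-freshness calculation: an object's TTL is a fraction of how old the content was when it was fetched. A sketch of that arithmetic (the 20% fraction here is an illustrative assumption, not ISA's fixed value):

```python
from datetime import datetime, timedelta

def heuristic_ttl(fetched_at, last_modified, fraction=0.2):
    """TTL as a percentage of the content's age at fetch time."""
    age_at_fetch = fetched_at - last_modified
    return age_at_fetch * fraction

def is_valid(now, fetched_at, last_modified):
    """An object is valid (fresh) while its time in cache is under its TTL."""
    return now - fetched_at < heuristic_ttl(fetched_at, last_modified)

fetched = datetime(2009, 7, 14, 12, 0)
modified = fetched - timedelta(days=10)   # content was 10 days old when fetched
print(heuristic_ttl(fetched, modified))   # 2 days, 0:00:00 (20% of 10 days)
print(is_valid(fetched + timedelta(days=1), fetched, modified))  # True: still fresh
print(is_valid(fetched + timedelta(days=3), fetched, modified))  # False: expired
```

Content that changes rarely thus stays valid in cache longer, while recently modified content expires quickly.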

You can also control whether HTTP and FTP content are to be cached for specific destinations, and you can set expiration policies for the HTTP and FTP objects. You can also control whether to enable caching of SSL content.

Because SSL content often consists of sensitive information (which is the reason it’s being protected by SSL), you might consider not enabling caching of this type of content for better security.

If you have multiple cache rules, they will be processed in order from first to last, with the default rule processed after all the custom rules. The default rule is automatically created when you install ISA. It is configured to retrieve only valid objects from cache, and to retrieve the object from the Internet if there is no valid object in the cache.
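That processing order is plain first-match-wins, with the default rule as the fallback. A sketch (the rule names and match conditions are made up for illustration):

```python
def process(request_url, rules, default_rule):
    """Cache rules are evaluated in order; the default rule applies last."""
    for rule in rules:
        if rule["matches"](request_url):
            return rule["name"]
    return default_rule

rules = [
    {"name": "never-cache-intranet",
     "matches": lambda u: u.startswith("http://intranet")},
    {"name": "cache-images",
     "matches": lambda u: u.endswith((".png", ".jpg"))},
]

# The intranet rule wins even for an image, because it is evaluated first.
print(process("http://intranet/logo.png", rules, "default"))  # never-cache-intranet
print(process("http://example.com/a.png", rules, "default"))  # cache-images
print(process("http://example.com/page", rules, "default"))   # default
```

Because order decides conflicts, the most specific custom rules should sit highest in the list.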

The Content Download Feature

The content download feature is used to schedule ISA to download new content from the Internet at pre-defined times so that when Web Proxy clients request those objects, updated versions will be in the cache. This enhances performance and ensures that clients will receive up-to-date content more quickly.

You can monitor Internet access and usage to determine which sites users access most frequently and predict which content will be requested in the future. Then you can schedule content download jobs accordingly. A content download job can be configured to periodically download one page (URL), multiple pages, or the entire site. You can also specify how many links should be followed in downloading the site. You can configure ISA to cache even those objects that are indicated as not cacheable in the cache control headers. However, a scheduled content download job would not complete if the Web server on which the object is stored requires client authentication.
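The "how many links should be followed" setting amounts to a depth-limited crawl of the site. A minimal sketch of that idea, with a hypothetical `get_links(url)` helper standing in for fetching a page and parsing its links:

```python
def get_links(url):
    # Hypothetical helper: in reality this would fetch the page and parse hrefs.
    site = {
        "/": ["/a", "/b"],
        "/a": ["/a/deep"],
        "/b": [],
        "/a/deep": [],
    }
    return site.get(url, [])

def crawl(start, max_depth):
    """Download pages breadth-first, following links at most max_depth deep."""
    seen, frontier = {start}, [start]
    for _ in range(max_depth):
        frontier = [link for url in frontier
                    for link in get_links(url) if link not in seen]
        seen.update(frontier)
    return seen

print(sorted(crawl("/", max_depth=1)))  # ['/', '/a', '/b']: one level of links
print(sorted(crawl("/", max_depth=2)))  # also reaches '/a/deep'
```

A larger depth pulls in more of the site per job, at the cost of more bandwidth and cache space.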

To take advantage of this feature, you must enable the system policy configuration group for Scheduled Content Download Jobs, and then configure a content download job.

When you enable the Schedule Content Download Jobs system policy configuration group, this causes ISA to block unauthenticated HTTP traffic from the local host (the ISA server) – even if you have another policy rule configured that would allow such traffic. There is a workaround that will make it possible to allow this traffic and still use content download jobs. This involves creating a rule to allow HTTP access to All Networks and being sure that another rule higher in the order is configured to allow HTTP access from the local host.

Control Caching via HTTP Headers

There are two different factors that affect how HTTP (Web) content is cached. The configuration of the caching server is one, but Webmasters can also place information within the content and headers to indicate how their sites and objects should be cached.

Meta tags are commands within the HTML code of a document that specify HTTP expiration or non-cacheable status, but they are only processed by browser caches, not by proxy caches. However, HTTP headers are processed by both proxy caches and browser caches. They are not inserted into the HTML code; they are configured on the Web server and sent by the Web server before the HTML content is sent.

HTTP 1.1 supports a category of headers called cache control response headers. Using these headers, the Webmaster can control such things as:

  • Maximum age (the maximum amount of time the object is considered valid, based on the time of the request)
  • Cacheability
  • Revalidation requirements

ETags and Last-Modified headers are generated by the Web server and used to validate whether an object is fresh.

In Microsoft Internet Information Services, cache control response headers are configured in the HTTP Headers tab of the property pages of the Web site or Web page.

ISA does not cache responses to requests that contain certain HTTP headers. These include:

  • Cache-control: no-cache response header
  • Cache-control: private response header
  • Pragma: no-cache response header
  • www-authenticate response header
  • Set-cookie response header
  • Cache-control: no-store request header
  • Authorization request header (except if the Web server also sends a cache-control: public response header)
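The header rules above can be expressed as a small predicate. This is a sketch of the decision logic, with header values simplified to plain strings:

```python
def is_cacheable(response_headers, request_headers):
    """Return False if any of the listed headers prevents caching."""
    resp = {k.lower(): v.lower() for k, v in response_headers.items()}
    req = {k.lower(): v.lower() for k, v in request_headers.items()}

    cc = resp.get("cache-control", "")
    if "no-cache" in cc or "private" in cc:
        return False
    if "no-cache" in resp.get("pragma", ""):
        return False
    if "www-authenticate" in resp or "set-cookie" in resp:
        return False
    if "no-store" in req.get("cache-control", ""):
        return False
    # An Authorization request header blocks caching unless the response
    # is explicitly marked cache-control: public.
    if "authorization" in req and "public" not in cc:
        return False
    return True

print(is_cacheable({"Cache-Control": "max-age=3600"}, {}))   # True
print(is_cacheable({"Set-Cookie": "sid=1"}, {}))             # False
print(is_cacheable({"Cache-Control": "public"},
                   {"Authorization": "Basic abc"}))          # True: the exception
```

Note the last case: it mirrors the exception in the final bullet, where `cache-control: public` overrides the Authorization rule.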

Summary

In this article we looked at a part of the ISA firewall that we do not talk about too much – the firewall’s Web caching feature. You can use the ISA firewall as a combined firewall and Web caching device, or even use the firewall as a Web caching device only. No matter how you choose to deploy the firewall, your ISA firewall can cache Web content to speed up your end users’ Internet experience.

Wednesday, 8 July 2009

A Proxy By Any Other Name

In almost every corporate computer network today there are proxies to be found. This is pretty much a standard computer security practice. The confusion starts when people start talking about all the various proxy types. Within the confines of this article all of the various proxy types will be discussed.

Most corporate computer networks today are designed with a purpose in mind. That purpose is usually a balance of security and usability. The end state of almost every corporate computer network today is to facilitate the work of the employee. Making their life easier through a simplified computing experience makes good business sense, but one must also take network security concerns into account. This is where the proxy enters the picture. Just what is a proxy, though? Well, a proxy server is a computer operating as a server rather than a workstation. This proxy server in turn offers other computers an indirect means of accessing other computer services, such as a Web server located somewhere on the Internet. Simply put, the workstation opens its homepage of, say, EonConnects.net and that request is in turn relayed to the proxy server. The server will check to see if it has a cached version of this page and, if not, it will then go get it and relay it back to the workstation in question.

The nuts and bolts of it

If the above noted scenario still doesn’t make a whole lot of sense to you, then think of it this way. Having such a proxy server will, for one, speed up the browsing experience for a corporate user. It is much faster to serve up a cached page than it is to retrieve it every time. When the proxy server or, in this case, the caching proxy receives a page request it will, as mentioned, check to see if it already has it. It will also check whether the cached page has expired or not. Should the requested resource have expired, it will go and get a new copy of that resource. That alone makes it worth having a proxy server on a network. There are many other advantages to having one, though. Those advantages very much impact the security posture of a corporate network as well, hence their prevalent usage. One of the most obvious advantages is being able to centralize all web page requests in one location. This establishes a chokepoint that can be exploited for security purposes.

The transparent proxy

Just as I mentioned above, having all client requests go through a single computer gives one the ability to monitor client usage. By client I mean a corporate workstation. This centralization is often done by configuring the client browser to use the proxy server’s address. Though this definition of a transparent proxy is a popular one, it is also incorrect. In reality, a transparent proxy is a combination of proxy server and NAT technology. In essence, client connections are NAT’d (network address translation) so that they can be routed to the transparent proxy without any browser configuration. Having this type of setup is also, I am told, a major pain to implement and maintain.

The reverse proxy

What the devil is a reverse proxy, you ask!? Good question indeed. Typically a reverse proxy is installed in close proximity to one or several web servers. What actually happens is that the reverse proxy itself is the point of first contact for all traffic directed at the web servers. Why go through the bother of this? For several reasons, actually. One of the primary ones is security, as the reverse proxy is a first layer and acts as a buffer for the web servers themselves. Another reason is SSL connections. Encryption is a computationally intensive task, and having it performed on the reverse proxy rather than the actual web server makes sense in terms of performance. Were the web servers themselves handling both the encryption and the actual web serving, those machines would quickly become rather slow. For that reason the reverse proxy is equipped to handle the SSL connections and normally has some type of acceleration hardware installed on it for this very purpose.

Another key reason the reverse proxy is employed is load balancing. Think of a popular website that has a lot of visitors at any given time. It makes sense that there would be multiple web servers there to handle all incoming page requests. With a reverse proxy in front of these back-end web servers, no one box gets crushed; rather, the load is balanced across all web servers. This certainly helps overall performance. Another feature of the reverse proxy is the ability to cache certain content in an effort to take further load off of the web servers. Lastly, the reverse proxy can also handle any compression duties that are required. All in all, there is a tremendous amount of work being done by the reverse proxy.
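The load-balancing behavior described above can be sketched as simple round-robin backend selection. This is a toy illustration; the backend names are made up, and real reverse proxies also factor in health checks and current load:

```python
import itertools

class ReverseProxyBalancer:
    """Sketch of round-robin backend selection, as a reverse proxy might do it."""

    def __init__(self, backends):
        # Cycle endlessly through the configured back-end web servers
        self._cycle = itertools.cycle(backends)

    def pick_backend(self):
        # Each incoming request is handed to the next server in the rotation,
        # so no single box gets crushed
        return next(self._cycle)
```

With three backends configured, four successive requests would land on servers 1, 2, 3 and then 1 again.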

Split proxies

Just when you think you’re done, there is always something else! In this case that would be the split proxy. Much as its name implies, the split proxy is simply a pair of proxies installed across two computers. It’s that simple really. Although this type of proxy configuration is one that I have never come across, I have heard of them being used. One of its main selling points is the ability to compress data between the two halves, which is a boon when slow networks are involved.

Wrap up

Over the course of this article we have seen the various types of proxies in use today in many corporate network environments. As we have seen, many of them are used for specific reasons; there is not really one proxy type that can do it all, hence the variety of them. One of the greatest abilities of the proxy is to help enforce an acceptable usage policy on a corporate network. All too often we hear about someone who was fired for inappropriate use of company computer assets. What that neat use of the English language usually means is that someone was surfing for pornography from work, and on company time no less. Even though someone doing this is acting foolishly and deserves to be terminated, there are other reasons as well to control and monitor employee Internet usage. You can imagine, for example, how well it would go over for a high profile, publicly traded company to have an employee caught downloading child pornography. If that type of news hits the media, all of a sudden your company stock price could take a nose dive. Having a proxy in place within a corporate setting is really not only common sense, but a necessity. While most company employees are hard working and above board, there will always be one or two who are not, and having the ability to catch and deal with them quickly is very much desired. Well, I will end the article on that note and, as always, hope it was of use to you.

Securing Your OCS Deployment

Taking a look at the security concerns involved with unified communications and how to add security to OCS.

Office Communications Server (OCS) is Microsoft’s Unified Communications solution for enterprises, but as with all UC deployments, applications that enable voice, video, IM, file transfers and application sharing can pose security issues. In this article, we address those concerns and discuss OCS’s built-in security features, configuration choices for best security practices, and integrated software solutions (both from Microsoft and third parties) to add security to OCS.

A unified communications system is vulnerable to such threats as eavesdropping or sniffing, identity/IP address spoofing, RTP replay, and so forth, as well as viruses/worms, man-in-the-middle and denial of service (DoS) attacks. Because the confidentiality and integrity of your communications are critical to your business, it’s essential to protect against all of these threats.

Built-in security features in OCS 2007

OCS 2007 provides many new features that LCS 2005 didn’t have, including:

  • Enterprise VoIP
  • Multi-party IM
  • On-premise web conferencing that allows participation by outside users who don’t have enterprise credentials

In addition, features such as presence and federation support have been improved and enhanced.

With new features come new security challenges, but Microsoft has addressed many of these with built-in features. As always, the best security is multi-faceted, so the security framework upon which OCS is built has many components.

Active Directory

Windows server security in a domain is built around the Active Directory, and OCS uses AD to store global settings (used by multiple OCS servers in a forest), data identifying the roles of OCS servers, and user settings.

You must prepare AD for OCS by extending the schema to include OCS classes and attributes, creating OCS objects and attributes, and adding permissions on objects in each domain. You do this in one of two ways: by using the LcsCmd.exe command line tool on the OCS CD, or by using the Setup.exe deployment tool for OCS 2007. The command line tool can be run remotely. The deployment tool has a graphical interface and wizards to guide you through each task.

The specific steps to prepare AD include:

  • Prep Schema (run once)
  • Prep Forest (run once)
  • Prep Domain (run on every domain where you deploy OCS)

For step by step information on how to prepare AD for OCS, see the Microsoft Office Communications Server 2007 Active Directory Guide.

Authentication

OCS can use standard Windows authentication protocols, depending on the user:

  • Kerberos v5 is the most secure and is used for internal clients with AD credentials.
  • NTLM is used for clients outside the LAN who have AD credentials.
  • Digest protocol is used for on-premise conferencing clients outside the LAN who don’t have AD credentials (they must, however, have been invited to an on-premise conference and must have been supplied with a valid conference key).

Network encryption

To protect data traveling over the network, OCS 2007 encrypts communications by default. Endpoint authentication and encryption are accomplished by using Transport Layer Security (TLS) and Mutual Transport Layer Security (MTLS). Server-to-server SIP communications use MTLS, and client-server SIP communications use TLS. These protocols protect against man-in-the-middle attacks and eavesdropping.
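As a rough illustration of what makes TLS “mutual”, here is a sketch using Python’s standard `ssl` module. The certificate file names are hypothetical, and the load calls are commented out because they require real certificate files; the point is the `CERT_REQUIRED` setting, which is what forces the client to authenticate back to the server:

```python
import ssl

def make_mtls_server_context(certfile="server.pem", keyfile="server.key",
                             ca_bundle="clients-ca.pem"):
    """Sketch of a server-side mutual-TLS context (file names are hypothetical)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # ctx.load_cert_chain(certfile, keyfile)   # the server's own certificate
    # ctx.load_verify_locations(ca_bundle)     # CAs trusted to sign client certs
    ctx.verify_mode = ssl.CERT_REQUIRED        # require a client certificate: "mutual" TLS
    return ctx
```

Plain TLS authenticates only the server to the client; flipping `verify_mode` to `CERT_REQUIRED` is the server-side half of what MTLS adds.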

TLS and MTLS are also used to encrypt instant messages. TLS encryption is optional for internal client-to-client IMs. OCS communications with public IM servers are encrypted; however, it is up to the public IM provider to encrypt communications between the public IM server and the outside client.

The Secure Real-time Transport Protocol (SRTP) is used to encrypt streaming media. SRTP protects RTP data by adding authentication, confidentiality and replay protection.

Public Key Infrastructure

Server authentication for OCS 2007 is based on the use of digital certificates issued by a trusted CA. This can be an internal or public CA (you may need a public CA if the OCS server needs to communicate with systems outside the LAN). OCS is designed to work with a Windows 2003 Public Key Infrastructure (PKI).

For OCS, all server certificates are required to support Enhanced Key Usage (EKU) to authenticate the servers. This is used by MTLS. Server certificates must also include at least one Certificate Revocation List (CRL) distribution point.

Federation security features

Like its predecessor, Live Communications Server 2005 (with SP1), OCS 2007 has the capability of federating with the major public instant messaging providers (MSN, Yahoo! and AOL). It also supports “enhanced federation,” which allows peer enterprises to be discovered using DNS SRV records. OCS 2007 includes new security features for the federation model. These include:

  • Restriction on how many users a federated peer can communicate with over a specified time period. This is designed to prevent “directory harvesting” by which an attacker tries different user names to find a valid one.
  • Restriction on the rate at which the Access Edge Server will accept messages from the federated peer, based on analysis of the traffic.

Administrators can also restrict access by adding domains to the Deny list, or blocking peer certificates via the certificate store.
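The per-peer rate restriction described above can be sketched as a sliding-window limiter. The class name, message limit and window size here are invented for illustration; Microsoft does not publish the Access Edge Server’s actual algorithm:

```python
import time
from collections import defaultdict, deque

class PeerRateLimiter:
    """Sketch of per-peer message rate limiting, in the spirit of what an
    Access Edge Server does for federated partners (parameters are made up)."""

    def __init__(self, max_messages=100, window_seconds=60):
        self.max_messages = max_messages
        self.window = window_seconds
        self.history = defaultdict(deque)  # peer domain -> recent message times

    def allow(self, peer, now=None):
        now = time.time() if now is None else now
        q = self.history[peer]
        while q and now - q[0] >= self.window:
            q.popleft()                    # drop timestamps outside the window
        if len(q) >= self.max_messages:
            return False                   # peer exceeded its rate: reject
        q.append(now)
        return True
```

A directory-harvesting peer that fires off messages faster than the window allows simply starts getting rejections until its traffic slows down.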

Blocking unwanted or dangerous IMs

You can use the Intelligent IM filter to block unwanted or potentially harmful instant messages and file transfers. You can configure the filters to use the criteria you want, in order to selectively block IMs and file transfers. For example, you can block IMs containing hyperlinks or you can allow the IM to go through with the hyperlink disabled. You can block files with specific extensions.
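A filter of the kind described above might be sketched as follows. The blocked extensions and the link-disabling scheme are arbitrary examples for illustration, not the actual Intelligent IM filter rules:

```python
import re

BLOCKED_EXTENSIONS = {".exe", ".scr", ".bat"}  # example list; configurable in practice
URL_PATTERN = re.compile(r"https?://\S+")

def filter_instant_message(text):
    """Let the IM through, but disable any hyperlinks by breaking the scheme."""
    return URL_PATTERN.sub(lambda m: m.group(0).replace("://", "__"), text)

def allow_file_transfer(filename):
    """Block file transfers whose extension is on the blocked list."""
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    return ext not in BLOCKED_EXTENSIONS
```

The hyperlink is mangled rather than removed, which mirrors the “allow the IM through with the hyperlink disabled” option mentioned above.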

More information

For much more detailed information on using OCS’s built in security features, see the Microsoft Office Communications Server Security Guide.

Hardening your servers and clients

The OCS server, along with other servers in your infrastructure, should be “hardened” by locking down both the operating systems and applications as much as possible. You can do this through Group Policy. The Windows Server 2003 Security Guide provides specific information on how to harden Server 2003 servers.

Unused services on your servers should be disabled. The SQL Server database used to store OCS information should be protected. In short, best network security practices become even more important when you have an OCS server on the network. And of course, all servers should be kept updated with security patches and the latest virus signatures.

Client machines must also be configured for best security. You can use OCS group policy to disable the appropriate features and set the client for media encryption. Of course, the latest service packs and security updates should be installed on the client machines.

And don’t forget other OCS devices, such as OCS-compatible phones. You can use the Office Communications Server Software Update Service to automatically update all unified communications devices deployed in your organization.

To evaluate the overall health of your OCS 2007 servers and topology, you can download the Office Communications Server 2007 Best Practices Analyzer.

Microsoft integrated security solutions

In June, Microsoft released a public beta version of Forefront Security for OCS. This is the latest in the Forefront family of enterprise security products and allows you to scan for malicious software using multiple engines, and filter instant messages and files by keywords. It also includes automated signature updates and IM notification alerts.

Forefront Security for OCS is integrated with the Access Edge role in OCS 2007 Enterprise edition, which secures messages to and from external public IM clients and federated networks as well as internal communications.

Third party security add-ons

Third party security products designed to protect OCS 2007 include:

  • Trend Micro IM Security for Microsoft Office Communications Server
  • Akonix L7 Enterprise, for adding unified policy and risk management for OCS

Summary

Microsoft OCS 2007 is Microsoft’s answer to the unified communications question. It goes way beyond the scope of LCS 2005 and now manages all types of real-time communications, including VoIP and conferencing. In today’s threat-filled world, communications applications are among the most vulnerable, so it is important to consider security first when deploying OCS. This article has provided an overview of security considerations relating to OCS 2007.

Understanding Microsoft’s Secure Remote Access Offerings

Introduction

Remote access is a hot topic. It is hot because it should be hot. There are a lot of drivers for remote access, but the overarching issue is that people need access to information from anywhere, at any time, from any device. The outdated vision of access tied to a specific device or location is gone. Especially in corporate scenarios, people expect to get the business intelligence they need, when they need it, and to be able to use a laptop, desktop, kiosk, Smartphone, or even an MP3 player to get to that information. IT has to be an enabler.

Microsoft is in line with this vision of anywhere, anytime access, and has a number of technologies you can use to enable secure remote access. Notice that I’ve injected the term “secure”. Enabling remote access isn’t technically complex; any simple NAT device or router can enable remote access to business applications and services. The trick is to enable secure remote access so that you do not put your data, your servers and perhaps your job at risk.

From my count, here are the key Microsoft technologies available to you today that enable secure remote access into your organization:

  • Windows Server 2008 NPS Routing and Remote Access VPN services
  • Windows Server 2008 Terminal Services Gateway
  • Microsoft ISA 2006 and Forefront Threat Management Gateway (TMG)
  • Intelligent Application Gateway 2007 and Unified Access Gateway (UAG)

Windows Server 2008 NPS Remote Access VPN Services

Windows Servers have included a VPN server component since Windows NT, which introduced the Point-to-Point Tunneling Protocol (PPTP). The problem with PPTP today is that most security experts consider it a deprecated VPN protocol that should not be used in production networks, due to some inherent security weaknesses in the protocol. While there are ways to bolster the level of security for PPTP (such as using two-factor authentication for log on), PPTP is generally of interest only for historical purposes.

Windows 2000 Server introduced the L2TP/IPsec VPN protocol. This was a major advance for Windows, since the IPsec tunnel that secures the information is created before the credentials transfer takes place. L2TP is used to create the virtual network, and IPsec is used to provide privacy on that virtual network connection. Another major advantage of L2TP/IPsec is that both user and machine authentication can be accomplished, because of the use of IPsec. Windows 2000 Server also extended the user authentication schemes available by enabling more advanced EAP authentication methods, so that certificates and smartcards could be used for user authentication.

Windows Server 2008 increased your VPN options by adding the Secure Socket Tunneling Protocol (SSTP). SSTP is essentially PPP over SSL. The great advantage of this protocol is that it runs over SSL, and just about any firewall or proxy allows outbound SSL. That’s right: SSTP will work when the client is behind either a firewall or a proxy (and even proxy-based firewalls, like the ISA or TMG firewall). SSTP is included as part of the Windows Server 2008 NPS Routing and Remote Access Service, and it can leverage all the same user authentication protocols that L2TP/IPsec uses. The only downside of SSTP at this time is that you need to be very careful with some of the configuration steps and the order in which you perform them; otherwise, management can be very complicated. With that said, SSTP remains a tremendous boon for Windows VPN administrators.

Windows Server Terminal Services

Like the Routing and Remote Access VPN solutions available for the last several versions of Windows Server, Windows Server has also included a Terminal Services component. While not included with the RTM version of Windows NT, it was available later in the NT product cycle. Terminal Services was then incorporated into the operating system with the release of Windows 2000 Server. There were some improvements made to the terminal services offering with Windows Server 2003, but it was not until Windows Server 2008 that we saw major improvements.

In Windows Server 2008, and in the upcoming Windows Server 2008 R2, you have major enhancements to the Terminal Services offerings. Still included is the basic Terminal Server, which allows users to connect to the terminal server using the RDP protocol. That said, I should mention that the RDP protocol has been vastly improved. But it is not just the improvements in the RDP protocol that make the Windows Server 2008 Terminal Services offering so compelling. It’s actually a collection of several improvements. These include:

  • Terminal Services Web Access
  • Terminal Services Gateway
  • Terminal Services RemoteApp

While previous versions of Windows Server had a Terminal Services Web Access feature, Windows Server 2008 significantly improves on the experience because it integrates other new features of Windows Server 2008 Terminal Services into the Web site. In addition, access to computers and applications through the Terminal Services Web site can now be controlled using policy based access rules.

Terminal Services Gateway (TSG) enables policy-based Terminal Services access from anywhere in the world. A problem with remote access to Terminal Services in the past was that many firewalls would not allow outbound access to the default RDP port, which is TCP 3389. And of course, since proxies typically handle only HTTP protocols, Terminal Services clients could not reach terminal services over the Internet when the clients were located behind a Web proxy. TSG solves this problem by allowing the Terminal Services client to tunnel RDP inside of RPC, which is then tunneled inside HTTP, and secured by SSL, thus requiring only an outbound SSL connection to be allowed to the TSG. After the client connects to the TSG, policy-based access rules allow you to control which terminal servers or applications the user can connect to.

Did you notice that I said terminal servers or applications? That’s right. With the new Windows Server 2008 Terminal Server, you have the option to publish terminal servers and/or applications. Terminal Services RemoteApp allows you to publish individual applications over Terminal Services. So if you wanted your users to have access to Word and PowerPoint, you could publish those applications over the Terminal Services Gateway, and users would be presented with the applications only, instead of an entire desktop. This is a great boon to security, since it enables the principle of least privilege: giving users access only to what they need, which is the applications, instead of the entire desktop, which is not what they need. And this access is accomplished over the TSG, which enables strong policy-based access to these applications.

Internet Security and Acceleration Server 2006 and the Forefront Threat Management Gateway (TMG)

Now we move away from the platform services included with Windows Server and look at some of the network security applications Microsoft has to offer for secure remote access. Microsoft made its first attempt at a network security device when it introduced its Proxy Server product in the second half of the 1990s. This culminated in its first mature product, Proxy Server 2.0. While Proxy Server 2.0 was a fine proxy server, it was not designed to be an edge network security device for enabling secure remote access.

Microsoft took a stab at secure remote access with a network edge security device with the introduction of Microsoft Internet Security and Acceleration Server (ISA) 2000 at the end of the year 2000. This product was a multifunction device, enabling secure outbound access, secure server publishing and secure Web publishing. In addition, ISA 2000 included strong support for remote access VPN users as well as site-to-site VPN. On top of that, ISA 2000 was designed as an edge network firewall, so that you no longer needed to put a router-based firewall (layer 3 firewall) in front of the ISA 2000 firewall.

However, the ISA 2000 firewall was built on a threat model that was extant in the 1990s but is no longer true in the 21st century. That is to say, in the 1990s the popular threat model was that anything outside the firewall was not trusted, and anything inside the firewall was trusted. Since this is no longer true, the next version of the ISA firewall, the ISA 2004 firewall, was built on a threat model that assumed that no networks could be trusted, and that strong stateful packet and application layer inspection needed to be applied to all connections going to and through the ISA firewall.

With ISA 2004, remote access security was significantly improved. For Web publishing (reverse Web proxy), the HTTP Security Filter was introduced to protect against attacks on Web sites. A number of application filters were added or improved to protect against exploits aimed at SMTP, DNS and other application servers. And most of all, the remote access and site-to-site VPN server components now enabled you to create strong user/group based access controls and applied the same stateful packet and application layer inspection that was performed on all other connections to or through the ISA firewall.

The ISA 2004 firewall was the first Microsoft firewall that could be said to be an enterprise-ready, edge network firewall, on par with Check Point, ASA and Netscreen.

ISA 2006 was released two years later and included all the remote access security features included with the 2004 ISA firewall. It included several improvements for remote access security such as:

  • Support for Kerberos Constrained Delegation (KCD) so that you can publish Web sites that require users to use two-factor, certificate based authentication at the firewall
  • Several enhancements to its forms-based authentication feature, so that users can use a flexible form to authenticate to the firewall before being allowed to the published Web site
  • Expanded support for a number of new two-factor authentication methods, such as RADIUS one-time passwords
  • LDAP server authentication for published Web sites, so that Active Directory repositories could be used when the firewall was not a domain member
  • Web Farm Load Balancing, which enabled ISA 2006 admins to avoid the high cost of external, hardware load balancers and publish farms of Web servers behind the ISA firewall

ISA 2006 can also be configured to enable secure remote access to all of the Windows Server 2008 Terminal Services offerings, allowing for another layer of protection for remote Terminal Services access.

The Forefront Threat Management Gateway (TMG) is the next version of the ISA firewall. TMG includes all of the secure remote access technologies included in previous versions of the firewall, but ups the ante on outbound access security, adding malware protection and a uniquely powerful IDS to the mix. In addition, Web content filtering is enabled out of the box for TMG, something that ISA firewall administrators have been wanting for a long time.

Intelligent Application Gateway 2007 and UAG

The Intelligent Application Gateway 2007 (IAG 2007) is for organizations that look for the highest level of security for remote access connections. In contrast to the ISA or TMG firewall, the IAG 2007 SSL VPN gateway is a single purpose device: a remote access gateway for inbound connections to network services. While the ISA and TMG firewalls can provide the same or superior level of security for inbound connections to network services as any other firewall on the market today, IAG 2007 provides the highest level of security possible for incoming connections to Web and non-Web services.

IAG includes a number of software modules, known as Application Optimizers, which confer a very high level of protection for remote access to Web services. The Application Optimizers enable IAG to perform deep application layer inspection for the Web services it publishes. IAG's deep application layer inspection employs both positive and negative logic filtering. Positive logic filtering enables IAG to allow only known-good communications to the published Web service, while negative logic filters block known bad connections.
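The combined positive/negative logic filtering described above can be illustrated with a small sketch. The rule sets here are entirely hypothetical; IAG’s real Application Optimizers are far more sophisticated and application-aware:

```python
import re

# Hypothetical rule sets for one published Web application
POSITIVE_RULES = [re.compile(r"^/app/(login|reports|profile)(/|$)")]  # known-good paths
NEGATIVE_RULES = [re.compile(r"\.\./"), re.compile(r"(?i)<script")]   # known-bad patterns

def allow_request(path_and_query):
    """Sketch of combined positive and negative logic filtering."""
    if not any(r.match(path_and_query) for r in POSITIVE_RULES):
        return False  # positive logic: anything not explicitly known good is dropped
    if any(r.search(path_and_query) for r in NEGATIVE_RULES):
        return False  # negative logic: known attack patterns are dropped
    return True
```

The positive pass drops everything that is not a known-good request shape, and the negative pass then catches attack signatures inside otherwise plausible requests.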

Four types of connectivity are available with the IAG 2007 SSL VPN gateway. These include:

  • Reverse Web proxy. IAG can act as a high security reverse Web proxy by employing application intelligence on remote connections to Web services
  • Port Forwarder. For remote access to non-Web applications that require simple protocols using a single port, the IAG port forwarder allows clients to connect to network applications over the SSL VPN tunnel using the port forwarder
  • Socket Forwarder. For remote access to more complex applications that require multiple primary or secondary connections (such as Outlook MAPI/RPC), remote access clients can use the IAG socket forwarder. All protocols communicated over the socket forwarder are also protected by SSL
  • Network Connector. The Network Connector enables full network layer VPN access over the SSL VPN connection. This is useful for administrators who require unencumbered remote access to the network.

In addition to the SSL VPN gateway features, IAG 2007 also enables PPTP and L2TP/IPsec remote access VPN client access. This allows you to use IAG 2007 as your centralized remote access gateway, without having to split the management and monitoring of remote access connections to your network between several devices or types of devices.

The next version of the IAG, known as the Unified Access Gateway, will continue to build on the strong application layer intelligence included with IAG and will add more secure remote access options. The most interesting of these is support for Microsoft’s new Direct Access remote connectivity option, which will enable users located anywhere in the world to transparently connect to the corporate network, including domain connectivity.

The major barrier to success for Direct Access is its dependency on IPv6. While there are advantages to IPv6, most networks are not architected to support IPv6 because there isn’t a strong business case to switch over to IPv6. In addition, there isn’t widespread understanding of IPv6, which makes it dangerous to implement on networks as it generates traffic that the majority of network administrators do not understand.

In order to mitigate the connectivity and security challenges introduced with Direct Access and IPv6, the UAG will employ NAT-PT (Network Address Translation – Protocol Translation). NAT-PT allows native IPv6 hosts and applications to communicate with native IPv4 hosts and applications and vice versa. This feature will make it much simpler, and more secure, to implement a Direct Access solution for tomorrow’s Windows 7 and Windows Server 2008 R2 networks.
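The core idea behind such translators is embedding an IPv4 address inside an IPv6 address under a dedicated prefix, which can be sketched with Python’s `ipaddress` module. The 64:ff9b::/96 prefix shown is the well-known translation prefix later standardized for NAT64 in RFC 6052; this is a simplified illustration of the address-mapping idea, not of UAG’s actual implementation:

```python
import ipaddress

# Well-known IPv4/IPv6 translation prefix: a translator embeds the IPv4
# address in the low 32 bits of an IPv6 address under this /96 prefix.
PREFIX = ipaddress.IPv6Address("64:ff9b::")

def ipv4_to_translated_ipv6(v4: str) -> str:
    """Map an IPv4 address into the translator's IPv6 prefix."""
    return str(ipaddress.IPv6Address(int(PREFIX) | int(ipaddress.IPv4Address(v4))))

def translated_ipv6_to_ipv4(v6: str) -> str:
    """Recover the embedded IPv4 address from the low 32 bits."""
    return str(ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF))
```

An IPv6-only client can then address an IPv4-only server through the translator, which rewrites the packets in both directions.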

Summary

In this article we covered the secure remote access options currently available to Microsoft networks. Some of these options have been available since early versions of Windows NT, while some will not be available until you’ve implemented Windows 7 and Windows Server 2008 R2. Each of them has its own advantages and disadvantages, and each provides a different level of security for different types of remote access. Hopefully, after reading this article, you will have a better idea of the remote access options available to you and will be able to choose the one that looks like it will serve your needs best, so that you can then search for more information on that solution (or those solutions).

Remote Authentication: Different Types and Uses

Computer networks have arguably improved worker efficiency and helped many a company’s bottom line. With that has come the need for workers to, at times, log into the corporate network remotely, ideally via secure means. Within the confines of this article we will look at several of these methods.

Remote authentication

Corporate networks have not only grown in size over the years, they have also grown in complexity. New services have appeared and been implemented to satisfy the growing demand for easy to use programs. This driving force to meet end user satisfaction goes on relentlessly and has accounted for many of today’s innovations. One of the most desired advantages has been the ability for some workers to work from home. These telecommuters are one of the recent changes affecting the work force, much to the benefit of the worker; the ability to telecommute has greatly improved employee morale. The catch is that these workers must be able to communicate with the corporate network both remotely and securely. It is of little surprise that these concerns have been addressed by a variety of solutions that all work quite well.

RADIUS is not just for Algebra

One of the solutions designed to accommodate the remote worker is RADIUS. Remote Authentication Dial-In User Service is what the acronym actually stands for, and it is fairly descriptive, as that is pretty much what the protocol is used for: the worker remotely authenticates for access to the remote network. I have mentioned before that I like to map protocols to the OSI Reference Model, as this helps one visualize just where protocols belong in the grand scheme of things. In the OSI model, RADIUS fits into the application layer. This protocol is no exception to the client/server model either: a client logs into the RADIUS server and supplies the required credentials. RADIUS uses UDP as its transport protocol to ferry about its information.

Like many well known protocols, RADIUS has some well known ports it is normally configured to listen on: port 1812 for authentication and port 1813 for RADIUS accounting. Those ports are also RFC compliant, but what does RFC compliant actually mean? When the designers of RADIUS drew up its design specifications, they decided it would use ports 1812 and 1813. The various design considerations were eventually consolidated into what is called an RFC. Once that RFC was accepted, ports 1812 and 1813 became the RFC-compliant ports for the protocol, as they were included in its original design.
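Because RADIUS rides on UDP, a server simply binds a datagram socket on the assigned port, roughly like this (a minimal sketch of the listening side only, not a working RADIUS server):

```python
import socket

RADIUS_AUTH_PORT = 1812  # RFC-assigned authentication port
RADIUS_ACCT_PORT = 1813  # RFC-assigned accounting port

def open_radius_socket(port=RADIUS_AUTH_PORT):
    """Sketch: RADIUS listens on UDP, hence SOCK_DGRAM rather than SOCK_STREAM."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    return s
```

A real server would then loop on `recvfrom()`, parse each datagram as a RADIUS packet and send back a response datagram.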

I want details!

The devil is always in the details, and if you want details it is always best to go to the definitive source. In our case that would be RFC 2138, which deals with RADIUS itself and contains all of the details about it. Seeing as most people break out into hives at the thought of reading an RFC, I will summarize a few important details for you. One of the biggest things to realize about RADIUS is that it supports various authentication methods, notably PAP and CHAP, both typically carried within PPP. If you are familiar with Cisco gear, or are in charge of supporting Cisco routers and switches, then you are no doubt familiar with the various authentication methods offered by RADIUS.

Now once a user has supplied the required username and password combination and the RADIUS server receives it, the server will do one of a couple of things. It will check its database for the received credentials and, based on that, either reject the session or allow it. In addition to the username and password combination, the RADIUS server can also check validity by the port number. Typically RADIUS works as follows:

  • Access-Request: the user sends their credentials to the server
  • Access-Challenge: the server may send a challenge to which the user must respond
  • Access-Accept or Access-Reject: the server either grants or refuses the session

Based on this exchange, the user is either authenticated or rejected. RADIUS itself, as mentioned earlier, uses UDP as its transport protocol; that choice was made during the initial design of RADIUS. Using UDP has its advantages, notably less overhead and greater speed, and these reasons were the driving force behind choosing it over TCP and its connection-oriented design. Lastly, we should also realize that, like many application layer protocols, RADIUS has codes written into its core functionality. These codes deal with the access, accounting and status of RADIUS, be it client or server. For further reading on this protocol I would suggest the above noted RFC 2138.
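The server’s accept/reject decision can be sketched as follows. The packet type codes are the real RFC-assigned values (RFC 2865, the successor to RFC 2138), but the credential store and handler function are invented for illustration:

```python
# RADIUS packet type codes as assigned in the RFC
ACCESS_REQUEST, ACCESS_ACCEPT, ACCESS_REJECT, ACCESS_CHALLENGE = 1, 2, 3, 11

# Stand-in for the server's credential database (hypothetical entries)
USER_DB = {"alice": "s3cret"}

def handle_access_request(username, password):
    """Sketch of the server-side decision: check credentials, accept or reject."""
    if USER_DB.get(username) == password:
        return ACCESS_ACCEPT
    return ACCESS_REJECT
```

A real server would also consult port-number restrictions and could answer with Access-Challenge to demand a further response before deciding.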

TACACS and TACACS+

Terminal Access Controller Access Control System, or TACACS, is similar to RADIUS and is used to regulate access to the network. One of the biggest differences between TACACS and RADIUS is that TACACS primarily uses TCP for its transport needs rather than the UDP that RADIUS uses. There are also three versions of TACACS, with TACACS+ being the most recent; it is important to note that TACACS+ is not backwards compatible with the earlier versions. This protocol is also an application layer protocol and observes the client/server model. Seeing as TACACS+ is a well known protocol, it stands to reason that there is a well known port associated with it, which is TCP port 49. That being said, XTACACS does use UDP. There is always the exception to the rule!

Another notable difference between RADIUS and TACACS+ is that RADIUS encrypts only the password in the Access-Request packet sent to the server, while TACACS+ encrypts the entire packet body, leaving only the TACACS+ header in cleartext. TACACS+ does have weaknesses, though, which a determined attacker can exploit: among others, its MD5-based encryption is vulnerable to birthday attacks, and its traffic can be targeted by packet sniffing.
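To make the "RADIUS only encrypts the password" point concrete, here is a short sketch of the User-Password hiding scheme RADIUS uses: the password is padded to a 16-byte multiple and each block is XORed with MD5(shared secret + previous block), seeded with the Request Authenticator. This follows the method described in the RADIUS RFC, but it is an illustrative sketch, not production code.

```python
import hashlib

def hide_radius_password(password: bytes, secret: bytes,
                         authenticator: bytes) -> bytes:
    """Obfuscate a User-Password attribute the way RADIUS does:
    pad to a 16-byte multiple, then XOR each 16-byte block with
    MD5(secret + previous ciphertext block), seeded with the
    16-byte Request Authenticator."""
    padded = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        digest = hashlib.md5(secret + prev).digest()
        block = bytes(p ^ d for p, d in zip(padded[i:i + 16], digest))
        out += block
        prev = block   # chain: next block keys off this ciphertext
    return out
```

Note that everything else in the packet (username, NAS details and so on) travels in the clear, which is exactly the gap TACACS+ closes by encrypting the whole packet body.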

Wrap-up

While the above are two means of remote authentication, they are not the only ones. Every network has its quirks and its own architecture, so you would do best to take the details of your network into account and from there decide which authentication method best suits your needs. Some of these methods can in turn use others, such as TACACS+ working with Kerberos. The bottom line is that every layer or program you add to your network introduces another possible attack vector, so you would be well advised to go with a mature technology for your remote authentication solution.

Lastly, it also makes sense, before purchasing such a solution, to make sure you can integrate it seamlessly into your existing production environment. While this article was very much a high-level overview of some of the methods, there is a veritable mass of information available on the subject courtesy of the Internet and Google.

Thursday, 18 June 2009

Cisco's 1841 Router

Cisco's 1841 router was created with the smaller branch office in mind. It is a low-end device, making the 1841 one of the cheaper models Cisco manufactures, yet it has low failure rates and is enterprise-class hardware. Typical of Cisco products, this router accepts standard Cisco cards offering network interfaces and features, and it runs the IOS software. Given the IT community's comfort level with Cisco products and IOS, setup and maintenance usually carry a minimal learning curve compared to competing manufacturers. The 1841 fits in standard rack mounts, making it suitable for data closet installation. However, it has only a single power supply, revealing its intended place in field offices rather than as central routing for a large company.
This particular model comes with these features:
  • 2 10/100 Ethernet ports (copper - RJ45)
  • 2 WAN Interface Card (WIC) slots for the ports of your choice
  • 1 internal expansion slot
  • Standard console and auxiliary ports
  • 1 USB port for console access (local device management)
  • 128 MB RAM, expandable only to 384 MB
  • 1U height

The 1841 routers come with three-speed fans controlled by a thermostat in the chassis; for noise abatement and extended life, fan speed varies with cooling needs. The routers also have internal clocks, but these depend on a non-replaceable battery. If the battery fails, the chassis must be sent back to the factory for repair, which should be covered under warranty.

For VoIP implementations a separate appliance will be needed, since the 1841's capabilities do not include VoIP or voice even though it has two WIC slots. The single power supply is another drawback for sites that need redundancy. For installations of up to roughly 300 users, the Cisco 1841 meets the needs of a field office; it is overkill for a job of fewer than 20 nodes, where a smaller router or a PIX firewall is recommended.

Whatever the router selected, its capabilities should include Network Address Translation, a secondary Internet circuit to headquarters, and a reasonable number of access control lists (ACLs).
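For a rough idea of what NAT plus a basic ACL looks like on a branch router such as the 1841, here is a minimal IOS-style configuration sketch. The interface names, addresses and ACL numbers are hypothetical and would need to be adapted to your own environment; treat this as an outline, not a drop-in config.

```
! Hypothetical branch-office sketch: NAT overload plus a basic inbound ACL
interface FastEthernet0/0
 description LAN
 ip address 192.168.1.1 255.255.255.0
 ip nat inside
!
interface FastEthernet0/1
 description WAN
 ip address 203.0.113.2 255.255.255.252
 ip nat outside
 ip access-group 101 in
!
! Translate LAN addresses out the WAN interface (PAT / overload)
ip nat inside source list 1 interface FastEthernet0/1 overload
access-list 1 permit 192.168.1.0 0.0.0.255
! Allow return traffic for established TCP sessions, drop and log the rest
access-list 101 permit tcp any any established
access-list 101 deny ip any any log
```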