Monday, 31 August 2009

Going Green: A Primer

A Look at What “Green” Means For IT

It's easy to say you want to go green and a lot tougher to figure out where to get started. Even worse, there's little consensus in the IT community on what Green IT is in the first place.

"There needs to be more definition behind it," says Derek Kober, vice president and director of the BPM Forum. "It's similar to cloud computing, with different people defining it in all sorts of ways. It can be viewed along the lines of doing more with less or accomplishing the same amount of compute while being more energy-efficient."

The need for a universally understood definition has never been higher, as IT decision makers struggle to integrate greater sustainability into their infrastructure planning and operations. Going green is increasingly being viewed as an outright necessity as business requirements expand; recession-racked companies slash IT budgets; and access to consistent, inexpensive power becomes more elusive. Just because IT isn't getting the money to revamp the data centre doesn't mean it's any less accountable to do more with less.

"The general demand on compute these days is not shrinking," says Jason Coari, senior marketing manager with SGI (www.sgi.com), which has partnered with the BPM Forum on the Think Eco-Logical initiative to help companies simultaneously become more environmentally friendly and streamline the bottom line. "Datasets are growing exponentially as more rich media moves over corporate networks and the Internet. The kind of computing horsepower needed to support that is growing, and companies are finding that it's expensive from both a capital and operations perspective."

The Green Necessity

With this in mind, Green IT isn't just a feel-good initiative. Instead, it's a key enabler of improved IT performance and is often the only way to meet fast-expanding business needs without busting the budget or disrupting operations. Frances Edmonds, HP Canada's director of environmental programs, says we all need to broaden our understanding of Green IT.

"There's a lot of green washing out there," says Edmonds. "Green IT means a lot of different things, and it's not just buying a piece of hardware that has a higher energy efficiency rating. It also encompasses what materials were used to manufacture that product and how it will be recycled at end-of-life."

More Than Power & Cooling

Data centre managers often focus green efforts on power and cooling. John Phelps, research vice president of servers with Gartner, says most organizations need greater visibility into how these resources are being used.

"Many data centres aren't measuring how much energy they're using," says Phelps, citing a recent Gartner survey in which 70% of respondents did not have a separate IT budget line item for energy costs. "IT is often the company's largest user of energy, but where's the incentive to cut back if there's a meter for the whole building but not on the data centre itself?"

Even with measurement in place, Phelps says many organizations remain myopic. Going green involves a much wider focus, including green data centre infrastructure, recycling electronics and paper, alternative and renewable energy sources, green education for employees, carpooling, teleworking, and print reduction programs.

Manage Resources Wisely

Demand management, or reducing the IT load by deploying applications and related resources more efficiently, is another promising area. Phelps says consolidation and virtualization, data deduplication, removal of decommissioned equipment, and rightsizing servers to avoid overprovisioning can all reduce inefficient usage that fails to drive productivity.

Jeffrey Hill, co-author of "Green IT For Dummies," says many data centres are already stuffed with equipment whose purpose systems administrators no longer know, so they're afraid to pull the plug. He says they need to do precisely that.

"A large computer manufacturer recently audited their corporate data centre and found out that 35% of the applications running there could be either replaced, retired, or moved out into a workgroup environment, thus saving power, cooling, and valuable floor space," says Hill, who says IT doesn't have to dive head-first into virtualization to derive quick and inexpensive benefits.

"Virtualization is such an industry buzzword and the preferred solution to every ill that people tend to forget consolidation of physical servers, exclusive of virtualization, is a step that will yield similar results," says Hill. "Any reduction in the number of the physical servers will cause a reduction in the amount of power consumed and cooling required."

Tiered storage, which shifts lower-demand data away from energy-hungry high-speed, high-availability drives toward lower-availability but more energy-efficient storage, is another alternative. So is rethinking application architecture.

"There's a lot of code out there that's not very efficient, or it goes into a spin loop instead of a wait state and ends up driving some of this extra capacity," says Phelps. Tighter, more efficient code-reminiscent of the mainframe era, when compute cycles were scarce and expensive-would consume fewer IT resources than today's relatively bloated apps.

"Workload is the biggest consideration, and as workload goes down, electrical consumption goes down, as well," says Phelps, adding that newer, intelligent processors, UPSes, and other data centre equipment will allow managers to dynamically power equipment down during low-demand periods. "In the past, all of this equipment would have used the same energy whether running under 80% or 30% load."

Don't Go Overboard

As beneficial as Green IT can be to the bottom line, the BPM Forum's Kober warns against losing sight of underlying business needs. Overzealous greening of a data centre can put operations at risk if it compromises infrastructure performance or availability.

"Your environmental policies have to make sense," he says. "It's about making improvements in an organized and efficient way, and it really can be quite a balancing act."

Network Traffic Management: The Big Picture

Traffic Management, Shaping & QoS Provide the Tools to Guarantee Performance

That the capacity of enterprise networks has exploded over the last few years isn't breaking news; what's underappreciated is the increasing diversity of traffic. Convergence is a mantra for many network managers: dedicated voice and data circuits are passé as every form of communication is packetized for IP transport. Although this strategy makes efficient use of available capacity and is a big money-saver, it exposes the limitations of historically data-only networks.

By default, all IP traffic receives equal claim on available capacity, yet divergent network applications such as phone calls and file transfers respond quite differently to bandwidth constraints, delays, or retransmissions. Traditional IP networks behave like a crowded thoroughfare where ambulances and fire engines must wait their turn at a signalized intersection just like everyone else. According to Jim Frey, research director at Enterprise Management Associates, the goal of traffic management is to provide more intelligent handling of network applications. In converged networks, with heterogeneous traffic, that requires a means of prioritizing and managing data flows using QoS priorities and other contention management techniques.

Steven House, director of product marketing at Blue Coat, sees two drivers for traffic management: to protect mission-critical, latency-sensitive applications such as VoIP, video, or remote desktop clients and to control "recreational" network traffic such as YouTube, Facebook, or P2P file-sharing. Frey largely agrees, noting that real-time communication has been the main catalyst behind QoS usage.

Back To Basics

The basics of QoS are quite simple: the ability to differentiate and discriminate between traffic flows and to provide preferred performance or bandwidth guarantees for time-sensitive applications under congested conditions. Unfortunately, the implementation is often mind-numbingly complex. Proper traffic classification is critical, says House; however, with more applications tunnelling through HTTP port 80, it often requires deep packet inspection rather than relying merely on IP and Transport layer data.
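A crude sketch of what that classification step looks like follows. The port numbers and the single payload signature are simplifications chosen for illustration; real DPI engines match far richer application signatures.

```python
# Toy classifier: port numbers alone cannot identify applications that tunnel
# over HTTP port 80, so a (very simplified) payload signature check stands in
# for deep packet inspection. The signature table is illustrative only.
SIGNATURES = {
    b"BitTorrent protocol": "p2p",   # appears in the BitTorrent handshake
}

def classify(dst_port: int, payload: bytes) -> str:
    if dst_port == 5060:
        return "voip-signalling"     # SIP
    if dst_port == 80:
        for pattern, app_class in SIGNATURES.items():
            if pattern in payload:
                return app_class     # tunnelled application spotted inside HTTP
        return "web"
    return "default"

print(classify(80, b"\x13BitTorrent protocol..."))   # -> p2p
```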

Once traffic is classified, Burton Group senior analyst Eric Siegel outlines numerous QoS techniques, including traffic conditioning (policing and shaping), flow queuing, and link fragmentation and interleaving. Vendors have introduced a number of queuing algorithms with an alphabet soup of acronyms. However, for IT managers who don't want to become experts in queuing theory, the bottom line, according to Siegel, is that real-time applications such as VoIP or IP teleconferencing require a strict priority queue above all other data, and the remaining bandwidth should be allocated among flows via a class-based algorithm. Siegel adds that in order to avoid overloading available capacity and in turn violating performance guarantees, admission to the strict priority queue should be controlled using some form of flow conditioning.
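The sketch below, a deliberately simplified model rather than any production queuing discipline, shows the shape of that arrangement: a strict-priority queue for real-time packets, with the remaining classes served by weighted round-robin. The class names and weights are assumptions for illustration.

```python
from collections import deque
from itertools import chain, repeat

realtime_q = deque()                                  # strict priority for VoIP/video
class_queues = {"business": deque(), "bulk": deque()}
class_weights = {"business": 3, "bulk": 1}

# Service pattern such as ["business", "business", "business", "bulk"].
service_order = list(chain.from_iterable(
    repeat(name, weight) for name, weight in class_weights.items()))
_turn = 0

def dequeue_next():
    """Return the next packet to transmit, or None if every queue is empty."""
    global _turn
    if realtime_q:                       # real-time traffic always jumps the line
        return realtime_q.popleft()
    for _ in range(len(service_order)):  # weighted round-robin for the rest
        name = service_order[_turn % len(service_order)]
        _turn += 1
        if class_queues[name]:
            return class_queues[name].popleft()
    return None
```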

Traffic Shaping & Conditioning Technologies

Although queue-based QoS prioritizes traffic, bumping the most critical packets or frames to the head of the line, Siegel says, "Flow [or traffic] conditioning techniques monitor network traffic loads in an effort to anticipate and avoid congestion at common network bottlenecks." These typically follow one of two fundamental strategies: policing, which monitors packet flows and discards those that exceed data rate limits, and shaping, which attempts to smooth out flows and avoid traffic bursts by buffering and by signalling endpoints to reduce their transmission speed.
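A token bucket is the classic building block behind both behaviours. The minimal sketch below (with illustrative parameters, not a production rate limiter) shows how a policer decides whether a packet conforms; a shaper would queue non-conforming packets instead of dropping them.

```python
import time

class TokenBucket:
    """Toy token bucket: policing drops excess packets; shaping would buffer them."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0            # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                       # conforming: forward immediately
        return False                          # policing drops; a shaper would queue

# e.g. police a flow to 2 Mbps with a 30 KB burst allowance (assumed figures)
bucket = TokenBucket(rate_bps=2_000_000, burst_bytes=30_000)
print(bucket.allow(1500))
```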

Traffic conditioning is often found in WAN accelerators; however, Siegel says several vendors offer special-purpose appliances that orchestrate conditioning across multiple LANs. Not surprisingly, most are high-end appliances designed for large enterprises or ISPs, although Siegel and Frey both note that many telecom providers now offer traffic management services appropriate for SMEs.

SME Traffic Management Basics

Complexity is the biggest problem with traffic management/QoS systems, which Siegel cautions "must be carefully designed and implemented to ensure that all network components work together properly to provide some traffic flows with better service than others"-a requirement, in addition to cost, that makes them infeasible for smaller enterprises. "A major goal for SMBs is to avoid complexity," he says, adding that SMEs should "think about ways to simplify the QoS situation, use it only when necessary, and use the simplest form that works." Yet all isn’t lost, because, Siegel notes, "In many cases, much simpler QoS technology, or no QoS at all, can provide the needed performance without the expense and management headaches of complex QoS systems." These might include an appliance, service provider, or just separate VoIP and data VLANs.

Frey points out that the same technology used to classify and prioritize traffic is needed for security threat detection; thus, many vendors have integrated QoS features into UTM appliances. Some dedicated traffic-shaping products are specifically designed for the SME. In addition to traffic shaping, these often incorporate features such as WAN and VPN load balancing and failover, content acceleration and filtering, and security features (firewall, IDS).

Applications For Traffic Management

Before embarking on a traffic management or QoS initiative, Siegel says it’s important to understand existing network conditions. "People never know what’s on their network," he says, adding that network managers should enable netflow accounting on routers and switches to gather traffic statistics. Yet, as House points out, netflow can’t identify and classify the new generation of network applications that tunnel traffic through port 80, a limitation easily overcome by traffic shaping appliances using deep packet inspection.

Siegel says traffic analysis may reveal that rather than having a QoS problem, the network may just have an "inappropriate use" problem, with employees downloading movies or other bandwidth-hogging content. Instead of a complex QoS solution, he quips, "You may just want to put in some [router] ACLs to blow this stuff away." House adds that packet-shaping appliances offer a less draconian solution because they can automatically identify and classify such usage and build simple policies, for example, limiting all social networking traffic to 10% of the total bandwidth.
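A back-of-the-envelope version of such a cap, counted over one-second windows with assumed link and percentage figures, might look like the sketch below; a real appliance would enforce this per flow and far more smoothly.

```python
import time

# Toy enforcement of a "10% of the link" cap for recreational traffic.
LINK_BPS = 100_000_000                        # assumed 100 Mbps link
SOCIAL_CAP_BYTES = int(0.10 * LINK_BPS / 8)   # bytes allowed per second

window_start = time.monotonic()
social_bytes = 0

def admit_social(size_bytes: int) -> bool:
    """Return True if a 'social networking' packet still fits under the cap."""
    global window_start, social_bytes
    now = time.monotonic()
    if now - window_start >= 1.0:             # start a fresh one-second window
        window_start, social_bytes = now, 0
    if social_bytes + size_bytes > SOCIAL_CAP_BYTES:
        return False                          # over the cap: drop or defer
    social_bytes += size_bytes
    return True
```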

Although traffic management is a powerful tool, particularly on large, complex networks, it’s no panacea. "Quality of service is a useful technology for providing multiple service levels. However, it doesn’t provide additional bandwidth, and it can be expensive and complex to implement," notes Siegel, concluding "It’s always worth careful analysis to see if simpler, less-expensive alternatives can handle the situation instead of a full, complex QoS implementation."

Adding New Storage Frugally

Even In Times of Tightening Belts, New Storage Systems Are Possible

With the economy on everyone’s mind, adding new storage to an existing data centre can be a challenging process. Strict budgets and a tendency to nix already green-lighted projects can make it difficult for IT to gain the dollars to move forward.

In such times, though, IT has a responsibility to make sure it is adding storage wisely and using what is already there in a cost-effective manner. And even though SMEs might not be able to afford the latest and greatest gear available today, there are smart ways to implement upgrades to gain maximum bang for the storage buck.

Only Add When Necessary

The cardinal rule in the modern climate is not to add storage that you don't need. Obvious as it seems, it is one of the most commonly broken rules, thanks to an entrenched overbuilding mentality. As a result, there is usually an abundance of unused storage sitting around the data centre.

“Companies need to ask if they know what their capacity utilization is,” says Jim DeCaires, storage product marketing manager at Fujitsu America. “They also need to understand how effectively they are using their storage and if they can capture unused capacity. Most storage capacity is underutilized.”

Before requesting the purchase of additional storage, IT should understand its current utilization. It can be very embarrassing, and financially damaging, if management finds out IT just spent $50,000 on storage hardware it didn't need. Such a discovery can make it all but impossible to gain approval for further purchase orders.
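Even a simple inventory roll-up like the hypothetical sketch below (the array names, capacities, and threshold are invented) can show whether reclaiming existing capacity should come before a purchase request.

```python
# Hypothetical inventory check before requesting new capacity.
arrays = [
    {"name": "array-a", "total_tb": 40, "used_tb": 16},
    {"name": "array-b", "total_tb": 24, "used_tb": 9},
]

total_tb = sum(a["total_tb"] for a in arrays)
used_tb = sum(a["used_tb"] for a in arrays)
utilization = used_tb / total_tb

print(f"Utilization: {utilization:.0%} ({total_tb - used_tb} TB still free)")
if utilization < 0.70:            # assumed internal purchasing threshold
    print("Reclaim or reallocate existing capacity before buying more.")
```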

“Understand what your applications, databases, and end users really need from a capacity and performance perspective before adding new storage,” says Tim Arland, principal consultant for storage solutions at Forsythe Technologies.

On the other side of the coin, if management sees that IT avoided a major storage addition via some diligent homework, it will look far more favourably on the next request for more capacity.

Avoid Vendor Lock-In

Vendors often offer sweet deals on large packages of storage hardware, particularly when their gear is all that remains in your data centre. The problem is that you are then tied to that vendor for service contracts and upgrades. That can actually result in longer life cycles and higher costs in the end, as you are either legally tied to that vendor or faced with a substantial up-front investment to replace your hardware entirely.

The best strategy is to own storage gear from a couple of vendors so you can play one against the other. Such a scenario typically results in good deals from all sides.

“Use vendors against each other to create a balance in the bidding process to get your best pricing,” says DeCaires. “Understand pricing models from vendors, be aware of technology complexity that obscures pricing, and carefully examine all the elements of technology packages to ensure you are buying only what you really need and want.”

Avoid The Latest & Greatest

Vendors generally blow the trumpet loudly for their new wares. But these products usually need to have the kinks knocked out of them, and it can be a year or two before they truly are enterprise-ready. That’s why large companies are often very conservative when it comes to storage: They tend to stay a generation behind the development curve in order to deal in only the most stable platforms.

Fibre Channel over Ethernet, or FCoE, and solid-state drives are examples of technologies being touted heavily in the press, but they may not be the best way forward for a budget-constrained SME. However, that doesn’t mean that there are not some newer developments that can add value and help save on storage without entailing too much risk.

“New technologies should be approached with caution,” says DeCaires. “Companies should understand the hype and the reality. New and fairly new technologies that add value and can save on new storage include storage virtualization, thin provisioning, deduplication, and software storage management that delivers utilization monitoring and charge-back features.”

Implement Tiered Storage To Reduce Costs

Tiered storage offers a way to boost performance for the most crucial applications while reducing storage costs overall. Tier 1 should be a small, mission-critical portion of the total data set; that tier gets the best hardware and the highest performance. Another one or two tiers can then be set up using lower-cost disks for the bulk of the organization’s applications.
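One simple way to find candidates for a lower tier is to look at access age. The sketch below is a toy illustration with assumed paths and thresholds, not a replacement for a storage management or ILM product.

```python
import os
import time

# Toy tiering pass: files untouched for 90+ days become candidates for a
# cheaper tier. The path and age threshold are assumptions for illustration.
TIER2_AGE_DAYS = 90

def tier_candidates(root: str):
    cutoff = time.time() - TIER2_AGE_DAYS * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getatime(path) < cutoff:
                yield path        # flag for migration to lower-cost disks

for path in tier_candidates("/data/tier1"):
    print("move to tier 2:", path)
```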

4 Ways To Save With VoIP

Get The Most Out Of Internet Telephony In Tough Times

Today’s economy is squeezing every penny it can out of the data centre, and even Internet telephony is feeling the pressure. As enterprises consider implementing VoIP, they must weigh ways to cut costs against choosing the best solution. The good news, according to VoIP manufacturers and industry experts, is that it’s possible. Here are four ways that SMEs can save money when undertaking a new VoIP project.

Take Advantage of What VoIP Has To Offer

Make sure to uncover all the benefits of using VoIP that can add real value in addition to cost savings. Chris Maxwell, director of Voxeo Labs, says it’s important for IT and data center managers to look to unified communications applications that integrate a variety of modalities, including voice, video, presence, conferencing, and text to streamline communication. He elaborates, "Companies can take advantage of new instant messaging and SMS capabilities that are available now with VoIP or integrate voicemail with email and combine email contact lists with phone lists by doing so."

Maxwell says data centers should consider using softphones for remote workers or even employees at headquarters to cut costs. He says a good headset can usually overcome quality concerns when using a computer as a primary phone.

SIP trunking should also be considered as an alternative, according to Matthew Kovatch, vice president of sales at Taridium. "Depending on your call volume, a complete new VoIP business phone solution can be paid off within six to eight months by reducing telephone costs alone,” Kovatch notes. “Some companies… offer comprehensive consulting and legacy migration programs that tie into your existing infrastructure."

According to Kovatch, investing in open-standards VoIP may not be a bad idea, either. "Handsets, for example, can make up to 80% of initial hardware cost, and if you choose a proprietary vendor, you might be tied to the vendor forever with expensive and inconvenient hardware upgrades," he says. "And consider a managed VoIP service if you are concerned about acquisition costs." Kovatch says a managed service combines the reliability of an on-premise open-standards VoIP telephony system with the convenience of a simple monthly fee for equipment, phone service, and support.

Choose Your Infrastructure Wisely

Maxwell says starting with an IP PBX (Internet Protocol private branch exchange) is a good idea. "Companies can get their feet wet by trying out an IP PBX," he says. "Many small to medium-size enterprises are finding free, open-source PBXes..., and many other bundled IP switch technologies are becoming increasingly stable, more widely used, and highly functional. It’s possible to implement some of these devices quite easily and cheaply. The benefit is low cost; the trade-off may be in installing and configuring the software yourself."

In Maxwell’s opinion, considering your existing phone lines is also a good idea. "There are devices such as media gateways and ATAs (analogue telephone adapters) that serve as converters from analog phone lines to SIP-based VoIP lines," he says. "This allows companies to keep their current telephone provider, infrastructure, and phone numbers while serving VoIP to local and remote locations using ATAs. In fact, if you have a WAN between two offices, it’s possible to bring in calls to a single location, convert the calls to VoIP, and send the calls to remote locations via SIP to be answered by remote employees."

Glean From Reports & Monitoring

Criss Scruggs, senior manager of product marketing at NetIQ, says that now, more than ever, organizations are being asked to demonstrate the value and return of each new investment. So how do organizations justify their VoIP expenditures while saving money simultaneously? "While not really a trade secret, reporting is the way to accomplish this task," Scruggs says. "You should already be doing this for your critical applications, and by extending existing reporting capabilities to the VoIP network, you can not only demonstrate service levels, but also rapidly identify and resolve call-quality issues. In addition, reporting can be leveraged as a capacity-planning tool for the next phase of your VoIP implementation."

Scruggs adds that proper diagnostics and reporting not only save money, but also help SMEs to proactively address call-quality issues that will impact business performance and tangibly demonstrate the value of VoIP. "By wrapping reporting into your standard VoIP rollout, you can avoid future downtime for end users, justify to your customers the call quality and service delivery metrics as needed, and prevent business stakeholders from questioning the value of your communication investments," he says.
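Call-quality reporting usually boils down to turning measured latency, jitter, and loss into an R-factor and MOS estimate. The sketch below uses a commonly cited simplification of the ITU-T E-model; it is an approximation for illustration, not an ITU-exact calculation or any vendor's scoring method.

```python
def estimate_mos(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    """Rough R-factor/MOS estimate; a popular simplification, not ITU-exact."""
    effective_latency = latency_ms + 2 * jitter_ms + 10
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40
    else:
        r = 93.2 - (effective_latency - 120) / 10
    r = max(0.0, r - 2.5 * loss_pct)          # penalize packet loss
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# e.g. 80 ms latency, 10 ms jitter, 0.5% loss -> roughly a 4.3 MOS ("good")
print(round(estimate_mos(80, 10, 0.5), 2))
```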

Scruggs also says that deploying a complex communications system across your network should not occur without proper monitoring for the proof of concept and post-deployment phases, which does present up-front costs but saves considerable cost over time. He says it is not uncommon for distributed organizations to believe their networks are prepared to adequately support VoIP and deliver the QoE (quality of experience) and service that users have come to expect with traditional telephony.

On the other hand, he says research shows that nearly 70% of those actively researching VoIP monitoring solutions are doing so after deployment in response to significant increases in trouble tickets and service quality complaints. "In today’s economy, most organizations cannot afford these issues, as they can result in lost revenue," Scruggs notes. "Deploying a monitoring system after your initial VoIP implementation can be time-consuming and costly."

Handle Your VoIP Network with Care

What do VoIP and cars have in common? Scruggs says that, when properly maintained, both VoIP networks and motor vehicles can deliver incredible benefits that far exceed the maintenance and purchase costs. "For example, by following the manufacturer’s maintenance timeline and due diligence recommendations, you can make your car last over 250,000 miles by offsetting future, larger issues and making the most of your initial investment," he says. "The same is true for VoIP. By spending your money on the right things from the get-go (assessment, monitoring, and reporting), you will achieve the best system performance and QoE for your users over time."

By not following this proper management path, Scruggs says the chances of sporadically paying much larger amounts to fix issues and potentially needing a major system overhaul increase dramatically. "These very issues and hidden costs can be easily avoided with proactive management, which may be an additional cost up front but will save significant funds in the long run."

VOIP WLAN Module

VoIP is used today by residential and business customers around the globe. It carries voice and data over the same connection, letting people communicate clearly across the world. A VoIP WLAN module is the wireless component that carries those calls over a wireless LAN with clear, good-quality voice transmission.

Today, enormous numbers of people communicate through devices such as cellular phones and computers. A VoIP WLAN module helps prevent the so-called "busy network" effect, improving the quality of coverage so that relayed calls and messages reach each user without delay. In business, it lets executives send and receive messages to and from employees, suppliers, and customers in less time.

Because it uses the Internet to send and receive messages, a VoIP WLAN module relays them to their destination immediately. With Skype, for example, we can see and hear the people we are talking to, watch their reactions, and hear them clearly. On cellular phones, the person on the other end comes through clearly, without interruptions and without delays in text messages.

In short, a VoIP WLAN module reduces errors in relaying messages from one party to another, shortens the time it takes for messages to arrive and be read, and keeps costs low, because users no longer have to repeat themselves or send multiple messages to get through.

Framework for strategic IT decisions

Strategic IT-related decisions, like adopting an enterprise-wide best practice or selecting an IT vendor, can have a lasting impact on a firm’s competitive edge. This is particularly true for areas like supply chain, customer relationship management, enterprise-scale business planning, and decision support systems. Unfortunately, however, no one set of decision criteria fits all firms, even within a business vertical. Industry analysts can provide generic recommendations about vendor capabilities and consult on best practices, but it is the executive who has full visibility into the requirements of the firm and the characteristics of its employees.

How can IT executives translate their qualitative knowledge into improved strategic decisions? How can they justify these decisions? How can IT vendors factor the relevant decision criteria into their own product strategies?

The difficult choices faced by an executive during strategic IT decision making can be illustrated in the context of the vendor selection problem. The choices include suite vendors who claim to provide enterprise-wide solutions (e.g. SAP, PeopleSoft, Oracle), best-of-breed vendors who claim to rise above the mediocre in specific high-priority domains (e.g. i2, Manugistics, Hyperion, Siebel), as well as vendors said to achieve “extreme specialization” in focused areas (e.g. Demantra, Optiant, Syncra, Cognos, Salesforce.com, Roadmap Technologies).

The executive needs to balance criteria like maturity of product functionality versus sales and implementation cycle times, customized versus out-of-the-box integration, and expected vendor life versus the competitive or cost/revenue advantages of cutting-edge functionality. From the IT vendor’s perspective, the same decision criteria are useful for shaping product strategies, especially in light of the rapid software commoditization of recent years, a phenomenon that has driven the need for scale economies and for specialization or “verticalization.”

Buck stops at the IT exec’s desk

The decision to adopt enterprise-wide “best practices” was not enforceable even a few years back, in spite of best intentions. However, the Internet and e-business models have changed much. The IT executive now has greater power to control decision-making processes even at micro-levels and, consequently, increased accountability, as horror stories like those at Nike or Cisco attest.

There are several challenging questions and little objective guidance in the marketplace. The marketing literature and guidance from IT vendors might be biased, while an excessive focus on “performance management” (suggested by leading industry analysts) might churn out misleading or inconsequential metrics. Even vendor references, often considered an objective yardstick, might be misleading, as success in software implementation need not correlate with return on investment or key business solutions.

While industry analysts and management consultants can offer guidance, they might have limited insights on the specific business processes and resources of a given enterprise. The final decision and the accountability lie with the IT executive.

Significant impact

The decisions by IT executives, when considered collectively, have far-reaching consequences, spanning industry and academia. The impacts range from business best practices and analyst guidance to product strategies of IT vendors, ultimately reflecting on the interests of academicians and the perceived value of an entire field of study.

As an example, consider the perceived value of “analytics”, broadly construed, within the e-business enabled enterprise. The confusion created by vendor hype has led to costly but not-so-successful implementations, which in turn has created a perception that advanced analytics is only marginally useful in “real-world” scenarios. This has led to a collective de-emphasis of analytics in sales cycles and hiring decisions and has influenced industry analysts and IT vendors to sideline these approaches, which in turn has further reduced the perceived value of these areas.

As a consequence, quantitative departments in all but the very top-tier business schools need to justify their existence, due in large part to a lack of corporate sponsorship and student interest. This “vicious cycle” working against the adoption of analytical methodologies has potentially impaired the ability of businesses and managers to harness the power of these approaches in their planning and execution strategies.

While the impact of strategic IT decisions might be easily appreciated, executives rarely have the time or the resources to consider these effects during the crucial decision-making process. At the very least, the best interests of the firm and the executives require that their decisions be guided by their corresponding short-term and long-term objectives. However, the immensity of the challenge often leads to overly simplified solutions, with a reliance on “quick and dirty” evaluations and “gut feel” decisions.

Current processes and pitfalls

The process of e-business vendor selection and adoption of best practices usually proceeds along seemingly well-defined steps. The request for proposal (RFP), sales and pre-sales cycles, implementation pilots, buying decisions and “go-live” cycles are routine and well documented. Problems caused by enforcing enterprise-wide best practices, especially when these result from explicit or perceived recommendations by IT vendors, can lead to spectacular, well-publicized failures. These, in turn, lead to questions about vendor selection, often resulting in vendor substitution. Introspection and “root cause” analyses can result in additional dollars spent toward analyst and consulting services. These “routine” processes hide significant complexities, as well as the rather subjective and ad hoc nature of the decision-making process.

Too much reliance on subjectivity, not objectivity

Anecdotal evidence suggests the ad hoc nature of decision-making processes. This author, as the manager of strategic products for large and niche IT vendors, has experienced situations where buying decisions were made without product demonstrations or evaluations, and/or based on subjective criteria ranging from the likeability or appeal of a salesperson or a senior executive to an unjustified focus on the latest buzzwords made popular by analyst firms and vendor marketing literature.

Considerations like product feature/functionality or core competencies on the one hand, and alignment with the “strategic vision” of an enterprise on the other, are often dealt with in a summary and highly subjective fashion.

IT executives need to make quick decisions to remain competitive, maintain profitability, and enhance their top and bottom lines. The luxury of 20/20 hindsight, available to researchers, is not guaranteed. The proposed framework provides a guideline for better strategic decision-making by executives under these constraints.

A proposed framework

In many ways, strategic IT decision making resembles the science and the art of forecasting. While much might depend on key decisions, the value of seemingly incremental advances in the decision-making process is difficult to quantify, except through their benefits or (sometimes spectacular) failures. Even these might not be adequate indicators, however. A forecaster or strategic decision-maker can only make the best choice given the information available, along with the uncertainty that invariably accompanies the information and the processes involved.

A fair evaluation is not necessarily how well the end results turned out, for these could be driven by external factors that are unknowable or beyond the control of the forecaster or strategic decision-maker. These factors make the decision-making process rather difficult to evaluate, even with 20/20 hindsight. However, the value of better decisions is that they yield better returns on average. In the context of an e-business enabled enterprise, this can make the difference between profitability and growth or despair and decay.

This perceived analogy between the processes of forecasting and strategic decision making suggests taking a closer look at the established wisdom of the former to see if improvements can be made in the latter. This author turned to the works of Allan Hunt Murphy, a brilliant forecaster and statistician who fundamentally influenced his fields of study. An essay by Murphy appeared particularly relevant; in it, he suggested a strategy for evaluating forecasts. He proposed the following three kinds of “measures”, which he called “Type I”, “Type II” and “Type III”:

  • Type I: Do the forecasters utilize the best available information and skills?
  • Type II: Do the forecasts agree statistically with the observations?
  • Type III: Do the forecasts provide benefits to the end-users?

Of these, the Type II measure is the easiest to quantify and track on an ongoing basis. These can be statistical measures of skill that, for example, compare forecasts with actual observations for successive forecast lead times.

Type I, however, is somewhat subjective, reflecting the nature of the forecasting process. Still, the measure, albeit qualitative, is well defined.

Type III rests on Murphy’s belief that forecasts have no intrinsic value and are only useful in the context of their end use. This measure requires a definition of relevant utility metrics. Figure 1 depicts Murphy’s evaluation measures as a three-dimensional “decision matrix”.

From Murphy to best practices for e-business

Strategic decisions about enterprise-wide adoption of e-business best practices are motivated by the need to understand the past, measure the present, anticipate the future, and react quickly to change. The analogous nature of forecasting and strategic IT decision making will be leveraged to extend Murphy’s formulations and the decision matrix presented in Figure 1. The forecaster’s best knowledge and skills (Murphy’s Type I) translate to creating, managing and retaining knowledge across the enterprise. The need to measure and track accuracy metrics (Type II) is analogous to measuring the health of the business on an ongoing basis, for management by objectives and by exceptions, both for diagnosis and prognosis.

Finally, the utility of the forecasts in the contexts of their end-use (Type III) translates to strategic management, which equates to the aggregate state of the business in terms of stakeholder and market value, as well as the ability to respond quickly to change.

Adoption of best practices for e-business

Type I, “Knowledge management”

Global information visibility as well as knowledge creation, retention and sharing across and among organizations or trading partners, combined with information analysis processes for prediction and change detection. “Knowledge” in this context is broadly defined to include analysis of archived data and best practices, as well as prediction and anticipation of change by human experts, automated analytic tools, and human-computer interaction. These are the core processes that utilize and build the value, character and philosophy of an enterprise.

Type II, “Performance management”

Continuous performance measurement uses key performance indicators (KPIs) to evaluate business processes and decisions. This includes monitoring pre-defined metrics that indicate the state of the business, in both manual and automated modes, as well as having a process for defining new metrics in anticipation of, or in response to, change. The performance measures need to be granular enough to enable tactical decisions by business line managers and aggregated enough for C-level executives to feel the pulse of the enterprise as a whole. Measures at different levels of aggregation need to be adequately linked through automated allocation and consolidation mechanisms. These are diagnostic processes that measure the performance of the core knowledge management practices and can serve as prognostic guidelines for change management.
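A toy example of that kind of roll-up, with invented KPIs, regions, and targets, is sketched below: line managers see per-region detail, executives see the aggregate, and exceptions fire when a metric misses its threshold.

```python
# Toy KPI roll-up; all metrics, regions, and thresholds are illustrative.
kpis = {
    "on_time_delivery": {"north": 0.97, "south": 0.88, "west": 0.95},
    "order_fill_rate":  {"north": 0.99, "south": 0.96, "west": 0.93},
}
thresholds = {"on_time_delivery": 0.95, "order_fill_rate": 0.95}

for name, by_region in kpis.items():
    aggregate = sum(by_region.values()) / len(by_region)   # executive view
    print(f"{name}: aggregate {aggregate:.1%}")
    for region, value in by_region.items():                # line-manager view
        if value < thresholds[name]:
            print(f"  exception: {region} at {value:.1%} "
                  f"(target {thresholds[name]:.0%})")
```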

Type III, “Strategic management”

Strategic management encompasses the processes and mechanisms for controlling the overall health and direction of the enterprise in a way that maximizes its value to the market and to shareholders. It draws on aggregate-level knowledge and performance measures but also includes the ability to provide strategic guidance that is quickly adopted throughout the enterprise. Examples of strategic decisions include changes in the relative emphasis on cost-cutting efforts, revenue generation, profitability, or service levels. These decisions are driven by emerging market or business conditions, or anticipations thereof, and thus need quick translation into tactical action if they are to be effective. Adopting best practices that facilitate strategic decisions as well as near real-time, enterprise-wide implementation is a key requirement.

The IT executive needs to consider three orthogonal decision variables during the adoption of best practices for e-business: knowledge management, performance management, and strategic management.

From best practices to the most suitable IT vendor

The decision matrix for e-business translates logically to the one for IT vendor selection. The relevant decision criteria are the technologies, tools and application logic that support or facilitate the concepts and business processes discussed earlier. The ability of enabling technologies to support business processes, the degree of support, and the importance of the human factor, have been the topic of much discussion and will not be repeated here.

Type I, “Collaboration and analytics”

E-business technologies that support knowledge management can be broadly categorized into two groups: those that support collaboration within and among organizations or enterprises (i.e. information acquisition, management, visibility and transfer); and those that support information reconciliation and knowledge creation (e.g. analytical tools). Note that “analytics” is broadly construed in this context to include planner or human driven analysis as well as mathematical modeling. The former includes tools for decision support like spreadsheet analysis and OLAP while the latter includes advanced mathematical approaches like data mining and optimization.

Type II, “Metrics and reports”

The ability to define and measure the state of a business through e-business technologies requires the generation of pre-defined and ad hoc metrics, the creation of presentation-ready reports, continuous monitoring and tracking of key performance indicators at detailed and aggregate levels for business line managers and executives, and exception and alert mechanisms for handling special cases. The tools should facilitate performance management by fact and by exception, in the context of specific business verticals and within the constraints of the enterprise.

Type III, “Breadth of footprint”

The size of a vendor’s footprint encapsulates key decision considerations like the ability to provide a 360° view of the enterprise (for example, integration of back office and front office components), longevity amidst possible vendor consolidation, and the ability to execute and allocate the results of strategic decisions. While larger vendors tend to have an edge in these areas, the ability of smaller vendors to sustain niche positions and provide holistic solutions in their specific areas of expertise can be a key consideration. The trade-off between customized integration of best-of-breed solutions and “out of the box” integration by suite vendors needs to be carefully balanced.

Vendors of e-business applications can be judged on the basis of three broad criteria: collaboration and analytics, metrics and reports, and breadth of footprint.

Managerial insights

The adoption of best practices for e-business, and corresponding IT vendor selection, needs to balance three orthogonal decision variables. Limited resources, opportunities and choices may force an executive to assign relative weights to each criterion. Unfortunately, there can be no one guideline that fits all enterprises and business requirements. However, it is useful to remember that strategic management as defined here is the end-goal, knowledge management is the means, and performance management provides a mechanism to check whether the means are sufficient and well aligned with the end. The executive needs to ultimately decide where in the decision matrix his or her organization fits best, in the context of the required business solutions.

Friday, 28 August 2009

Server Virtualization Performance

Server virtualization has been a hot topic for a few years now. The concept continues to excite IT managers with the possibility of running multiple OSes on one system. But in the midst of the hype, it's easy to overlook how issues such as CPU overhead can seriously impact server performance. Before you commit to server virtualization, the pitfalls and remedies deserve some exploration.

The Pitfalls

Virtualization technology vendors will claim that they can drive I/O capacities up to wire speed, but they do not discuss the amount of CPU power that is needed to do that. Salsburg says, "Workloads that are data-intensive may utilize far more of your CPU power than you expect. Future hypervisors, working with the processor and HBA/NIC vendors, will drive down this CPU overhead, but that is later on their roadmap." High CPU overhead will cause erratic and degraded performance.

The virtualization of the x86 architecture has been accomplished in two ways: full virtualization and paravirtualization. While paravirtualization offers important performance benefits, it also requires modification of the operating system, which may impact application certifications.

Full virtualization, on the other hand, relies on sophisticated but fragile software techniques to trap and virtualize the execution of certain sensitive, ‘non-virtualizable’ instructions in software via binary patching. With this approach, critical instructions are discovered at run-time and replaced with a trap into the VMM to be emulated in software. These techniques incur a large performance overhead (as much as 20 to 40%), which becomes a problem in areas such as system calls, interrupt virtualization, and frequent access to privileged resources.

The successor to full and paravirtualization is native virtualization. He says with native virtualization the VMM can efficiently virtualize the x86 instruction set by handling the sensitive, “non-virtualizable” instructions using a classic trap and emulate model in hardware vs. software. “Native virtualization has just become available on the market in the last nine months. While it is a new approach, it offers considerable benefits to users in performance and ease of implementation. It also protects the investment in existing IT infrastructure. This new approach is worthy of consideration for those planning their next steps in server virtualization.”

Problems & Remedies

Two problems that come to mind are the issues of security and management. A typical three-tier security model (with the Web tier isolated from the application tier, which is isolated from the database tier) cannot be deployed on a single consolidated server today using current hypervisors. "If one tier is infected and this brings down the hypervisor, you have not sufficiently isolated one tier from another."

Regarding management, consolidating many OS images on a single server may diminish the operation costs for the hardware but not for the various OS images. "In addition, virtualization will spawn many more OS images, due to the simplicity of setting them up. The hypervisor vendors are working on better management, but their solutions do not today scale up to an enterprise-level management structure."

It's important to consider the applications and the deployment goals to match the appropriate virtualization technology, whether virtual machine technology (such as VMware, Xen, or Virtual Iron) or OS virtualization (such as SWsoft Virtuozzo). "New issues are created through virtualization, such as virtual machine sprawl due to the ease of deploying a new virtualized server, as compared with setting up a new physical server."

Virtualization is a technology shift, a process change. "Once organizations move forward in their deployment, they often realize that no one virtualization technology is perfect for every need. For that reason, many organizations are deploying virtual machines for test and development because the big advantage to this technology is the ability to load many different operating systems on the same server. In the same organizations, they are using OS virtualization for high I/O and production applications because it enables density of up to hundreds of virtual environments per physical server."

Companies that plan to virtualize their x86 infrastructure need the right tools and expertise to manage this virtual infrastructure. “The best server virtualization solutions have built-in capabilities such as Live Capacity and Live Migration (transparent workload migration) that enable users to optimize virtual server utilization across a shared pool of resources.” With these types of tools, users can take advantage of policy-driven management capabilities that continuously sample performance data from every server and every virtual server to automatically relocate running OSes and applications from one physical server to another (without losing any state). "This streamlines the management of the data centre greatly while also reducing the potential for error."

Introduction to server virtualization

What is virtualization and why use it

Virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of making the most of physical resources and, in turn, of the investment in hardware. Because Moore's law has accurately predicted the exponential growth of computing power, while the hardware requirements for the same computing tasks have largely not changed, it is now feasible to turn a very inexpensive 1U dual-socket, dual-core commodity server into eight or even 16 virtual servers, each running its own operating system. Virtualization technology is a way of achieving higher server density. However, it does not actually increase total computing power; it decreases it slightly because of overhead. But since a modern 2-socket, 4-core server is more powerful than an 8-socket, 8-core server was four years ago, we can exploit this newfound hardware power by increasing the number of logical operating systems it hosts. This slashes the majority of hardware acquisition and maintenance costs, which can result in significant savings for any company or organization.

When to use virtualization

Virtualization is the perfect solution for applications that are meant for small- to medium-scale usage. Virtualization should not be used for high-performance applications where one or more servers need to be clustered together to meet performance requirements of a single application because the added overhead and complexity would only reduce performance. We're essentially taking a 12 GHz server (four cores times three GHz) and chopping it up into 16 750 MHz servers. But if eight of those servers are in off-peak or idle mode, the remaining eight servers will have nearly 1.5 GHz available to them.

While some in the virtualization industry like to tout high CPU utilization numbers as an indication of optimum hardware usage, this advice should not be taken to the extreme where application response times become excessive. A simple rule of thumb is to never let a server exceed 50% CPU utilization during peak loads and, more importantly, never let application response times exceed a reasonable SLA (Service Level Agreement). Most modern servers used for in-house server duties run at 1 to 5% CPU utilization. Running eight operating systems on a single physical server would elevate peak CPU utilization to around 50%, but the average would be much lower, since the peaks and valleys of the virtual operating systems tend to cancel each other out.
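A quick back-of-the-envelope check against that 50% rule might look like the sketch below; the per-guest peak figure is an assumption you would replace with measured data.

```python
# Back-of-the-envelope consolidation check against the 50% rule of thumb.
host_cores, core_ghz = 4, 3.0                  # the "12 GHz" host in the text
host_capacity_ghz = host_cores * core_ghz

guests = 8
avg_guest_peak_ghz = 0.6                       # assumed per-guest peak demand

expected_peak = guests * avg_guest_peak_ghz / host_capacity_ghz
print(f"Expected peak utilization: {expected_peak:.0%}")
if expected_peak > 0.50:
    print("Over the 50% rule of thumb: spread the guests across more hosts.")
```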

While CPU overhead in most of the virtualization solutions available today is minimal, I/O (input/output) overhead for storage and networking throughput is another story. For servers with extremely high storage or network I/O requirements, it would be wise to run them on bare metal even if their CPU requirements can be met inside a virtual environment. Both XenSource and Virtual Iron (which will soon be Xen hypervisor-based) promise to minimize I/O overhead, but both are in beta at this point, so there haven't been any major independent benchmarks to verify this.

How to avoid the "all your eggs in one basket" syndrome

One of the big concerns with virtualization is the "all your eggs in one basket" syndrome. Is it really wise to put all of your critical servers into a single physical server? The answer is absolutely not! The easiest way to avoid this liability is to make sure that a single service isn't only residing on a single server. Let's take for example the following server types:

  • HTTP
  • FTP
  • DNS
  • DHCP
  • RADIUS
  • LDAP
  • File Services using Fiber Channel or iSCSI storage
  • Active Directory services

We can put each of these types of servers on at least two physical servers and gain complete redundancy. These types of services are relatively easy to cluster because they're easy to switch over when a single server fails. When a single physical server fails or needs servicing, the other virtual server on the other physical server would automatically pick up the slack. By straddling multiple physical servers, these critical services never need to be down because of a single hardware failure.
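The placement rule can even be checked mechanically. The sketch below is a toy anti-affinity check with invented VM and host names; virtualization management suites implement the same idea as placement policies.

```python
# Toy anti-affinity check: two instances of the same service should never
# share a physical host, so one hardware failure cannot take the service down.
placement = {
    "dns-1": "host-a",  "dns-2": "host-b",
    "dhcp-1": "host-a", "dhcp-2": "host-a",   # deliberate violation
}

def anti_affinity_violations(placement: dict) -> list:
    hosts_by_service = {}
    for vm, host in placement.items():
        service = vm.rsplit("-", 1)[0]
        hosts_by_service.setdefault(service, set()).add(host)
    return [svc for svc, hosts in hosts_by_service.items() if len(hosts) < 2]

print("services at risk:", anti_affinity_violations(placement))   # ['dhcp']
```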

For more complex services such as Exchange Server, Microsoft SQL Server, MySQL, or Oracle, clustering technologies can be used to synchronize two logical servers hosted across two physical servers; this method generally causes some downtime during the transition, which could take up to five minutes. This isn't due to virtualization but rather to the complexity of clustering, which tends to require time to transition. An alternative for handling these complex services is to migrate the virtual server from the primary physical server to the secondary physical server. For this to work, something has to continuously synchronize memory from one physical server to the other so that failover can happen in milliseconds while all services remain functional.

Physical to virtual server migration

Any respectable virtualization solution will offer some kind of P2V (Physical to Virtual) migration tool. The P2V tool will take an existing physical server and make a virtual hard drive image of that server with the necessary modifications to the driver stack so that the server will boot up and run as a virtual server. The benefit of this is that you don't need to rebuild your servers and manually reconfigure them as a virtual server—you simply suck them in with the entire server configuration intact!

So if you have a data centre full of aging servers running on sub-GHz servers, these are the perfect candidates for P2V migration. You don't even need to worry about license acquisition costs because the licenses are already paid for. You could literally take a room with 128 sub-GHz legacy servers and put them into eight 1U dual-socket quad-core servers with dual-Gigabit Ethernet and two independent iSCSI storage arrays all connected via a Gigabit Ethernet switch. The annual hardware maintenance costs alone on the old server hardware would be enough to pay for all of the new hardware! Just imagine how clean your server room would look after such a migration. It would all fit inside of one rack and give you lots of room to grow.

As an added bonus of virtualization, you get a disaster recovery plan because the virtualized images can be used to instantly recover all your servers. Ask yourself what would happen now if your legacy server died. Do you even remember how to rebuild and reconfigure all of your servers from scratch? (I'm guessing you're cringing right about now.) With virtualization, you can recover that Active Directory and Exchange Server in less than an hour by rebuilding the virtual server from the P2V image.

Patch management for virtualized servers

Patch management for virtualized servers isn't all that different from regular servers, because each virtual operating system is its own independent virtual hard drive. You still need a patch management system that patches all of your servers, but there may be interesting developments in the future where you can patch multiple operating systems at the same time if they share common operating system or application binaries. Ideally, you would be able to assign a patch level to an individual server or a group of similar servers. For now, you will need to patch virtual operating systems as you would any other system, but there will be innovations in the virtualization sector that you won't be able to match with physical servers.

Licensing and support considerations

A big concern with virtualization is software licensing. The last thing anyone wants to do is pay for 16 copies of a license for 16 virtual sessions running on a single computer. Software licensing costs often dwarf hardware costs, so it would be foolish to run an expensively licensed application on a shared piece of hardware. In that situation, it's best to run that license on the fastest physical server possible, without any virtualization layer adding overhead.

For something like Windows Server 2003 Standard Edition, you would need to pay for each virtual session running on a physical box. The exception to this rule is if you have the Enterprise Edition of Windows Server 2003, which allows you to run four virtual copies of Windows Server 2003 on a single machine with only one license. This Microsoft licensing policy applies to any type of virtualization technology that is hosting the Windows Server 2003 guest operating systems.
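A rough comparison of the two licensing routes is simple arithmetic; the prices below are placeholders for illustration, not actual Microsoft list prices.

```python
# Rough licence comparison with placeholder prices (assumed figures only).
STD_PRICE, ENT_PRICE = 1_000, 3_000    # assumed per-licence costs
guests = 8

standard_cost = guests * STD_PRICE               # one licence per virtual session
enterprise_cost = -(-guests // 4) * ENT_PRICE    # each licence covers four guests

print(f"Standard per guest: ${standard_cost:,}")
print(f"Enterprise bundles: ${enterprise_cost:,}")
```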

If you're running open source software, you generally don't have to worry about licence fees; what you do need to be concerned about is the support contracts. If you're considering virtualizing open source operating systems or open source software, make sure you calculate the support costs. If the support costs are substantial for each virtual instance of the software you're going to run, it's best to get the most out of those costs by putting the software on its own dedicated server. It's important to remember that hardware costs are often dwarfed by software licensing and/or support costs. The trick is to find the right ratio of hardware to licensing/support costs. When calculating hardware costs, be sure to include hardware maintenance, power usage, cooling, and rack space.

There are also licensing and support considerations for the virtualization technology itself. The good news is that all the major virtualization players now offer some kind of free solution to get you started. Even a year ago, when VMware was pretty much the only player in town, free virtualization wasn't an option, but there are now free solutions from VMware, Microsoft, XenSource, and Virtual Iron. In the next virtualization article, we'll go more in-depth on the various virtualization players.

Eon Networks: Striding Through The Slowdown

Eon Networks Pvt. Ltd., a leading provider of IT infrastructure solutions and services to domestic customers, announced continued progress during the second quarter of 2009. In this quarter we pushed ourselves a little harder and achieved higher sales by executing a number of high-value orders for Tier 1 clients such as Airtel.

Eon Networks has now partnered with VMware and NComputing to provide virtualisation technology solutions. Both VMware (for virtualization software) and NComputing (for virtualization hardware) are recognised leaders in virtualization technology. Virtualisation is proven to increase the utilisation of existing hardware and reduce capital and operational costs. This partnership will enable Eon Networks to offer and deliver desktop, server and application virtualisation solutions to customers in India.

"Virtualisation is a way to run multiple operating systems on a single piece of hardware. The two primary uses are server virtualization and desktop virtualization".

Our level of commitment, service, support, knowledge, delivery capability and competitive pricing has enabled us to build solid goodwill in the field, which in turn has helped us partner with leaders in IT infrastructure solutions.

Our alliance portfolio includes top-tier technology providers such as Cisco, D-Link, 3Com, IBM, HP, Microsoft, Symantec, Trend Micro, McAfee, Check Point, Cyberoam and SonicWall. Our unique solution-based methodology has enabled us to effectively address the business needs of our clients, optimise the returns on their IT investments, mitigate risk and focus on growth and profitability.

In a short span of time we have provided our clients with a wide range of technology solutions, including desktop, server and application virtualisation, website design, corporate presentations, graphic design, network security, storage, routing and switching, and wireless networking, across enterprise and SMB verticals in the IT, ITES, construction, education, financial and consultancy industries.

Thursday, 27 August 2009

Can Terminal Services be considered Virtualization?

Virtualization is a hot topic and very hyped at the moment. Manufacturers would like to use that hype to boost their products by linking them to the virtualization market. Amid this craze, Terminal Services has also been labeled a “virtualization product”. In this article, let’s look at the facts, and I’ll also give my opinion about this virtualization label.

Introduction

Although virtualization techniques were described a long time ago (around 1960), within the ICT market it was the launch of VMware that triggered the big success of virtualization. Their server virtualization product, which made it possible to run multiple servers on one physical system, started the virtualization space. After server virtualization, other virtualization products and fields followed quickly, such as application virtualization, operating system virtualization and desktop virtualization. Products that were already available before the virtualization market took off now want to hitch a ride on the craze. I was a bit surprised when both Microsoft and Citrix decided that Terminal Services and Citrix Presentation Server are virtualization products.

What is…?

Before we can start determining whether Terminal Services can be labeled as a virtualization product, we need to first find out what the definitions of virtualization and terminal services are.

Virtualization

Virtualization is a broad term that refers to the abstraction of computer resources. Virtualization hides the physical characteristics of computing resources from their users, be they applications or end users. This includes making a single physical resource (such as a server, an operating system, an application, or a storage device) appear to function as multiple virtual resources; it can also include making multiple physical resources (such as storage devices or servers) appear as a single virtual resource.

Terminal Services

Terminal Services is one of the components of Microsoft Windows (both server and client versions) that allows a user to access applications and data on a remote computer over any type of network, although it is normally best used over a Wide Area Network (WAN) or Local Area Network (LAN), as ease of use and compatibility with other types of network may differ. Terminal Services is Microsoft's implementation of thin-client terminal server computing, where Windows applications, or even the entire desktop of the computer running Terminal Services, are made accessible to a remote client machine.

Terminal Services Virtualization?

Both Microsoft and Citrix are using the virtualization space to position their Terminal Services/Citrix Presentation Server/XenApp product features. Microsoft calls it presentation virtualization, while Citrix uses the term session virtualization. Microsoft describes Terminal Services virtualization as follows:

Microsoft Terminal Services virtualizes the presentation of entire desktops or specific applications, enabling your customers to consolidate applications and data in the data center while providing broad access to local and remote users. It lets an ordinary Windows desktop application run on a shared server machine yet present its user interface on a remote system, such as a desktop computer or thin client.

If we go a bit deeper, Microsoft describes its interpretation of presentation virtualization as follows: presentation virtualization isolates processing from the graphics and I/O, making it possible to run an application in one location but have it controlled in another. It creates virtual sessions, in which the executing applications project their user interfaces remotely. Each session might run only a single application, or it might present its user with a complete desktop offering multiple applications. In either case, several virtual sessions can use the same installed copy of an application.

OK, now that we have the definitions of virtualization and Terminal Services, and the way Microsoft explains why Terminal Services is a virtualization technique, it is time to determine whether Microsoft is right in its assumption.

Terminal Service Virtualization

Reading the explanation of virtualization, two important concepts stand out: abstraction and hiding the physical characteristics.

From the user's perspective, the application is not available on the workstation or thin client but is running somewhere else. Using the definition of hiding physical characteristics, Terminal Services can be seen, from a user perspective, as virtualization: because the application is not installed locally, the user has no physical association with it.

With the IT perspective in mind, Terminal Services can also be seen as virtualization, based on the definition that (physical) resources can function as multiple virtual resources. Traditionally, an application installed on a local workstation can be used by one user at a time. By installing the application on a Terminal Server (in combination with a third-party SBC add-on), the application can be used by many users at the same time. Although an application cannot be seen as a 100% physical resource, you can see Terminal Services as a way of offering a single resource that appears as multiple virtual resources.

In summary, Terminal Services can be seen as virtualization because the application is abstracted from the local workstation and the application appears to function as multiple virtual resources.

Terminal Services is not virtualization

However, let’s take a closer look at the physical resources. Hardware virtualization, application virtualization and OS virtualization really do decouple from the physical resource: with application virtualization the application is not physically installed on the system, OS virtualization does not need its own hard disk to operate, and with hardware virtualization the virtual machine does not communicate (directly) with real hardware. Terminal Services, from an IT perspective, still needs physical resources. Terminal Services is not really virtualising anything; only the location where the application or session runs and the way the application is displayed to the user are different. In other words, as Microsoft describes in its own explanation, Terminal Services isolates processing from the graphics and I/O, but this is still done using another device, without an additional layer in between.

Conclusion

Back to the main question: is Terminal Services virtualization? And the answer is… it depends. It depends on how you look at the concept of virtualization and on your view of Terminal Services. Terminal Services can be seen as virtualization if you approach it from the user perspective (the application is not physically running on the workstation or thin client) or from the view that a single application installation can be used by more than one user at once. If you look at how other virtualization techniques work, Terminal Services does not function the same way, and nothing is physically running in a separate layer.

So there is no clear answer; it is subjective, depending on how you look at virtualization and Terminal Services. My personal opinion is that Terminal Services cannot be labeled as virtualization, because it is not comparable with other virtualization techniques. In my eyes, Terminal Services does not add an additional (virtualization) layer; it only divides the processing between two systems. I think both Microsoft and Citrix are using the "virtualization" term to take advantage of the current boom in the virtualization market, but both know that, looking at the underlying techniques, it is not "real" virtualization.

How Virtual Private Networks Work

Introduction to How Virtual Private Networks Work

The world has changed a lot in the last couple of decades. Instead of simply dealing with local or regional concerns, many businesses now have to think about global markets and logistics. Many companies have facilities spread out across the country or around the world, and there is one thing that all of them need: A way to maintain fast, secure and reliable communications wherever their offices are.

Until fairly recently, this has meant the use of leased lines to maintain a wide area network (WAN). Leased lines, ranging from ISDN (integrated services digital network, 128 Kbps) to OC3 (Optical Carrier-3, 155 Mbps) fibre, provided a company with a way to expand its private network beyond its immediate geographic area. A WAN had obvious advantages over a public network like the Internet when it came to reliability, performance and security. But maintaining a WAN, particularly when using leased lines, can become quite expensive and often rises in cost as the distance between the offices increases.

As the popularity of the Internet grew, businesses turned to it as a means of extending their own networks. First came intranets, which are password-protected sites designed for use only by company employees. Now, many companies are creating their own VPN (virtual private network) to accommodate the needs of remote employees and distant offices.

Basically, a VPN is a private network that uses a public network (usually the Internet) to connect remote sites or users together. Instead of using a dedicated, real-world connection such as a leased line, a VPN uses "virtual" connections routed through the Internet from the company's private network to the remote site or employee. In this article, you will gain a fundamental understanding of VPNs, and learn about basic VPN components, technologies, tunnelling and security.

What Makes a VPN?

A well-designed VPN can greatly benefit a company. For example, it can:

  • Extend geographic connectivity
  • Improve security
  • Reduce operational costs versus traditional WAN
  • Reduce transit time and transportation costs for remote users
  • Improve productivity
  • Simplify network topology
  • Provide global networking opportunities
  • Provide telecommuter support
  • Provide broadband networking compatibility
  • Provide faster ROI (return on investment) than traditional WAN

What features are needed in a well-designed VPN? It should incorporate:

  • Security
  • Reliability
  • Scalability
  • Network management
  • Policy management

There are two common types of VPN: remote-access and site-to-site. In the next couple of sections, we'll describe them in detail.

Remote-Access VPN

Remote-access, also called a virtual private dial-up network (VPDN), is a user-to-LAN connection used by a company whose employees need to connect to the private network from various remote locations. Typically, a corporation that wishes to set up a large remote-access VPN will outsource to an enterprise service provider (ESP). The ESP sets up a network access server (NAS) and provides the remote users with desktop client software for their computers. The telecommuters can then dial a toll-free number to reach the NAS and use their VPN client software to access the corporate network.

A good example of a company that needs a remote-access VPN would be a large firm with hundreds of sales people in the field. Remote-access VPNs permit secure, encrypted connections between a company's private network and remote users through a third-party service provider.

Site-to-Site VPN

Through the use of dedicated equipment and large-scale encryption, a company can connect multiple fixed sites over a public network such as the Internet. Site-to-site VPNs can be one of two types:

  • Intranet-based - If a company has one or more remote locations that they wish to join in a single private network, they can create an intranet VPN to connect LAN to LAN.
  • Extranet-based - When a company has a close relationship with another company (for example, a partner, supplier or customer), they can build an extranet VPN that connects LAN to LAN, and that allows all of the various companies to work in a shared environment.

Analogy: Each LAN is an Island

Imagine that you live on an island in a huge ocean. There are thousands of other islands all around you, some very close and others farther away. The normal way to travel is to take a ferry from your island to whichever island you wish to visit. Of course, travelling on a ferry means that you have almost no privacy. Anything you do can be seen by someone else.

Let's say that each island represents a private LAN and the ocean is the Internet. Travelling by ferry is like connecting to a Web server or other device through the Internet. You have no control over the wires and routers that make up the Internet, just like you have no control over the other people on the ferry. This leaves you susceptible to security issues if you are trying to connect between two private networks using a public resource.

Continuing with our analogy, your island decides to build a bridge to another island so that there is an easier, more secure and more direct way for people to travel between the two. It is expensive to build and maintain the bridge, even though the island you are connecting with is very close. But the need for a reliable, secure path is so great that you do it anyway. Your island would like to connect to a second island that is much farther away but decides that the costs are simply too high to bear.

This is very much like having a leased line. The bridges (leased lines) are separate from the ocean (Internet), yet are able to connect the islands (LANs). Many companies have chosen this route because of the need for security and reliability in connecting their remote offices. However, if the offices are very far apart, the cost can be prohibitively high - just like trying to build a bridge that spans a great distance.

So how does VPN fit in? Using our analogy, we could give each inhabitant of our islands a small submarine. Let's assume that your submarine has some amazing properties:

  • It's fast.
  • It's easy to take with you wherever you go.
  • It's able to completely hide you from any other boats or submarines.
  • It's dependable.
  • It costs little to add additional submarines to your fleet once the first is purchased.

Although they are travelling in the ocean along with other traffic, the inhabitants of our two islands could travel back and forth whenever they wanted to with privacy and security. That's essentially how a VPN works. Each remote member of your network can communicate in a secure and reliable manner using the Internet as the medium to connect to the private LAN. A VPN can grow to accommodate more users and different locations much more easily than a leased line can. In fact, scalability is a major advantage that VPNs have over typical leased lines. Unlike with leased lines, where the cost increases in proportion to the distances involved, the geographic location of each office matters little in the creation of a VPN.
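
The scaling argument can be sketched with a toy cost model. The rates below are invented for the sake of the comparison, not real carrier pricing; the shape of the result is what matters: leased-line cost climbs with distance, while VPN cost only climbs with the number of sites.

    # Illustrative cost model: leased-line cost grows with distance, VPN cost doesn't.
    def leased_line_annual(distance_km, rate_per_km=120.0, base=5000.0):
        return base + rate_per_km * distance_km

    def vpn_annual(sites, internet_link=2400.0, vpn_gear=800.0):
        return sites * (internet_link + vpn_gear)

    for distance in (50, 500, 5000):
        print(f"{distance:>5} km: leased line ${leased_line_annual(distance):>9,.0f}"
              f"   vs. two-site VPN ${vpn_annual(2):,.0f}")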

VPN Security: Firewalls

A well-designed VPN uses several methods for keeping your connection and data secure:

  • Firewalls
  • Encryption
  • IPSec
  • AAA Server

In the following sections, we'll discuss each of these security methods. We'll start with the firewall.

A firewall provides a strong barrier between your private network and the Internet. You can set firewalls to restrict the number of open ports, what types of packets are passed through and which protocols are allowed through. Some VPN products, such as Cisco's 1700 routers, can be upgraded to include firewall capabilities by running the appropriate Cisco IOS on them. You should already have a good firewall in place before you implement a VPN, but a firewall can also be used to terminate the VPN sessions.
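
Conceptually, the port and protocol restrictions work like a simple allow-list lookup. The toy Python sketch below is only an illustration of that idea; a real firewall (Cisco IOS or otherwise) is configured in its own rule syntax and inspects far more than the destination port.

    # Toy illustration of port/protocol filtering -- not a real firewall.
    ALLOWED = {
        ("tcp", 443),   # HTTPS
        ("udp", 500),   # IKE, used to negotiate IPSec VPN tunnels
        ("udp", 4500),  # IPSec NAT traversal
    }

    def permit(protocol: str, dest_port: int) -> bool:
        """Return True if the packet matches an allow rule, otherwise drop it."""
        return (protocol, dest_port) in ALLOWED

    print(permit("tcp", 443))   # True  -> passed through
    print(permit("tcp", 23))    # False -> telnet blocked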

Understanding Basic WLAN Security Issues

A wireless LAN is the perfect way to improve data connectivity in an existing building without the expense of installing a structured cabling scheme to every desk. Besides the freedom that wireless computing affords users, ease of connection is a further benefit. Problems with the physical aspects of wired LAN connections (locating live data outlets, loose patch cords, broken connectors, etc.) generate a significant volume of helpdesk calls. With a wireless network, the incidence of these problems is reduced.

There are, however, a number of issues that anyone deploying a wireless LAN needs to be aware of. First and foremost is the issue of security. In most wired LANs the cables are contained inside the building, so a would-be hacker must defeat physical security measures (e.g. security personnel, identity cards and door locks). However, the radio waves used in wireless networking typically penetrate outside the building, creating a real risk that the network can be hacked from the parking lot or the street.

The designers of IEEE 802.11b, or Wi-Fi, tried to overcome the security issue by devising a user authentication and data encryption system known as Wired Equivalent Privacy, or WEP.

Unfortunately, some compromises that were made in developing WEP have resulted in it being much less secure than intended: in fact, a free program is now available on the Internet that allows a hacker with minimal technical knowledge to break into a WEP-enabled wireless network, without being detected, in no more than a few hours.

The IEEE standards group is working on an improved security system that is expected to overcome all of WEP's known shortcomings but it is unlikely that products incorporating the new technology will be widely available before late 2002 or early 2003.

In the meantime, security experts agree that all sensitive applications should be protected with additional security systems such as Internet Protocol Security (IPsec). However, if excessive security measures are forced on users of non-sensitive applications, the wireless network becomes cumbersome to use and system throughput is reduced.

A good wireless networking system should therefore provide a range of different user authentication and data encryption options so that each user can be given the appropriate level of security for their particular applications.

Another point to bear in mind is that each access point in a Wi-Fi network shares a fixed amount of bandwidth among all the users currently connected to it, on a first-come, first-served basis. It is therefore important to make sure that sufficient access points are installed for the expected volume of users and traffic. Even then, there is a tendency in a first-come, first-served network for a small number of wireless devices (typically those physically closest to the access point) to grab most of the available bandwidth, resulting in poor performance for the remaining users. The best way to resolve this issue is to choose a system that has quality of service (QoS) features built in.
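
A back-of-the-envelope way to size the number of access points is to divide the expected aggregate demand by the usable throughput of one access point. The figures below (user count, per-user demand, and roughly 6 Mbps of usable throughput from a nominal 11 Mbps 802.11b access point) are assumptions for illustration only.

    import math

    # Back-of-the-envelope access point planning (all numbers are assumptions).
    users = 120
    peak_demand_per_user_mbps = 0.5        # average demand per user at the busiest hour
    usable_throughput_per_ap_mbps = 6.0    # realistic share of a nominal 11 Mbps 802.11b link

    aps_needed = math.ceil(users * peak_demand_per_user_mbps / usable_throughput_per_ap_mbps)
    print(f"Access points needed for bandwidth alone: {aps_needed}")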

Since one of the major benefits of wireless networking is user mobility, another important issue to consider is whether users can move seamlessly between access points without having to log in again and restart their applications. Seamless roaming is only possible if the access points have a way of exchanging information as a user connection is handed off from one to another.

Furthermore, most large corporate data networks are divided into a number of smaller pieces called subnets for traffic management and security reasons. In many instances wireless LAN vendors provide seamless roaming within a single subnet, but not when a user moves from one subnet to another.

There are a number of ways of dealing with the issues described above. Several of the best-known networking equipment vendors have developed their own product ranges to include special access points and wireless LAN interface cards, central firewall and security components, and routers with built-in QoS capabilities.

When all these elements are used together, the result is a secure, high-performance wireless network. However, such solutions are expensive, and integrating the various components requires considerable patience and networking expertise.

Another approach that is often advocated is the use of virtual private network (VPN) hardware. VPN hardware is designed to enable remote users to establish a secure connection to a corporate data network via an insecure medium, namely the Internet. On the face of it this is a very similar problem to connecting via a wireless link.

However there are drawbacks to using existing VPN products in a wireless LAN environment. For starters, a VPN solution on its own does not address the requirement for QoS and seamless roaming between subnets.

Also, a VPN solution imposes the same high level of security on all users whether or not their applications warrant it. In order to achieve this, it requires special VPN software to be installed on each user's computer. In a wireless network with large numbers of users, this translates into a major headache.

What network managers are asking for is an architecture that offers different levels of security to meet varying user needs, ranging from simple user name access with no encryption through to a full IPsec implementation for sensitive applications. Ideally, the solution should deliver up to 100 Mbps of throughput. Other features should include QoS features to allocate bandwidth fairly among users, and seamless roaming both within and between subnets.

The objective is to deploy and maintain secure, high performance wireless LANs with a minimum amount of time, effort and expense.