Thursday, 17 September 2009

High Availability and Disaster Recovery for Virtual Environments

Virtualization is increasingly used by IT departments for server consolidation and testing. Virtual servers offer flexibility, but when a single physical server hosting multiple virtual servers fails, the potential data loss is enormous.

Introduction

Virtual servers are used to reduce operational costs and improve system efficiency, but their growth has created high availability and data protection challenges for IT departments. It is not enough to protect only physical servers; virtual servers must be protected as well, since they contain business-critical data and information. Virtual servers offer flexibility, but if a single physical server hosting multiple virtual servers fails, the impact of data loss is enormous.

Virtualization Benefits

Companies are adopting virtualization at a rapid pace because of the tremendous benefits it offers, including:

  • Server Consolidation: Virtualization consolidates multiple servers onto a single physical server, improving hardware utilization and operational efficiency.
  • Reduced Hardware Costs: As the number of physical servers goes down, the cost of servers and associated costs such as IT infrastructure and space also decrease.
  • Improved Application Security: By running each application in its own virtual machine, any vulnerability is segregated and does not affect other applications.
  • Reduced Maintenance: Since virtual servers can easily be relocated and migrated, hardware and software maintenance can be done with minimal downtime.
  • Enhanced Scalability: The ease with which virtual servers can be deployed results in improved scalability of the IT implementation.

File or Block Level Replication

Different replication techniques can be used to replicate data between two servers, both locally and remotely. In block-level replication, the work is performed by storage controllers or by host-based mirroring software. In file-level replication (replication of file-system changes), host software performs the replication. Both block- and file-level replication are essentially application agnostic: it does not matter what type of application is being replicated. Some vendors do add a degree of application awareness to these solutions, but they still cannot provide the automation, granularity and other advantages of a truly application-specific solution. One also needs to be aware of the following:

  • The replicated server is always in passive mode and cannot be accessed for reporting or monitoring purposes.
  • A virus or corruption can propagate from the production server to the replicated server.
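
To make the distinction concrete, here is a minimal sketch (in Python) of what host-based, file-level replication boils down to. The paths and the replication interval are illustrative assumptions, not features of any particular product. Note how the copy loop is completely application agnostic - it will happily ship a corrupted file or a virus to the replica, which is exactly the second concern above.

    # Minimal host-based, file-level replication: copy any file whose
    # modification time is newer on the source than on the replica.
    import os
    import shutil
    import time

    def replicate(source_root: str, replica_root: str) -> None:
        """One replication pass: mirror new or modified files to the replica."""
        for dirpath, _dirnames, filenames in os.walk(source_root):
            rel = os.path.relpath(dirpath, source_root)
            target_dir = os.path.join(replica_root, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in filenames:
                src = os.path.join(dirpath, name)
                dst = os.path.join(target_dir, name)
                # Copy only if the replica copy is missing or stale. The bytes
                # are copied blindly, whether they hold a document, a database
                # page, or a virus.
                if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                    shutil.copy2(src, dst)  # copy2 preserves modification times

    if __name__ == "__main__":
        while True:
            replicate("/data/production", "/data/replica")
            time.sleep(60)  # replication interval (illustrative)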

Application Specific Replication Approach

In this approach, replication is done at the mailbox or database level and is highly application specific. One can pick and choose the mailboxes or databases that need to be replicated. In the case of Exchange Server, one can set up a granular plan for key executives, sales and IT people, in which replication occurs more frequently to achieve the required Recovery Point Objective (RPO) and Recovery Time Objective (RTO). For everyone else in the company, another plan can be set up where the replication intervals are less frequent.
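
As an illustration, the sketch below shows how such per-mailbox plans might be expressed. The plan names, mailbox groups and intervals are invented for the example and do not correspond to any vendor's actual configuration format.

    # Hypothetical per-mailbox replication plans mapping schedules to RPO targets.
    from dataclasses import dataclass

    @dataclass
    class ReplicationPlan:
        name: str
        mailboxes: list          # which mailboxes this plan covers
        interval_minutes: int    # how often changes are shipped to the failover server
        rpo_minutes: int         # worst-case data loss this schedule permits

    PLANS = [
        # Key executives, sales and IT replicate frequently for a tight RPO...
        ReplicationPlan("priority", ["ceo", "cfo", "sales", "it-ops"], 15, 15),
        # ...while everyone else replicates on a relaxed schedule.
        ReplicationPlan("standard", ["all-other-mailboxes"], 240, 240),
    ]

    for plan in PLANS:
        print(f"{plan.name}: every {plan.interval_minutes} min, RPO <= {plan.rpo_minutes} min")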

Another advantage of this approach is that the replicated or failover server is in an active mode and can be accessed for reporting and monitoring purposes. With the other replication approaches, the failover server is in a passive mode and cannot be used for maintenance, monitoring or reporting.

Backup and Replication

Some solutions offer both backup and replication as part of a single product, with the backup integrated with replication so that users get a two-in-one solution. These solutions use a two-tier architecture consisting of an application server and agents. The application server also hosts the network share that stores all the backup files. The files are stored on this network share rather than on any particular target server, which prevents the loss of backup files: if a target server goes down, users can still access their backup files and rebuild the target server with as little downtime as possible.

The mailboxes and databases are backed up to the backup server and then replicated to the remote failover server. A full backup and restore is done first, and after that only the changes are applied through incrementals. For restoring emails, mailboxes and databases, the local backup data can be used; for disaster recovery purposes, the remote failover server can be utilized.
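
The restore logic is easy to picture. The following sketch models backups as simple dictionaries of item to contents; a real product would stream database pages or mailbox items, but the flow - start from the full backup, then replay each incremental in order - is the same as described above.

    def restore(full_backup: dict, incrementals: list) -> dict:
        """Start from the full backup, then replay each incremental in order."""
        state = dict(full_backup)
        for delta in incrementals:
            state.update(delta)  # later changes overwrite earlier ones
        return state

    full = {"mailbox/alice": "v1", "db/orders": "v1"}
    deltas = [{"db/orders": "v2"}, {"mailbox/alice": "v3"}]
    print(restore(full, deltas))  # {'mailbox/alice': 'v3', 'db/orders': 'v2'}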

Virtual Environments

Many high availability solutions protect data that resides on virtual servers. Customers can have multiple physical servers at the primary location and, at the offsite disaster recovery location, a single physical server with multiple virtual servers. Multiple virtual servers from the primary site can also be easily backed up and replicated to the disaster recovery site.

With some disaster recovery solutions, the appropriate agents are installed on both physical and virtual servers, and these agents have a very small footprint, so the performance impact on the servers is minimal. With other replication solutions, one has to install the entire application on each virtual server, which takes a huge toll on performance.

Physical to Virtual Servers

In this scenario, the production environment runs on physical servers while the disaster recovery site is deployed in a virtual environment. Both the physical and virtual servers are controlled by the application server, which can be located either at the production site or at the remote site.

Virtual to Virtual Environments

In order to achieve significant cost savings, some companies not only virtualize their disaster recovery site but also use virtual servers in the production environment. One can have one or more physical servers housing many virtual servers both at production and remote sites.

Failover/Failback

When a disaster strikes the primary site, all users are failed over to the remote site. Once the primary is rebuilt, one can easily go through the failback process to the original primary servers. A particular virtual server containing Exchange or SQL Server can also be failed over on its own without affecting other physical or virtual servers.

The only way to make sure that your disaster recovery solution works is to test it periodically. Unfortunately, that normally means failing over the entire Exchange or SQL Server, and administrators will be leery of doing this for fear of crashing the production server. Some solutions can create a test mailbox or database and use it for periodic failover/failback testing. Through this approach, customers can be assured that their disaster recovery solution will work when it is badly needed, and have peace of mind.
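
A hedged sketch of such a test harness appears below. Every helper function is a hypothetical stub standing in for whatever interface a real DR product exposes; only the overall flow - create a test database, fail it over, verify it, fail it back - comes from the description above.

    # Hypothetical stubs; replace each with your DR product's real interface.
    def create_test_database(host: str, name: str) -> str:
        print(f"creating {name} on {host}")        # stub: provision a test DB
        return name

    def fail_over(db: str, target: str) -> None:
        print(f"failing {db} over to {target}")    # stub: switch only the test DB

    def is_reachable(db: str, host: str) -> bool:
        print(f"probing {db} on {host}")           # stub: run a test query
        return True

    def fail_back(db: str, target: str) -> None:
        print(f"failing {db} back to {target}")    # stub: reverse direction

    def run_dr_test(primary: str, failover_host: str) -> bool:
        db = create_test_database(primary, name="dr-test")
        fail_over(db, target=failover_host)        # production is never touched
        ok = is_reachable(db, host=failover_host)  # did it come up remotely?
        fail_back(db, target=primary)              # restore normal operation
        return ok

    if __name__ == "__main__":
        print("DR test passed:", run_dr_test("exch-primary", "exch-dr"))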

Migration

Virtual servers, in conjunction with certain disaster recovery solutions, can be used as a migration tool. If a physical server goes bad, one can fail over to the remote failover virtual server; once the primary site is rebuilt, failback is easily achieved. With some applications, there is no need to run identical versions of Exchange on the primary and failover servers: one can run Exchange 2003 on the primary server and Exchange 2007 on the failover server. This makes the feature useful for migration. For example, you can fail over to the failover server running Exchange 2007, upgrade the original primary to Exchange 2007, and fail back again. The same scenario applies to SQL Server 2000, 2005 and 2008.

Conclusion

Companies are increasingly adopting virtual servers as virtualization offers many compelling benefits. This increase in virtualization poses tremendous disaster recovery and data protection challenges to IT Administrators. There is a greater need to implement the appropriate high availability and failover solutions to protect these servers.

Wednesday, 16 September 2009

Wireless LAN Security Guide

Security for any organization large or small

Introduction

One of the most common questions that people ask me about Wireless LANs is "are Wireless LANs really safe?" immediately followed up by "what kind of security do I need for my Wireless LAN?" The answer to the first question is "yes, if you implement good security measures" but the second question forces me to resort to the old "it depends". It depends on what level of risk is acceptable to your home or organization. It depends on what level of management and cost you are willing to bear. To simplify this extremely complex topic, I've come up with four arbitrary levels of WLAN (Wireless LAN) security as a general guideline that is designed to suit everyone's needs from the home to the military.
  • Level 1: Home and SOHO WLAN security
  • Level 2: Small Business WLAN security
  • Level 3: Medium to large Enterprise WLAN security
  • Level 4: Military grade maximum level WLAN security

Level 1: Home and SOHO WLAN security

Unfortunately, many home users are running old equipment, old drivers, or older operating systems that don't natively support WPA, so they are still using WEP if anything at all. WEP encryption was once thought to be good for a week on most light-traffic home wireless networks, because the older WEP cracking tools needed 5 to 10 million packets to recover a WEP key; the newest WEP cracking techniques can break WEP in minutes. Even if there isn't much traffic, an attacker now has ways to artificially generate traffic and accelerate WEP cracking. Because of this, consumers should avoid any product that doesn't support WPA TKIP mode at a minimum, and preferably choose WPA AES-capable or WPA2-certified devices. If they have WEP-only devices, they should check with the vendor for firmware and/or driver updates that will upgrade the device to WPA. If none exist, anyone who cares about privacy should throw those devices out. As harsh as that may sound, it is comforting to know that newer Access Points and Client Adapters that do support WPA can be purchased for as little as $30. Client-side Wireless LAN software (officially known as Supplicants) also needs to be updated to support WPA or WPA2. Windows XP SP1 with the WPA patch can suffice, but Windows XP SP2 is highly recommended.

The home or SOHO (Small Office Home Office) environment is very unlikely to have any kind of authentication and PKI in place. This may change when TinyPEAP launches, but that project is currently in beta and not ready for prime time. TinyPEAP puts a PEAP authentication server and PKI Certificate Authority inside a home's Wi-Fi enabled Linksys router, something that was once the exclusive domain of large organizations with dedicated authentication servers. For the time being, the only viable option for this environment is WPA PSK (Wi-Fi Protected Access Pre-Shared Key) mode. WPA mandates TKIP at a minimum but also has an optional AES encryption mode. AES mode is highly recommended because it has a rock-solid pedigree in cryptanalytic resistance, whereas TKIP may come under attack in the near future. Note that in WPA2 (the fully ratified version of 802.11i) AES is no longer optional but mandated. Since most home users would be lucky if all of their equipment and software were TKIP capable, most homes will have to be content with TKIP mode for now.

WPA PSK mode can be an effective security mechanism but leaves a lot to be desired in terms of usability. Because WPA PSK can be cracked with offline dictionary attacks, it relies on a strong random passphrase to be effective. Unfortunately, humans are very bad at memorizing long random strings of characters and will almost always use simple-to-remember words and phrases, or some slight variation of them. This lends itself to dictionary attacks, where a hacker tries every variation of every combination of words in the dictionary. To make this very difficult, use a 10-character string of random characters drawn from a-z, A-Z and 0-9, or use a very long word phrase of 20 or more characters. Unfortunately, this will force many users to write down their passphrases, which in itself may lead to passphrase theft. WPA PSK is not a good long-term security solution and leaves Level 1 security with much to be desired, but it can be safe when used correctly.
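
Generating such a passphrase is a one-liner. The sketch below uses Python's secrets module, a cryptographically strong random source; the 10-character length mirrors the recommendation above, and nothing else about it is prescriptive.

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits   # a-z, A-Z, 0-9

    def generate_psk_passphrase(length: int = 10) -> str:
        """Return a random passphrase that resists dictionary attacks."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_psk_passphrase())   # e.g. 'h3K9xQ2mWb'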

Level 2: Small Business WLAN security

Small businesses must move beyond Level 1 by incorporating authentication into their Wireless LAN access controls. The standardized way of doing this is 802.1x with PEAP or TTLS authentication. 802.1x restricts access at the data link layer of a network, only permitting access if a user proves their identity through the EAP (Extensible Authentication Protocol) mechanism. There are many forms of EAP, but the two most appropriate for Level 2 security are PEAP (Protected EAP) and TTLS (Tunneled Transport Layer Security). Note that PEAP in the general context refers to PEAP-EAP-MSCHAPv2 mode, which only requires a server-side Digital Certificate and a client-side username/password. There are stronger forms of PEAP, which we'll cover in the higher security levels. TTLS is actually a little better than PEAP-EAP-MSCHAPv2 in security because it does not divulge the username in clear text. However, both forms of authentication do a good job of protecting passwords because the MSCHAPv2 password challenge session is protected inside an encrypted tunnel. This is why PEAP or TTLS is so much better than Cisco's LEAP mechanism, which transmits the MSCHAPv2 session in the clear, lending itself to easy offline password dictionary cracking.

To implement PEAP or TTLS, the organization needs a RADIUS authentication server, and there are many ways to get one no matter what your software preference is: Microsoft Windows 2003 Server with IAS, 3rd party applications such as Funk Odyssey (needed for TTLS mode) that run on Windows, or Open Source solutions such as FreeRADIUS. However, in order to run in PEAP or TTLS mode, the RADIUS server must have a server-side x.509 digital certificate. This certificate can be purchased from a 3rd party Certificate Authority such as Verisign, or issued from an organization's internal Certificate Authority. These two options are conventional wisdom, but neither is particularly appealing to small businesses: they won't like paying $500/year for a 3rd party Digital Certificate, and they most likely don't have a PKI in place, which requires a Certificate Authority server. An excellent way around this problem is to use a Self Signed Certificate on your RADIUS server. Self Signed Digital Certificates violate all best-practice concepts for PKI, but I say be damned with them if the alternative is to use no Digital Certificates at all on your RADIUS server and run a completely vulnerable EAP mechanism such as LEAP. Running a secure EAP mechanism such as PEAP or TTLS is too important to let PKI be an obstacle. A newer protocol from Cisco called EAP-FAST promises to solve this problem by claiming that you don't need PKI and Digital Certificates, but if you read the fine print from Cisco, that's clearly not the case. Self Signed Certificates solve the problem for PEAP, TTLS or EAP-FAST in organizations too small to run a dedicated PKI Certificate Authority infrastructure.
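
For illustration, here is a hedged sketch of minting such a self-signed certificate with the third-party Python cryptography package; openssl or any similar tool works just as well. The hostname, validity period and file names are assumptions for the example.

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "radius.example.local")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                      # issuer == subject: self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )
    # The key pair stays on the RADIUS server; the certificate is what you
    # push to clients as the trusted root (e.g. via Group Policy).
    with open("radius-cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("radius-key.pem", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))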

The easiest method by far, if you're a Microsoft Windows 2003 Server shop, is to use the built-in RADIUS server of Windows 2003 called IAS (Internet Authentication Server). For a small business, there is nothing wrong with adding the IAS service to an existing Windows 2003 server, even if it's their only server and also happens to be the Active Directory server. You can either convert that server into a Certificate Authority as well and grant yourself a digital certificate for the RADIUS server, or simply self-sign a digital certificate. With this in place, the Root Certificate (the public key of the Digital Certificate) for the RADIUS server must be installed on all of the clients' computers; with Active Directory, this can easily be pushed out via Group Policy. All of the clients also need their wireless settings configured in the WZC (Wireless Zero Configuration) service built into Windows XP SP1 or SP2, and Active Directory Group Policy again allows you to configure this globally for all your users. Using the Microsoft method, a secure wireless network can be deployed throughout an organization big or small in hours. If you don't have IAS, it comes with Windows 2003 Standard Edition, which costs around $500 per copy. IAS in my experience is extremely robust, reliable, and secure.

Those who wish to implement TTLS will need to either purchase Funk Software's Odyssey server (in the $2000 range) or implement the Open Source FreeRADIUS on Linux. Note that Windows does not have a TTLS client built in; you will need to purchase a wireless Supplicant (AKA client software) for your end users. MDC has an Open Source version for Linux, but you'll need to purchase one for Windows, which is what most people are using. You'll either need to install the Root Certificate on the clients manually or purchase a 3rd party Digital Certificate whose Root Certificate is already preinstalled. As for client-side configuration, you'll need to find some other method to automate the installation process, since Active Directory does not support the automation of 3rd party clients.

While 802.1x with PEAP or TTLS addresses the authentication half of the security equation, encryption must also be addressed. Until recent months, "Dynamic WEP", where WEP keys are rotated often (commonly every 10 minutes), was considered "good enough" encryption. With the next generation of WEP cryptanalysis tools, this is no longer the case, and TKIP is the new bare minimum. The WPA standard implements TKIP, a rewrite of the WEP protocol that will hold against current cryptanalysis techniques for now, but newer methods of attacking TKIP are on the horizon. The reliable long-term solution from the IEEE standards body is the 802.11i standard, which mandates AES. The recommendation for Levels 2 through 3 is to use WPA with TKIP at a minimum and upgrade to AES as soon as possible. Note that some WPA devices already support AES encryption, while all WPA2-certified devices must support it. To be on the safe side, only buy products that support 802.11i and are WPA2 certified.
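
For the curious, 802.11i protects frames with AES in CCM mode (CCMP). The sketch below exercises that primitive via the Python cryptography package; the 13-byte nonce length matches CCMP, but the key, nonce and frame contents are illustrative - this is the building block, not a Wi-Fi stack.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    key = AESCCM.generate_key(bit_length=128)   # CCMP uses a 128-bit AES key
    aesccm = AESCCM(key)
    nonce = os.urandom(13)                      # CCMP nonces are 13 bytes
    frame = b"payload of an 802.11 data frame"
    header = b"frame header (authenticated, not encrypted)"

    ciphertext = aesccm.encrypt(nonce, frame, header)
    assert aesccm.decrypt(nonce, ciphertext, header) == frame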

From a vulnerability standpoint, the only way to break this security level is to steal a user credential: by looking over someone's shoulder to see what password they are typing, coaxing them into telling you the password (this is easier than you think), or installing a key logger on a user's computer to record their keystrokes as they type the password. Barring password theft, it would be far easier to break into your premises and tap into the wired LAN than to attempt to crack Level 2 Wireless LAN security. Level 2 is a good choice for most small businesses, but organizations where security is a high priority should seriously consider the next two levels, because a single lost password could compromise the entire system.

Level 3: Medium to large Enterprise WLAN security

Level 3 Wireless LAN security builds on the same principles as Level 2, but you're not allowed to use the "cheats" such as bolting the RADIUS server onto an existing server or using Self Signed Digital Certificates. PEAP-EAP-MSCHAPv2 is also disallowed because of its sole dependency on passwords, which makes it "single factor" authentication. EAP-TLS or PEAP-EAP-TLS using "soft" Digital Certificates (certificates stored on the user's hard drive) is the recommended authentication method for this security level. PEAP-EAP-TLS is an improved version of the original EAP-TLS protocol that goes further by encrypting client digital certificate information. Both have the same server- and client-side digital certificate requirements, but PEAP-EAP-TLS may not be compatible with some older Supplicants (client software) or some non-Microsoft client-side implementations.

To implement EAP-TLS or PEAP-EAP-TLS, not only the server but also the users require Digital Certificates. This means you will need a full-blown Certificate Authority issuing a proper server Digital Certificate to a pair of dedicated RADIUS servers, not just a Self Signed Certificate on a makeshift RADIUS server. At this security level, proper PKI best practices should be followed. At minimum there should be a single dedicated PKI Root Certificate Authority, but preferably a 2- or 3-tier PKI design. A two-tier chain for a medium Enterprise would have an offline Root Certificate Authority and an online Issuing Certificate Authority; a large Enterprise should implement the three-tier design with an offline Root Certificate Authority, an offline subordinate Certificate Authority, and an online Issuing Certificate Authority. The reason is that if a Certificate Authority is ever compromised, you can revoke it and create a new one from the higher offline Certificate Authorities without having to start your PKI deployment from scratch. Rebuilding a PKI from scratch because of a compromised Certificate Authority would be completely unacceptable in a large-scale environment.

To deploy Digital Certificates to the user community, a PKI management infrastructure must be deployed, and if your user base numbers in the thousands or more, permanent human resources must be allocated to manage end-user certificates. Medium-size Enterprises can add PKI management to their existing hire/termination procedures. Microsoft Active Directory with an Enterprise Root Certificate Authority (a PKI completely integrated into Active Directory) can issue digital certificates automatically, but be warned that this is not a substitute for proper management. Lost or stolen laptops and terminated employees must have their digital certificates revoked, and this is not automatic even if a user account is disabled or deleted. After certificates are revoked, they must be published in a CRL (Certificate Revocation List) that is applied to all authentication servers, or else the revoked certificates remain usable. If Active Directory auto-enrollment is used, it is highly recommended that you do not simply apply the policy to the entire domain so that everyone automatically gets a user digital certificate; the policy should be set on a particular OU (Organizational Unit), so that users who need certificates and Wireless LAN access must be deliberately moved into that certificate-enabled OU. Automatic enrollment should be used to simplify management, not to substitute for it.
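
The revocation check itself is simple; keeping the CRL fresh on every authentication server is the hard part. Below is a hedged sketch using the Python cryptography package; the file names are illustrative assumptions.

    from cryptography import x509

    with open("issuing-ca.crl", "rb") as f:
        crl = x509.load_der_x509_crl(f.read())
    with open("user-cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    revoked = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
    if revoked is not None:
        print(f"REVOKED on {revoked.revocation_date}: deny authentication")
    else:
        print("not on the CRL: certificate may still be trusted")
    # A revoked certificate stays usable everywhere an old CRL is still cached,
    # which is why publishing and distributing the CRL promptly matters.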

As for encryption, the same requirements and recommendations from the previous two levels apply: TKIP at a minimum, with AES recommended as soon as possible. Level 3 organizations should probably be the first to jump to the next level of encryption. The size of the organizations that would select Level 3 Wireless LAN security can make upgrading difficult, but it is too important to ignore. The good news is that once AES is achieved, it is expected to hold for some time.

From a vulnerability standpoint, Level 3 is reasonably secure. The only way to compromise it is for the hacker to steal not only a user's password but also that user's Digital Certificate, which is much more difficult. To steal a "soft" Digital Certificate, either the laptop must be stolen, in which case the theft is obvious and the certificate can be revoked, or a malicious program like a backdoor, virus or worm must be installed on the laptop to "harvest" the private key of the digital certificate. The latter is much more sinister because the theft can occur totally undetected, so the certificate would not be revoked. The same malicious code could also log the user's keystrokes, compromising the password as well. At that point, Level 3 security is totally defeated, hence the need for an even stronger solution in Level 4. Discriminating Enterprises should seriously consider the next security level.

Level 4: Military grade maximum level WLAN security

Level 4 builds on Level 3 but aims to solve the key-logging, certificate-stealing malicious code threat. From a PKI Certificate Authority standpoint, not only is a 3-tier architecture required, but the use of FIPS 140-2 Level 3 compliant HSMs (Hardware Security Modules, AKA Cryptographic Modules for server-side applications) is mandated. These modules cost thousands of dollars in the form of a tamper-resistant external module, and all Certificate Authorities should use one to ensure maximum security. Even a malicious code compromise on the Root Certificate Authority cannot expose the Root CA's private key, although such a compromise would still be very serious. This is why, as an extra precaution, the top two tiers of the PKI chain are never connected to the network, so that all interactions between the PKI tiers must be hand carried.

On the user side, the Digital Certificate cannot be stored on the hard drive, so EAP-TLS or PEAP-EAP-TLS with "hard" tokens is mandatory. The certificates must be stored inside an HSM (called a Cryptographic Token on the client side), typically a smartcard or a USB dongle the size of two fingers carried on a person's key chain. USB dongles are usually more practical because they can be used by notebooks without a smartcard reader. Some newer notebook computers have a built-in HSM called a TPM (Trusted Platform Module), though it cannot be separated from the computer. If an HSM-equipped computer is infected with malicious code, the password can be logged and stolen but the digital certificate cannot, because the HSM never divulges the private key to its host computer: all asymmetric cryptographic operations happen inside the HSM itself. This makes it nearly impossible to steal a private key unless the TPM notebook or USB dongle is physically stolen, and such a theft would be fairly obvious, letting an administrator revoke the Digital Certificate stored inside the stolen HSM as part of the PKI management process. To further enhance security, more expensive USB dongles and smartcards have built-in fingerprint readers, making them useless to a thief without your living finger or some extremely complex method of fooling the reader. The biometrics portion is just a last defence, meant to buy enough time to revoke a certificate before unauthorized access is gained. With biometrics-enabled HSMs, you have the strongest 3-factor authentication system possible.

From an encryption standpoint, AES is the only encryption algorithm permitted for Level 4, and it also happens to be mandated for federal government and military applications. AES was standardized by NIST, which selected the algorithm from a list of finalists representing the best encryption designs in the world. To comply with the AES requirement, 802.11i (AKA WPA2) compliant Wi-Fi gear is required for all Access Points, client adapters and software. Most consumer Wi-Fi products do not support 802.11i, while most newer business-class Wi-Fi products do, so look for the 802.11i or WPA2 logo on any Wi-Fi products you buy. Many organizations may already own AES-capable products if they would simply update the firmware and drivers on their Access Points and client adapters. Cisco is a perfect example: it is probably the most dominant player in the enterprise Wireless LAN market, yet most of its customers are not running the latest firmware. Upgrades on such a large scale are very difficult, but corporations cannot afford to put off good security; not only is it good business, it may be the law because of SOX and HIPAA compliance.

From a vulnerability standpoint, Level 4 is rock solid and extremely difficult to compromise. A hacker would have to steal not only a user's password but also physically steal that user's cryptographic token or TPM notebook, and take advantage of it before the user realizes anything is wrong and reports the theft. With 3-factor authentication, it is practically impossible to break into the Wireless LAN from the wireless side; the attacker will have to try some other means of compromising the network, and a crowbar would be far more effective at that point.

Conclusion

Contrary to popular belief, a Wireless LAN can indeed be secure. Depending on the risk-versus-cost trade-off you are willing to accept, you will need to decide whether to implement Level 1, 2, 3 or 4. Fortunately, most of the security measures you need can also serve other aspects of your IT infrastructure: the same RADIUS, PKI and Cryptographic Tokens can secure your VPN and remote access solution. PKI, Digital Certificates, and Cryptographic Modules are the fundamental building blocks of strong authentication, and there is no way around that, so make the best of it by leveraging the hefty investment for all your security needs.

Tuesday, 15 September 2009

What is virtualisation and is it really going to revolutionise IT management?

It seems like everything is being virtualised these days. Even a cursory glance at their back-offices reveals that businesses are now operating in an increasingly virtual environment. Storage, servers, applications - nothing has to reside in a box in a specific location for users to access it anymore. According to industry estimates, the virtualisation market skyrocketed to roughly $US1.2 billion last year and will surge to $US2.2 billion by 2007. At its simplest level, virtualisation describes either several things working together as one entity, or one entity working as several things; it is an arbitrary representation of a physical resource. A common analogy is a group of employees from different departments and offices all cooperating as one project team.

The virtualisation of servers, for instance, is steadily making the IT world sit up and take notice. Enabling multiple virtual operating systems to run on a single physical machine, yet remain logically distinct with consistent hardware profiles, server virtualisation can often replace the costly practice of manual server consolidation by combining many physical servers into one logical server. With multiple operating systems and their applications, all able to run on a single server, IT departments can enjoy improved management, flexibility and compatibility as well as vastly increased hardware utilisation.

"There are certainly savings on operational costs, because there are a lot of costs associated with running a server - data centre space, data centre power and data centre cooling for instance, But there are also operational expenses associated with labour - provisioning servers and changing servers, bringing new applications on line, patching them and so on and so forth - so there is a pretty dramatic reduction in labour costs as well." And then of course there are actual hardware savings, because very often instead of running ten servers that are all lightly loaded you can use one server that is more heavily loaded because all the applications can now run on one box.

Costs and security

Elsewhere in the virtualisation universe there is operating system virtualisation (where a single computer can accommodate multiple platforms and facilitate their operation simultaneously); application virtualisation (where end-user software is packaged, stored and distributed in an on-demand fashion across a network); and data virtualisation (allowing users access to various sources of disparate data without knowing where the data actually resides).

Because physical devices and applications are represented in a virtual environment, administrators can manipulate them with more flexibility and fewer negative effects than in the physical environment. Heterogeneous hardware can be used together, and scalability can be achieved by simply plugging extra capacity into the virtual resource pool. The addition or removal of hardware can be easily managed with virtualisation tools. Operating costs can be lowered while performance, connectivity and capacity are improved, which in turn lowers maintenance costs. Continuity is also assured: resources can be spread and mirrored, and any one element can be taken offline or replaced without impacting the others.

And, despite consumer concerns over the implications that virtualisation may have for security, the technology can also prove beneficial in this area.

Network virtualisation

Elsewhere, network virtualisation is having a similar impact, as multiple networks are combined into a single network or single networks are separated logically into multiple parts. With a virtualised network environment, the underlying architecture is invisible to users and services are no longer associated with a specific device or connection. Instead, they are accessed via a common applications interface and the network will figure out how to deliver a service to the user, whether access is via desktop, PDA or mobile phone.

While storage, server and network virtualisation are well-established technologies, PC virtualisation is perhaps less well known. The concept behind this field is that a single physical computer simultaneously runs multiple virtual PCs, each with its own operating system. Tech giants such as Intel, for instance, have developed technology based on dual- or multi-core CPUs that makes it easier to run Windows, Linux, Unix or Solaris side by side as entirely separate entities, each allocated individual portions of CPU power, memory and hard disk space.

Network managers who need to evaluate the performance of applications on different platforms will certainly see the benefit of this approach, while the development community will recognise the value of putting new hardware and software to the test. Productivity could certainly benefit, as users can run different applications without having to switch machines or location. Security will also benefit: running antivirus tools or firewalls to protect web browsing in one operating system effectively isolates dangerous traffic from attacking mission-critical applications running in another.

Virtualised environment

Nevertheless, some trepidation remains, and for some the concerns stretch beyond security. An ambitious virtualisation project can be expensive in the short term, requiring new infrastructure and the reconfiguration of current hardware, and such an investment is difficult to justify to a board that remains somewhat confused by the technology.

Indeed, while wariness of an expensive IT project is entirely understandable, the virtualisation revolution will inevitably continue unabated. As costs for the technology begin to drop, consumer understanding of the technology grows, and more hardware manufacturers such as Intel include built-in virtualisation functionality in their products, it will become increasingly difficult to justify not deploying virtualisation in an IT system.

"Virtualisation has been occurring for decades now, it is just cascading now to other parts of the infrastructure as other technologies evolve and mature, where we see it going next will be towards this idea of a heterogeneous mutli-system virtualisation, and away from the component-based view, to look at the infrastructure holistically." The evolution towards a more virtualised enterprise is inevitable. "It is already on its way to ubiquity," he concludes. "Over the next few years we are going to see more and more widespread adoption. Leading-edge customers are already moving towards this notion of what we call a `virtualised enterprise`, where all your applications run in a virtual environment for all the benefits that we have discussed, and all your desktops move to a virtualised environment running on servers so that it is more secure and centrally managed. So it will start to permeate every aspect of IT operations.

Thursday, 27 August 2009

Computer Security

What is Computer Security?

Computer Security is a branch of technology known as information security as applied to computers. Information security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction. The objective of computer security varies and can include protection of information from theft or corruption, or the preservation of availability, as defined in the security policy.

It encompasses the technological and managerial procedures applied to computer systems to ensure the availability, integrity and confidentiality of the information they manage.

Computer security imposes requirements on computers that are different from most system requirements because they often take the form of constraints on what computers are not supposed to do.

Typical approaches to improving computer security can include the following:

  • Physically limit access to computers to only those who will not compromise security.
  • Hardware mechanisms that impose rules on computer programs, thus avoiding depending on computer programs for computer security.
  • Operating system mechanisms that impose rules on programs to avoid trusting computer programs.
  • Programming strategies to make computer programs dependable and resist subversion.

Computer security threats are commonly grouped into three categories:

  • Hacking
  • Cracking
  • Phreaking

Hacking

Unauthorized use or attempts to circumvent or bypass the security mechanisms of an information or network system.

Computer hacking always involves some degree of infringement on the privacy of others or damage to computer-based property such as files, web pages or software. Its impact ranges from the merely invasive and annoying to the outright illegal.

Cracking

Act of breaking into a computer system.

Software Cracking is the modification of software to remove protection methods: copy prevention, trial/demo version, serial number, hardware key, CD check or software annoyances like nag screens and adware.

The most common software crack is the modification of an application's binary to force or prevent a specific key branch from being taken in the program's execution.

Phreaking

A term coined to describe the activity of a subculture of people who study, experiment with, or explore telecommunication systems.

Security by design

The technologies of computer security are based on logic. There is no universal standard notion of what secure behaviour is; "security" is a concept unique to each situation. Security is extraneous to the function of a computer application rather than ancillary to it, so security necessarily imposes restrictions on the application's behaviour.

There are several approaches to security in computing; sometimes a combination of approaches is valid:

  1. Trust all the software to abide by a security policy but the software is not trustworthy (this is computer insecurity).
  2. Trust all the software to abide by a security policy and the software is validated as trustworthy (by tedious branch and path analysis for example).
  3. Trust no software but enforce a security policy with mechanisms that are not trustworthy (again this is computer insecurity).
  4. Trust no software but enforce a security policy with trustworthy mechanisms.

12 tips for computer security:

  1. Update / patch ALL your software regularly!
  2. Check / adjust ALL your settings so they are safe, since they AREN'T by default!
  3. Use a firewall, like ZoneAlarm, to control what goes in and out of your computer!
  4. Use good passwords: at least 13 characters long, containing both letters and numbers. Remember to change your password at least every few months, and don't ever use the same password in two places!
  5. Get a good antivirus program (NOD32, F-Secure or Norton Antivirus) and keep it updated!
  6. Don't open or execute files that you are not 100% sure are absolutely safe, no matter where or how you get them.
  7. Wipe your history files (cookies, internet history, temporary files, etc.), logs and personal files with a dedicated wiping program (like Eraser) instead of just deleting them.
  8. Use encryption to enhance your privacy! Use encrypted email (like Hushmail or ZipLip) and encrypted web surfing, and encrypt sensitive files on your computer (with PGP, for example - see the sketch after this list).
  9. When you are finished using an internet-based service like email, sign out of it rather than just closing your browser! Also, when you leave your computer, make sure no such programs or connections are left open for someone to abuse. In Windows NT/2000/XP, press Windows key+L to lock the workstation.
  10. Don't use public computers for anything that requires typing in your logins - they often carry Trojan horses that capture your passwords.
  11. Make backups and store them in a safe place! The easiest way to make a total backup is to create an "image" of your hard drive or partition and store it in a safe location, but floppies will usually be fine for storing documents and the like.
  12. Install and use a hardware firewall.
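
As a postscript to tip 8, here is a hedged sketch of encrypting a sensitive file in Python, using Fernet from the cryptography package as a simple stand-in for PGP. The file names are illustrative, and the key must itself be stored safely - it is the secret.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # store this key safely; it IS the secret
    fernet = Fernet(key)

    with open("sensitive.txt", "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open("sensitive.txt.enc", "wb") as f:
        f.write(ciphertext)

    # Later, with the same key:
    plaintext = fernet.decrypt(ciphertext)   # raises InvalidToken if tampered with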