Selecting a Colocation Facility
by: Robert Dupree




Communications and data access are integral for business operations. Websites, email and messaging services, and servers have to be up and protected for business communications to function. Data centers that house mission-critical infrastructure – the most vital parts of the company network, like servers and databases – must sustain the power, climate-control and connectivity that network infrastructure demands.

Redundancy

The colocation facility should have redundant systems, with sufficient capacity, at every point of operation, from climate-control systems to network equipment. Having multiple units to handle capacity is not the same as redundancy. All systems should be in an N+1 configuration: if there are two units, each should run at less than 50% capacity so that if one fails, the other can handle the full load; if there are three units, each should run at no more than 66% capacity so that any two can carry it.
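The N+1 rule above generalizes: with N units sharing the load, each must run below (N-1)/N of its capacity so the survivors can absorb a single failure. A minimal sketch of that check (the function names are illustrative, not from the article):

```python
def max_safe_utilization(units: int) -> float:
    """Maximum per-unit utilization in an N+1 setup: if one unit
    fails, the remaining units - 1 must absorb the full load."""
    if units < 2:
        raise ValueError("N+1 redundancy requires at least two units")
    return (units - 1) / units

def is_redundant(units: int, utilization: float) -> bool:
    """True if the current per-unit utilization leaves N+1 headroom."""
    return utilization <= max_safe_utilization(units)

# Two units: each must stay at or under 50%; three units: under ~66%.
print(max_safe_utilization(2))  # 0.5
print(is_redundant(3, 0.70))    # False: one failure would overload the rest
```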

Superior Connectivity

Several factors determine the quality of a colocation center’s Internet connectivity:

* The number of Internet backbones
* Total available bandwidth
* Latency, packet loss, and jitter
* Uptime

Colocation data centers provide Internet access through tier-1 carriers, tier-2 carriers, or carrier-neutral arrangements. Tier-1 service provides direct access to one of the major Internet backbone networks, such as AT&T's. Tier-2 providers route traffic among multiple tier-1 providers, so reliability and speed depend on how effectively traffic is routed. Carrier-neutral access allows service from any carrier, but requires that customers configure their own routing and maintain their own connections to Internet backbones. Tier-2 access is generally preferable: routing is configured by the colocation facility, and the availability of multiple backbones improves uptime and network performance.

Load-Responsive Routing

The way that Internet traffic is routed, both the hardware and routing logic, has a significant effect on connectivity. Effective routing covers three areas:

* Redundant routers and switches with full capacity for all customers
* Performance-based routing, which continuously adjusts routes to find the most effective one, rather than the least expensive
* Routing among more than three, and preferably four or five, Internet backbones

A hardware-redundant, dynamically adjusting routing system creates a self-healing network that compensates for backbone problems, traffic congestion, and hardware failures so that service doesn't suffer.
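Performance-based routing, as described above, picks the route that currently performs best rather than the one that costs least. A toy sketch of that selection logic, using hypothetical backbone names and cost weights (the weighting scheme is an assumption for illustration, not from the article):

```python
from dataclasses import dataclass

@dataclass
class Route:
    backbone: str
    latency_ms: float   # round-trip latency
    loss_pct: float     # packet loss percentage
    jitter_ms: float

def route_cost(r: Route) -> float:
    """Illustrative cost function: penalize packet loss heavily,
    then latency, then jitter. Weights are hypothetical."""
    return r.loss_pct * 100 + r.latency_ms + r.jitter_ms * 2

def best_route(routes: list[Route]) -> Route:
    """Performance-based selection: lowest measured cost wins,
    regardless of which backbone is cheapest to use."""
    return min(routes, key=route_cost)

routes = [
    Route("Backbone A", latency_ms=12.0, loss_pct=0.0, jitter_ms=1.0),
    Route("Backbone B", latency_ms=8.0, loss_pct=0.5, jitter_ms=0.5),
    Route("Backbone C", latency_ms=20.0, loss_pct=0.0, jitter_ms=3.0),
]
print(best_route(routes).backbone)  # Backbone A: B's packet loss outweighs its lower latency
```

In a real facility this decision is made continuously by routers (e.g. via route-optimization appliances), not by application code; the sketch only shows the idea of ranking paths by measured performance.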

Climate Control

The design of the facility itself affects the performance of the systems housed in the colocation center. Server rooms should control airflow between rows: hot exhaust air from the servers is expelled into designated hot rows, while cool supply air is drawn in from cold rows. Alternating hot and cold rows circulate the air and keep servers from overheating.

Appropriately sized and redundant climate-control units supply cold air and control humidity; servers have strict climate requirements, around 72°F and 45% relative humidity. Colos must have both chillers for the facility's climate control and computer room air conditioning (CRAC) units for the server rooms. Cooling capacity is calculated by dividing the total tonnage by the square footage. For example, with two 50-ton chillers and a 4,000 square foot facility, the chiller capacity is 100 / 4,000 = 0.025 tons per square foot. Chillers and CRAC units should each provide roughly 0.025 tons per square foot or more.

Power

If the primary power source fails, generators and UPSs are vital to keep the network online. The UPSs carry the servers while power switches from utility electricity to the generators, so there must be generators onsite for immediate backup power. The power system should have the following features:

* Failover practices for switching to redundant generator and UPS systems
* Multiple, redundant UPS systems, since individual UPSs fail routinely
* A generator large enough to handle 1.5 times the normal building load
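The generator-sizing rule in the last bullet is a straightforward multiplier; a minimal sketch, with the 400 kW example load being a hypothetical figure for illustration:

```python
def generator_size_kw(building_load_kw: float, margin: float = 1.5) -> float:
    """Minimum generator rating: 1.5x the normal building load,
    per the sizing rule in the text."""
    return building_load_kw * margin

print(generator_size_kw(400))  # 600.0: a 400 kW facility needs at least a 600 kW generator
```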

Making a Decision

Keeping servers offsite can be a good decision logistically, but only if the data center provides a reliable, secure network environment. Look for potential pitfalls, where service may not offer enough reliability or performance:

* Non-redundant power, cooling, or Internet connections
* Insufficient backup power or non-redundant UPSs or generators
* A low number of Internet backbone connections or substandard routing methods
* Insufficient cooling systems
* Poor uptime record



About The Author

Robert Dupree


ACC's San Diego colocation services utilize a state-of-the-art San Diego data center. http://www.colocation.ccccom.com

Copyright © 2001-Present ArticleCity.com

This article was posted by permission.


Tue Jul 31, 2007 7:21 pm