Any Cloud is Only as Secure as Its Underlying Data Center Infrastructure

Pictured: a wireframe cloud with a shield and a lock contained within it and a variety of different digital services connected to it. The image signifies having a secure cloud and access to data and services hosted in data centers.

It’s the “Data Center Inside”

So, what are the features and functions that make cloud computing reliable, secure and scalable? The answer is akin to the famous “Intel Inside” tagline that adorned most personal computers over the past several decades. In today’s cloud computing world, it’s the “Data Center Inside” that makes the critical difference to the cloud computing services visible on the outside. This is true not only in terms of reliability, security and scalability, but also in terms of control, cost-effectiveness and flexibility. Any cloud is only as functional as its underlying data center.

Let’s Get Physical.

It begins with where the cloud physically resides – is the cloud’s underlying data center(s) located in one of the safest physical locations in the country? If yes, is the data center(s):

  • Insulated from natural disasters, power blackouts, and attacks on large population centers with their associated threat vectors.
  • K-rated by the U.S. State Department with certified anti-ram fencing, wedge barriers, a DoD anti-terrorism perimeter blast berm and counter-IED protective measures.
  • Recognized for the highest standards in design, construction and operational sustainability, featuring multiple redundant systems to support current and future IT needs of its clients.
  • Certified by the Uptime Institute to indicate the tier level and type of availability standards it meets.
  • Set up to ensure smooth 7x24x365 operations, including from a disaster recovery standpoint.
  • Designed to offer scalable and sustainable infrastructure that focuses on enterprise-grade wide area network services, which deliver the optimum solution in terms of rack space, power, connectivity, bandwidth and latency.

Let’s Get Technical.

Once the cloud’s underlying data center(s) is deemed to be physically up to the challenge, it needs to meet a variety of technical specifications and functional requirements to be able to deliver enterprise-class cloud computing services. So, does the data center(s) provide the following:

  • Hardware & Networking Functions
    • Bandwidth availability/internet connectivity – multiple, redundant Tier 1 providers
    • Redundant servers and storage – failover provisions at hardware and software levels
    • Tiered data storage – automated progression or demotion of data across different tiers (types) of storage devices and media (see the sketch after this list)
    • Virtualization – maximize physical server productivity via virtual machines (VMs)
    • Data encryption, SSL certificates, firewalls and also virtual firewalls for VMs
    • Intrusion detection and prevention systems – behavioral analysis and alerts to staff
    • Scalability – for future needs
  • Standards, Compliance & Certifications
    • Uptime Tier Level certification as required for cloud services
    • NIST compliance as required for cloud services
    • SSAE 16 SOC 1 & SOC 2 for Sarbanes-Oxley audit requirements
    • FedRAMP for government cloud services
    • FISMA for federal cybersecurity requirements
    • PCI DSS for online credit card payment requirements
    • Other industry (finance, healthcare, etc.) certifications/compliance as required
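
For a sense of what the automated tiering item above looks like in practice, here is a minimal sketch. The tier names, thresholds and access counts are hypothetical placeholders rather than any particular storage vendor’s policy engine:

```python
from dataclasses import dataclass

# Hypothetical storage tiers, ordered fastest/most expensive to slowest/cheapest.
TIERS = ["nvme", "ssd", "hdd", "archive"]

@dataclass
class DataObject:
    name: str
    tier: str
    accesses_last_30d: int  # how often the object was read recently

def next_tier(obj: DataObject, promote_above: int = 100, demote_below: int = 5) -> str:
    """Decide whether an object should move up or down the tier list."""
    idx = TIERS.index(obj.tier)
    if obj.accesses_last_30d >= promote_above and idx > 0:
        return TIERS[idx - 1]   # hot data moves to faster media
    if obj.accesses_last_30d <= demote_below and idx < len(TIERS) - 1:
        return TIERS[idx + 1]   # cold data moves to cheaper media
    return obj.tier             # otherwise stay put

if __name__ == "__main__":
    obj = DataObject("vm-backup-2019-04", tier="ssd", accesses_last_30d=2)
    print(next_tier(obj))  # -> "hdd": rarely touched, so it is demoted
```

Real tiering engines weigh many more signals (object size, SLAs, cost per GB), but the promote/demote loop above is the core of the idea.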

Finally, the cloud provider also needs to perform regular maintenance and testing of their underlying data center(s), with periodic reporting on system health and test results. More importantly, these underlying data center(s) must perform to exacting, industry-standard service level agreements. When the data center(s) is solid on the inside, as outlined above, the associated cloud computing services on the outside are as good as gold.

For HDCs, it’s All About Location, Location, Location – Part 4

Pictured: the location of a future hyperscale data center in Hannibal, Ohio. DP Facilities' next data center will be constructed next to the 485 Megawatt natural gas-fired Long Ridge Energy Terminal, ensuring it is able to meet all of its power supply needs.

In Part 3 of this blog series, we discussed the differences between hyperscale and HCI, and between hyperscale and a really big cloud. We then defined the “self-healing” HDC and explained how colocation helps extend hyperscale to the enterprise. In this concluding part of the series, we will talk about our new HDC campus location and our partnership with Fortress Transportation and Infrastructure Investors LLC (FTAI).

Hannibal, Ohio – Ideal Location and Partner for Our Next HDC

In Part 2 of this blog series, we hinted at the “upcoming power and space squeeze that has already begun to crowd” the NoVA region data center market. We also mentioned that two of the most critical factors impacting an HDC relate to physical space and an abundant power supply. And, we stated that these factors were critical to decisions we make relating to our HDC expansion plans.

At DP Facilities, we use intelligent site selection, a security-first design approach and outstanding operational abilities to create critical infrastructure that is capable of serving the most demanding missions and compliance requirements in the world. Accordingly, after a long and arduous search, we recently selected a site in Hannibal, Ohio, for our next world-class data center campus. The facility will be constructed next to the 485 Megawatt natural gas-fired Long Ridge Energy Terminal being developed by Fortress Transportation and Infrastructure Investors LLC (FTAI), offering our hyperscale data center customers up to 125 Megawatts of that electricity at the extremely low cost of approximately 4.5 cents per kWh. We are excited to partner with FTAI in a specialized use of their property for building a best-of-breed HDC for commercial, government and highly-regulated tenants in one of the safest and most secure locations in the country.

Sustainable 21st Century Clean-Powered Business

FTAI will build the Long Ridge Energy Terminal as a new gas-fired, combined-cycle power plant designed to be one of the most energy-efficient in the world. A combined-cycle power plant uses both a gas and a steam turbine together to produce up to 50% more electricity than a traditional simple-cycle plant from the same amount of fuel. The power plant will cover about 25 acres of the 1,600-acre site, which allows for a number of expansion projects. The site may also accommodate a solar power installation in the future, although the natural gas plant already fulfills our sustainability objectives.

Our HDC Advantage

In this blog series, we explained the hyperscale phenomenon and its impact on the continuously evolving HDC. Unfortunately, the primary requirements of the HDC — space and power — are not being satisfactorily fulfilled in the increasingly crowded NoVA region, which controls 55% of the data center market in the country.

At DP Facilities, we saw the future of the data center market — in terms of location, security, space and power requirements — several years ago. We organized our business around six security pillars that guided the design and construction of our first facility — Mineral Gap Data Center in Wise, Virginia.

Known as “The Safest Place on Earth,” Wise is in a protected mountain location in southwestern Virginia, and is strategically located away from major threat vectors. Locating our data center there served to insulate Mineral Gap from natural disasters, blackouts and attacks on large population centers. The facility is outside the flood zone, is less than a day’s drive from the Washington, D.C. area, and is conveniently located near a private airport.

In keeping with our six security pillars, DP Facilities has selected Hannibal, Ohio, for our next major data center campus. It will include best-of-breed hyperscale capacity that will offer colocation and hybrid colo opportunities to commercial and government customers. As savvy hyperscale customers know, location is critical — hence our selection of Hannibal, where space is abundant, power is inexpensive and the ability to scale is unlimited.

To learn more about how our facilities and future plans fit your IT infrastructure needs, call us at (866) 589-6125 or email us at info@mineralgap.com.

Differentiating HDC from Traditional Enterprise Data Center – Part 3

Pictured: a computer rendering of a hyperscale data center.

In Part 2 of this blog series, we discussed the HDC market size and its benefits when critical hyperscale objectives are met. In Part 3, we explain the differences between hyperscale and hyper converged infrastructure (HCI), and between hyperscale and a really big cloud. We then discuss how colocation can help traditional enterprise data centers take advantage of hyperscale.

Hyperscale, Hyper Converged Infrastructure (HCI) and a Really Big Cloud 

Traditional converged infrastructure bundled the main elements of a data center — compute, memory, networking, servers, storage, virtualization tools and management software — on a prequalified turnkey appliance that typically resided in a single chassis. HCI took this convergence to the next level, as it were, by virtualizing all of these converged hardware elements through a software-defined infrastructure including virtualized computing, virtualized software-defined storage and virtualized software-defined networking.   

Now, while a hyperscale data center can efficiently use HCI-empowered hardware, HCI is not mandatory for empowering a data center to make more efficient use of its space, power, and cooling features, i.e., to qualify it as an HDC. Similarly, a “really big cloud platform” is not necessarily hyperscale-enabled. The key differentiator in a hyperscale environment is the automation aspect, or the “self-healing” feature, which may not be available in a “really big cloud platform.” As the old adage goes, bigger is not always better. Smarter is, and that is exactly what hyperscale brings to the modern data center and what defines the HDC.

The “Self-Healing” HDC 

Hyperscale is automation applied to the data center industry. The primary features of “automation” or a “self-healing” HDC include automated remediation, lifecycle management of resources, proactive alerting, predictive scheduling, workload shuffling, and a single but extensible point of control.  
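
To make the “self-healing” loop more concrete, here is a minimal sketch of that kind of automation. The resource names, health scores and remediation actions are invented for illustration and do not describe any specific HDC’s tooling:

```python
import time

def poll_health() -> dict[str, float]:
    """Hypothetical inventory: resource name -> current health reading (0.0-1.0)."""
    return {"rack-12/psu-b": 0.41, "leaf-switch-07": 0.98, "chiller-03": 0.87}

def remediate(resource: str) -> None:
    # Placeholder for an automated action: restart, fail over, re-route, etc.
    print(f"[remediate] scheduling failover for {resource}")

def alert(resource: str, score: float) -> None:
    print(f"[alert] {resource} degraded (health={score:.2f}), staff notified")

def control_loop(threshold: float = 0.5, cycles: int = 1, interval_s: int = 60) -> None:
    """One extensible point of control: poll, remediate, alert, repeat."""
    for i in range(cycles):
        for resource, score in poll_health().items():
            if score < threshold:
                remediate(resource)     # automated remediation
                alert(resource, score)  # proactive alerting
        if i < cycles - 1:
            time.sleep(interval_s)

if __name__ == "__main__":
    control_loop()
```

Lifecycle management, predictive scheduling and workload shuffling all hang off the same monitor-decide-act loop; they differ only in what is polled and which action is taken.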

Got Hyperscale, Will Colocate 

While some enterprise companies experiencing hypergrowth might choose to hyperscale their in-house data centers to meet demand, rapid on-prem data center expansion is a very expensive proposition. It likely requires personalization of most features of the overall computing and networking environment, massive scaling on demand, management of a dynamic network configuration, and control over every aspect of the end-to-end experience. Instead of making huge investments to hyperscale, these companies can choose to outsource their data center via colocation in an HDC operator’s facility.

From a business perspective, reduced cost (both CAPEX and OPEX) is the primary benefit of colocation, which is a multi-tenant arrangement where other companies help distribute the cost by “sharing” space. However, the hypergrowth enterprise doesn’t just rent space; it gains access to professional staff at a secure facility on a 7x24x365 basis and benefits from a constantly updated HDC infrastructure, which allows it to manage its data remotely as if it were on premises.

Colocation in an HDC allows a business to scale its IT infrastructure to fit its needs and manage growth without having to incur capital expenditures. More importantly, these colocation facilities are based outside major metropolitan areas, which are more susceptible to various threat vectors that could significantly impact uptime of the hypergrowth enterprise. It’s simple — colocation helps extend hyperscale in a cost-efficient, reliable and secure way to the enterprise. 

Part 4 of this blog series introduces an example of a hyperscale data center that illustrates the sustainability (including $/kWh), automated scaling, and security of a best-of-breed HDC: DP Facilities, Inc.’s next HDC located in Hannibal, Ohio, which is being developed in partnership with Fortress Transportation and Infrastructure Investors LLC (FTAI), on the site of FTAI’s 485 MW Long Ridge Energy Terminal. Our Hannibal site will offer our HDC customers up to 125 MW of that electricity at an extremely low cost of approximately 4.5 cents per kWh.

Hyperscale Data Center: Market Size and Benefits – Part 2

In Part 1 of this blog series, we introduced the hyperscale phenomenon, which led to the evolution of the Hyperscale Data Center (HDC). In Part 2, we define the HDC market size, including local growth constraints, and discuss various hyperscale accomplishments required of data center operations in the overall cloud computing and wide area networking environment.

HDC Market Size

In January 2019, Synergy Research Group (SRG) released data showing “the number of large data centers operated by hyperscale providers rose by 11% in 2018 to reach 430 by year end.” In fact, per John Dinsdale, a Chief Analyst and Research Director at Synergy Research Group, “Hyperscale growth goes on unabated, with company revenues growing by an average 24% per year and their capex growing by over 40% – much of which is going into building and equipping data centers. In addition to the 430 current hyperscale data centers we have visibility of a further 132 that are at various stages of planning or building. There is no end in sight to the data center building boom.” As can be seen in the accompanying pie chart, the United States still has 40% of the HDC market.

Pictured: a chart showing the data center locations, by country, of hyperscale data center operators in December 2018. The United States had 40% of the hyperscale data center operators worldwide, followed by China at 8%, Japan at 6%, and the United Kingdom at 6%.

Northern Virginia Reigns as Data Center King

More importantly, from our regional viewpoint, Data Center Frontier reported in April 2019:

“As of September 2018, Northern Virginia was home to 4.7 million square feet of commissioned data center space, representing 955 megawatts of commissioned power. Demand remains very strong reflected in a vacancy rate of only 4.4 percent as of 2018. In fact, the volume of data center capacity in the planning phase in the region has reached 1,097MW, or 1.1 gigawatts.

Data center operators in Northern Virginia leased 270 megawatts of capacity in 2018, more than doubling the previous record for annual absorption, according to data from Jones Lang LaSalle, which said the region accounted for 55 percent of all data center leasing nationally.”

While the NoVA data is not limited to HDCs, this upcoming power and space squeeze has already begun to crowd the region’s data center segment. DP Facilities sees the overcrowding in Ashburn as an opportunity to build capacity for hyperscale customers in other regions of the U.S. that will better serve their needs.

What Does an HDC Need to Accomplish?

Two of the most critical factors impacting a hyperscale data center relate to physical space and an abundant power supply — both of which need to contribute to scaling data center operations in “hyper” fashion. Specifically, as ZDNet pointed out in its April 2019 article, “How hyperscale data centers are reshaping all of IT,” an HDC needs to accomplish the following four objectives:

  • Maximize cooling efficiency. Powering climate control systems is typically the largest contribution to OPEX in data centers worldwide, so maximizing cooling efficiency is of paramount concern.
  • Allocate electrical power in discrete packages. Multi-tenant data centers need to allocate power in “blocks” of fractional megawatts.
  • Ensure electricity availability. Modern workload management systems enable the replication of workloads across servers, making workloads redundant instead of power, thus reducing electricity costs.
  • Balance workloads across servers. How a data center’s workload management and processor utilization software distributes work determines how virtual machines are utilized and how temperature control is optimized (a minimal placement sketch follows this list).
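
As a rough illustration of the last two objectives, the sketch below spreads replicas of each workload across the least-loaded servers, so that it is the workload, rather than the power feed, that is redundant. Server names, workload names and demand figures are made up:

```python
import heapq

def place_replicas(workloads: dict[str, int], servers: list[str],
                   copies: int = 2) -> dict[str, list[str]]:
    """Spread `copies` replicas of each workload across the least-loaded servers."""
    load = [(0, s) for s in servers]        # (current load, server name)
    heapq.heapify(load)
    placement: dict[str, list[str]] = {}
    for name, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        chosen = [heapq.heappop(load) for _ in range(copies)]  # least-loaded servers
        placement[name] = [s for _, s in chosen]
        for current, s in chosen:
            heapq.heappush(load, (current + demand, s))        # update their load
    return placement

if __name__ == "__main__":
    demo = place_replicas({"billing": 30, "ehr-db": 50, "web": 10},
                          ["srv-a", "srv-b", "srv-c", "srv-d"])
    print(demo)  # every workload lands on two different servers
```

Production schedulers also weigh rack placement, thermal zones and failure domains, but the least-loaded-first heuristic captures the basic balancing behavior.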

The ZDNet article explains in more detail how each of these critical requirements can be met for a data center to be sustainable as an HDC.

In Part 3 of this blog series, we will explain the differences between hyperscale and Hyper Converged Infrastructure (HCI), and between hyperscale and a really big cloud. We will also discuss some of the challenges that hyperscale presents to traditional enterprise data centers.

Unraveling the Hyperscale Mystery – Part 1

Pictured: an artist's rendering of the insider of a data center with transparent server racks extending into the background. It symbolizes the concept of a hyperscale data center.

This four-part blog series will discuss the dominant role that hyperscale plays in today’s state-of-the-art cloud computing and related data center infrastructure environment. It will present various aspects of the hyperscale ecosystem, including definitions, market size, challenges, solutions, etc. It will conclude with DP Facilities’ unique value-add in the hyperscale space with respect to the enterprise.

What is Hyperscale?

In the IT world, scaling computer architecture typically entails increasing one or more of these hardware resources—computing power, memory, networking infrastructure, or storage capacity.

However, with the arrival of Big Data and cloud computing, the ability to rapidly scale the hardware resources of the underlying data center(s) in order to respond to increasing demand becomes a challenge of paramount importance.

Hyperscale is not simply the ability to scale, but the ability to scale rapidly and in huge multiples as mandated by demand. Thus, hyperscale architecture design had to replace traditional high-grade computing elements such as blade servers, storage networks, network switches, network control hardware and obsolete power supply systems with a stripped-down, cost-effective infrastructure design that supports converged networking, software-based control of the aforementioned hardware resources, and a base level of virtual machines.

In the IT domain, there is a vertical or “scaling up” approach to computing architecture, which typically involves adding more power to existing machines or more capacity to existing storage. However, in hyperscale architecture, there is a “scaling out” or horizontal scalability, which means adding more virtual machines to a data center’s infrastructure for high throughput, or remotely increasing redundant capacity for high availability and fault tolerance. Thus, hyperscaling is not only done at a physical level with a data center’s infrastructure and distribution systems, but also at a logical level with computing tasks and network performance.
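
A rough illustration of the “scaling out” decision, assuming a made-up per-VM capacity and utilization target (this is not a real capacity-planning formula, just the shape of the calculation):

```python
import math

def vms_needed(requests_per_s: float, per_vm_capacity: float = 500.0,
               target_util: float = 0.7) -> int:
    """Horizontal scaling: add identical VMs until demand fits under the utilization target."""
    return math.ceil(requests_per_s / (per_vm_capacity * target_util))

if __name__ == "__main__":
    for demand in (1_000, 10_000, 100_000):
        print(f"{demand:>7} req/s -> {vms_needed(demand)} VMs")
```

The point of the sketch is that capacity grows by adding more identical units (scale out) rather than by buying a bigger machine (scale up).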

What is a Hyperscale Data Center (HDC)?

A Hyperscale Data Center is a data center that meets the hyperscale requirements defined above. More importantly, an HDC is infinitely more complex and sophisticated than your traditional data center. First and foremost, an HDC is a significantly larger facility than a typical enterprise data center. Per International Data Corporation (IDC), a market intelligence firm, a data center qualifies as Hyperscale when it exceeds 5,000 servers and 10,000 square feet. But this is only a starting point, as we know that the largest cloud providers (Amazon, Google and Microsoft) house hundreds of thousands of servers in HDCs around the world.

Again, per IDC, HDCs require architecture that allows for a homogenous scale-out of greenfield applications. Not only must an HDC enable personalization of nearly every aspect of its computing and configuration environment, but it must also focus on automation, or “self-healing.” This term implies that an HDC recognizes that some breaks and delays inevitably occur, but that its environment is so well controlled that it automatically adjusts and corrects itself.

In Part 2 of this blog series, we will begin with a definition of the HDC market size and then go on to explain the primary accomplishments of the HDC in terms of sustainability — from a power, cooling, availability and load balancing standpoint.

The Critical Role Data Centers Play in Today’s Enterprise Networks: Part 4 – Leveraging SDN and Cloud On-Ramp Technologies to Offer Optimum Data Center Solutions

At the conclusion of Part 3 of this blog series, we established the imperative of cloud on-ramps in a data center. We will conclude this series with how we have leveraged SDN and cloud on-ramp technologies to empower our customers.

Empowering Customers with a Range of Data Center Solutions

Pictured: the three fundamental layers of the cloud computing services stack, which are infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).

In the cloud computing world, infrastructure as a service (IaaS) is one of three fundamental layers of the cloud computing services stack, which also includes Platform as a Service (PaaS) and Software as a Service (SaaS).

As a data center offering IaaS, we leverage not only SDN and cloud on-ramp technologies, but also a wide range of network virtualization technologies that make different levels of data center solutions possible. IaaS is an automated computing infrastructure offering, provisioned and managed over the Internet. IaaS provides virtualized computing resources on an outsourced basis to support business operations. Typically, IaaS provides hardware, storage, servers and data center space complemented by networking capabilities; it may also include software. Customers are able to self-provision this infrastructure, using a Web-based graphical user interface that serves as an IT operations management console for the overall environment.
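
From the customer’s side, self-provisioning usually boils down to a call against the provider’s management API. The sketch below uses a hypothetical endpoint, token and payload; a real IaaS portal publishes its own API and field names:

```python
import json
import urllib.request

# Hypothetical endpoint and credentials; a real IaaS provider publishes its own API.
API_URL = "https://iaas.example.com/v1/servers"
API_TOKEN = "REPLACE_ME"

def provision_server(name: str, vcpus: int, ram_gb: int, disk_gb: int) -> dict:
    """Request a new virtual server from the provider's management API."""
    payload = json.dumps(
        {"name": name, "vcpus": vcpus, "ram_gb": ram_gb, "disk_gb": disk_gb}
    ).encode()
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(provision_server("app-01", vcpus=4, ram_gb=16, disk_gb=200))
```

The web-based management console mentioned above is essentially a graphical front end over calls like this one.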

Delivering Carrier-neutral, Cloud-neutral, Cost-effective Data Center Solutions

As a best-of-breed, carrier-neutral colocation provider, we have leveraged several state-of-the-art technologies to provide data center solutions that go beyond cost savings. Our Mineral Gap data center offers connectivity, reliability, security, and scalability second to none. As far as connectivity goes, we have fully redundant network connections and typically offer access to a variety of public and private telecommunications services to meet the unique requirements of any business. We also provide connectivity to major cloud providers such as Amazon Web Services (AWS), Azure, Google Cloud, IBM Cloud, Oracle Cloud, and Salesforce. In addition, multiple carriers lit into our Mineral Gap facility offer various SD-WAN services.

When it comes to reliability, we have the systems, processes, and staff in place to deliver “four-nines” or better availability on an annual basis, with uptime Service Level Agreements to back it up. In addition to physical security measures that meet or exceed federal government standards, we are NIST 800 compliant and meet SSAE 16, SOC 1 and SOC 2 standards. We also meet standards that are critical to specific verticals, such as PCI DSS, NIST, HITRUST, HIPAA and more. We offer scalable and sustainable infrastructure that focuses on data center and network services, which deliver the optimum solution in terms of rack space, power (with the ability to handle brownouts), connectivity, bandwidth (including bursting on-demand) and latency.

We would like to conclude this blog series by reiterating the critical role data centers play in today’s enterprise networks. It is what drives us to constantly leverage technology that empowers and benefits our customers in a timely fashion. Whether it is colocation or hybrid-colo, we seek to deliver every customer’s desired combination of flexibility, scalability, security, control, reliability, and cost-effectiveness that best suits their business model.

The Critical Role Data Centers Play in Today’s Enterprise Networks: Part 3 – Why Cloud On-Ramps are Key for an Enterprise Migrating to the Cloud

What are Cloud On-Ramps?

“Migrating to the cloud” has been a buzz phrase in the business world for some time, and yet there is little awareness outside of IT staff as to what it really entails. In reality, data centers are where the cloud—whether public, private or hybrid—resides. Nonetheless, an enterprise seeking to migrate some or all of its IT operations to the cloud needs to understand that a critical aspect of a successful migration relates to cloud on-ramps. As we explained at the end of Part 2 of this blog series, cloud on-ramps are private, direct connections to the cloud—i.e., typically to popular name brand cloud service providers—from within a 3rd party data center (as shown in the diagram). So for an enterprise customer looking to colocate its IT operations into a 3rd party data center, this cloud connectivity is critical to selecting the right data center operator.

Pictured: a diagram that demonstrates how a cloud on-ramp interfaces between a data center and cloud hosting providers.
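
As one concrete example of ordering such a private connection, here is a sketch that requests a dedicated AWS Direct Connect port via boto3. The location code, bandwidth and connection name are placeholder values, and other providers (Azure ExpressRoute, Google Cloud Interconnect) expose analogous APIs:

```python
import boto3

def request_on_ramp(name: str = "colo-to-aws-demo",
                    location: str = "EqDC2",      # placeholder Direct Connect location code
                    bandwidth: str = "1Gbps") -> dict:
    """Ask AWS for a dedicated Direct Connect port at a given facility."""
    dx = boto3.client("directconnect")
    return dx.create_connection(
        location=location,
        bandwidth=bandwidth,
        connectionName=name,
    )

if __name__ == "__main__":
    print(request_on_ramp())
```

In practice the data center operator handles the physical cross-connect inside the facility; the API call only reserves the port on the cloud provider’s side.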

Does the Data Center Measure Up?

A modern data center must provide connectivity to major cloud providers such as Amazon Web Services (AWS), Azure, Google Cloud, IBM Cloud, Oracle Cloud, and Salesforce. However, these cloud services providers won’t just agree to connect with a data center, unless it meets certain industry standards. Among the first things that name brand cloud service providers look for in 3rd party data centers is their Uptime Institute rating. There are four “tiers” created by the Uptime Institute to indicate the level and type of standards a data center meets (see uptimeinstitute.com graphic below). For example, a data center with a Tier III rating indicates it meets 99.982% uptime, no more than 1.6 hours of downtime per year, and is N+1 fault tolerant providing at least 72-hour power outage protection.
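
The downtime figure follows directly from the availability percentage. A quick sanity check, assuming a non-leap 8,760-hour year:

```python
HOURS_PER_YEAR = 365 * 24   # 8,760 hours in a non-leap year

def max_downtime_hours(availability_pct: float) -> float:
    """Annual downtime allowed at a given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

print(f"Tier III (99.982%): {max_downtime_hours(99.982):.1f} hours/year")   # ~1.6
print(f"'Four nines' (99.99%): {max_downtime_hours(99.99):.1f} hours/year")  # ~0.9
```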

In addition to the Uptime rating, these name brand cloud service providers will seek assurances that a 3rd party data center has the capabilities to provide critical connectivity, reliability, security, and quality guarantees. They will also want the 3rd party data center to be carrier-neutral. Typically, a data center that provides colocation services is designed to connect to multiple carriers in a “network neutral” environment. More importantly, an enterprise that seeks to adopt a hybrid cloud strategy will need a data center that offers provisioning on-ramps to various private and public clouds. In fact, networking powerhouse Cisco recently introduced its “SD-WAN Cloud onRamp for CoLocation,” a platform of virtualized network functions (VNFs) and trusted hardware that runs in a colocation facility to provide connectivity to multi-cloud applications, along with an integrated security stack and cloud orchestration for remote management.

Getting IT Right

So while “migrating to the cloud” might seem like a catchy thing to say, it can prove to be a challenge if it is not done right. A successful migration requires the right data center with the right secure colocation infrastructure and the right cloud on-ramps to ensure that an enterprise gets it right – and helps put its business on an on-ramp to cloud nine. In Part 4, the concluding part of this blog series, we will discuss how we leverage these technologies to provide data center solutions that go beyond cost savings.

The Critical Role Data Centers Play in Today’s Enterprise Networks: Part 2 – How SD-WAN is Streamlining the Enterprise and Attendant Data Centers

At the conclusion of Part 1 of this blog series, we introduced the concept of virtualization, which is the underlying technology that defines the SD-WAN (Software-Defined Wide Area Network). In this post, we will expand on how SD-WAN, along with its conforming data center infrastructure, has transformed the enterprise network.

What is SD-WAN?

SD-WAN is a wide area network that utilizes software components to control WAN operations. Network control software is used to virtualize networking hardware, just like hypervisors are deployed to virtualize data center operations.

Pictured: a diagram of SD-WAN architecture that demonstrates how an enterprise can connect to its branch office networks and data centers.

As depicted in the diagram, SD-WAN allows an enterprise to connect all of its networks—branch office networks and attendant data centers—within its system across a wide geographic area. Further, SD-WAN provides end-to-end encryption across the network and thus increases security. Nonetheless, while SD-WAN is credited with providing end-to-end reliability, scalability and security, it’s the underlying hardware infrastructure, residing in its disparate data centers, that guarantees it.

Debunking Popular SD-WAN Myths via Data Center Truths

Myth 1: All SD-WAN Providers Reduce Network Costs

One of the big myths associated with SD-WAN is that all SD-WAN providers reduce network costs. An enterprise might decide to scale down its expensive MPLS transport infrastructure and use it only for mission-critical applications while moving the rest of its non-critical traffic to the cloud. In making such decisions, i.e., to move some or all of its traffic to a lower-cost SD-WAN incorporating cloud services, the enterprise will need to do a cost/benefit analysis as to how these lower costs impact its overall operations and top-line revenues. While an SD-WAN implemented using low-cost data centers offering rudimentary cloud services might reduce overall network costs, it could adversely impact an enterprise’s operating efficiency and revenue growth.
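
A back-of-the-envelope version of that cost/benefit analysis, with entirely made-up monthly prices per site, just to show the shape of the comparison:

```python
def monthly_wan_cost(sites: int, mpls_sites: int, mpls_per_site: float = 900.0,
                     broadband_per_site: float = 150.0, sdwan_license: float = 120.0) -> float:
    """Keep MPLS only where it is mission-critical; move the rest to broadband + SD-WAN."""
    internet_sites = sites - mpls_sites
    return (mpls_sites * mpls_per_site
            + internet_sites * (broadband_per_site + sdwan_license))

if __name__ == "__main__":
    all_mpls = monthly_wan_cost(sites=50, mpls_sites=50)
    hybrid = monthly_wan_cost(sites=50, mpls_sites=10)
    print(f"all-MPLS: ${all_mpls:,.0f}/mo, hybrid SD-WAN: ${hybrid:,.0f}/mo")
    # The raw savings still have to be weighed against any impact on
    # application performance and top-line revenue, as noted above.
```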

Myth 2: All SD-WAN Solutions Enable Global Enterprise Expansion and Cloud Migration

Another big myth that pervades the SD-WAN space is that all SD-WAN solutions enable global enterprise expansion and cloud migration. Nothing could be further from the truth. A global SD-WAN solution must offer reliable, secure and scalable connectivity, including dedicated private connections. It should be a single-platform solution that integrates end-to-end embedded WAN optimization, application acceleration, and multi-layered security for its cloud and on-premise customers. Only then will such an SD-WAN solution empower a global enterprise customer to connect its branch offices, optimize its application performance, and scale connectivity to its users worldwide. Again, it’s the comprehensive capabilities of the underlying data center infrastructure that make such a global SD-WAN solution possible.

SD-WAN is Only as Scalable and Cost-Efficient as its Underlying Data Center Infrastructure

While SD-WAN is helping streamline the business enterprise, it’s pretty apparent that built-in data center functionality is what impacts key factors like network costs, cloud migration and global enterprise expansion. In Part 3 of this series, we will talk about cloud on-ramps, which are private, direct connections to the cloud from within a data center. As more and more enterprises migrate to the cloud, these on-ramps are vital to the “go, no-go” decision that an enterprise will make when selecting a provider.

The Critical Role Data Centers Play in Today’s Enterprise Networks: Part 1 – How Software Rocked the Wide Area Networks (WANs) They Support

This four-part blog series will highlight the crucial role that the modern data center plays in today’s constantly evolving enterprise networks with their myriad transport protocols and interconnections to the internet, cloud services, and various endpoints. It will specifically discuss the impact of SDN and Cloud On-Ramp technologies on the enterprise WAN.   

What is SDN?

In trying to better understand the role of the modern data center in today’s enterprise networks, we believe it’s necessary to explain some of the technological evolution that has taken place in the past decade. It began with a paradigm shift in the computer networking architecture model, when the concept of Software-Defined Networking, or SDN, was introduced. SDN’s distinguishing feature was its ability to separate the data plane from the control plane in networking hardware, such as routers and switches, which constitute some of the critical components in a data center.

Pictured: the SDN controller has two main application programming interfaces, or APIs, that interact with the application layer and the infrastructure layer.

SDN basically decoupled the network control function from the hardware and implemented it in software, while data plane functionality continued to reside in the networking hardware. SDN’s implementation of the control plane in software on an independent server gave the network administrator flexible control of the network and consequently delivered more efficient network usage. SDN freed the network administrator to manage traffic from a simple, graphical user interface on a remote central console without having to physically change connections and settings on routers and switches.

As seen in the diagram, the SDN controller has two main application programming interfaces or APIs. The Southbound API relays information to the switches and routers below it in the networking infrastructure stack and the Northbound API communicates with the business applications above it. The SDN controller thus allows a network administrator to change network rules on the fly, dynamically prioritize traffic based on applications and usage, and even block certain traffic as and when required.
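
A toy sketch of that two-sided role: a northbound call expresses intent (“prioritize this application”) and the controller translates it into southbound rules pushed to each switch it manages. The device names and rule fields are invented; real controllers such as OpenDaylight or ONOS expose far richer APIs:

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match_app: str   # e.g., traffic classified as "voip"
    action: str      # e.g., "set-priority:high" or "drop"

@dataclass
class SdnController:
    switches: list[str]
    rules: dict[str, list[FlowRule]] = field(default_factory=dict)

    # Northbound API: business applications express intent.
    def prioritize(self, app: str) -> None:
        self._push(FlowRule(match_app=app, action="set-priority:high"))

    def block(self, app: str) -> None:
        self._push(FlowRule(match_app=app, action="drop"))

    # Southbound API: the controller programs every switch it manages.
    def _push(self, rule: FlowRule) -> None:
        for sw in self.switches:
            self.rules.setdefault(sw, []).append(rule)
            print(f"[southbound] {sw}: {rule.match_app} -> {rule.action}")

if __name__ == "__main__":
    ctrl = SdnController(switches=["leaf-1", "leaf-2", "spine-1"])
    ctrl.prioritize("voip")   # change network behavior on the fly
    ctrl.block("bittorrent")  # no box-by-box reconfiguration needed
```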

SDN Impact—from SD-WAN to the Virtual Data Center

It’s important to understand the key role that SDN plays in modern data centers, which usually host multi-tenant colocation clients and cloud services providers. A data center’s ability to control and manage assorted network traffic using SDN, typically implemented via SD-WANs or Software-Defined Wide Area Networks, has been both significantly enhanced and streamlined. SDN has also helped reduce the cost structure of data centers, because it allows them to deploy cheaper commodity networking hardware.

Finally, we would be remiss if we did not mention the concept of the software-defined data center (SDDC) or virtual data center (VDC). What SDN has done for computer networks, SDDC/VDC is doing for data centers. SDDC is an enterprise-class data center that implements virtualization techniques. Virtualization is the ability to simulate hardware resources in software. All of the simulated hardware functionality is replicated as a “virtual instance” and operates exactly as the hardware would. SDDC primarily enables virtualizing physical servers, storage, networking and other infrastructure devices and equipment in a data center.
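
As a minimal sketch of the “defined in software” idea, here is an illustrative declarative description of a virtual data center; an SDDC/VDC platform would turn a specification like this into running virtual instances (all names and sizes are invented):

```python
from dataclasses import dataclass

@dataclass
class VirtualServer:
    name: str
    vcpus: int
    ram_gb: int

@dataclass
class VirtualNetwork:
    name: str
    cidr: str

@dataclass
class VirtualDataCenter:
    servers: list[VirtualServer]
    networks: list[VirtualNetwork]

# Everything below exists only as software objects until the SDDC platform
# instantiates the corresponding virtual machines and virtual networks.
vdc = VirtualDataCenter(
    servers=[VirtualServer("web-01", vcpus=2, ram_gb=8),
             VirtualServer("db-01", vcpus=8, ram_gb=64)],
    networks=[VirtualNetwork("frontend", "10.0.1.0/24"),
              VirtualNetwork("backend", "10.0.2.0/24")],
)
print(vdc)
```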

In Part 2 of this series, we will expound more on SD-WAN and some of the myths associated with it. We will also address how data center choices impact SD-WAN factors like network costs, cloud migration and global enterprise expansion. But needless to say, SDN has forever changed the enterprise networking world for the better.

Mission Critical IT and Hospitals: Why Compete for Campus Resources?

Today’s hospital campuses have evolved over the last several decades to adapt to emerging requirements of innovative patient care services and tools. In particular, primary and emergency power resources on a hospital campus have to serve many missions, including basic life safety in the building (such as emergency lighting and fire protection) as well as patient care systems that are both mechanical and IT related.

Hospitals have a greater risk profile in the event of an outage and have a wide variety of additional risk factors to manage in the event of power loss, including building systems, medical devices and IT applications, and monitoring capabilities that maintain the lives of many patients in residence, not to mention data backup and disaster recovery.

While utility power has proven to be highly scalable and reliable throughout most parts of the United States, backup on-site generators have had a more mixed record. These assets require large fixed-cost investments, have limited capacity, and in the event of insufficient power or loss of service may be subject to aggressive surge pricing from suppliers.

These drawbacks associated with on-site backup power might be sustainable in a static demand environment within the hospital organization. However, these facilities have faced accelerating competition for mission-critical power resources within the organization, particularly with regard to the rapid growth of IT power requirements.

The hospital IT environment involves traditional business facility needs (such as employee email, files, operating systems and workflow management systems, administrative and financial back office systems, communications/telecoms, building operations, and security systems) and electronically controlled access points (such as elevators, power-assisted doors and other items). These assets are often defined as mission critical in any medium-to-large enterprise, so they are supported by backup generator power that can maintain operations for several days at a time until fuel to replenish generators arrives or power is restored.

Both facility and patient elements of mission-critical areas have experienced significant growth, as patient care and the IT storage and processing demands associated with this care and operation of the hospital environment have advanced. This has in turn placed significantly greater demands on campus-based power backup, which has finite capacity. In the event of a crisis, any requirement not explicitly tied to the preservation of human life must yield, resulting in loss of power to a variety of mission critical IT resources and capabilities.

This tradeoff is unnecessary. Hospitals can manage all their mission critical requirements by allocating on-campus resources only to those operations directly tied to patient care that must be co-located with patients, whether IT or mechanical. All other IT resources and capabilities can be located in off-site data infrastructure, with appropriate power backup and support that do not compete with finite campus resources and that do not require large capital expenses on the part of the hospital organization itself. Using off-site data infrastructure facilitates backup and disaster recovery, frees up on-campus real estate and redundant power capacity, and enhances the hospital’s ability to meet all the demands on the organization without unnecessary, risky, high-cost tradeoffs.

DP Facilities, Inc. is 100% U.S.-citizen-owned and operated. Our flagship data center, Mineral Gap, located in Wise, Virginia (also known as “The Safest Place on Earth”), is HITRUST CSF® certified, demonstrating that its BMS, EPMS, SOC, and NOC systems meet key regulations and industry-defined requirements for colocation, including hybrid cloud, in healthcare, and that Mineral Gap appropriately manages risk, including HIPAA compliance. Mineral Gap is the first concurrently maintainable Tier III data center in Virginia whose design and construction are certified by the Uptime Institute, delivering 99.98 percent availability. Mineral Gap is simply one of the best data centers in the US.