Archive for category: Blog

The Critical Role Data Centers Play in Today’s Enterprise Networks: Part III – Why Cloud On-Ramps are Key for an Enterprise Migrating to the Cloud

What are Cloud On-Ramps?

“Migrating to the cloud” has been a buzz phrase in the business world for some time, and yet there is little awareness outside of IT staff as to what it really entails. In reality, data centers are where the cloud—whether public, private or hybrid—resides. Nonetheless, an enterprise seeking to migrate some or all of its IT operations to the cloud needs to understand that a critical aspect of a successful migration relates to cloud on-ramps. As we explained at the end of Part II of this blog series, cloud on-ramps are private, direct connections to the cloud—typically to popular name brand cloud service providers—from within a 3rd party data center (as shown in the diagram). So for an enterprise customer looking to colocate its IT operations into a 3rd party data center, this cloud connectivity is critical to selecting the right data center operator.

Pictured: a diagram that demonstrates how a cloud on-ramp interfaces between a data center and cloud hosting providers.

Does the Data Center Measure Up?

A modern data center must provide connectivity to major cloud providers such as Amazon Web Services (AWS), Azure, Google Cloud, IBM Cloud, Oracle Cloud, and Salesforce. However, these cloud service providers won’t agree to connect with a data center unless it meets certain industry standards. Among the first things that name brand cloud service providers look for in 3rd party data centers is their Uptime Institute rating. The Uptime Institute defines four “tiers” to indicate the level and type of standards a data center meets (see the uptimeinstitute.com graphic below). For example, a Tier III rating indicates that a data center provides 99.982% uptime (no more than 1.6 hours of downtime per year) and is N+1 fault tolerant, with at least 72 hours of power-outage protection.
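Those tier percentages translate directly into allowed downtime per year. A quick back-of-the-envelope calculation, using the commonly cited Uptime Institute availability figures for each tier:

```python
# Allowed annual downtime implied by an availability percentage.
# The tier percentages below are the commonly cited Uptime Institute figures.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def max_downtime_hours(availability_pct: float) -> float:
    """Return the maximum hours of downtime per year for a given availability."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

tiers = {"Tier I": 99.671, "Tier II": 99.741, "Tier III": 99.982, "Tier IV": 99.995}
for tier, pct in tiers.items():
    print(f"{tier}: {max_downtime_hours(pct):.1f} hours of downtime per year")
```

Running this shows Tier III allows roughly 1.6 hours of downtime per year, matching the figure above, while Tier IV allows under half an hour.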

In addition to the Uptime rating, these name brand cloud service providers will seek assurances that a 3rd party data center has the capabilities to provide critical connectivity, reliability, security, and quality guarantees. They will also want the 3rd party data center to be carrier-neutral. Typically, a data center that provides colocation services is designed to connect to multiple carriers in a “network neutral” environment. More importantly, an enterprise that seeks to adopt a hybrid cloud strategy will need a data center that offers provisioned on-ramps to various private and public clouds. In fact, networking powerhouse Cisco recently introduced its “SD-WAN Cloud onRamp for CoLocation,” a platform of virtualized network functions (VNFs) and trusted hardware that runs in a colocation facility to provide connectivity to multi-cloud applications, along with an integrated security stack and cloud orchestration for remote management.

Getting IT Right

So while “migrating to the cloud” might seem like a catchy thing to say, it can prove to be a challenge if it is not done right. A successful migration requires the right data center with the right secure colocation infrastructure and the right cloud on-ramps to ensure that an enterprise gets it right – and helps put its business on an on-ramp to cloud nine. In Part IV, the concluding part of this blog series, we will discuss how we leverage these technologies to provide data center solutions that go beyond cost savings.

The Critical Role Data Centers Play in Today’s Enterprise Networks: Part II – How SD-WAN is Streamlining the Enterprise and Attendant Data Centers

At the conclusion of Part I of this blog series, we introduced the concept of virtualization, which is the underlying technology that defines the SD-WAN (Software-Defined Wide Area Network). In this post, we will expand on how SD-WAN, along with its conforming data center infrastructure, has transformed the enterprise network.

What is SD-WAN?

SD-WAN is a wide area network that utilizes software components to control WAN operations. Network control software is used to virtualize networking hardware, just like hypervisors are deployed to virtualize data center operations.

Pictured: a diagram of SD-WAN architecture that demonstrates how an enterprise can connect to its branch office networks and data centers.

As depicted in the diagram, SD-WAN allows an enterprise to connect all of its networks—branch office networks and attendant data centers—across a wide geographic area. Further, SD-WAN provides end-to-end encryption across the network and thus increases security. Nonetheless, while SD-WAN is credited with providing end-to-end reliability, scalability and security, it’s the underlying hardware infrastructure – constituted in its disparate data centers – that guarantees it.
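To make the idea concrete, here is a toy sketch of the kind of policy-based path selection an SD-WAN edge performs. The link names, metrics, thresholds, and rates are purely illustrative, not any vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class Link:
    """One available WAN transport with its measured quality and cost."""
    name: str
    latency_ms: float
    loss_pct: float
    cost_per_gb: float

def select_link(links, max_latency_ms, max_loss_pct):
    """Pick the cheapest link that still meets the application's SLA policy."""
    eligible = [l for l in links
                if l.latency_ms <= max_latency_ms and l.loss_pct <= max_loss_pct]
    if not eligible:
        return None  # a real policy would fall back to best-effort here
    return min(eligible, key=lambda l: l.cost_per_gb)

links = [
    Link("MPLS", latency_ms=20, loss_pct=0.01, cost_per_gb=0.50),
    Link("Broadband", latency_ms=45, loss_pct=0.30, cost_per_gb=0.05),
]
# A latency-sensitive app gets MPLS; bulk traffic is steered to cheap broadband.
print(select_link(links, max_latency_ms=30, max_loss_pct=0.1).name)   # MPLS
print(select_link(links, max_latency_ms=100, max_loss_pct=1.0).name)  # Broadband
```

The software control plane makes this per-application decision continuously and centrally, which is exactly the flexibility SD-WAN brings over static WAN routing.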

Debunking Popular SD-WAN Myths via Data Center Truths

Myth 1: All SD-WAN Providers Reduce Network Costs

One of the big myths associated with SD-WAN is that all SD-WAN providers reduce network costs. An enterprise might decide to scale down its expensive MPLS transport infrastructure and use it only for mission-critical applications while moving the rest of its non-critical traffic to the cloud. In making such decisions, i.e., to move some or all of its traffic to a lower-cost SD-WAN incorporating cloud services, the enterprise will need to do a cost/benefit analysis as to how these lower costs impact its overall operations and top-line revenues. While an SD-WAN implemented using low-cost data centers offering rudimentary cloud services might reduce overall network costs, it could adversely impact an enterprise’s operating efficiency and revenue growth.
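The cost/benefit point can be illustrated with a simple back-of-the-envelope model. The per-GB rates below are made-up illustrative numbers, not market prices:

```python
def monthly_transport_cost(total_gb, critical_fraction,
                           mpls_rate=0.50, broadband_rate=0.05):
    """Cost of keeping only mission-critical traffic on MPLS (illustrative rates)."""
    critical_gb = total_gb * critical_fraction
    bulk_gb = total_gb - critical_gb
    return critical_gb * mpls_rate + bulk_gb * broadband_rate

all_mpls = monthly_transport_cost(10_000, critical_fraction=1.0)
hybrid = monthly_transport_cost(10_000, critical_fraction=0.2)
print(f"all-MPLS: ${all_mpls:,.0f}, hybrid: ${hybrid:,.0f}, "
      f"savings: ${all_mpls - hybrid:,.0f}")
```

A model like this only captures transport spend; as the paragraph above notes, the real analysis must also weigh how cheaper, lower-grade connectivity affects operating efficiency and revenue.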

Myth 2: All SD-WAN Solutions Enable Global Enterprise Expansion and Cloud Migration

Another big myth that pervades the SD-WAN space is that all SD-WAN solutions enable global enterprise expansion and cloud migration. Nothing could be further from the truth. A global SD-WAN solution must offer reliable, secure and scalable connectivity, including dedicated private connections. It should be a single platform solution that necessarily integrates end-to-end embedded WAN optimization, application acceleration, and multi-layered security for its cloud and on-premises customers. Only then will this SD-WAN solution empower a global enterprise customer to connect its branch offices, optimize its application performance, and scale connectivity to its users worldwide. Again, it’s the comprehensive capabilities of the underlying data center infrastructure that make such a global SD-WAN solution possible.

SD-WAN is Only as Scalable and Cost-Efficient as its Underlying Data Center Infrastructure

While SD-WAN is helping streamline the business enterprise, it’s pretty apparent that inbuilt data center functionality is what impacts key factors like network costs, cloud migration and global enterprise expansion. In Part III of this series, we will talk about cloud on-ramps, which are private, direct connections to the cloud from within a data center.  As more and more enterprises migrate to the cloud, these on-ramps are vital to the “go, no-go” decision that an enterprise will make when selecting a provider.


The Critical Role Data Centers Play in Today’s Enterprise Networks: Part I – How Software Rocked the Wide Area Networks (WANs) They Support

This four-part blog series will highlight the crucial role that the modern data center plays in today’s constantly evolving enterprise networks with their myriad transport protocols and interconnections to the internet, cloud services, and various endpoints. It will specifically discuss the impact of SDN and Cloud On-Ramp technologies on the enterprise WAN.   

What is SDN?

In trying to better understand the role of the modern data center in today’s enterprise networks, we believe it’s necessary to explain some of the technological evolution that has taken place in the past decade. It began with a paradigm shift in the computer networking architecture model, when the concept of Software-Defined Networking or SDN was introduced. SDN’s discerning feature was its ability to separate the data plane from the control plane in networking hardware, such as routers and switches, which constitute some of the critical components in a data center.

Pictured: the SDN controller has two main application programming interfaces or APIs that interact with the application layer and the infrastructure layer.

SDN basically decoupled the network control function from the hardware and implemented it in software, while data plane functionality continued to reside in the networking hardware. SDN’s implementation of the control plane in software on an independent server gave the network administrator flexible control of the network and consequently delivered more efficient network usage. SDN freed the network administrator to manage traffic from a simple graphical user interface on a remote central console, without having to physically change connections and settings on routers and switches.

As seen in the diagram, the SDN controller has two main application programming interfaces or APIs. The Southbound API relays information to the switches and routers below it in the networking infrastructure stack and the Northbound API communicates with the business applications above it. The SDN controller thus allows a network administrator to change network rules on the fly, dynamically prioritize traffic based on applications and usage, and even block certain traffic as and when required.
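The northbound/southbound split described above can be sketched in a few lines of toy code. This is a conceptual illustration only, not a real controller such as OpenDaylight, and all names are invented:

```python
class SDNController:
    """Toy controller: northbound API accepts policy intent from applications;
    a southbound push translates it into per-switch flow rules."""

    def __init__(self, switches):
        self.switches = switches  # southbound-managed devices (name -> rule list)
        self.policies = []        # (priority, match, action) tuples

    # Northbound API: business applications submit intent
    def add_policy(self, priority, match, action):
        self.policies.append((priority, match, action))
        self.policies.sort(key=lambda p: -p[0])  # highest priority wins
        self._push_rules()

    # Southbound push: install the ordered rule set on every device
    def _push_rules(self):
        for rules in self.switches.values():
            rules.clear()
            for priority, match, action in self.policies:
                rules.append({"match": match, "action": action, "priority": priority})

switches = {"edge-1": [], "core-1": []}
ctrl = SDNController(switches)
ctrl.add_policy(10, {"app": "voip"}, "queue:high")       # prioritize voice traffic
ctrl.add_policy(100, {"src": "10.0.9.0/24"}, "drop")     # block a subnet on the fly
print(switches["edge-1"][0]["action"])  # drop (highest-priority rule first)
```

Changing a rule here touches only the controller; every switch picks up the new behavior on the next push, which is precisely the “change rules on the fly from a central console” capability described above.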

SDN Impact—from SD-WAN to the Virtual Data Center

It’s important to understand the key role that SDN plays in modern data centers, which usually host multi-tenant colocation clients and cloud services providers. SDN, typically implemented across the WAN as SD-WAN (Software-Defined Wide Area Network), has not only significantly enhanced a data center’s ability to control and manage assorted network traffic, but also streamlined it. SDN has also helped reduce the cost structure of data centers, because it allows them to deploy cheaper commodity networking hardware.

Finally, we would be remiss if we did not mention the concept of the software-defined data center (SDDC) or virtual data center (VDC). What SDN has done for computer networks, SDDC/VDC is doing for data centers. SDDC is an enterprise-class data center that implements virtualization techniques. Virtualization is the ability to simulate hardware resources in software. All of the simulated hardware functionality is replicated as a “virtual instance” and operates exactly as the hardware would. SDDC primarily enables virtualizing physical servers, storage, networking and other infrastructure devices and equipment in a data center.

In Part II of this series, we will expound more on SD-WAN and some of the myths associated with it. We will also address how data center choices impact SD-WAN factors like network costs, cloud migration and global enterprise expansion. But needless to say, SDN has forever changed the enterprise networking world for the better.

Mission Critical IT and Hospitals: Why Compete for Campus Resources?

Today’s hospital campuses have evolved over the last several decades to adapt to emerging requirements of innovative patient care services and tools. In particular, primary and emergency power resources on a hospital campus have to serve many missions, including basic life safety in the building (such as emergency lighting and fire protection) as well as patient care systems that are both mechanical and IT related.

Hospitals have a greater risk profile in the event of an outage and have a wide variety of additional risk factors to manage in the event of power loss, including building systems, medical devices and IT applications, and monitoring capabilities that maintain the lives of many patients in residence, not to mention data backup and disaster recovery.

While utility power has proven to be highly scalable and reliable throughout most parts of the United States, backup on-site generators have had a more mixed record. These assets require large fixed-cost investments, have limited capacity, and in the event of insufficient power or loss of service may be subject to aggressive surge pricing from suppliers.

These drawbacks associated with on-site backup power might be sustainable in a static demand environment. In practice, however, hospital facilities have faced accelerating competition for mission-critical power resources within the organization, particularly given the rapid growth of IT power requirements.

The hospital IT environment involves traditional business facility needs (such as employee email, files, operating systems and workflow management systems, administrative and financial back office systems, communications/telecoms, building operations, and security systems) and electronically controlled access points (such as elevators, power-assisted doors and other items). These assets are often defined as mission critical in any medium-to-large enterprise, so they are supported by backup generator power that can maintain operations for several days at a time until fuel to replenish generators arrives or power is restored.

Both facility and patient elements of mission-critical areas have experienced significant growth, as patient care and the IT storage and processing demands associated with this care and operation of the hospital environment have advanced. This has in turn placed significantly greater demands on campus-based power backup, which has finite capacity. In the event of a crisis, any requirement not explicitly tied to the preservation of human life must yield, resulting in loss of power to a variety of mission-critical IT resources and capabilities.

This tradeoff is unnecessary. Hospitals can manage all their mission critical requirements by allocating on-campus resources to only those operations directly tied to patient care that must be co-located with patients, whether IT or mechanical. All other IT resources and capabilities can locate in off-site data infrastructure, with appropriate power backup and support that do not compete with finite campus resources and which do not require large capital expenses on behalf of the hospital organization itself. Using off-site data infrastructure facilitates backup disaster recovery, frees up on-campus real estate and redundant power capacity, and enhances the hospital’s ability to meet all the demands on the organization without unnecessary, risky, high-cost tradeoffs.

DP Facilities, Inc. is 100% U.S.-citizen-owned and operated. Our flagship data center, Mineral Gap — located in Wise, Virginia (also known as “The Safest Place on Earth”) — is HITRUST CSF® certified, demonstrating that Mineral Gap’s BMS, EPMS, SOC, and NOC systems have met key regulations and industry-defined requirements for healthcare colocation, including hybrid cloud, and that Mineral Gap is appropriately managing risk, including HIPAA compliance. Mineral Gap is the first concurrently maintainable designed and constructed Tier III data center in Virginia, certified by Uptime Institute for 99.982-percent availability. Mineral Gap is simply one of the best data centers in the US.

Mobile Medical Apps In a Regulatory World

Mobile medical apps are related to implantable medical devices such as insulin pumps, yet many have broader applications that appeal to the general population, not just “sick” patients.

Many mobile app developers from the largest IT firms in the world, like Apple, Google, Microsoft and Amazon, are past masters of data collection, processing and storage. For many of these firms, the creation and management of data center infrastructure is a core element of their business, even if many physical locations of these assets are not owned by the firms themselves. Large firms have perfected the design of their facilities and environments down to the doorknobs. Smaller app developers and new entrants to the mobile app community may also have access to sophisticated data center partners and infrastructure capabilities suited to the scope of their apps.

And yet…the word “medical” has introduced a new series of stakeholders, regulators, requirements and responsibilities that cannot simply bolt on to current data center environments and architecture.

There are two categories of mobile medical apps:

The first category includes the apps that the US Food and Drug Administration (FDA) will regulate because they meet the definition of a device preventing, diagnosing or treating a disease — e.g., mobile apps that monitor fetal heart rate (at home) during pregnancy and apps that monitor blood pressure. These apps are subject to FDA requirements and generate large amounts of data that must be stored in a HITRUST-compliant environment.

The second category includes the “wellness” apps that FDA generally does not currently regulate, such as FitBit, Apple Health app, and others. These apps generate large amounts of data that can be handled more like traditional customer data.
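The two categories imply two different storage paths. As a sketch, an app backend might route records by regulatory category along these lines; the data types and store names are purely illustrative, not a real compliance implementation:

```python
# Illustrative set of FDA-regulated device functions (hypothetical labels).
FDA_REGULATED = {"fetal_heart_rate", "blood_pressure"}

def storage_target(data_type: str) -> str:
    """Route a record to a compliant store based on its regulatory category."""
    if data_type in FDA_REGULATED:
        return "hitrust_compliant_store"  # FDA-regulated medical data
    return "standard_customer_store"      # wellness data, handled like ordinary customer data

print(storage_target("blood_pressure"))  # hitrust_compliant_store
print(storage_target("step_count"))      # standard_customer_store
```

The point of the sketch is architectural: the regulatory category must be decided before the data lands anywhere, which is why a compliant environment on day one matters.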

FDA-regulated apps pose a new challenge, and an invitation, for app makers large and small: house data in environments that are FDA- and HITRUST-compliant from day one, rather than undertaking large retrofits of facilities that handle much greater volumes of data not subject to these same requirements.

Access to a compliance-focused, purpose-built data environment for mobile medical app data will save developers significant cost and compliance approval delays.


Implantable Active Medical Devices: Risk and Opportunity in the Health Care Data Enterprise

Implantable medical devices are one of the most successful elements of our modern medical environment and are responsible for millions of interventions that have extended life and quality of life for many people over the last several decades. Current and planned active devices can transmit information about the patient to either the patient, healthcare provider, or medical device company (or some combination of all three).  

Implantable cardiac devices are a great example of this — e.g., Medtronic has a number of implantable heart monitors that monitor a patient’s heart rate and transmit the information back to Medtronic’s clinical network so physicians can analyze the data. The device will alert the patient and the physician immediately if the patient’s heart rate is outside normal parameters. Implantable insulin pumps are another popular device category. The implantable pump automatically detects blood sugar levels and dispenses insulin as needed (eliminating the need to do finger-prick blood sugar reads and then manually inject insulin). The data for the insulin pumps is usually stored on the pump itself and not kept long term, but that will likely change.

This data is mostly transmitted via wireless networks and is used through a variety of applications and platforms to treat the individual patient, then discarded (i.e., not kept long-term). And yet, all this data, generated across many patients for long periods of time, has real applications that require long-term secure, compliant data storage so it can be aggregated and analyzed for research purposes.  

Major cybersecurity risks to this data are already being detected. The U.S. Food and Drug Administration published draft guidance in October 2018 encouraging health care delivery organizations, manufacturers, users and data customers to manage security risks associated with these devices and their generated data. While device manufacturers have clear opportunities and risks to manage regarding their data and relevant networks, hospital systems are also a potential threat vector for cyber intrusions and loss of sensitive data related to these devices and patient care.

The use of implantable medical devices presents incredibly complex challenges to the healthcare enterprise. The healthcare industry needs the right data infrastructure, both to address regulatory requirements and to support long-term, secure medical device data collection and storage. The right data infrastructure — including HIPAA-compliant, HITRUST-certified data centers — can also manage risk, enable improved health outcomes, and facilitate new opportunities and revenue models.


Genomic Research Needs Robust Data Infrastructure

In 1946, Jorge Luis Borges wrote a story, “On Rigor in Science,” consisting of a single paragraph. The story recounts an empire where cartography was so advanced that it produced a map the same size as the empire itself. Today, the map in Borges’s story could serve as a metaphor for the human genome and the use of genetic testing.

Healthcare providers and researchers are increasingly using genetic and genomic testing in clinical practice and medical research which creates enormous amounts of extremely sensitive data. In fact, the volume of that data is expanding rapidly, as more private- and public-sector organizations tap the power of the genome. For example:

  • The marketplace for genomic testing includes for-profit enterprises such as 23andMe and Ancestry.com, among others.
  • Pharmaceutical companies are beginning to use specific genetic tests to determine if their drugs are appropriate for certain patients.
  • Private firms, such as Myriad Genetics, operate genomic databases, algorithms and analytical tools.
  • Finally, the National Institutes of Health (NIH) has a significant research interest in genomic data and testing, and correspondingly enormous data storage needs. There are already very large databases of genetic and genomic information in place, and these assets are only getting larger. NIH maintains ClinGen, one of the largest public genetic databases, and there are other public databases and associated resources within the academic medical community as well.

These databases and analytical capabilities represent some of the most valuable intellectual property for drug and treatment development in the world today. “Precision medicine,” or treatments based on a patient’s genes, lifestyle and other individual factors, is highly reliant on genetic and genomic data. The Food and Drug Administration (FDA) also issued 2018 guidance (PDF) on the regulation, use and data integrity of these databases in the clinical trials process, ensuring the use and growth of these resources for years to come.

All of this data is subject to compliance regimes consistent with HIPAA, and is also subject to the data breach reporting requirements of the Health Information Technology for Economic and Clinical Health (HITECH) Act, under which breaches that affect 500 people or more must be reported and are subject to significant potential fines.
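That 500-person threshold is simple to encode as a guard in a breach-response workflow. This sketch captures only the threshold check described above, not the full set of HITECH notification rules:

```python
# HITECH large-breach reporting threshold, per the discussion above.
HITECH_LARGE_BREACH_THRESHOLD = 500  # individuals affected

def is_reportable_large_breach(individuals_affected: int) -> bool:
    """True when a breach affects 500 or more people and so must be reported."""
    return individuals_affected >= HITECH_LARGE_BREACH_THRESHOLD

print(is_reportable_large_breach(499))  # False
print(is_reportable_large_breach(500))  # True
```

In a real compliance program this check would be one small part of a broader incident-response process, with legal review driving the actual notifications.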

Database creators, owners, managers, users, and their IT teams need a plug-and-play data hosting and storage solution that is secure, compliant, scalable and resilient. Rather than retrofit, these stakeholders should look for existing infrastructure that provides these benefits of colocation today.


Why Your Data Center Must Be Owned & Operated by U.S. Citizens

U.S. healthcare consumers’ data is too important to expose to foreign actors and threat vectors. Make sure your data center is 100% owned by U.S. citizens, and that only U.S. citizens operate and staff the data center. Doing so offers a vital layer of protection and keeps your data under the protection of U.S. law and U.S. data custody standards.

To be clear, it’s an imperative not driven by xenophobia or politics. Being U.S.-citizen-owned and operated is simply about making sure your data center’s owners don’t have a built-in incentive to compromise security, whether because of ties to overseas investors, links to foreign governments, or because their business is incorporated under another nation’s laws.

According to data service Statista, nearly 70% of web application attack traffic originates outside the U.S. — and the top foreign sources might be surprising: Netherlands (11.9%), China (7.1%), Brazil (6.2%), Russia (4.4%). Healthcare is the fourth most-targeted industry for cyber espionage, accounting for 24% of breaches globally.

Today’s headlines are full of examples showing the consequences of data breaches to healthcare organizations: lawsuits, criminal investigations, reputational damage — not to mention the risk of identity theft and other harm to consumers themselves. Statista added: “In 2017, the average costs of cybercrime in the United States amounted to 21.22 million U.S. dollars, the most costly worldwide.”

In the U.S., health information privacy and security are governed by the Health Insurance Portability and Accountability Act of 1996 (HIPAA), regulated by the U.S. Department of Health and Human Services (HHS). HHS summarizes the key elements of the HIPAA Security Rule on its website.

In addition, the U.S.-based not-for-profit HITRUST Alliance — which is made up of leaders from across the healthcare industry and its supporters — plays an important role in certifying data center systems. By including federal and state regulations, standards and frameworks, and incorporating a risk-based approach, the HITRUST CSF® (a widely used information privacy and security framework) helps organizations address these challenges through a comprehensive and flexible framework of prescriptive and scalable security controls.
