The Facts About Cloud High Availability and Disaster Recovery

Understanding the facts about HA and DR in the cloud can help you make informed decisions

Enterprises are moving more and more applications to the cloud. Gartner predicts that by 2016 the bulk of new IT spending will be for cloud computing platforms and applications, and that nearly half of large enterprises will have cloud deployments by the end of 2017.[1]

The far-reaching impact of cloud computing is summarized in a recent McKinsey report on disruptive technologies: "Cloud technology has the potential to improve productivity across $3 trillion in global enterprise IT spending, as well as enabling the creation of new online products and services for billions of consumers and millions of businesses alike."[2]

For many organizations, moving applications that can tolerate brief periods of downtime to the cloud is a straightforward decision with clear benefits. However, concerns about how to provide high availability and disaster protection in the cloud may make this decision more difficult for business-critical applications such as SQL Server, SAP, and Exchange. Understanding the facts about HA and DR in the cloud can help you make informed decisions about moving applications to the cloud, while ensuring that the important business operations that depend on them are protected from downtime and data loss.

Fact #1: You need high availability protection in a cloud.
Do not assume that your cloud environment provides high availability protection unless you have specifically configured it for HA. According to a recent study, "The average unavailability of cloud services is 10 hours per year or more, while the average availability is estimated to be 99.9%, far less than the expected availability of business critical applications."[3] That is the equivalent of more than a full business day of downtime. In fact, in 2013, Microsoft Windows Azure, Google, and Amazon Web Services all experienced service interruptions or downtime ranging from 4 minutes to several hours.[4]
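
As a back-of-the-envelope check on those figures, an availability percentage translates directly into annual downtime. The Python sketch below is purely illustrative arithmetic for a 24x7 service, not a measurement of any provider:

```python
# Convert an availability percentage into implied downtime per year,
# assuming a 24x7 service and a 365-day year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def downtime_hours_per_year(availability_pct: float) -> float:
    """Hours of downtime per year implied by a given availability level."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% availability -> {downtime_hours_per_year(pct):.2f} h/year down")
# 99.9%   -> 8.76 h/year (roughly the 10-hour figure cited above)
# 99.99%  -> 0.88 h/year
# 99.999% -> 0.09 h/year (about 5 minutes)
```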

For business-critical applications, the redundancy that you get with some cloud solutions, such as Windows Azure, is not enough. When you consider the cost of a minute of downtime for applications such as SQL Server, Oracle, and SAP, which may run many of your key business processes, it becomes clear that you need true high availability and disaster recovery protection. You need to ensure that end users have immediate access to data and applications in the event of a local failure, a regional disaster, or anything in between.

However, the traditional way of providing high availability protection is to build a cluster using two identical servers - a primary server and a standby server - with shared (typically SAN) storage. If the primary server fails, application operation moves to the standby server, which has immediate access to the same storage. The problem is that SANs are expensive to buy, manage, and maintain, and they are simply not an option in public cloud offerings. There are, however, high availability solutions that can be used in a cloud without requiring a SAN.

Fact #2: You can build a cluster in a cloud.
Even though you cannot have a SAN in a cloud, you can build a cluster for high availability protection. In a Windows cloud environment, you simply add SANLess clustering software to a Windows Server Failover Cluster (WSFC). The SANLess software uses real-time, block-level replication to keep local storage in two geographic regions of the cloud synchronized. If there is an outage, application operation is automatically moved to the remote instance, which has immediate access to current data. The synchronized storage looks to the WSFC like traditional shared storage, so no added complexity or specialized skills are needed to build or manage a SANLess cluster. In fact, a SANLess cluster is easy to manage and has the added benefit of eliminating the SAN as a single point of failure. SANLess clusters also provide complete configuration flexibility, allowing you to replicate between physical, virtual, cloud, and hybrid cloud environments, as well as between SAN and SANLess clusters.
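
The vendor's replication engine is not shown here, but the core idea of synchronous block-level replication can be sketched in a few lines of Python. Everything below (paths, block size, function name) is an illustrative assumption, not the product's API:

```python
import os

BLOCK_SIZE = 4096  # bytes; an illustrative block size, not the product's

def write_replicated_block(primary_path: str, replica_path: str,
                           block_index: int, data: bytes) -> None:
    """Write one block to the primary volume and mirror it to the replica
    before acknowledging. Because the write is not 'done' until both
    copies are durable, a failover node always sees current data."""
    assert len(data) == BLOCK_SIZE
    for path in (primary_path, replica_path):
        with open(path, "r+b") as volume:
            volume.seek(block_index * BLOCK_SIZE)
            volume.write(data)
            volume.flush()
            os.fsync(volume.fileno())  # force the block to stable storage
```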

Fact #3: You can have geographically separated nodes for DR in a cloud.
While providing high availability within the cloud will protect you from normal hardware failures and other unexpected outages within an availability zone (Amazon) or fault domain (Azure), you still need to protect against regional disasters. The easiest solution is to configure a multisite (geographically separated) cluster.

One effective method is to build a SANLess cluster within a cloud and extend it for disaster recovery by adding one or more nodes in an alternate data center or a different geographic region of the cloud. Unlike traditional clusters, which require identical hardware and software on every node, a SANLess cluster allows you to mix physical, cloud, and hybrid cloud configurations. The benefits of a DR configuration are clear. For example, simply adding a third, geographically separated node to your SANLess cluster in a Windows Azure cloud can give you a recovery point objective (RPO) of near-zero data loss and a recovery time objective (RTO) of about one minute.
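
To make those numbers concrete: RPO is bounded by replication lag (how far behind the standby copy can be), and RTO by failure detection plus failover time. The model below is a hypothetical back-of-the-envelope sketch with illustrative inputs, not a measurement of any particular product:

```python
def recovery_objectives(replication_lag_s: float,
                        detection_s: float,
                        failover_s: float) -> tuple[float, float]:
    """Worst-case RPO (data-loss window) and RTO (outage window) in seconds."""
    rpo = replication_lag_s          # data written but not yet replicated
    rto = detection_s + failover_s   # notice the failure, then switch nodes
    return rpo, rto

# Illustrative inputs: synchronous replication (near-zero lag), a ~30 s
# health-check interval, and ~30 s to start the application on the DR node.
rpo, rto = recovery_objectives(replication_lag_s=0.0,
                               detection_s=30.0,
                               failover_s=30.0)
print(f"RPO ~ {rpo:.0f} s, RTO ~ {rto:.0f} s")  # RPO ~ 0 s, RTO ~ 60 s
```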

Fact #4: You can create a cluster that mixes cloud and on-premises nodes.
You can use your on-premises data center as your primary location, with a failover cluster providing high availability protection, and use the cloud as your hot standby DR site. This is a very cost-effective alternative to building out your own DR site or renting rack space in a business continuity facility. In this case, the on-premises servers can be your choice of traditional SAN-based clusters, SANLess clusters, or even single servers not currently participating in a cluster.

The objective of a "hot" standby DR site is to have standby servers up and running in the DR site as quickly as possible, with access to a copy of the most recent application data, so that recovery after a disaster can be nearly immediate. A multisite cluster is an effective way to implement a hot standby DR site: the SANLess cluster software keeps the standby node's copy of the application data up to date. In the event of a forecasted disaster, such as a storm or a flood, applications can be moved to the cloud before the disaster strikes. In the event of an unexpected disaster, applications can be recovered manually or, in some cases, automatically, depending upon the quorum configuration. This mix of cloud and on-premises nodes gives you an excellent RTO and RPO with minimal investment in infrastructure.
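
The quorum point deserves a concrete illustration. A failover cluster stays online, and can fail over automatically, only while a strict majority of configured votes survives. The Python sketch below shows that majority rule with hypothetical node counts; real WSFC quorum options (witness types, dynamic quorum) add refinements not modeled here:

```python
def has_quorum(surviving_votes: int, total_votes: int) -> bool:
    """Majority quorum: strictly more than half of all configured votes
    must remain for the cluster to keep running and fail over on its own."""
    return surviving_votes > total_votes // 2

# Two-node cluster (on-premises primary + cloud DR node), no witness:
# losing the primary leaves 1 of 2 votes -- a tie, so recovery is manual.
print(has_quorum(1, 2))  # False

# Add a witness vote in a third location: the DR node plus the witness
# hold 2 of 3 votes after the primary fails, so failover can be automatic.
print(has_quorum(2, 3))  # True
```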

Fact #5: HA and DR in a cloud can be easy and highly cost-effective.
If you choose SANLess clustering software that provides an intuitive configuration interface, you can create a standard WSFC in a cloud in minutes without specialized skills. A SANLess cluster can also help you realize significant cost savings in several ways. First, in a Microsoft SQL Server environment, a SANLess cluster can give you high availability with SQL Server Standard Edition licenses, without requiring an upgrade to costly SQL Server Enterprise Edition.

Second, you can realize hundreds of thousands of dollars in savings with a SANLess cluster by eliminating the total cost of ownership (TCO) associated with a SAN. The TCO savings include the SAN hardware acquisition cost; the power, cooling, and data center floor space costs; and the ongoing labor cost of specialized SAN administration.

If you are thinking about moving your important applications to the cloud, you need to consider how you will protect those applications from downtime and data loss. While traditional SAN-based clusters are not possible in these environments, SANLess clusters can provide an easy, cost-efficient alternative. These clusters not only provide high availability protection, but also enable significantly greater configuration flexibility and potentially dramatic savings in both licensing costs and SAN TCO.

Notes

1. "Gartner Says Cloud Computing Will Become the Bulk of New IT Spend by 2016," Gartner press release.

2. Manyika, James, Michael Chui, et al., "Disruptive technologies: Advances that will transform life, business, and the global economy," McKinsey Global Institute (May 2013).

3. Whittaker, Zack, "Amazon Web Services Suffers Outage, Takes Out Vine, Instagram, Others with It," ZDNet (August 26, 2013).

4. Mackay, Martin, "Downtime Report: Top Ten Outages in 2013," Business2Community.com (December 2013).

About the Author

Jerry Melnick ([email protected]) is responsible for defining corporate strategy and operations at SIOS Technology Corp. (www.us.sios.com), maker of SIOS SAN and SANLess cluster software (www.clustersyourway.com). He has more than 25 years of experience in the enterprise and high availability software industries. He holds a Bachelor of Science degree from Beloit College, with graduate work in Computer Engineering and Computer Science at Boston University.
