
@CloudExpo: Blog Post

Taking Control of the Hybrid Cloud By @DerekCollison | @CloudExpo [#Cloud]

Why you need an OS to drive services across multiple environments

2015 is being billed by many in the industry as the "Year of the Hybrid Cloud." In fact, more than 65 percent of enterprise IT organizations will commit to hybrid cloud technologies before 2016, vastly driving the rate and pace of change in IT organizations, according to IDC FutureScape for Cloud. The reason hybrid cloud is so attractive is that organizations believe they can achieve greater levels of scalability and cost-effectiveness by using a combination of in-house IT resources and public cloud environments tailored to their unique needs.

Despite these promises, the move to a hybrid cloud approach can leave enterprises disappointed and wanting more. The hybrid cloud introduces greater complexity because each environment brings its own tools for deployment, management, monitoring and security, and it can result in higher costs if enterprises aren't strategic about where their workloads run. To truly achieve the promise of the hybrid cloud, IT organizations need an overarching operating system (OS) that unites the disparate environments and provides a single view into, and control over, each of them.

The Promises and Pain Points of the Hybrid Cloud
That hybrid cloud is expected to be a $1.85 trillion market by 2017, accounting for half of all IT spend, according to research firm Gartner, should not be surprising. The hybrid cloud lets enterprises leverage computing resources - whether on-premise in their own data centers or off-premise in the cloud - to gain agility and cut capex. With a hybrid cloud solution, enterprises can place workloads where they make the most sense and cost the least; share workloads across multiple public clouds; and scale up or down as needed.

For example, a hybrid cloud solution enables enterprises to store sensitive data in-house - to ensure security and reduce potential latency when accessing it - while using public cloud resources to meet temporary capacity needs that are not easily met using a private cloud.
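The placement rule described above - keep sensitive data in-house for security and latency, burst everything else to public cloud capacity - can be sketched in a few lines. The `Workload` type and `place` function below are hypothetical illustrations, not part of any vendor product:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    sensitive: bool    # handles regulated or confidential data
    cores_needed: int

def place(workload: Workload, onprem_cores_free: int) -> str:
    """Return the environment a workload should run in."""
    # Sensitive data stays in-house to ensure security and low latency.
    if workload.sensitive:
        return "on-premise"
    # Everything else bursts to the public cloud when on-premise
    # capacity runs short - the temporary-capacity case above.
    if workload.cores_needed > onprem_cores_free:
        return "public-cloud"
    return "on-premise"
```

For example, a 64-core reporting job with only 16 on-premise cores free would be placed in the public cloud, while a sensitive payroll workload stays on-premise regardless of capacity.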

While the business case for hybrid cloud makes sense, there are a number of challenges that enterprises may not consider when implementing their strategy, beyond just the day-to-day issues of managing diverse IT environments.

The top challenges enterprises face include security and integration, according to the 2015 IDG Enterprise Cloud Computing Study. Security - including the risk of unauthorized access, data integrity and protection - was cited by 61 percent of survey respondents as a barrier. And integration - such as making information available to applications outside the cloud, and preserving a uniform set of access privileges - was considered a major concern by 41 percent.

As this study indicates, there appears to be a perception of risk that comes with public clouds. But today public clouds are probably more secure than most private clouds or internal corporate IT resources. We believe that the concern about security likely revolves more around the lack of visibility and control enterprises feel they have when they leverage a hybrid cloud model. Each cloud vendor and equipment manufacturer has its own tools and processes, and very often they are not designed to communicate well with each other, or apply policy and business logic consistently across platforms. This disparity introduces challenges that prevent enterprises from seamlessly managing all of their on-premise and public cloud resources. Instead of simplifying their IT with the cloud, enterprises are actually adding new levels of complexity and management costs.

The cost of cloud resources sometimes surprises enterprises too. Even though the cloud promises to reduce costs - moving from a capex model to opex - enterprises often find their public cloud bills higher than they expected, particularly if they've moved a large number of workloads to the cloud or don't have the right governance in place. Enterprises that use a lot of public cloud resources may find that the overall cost exceeds that of running the same workloads on premise.
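The capex-versus-opex trade-off can be made concrete with back-of-the-envelope arithmetic. The rates below ($0.50/hour for a cloud instance; a $6,000 server amortized over 36 months plus $120/month in power and administration) are illustrative assumptions, not published pricing:

```python
def monthly_cloud_cost(hourly_rate: float, hours: float = 730) -> float:
    """Opex: pay-per-hour public cloud pricing (~730 hours per month)."""
    return hourly_rate * hours

def monthly_onprem_cost(server_capex: float, amortize_months: int = 36,
                        monthly_opex: float = 120.0) -> float:
    """Capex amortized over the server's useful life, plus power/admin."""
    return server_capex / amortize_months + monthly_opex

# A workload running 24x7, every hour of the month:
cloud = monthly_cloud_cost(0.50)      # 365.0 per month
onprem = monthly_onprem_cost(6000.0)  # ~286.67 per month
```

For a steady, always-on workload the hypothetical on-premise server wins; the cloud's pay-per-hour model pays off mainly for bursty or temporary demand, which is why governance over what runs where matters.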

Enterprises have taken a variety of approaches to address the challenges they face in adopting a hybrid cloud strategy. Some write homegrown scripts that attempt to patch together various vendor-specific management tools into a somewhat more cohesive - but far from perfect - approach to overall management of public cloud and on-premise resources. Some physically segment their network, through firewall configuration or by buying niche products, to achieve better application and network security. Regardless of what they try, enterprises find that significant manual labor is still involved in operating their hybrid cloud environments, which often negates the automation and agility they expected to achieve.

The Roadmap for a Successful Hybrid Cloud
Enterprises need a broader, cohesive solution that lets them access and manage both on-premise and public cloud resources the same way, from a centralized location. Essentially, what they need is a hybrid cloud operating system (HCOS) that can drive all of their IT resources, regardless of location. An HCOS is no different from the operating systems we're all familiar with on our personal computers. Much as a computer operating system ties together all the applications on a laptop, an HCOS manages an application's access to the compute resources it needs - not just on one server, but across a cluster of them, both on premise and in public clouds.

An HCOS eliminates today's current patchwork approach by:

  • Managing both on premise and public cloud resources - An HCOS gives enterprises clear visibility into how all their IT resources are operating and enables them to control all resources from a single location.
  • Providing a single place to create and enforce policy - With centralized management, the HCOS allows enterprises to apply policy consistently and ensure that changes to apps or moves to other resources don't compromise security.
  • Operating a highly diverse set of workloads - An HCOS enables users to operate any kind of workload, whether it's an application, service, operating system or Docker container.
  • Deploying multiple application components as a unit - An HCOS allows you to use whatever resource is best suited for each component and then enables you to compose them together into a cohesive application.
  • Securing workload communication and interaction - Because policy is enforced across the entire infrastructure, enterprises don't need to rely on multiple firewalls placed at arbitrary borders to ensure the environments and workloads can communicate securely.
  • Masking network complexities from developers - An HCOS clears the way for developers to innovate without being slowed down by typical IT concerns about security and interoperability thanks to policy being applied all the way down to the development environments.
  • Ensuring the performance and scalability of applications and services - By giving visibility into how the resources are performing, an HCOS allows enterprises to anticipate issues, move workloads to other resources and scale quickly, easily and automatically as demand requires.
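The centralized-policy idea above can be sketched minimally: one policy store consulted before every deployment, whatever the target environment, so rules are applied consistently across the hybrid cloud. The `POLICIES` table and `authorize` function are illustrative assumptions, not a real HCOS API:

```python
# Single source of truth for deployment policy, applied uniformly
# across on-premise and public cloud targets.
POLICIES = {
    "payments":  {"allowed_envs": {"on-premise"}},
    "web-front": {"allowed_envs": {"on-premise", "public-cloud"}},
}

def authorize(app: str, target_env: str) -> bool:
    """Centralized check: may this app be deployed to this environment?"""
    policy = POLICIES.get(app)
    if policy is None:
        return False  # default deny for apps with no registered policy
    return target_env in policy["allowed_envs"]
```

Because the same check runs for every environment, moving an app from a private data center to a public cloud cannot silently escape the rules - the point of the "single place to create and enforce policy" bullet.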

An HCOS is designed to eliminate the challenges of security, control and complexity with which enterprises are currently struggling, and to speed innovation and time to market. With an HCOS, enterprises can rest assured that their hybrid cloud resources will be easy to deploy and manage while maintaining policy and the highest governance standards. Most important, with an HCOS, enterprises will be able to achieve the return on investment they expect from their hybrid cloud strategy.

More Stories By Derek Collison

Derek Collison, CEO and Founder at Apcera, is a recognized leader in large-scale distributed systems and cloud platforms. Before founding Apcera, he designed and architected the industry’s first open PaaS, Cloud Foundry, for VMware. Prior to that, he co-founded the AJAX APIs group at Google, and designed and implemented a wide range of messaging products, including Rendezvous and EMS, at TIBCO Software.
