Resiliency in Controller-Based Network Architectures

At the core of SDN solutions is the concept of a controller

Last week Ivan Pepelnjak wrote an article about the failure domains of controller-based network architectures. At the core of SDN solutions is the concept of a controller, which in most cases lives outside the network devices themselves. A controller as a central entity controlling the network (hence its name) provides significant value and capabilities to the network. We have talked about these in this blog many times.

Centralized Control

When introducing a centralized entity into any inherently distributed system, the architecture of such a system needs to carefully consider failure domains and scenarios. Networks have always been distributed entities, with each device more or less independent and a huge suite of protocols defined to manage the distributed state between all of them. When you think about it, the extent of distribution we have created in networks is actually quite impressive. We have created an extremely large distributed system with local decision making and control. I am not sure there are many other examples of complex distributed systems that truly run without some form of central authority.

It is exactly that last point that we networking folks tend to forget or ignore. Many control systems in the world have central control and management, and the vast majority of them work pretty well. Any complex manufacturing facility has centralized control over the robots, belts and all other machinery it may use. There usually is some distributed state and health checking at the interfaces between machines and operations, but the entire end-to-end process is controlled by a centralized entity.

The reason for this is not much different from the reason we are starting to deploy controllers in networks. Having a true end-to-end view of all available resources provides better overall performance of, and control over, the network. A centralized entity can make choices and decisions that are related to or dependent on previous choices, based on information that may well be outside the reach of a typical system in distributed operation.
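To make that concrete, here is a minimal sketch (with entirely hypothetical paths and numbers) of the difference global knowledge makes: a central scheduler that remembers earlier placements can spread flows across equal-cost paths, while a device deciding purely locally keeps piling traffic onto its static favorite.

```python
LINK_CAPACITY = 10  # Gbps per path, an assumed figure for illustration only

# Three equal-cost candidate paths between the same pair of endpoints,
# tracked as current load per path.
paths = {"path_a": 0.0, "path_b": 0.0, "path_c": 0.0}

def central_place(flow_gbps):
    """Place a flow on the least-loaded path, using knowledge of all
    previously placed flows -- information no single switch has."""
    best = min(paths, key=paths.get)
    paths[best] += flow_gbps
    return best

def local_place(flow_gbps):
    """A device deciding alone keeps picking its static local preference."""
    paths["path_a"] += flow_gbps
    return "path_a"

for size in [4, 4, 4]:
    print("central scheduler placed flow on", central_place(size))
# The central scheduler spreads 12 Gbps across three paths; repeated calls to
# local_place() would have pushed all 12 Gbps onto path_a, past its capacity.
```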

Architectural Choices

But the introduction of such an entity needs to be carefully architected and designed. The exact role of a controller in the day-to-day (or microsecond-to-microsecond) operation of a network becomes a critical choice: it defines the network's dependency on the controller and, as a result, the impact of a controller failure. At Plexxi we made a very deliberate architectural choice for our controller:

  • it can never be in the data path of network traffic. Not for new flows, not for existing flows. Not for link failures. Not for switch failures.

The network has to run when the controller is not available. It has to run for existing attached devices, newly attached devices, existing flows and new flows. Of course we want the controller to be available all the time because it gives us the best visibility, but we very deliberately architected it so that the network keeps working if it isn’t.

To that end we split our controller into two separate components. The most visible (and perhaps even traditional in this new world of controller architecture) is our central controller. It is software, runs on a VM or bare-metal server, and is the central coordinator. It maintains the database with all relevant data. It communicates with the switches. And the operator communicates with it through a GUI, our APIs, or our Data Services Engine.
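As a rough sketch of that role (class and method names are mine for illustration, not Plexxi's actual software, which is not public):

```python
class CentralController:
    """Hypothetical skeleton of the central coordinator described above: it
    owns the database, talks to the switch agents, and fronts the GUI/APIs."""

    def __init__(self):
        self.policies = {}    # stand-in for the database of all relevant data
        self.stats = {}       # visibility data reported up by the switches
        self.agents = {}      # switch_id -> per-switch agent connection

    def register_agent(self, switch_id, agent):
        self.agents[switch_id] = agent
        agent.receive(self.policies)        # sync state when a switch connects

    def apply_policy(self, name, policy):
        """Entry point for the GUI, the APIs, or the Data Services Engine."""
        self.policies[name] = policy
        for agent in self.agents.values():  # push intent; never touch hardware
            agent.receive(self.policies)

    def ingest_stats(self, switch_id, stats):
        """Statistics flow up for visibility; forwarding never depends on this."""
        self.stats[switch_id] = stats
```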

Then there is a distributed portion of the controller. It runs on every Plexxi Switch. It communicates with the central controller and takes higher-level configuration, policy and topology instructions, then passes them to the Switch software, which turns them into configuration for the hardware. Similarly, statistics and state information from the Switch software are passed to the distributed portion of the controller, then passed back to the central controller.
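And a companion sketch of that distributed portion, under the same caveat that all names are hypothetical:

```python
class SwitchAgent:
    """Hypothetical distributed portion of the controller, one per switch."""

    def __init__(self, switch_id, switch_software, central=None):
        self.switch_id = switch_id
        self.sw = switch_software   # the layer that programs the hardware
        self.central = central      # may be None: the switch must still run

    def receive(self, instructions):
        """Higher-level config/policy/topology from the central controller,
        handed to the Switch software to become hardware configuration."""
        self.sw.program(instructions)

    def report(self, stats):
        """Statistics and state info relayed back up -- but only when the
        central controller is actually reachable."""
        if self.central is not None:
            self.central.ingest_stats(self.switch_id, stats)
```

In this toy pairing, apply_policy() on the central controller from the previous sketch ends up calling each agent's receive(), and report() is the return path for statistics.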

Network Independence

But most importantly, Plexxi switches are fully capable of making forwarding decisions by themselves. They learn MAC addresses. They resolve ARP. They have L2 forwarding tables. They have L3 forwarding tables. And these tables are not managed by the central controller; they are managed by each switch. What the central controller provides is topology information on how to reach other switches in a Plexxi domain: out of the many paths through the fabric, which ones should be used and for what percentage of traffic, plus hundreds of backup paths through that fabric if a link or switch fails. And those failures are communicated between the switches themselves, without involving the controller (which is informed, but is not in the action path).
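One last sketch of that division of labor, again with made-up data structures: the switch owns its own learned tables and reacts to failures locally, while the controller only supplies the menu of fabric paths, their traffic shares, and precomputed backups.

```python
class FabricForwarder:
    """Hypothetical model of what a switch owns versus what it is given."""

    def __init__(self):
        self.l2_table = {}  # MAC -> port, learned locally, never by the controller
        self.paths = []     # (path_id, traffic share) supplied by the controller
        self.backups = {}   # path_id -> precomputed backup path_id

    def learn(self, mac, port):
        self.l2_table[mac] = port            # purely local decision

    def install_paths(self, weighted_paths, backups):
        """The controller's contribution: which fabric paths to use, at what
        share of traffic, and what to fall back to when something breaks."""
        self.paths = weighted_paths
        self.backups = backups

    def link_failed(self, path_id):
        """Failure handling stays switch-to-switch; the controller is told
        afterwards but sits outside the action path."""
        self.paths = [(self.backups.get(p, p), w) if p == path_id else (p, w)
                      for p, w in self.paths]

fwd = FabricForwarder()
fwd.install_paths([("p1", 0.5), ("p2", 0.5)], {"p1": "p9"})
fwd.link_failed("p1")   # traffic shifts locally to the precomputed backup "p9"
```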

Having this very clear line in the sand between what the switches are responsible for and what the controller is responsible for allows us to worry (just a little) less about 100% resiliency of the controller. Don't get me wrong, we want the controller there, but your network will operate as you expect if it's not. In his article, Ivan calls it "controller enhanced network infrastructure". That works.

[Today's Fun Fact: All polar bears are left-handed. Or left-clawed. I would assume that means they tend to be more creative than other bears too.]


More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
