Network Design in a Virtual World

Applications and operations must rule

We get quite caught up in high-level architectures at times, so it is good to read posts that focus on design and implementation and the practicality of taking those architectures to reality. Two of Ivan’s posts caught my eye this week. In the first, he discusses the difference in how application and network folks look at the deployment of tiered applications and what that means for the security between them. In the second, he asks a question our entire industry has underdelivered on for more than a decade: why can’t we have plug-n-play networking? They may appear to be wildly different topics, but in my mind they are closely related. Applications and operations must drive network design and implementation.

In creating a data center design, it is important to carefully plan how L2 and L3 are layered on top of the physical network. L2 and L3 provide different levels of separation and security domains, and understanding what can (or should) go where can significantly change how efficiently an application runs on the network. As Ivan points out, in many cases layers of an application require additional network services between them. The obvious ones are firewalls and load balancers; less obvious ones may include IPS/IDS systems, mirroring and compliance monitoring, and I am sure you can come up with a few more.

Traffic from applications (or between tiers of an application, the often-mentioned east-west traffic) needs to be passed through one or more of these network services (or none at all). With the distributed nature of the VM components of a tiered application, getting the traffic to these services is not always easy. There is a movement toward virtualizing these services and having them distributed and co-located with the actual VMs, but some services simply need to be applied in a more central place because of the context they need to do their work.
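
To make that concrete, here is a minimal sketch in Python of the kind of policy table that records which services must sit on the path between which tiers. The tier names and service chains are illustrative assumptions on my part, not anything from Ivan's posts:

    # Sketch: which network services east-west traffic between tiers must
    # traverse. A real design would derive this from security policy.
    SERVICE_CHAIN = {
        ("web", "app"): ["firewall", "load_balancer"],
        ("app", "db"):  ["firewall", "ids"],
    }

    def path_services(src_tier: str, dst_tier: str) -> list:
        """Return the services traffic must traverse, or none."""
        return SERVICE_CHAIN.get((src_tier, dst_tier), [])

    for pair in [("web", "app"), ("app", "db"), ("web", "db")]:
        print(pair, "->", path_services(*pair) or "no chain defined")

Even a toy table like this makes the placement question visible: every entry is a point in the network where traffic has to be steered to a service, centrally or distributed.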

Getting traffic to centralized or semi-distributed services can be accomplished in several ways. By far the easiest is to have the application send the traffic explicitly to the service. Many firewalls also act as the router for a segment, so telling the application where its default router is ensures its traffic always ends up on the firewall. Most load balancers terminate an HTTP or other connection-oriented session on the “outside” and attach it to a new session on the “inside”, so that traffic also naturally flows to the service.
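
The default-gateway trick is easy to illustrate. In the sketch below (all addresses made up), anything outside the local subnet is forced through the firewall simply because the firewall is the subnet's gateway:

    import ipaddress

    LOCAL_SUBNET = ipaddress.ip_network("10.1.1.0/24")
    DEFAULT_GW = ipaddress.ip_address("10.1.1.1")  # the firewall

    def next_hop(dst: str) -> str:
        """Same subnet: direct at L2. Anywhere else: via the firewall."""
        dst_ip = ipaddress.ip_address(dst)
        if dst_ip in LOCAL_SUBNET:
            return f"{dst_ip} (direct, same L2 segment)"
        return f"{DEFAULT_GW} (routed via firewall)"

    print(next_hop("10.1.1.42"))   # same tier: stays on the segment
    print(next_hop("10.1.2.42"))   # another tier: traverses the firewall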

Carefully crafting the boundaries between subnets, what belongs on each subnet, and which service is applied on and between subnets is not at all trivial. There are those who believe every server or even every VM should be in its own /31 subnet. And while just about every application (and I include storage in that, for the most part) only really needs L2 connectivity to its router, keeping traffic within a single subnet has benefits that may reduce the need for network services. Multicast-based applications within a subnet just work without complexity; IGMP snooping on the switch ports is about all you need. Worrying about intrusion becomes easier when VMs or portions of applications cannot be reached from outside the subnet. There is no one size fits all, no magic design or template.
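
A quick sketch of the two extremes using Python's ipaddress module (the address block is illustrative): one shared L2 subnet per tier, versus carving the same block into routed /31 point-to-point subnets, one per server:

    import ipaddress

    block = ipaddress.ip_network("10.2.0.0/24")

    # Extreme 1: the whole tier in one L2 subnet; multicast "just works"
    # with IGMP snooping, but everything can reach everything at L2.
    print(f"{block} holds {block.num_addresses - 2} hosts in one L2 domain")

    # Extreme 2: a /31 per server (RFC 3021); every host-to-host flow is
    # routed, so every flow can be inspected, at the cost of complexity.
    per_server = list(block.subnets(new_prefix=31))
    print(f"or {len(per_server)} /31 subnets, e.g. {per_server[0]} and {per_server[1]}")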

The question of plug-and-play networking should be an embarrassing one for all of us in the industry. We have not done anything to significantly improve the automatic provisioning of networks. Sure, we have glued together some DHCP, LLDP, CDP or 802.1X-based VLAN memberships (mostly pushed by VoIP phone enablement), but we honestly have not moved on significantly from those most basic steps. There is certainly progress in creating fabrics, and we are doing our part to significantly reduce the amount of provisioning touches. The bulk of provisioning and configuration, however, is on the access side of the network, where we plug in our servers, appliances, storage and everything else. And Ivan is totally correct: some of the fundamental tools exist to exchange useful information between newly connected devices, but we have not taken that to the next level and taken a good chunk of provisioning out of the hands of the operator (and their scripts).
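
For flavor, here is a hedged sketch of the kind of access-side step we have never standardized: deriving a port's VLAN from what the attached device announces over LLDP. The neighbor strings, role map and function are hypothetical, not a real switch API:

    # Hypothetical role-to-VLAN map; real deployments would carry this
    # in policy, not code.
    VLAN_BY_ROLE = {"voip-phone": 100, "hypervisor": 200, "storage-array": 300}

    def provision_port(port: str, lldp_system_desc: str):
        """Pick a VLAN from the LLDP system description, if we know the role."""
        for role, vlan in VLAN_BY_ROLE.items():
            if role in lldp_system_desc.lower():
                print(f"{port}: assigning VLAN {vlan} ({role})")
                return vlan
        print(f"{port}: unknown device, leaving for the operator")
        return None

    provision_port("eth1/1", "Acme VoIP-Phone fw 2.1")
    provision_port("eth1/2", "GenericCo Hypervisor Node")

This is roughly what the VoIP-driven voice-VLAN hacks already do; the embarrassment is that, a decade on, it has not been generalized beyond them.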

The reality of network design and implementation is in the details. An understanding of the applications that use the network, and of how they are tiered and separated into VMs, is critical to understanding how L2 and L3 are layered on top of it. Virtualization may make it easier to dynamically attach VMs to network segments (L2 or L3), but the resulting traffic flow still needs to make sense, especially if network services need to be applied.

When we talk to our customers, the discussion moves on from spine-and-leaf versus mesh fabric very quickly in most cases. The bulk of the discussion is focused on flexibility, automation, placement of boundaries and adjustment of topologies. The design process is driven by the application, which is why it is nice to see Ivan’s video article start with an application and derive a network design from it, even if the application was a generic one.

[Today's fun fact: Half of all Americans live within 50 miles of their birthplace. This is called propinquity.]

The post Network Design in a Virtual World: Applications and Operations must Rule appeared first on Plexxi.


More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
