Control the Flow for Security

Why is TCP/IP great for networking but problematic for security?

TCP/IP connectivity starts with a DNS look-up so that Endpoint A, seeking to establish a connection to Endpoint B, can determine B's IP address. Not knowing when a connection request may be coming, Endpoint B has to continually listen for the arrival of such requests. Not even knowing who the requester is, Endpoint B must respond to the connection request to establish a TCP connection. Only then can Endpoint B seek more information from Endpoint A to try to establish its identity, authorization, and trust.
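To make that sequence concrete, here is a minimal sketch in Python; the hostname and port are hypothetical, and the two halves would run on different machines:

    import socket

    # --- Endpoint B (server): advertised via DNS, always listening ---
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 8443))
    server.listen()
    conn, peer = server.accept()   # B accepts before knowing anything
    print("Connection established with", peer)  # identity still unknown

    # --- Endpoint A (client): DNS look-up, then TCP connect ---
    ip = socket.gethostbyname("app.example.com")   # hypothetical name
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect((ip, 8443))     # three-way handshake completes here;
                                   # any identity check happens only after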

This basic architecture has fueled hugely scalable TCP/IP networking. The problem is, it requires:

  • Servers to be heavily advertised (DNS)
  • Continual network connectivity
  • Servers to expose themselves to unknown users and devices by responding to TCP requests

If you want to be as susceptible as possible to network-based attacks, and to be fooled by anyone who has stolen credentials from an authorized user, this is the perfect formula.

Server-enforced authorization leaves servers vulnerable
To defend themselves, enterprises have tried to limit authorization, usually by mapping employees and other users into Active Directory Groups that define the applications they are allowed to access. The problems, from the standpoint of protection against network-based attacks, are:

  • Stolen credentials can still fool the system if based simply on username/password
  • Servers must engage with the prospective user - establish a TCP connection and then probably a TLS connection - before enough information can be obtained to determine whether the user is authorized or not.

A lot of bad things can happen in that window, including SQL injection, exploitation of OS or server vulnerabilities, and connection hijacking. It leads to a lot of closed-barn-door situations where the horse has already escaped.
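A sketch of the server side (with hypothetical certificate paths) shows just how much work a server does for a completely unknown peer before it can even ask for credentials:

    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")   # hypothetical paths

    raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    raw.bind(("0.0.0.0", 8443))
    raw.listen()

    conn, peer = raw.accept()                        # 1. TCP handshake: anyone
    tls = ctx.wrap_socket(conn, server_side=True)    # 2. TLS handshake: CPU spent
    credentials = tls.recv(1024)                     # 3. only now: who are you?
    # Everything before this last line is pre-authentication attack surface:
    # malformed input, handshake exploits, and resource exhaustion all land
    # before the server has any idea whether the peer is authorized.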

Speed bumps like firewalls, VPNs, and NAC don't slow the attacks
Because of that, over the years, enterprise IT professionals have tried to put controls in place to create "speed bumps" in the network to slow down or stop attackers. The most common of these "speed bumps" are firewalls, VPNs, ACLs, and VLANs.

Network Address Translation (NAT) has been used to create enterprise networks that operate solely in their own private address space, which also enables the deployment of internal DNS servers for internal applications.

Commonly, these controls are deployed at the traditional perimeter: the LAN/WAN boundary. This means they are mostly about controlling access for remote users. Even in that role, deployments have been problematic:

  • Tunneled VPN access provides broad LAN connectivity. Creating and maintaining ACLs to limit such access is complex and error-prone, and still results in a large attack surface, as the external user must be connected to basic corporate network services (such as DNS, DHCP, software update, and system monitoring).
  • Through phishing and other techniques, attackers have now compromised systems within the internal corporate network, effectively parachuting "behind" the perimeter defenses, rendering them useless.

An attempt to address these realities has been made via Network Access Control (NAC).

When fully deployed, NAC moves the authentication process into the network as a way to prevent unauthorized users from ever seeing or connecting to servers they are not authorized to access. NAC is a very promising tool, but still suffers from some unfortunate realities.

NAC can be complex to deploy. For that reason, the granularity of a NAC decision is often just to put an authorized user on one of three different networks (VLANs) - the internal corporate network, a guest network, or a quarantine network (used to update software) - as the sketch below illustrates.
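In practice, the NAC decision often reduces to a coarse three-way branch like this hypothetical sketch (not any particular vendor's API):

    from dataclasses import dataclass

    @dataclass
    class Endpoint:
        is_employee: bool
        posture_ok: bool   # e.g., patch level and antivirus up to date

    def assign_vlan(ep: Endpoint) -> str:
        """Typical coarse NAC outcome: one of three VLANs, nothing finer."""
        if not ep.posture_ok:
            return "VLAN_QUARANTINE"   # parked here to update software
        return "VLAN_CORP" if ep.is_employee else "VLAN_GUEST"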

To enforce greater granularity requires configuring and maintaining a complex set of Access Control Lists (ACLs), which are basically a stack of IP address/port whitelist and blacklist rules. You could, for instance, limit user A at IP_A to connecting only to servers B, C, and D at IP_B, IP_C, and IP_D respectively. But, as you can probably imagine, trying to configure this list for all users, all servers, and all circumstances is untenable.
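A toy version of such a whitelist (all addresses hypothetical) shows why: every user/server/port combination must be enumerated by hand.

    # Default-deny whitelist: a flow passes only if a rule matches exactly.
    ACL_WHITELIST = {
        # (source IP,   destination IP,  destination port)
        ("10.1.1.25",  "10.2.0.11",     443),    # user A -> server B
        ("10.1.1.25",  "10.2.0.12",     443),    # user A -> server C
        ("10.1.1.25",  "10.2.0.13",     1433),   # user A -> server D
        # ...and so on, for every user, every server, every circumstance
    }

    def allowed(src_ip: str, dst_ip: str, dst_port: int) -> bool:
        return (src_ip, dst_ip, dst_port) in ACL_WHITELIST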

The expanding enterprise "perimeter" promises more complexity, less security
There is an even bigger issue today that affects the viability of all these network "speed bump" approaches to security. Where do you put the speed bumps? The assumption with all of these controls is that the enterprise owns and controls the network path to the servers they want to protect. That was a great 1992 assumption. Maybe even 2002. In those days, pretty much all enterprise applications were run from within the enterprise network, accessed by users who were either local or backhauled over the corporate WAN.

Is that true now?

Many apps have moved to SaaS or to Cloud Service Providers. Many companies are "untethering" their remote sites and de-commissioning their traditional MPLS or site-to-site VPNs. There is also a growing trend towards wireless networks bought as-a-service and even Layer 2 switches in the cloud. As these trends gain momentum, just where would enterprises "plug in" these network-based "speed bumps"?

Software Defined Perimeters (SDP): secure, simple
The technology called Software Defined Perimeters (SDP) has been created to address all of the issues cited above. SDP does not attempt to regulate traffic at the network level. It operates at the TCP level, which means it can be deployed anywhere and is transparent to network-level issues such as addressing, ownership, changing topologies, etc. Since data can't flow unless a TCP connection is established, SDP enables an enterprise to completely control who gets to connect to what over their entire extended enterprise network.

In SDP, applications, services, and servers are isolated from users by an SDP Gateway, which is a dynamically configured TCP gateway. The Gateway rejects all traffic sent to protected servers unless users and endpoints are "pre-approved" by a third-party arbitrator. This third-party role is played by the SDP Controller. Endpoints desiring connectivity to a destination protected by an SDP Gateway don't send a connection request to that destination at all. Instead, they "apply" for connectivity to the SDP Controller, which determines whether or not they are trusted.
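The division of labor can be sketched as follows; the class and method names are illustrative, not any particular product's API:

    class SDPGateway:
        """Drops everything by default; only the Controller opens pinholes."""

        def __init__(self):
            self.approved = set()   # (client_ip, dest_ip, dest_port) tuples

        def authorize(self, client_ip, dest_ip, dest_port):
            # Called by the SDP Controller once trust is established.
            self.approved.add((client_ip, dest_ip, dest_port))

        def on_connection_attempt(self, client_ip, dest_ip, dest_port):
            # Unapproved SYNs are silently dropped, so protected servers
            # are invisible to everyone who hasn't been vetted.
            if (client_ip, dest_ip, dest_port) in self.approved:
                return "forward"
            return "drop"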

Trust verification involves device authentication, user authentication, and a set of context-based information that will continue to expand over time - location, BYOD vs. managed device, software posture, software integrity, and more. The goal is to evaluate overall trust as much as possible before allowing connectivity. If satisfied, the SDP Controller dynamically configures the SDP Gateways to allow connectivity; a minimal sketch of such a trust check follows the list below. The systems isolated and protected by the SDP Gateways are then never exposed to:

  • Attackers who have stolen credentials
  • Unauthorized systems that may intend to exploit server or application vulnerabilities
  • Successful spear phishers trying to move laterally in a persistent search for access to sensitive data
  • Bad guys who, failing everything else, just want to deny service to others via bandwidth or resource starvation attacks
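Here is the promised minimal sketch of a Controller-side trust check; the factors and policy are hypothetical, but the principle - evaluate everything before granting any connectivity - is the point:

    from dataclasses import dataclass

    ALLOWED_REGIONS = {"US", "DE"}   # hypothetical location policy

    @dataclass
    class AccessRequest:
        device_cert_valid: bool   # device authentication
        user_mfa_passed: bool     # user authentication, beyond a password
        region: str               # context: location
        posture_ok: bool          # context: software posture and integrity

    def evaluate_trust(req: AccessRequest) -> bool:
        """Every factor must pass before any connectivity is configured."""
        return (req.device_cert_valid
                and req.user_mfa_passed
                and req.region in ALLOWED_REGIONS
                and req.posture_ok)

    # Stolen credentials alone are not enough: the device check still fails.
    print(evaluate_trust(AccessRequest(False, True, "US", True)))   # False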

SDP Controllers and Gateways are software entities and can be deployed with no topological restriction. Thus SDP provides a powerful tool for enterprises to completely control the flow, no matter where the application is (internal or cloud), who the user is (employee or non-employee), or what the device is (managed or BYOD).

More Stories By Mark Hoover

Mark Hoover is CEO of Vidder Security. He has been involved in the technology and market development of security and networking technologies over a period of almost 30 years, including Firewalls, VPNs, IP routing, ATM, Gigabit Ethernet Switching, and load balancers.

Most recently, he has been a Venture Partner at Woodside Fund for two years. Prior to that he was the president of Acuitive, a strategic marketing consulting firm that helped define product and market strategies for start-ups, including Brocade, Alteon Websystems, Netscreen, Maverick Semiconductor, Redline Networks, and many others. He started his career at AT&T Bell Labs and moved to SynOptics/Bay Networks before founding Acuitive.
