Survival Is Not Mandatory | @DevOpsSummit #DevOps #Microservices

Critical reading for Rugged DevOps or DevOpsSec

Deming, the patron saint of DevOps, once advised, "It is not necessary to change. Survival is not mandatory."

To survive, application development teams are constantly pressured to deliver software even faster. But fast is not enough. The best organizations realize that security, quality and integrity at velocity are mandatory for survival. Hence, DevOpsSec.

My aim here is to leave you uncomfortably amazed. While the pace of development has changed significantly, our processes to secure our applications have not changed enough. It is as if a wildfire has raged uncontained for years, but we have failed to take notice.

Maybe that sounds a bit crazy, but once I share a little background on what is happening, you'll understand why innovation in this arena is critical for survival (when it comes to your applications).  If you are aiming to ramp up your own Rugged DevOps or DevOpsSec practices, this is critical reading material.

Skyrocketing Usage, Plummeting Visibility
Open source component use in development is skyrocketing, and for good reason. Over 17 billion open source and third-party components - across the major development languages - were downloaded last year, helping development teams accelerate release timelines and deliver more innovative solutions to market. To grasp the impact of this download volume, consider that an estimated 11 million developers are responsible for those billions of downloads.

While the magnitude of usage is amazing, it has obscured the vast majority of the risks. Of the billions of downloads recorded last year, 1 in 16 components downloaded had known security vulnerabilities.

While we have seen 30x growth in download requests over the past seven years, we have also witnessed huge growth in the number of organizations using repository managers to improve the speed and efficiency of their downloads.

In the past 18 months since I joined Sonatype, we have seen active instances of repository managers like Nexus, Artifactory, and Archiva grow from 40,000 to over 70,000 installations, with users in the millions - also for good reason.  Development teams want to ensure faster, more reliable builds.  They also need a private and safe place to house and share their own proprietary components as well as assembled applications, images and other binary outputs.  The repository manager has established itself as a parts warehouse for software development.

All Is Not Moonlight and Roses
One in 16 may not sound like a lot until you recognize that many organizations are downloading over 250,000 components every year... some of the largest organizations consume millions of them. If you have not calculated this in your head yet: 250,000 / 16 = 15,625 known-vulnerable components downloaded per year.

[Figure: Quality Control Numbers]
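To run the same math against your own numbers, here is a quick back-of-the-envelope sketch in Python; the annual download volumes below are illustrative, based on the figures above:

```python
# Back-of-the-envelope: expected known-vulnerable downloads per year
# if roughly 1 in 16 components downloaded has a known vulnerability.
VULNERABLE_RATIO = 1 / 16

# Illustrative annual download volumes, per the figures above.
for downloads_per_year in (250_000, 1_000_000, 5_000_000):
    expected = downloads_per_year * VULNERABLE_RATIO
    print(f"{downloads_per_year:>9,} downloads/year -> "
          f"~{expected:>8,.0f} with known vulnerabilities")

#   250,000 downloads/year -> ~  15,625 with known vulnerabilities
# 1,000,000 downloads/year -> ~  62,500 with known vulnerabilities
# 5,000,000 downloads/year -> ~ 312,500 with known vulnerabilities
```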

Couple this fact with the remarks in the 2015 Verizon Data Breach Investigations Report stating that applications were the attack vector most often exploited by hackers. As a developer community, we are electively sourcing known-vulnerable components for use in our applications, and those applications are now more vulnerable to attack.

Your Repository Manager Should Serve and Protect
If bad components are getting in, why not just stop them at the front door?

Easier said than done. Current approaches to preventing such behavior are ineffective. For some, "golden repositories" are used to house approved components - but components approved once are rarely vetted again for newly discovered vulnerabilities. For other organizations, OSS Review Boards mandate reviews and approval for all new components - but these boards are too poorly staffed to match the volume and velocity of consumption, leading to workarounds by development teams. And while developers themselves would much prefer to use the best quality, highest integrity components when designing an application, they are never allocated sufficient time to investigate the vulnerability status of every component required for their latest build.
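Part of what makes manual review so impractical is that the lookup itself is easy to automate. As a rough illustration (a minimal sketch, not any product's implementation), here is how you might ask Sonatype's public OSS Index service whether a component has known vulnerabilities; the endpoint behavior and the example coordinate are assumptions to adapt to your own stack:

```python
# Minimal sketch: automated lookup of known vulnerabilities for one
# component. Assumes Sonatype's public OSS Index REST API and the
# `requests` library; adapt the package-URL coordinate to your ecosystem.
import requests

OSS_INDEX_URL = "https://ossindex.sonatype.org/api/v3/component-report"

def known_vulnerabilities(purl: str) -> list[dict]:
    """Return the known-vulnerability records reported for one package URL."""
    resp = requests.post(OSS_INDEX_URL, json={"coordinates": [purl]})
    resp.raise_for_status()
    return resp.json()[0].get("vulnerabilities", [])  # one report per coordinate

# Hypothetical example: an old Apache Commons Collections release.
for v in known_vulnerabilities(
        "pkg:maven/commons-collections/commons-collections@3.2.1"):
    print(v.get("id"), "-", v.get("title"))
```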

Sparking Innovation
When current approaches fail, inspiration often sparks innovation.

Enter a new invention: a repository firewall. Think of it as a repository manager with a guard at the front door. Every component downloaded through a proxy repository is automatically evaluated against the parameters your development, governance, and security teams establish. Does it carry an AGPL license, include a known security vulnerability, or is it severely outdated? The repository firewall allows "good" components to be downloaded while it blocks and quarantines "bad" ones. Using a repository firewall can keep your repository manager and development lifecycle safe and secure - instantly and automatically.
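To make that front-door check concrete, here is a minimal, self-contained sketch of the kind of policy evaluation just described. The component fields and thresholds are illustrative assumptions for this example, not Nexus Firewall's actual rule engine:

```python
# Illustrative policy evaluation at the repository "front door".
# Fields and thresholds are assumptions for this sketch, not the
# actual Nexus Firewall rule engine.
from dataclasses import dataclass

BANNED_LICENSES = {"AGPL-3.0"}   # licenses governance has disallowed
MAX_CVSS = 7.0                   # quarantine at or above this severity
MAX_AGE_YEARS = 5                # flag severely outdated releases

@dataclass
class Component:
    name: str
    version: str
    license: str
    highest_cvss: float          # worst known vulnerability score, 0 if none
    age_years: float             # years since this release shipped

def evaluate(c: Component) -> tuple[str, list[str]]:
    """Return "allow" or "quarantine" plus the reasons for the verdict."""
    reasons = []
    if c.license in BANNED_LICENSES:
        reasons.append(f"banned license: {c.license}")
    if c.highest_cvss >= MAX_CVSS:
        reasons.append(f"known vulnerability (CVSS {c.highest_cvss})")
    if c.age_years > MAX_AGE_YEARS:
        reasons.append(f"outdated: released {c.age_years:.0f} years ago")
    return ("quarantine" if reasons else "allow"), reasons

verdict, reasons = evaluate(Component(
    "commons-collections", "3.2.1", "Apache-2.0",
    highest_cvss=9.8, age_years=8.0))
print(verdict, reasons)
# quarantine ['known vulnerability (CVSS 9.8)', 'outdated: released 8 years ago']
```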

The first repository firewall of its kind is called Nexus Firewall.  It couples software supply chain intelligence about component quality, security, and risks with automatic evaluations against personalized policies for approving or rejecting new downloads.  Evaluations can also be applied later in the development lifecycle where staging repositories are in use.

[Figure: Nexus Firewall]

Imagine the result: 16 out of 16 downloads, or 250,000 out of 250,000 downloads, are now of the best quality. Everything flowing into your application development lifecycle is of the highest quality. You are instantly compliant with established policies. Your applications are less vulnerable to attack. You are automatically reducing risk.

Next up in this series, I will share insights about Repository Health Check reports (free to use). Used by over 15,000 organizations, they can offer the first clue as to whether your team could benefit from a repository firewall.

NOTE:  While this story covers only one aspect of Rugged DevOps, there is much more to be learned about the subject.  One of the best papers I have read recently comes from Amy DeMartine and Kurt Bittner at Forrester Research, entitled "The Seven Habits Of Rugged DevOps: Strengthen Cybersecurity Using DevOps Culture, Organization, Process, And Automation" (registration is required to download, but it's worth it).

More Stories By Derek Weeks

In 2015, Derek Weeks led the largest and most comprehensive analysis of software supply chain practices to date across 160,000 development organizations. He is a huge advocate of applying proven supply chain management principles into DevOps practices to improve efficiencies, reduce costs, and sustain long-lasting competitive advantages.

As a 20+ year veteran of the software industry, he has advised leading businesses on IT performance improvement practices covering continuous delivery, business process management, systems and network operations, service management, capacity planning, and storage management. As the VP and DevOps Advocate for Sonatype, he is passionate about changing the way people think about software supply chains and improving public safety through improved software integrity. Follow him at @weekstweets, find him at www.linkedin.com/in/derekeweeks, and read him at http://blog.sonatype.com/author/weeks/.
