
Amazon Launches High Performance Cloud – Hackers in Love

New service enables ultrafast number crunching (and password cracking)

Yesterday, calling it a "nuclear-powered bulldozer," Amazon announced and blogged about its newest cloud infrastructure service, the "Cluster GPU Instance," which delivers supercomputer calculation power for as little as $2.10 per hour.  The new instance type employs the same NVIDIA Tesla GPU used in three of the five fastest supercomputers.  Each Tesla is rated at 515 gigaflops (515 billion double-precision floating-point calculations per second), and each Amazon instance contains two of them, giving each instance more than a teraflop of processing power.  Amazon further allows instances to be clustered "up through and above 128 nodes" for even more power.

Theoretically, a 128-node cluster of the new Amazon EC2 instances would qualify as the 50th fastest computer in the world.  The new instance type enables a wide variety of calculation-intensive workloads for applications that include energy exploration, weather prediction, graphics rendering, and video transcoding.  And, oh, it is also good for enabling encryption code breaking and identity theft.

Amazon CTO Werner Vogels spoke at both 1st and 2nd Cloud Expos

I doubt Amazon's founder Jeff Bezos or its web services evangelist Jeff Barr are feeling like Oppenheimer did when he witnessed the first test of his creation, the atom bomb, and famously quoted Hindu scripture about becoming "the destroyer of worlds", but maybe they should feel that way.  This is a big moment for good and evil.

First, the Good

Not long ago, computing power of this magnitude was a precious resource, available only to the large, well-funded companies, government agencies, and academic institutions that could afford to buy and manage expensive supercomputers from companies like Cray and IBM.

Then, things started to change with the growth of computer gaming, scientific visualization, computer animation, and media streaming, which drove the development and volume production of processing chips called "graphics processing units" (GPUs) by companies like NVIDIA and ATI.

In their namesake application of graphics processing, GPUs perform the complex "floating point" decimal arithmetic needed to render and manipulate highly detailed graphics and photorealistic computer-generated imagery, or "CGI".  (Conventional CPU chips, like the x86, ARM, and others, contain only a handful of floating-point units apiece, while a GPU packs hundreds of parallel ones, making CPUs ill-suited for efficient graphics processing.)

But there are many other, non-graphical applications that also require floating-point calculations, and it was not long before GPUs were being used as mathematical co-processors in high-end scientific workstations and aggregated in servers.  Although much cheaper than first-generation supercomputers, these systems are still quite expensive, with workstations costing $10K and up and servers going for multiples of that.

Yesterday that all changed.  Now the tiniest company, even the lone quant, can have the same computational power, for as little as $2.10 per hour, with no up-front investment and access from anywhere in the world.

Amazon Cluster GPU Instances can lower the cost and accelerate the progress of fighting famine and disease, building safer, more fuel efficient vehicles and aircraft, finding and exploiting new sources of energy, and, of course, producing breathtaking visual entertainment.  We have not yet begun to imagine the new businesses and research projects this kind of cloud computing will make possible.

In their blog entry entitled "A Couple More Nails in the Coffin of the Private Compute Cluster," large-scale computing specialists Cycle Computing provide a very detailed picture of how they have used this technology to build a value-added computation service in the Amazon public cloud to support these kinds of applications.

And, that is definitely all good.  But, every innovation has a dark side and this is no exception.

Then, the Evil

Just as Amazon was announcing the general availability of the EC2 Cluster GPU Instance, German programmer Thomas Roth, writing on his Stacksmashing.net blog, was showing how he used it to create a password "hash cracker" that could crack a six-character password in 49 minutes ($1.71 paid to Amazon).

The password he cracked was hashed with SHA1, a scheme designed by the National Security Agency and published by NIST as a Federal Information Processing Standard.  In 2005, SHA1 was found to contain a mathematical weakness that could enable security vulnerabilities and was deprecated accordingly, but not before it came to be employed in a number of widely used security applications and protocols.  His cracker could also be used against MD4, MD5, and NTLM hashes.  Like SHA1, these algorithms have been deprecated or replaced due to similar weaknesses, but, also like SHA1, only after becoming widely deployed.
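To make concrete what a "hash cracker" does, here is a minimal, CPU-bound sketch of the brute-force search that Roth's GPU code parallelizes. The alphabet, candidate lengths, and target below are illustrative assumptions, not his actual workload; a real GPU cracker tries billions of candidates per second rather than looping in Python:

```python
import hashlib
from itertools import product

# Illustrative assumption: lowercase-only alphabet, short candidates.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def crack_sha1(target_hex, max_len=4):
    """Exhaustively hash every candidate up to max_len characters
    and compare against the target digest."""
    for length in range(1, max_len + 1):
        for candidate in product(ALPHABET, repeat=length):
            word = "".join(candidate)
            if hashlib.sha1(word.encode()).hexdigest() == target_hex:
                return word
    return None  # not found within the searched keyspace

# Demo: hash a known word, then "crack" it by brute force.
target = hashlib.sha1(b"cab").hexdigest()
print(crack_sha1(target))  # recovers "cab"
```

The search is embarrassingly parallel, which is exactly why it maps so well onto GPU hardware: each candidate hash is independent of every other.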

So, the password Roth cracked so quickly was short and hashed with a deprecated algorithm, correctly suggesting that it would have been much more difficult to use the Amazon service to crack a longer, better-protected password.  But, remember, he only used one cluster node and he was just fooling around.  He seems unfazed by how hard it might be to take it further.
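Some back-of-the-envelope arithmetic shows how that difficulty grows with length. The figures below are assumptions: a 62-character alphabet (a-z, A-Z, 0-9) and the hashing rate implied by a 49-minute, six-character crack on one node:

```python
# Assumed baseline: one node searched a 6-character keyspace in 49 minutes.
ALPHABET_SIZE = 62
rate = ALPHABET_SIZE ** 6 / 49  # implied candidate hashes per minute

# Each extra character multiplies the keyspace, and the time, by 62.
for length in (6, 8, 10):
    minutes = ALPHABET_SIZE ** length / rate
    print(f"{length} characters: ~{minutes / (60 * 24):,.2f} days")
```

Under these assumptions, eight characters takes on the order of months and ten characters takes centuries on a single node, which is precisely the gap that cheap, elastic clustering starts to close.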

"This just shows one more time that SHA1 for password hashing is deprecated.  You really don't want to use it anymore!  Instead, use something like scrypt or PBKDF2!  Just imagine a whole cluster of this machines (Which is now easy to do for anybody thanks to Amazon) cracking passwords for you, pretty comfortable.  Large scaling password cracking for everybody! [...]

"If I find the time, I'll write a tool which uses the AWS-API to launch on-demand password-cracking instances with a preconfigured AMI. Stay tuned either via RSS or via Twitter."

I am not sure what color this guy's hat is, but his sang-froid is unsettling.  And, Amazon says that users can expect these clusters to scale at about 90% efficiency, and that developers can expect a variety of programming aids soon that will simplify the process of exploiting and scaling the GPU clusters.  So, Mr. Roth is not vamping.

Don't Be Shiva

Again, every innovation has its dark potential.  In this case, the innovation is not technical; GPUs have been used for nefarious purposes of the above kind for years.  This is an economic innovation that takes considerable cost and time out of a kind of criminality that can be extremely rewarding: identity and data theft.  It cannot be stopped any more than digital piracy or other forms of highly leveraged electronic misbehavior can; it can only be slowed down, and only if Amazon and others like it drive against it.  Will they?

I'm not sure what it will take to make sure that Amazon doesn't let its new yellowcake get into the wrong hands, but I suspect it will more likely be a result of regulation or litigation after a disaster than of altruistic foresight before trouble strikes.  As I mentioned in my article "SMB Cloud is a Hacker's Paradise" a few months back, large cloud service providers, including Amazon, have so far not demonstrated striking speed and initiative in getting and staying ahead of the bad guys, whose resolve for mischief and mayhem is boundless.

Optimism is no defense.  One of the main reasons cyber-crime is so out of control now is that the World Wide Web was built on a foundation of magical thinking in the form of the fraternal optimism of academics.  As bad as security risks have been in the Web 1.0 era, despite the definite improvements in prevention and hygiene that have been made, they still may pale by comparison with what could be coming.  Cloud computing has multiplied many good things, like cost savings and business agility, by 1-2 orders of magnitude.  It can do the same for many bad things, if we let it.  Let's not let it.

More Stories By Tim Negris

Tim Negris is SVP, Marketing & Sales at Yottamine Analytics, a pioneering Big Data machine learning software company. He occasionally authors software industry news analysis and insights on Ulitzer.com and is a 25-year technology industry veteran with expertise in software development, database, networking, social media, cloud computing, mobile apps, analytics, and other enabling technologies.

He is recognized for his ability to rapidly translate complex technical information and concepts into compelling, actionable knowledge. He is also widely credited with coining the term and co-developing the concept of the “Thin Client” computing model while working for Larry Ellison in the early days of Oracle.

Tim has also held a variety of executive and consulting roles in numerous start-ups and several established companies, including Sybase, Oracle, HP, Dell, and IBM. He is a frequent contributor to a number of publications and sites, focusing on technologies and their applications, and has written a number of advanced software applications for social media, video streaming, and music education.
