
Facebook Exploit Is Not Unique

Facebook isn't unique in that it can be used to attack a third party; it's just a more effective platform for doing so

This week's "bad news" with respect to information security centers on Facebook and the exploitation of HTTP caches to effect a DDoS attack. Reported as a 'vulnerability', this exploit actually takes advantage of the way the application protocol is designed to work. In fact, the same author who reported the Facebook 'vulnerability' has also shown you can use Google to do the same thing. Just about any site that lets you submit content containing links and then retrieves those links for you (for caching purposes) could be used this way. It's not unique to Facebook or Google; they just have the perfect environment to make such an exploit highly effective.

The exploit works by using a site (in this case Facebook) to load content, taking advantage of the general principle of amplification to effectively DDoS a third-party site. This is a flood-style attack: it attempts to overwhelm a server by flooding it with requests that voraciously consume server-side resources and slow everyone down, to the point of making the site appear "down" to legitimate users.

The requests brokered by Facebook are themselves completely legitimate. The requests for an image (or PDF, or large video file) are well-formed, and nothing about any individual request could be detected as an attack. This is, in part, why the exploit works: the individual requests are wholly legitimate.

How it Works
The trigger for the "attack" is the caching service. Caches are generally excellent at, well, caching static objects with well-defined URIs. A cache doesn't have a problem finding /myimage.png. It's either there, or it's not and the cache has to go to origin to retrieve it. Where things get more difficult is when requests for content are dynamic; that is, they send parameters that the origin server interprets to determine which image to send, e.g. /myimage?id=30. This is much like an old developer trick to force the reload of dynamic content when browser or server caches indicate a match on the URL. By tacking on a random query parameter, you can "trick" the browser and the server into believing it's a brand new object, and it will go to origin to retrieve it - even though the query parameter is never used. That's where the exploit comes in.

HTTP allows a URI to carry any number of query parameters, which the application can use or ignore at its discretion. But when an HTTP server (or cache) checks whether it has already served that content, it compares the full URL, parameters included. The reference for a given object is its URL, and thus tacking on a new query parameter forces (or tricks, if you prefer) it into believing the object has never been served before and thus can't be retrieved from a cache.

Caches act on the same principles as an HTTP server because when you get down to brass tacks, a cache is a very specialized HTTP server, focused on mirroring content so it's closer to the user.

<img src="http://target.com/file?r=1">
<img src="http://target.com/file?r=2">
<img src="http://target.com/file?r=3">
...
<img src="http://target.com/file?r=1000">
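
To see why every one of those requests reaches the origin, here's a toy sketch of a cache keyed on the full URL - plain Python illustrating the principle, not any particular product's implementation:

# A toy cache keyed on the full URL - a generic illustration of the
# principle, not any particular product's behavior.
cache: dict[str, bytes] = {}

def fetch_from_origin(url: str) -> bytes:
    print(f"origin hit: {url}")    # stand-in for the real origin request
    return b"...image bytes..."

def fetch(url: str) -> bytes:
    if url in cache:               # exact match on the full URL, query included
        return cache[url]          # hit: served with no load on the origin
    body = fetch_from_origin(url)  # miss: the origin does the work
    cache[url] = body
    return body

# Each random "r" value is a distinct key, so every request goes to
# origin even though the underlying object never changes.
for r in (1, 2, 3, 1000):
    fetch(f"http://target.com/file?r={r}")

One object, repeated origin hits; scale up the parameter values and multiply by the number of simultaneous viewers and you have the amplification described above.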

Many, many, many, many (repeat as necessary) web applications are built using such models. Whether the content retrieved is text or images is irrelevant to the cache. The cache looks at the request and, if it can't match it somehow, it's going to go to origin.

That is exactly what's possible with Facebook Notes and Google. By exploiting this design principle, an attacker can craft a note containing many image objects, each retrieved via a unique dynamic query; if that note is viewed by enough users at the same time, the origin can become overwhelmed or its network oversubscribed.

This is what makes it an exploit, not a vulnerability. There's nothing wrong with the behavior of these caches - they are working exactly as they were designed to work with respect to HTTP. The problem is that when the protocol and caching behavior were defined, such abusive behavior was not considered.

In other words, this is a protocol exploit not specific to Facebook (or Google). In fact, similar exploits have been used to launch attacks in the past. Consider, for example, the noise raised around WordPress in March 2014, when it was being used to attack other sites by bypassing the cache and forcing a full reload from the origin server:

If you notice, all queries had a random value (like "?4137049=643182") that bypassed their cache and forced a full page reload every single time. It was killing their server pretty quickly.

But the most interesting part is that all the requests were coming from valid and legitimate WordPress sites. Yes, other WordPress sites were sending those random requests at a very large scale and bringing the site down.

The WordPress exploit took advantage of the way "pingbacks" work. Attackers were using legitimate WordPress sites to send pingbacks, amplifying an attack on a third-party site (also, ironically, a WordPress site).

It's not just Facebook, or Google - it's inherent in the way caching is designed to work.

Not Just HTTP
This isn't just an issue with HTTP. We can see similar behavior in a DNS exploit that renders DNS caching ineffective as protection against certain attack types. In the DNS case, querying a cache with a random host name results in a query to the authoritative (origin) DNS service. If you send enough random host names at the cache, eventually the DNS service is going to feel the impact and possibly choke.
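
The principle is the same as in the HTTP case. A toy sketch (illustrative only, not real resolver code) of why random names render the cache useless:

import functools, random, string

authoritative_queries = 0

# lru_cache stands in for the resolver's cache, keyed by hostname.
@functools.lru_cache(maxsize=65536)
def resolve(hostname: str) -> str:
    global authoritative_queries
    authoritative_queries += 1     # every cache miss lands on the authority
    return "192.0.2.1"             # placeholder answer (TEST-NET-1 range)

# Random labels never repeat, so the cache absorbs none of the load.
for _ in range(1000):
    label = "".join(random.choices(string.ascii_lowercase, k=12))
    resolve(f"{label}.example.com")

print(authoritative_queries)       # 1000 - one authoritative query per request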

In general, these types of exploits are based on protocol and well-defined system behavior. A cache is, by design, required to either return a matching object if found or go to the origin server if it is not. In both the HTTP and DNS case, the caching services are acting properly and as one would expect.

The problem is that this proper behavior can be exploited to effect a DDoS attack - against third parties in the case of Facebook/Google, and against the domain owner in the case of DNS.

These are not vulnerabilities; they are protocol exploits. The same "vulnerability" is probably present in most architectures that include caching. The difference is that Facebook's ginormous base of users allows what is expected behavior to quickly turn into what looks like an attack.

Mitigating
The general consensus right now is that the best way to mitigate this potential "attack" is to identify and either rate limit or disallow requests coming from Facebook's crawlers by IP address. In essence, the suggestion is to blacklist Facebook (and perhaps Google) to keep it from potentially overwhelming your site.

The author noted in his post regarding this exploit that:

Facebook crawler shows itself as facebookexternalhit. Right now it seems there is no other choice than to block it in order to avoid this nuisance.

The post was later updated to note that blocking by agent may not be enough, hence the consensus on IP-based blacklisting.
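
In rough code terms, the consensus mitigation looks something like the sketch below. The agent string comes from the quoted post; the IP list is deliberately left empty rather than guessing at the crawler's actual address ranges:

import ipaddress

BLOCKED_AGENTS = ("facebookexternalhit",)  # the crawler named in the post
BLOCKED_NETWORKS: tuple[str, ...] = ()     # fill in crawler IP ranges here

def should_block(user_agent: str, client_ip: str) -> bool:
    """Drop requests from blacklisted crawlers by agent string or IP."""
    if any(agent in user_agent.lower() for agent in BLOCKED_AGENTS):
        return True                        # agent strings are easily spoofed...
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(net)
               for net in BLOCKED_NETWORKS)  # ...hence the IP-based fallback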

The problem is that attackers could simply find another site with a large user base (there are quite a few of them out there with enough users to support a successful attack) and find the right mix of queries to bypass the cache (because caches are a pretty standard part of web-scale infrastructure) and voila! Instant attack.

Blocking Facebook isn't going to stop other potential attacks, and it might seriously impede revenue-generating strategies that rely on Facebook as a channel. Rate limiting based on inbound query volume for specific content will help mitigate the impact (and ensure legitimate requests continue to be served), but it requires a service to intermediate and monitor inbound requests and, upon seeing behavior indicative of a potential attack, intercede or apply the appropriate rate-limiting policy. Such a policy could go further and blacklist IP addresses showing sudden increases in requests, or simply block requests for the URI in question - returning some other content instead.
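
In miniature, such a rate-limiting policy might look like a per-URI token bucket. The thresholds and names below are illustrative, and in practice this logic would live in the intermediating proxy or ADC:

import time
from collections import defaultdict

RATE = 50.0    # sustained requests per second allowed per URI (illustrative)
BURST = 100.0  # short-term burst allowance (illustrative)

# Per-URI buckets: (tokens remaining, time of last refill)
_buckets: dict[str, tuple[float, float]] = defaultdict(
    lambda: (BURST, time.monotonic()))

def allow(uri: str) -> bool:
    """Token-bucket check; False means limit, delay, or serve alternate content."""
    tokens, last = _buckets[uri]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last request
    if tokens < 1.0:
        _buckets[uri] = (tokens, now)
        return False
    _buckets[uri] = (tokens - 1.0, now)
    return True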

Another option would be to use a caching solution capable of managing dynamic content. For example, F5 Dynamic Caching includes the ability to designate parameters as either indicative of new content or not. That is, the caching service can be configured to ignore some (or all) parameters and serve content out of cache instead of hammering on the origin server.

Let's say the URI for an image is /directory/images/dog.gif?ver=1;sz=728X90, where the valid query parameters are "ver" (version) and "sz" (size). A policy can be configured to recognize "ver" as indicative of different content, while all other query parameters indicate the same content, which can therefore be served out of cache. With this kind of policy, an attacker could send any combination of the following and the same image would be served from cache, even though "sz" differs and random additional query parameters are present:

/directory/images/dog.gif?ver=1;sz=728X90; id=1234
/directory/images/dog.gif?ver=1;sz=728X900; id=123456
/directory/images/dog.gif?ver=1;sz=728X90; cid=1234 
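
As a rough sketch of how such a policy works - a generic illustration of parameter-aware cache keys, not F5's actual implementation - all three of those URIs can be collapsed to a single key:

from urllib.parse import urlsplit

SIGNIFICANT_PARAMS = {"ver"}  # only "ver" indicates genuinely new content

def cache_key(uri: str) -> str:
    """Reduce a URI to a cache key, ignoring non-significant parameters."""
    parts = urlsplit(uri)
    # The example URIs use ';' as the parameter separator.
    params = [p.strip() for p in parts.query.split(";") if p.strip()]
    kept = sorted(p for p in params
                  if p.split("=", 1)[0] in SIGNIFICANT_PARAMS)
    return parts.path + ("?" + ";".join(kept) if kept else "")

# All three attack variations map to the same cached object:
for uri in ("/directory/images/dog.gif?ver=1;sz=728X90; id=1234",
            "/directory/images/dog.gif?ver=1;sz=728X900; id=123456",
            "/directory/images/dog.gif?ver=1;sz=728X90; cid=1234"):
    print(cache_key(uri))  # -> /directory/images/dog.gif?ver=1 every time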

By placing an application-fluent cache service in front of your origin servers, you're able to handle the load when Facebook (or Google) comes knocking.

Action Items
There have been no reports of an attack stemming from this exploitable condition in Facebook Notes or Google, so blacklisting crawlers from either Facebook or Google seems premature. Given that this condition is based on protocol behavior and system design, and is not a vulnerability unique to Facebook (or Google), it would be a good idea to have a plan in place to address such an attack should it actually occur - from there or from some other site.

You should review your own architecture and evaluate its ability to withstand a sudden influx of dynamic requests for content like this, and put into place an operational plan for dealing with it should such an event occur.

For more information on protecting against all types of DDoS attacks, check out a new infographic we’ve put together here.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
