By Lori MacVittie
April 20, 2009 06:40 AM EDT
What is this application delivery thing that everyone keeps telling me I need? Isn’t that just the latest marketing term for load balancing?
A recently released Forrester report concludes that “firms must develop an integrated strategy for application delivery.” We don’t disagree with that, or with the Gartner report claiming that “Load Balancing is Dead, Time to Focus on Application Delivery.” Application delivery is the next step in the logical evolutionary path from the tactical solution of load balancing to a comprehensive application infrastructure strategy.
Forrester’s research indicates that despite the fact that application delivery makes sense, many organizations are still operating in a very tactical, problem-resolution oriented manner.
Top infrastructure initiatives — like consolidation and virtualization — are focused within the data center, and firms aren’t paying enough attention to solving the growing need to provide anywhere, anytime access to applications. The result? Application response times don’t meet expectations. The usual knee-jerk reactions are to increase network bandwidth and to deploy point solutions like WAN optimization, but these measures do not address the underlying problems. Our conclusion: To deliver acceptable application performance levels without unacceptable increases in IT costs, firms must develop an integrated strategy for application delivery.
Despite the increased focus on the network, we still don’t see a lot of companies taking advantage of more purpose-built solutions that tackle application performance, availability, and scalability. An increasing number of firms are throwing hardware point solutions at the problem, as demonstrated by the 41% reporting that they are deploying such equipment as load balancers. However, we were a bit surprised to see a lower emphasis on more comprehensive solutions, with 33% and 20% indicating they are taking a more strategic approach by implementing application delivery infrastructure and application acceleration equipment, respectively.
Some of the reason for the lack of adoption of more integrated solutions is likely that organizations are simply not aware of what application delivery is. Some of the reason is certainly that there still exist silos within IT that focus on the many functions of application delivery but do so in a non-integrated fashion themselves. Some of the reason is simply that IT is overburdened at the moment and has very little time for strategy when it is tasked with solving real problems right now.
Ironic, then, that IT doesn’t have time to focus on the very strategy that could reduce the burden of siloed application delivery management and thus give IT the time it needs in the first place. A Catch-22, to be certain.
WHAT IS APPLICATION DELIVERY
Analysts, press, industry pundits. All three agree that application delivery is an essential component of the efficient data center of tomorrow. But they – and I’m guilty of this too – often assume you know what application delivery is, and what it does, and why it’s so necessary as part of a solid foundation for emerging data center models.
The question “What is it?” is far more common than you might think. The term is one that almost always requires defining unless you’ve been knee-deep in the industry for a while. If I say “load balancer” to a crowd the term is immediately understood. But if I say “application delivery” the audience gets that “are-you-speaking-in-a-foreign-language-because-I-don’t-know-what-the-hell-you’re-talking-about” look on their face. You know, the one that makes you wonder if you just brayed like a donkey or maybe your latent Tourette’s Syndrome just kicked in.
That’s why I often describe it as “load balancing on steroids.” Mostly because that’s how it came about: application delivery grew out of load balancing because it just made sense.
Application delivery is what you do with applications. You deliver them, via some kind of network, to users. Application delivery infrastructure, then, is all the components necessary to make that happen.
Load balancing is the core of application delivery. This is because the load balancer just happened to be deployed in the perfect place in the data center to provide additional, application-focused functionality: between the client and the server. Because it usually acted as a proxy, it was able to grow from simple layer 4 (TCP) load balancing to a more flexible, intelligent, and application-aware layer 7 (application) device. As it did so, developers began to see the potential benefits of adding functionality to the load balancers. Because the device could see everything from the network layer to the application data, it could optimize network and communication protocols, add security options, implement rate shaping and other QoS functionality, and be smarter about the definition of “availability” when it came to the application. And thus application delivery solutions began to appear, each one comprising more and more application-aware functionality; each one capable of providing more and more benefits.
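To make the layer 4 versus layer 7 distinction concrete, here’s a minimal sketch in Python. The pool names and routing rules are hypothetical, chosen only to illustrate the kind of context each layer can act on:

```python
# A sketch contrasting layer 4 and layer 7 load balancing decisions.
# Pool names and routing rules are hypothetical, for illustration only.

def l4_select_pool(dst_port: int) -> str:
    """Layer 4: the only available context is addresses and ports."""
    return "web-pool" if dst_port in (80, 443) else "default-pool"

def l7_select_pool(path: str, headers: dict) -> str:
    """Layer 7: a proxy that terminates the connection can inspect
    the HTTP request itself and route on application-level context."""
    if headers.get("Host", "").startswith("api."):
        return "api-pool"        # API traffic gets its own pool
    if path.startswith("/images/"):
        return "static-pool"     # static content to cache-backed servers
    if "session-id" in headers.get("Cookie", ""):
        return "sticky-pool"     # honor application session persistence
    return "app-pool"

print(l4_select_pool(443))  # web-pool
print(l7_select_pool("/images/logo.png", {"Host": "www.example.com"}))  # static-pool
```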
And as applications grew more complex, so did the infrastructure. There are performance and access considerations. Reliability, scalability, and security concerns. Failover, optimization, and application-specific quirks that must be addressed in a load balanced environment. There are a lot of components required to deliver an application, keep it secure, and make sure it’s fast enough to keep the user happy.
The Forrester report discusses the need to “provide anywhere, anytime access to applications.” That means from home, on the road, in the office, in the wee hours of the morning or during the middle of the day. Application delivery also concerns itself with the myriad other factors involved, such as SLAs that vary by application, user, and network conditions.
Application delivery is about making decisions based on the context of each request, rather than on one or two variables. And not just decisions like which server should respond, but which network the response should be returned on, which data center should be used, whether the request should be scanned for malicious intent, and whether this request is even a legitimate one for this application, for this user, from this location. It’s about applying optimizations to protocols that improve the performance of applications over both WAN and LAN. It’s focused on ensuring availability of applications and maintaining service-level agreements through intelligent load balancing decisions and careful monitoring of application health at the application layer, not just the network layer. Application delivery focuses on the application and on the unique quirks and behaviors that can impede its performance. It provides a platform on which on-demand adjustments can be made to the application delivery process; on which functionality can be deployed to address security or architectural issues in a centralized manner.
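A rough sketch of what such a context-based decision might look like, with entirely hypothetical field names, site names, and thresholds (a real platform weighs far more signals than this):

```python
# A sketch of a context-based delivery decision. Field names, thresholds,
# and site names are hypothetical; a real platform weighs far more signals.
from dataclasses import dataclass

@dataclass
class RequestContext:
    user_role: str         # e.g. "employee", "partner", "anonymous"
    client_rtt_ms: float   # measured round-trip time to the client
    geo: str               # client location
    looks_malicious: bool  # verdict from inline security inspection

def delivery_decision(ctx: RequestContext) -> dict:
    if ctx.looks_malicious:
        return {"action": "reject"}            # stop bad requests up front
    decision = {"action": "forward", "data_center": "primary", "compress": False}
    if ctx.geo not in ("US", "CA"):
        decision["data_center"] = "emea"       # serve from the closer site
    if ctx.client_rtt_ms > 100:
        decision["compress"] = True            # optimize for a slow WAN link
    if ctx.user_role == "anonymous":
        decision["rate_limit"] = "strict"      # tighter limits for unknown users
    return decision

print(delivery_decision(RequestContext("employee", 140.0, "DE", False)))
```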
Application delivery is the integration of solutions focused on the security, performance, and reliability of applications.
This graphic from the aforementioned Forrester report very nicely illustrates the difference between a point-product based solution and an integrated application delivery architecture.
Source: “Application Delivery Takes Center Stage,” a commissioned study conducted by Forrester Consulting on behalf of Citrix Systems, December 2008
WHY APPLICATION DELIVERY
Hardware point solutions can result in sprawl. Sprawl increases operating expenses, makes it difficult to troubleshoot, introduces unnecessary complexity, and as a bonus it negatively impacts application performance – the very thing the solutions were put in place to address – by adding latency at every hop. I’ve explained the problem of sprawl and the proliferation of point solutions to many different types of audiences and not once has someone yelled out, “You’re a liar! Does not!” because everyone knows it’s true; we just may not agree on the best way to solve that problem.
Obviously, if you integrate all the functionality normally found in point solutions so that it operates on the same data set, you remove the latency issue: the solutions share the data in place rather than packaging it up and delivering it to the next device over a fairly expensive TCP connection.
That used to be “the big” problem application delivery solved. Today the focus is also on streamlining application delivery processes: the manual configuration and coordination of policies across disparate solutions designed to secure and speed the delivery of applications. That process, when using multiple point solutions, can become a nightmare. It’s not just the configuration of each individual device that’s the problem; it’s the coordination across all those solutions that becomes burdensome and time-consuming. Policies implemented and enforced on one point solution may interfere with the application of a policy on another device, conflicting with one another even though both are equally valid and equally necessary. Resolving those conflicts takes time and can actually require a re-architecting of the network. For example, where in the data flow you place certain solutions, such as security, can change how the policies act. If an intermediary acting as a full proxy changes, in any way, the application data or headers, it can trigger false positives on security devices inspecting traffic behind it.
Encrypted data has to be decrypted to be inspected by security and content filtering solutions, so it’s essential to ensure that those solutions are in the flow in the right place in the network. Or you have to provide them with the proper certificates so they can decrypt the data, which means more management and tracking of certificates. This also introduces the potential for certificate theft as few devices have secure key stores and cert management. That’s assuming you can store the cert on an intermediary device; some provide no mechanism for doing so.
These problems are solved with an integrated application delivery solution because the policies are designed to collaborate; to work in concert rather than in conflict. They have access to each other’s data if necessary and understand their relationship to one another. And when there is still a conflict – and there invariably is in some situations – the answer is to reorder policies, not re-architect the network. The former is a much simpler solution that requires less time and fewer headaches.
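As a sketch of why reordering beats re-architecting: if policies run as an ordered chain on one platform, a conflict like “header rewriting trips the security inspection” is fixed by moving an entry in a list. The policy names here are hypothetical:

```python
# A sketch of policy ordering on an integrated platform. Each policy is a
# function applied in sequence to the same request, so resolving a conflict
# means reordering a list, not re-cabling the network. Names are hypothetical.

def decrypt(request):
    request["encrypted"] = False  # TLS terminated; later policies see cleartext
    return request

def inspect(request):
    # Security inspection needs cleartext, so it must follow decrypt.
    assert not request["encrypted"], "inspect must run after decrypt"
    request["inspected"] = True
    return request

def rewrite_headers(request):
    # Rewriting before inspection could trigger false positives,
    # so it is ordered after inspect here.
    request["headers"]["X-App"] = "portal"
    return request

pipeline = [decrypt, inspect, rewrite_headers]  # conflicts resolved by reordering

request = {"encrypted": True, "headers": {}}
for policy in pipeline:
    request = policy(request)
print(request)
```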
An integrated solution also ensures the reuse of knowledge. If you know how to configure the application acceleration components, you also know how to configure the application security and the core load balancing features. The interfaces are the same and the processes (and terminology) are the same, which means less time spent learning the nuances of each product and becoming familiar with each product’s unique view of how it should be configured and managed. This streamlines the application delivery process and makes it more efficient, which translates into reduced operating expenses.
THE PROBLEMS OF ACCURACY AND CONTEXT
In an architecture composed of multiple solutions, the only real way to share the context of a request is to pass it around somehow between devices. The only exception is security solutions that can be deployed in a bridged mode. IDS and web application firewalls are the most common examples, where the solutions are deployed in such a way that the original requests are essentially broadcast to the devices, usually through the use of mirroring on the switch. This does solve the problem, but it also duplicates data on the network and increases the bandwidth used in the process.
The passing around of context between devices doesn’t happen for a number of reasons. Foremost is the lack of a communication protocol to do so. There is no “context-sharing” standard, no best practices, no agreed upon method of sharing that context between disparate devices. While there may be a way to do so among products from a single vendor, anyone who builds an application delivery network based on individual components rarely sources from a single vendor, so any “sharing” of context that is possible is generally lost.
The other issue with a multiple-solution architecture is that many solutions are full proxies. That means it is not the user that appears to be the client; it is the last intermediary in the chain of proxies. If the flow of data is client -> SSL accelerator -> load balancer -> server, then the load balancer sees the SSL accelerator as the client, not the end user. That means data regarding the network conditions for the client is not accurate. The load balancer sees the local segment of the network as the “client link,” and any decisions made based on that will be based on incorrect data.
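To illustrate, here’s a toy model of that proxy chain, including the common workaround of carrying the original client address in an X-Forwarded-For header. The addresses are made up, and real deployments vary in whether and how intermediaries honor that header:

```python
# A sketch of how chained full proxies obscure the real client, and the
# common informal workaround of appending an X-Forwarded-For header.
# All addresses are illustrative.

def proxy_hop(request: dict, proxy_ip: str) -> dict:
    """Each full proxy opens its own connection downstream, so the next
    device sees the proxy, not the user, as the client, unless the
    original address is passed along explicitly."""
    xff = request["headers"].get("X-Forwarded-For")
    request["headers"]["X-Forwarded-For"] = (
        f"{xff}, {request['source_ip']}" if xff else request["source_ip"]
    )
    request["source_ip"] = proxy_ip  # the TCP-level client is now the proxy
    return request

request = {"source_ip": "203.0.113.7", "headers": {}}  # the real user
request = proxy_hop(request, "10.0.0.2")               # SSL accelerator
request = proxy_hop(request, "10.0.0.3")               # load balancer

print(request["source_ip"])                   # 10.0.0.3: what the server sees
print(request["headers"]["X-Forwarded-For"])  # 203.0.113.7, 10.0.0.2
```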
This problem is particularly prevalent in Web 2.0 applications, which provide APIs for integration. Requests via the API have different requirements; they are treated differently than requests for the same data arriving via the web application itself. Without an intelligent infrastructure, the handling of these requests is spread across multiple pieces of infrastructure – and often in the application itself. A change in policy requires changes across multiple devices, which is not only time-consuming but also prone to error, given the sheer volume of changes required.
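A hypothetical sketch of that differentiated handling; the point is that these rules live in one place on an integrated platform, where a point-solution architecture scatters them across devices and application code:

```python
# A sketch of applying different policies to API and browser requests for
# the same data. Paths, limits, and header values are hypothetical.

def classify(path: str, headers: dict) -> str:
    if path.startswith("/api/") or headers.get("Accept") == "application/json":
        return "api"
    return "web"

# On an integrated platform this table lives in one place; with point
# solutions the same rules end up scattered across devices and app code.
POLICIES = {
    "api": {"rate_limit_per_min": 60, "auth": "api-key", "cache": False},
    "web": {"rate_limit_per_min": 600, "auth": "session-cookie", "cache": True},
}

print(POLICIES[classify("/api/orders", {})])
print(POLICIES[classify("/orders", {"Accept": "text/html"})])
```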
In an integrated application delivery network the myriad functions are integrated and deployed on the same platform. This means that what one solution (e.g. security) does to data is understood and recognized by other solutions (e.g. caching and application acceleration). The context is preserved as requests and responses flow through the disparate functions. It also solves the second issue – accurate data upon which to make decisions – by providing access to the original request, from the network layer up to the application layer.
START SIMPLE, GROW LATER
Most organizations turn to application delivery solutions out of necessity: they need a high-availability architecture; they need a load balancer. As scaling out through virtualization (horizontal scalability) continues to rise in popularity as a more efficient means of meeting capacity demands, load balancing will continue to become a more strategic part of the data center. It behooves network and system architects, then, to consider the long-term ramifications of virtualization and increasing demands on applications in terms of access, performance, and security. Doing so should, according to analysts, lead those architects to conclude that an application delivery networking solution will serve their needs best, as these are the very issues such platforms address.
Choosing a modularized, extensible application delivery platform allows architects to start with load balancing and add functionality as they need it, designing a solution that truly fits their specific requirements rather than simply acquiring and deploying more devices that may dictate changes in the network and infrastructure architecture.
Related articles & blogs
- Load Balancing is Dead, Time to Focus on Application Delivery
- Architects Need to Better Leverage Virtualization
- What’s good for the network isn’t always good for the application
- Why you still need Layer 7 persistence
- Layer 7 switching + Load balancing = Layer 7 Load Balancing
- The Web 2.0 Botnet: Twisting Twitter and Automated Collaboration
- API Request Throttling: A Better Option