How End-User Monitoring “Graduated” from APM
Written by Bojan Simic
December 07, 2009
Over the last two to three years, the term “Application Performance Management” (APM) has become an integral part of the marketing messaging of more than 70 technology vendors. While all of these vendors help improve the speed and availability of business-critical applications, their solutions are significantly different, ranging anywhere from network performance monitoring to application acceleration, Web management and even managed/carrier services. Still, APM as a general concept has become relatively easy for decision makers at end-user organizations to digest, because it hits all of the key pain points that IT organizations are dealing with. As a result, many vendors were more than happy to jump on this bandwagon and position themselves as players in the space.

Other than the language in their press releases and marketing collateral, however, these vendors have little in common. Technology-wise, how similar are the offerings of F5, NetQoS, Keynote Systems and OpTier? They are not similar at all. So at some point, vendors needed a point of differentiation and a new theme that would hit the key concerns of end-users. The first area of the APM market that evolved into a concept that fully resonates with the needs of end-user organizations is End-User Monitoring. There are several reasons why End-User Monitoring/Management reached this “tipping point”. Some of the key drivers behind this are:
So what does End-User Monitoring really mean? Once again, the answer is: it depends.

If you ask Web monitoring vendors, End-User Monitoring means monitoring the performance of Web applications from outside the corporate firewall. It also means measuring more than just basic availability and application response times: it includes conversion and abandonment rates for end-users, while taking into consideration the geographic location and type of ISP connectivity for each user or end-user group. This group of vendors includes the likes of Keynote Systems, Gomez (recently acquired by Compuware), AlertSite, Zoho (through their www.site24x7.com solution) and Webmetrics.

If you ask networking vendors, End-User Monitoring means capturing packet flow data and using that information to estimate the speed of applications as experienced by business users. Some of the vendors that fall into this group are NetQoS (recently acquired by CA), OPNET, NetScout, Fluke Networks and InfoVista.

If you ask vendors that provide desktop-based solutions for application monitoring, it means identifying how many users or applications are impacted by performance issues, which business processes are suffering, and learning about problems with end-user experience before end-users call the help desk. These vendors include Aternity and Knoa Software.

If you ask Business Transaction Management (BTM) vendors, it means monitoring application speed and availability for each transaction. These vendors specialize in monitoring transaction flow from the moment an end-user request is placed, across the network, all the way to the data center. However, many of these vendors lack capabilities for monitoring what end-users are really experiencing when using these applications. Some of them have recently acquired end-user experience capabilities either through product development (OpTier, by launching its Experience Manager product) or through strategic partnerships (DynaTrace, by partnering with Coradiant).

There are also solutions that combine several of these concepts and monitor different parts of the enterprise infrastructure to understand how their performance impacts business users and the processes they support. Examples of these solutions are Compuware’s Vantage platform and Quest Software’s Foglight solution.

The bottom line is: the quality of end-user experience is not a metric, it’s a concept. And it is one that is getting more traction from end-users, for good reason. It helps IT teams understand how well they are doing their jobs before they hear about it through trouble tickets filed by business users. As of now, there is no industry-accepted metric for monitoring the quality of end-user experience. The metrics currently in use range from measuring application response times in aggregate and monitoring response times for each transaction to monitoring usage patterns and user session abandonment rates. There have been some attempts to introduce end-user experience quality indexes that aggregate different performance metrics into a single measure, but they are still limited to a relatively small group of users. In some cases, these indexes are provided by a group of vendors, such as the members of the Apdex Alliance. In other cases, they are created by a single vendor (e.g., Ipanema’s Application Quality Score).
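To make the idea of a single end-user experience index more concrete, here is a minimal sketch of how an Apdex-style score can be computed from response-time samples: requests at or below a target threshold T count as satisfied, requests up to 4T count as tolerating (at half weight), and anything slower counts as frustrated. The 0.5-second threshold and the sample data below are assumptions chosen only for illustration, not figures from any vendor mentioned in this article.

# Minimal, hypothetical sketch of an Apdex-style end-user experience index.
# The 0.5-second target threshold and the sample response times are made up
# for illustration; the satisfied/tolerating/frustrated buckets follow the
# published Apdex method (satisfied <= T, tolerating <= 4T, else frustrated).

def apdex_score(response_times_sec, target_t=0.5):
    """Return a score from 0.0 (all users frustrated) to 1.0 (all satisfied)."""
    total = len(response_times_sec)
    if total == 0:
        return None  # no samples, no score
    satisfied = sum(1 for rt in response_times_sec if rt <= target_t)
    tolerating = sum(1 for rt in response_times_sec if target_t < rt <= 4 * target_t)
    return (satisfied + tolerating / 2.0) / total

# Hypothetical samples: most requests are fast, a few are slow.
samples = [0.2, 0.3, 0.4, 0.45, 0.6, 0.7, 1.1, 2.5]
print(round(apdex_score(samples), 2))  # 0.69 for these eight samples

A networking or BTM vendor would feed this kind of calculation with passively captured transaction timings rather than hand-picked samples, but aggregating many measurements into a single 0-to-1 score is the basic idea behind the indexes described above.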
Further segmentation and growth of the End-User Monitoring market will depend on how well the KPIs that vendors can deliver align with the strategies end-user organizations are using to tie their IT initiatives to business goals. If these two parties come to a mutual agreement about KPIs, it would eliminate a lot of the confusion about what End-User Monitoring really means.
Comments
Mike, I agree with your comments that EUM, as currently defined, is not sufficient for managing application performance end-to-end. Many EUM solutions are effective in alerting end-users that there is a problem, but they are not as effective in helping them solve the problem. Combining EUM tools with solutions for monitoring an entire transaction flow is much more effective when dealing with application performance issues.
I look forward to sharing my upcoming research with you.
CA Wily Introscope (it was missed from your analysis somehow): According to CA, Introscope monitors over 5B transactions per day, and they have 1,200 enterprise-class customers that won't go a day without it. We wouldn't. It gives us REAL-TIME app performance monitoring with an "application-centric" view and helps our apps people do root-cause analysis on the spot. This means we can fix stuff before calls come into the help desk. None of the others do this.
NetQoS provides our network guys insight into the performance of the app as it flows through the network. CA's Wily CEM monitors user activity and helps our folks gain immediate insight into the experience our customers see.
Giving us this TRIPLE PLAY, or three views of the app, is what does it for us: infrastructure, apps and customer experience views of performance should be what constitutes a total APM tool.
Mike
New Relic