Podcast
January 02, 2011
Speakers: Mehdi Daoudi, Co-Founder and CEO, Catchpoint Systems; Bill Kish, CEO, Coyote Point; Peter Melerud, Co-Founder and COO, KEMP Technologies
Moderator: Bojan Simic, President and Principal Analyst, TRAC Research
Bojan Simic: Our topic today is how organizations in the small to medium-sized market sector manage the performance of their web applications. Before we get into the specifics of how they do that from a technology and process perspective, one question I have for all of our guest speakers is how they actually go about defining this sector. As you are probably aware, in some other IT markets companies define their sectors based on number of users, number of employees or revenue size. These metrics are less clear-cut in web performance management, so if you folks wouldn't mind sharing how you go about defining what falls into the SMB category for your organizations.
May 02, 2010
TRAC Research recently recorded a podcast about the key trends in the load testing market with Priya Kothari, Product Marketing Manager for HP’s Performance Validation solutions. Some of the topics covered in this podcast include: using load testing solutions to align IT with business goals, internal processes that organizations need to have in place to get the most out of their load testing solutions and the role that load testing technology plays in deploying cloud computing services.
Here are some of the key insights from the podcast:
“Finding the tools to actually do your testing is the easy part, but implementing proper load testing practices is often the hard part… In order to do a performance test, you’ll need to know what the application is built for: What is the business purpose of the application? What will users be doing in it, what types of transactions will be performed and how many users will be accessing it? You also need to know what users will be expecting from the application and what types of service level objectives should be in place and tested for. It is also important to understand which pieces of the application are the most critical. With this type of information, testers can accurately plan the testing to ensure that high-priority requirements are always covered… This really helps put an end to testing for testing’s sake and instead aligns IT testing teams with the needs of various business stakeholders.”
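To make that planning step concrete, here is a minimal sketch in plain Python (standard library only, not tied to any HP tooling) of a test that drives one hypothetical high-priority transaction at a fixed concurrency and checks the results against an assumed two-second service level objective. The URL, user counts and threshold are illustrative assumptions, not details from the podcast.

```python
# Minimal load-test sketch: a fixed number of concurrent "users" exercise one
# high-priority transaction and the latencies are compared against an assumed
# SLO. The URL, user count and thresholds are hypothetical examples.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://localhost:8080/checkout"   # hypothetical transaction
CONCURRENT_USERS = 25                           # assumed peak concurrency
REQUESTS_PER_USER = 10
SLO_SECONDS = 2.0                               # assumed response-time SLO

def run_user(_user_id: int) -> list[float]:
    """Simulate one user issuing a series of requests; return the latencies."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
            latencies.append(time.perf_counter() - start)
        except OSError:
            latencies.append(float("inf"))      # count failures as SLO misses
    return latencies

def main() -> None:
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        per_user = list(pool.map(run_user, range(CONCURRENT_USERS)))
    latencies = [lat for user in per_user for lat in user]
    misses = sum(1 for lat in latencies if lat > SLO_SECONDS)
    print(f"requests: {len(latencies)}, SLO misses: {misses}")
    print(f"worst latency: {max(latencies):.2f}s (SLO: {SLO_SECONDS}s)")

if __name__ == "__main__":
    main()
```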
“A lot of people believe that moving to the cloud means they no longer have to worry about performance testing, since they now have access to unlimited hardware. What they often don’t realize is that if the application itself is not scalable, the elasticity of the cloud can actually cost them thousands of dollars: if the application doesn’t scale, you will just keep adding new machines to support the load. When moving to the cloud, it becomes even more important to test your applications and to tune them properly so they are optimized in terms of hardware consumption… With the hybrid cloud, there is a new factor that organizations need to consider: you need to ensure that you have enough bandwidth between yourself and the cloud provider. Also, cloud vendors themselves need to start thinking about load testing. They need to test their infrastructure for specific usage conditions and ensure that they are not going to be the bottleneck.”
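As a rough illustration of that scalability point, the sketch below steps up concurrency and reports throughput at each step; if requests per second stop climbing as users are added, the application itself is the bottleneck and elastic hardware will mostly add cost. The URL, step sizes and the 1.2x heuristic are hypothetical.

```python
# Rough scalability check: step up concurrency and watch whether throughput
# keeps climbing. A flat or falling curve suggests the application itself is
# the bottleneck, so cloud elasticity will mainly add cost, not capacity.
# The URL, step sizes and the 1.2x heuristic are hypothetical.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://localhost:8080/search"   # hypothetical transaction
STEP_SECONDS = 15                             # duration of each load step

def one_request() -> bool:
    try:
        with urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        return True
    except OSError:
        return False

def measure_step(concurrency: int) -> float:
    """Drive the target at a fixed concurrency and return requests/second."""
    deadline = time.monotonic() + STEP_SECONDS

    def worker(_worker_id: int) -> int:
        count = 0
        while time.monotonic() < deadline:
            if one_request():
                count += 1
        return count

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        completed = sum(pool.map(worker, range(concurrency)))
    return completed / STEP_SECONDS

if __name__ == "__main__":
    previous = 0.0
    for users in (5, 10, 20, 40):
        rate = measure_step(users)
        note = "" if rate > previous * 1.2 else "  <- not scaling"
        print(f"{users:>3} users: {rate:6.1f} req/s{note}")
        previous = rate
```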
“Application modernization is one of the key trends that we are seeing. More customers are moving away from legacy technologies toward rich internet applications, Web 2.0 and SOA-based applications, as well as frameworks such as AJAX, Flex and Silverlight. We are also seeing that some of the major application providers themselves are picking up on these trends; for example, SAP is starting to use Flash, Flex and Silverlight in its latest releases. We are also seeing a browser explosion: it used to be all Internet Explorer, and now we are seeing more of Firefox, Google Chrome, Safari and Opera. Everyone is looking to create a richer experience for the end users of their applications in order to become more competitive in the marketplace.”
Click here to listen to the podcast
March 29, 2010
TRAC Research recently had the chance to discuss some of the key trends in the cloud management market with Dave Asprey, Entrepreneur in Residence at Trinity Ventures. Dave is one of the leading experts in cloud management, application delivery and performance management and in the past has held executive positions at Citrix, Blue Coat, Zeus Technologies and Speedera Networks. Some of the key topics covered during this podcast include: market opportunities for deploying usage-based pricing for IT management solutions, key challenges for managing cloud performance and the impact that the emergence of cloud computing is having on load balancing and WAN optimization solutions.
Here are some of the key insights from the podcast:
"If you are a public IT management company and you switch to usage-based pricing, you'll probably have a hard couple of quarters. So for public companies this is a hard pill to swallow, but for private companies the challenge is that this is a very disruptive technology. You either have to make this business model from the start or have to make pretty radical changes to your existing model."
"I am not a big believer in cloud bursting, at least not with the way the technology is today. What is going to happen is that enterprises may have some applications that might be cheaper to just toss on the cloud. Same companies are going to say: "These applications are going to be in the cloud for the next couple of months". Well, 2 years later that becomes a significant application to the company. And that's how all other disruptive technologies like networking, PCs and mobile devices entered the enterprise."
"If you really architected something to run on hardware, it is pretty hard to port it to the cloud or to port it to a virtual appliance. And if you do that, the odds of it being highly performing go down substantially. However, if you start with software architecture and keep in mind that it can run on pretty much any hardware, this becomes something that is relatively easy to do."
"As you move more strategic applications to the cloud, application performance management pieces will become critically important, especially for paid applications. Another application management technology that will become critically important in the cloud is something that I would call "n+1 scaling". Hardware vendors have historically put 2 devices next to each other, and they would just have the "heartbeat" between the two. However, a few vendors out there architected solutions where they can just keep adding more boxes. When you are selling virtual appliances to cloud providers, it is very important that providers can deploy any number of virtual appliances and have them all running as a part of a single pool or as a subset of many different pools. Most virtual appliances today are not up to that "n+1" pooling task."
Click here to listen to the podcast
January 31, 2010
TRAC Research had the chance to interview Darin Bartik, Senior Director of Product Marketing at Quest Software about the key trends and challenges regarding managing application performance in virtual environments.
Here are some of the key insights that Darin shared during this podcast:
“The reason that most IT organizations are not on the same page is that their priorities have been based on the old way of thinking about performance monitoring and management: IT gets pushed by the business and then struggles to keep up, so teams end up being reactive much of the time, and that reactivity has forced the need for very broad coverage. What ended up happening is that this broad coverage didn’t help domain-specific technologists, so a lot more domain-specific tools were purchased… and all of the different data and different tools that people have has created all of this finger-pointing, the “war room” activities and the never-ending conference calls.”
“The fundamental challenge for managing application performance in virtual environments comes from what is actually a good part of virtualization: resource sharing and all of the efficiencies that come with it. But this breaks the physical link that was always there, and traditional management tools are starting to lose visibility because they were built on the idea of the physical world… End-user organizations have to take the view of the service they are delivering, which is something that stays static. If we take more of a service management approach and manage the application instead of the individual infrastructure pieces, that will help maintain the visibility link all the way from where the end user interacts down to the virtual infrastructure.”
“Automation is something that customers are beginning to ask about, especially in more mature environments, and that’s what is really going to take things from a performance monitoring to a performance management paradigm. Instead of just monitoring what is going on and reacting to it manually, let the tool take an administrative action. Let it witness the event and, instead of telling someone that something is wrong and giving some expert advice, actually take that expert advice and perform an action.”
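The sketch below illustrates that shift from monitoring to management in a few lines of Python: when a metric breaches its threshold for several consecutive samples, a remediation action runs automatically instead of an alert being raised for a human. The metric, threshold and remediation function are hypothetical placeholders, not a description of Quest's product.

```python
# Sketch of monitoring-to-management: when a metric breaches its threshold for
# several consecutive samples, run a remediation action instead of only
# alerting. The metric, threshold and action below are hypothetical.
from collections import deque
from typing import Callable

class AutoRemediator:
    def __init__(self, threshold: float, samples: int,
                 action: Callable[[], None]) -> None:
        self.threshold = threshold
        self.window: deque[float] = deque(maxlen=samples)
        self.action = action

    def observe(self, value: float) -> None:
        self.window.append(value)
        breached = (len(self.window) == self.window.maxlen
                    and all(v > self.threshold for v in self.window))
        if breached:
            self.action()            # apply the "expert advice" automatically
            self.window.clear()

def recycle_app_pool() -> None:
    print("remediation: recycling the application pool")   # placeholder action

monitor = AutoRemediator(threshold=90.0, samples=3, action=recycle_app_pool)
for cpu_percent in (70, 95, 96, 97, 60):   # simulated CPU samples from a VM
    monitor.observe(cpu_percent)
```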
“Virtualization was set to change everything, and when it comes to managing performance it really does.”
Click here to listen to the podcast
January 04, 2010
Business Transaction Management (BTM) is one of the fastest growing areas of the IT Management market. TRAC Research had an opportunity to discuss key trends in the IT and BTM markets with Russell Rothstein, Vice President of Product Marketing at OpTier.
Here are some of the key insights from this podcast:
“There are three different aspects of aligning IT with business from the monitoring perspective: process, resources and language… What IT can do to help is prioritize business processes and make sure that the most important processes are given maximum resources. You also have to look at what the business impact is when there is a change in the environment. What happens when you add servers or virtual machines, or when there is a data center migration? Are these activities helping the business or not?”
“Something that we hear from enterprises all the time is: how do we build a common language between business and IT? To do that, you have to ensure that the information collected by IT can be presented to business folks in a language that they can understand.”
“Enterprises are seeing more opportunities for a return on investment in the private cloud space, not as much in the public cloud space… One of the key hesitations about moving to the cloud model is that the cloud distorts visibility into business-critical transactions. In order to tap into the benefits of the cloud, you need greater visibility into it, and BTM is built for following transactions as they flow from the end user into the cloud and out of the cloud, following the entire path of the transaction flow.”
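As a simplified illustration of that transaction-following idea (not OpTier's actual mechanism), the sketch below tags a transaction with a correlation ID at its entry point and records per-hop timings as it crosses an on-premise tier, a cloud-hosted service and a database, so the full path can be reassembled afterwards. Tier names and timings are invented for the example.

```python
# Sketch of following one business transaction end to end: a correlation ID is
# attached where the transaction enters, each hop records its timing against
# that ID, and the full path can be reassembled even when a hop runs in the
# cloud. Tier names and timings are invented for the example.
import time
import uuid
from collections import defaultdict

hops: dict[str, list[tuple[str, float]]] = defaultdict(list)

def record_hop(txn_id: str, tier: str, started: float) -> None:
    hops[txn_id].append((tier, time.perf_counter() - started))

def handle_order(txn_id: str) -> None:
    for tier in ("web tier (on-premise)", "pricing service (cloud)",
                 "database (on-premise)"):
        started = time.perf_counter()
        time.sleep(0.01)                  # stand-in for the real work per hop
        record_hop(txn_id, tier, started)

txn_id = str(uuid.uuid4())                # assigned at the entry point
handle_order(txn_id)
for tier, seconds in hops[txn_id]:
    print(f"{txn_id[:8]}  {tier:<26} {seconds * 1000:6.1f} ms")
```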
“The BTM market has been super hot in 2009 and I don’t think that there are any signs of it cooling off in 2010. What we’ve seen is, first of all, a lot more awareness of what BTM is among enterprises. Also, organizations are trying to understand how BTM relates to other initiatives such as CMDB, SOA, ITSM and the Cloud. We are also seeing our customers becoming more savvy about differences between BTM and APM.”
Click here to listen to the podcast
December 20, 2009
TRAC Research recently conducted a podcast with Vik Chaudhary, VP of Product Marketing at Keynote Systems. Vik provided his insights about key trends in the Web Experience Monitoring market, the top capabilities end-user organizations are asking for as well as the impact that technologies such as SaaS and the Cloud are having on this market.
This podcast also includes topics such as KPIs that top performing companies are using to measure the quality of Web experience, the importance of Web Monitoring tools beyond business-to-customer (B2C) environments, and the best practices for dealing with changes in end-user expectations.
Here are some of the key insights from this podcast:
“Key performance metrics that high performing websites are measuring are: page download time, content generation time, time taken by a browser to render an application, geographic uniformity, load handling, peak availability and outage hours.”
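For a sense of how the simpler of these KPIs can be collected, here is a minimal synthetic-probe sketch in Python that measures page download time and availability over a handful of checks. The URL, check count and pacing are assumptions, and a real monitor would also probe from many geographies to assess geographic uniformity.

```python
# Minimal synthetic probe for two of the KPIs above: page download time and
# availability. The URL, check count and pacing are assumptions; a real
# monitor would also probe from multiple geographic locations.
import time
from urllib.request import urlopen

TARGET_URL = "http://www.example.com/"
CHECKS = 5

download_times: list[float] = []
failures = 0
for _ in range(CHECKS):
    start = time.perf_counter()
    try:
        with urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        download_times.append(time.perf_counter() - start)
    except OSError:
        failures += 1
    time.sleep(1)                         # pacing between synthetic checks

availability = 100.0 * (CHECKS - failures) / CHECKS
if download_times:
    average = sum(download_times) / len(download_times)
    print(f"average page download time: {average:.2f}s")
print(f"availability over {CHECKS} checks: {availability:.0f}%")
```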
“The top two capabilities that end users are asking for are: 1) monitoring the performance of applications from the perspective of mobile users; and 2) Web monitoring capabilities that pinpoint both the third-party components on your page (ads, common login components, Web analytics, etc.) and the individual Web servers and infrastructure components that are causing performance problems.”
“The Web Experience market matured tremendously in 2009. What we see is that both IT operations teams and business managers are recognizing the need to accurately monitor application performance. They want to monitor these applications using a synthetic approach, which is something that Keynote does, but they also want to do more real end-user monitoring.”
Click here to listen to the podcast