Written by Bojan Simic | October 25, 2010
Network emulation technology was originally designed to help network managers, IT operations teams and developers create lab environments that simulate their actual network conditions for testing application performance in pre-production. These solutions allow organizations to use historic network performance data to build virtual testing models that generate information about expected levels of performance and help eliminate network performance bottlenecks before applications go into production. Providers of this type of technology include Anue Systems, Apposite Technologies, iTrinegy and Shunra Software, while OPNET provides a somewhat similar solution for building network models.

End-user organizations report that the accuracy of the simulated network environments created by these technologies is around 95% and that the solutions pay for themselves relatively quickly. Some organizations that I have had the chance to speak with reported that their network emulation products have paid for themselves four times over in labor costs alone, because they significantly reduced the number of application performance incidents in production that IT teams had to deal with.
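To make the basic idea concrete, the sketch below is a generic, hypothetical illustration in Python (not any vendor's product; the class names and traffic numbers are assumptions). It shows how a simple virtual testing model can take a network profile built from historic performance data and estimate how a transaction would behave on a production WAN link before the application is rolled out.

# Minimal sketch (hypothetical, not any vendor's product) of the idea behind
# network emulation: evaluate an application's traffic profile against a model
# of the target network and estimate response time before going to production.

from dataclasses import dataclass

@dataclass
class NetworkProfile:
    rtt_ms: float          # round-trip time in milliseconds
    bandwidth_mbps: float  # available bandwidth in megabits per second
    loss_rate: float       # packet loss rate (0.0 - 1.0)

@dataclass
class TransactionProfile:
    round_trips: int   # application turns per transaction
    payload_kb: float  # data transferred per transaction, in kilobytes

def estimated_response_time_s(net: NetworkProfile, txn: TransactionProfile) -> float:
    """Very rough model: latency cost of application turns plus transfer time,
    inflated by retransmissions caused by packet loss."""
    latency_s = txn.round_trips * (net.rtt_ms / 1000.0)
    transfer_s = (txn.payload_kb * 8 / 1000.0) / net.bandwidth_mbps
    retransmit_factor = 1.0 / (1.0 - net.loss_rate)
    return (latency_s + transfer_s) * retransmit_factor

# Compare the lab/LAN baseline with an emulated branch-office WAN link.
txn = TransactionProfile(round_trips=40, payload_kb=500)
lan = NetworkProfile(rtt_ms=1, bandwidth_mbps=1000, loss_rate=0.0)
wan = NetworkProfile(rtt_ms=120, bandwidth_mbps=2, loss_rate=0.01)

print(f"LAN estimate: {estimated_response_time_s(lan, txn):.2f} s")
print(f"WAN estimate: {estimated_response_time_s(wan, txn):.2f} s")

Commercial emulation products go much further than a formula, replaying real application traffic through emulated links, but the principle of testing against modeled network conditions before production is the same.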
Network emulation technologies have been around for a while, and even though they deliver measurable business benefits, they have never reached widespread adoption in the enterprise. One of the main reasons is that many organizations are interested in using these solutions only when they are planning new technology rollouts or making changes to existing applications. As a result, larger organizations that build a lot of custom applications and have new rollouts almost every week have been the main beneficiaries of these solutions, while organizations with only a few new technology rollouts per year have not been able to justify the investment. Vendors in this space have been trying to make their solutions more accessible to end-users by offering their capabilities as a managed service or adjusting pricing models, but the key obstacle to widespread adoption of this technology still remains.
Written by Bojan Simic | February 09, 2010
One of the key questions in IT performance management is: Does an improved ability to collect more performance data generally lead to improved performance of IT services? The answer is: No, not necessarily. In fact, a number of end-user organizations that I have spoken with reported that their ability to prevent and resolve performance issues deteriorated after they invested in additional monitoring tools. As new challenges of managing application performance “jump out”, organizations tend to deploy new point solutions that address each of these problems. This does allow them to collect more information about those specific problems, but it doesn’t necessarily give them better control of overall IT performance.
Managing application performance is one of the key IT initiatives for end-users, but there is no single class of technology or solution provider that can address every issue involved in managing the performance of business-critical applications. Some major IT management vendors are investing significant resources in acquiring companies to enhance their product offerings and tackle more performance challenges. However, the capabilities needed for end-to-end management of IT performance are changing rapidly, and companies looking to address each of the major IT management challenges are likely to keep playing “catch-up”. End-user requirements are changing faster than product development cycles or the time needed for acquisitions to be initiated, agreed on and completed. So, are organizations that are looking to access all relevant IT performance data through a single platform out of luck?