
EXPERT ADVICE

The Trouble With IT

There is no doubt that the complexity of managing IT continues to grow. Transaction volumes keep climbing, applications are interwoven with Web services, and workers are adopting the mobile device du jour. The impact of social media and the Web continues to vex IT security managers. Then there’s the case of enterprise software, which runs the business.

Enterprise software technology has become a staple of large organizations over the last 20 years, but there’s no evidence that managing these systems is getting any easier for CIOs and IT managers. Not only do these systems have enormous feature sets and capabilities, but they are also heavily integrated with many other applications and services inside and outside the company. There can be weighty consequences when something goes awry.

If your systems buckle during periods of heavy volume and sales, or customer service suffers as a result, IT has failed. The job of IT is not to deploy and manage technology, but rather to effectively deliver automated business processes and business transactions. Somehow, this mission has become lost in the struggle to implement and control these large applications.

The Nature of IT: Silos Before Quality

IT has a good handle on preventing front-page disasters. What we don’t hear about are the thousands of smaller-scale, business-critical incidents that may not bring a company to its knees, but still cost billions of dollars.

How many insurance agents cannot use a critical application on the first day of the month because IT has not finished batch processing of the previous month’s data? When users suddenly experience sluggish response times, all hell breaks loose. Naturally, IT is to blame for these mishaps, and if it is unable to fix the problem quickly, it’s not a pretty situation.

CEOs use enterprise software, too, so these failures do not go unnoticed at the top. Solving the problem is like peeling an onion. First, IT often is unaware of the end-user problem until users sound off. Second, once IT knows of the problem, how does it quickly determine which application, database, set of transactions, network or server is behind the issue? Third, when the problematic area has been isolated, how does the IT expert find the cause and fix it?

Since IT is so complex and involves many different technologies, it is divided into areas of expertise: network, server, storage, security, application support and databases. Each area, or silo, specializes only in its domain and is practically oblivious to all the others. While this specialization can lower management costs, it makes quality control of applications and transaction services a nightmare, because they cut across all these silos.

No wonder delivering a high quality of service for enterprise applications is mind-boggling. The challenge for enterprise IT managers is finding more foolproof ways to troubleshoot, monitor and manage this environment. Such a capability would not only speed repairs but also prevent common problems from happening in the first place. This is easier said than done, but it is possible with an approach rooted in quality control.

The Deming Approach to IT?

Decades ago, quality guru W. Edwards Deming said, “We can no longer live with commonly accepted levels of delays, mistakes, defective materials, and defective workmanship.” Deming’s work led to widespread changes in global manufacturing practices and to the adoption of near-zero-defect methodologies such as Six Sigma. Should IT follow suit?

For many reasons, it would not be fair to compare IT to a manufacturing assembly line. First, the final product of IT, an automated business process, is more like a utility than a widget. Second, even as a utility, it is an order of magnitude more complex because of the number of layers involved, including network, infrastructure and applications. Third, the pace of change in IT is probably a few orders of magnitude higher than in any other industry.

Even so, the manufacturing concept of quality control can apply to the world of enterprise technology. Manufacturing quality control evolved from final product sampling, through component sampling, to process control. While quality control in manufacturing is mature, it is quite the opposite in IT.

In most IT organizations, operators monitor and attend to the basic IT infrastructure components: networks, CPUs, storage. Determining how these components affect the final product — the business process — is fairly difficult, due to the high degree of shared resources and, increasingly, many degrees of “virtualization.” As a result, few companies monitor quality of service of their applications or business transactions in terms of errors, performance or content.

A New Approach: Application Quality of Service

In the enterprise software space, quality control requires finding a single version of the truth to restore service levels. Through automated performance monitoring techniques and technologies, IT managers can look across the entire ecosystem to understand root causes, intersections and patterns between users and systems that cause performance to suffer.

The goal of automated monitoring is to pinpoint the origin of an application failure with a high degree of accuracy, fix it, and then analyze the historical data to help prevent future problems and improve service levels. To be successful, though, this approach requires continuous monitoring of the quality, performance and responsiveness of applications, 24 hours a day, seven days a week.
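
To make the idea concrete, below is a minimal sketch of what a continuous synthetic-transaction monitor might look like. It is illustrative only: the endpoint URL, the two-second threshold and the one-minute polling interval are assumptions made for the example, not details taken from the article or any particular product.

```python
# Illustrative sketch: probe an application endpoint around the clock,
# keep every sample for later analysis, and raise an alert when a probe
# fails or runs slower than the agreed threshold.
# The URL, threshold and interval below are hypothetical.
import time
import urllib.request

ENDPOINT = "http://app.example.internal/orders/health"
THRESHOLD_SECONDS = 2.0    # alert if a probe takes longer than this
INTERVAL_SECONDS = 60      # probe once a minute, 24x7

history = []               # retained samples for trend and root-cause analysis

def probe(url):
    """Time one synthetic request; return (elapsed_seconds, success_flag)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            ok = (resp.status == 200)
    except OSError:
        ok = False
    return time.monotonic() - start, ok

while True:
    elapsed, ok = probe(ENDPOINT)
    history.append((time.time(), elapsed, ok))
    if not ok or elapsed > THRESHOLD_SECONDS:
        # In practice this would page an operator or open an incident ticket.
        print("ALERT: probe took %.2fs, success=%s" % (elapsed, ok))
    time.sleep(INTERVAL_SECONDS)
```

The specific tooling matters less than the pattern: continuous measurement, a retained history, and alerts tied to a service-level threshold rather than to a raw CPU or disk statistic.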

As with Six Sigma, a comprehensive management and monitoring process enables IT to eliminate all but a statistically insignificant number of defects and maintain consistently high levels of quality.
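
For a sense of scale, Six Sigma in manufacturing corresponds to roughly 3.4 defects per million opportunities; an application environment held to a comparable standard would tolerate only a handful of failed or out-of-spec transactions in every million processed.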

Quality for the sake of quality is a noble and justifiable cause, yet CIOs also need to focus on the bottom line. How often do they throw precious capital at new servers, bandwidth or storage when what they really need to do first is look more closely at their systems, applications and user behavior? A more holistic perspective can help determine whether and how applications are interacting with that infrastructure in ways that cause sluggish response times and other glitches.

None of this is easy, but with centralized, highly automated methods for tracking, troubleshooting, and preventing application errors and failures across the board, IT can make progress toward more of a “zero-defect” environment.


Mark Kremer is CEO of Precise Software.

3 Comments

  • When quality guru W. Edwards Deming said, "We can no longer live with commonly accepted levels of delays, mistakes, defective materials, and defective workmanship," he was also talking about creating and designing systems for customers (online businesses) that were capable of growing and expanding with the acceleration of that business, i.e., anticipating and going beyond the needs of the business "as it is now."

    Way too many system designs presented to SMBs lack a very valuable tool that is crucial to preventing losses from "delays, mistakes and defective workmanship," especially when the business grows faster than anticipated.

    A Server Load Balancer

    Inexpensive, prudent….and Deming-like. Even outside of New Mexico.

    Indeed, this Publication has spoken highly of Server Load Balancers!!

    http://www.technewsworld.com/rsstory/69767.html?wlc=1286506625

    Just my opinion.

    • By suggesting that a Server Load Balancer is a cheap option to resolve the issue, it sounds to me like you have already decided where the problem is without really knowing.

      I have seen real-world applications that were already load balanced experience performance problems. If the load from users is not the problem, how will a load balancer help?

      There is more focus today on getting an application written as quickly as possible, rather than writing an application to run as quickly as possible. Poor code affects the utilisation of servers – both where the code is running and other servers that utilise that code through web services, SQL, etc. Find the root cause of the problem and fix it, and you may find that those underutilised servers may suddenly become more utilised!

      • Before the Load Balancer can become useful, the application must work reliably.

        No argument..

        BUT….

        When the app is working, the users will come. When the users come, traffic/user volume can become an issue.

        High volumes cause slowdowns and breakdowns. Thus the load volume becomes an issue unless you oversupply with extra hardware, bandwidth and servers – quantity and quality.

        A good Boy Scout is prepared. But today even the Boy Scouts have an issue with money. They make do with what they have, which most of the time is not a lot of money.

        And NOBODY throws ANYTHING away.

        INEVITABLY, SMBs buy a couple of new servers for the anticipated extra load. But JUST AS INEVITABLY, SMBs keep using the older servers as well.

        A good, inexpensive server load balancer like those offered by KEMP Technologies, F5 or Loadbalancer.org enables the Boy Scout to mix the powerful new tools with the old standbys, saving money while offering larger capacity for getting the users to the app they need.

        Just my opinion.
