testing vs agility

September 24th, 2007 . by maria

Gartner Group vice-president Richard Hunter, also author of "IT Risk: Turning Business Threats into Competitive Advantage", defines IT risk as "anything that poses a risk to either the availability, access, accuracy or agility of a business". He ranks availability as most important. "Dollars spent on availability are dollars well spent," he said.

One thing that jumps out at me is "agility". I've found that as our company grows, we have been losing agility. As we incorporate more checks and balances and testing into our application development process, we lose the ability to pop out new applications or new features as needed. As we standardize on certain software and hardware, we lose the willingness to incorporate one-off items where business processes call for them – at least not without a lot of red tape and justification. Even processes themselves, such as those surrounding purchasing and procurement, can hamper agility.

While I agree that agility is important for a business, I am struggling to see how businesses that are trying to please auditors and improve processes can also maintain agility. Hunter himself seems to recognize this, too: "IT risk is related to IT value. It would be short-sighted not to recognise either value or risk," Hunter explained. There are risks associated with agility. When we roll out an application with little testing, it may fail. We allow a department to use a consumer digital camera, but our Helpdesk struggles to offer assistance because they aren't familiar with the product, and we find there is a hidden cost – it isn't as durable as the models we usually buy. A new vendor doesn't deliver quickly enough, and our project is delayed. We spend weeks taking calls from laptop users before we determine that the new laptop battery fits too snugly and doesn't always charge. We lose agility when things go through the layers of testing necessary to prevent most of these failures, but we lower our risks.

This article goes on to explain that IT managers need to be better able to explain risks to executives. But again, it is couched in an explanation of what happens when a server fails, which is a function of availability. Measures that ensure availability are fine, but I think most businesses already see the value of availability, access, and accuracy. It is agility that has taken a beating. The article mentions a loss of agility due to government regulation, but offers no suggestion that this trade-off should be weighed, and no advice on how to regain agility.

So I'll offer my advice: build transparency and trust into your IT department instead of processes and red tape. Give someone personal responsibility for the project, and make sure they know they will receive the calls if it fails; they're likely to do a much better job of preventing the failure. A bunch of red tape, lab testing, and good vendor references doesn't guarantee that a new barcode scanner will work. When it fails, who feels personally responsible for getting it fixed? Nobody. "I checked the vendor's references," says one guy. "It tested fine in the test lab," says another. No one person is personally invested in the project, so why would they jump up and dig in when it fails? But when someone has a cradle-to-grave relationship with a project, they feel personally responsible when it fails, and come running to the rescue. Give techs the ability to reject layers of testing and red tape when they can explain why they aren't necessary, and authorization to utilize a complete test lab when they are, and my guess is that there will be just as much testing, but even better results.
