Web App: ready for testing

On Friday morning I received a call from Tony, who supervises the coding team working on the web application I’ve been discussing here. While the call was mainly about ways to resolve a coding issue his tester, Stacy, had discovered, we also discussed giving my test team an opportunity to work with the app. We quickly agreed that, pending Stacy’s successful test of some minor changes, I’d be in Tony’s office with my test team on Monday.

This was anticipated; I wasn’t certain of the exact schedule, but I knew we were close. Earlier in the week I’d made arrangements with the unit supervisor, Caroline, to borrow two staffers for testing. She offered me June and Dawn, two excellent testers I’d worked with when the in-house interface for the project had gone on-line. The three of us met briefly with the project manager on Friday afternoon. I gave them a brief overview of the testing setup and made arrangements to carpool to Tony’s office, where a makeshift testing site is available. That’s the plan for tomorrow.


Ugly History

I mentioned in an earlier essay that this project is (now) over a year behind schedule. This will be our third effort to get past the User Acceptance Test hurdle.

Test Session One

The first testing session occurred just over a year ago, and was a total disaster.

My testers: A group of analyst types who were chosen for their testing experience; none had more than casual knowledge of the work the application supports. The theory behind these selections was that I (as testing coordinator) would find out how difficult the app would prove to learn.

Everything went wrong. Everything. The web front end didn’t reliably communicate with the back end, and when the two did talk it was often in pidgin. We’d report to the testing center every morning and quickly demonstrate to the web vendor that things weren’t properly connected. Karl, their lead coder, would start making phone calls, and we’d soon have the two vendors and the network guys arguing about the problem. By noon, things would be “working.”

Yeah. Right. I did indeed learn from my experienced testers where they had trouble figuring out the system. We documented those trouble spots and passed the notes to the coders. We also demonstrated, and documented, that many of the possible paths through the application led to dead ends, to incorrect responses, to badly laid-out documents, to obscure errors, to total chaos. By Wednesday it was clear the session was a failure; by Friday I’d written an angry memo to my management summarizing the shortcomings of the application, the test environment, and the coding team.

Odd fact: Throughout this project, we’ve always had more reliable connections through the production system than through the development system. Prod has key servers in IBM’s Boulder facility, while every server in the Dev system is within a few hundred feet of my desk. Go figure….
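Given how many mornings started with proving the link was down, a scripted connectivity check could have spared us the ritual. Here’s a minimal sketch of the idea; the health-check URLs are hypothetical stand-ins, since the real endpoints were whatever the vendors exposed:

```python
# Pre-session connectivity check: a sketch, not our actual tooling.
# Both endpoint URLs are hypothetical placeholders.
import sys
import urllib.request

ENDPOINTS = {
    "web front end": "http://webapp.example.gov/health",   # hypothetical
    "back end":      "http://backend.example.gov/status",  # hypothetical
}

def check(name, url, timeout=15):
    """Return True if the endpoint answers with a 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            print(f"{name}: HTTP {resp.status}")
            return 200 <= resp.status < 300
    except OSError as err:  # covers URLError, timeouts, refused connections
        print(f"{name}: FAILED ({err})")
        return False

if __name__ == "__main__":
    results = [check(name, url) for name, url in ENDPOINTS.items()]
    sys.exit(0 if all(results) else 1)
```

Run it before the testers arrive; a nonzero exit code means nobody needs to drive anywhere.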

After the holidays, and the state’s IT reorganization, the project got reassigned to an internal coding team in January.

Test Session Two

By July, the new coding team had things pretty much under control. We expected problems, and knew there was a major gap in the system, but they needed us to assess the state of the application. I recruited a new test team.

My testers: The unit’s telephone support staff. The theory this time was that we needed staff who were experts on the underlying system, and that using the telephone staff would have the incidental, beneficial effect of teaching them how the web application worked. This will, we anticipate, help them when the users start calling for help.

How things went: OK. We continued to have connectivity problems, though less severe ones; we also benefited from having competent technical staff at hand to evaluate the problems and help solve them. After three days we’d found a few bugs and demonstrated that the expected problems were indeed problems, but we’d satisfied ourselves that the system worked reasonably well. (Incidentally, we also demonstrated that six instances of the same large document-retrieval query hitting the system simultaneously brought the entire system to its knees. An important Lesson Learned; I’ll not repeat it tomorrow.)
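If I ever want to reproduce that pile-up deliberately (in Dev, not in tomorrow’s session), the probe is only a few lines. A sketch, with a hypothetical URL standing in for the real document-retrieval query:

```python
# Concurrency probe: fire N copies of the same heavy query at once
# and time each one. The URL is a hypothetical placeholder.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

QUERY_URL = "http://webapp.example.gov/docs?query=large-report"  # hypothetical
N = 6  # the count that brought the system down in Session Two

def timed_fetch(i):
    """Fetch the query once; return (index, elapsed seconds, outcome)."""
    start = time.monotonic()
    try:
        with urlopen(QUERY_URL, timeout=300) as resp:
            resp.read()
        outcome = "ok"
    except OSError as err:
        outcome = f"failed: {err}"
    return i, time.monotonic() - start, outcome

with ThreadPoolExecutor(max_workers=N) as pool:
    for i, elapsed, outcome in pool.map(timed_fetch, range(N)):
        print(f"request {i}: {elapsed:6.1f}s  {outcome}")
```

Watching the per-request times climb as N grows tells you where the ceiling is.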

As we wrapped this session up, the coders began to suspect that there was a design problem. I’ve already discussed that.

Meantime, in the Real World

We solicited for real-world testers last fall, and one of them has had access to the system since February. She promptly identified a handful of presentation issues and reported a number of other problems. We’ve since added other testers, and now have about 35 folks on-line. Several have been very helpful; one, in particular, found a major flaw, a missing chunk of code, that my teams had somehow managed to miss: several of the screen controls had no code attached….

That’s why you run pilots, and beta tests, and suchlike. There’s always something waiting to bite you; best if volunteers find it first.
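The dead controls also suggest a mechanical companion to the volunteers: a smoke pass that clicks every control and flags the ones that do nothing. A rough sketch, assuming Selenium WebDriver and a hypothetical app URL:

```python
# "Click everything" smoke pass: a sketch assuming Selenium WebDriver.
# APP_URL is a hypothetical placeholder for the real application.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

APP_URL = "http://webapp.example.gov/"  # hypothetical

driver = webdriver.Firefox()
driver.get(APP_URL)
count = len(driver.find_elements(By.TAG_NAME, "button"))

for index in range(count):
    driver.get(APP_URL)  # reload so each control starts from a fresh page
    button = driver.find_elements(By.TAG_NAME, "button")[index]
    label = button.text or f"button #{index}"
    before = driver.page_source
    button.click()
    time.sleep(1)  # crude wait for any response to render
    if driver.page_source == before:
        print(f"suspect: clicking {label!r} had no visible effect")

driver.quit()
```

It’s blunt (a control might legitimately do nothing, or its effect might not show in the page source), but a control that never changes anything is exactly what the volunteers found.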

Test Session Three

Tomorrow! I requested two experienced testers, and got the best in the unit. June hasn’t seen the web app yet, but both understand the underlying system as well as anyone and are key troubleshooters for problem cases. My plan: We’ll play with the app tomorrow morning, then start running through the test script (another story, for some other time) after lunch. I expect that to take us through Tuesday morning. We’ll spend Tuesday afternoon away from the script, pounding on a couple of problem areas. By then, we’ll certainly know where we stand.

There’ll be some loose ends, and we’ll have to test those next week. But we’re getting close to Launch….
