
eValid -- Michael J. Hunter Blog Entry

Introduction
Here is an article that originally appeared in Michael Hunter's Book of Testing blog in June 2008. Apparently that blog is no longer available on the Dr. Dobb's Journal website, so the article is included here for reference.

  • About EFM...

    As head of Software Research, Inc., Edward Miller has been a software test tool developer for nearly 20 years. His tools include the TCAT coverage analyzer series, the CAPBAK family of test recording and playback engines, the TestWorks automated client-server test suite, and most recently the eValid web application test engine.

    He has been a contributor to the software testing community for many years. He organized the original Florida Software Testing Workshop in 1978 and the 1st International Conference on Workstations. More recently he organized and chaired a total of 15 QualityWeek (QW) conferences in San Francisco, plus 5 QualityWeek/Europe (QWE) conferences; together the QW/QWE conferences drew a total attendance of over 25,000.

    Widely published in ACM and IEEE conferences and publications, he was a seminar leader on Software Testing and Automated Tools for nearly a decade.

    Here is what Edward has to say:

  • Answers to Five Questions

    1. DDJ: What was your first introduction to testing? What did that leave you thinking about the act and/or concept of testing?

      EFM: Probably my first introduction to testing was when I was working on writing programs to test student programs on an IBM 7094. Well, "trying to write" may be more accurate. That was before I learned what "undecidable" meant!

      Later, in a much more serious vein, for a time in the early 1970's I worked on technology for verifying the quality of terminal guidance software for interceptor missiles -- missiles that might actually have had nuclear release authority for the warheads intended for anti-ICBM defense. It was such a different era -- when the Soviet Union as a threat was taken very, very seriously -- and missile defense was a serious, serious business.

      And while it was very challenging work, it was also kind of frustrating because of the enormous technical difficulty.

      One of the most difficult things for me to do, as I recall, was to explain to high-ranking military types -- Generals with two or three or more stars! -- what ACTUALLY was going to be involved in confirming the quality, reliability, and integrity of some of that missile-borne software.

      By a curious twist, however, the extreme difficulty we revealed in trying to confirm the software doing that kind of guidance and control work led -- I'm sure only in small part, and in concert with work done by a lot of other teams -- to the USA signing on to the ABM Treaty. That was a good thing, for sure!

      Then later, in the Reagan era, "Star Wars" popped up and was again used to help bring about a kind of win for the Western Alliance with the demise of the Soviet Union.

    2. DDJ: What has most surprised you as you have learned about testing/in your experiences with testing?

      EFM: In a word, how really hard it is to test applications automatically.

      For example, back in 2000, we were near the end of the client-server test tool cycle. Our product line, TestWorks, had been a success, but we knew things were changing. So, we looked into the crystal ball to try to see the future and -- no big surprise here, not rocket science -- the future we saw was "the web."

      So we said, OK, if you have a web browser enabled application, what is the best way to test it? And the answer was pretty easy, and really obvious: test it from a browser.

      That was the easy part, defining the outside concept. Just test the web application from the browser and you're done.

      Some seven years later we have now released eValid V8, an IE-clone browser that can test ANY kind of web browser application: websites, web services, e-commerce applications, etc. Because the test functionality is all inside the eValid browser, the overhead for performing functions is very low, the fidelity of what is done is very high, and the ability to run tests reliably -- even when the web pages change around a lot -- is very good.

      But this isn't a product pitch at all. The real point here is that we found out -- in many cases the hard way! -- that it is a lot more complicated to do a good job of automated testing of a web application than you'd think.

      Many times I've said to myself, "If we knew then how complicated it was going to be to get this product going, we might not have done it." Maybe we benefited from a kind of technical blindness to the underlying complexity of web applications, but that's water over the dam. eValid is now a working engine, and it functions pretty much as we intended and hoped.

      You never really know exactly what's actually going on "under the hood" of a web browser until you try to build test functionality into it -- until you jump under the hood and try to get control of it! That's when the fun begins, and when things that seem obvious turn out to be very, very subtle and difficult!

      Believe me, it has been a real challenge. As we often say around here, "We don't get bored...".

      But it turned out for the best. Our technology now has a U.S. Patent (with more patents applied for), and there is very strong interest in the technical community in the browser-based approach to testing web applications.

    3. DDJ: How would you describe your testing philosophy?

      EFM: There's an old line that goes like this: "In testing, good news is good news, and bad news is good news. The only news in software testing that's bad news is NO news!"

      In other words, you have to REALLY pay attention to what you're doing, what you see, and what you observe! Everything may turn out to be important, and it takes a very sharp eye and very careful thinking to do testing well. And if all your tests pass, you learn nothing; worse, if all your tests fail, you learn even less.

      Software testing is, in essence, structured experimentation -- a very disciplined kind of experimentation. Certain stimuli produce certain responses in the thing you're testing -- the UUT (Unit Under Test) -- and the goal is to somehow characterize whether those responses are OK or not, relative to some understanding of what the object is supposed to do.

      It's great when there is a "formal specification" you can test against, but most of the time that's unavailable, and instead you have to perform your experiments against something of lesser context and detail. In that case, instinct, experience, good judgement, and -- to be honest -- quite a bit of luck are needed to get a good result.
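
      To make that stimulus/response idea concrete, here is a minimal sketch of testing as structured experimentation. It's plain Python, not eValid's own notation, and parse_date() is a made-up UUT whose expected values stand in for the spec:

        # Stimulus/response experimentation against a hypothetical UUT.
        def parse_date(text):
            """Made-up unit under test (UUT): "Y-M-D" string -> (y, m, d)."""
            y, m, d = text.split("-")
            return (int(y), int(m), int(d))

        # Each experiment pairs a stimulus with the response we expect,
        # per our understanding of what the UUT is supposed to do.
        experiments = [
            ("2008-06-15", (2008, 6, 15)),
            ("1999-01-01", (1999, 1, 1)),
            ("2008/06/15", (2008, 6, 15)),   # sloppy input: what happens?
        ]

        for stimulus, expected in experiments:
            try:
                actual = parse_date(stimulus)
            except Exception as exc:         # an exception is also a response
                actual = exc
            verdict = "PASS" if actual == expected else "FAIL"
            print(f"{verdict}: parse_date({stimulus!r}) -> {actual!r}")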

      Not every test has to produce a failure to be valuable. I think people too often focus on testing to reveal failures rather than testing to confirm operation. Of course, finding and documenting anomalies is important, but that's maybe only 1% of the whole picture of a "mostly OK application."

      Applying these ideas in the eValid technology brought much of the basics of testing into very sharp focus. In the context of testing web browser enabled applications -- which typically are extremely complex and rich with functionality -- even the slightest errors can cause real havoc.

    4. DDJ: What is the most interesting bug you have seen?

      EFM: It seems to me you have to parse 'interesting' in two dimensions: intricacy and value. Intricacy as in 'how hard is it to detect?', and value as in 'how much impact does the defect have?'

      Peter Neumann's "Risks" forum publishes the biggies -- almost always human goofs that have big impact.

      But here is a subtle problem we found that might have taken years to find and might have cost users multiple millions.

      A hardware product we were working on performs a kind of virtualization of web access -- in the virtual private network space. There is software in the device that scans HTML passages that are coming over the wire and has to manipulate them in a non-invasive way. What we found as the result of applying some very tricky test suites to the device was what looked at first like just a simple LAN-transmission anomaly.

      But it was a consistent and repeatable anomaly.

      We finally isolated the problem down to a programming error in one of the HTML processing modules, and from that guesswork we created a set of simple HTML passages that were passed through incorrectly every time.

      This was very hard to detect -- in fact, it was a problem that had been in the product for many years. But as web-page sophistication increased, this particular defect had an increasingly negative effect. We never finalized the total value lost to the client, but it was certainly multiple millions.

    5. DDJ: What do you see as the biggest challenge for testers/the test discipline for the next five years?

      EFM: The quick answer: The Web!

      The web is in many ways a real mess, yet it has enough redundancy and resiliency so that it thrives anyway. But far too many websites and web applications are vulnerable to all kinds of problems: pages that don't work, sloppy JavaScript that eats performance, new technologies that have (ahem, shall we say) "rough edges". (The guilty will need no accuser here!)

      These days it's way beyond "broken links" -- those are there of course -- the issues are at the level of very powerful applications that just don't work reliably enough, or consistently enough. Ask yourself, did you see some website (or web application) in the last week that turned you off? Of course you did; it's unavoidable.

      But if I had to list the web quality issues they'd come down to these:

      1. Web 2.0
        The implications of the Web 2.0 revolution in collaborative websites are going to create some very tough quality and testing challenges. You have dynamic content, global asynchrony, and a raft of other tough issues to deal with.

      2. AJAX
        The hard part about AJAX is that the better it gets, the more "Asynchronous" it gets, and doing QA/QC/testing on something that isn't terribly consistent about when it does what it does is daunting. (A small wait-for-condition sketch appears after this list.)

      3. Web Development Environments
        New systems, like Google's GWT, the Morfik system, and many others like them, make it easy to create big, robust websites. But if you don't do things right, you ALSO wind up with websites that are nearly impossible to do any serious testing on.

      4. Service Level Assurance
        How do you successfully assure that a web application, which could be hosted "in the cloud" or anywhere, really is working correctly? This requires Rich Internet Application monitoring, which is a kind of repetitive functional test process. (A monitoring-loop sketch also appears after this list.)

      5. Competitive Website Comparison
        As web-based commerce moves closer and closer to being the main mechanism of "doing business" it's inevitable that firms will need to figure out how their website compares with the competition.

        Right now that's almost a total unknown! The methods we know about are effective, but they are TOO dependent on human interpretation. Like everything in computing, it's gotta be automated.
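
      Returning to point 2 above: one standard way to cope with AJAX asynchrony is to wait for the observable result instead of a fixed delay. Here is a minimal sketch in Python; the page object and its element_text() method are hypothetical stand-ins for whatever handle your browser-testing tool provides, not an actual eValid API:

        import time

        def wait_until(condition, timeout=10.0, interval=0.25):
            """Poll until condition() is true or the timeout expires."""
            deadline = time.monotonic() + timeout
            while time.monotonic() < deadline:
                if condition():
                    return True
                time.sleep(interval)
            return False

        # Instead of sleeping a fixed 5 seconds and hoping the AJAX update
        # has landed, poll for the visible outcome itself.
        ok = wait_until(lambda: page.element_text("#order-status") == "Confirmed")
        assert ok, "AJAX update never arrived within the timeout"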
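
      And regarding point 4: "monitoring as a repetitive functional test process" can be pictured as a loop that runs one functional probe on a schedule and records the verdict and the timing. A minimal sketch, with a hypothetical URL and content check (a real monitor would drive a full browser session, raise alerts, and keep history):

        import time
        import urllib.request

        CHECK_URL = "https://www.example.com/login"   # hypothetical monitored page
        INTERVAL_SECONDS = 300                        # probe every 5 minutes

        def functional_check(url):
            """One functional probe: fetch the page, time it, check key content."""
            start = time.monotonic()
            with urllib.request.urlopen(url, timeout=30) as resp:
                body = resp.read().decode("utf-8", errors="replace")
                ok = resp.status == 200 and "Sign In" in body   # content oracle
            elapsed = time.monotonic() - start
            return ok, elapsed

        while True:
            ok, elapsed = functional_check(CHECK_URL)
            print(f"{time.ctime()}  {'OK' if ok else 'ALERT'}  {elapsed:.2f}s")
            time.sleep(INTERVAL_SECONDS)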