In its role as a monitoring agent, the eValid Website Analysis and Test Suite can be thought of as a robot that can reproduce any kind of human interaction with a website, including input, validation, and timing/performance checks. During automated playback the eValid robot collects a variety of data and statistics for later analysis.
The most important feature of eValid is that the test robot actually is a full-featured web browser -- and because of this, the tests run by eValid are indistinguishable from those of an actual user.
Note: These questions, raised by Peter Goldin, a regular contributor to bsmDigest (Katherin Chalmers, Editor), are answered from the perspective of eValid as a monitoring agent only, leaving aside such eValid capabilities as comprehensive site analysis, regression testing, and server loading.
End users of websites should receive timely, responsive, informative, and correct information.
In eValid's view those needs translate into: (i) timing of responses and analysis of multi-step user sequences; and (ii) validation of the content and properties of response pages.
A typical eValid test script is a simple text file with commands and parameters that tell the eValid browser what to do and what to measure and validate. As it plays back, eValid generates a complete event log plus various subsets of it, each configured for special purposes.
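eValid's actual script command syntax is not reproduced here, but the record-and-playback idea can be sketched conceptually. The following Python sketch (with entirely hypothetical command names -- not eValid's syntax) shows a playback loop that interprets a command-per-line script, logs timing for each step, and fails if a validation does not hold:

```python
import time
import urllib.request

def default_fetch(url):
    """Fetch a page over HTTP(S); the real eValid drives a full browser instead."""
    return urllib.request.urlopen(url).read().decode("utf-8", "replace")

def play_back(script, fetch=default_fetch):
    """Interpret a command-per-line script, recording (command, seconds)
    for each step, in the spirit of a playback event log.
    Command names here are illustrative only, not eValid syntax."""
    log, page = [], ""
    for line in script.strip().splitlines():
        cmd, _, arg = line.partition(" ")
        start = time.monotonic()
        if cmd == "GotoLink":
            page = fetch(arg)            # navigate to a page
        elif cmd == "ValidateText":
            # fail the test if the expected text is missing
            assert arg in page, f"validation failed: {arg!r} not on page"
        log.append((cmd, round(time.monotonic() - start, 3)))
    return log
```

Because the fetch step is injectable, the loop can be exercised without any network access; a real monitoring track would of course drive live pages.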
Checking that a server responds to a ping, or that it delivers a page via an HTTP/S request, merely confirms that the machine is running (turned on) or serving pages.
Running an eValid test, which fully emulates a human user working from a client machine actually at the end of the "last mile", is the best possible assurance that a web application really is working as it is supposed to.
Pretty much any industry that relies on its website or web application to drive its business, or to actually handle business, needs advanced end-user application monitoring. (See the Medical Analogy description.)
Yes, in some cases companies don't immediately appreciate the value of having an actual robot doing the job of simulating a human doing useful work. But they usually come around after they see eValid in operation.
eValid can measure and assess a wide range of factors that affect a website or web application: performance, content verification and validation, complexity monitoring, link checking, response timing, component download timing, and more. eValid can document virtually any detail of actual website or web application behavior. (See the eValid Validation Modes description.)
eValid tests end-user results -- content and performance -- by simulating an actual end user. It is totally realistic: the server can NOT tell that it is not a real IE browser with a real human doing the typing and clicking.
Understanding the customer's system is simple: the test engineer makes a recording and eValid keeps a script that, when played back, simulates what the test engineer did during the recording. The "standard" output from eValid establishes the benchmark directly.
There's no intermediate process involved: eValid is the user!
A single Windows PC used for monitoring can handle dozens of tracks of eValid playbacks, and each monitoring track can handle dozens of tests per hour (e.g. at 5-minute intervals). How many depends on how complex the tests are -- i.e. how long they take.
A Windows PC fully dedicated to monitoring thus can easily handle 50-500 playbacks per hour depending on test complexity and test length (execution time). (See the eValid e-Business Transaction Monitoring Services description.)
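The arithmetic behind those figures is straightforward. As a small illustration (the helper name is ours, not an eValid function), each track launches one playback per interval:

```python
def playbacks_per_hour(tracks, interval_minutes):
    """Playbacks launched per hour when each monitoring track
    fires one test every `interval_minutes` minutes."""
    return tracks * (60 // interval_minutes)

# e.g. 12 tracks at 5-minute intervals:
#   12 tracks * 12 tests/hour = 144 playbacks/hour
```

So a machine running a dozen to a few dozen tracks at 5-minute intervals lands comfortably inside the 50-500 playbacks-per-hour range quoted above, with longer tests pushing the figure toward the low end.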
eValid tests are 100% real. To the server it appears there is simply an additional user making requests.
If an eValid test fails -- e.g. by taking too long due to a server slowdown -- then the customer knows that there is a serious problem. Customers can then RELY on the veracity of the data: if eValid signals a problem, there really IS a problem.
From a user's perspective, "real time" is the only time there is!
A delay of about 6 seconds in delivering a web page to the user appears to be the threshold beyond which visitors invariably "click away".
eValid monitoring of page response, measured at the client's desktop, is very important. Because eValid is an actual browser, the times it measures accurately reflect real elapsed time. This means that an eValid monitoring-mode test that thresholds on a 6-second response time really WILL measure what actual users experience.
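The thresholding idea can be sketched in a few lines of Python (a conceptual illustration, not eValid's implementation): time one playback step with a wall-clock monotonic timer and compare it against the response limit.

```python
import time

RESPONSE_LIMIT_SECONDS = 6.0  # the "click away" threshold discussed above

def timed_step(action, limit=RESPONSE_LIMIT_SECONDS):
    """Run one playback step and return (elapsed_seconds, within_limit).

    `action` stands in for a real step such as a full page fetch
    in the browser; a monitoring track would flag any step for
    which within_limit is False.
    """
    start = time.monotonic()
    action()
    elapsed = time.monotonic() - start
    return elapsed, elapsed <= limit
```

In a real monitoring configuration the failing case would trigger the alerting path (graphical display, email, or pager) described below.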
As eValid plays back a script, it generates a complete event log plus various subsets of it, each configured for special purposes.
All eValid logs are CSV and/or spreadsheet compatible, so they can be processed in a variety of ways to create presentations of key information. (See the eValid e-Business Transaction Monitoring Services description.)
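Because the logs are CSV-compatible, downstream analysis needs nothing more than standard spreadsheet or scripting tools. The field names in this sample log are hypothetical (not eValid's actual log layout); the sketch computes a total playback time and identifies the slowest step:

```python
import csv
import io

# Made-up event log in CSV form; field names are illustrative only,
# not eValid's actual log columns.
SAMPLE_LOG = """\
step,command,seconds
1,GotoLink,1.42
2,SubmitForm,2.75
3,ValidateText,0.08
"""

def summarize(csv_text):
    """Return (total_seconds, slowest_command) for a playback log."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    times = [float(r["seconds"]) for r in rows]
    slowest = max(rows, key=lambda r: float(r["seconds"]))
    return round(sum(times), 2), slowest["command"]
```

The same few lines generalize to the trend reports and presentations of key information mentioned above.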
In our experience eValid is most often employed as an application monitoring agent. In most cases, a complex test with multiple steps runs against internal timers, so that errors occur if specified actions are not completed within their time limits, or if completed actions do not yield validated result values.
This kind of functional confirmation is intended NOT to fail, UNLESS the server is in serious trouble. If there are failures eValid can inform the customer graphically, or with email or pager alerts, depending on the configuration.
Usually one wants to know only about failed tests, and in these cases the eValid detailed event log is the best indicator of where the failure occurred in the playback sequence.
In practice the biggest issue seems to be architecting tests that are realistically complex but that also exercise critical parts of the IT infrastructure.
For example, in an e-commerce type application it is important to have a test that logs in, selects items for the shopping cart, proceeds to the checkout, and actually tries to make a purchase.
That sounds simple enough, but to get the most benefit it is a good idea to run that kind of test in a variety of ways, with each variation intended to stress a different part of the infrastructure. Sometimes this requires both skill with eValid test recording AND very detailed knowledge of the application's IT infrastructure architecture.
Yes.
eValid has been integrated with a number of network monitoring systems, including the widely used Nagios system.
In addition, we offer transaction monitoring services that are based on eValid operation. (See the eValid e-Business Transaction Monitoring Services description.)
BSM with eValid monitoring is critical if you want to have measurable, quantifiable, repeatable assurance that a web application really IS working coherently and correctly!
Yes.
eValid makes it possible to measure how a web application actually works, with a high degree of accuracy and realism.
eValid provides value by ensuring web application operation at a very low Total Cost of Ownership (TCO) and a very high Return On Investment (ROI).