
Thursday, July 3, 2008

Retrieving Standard Deviation for Transaction Response times with VS2008

One of my plans for the firm I work for is to integrate a nightly automagic LT run with the Continuous Integration effort. I want to automagically compare each night's results against the previous day's and see if anything is amiss. One of the tests I would like to apply is the two-population mean test, which requires the average response time, the standard deviation, and the number of transactions.

VS2008 doesn't provide all of these statistics in its LT summary by default. LoadRunner includes this information in its summary results by default, and extracting the values is quite simple if you publish the results to an Excel spreadsheet.

It's not quite as simple with VS2008, but it can be done with a little effort.

I have configured my LT to record individual metrics to the LoadTest data store so that I can query the tables and extract the information I want. In the examples below I have identified the LoadTestRunId for a given run where I have plenty of metrics to number-crunch. In my nightly build-and-test scenario I envision a simple query that pulls max(LoadTestRunId) to identify the latest run, then extracts the required information for it (something like the sketch below).
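A minimal C# sketch of how that nightly lookup might go; the connection string, database name, and class name are placeholders, and the table is the same LoadTestTransactionDetail table used in the query below:

using System;
using System.Data.SqlClient;

class LatestRunFinder
{
    static int GetLatestRunId()
    {
        // Placeholder connection string -- point it at whichever SQL Server or
        // SQL Server Express instance is holding the LoadTest results store.
        const string connectionString =
            @"Data Source=.\SQLEXPRESS;Initial Catalog=LoadTest;Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Most recent run id, so the nightly job always crunches the latest results.
            SqlCommand command = new SqlCommand(
                "SELECT MAX(LoadTestRunId) FROM LoadTestTransactionDetail", connection);

            return (int)command.ExecuteScalar();
        }
    }

    static void Main()
    {
        Console.WriteLine("Latest LoadTestRunId: {0}", GetLatestRunId());
    }
}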

I wrote this query today to get the information that I wanted:



SELECT WebLoadTestTransaction.TransactionName, 
       LoadTestTransactionSummaryData.Average, 
       STDEV(LoadTestTransactionDetail.ElapsedTime) AS StdDev, 
       LoadTestTransactionSummaryData.Minimum, 
       LoadTestTransactionSummaryData.Maximum, 
       LoadTestTransactionSummaryData.Percentile90, 
       LoadTestTransactionSummaryData.Percentile95, 
       LoadTestTransactionSummaryData.TransactionCount
FROM   LoadTestTransactionDetail INNER JOIN
       WebLoadTestTransaction ON LoadTestTransactionDetail.LoadTestRunId = WebLoadTestTransaction.LoadTestRunId AND 
       LoadTestTransactionDetail.TransactionId = WebLoadTestTransaction.TransactionId INNER JOIN
       LoadTestTransactionSummaryData ON WebLoadTestTransaction.LoadTestRunId = LoadTestTransactionSummaryData.LoadTestRunId AND 
       WebLoadTestTransaction.TransactionId = LoadTestTransactionSummaryData.TransactionId
WHERE  (LoadTestTransactionDetail.LoadTestRunId = 33)
GROUP BY WebLoadTestTransaction.TransactionName, LoadTestTransactionSummaryData.Average, LoadTestTransactionSummaryData.Minimum, 
       LoadTestTransactionSummaryData.Maximum, LoadTestTransactionSummaryData.Percentile90, LoadTestTransactionSummaryData.Percentile95, 
       LoadTestTransactionSummaryData.TransactionCount
ORDER BY WebLoadTestTransaction.TransactionName



The QBE table entries look like this:


And I get the results I want:



TransactionName            Average             StdDev               Minimum Maximum 90th  95th  TransactionCount
someTransaction.Details    0.47510353043101411 0.090671477518885088 0.416   3.695   0.522 0.552 4617
someTransaction.FirstHit   0.44759085986571417 0.077896506427873255 0.384   3.956   0.502 0.526 4617
someTransaction.LookupName 0.51917002382499466 0.082360597958219192 0.45    2.844   0.577 0.605 4617



Huzzah!

I'll end up setting up an automagic comparison routine. A number of years ago I wrote a C# class for doing statistical tests, so I'll probably end up using that for my left/right comparisons and number crunching, something along the lines of the sketch below.
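For what it's worth, here is a minimal sketch of that kind of left/right comparison. This is not the class I wrote back then; the names are placeholders, and it assumes a plain two-tailed z-test, which is reasonable with thousands of samples per transaction:

using System;

// Sketch of a two-population mean test for a single transaction's response time.
// The means, standard deviations, and counts come straight from the summary query.
public static class ResponseTimeComparer
{
    // Critical value for a two-tailed test at alpha = 0.05.
    private const double ZCritical95 = 1.96;

    public static bool MeansDifferSignificantly(
        double meanLeft, double stdDevLeft, int countLeft,
        double meanRight, double stdDevRight, int countRight)
    {
        // Standard error of the difference between the two sample means.
        double standardError = Math.Sqrt(
            (stdDevLeft * stdDevLeft) / countLeft +
            (stdDevRight * stdDevRight) / countRight);

        double z = (meanLeft - meanRight) / standardError;

        return Math.Abs(z) > ZCritical95;
    }
}

The idea would be to feed it last night's and tonight's Average, StdDev, and TransactionCount for each TransactionName and flag anything that comes back true.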

Saturday, June 28, 2008

The Good, The Bad and The Ugly.

I've been really busy with non-load-testing stuff at work and haven't had much of a chance to keep playing around with VS2008 for LT purposes, but I have found out some interesting stuff.

The Good:

You can specify the data store (either SQL Server or SQL Server Express) for storing all the individual hits from web sites and then query that information, which renders what I was doing in the earlier blog post moot. Yay! By default, VS2008 TSTE keeps these results in the SQL Server Express instance that is installed along with Visual Studio. It's pretty cool that you can choose to store results in another location for historical reasons.

Looking around the data in the tables, I'm not sure whether you can group page hits together by transaction ID. I haven't had much of a chance to play around with querying the data, so I'm not sure either way. If it turns out you can't, I can still use a method similar to the one in a previous blog entry and collect the metrics myself to catch page response time as it relates to individual transactions. A bit of a PITA, but fantastic that I can do it if I want. Yay for a flexible framework that allows me, as a tester with a development background, to run around and do this kind of stuff.

One thing I know for sure, though, is that with the ability to query the databases I should now be able to crunch the numbers for transactions and get average, standard deviation, hit counts, etc. The idea is that in a nicely controlled environment I should be able to generate an automagic report for day-to-day automated load test execution and do something like a population mean comparison of transaction times from one test to the next, picking out statistically significant deviations at alpha = 0.05. That'll be kinda neat if it works out.

Now that I think about it, the VS2008 tool should have been reporting these metrics all along. LoadRunner provides basic statistical information by default, and VS2008 should report the same. It is kind of annoying, but I like the fact that I have a published schema that I can use to extract the information that I want (see the link to the VS2008 schema information further down below). So, another plus in the LoadRunner column. [I might really like the VS2008 product, but I do have to be fair in my pro/con comparisons of VS2008 versus LoadRunner.]

I wasn't able to use the statistical modeling at my previous employer because our systems were just too large with too many variables involved. We had communication from the web server to the database server (which was on the same switch so not much of a problem) but we also had a bunch of communications to the Host. The Host was required for testing the web sites and was actually hitting production systems that merely disregarded the requests and sent back bogus results so that we wouldn't kill the mainframe.

Many years ago I tried to get rid of this variability by writing what I called a "parroting proxy server." It was a bit of C# code that was essentially a protocol-agnostic proxy server that would pass messages from point A to point C, where the intermediate point B was the proxy. It would store responses in an in-memory hash, and if the same request came through again it would go ahead and send back the cached response to eliminate the variance we were seeing. It worked out pretty well, but we never got around to actually putting it to use. Unfortunately, in a Fortune 50 company the bureaucracy can be too great, and with Empire Building being what it is, the other managers didn't see what was in it for them, and I suppose I didn't do a very good sales job. So be it.
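The heart of the parroting trick was nothing more than caching responses keyed off the request bytes. A toy sketch of that caching layer (names made up, socket plumbing omitted, not the original code) might look something like this:

using System;
using System.Collections.Generic;
using System.Security.Cryptography;

// Toy sketch of the "parroting" idea: remember the backend's response for each
// distinct request and replay it on repeats, so backend variability drops out.
public class ParrotingCache
{
    private readonly Dictionary<string, byte[]> responsesByRequest =
        new Dictionary<string, byte[]>();

    // forwardToBackend is whatever actually ships the bytes off to the real host.
    public byte[] GetResponse(byte[] requestBytes, Func<byte[], byte[]> forwardToBackend)
    {
        string key = HashKey(requestBytes);

        byte[] cachedResponse;
        if (responsesByRequest.TryGetValue(key, out cachedResponse))
        {
            return cachedResponse;                  // parrot the earlier answer
        }

        byte[] response = forwardToBackend(requestBytes);
        responsesByRequest[key] = response;         // remember it for next time
        return response;
    }

    private static string HashKey(byte[] requestBytes)
    {
        using (SHA1 sha1 = SHA1.Create())
        {
            return Convert.ToBase64String(sha1.ComputeHash(requestBytes));
        }
    }
}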

To flip the switch and log all the data so that you can query it for fun and profit, set the "Timing Details Storage" property in the Run Settings node of the load test editor to "All individual details."

And unlike the LoadRunner database (AFAIK, I could be wrong!), this schema is published with all sorts of handy information located at http://blogs.msdn.com/billbar/articles/529874.aspx. How great is that?

The Bad:

I haven't had enough time to continue my little load testing adventure. I'm finding that my new employer needs to update their processes as they relate to building production-final machines. The current steps are extremely painful. But I suspect it'll be just fine in the long run: I've had some experience with this in the past, and the process can be fully automated and integrated into our build process so that it becomes painless.

The Ugly:

Ok, this is ugly. I tried to write a simple script to augment some stuff that I did last week, only to find that I couldn't bring up an instance of IE with the Web Recorder active, and it is frustrating as hell. I found several posts that essentially say that the "solution" is to delete your login profile and start again. Yeah, that really sucks, Microsoft. I'd like a solution that doesn't require me to blow away my profile and spend hours getting my desktop back into the shape I like. That is a real PITA for sure.

I have three profiles on my machine and I can only generate recorded web tests with one of these profiles. I blew away one of the smaller profiles (not my main profile where I have 2.5+ gig of files) and gave it a try and it was no-go. I still could not record web tests. Yeah, that is bad. Very bad.

I'm not sure what the solution is going to be. In the short term I'm going to have to use my Administrator login to create scripts and then xfer them over to my main login. How craptacular is that? Once I get some free time I'll have to look into this further, but I am dreading it. No fun!

Here are some MS blog entries related to the problem. Perhaps they'll help somebody else, but I haven't seen any love for me yet.

http://blogs.msdn.com/mtaute/archive/2007/11/09/diagnosing-and-fixing-web-test-recorder-bar-issues.aspx
http://blogs.msdn.com/edglas/archive/2007/11/01/web-test-recorder-bar-not-showing-up.aspx

It appears that Fiddler2 might be an interim solution, from what I read in this MS blog post: http://blogs.msdn.com/slumley/pages/enhanced-web-test-support-in-fiddler.aspx.

A PITA? Sure, but it beats not having any solution at all. Fiddler is purt near cool and I've "fiddled" with it a little bit and it is nifty.

Wednesday, June 18, 2008

Loading parameter data from a CSV in a VS2008 coded web test

Today I've been working on how to load correlated data from a CSV file into a coded web test. VS2008 has support for a wide range of data sources for correlation, but I still prefer the good ol' CSV format.

I rag on LR a lot, but selecting a data file and accessing it with the "{pSomeParam}" format is much easier in LR than in VS2008. Details below:

In my little web test I recorded a simple script hitting Google, and I wanted to be able to pass different search terms to Google. Nothing fancy, just enough to figure out how to do what I wanted. This task is pretty trivial in LR.

I created a file by the name of searchTerms.csv that contains the following:

8<------------------
column1
loadrunner
perl
superbase
vs2008
------------------>8

I then added a data source to the web test and converted it over to code to see how I'll need to code things in the future, and this is how it looks:



[DataSource("searchTerms", 
            "Microsoft.VisualStudio.TestTools.DataSource.CSV", 
            "|DataDirectory|\\databindingtowebtest\\searchTerms.csv", 
            Microsoft.VisualStudio.TestTools.WebTesting.DataBindingAccessMethod.Sequential,
            "searchTerms#csv")]
[DataBinding("searchTerms", "searchTerms#csv", "column1", "GoogleSearchTerm")]



The context for the current web test should now be automagically updated with an index entry by the name of "GoogleSearchTerm" that is referenced like this:



request2.QueryStringParameters.Add("q", this.Context["GoogleSearchTerm"].ToString(), false, false);



I don't mind accessing the correlated data via the context; that is no big deal, but setting up the data binding is kind of a complicated PITA. I have to give a nod to LR at this point for the ease of creating params. On the other hand, VS2008 gives me a lot more control over slicing and dicing returned HTML data, so there had to be a trade-off someplace, and I guess this is one of those situations.

Something important that I've learned is that numeric data needs to be double-quoted to prevent the values from being mangled. For example, I have the following entries now:



  [DataSource("lastNames", 

              "Microsoft.VisualStudio.TestTools.DataSource.CSV", 

              "c:\\work\\data\\LRData\\lastNames.csv", 

              Microsoft.VisualStudio.TestTools.WebTesting.DataBindingAccessMethod.Random, //   .Sequential, 

              "lastNames#csv")]

 

  [DataSource("targetServer",

            "Microsoft.VisualStudio.TestTools.DataSource.CSV",

            "c:\\work\\data\\LRData\\targetServer.csv",

            Microsoft.VisualStudio.TestTools.WebTesting.DataBindingAccessMethod.Sequential,

            "targetServer#csv")]

 

  [DataBinding("lastNames", "lastNames#csv", "lastName", "lastName")]

  [DataBinding("targetServer", "targetServer#csv", "TargetServer", "targetServer")]



The IP address that I want to hit is bound to the context entry for "targetServer."

I found that if I have the entry as:

8<----------------------
TargetServer
10.0.0.100
---------------------->8

If the value is not quoted, it gets mangled into "10.001." Just adding double quotes around the value allows me to use the IP address as I originally intended.
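With the quotes added, the file looks like this:

8<----------------------
TargetServer
"10.0.0.100"
---------------------->8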



string targetWebServer = this.Context["targetServer"].ToString();
WebTestRequest request1 = new WebTestRequest("http://" + targetWebServer + "/someVDir/somePage.aspx");



Knowing is half the battle. (GI Joe!)

Tuesday, June 17, 2008

Selecting a random button in a grid with VS2008

I am finally getting into writing virtual users with the VS2008 VSTE tools. The learning curve is pretty steep, but the amount of control I get is pretty darn cool, so I think the pain points will be worth it.

Something that has always been a pain with LR is randomly selecting from some parsed HTML to randomize the next step in a vuser script. Sure, you can use web_reg_save_param with "Ord=All" a la:

web_reg_save_param ("someParam", "LB= value=\"", "RB=\"", "Ord=All", LAST);

and pull it all into an array parameter, and then iterate over the entries in the parameter by utilizing sprintf() a la:




web_reg_save_param ("someParam", "LB= value=\"", "RB=\"", "Ord=All", LAST);

for (i = 1; i <= atoi(lr_eval_string("{someParam_count}")); i++) {
  sprintf(someParamValue, "{someParam_%d}", i);
  lr_output_message("%s", lr_eval_string(someParamValue));
}




That way of accessing the parameter array info has always seemed like a PITA to me (maybe I'm in the minority with this thought?). Want to substring the output even further? Even more of a PITA, since C doesn't have any quick and friendly string-handling functions (substr(), left(), ltrim(), etc.).


But, you have to be able to do it to get the job done.

Today I needed to basically select a random customer view button from a grid that was returned from this WebTestRequest hit:




   1:  WebTestRequest request3 = new WebTestRequest("http://xxx.yyy.zzz.iii/someWebPage.aspx");
   2:  request3.Method = "POST";
   3:  FormPostHttpBody request3Body = new FormPostHttpBody();
   4:  request3Body.FormPostParameters.Add("__EVENTTARGET", this.Context["$HIDDEN1.__EVENTTARGET"].ToString());
   5:  request3Body.FormPostParameters.Add("__EVENTARGUMENT", this.Context["$HIDDEN1.__EVENTARGUMENT"].ToString());
   6:  request3Body.FormPostParameters.Add("__LASTFOCUS", this.Context["$HIDDEN1.__LASTFOCUS"].ToString());
   7:  request3Body.FormPostParameters.Add("__VIEWSTATE", this.Context["$HIDDEN1.__VIEWSTATE"].ToString());
   8:  request3Body.FormPostParameters.Add("__VIEWSTATEENCRYPTED", this.Context["$HIDDEN1.__VIEWSTATEENCRYPTED"].ToString());
   9:  request3Body.FormPostParameters.Add("__EVENTVALIDATION", this.Context["$HIDDEN1.__EVENTVALIDATION"].ToString());
  10:  request3.Body = request3Body;
  11:  ExtractHiddenFields extractionRule2 = new ExtractHiddenFields();
  12:  extractionRule2.Required = true;
  13:  extractionRule2.HtmlDecode = true;
  14:  extractionRule2.ContextParameterName = "1";
  15:  request3.ExtractValues += new EventHandler<ExtractionEventArgs>(selectRandomButton);
  16:  request3.ExtractValues += new EventHandler<ExtractionEventArgs>(extractionRule2.Extract);
  17:  yield return request3;
  18:  request3 = null;



The WebTestRequest has an event that allows coders to write their own code for extracting data from the HTML output. I wrote the code below, which is wired up in the code above at line #15:



   1:  void selectRandomButton(object sender, ExtractionEventArgs e) {
   2:    if (e.Response.HtmlDocument != null) {
   3:      List<String> buttonNameList = new List<String>();
   4:      foreach (HtmlTag tag in e.Response.HtmlDocument.GetFilteredHtmlTags(new string[] { "input" })) {
   5:        if (tag.GetAttributeValueAsString("type") == "submit") {
   6:          buttonNameList.Add(tag.GetAttributeValueAsString("name"));
   7:        }
   8:      }
   9:      if (buttonNameList.Count > 0) {
  10:        Random randomizer = new Random();
  11:        e.WebTest.Context.Add("RANDOMCUSTOMERBUTTON",
  12:                               buttonNameList[randomizer.Next(buttonNameList.Count)]); // Next(n) already returns 0..n-1
  13:        e.Success = true;
  14:      } else {
  15:        e.Success = false;
  16:        e.Message = "No entries were found for customer view buttons.";
  17:      }
  18:    } else {
  19:      e.Success = false;
  20:      e.Message = "No HTML to work against!";
  21:    }
  22:  }



Take a gander at the loop at line #4. In that loop I'm inspecting a group of elements, searching for the data that I want. No need to specify a dozen calls to web_reg_save_param only to reference said params via lr_eval_string() and strcmp(). That's right, nice, simple, easy-to-read code! It's all there. I can spin, fold, and mutilate the information all I want! On top of that, I have access to .NET containers, so I don't have to write my own doubly linked list for storing data or declare a square array in advance (I did a lot of square arrays because I am a slacker like that). How sweet is that?!

While it might seem more complex at first, I believe the extra control I get over the selection of data is fantastic. I can think of several times in the past where I've had to write more complicated code than I wanted for slicing and dicing HTML, and this method would have made things so much easier for me.

And I found a cool code formatting site today located at http://www.manoli.net/csharpformat/format.aspx. It took a little work getting the CSS stuff taken care of (after all, I'm not a GUI whiz) but it was worth it for sure.