
Loadtesting best practices – Part 2


This is the second part in a series of two about loadtesting best practices.

The first part focused on the “basics” of loadtesting, most of which were about preparation. You can find the first part here.

In this second part I’ll focus on some more advanced topics that are useful in a later stage of the process.

 

8 – First time setup

The first time a user logs in to a Windows environment, or the first time an application is used, a first-time setup might be displayed. Since this is a first-time setup, it happens only once for each user.

When you’re automating a test with simulated user actions you usually don’t want to test the first-time setup; you want to test the application itself. It is therefore recommended to prevent the first-time setup from appearing, usually by setting a few registry keys.

Common first-time setups are the initials prompt in Microsoft Office and the first-run wizard in Internet Explorer (or Firefox).
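As an illustration, here is a minimal sketch in Python (using the standard winreg module) of pre-setting such registry keys for the test user. The Office version number in the path and the user details are assumptions for illustration; verify the keys against your own environment.

```python
# A minimal sketch, assuming a Windows host: suppress two common first-run
# dialogs by pre-setting registry values in the test user's hive.
import winreg

def set_value(key_path, name, value, value_type):
    """Create/open a key under HKEY_CURRENT_USER and write a single value."""
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
        winreg.SetValueEx(key, name, 0, value_type, value)

# Skip the Internet Explorer first-run wizard.
set_value(r"Software\Microsoft\Internet Explorer\Main",
          "DisableFirstRunCustomize", 1, winreg.REG_DWORD)

# Pre-fill the Microsoft Office name/initials prompt.
# The version number (16.0) and the user details are assumptions; adjust them.
office = r"Software\Microsoft\Office\16.0\Common\UserInfo"
set_value(office, "UserName", "Test User", winreg.REG_SZ)
set_value(office, "UserInitials", "TU", winreg.REG_SZ)
```

Run this as the test user (for instance in a logon script) before the simulated actions start.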

Before you do a full-blown test, test the complete script again with a new user. While creating the script you have probably completed the first-time setup yourself (or even changed some settings…).

 

9 – Assumptions

It is hard not to make any assumptions, but each assumption you make can lead to an unexpected result. Try to think about what you expect to happen, and then about what you expect not to happen. Are you really sure?

Before giving some examples of assumptions, here are two quotes to remember:

“Assume makes an ass out of u and me (ass-u-me)”
“Assumption is the mother of all fuck-ups”

So in short: don’t be an ass, and don’t fuck up due to assumptions.

The most common assumptions are about timing: the time between actions, such as launching an application and then using it. Another common assumption is that each step in a series of actions succeeds and that you don’t have to check everything. You should.
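For example, instead of a fixed sleep between steps you can poll for the expected result of each step and fail loudly when it never appears. A minimal sketch, assuming a Windows host; the window title is a placeholder for illustration:

```python
# A minimal sketch: verify each step instead of assuming a fixed sleep
# was long enough.
import ctypes
import time

def window_exists(title):
    """True if a top-level window with this exact title exists (Win32 API)."""
    return ctypes.windll.user32.FindWindowW(None, title) != 0

def wait_for(condition, timeout=60.0, interval=0.5):
    """Poll condition() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# The window title is an assumption; use whatever your application shows.
if not wait_for(lambda: window_exists("Document1 - Word")):
    raise RuntimeError("Word window not found within 60s - aborting this step")
```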

 

10 – Load changes everything

Applications tend to react in the same manner each time you use them; under heavy load this changes. Since the application relies on available resources, and these can become scarce due to resource contention, you get unexpected behaviour, usually caused by timeouts and assumptions within the application.

There is no need to accommodate weird error messages, unless that’s what you’re testing. You should, however, accommodate slow response times and unresponsive applications.
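One way to accommodate this is to distinguish “slow” from “broken”: measure how long each step takes, warn above a soft threshold, and only fail the step above a generous hard timeout. A sketch reusing the wait_for() and window_exists() helpers from the previous example; the thresholds and window title are assumptions:

```python
# A minimal sketch: under load, record slow responses instead of failing them.
import subprocess
import time

SLOW_THRESHOLD = 30.0    # warn above this (assumption)
HARD_TIMEOUT = 300.0     # only fail the step above this (assumption)

def timed_step(name, condition):
    """Wait for condition() and return the elapsed time; warn when slow."""
    start = time.monotonic()
    if not wait_for(condition, timeout=HARD_TIMEOUT):
        raise RuntimeError(f"{name}: no response within {HARD_TIMEOUT}s")
    elapsed = time.monotonic() - start
    if elapsed > SLOW_THRESHOLD:
        print(f"WARN {name}: slow response ({elapsed:.1f}s)")
    return elapsed

subprocess.Popen(["notepad.exe"])
timed_step("launch notepad", lambda: window_exists("Untitled - Notepad"))
```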

 

11 – No changes

An automated test simulating user actions is a dumb script: it executes exactly the commands you give it. This means that if you change the environment without changing the script, the script stops working.

For instance, a script may start by waiting for the desktop (which indicates the user is logged in) by looking for an icon on the desktop. If that icon is removed (or moved), the script fails. A real user would ignore the missing icon and start working; a script won’t (unless you accommodate for it).
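A minimal sketch of such an accommodation: if the expected desktop shortcut is missing, log a warning and fall back to launching the executable directly. The shortcut name and executable path are assumptions for illustration:

```python
# A minimal sketch: don't let the script die on a cosmetic change.
import os
import subprocess

shortcut = os.path.join(os.path.expanduser("~"), "Desktop", "Word.lnk")
if os.path.exists(shortcut):
    os.startfile(shortcut)  # behave like the user: open the shortcut
else:
    print("WARN: desktop icon missing, launching executable directly")
    # The path is an assumption; point it at the real installation.
    subprocess.Popen(
        [r"C:\Program Files\Microsoft Office\root\Office16\WINWORD.EXE"])
```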

A best practice is to agree on a freeze period in which no changes are allowed. This period covers the creation of the script and the execution of the test (usually on a non-working day) and typically lasts a few days.

Even when these agreements are made, some sysadmins still change the environment because they think it has no (or hardly any) impact. So another best practice is to test the script on the (working) day before the test; this gives you time to adjust the script or undo the changes.

 

12 – Client side testing

User actions are simulated using either a server-side or a client-side component.

A server-side component is a script, utility or other process launched in the user’s session. This enables it to execute commands and detect windows and objects inside the session. The downside of server-side components is that they add load of their own: overhead. A server-side component also has (almost) no knowledge of the client or the connection quality; since the process runs on the server, the content on the screen is rendered at local speed. Finally, a server-side component relies on the local clock to time results; see best practice 13 about clock drift.

A client-side component is a process executed on the client instead of on the server. This enables the process to look at the end result, including the effects of the connection quality. In other words: if a large bitmap is displayed over a slow connection, the client-side component waits until the content is actually displayed (unlike the server-side component).

Since the client-side component is executed on the client, no server-side component is required. This prevents overhead on the server and enables the use of a reliable clock, even when you’re testing a virtualized server under heavy load.

 

13 – Clock drift

Each system is equipped with a system timer that enables the clock to work. At a set interval a pulse is given; this is called a clock tick.

Timers (used in scripts to measure how long an action takes to complete) and performance metrics rely on the system clock. If the system clock is inaccurate, the results are inaccurate.

On a virtualized system the hypervisor (VMM) is responsible for distributing the clock ticks generated by the hardware. Each guest OS (VM) should receive the same number of clock ticks at the same interval; this way each guest OS can run its own clock.

A hypervisor (VMM) under heavy load isn’t always able to distribute the clock ticks to the guest OSs (VMs) in time; ticks may get lost without the VM noticing. David Ott (Intel) wrote a short article about virtualization and performance; you can read it here.

Try to avoid the use of a virtualized clock when the system is under heavy load. Using a virtualized system for timing is possible, of course, but the results might be skewed when clock ticks get lost. Using an external clock source (like a SQL server) helps; using a client-side component prevents the problem altogether.
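As a sketch of the external clock idea, assuming a reachable SQL Server and the pyodbc package; the connection string and the timed action are placeholders for illustration:

```python
# A minimal sketch: take timestamps from an external SQL Server clock
# instead of the (possibly drifting) virtualized local clock.
import time
import pyodbc

# The server name and driver are assumptions; adjust to your environment.
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=clocksrv;DATABASE=master;Trusted_Connection=yes")

def external_now():
    """Read the current time from the SQL Server clock."""
    return conn.cursor().execute("SELECT SYSDATETIME()").fetchval()

def run_test_step():
    time.sleep(1)  # stand-in for the real user action being timed

start = external_now()
run_test_step()
elapsed = external_now() - start  # a timedelta, immune to local clock drift
print(f"step took {elapsed.total_seconds():.2f}s by the external clock")
```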

 

14 – Monitor

During a test you should monitor the results, such as performance metrics, and the end result. This way you experience first-hand what the impact of the test is. This not only helps to validate the results; it might also help you find the bottleneck and solve the problem, if that is an objective.

Especially in a virtual desktop environment, such as SBC or VDI, you should launch a dedicated session for monitoring. Use that session to browse, click around and experience the impact.
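A minimal sketch of such a metrics logger, assuming the psutil package is available on the monitoring machine; the duration and file name are assumptions:

```python
# A minimal sketch: sample basic performance metrics to CSV during the test,
# so spikes can later be matched against the notes taken during the run.
import csv
import datetime
import psutil

with open("loadtest_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
    for _ in range(600):  # ~10 minutes at 1-second intervals (assumption)
        writer.writerow([datetime.datetime.now().isoformat(),
                         psutil.cpu_percent(interval=1),
                         psutil.virtual_memory().percent])
        f.flush()  # keep the file readable while the test is still running
```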

During the test, write down all the properties and parameters of the test. Note the events that occur, for instance spikes in performance metrics, slow response times or failing sessions. Keep the notes with the test results; they are very useful when analysing the results.

I once had a user connect his laptop to the network during an extensive loadtest; his laptop was configured to synchronise his home folder and mailbox. Since we were testing over a WAN connection, this had an impact on the test. Because I had written this down in my notes, I could explain the impact on the perceived performance.

 

15 – Validate results

After a test is completed and the results are collected, try to validate the results. Are the results what you expected? Do they represent what you noticed during the test?

Collected data doesn’t lie; it is gathered by a dumb process. But if you collected the wrong data, interpret it in the wrong way, or if the generated load doesn’t match the scenario described during preparation (see best practices 1 and 2), then the results might be invalid.

 

16 – Lifecycle management

Performing a loadtest to scale a system once is good; repeating the process is better. Once you’ve set a baseline, you can measure the impact of changes.

(Almost) every environment is subject to change. Applications are added, upgraded or replaced, users are added, and the workload changes.

 

Loadtesting should be part of lifecycle management, resulting in better overall quality, as described in the Deming cycle:

Plan – Create a plan with the results you want to achieve, for instance adding a software package or increasing the load.

Do – Implement the plan, in a test environment according to DTAP.

Check – Compare the results with the baseline. Are the results as expected?

Act – Resolve issues and store the results. If the results are not achieved, solve the problem. Finally, store all results for archiving.

 

 

Ingmar Verheij
