Performance Testing Rookie Mistakes.... (Admit it, we've all made these)


Hey, did you take the environment down?!


Sound familiar? (wink)


And the list goes on...


We assumed that an HTTP status code of 200 meant a successful transaction and didn't bother to add a content assertion for validation.
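A content check on top of the status check is cheap insurance, because many applications return a 200 with an error page in the body. A minimal sketch in Python, independent of any particular load tool (the function and the expected text are illustrative):

```python
def validate_response(status_code, body, expected_text):
    """Return True only if the status is 200 AND the body contains
    text proving the business transaction actually succeeded."""
    if status_code != 200:
        return False
    return expected_text in body

# A 200 that carries an error message should still fail validation:
# validate_response(200, "Login failed: invalid password", "Welcome back")
```

The point is that the assertion text comes from the business transaction ("Welcome back", an order number, etc.), not from the transport layer.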


We used millisecond-long or no think times for pauses and inadvertently executed a totally unrealistic user load.
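Real users pause for seconds, not milliseconds, and they don't all pause for exactly the same duration. One way to sketch a realistic, randomized think time (the nominal value and jitter factor are illustrative):

```python
import random

def think_time(nominal_seconds, jitter=0.5):
    """Return a randomized pause around a human-scale think time.
    jitter=0.5 means the pause varies +/-50% around the nominal value,
    so virtual users don't fire requests in lockstep."""
    low = nominal_seconds * (1 - jitter)
    high = nominal_seconds * (1 + jitter)
    return random.uniform(low, high)

# In a script you would sleep for this long between user actions:
# time.sleep(think_time(8))  # roughly 4-12 s, not milliseconds
```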


We blindly tried to analyze a problem using the load generation tool when we could have just opened a browser's network inspector.


We spent valuable hours creating scripts for every last business transaction when we could have run simple tests to uncover low-hanging bottlenecks.


We cried wolf by alerting that there was a bottleneck at a specific tier when it was just a saturation symptom, not the root cause.


We were given a list of user IDs and didn't run a quick login script to see if they were all valid.
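A one-time pass over the credential list before the real test takes minutes and saves a ruined run. A hedged sketch, where `try_login` stands in for whatever one-shot login call your tool or application exposes (hypothetical here; it should return True on success):

```python
def verify_user_ids(user_ids, try_login):
    """Attempt a single login per user ID and return the IDs that failed,
    so bad credentials are weeded out before the load test starts."""
    return [uid for uid in user_ids if not try_login(uid)]

# Example with a fake login check that rejects "u2":
# verify_user_ids(["u1", "u2", "u3"], lambda uid: uid != "u2")  # -> ["u2"]
```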


We set up and ran initial load tests defined by clients' goals instead of executing a slow-ramping, methodical test to understand current scalability.
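A step ramp with a hold at each plateau shows where response times start degrading long before the goal load is reached. A minimal sketch of building such a schedule (the step counts and hold durations are illustrative, not a recommendation):

```python
def ramp_schedule(target_users, steps, hold_seconds):
    """Build a slow step ramp as (user_count, hold_duration) pairs.
    Watching each plateau reveals where scalability degrades, instead
    of jumping straight to the client's goal load."""
    per_step = target_users // steps
    return [(per_step * i, hold_seconds) for i in range(1, steps + 1)]

# ramp_schedule(100, 5, 300)
# -> [(20, 300), (40, 300), (60, 300), (80, 300), (100, 300)]
```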


We jumped the gun and started drawing (and emailing) conclusions while the load test was still executing instead of waiting to analyze the final test results.


We slammed the target application with an unreasonable login rate and declared failure. 


We executed a login every iteration instead of looping on the Action section of the load script.
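The right script shape is: log in once, loop the business action, log out once. A tool-agnostic sketch, where the three callables stand in for whatever your tool's script sections are (names are illustrative):

```python
def run_vuser(login, action, logout, iterations):
    """Execute one virtual user's lifecycle: a single login, a loop
    over the measured business action, and a single logout."""
    session = login()                 # once per virtual user
    results = []
    for _ in range(iterations):       # the part under load
        results.append(action(session))
    logout(session)                   # once per virtual user
    return results
```

Looping on the whole script instead would multiply login traffic by the iteration count and skew both the load profile and the results.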


We blindly created load scripts without a business analyst's direction on how users really intend to use the application.


We got stuck on fancy script logic when a simple script would have done the job.


We felt the panic of thinking errors were coming from a script problem and didn't stop to consider that the errors might really be an environment problem, and vice versa.


We forgot to put an “end” time to the test and therefore flooded the application with a ramping load until it fell over. 


We didn’t name business transactions appropriately and had no idea what that pesky request was actually doing. 


We assumed that no one else was using the environment instead of scheduling dedicated time. 


We assumed that only the virtual user load was using the environment resources. 


We forgot to run a ghost test to baseline monitored resources. 


We didn't bother to check the recorded script for third-party requests and therefore load tested an unintended application.


We didn’t proof the recording to see if transaction grouping was correct and couldn’t make heads or tails of the results. 


We didn't baseline with a single user, so we had no idea as to the minimum achievable response time.


We checked a single-user script, called it good, and threw it into the mix before realizing that a concurrent test was missing a required correlation.
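Correlation means capturing a dynamic value (session token, CSRF token, view state) from one response and replaying it in the next request; a recording bakes in the value captured at record time, which only works for one user. A minimal sketch of the capture step (the regex pattern and field name are illustrative):

```python
import re

def correlate(response_body, pattern=r'name="csrf_token" value="([^"]+)"'):
    """Extract a dynamic value from a response body so the next
    request can send the server-issued value instead of the one
    hard-coded by the recorder. Returns None if nothing matched."""
    match = re.search(pattern, response_body)
    return match.group(1) if match else None
```

A None return during a concurrent test is itself a useful signal that the value you expected to capture never arrived.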


We assumed that a high TPS rate meant all successes when the majority were errors.
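Throughput should never be reported without its failure share, because errors often return fast and inflate TPS. A small sketch of pairing the two numbers (the sample format is illustrative):

```python
def summarize_throughput(results, window_seconds):
    """results: list of (timestamp, ok) samples over the window.
    Returns (tps, error_rate) so a high TPS number is always read
    alongside the fraction of transactions that actually failed."""
    total = len(results)
    errors = sum(1 for _, ok in results if not ok)
    tps = total / window_seconds
    error_rate = errors / total if total else 0.0
    return tps, error_rate

# 100 samples in 10 s where 60 failed: TPS looks healthy at 10/s,
# but 60% of those "transactions" were errors.
```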


We ran a rogue load test by accident while trying to automate the scheduler. 


We inadvertently created hundreds of thousands of live user sessions, saturating the app server's memory.


But... we learned from our mistakes, didn't we? At PerformanceWisdom, my goal is to have you learn these valuable performance engineering skills without having to make all these same mistakes (too many to list). You will save valuable time! See our Load Test Analysis course!




Who Me? No! Well, not on purpose...