Ask a Flooder 20: How can fear affect application performance? [Video]

A case study on the problems with the Dutch coronavirus testing hotline and how we can avoid them in our own performance testing

In this video, I talk about the power of fear, and how it can end up affecting performance test results.

Transcript

Hi everyone, I'm Nicole van der Hoeven, back with another Ask a Flooder, and today I want to talk about the effect of fear on your performance outcomes.

The Dutch "coronalijn"

I currently live in Maastricht, in the Netherlands. At the beginning of this global pandemic, the only people getting tested actively for COVID-19 were either people who presented with serious symptoms of the disease, or health care professionals who were continually exposed to the disease and therefore at a higher risk of contracting it.

Last month, on June 1st, the Dutch government made testing available to everybody. They set up a telephone hotline that any Dutch resident could call to make an appointment for a COVID-19 test. A task force was dispatched to make sure that they could test up to 30,000 people per day. Yet on the first day of testing, only 1,146 people were tested. So what happened?

Well, it turned out that the bottleneck of the entire process wasn't getting enough tests, registering the tests, or even processing the test results. The bottleneck actually occurred way before that, when people called up the hotline to get an appointment.

323,000 people called on the first day that the hotline opened. And not even all of them had called to make an appointment. Some of them were just calling for general information on the coronavirus, despite the fact that the government made it clear that that's not what the line was for, and that there were other lines dedicated to information. There were reports on Twitter of people waiting up to 6 hours on the line just to talk to someone who would make an appointment for them. There were calls that got dropped. People said that sometimes it happened in the middle of a call. Even telephone operators reported that the appointment system was down or slow and unusable for much of the day.

As a result, only 5,500 people actually had appointments booked on a day when 323,000 people called. That's about 1.7%.

What does that have to do with performance?

As performance engineers, we like to believe that everything we do is reasoned. We design tests based on quantitative data. We look at historical trends and metrics to help us decide how to build a workload model. But the truth is that that's often not enough.

If anything, this pandemic has shown us just how wrong our guesses can be, whether or not they're informed by facts, especially when it comes to human behavior. We've seen violent fights erupt over toilet paper. We've seen peaks and troughs on the stock market that didn't have anything to do with the underlying companies. We've seen arson attacks on 5G cell towers. Human beings are irrational, and it's really hard to capture that irrationality in our models for how users are going to behave.

We may never be able to accurately predict how irrationality is going to affect the load on our application in production. But that doesn't mean that we shouldn't try. There are still things that we can do to mitigate the risk. I think that any application that is public-facing should be assessed for exposure to the risk of fear or other human psychological factors.

How do we account for fear in our performance testing?

I actually wrote a whole blog post on this particular situation (the Dutch corona line), and how it might have been possible to predict (maybe not with 100% accuracy but with reasonable accuracy) how many people would have called the hotline.

For example, we can look at statistics like the population of the Netherlands, or how many people normally get flu-like symptoms (or at least report them) at this time of the year. These are statistics that are readily available from the Department of Health, and we could have used them to inform our decisions on the workload model for such an application. We may not have gotten it exactly right, but we would at least have been in the same ballpark.
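To make that concrete, here's a rough sketch of the kind of back-of-the-envelope estimate I mean. All of the inputs (population, symptom rate, the share of symptomatic people who call on day one, and the share of everyone else who calls just for information) are illustrative assumptions, not official figures, and the structure is the point rather than the exact numbers.

```typescript
// Rough back-of-the-envelope estimate of first-day call volume.
// All inputs are illustrative assumptions, not official statistics.

interface CallVolumeInputs {
  population: number;        // total residents who could call
  symptomaticRate: number;   // fraction with flu-like symptoms at this time of year
  callOnDayOneRate: number;  // fraction of symptomatic people who call on day one
  curiosityCallRate: number; // fraction of everyone else who calls "just for information"
}

function estimateFirstDayCalls(inputs: CallVolumeInputs): number {
  const symptomatic = inputs.population * inputs.symptomaticRate;
  const appointmentCalls = symptomatic * inputs.callOnDayOneRate;
  const infoCalls = (inputs.population - symptomatic) * inputs.curiosityCallRate;
  return Math.round(appointmentCalls + infoCalls);
}

// Example with made-up numbers: ~17.4 million residents, 2% with flu-like
// symptoms, 10% of those calling on day one, plus 0.1% of everyone else
// calling for general information.
const estimate = estimateFirstDayCalls({
  population: 17_400_000,
  symptomaticRate: 0.02,
  callOnDayOneRate: 0.1,
  curiosityCallRate: 0.001,
});

console.log(`Estimated first-day calls: ${estimate}`);
```

Even a crude model like this forces you to write down your assumptions, which makes it much easier to argue about them, adjust them, and turn them into a workload model.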

If we have a good enough idea of the workload that our application would have to handle when it's influenced by fear, we can start running load tests around that. Another good open-ended kind of test to run, especially for applications that are particularly susceptible to fear, is the stress test. In a stress test, you push your application to its limits and you find out what they are. You could, for instance, start with your peak workload, and then add a certain number of users every 30 minutes and see how much your application can actually handle (see the sketch below). You can do the same with soak tests, where you're exposing your application to the same amount of load for an extended period of time. Both of those kinds of exploratory tests help you define the limits of your application and how it would respond if fear played a role in the workload model in production.
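Here's one way that stepped stress profile could be sketched out. This isn't a script for any particular load testing tool; the `LoadStep` shape and the numbers are just hypothetical placeholders for a ramp schedule that you would translate into whatever tool you actually use.

```typescript
// Generate a stepped stress-test profile: start at the expected peak
// workload, then keep adding users at a fixed interval until a ceiling
// is reached. The LoadStep shape and the example numbers are illustrative
// and not tied to any specific load testing tool.

interface LoadStep {
  startMinute: number;      // when this step begins, relative to test start
  concurrentUsers: number;  // how many users to hold during this step
}

function buildStressProfile(
  peakUsers: number,    // expected peak workload from your model
  usersPerStep: number, // how many users to add at each step
  stepMinutes: number,  // how long to hold each step (e.g. 30 minutes)
  maxUsers: number      // stop ramping once this ceiling is reached
): LoadStep[] {
  const steps: LoadStep[] = [];
  let users = peakUsers;
  let minute = 0;
  while (users <= maxUsers) {
    steps.push({ startMinute: minute, concurrentUsers: users });
    users += usersPerStep;
    minute += stepMinutes;
  }
  return steps;
}

// Example: start at a peak of 5,000 users, add 1,000 every 30 minutes,
// and stop once the profile reaches 10,000 concurrent users.
console.table(buildStressProfile(5_000, 1_000, 30, 10_000));
```

A soak test is essentially the degenerate case of this profile: a single step, held at a realistic load for hours rather than minutes.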

If we overlook this critical human element in our test design, then we leave ourselves, and our applications, vulnerable to performance degradation and, perhaps, outages.

Till next time, happy flooding!

Start load testing now

It only takes 30 seconds to create an account and get access to our free tier to begin load testing without any risk.
