Recently we held a webinar introducing beginner-to-intermediate techniques for getting started with cloud load testing on Flood IO.
Attendees across the different regional broadcasts raised some interesting questions. Rather than isolate the discussion to specific regions, we wanted to aggregate and answer the top questions here.
When should load testing shift left and when should it shift right?
There is certainly a lot of buzz in the industry about shifting left which can be interpreted mostly as performing testing (including load testing) earlier and continuously within the development lifecycle — particularly for DevOps.
Likewise, the case can also be made that shift right (testing in or close to production environments), is also a good home for load testing.
Personally I care less about the direction to shift and more about getting sh*t done.
At Flood, we've seen customers go both directions here.
Some customers run large-scale load tests in production, primarily because that's the only environment available at the appropriate scale. Load testing in this environment has some drawbacks: repeatability can be difficult, and there are important factors to consider such as logging, monitoring, and test data management. That said, it offers valuable insight into how production environments scale and often uncovers a raft of performance issues that might be more difficult to identify in a non-production environment, such as rate limits, throttles, intrusion and DDoS detection, and CDN and cache performance.
We also have customers who invest heavily in shift left styles of load testing. Most common are customers using our API to integrate load testing with their continuous integration & deployment pipelines. This offers earlier detection of performance defects, sometimes in a more controlled test (smaller scale / dedicated tuning). It can quickly detect configuration issues in the application/infrastructure design without the production noise. With this approach the feedback loop is much tighter, which is consistent with test early, test often schools of thought.
Why are cloud-based load testing services better than the "legacy" thick client load testing tools everyone’s been using for years?
Let's avoid the debate about which tool is "better" — I think a more useful approach is to consider the differences in terms of load test creation, execution and analysis.
Traditional load testing tools were generally full-featured, shrink-wrapped software. Commercial licensing made them difficult to share or make available to other colleagues. Test scripts were created with closed-source, proprietary tools. Execution carried significant overhead in provisioning and infrastructure costs. Reporting was done retrospectively, with analysis not easily shared outside your team.
Cloud-based load testing generally supports non-commercial, open source tools with no vendor lock-in. Infrastructure is provisioned on demand or reserved at significant discounts. Reporting and analysis happen in real time and are easily shared via the web. Cloud-based load testing with Flood IO provides an economy of scale, with an inclusive platform that supports open source tools and an easy-to-share philosophy.
What’s the value of integrating load testing into CI/CD pipelines?
Continuous Integration provides tight feedback loops, which are great for all forms of automated testing. It provides the mechanism through which to execute load tests and feed results back into your decision-making process.
Many Flood IO customers use our API to integrate with popular CI platforms like Jenkins and Buildkite. This lets them automate the provisioning of load test infrastructure, the execution of tests, and the analysis of results. Some customers take the results integration one step further, flagging tests that fail to meet SLAs or exceed thresholds.
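As an illustration of that SLA-gating step, here is a minimal Python sketch of a CI gate that reads a results summary and fails the build when a threshold is exceeded. The field names and threshold values are hypothetical examples, not Flood IO's actual API response format; consult the API documentation for the real shape.

```python
import json
import sys

# Hypothetical summary exported at the end of a load test run.
# These field names are illustrative only.
summary = json.loads("""{
  "mean_response_time_ms": 420,
  "p95_response_time_ms": 1800,
  "error_rate": 0.003
}""")

# Example SLA thresholds the build must meet.
SLA = {"p95_response_time_ms": 2000, "error_rate": 0.01}

failures = [name for name, limit in SLA.items() if summary[name] > limit]
if failures:
    print("SLA breached:", ", ".join(failures))
    sys.exit(1)  # a non-zero exit code fails the CI step
print("All SLAs met")
```

A script like this slots in as a post-test step in Jenkins or Buildkite, turning load test results into a pass/fail signal the pipeline can act on.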
Our roadmap has some great features planned around CI/CD pipelines.
We already have dedicated performance testers. How would developers and testers do load testing without stepping on their toes?
It's important to acknowledge that performance is everyone's responsibility.
From the marketing team adding more trackers to the site, to the front-end developer changing up the CSS or JS framework, all the way through to back-end developers creating APIs and application services, as well as operators of caches, databases, servers, networks and storage - everyone has a part to play in performance.
Don't get your knickers in a knot over who is responsible for performance. Load testing should not be exclusive. Everyone needs to be involved.
A healthy dialogue between a developer and tester might be: "Hey, I've been exploring how this endpoint behaves under load, and I noticed it tends to slow down over time when we search on a wildcard."
We really believe in building a distributed load testing platform for everyone and encourage novices to experts to be involved, regardless of job title.
If I don’t have any load tests yet, where should I get started? What tool is easiest to learn?
To simulate Protocol Level Users (for example, HTTP), you can't go past JMeter or Gatling. Protocol-level scripting is not trivial; beware of record-and-playback myths. You will need a solid understanding of HTTP and the way your application behaves to be proficient.
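To illustrate why record and playback falls short, here is a minimal Python sketch of correlation: extracting a dynamic value from one response and injecting it into the next request. The page markup and token name are invented for the example; a naively replayed recording would resend the stale token captured at recording time, which a real server would reject.

```python
import re
import urllib.parse

# Hypothetical response body from a login page, containing a
# per-session token (the markup here is made up for illustration).
login_page = '<form><input name="csrf_token" value="abc123"></form>'

# Correlation step 1: extract the dynamic value from the response...
match = re.search(r'name="csrf_token" value="([^"]+)"', login_page)
token = match.group(1)

# Correlation step 2: ...and inject it into the next request's body,
# instead of replaying whatever value was captured at recording time.
body = urllib.parse.urlencode({"user": "demo", "csrf_token": token})
print(body)  # user=demo&csrf_token=abc123
```

JMeter's post-processors and Gatling's checks exist precisely for this extract-and-reuse pattern, which is why understanding HTTP matters more than the recorder.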
JMeter is the most popular load testing tool on our platform. It has been around since 1998 and has really gathered steam since version 2 in 2007. There are 10+ years of solid information out there to dip into. It also has a UI, which is handy if you're not really into writing code. You might also want to check out Ruby-JMeter, which we developed and open sourced. It's been very popular with customers wanting to express JMeter test plans in code.
We also love Gatling for its simplicity and powerful design. Tests are written in Scala, which has its own learning curve. Customers who use Gatling tend to have more of a "I like writing code" background in general. However, if you have no strong preference either way (coding vs. UI) then we'd definitely recommend checking it out.
For Browser Level Users we've been experimenting with Selenium for over a year now and we have a strong customer base invested in load testing with it. It's popular because you're simulating user behaviour in a browser, which can be easier than protocol level scripting. This reduction in complexity comes at the cost of concurrency. Stay tuned for some interesting progress we have made in this space.
For a detailed look at how to get started with JMeter, Ruby-JMeter, Gatling and Selenium, see the guides we've prepared at help.flood.io.
What sets Flood apart from other load testing services?
Flood IO is a distributed load testing platform for everyone.
Our grid nodes are purpose-built for the cloud, based on a loosely coupled, cluster-less, shared nothing architecture. That means we can afford massive horizontal scale for distributed load tests.
Our floods support popular open source load testing tools. In fact, we're tool agnostic and easily integrate with what we consider the best tools for the job. We'll be announcing some exciting integrations with Tricentis Tosca in October 2017.
Our reporting is real time with a 15-second resolution. Time series is our thing and we love processing and visualizing data in a compact, meaningful way.
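As a rough illustration of that kind of time-series rollup, here is a Python sketch that buckets response-time samples into 15-second windows and averages each window. The sample data is made up, and this is not Flood's actual pipeline, just the general shape of the idea.

```python
from collections import defaultdict
from statistics import mean

# (timestamp_seconds, response_time_ms) samples from a hypothetical run.
samples = [(0, 120), (3, 150), (14, 130), (16, 300), (29, 280), (31, 90)]

WINDOW = 15  # seconds, matching the reporting resolution

# Group each sample into the window its timestamp falls in.
buckets = defaultdict(list)
for ts, rt in samples:
    buckets[ts // WINDOW * WINDOW].append(rt)

# One aggregated point per window: here, the mean response time.
series = {start: round(mean(rts), 1) for start, rts in sorted(buckets.items())}
print(series)
```

The same bucketing works for percentiles or error counts per window, which is what makes a compact visualization possible while a test is still running.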
Our customers love how easy it is to scale their load tests around the globe, share results with others and get on with the business of load testing. It's free to try, so check us out at flood.io