At Flood, we have been running distributed load testing infrastructure on the cloud for many years. Our grid of load generation infrastructure is spun up on demand to process compute-intensive workloads. So it wasn't a massive stretch of the imagination to think about how we might use our compute time in the fight against COVID-19.
The Institute for Protein Design (IPD) at the University of Washington is currently using the Rosetta molecular modeling suite to help fight the coronavirus. That work requires large amounts of CPU time, which in turn relies heavily, if not entirely, on distributed computing donated by volunteers. The platform for volunteering this compute is the Berkeley Open Infrastructure for Network Computing, or BOINC.
Most importantly, BOINC helps academic research groups model important protein structures, such as those found in the coronavirus. On 21 February 2020, the IPD stated:
"We are happy to report that the Rosetta molecular modeling suite was recently used to accurately predict the atomic-scale structure of an important coronavirus protein weeks before it could be measured in the lab. Knowledge gained from studying this viral protein is now being used to guide the design of novel vaccines and antiviral drugs."
Knowing that distributed computing can help put an end to this virus, Flood has begun to donate spare compute capacity using BOINC. To start with, we are using the otherwise idle capacity of backend servers to provide CPU cycles for scientific computing on the Rosetta@home project.
Since going live this week, we have contributed 34,000 Cobblestones of computation (29 quadrillion floating-point operations) to Rosetta@home. While still early days for us, we are looking for ways to boost this number and make it a permanent part of our distributed design.
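The Cobblestone is BOINC's credit unit: by definition, 100 Cobblestones correspond to one day of computation on a machine sustaining 1 GFLOPS. A quick back-of-the-envelope check in Python (using BOINC's published definition, nothing Flood-specific) shows how the two figures above line up:

```python
# BOINC credit definition: 100 Cobblestones = one day on a 1 GFLOPS machine.
GFLOPS = 1e9
SECONDS_PER_DAY = 86_400
FLOPS_PER_COBBLESTONE = GFLOPS * SECONDS_PER_DAY / 100  # 8.64e11 FLOPs

cobblestones = 34_000
total_flops = cobblestones * FLOPS_PER_COBBLESTONE
print(f"{total_flops / 1e15:.0f} quadrillion floating-point operations")  # → 29
```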
We are using Docker to make the BOINC image easily distributable across our core infrastructure. We then use our existing orchestration layer to spin these BOINC images up as containers across that infrastructure. Of note, processing tasks typically take hours to complete, so you will need long-running infrastructure to process them. If you want to do something similar, you can pull our image from Docker Hub or craft your own purpose-built fork from GitHub.
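As a minimal sketch of the approach, the commands below run the official BOINC client container and attach it to Rosetta@home. The RPC password and `YOUR_ACCOUNT_KEY` are placeholders for your own values, and the image name and project URL are the publicly documented ones rather than anything specific to our setup:

```shell
# Start the BOINC client container in the background, with remote RPC
# enabled so we can drive it from the host with boinccmd.
docker run -d --name boinc \
  -e BOINC_GUI_RPC_PASSWORD="secret" \
  -e BOINC_CMD_LINE_OPTIONS="--allow_remote_gui_rpc" \
  boinc/client

# Attach the client to the Rosetta@home project. YOUR_ACCOUNT_KEY is a
# placeholder for the account key from your Rosetta@home account page.
docker exec boinc boinccmd --passwd "secret" \
  --project_attach https://boinc.bakerlab.org/rosetta/ YOUR_ACCOUNT_KEY
```

An orchestration layer can run the same container image on any idle node, which is essentially what our scheduler does with spare backend capacity.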