Algorithm Gets the Blame for COVID-19 Vaccine Snafu
Managers at Stanford Medical Center blamed an algorithm for their decision not to give the COVID-19 vaccination to 1,300 residents and fellows working as frontline physicians in the hospital, much to the displeasure of those residents and fellows.
The hospital had scheduled a photo op last Friday, December 18, to celebrate the first COVID-19 vaccinations taking place. But instead, about 100 residents and fellows showed up to protest the fact that only seven residents at the hospital were offered the vaccine.
“Our algorithm…clearly didn’t work,” Tim Morrison, the director of the ambulatory care team at the university hospital, told a group of protestors in a video posted to Twitter last week. “There’s problems with our algorithm.”
That didn’t satisfy the protestors. “Algorithms suck!” one of them shouted. “[Bleep] the algorithm!” said another.
According to Morrison, the hospital used an algorithm to decide who should get the vaccine. The hospital received 5,000 doses of the Pfizer vaccine, which would be enough to vaccinate 2,500 people.
According to a schematic of the university’s algorithm, which the MIT Technology Review obtained, the university considered a handful of factors as weights in coming up with the final Vaccination Sequence Score. Those factors included the employee’s age (it gave points for being younger than 25 or older than 65, interestingly), the employee’s job type, the prevalence of COVID-19 by job type and department, and the number of COVID-19 tests collected by job role.
It’s not entirely clear how the algorithm worked, or was supposed to work. In particular, there are two factors in the algorithm–“prevalence for COVID-19 by job role and staff department” and “percent positive for COVID-19 by job role and staff department”–that seem oddly similar.
There are also questions about how the third job-based variable–the “percentage of COVID-19 tests collected by job role as a percent of the total collected at Stanford Healthcare”–reflects the actual risk that hospital employees face.
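Based on the factors named in the schematic, a score of this kind is typically computed as a weighted sum. The sketch below is illustrative only: the field names, weights, and normalization are assumptions, not Stanford’s actual values, which were not published in full.

```python
# Illustrative sketch of a rules-based "Vaccination Sequence Score".
# Weights and field names are hypothetical stand-ins for the factors
# described in the schematic obtained by the MIT Technology Review.

def vaccination_sequence_score(employee, weights=None):
    """Combine weighted factors into a single priority score."""
    if weights is None:
        # Equal weights are an assumption; the real weights are unknown.
        weights = {
            "age": 1.0,
            "job_type": 1.0,
            "prevalence_by_role": 1.0,
            "positivity_by_role": 1.0,
            "tests_by_role": 1.0,
        }
    score = 0.0
    # Age points only for the youngest and oldest employees, as in the
    # schematic -- most residents fall in between and get nothing here.
    if employee["age"] < 25 or employee["age"] > 65:
        score += weights["age"]
    # Job-based factors (each assumed to be normalized to [0, 1]).
    score += weights["job_type"] * employee["job_type_score"]
    score += weights["prevalence_by_role"] * employee["prevalence_by_role"]
    score += weights["positivity_by_role"] * employee["positivity_by_role"]
    score += weights["tests_by_role"] * employee["tests_by_role"]
    return score

# A hypothetical 29-year-old resident: high exposure, but no age points.
resident = {"age": 29, "job_type_score": 0.9, "prevalence_by_role": 0.8,
            "positivity_by_role": 0.7, "tests_by_role": 0.1}
print(vaccination_sequence_score(resident))  # → 2.5
```

The sketch illustrates how a seemingly reasonable rule, such as awarding age points only below 25 or above 65, can systematically disadvantage a group like residents, most of whom sit between those cutoffs.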
But at the end of the day, it’s clear that the algorithm did not prioritize frontline residents who are at high risk of contracting the disease. Instead, it put other folks ahead of them in line, including administrators who don’t interact closely with patients and doctors who see patients remotely.
Criticism of Stanford’s algorithm was swift. The complexity of the algorithm, and the overall lack of clarity in how the various factors were chosen, were near the top of the list.
“Stanford’s decision to de-prioritize residents and fellows is indefensible on the basis of science, reason, ethics, and equality,” Stanford Medical Center chief residents wrote in a letter on Thursday December 17, a day before the fateful photo op. “Many of us know senior faculty who have worked from home since the pandemic began in March 2020, with no in-person patient responsibilities, who were selected for vaccination. In the meantime, we residents and fellows strap on N95 masks for the tenth month of this pandemic without a transparent and clear plan for our protection in place.”
While Stanford Medical Center officials admitted to the algorithmic error on Tuesday December 15, the chief residents wrote, those leaders failed to change the algorithm before going forward with the planned vaccinations on Friday December 18. “We believe that to achieve the aim of justice, there is a human responsibility of oversight over such algorithms to ensure that the results are equitable,” they wrote.
It’s worth noting that, while Stanford used an algorithm to prioritize vaccine distribution, it was a rules-based algorithm, and not an algorithm based on machine learning. A machine learning algorithm would have required the hospital to establish the variables, collect data, train the model, assess the algorithm’s (or the model’s) performance, and refine the weights and re-test the model. This iterative nature is intrinsic to the data science process.
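The iterative loop described above — establish variables, collect data, train, assess, refine, re-test — can be sketched in miniature. The “model” below is a deliberately trivial one-parameter threshold, and the data are made up; it stands in only for the train-assess-refine cycle, not for anything Stanford built.

```python
# Toy illustration of the iterative ML workflow: train -> assess ->
# refine -> re-test. Data and model are invented for illustration.

# Hypothetical labeled data: (exposure_score, was_high_risk).
data = [(0.9, 1), (0.8, 1), (0.7, 1), (0.3, 0), (0.2, 0), (0.1, 0)]

def accuracy(threshold, samples):
    """Assess: fraction of samples the threshold classifies correctly."""
    correct = sum((x >= threshold) == bool(y) for x, y in samples)
    return correct / len(samples)

# Train/refine: test candidate thresholds and keep the best performer,
# a minimal stand-in for refining weights and re-testing a model.
best_threshold = max((t / 10 for t in range(1, 10)),
                     key=lambda t: accuracy(t, data))
print(best_threshold, accuracy(best_threshold, data))
```

A rules-based system like Stanford’s skips this loop entirely: the rules are fixed up front by human judgment, so any assessment against real-world outcomes has to be done deliberately, which is exactly the oversight the chief residents called for.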
In Stanford’s case, the algorithm’s rules were established by human experts, but it’s unclear to what extent the rules were tested to see what real-world impact they would have, and refined accordingly.