Computer Model Used to Justify Lockdown Proven To Be ‘Sh*tcode’

It was an Imperial College computer model, forecasting 500,000 deaths in the UK (and 2.2 million in the US) should policymakers pursue a “herd immunity” approach (à la Sweden), that influenced them to reverse course and impose a full lockdown instead.

The model was produced by a team headed by Neil Ferguson (who recently resigned his post advising the UK government after it surfaced that he had himself violated lockdown directives, breaking self-isolation for dalliances with a married woman).

The source code behind the model was to be made available to the public, and after numerous delays and excuses it has finally been posted to GitHub.

A code review has been undertaken by an anonymous ex-Google software engineer here, who tells us the GitHub repository code has been heavily massaged by Microsoft engineers, and others, in an effort to whip the code into shape so it could safely be exposed to the public.

Alas, they seem to have failed, and numerous flaws and bugs from the original software persist in the released version. Requests for the unedited original code behind the model have gone unanswered.

The most worrisome finding of the review is that the code produces “non-deterministic outputs”:

Non-deterministic outputs. Due to bugs, the code can produce very different results given identical inputs. They routinely act as if this is unimportant.

This problem makes the code unusable for scientific purposes, given that a key part of the scientific method is the ability to replicate results. Without replication, the findings might not be real at all – as the field of psychology has been finding out to its cost. Even if their original code was released, it’s apparent that the same numbers as in Report 9 might not come out of it.

The documentation proffers the rationalization that multiple iterations of the model should be run and the differing results averaged together to produce a final result. However, any decent piece of software, especially one producing a model, should return the same result when fed the same initial data and the same starting “seed”, as the sketch below illustrates. This code doesn’t.
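
To see what that expectation looks like in practice, here is a minimal sketch in Python. The run_epidemic function, its parameters and its numbers are all invented for illustration and bear no relation to the Imperial code; the point is only that a stochastic simulation whose every random draw comes from a seeded generator returns identical output for identical seeds, and that averaging over different seeds is then a separate, deliberate step:

    import random

    def run_epidemic(seed, days=80, pop=1000, beta=0.3, gamma=0.1):
        """Toy stochastic SIR-style simulation, illustrative only.
        Every random draw comes from rng, so the same seed must
        always yield the same death count."""
        rng = random.Random(seed)
        s, i, deaths = pop - 1, 1, 0
        for _ in range(days):
            new_infections = sum(rng.random() < beta * i / pop for _ in range(s))
            recoveries = sum(rng.random() < gamma for _ in range(i))
            deaths += sum(rng.random() < 0.01 for _ in range(recoveries))
            s -= new_infections
            i += new_infections - recoveries
        return deaths

    # Determinism: identical seed, identical result, every time.
    assert run_epidemic(seed=42) == run_epidemic(seed=42)

    # Averaging over *different* seeds is a separate, deliberate step.
    print(sum(run_epidemic(seed=k) for k in range(20)) / 20)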

“The documentation says:

The model is stochastic. Multiple runs with different seeds should be undertaken to see average behaviour.

“Stochastic” is just a scientific-sounding word for “random”. That’s not a problem if the randomness is intentional pseudo-randomness, i.e. the randomness is derived from a starting “seed” which is iterated to produce the random numbers. Such randomness is often used in Monte Carlo techniques. It’s safe because the seed can be recorded and the same (pseudo-)random numbers produced from it in future. Any kid who’s played Minecraft is familiar with pseudo-randomness because Minecraft gives you the seeds it uses to generate the random worlds, so by sharing seeds you can share worlds.
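
A tiny Monte Carlo example makes the seed-sharing point concrete. The estimate_pi function below is purely illustrative (nothing in it comes from the Imperial code): because every draw comes from a generator initialised with a recorded seed, anyone who replays that seed reproduces the estimate exactly, just as replaying a Minecraft seed reproduces the same world:

    import random

    def estimate_pi(seed, n=1_000_000):
        """Monte Carlo estimate of pi, fully determined by the recorded seed."""
        rng = random.Random(seed)
        inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
        return 4 * inside / n

    # Replaying the recorded seed reproduces the estimate exactly.
    print(estimate_pi(seed=2020))
    print(estimate_pi(seed=2020) == estimate_pi(seed=2020))   # True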

Clearly, the documentation wants us to think that, given a starting seed, the model will always produce the same results.

Investigation reveals the truth: the code produces critically different results, even for identical starting seeds and parameters.

In one instance, a team at the University of Edinburgh attempted to modify the code so that they could store the data in tables that would make it more efficient to load and run. Performance issues aside, simply moving or optimizing where the input data comes from should have no effect on the output of processing, given the same input data. What the Edinburgh team found, however, was that this optimization produced a variation in the output: “the resulting predictions varied by around 80,000 deaths after 80 days”, which is nearly 3X the total number of UK deaths to date.
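
For context on why that result is alarming rather than expected: a storage refactor should only change results if the code depends on something other than the data itself. The sketch below is an illustrative assumption, not a diagnosis of the actual Imperial bug; it shows the mildest form of such a dependence, where summing the same numbers in a different order shifts a floating-point total. Even then, the discrepancy stays at round-off scale, which is precisely why a swing of 80,000 deaths points to a genuine defect (shared mutable state, an uninitialised read, or the like) rather than harmless numerical noise:

    import random

    # Hypothetical "input data": the same values stored in two different orders.
    rng = random.Random(7)
    values = [rng.uniform(1e-6, 1e6) for _ in range(100_000)]

    total_in_file_order = sum(values)           # order as originally loaded
    total_in_table_order = sum(sorted(values))  # same data, reorganised storage

    # Mathematically identical, but float addition is not associative, so the
    # two totals typically differ by a tiny round-off amount (and no more).
    print(total_in_file_order == total_in_table_order)       # usually False
    print(abs(total_in_file_order - total_in_table_order))   # tiny difference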

Edinburgh reported the bug to Imperial, who dismissed it as “a small non-determinism” and told them the problem goes away if you run the code on a single CPU (which the reviewer notes “is as far away from supercomputing as one can get”).

Alas, the Edinburgh team found that the software still produced different results when it was run on a single CPU. It shouldn’t, provided it is coded properly. Whether the software runs on a single CPU or across multiple threads, the only difference should be the speed at which the output is produced; given the same input conditions, the outputs should be the same. They aren’t, and Imperial knew this.

Nonetheless, that’s how Imperial use the code: they know it breaks when they try to run it faster. It’s clear from reading the code that in 2014 Imperial tried to make the code use multiple CPUs to speed it up, but never made it work reliably. This sort of programming is known to be difficult and usually requires senior, experienced engineers to get good results. Results that randomly change from run to run are a common consequence of thread-safety bugs. More colloquially, these are known as “Heisenbugs”.
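
For readers unfamiliar with the term, here is a minimal, generic illustration of such a thread-safety bug; it is not the actual defect in the Imperial code. Several threads update a shared counter without any synchronisation, the read-modify-write sequences interleave, updates are silently lost, and the final total differs from run to run despite identical inputs. Protecting the update with a threading.Lock, or not sharing mutable state at all, restores a deterministic answer:

    import sys
    import threading

    sys.setswitchinterval(1e-6)   # switch threads aggressively so the race is easy to observe

    counter = 0                   # shared mutable state, with no lock protecting it

    def worker(iterations):
        global counter
        for _ in range(iterations):
            tmp = counter         # read ...
            counter = tmp + 1     # ... then write: not atomic, so updates get lost

    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # The correct answer is 400000, but this usually prints a smaller number that
    # differs from run to run despite identical inputs: a classic "Heisenbug".
    print(counter)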

Another team even found that the output varied depending on what type of computer it was run on.

In issue #30, someone reports that the model produces different outputs depending on what kind of computer it’s run on (regardless of the number of CPUs). Again, the explanation offered is that although this new problem “will just add to the issues” … “This isn’t a problem running the model in full as it is stochastic anyway”.

The response raises the burning question: why didn’t the Imperial College team realize their software was so flawed?

Because their code is so deeply riddled with similar bugs and they struggled so much to fix them that they got into the habit of simply averaging the results of multiple runs to cover it up… and eventually this behaviour became normalised within the team.

Most of us are familiar with the computing adage “Garbage In, Garbage Out”, and the untrained reader may think that’s what’s being asserted in this code review. It isn’t. What’s being asserted is that the output is garbage regardless of the input.

In this case, the output we are experiencing is a worldwide lockdown and a shutdown of the global economy, and we don’t really know whether it was necessary, because we have no real comparison data (aside from Sweden) and only severely flawed models.

Read the entire code review here. 

More at axisofeasy.com




Comments (8)

  • Tom O

    Not being able to get into the heads of the people that did the programming – really knowing their motive, if you will – this “review” sounds like it was just written by people with vastly poorer skills than they thought they had. Yet the output, if you will, falls in line with the current efforts to forge a reason for world government through creating an image of individual nations not being able to “deal” with worldwide issues.

    In other words, in my mind, this piece of “code” did exactly what they wanted it to do – scare people into a stampede of sheep to do whatever it takes to “make this bad thing go away.” Just as with the equally poor modelling of climate change, I don’t see this as an accident or “unintended consequence.”

    It is time to stop pretending that these malevolent actions are anything other than just that – malevolent actions – and those that impose them on the rest of us by stoking fear – that would be the “Main Scream Media”; yes, that second word is intentional – need to be prosecuted for the damage that they do. If it wasn’t for the sensationalizing of the “Main Scream Media,” a lot of people that have been sent on to wherever souls go would still be with us. What they have done is no less criminal than screaming “fire” in a packed theatre and creating a panic.

    • Aaron Christiansen

      Except the code / model is over a decade old (15 years), and was wrong before.

      “Never attribute to malice that which is adequately explained by stupidity”

  • Zoe Phin

    In summary, the software gives random results within some range?

    Michael Mann can turn the results into a hockey stick.

    • rickk

      Indeed.

      Neil’s code turned 50,000 into 2,000,000 – much to the chagrin of Trump, but to the excitement of Fauci.

  • Shane

    Since ‘Vaccine Man’ Bill Gates funds these twits, why didn’t they get him to come in and fix the bugs for them?

  • Carl

    Unfortunately the same sh*tmodel is now being used to justify the lockdowns in hindsight and to justify their continuation. “’Think of the number — potentially 2.2 million people if we did nothing, if we didn’t do the [social] distancing, if we didn’t do all of the things that we’re doing,’ Trump said Sunday.” https://www.politico.com/news/2020/04/01/trump-coronavirus-millions-saved-160814

    Going forward the media chorus is now singing the tune, “If we end the lockdowns it will all be for naught and millions might die!”

    It is the thinking of our political leaders that is in lockdown. As long as their minds are locked within the paradigm that “social distancing rules have to stay in place to save lives”, the economy cannot fully “open”, because a fully “open” economy would be an economy of crowded bars, crowded restaurants, crowded buses, crowded airplanes, crowded beaches, crowded malls, crowded food courts, crowded motels and hotels, crowded etc., etc., etc. It would be a society where no one is wearing a mask or rubber gloves and checkout counters are not being sanitized between customers. It would be a society where people are not afraid to shake hands nor afraid to be within 6 feet of another human being.

    This will only be achieved when our political leaders concede that they were wrong in the first place and that “social distancing” is absolutely the wrong approach to dealing with an infectious viral respiratory disease, because it interferes with the prompt evolution of natural “herd immunity”. As Dr. Jeffrey I. Barke, MD, a family practice physician based in Newport Beach, CA, states, natural “herd immunity” cannot develop when you have the “herd” quarantined in their homes. See video: https://www.youtube.com/watch?v=NJIe7qxXcvo. I would suggest watching it before YouTube takes it down because it disagrees with the WHO.

    • Dev

      That was a good link, thanks Carl.
      Effectively, governments’ handling of this situation has shown that they are non-essential, whereas businesses have shown that they are essential!
      If this is how government deals with a “bad flu equivalent”, which has been appalling, then how it deals with mass dependency is going to be equally appalling.
      Removing our ability to sustain ourselves financially and independently is far worse, even if the cold had been worse.
      A decimated society leaves no hope for an independent future.

  • tom0mason

    Statistical models are useful for probing the edges of scientific knowledge, but only in very limited ways. Currently most of these models project (or extrapolate) into the future by means of derived mathematical curve(s).
    Assessing what may be the best or worst outcomes from such modeling could be useful for initially gaining a handle on what to expect in the future (from the little knowledge available). However, in highly dynamic systems with many interacting factors, models are all but useless at providing accurate ideas of what might happen in the future. In such natural systems, initial conditions and natural chaos play a significant part in how parameters mutate and evolve over time. Therefore it is better to move to observational, evidence-based models (or partial observational and statistical models, as done with weather forecasting, or with assessing the most probable flu strain for the next vaccine) as soon as the statistical models show themselves to be wildly inaccurate!
    Long-term weather forecasting (more than a week forward), climate forecasting, and viral epidemics all have chaotic elements that will mutate and change how their evolution progresses – teasing out these chaotic elements is never very straightforward or obvious.
    An early basic flows-and-feedbacks schematic for climate models looks like this:
    https://rclutz.files.wordpress.com/2017/05/climate-diagram2.png
    No doubt a basic flow and feedback diagram for all of the elements of a human viral epidemic outbreak would be at least as complex as this, if not more so – is there such a basic template? Can one be made?
    I’d also reason that a ‘one size fits all’ model solution is unlikely to be found, as the environment, average age, racial heritage, nutrition and basic health of populations in different locations across the planet vary wildly.

    Stay safe and don’t panic.
