The world's largest physics lab and everyone's favourite particle accelerator has been in the news more than once this past week.
First, CERN scientist Antonio Ereditato revealed that recent Large Hadron Collider results suggest subatomic particles may have gone FASTER than the speed of light.
Seemingly aware of the impact upon science (and the Universe) if this turns out to be a mistake, the team presented its work to the wider physics community for scrutiny.
As reported by the BBC, "The speed of light is widely held to be the Universe's ultimate speed limit, and much of modern physics - as laid out in part by Albert Einstein in his theory of special relativity - depends on the idea that nothing can exceed it."
"We tried to find all possible explanations for this," said Ereditato. "We wanted to find a mistake - trivial mistakes, more complicated mistakes, or nasty effects - and we didn't."
Question 1: In light of this news, should we now dig deeper into CERN's approach to data accuracy?
Question 2: How deep into its software application development and code/data analysis should we peer?
News also circulated last week of CERN's deployment of Coverity Static Analysis to improve the integrity of the source code found across a number of projects analysing data from CERN's Large Hadron Collider (LHC).
Since integrating Coverity's solution, CERN has eliminated more than 40,000 software defects that could otherwise impact the accuracy of its pioneering particle physics research.
One of the LHC's core software ingredients is ROOT, a program CERN's physicists use to store, analyse, and visualise petabytes of data from the LHC experiments.
"Better quality software translates to better research results," said Axel Naumann, a member of CERN's ROOT Development Team. "Like CERN, Coverity finds the unknown; its development testing solution, Coverity Static Analysis, discovers the rare, unpredictable cases that can't be recreated in a test environment."
According to a Coverity-generated press statement, the integrity of ROOT's software is integral to the research conducted at CERN. Every second, scientists at CERN oversee 600 million particle collisions that will help to redefine the way we view the Universe. The collisions, which involve trillions of protons travelling at almost the speed of light, take place in the LHC, the world's most powerful particle accelerator. The experiments conducted around the LHC generate approximately 15 petabytes of data per year, equivalent to 15,000 standard disk drives. Given the size and scale of these experiments, CERN has implemented a number of processes to ensure data generated by the LHC experiments is accurate and as bug-free as possible.
"ROOT is used by all 10,000 physicists, so software integrity is a major issue," added Naumann. "A bug in ROOT can have a significant negative impact on the results of the LHC experiments and physicists' data analyses."
Within the first week of implementing Coverity Static Analysis, CERN's ROOT development team found thousands of possible software defects that could have impacted software integrity and research accuracy, including buffer overflows and memory leaks, with very few false positives. To improve the integrity of CERN's source code, the ROOT team spent just six weeks resolving the errors, and it continues to use the solution in production daily to prevent further software defects from occurring.
So is there a big bang bug botheration going on here or not? Is CERN about to accelerate its next particles around without keeping its data-centric software back end in order?
Looking for an answer, let us turn to Dr Sheldon Cooper to see what he would say...
Leonard: So, tell us about you.
Penny: Um, me? Okay - I'm a Sagittarius, which probably tells you way more than you need to know.
Sheldon: Yes - it tells us that you participate in the mass cultural delusion that the sun's apparent position relative to arbitrarily defined constellations at the time of your birth somehow affects your personality.