Mythinformation is a made-up word – not by me, I hasten to add – but this lack of authenticity is rather fitting, since mythinformation describes information based on apocryphal (mythical, I guess) or false data.
The classic example of mythinformation is attendance at sports events. Teams like high attendance because it drives increased revenue from sponsorship and marketing activity. Clubs and franchises seek to outdo each other (and themselves) in the number of people at their games.
So what’s the problem? Surely attendance is just the number of people attending the match? Well yes…well no! It transpires that reported attendance can bear little resemblance to the number of people in the stadium. Last Friday, for example, whilst on my morning commute, I heard about a game the night before between Arsenal (in London) and Borisov (from Belarus). The official attendance was an impressive (and precise) 54,648. And yet in truth fewer than 30,000 people were actually inside the stadium.
The difference? It turns out that 54,648 was simply the number of tickets sold – inclusive of all season-ticket holders – irrespective of whether those individuals actually turned up to watch the game.
Mythinformation. The (inaccurate) attendance figure – 54,648 – was widely reported in newspapers and on websites…thereby validating it and implying accuracy. Moreover, 54,648 is listed in the official records – records that will be retained for posterity – and no-one will ever question its origin.
Why even think about this, though? Well, the classic ‘wisdom hierarchy’ is a path from data to information to knowledge. Data: symbols, signs or numbers. Information: data processed to be useful or given meaning. Knowledge: the collection or application of information to provide understanding.
So back to my Friday morning stadium report; 54,648 is a number – data. But we know these data are incorrect. Therefore the reported match attendance is mythinformation. And knowledge derived from this mythinformation – such as the most popular team or best game to attend next week – is flawed.
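The hierarchy above can be sketched in a few lines of code. This is a minimal, purely illustrative example: the 54,648 figure comes from the report, but the turnstile headcount is an assumption (the report says only that fewer than 30,000 were present), and every variable name is hypothetical.

```python
# Sketch of the data -> information -> knowledge path, using the
# attendance example. Only 54,648 comes from the report; the
# turnstile figure is an assumed, illustrative headcount.

# Data: raw numbers, no context.
tickets_sold = 54_648      # the reported "attendance"
turnstile_count = 29_500   # assumed number actually in the stadium

# Information: data given meaning.
no_shows = tickets_sold - turnstile_count
no_show_rate = no_shows / tickets_sold

# Knowledge: interpretation -- and where mythinformation creeps in.
# Ranking games by tickets_sold rather than turnstile_count bakes
# the inflated figure into every downstream conclusion.
print(f"Reported attendance: {tickets_sold:,}")
print(f"Assumed headcount:   {turnstile_count:,}")
print(f"No-shows: {no_shows:,} ({no_show_rate:.0%})")
```

The point of the sketch is simply that the flaw enters at the data layer, so no amount of downstream processing can repair it.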
Pharmaceutical Research and Development is replete with data – data on molecules, projects, diseases, portfolios, segments and companies. As an industry we are highly – and rightly – regulated and audited. We have a great many high-quality checks and balances in place. It is fortunately rare for our data to be anything other than true and accurate.
This is good. But we do have a lot of data…a very large amount of data. Many would argue that with so much data, our major challenge is how to handle and process it all.
We produce graphs, plots, trends with statistics and significance. We colour code and we shade. We process and we interpret. And we present.
But we don’t often show data. Rather, we share information. Or, more precisely, we share our interpretation of the data, with the associated knowledge and conclusions inevitably sounding factual. But is that always right? How often are there other possible or plausible interpretations of our data?
Research and Development is absolutely built on information derived from quality data – ours is an industry where data and information are independently verified. But what about those assumptions we include in our interpretations? An assumption can only ever be something we believe to be correct, but can we always be certain? After all, it is certainty that converts assumption into fact.
Assumptions are as important as facts in science. The more opportunity we have to review facts and debate assumptions – especially testable assumptions – the better our overall performance will be.
After all, one of the beauties of science – why we love what we do so much – is that we are able to propose and run experiments…experiments designed to convert assumptions into facts.