An opportunity lost for an internet we could all rely on
In times of uncertainty, such as during a pandemic, conspiracy theories abound. This gives the lie to the notion that free speech ensures that truth will prevail. Because the web lacks bi-directional links and user-editable pages, its design contributes to the spread of misinformation, unlike Wikipedia’s more reliable, interconnected model.
This article was first published in The Mint. You can read the original at this link.
There is no better environment for conspiracy theories to flourish than a global pandemic. At times of uncertainty, when it is unclear when or how we will get out of the predicament we find ourselves in, the painful reality of a long and punishing recovery can be hard to accept. At such times, the distance between scepticism and paranoid cynicism shortens, and for those who traverse that path, convenient theories that align with preconceived notions can be so comforting as to override any evidence to the contrary.
Why is it that even at this time in human history, when we have ready access to more empirical data than ever before, theories that are unsubstantiated can hold such sway over so many? How is it that charismatic charlatans in positions of influence can make preposterous claims and get vast crowds to believe what they say despite the best scientific evidence to the contrary?
The entire notion of free speech is based on the assumption that if you allow people to freely exchange ideas, the good ones—those that are backed up by empirical evidence and data—will always bubble up to the surface. It presumes that as long as we create an environment in which all ideas can be shared, the absurd and fantastical ones will be exposed for what they are and thus get discredited.
And yet, all around us, day after day, we can see that this is not the case. Instead, the freedom to speak has, arguably, allowed some bad ideas to gain credence—to the point where they are widely regarded as being true, despite hard data to the contrary. How is it that at a time when the internet has made most of the world’s knowledge available at our fingertips, incontrovertible facts can so easily be disregarded?
Could it be that at least one explanation for why this is so comes from the early days of the World Wide Web and the design choices we made while it was being built?
When Tim Berners-Lee joined CERN as a consultant in 1980, he developed a hypertext program called Enquire to record the various dependencies between the 10,000-odd people who comprised the particle physics team. He posited that once he had mapped out these relationships, it would be possible to find answers to questions such as what would happen if a particular module were to be removed or a team repurposed.
Core to his idea was the concept of bi-directional links—hypertext connections that bound two documents to each other, so that every time you clicked through to a new page in a database, that page remained connected back to the source from which it came. This was a powerful feature because it allowed knowledge to be tightly integrated so that individual pages never stood on their own, but instead derived their credibility from the pages to which they were connected as well as those that connected to them.
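The difference between such two-way links and the web’s one-way hyperlink can be sketched in a few lines of code. This is purely illustrative—the names and structure are hypothetical, not Enquire’s actual design—but it shows the key property: every link is recorded at both endpoints, so any page can always be traced back to the pages that cite it.

```python
# Hypothetical sketch of a bi-directional link store.
# Unlike the web's <a href>, where only the source page knows about
# the link, here both endpoints record the connection.

from collections import defaultdict

class LinkStore:
    def __init__(self):
        self.outgoing = defaultdict(set)  # page -> pages it links to
        self.incoming = defaultdict(set)  # page -> pages that link to it

    def link(self, source, target):
        # A single call records the link in both directions.
        self.outgoing[source].add(target)
        self.incoming[target].add(source)

    def backlinks(self, page):
        # The one-way web has no built-in equivalent of this query:
        # a page cannot know which pages link to it.
        return self.incoming[page]

store = LinkStore()
store.link("vaccine-overview", "clinical-trial-data")
store.link("fringe-theory", "clinical-trial-data")

# Anyone reading "clinical-trial-data" can see which pages cite it:
print(sorted(store.backlinks("clinical-trial-data")))
# → ['fringe-theory', 'vaccine-overview']
```

In a network built this way, a page’s claims remain visibly anchored to everything that references them, which is precisely the integration the article describes.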
Had this sort of deep interlinkage between various online pages been scaled up, the internet would have been a very different place. It would have been possible for us to verify information more thoroughly. Conversely, it would have been next to impossible to fabricate unsubstantiated theories that did not align with all that had been stated before and that would be stated hence.
Unfortunately, in the process of building the World Wide Web, Tim Berners-Lee was forced to make compromises. In order for his idea to gain scale, each page was going to have to stand on its own, and so he was forced to drop the concept of bi-directional links. For similar reasons, he came to the realization that it would not be possible to make every page on the web editable by everyone. Instead, each page would have to be managed by its creator, who could choose to link to whatever other pages he or she wanted.
These early design choices gave us the World Wide Web that we know today. It is the reason why the internet today is a global network of individual pages of knowledge, linked loosely to others. It is the reason why individual web pages are static and non-editable and why they do not need to have any binding relationship to anything else. This is why, when we go down certain rabbit holes of discovery, we get so quickly and thoroughly unmoored from everything else—including scientific fact. It’s why conspiracy theories can easily take root and exist on a knowledge network without any basis in truth.
That said, there is still one place on the internet where bi-directional links and user-editable pages are not only present, but are at the core of its central architecture. In this particular corner of the web, knowledge is not only reasonably reliable and accurate, it is continuously updated and fortified—not by a team of employees paid to administer it, but by lay users.
That place is Wikipedia, where every link can be followed back to its source with perfect internal consistency, allowing new knowledge to be created in a way that builds upon the knowledge that already exists. The fact that a sizeable majority of people on the internet think of Wikipedia as a source of truth shows how effective two-way linkages and user-edited pages can be.
The entire World Wide Web was very nearly designed in this manner. Had that early vision translated into reality, the internet might not have developed into the cesspool of inaccuracy that so much of it has become. And perhaps then, freedom of speech might have truly been able to find the technology-enabled home that the internet was supposed to offer.