Truth and LLMs

Education is evolving in the age of technology and AI. While modern education is moving away from rote learning towards skills like critical thinking, students should also be trained to use AI as a research tool. However, the rise of AI-generated content makes it harder to distinguish genuine research from fabricated material, which is why students must be taught healthy scepticism and the habit of cross-referencing sources.

This article was first published in The Mint.


When I was in school, we were prohibited from using electronic aids during our exams. Even then, I couldn’t understand why we had to do sums in our heads when it was so much simpler to use an electronic calculator. But when I had the temerity to question my teachers, I was given a lecture about how the ability to multiply numbers in my head would stand me in good stead when I was older.

Today, with the perfect vision of hindsight, I can say that not only have I rarely put that skill to use, I end up using the calculator app on my phone all the time—even for the simplest of sums. And I am none the worse for it.

Tools, Not Skills

I am happy to say that there are schools today that are a lot less anachronistic. Not only can my son carry a calculator into his exam hall, he is actively encouraged to use it. I guess mental multiplication is no longer considered the important life-skill that it once was. I was also pleased to note that they’ve extended a similar relaxation to another bane of my schoolboy life, spelling, with examiners being encouraged to ignore spelling mistakes unless they impede comprehension.

Today, calculators and spell checkers routinely do for us what we had no option but to do ourselves when we were little. And since they are built directly into technology that surrounds us, they are reliably available whenever we need them. If the purpose of our education system is to equip our children with the skills they need to succeed in life, we need to be looking at what they will need to survive in the world they are growing into, rather than teaching them skills that might have been useful when their teachers were growing up.

I was also gratified to note that my son is being actively encouraged to think beyond his textbooks. When we were in school, we were tested on our ability to regurgitate answers that we had learnt by rote. The focus of modern education programmes seems to have shifted away from this and towards evaluating students on their ability to sift through information and find arguments that best support their propositions. They are being required to develop different writing skills, ones that do not depend on their ability to memorise but instead force them to think about how to present answers in ways that most cogently make their case. From personal experience, I can readily confirm that these skills will take you further in life than the ability to calculate sums in your head. They will help equip students not only for a life in academia, but for all types of knowledge work.

And yet, as useful as these skills are, are they what our children need to survive in the world they are growing up into?

AI for Research

In the recent past, we have seen tremendous improvement in large language models (LLMs). Just a few years ago, these systems were little more than glorified autocomplete algorithms that used pattern recognition to approximate human conversation. Today, having gorged themselves on the contents of much of the navigable internet, it seems there isn’t a lot that these applications cannot do. A few weeks ago, I used one of these programs to part-write one of my articles in this column. Most readers couldn’t tell that it had been written, almost entirely, by a computer till the big reveal in the penultimate paragraph.

There are a variety of different applications to which these technologies are being put, including, most pertinently to our current discussion, research. Today, an LLM can be tasked with conducting research on pretty much any issue worthy of study, and it will generate an accurate summary of the most relevant papers in a form indistinguishable from what a human would have produced. If this is the direction research is headed, is there really any point in training our children the old-fashioned way? If it is inevitable that machines will replace human researchers, would it not be better for us to train our children to use artificial intelligence as a research tool?

But I believe that there is another, entirely different issue presaged by these developments that is worthy of our attention. One that, if we fail to make young students aware of it, will leave them ill-prepared for their future.

Truth from Fiction

In the past, we could take it for granted that the academic literature we came across had been peer-reviewed and was therefore reliably accurate. But given the ease with which LLMs generate articles out of thin air, there is a growing concern that even the most seemingly authentic primary source material might have been entirely made up. We have all seen how fake news has affected journalism and eroded trust in media as a whole. I can see how academic research could meet the same fate once LLM-generated content proliferates to such an extent that it becomes impossible to distinguish genuine research from fabrication.

This is the future our children are growing into, and if there is one thing we need to teach them, it is the art of figuring out whether the material they are relying on is genuine or generated out of thin air.

To do this, we need to inculcate in them a sense of healthy scepticism, so that they question all the information they are presented with, no matter how reliable the source might seem to be. We need to train them to cross-reference sources and only accept facts that have been adequately corroborated.

Our current peer-review system was supposed to take care of all of this, but if we can no longer rely on that time-tested system, we will need to learn anew how to fend for ourselves.