The Lady with the Lamp and data-driven medicine

Florence Nightingale’s contributions to medical science extended beyond nursing to include statistical analysis, leading to significant hospital reforms. Despite technological advancements, the medical profession still lags in adopting data technologies like AI. Regulations constraining data within national boundaries may hinder the development of these technologies.

This article was first published in The Mint. You can read the original at this link.


Florence Nightingale earned the sobriquet “the Lady with the Lamp” for her extraordinary dedication to the injured and the dying during the Crimean War. However, as much as she is known for her actions on the war front, it is her contributions to medical science that have had a lasting impact on the world.

In addition to being a competent nurse, Florence Nightingale was a brilliant mathematician. She used statistical data to demonstrate that the unsanitary conditions in wartime hospitals had resulted in over 60% of British soldiers dying from disease in the first seven months of the war. Persuaded by this evidence, the army changed the way it built hospitals: the new buildings were light and airy, and were designed with separate wings to prevent the spread of disease.

After the war, Florence Nightingale turned her attention to civilian hospitals, which, she soon realized, simply did not collect data on basic measures like recoveries, length of stay and mortality from different diseases. To address this, she built, with the help of some of the world’s leading statisticians of the time, a system for the collection and study of medical data. She cajoled hospitals across the country into adopting it and even got the government to collect information about illnesses as part of the census. Thanks to her, England implemented many of the data-driven processes that arguably form the foundation of modern evidence-based medicine.

Today, 160 years after those promising initial steps, the medical profession lags in its adoption of data technologies. Doctors use data to assess whether a patient’s physical parameters lie within an acceptable range, but in doing so they are constrained to a binary approach that suffers from threshold thinking: a reading just inside the range is treated as healthy and one just outside as cause for alarm, even though the two may be clinically indistinguishable.

If patients are diagnosed solely based on test results, doctors are constrained to rely on a single snapshot of information to come to conclusions about the patient’s health. Instead, if the information could be presented to them longitudinally, they would have the opportunity to observe the direction in which critical parameters were trending. This would enable them to intervene well before patients reached dangerous thresholds.
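
To make the contrast concrete, here is a minimal sketch in Python. The monthly creatinine readings, the reference range and the alert level are all invented for illustration; the point is only that a snapshot check and a trend check can disagree about the same patient.

```python
from statistics import linear_regression  # Python 3.10+

# Hypothetical serum-creatinine readings (mg/dL), one per month.
months = [0, 1, 2, 3, 4, 5]
creatinine = [0.7, 0.8, 0.85, 0.95, 1.0, 1.1]

LOW, HIGH = 0.6, 1.3  # illustrative reference range

# Threshold thinking: judge only the latest snapshot.
latest = creatinine[-1]
print("snapshot:", "normal" if LOW <= latest <= HIGH else "abnormal")
# -> "normal", even though the readings are steadily worsening.

# Longitudinal view: fit a trend line and inspect its slope.
slope, _ = linear_regression(months, creatinine)
if slope > 0.05:  # invented alert level, in mg/dL per month
    print(f"warning: rising {slope:.2f} mg/dL/month despite a 'normal' snapshot")
```

The same handful of numbers yields opposite conclusions depending on whether they are read as an isolated snapshot or as a series.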

It is not clear why we don’t already present medical data in this manner. We certainly have the technology to do so. We can use the cloud to store information so that it can be presented as time-series data. We have artificial intelligence (AI) technologies that can make sense of our databases, particularly when they are designed to draw on multiple data sources and generate analyses richer and more meaningful than would otherwise be possible. And yet, despite the availability of these technologies, our medical systems still provide doctors with small snapshots from which they are forced to derive useful conclusions.

To be fair, there are a number of examples of AI being successfully deployed to solve previously intractable medical problems. Duke University has developed an AI system that accurately predicts chronic kidney failure by tracking a patient’s historical glomerular filtration rate (GFR) readings. Combining these readings with various other data, it can predict the trajectory of that patient’s kidney function. Similar solutions have been designed for conditions as varied as cardiac arrest, depression and cancer.
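
Duke’s model is, of course, far richer than anything that fits in a few lines, but the underlying idea of projecting a trajectory from historical readings can be sketched as follows. The eGFR values and the straight-line extrapolation below are invented for illustration and bear no relation to the actual system.

```python
from statistics import linear_regression  # Python 3.10+

# Hypothetical eGFR readings (mL/min/1.73 m^2), one every six months.
# An eGFR below 15 is the conventional marker of kidney failure.
visit_months = [0, 6, 12, 18, 24]
egfr = [62, 55, 49, 41, 36]

slope, _ = linear_regression(visit_months, egfr)  # change per month

FAILURE = 15
if slope < 0:
    months_left = (FAILURE - egfr[-1]) / slope
    print(f"declining {abs(slope):.1f} per month; "
          f"projected to cross the failure threshold in ~{months_left:.0f} months")
```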

Manufacturers are using AI to build artificial pancreas machines designed to supply the optimum amount of insulin in real time in response to changes in blood sugar. These devices continuously monitor the glucose in the bloodstream and predict the appropriate dose to pump into it.
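
The loop inside such a device can be caricatured in a few lines. Real artificial-pancreas systems rely on predictive models and extensive safety logic; the purely reactive proportional rule below, with its invented target, sensitivity and cap, is only a sketch of the sense-and-dose cycle, not medical guidance.

```python
# Hypothetical CGM trace (mg/dL), one reading every five minutes.
cgm_readings = [105, 128, 152, 171, 160, 141, 118, 108]

TARGET = 110        # mg/dL, invented target
SENSITIVITY = 0.02  # insulin units per mg/dL above target, invented
MAX_BOLUS = 1.0     # invented safety cap per cycle

def dose_for(glucose: float) -> float:
    """Proportional rule: dose scales with the excess over target,
    clamped to the range [0, MAX_BOLUS]."""
    return min(max((glucose - TARGET) * SENSITIVITY, 0.0), MAX_BOLUS)

for minute, glucose in zip(range(0, 40, 5), cgm_readings):
    print(f"t={minute:2d} min  glucose={glucose} mg/dL  dose={dose_for(glucose):.2f} U")
```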

Imperial College London has developed an AI-powered electro-surgical smart knife for use in cancer surgeries. The device carries out real-time analysis of the smoke emanating from the tissue being incised to accurately determine whether the knife is cutting cancer cells or healthy tissue. This allows surgeons to excise malignant tumours more precisely while preserving as much of the healthy tissue as possible.

All of these devices have been built using AI. The neural networks that operate at the core of these technologies have analysed mountains of annotated data and built prediction models that generate remarkably accurate insights. However, since our medical systems are not designed to be interoperable, the only institutions that have anything close to the volume of data required to build an AI solution are large speciality hospitals.

As India looks to adopt a new electronic health record framework, it seems inevitable that the framework will include some of the data localization stipulations now being written into other regulations. If the government does go down this path, it would do well to remember that advances in medical science increasingly depend on access to data. If India is to keep pace with the rest of the world, its policies will need to support the optimal use of data.

Regulations that mindlessly constrain data within national boundaries stifle the development of these technologies by needlessly hampering their interoperability and fungibility. There is no denying that we must keep the security and sovereignty of our medical data uppermost in our minds. But we must find ways to do so that don’t bind our hands.