How digital trust systems can guard against false data

China’s social credit system monitors and scores citizens’ behaviors, rewarding or penalizing them accordingly. This system raises concerns about data accuracy and the impact of false information on individuals’ lives. As other countries might adopt similar systems, it’s crucial to ensure these algorithms are fair and continually reassess their trust evaluations to avoid unjust consequences.

This article was first published in The Mint. You can read the original at this link.


In 1983, Tang Guoji graduated from a teachers’ college and applied for a job. Despite his more than adequate qualifications, he soon found that no work unit or graduate school in China was willing to employ him. He petitioned the government for a job and even filed complaints against the unfair treatment being meted out to him, but to no avail. He eventually became a freelance writer and earned some measure of success, but it wasn’t until 20 years later that he discovered why he had been unable to get a job after graduation: a college advisor whom he had rubbed the wrong way back in school had inserted a document into his dang’an declaring him mentally unstable.

The dang’an is a file that the Chinese government maintains on its urban residents containing details of all political, administrative and personal transgressions they commit. This file was integral to the Communist Party’s objective of social control and was designed to ensure that all citizens adhered to a set of expected societal norms and behaviour patterns. So central was the dang’an to life in communist China that anything entered in it was given the highest degree of credibility.

Today, people in China are less dependent on the Party for their jobs and, consequently, the importance of the dang’an has diminished. However, all this is set to change next year when the Chinese government’s new social credit system comes online. This all-seeing digital system uses the full breadth of modern internet technologies to detect adherence by individuals to the state’s vision of how a model citizen should behave. It has been designed to incentivize sincerity and trust-keeping, ensuring that citizens who conform to the ideal are rewarded with social and financial benefits, while those who do not are pilloried as shame-worthy and put on blacklists that limit their access to markets. In time, these digital frameworks will likely allow citizens to improve their own social credit scores by unfriending those in their social circle who do not conform to the ideal, creating the perfect mechanism of self-policing.

The Chinese social credit system sounds like the dang’an on steroids. It will give the Chinese government an unprecedented ability to nudge its citizens into behaving in the manner that the Party believes is appropriate, equipping the state with social levers the likes of which no other ruler in history has ever had.

We have always used data to come to conclusions about the people around us—who we should trust and who we would be wise to steer clear of. As our interactions move increasingly online, more and more of the data that we use to evaluate trust is collected and processed through digital systems. It is inevitable that, as algorithms get better at analysing and processing this data, we will, as dystopian as it might sound, outsource our trust decision-making to them. If we already know that we are likely to start leaning on computers to help us make these decisions, it is crucial that we train these systems well.

We often base our decisions on first impressions, refusing to change our opinion of a person even when subsequent evidence indicates that our initial impression was incorrect. If we hand off the responsibility of evaluating trust to machines, there is a risk that we will imbue our digital decision-making systems with the same shortcomings that colour our own decisions. When the conclusions an algorithm reaches about a person’s trustworthiness are built up from scores assigned to individual observed behaviours, any inaccurate data point flows straight into the final decision. Just as the introduction of one incorrect document into Tang Guoji’s dang’an subjected him to a lifetime of suffering, computer systems that build trust profiles from incrementally observed behaviours are defenceless against solitary instances of corrupted data that colour the eventual recommendation.
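To make that failure mode concrete, here is a minimal, purely hypothetical sketch of such a cumulative trust profile: every recorded transgression permanently lowers the score, and later good behaviour never prompts a re-examination, so a single false entry, like the one in Tang Guoji’s dang’an, dominates the assessment for life. The class and record names are illustrative only and are not drawn from any real system.

```python
# Hypothetical sketch: a "first impressions" trust profile in which every
# recorded penalty is applied once and never re-examined.

class CumulativeTrustProfile:
    def __init__(self, initial_score: float = 1.0):
        self.score = initial_score   # 1.0 = fully trusted, 0.0 = blacklisted
        self.records = []            # every observation is kept, none revisited

    def add_record(self, label: str, penalty: float) -> None:
        """Append an observation and apply its penalty once, permanently."""
        self.records.append(label)
        self.score = max(0.0, self.score - penalty)


profile = CumulativeTrustProfile()
profile.add_record("declared mentally unstable (false entry)", penalty=0.8)
for _ in range(20):
    # Years of exemplary conduct carry no penalty, but restore nothing either.
    profile.add_record("exemplary conduct", penalty=0.0)

print(round(profile.score, 2))  # 0.2 -- the single false record dominates forever
```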

On the other hand, there are people who allow their impressions to evolve over time, letting their determination of the trustworthiness of a person change based on each new element of information they receive. These are people who, though they might have distrusted someone based on their initial impressions of them, are more than willing to admit that they made a mistake once they have learnt a bit more about that person. If trust algorithms could be designed like this, to constantly question their trust assessments based on every new data point they receive, their final recommendations would be that much more nuanced. Algorithms built on this basis will be able to ensure that their initially inaccurate assumptions keep getting tested against subsequent inputs, so that even if false data happens to creep into the system, it will not unduly affect the final assessment.
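By way of contrast, a score that is re-derived from the full weight of evidence each time a new data point arrives behaves very differently. The sketch below, again hypothetical rather than a description of any deployed system, uses a simple Bayesian (Beta-Bernoulli) estimate: a single corrupted observation shifts the score briefly, but every subsequent genuine observation keeps washing it out.

```python
# Hypothetical sketch: a trust estimate that is re-evaluated with every new
# observation, so no single data point can dominate indefinitely.

class ReassessingTrustProfile:
    def __init__(self):
        # Beta(1, 1) prior: no opinion either way before any evidence arrives.
        self.positive = 1.0
        self.negative = 1.0

    def observe(self, trustworthy: bool) -> None:
        """Fold a new data point into the running assessment."""
        if trustworthy:
            self.positive += 1
        else:
            self.negative += 1

    @property
    def score(self) -> float:
        """Current best estimate of trustworthiness: the posterior mean."""
        return self.positive / (self.positive + self.negative)


profile = ReassessingTrustProfile()
profile.observe(False)             # the single corrupted data point
for _ in range(20):
    profile.observe(True)          # subsequent genuine behaviour keeps diluting it

print(round(profile.score, 2))     # ~0.91 -- the false entry no longer dominates
```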

While China might be the first country to roll out social credit scores, as the world moves increasingly online, it is possible, likely even, that other countries will follow suit. When that happens, it is imperative that they use trust evaluation systems designed to ensure that the malicious or accidental introduction of false data does not compromise their analysis.

If a manual paper file-based system could wreak such havoc on the life of an innocent individual, how much worse will it be when computational systems that we trust to operate with complete impartiality make the same mistakes?