Trust works two ways
The growing reliance on social ratings and feedback loops in services such as ride-sharing is creating a system in which personal ratings may determine access to services. The trend, mirrored in China’s proposed national trust score, raises concerns about algorithmic discrimination and its societal impact.
This article was first published in The Mint. You can read the original at this link.
Last month, a friend of mine in the US told me that after he booked a cab on a ride-sharing platform, he got an apologetic message from the driver cancelling the ride. By way of explanation, the driver said that, for personal safety reasons, he did not accept rides from passengers with a rating below 4.7. Even though this is exactly how the dual feedback loop is supposed to work, I was surprised: it was the very first time I had heard of it working against the recipient of the service.
I was immediately reminded of a recent episode of Black Mirror that described a not-so-distant future in which your social score determines what you are entitled to. High scores allow you to jump to the head of the line and access premium services, while low scores exclude you from even entering certain exclusive restaurants and stores.
Everyone in that fictional world was obsessed with improving their personal rating, constantly taking selfies for their followers to like and always being nice to the people around them in the hope of earning a coveted five-star review. The holy grail was to mix with people who had large followings, so that your own score would rise by association.
To me, the incident with the cancelled cab ride showed that this Black Mirror episode was uncomfortably close to coming true. We are so invested in social media that it seems inevitable that those platforms will eventually be used to slot us into some sort of trust rating system. The services we consume have already become largely social in their structure. We’ve all benefited from the trust feedback loops built into them, relying on the reviews of strangers to decide where we stay or which restaurants we eat at, and on independent reviews of products and sellers before clicking "buy".
Given this focus on crowdsourced reviews, it seems inevitable that the camera will soon be pointed in the opposite direction. At present, platforms keep their users’ social graphs confidential, refusing, for good reason, to share insights into user behaviour with other service providers. Understanding what users like and how they will respond is core to a platform’s ability to offer features they appreciate and keep them coming back for more, locking them ever deeper into the service.
Sharing user information, even with service providers that are not direct competitors, could erode this competitive advantage. Even so, there are clear benefits to a shared social graph. Users who have misbehaved on one social network are very likely to misbehave on others, and it is in everyone’s interest to weed out the bad apples. A centralised service that ranks users based on their behaviour across a number of services would help eliminate trolls and other undesirables from our timelines and eventually improve the safety and overall experience on these platforms.
In June 2014, China published a policy document titled Planning Outline for the Construction of a Social Credit System, setting out details of the national trust score that the Chinese government intended to roll out for its citizens. When deployed, it will be the world’s first consolidated social graph, aggregating behaviour across all forms of social interaction, real and virtual.
Using an algorithm devised by the Chinese government, it will boil down all feedback, negative and positive, into a single number that represents an individual’s trust score. This score will establish how trustworthy a person is and will rank him against the entire population, with entitlements and benefits determined solely by that rank. The system is currently voluntary, but millions have already signed up for the trial, recognising that a good social rank will offer them real-world benefits ranging from easier access to loans to a fast-tracked application for a coveted Schengen visa. For them, it represents a chance to jump to the head of the line simply by being more polite and obeying the rules.
If this sounds slightly dystopian, it’s probably because it is. For every person who manages to get to the head of the line, many more will be consigned to the lower levels of the social graph. There are already indications that lower-ranked citizens will face slower internet connections, restricted access to restaurants and even exclusion from certain forms of public transport.
Relying solely on algorithms to determine social entitlements may seem like a neat new way to incentivise behaviour, but the impact that this sort of algorithmic discrimination will have on society as a whole will be significant. Ranking algorithms lack the empathy needed to regulate society, and using gamification techniques to engineer desirable behaviour on a grand scale is nothing less than a recipe for disaster.