What If Governments Used Your Private Data?

In human culture it is not unusual to sort people into imaginary ranks, whether by the way they dress, their education, their career, or their kindness. Depending on one's social background, the criteria for rating people can be highly individual. One person may have several best friends while another has only one. Does that mean the first person is more popular? Maybe. But it could also mean that the two simply have very different, subconscious systems for rating the people around them.

Nowadays we are scored not only by the humans around us but also by computers. Social media platforms, online shops, and search engines collect our data every day, building an online persona of us in order to show us personalized content or ads. This data can also be bought by organizations or governments, which might help you get a better credit score, or even deny you access to certain public places.

Digital Persona

If you open a new bank account or apply for a loan, the bank will check your credit score to make sure you are trustworthy and can pay the loan back. Likewise, each time you rate a restaurant on Yelp, visit a place with your location turned on, buy something online, like a video on YouTube, or comment on a friend's post on social media, your data is collected. An online persona of you is automatically built and put in relation to people who behave similarly online.

Usually, your online persona is used to personalize your digital experience. But the same data can be bought so that organizations and governments can build a scoring system. Say you play online poker from time to time: an algorithm could flag this as the start of a possible gambling addiction and lower your credit score accordingly. One of the most prominent social credit scoring systems is Zhima Credit in China, sometimes translated as Sesame Credit.

The Zhima Credit

Zhima Credit is a subsidiary of the Chinese online retail giant Alibaba. Users can voluntarily choose to be scored by the system based on their credit history and behavior. Li Yingyun, Zhima's technology director, explained how the system works: “Someone who plays video games for 10 hours a day, for example, would be considered an idle person, and someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility.”

If participating in this social scoring system is voluntary, why do people do it? In China, a huge number of citizens have no traditional credit history. The Zhima system lets them be measured and gives banks a way to assess their credit risk. If your score is high enough, you may even book hotel rooms or rent a car without paying a deposit. Zhima Credit can thus be understood as a bonus program: behave a certain way, and you gain benefits.

Good Versus Bad

What one person considers good, another may not, based on subjective, social, or cultural factors. Take sleeping patterns in different cultures: in Germany, sleeping in late on weekends is usually considered lazy, because you are not seizing the day. Especially when the sun is shining, you are expected to get up early and go on a hiking trip. For Egyptians, on the other hand, sleeping until noon on the weekend is not frowned upon, as a long working week and a stressful commute can drain your energy. Also, since summers can get very hot, the seizing-the-day part may start later in the day than in a Central European country, which leads to people staying up late and catching up on sleep over the weekend.

“Given the variation in each country’s context, a global system may not be feasible. It will fall on each country, at least in the near term, to make their own decision. For individuals, it is important to make informed decisions on your own data: who can store and use them, who can share them, who they are shared with, and what are the impacts on you. It is also important to get prepared and consider whether and how to participate in a social score system should it be implemented in your country,” says Prof. Chengyi Lin, Affiliate Professor of Strategy at INSEAD and a leading expert on digital transformation.

Can A Social Scoring System Be Trusted?

A digital scoring system would be developed by humans using artificial intelligence and machine learning. For the algorithms to know what is right and wrong, they would have to be fed our data, and each behavior would have to be mapped to a positive, negative, or neutral score. But how reliable can such a system really be today?

First of all, the algorithm would be programmed by people, so there is always the possibility of human error and of a subjective definition of right and wrong. Second, the system would probably be run by the government. Germany, for example, has the experience of oppressive surveillance states burned into its cultural memory, the last one having ended only 30 years ago. This does not mean that Germans do not trust their government (in the face of the Corona pandemic, Germans trust their government more than ever) but rather that they are aware of the democratic need to separate power and interests in order to protect freedom and privacy.

Just imagine such a scoring system had been implemented a few decades ago. Would we still have the rights we have now? How would such a system affect individuality, and how much of our freedom and privacy would we lose? A scoring system like this would need access to the most private aspects of our lives. We cannot simply shut the door and make it wait outside.

Dangers of Discrimination in an Automated Scoring System

For a scoring system to work, it has to access our everyday lives: our sleep patterns, how often we go to the gym or take a walk, what we eat. Say you have a sweet tooth and buy sweets regularly. Even if you are healthy, the system could score you negatively for it: your health insurance could cost more because the system puts you in a risk group for diabetes, and you might pay more for dental treatments even though you take good care of your teeth. A system scores you through numbers. If statistics say you are more likely to become a criminal because you grew up in an area with a lot of gang violence, or more likely to succeed because your parents have well-paying careers, will that affect which colleges or scholarships you can apply to?
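The proxy problem described above can be sketched in a few lines of Python. Every field name and weight here is invented purely for illustration; the point is only to show how a rule learned from statistics can penalize someone for a correlate they cannot control, such as where they grew up:

```python
# A toy, hypothetical scoring rule. All features and weights are made up.
BASE_SCORE = 600

WEIGHTS = {
    "buys_sweets_weekly": -15,   # treated as a diabetes-risk proxy
    "gym_visits_per_week": +10,  # rewarded per weekly visit
    "high_crime_zip": -50,       # a pure proxy for place of upbringing
}

def score(profile: dict) -> int:
    s = BASE_SCORE
    if profile.get("buys_sweets_weekly"):
        s += WEIGHTS["buys_sweets_weekly"]
    s += WEIGHTS["gym_visits_per_week"] * profile.get("gym_visits_per_week", 0)
    if profile.get("high_crime_zip"):
        s += WEIGHTS["high_crime_zip"]
    return s

# Two people with identical habits, differing only in ZIP code:
a = score({"gym_visits_per_week": 3, "high_crime_zip": False})
b = score({"gym_visits_per_week": 3, "high_crime_zip": True})
print(a, b)  # → 630 580
```

The second person scores 50 points lower for geography alone, even though nothing about their own behavior differs, which is exactly the kind of automated discrimination the questions above point at.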

Transparency in how such a system works is essential; only then can we be sure its purpose really is to make our lives safer and more convenient. And how do we make sure that people outside the system can still participate in everyday life? A recent headline in China concerned a man who, after 16 years on the run for murder, had to turn himself in: without a smartphone he had no health code, which Chinese authorities use to fight the spread of the coronavirus, and so he could not access public transport or even find a place to live. We can all agree that one less criminal on the street is a good thing, but what about elderly people who do not use smartphones, like your grandparents? Would they be excluded from society because they are not part of a digital scoring system?

Before a digital scoring system could work, many questions around transparency, privacy, and discrimination still need to be discussed, tested, and understood. And we have a long way to go before an AI can decide whether someone gets an apartment or is allowed to take the bus.