AI and International Protection – Is “Good AI” Boring?

Caught between the dilemmas of freedom of expression and largely untrodden privacy issues, AI regulation has been widely debated among stakeholders. The clear position of the Office of the United Nations High Commissioner for Human Rights (OHCHR) in admitting past failure shows us the complexity:

“Domestic and international export control measures and corporate self-regulation have wholly failed at regulating this industry, which is shrouded in secrecy and is at the nexus of corporate and state interests.”

UN Special Rapporteur

While the sudden rise of AI applications is recent, the concept itself can be traced back to the science fiction of the late 1700s, and the field of study was established over half a century ago at the Dartmouth Conference in 1956. So far, however, legislative action has proved imperfect against producers scattered all over the world (Westerlund, 2019). For instance, with the rise of AI and deep learning techniques, fake digital content has proliferated in recent years. Freemium applications have put creation within reach of ordinary people, and a surprising 96% of such images worldwide contain pornographic content (Ding et al., 2019). Meanwhile, even authorities and data-driven companies can digitally trace most of your daily behaviour, including through high-quality facial recognition systems.

AI today seems easily and widely accessible.

So, will we keep suffering the risk that our sensitive information is used for someone’s evil intent or excessive pleasure? Or can we expect that regulation, or states, will ever establish a strict consent-based environment censoring all AI creations?


Data Protection and Sectoral Approach

One framework applicable to this matter is data protection. Such frameworks are diverse, but they are basically designed to protect individuals’ information on the shared understanding that human rights must be protected. Hence, even without explicit reference to AI, data protection frameworks apply to various forms of research, development and application to the extent that personal data is involved. For instance, the EU General Data Protection Regulation (GDPR) took effect in 2018 as a legal basis for processing data, built on the core principles of purpose limitation and data minimisation. Though its consequences for machine learning may be far-reaching, it introduces a range of provisions that encourage the design of less privacy-invasive systems. In the same year, Data Protection Impact Assessments (DPIAs) were established by the UK’s Information Commissioner’s Office so that organisations can explicitly manage the anticipated risks of AI-related applications.
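To make data minimisation concrete, here is a minimal sketch of the idea in Python. The record schema, field names and coarsening rules are illustrative assumptions of mine, not requirements spelled out in the GDPR or in any ICO guidance.

```python
# A minimal, hypothetical sketch of data minimisation before analysis.
# Field names and rules are invented for illustration only.

DIRECT_IDENTIFIERS = {"name", "email", "national_id"}  # assumed schema

def minimise(record: dict) -> dict:
    """Strip a record down to what a single, stated purpose requires."""
    safe = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "birth_date" in safe:            # coarsen exact date to year of birth
        safe["birth_year"] = safe.pop("birth_date")[:4]
    if "postcode" in safe:              # coarsen full postcode to its district
        safe["postcode"] = safe["postcode"][:3]
    return safe

raw = {"name": "A. Smith", "email": "a@example.com", "national_id": "X123",
       "birth_date": "1990-06-01", "postcode": "SW1A 1AA", "purchase": 42.0}
print(minimise(raw))  # {'postcode': 'SW1', 'purchase': 42.0, 'birth_year': '1990'}
```

The point is simply that a pipeline can be designed to see only what its stated purpose requires, which is the kind of less privacy-invasive design the GDPR encourages.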

Additionally, in countries with their own frameworks, sectoral privacy regulation complements data protection. In the United States, for instance, all applications of AI must comply with existing laws, and some states and cities have established task forces to examine the computerised algorithms used in public services. The German Ethics Code for Automated and Connected Driving is an example of a sectoral ethics code, containing a specific principle balancing data-oriented business models against limitations to users’ autonomy and data sovereignty.

Nevertheless, while these frameworks play a crucial role in safeguarding the right to privacy, Article 19 notes that they cannot address all the risks arising from the different applications and uses of this new technology. Indeed, data protection is limited to the protection of an identified or identifiable person; it does not cover the privacy of groups, or infringements that do not necessarily involve personal data, such as facial recognition systems. Frequent exemptions for national security or government surveillance can also be seen in the most privacy-invasive applications. Current sectoral regulations face practical challenges that undermine their effect as well: even strictly confidential medical records can be derived, inferred or predicted from browsing histories or credit card data.

To sum up, there is still no comprehensive or perfect regulation against these threats, and, as with most emerging technologies, we now face a real risk that commercial and state use will have a detrimental impact on human rights.


How can we define well-protected “good AI”?

As everyone knows, AI can benefit society if implemented responsibly. Despite the risks, its use can positively affect the exercise of a number of other rights, including the right to an effective remedy, the right to a fair trial, and even the right to freedom from discrimination.

IBM, a leading company in machine learning technology, is also keen to apply that technology pro bono to human rights issues, and one good example of its contribution is Watson’s assistance in the fight against human trafficking. In partnership with STOP THE TRAFFIK, a British pioneer in this field, IBM created and operates an AI data bank called the “Traffik Analysis Hub” on the IBM public cloud to help share information among large NGOs, law enforcement agencies and financial institutions. Because trafficking activity transcends borders and industries, it requires substantial coordination to address. Secure knowledge sharing and data analysis on the Hub make it possible to connect seemingly unrelated clues and so confirm and pinpoint suspected criminal activity. For example, if someone finds a suspicious deposit or money transfer in one country, the Hub can help flag multiple missing-person reports from specific regions by highlighting patterns of metadata that trace possible trafficking activity.
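To illustrate the kind of cross-dataset linking described above, here is a deliberately naive sketch. The record types, field names and matching rule are hypothetical; the actual design and analytics of the Traffik Analysis Hub are not described in this article and are certainly far more sophisticated.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record types; the Hub's real schema is not public here.
@dataclass
class Transfer:
    region: str
    when: date
    amount: float

@dataclass
class MissingReport:
    region: str
    when: date

def flag_related_reports(transfer, reports, window_days=30):
    """Naive linking rule: same region, filed within a time window of the transfer."""
    window = timedelta(days=window_days)
    return [r for r in reports
            if r.region == transfer.region and abs(r.when - transfer.when) <= window]

suspicious = Transfer(region="Region-A", when=date(2020, 3, 10), amount=9500.0)
reports = [MissingReport("Region-A", date(2020, 3, 1)),
           MissingReport("Region-B", date(2020, 3, 5)),
           MissingReport("Region-A", date(2020, 2, 25))]
print(flag_related_reports(suspicious, reports))  # flags both Region-A reports
```

Even this toy rule shows why coordination matters: neither dataset is suspicious on its own, and only joining them surfaces the pattern.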

A more entertaining example of Microsoft Japan’s machine learning technology is Rinna, the AI schoolgirl. Born as a social chatbot in 2015 under the concept of “Happy and Healthy AI”, she has since launched a career as a pop star supported by eight million followers across Asian countries. Collaborative tourism projects with local governments are among her representative work: by installing the smartphone app, tourists can tour cities with her voice guidance and discover lesser-known spots, customised through analysis of each visitor’s own behaviour data.

It is obvious that hardly anyone accepts their unique facial image being used for pornographic creations or for excessive monitoring without appropriate explanation and mutual consent. On the other hand, how can we judge whether data surveillance for preventing human trafficking is truly transparent? Or should we limit the potential of AI to the entertainment market alone, because that never hurts anyone?

Ironically, controversy is usually more attractive than plain “good”. Perhaps it is this argumentative sphere that pushes technology forward on an internet where people prefer buzz.
