Human Rights Framework and AI Regulation: Transparency of AI

Artificial Intelligence (AI) is a broad discipline in computer science that focuses on creating smart machines able to perform tasks that would otherwise require human intelligence. Ed Burns and Nicole Laskowski note that AI is the simulation of human intelligence in computer systems and machines with the capacity to carry out tasks conventionally reserved for humans. Specific examples where AI is applied include speech recognition, natural language processing, machine vision, and expert systems. The duo further claims that AI programming underscores three vital cognitive skills: self-correction, reasoning, and learning. Self-correction involves the progressive fine-tuning of algorithms to ensure that they achieve the most precise results possible. Reasoning focuses on selecting the most suitable algorithm to obtain the desired outcome. The learning aspect of AI programming, on the other hand, deals with creating rules and acquiring the data needed to convert pieces of data into actionable information.
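To make the self-correction skill concrete, here is a minimal, hypothetical sketch of an algorithm progressively fine-tuning a single parameter to reduce its own prediction error. The data, learning rate, and linear model are invented for illustration and do not come from any system discussed in this piece.

```python
# Minimal sketch of "self-correction": the algorithm progressively
# fine-tunes its parameter to reduce its own prediction error.
# The data points and learning rate are illustrative assumptions.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, observed output) pairs
weight = 0.0           # model: prediction = weight * x
learning_rate = 0.05

for step in range(200):
    # Measure how wrong the current model is on average.
    gradient = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    # Self-correct: nudge the parameter toward lower error.
    weight -= learning_rate * gradient

print(f"learned weight: {weight:.2f}")  # converges near 2.0, the true slope
```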

Practitioners and academics have, in recent years, sought to have transparency incorporated into the core operations of artificial intelligence models, and for good reason. According to Andrew Burt, transparency helps to address the issues of trust, discrimination, and fairness that have been receiving increased attention. For instance, the credit card business recently launched by Apple has been accused of relying on sexist lending models. Amazon was also compelled to scrap an artificial intelligence hiring tool after complaints emerged that it discriminated against women and failed to meet transparency standards. Dr. Markus Noga further notes that many organizations have adopted machine learning models and progressively incorporated them into core company functions such as decision-making. Companies have been using machine learning to allocate loans and jobs, admit students to universities, and recommend apartments to rent, people to date, or movies to watch. Such decisions may directly or indirectly affect people's lives.

Ensuring transparency in artificial intelligence will go a long way toward helping overcome bias. According to Greg Satell and Josh Sutton (2019), there are diverse sources of bias that cannot be substantially or utterly eliminated. Nonetheless, the duo believes that making AI systems more transparent, explainable, and auditable can help a great deal in mitigating the effects of bias. They further propose three practical strategies that leaders in machine learning can adopt to address this problem. Firstly, Satell and Sutton believe that subjecting AI systems to thorough human review helps to reduce machine-related errors. For instance, according to research cited in a report by the White House during the Obama era, humans and machines had error rates of 3.5% and 7.5% respectively when reading radiology images. However, the report further noted that when the work of humans and machines is combined, the overall error rate drops to 0.5%. Therefore, human review not only helps to foster transparency, but it also promotes accuracy by reducing the margin of error.
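A hypothetical sketch of what such a human-machine combination might look like in practice is shown below: a classifier handles the confident cases and escalates uncertain ones to a human reviewer. The model, the reviewer function, and the confidence threshold are stand-in assumptions, not the actual workflow from the White House report.

```python
# Hedged sketch of combining machine predictions with human review,
# in the spirit of the radiology example above. All pieces here are
# illustrative stand-ins.

import random

def model_predict(image):
    """Stand-in for an AI classifier: returns (label, confidence).
    A real deployment would call a trained model here."""
    confidence = random.random()
    return ("abnormal" if confidence > 0.5 else "normal"), confidence

def human_review(image):
    """Stand-in for a radiologist's reading, assumed authoritative here."""
    return "normal"

CONFIDENCE_THRESHOLD = 0.95  # assumed escalation policy, tuned per deployment

def combined_read(image):
    """Route low-confidence machine reads to a human reviewer."""
    label, confidence = model_predict(image)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "machine"          # confident cases go straight through
    return human_review(image), "human"  # uncertain cases are escalated

print(combined_read("scan_001.png"))
```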

Secondly, programmers who develop AI systems are supposed to know and understand their algorithms in the same way business enterprises such as banks need to know their clients. For instance, Eric Haller, the managing director of Datalabs at Experian, asserts that his data scientists now need to be far more careful in the AI era than decades ago, when fairly simple models were used. He adds that back then the firm only needed to enter and store accurate data records, which could quickly and conveniently be retrieved and corrected if a mistake was reported (Noga, 2018). However, the introduction of artificial intelligence, which now powers most of the firm's models, has complicated things, because it is no longer a matter of simply downloading and running open-source code. Instead, data scientists are required to have a deep understanding of every piece of code entered into the algorithms and to be able to explain it to the firm's shareholders.
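One hypothetical illustration of "knowing your algorithm" is a scoring model kept simple and documented enough that each factor's contribution can be explained on demand. The feature names and weights below are invented for this sketch and are not Experian's actual models.

```python
# Illustrative sketch of an explainable scoring model: every factor
# is named, weighted, and reported, so the logic can be walked through
# with stakeholders. Features and weights are invented assumptions.

FEATURE_WEIGHTS = {
    "payment_history": 0.35,      # share of on-time payments
    "credit_utilization": -0.30,  # fraction of available credit in use
    "account_age_years": 0.02,
}

def score(applicant):
    """Return a total score plus a per-feature breakdown for explanation."""
    contributions = {
        name: weight * applicant[name] for name, weight in FEATURE_WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

total, breakdown = score(
    {"payment_history": 0.9, "credit_utilization": 0.4, "account_age_years": 7}
)
print(f"score: {total:.2f}")
for name, value in breakdown.items():
    print(f"  {name}: {value:+.2f}")  # each factor's contribution is visible
```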

Thirdly, transparency in data sources and AI systems facilitates auditing, which, in effect, helps to overcome bias. Artificial intelligence not only helps to cut operational costs and replace human labor, but it also works best as a force multiplier that allows the creation of new value. Through transparency, AI systems can easily be audited and explained to external stakeholders, which in turn fosters fairness and creates room for making data more useful and practical (Rouse, 2019). Transparency in artificial intelligence also helps organizations to monitor the costs saved by replacing human labor with machines and to translate those savings into revenue.
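As a rough illustration of what auditability can mean in practice, the sketch below appends a timestamped record of every automated decision, so that an external reviewer can later reconstruct what the system saw and decided. The record fields are assumptions about what an auditor might need, not a prescribed standard.

```python
# Hedged sketch of an audit trail for automated decisions. The field
# names, file name, and example values are illustrative assumptions.

import json
import time

def log_decision(audit_file, model_version, inputs, decision):
    """Append one timestamped record per automated decision."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,  # ties the decision to exact code/data
        "inputs": inputs,                # what the model saw
        "decision": decision,            # what the model decided
    }
    with open(audit_file, "a") as f:
        f.write(json.dumps(record) + "\n")  # one JSON line per decision

log_decision(
    "loan_decisions.jsonl",
    model_version="v1.3.0",
    inputs={"income": 52000, "credit_utilization": 0.4},
    decision="approved",
)
```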

Although transparency in artificial intelligence systems is known for its vast benefits, a recent article by Andrew Burt, "The AI Transparency Paradox," notes several risks linked to data transparency. Burt maintains that disclosing too much data about AI systems makes it easier for attackers to compromise them. Releasing too much information about a system's functionality and source code makes it vulnerable to attackers, who may in turn disclose the company's confidential information. This not only hampers the firm's security, but it also makes it susceptible to regulatory actions or lawsuits. Therefore, although transparency brings real benefits to AI systems, organizations need to think critically and carefully about it, because it also poses operational hurdles. For instance, companies need to be particularly careful about how they handle, protect, and share the information generated about their risks.

Sources:

Burt, A. (2019). The AI Transparency Paradox. Retrieved 12 February 2020, from https://hbr.org/2019/12/the-ai-transparency-paradox

Noga, M. (2018). Bringing Transparency Into AI. Retrieved 12 February 2020, from https://www.digitalistmag.com/future-of-work/2018/11/27/bringing-transparency-into-ai-06194523

Privacy International. (2018). Privacy and Freedom of Expression in the Age of Artificial Intelligence. Retrieved 12 February 2020, from https://www.article19.org/wp-content/uploads/2018/04/Privacy-and-Freedom-of-Expression-In-the-Age-of-Artificial-Intelligence-1.pdf

Rouse, M. (2019). What is Artificial Intelligence (AI)? Retrieved 12 February 2020, from https://searchenterpriseai.techtarget.com/definition/AI-Artificial-Intelligence

Satell, G., & Sutton, J. (2019). We Need AI That Is Explainable, Auditable, and Transparent. Retrieved 12 February 2020, from https://hbr.org/2019/10/we-need-ai-that-is-explainable-auditable-and-transparent
