We Need To Examine The Ethics And Governance Of Artificial Intelligence

Growing up, one of my favorite movies was Steven Spielberg’s Minority Report.

I was fascinated by the idea that a crime could be prevented before it occurred. More interesting to me at the time was the futuristic role that ‘super intelligent’ technology – something depicted as more sophisticated and advanced than humans – could play in doing this accurately.

More recently, episodes of the popular Netflix show Black Mirror have explored the role that pre-crime and artificial intelligence could play in our world, returning to the same debate between free will and determinism.

Working in counter-terrorism, I know that the use of artificial intelligence in the security space is fast becoming a reality. After all, decisions and choices previously made by humans are increasingly being delegated to algorithms, which advise on, and sometimes decide, how data is interpreted and what actions should result.

Take the example of new technology that can recognize not just our faces but also read our mood and map our body language. Such systems can even tell a real smile from a fake one. Being able to use this to predict the risk of a security threat in a crowded airport or train station, and prevent it from occurring, would clearly be useful. Conversations I have had with people working in cyber-security suggest it is already being done.
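
To give a sense of how accessible the raw building blocks already are, here is a minimal sketch using the stock Haar cascades that ship with the open-source OpenCV library to flag faces and smiles in an image. It is a crude stand-in for the far more sophisticated commercial systems described above, and the input file name is hypothetical.

```python
import cv2  # pip install opencv-python

# Stock Haar cascades bundled with OpenCV: crude, but enough to show
# how machine-readable facial signals have become.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

img = cv2.imread("station_crowd.jpg")          # hypothetical CCTV frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # cascades operate on grayscale

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
    face = gray[y:y + h, x:x + w]              # look for a smile within each face
    smiles = smile_cascade.detectMultiScale(face, scaleFactor=1.7, minNeighbors=20)
    print(f"face at ({x}, {y}):", "smiling" if len(smiles) else "not smiling")
```

Telling a genuine smile from a fake one, or inferring mood, takes far more than this. But a dozen lines of free code already get you face and expression detection, which is why the questions that follow are urgent.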

This immediately brings two concerns to mind. The first is whether the technology can do this accurately. Like humans, technology makes mistakes, displaying unfair bias against people of color and women, for instance. Sometimes this bias reflects the assumptions and data of the people who built the algorithms. It would be not just unethical but unacceptable for people to be disadvantaged by the application of these systems on a mass scale.
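
One concrete way auditors look for this kind of bias is to compare error rates across demographic groups. The sketch below does exactly that on a handful of invented records; the group labels and outcomes are made up purely for illustration.

```python
from collections import defaultdict

# (group, flagged_as_threat, actually_a_threat) -- invented records
records = [
    ("group_a", True,  False), ("group_a", False, False),
    ("group_a", False, False), ("group_b", True,  False),
    ("group_b", True,  False), ("group_b", False, False),
]

false_positives = defaultdict(int)
innocents = defaultdict(int)
for group, flagged, actual in records:
    if not actual:                    # the person was not, in fact, a threat
        innocents[group] += 1
        if flagged:                   # ...but the system flagged them anyway
            false_positives[group] += 1

for group in sorted(innocents):
    rate = false_positives[group] / innocents[group]
    print(f"{group}: false-positive rate {rate:.0%}")
# group_a: 33%, group_b: 67% -- a persistent gap like this is the
# unfair bias described above, made measurable.
```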

The second concern is regulation and ethics. Research teams at MIT and Harvard are already looking into the fast-developing area of AI to map the boundaries within which sensitive but important data can be used. Who determines whether this technology can save lives, for example, versus the very real risk of veering into an Orwellian dystopia?

Take artificial intelligence systems that can predict a crime based on an individual’s history and their propensity to do harm. Pennsylvania could be one of the first states in the United States to base criminal sentences not just on the crimes people are convicted of, but also on whether they are deemed likely to commit additional crimes in the future. Statistically derived risk assessments, based on factors such as age, criminal record, and employment, would help judges determine which sentences to give, with the aim of reducing the cost of, and burden on, the prison system.
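
To make “statistically derived” concrete, here is a toy sketch of what such a risk score can look like. The factors and weights are invented for the example, not taken from Pennsylvania’s instrument or any real one; actual tools are fitted to historical case data, which is precisely where the bias discussed earlier can creep in.

```python
# Toy illustration of a statistically derived risk score.
# Every factor and weight below is invented for the example.

def risk_score(age: int, prior_convictions: int, employed: bool) -> float:
    """Return a hypothetical recidivism risk between 0.0 and 1.0."""
    score = 0.0
    score += 0.3 if age < 25 else 0.0          # youth weighted as higher risk
    score += min(prior_convictions, 5) * 0.1   # capped criminal-record factor
    score += 0.0 if employed else 0.2          # unemployment weighted as risk
    return min(score, 1.0)

# A sentencing tool would typically bucket the score for the judge:
score = risk_score(age=22, prior_convictions=3, employed=False)
print(round(score, 2), "-> high risk" if score >= 0.6 else "-> low/medium risk")
```

Even in this caricature the ethical stakes are visible: whoever chooses the factors and the weights is, in effect, deciding who counts as high risk.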

Risk assessments, which have existed for a long time, have also been used in areas such as the prevention of terrorism and child sexual exploitation. In the latter category, existing human systems are so overburdened that children at grave risk are often overlooked. Human errors in the casework of the severely abused child Gabriel Fernandez contributed to his eventual death at the hands of his mother and her boyfriend, and prompted a serious inquiry into the shortcomings of the Department of Children and Family Services in Los Angeles County. Using artificial intelligence in vulnerability assessments of children could aid overworked caseworkers and administrators and flag errors in existing systems.

However, such input must help correct the misguided and inconclusive evidence that leads to unfair outcomes, not add errors of its own. Bias and discrimination will only be reduced if some form of moral responsibility can be built into these machines, something that may prove impossible. Humans must be able to feed into, and debate, the moral costs and benefits of letting artificial intelligence systems carry out vulnerability assessments, and the systems themselves would need to reason about outcomes as rationally as human beings do. But human behavior is not easy to predict or model, and an entire branch of economics, behavioral economics, shows that we often act in irrational ways.

This brings us to regulation. At the moment, AI research and development is still concentrated in corporations and government bodies, actors that are visible to potential regulators, so it can be governed through existing institutional structures. Many have argued, for example, that a key first step would be increasing the transparency and accountability of those who build the systems and processes.

While accountability can help us address human biases, a further problem looms. AI systems pose serious complications if they act outside the control of human beings. I would be interested to understand how such a control problem could be overcome, and regulated, if an AI ‘super intelligence’ ever posed a danger of doing us harm.