
Experts Reveal Their Tech Ethics Wishes For The New Year


We can only hope that 2019 will be the year when the ethics of emerging technologies becomes a central part of discussions on innovation and progress.

Below, I've asked some prominent tech ethicists to share their hopes for the upcoming year: what they would like to see people and policy bodies pay more attention to before the opportunity passes us by.

Their answers cover a wide range of topics from AI to the Arctic.

Here's what they had to say:

David Ryan Polgar, Tech ethicist and Founder of All Tech Is Human; Co-host of Funny as Tech

"Digital phenotyping will be a hotly debated ethical issue in 2019. Digital phenotyping is the emerging field of examining how people utilize smartphones and other digital devices (motor skills with scrolling, time of use, the volume of communication) to make correlative predictions regarding depression, suicidality, Alzheimer’s, and more. It’s what I like to call the canary in the smartphone. But what are the trade-offs and risks of distressing users with false assumptions?

This topic will be important in 2019 because suicide rates have risen 30 percent since 1999 and we are naturally looking at our ubiquitous smartphones to provide early warning signs. There are major issues, however, to work out around data privacy (especially for teen users), free will, and whether we can accurately decipher the signal from the noise through smartphone usage."


Nick Bostrom, Professor and Director of Oxford University's Future of Humanity Institute; author of the best-selling book Superintelligence: Paths, Dangers, Strategies

"The security implications of ongoing advances in biotechnology have, I believe, not been given sufficient attention by policymakers. As the capabilities of synthetic biology expand, and relevant skills and equipment proliferate, stronger regulation and oversight will become necessary.

Maybe we need to move away from the current regime — in which DIY biohacking and completely open and uncontrolled information-sharing are actively encouraged — towards a regime more like that which governs nuclear energy and nuclear weapons technology, areas where security concerns are recognized as being of critical importance.

In the meantime, ethical oversight over bioscience funding and experimentation should adopt a wider purview that takes into account not only risks to lab workers and research subjects, but also ways in which proposed research projects may help or hinder public health and biosecurity goals. Independent assessors with relevant risk assessment and security expertise should ideally augment these review panels."


Patrick Lin, Professor and Director of the Ethics + Emerging Sciences Group at Cal Poly; Member of the World Economic Forum's Global Future Councils and Stanford University's Center for Internet and Society

"The Arctic could become a geopolitical flashpoint in the not-too-distant future, as melting ice opens up major shipping shortcuts and access to huge energy reserves.

Technologies such as robotics and AI could make things worse if used to project force, or better if used to build up infrastructure and promote cooperation: autonomous search-and-rescue ships, floating medical outposts, and smart buoys that create a wireless mesh network. Indigenous people already live up there and are in trouble because of flooding and disappearing food sources, but technology can help here too, such as with drone deliveries of machine parts, supplies, and food that would normally take weeks to arrive.

This is a crucial test for humanity: if we can’t get our act together here, then there’s little hope to think we can develop outer space and other frontiers peacefully."


Evan Selinger, Professor at the Rochester Institute of Technology, Senior Fellow at The Future of Privacy Forum, and co-author of the new book Re-Engineering Humanity

"Face recognition technology is the technology to keep our eyes on in 2019.

The debates surrounding it have expressed our worst fears about surveillance, injustice, and the tightly coupled links between corporate and state power. They've also triggered a battle among big tech companies, including Amazon, Microsoft, and Google, over how to define the parameters of corporate social responsibility. External calls for greater accountability from civil rights groups, privacy activists, and scholars, along with internal demands for greater moral leadership from employees and shareholders, express concern that face surveillance governance could erode the basic fabric of democracy.

With aggressive competition fueling the global artificial intelligence race, it remains to be seen which values will guide innovation."


Rediet Abebe, Co-founder of Mechanism Design for Social Good; Co-founder of Black in AI 

"As AI researchers, we frequently work on problems that have significant consequences in domains such as health, criminal justice, and labor markets. In the coming year, I hope to see the continued growth of a research culture in which there is an expectation that AI researchers engage deeply with these various domains that intersect with their work. I hope, too, that this expectation becomes formalized and embedded in computing training.

I hope to see more AI researchers seek out collaborations and partnerships with domain experts from across various disciplines and lines of work in order to gain a deeper appreciation of the real world consequences and constraints of our work."


Greg M. Epstein, Humanist Chaplain at Harvard and MIT; author of the NYT best-seller Good Without God: What a Billion Nonreligious People Do Believe

"What is all this technology for?

Over the past decade, social media has become more of an addiction than a tool. The internet is the greatest source of information, and misinformation, we never imagined. But it is not really what we need right now: a space that facilitates true connection among the billions of human beings on this pale blue dot, all sharing what Martin Luther King, Jr. called 'an inescapable network of mutuality...a single garment of destiny.'

Yes, it sounds naive, but I believe 2019 can be a good year for pragmatic idealists with hope and heart, cultivating technology to improve and ennoble the human condition. And that means tech leaders struggling with their industry’s lack of inclusion, not just as a side topic but as the biggest reason they might fail in the long term. Because when innovation mainly benefits the few, is that even really innovative at all?

We’ll know leaders are getting it if we start to see them communicate more vulnerably, with more emotional honesty. More like VC Justin Kan’s reflections on mortality and transience; theoretical astrophysicist Chanda Prescod-Weinstein’s passion for social, racial, and gender justice in science; fund manager Roy Bahat’s CV of his failures; or U.S. Representative-elect Alexandria Ocasio-Cortez’s use of her own technology-driven celebrity to inspire millions of women and girls of color (not to mention middle-aged white guys like me) to fight for a better world for all."


And, of course, I can't help but add my own hope for 2019: that we continue to explore ways to improve access to technology for people all over the world, especially in the places where labor and other resources are exploited to drive the march of progress. From the development of 5G to increasingly sophisticated medical interventions, equality of access is a major challenge we'll need to face in order to help repair the enormous divide that continues to widen between the rich and the poor.