Will Artificial Intelligence Learn Morals?

Twenty years ago, we had to wait more than ten minutes to download a single song over a 56k dial-up modem, and audio cassettes were still very much in vogue. Fast forward to 2022, and you can tell your phone or car to play your favorite tracks with your voice. We sign into our favorite music streaming service automatically, and it suggests music and artists to suit our mood, the time of day or the occasion. You can automate nearly every electrical system in your house to run on your schedule: reminders to buy groceries, lights that switch on when you walk in, and so on.

In a relatively short span of two decades, we have gone from waiting on technology to machines and systems awaiting our next command. Whether we like it or are even aware of it, Artificial Intelligence and automation already play a significant role in our lives.

AI is still in its early stages

Technology is slowly approaching a level of intelligence that can anticipate our needs. We are in the golden age of AI, and yet we have barely begun to see its applications. The next steps will take AI from learning routines to grasping more profound, abstract processes. For instance, if you habitually drink coffee every morning, it's easy for an AI to learn your routine; right now, though, it can't begin to grasp what you are thinking about while you drink that coffee. The next step in the evolution of AI could be your Google Home or Amazon Alexa knowing that you're about to begin your day and handing you your schedule unprompted.
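
To make the "learning your routine" idea concrete, here is a minimal sketch, in Python, of how an assistant might spot a habit simply by counting how often each action occurs at each hour of the day. The event log, action names and threshold below are invented for illustration; real assistants rely on far more sophisticated models.

    from collections import Counter, defaultdict
    from datetime import datetime

    # Hypothetical log of (timestamp, action) events from a smart-home hub.
    events = [
        ("2022-03-01 07:05", "brew_coffee"),
        ("2022-03-02 07:10", "brew_coffee"),
        ("2022-03-03 07:02", "brew_coffee"),
        ("2022-03-03 19:30", "watch_tv"),
        ("2022-03-04 07:08", "brew_coffee"),
    ]

    # Count how often each action happens in each hour of the day.
    habits = defaultdict(Counter)
    for stamp, action in events:
        hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
        habits[hour][action] += 1

    def predict(hour, min_count=3):
        """Suggest the most frequent action for this hour, if seen often enough."""
        if not habits[hour]:
            return None
        action, count = habits[hour].most_common(1)[0]
        return action if count >= min_count else None

    print(predict(7))   # -> 'brew_coffee' (seen 4 times around 7 a.m.)
    print(predict(19))  # -> None (watch_tv seen only once, below the threshold)

Once the count for a given hour crosses the threshold, the assistant can treat the action as a habit and act on it unprompted, which is exactly the leap from reacting to anticipating described above.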

AI is starting to switch gears from carrying out repetitive tasks to higher-order decision-making. In the next five to ten years, it will likely touch every facet of our lives. While we are more than content to let AI work for us, what happens when we start to outsource complex thought and decision-making? Our capacity to make decisions rests on consciousness, empathy and the ability to take a moral stand. When we let machines think for us, do we also burden them with the complex web of human morality?

Who decides which morals are right?

Mimicking human decision-making isn't merely a matter of logic or technology. Over centuries of civilization, we have developed deeply complex moral codes and ethics, informed as much by societal norms as by upbringing, culture and, to a large extent, religion. The problem is that morality remains a nebulous notion with no agreed-upon universals.

What’s perceived as moral in one society or religion could strike at the heart of everything another holds right. The answer may vary depending on the context and on who makes the decision. When we can barely balance our own cognitive biases, how do we chart a path for machines to avoid data bias? There is mounting evidence that some of our technologies are already as flawed as we are: facial recognition systems and digital assistants have shown signs of discrimination against women and people of color.

The most likely scenario is for AI to follow prescribed rules of morality for defined groups or societies. Imagine buying a basic code of ethics and upgrading it with morality packages to suit your proclivities. If you’re a Christian, the morality pack would follow the standard Christian code of ethics (or as close an approximation as possible). In this hypothetical scenario, we still control the moral principles the machine will follow. The problem arises when that decision is made by someone else. Just imagine the implications of an authoritarian government imposing its version of morality on ruthlessly monitored citizens. Even the debate over who could make such a call would have far-reaching implications.

What could a moral AI mean for the future?

The applications of a moral AI could defy belief. Instead of today’s overcrowded jails, for instance, AI could make rehabilitating criminals a real possibility. Could we dare to dream of a future where we could rewrite the morals of criminals with a chip and prevent murders? Would it be a boon to society or an ethical nightmare? Could it resemble the movie “The Last Days of American Crime”? Even minor applications, such as integrated continuous glucose monitoring (iCGM) systems in wearables that optimize diet and lifestyle, could have a long-term impact on our society and well-being.

As complicated as morality in AI is, it pays to remember that humans are a tenacious breed. We tend to get many things wrong in the first draft. As Shakespeare put it, we “by indirections find directions out.” In other words, we keep at the problem until we find our way.

Almost all of our technological advances seemed impossible at some point in history. It will probably take decades of trial and error, but we have already started on the first draft with projects like Delphi. Even a first iteration of an AI that attempts to be ethically informed, socially circumspect and culturally inclusive gives us reason for hope. Perhaps technology can finally point us toward the idyllic moral future we have collectively dreamed of for centuries.

Author Profile

Sarah Meere
Executive Editor

Sarah looks after corporate enquiries and relationships for the UKFilmPremieres, CelebEvents, ShowbizGossip and Celeb Management brands for the MarkMeets Group. She works with numerous media brands across the UK.
