The Singularity is Real, Are Machine Ethics?

william smith
6 min read · Oct 1, 2017

In 1975, when I started working in the software industry, only information technology royalty like Alan Kay, Vint Cerf, Sir Tim Berners-Lee, Doug Engelbart and maybe Steve Jobs imagined that forty years later we’d be communicating throughout the world using tablet computers and mobile phones.

So what will our technology look like forty years from now? The more important question may be, “Will our technology be ethical?” A few characteristics seem very likely. They include artificial intelligence (AI), the Internet of Things (IoT) and, sadly, intelligent weapon systems and surveillance. Growth of these four technologies will only accelerate as investments continue to grow. The characteristics of the “ethics of machines” are a bit more enigmatic but no less influential in the lives of human beings. This is what science writer Mitchell Waldrop had to say about the ethics of machines in his AI Magazine article from 1987:

“Intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not.”

There’s an implied ethics in almost all AI applications. Furthermore, when AI applications are substituted for activities previously performed by humans, the “ethics of machines” are substituted for the “ethics of humans”. That means machines decide what’s right and wrong, not humans. We need to stay vigilant and attentive to machines imposing their morality on us.

Artificial Intelligence

Many science fiction writers and respected scientists have written about the “Technological Singularity”, the moment when intelligent machines will dominate human life. In all of these writings AI is the driving force. Today, AI-enabled computers evaluate alternative actions for arriving at a target state selected by humans. In the future, computers may select the target state and evaluate the ability of humans to meet it.
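The division of labor described above can be made concrete with a toy sketch, invented for this article: a human picks the target state, and the machine evaluates the alternative actions available to it. The `reach` function and its greedy evaluation rule are assumptions for illustration, not any real AI system.

```python
def reach(target, actions, state=0):
    """Evaluate alternative actions, greedily applying the
    largest one that does not overshoot the human-chosen target."""
    while state != target:
        candidates = [a for a in actions if state + a <= target]
        if not candidates:
            break  # no action can move closer without overshooting
        state += max(candidates)
    return state

# the human selects the target (100); the machine chooses the steps
print(reach(100, actions=[1, 5, 10]))  # -> 100
```

The unsettling future the article imagines is simply this loop with the first argument chosen by the machine as well.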

A simple example of computers developing actions to achieve a target state is the often-maligned but widely used “credit scoring” model. Credit scoring has almost all of the features of the AI that machines could use to evaluate humans in the near future. This is how it works.

The first step in credit scoring is the establishment of an objective or target state. In the case of credit scoring, the target state is the “probability of default” (PD). Credit reporting agencies are aficionados of “Big Data”, which is really nothing more than extraordinarily large databases that can be analyzed to reveal patterns, trends, and associations, especially those relating to human behavior.

In the case of credit scoring, big data is used as the basis for calculating the probability of default for different groups of consumers. While “Big Data” may be relatively new, the calculation of probabilities has been around since the 17th century, when Blaise Pascal was asked to resolve a gambling dispute among some French noblemen.
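Estimating a PD per consumer group is, at its simplest, counting. The sketch below is a minimal illustration of that idea; the group labels and default records are invented, and a real agency would work from millions of credit files rather than a handful of tuples.

```python
from collections import defaultdict

def pd_by_group(records):
    """Estimate probability of default per group from
    (group, defaulted) historical records."""
    totals = defaultdict(int)
    defaults = defaultdict(int)
    for group, defaulted in records:
        totals[group] += 1
        defaults[group] += int(defaulted)
    return {g: defaults[g] / totals[g] for g in totals}

history = [("A", False), ("A", True), ("A", False), ("A", False),
           ("B", True), ("B", False)]
print(pd_by_group(history))  # -> {'A': 0.25, 'B': 0.5}
```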

The second and maybe the most important step in the calculation of credit scores is the determination of variables correlated with the PD. Groups of consumers are then stratified into a range from highest to lowest probability of default, based on having one or more of those variables in their credit files. Points are assigned to the different strata. For most credit scoring models the strata, along with their point values, look something like the following. The machine’s ethical characterizations, measuring degrees of “goodness”, are also listed:

  • Excellent 815–850
  • Very Good 755–814
  • Good 666–754
  • Fair 562–665
  • Poor 504–561
  • Very Poor 300–503

Once the strata are in place it’s a simple matter of assigning credit applicants to one stratum or another based on the degree to which applicant data match variables correlated with the PD. The more an applicant’s data match data correlated with the probability of default, the worse the machine’s ethics judge the applicant and the fewer points he’s assigned.
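The assignment step above can be sketched in a few lines. The point bands are the ones listed in the article; the `characterize` function is a hypothetical stand-in for the scoring model itself, which in reality derives the point total from the correlated variables.

```python
# Point bands from the article, with the machine's "goodness" labels.
STRATA = [
    (815, 850, "Excellent"),
    (755, 814, "Very Good"),
    (666, 754, "Good"),
    (562, 665, "Fair"),
    (504, 561, "Poor"),
    (300, 503, "Very Poor"),
]

def characterize(score):
    """Map an applicant's point total to the machine's ethical label."""
    for low, high, label in STRATA:
        if low <= score <= high:
            return label
    raise ValueError("score outside the 300-850 range")

print(characterize(700))  # -> Good
```

Note how the ethical judgment, “Good” versus “Very Poor”, is nothing more than a table lookup once the model has produced a number.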

One capability envisioned by singularity writers but not used by credit agencies is physical interaction with the environment and with humans. That capability is also arriving quickly, however, as the “Internet of Things” (IoT) advances. Combining AI with IoT will enable machines to control the world of humans and override the “ethics of humans” with the “ethics of machines”.

Internet of Things (IoT)

The Internet of things (IoT) is the networking of physical devices, such as vehicles, buildings, and anything embedded with sensors or actuators, which enable “a thing” to collect and exchange data.

Because of IBM’s advertising, many are familiar with its Watson computer. Watson is a great example of how AI, not unlike that refined by credit agencies, can reach out and manipulate the physical world of human beings.

If you look at the Watson IoT blog, you’ll become familiar with Olli, the first self-driving AI-enabled vehicle, built by Local Motors. Watson AI enables what Olli’s designers call its “cognitive rider interface”. IBM’s “IoT for Automotive” extends the power of AI to connected vehicles, acquiring data from sensors and scanners. The ‘Meet Olli’ article provides Olli’s details.

Intelligent Weapon Systems

More foreboding than Olli are AI-enabled weapon systems. Autonomous weapon systems, like BAE Systems’ “Taranis” drone, engage targets without human intervention; they become lethal when their targets include humans.

Lethal autonomous weapons systems (LAWS) include, for example, armed quadcopters that can search for and eliminate enemy combatants in a crowded city. Humans don’t make targeting decisions for LAWS drones. AI software, with machine ethics, does.

LAWS violate some fundamental principles of human dignity. They allow machines to choose whom to kill. For example, they might be tasked with “eliminating” anyone whose facial characteristics match those in pictures of criminals stored in a police database. Criminals in the database could be tagged with degrees of goodness, like “bad” or “very bad”, calculated by an AI-enabled machine. “The conversation on Lethal Autonomous Weapon Systems (LAWS) centers on the ethics of a computer, using credit approval-like decision making, to kill a human being.”

Facial recognition software is based on the ability to recognize a face and then measure the various features of the face. Moscow is adding facial-recognition technology to its network of 170,000 surveillance cameras across the city in what city officials say is a move “to identify criminals”. Simply installing weapons along with the cameras would enable any policing agency to eliminate a “criminal” once a picture was snapped.

Surveillance and Facial Recognition

As noted above, facial recognition software recognizes a face and then measures its various features. The only difference between this and credit scoring is that the comparison is between a picture taken by a camera connected to the IoT and a database of pictures of “whomever”, instead of credit applicant data.
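The structural similarity to credit scoring can be seen in a toy sketch: a face is reduced to a vector of measured features, then compared against a database of such vectors. The feature values, names, and threshold below are invented for illustration; real systems use learned embeddings with far more dimensions.

```python
import math

def distance(a, b):
    """Euclidean distance between two face-feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, db, threshold=0.3):
    """Return the closest database identity, or None if no one
    is within the match threshold."""
    name = min(db, key=lambda k: distance(probe, db[k]))
    return name if distance(probe, db[name]) <= threshold else None

faces = {"person_1": [0.80, 0.20, 0.50],
         "person_2": [0.10, 0.90, 0.40]}
print(best_match([0.79, 0.21, 0.52], faces))  # -> person_1
```

Swap the feature vectors for credit variables and the threshold for a point cutoff, and this is the credit scoring model again, aimed at faces.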

These developments are not stories told by sci-fi writers or the prototypes of evil scientists. Events like a Technological Singularity do not happen in an instant. They evolve in innocuous, small steps taken by well-meaning human beings. Eventually the humans look around and realize the world has changed dramatically and the machines are in charge.

According to Penn professor Martin Seligman, “It’s the distinctive psychological feature of ‘latitude’, the ability to create a large number of distinct options for action, that explains why humans are free and have free will.” (Homo Prospectus)

Given enough data, machines could create endless options, making them freer than humans. As the Singularity advances we may begin attributing free will to machines. If we do, we’ll also need to confront the substitution of the “ethics of machines” for the “ethics of humans”. Let’s hope we confront it before the machines do. If machines gain the ethical high ground, they win: game, set, match!

___________________________________________________________________

Notes:

  1. Mitchell Waldrop, “A Question of Responsibility”, AI Magazine, Vol. 8, No. 1, 1987.
  2. John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of super-intelligence.
  3. http://www.fico.com/en/blogs/risk-compliance/fico-score-distribution-remains-mixed/
  4. Ted W. Schroeder, “Lethal Autonomous Weapon Systems in Future Conflicts”, March 2017.
  5. Martin Seligman, Homo Prospectus, Oxford University Press, New York, 2016, p. 193.

Originally published at neutec.wordpress.com on October 1, 2017.
