CEOs may need help defending investors & employees from Artificial Intelligence (AI)
This photo captures the “wild and crazy guy” image Chief Executive Officers (CEOs) very often have of their Chief Information Officer (CIO), if they even acknowledge him as an “executive”. Many still think of CIOs as “the computer guy”.
Images of CIOs are often unflattering, largely because business executives believe they’ll be disrespected by their peers if they appear “too technical”. Being “too technical” is not a badge of honor in most large businesses. However, as Artificial Intelligence (AI) technology grows more sophisticated, CEOs will find it increasingly important to develop a close partnership with their CIOs. That partnership may be the best protection CEOs can offer their investors and employees from what could be profit-ravaging AI projects.
This is Elizabeth Holmes, the founder of Theranos, a US start-up. She promised to revolutionize blood testing but recently agreed to settle charges that she raised over $700m fraudulently.
The SEC said Ms. Holmes deceived investors about the firm’s technology.
This is how Ken Auletta of the New Yorker described Holmes’ comically vague description of how her technology worked:
“A chemistry is performed so that a chemical reaction occurs and generates a signal from the chemical interaction with the sample, which is translated into a result, which is then reviewed by certified laboratory personnel.” [Holmes] added that, thanks to “miniaturization and automation, we are able to handle these tiny samples.”
Jina Choi, director of the SEC’s San Francisco regional office said “Innovators who seek to revolutionize and disrupt an industry must tell investors the truth about what their technology can do today — not just what they hope it might do someday.”
I know Ms. Choi’s admonishment all too well. The president of the software company where I worked for over a decade received the same admonishment from the SEC in 1986, when our auditors disclosed that licensing fees had been improperly recorded as revenue in financial statements. Our sales teams were forging contracts, many for deals that did not even exist. The auditors had grown increasingly suspicious in 1984 and staged a midnight raid that found all sorts of “bad things.” The company then failed to file its 10-K report with the SEC and was suspended by the NASD (National Association of Securities Dealers).
I tell that sad story simply to describe how our executives, and all of us, paid the price for not “telling investors the truth about what our technology could do”, instead of what we hoped it might do someday. We’re now entering an acute period in the development of Artificial Intelligence (AI) technology when the risks of not telling investors the truth could be much worse than what the executives of my company or Ms. Holmes faced. However, telling the truth about AI technology is often difficult because, as Winston Churchill said of Russia, “it is a riddle, wrapped in a mystery, inside an enigma”. AI has been wrapped in layer upon layer of enigma in an attempt to make it salable. At this point the mystery seems to continue!
I recently listened to a radio talk-show host interview the technology editor of the New York Times about Artificial Intelligence (AI). Of course the technology editor used the metaphor of a “neural network” to describe AI. The interviewer immediately picked up on the idea that AI systems are facsimiles of human brains. Even though the editor kept reminding her that “a neural network” was a metaphor, it appeared she did not want the interview to reach the point where he might explain that “nodes” in AI networks are not really facsimiles of human synapses but simply parts of a larger algorithm.
For example, in the case of AI systems, the computations involved in producing an output from an input are often represented by a flow graph, like the adjacent one. In a flow graph, elementary computations are represented as locations along the graph called “nodes”, where each node holds a value resulting from a well-defined computation. In fact, the entire flow graph is a representation of computer code written to execute an algorithm on a computer.
The computation encoded at each node is applied to the values of that node’s children to produce the node’s own value. This is how an AI system decomposes a more complex formula into less complex parts to arrive at “intelligence”:
An AI program for the expression “sin(a² + b/a)” could be decomposed into the following nodes:
- two input nodes, a and b
- one node for the division b/a, taking a and b as input (i.e. as children)
- one node for the square of a (i.e. a²), taking only a as input
- one node for the addition, whose value would be a² + b/a, taking the a² and b/a nodes as input
- finally, one output node computing the sine, with a single input coming from the addition node
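The decomposition above can be sketched in a few lines of Python. This is a hypothetical illustration of a flow graph, not code from any particular AI library; the `Node` class and the sample values a = 2.0, b = 6.0 are my own assumptions.

```python
import math

class Node:
    """A node in the flow graph: a computation applied to the values of its children."""
    def __init__(self, name, children=(), op=None, value=None):
        self.name = name
        self.children = list(children)
        self.op = op        # function applied to the children's values
        self.value = value  # set directly for input nodes

    def evaluate(self):
        # A node's value results from applying its computation to its children.
        if self.op is not None:
            self.value = self.op(*(child.evaluate() for child in self.children))
        return self.value

# Build the graph for sin(a² + b/a), with sample inputs a = 2.0, b = 6.0
a = Node("a", value=2.0)
b = Node("b", value=6.0)
square = Node("a^2", children=[a], op=lambda x: x ** 2)
div = Node("b/a", children=[b, a], op=lambda x, y: x / y)
add = Node("a^2 + b/a", children=[square, div], op=lambda x, y: x + y)
out = Node("sin", children=[add], op=math.sin)

print(out.evaluate())  # same result as math.sin(2.0**2 + 6.0/2.0)
```

Nothing here is mysterious: the “network” is just ordinary code whose larger algorithm has been decomposed into small, premeditated computations.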
According to Ajay Agrawal, Joshua Gans, and Avi Goldfarb, all professors at the University of Toronto’s Rotman School, machine intelligence technology is, in essence, a “prediction technology”, relying on statistics and probability. That means that for something to be found “wrong” with AI technology, either the statistics themselves must be applied incorrectly or the data processed by the statistics must be incorrect or inaccurate.
The particular problem with AI is that it’s often impossible to know when the software is not doing what it was intended to do, let alone what it’s “hoped” to do someday.
AI Quality Assurance
“…a user’s confidence in the advice offered by a system may be affected by knowing what information the system has used or not used in reaching its decision.
For example, if an AI system responds “I recommend action X, and I used information A and B, but not C in reaching that conclusion,” then the user may be cautious about accepting the recommendation if information C seems significant to him.
current AI systems do not provide this kind of insight into the system’s decision making process; they serve more to justify the decisions actually made than to alert the user to possible weaknesses in the decision-making process.”
“Quality Measures and Assurance for AI Software”, John Rushby, Computer Science Laboratory, SRI International 1988.
Predictions by their very nature are an attempt to tell the future, and no one knows what the future will be until it becomes the present. Even then it’s difficult to tell whether the software was mistaken, or whether the “big data” on which it relied was incomplete or inaccurate for one reason or another. Because AI software is “prediction technology”, it’s completely reliant on the quality and accuracy of the data it uses to perform its predictive calculations.
The data used by AI applications is usually very complex, with many dimensions and relationships. Producing any reliable prediction about the future is extremely difficult for an AI application, simply because of the complexity of real-world data. And even high-quality data can feed a very complex statistical application that’s simply difficult to code correctly.
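A toy sketch of that dependence on data quality, using made-up numbers and the simplest possible “prediction technology” (an ordinary least-squares line): the same predictor is fitted once on clean measurements and once on the same measurements with recording errors mixed in.

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is repeatable

def fit_line(xs, ys):
    """Ordinary least-squares fit: y ≈ slope * x + intercept."""
    mean_x, mean_y = statistics.fmean(xs), statistics.fmean(ys)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# True (invented) relationship: y = 3x + 1
xs = [float(x) for x in range(50)]
clean_ys = [3 * x + 1 for x in xs]

# "Inaccurate" data: roughly 20% of the records were mis-keyed
dirty_ys = [y if random.random() > 0.2 else y * random.uniform(0, 5)
            for y in clean_ys]

clean_slope, _ = fit_line(xs, clean_ys)
dirty_slope, _ = fit_line(xs, dirty_ys)

print(f"slope learned from clean data: {clean_slope:.2f}")  # recovers ~3.00
print(f"slope learned from dirty data: {dirty_slope:.2f}")  # drifts away from 3
```

The statistics are applied identically in both runs; only the data changed, and the prediction drifted with it.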
While AI-enabled computers are likely to “emulate” the exchange of information throughout the human body, it’s a step too far to describe them as emulating human thought.
The human brain is composed of about 100 billion nerve cells (neurons) interconnected by trillions of connections, called synapses. On average, each connection transmits about one signal per second. Some specialized connections send up to 1,000 signals per second with most originating throughout the body and transmitted to the brain via the nervous system. “Somehow… that’s producing thought,” says Charles Jennings, director of neuro-technology at the MIT McGovern Institute for Brain Research.
The human brain is not only connected to the Central Nervous System (CNS), it’s also connected to the Peripheral Nervous System (PNS), which sends signals from all parts of the body in response to the body’s interaction with its environment.
Unlike digital neural networks made up of predetermined calculations, biological responses are spontaneous and result in spontaneous order. That means they result in a decrease in enthalpy and an increase in entropy of the system. When both of these conditions are met, a “reaction occurs naturally”, which is why human beings get tired when they think a lot. Computer systems, like AI applications, don’t get tired because their calculations do not result in a decrease in enthalpy and an increase in entropy of the system. In fact, the result of a computer system’s calculations is the opposite of entropy: it actually synthesizes hundreds of computations into order.
So while a digital neural network may work “like” a biological neural network it does so only in the resemblance of the transmission of signals. The origination and content of the signals themselves are completely different.
In a digital neural network the content of signals is predetermined, whereas the signals in a biological neural network are spontaneous and emergent. Biological signals are electrical or magnetic activity within the human body, usually detected via electrodes or transducers. Transducers convert one form of energy into another and can be used to monitor biological signals, for example blood pressure. Complex waveforms are reconstructed using Fourier analysis. So while biological signals can be interpreted using a graph that shows the changes in a signal’s amplitude over some duration of time, they do not represent a predetermined code intended to send a message between a sender and a receiver.
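As a sketch of that last point, here is Fourier analysis applied to a synthetic waveform. The signal below is a stand-in I invented (two sine waves mixed together), not real physiological data; the point is only that a discrete Fourier transform recovers the component frequencies from the amplitude-over-time graph.

```python
import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform (fine for small signals)."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# Synthetic "signal": a slow 2-cycle wave plus a weaker 5-cycle wave,
# sampled 64 times over the observation window.
n = 64
signal = [math.sin(2 * math.pi * 2 * t / n) +
          0.5 * math.sin(2 * math.pi * 5 * t / n)
          for t in range(n)]

spectrum = dft(signal)
# Convert each bin of the first half of the spectrum to an amplitude.
amplitudes = [2 * abs(c) / n for c in spectrum[: n // 2]]

# The two frequencies present in the waveform stand out clearly.
peaks = sorted(range(len(amplitudes)), key=amplitudes.__getitem__)[-2:]
print(sorted(peaks))  # → [2, 5]
```

Decomposing a waveform this way tells us what frequencies are present, but nothing about the decomposition says what the signal “means”, which is the author’s point: there is no predetermined code to decode.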
No one knows what thought will emerge or arise from a particular interaction with the environment but the results of calculations in a digital neural network are well known before they occur. They also include an expectation of how data needs to be arranged to accommodate the calculations and messages.
Neither the nodes nor the exchange of data among the nodes of a digital neural network are biochemical, like those of a brain and the nervous system. They are mathematical computations, each one of which is premeditated. They are thought out beforehand and generally intended to decompose a larger algorithm into smaller and smaller computations. The results of the individual computations are exchanged among nodes by means of digital messages or signals, but they are not spontaneous like those of a nervous system and a brain.
Physiology, meaning ‘nature or origin’, is the scientific study of normal mechanisms, and their interactions, within a living system. A sub-discipline of biology, its focus is on how organisms, organ systems, organs, cells, and biomolecules carry out the chemical or physical functions that exist in a living system.
Human physiology seeks to understand the mechanisms that work to keep the human body alive and functioning, through scientific inquiry into the nature of mechanical, physical, and biochemical functions of humans, their organs, and the cells of which they are composed. The principal focus of physiology is at the level of organs and systems within systems.
This is an oil painting depicting Claude Bernard, the father of modern physiology, with his pupils. Dr. Bernard determined that the endocrine and nervous systems play major roles in the reception and transmission of signals that integrate function in animals. Homeostasis is a major aspect of such interactions within plants as well as animals. Integration, the biological basis of the study of physiology, refers to the overlap of many functions of the systems of the human body, as well as its accompanying form. It is achieved through communication that occurs in a variety of ways, both electrical and chemical.
American physiologist Walter Cannon coined the term “homeostasis” in 1929, using it to describe the ability of the body to regulate its internal environment. While each physiological system performs different functions in the body, each individual system also works with every other system to keep a human being alive.
Human thought can be influenced by the interaction of all of the body’s organ systems. The human mind and body are parts of the same system; whatever happens to one affects the other. This means that what we think affects the way we feel, and how we feel affects how we think. If the body is tense, the mind will be tense. It’s not possible to change one without affecting the other.
Neuroscientist Antonio R. Damasio has worked for decades to show that feelings are what arise as the brain interprets emotions, which are themselves purely physical signals of the body reacting to what it believes to be the meaning of external stimuli, (i.e. messages).
The amygdalae are a pair of small organs within the medial temporal lobes of the human brain. They are responsible for making the notorious fight-or-flight decisions within human beings. However, the amygdala has also evolved to perform the really important role of forming and storing memories associated with emotional events.
The amygdala is really good at linking often-instantaneous reactions to situations, so every time we encounter that situation in the future we’re programmed to “feel” the associated emotion automatically. In effect, human beings “feel the meaning” of a situation, at least as it relates to our well-being.
For an AI application to be anything other than a metaphorical representation of a human brain, it would need to include components that enable the spontaneity and feelings produced by the human limbic system, where all emotions and feelings reside. That is highly unlikely, even if CEOs change their image of CIOs to be more like this.
__________________________________________________________________
Originally published at neutec.wordpress.com on March 19, 2018.