Science Today


ARTIFICIAL INTELLIGENCE
(AND ALL THAT HYPE)

Information provided by Keith W. Miller,
Ph.D., Orthwein Endowed Professor for Lifelong Learning in
the Sciences at the University of Missouri-St. Louis

Artificial Intelligence (also known as AI) has had a long history of “hype cycles.”

Since the 1950s, both scientists and the public have periodically become convinced that AI is about to make dramatic advances that will revolutionize our lives. Then, when AI fails to deliver on many of those promises, interest (and research funding) fades. Keith W. Miller, Ph.D., Orthwein Endowed Professor for Lifelong Learning in the Sciences at the University of Missouri-St. Louis, shares his thoughts and research on AI and this new era of AI enthusiasm.

As AI devices become increasingly capable, many ethical issues arise. How should we treat these sophisticated machines, and how should they treat us?

In the past few years, published research papers and articles in the popular press about AI have increased dramatically. Is this just another hype cycle? Perhaps, but there are hints that AI may be fulfilling enough of its promises this time that things really are changing in a more permanent way.

“If you want a machine to behave in a manner that suggests it has ‘intelligence,’ one strategy is to analyze how humans process information, and have the machine (as much as possible) mimic the human process. Another strategy is to devise algorithms (detailed instructions that tell the computer what to do) that make the machines act as if they were intelligent, essentially ignoring any questions about whether the machine is intelligent in the same way that humans are intelligent.

“These strategies need not be mutually exclusive; a researcher might combine them in some clever way to achieve AI goals,” Miller says. In the past few years, a strategy known generally as “machine learning” has become increasingly popular with AI researchers and AI programmers. The strategy is broad enough that people define the concept differently. However, the big idea is that computers are excellent at storing and accessing large collections of data, much better than humans at this particular task.

AI researchers explore the intersection of human intelligence and computer algorithms.

Machine learning uses this capacity of computers either to teach a computer how to do something or to enable the computer (in a limited sense) to teach itself.

In order to use machine learning, a programmer identifies a set of inputs (data) that the AI is supposed to process. At first, the programmer delivers one set of inputs, and “teaches” the AI what the proper response should be to that set of data. Next, the programmer delivers another set of inputs and again indicates to the program what the correct response should be to those inputs. Behind the scenes (that is, within the data structures of the program), the machine learning algorithm is establishing connections between different aspects of the data, including placing “weights” on various links between data. If all goes well, by adjusting those weights and connections during the learning process, the program can start to make increasingly accurate “guesses” about the proper response to the next input.
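To make that teaching loop concrete, here is a deliberately tiny sketch in Python. The example data, learning rate and threshold rule are invented for illustration; real machine learning systems use far larger data sets and more elaborate algorithms. A single artificial “neuron” nudges its weights whenever its guess about an example disagrees with the answer the programmer supplies.

    # A minimal sketch of supervised machine learning: one artificial "neuron"
    # (a perceptron) adjusts its weights after each example it is "taught."
    # The training examples and learning rate below are invented for illustration.

    def predict(weights, bias, inputs):
        # Guess 1 if the weighted sum of the inputs crosses zero, otherwise guess 0.
        total = bias + sum(w * x for w, x in zip(weights, inputs))
        return 1 if total > 0 else 0

    def teach(weights, bias, inputs, correct_answer, learning_rate=0.1):
        # Nudge the weights and bias toward the correct answer for one example.
        error = correct_answer - predict(weights, bias, inputs)
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        return weights, bias + learning_rate * error

    # Toy lesson: each pair is (inputs, the correct response the "teacher" supplies).
    examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

    weights, bias = [0.0, 0.0], 0.0
    for _ in range(20):                      # repeat the lesson several times
        for inputs, answer in examples:
            weights, bias = teach(weights, bias, inputs, answer)

    print([predict(weights, bias, inputs) for inputs, _ in examples])  # expect [0, 0, 0, 1]

After enough repetitions of the lesson, the adjusted weights let the program give the correct response to each set of inputs without being told the answer again.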

After training, the program’s internal representation of reality can be frozen, and then the program can give (hopefully) appropriate responses to each set of inputs presented to it. However, if the human programmer can devise a way for the machine to recognize whether its current answer is good or bad, then the machine can keep adjusting its internal states, thereby extending its learning indefinitely. Devising metrics that enable this continuous self-evaluation (such a metric is sometimes called a “figure of merit”) is a lively area of AI research. One benefit of allowing the program to keep learning is the potential for increasingly helpful responses, as the program adjusts its answers to match changing circumstances. A potential downside of continuous learning is that the program might wander off into giving unhelpful, or even harmful, responses when it is no longer under direct human control.
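As a rough illustration of the difference between a frozen program and one that keeps learning, the following sketch (again in Python, with made-up numbers) treats the “model” as nothing more than a running estimate of a changing quantity and uses the size of its latest error as a stand-in figure of merit; it is not drawn from any real AI system.

    # A sketch contrasting a frozen program with one that keeps learning.
    # The "model" here is just a running estimate of a changing quantity, and the
    # figure of merit is the size of its most recent error; both are invented for
    # illustration, not drawn from any real AI system.

    def run(observations, keep_learning=True, learning_rate=0.2):
        estimate = observations[0]            # internal state learned during "training"
        for actual in observations[1:]:
            error = actual - estimate         # figure of merit: how wrong was the last answer?
            if keep_learning:
                estimate += learning_rate * error   # self-adjustment, with no human involved
            yield round(estimate, 2)

    # The circumstances change partway through this invented data stream.
    data = [10, 10, 11, 10, 20, 21, 20, 22, 21]

    print(list(run(data, keep_learning=False)))   # the frozen program never adapts
    print(list(run(data, keep_learning=True)))    # the continual learner drifts toward the new values

The frozen version keeps giving its original answer after circumstances change; the continual learner drifts toward the new values on its own, which is exactly the mixed blessing described above.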

A good machine learning program can achieve impressive results by substituting brute-force computer processing for complex, intricate programming by humans. In the recent past, computers have defeated chess grandmasters and the highest-ranked human players of Go, a game in which humans thought they would remain superior to computers. These recent AI programs used machine learning, among other strategies, to defeat humans at these games.

Machine learning has led to other advances in AI. For example, algorithms in driverless cars recognize dangerous situations and quickly deliver appropriate responses to different traffic configurations; AI programs diagnose human diseases given inputs about a person’s vital signs and test results; and other machines identify, and in some cases fix, computer malfunctions. Machine learning algorithms help computers participate in written and oral conversations. And machine learning predictions about changes in financial markets are helping investment professionals to cash in.

Machine learning is not without its failures, some of them spectacularly public. For example, in 2016 Microsoft placed a chatbot named “Tay” onto the Internet, inviting the public to teach Tay to chat by interacting with the bot via text messages. Unfortunately, some people who interacted with Tay taught it to behave badly; Tay sent out sexist and racist messages. Soon, Tay was taken off the Web, and Microsoft apologized.

“The Tay case dramatically illustrated an important ethical (and perhaps legal) issue that has to do with AI in general, and with machine learning in particular: when a machine becomes so sophisticated that it can operate largely without direct human supervision, who then is responsible for the behavior of that machine? Is the original programmer responsible for any subsequent behaviors? Is whoever teaches the computer also accountable? At some point, does the machine itself bear some responsibility for its own actions?” Miller asks.

There are no authoritative, final answers to many ethics questions about AI machines, including Web chatbots, physical robots and phone systems, all of which can now interact with humans in ways that used to be confined to human-to-human communication. Are we entering a world where a new species of entities is joining human society? Some scholars (both in computer science and philosophy) think that machines will never become peers or rivals of humans; other scholars are convinced that such a thing is not only possible but inevitable. Still others warn that it may be possible, but that we should not let it happen. As these debates rage on, AI machines are becoming ubiquitous: on our smartphones, as we check out of the supermarket, as we surf the Web and when we apply for a credit card, machine learning computers are making decisions that affect us. As far as we know, humans are still in charge; however, some people who have carefully studied the past and future of AI are worried.


I am one of those ‘worried’ people, but I am also cautiously optimistic. I think we should be intentional about how AI is introduced to our world.


“I am one of those ‘worried’ people, but I am also cautiously optimistic,” Miller shares. “I think we should be intentional about how AI is introduced to our world. We should remember that computers should be evaluated on their contribution to humanity, not vice versa. When a company introduces a new technology, we should feel free to say ‘no, thank you’ even if the technology becomes fashionable, if we think its potential harms outweigh its advertised benefits. And we should encourage politicians to impose reasonable regulations on technology companies (which at the moment are largely unregulated). By moving forward with caution, perhaps we can live harmoniously with these increasingly sophisticated AI artifacts. Let’s be careful out there!”

STEM EXPERT SPOTLIGHT

Keith W. Miller, Ph.D., is the University of Missouri–St. Louis Orthwein Endowed Professor for Lifelong Learning in the Sciences. He has a B.S. in Education, an M.S. in Mathematics, and a Ph.D. in Computer Science. Dr. Miller is a professor both in the Computer Science Department and in the College of Education. The Saint Louis Science Center is his community partner, and he can often be found staffing a table at the Science Center’s First Friday events. Dr. Miller’s research interests include computer ethics, online learning and software testing. For more details about Dr. Miller, please see
learnserver.net/faculty/keithmiller.

KEITH’S FAVORITE AI MOVIES
(in chronological order)

1 – METROPOLIS (1927)
2 – 2001: A SPACE ODYSSEY (1968)
3 – WESTWORLD (1973)
4 – THE TERMINATOR (1984)
5 – THE MATRIX (1999)
6 – BICENTENNIAL MAN (1999)
7 – A.I. ARTIFICIAL INTELLIGENCE (2001)
8 – I, ROBOT (2004)
9 – WALL-E (2008)
10 – ROBOT & FRANK (2012)
11 – HER (2013)
12 – EX MACHINA (2014)