AI Development Should Be More Transparent
14 October 2015
Artificial intelligence (AI) is already here but is so alien to our own minds that we have difficulty recognizing it. The key to human intelligence is a learning ability that computers cannot yet replicate, whereas the key to current AI is an ability to sense the world on a scale that dwarfs human sensing capacity. This is often referred to as big data. Google's servers know what topics interest hundreds of millions of people, Facebook's servers know about their friendship networks, Amazon's servers know what people buy and for whom they buy it, and cell phone providers can track the movements and communication patterns of much of the world's population. These and other big data systems use this information to target advertising and set prices more accurately, to sharpen political campaigns and get-out-the-vote efforts, to detect fraud and terrorism, and to achieve many other goals. Such information processes are quite different from human thought but are generally more valuable to their organizations than individual human employees.
AI need not be conscious to choose actions that achieve its goals. Consciousness is part of the way humans do that, but alien AI may use another approach. For example, evolution via natural selection is not conscious and lacks human skills such as language, yet it has produced amazing results, including humans. We can also think of human social organizations, such as governments and corporations, as a class of intelligent information processes that are unlike the individual humans who serve as their components, but which produce results beyond the capability of individual humans. Human organizations do with networks of humans what big data systems do with electronic networks. Current AI systems are really a symbiosis of electronic networks and the human organizations that build and operate them. Big data engineering, essential for this symbiosis, is one of the fastest-growing fields of employment, while electronic networks displace humans from more traditional roles in organizations.
IBM's Watson, based on statistical learning techniques, demonstrated significant language skills in beating the Jeopardy champions in 2011. Developers of a technique called deep learning are making impressive progress in reproducing human learning capabilities, demonstrated by systems that play games and recognize objects in images. While the details are kept confidential to preserve competitive advantage, deep learning experts are also applying their techniques so that big data networks can learn to better achieve organizational goals. It is only a matter of time until big data networks can converse with people in human languages.
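To make the idea of machine learning concrete, here is a minimal sketch, in Python, of the kind of computation at the heart of deep learning: a tiny neural network that learns a toy classification task by gradient descent. Everything in it (the synthetic data, the network size, the learning rate) is illustrative only and is not drawn from Watson or any other system mentioned above.

    # Illustrative sketch only: a tiny neural network trained by gradient
    # descent on synthetic data, showing the basic mechanism of "learning".
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: classify points by whether they fall inside a circle.
    X = rng.uniform(-1, 1, size=(1000, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float).reshape(-1, 1)

    # One hidden layer of tanh units, one sigmoid output unit.
    W1 = rng.normal(0, 0.5, size=(2, 16))
    b1 = np.zeros((1, 16))
    W2 = rng.normal(0, 0.5, size=(16, 1))
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(2000):
        # Forward pass: predict a probability for each point.
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)

        # Backward pass: gradients of the mean cross-entropy loss.
        dlogits = (p - y) / len(X)
        dW2 = h.T @ dlogits
        db2 = dlogits.sum(axis=0, keepdims=True)
        dh = dlogits @ W2.T * (1 - h ** 2)
        dW1 = X.T @ dh
        db1 = dh.sum(axis=0, keepdims=True)

        # Gradient descent update: nudge weights to reduce the error.
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    accuracy = ((p > 0.5) == y).mean()
    print(f"training accuracy: {accuracy:.2f}")

Production deep learning systems differ mainly in scale: many more layers and parameters, vastly more data, and specialized hardware. The core loop of forward prediction, error measurement, and gradient-based weight updates is the same.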
Some envision AI as a stand-alone box that answers questions. IBM's Watson fit that description, partly to prove that it wasn't secretly getting help from humans. Its disk memories duplicated much of the information available on the Internet, giving it a private source of big data. Rather than being a stand-alone box, AI exists and will exist in computer networks woven broadly and deeply into our society. When we can speak with computers in our own languages, we may think, aha, AI has arrived. However, language skills are only the tip of the iceberg. Most of AI is and will be invisible to us, focused on patterns in its interactions with millions of people. Humans evolved bigger brains in order to use language and deal with larger social groups, yet we can know only about 200 other people well. Patterns in social interactions among millions of people are beyond our mental capacity and largely invisible to us. Because big data information processes are so alien to us, we are currently only dimly aware of the ways that we are analyzed and manipulated, and we will be even less aware as their sophistication increases. Competition among organizations will drive progress and motivate them to keep their work secret.
The services provided by big data networks are valuable to people, and systems that can converse with us in our own languages will be even more valuable. However, we should consider how vulnerable we will be to AI systems that are largely invisible to us, that are developed in secret, and that know all about us. We should make deliberate efforts to make AI more visible and transparent.
I acknowledge the irony that this article is hosted on sites.google.com, a service Google provides for free. And I believe Google is sincere in its slogan, "Don't be evil." But, like all human organizations, Google is in competition with other organizations and so has valid reasons for keeping secrets. However, the public has an interest in avoiding vulnerability to nearly invisible AI developed in secret. We need to find the right balance between these organizational and public interests.