Sharpening The AI Problem
Artificial general intelligence will be humanity’s greatest achievement. But researchers must first agree on the problem they’re solving.
In 2017, the cognitive scientist and entrepreneur Gary Marcus argued that AGI needs a moonshot. In an interview with Alice Lloyd George, he said, "Let’s have an international consortium kind of like we had for CERN, the large hadron collider. That’s seven billion dollars. What if you had $7 billion dollars that was carefully orchestrated towards a common goal."
Marcus felt that the political climate of the time made such a collective effort unlikely. But the moonshot analogy for AGI has taken hold in the private sector and captured the public imagination. In a 2017 talk, the CEO and co-founder of DeepMind, Demis Hassabis, invoked the moonshot analogy to describe his company as “a kind of Apollo program effort for artificial intelligence.” Hassabis unpacked his vision with pitch-deck efficiency: first they’ll understand human intelligence, then they’ll recreate it artificially. AGI will thereafter solve everything else.
A similar moonshot vision was expressed in the recent $1-billion partnership between OpenAI and Microsoft, a competitive response to Google and Amazon. As reported by Cade Metz, “Eventually, [Sam] Altman and his colleagues believe, they can build A.G.I. in a similar way [through reinforcement learning and enormous amounts of raw computing power]. If they can gather enough data to describe everything humans deal with on a daily basis — and if they have enough computing power to analyze all that data — they believe they can rebuild human intelligence.”
These astronomical investments and their broad similarity in goals suggest moonshots might actually deliver the goods. Unfortunately, it’s the “carefully orchestrated” part that defines a successful moonshot. In this post, I’m going to argue that the veneer of a common goal masks a rot at the foundation: There’s no real consensus on the AI problem.
In fact, profound disagreement might be the most succinct characterization of this chapter in AI. In her review of two collections of essays gathered from 45 experts, Kelsey Piper observed, “Almost all perceive something momentous on the horizon. But they differ in trying to describe what about it is momentous — and they disagree profoundly on whether it should give us pause.” In one essay, Neil Gershenfeld compared the discussions about AI to a mental health condition. He wrote, “They could better be described as manic-depressive: depending on how you count, we’re now in the fifth boom-and-bust cycle.”
Whenever colleagues disagree — profoundly disagree — we need to take a hard look at the problem they’re trying to solve. There’s a romantic view that science thrives in environments of diversity and conflict. And yes, diversity fuels inspired solutions. But disagreement on the problem side can fatally undermine a project. Imagine a military general who can’t identify the enemy. Or a CEO who can’t decide which market to tackle.
Or a community of researchers building knowledge-creating machines, who can’t agree on how knowledge is created.
What is The AI Problem?
It really depends on who you ask. Some researchers are fairly sanguine about AI. They view the seasons of AI winter and renewal as markers of progress: a wholly natural cycle in which expectations become inflated and thoughtful people make sober corrections. The AI problem is more a reflection of human foibles, speculation, and hype than of the substance of the project itself. The AI problem? What problem?
Others couldn’t disagree more. They feel AGI carries such dire repercussions that we should consider it inevitable and plan accordingly. Conceived in these terms, the AI problem splinters into an unbounded number of possible consequences, ranging from widespread economic upheaval to the extermination of humanity entirely. The AI problem becomes a veritable buffet of terror.
Whether cheery or panic-stricken, both camps have let the notion of the AI problem effectively disappear. In most discussions, the AI problem is instead framed in terms of its solution. The goal of AI is to achieve human-level intelligence in a machine. The AI problem is how to achieve human-level intelligence in a machine. When conflated with the goal, the AI problem is emptied of meaning. And with such broad consensus on the AI goal, the AI problem is all but ignored.
When The AI Problem was still a problem
It is therefore instructive to consider how the AI problem was originally framed, and why. The mathematical foundations for computer science were established in the 1930s, through the work of founders such as Alan Turing, Alonzo Church, and Kurt Gödel. What did the AI problem look like in these formative stages? “Computers” then referred to humans who performed calculations, not to the machines we use today. Solutions were not even in the realm of the collective imagination. Decades passed before AI crystallized as an academic discipline in its own right.
As explained by the computer scientist Scott Aaronson, the work of the founders was essentially and necessarily philosophical. Here, philosophical refers to the precise and careful sharpening of problems. Aaronson wrote, “Indeed clarifying philosophical issues was the original point of their work; the technological payoffs only came later!” He couldn’t imagine a serious discussion about the prospects for artificial intelligence that was uninformed by this revolution in human knowledge three-quarters of a century ago.
The AI problem was about making the ground for solutions fertile by making the problem of general intelligence precise and, as importantly, shared. Problems are sharpened, resources are marshalled, and technological payoffs follow. Science, particularly big science, thrives in environments where problems are well formed. Sharp problems serve a practical purpose: they facilitate cooperation and unify disparate efforts.
The following vignettes on the nature of the AI problem demonstrate deep disagreements across the AI community, and how this disunity can pull projects apart at the seams.
A problem of integration
In 2009, the community was still throwing off the slush of the last AI winter. The computer scientist Pedro Domingos captured the sentiment at the time: “To some, AI is the manifest destiny of computer science. To others, it’s a failure: clearly, the AI problem is nowhere near being solved. Why? For the most part, the answer is simple: no one is really trying to solve it.”
Domingos presented the “famous” AI problem in its modern, goal-oriented form: to create an intelligence in machines that equaled or surpassed that of humans. He lamented a retreat from that grand project to narrow projects in natural-language understanding and classification. In an effort to become relevant, AI became small.
Domingos anticipated a return to the AI problem in all its original splendor. It would ride on exponentially growing computing power. “Within a decade or so,” he wrote, “computers will surpass the computing power of the human brain.” But there’s something missing, he noted. “The key is finding the right language in which to formulate and solve problems,” a language that combines logic and probability, to manage the complexity and uncertainty of the real world. He concluded, “this is how we’re ultimately going to solve AI: through the interplay between addressing real problems and inventing a language that makes them simpler.”
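The general idea behind such a language can be shown with a toy sketch, in the spirit of weighted-logic approaches such as Domingos’s Markov logic networks. The propositions, rules, and weights below are invented for illustration and are not Domingos’s actual system: logical rules carry real-valued weights, each candidate “world” is scored by the weights of the rules it satisfies, and the scores are normalized into probabilities.

```python
# A minimal, hypothetical sketch of combining logic and probability.
# Each rule is a logical formula with a real-valued weight. A "world"
# (a truth assignment to all propositions) is scored by the total weight
# of the rules it satisfies; scores are normalized into probabilities.

from itertools import product
from math import exp

# Hypothetical soft rules over two propositions, "smokes" and "cancer".
rules = [
    (1.5, lambda w: (not w["smokes"]) or w["cancer"]),  # smokes -> cancer, weight 1.5
    (0.5, lambda w: not w["smokes"]),                    # weak prior against smoking
]

def score(world):
    """Total weight of the rules satisfied by this world."""
    return sum(weight for weight, holds in rules if holds(world))

# Enumerate all worlds and normalize exp(score) into a probability distribution.
worlds = [dict(zip(["smokes", "cancer"], vals)) for vals in product([False, True], repeat=2)]
normalizer = sum(exp(score(w)) for w in worlds)
for w in worlds:
    print(w, round(exp(score(w)) / normalizer, 3))
```

Even in this toy form, the logical rules supply structure while the weights supply uncertainty, which is the interplay Domingos has in mind.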
Domingos expanded that vision in 2015, through his popular book, The Master Algorithm. He reaffirmed the founding goal of AI, "to teach computers to do what humans currently do better, and learning is arguably the most important of those things." He cautioned that the problem won’t yield to a multitude of narrow AIs. Instead, "All knowledge—past, present, and future—can be derived from data by a single, universal learning algorithm."
This is the philosophy of empiricists. They maintain that all knowledge is based on experience (learning) derived from the senses (data). After briefly summarizing his philosophical position, Domingos presented a range of technical approaches and challenges. He cautioned, “You might think that machine learning is the final triumph of the empiricists, but the truth is more subtle, as we'll soon see.”
A problem of functional gaps
For others, machine learning is far from a final triumph, and their disagreements are anything but subtle. One of the most public examples of this debate involves Marcus and the computer scientist Yann LeCun. Through a series of papers, lectures, and tweetstorms, the two have demonstrated a deep and enduring set of differences.
These disagreements are invariably presented within the frame of technical solutions. Marcus, for example, has criticized his opponents as blinded by their instrumental bias and their focus on a familiar set of tools. He highlights an over-reliance on data, the limits of extrapolation, the challenges posed by static predictive models in dynamic environments, the narrow reach of these systems, and their limited ability to transfer knowledge.
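To make one of these criticisms concrete, here is a minimal, hypothetical sketch of the limits of extrapolation. The learner and data are invented for illustration and stand in for any purely data-driven model: a learner that simply memorizes its training examples does fine between the points it has seen, but has no grip on the underlying rule beyond them.

```python
# A toy illustration of the limits of extrapolation (hypothetical data and learner,
# not any specific system Marcus criticizes). The learner memorizes observations of
# f(x) = 2x on the range [0, 10] and predicts with the nearest training example.

def predict(train, x):
    """Return the output of the training input closest to x."""
    nearest = min(train, key=lambda xi: abs(xi - x))
    return train[nearest]

train = {x: 2 * x for x in range(0, 11)}  # observations of f(x) = 2x for x = 0..10

print(predict(train, 4.4))   # 8   -- near the true value 8.8 (interpolation)
print(predict(train, 100))   # 20  -- far from the true value 200 (extrapolation)
```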
But what’s really at the bottom of it all? Recently, Marcus summarized what most riles him. “What I hate is this: the notion that deep learning is without demonstrable limits and might, all by itself, get us to general intelligence, if we just give it a little more time and a little more data, as captured in Andrew Ng’s 2016 suggestion that AI, by which he meant mainly deep learning, would either ‘now or in the near future’ be able to do ‘any mental task’ a person could do ‘with less than one second of thought’.”
Marcus is railing against an AI problem framed in terms of integration. Integration cannot be the foundational problem if major pieces of the puzzle are missing entirely. Reflecting on a statement he made in 2012, Marcus said, “Just because you’ve built a better ladder doesn’t mean you’ve gotten to the moon. I still feel that way. I still feel like we’re actually no closer to the moon, where the moonshot is intelligence that’s really as flexible as human beings. We’re no closer to that moonshot than we were four years ago.”
A problem of creative gaps
While the camps described above frequently disagree, they are at least unified in their support of machine learning, broadly construed. But what if the true nature of the problem stands outside current solutions entirely?
Few people straddle the worlds of science and philosophy as comfortably as David Deutsch, pioneer of quantum computing and staunch advocate of Karl Popper's philosophy of critical rationalism. In a rebuke of empiricism in AI, Deutsch wrote, "I cannot think of any other significant field of knowledge in which the prevailing wisdom, not only in society at large but also among experts, is so beset with entrenched, overlapping, fundamental errors."
Deutsch maintains that machine learning is irredeemably misaligned with our best understanding of how knowledge is created. “Thinking of an AGI as a machine for translating experiences, rewards and punishments into ideas (or worse, just into behaviours) is like trying to cure infectious diseases by balancing bodily humours: futile because it is rooted in an archaic and wildly mistaken world view.” He agrees that there’s a gap to AGI beyond mere computing power. But he recoils at the suggestion that integration can overcome explanatory deficits. Instead, he sees a creative gap, the inability to create new explanations. “Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough.”
Both Domingos and Deutsch point to science as the embodiment of their philosophies, yet they each hold radically different philosophies of science itself. Domingos claimed, “Machine learning is the scientific method on steroids.” Deutsch compares the community’s faith in empiricism to pre-scientific ideas. He considers explanations essential to science, and the predominantly black-box methods of machine learning notoriously instrumental.
In his reply to Deutsch's arguments, the computer scientist Ben Goertzel pointed back to the "integration bottleneck" as the key challenge. (His other concerns, hardware limitations and minimal funding, may be somewhat dated given the enormous investments in AI, but they’re duly noted.) In Goertzel's view, what's needed is "cognitive synergy: the fitting-together of different intelligent components into an appropriate cognitive architecture, in such a way that the components richly and dynamically support and assist each other, interrelating very closely in a similar manner to the components of the brain or body and thus giving rise to appropriate emergent structures and dynamics."
You can argue that everyone is rowing towards the same goal of AGI. But if the rowers pull in opposing directions, the boat goes in circles, like the seasons of our AI winters.
Towards the greatest idea ever
Most of the hand-wringing over AGI considers the consequences of success, the repercussions that would follow the solution. But there’s a much more immediate concern: There’s no consensus on the problem of AI. There are irreconcilable differences on the most fundamental questions of how knowledge is created. There’s certainly consensus on the goal. But redefining the AI problem as the AI goal only empties it of meaning and undermines its critical importance in unifying efforts. Broad and diverse collaborations need sharp problems, and this need deepens with the complexity of the project.
For these reasons, the main impediment to AGI is philosophical, concerning the problem, not the solution. Henry Kissinger recently framed the AI problem as a philosophical vulnerability. He wrote, “The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy.” Deutsch goes further. He wrote, “What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.”
To bridge the creative gap to AGI, Deutsch argues for a new mode of thinking about the problem. He observed that breakthroughs are often accompanied by new modes of explanation. He offers the scientific revolution itself as an example, in which explanation by authority and revelation was displaced by reasoning and evidence. New modes of explanation accompanied other great breakthroughs in science as well. Consider how ideas like quantum theory and evolution changed the way we think about the phenomena they describe. One of Darwin’s critics described his “strange inversion of reasoning.” To grasp the theory, you must embrace the idea that complex entities need not be designed, which is starkly different from the mode of thinking that preceded the discovery.
Deutsch maintains that AGI is possible, and once the philosophical problem is addressed, a technical solution may follow quickly thereafter. “So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.”
With so many solutions competing for resources, so many voices vying to be heard, it’s wise to return to the principle of sharp problems. Resources and common goals are insufficient. AI began as an effort to sharpen the problem. This is the light that policy makers and investors need to guide them today.
Aaronson, S. "Why philosophers should care about computational complexity." Computability: Turing, Gödel, Church, and Beyond (2013): 261-328. https://arxiv.org/abs/1108.1791
Deutsch, D. “Creative Blocks.” (2012). https://aeon.co/essays/how-close-are-we-to-creating-artificial-intelligence
Domingos, P. “Solving AI.” (2009). https://www.technologyreview.com/s/412202/solving-ai
Domingos, P. The master algorithm: How the quest for the ultimate learning machine will remake our world. Basic Books, 2015.
George, A. L. “Discussing the limits of artificial intelligence.” (2017). https://techcrunch.com/2017/04/01/discussing-the-limits-of-artificial-intelligence
Goertzel, B. “The real reasons we don’t have AGI yet.” (2012). https://www.kurzweilai.net/the-real-reasons-we-dont-have-agi-yet
Hassabis, D. “Exploring the Frontiers of Knowledge.” (2017). https://www.youtube.com/watch?v=Ia3PywENxU8
Hodson, H. “DeepMind and Google: the battle to control artificial intelligence.” (2019). https://www.1843magazine.com/features/deepmind-and-google-the-battle-to-control-artificial-intelligence
Kissinger, H. “How the Enlightenment Ends.” (2018). https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/
Marcus, G. “The deepest problem with deep learning.” (2018). https://medium.com/@GaryMarcus/the-deepest-problem-with-deep-learning-91c5991f5695
Metz, C. “With $1 Billion From Microsoft, an A.I. Lab Wants to Mimic the Brain.” (2019). https://www.nytimes.com/2019/07/22/technology/open-ai-microsoft.html
Piper, K. “How will AI change our lives? Experts can’t agree — and that could be a problem.” (2019). https://www.vox.com/future-perfect/2019/3/2/18244299/possible-minds-architects-intelligence-ai-experts