All roads lead to AGI, or do they?


All Roads Lead to Rome: The Machine Learning Job Market in 2022 is a self-congratulatory article by Eric Jang, the new VP of AI at Halodi Robotics. The article boastfully recounts Jang’s lucrative job search, which he presents as market research.[1] However, what caught my attention was not the dubious anecdotal market analysis (which includes many unverified claims and suppositions) but Jang’s assertion that every successful technology company will be an AGI company and require an “AGI strategy.”

The term AGI was popularized only recently by Shane Legg and fellow researchers Ben Goertzel and Cassio Pennachin.[2] In 2007, Goertzel and Pennachin edited a book titled Artificial General Intelligence, which called for stronger adherence to the original vision of AI, a vision that is not (as Jang suggests) merely the “means to make… software more adaptive and generally useful.”[3] According to Goertzel and Pennachin, AGI possesses “self-understanding and autonomous self-control,” along with the “ability to solve a variety of complex problems in a variety of contexts and to learn to solve new problems that they didn’t know about at the time of their creation.” In other words, AGI is a blank-slate problem solver: knowledge of a problem is independent of any strategy to solve it, and any goal is understood and shared by the solution. This is a tall order for applied problem-solving, since your solution does not need to know that it is solving a problem, but you do.

It would be revisionist to suggest that Legg, Goertzel, and Pennachin promoted the name to deal with the vacuous nature of contemporary “AI,” which means almost anything to anyone. However, today “AGI” is used more and more by originalists to distinguish their work from the noisy contemporary “AI” label. Over the same period, “AGI” has gone from a fringe idea promoted by unknown researchers to something that people, surprisingly, use to make career decisions and write corporate mission statements.[4], [5]

Years before Goertzel edited Artificial General Intelligence, his company, Intelligenesis, and its product, Webmind, were portrayed as rearing a “baby” on the internet. The media declared the baby “dumb,” but with a “brilliant future.”[6]

The Australian Financial Review hailed Webmind as the end of “mankind’s reign as the sole form of reasoning and intelligence on Earth.”[7] The Wall Street Journal described Goertzel’s disembodied “global brain” as an “emerging network of transhuman intelligence” that would solve all our problems. All the while, Intelligenesis was going bankrupt because of its solve-no-problem business model.

Business-minded technical leaders may recognize the conspicuous reason why a solve-no-problem business model might fail. AI-only companies fail because they incorrectly assume that technology transfer moves along a single dimension, from basic research to applied research to deployment, where one could create a solution today and solve an unknown problem tomorrow. Specifically, Webmind failed because it sought foundational knowledge to solve the basic problem of general intelligence, assuming that this would solve other problems and magically produce some financial windfall. However, this type of all-or-nothing approach is ineffective, and most companies cannot survive such a feast-or-famine strategy.[8] In retrospect, Goertzel wrote in a business postmortem that the goal of “creating a thinking machine, and then commercializing it” should have been “laughed out of any conversation with any serious businessperson.” Luckily for AI-only startups, serious businesspeople are still in short supply.[9], [10]

This culture is a kind of Gresham’s Law for research. Instead of an economic principle stating that “bad money drives out good,” it is a research principle in which basic research is seen as contaminated by premature exposure to any discussion of real-world customers or applications.[11] This methodology means that any practical problem of societal importance has to be ignored.[12] C.P. Snow describes the two cultures of research and how researchers performing basic research often “pride themselves that their work could not in any conceivable circumstances have any practical use.”[13] Jang highlights this culture in All Roads Lead to Rome when he explains why he is uninterested in starting his own company: he is “more interested in solving AGI than solving customer problems.” This seems to be the new culture of Silicon Valley, where some of the brightest people would rather work on some defunct theory of the mind than solve a real problem and create real value for customers.

The practical problem researchers face is that interpretation is required the moment you start to solve a problem. When the interpretation of a problem begins, researchers are no longer solving intelligence or designing solutions to solve new problems. Instead, they problem-solve and collect problem-specific information rather than discover foundational knowledge related to intelligence. The AI pioneer Marvin Minsky recalled his approach to AI in a 1981 New Yorker profile: “I mustn’t tell the machine exactly what to do. That would eliminate the problem.”[14] Eliminating the problem is a problem in AI research, which is also a problem for anyone seriously considering an AGI strategy.

The aspiration to solve everything instead of something is important to friends of AGI. According to a 2017 blog post co-written by Demis Hassabis, the cofounder and CEO of DeepMind, the company would no longer focus on winning at Go and would instead focus on “developing advanced general” solutions. Hassabis adds that general solutions “could one day help scientists tackle some of our most complex problems such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials.”[15]

This yearning to do more than win board games is commendable. However, Hassabis is not interested in solving any one of these problems (e.g., finding new cures for diseases, reducing energy consumption,[16] or inventing new materials) but in solving all of them. Solving a problem requires eliminating it by way of problem framing, domain knowledge, designing and running experiments, analyzing data, collaborating with peers, and generally being responsible to regulators, customers, and patients. Solving every problem requires none of that. That is why solving all problems rather than a single problem is appealing: everyone involved is safe from action. The pursuit of ignoring problems is not “infinite courage,” as Jang states in his article. Isolating oneself from the cares of the world and everyday life in favor of personal esoteric pursuits is sanctimonious.

Remember, basic research does not care about the societal or economic benefits that generally result from solving problems. It seeks epistemological ends.[17] For example, DeepMind recently released Gato, which the company describes as a “general” solution. Gato can perform more than six hundred different tasks, including playing video games, organizing objects, captioning pictures, and chatting. One DeepMind researcher even claimed that, for AGI, “The game is over!” However, none of the six hundred tasks has anything to do with curing disease, reducing energy consumption, or inventing new materials. Even if we accept Gato as general, and not some ostentatious multi-functional solution (which it is), DeepMind has failed to solve any of the problems it declared important for a general solution just five years ago. The game they are playing has nothing to do with private or public value creation because no problem is important enough to solve.[18]

All Roads Lead to Rome is a confused idiom in this context, as it incorrectly suggests that all “AI” activity leads to a single amorphous place. Moreover, the claim that successful businesses require an AGI strategy is entirely backward. An AGI strategy requires one to ignore problems and seek a general solution that may someday solve them. In the meantime, our problems grow larger, as all problems do when they are ignored. Ill-defined goals for solutions that “could one day solve a problem” have low economic value. Business-minded technical leaders understand that every problem has one more solution that nobody has found yet. Technical leadership is often about solving a problem many times, or solving old problems in new ways, not seeking one solution to all problems. Businesses cannot aspire to solve everything instead of something.

About the author

Rich Heimann

Rich Heimann is Chief AI Officer at Cybraics Inc., a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book that explores what AI is, what it is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving. Find out more about his book here.


[1] Eric Jang, “All Roads Lead to Rome: The Machine Learning Job Market in 2022.”

[2] History of the name: https://ai.stackexchange.com/questions/20231/who-first-coined-the-term-artificial-general-intelligence.

[3] Ben Goertzel and Cassio Pennachin, eds., Artificial General Intelligence (New York: Springer, 2007).

[4] OpenAI’s mission statement is to “build safe and beneficial AGI.”

[5] DeepMind’s mission statement reads “Our long-term aim is to solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence (AGI).”

[6] Ben Goertzel, “Waking Up from the Economy of Dreams,” April 9, 2001.

[7] M. Cave, “One Dumb Baby with a Brilliant Future,” Australian Financial Review, May 11, 2000.

[8] S. Shead, “Alphabet’s DeepMind Losses Soared to $570 Million in 2018,” Forbes, August 7, 2019; Amy Thomson, “Google Waives $1.5 Billion DeepMind Loan as AI Costs Mount,” Bloomberg, December 17, 2020.

[9] Ben Goertzel, “Waking Up from the Economy of Dreams,” April 9, 2001.

[10] Goertzel’s new company, TrueAGI, will focus on the enterprise to offer the “mind-as-a-service.”

[11] Donald E. Stokes, “Completing the Bush Model: Pasteur’s Quadrant,” p. 3.

[12] In a 2012 paper titled “Machine Learning that Matters,” Kiri Wagstaff of the Jet Propulsion Laboratory explains how today’s machine learning research has lost touch with problems important to science and society.

[13] C.P. Snow, The Two Cultures: And a Second Look: An Expanded Version of the Two Cultures and the Scientific Revolution, 2nd ed. (Cambridge: Cambridge University Press, 1964), p. 32.

[14] Jeremy Bernstein, “Profiles: Marvin Minsky,” The New Yorker, December 14, 1981, p. 73.

[15] Demis Hassabis and David Silver, “AlphaGo’s Next Move,” DeepMind, May 27, 2017.

[16] To be fair, DeepMind has made some impact on its parent company. See Richard Evans and Jim Gao, “DeepMind AI Reduces Google Data Centre Cooling Bill by 40%,” DeepMind, July 20, 2016.

[17] Jang writes, “There is a tendency for robotics companies to start off with the mission of general-purpose robots and then rapidly specialize into something boring as the bean counters get impatient.”

[18] Hannah Kerner, “Too Many AI Researchers Think Real-World Problems Are Not Relevant,” MIT Technology Review, September 9, 2020.
