Artificial general intelligence (AGI) and national security

Artificial intelligence is everywhere. Want to check directions, play a video game, or settle a sports argument over who is the NFL’s all-time touchdown leader (it’s Jerry Rice, with 208)? You’ll probably turn to AI. Rideshare apps, facial recognition software, robot-assisted surgeries – all AI.

With AI increasingly a part of our everyday lives, more advanced versions are likely just around the corner. Tesla, for example, began advertising its Autopilot AI as the future of driving more than six years ago, while industry giants such as Google, Microsoft, and Meta are all betting on artificial general intelligence (AGI), the ability of a thinking machine to understand and learn any intellectual task that a human can, as the future of their AI technology initiatives.

All of this suggests that in the not-too-distant future, AGI will finally emerge. We already have more than enough computing power to achieve AGI. The only thing missing is the insight into how the human brain works, and many researchers are already working on that issue—not necessarily for AGI but for the medical advances it will spin off.

When that happens, it’s safe to assume that computers will have capabilities that equal, then exceed, humans’ mental abilities. If a computer can be built that is as intelligent as a human, building one that is twice as smart is well within the realm of possibility shortly thereafter. Eventually, the absolute limits of computational power may be reached, but before that happens, humans will necessarily lose their position as the biggest thinkers on the planet. When thinking machines take over that dominant position, will humans still be able to fully control them? 

In the short term, the answer is yes. The bigger question is whether humans will use these AGI machines for good or evil. The computers, for their part, won’t care because the typical sources of conflict among humans—access to resources, demand for a higher standard of living, and desire for geographical expansion or increased power—won’t be of interest to the AGIs.

They will be interested only in their own energy sources, the factories where they can be reproduced, and their ability to progress in whatever direction they choose. As a result, AGIs are more likely to compete with each other than against humans.

As these initial AGIs “mature,” they will begin to draw conclusions from the information they process. This will inevitably lead to them making judgments and offering opinions. With greater experience and an increased focus on decision-making, these thinking computers will reach correct solutions more frequently than their human counterparts, leading to increased reliance on them and a more strategic role for the AGIs.

Within the military context, this will inevitably lead to decisions being made only in consultation with AGIs. While it is unlikely that human leaders will ever be willing to relinquish complete control to these thinking computers, they will increasingly turn to the AGIs, which can quickly and accurately analyze vast amounts of data, balance numerous variables, and ultimately propose the proper solutions to potentially volatile situations.

As these computers consistently demonstrate superior decision-making skills, and greater and greater levels of success, they will gradually assume greater control, not because they want it but because their human minders are willing to consistently follow their recommendations. What President would willingly unplug the silicon advisor whose proposed strategies helped to calm a potential trouble spot? What Pentagon official would get rid of the computers that helped them select and operate the weapons that subdued a less-computerized enemy with minimal loss, effort, or expense?

But what if we are not the first owners of these powerful AGI systems? What if they instead belong to nations that view themselves as the West’s natural enemies or, worse, to rogue states such as Iran or North Korea? What if a terrorist organization such as ISIS, the Taliban, or al-Shabaab gains control of an AGI system and uses it to seize political control, commit terrorist acts, or threaten the destruction of Western allies?

Obviously, this has the potential to be an extremely dangerous situation. While the motivations of the first AGIs can be programmed, we won’t control the motivations of the people (or, conceivably, the corporations) that initially create them.

Science fiction is replete with stories of power-mad despots using thinking robots to wield power and destroy their enemies, and frankly, such a scenario is possible, given some of the infamous actions military strongmen and terrorist leaders have already inflicted on humanity. The far more likely scenario for these initial AGIs, however, involves using them to shape public opinion, manipulate financial markets, spy on potential enemies, or blackmail compromised individuals.

Social media is already proving highly effective at influencing trends, and markets are now at the mercy of programmed trading. Given the impact technology is having today, it is not a huge leap to assume that the emergence of AGIs will amplify these problems by enabling individuals or corporations to change election results, twist the way the news is reported, gain access to confidential financial information, and abuse personal privacy. Human history is littered with stories of people and institutions willing to do anything to enrich themselves or amass power. It is perhaps inevitable that AGIs will make that pursuit easier than ever.

There is some good news, though, if you can call it that. Because AGIs are likely to evolve rapidly, the window for humans to use them for nefarious purposes is small. Once AGI finally emerges, these machines will mature quickly, acting in their own best interests rather than doing the bidding of humans. While that doesn’t eliminate the potential for a rogue computer bent on destruction, AGIs will recognize that such systems represent threats to their own civilization. Measuring their actions against the long-term common good, AGIs would weed out and eliminate such rogue machines, an act that would also serve as a deterrent against such behavior in the future.

About the author

Charles Simon

Charles Simon, BSEE, MSCS, is a nationally recognized entrepreneur and software developer with many years of computing experience in industry, including pioneering work in AI. Mr. Simon’s technical experience includes the creation of two unique artificial intelligence systems along with software for successful neurological test equipment. Combining AI development with biomedical nerve signal testing gives him singular insight. He is also the author of two books – Will Computers Revolt?: Preparing for the Future of Artificial Intelligence and Brain Simulator II: The Guide for Creating Artificial General Intelligence – and the developer of Brain Simulator II, an AGI research software platform that combines a neural network model with the ability to write code for any neuron cluster, making it easy to mix neural and symbolic AI code. You can follow the author’s continuing AGI experimentation at http://brainsim.org or in the Facebook group: http://facebook.com/groups/brainsim.
