AI Weapons “Absolutely Will” Make Deadly Decisions, Ex-General Warns


Pentagon software chief Nicolas Chaillan abruptly resigned last month over fears that the US military is “15 to 20 years” behind China on cyber warfare and artificial intelligence, he told the Financial Times.

The warning marks the latest sign of contention within the US military over how to prepare for what former Google executive Kai-Fu Lee calls the “third revolution” in warfare, after gunpowder and nuclear weapons.

In a new interview, retired General Stanley McChrystal – who led coalition forces in Afghanistan for two years and now runs a consulting firm called the McChrystal Group – said artificial intelligence will inevitably come to make deadly decisions on the battlefield. He acknowledged, however, the “frightening” risks of potential malfunction or error.

“People say, ‘We will never give control of deadly strikes to artificial intelligence,’” says McChrystal, who recently co-authored a book called “Risk: A User’s Guide.” “That’s wrong. We absolutely will.”

“Because at some point you can’t respond quickly enough unless you do,” he adds. “A hypersonic missile heading toward a US aircraft carrier – you don’t have time for individuals to weigh in, you don’t have time for senior leaders to be in the decision loop, or you won’t be able to engage the missile.”

A ban on autonomous weapons has received support from 30 countries, although an in-depth report commissioned by Congress advised the United States to oppose such a ban because it could prevent the country from using weapons already in its possession.

In 2015, prominent tech figures like Tesla (TSLA) CEO Elon Musk and Apple (AAPL) co-founder Steve Wozniak, along with thousands of AI researchers, signed an open letter calling for a ban on such weapons.

President Joe Biden, speaking at a summit of US and EU leaders in February, called for international collaboration to “shape the rules that will govern the advance of technology and norms of behavior in cyberspace, artificial intelligence, biotechnology so that they are used to lift people up, not used to hold them down.”

The increasingly accelerated pace of war will force U.S. military officers to cede decision-making power to artificial intelligence, McChrystal said. But that comes with risks, he noted.

“You created technology, you put processes in place to make it work, but then, to run at the speed of war, you turn it on and trust it,” he says.

“It can be quite scary, especially if there is the potential for malfunction or spoofing or any of those other things,” he adds.

Soldiers surround a Titan Strike unmanned ground vehicle. (Photo by Ben Birchall / PA Images via Getty Images)

McChrystal, who graduated from the US Military Academy at West Point in 1976, had a 34-year military career that included a stint as commander of US Special Forces and, ultimately, a two-year term as commander of coalition forces in Afghanistan that ended in 2010.

Then-President Barack Obama accepted McChrystal’s resignation days after a Rolling Stone article in which McChrystal and his aides criticized senior administration officials.

Speaking to Yahoo Finance, McChrystal cautioned more broadly against the power AI systems assume when organizations do not fully understand their capabilities.

“It’s difficult to have a full understanding, in a modern organization today, of which decisions are actually made algorithmically and which are made by people,” he says.

“When you don’t have that, I would argue you run the risk of not really understanding how in control of your organization you are,” he adds.
