September 11, 2020
AI Makes Military Breakthrough
by dhiram
One of the biggest fears that human beings have when it comes to artificial intelligence is that it will eventually rise up and destroy us. The late, great Stephen Hawking had serious reservations about AI being allowed to develop unchecked. Elon Musk is so convinced that rogue AI will eventually turn on the people who created it that he’s busy trying to create machines that mesh with the human mind and elevate the brain’s processing capacity. You’ll rarely see a day go by without somebody, somewhere, writing an alarmist headline about artificial intelligence. There’s usually little or no basis for the sensationalist headlines that are written. As of this week, we might not be able to say that for much longer.
Until the end of August 2020, the biggest ‘breakthrough moment’ AI had enjoyed in the preceding twelve months probably happened in the UK, where artificial intelligence has been introduced to online slots games to monitor the gambling habits of players. If a player places a hasty-looking, unwise bet on an online slots terminal shortly after a big loss, the AI will step in and ask them to pause, sense-check their bet, and reconsider. If the AI becomes especially concerned, it can lock players out of their online slots website or terminal entirely for several minutes. In every case, the aim is to ensure that the people who play online slots are doing so sensibly and safely. The breakthrough announced at the end of August, however, suggests that AI is now capable of doing the precise opposite of looking after human welfare.
In mid-August, the Pentagon simulated an in-air battle between two F-16 fighter jets in the skies above the US military headquarters. One was flown by a trained Air Force fighter pilot, known only by his call sign, ‘Banger.’ The other was flown by the latest and most advanced military artificial intelligence. The simulation was programmed to end when one of the pilots had taken significant damage from the other. The human pilot did no damage to his AI opponent whatsoever; the AI program peppered the human-flown plane with its cannon no fewer than five times. The contest wasn’t even close, and after it was over, ‘Banger’ complained that the AI used tactics and techniques that weren’t covered in training. In other words, the AI trounced its opponent, and it did so by out-thinking and out-flying him.
The simulation was devised, planned, and executed by the US military’s Defense Advanced Research Projects Agency (DARPA), and the agency was said to be extremely happy with the outcome. A 5-0 victory over a highly trained and experienced pilot was a better result than they’d been hoping for. Only one year ago, DARPA said that no AI software in the world could outperform a human in precisely this scenario. In 2020, this new AI pilot created by Heron Systems has proven unequivocally that what they said twelve months ago no longer holds true. AI can not only beat a human pilot; it can do so without its own plane taking a scratch. This is only the first of many tests that will have to be planned and passed before the military starts replacing people with machines at the controls of its planes, but it’s a significant moment in military AI development. It’s also one that will have conspiracy theorists petrified.
AI isn’t subject to some of the limitations inside a plane that a human being is. AI isn’t hampered by exposure to extreme g-force. AI doesn’t feel the physical impact of high-velocity maneuvers, twists, and turns in the air. AI doesn’t flinch or panic if the plane it’s flying takes a hit. Taken together, this means that an AI pilot can pull off moves that a human pilot can’t, and will therefore always have an advantage in aerial combat. That’s a big plus for the military forces of whichever nation owns that AI, so long as the AI remains obedient to human instructions. It’s an enormous concern if AI one day decides that it no longer wants to take human input and starts flying planes for its own purposes. That might seem far-fetched as an idea right now, but the Heron AI pilot is still learning – and it’s learning fast.
Initially, the Heron AI was tasked only with not flying its plane into the ground. It’s equipped with deep reinforcement learning, which rewards behaviors that lead to rapid success and punishes those that lead to failure (‘rewards’ being a less-than-ideal explanation, but a close enough analog for its purpose). Before it was put into battle against a human, the software had undertaken more than four billion simulated flights, most of which started and ended within a matter of milliseconds. The experiment has clearly proven that over those four billion flights, it learned every move that a human could make with an F-16, and some that a human can’t. It’s possible that a different human pilot may be able to defeat it, but even if that were to happen, the Heron AI would learn from the defeat and come up with ways to prevent it from occurring again. Beating it only makes it stronger in the long run.
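The reward-and-punishment loop described above can be sketched with a toy example. This is not Heron Systems’ actual code – the altitude grid, reward values, and hyperparameters below are illustrative assumptions only – but it shows tabular Q-learning on the simplest version of the ‘don’t fly into the ground’ task:

```python
import random

# Hypothetical toy environment: discrete altitude bands 0-4, where 0 is
# the ground. The agent can descend, hold, or climb each step.
ALTITUDES = range(5)
ACTIONS = (-1, 0, +1)

def step(alt, action):
    """Apply an action; hitting the ground ends the episode with a penalty."""
    new_alt = max(0, min(4, alt + action))
    if new_alt == 0:
        return new_alt, -10.0, True   # crash: large punishment, episode over
    return new_alt, 1.0, False        # still airborne: small reward

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: expected long-run reward for each (altitude, action) pair.
    q = {(a, act): 0.0 for a in ALTITUDES for act in ACTIONS}
    for _ in range(episodes):
        alt = 3
        for _ in range(20):                       # cap episode length
            if rng.random() < epsilon:            # occasionally explore
                act = rng.choice(ACTIONS)
            else:                                 # otherwise exploit best known
                act = max(ACTIONS, key=lambda x: q[(alt, x)])
            new_alt, reward, done = step(alt, act)
            best_next = max(q[(new_alt, x)] for x in ACTIONS)
            # Standard Q-learning update: nudge the estimate toward
            # (immediate reward + discounted best future value).
            q[(alt, act)] += alpha * (reward + gamma * best_next - q[(alt, act)])
            alt = new_alt
            if done:
                break
    return q

q = train()
# At the lowest flyable altitude, the trained policy should value
# climbing above descending into the ground.
print(q[(1, +1)] > q[(1, -1)])   # prints: True
```

The ‘billions of flights’ in the article correspond to the episode loop here: each episode is one short simulated flight, and the Q-table (a stand-in for Heron’s deep neural network) accumulates what was learned across all of them.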
The project’s end goal is to allow the military to one day create a fleet of autonomous planes that don’t require pilots and will therefore be far cheaper to build. The software still has much to learn – dogfighting is just one tiny aspect of flying a military plane, and so far the Heron AI knows nothing about how to deal with long-range missiles, or how to take on more than one opponent at once. It’s also not good at working out how to behave co-operatively with a ‘friendly’ pilot to co-ordinate an attack. It will be one day, though. As DARPA said, a year ago the idea of an AI pilot beating a human one was unthinkable. Today it’s a reality. This time next year, it might be able to perform any task a human pilot is capable of, and outperform them at all of them. Depending on where your perspective lies, you’ll find that idea either fascinating or frightening in the extreme.