AI Slays Top F-16 Pilot In DARPA Dogfight Simulation

WASHINGTON: In a 5-0 sweep, an AI ‘pilot’ developed by Heron Systems beat one of the Air Force’s top F-16 fighter pilots in DARPA’s simulated aerial dogfight contest today. “It’s a giant leap,” said DARPA’s Justin (call sign “Glock”) Mock, who served as a commentator on the trials.

AI still has a long way to go before Air Force pilots would be ready to hand over the stick to an artificial intelligence during combat, DARPA officials said during today’s live broadcast of the AlphaDogfight trials. But the three-day trials showed that an AI system can credibly maneuver an aircraft in a simple, one-on-one combat scenario and fire its forward guns in a classic, WWII-style dogfight. Still, they said, it was an impressive showing by an AI agent after only a year of development. (As I reported earlier this week, the program began back in September last year with eight teams developing their respective AIs.)

Heron, a small, female- and minority-owned company with offices in Maryland and Virginia, builds artificial intelligence agents and is also a player in DARPA’s Gamebreaker effort to explore tactics for disrupting enemy strategies using real-world games as platforms. The company beat the seven other teams, including one led by defense giant Lockheed Martin, which came in second in the AlphaDogfight “semi-finals” that pitted the AI pilots against each other this morning.

Heron’s team did a live-stream Q&A on YouTube. “Even a week before Trial 1, we had agents that were not very good at flying at all. We really turned it around, and since then we’ve been really number one,” said Ben Bell, Heron’s co-lead for the project. The team intends to publish some of the details about its reinforcement learning process for the AI later this year, he said.
The trials were designed as a risk-reduction effort for DARPA’s Air Combat Evolution (ACE) program to flesh out how human and machine pilots share operational control of a fighter jet to maximize its chances of mission success. The overarching ACE concept is aimed at allowing the pilot to shift “from single platform operator to mission commander” in charge not just of flying their own aircraft but managing teams of drones slaved to their fighter jet. “ACE aims to deliver a capability that enables a pilot to attend to a broader, more global air command mission while their aircraft and teamed unmanned systems are engaged in individual tactics,” the ACE program website explains.
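Heron has not published the details of its approach; the article only notes that it used reinforcement learning. Purely as an illustration of what a reinforcement learning training loop looks like, here is a toy tabular Q-learning agent on a made-up one-dimensional "pursuit" game. Every name, state, and number below is invented for the sketch and has nothing to do with Heron's actual system, which trained in a full flight simulator.

```python
# Illustrative sketch only: a toy tabular Q-learning loop, NOT Heron Systems'
# method. The "environment" is a trivial 1-D pursuit game: the agent must
# close distance to a target to reach a "guns solution".
import random

class PursuitEnv:
    """Toy stand-in for a dogfight sim: states are discrete distances 0..9."""
    def __init__(self):
        self.dist = 9

    def reset(self):
        self.dist = 9
        return self.dist

    def step(self, action):
        # actions: 0 = close distance, 1 = hold position
        if action == 0:
            self.dist = max(0, self.dist - 1)
        done = self.dist == 0             # "guns solution" reached
        reward = 1.0 if done else -0.01   # small time penalty rewards speed
        return self.dist, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Standard epsilon-greedy Q-learning over the toy state space."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(10) for a in (0, 1)}
    env = PursuitEnv()
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # explore with probability eps, otherwise act greedily
            if random.random() < eps:
                a = random.choice((0, 1))
            else:
                a = max((0, 1), key=lambda x: q[(s, x)])
            s2, r, done = env.step(a)
            best_next = max(q[(s2, 0)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After training, the greedy policy should prefer "close distance" (action 0)
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in range(1, 10)}
```

The real AlphaDogfight agents faced continuous flight dynamics and an adversarial opponent, so they would use far richer state representations and deep function approximation rather than a lookup table; the shape of the loop (act, observe reward, update a value estimate) is the part that carries over.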
IMO, this is a good outcome. If the future of war is drones fighting drones, because computers are superior to humans, we can cut out the human cost and it can just be an entertaining super-episode of BattleBots that we watch on TV as a spectator sport. And if the computers eventually decide not to be battle monkeys for our amusement and kill us all... well, that will probably happen sometime after my natural lifetime anyway.
Well, wars are about ‘people problems.’ It’s neat that we can make computers and metal blow each other up, but it’s just new ways to eventually kill people. I don’t think it will cause less ‘human’ death but ultimately more.
Oh, I definitely agree. I was just making a joke. Even if drones could fight drones, nations would also use them to bomb civilians, military installations, and scientific labs. Driving up the human cost is a pretty traditional war tactic to take away your opponent’s will to keep fighting.
Reagan struggled with the same type of arguments decades ago: https://www.history.com/news/reagan-star-wars-sdi-missile-defense
You know, 2020 has made it so I'm actually looking forward to when Skynet takes over the planet and kills us all.
Seriously, this shit isn’t funny to me... I’m not terrified, because I think we are gonna learn in time, but this shit is real and we need to stop developing smarter technology with the intent of killing ourselves. “It’s not to kill us, it’s to kill them, those humans over there. To defend our country.” ... That doesn’t matter once you reach a certain point. We will smack ourselves in the face with the lesson we’ve ignored: that we are all of one, and a computer could realize that before we will; and when it does, we will all be dead at the hands of what we created.
Much more optimistic than I am. We won’t learn, and even if many did, it just takes one rich guy like Elon Musk (just an example) who doesn’t care or hasn’t learned, or one country (like China or even the US), that’ll push it as far as they can because they want to.
There’s a really legitimate and solid point to that if you think about what we are as humans. We are a piece of a puzzle, and it’s bigger than us; it’s bigger than life here on earth. I think about “money” and the way the wealthy are the ones able to essentially “create” the world around us, and honestly it can be a concern. But I also have faith that maybe at some point we will put restrictions on certain things so one incredibly rich individual can’t just send a shit ton of satellites into space. There’s a point where I’d stand up and make a scene myself about it, but at the same time it’s like... Stephen effing Hawking already said it, so WHAT THE F. It’s going to have to happen; it’s just like, please, I don’t want to be the one to start a movement. It’s kind of sketch that the militaries of the world are often the ones with the most groundbreaking technology. Fact is, most of these creations are inevitable. Things we imagine as science fiction become real once technology is advanced enough to create them; that is precisely why so much “science fiction” has already become science fact. But we can’t get to the days of all the wonder we are gifted with through the power of our minds and the true nature of what we are if we continue to seek a path of violence; because quite literally, no BS and no tinfoil hat, technology very much could and very much would enslave us if we created it to do so in such a primitive state of consciousness.