The future of war

Algorithms may make proficient soldiers but poor generals


THE UN’S Panel of Experts on Libya rarely grabs the headlines. But its valedictory report in March caused a furore. It noted that in a battle around Tripoli last year, Libya’s government had “hunted down and remotely engaged” the enemy with drones—and not just any drones. The Kargu-2 was programmed to attack “without requiring data connectivity between the operator and the munition”. The implication was that it could pick its own targets.

Was this a true autonomous weapon, or just a clever missile? In June the Turkish manufacturer insisted that, contrary to its own marketing, for now the drone required a human to push the button. This sort of technology is at the heart of “I, Warbot” by Kenneth Payne, a thought-provoking reflection on how artificial intelligence (AI) will change conflict.

In some ways, the story is familiar. It involves the entwined histories of computing and warfare; the recent evolution of new, powerful forms of AI modelled on the neurons of the brain rather than the logic of the mind; and the ensuing possibilities for weapons to see what is around them—and strike with superhuman speed and precision. Mr Payne, an academic at King’s College London, is especially bullish on the potential of swarms, “a menagerie of specialist robots” that can concentrate to attack and melt away just as quickly.

“The tactical implications are profound,” he predicts. The offence will dominate. Defenders will have to rely on deception, generating clouds of decoy targets, rather than on protections like armour and fortification. Martial virtues such as courage and leadership will give way to technical competence. Dividing armed forces into services optimised for land, air and sea may look increasingly strange in a world of machines that can range across them.


Above all, though, “I, Warbot” is a reminder that war is about more than tactics. It is about choosing which battles to fight, how to knit them into a successful campaign and how to connect military victories to political aims—in short, war is about strategy. And soldiery and strategy are fundamentally different. Computer programs can already defeat human pilots in simulated dogfights. But could they come up with the bold, swift and visionary attacks that let Napoleon Bonaparte knock out one European army after another?


Algorithms can certainly outwit opponents in games that blend skill, chance and psychology. In 2017 Libratus, a computer program, saw off four poker stars. AI can also innovate: in 2016 AlphaGo, another program, thrashed a world champion of Go, an ancient Chinese board-game, with moves that dazzled onlookers.

But, argues Mr Payne, this is a simulacrum of genius, not the real thing. These gizmos exhibit “exploratory creativity”—essentially a brute-force calculation of probabilities. That is fundamentally different from “transformational creativity”, which entails the ability to consider a problem in a wholly new way, and requires playfulness, imagination and a sense of meaning. All that may depend on emotion, and thus on parts of human biology alien to computers. “AI is a statistical processor par excellence”; but in essence it remains “a wonderfully sophisticated abacus”.

A proficient soldier, the warbot may thus be a limited general. The problem is that the line between tactics and strategy can blur. Battlefield decisions can have geopolitical ramifications. Consider the case of B-59, a Soviet submarine pounded by American depth-charges during the Cuban missile crisis of 1962. The frazzled captain ordered the use of a nuclear-tipped torpedo. Conscious of the stakes, Vasily Arkhipov, the second-in-command, refused to authorise the launch.

Would a computer have done so? “A warbot is likely to be more accurate, proportionate and discriminate” than humans, says Mr Payne. The risk is that “a machine is undeterred by the sobering fear of things getting out of hand.”

