
Robots must not be given the power to kill

Imagine this futuristic scenario: a US-led coalition is closing in on Raqqa, determined to eradicate Isis. The international forces unleash a deadly swarm of autonomous, flying robots that buzz around the city tracking down the enemy.

Using face recognition technology, the robots identify and kill top Isis commanders, decapitating the organisation. Dazed and demoralised, the Isis forces collapse with minimal loss of life to allied troops and civilians.

Who would not think that a good use of technology? As it happens, quite a lot of people, including many experts in the field of artificial intelligence, who know most about the technology needed to develop such weapons. In an open letter published last July, a group of AI researchers warned that technology had reached such a point that the deployment of Lethal Autonomous Weapons Systems (or Laws as they are incongruously known) was feasible within years, not decades. Unlike nuclear weapons, such systems could be mass produced on the cheap, becoming the “Kalashnikovs of tomorrow.”

“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing,” they said. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.” Already, the US has broadly forsworn the use of offensive autonomous weapons. Earlier this month, the United Nations held a further round of talks in Geneva between 94 military powers aiming to draw up an international agreement restricting their use.

The chief argument is a moral one: giving robots the agency to kill humans would trample over a red line that should never be crossed. Jody Williams, who won a Nobel Peace Prize for campaigning against landmines and is a spokesperson for the Campaign To Stop Killer Robots, describes autonomous weapons as more terrifying than nuclear arms. “Where is humanity going if some people think it’s OK to cede the power of life and death of humans over to a machine?” There are other concerns beyond the purely moral. Would the use of killer robots lower the human costs of war, thereby increasing the likelihood of conflict? How could proliferation of such systems be stopped? Who would be accountable when they went wrong?

This moral case against killer robots is clear enough in a philosophy seminar. The trouble is that the closer you look at their likely use in the fog of war, the harder it is to discern the moral boundaries. Robots (with limited autonomy) are already deployed on the battlefield in areas such as bomb disposal, mine clearance and antimissile systems. Their use is set to expand dramatically.

The Center for a New American Security estimates that global spending on military robots will reach $7.5bn a year by 2018, compared with the $43bn forecast to be spent on commercial and industrial robots. The Washington-based think-tank supports the further deployment of such systems, arguing they can significantly enhance “the ability of warfighters to gain a decisive advantage over their adversaries”.

In the antiseptic prose it so loves, the arms industry draws a distinction between different levels of autonomy. The first, described as humans-in-the-loop, includes Predator drones, widely used by US and other forces. Even though a drone may identify a target, it still requires a human to press the button to attack. As vividly shown in the film Eye in the Sky, such decisions can be morally agonising, balancing the importance of hitting vital targets with the risks of civilian casualties.

The second level of autonomy involves humans-on-the-loop systems, in which people supervise roboticised weapons systems, including anti-aircraft batteries. But the speed and intensity of modern warfare make it doubtful whether such human oversight amounts to effective control. The third type, humans-out-of-the-loop systems such as fully autonomous drones, is potentially the deadliest but probably the easiest to proscribe.

AI researchers should certainly be applauded for highlighting this debate. Arms control experts are also playing a useful, but frustratingly slow, part in helping define and respond to this challenge. “This is a valuable conversation,” says Paul Scharre, a senior fellow at CNAS. “But it is a glacial process.”

As in so many other areas, our societies are scrambling to make sense of fast-changing technological realities, let alone control them.

