AI Package Delivery Drones Are Just Killer Robots In Waiting

By Blair Morris

June 18, 2019

Terminator robot. (YOSHIKAZU TSUNO/AFP/Getty Images)

As governments across the world continue to debate the merits and dangers of AI-powered fully autonomous weapons, it is worth stepping back for a moment and looking critically at the state of the autonomous landscape. Advances in everything from consumer drones to facial recognition to autonomous flight are yielding a steady march of fully autonomous drones capable of navigating the human environment and delivering items to specific individuals. While most press coverage has focused on the positives of these new systems, militaries around the world have been eagerly transforming the same tools into weapons systems. Modified civilian drones today are capable of navigating denied spaces, seeking targets based on facial recognition and delivering lethal force, all using the same tools and technology that universities and companies are building for helpful tasks like delivering aid packages to disaster regions. What does the future look like once we realize that AI-powered package delivery drones are really just autonomous weapons in waiting?

Policymakers the world over have spent recent years debating an inevitable future in which weapons systems are increasingly automated.

While autonomous and semi-autonomous weapons systems have been in widespread deployment for decades, to date these systems have automated only their navigation and coordination tasks, leaving targeting firmly in the hands of humans.

Yet, as society as a whole moves towards ever-increasing automation, Western militaries are being forced to grapple with the simple fact that their adversaries may not be as averse to self-targeting weapons as they are.

Weapons that can take over the most sensitive cognitive function of a soldier, deciding whom to kill, pose some of the most ethically fraught questions in warfare, rivaled only by offensive cybersecurity’s quandaries over targeting civilian infrastructure, such as bringing down airplanes or triggering radiological releases from power plants.

Take an AI-powered drone that can loiter over an area and select its own targets. At what point does that drone cease to be a smart weapon and legally become a combatant itself?

More existentially, when there is no risk of human casualty to one side, the cold calculus of reciprocity begins to break down, lowering the barrier to conflict and potentially encouraging greater interventionism.

What happens when one side of a conflict adopts AI-powered weapons while the other, citing moral and ethical objections, does not? More than half a century after the introduction of nuclear weapons, a country’s nuclear arsenal still plays an outsized role in determining its ability to exert its will globally. Military might, in other words, still wins out over ethical considerations, placing considerable pressure on even peaceful nations to adopt AI weapons systems.

What happens when an AI-powered military system malfunctions? The science fiction canon is littered with reminders that a malfunctioning AI system can perform so many wrong operations so quickly that a conflict may be over before the human side even realizes there was a problem.

Most problematically, AI systems today are still merely simplistic correlation engines, representing their worlds as primitive assemblies of colors and textures. An AI-powered weapon does not recognize a target as a specific individual, vehicle or structure, but rather as a unique set of colors and textures in a specific relationship to each other.

This makes such weapons uniquely vulnerable to subtle modifications, known in the machine learning literature as adversarial examples, that can mask their targets or even cause them to attack unrelated targets.

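To make that vulnerability concrete, below is a minimal sketch of the fast gradient sign method (FGSM), the textbook adversarial attack, written in Python with PyTorch. The model and inputs are hypothetical stand-ins, not the classifier of any real weapons system.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss. The perturbation is bounded by epsilon,
# so it is imperceptible to a human but can flip the model's output.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A per-pixel change of a fraction of a percent has been enough, in published demonstrations, to hide an object from a detector or make one object register as another.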

Setting all of these issues aside, how far are we from actually having AI-powered autonomous weapons?

Universities and companies across the world have been rushing to build package delivery drones capable of navigating complex urban environments entirely on their own and even coordinating with other drones.

While the developers of these drone systems are building systems for good, the Islamic State reminds us that one person’s package delivery drone is another person’s autonomous weapons system.

In fact, universities, companies and militaries all over the world are already building killer robots with society’s full encouragement and blessing: drone-killing drones.

As consumer drones have caused chaos at airports, threatened public safety and stalked us through our bedroom windows, a societal consensus has grown around the need to combat the illicit use of drones.

This, in turn, has led to the growing world of anti-drone technology. Ranging from simple RF jammers to EMP systems to high-powered lasers to projectile systems to emerging exotic technologies, the ability to bring down an errant drone in flight has become a major focus of public safety officials, especially given the danger drones pose to aircraft and public gatherings.

One technology with particular relevance to autonomous weapons is the drone-killing drone. These modified civilian drones are equipped with specialized sensors and navigation systems and designed to identify an unauthorized drone and bring it down by various means.

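The control logic involved is not exotic. Here is a hedged sketch of what such an interceptor’s outer loop might look like; `detector`, `tracker` and `flight` are hypothetical placeholder interfaces standing in for off-the-shelf components, not any real product’s API.

```python
# Hypothetical outer loop of a drone-killing drone: patrol, detect, pursue,
# deploy a countermeasure. Every interface here is a placeholder.
import time

def intercept_loop(detector, tracker, flight, patrol_route):
    flight.follow_route(patrol_route)
    while True:
        frame = flight.camera_frame()
        drones = [d for d in detector.detect(frame) if d.label == "drone"]
        if not drones:
            time.sleep(0.05)                    # keep patrolling
            continue
        lock = tracker.start(frame, drones[0].box)
        while lock.active():
            flight.steer_towards(lock.position())
            if lock.range_m() < 2.0:            # close enough to act
                flight.deploy_countermeasure()  # net, jammer, etc.
                return
```

Swap the detector’s “drone” label for “person” and the countermeasure for a weapon, and the same loop describes the ground-patrol scenario discussed below.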

In short, a killer robot, though one that kills other robots rather than humans.

A drone-killing drone could be easily modified to patrol the ground rather than the sky, autonomously targeting and applying lethal force to any vehicle or pedestrian that strays into a denied space.

Ironically, some of the same institutions and researchers that have come out so forcefully against “killer robots” are among those building these dual-purpose robot-killing robots.

Militaries across the world, including our own, have been quick to adapt civilian advances in both drone and AI technologies towards autonomous weapons systems.

There are already modified civilian drone platforms specifically designed for combat. With autonomous visual flight and onboard maps that permit them to operate in radio- and GPS-denied spaces, they can fly to a target destination, use an onboard camera and AI system to visually identify a target, deliver a payload to that target and return, all without any human intervention, even while stalking a fluid, moving target.

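Strung together, such a mission is little more than a short state machine over off-the-shelf parts. A hedged sketch, in which `nav`, `vision` and `payload` are hypothetical placeholders for the components just described, not a real platform’s API:

```python
# Hypothetical mission logic for the platform described above: transit on
# visual/map-based navigation, search with an onboard detector, deliver,
# return. All component interfaces are placeholders.
from enum import Enum, auto

class Phase(Enum):
    TRANSIT = auto()
    SEARCH = auto()
    DELIVER = auto()
    RETURN = auto()

def run_mission(nav, vision, payload, destination, home):
    phase = Phase.TRANSIT
    while True:
        if phase is Phase.TRANSIT:
            nav.fly_to(destination)      # visual odometry + onboard maps,
            phase = Phase.SEARCH         # no GPS or radio link required
        elif phase is Phase.SEARCH:
            target = vision.find_target(nav.camera_frame())
            if target is not None:
                nav.track(target)        # follow a fluid, moving target
                phase = Phase.DELIVER
        elif phase is Phase.DELIVER:
            payload.release()
            phase = Phase.RETURN
        else:  # Phase.RETURN
            nav.fly_to(home)
            return
```

Nothing in the loop distinguishes an aid drop from a strike; only what `payload.release()` lets go of changes.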

There are systems for scanning military uniforms for rank insignia on bases and on the battlefield, allowing autonomous weapons to automatically target senior officers, under an interpretation that officers represent more permissible targets for autonomous weapons than enlisted personnel.

More troubling, there are already military drones with onboard facial recognition databases that can be launched to scan a large public gathering and identify any persons of interest. These individuals could simply be tracked and filmed for reconnaissance, or marked for ground security forces with infrared lasers or marking dye. Experimental systems have even been designed to identify individuals in a crowd who are carrying weapons or behaving in a violent or disruptive manner.

It would take little modification for such systems to use more incapacitating or lethal means against their targets, and indeed such systems are already being explored.

In the midst of our societal debate over the high error rate of facial recognition systems in the field, what does it mean when a facial recognition error results in someone being mistakenly killed by a terrorist-hunting AI-powered drone?

Moreover, who bears legal or even criminal responsibility for that fatal facial recognition algorithm failure? Is it the government deploying the drone, the defense contractor that built the drone or the technology company that built the facial recognition software it used?

We have the technology today to deploy drones that can loiter over denied spaces, targeting anything humanoid that enters a geofenced area, even filtering by whether the individual matches a facial recognition database, is wielding a weapon or is judged by the algorithm to be behaving in a “threatening” manner.

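That targeting chain is, at bottom, a few lines of glue code over consumer-grade components. A minimal sketch, assuming hypothetical detection records; the field names and thresholds are illustrative, not drawn from any real system.

```python
# Minimal sketch of the filtering chain described above, over hypothetical
# detection records. Field names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Detection:
    lat: float
    lon: float
    is_person: bool
    watchlist_score: float  # facial-recognition similarity, 0..1
    weapon_score: float     # weapon-detector confidence, 0..1

def in_geofence(d, fence):
    lat_min, lat_max, lon_min, lon_max = fence
    return lat_min <= d.lat <= lat_max and lon_min <= d.lon <= lon_max

def select_targets(detections, fence, face_thresh=0.8, weapon_thresh=0.6):
    return [
        d for d in detections
        if d.is_person
        and in_geofence(d, fence)
        and (d.watchlist_score >= face_thresh or d.weapon_score >= weapon_thresh)
    ]
```

Versions of every predicate in that filter already ship in consumer software; the ethically fraught step is nothing more than a list comprehension.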

These aren’t science fiction visions of a faraway future. These are commercial products being sold today by defense contractors to governments across the world and put into active service.

Attend any drone event in DC and you’re likely to see dozens of such products being discussed and demonstrated, sometimes presented by the technology companies whose AI platforms they run on.

While their real-world performance may not yet match their marketing hype, the simple fact remains that these systems are already out there and getting better by the day.

Most importantly, these aren’t billion-dollar weapons systems dependent on exotic export-restricted equipment. They are typically modified civilian drone chassis carrying consumer mobile AI processing platforms and commercially available visual recognition algorithms, meaning governments all across the world can readily produce them. And to the civilian populations underneath, they can easily pass for ordinary tourist drones until they deploy their payloads.

Such civilian-based drones are still limited by their batteries to relatively short deployments, but their autonomous components, based entirely on consumer mobile AI hardware platforms and readily available video processing technology, can be easily repurposed for long-range, long-duration drone systems. Some AI companies are already actively pitching defense contractors on the military applications of their consumer technologies.

Putting this all together, while policymakers slowly debate the dangers of autonomous weapons systems in abstract terms, those very systems are already being deployed across the world, but in an unexpected form. Rather than the bipedal walking Terminator units of science fiction or traditional military drones the size of small planes, autonomous weapons-capable military systems have come into widespread use through civilian drones.

Crucially, their autonomy has come entirely through consumer AI platforms, making it readily portable to a wide range of weapons systems.

Halting the progression of consumer AI developments to military use is nearly impossible in a world in which every advance in image and video processing represents another new capability easily added to a military drone.

In the end, every self-following selfie drone, package delivery drone and mobile AI camera platform is merely an autonomous weapons system in waiting.

It seems that wait is increasingly over.

“>< div _ ngcontent-c14 ="" innerhtml ="

Terminator robot.( YOSHIKAZU TSUNO/AFP/Getty Images)

Getty

As governments throughout the world continue to dispute the merits and dangers of AI-powered completely autonomous weapons, it deserves stepping back a minute and looking seriously at the state of the autonomous landscape. Advances in whatever from consumer drones to facial acknowledgment to autonomous flight are yielding a continuous march of advances in fully autonomous drones capable of browsing the human environment and providing products to specific people. While the majority of journalism has focused on the positives of these new systems, militaries worldwide have actually been excitedly changing these tools into weapons systems. Modified civilian drones today are capable of navigating denied spaces, looking for targets based upon facial acknowledgment and providing deadly force, all utilizing the exact same tools and innovation being built by universities and companies for handy jobs like plan delivery drones to deliver help to disaster areas. What does the future appear like when we recognize that AI-powered plan shipment drones are actually simply autonomous weapons in waiting?

Policymakers the world over have actually spent current years disputing an inescapable future in which weapons systems are progressively automated.

While autonomous and semi-autonomous weapons systems have been in prevalent implementation for decades, to date these systems have actually automated just their navigation and coordination tasks, leaving targeting strongly in the hands of human beings.

Yet, as society as a whole relocations towards ever-increasing automation, Western armed forces are being required to face the basic reality that their enemies may not be as adverse to self-targeting weapons as themselves.

Weapons that can take control of the most sensitive cognitive functions of a soldier, choosing who to kill, present some of the most ethically filled factors to consider of warfare beyond offending cybersecurity’s predicaments regarding the targeting of civilian facilities like reducing planes or triggering radiological releases from power plants.

Take an AI-powered drone that can loiter over an area and pick its own targets. At what point does that drone stop to be a wise weapon and legally become a contender itself?

More existentially, when there is no risk of human casualty to one side, the cold calculus of reciprocity starts to break down, lowering the barrier to dispute and potentially motivating greater interventionism.

What occurs when one side of a dispute adopts AI-powered weapons while the other, mentioning moral and ethical objections, does not? Half a century after the introduction of nuclear weapons, a nation’s nuclear arsenal still plays an outsized function in determining its ability to apply its will internationally, indicating military might still wins out over ethical considerations, positioning considerable pressure on even serene nations to embrace AI weapons systems.

What takes place when an AI-powered military system malfunctions? The science fiction canon is cluttered with suggestions that a malfunctioning AI system can perform many incorrect operations so quickly that a dispute may be over prior to the human side even recognizes there was a problem.

The majority of problematically, AI systems today are still simply simplified correlation engines, representing their worlds as ignorant primitive assemblies of colors and textures. An AI-powered weapon does not recognize a target as a particular individual, car or structure, however rather as a special set of colors and textures in a specific relationship to each other.

This makes such weapons distinctively vulnerable to subtle adjustments that can mask their targets or perhaps trigger them to attack unrelated targets.

Setting all of these problems aside, how far are we from really having AI-powered autonomous weapons?

Universities and business throughout the world have actually been hurrying to build plan delivery drones efficient in navigating complex city environments entirely by themselves and even coordinating with other drones.

While the designers of these drone systems are building systems for excellent, the Islamic State advises us that a person individual’s plan shipment drone is another individual’s autonomous weapons system.

In fact, universities, business and armed forces all over the world are currently constructing killer robotics with society’s totally motivation and blessing: drone-killing drones.

As customer drones have caused turmoil at airports, threatened public security and stalked us through our bed room windows, there has actually been a growing societal consensus of a higher requirement to combat illicit use of drones.

This, in turn, has led to the growing world of anti-drone technology. Varying from easy RF jammers to EMP pulse systems to high-powered lasers to projectile systems to emerging unique innovations, the ability to lower an errant drone in flight has ended up being a significant focus of public safety authorities, particularly regarding their threat to aircraft and public gatherings.

One innovation with specific importance to autonomous weapons is the drone-killing drone. These modified civilian drones are geared up with various sensing units and navigation systems and designed to recognize an unapproved drone and bring it down through various methods.

In short, a killer robot, though one that eliminates other robots instead of human beings.

A drone killing drone could be easily modified to patrol the ground rather than the sky, autonomously targeting and using deadly force to any car or pedestrian that wanders off into a denied area.

Ironically, a few of the very same organizations and researchers that have actually come out so powerfully versus “killer robotics” are among those developing these dual-purpose robot-killing robotics.

Militaries across the world, including our own, have actually fasted to adapt civilian advances in both drone and AI innovations towards autonomous weapons systems.

There are already customized civilian drone platforms that have actually been particularly created to be utilized in battle, with self-governing visual flight and onboard maps that permit them to successfully run in radio and GPS-denied spaces, allowing them to fly to a target destination, use an onboard cam and AI system to aesthetically identify a target, deliver a payload to that target and return, all with no human intervention and while stalking a fluid and moving target.

There are systems for scanning military uniforms for rank signs on bases and in the battleground, permitting autonomous weapons to target senior officers automatically under an interpretation that they represent more allowable targets for autonomous weapons than employed personnel.

More troubling, there are currently military drones with onboard facial recognition databases that can be released to scan a big public event and determine anyones of interest. These individuals could merely be tracked and filmed for reconnaissance or marked with infrared lasers for ground security forces or dropping marking color on them. Experimental systems have actually even been created to recognize people in a crowd carrying weapons or behaving in a violent or disruptive manner.

It would take little modification for such systems to make use of more incapacitating or lethal ways against their targets and undoubtedly such systems are currently being explored.

In the middle of our societal dispute over the high error rate of facial acknowledgment systems in the field, what does it mean when a facial recognition error could indicate somebody is incorrectly killed by a terrorist-hunting AI-powered drone?

Furthermore, who bears legal and even criminal responsibility for that fatal facial recognition algorithm failure? Is it the government releasing the drone, the defense specialist that constructed the drone or the innovation company that built the facial acknowledgment software it used?

We have the innovation today to release drones that can loiter over rejected areas, targeting anything humanoid that gets in a geofenced location, even filtering by whether the specific matches a facial recognition database, is wielding a weapon or is judged by the algorithm to be behaving in a “threatening” way.

These aren’t science fiction visions of a faraway future. These are industrial items being offered today by defense specialists to federal governments across the world today and took into active duty.

Participate in any drone event in DC and you’re most likely to see actually lots of such products being discussed and demonstrated, often being provided by the innovation business whose AI platforms they run on.

While their real world efficiency might not yet match their marketing buzz, the simple fact stays that these systems are currently out there and getting better day by day.

Most significantly, these aren’t billion-dollar weapons systems based on unique export-restricted devices. They are generally customized civilian drone chassis with onboard customer mobile AI processing platforms and utilizing commercially readily available visual acknowledgment algorithms, indicating federal governments all throughout the world can easily produce them. Most significantly, to civilian populations underneath, they can easily pass for common civilian traveler drones till deploying their payloads.

Such civilian-based drones are still restricted by battery limitations to relatively brief releases, but their self-governing components, based totally on customer mobile AI hardware platforms and readily available video processing technology, can be easily repurposed to long-range and long-duration drone systems. Some AI business are already actively pitching defense specialists on the military applications of their consumer technologies.

Putting this all together, while policymakers slowly dispute the dangers of autonomous weapons systems in abstract terms, those extremely systems are currently being deployed across the world, but in an unforeseen type. Rather than the bipedal walking Terminator systems of sci-fi or conventional military drones the size of little airplanes, self-governing weapons-capable military systems have entered extensive use through civilian drones.

Most importantly, their autonomy has come completely through consumer AI platforms, making them readily portable to a wide variety of weapons systems.

Stopping the progression of consumer AI advancements to military use is almost impossible in a world in which every advance in image and video processing represents another new ability easily contributed to a military drone.

In the end, every self-following selfie drone, bundle delivery drone and mobile AI camera platform is merely a self-governing weapons system in waiting.

It appears that wait is significantly over.

” >

Terminator robot. (YOSHIKAZU TSUNO/AFP/Getty Images)

Getty

.

As governments throughout the world continue to discuss the benefits and threats of AI-powered completely autonomous weapons, it is worth going back a minute and looking seriously at the state of the self-governing landscape. Advances in whatever from consumer drones to facial recognition to self-governing flight are yielding a consistent march of advances in totally self-governing drones efficient in navigating the human environment and delivering products to particular individuals. While most of journalism has concentrated on the positives of these new systems, militaries worldwide have been eagerly transforming these tools into weapons systems. Customized civilian drones today can navigating rejected areas, looking for targets based upon facial recognition and delivering lethal force, all utilizing the very same tools and technology being developed by universities and business for helpful tasks like plan delivery drones to provide aid to disaster areas. What does the future look like when we understand that AI-powered package shipment drones are truly simply self-governing weapons in waiting?

Policymakers the world over have spent recent years debating an inevitable future in which weapons systems are significantly automated.

While self-governing and semi-autonomous weapons systems have been in prevalent release for years, to date these systems have actually automated only their navigation and coordination jobs, leaving targeting strongly in the hands of people.

Yet, as society as an entire moves towards ever-increasing automation, Western armed forces are being required to grapple with the easy truth that their enemies may not be as negative to self-targeting weapons as themselves.

Defense that can take control of the most delicate cognitive functions of a soldier, deciding who to eliminate, pose some of the most morally laden considerations of warfare beyond offensive cybersecurity’s predicaments regarding the targeting of civilian infrastructure like lowering planes or triggering radiological releases from power plants.

Take an AI-powered drone that can loiter over an area and pick its own targets. At what point does that drone stop to be a clever weapon and legally end up being a contender itself?

More existentially, when there is no risk of human casualty to one side, the cold calculus of reciprocity starts to break down, lowering the barrier to conflict and potentially motivating greater interventionism.

What happens when one side of a conflict embraces AI-powered weapons while the other, mentioning ethical and ethical objections, does not? Half a century after the introduction of nuclear weapons, a country’s nuclear toolbox still plays an outsized function in determining its capability to apply its will worldwide, meaning military may still wins out over ethical factors to consider, placing significant pressure on even tranquil countries to adopt AI weapons systems.

What happens when an AI-powered military system breakdowns? The sci-fi canon is cluttered with tips that a malfunctioning AI system can perform a lot of incorrect operations so rapidly that a dispute might be over before the human side even recognizes there was an issue.

The majority of problematically, AI systems today are still simply simplistic connection engines, representing their worlds as ignorant primitive assemblies of colors and textures. An AI-powered weapon does not acknowledge a target as a particular person, vehicle or structure, but rather as a distinct set of colors and textures in a specific relationship to each other.

This makes such weapons distinctively vulnerable to subtle modifications that can mask their targets or perhaps cause them to assault unrelated targets.

Setting all of these issues aside, how far are we from in fact having AI-powered self-governing weapons?

Universities and business throughout the world have actually been rushing to build bundle shipment drones efficient in navigating intricate metropolitan environments entirely by themselves and even coordinating with other drones.

While the developers of these drone systems are building systems for good, the Islamic State reminds us that a person person’s plan shipment drone is another individual’s autonomous weapons system.

In fact, universities, business and militaries all over the world are currently building killer robotics with society’s fully encouragement and blessing: drone-killing drones.

As customer drones have triggered turmoil at airports, threatened public security and stalked us through our bedroom windows, there has actually been a growing societal consensus of a greater requirement to combat illegal usage of drones.

This, in turn, has actually resulted in the growing world of anti-drone innovation. Ranging from basic RF jammers to EMP pulse systems to high-powered lasers to projectile systems to emerging unique innovations, the ability to bring down an errant drone in flight has actually ended up being a major focus of public safety officials, specifically regarding their risk to airplane and public events.

One technology with particular importance to autonomous weapons is the drone-killing drone. These modified civilian drones are equipped with different sensing units and navigation systems and designed to identify an unapproved drone and bring it down through different means.

In other words, a killer robotic, though one that eliminates other robotics rather than people.

A drone eliminating drone might be easily modified to patrol the ground rather than the sky, autonomously targeting and applying deadly force to any vehicle or pedestrian that wanders off into a denied space.

Paradoxically, some of the very same organizations and scientists that have actually come out so powerfully against “killer robots” are amongst those building these dual-purpose robot-killing robotics.

Militaries throughout the world, including our own, have actually fasted to adapt civilian advances in both drone and AI innovations towards self-governing weapons systems.

There are already customized civilian drone platforms that have been specifically developed to be utilized in battle, with autonomous visual flight and onboard maps that allow them to effectively run in radio and GPS-denied areas, allowing them to fly to a target location, use an onboard video camera and AI system to aesthetically determine a target, provide a payload to that target and return, all with no human intervention and while stalking a fluid and moving target.

There are systems for scanning military uniforms for rank indicators on bases and in the battleground, enabling autonomous weapons to target senior officers automatically under an interpretation that they represent more permissible targets for self-governing weapons than gotten personnel.

More troubling, there are currently military drones with onboard facial recognition databases that can be launched to scan a large public gathering and identify anybodies of interest. These individuals could merely be tracked and filmed for reconnaissance or marked with infrared lasers for ground security forces or dropping marking dye on them. Speculative systems have actually even been developed to identify people in a crowd bring weapons or behaving in a violent or disruptive manner.

It would take little modification for such systems to make use of more incapacitating or lethal means against their targets and certainly such systems are already being explored.

In the midst of our societal dispute over the high mistake rate of facial acknowledgment systems in the field, what does it mean when a facial recognition mistake could indicate someone is erroneously killed by a terrorist-hunting AI-powered drone?

Moreover, who bears legal or even criminal duty for that fatal facial recognition algorithm failure? Is it the federal government deploying the drone, the defense professional that built the drone or the innovation company that built the facial acknowledgment software application it used?

We have the innovation today to deploy drones that can loiter over denied areas, targeting anything humanoid that enters a geofenced area, even filtering by whether the specific matches a facial recognition database, is wielding a weapon or is judged by the algorithm to be acting in a “threatening” way.

These aren’t science fiction visions of a distant future. These are business products being offered today by defense professionals to governments throughout the world today and took into active duty.

Participate in any drone occasion in DC and you’re most likely to see literally lots of such items being talked about and shown, in some cases existing by the innovation business whose AI platforms they run on.

While their real life efficiency might not yet match their marketing buzz, the simple fact stays that these systems are already out there and improving by the day.

Most significantly, these aren’t billion-dollar weapons systems depending on exotic export-restricted equipment. They are generally modified civilian drone chassis with onboard consumer mobile AI processing platforms and utilizing commercially readily available visual acknowledgment algorithms, indicating governments all across the world can easily produce them. Most significantly, to civilian populations beneath, they can quickly pass for normal civilian traveler drones up until releasing their payloads.

Such civilian-based drones are still limited by battery restrictions to reasonably short releases, but their autonomous components, based completely on customer mobile AI hardware platforms and easily offered video processing innovation, can be easily repurposed to long-range and long-duration drone systems. Some AI business are currently actively pitching defense professionals on the military applications of their consumer innovations.

Putting this all together, while policymakers gradually discuss the threats of autonomous weapons systems in abstract terms, those very systems are already being deployed throughout the world, however in an unforeseen type. Instead of the bipedal walking Terminator units of science fiction or traditional military drones the size of little planes, autonomous weapons-capable military systems have entered prevalent use through civilian drones.

Most importantly, their autonomy has actually come totally through customer AI platforms, making them easily portable to a large range of weapons systems.

Halting the development of customer AI advancements to military use is almost impossible in a world in which every advance in image and video processing represents another brand-new capability quickly contributed to a military drone.

In the end, every self-following selfie drone, plan shipment drone and mobile AI camera platform is simply a self-governing weapons system in waiting.

It seems that wait is progressively over.

Find Out More .

About Blair Morris

Leave a Reply

Your email address will not be published. Required fields are marked *