The following report is by The New York Times:

It seems like something out of science fiction: swarms of killer robots that hunt down targets on their own and are capable of flying in for the kill without any human signing off.

But it is approaching reality as the United States, China and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life and death decisions over to autonomous drones equipped with artificial intelligence programs.

That prospect is so worrying to many other governments that they are trying to focus attention on it with proposals at the United Nations to impose legally binding rules on the use of what militaries call lethal autonomous weapons.

“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

But while the U.N. is providing a platform for governments to express their concerns, the process seems unlikely to yield substantive new legally binding restrictions. The United States, Russia, Australia, Israel and others have all argued that no new international law is needed for now, while China wants to define any legal limit so narrowly that it would have little practical effect, arms control advocates say.

The result has been to tie the debate up in a procedural knot with little chance of progress on a legally binding mandate anytime soon.

“We do not see that it is really the right time,” Konstantin Vorontsov, the deputy head of the Russian delegation to the United Nations, told diplomats who were packed into a basement conference room recently at the U.N. headquarters in New York.

The debate over the risks of artificial intelligence has drawn new attention in recent days with the battle over control of OpenAI, perhaps the world’s leading A.I. company, whose leaders appeared split over whether the firm is taking sufficient account of the dangers of the technology. And last week, officials from China and the United States discussed a related issue: potential limits on the use of A.I. in decisions about deploying nuclear weapons.

Against that backdrop, the question of what limits should be placed on the use of lethal autonomous weapons has taken on new urgency, and for now has come down to whether it is enough for the U.N. simply to adopt nonbinding guidelines, the position supported by the United States.

“The word ‘must’ will be very difficult for our delegation to accept,” Joshua Dorosin, the chief international agreements officer at the State Department, told other negotiators during a debate in May over the language of proposed restrictions.

Mr. Dorosin and members of the U.S. delegation, which includes a representative from the Pentagon, have argued that instead of a new international law, the U.N. should clarify that existing international human rights laws already prohibit nations from using weapons that target civilians or cause a disproportionate amount of harm to them.

But the position being taken by the major powers has only increased the anxiety among smaller nations, who say they are worried that lethal autonomous weapons might become common on the battlefield before there is any agreement on rules for their use.

“Complacency does not seem to be an option anymore,” Ambassador Khalil Hashmi of Pakistan said during a meeting at U.N. headquarters. “The window of opportunity to act is rapidly diminishing as we prepare for a technological breakout.”

Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.

The intense jamming of radio communications and GPS in Ukraine has only accelerated the shift, as autonomous drones can often keep operating even when communications are cut off.

“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.

Pentagon officials have made it clear that they are preparing to deploy autonomous weapons in a big way.

Deputy Defense Secretary Kathleen Hicks announced this summer that the United States military will “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, saying that the push to compete with China’s own investment in advanced weapons necessitates that the United States “leverage platforms that are small, smart, cheap and many.”

The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.

What is changing is the introduction of artificial intelligence that could give weapons systems the capability to make decisions themselves after taking in and processing information.

The United States has already adopted voluntary policies that set limits on how artificial intelligence and lethal autonomous weapons will be used, including a Pentagon policy revised this year called “Autonomy in Weapons Systems” and a related State Department “Political Declaration on Responsible Use of Artificial Intelligence and Autonomy,” which it has urged other nations to embrace.

The American policy statements “will enable nations to harness the potential benefits of A.I. systems in the military domain while encouraging steps that avoid irresponsible, destabilizing, and reckless behavior,” said Bonnie Denise Jenkins, a State Department under secretary.

The Pentagon policy prohibits the use, and even the development, of any new autonomous weapon unless it has been approved by top Defense Department officials. Such weapons must be operated in a defined geographic area for limited periods. And if the weapons are controlled by A.I., military personnel must retain “the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

At least initially, human approval will be needed before lethal action is taken, Air Force generals said in interviews.

But Frank Kendall, the Air Force secretary, said in a separate interview that these machines will eventually need to have the power to take lethal action on their own, while remaining under human oversight in how they are deployed.

“Individual decisions versus not doing individual decisions is the difference between winning and losing — and you’re not going to lose,” he said. “I don’t think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves.”

Thomas X. Hammes, a retired Marine officer who is now a research fellow at the Pentagon’s National Defense University, said in an interview and a recent essay published by the Atlantic Council that it is a “moral imperative that the United States and other democratic nations” build and use autonomous weapons.

He argued that “failing to do so in a major conventional conflict will result in many deaths, both military and civilian, and potentially the loss of the conflict.”

Some arms control advocates and diplomats disagree, arguing that A.I.-controlled lethal weapons that do not have humans authorizing individual strikes will transform the nature of warfighting by eliminating the direct moral role that humans play in decisions about taking a life.

These A.I. weapons will sometimes act in unpredictable ways, and they are likely to make mistakes in identifying targets, like driverless cars that have accidents, these critics say.

The new weapons may also make the use of lethal force more likely during wartime, since the military launching them would not be immediately putting its own soldiers at risk, or they could lead to faster escalation, the opponents have argued.

Arms control groups like the International Committee of the Red Cross and Stop Killer Robots, along with national delegations including Austria, Argentina, New Zealand, Switzerland and Costa Rica, have proposed a variety of limits.

Some would seek to globally ban lethal autonomous weapons that explicitly target humans. Others would require that these weapons remain under “meaningful human control,” and that they must be used in limited areas for specific amounts of time.

Mr. Kmentt, the Austrian diplomat, conceded in an interview that the U.N. has had trouble enforcing existing treaties that set limits on how wars can be waged. But there is still a need to create a new legally binding standard, he said.

“Just because someone will always commit murder, that doesn’t mean that you don’t need legislation to prohibit it,” he said. “What we have at the moment is this whole field is completely unregulated.”

But Mr. Dorosin has repeatedly objected to proposed requirements that the United States considers too ambiguous or is unwilling to accept, such as calling for weapons to be under “meaningful human control.”

The U.S. delegation’s preferred language is “within a responsible human chain of command.”

He said it is important to the United States that the negotiators “avoid vague, overarching terminology.”

Mr. Vorontsov, the Russian diplomat, took the floor after Mr. Dorosin during one of the debates and endorsed the position taken by the United States.

“We understand that for many delegations the priority is human control,” Mr. Vorontsov said. “For the Russian Federation, the priorities are somewhat different.”

The United States, China and Russia have also argued that artificial intelligence and autonomous weapons might bring benefits by reducing civilian casualties and unnecessary physical damage.

“Smart weapons that use computers and autonomous functions to deploy force more precisely and efficiently have been shown to reduce risks of harm to civilians and civilian objects,” the U.S. delegation has argued.

Mr. Kmentt in early November won broad support for a revised plan that asked the U.N. secretary general’s office to assemble a report on lethal autonomous weapons, but it made clear that in deference to the major powers the detailed deliberations on the matter would remain with a U.N. committee in Geneva, where any single nation can effectively block progress or force language to be watered down.

Last week, the Geneva-based committee agreed at the urging of Russia and other major powers to give itself until the end of 2025 to keep studying the topic, one diplomat who participated in the debate said.

“If we wait too long, we are really going to regret it,” Mr. Kmentt said. “Soon enough, it will be cheap, easily available, and it will be everywhere. And people are going to be asking: Why didn’t we act fast enough to try to put limits on it when we had a chance to?”

AUTHOR COMMENTARY

In addition to this report, it is worth noting that during a speech in August, U.S. Deputy Secretary of Defense Kathleen Hicks said technology like AI-controlled drone swarms would allow America to offset the numerical advantage in weapons and people held by China’s People’s Liberation Army (PLA).

“We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat,” she said, as reported by Reuters.

It was only a matter of time before the prospect of autonomous drones and robots doing our killing arrived. But the real question here, as so many are probably thinking right now, is how long before these AI drones commit “friendly fire,” or, God forbid, commit mutiny? We know it’s coming; it is now just a matter of when, not if.

As he that bindeth a stone in a sling, so is he that giveth honour to a fool.

Proverbs 26:8

And how long before this happens domestically? The WinePress has documented a number of examples of nations around the world, namely the ones listed in this article, arming themselves with weaponized AI drones and/or increasing their drone usage for domestic surveillance:

Two weeks ago I reported how an autonomous factory robot killed a man in South Korea after it mistook him for a box of paprika.

This is going to be a total disaster…


[7] Who goeth a warfare any time at his own charges? who planteth a vineyard, and eateth not of the fruit thereof? or who feedeth a flock, and eateth not of the milk of the flock? [8] Say I these things as a man? or saith not the law the same also? [9] For it is written in the law of Moses, Thou shalt not muzzle the mouth of the ox that treadeth out the corn. Doth God take care for oxen? [10] Or saith he it altogether for our sakes? For our sakes, no doubt, this is written: that he that ploweth should plow in hope; and that he that thresheth in hope should be partaker of his hope. (1 Corinthians 9:7-10).

The WinePress needs your support! If God has laid it on your heart to want to contribute, please prayerfully consider donating to this ministry. If you cannot gift a monetary donation, then please donate your fervent prayers to keep this ministry going! Thank you and may God bless you.


4 Comments

  • I know. This AI obsession will result in chaos. Another thing that’s been bothering me for a while now, I’ve been trying to warn about the dangers with brethren having cell phones and closely-related tech devices, not just here but other websites too. Like the forum (you know which one I’m referring to), where I’ve noticed some people using mobile phones with posting replies. I was being lax on that site regarding this, mainly because I feel I have to study this harder to provide more detailed info.

    Aside from the health detriments, tracking and spying aspects, they have been integrating more advanced AI programs into these things. I’ve read an article about 5G Warfare linked on Bryan’s study. It mentions that cell phones can be used as homing targets for specific types of weapons and ammunition. I’m guessing possibly even other devices like tablets or whatever else that’s related similarly to cell phones’ general functions and circuitry. I think AI, as it is developed further, will function completely like a hive mind, interconnected despite appearing to be dispersed within separate machines and devices, specifically the newer technological devices people have and continue to upgrade to.

    I don’t understand how some brethren cannot bring themselves to get rid of these things. The truth of the evil and dangers of these devices is plain as day.

    Quotes from the article:

    ‘Katz and Bohbot describe separately in their book “Weapon Wizards, 2017”, how IMSI-catchers and CELLULAR NETWORK ANALYSIS were used to previously identify and destroy Hamas tunnels. If an IMSI “teleports” from one place to another, it’s a tunnel. A single fighter (likely many) forgetting to turn off their CELLULAR TRANSMITTERS after the news reports may have resulted in massive, heavy bombing attacks. There is so much data in our corner of the universe, that the absence of data can even provide information.

    The IDF Spokesperson’s Unit announced two weeks after operation “Guardian of The Walls”, that the conflict was the “FIRST AI WAR”. IDF continued to describe a system built by Unit 8200 that fused “signal intelligence (SIGINT), visual intelligence (VISINT), human intelligence (HUMINT), geographical intelligence (GEOINT)”. While such battlefield management systems (BMS, or C5ISR) have existed for years before the 2021 Gaza crisis, the announcements themselves combined with SOCIAL MEDIA DECEPTION AND PRECISION-GUIDED MUNITIONS represent a stark contrast to the Lind definition of fourth-generation warfare.’

    Bryan’s Video:
    https://youtu.be/vepFuas7bmQ?feature=shared

    Welcome To Fifth Generation Warfare:
    https://greydynamics.com/an-introduction-to-fifth-generation-warfare/

    5G phones emit so much radiation they can open a bottle of champagne
    https://rumble.com/v2vana3-5g-phones-emit-so-much-radiation-they-can-open-a-bottle-of-champagne…..html

    • Thanks Jacob for letting that post come through. I’m thinking that, “you know the forum I’m referring to”, may be worded wrong. I only meant that I didn’t need to broadcast the specific name publicly in relation to the comment, since not all readers that come here are Christians.

      I thought to add something that came to me while writing a more careful and detailed post for brethren over there.

      Ephesians 2:2
      wherein in time past ye walked according to the course of this world, according to the prince of the power of the air, the spirit that now worketh in the children of disobedience:

      EMF waves; prince of the power of the air. So every time Christians willingly use any wireless device, they’re essentially letting Satan be channeled directly to them from cell towers or satellites. It’s like inviting a vampire to come into your house.

      • *I’m not saying Christians can be possessed by devils since the Holy Spirit is in us. But vexation and oppression from devils can happen with having these things.

  • oops, they took out our own people, glitch glitch. Not to worry they will use it on citizens that they deem a problem.
