The following report is a blog post published by the Department of Defense on December 14, 2023:
As part of that effort, known as Task Force Lima, Streilein said his office has identified nearly 200 use cases for how the department could leverage the breakthrough technology across a variety of functions.
“And we’re assessing them; we’re trying to understand which ones would be appropriate given the state of technology, which is important to acknowledge. There is still a lot to learn about it. It definitely has commercial application, but within the DOD, the consequences are perhaps higher and we need to be responsible in how we leverage it,” Streilein said during a discussion on the role of trusted AI in the DOD hosted by Government Executive, a government-focused publication based in Washington, D.C.
Streilein explained that establishing trust in each application of the technology is of critical importance, meaning confidence that the AI algorithm produced the intended result.
“So that means we have to be good with our testing. We have to be able to specify what we want the algorithms to do, and then we can move forward with justified confidence,” he said.
He added that in addition to trust, the DOD places special emphasis on key tenets underpinning its ethical principles for AI: responsibility, reliability, equitability, governability and traceability. “Those are actually terms […] that apply to the human in their application of AI,” he said. “Meaning that we should always be responsible in our use of AI. We should know how we’re applying it, know that we have governance over it, know that we understand how it provided its answer.”
Last month, the DOD released its strategy to accelerate the adoption of advanced artificial intelligence capabilities to ensure U.S. warfighters maintain decision superiority on the battlefield for years to come.
The 2023 Data, Analytics and Artificial Intelligence Adoption Strategy, which was developed by the Chief Digital and AI Office, builds upon and supersedes the 2018 DOD AI Strategy and revised DOD Data Strategy, published in 2020, which have laid the groundwork for the department’s approach to fielding AI-enabled capabilities.
The strategy prescribes an agile approach to AI development and application, emphasizing speed of delivery and adoption at scale leading to five specific decision advantage outcomes:
- Superior battlespace awareness and understanding
- Adaptive force planning and application
- Fast, precise and resilient kill chains
- Resilient sustainment support
- Efficient enterprise business operations
The blueprint also trains the department’s focus on several data, analytics and AI-related goals:
- Invest in interoperable, federated infrastructure
- Advance the data, analytics and AI ecosystem
- Expand digital talent management
- Improve foundational data management
- Deliver capabilities for the enterprise business and joint warfighting impact
- Strengthen governance and remove policy barriers
As the technology has evolved, the DOD and the broader U.S. government have been at the forefront of ensuring AI is developed and adopted responsibly.
In January, the Defense Department updated its 2012 directive governing the responsible development of autonomous weapon systems to align its standards with advances in artificial intelligence.
The U.S. has also introduced a political declaration on the responsible military use of artificial intelligence, which further seeks to codify norms for the responsible use of the technology.
Streilein said that trust and the ethical use of AI underpin the department’s experimentation with the technology.
“A lot of what we’re doing to understand this technology is to figure out how we can be true to those five principles [of ethical use] in the context of what’s happening,” he said.
AUTHOR COMMENTARY
Last month I cited a report by The New York Times that detailed how world powers, none more so than the United States, are working on AI-powered killer drones to be used in combat, among other things.
Like everything else the U.S. does, this too will eventually end badly; especially when these devices and weapons are turned on the people in the name of “safety and security,” maintaining “law and order,” and defeating “terrorists.”
He that soweth iniquity shall reap vanity: and the rod of his anger shall fail.
Proverbs 22:8
[7] Who goeth a warfare any time at his own charges? who planteth a vineyard, and eateth not of the fruit thereof? or who feedeth a flock, and eateth not of the milk of the flock? [8] Say I these things as a man? or saith not the law the same also? [9] For it is written in the law of Moses, Thou shalt not muzzle the mouth of the ox that treadeth out the corn. Doth God take care for oxen? [10] Or saith he it altogether for our sakes? For our sakes, no doubt, this is written: that he that ploweth should plow in hope; and that he that thresheth in hope should be partaker of his hope. (1 Corinthians 9:7-10).
The WinePress needs your support! If God has laid it on your heart to want to contribute, please prayerfully consider donating to this ministry. If you cannot gift a monetary donation, then please donate your fervent prayers to keep this ministry going! Thank you and may God bless you.
A bunch of AI robots being used as troops instead of actual men? Sure, that sounds safe.
I’m not recommending Hellywood, but this is what the Terminator warned us about!