Military + Aerospace Electronics (MAE) reported last week:
U.S. military researchers are asking Lockheed Martin Corp. to continue work on prototyping a system to detect and defeat automated enemy disinformation campaigns launched by manipulating the Internet, news, and entertainment media.
Officials of the Air Force Research Laboratory Information Directorate in Rome, N.Y., announced a $19.3 million order in August to the Lockheed Martin Advanced Technology Laboratories in Cherry Hill, N.J., to finish a prototype for the Semantic Forensics (SemaFor) program.
DARPA explains the purpose of SemaFor in a short post on its website, which states:
Media generation and manipulation technologies are advancing rapidly and purely statistical detection methods are quickly becoming insufficient for identifying falsified media assets. Detection techniques that rely on statistical fingerprints can often be fooled with limited additional resources (algorithm development, data, or compute). However, existing automated media generation and manipulation algorithms are heavily reliant on purely data driven approaches and are prone to making semantic errors. For example, generative adversarial network (GAN)-generated faces may have semantic inconsistencies such as mismatched earrings. These semantic failures provide an opportunity for defenders to gain an asymmetric advantage. A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies.
The Semantic Forensics (SemaFor) program seeks to develop innovative semantic technologies for analyzing media. These technologies include semantic detection algorithms, which will determine if multi-modal media assets have been generated or manipulated. Attribution algorithms will infer if multi-modal media originates from a particular organization or individual. Characterization algorithms will reason about whether multi-modal media was generated or manipulated for malicious purposes. These SemaFor technologies will help detect, attribute, and characterize adversary disinformation campaigns.
To support SemaFor technology transition, DARPA launched two new efforts to help the broader community continue the momentum of defense against manipulated media.
The first comprises an analytic catalog containing open-source resources developed under SemaFor for use by researchers and industry. As capabilities mature and become available, they will be added to this repository.
The second comprises an open community research effort called AI Forensics Open Research Challenge Evaluation (AI FORCE), which aims to develop innovative and robust machine learning, or deep learning, models that can accurately detect synthetic AI-generated images. Through a series of mini challenges, AI FORCE asks participants to build models that can discern between authentic images, including ones that may have been manipulated or edited using non-AI methods, and fully synthetic AI-generated images.
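For the more technically minded reader, the asymmetry DARPA describes is easy to see in code. Below is a minimal sketch (the detector names are hypothetical placeholders, not SemaFor components): a falsifier has to slip past every semantic check in the suite, while the defender only needs a single detector to fire.

```python
from typing import Callable, List

# A semantic inconsistency detector inspects a media asset and returns
# True if it finds a flaw (e.g., mismatched earrings on a GAN face).
Detector = Callable[[bytes], bool]

def mismatched_earrings(asset: bytes) -> bool:
    # Placeholder: a real detector would locate both ears and compare jewelry.
    return False

def inconsistent_lighting(asset: bytes) -> bool:
    # Placeholder: a real detector would check that shadows agree on one light source.
    return False

def is_falsified(asset: bytes, detectors: List[Detector]) -> bool:
    # The defender's advantage: flag the asset if ANY single detector fires,
    # while a falsifier must get EVERY semantic detail past EVERY detector.
    return any(detect(asset) for detect in detectors)

print(is_falsified(b"...", [mismatched_earrings, inconsistent_lighting]))
```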
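Likewise, the AI FORCE mini challenges described above ask for a binary image classifier: authentic (even if conventionally edited) versus fully AI-generated. Here is a minimal sketch in PyTorch of what such a model looks like; the architecture and sizes are illustrative assumptions, not DARPA's or Lockheed Martin's actual system.

```python
import torch
import torch.nn as nn

class SyntheticImageDetector(nn.Module):
    """Toy binary classifier: real photo vs. fully AI-generated image."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a single 64-dim descriptor
        )
        self.classifier = nn.Linear(64, 1)  # one logit: P(synthetic)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = SyntheticImageDetector()
logit = model(torch.randn(1, 3, 224, 224))   # dummy RGB image
prob_synthetic = torch.sigmoid(logit)        # near 0 = authentic, near 1 = AI-generated
```

A real entry would of course be trained on labeled image sets, including images edited with non-AI tools so the model learns not to confuse ordinary retouching with full synthesis.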
The MAE article goes on to explain how many current “purely statistical detection methods quickly are becoming insufficient for detecting falsified media,” which the article says includes biometric data and security checks.
Read the rest of the report here.
AUTHOR COMMENTARY
In July, I reported on a forum with U.S. Secretary of State Antony Blinken and the State Department’s chief data and AI officer, Matthew Graviss, where Blinken revealed the Biden administration is working on AI tools to combat “misinformation.” Was this project between DARPA and Lockheed Martin what he was referring to?
I think we can use this technology to actually improve our analysis, to unearth new insights. We’ve seen already, as we’ve been testing things out, using AI as a tool for helping negotiations in multilateral organizations – we’ll talk about that. Using it as a way to combat disinformation, one of the poisons in the international system today.
Blinken said
The Biden administration has of course been funding plenty of campaigns to quench so-called “disinformation,” with the blessing of the Supreme Court too:
Supreme Court Rules To Back White House On Deleting Social Media Posts It Wants Deleted
Of course, Trump during his tenure was no better in this regard, namely his re-signing of the damnable FISA Act.
Ecclesiastes 5:8 If thou seest the oppression of the poor, and violent perverting of judgment and justice in a province, marvel not at the matter: for he that is higher than the highest regardeth; and there be higher than they.
[7] Who goeth a warfare any time at his own charges? who planteth a vineyard, and eateth not of the fruit thereof? or who feedeth a flock, and eateth not of the milk of the flock? [8] Say I these things as a man? or saith not the law the same also? [9] For it is written in the law of Moses, Thou shalt not muzzle the mouth of the ox that treadeth out the corn. Doth God take care for oxen? [10] Or saith he it altogether for our sakes? For our sakes, no doubt, this is written: that he that ploweth should plow in hope; and that he that thresheth in hope should be partaker of his hope. (1 Corinthians 9:7-10).
The WinePress needs your support! If God has laid it on your heart to want to contribute, please prayerfully consider donating to this ministry. If you cannot gift a monetary donation, then please donate your fervent prayers to keep this ministry going! Thank you and may God bless you.
The question that needs to be asked is: who decides whether the information is credible or is disinformation? (An algorithm!!!) Here is a good question to test this out: Jesus Christ stated that no one comes to the Father but by Him, and that there is no other name given unto men (the name Jesus) whereby we must be saved, for He is the Light and the Truth. What do you think this algorithm is going to say about this truth?
Doesn’t make sense. Then they would have to kill themselves off.
Who decides what info is disinformation and propaganda?