Tech giant Microsoft has recently unveiled a new artificial intelligence tool designed to scan imagery and text for what it deems mis- and disinformation, along with other forms of explicit and hateful content.
The service, called Azure AI Content Safety, is offered via Microsoft’s preexisting Azure AI services and products.
The service launched yesterday as the company announced a new line of AI innovations, services, and plugins. “We have everything you need on Azure for making a copilot,” Kevin Scott, Microsoft’s chief technology officer, said in a blog post on Microsoft’s website.
‘A copilot is an application that uses modern AI and large language models to assist you with a complex cognitive task – from writing a sales pitch or catching up on a missed meeting to generating images for a presentation or planning a themed dinner party,’ Microsoft explains.
Enter the new Content Safety service, which the company said is now integrated into and powering the new Bing. The tool can sift through potential mis- and disinformation contained in images and text, and can be used to monitor video game chats and conversations, along with other online communities, alerting moderators to any potential issues. Moderators have the option to throttle and direct the AI to be very stringent or fairly loose, judging key phrases based on context.
The company wrote in its post:
Importantly, developers also need to ensure the copilot returns the intended results and avoids outputs that are biased, sexist, racist, hateful, violent or prompt self-harm, noted Sarah Bird, a partner group product manager at Microsoft who leads responsible AI for foundational technologies.
Today at Microsoft Build, the company announced that Azure AI Content Safety is in preview. This new Azure AI service helps developers create safer online environments and communities with models that are designed to detect inappropriate content across images and text. The models assign a severity score to flagged content, indicating to human moderators what content requires urgent action.
“It’s the safety system powering GitHub Copilot, it’s part of the safety system that’s powering the new Bing. We’re now launching it as a product that third-party customers can use,” Bird said.
Azure AI Content Safety is integrated into Azure OpenAI Service, providing customers of generative AI seamless access to it. The service can also be applied to non-AI systems, such as online communities and gaming platforms, and the filters can be fine-tuned for context. For example, the phrase, “run over the hill and attack” used in a game would be considered a medium level of violence and blocked if the gaming system was configured to block medium severity content. An adjustment to accept medium levels of violence would enable the model to tolerate the phrase, Bird explained.
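For developers wondering what that looks like in practice, below is a rough sketch using Microsoft’s azure-ai-contentsafety Python SDK. The endpoint, key, and the numeric cutoff chosen for “medium” severity are placeholder assumptions for illustration, not values from Microsoft’s announcement, and the exact response shape may differ between SDK versions.

```python
# Rough sketch using the azure-ai-contentsafety Python SDK
# (pip install azure-ai-contentsafety). Endpoint and key are placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

# The phrase from Bird's example: medium-severity violence in a game chat.
response = client.analyze_text(AnalyzeTextOptions(text="run over the hill and attack"))

# Hypothetical cutoff: treat a severity of 4 or more as "medium" and block it.
BLOCK_AT_OR_ABOVE = 4

for result in response.categories_analysis:
    action = "block" if result.severity >= BLOCK_AT_OR_ABOVE else "allow"
    print(f"{result.category}: severity {result.severity} -> {action}")
```

Note that the blocking decision lives with the platform, not the model: the service only scores the content, and each customer decides where its own cutoff sits, which is how a game could be configured to either block or tolerate that phrase.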
In addition, Microsoft announced new media provenance capabilities coming to Microsoft Designer and Bing Image Creator that will enable users to verify whether an image or video was generated by AI. The technology uses cryptographic methods to mark and sign AI-generated content with metadata about its origin.
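Microsoft has not published implementation details here, but the general idea of cryptographically signed provenance metadata can be sketched in a few lines. The following is a toy illustration only, assuming the open-source cryptography Python package; Microsoft’s actual system is a more involved, standards-based approach, not this simplified format.

```python
# Toy illustration of cryptographically signed provenance metadata.
# NOT Microsoft's actual format -- just the general signing/verifying idea.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The generating tool signs a metadata record declaring the content's origin.
private_key = Ed25519PrivateKey.generate()
metadata = json.dumps({"tool": "Bing Image Creator", "generated_by_ai": True}).encode()
signature = private_key.sign(metadata)

# Anyone holding the public key can later verify the claim of origin;
# any tampering with the metadata makes verification fail.
public_key = private_key.public_key()
try:
    public_key.verify(signature, metadata)
    print("Provenance metadata is authentic:", metadata.decode())
except InvalidSignature:
    print("Metadata was altered after signing.")
```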
The tool can parse text in English, Spanish, German, French, Japanese, Portuguese, Italian, and Chinese.
It also assigns a ‘severity score’ to the flagged content, signaling to moderators what needs the most attention and how serious the potential offense is.
“Our models are designed to detect hate, violent, sexual, and self-harm content, so your users can feel safe and enjoy their online experience without compromising their safety,” Microsoft says in a promo.
In a statement to TechCrunch, a spokesperson for the company said this about the new Content Safety AI:
Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages.
New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start […] and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.
We have a team of linguistic and fairness experts that worked to define the guidelines taking into account cultural, language and context. We then trained the AI models to reflect these guidelines […] AI will always make some mistakes, [however,] so for applications that require errors to be nearly non-existent we recommend using a human-in-the-loop to verify results.
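That “human-in-the-loop” recommendation is a simple pattern to picture. Here is a minimal sketch of one way a platform might wire it up, assuming hypothetical severity thresholds and a stand-in review queue; none of this comes from Microsoft’s documentation.

```python
# Minimal human-in-the-loop sketch: auto-handle clear cases, queue
# borderline ones for a person. Thresholds and queue are hypothetical.

AUTO_ALLOW_BELOW = 2   # assumed: low severity passes through untouched
AUTO_BLOCK_AT = 6      # assumed: high severity is blocked outright

review_queue = []  # stand-in for a real moderation queue

def route(item_id: str, severity: int) -> str:
    """Decide what happens to a flagged item based on the model's severity score."""
    if severity < AUTO_ALLOW_BELOW:
        return "allowed"
    if severity >= AUTO_BLOCK_AT:
        return "blocked"
    review_queue.append(item_id)  # a human verifies the model's call
    return "pending human review"

print(route("msg-001", 0))  # allowed
print(route("msg-002", 4))  # pending human review
print(route("msg-003", 6))  # blocked
```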
Pricing starts at $1.50 per 1,000 images and $0.75 per 1,000 text records.
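For a rough sense of scale, here is a back-of-the-envelope estimate at those preview prices; the monthly volumes below are made-up figures for illustration only.

```python
# Back-of-the-envelope cost estimate at the listed preview prices.
PRICE_PER_1K_IMAGES = 1.50
PRICE_PER_1K_TEXT = 0.75

images_per_month = 500_000        # hypothetical workload
text_records_per_month = 2_000_000

cost = (images_per_month / 1000) * PRICE_PER_1K_IMAGES \
     + (text_records_per_month / 1000) * PRICE_PER_1K_TEXT
print(f"Estimated monthly cost: ${cost:,.2f}")  # $750 + $1,500 = $2,250.00
```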
The concept of analyzing pictures and other content is nothing new. In 2021, The WinePress reported that Adobe partnered with Twitter and the New York Times to tackle “misinformation” embedded in photos’ metadata. Not long after, in August of that same year, Apple announced that it would begin analyzing all images uploaded to its iCloud service in a bid to prevent child exploitation.
Also, TechCrunch notes that there are AI tools similar to Microsoft’s Content Safety service:
‘Azure AI Content Safety is similar to other AI-powered toxicity detection services, including Perspective, maintained by Google’s Counter Abuse Technology Team and Jigsaw, and succeeds Microsoft’s own Content Moderator tool. (No word on whether it was built on Microsoft’s acquisition of Two Hat, a content moderation provider, in 2021.) Those services, like Azure AI Content Safety, offer a score from zero to 100 on how similar new comments and images are to others previously identified as toxic,’ TechCrunch wrote.
But these AI tools have proven flawed time and time again, sometimes showing blatant bias, while in other cases, TechCrunch notes, blatant instances of purported “hate speech” have slipped right on by.
The tech outlet reported:
But there’s reason to be skeptical of them. Beyond Bing Chat’s early stumbles and Microsoft’s poorly targeted layoffs, studies have shown that AI toxicity detection tech still struggles to overcome challenges, including biases against specific subsets of users.
Several years ago, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models. In another study, researchers showed that older versions of Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.
The problem extends beyond toxicity-detectors-as-a-service. This week, a New York Times report revealed that eight years after a controversy over Black people being mislabeled as gorillas by image analysis software, tech giants still fear repeating the mistake.
Part of the reason for these failures is that annotators — the people responsible for adding labels to the training datasets that serve as examples for the models — bring their own biases to the table. For example, frequently, there are differences in the annotations between labelers who self-identified as African Americans and members of the LGBTQ+ community versus annotators who don’t identify as either of those two groups.
AUTHOR COMMENTARY
If you’ve ever played Microsoft’s Halo video games, the thumbnail is fitting for this one. If you don’t get the reference, then don’t worry about it…
And the people shall be oppressed, every one by another, and every one by his neighbour: the child shall behave himself proudly against the ancient, and the base against the honourable.
Isaiah 3:5
The censorship is getting very out of hand, as we knew it would, and the migration to so-called alternative platforms is just a bait and switch, since those platforms are owned by the same culprits and agents who work for these big-tech companies.
The time is rapidly approaching when we will truly have to go underground and return to handwritten letters and parcels, though since so many people nowadays are too lazy to pick up a pencil or pen, most will be content with the AI doing their thinking and serving them.
A slothful man hideth his hand in his bosom, and will not so much as bring it to his mouth again.
Proverbs 19:24
[7] Who goeth a warfare any time at his own charges? who planteth a vineyard, and eateth not of the fruit thereof? or who feedeth a flock, and eateth not of the milk of the flock? [8] Say I these things as a man? or saith not the law the same also? [9] For it is written in the law of Moses, Thou shalt not muzzle the mouth of the ox that treadeth out the corn. Doth God take care for oxen? [10] Or saith he it altogether for our sakes? For our sakes, no doubt, this is written: that he that ploweth should plow in hope; and that he that thresheth in hope should be partaker of his hope. (1 Corinthians 9:7-10).
The WinePress needs your support! If God has laid it on your heart to want to contribute, please prayerfully consider donating to this ministry. If you cannot gift a monetary donation, then please donate your fervent prayers to keep this ministry going! Thank you and may God bless you.
Notice one of the things this censorship AI is aiming for: EXPLICIT CONTENT.
Because of all the sin that the world just loves, refuses to give up, and tries to defend and justify, tyranny is now coming in.
The pope is responsible for internet pornography!
When the people have become evil and given themselves over to sin, the Lord will set evil rulers over them to destroy them; but if the people repent and do away with sin, then the Lord shall bless the peoples and nations, because they have hearkened unto the words of the Lord.
Many don’t know that, and some do.
May the Lord raise up godly, mighty men to preach.
Anyone remember Microsoft’s launch slogan when they launched for the very first time? …lol
Ooohhh, how far they have come from their initial purpose of giving people the freedom to connect with loved ones and see all over the world without censorship, paving the way to freedom and helping all those oppressed countries connect to the liberty of the Western world for help.
Seems like they got tangled up in Satan’s web of lies and the oppression of communism, all because the love of money is the root of all evil.
If nothing else, at least they prove Jesus’ words to be true!
James 2:19
King James Version
19 Thou believest that there is one God; thou doest well: the devils also believe, and tremble.