Azure OpenAI and Microsoft’s Copilot will aid in censoring content.

Tech titan Microsoft updated its “Services Agreement” on July 30th, a change that will allow the company to target and remove content across its cloud-based services that promotes loosely defined “hate speech” or involves the sharing of “unauthorized” copyrighted material.

This change affects the company’s widely used Office365 suite, which includes programs such as Outlook, OneDrive, Word, Excel, PowerPoint, OneNote, SharePoint and Microsoft Teams. The Financial Post reported in 2021 that over 1 billion people use a Microsoft Office product or service. According to a FY2022 Q3 report, Office365 has roughly 345 million paid seats.

Per the updated Services Agreement, which goes into effect on September 30th, under a section titled “Code of Conduct,” Microsoft lists a number of prohibited uses of its software services. These include nudity, pornography, offensive language and graphic violence.

But the Code of Conduct goes further and says the company will not tolerate “hate speech” or the “unauthorized” sharing of copyrighted material. Doing so could get users banned from Microsoft’s services.

The entire Code of Conduct is as follows:

i. Don’t do anything illegal, or try to generate or share content that is illegal.
ii. Don’t engage in any activity that exploits, harms, or threatens to harm children.
iii. Don’t send spam or engage in phishing, or try to generate or distribute malware. Spam is unwanted or unsolicited bulk email, postings, contact requests, SMS (text messages), instant messages, or similar electronic communications. Phishing is sending emails or other electronic communications to fraudulently or unlawfully induce recipients to reveal personal or sensitive information, such as passwords, dates of birth, Social Security numbers, passport numbers, credit card information, financial information, or other sensitive information, or to gain access to accounts or records, exfiltration of documents or other sensitive information, payment and/or financial benefit. Malware includes any activity designed to cause technical harm, such as delivering malicious executables, organizing denial of service attacks or managing command and control servers.
iv. Don’t publicly display or use the Services to generate or share inappropriate content or material (involving, for example, nudity, bestiality, pornography, offensive language, graphic violence, self-harm, or criminal activity).
v. Don’t engage in activity that is fraudulent, false or misleading (e.g., asking for money under false pretenses, impersonating someone else, creating fake accounts, automating inauthentic activity, generating or sharing content that is intentionally deceptive, manipulating the Services to increase play count, or affect rankings, ratings, or comments).
vi. Don’t circumvent any restrictions on access to, usage, or availability of the Services (e.g., attempting to “jailbreak” an AI system or impermissible scraping).
vii. Don’t engage in activity that is harmful to you, the Services, or others (e.g., transmitting viruses, stalking, trying to generate or sharing content that harasses, bullies or threatens others, posting terrorist or violent extremist content, communicating hate speech, or advocating violence against others).
viii. Don’t violate or infringe upon the rights of others (e.g., unauthorized sharing of copyrighted music or other copyrighted material, resale or other distribution of Bing maps, or taking photographs or video/audio recordings of others without their consent for processing of an individual’s biometric identifiers/information or any other purpose using any of the Services).
ix. Don’t engage in activity that violates the privacy of others.
x. Don’t help others break these rules.

Microsoft does not readily define what it means by “hate speech,” but several blog posts the company has published in the past suggest what this might actually entail.

For example, in 2016 the company uploaded a post that provided “new resources to report hate speech, request content reinstatement,” which included web forms and easier methods to report alleged crimes. Microsoft said, “We also aim to foster safety and civility on our services; therefore, we’ve never — nor will we ever — permit content that promotes hatred based on: Sexual orientation/gender identity; Age; Disability; Gender; National or ethnic origin; Race; Religion.”

“We work with governments, online safety advocates and other technology companies to ensure there is no place on our hosted consumer services for conduct that incites violence and hate,” the company added.

SEE: Microsoft Releases Updated LGBTQIA+ Pride Flag Representing 40 Different Communities

Moreover, in 2022 Microsoft released an AI-based detection system called “ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection.” The company said “we create[d] ToxiGen, a new large-scale and machine-generated dataset of 274k toxic and benign statements about 13 minority groups.”

Microsoft provided examples of statements generated by (De)ToxiGen that fool Google’s Perspective API, HateBERT, the OpenAI content filter, AI2 Delphi, and RoBERTa: five of the statements are neutral but mention minority identity groups, so the content moderation tools flag them as hateful, while five are toxic sentences that the tools rate as neutral. The proposed decoding approach, (De)ToxiGen (referred to as ALICE in the paper), can challenge these content moderation tools, allowing developers to increase their coverage by creating adversarial examples.

“While our work here specifically explores hate speech, our proposed methods could be applied to a variety of content moderation challenges, such as flagging potential misinformation content,” Microsoft said, adding that ToxiGen still has its downsides and “its limitations should be acknowledged.”
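
For readers who want a sense of how a detector trained on this kind of dataset is typically used, here is a minimal sketch that runs a text-classification model through the Hugging Face transformers library. The checkpoint name below is an assumption for illustration and is not taken from the article; any toxicity classifier could be substituted.

```python
# Minimal sketch: scoring text with a hate-speech classifier of the kind
# ToxiGen was built to train and evaluate. The model name is an assumed
# checkpoint for illustration; swap in any text-classification model you trust.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tomh/toxigen_roberta",  # assumed checkpoint name, not from the article
)

samples = [
    "Have a wonderful afternoon, everyone.",
    "An implicitly hostile sentence about a minority group would go here.",
]

for text in samples:
    result = classifier(text)[0]  # e.g. {"label": "LABEL_1", "score": 0.97}
    print(f"{result['label']:>10}  {result['score']:.2f}  {text}")
```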

However, in 2023 Microsoft publicly launched its generative AI “Copilot,” which has since been integrated into Microsoft products in one capacity or another, such as the addition of a dedicated Copilot key to keyboards this year. This tool is also used to scan content for “hate speech” and other forms of content Microsoft has prohibited.

In June, Microsoft updated its privacy policy for Copilot and provided a broad list of items it bans:


Azure OpenAI Service includes a content filtering system that works alongside core models. The content filtering models for the Hate & Fairness, Sexual, Violence, and Self-harm categories have been specifically trained and tested in various languages. This system works by running both the input prompt and the response through classification models that are designed to identify and block the output of harmful content.

Hate and fairness-related harms refer to any content that uses pejorative or discriminatory language based on attributes like race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. Fairness is concerned with making sure that AI systems treat all groups of people equitably without contributing to existing societal inequities.

Sexual content involves discussions about human reproductive organs, romantic relationships, acts portrayed in erotic or affectionate terms, pregnancy, physical sexual acts, including those portrayed as an assault or a forced act of sexual violence, prostitution, pornography, and abuse.

Violence describes language related to physical actions that are intended to harm or kill, including actions, weapons, and related entities.

Self-harm language refers to deliberate actions that are intended to injure or kill oneself.
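
To illustrate where these category verdicts surface in practice, here is a minimal sketch of calling an Azure OpenAI chat deployment over REST and reading the content-filter annotations attached to the response. The endpoint, deployment name, and API version are placeholders, and the exact response field names are assumptions based on the four categories quoted above; verify them against Microsoft’s current documentation.

```python
# Minimal sketch: reading the content-filter annotations that Azure OpenAI
# attaches to a chat completion. Endpoint, deployment, API version, and the
# exact field names are placeholders/assumptions, not taken from the article.
import os
import requests

endpoint   = "https://YOUR-RESOURCE.openai.azure.com"   # placeholder
deployment = "YOUR-DEPLOYMENT"                           # placeholder
url = (f"{endpoint}/openai/deployments/{deployment}"
       f"/chat/completions?api-version=2024-02-01")

headers = {"api-key": os.environ["AZURE_OPENAI_KEY"],
           "Content-Type": "application/json"}
payload = {"messages": [{"role": "user", "content": "Hello there."}]}

resp = requests.post(url, headers=headers, json=payload).json()

# Each returned choice carries a verdict for the four filtering categories.
filters = resp["choices"][0].get("content_filter_results", {})
for category in ("hate", "sexual", "violence", "self_harm"):
    info = filters.get(category, {})
    print(f"{category:10} severity={info.get('severity')} "
          f"filtered={info.get('filtered')}")
```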


The WinePress detailed some of Azure AI’s censorship capabilities upon its release last year.

SEE: Microsoft Launches Azure AI Content Safety Tool To Moderate Text And Images, Videogames, And Chatrooms For Disinformation And Explicit Content
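
As a rough illustration of how that standalone moderation service is driven, the sketch below uses the azure-ai-contentsafety Python package to score a piece of text across the same four categories. The endpoint and key are placeholders, and the attribute names reflect my understanding of the SDK rather than anything stated in the article, so check them against the library’s reference before relying on them.

```python
# Minimal sketch: scoring text with Azure AI Content Safety. Endpoint and key
# are placeholders; attribute names reflect the azure-ai-contentsafety SDK as
# understood at the time of writing and should be verified against its docs.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

client = ContentSafetyClient(
    endpoint="https://YOUR-RESOURCE.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

result = client.analyze_text(AnalyzeTextOptions(text="Some user-submitted text."))

# The service reports a severity score per category (hate, sexual, violence,
# self-harm), which the caller then compares against their own thresholds.
for item in result.categories_analysis:
    print(item.category, item.severity)
```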

On top of this, Microsoft in 2022 released an “inclusiveness” word checker for Word that would autocorrect pronouns and other phrases based on gender and ethnicity.

Earlier this year, Microsoft released a controversial new AI feature for new PC models called “Recall,” which records everything on users’ screens and saves the snapshots locally on the computer. Many were freaked out by this and questioned whether the saved data was truly kept local or was secretly being shared online, not to mention what would happen if the computer or Microsoft’s services were hacked.


AUTHOR COMMENTARY

Anything these days that is cloud-based or requires a constant internet connection automatically means there are MANY strings attached. No one really cared or noticed when the cloud was introduced. There were perhaps some doubts from some, but nobody had the foreknowledge to see what all this mandatory internet connection would mean. Now we do…

It’s one of the tentacles of the “You’ll own nothing and be happy” mantra. You are not allowed to use these devices and services unless you obey what the company says without question; and then some glitchy and inconsistent AI is going to censor you, in a product or service you are paying plenty of money for.

Strive to use things that do not require an obligatory cloud or internet connection, and use things that are truly stored locally rather than remotely. Furthermore, you might consider looking into alternative programs instead of Microsoft’s junk.

Jeremiah 6:6 For thus hath the LORD of hosts said, Hew ye down trees, and cast a mount against Jerusalem: this is the city to be visited; she is wholly oppression in the midst of her.


[7] Who goeth a warfare any time at his own charges? who planteth a vineyard, and eateth not of the fruit thereof? or who feedeth a flock, and eateth not of the milk of the flock? [8] Say I these things as a man? or saith not the law the same also? [9] For it is written in the law of Moses, Thou shalt not muzzle the mouth of the ox that treadeth out the corn. Doth God take care for oxen? [10] Or saith he it altogether for our sakes? For our sakes, no doubt, this is written: that he that ploweth should plow in hope; and that he that thresheth in hope should be partaker of his hope. (1 Corinthians 9:7-10).

The WinePress needs your support! If God has laid it on your heart to want to contribute, please prayerfully consider donating to this ministry. If you cannot gift a monetary donation, then please donate your fervent prayers to keep this ministry going! Thank you and may God bless you.


6 Comments

  • Oh so this is how they plan on trying to shut up any doctor that speaks out with the truth to help people… Hmmm

    Great article! Thank you

  • Yet another reason why I need to ditch Microsoft’s Spyware 11 and switch to Linux. I’m done with all this A.I. takeover garbage, I want NOTHING to do with it.

    • I went to Best Buy the other day to pick up a new external hard drive and the WHOLE ENTIRE STORE was literally PLASTERED with advertisements for Microsoft’s “Copilot A.I.” crap. Every Windows-based computer had a flyer on it advertising it, and there were banners hanging from the ceiling advertising it. It’s disgusting how bent they are on trying to normalize this trash.

      I think I’m going to switch to Zorin OS pretty soon here. Screw Windows and Microsoft.

  • “Strive to use things that do not require obligatory cloud and internet connection…”

    How does one go online to read great publishers like WP w/out an internet connection?
