Google’s new ‘inclusive language’ assistant uses artificial intelligence to detect “discriminatory” words and suggests that users swap them for more politically correct terminology. Free speech and privacy advocates say the feature undermines “freedom of thought.”
Google announced the tool at the beginning of April, as part of a host of “assistive writing features” for Google Docs users. Some of these AI-powered add-ons suggest more concise and snappy phrases for writers, while others polish up grammar.
However, Google said that with its new ‘inclusive language’ assistant, “Potentially discriminatory or inappropriate language will be flagged, along with suggestions on how to make your writing more inclusive and appropriate for your audience.”
Users soon noticed its prompts creeping into their work and posted screenshots of Google’s suggestions on Twitter. The term ‘motherboard’ is flagged as potentially insensitive, as is ‘housewife’, which Google suggests should be replaced with ‘stay-at-home spouse’.
‘Mankind’ should be replaced with ‘humankind’, ‘policeman’ with ‘police officer’, and ‘landlord’ with ‘property owner’ or ‘proprietor’. Other technical phrases flagged, Vice reported last week, include ‘blacklist/whitelist’ and ‘master/slave’.
Despite highlighting these common terms as potentially offensive, Google’s assistant placed no warnings on a transcript of an interview with former Ku Klux Klan leader David Duke, in which Duke repeatedly used the word ‘n****r’ to describe black people, Vice’s reporters found.
The feature, which can be turned off and is currently available only to corporate users of Google’s Workspace software, has alarmed privacy and anti-censorship activists.