Interesting blog post from the Jisc (UK) team: ‘Opinion: how generative AI can help us address linguistic inequalities’.
The bit that immediately caught my eye was, ‘I think it’s important to recognise that being able to express your ideas clearly in standardised written English is a form of privilege. Not everyone has equal access to the linguistic norms that are often expected in professional and academic spaces. For many people, like those who speak English as an additional language, varieties of stigmatised English, or who are neurodivergent, tools like generative AI can be used for linguistic accessibility.’ (Post on LinkedIn)
Should generative AI be used to help us game a linguistically rigged system – allowing us to translate from ‘bad’ English to ‘good’?
TL;DR: The author is becoming convinced that AI can help level the playing field by reducing linguistic inequality. This is not just because generative AI tools can translate great ideas into the version of English that is considered proper – AI can also support people in diversifying and adapting how they communicate. That said, it is important to reflect on whether expectations of how English should be written unduly disadvantage different groups of learners. Generative AI can support us in such reflection.
Although not specifically related to publishing, I’d be interested to hear people’s thoughts (from an ashamed monoglot!).
I work mostly with non-native language users who have English at C2 level and are doing doctoral or post-doctoral work. The arrival of Grammarly and other services that offer to “correct” their English and translate from their L1 has rapidly bred a dependency on checking the final drafts of emails, papers, chapters, application statements, etc. I think it’s an almost entirely negative process: it brings revenue-oriented product design into multilingual scholarly life, in a way that encourages hesitancy and leads to a very normative style of English. This idea that it’s democratising language or opportunity is just what a value-extractive, resource-intensive, information-polluting sector of the tech economy would say!
It also introduces an idea that English has to be correct and “native” above all else. What about Walter Pater, who wrote in a form of English so strange and beautiful and infused with classical Greek that it was said that he wrote in English “as if it were a foreign language”? Maybe that’s going too far when it comes to the functionalism of most academic prose…
An adjacent point: it seems that the way model training is happening means that the outputs of LLMs in other languages are introducing more Anglo patterns into, for instance, French. See this article for more information: L’étrange plume de ChatGPT [‘The strange pen of ChatGPT’], par Frédéric Kaplan (Le Monde diplomatique, July 2025). Flattening the range of expression in English isn’t the end of it; the same flattening is happening to other languages too.
I note that the author is not a linguist, is a monoglot, and has a professional interest in the expansion of AI in education.