Biased LLM Outputs, Tiananmen Square & Americanisations
R̶e̶a̶l̶i̶z̶i̶n̶g̶ Realising bias in LLMs is important, but it goes both ways. I see far more people wasting their time correcting the written English (spelling) in almost every output generated by American-trained LLMs than I do correcting Chinese-trained LLMs on what happened at Tiananmen Square. Just as we need systems to augment and verify LLMs' knowledge with facts, it would be pretty nice not to have to replace Zs with Ss in every single model's output. ...
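For what it's worth, the Z-to-S fix is mechanical enough to automate. A minimal sketch of a naive post-processing pass (not any existing tool; the word mapping here is a tiny hypothetical sample, and real normalisation needs a proper dictionary, since -ize/-ise is not universal: "size" and "seize" must stay as they are):

```python
import re

# Hypothetical, tiny American-to-British mapping for illustration only.
AMERICAN_TO_BRITISH = {
    "realize": "realise",
    "organization": "organisation",
    "color": "colour",
    "analyze": "analyse",
}

def britishise(text: str) -> str:
    """Swap whole-word American spellings, preserving an initial capital."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        replacement = AMERICAN_TO_BRITISH[word.lower()]
        return replacement.capitalize() if word[0].isupper() else replacement

    # Whole-word, case-insensitive match against the mapping's keys only,
    # so unrelated words containing "ize" (e.g. "size") are left alone.
    pattern = re.compile(
        r"\b(" + "|".join(AMERICAN_TO_BRITISH) + r")\b", re.IGNORECASE
    )
    return pattern.sub(swap, text)

print(britishise("Realize the color of the organization."))
```

A wordlist-based approach like this is deliberately conservative: it only touches words it knows, which matters because blind suffix rewriting would mangle perfectly valid spellings.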