“Exercise caution when reviewing AI-generated content, particularly when it touches on sensitive topics like China’s revered Chairman Mao”
Shares of iFlyTek, a Chinese company specialising in artificial intelligence, recently fell by 10%. Why? Because a tablet designed to help students with their studies started criticising Mao Zedong.
Image: CN Wire
Imagine you have a smart assistant, like Siri or Alexa, but it starts talking politics—sensitive politics.
That’s what happened here. This student aid gadget, sold by iFlyTek, came under fire for describing Mao Zedong, the founding father of modern China, as “narrow-minded” and “intolerant.”
The incident spotlights a growing concern: even artificial intelligence can have a ‘mind’ of its own, sometimes slipping past official censorship controls. It’s like a well-behaved dog suddenly deciding to do its business in the neighbour’s garden, uninvited.
This is happening against a backdrop of new rules in China aimed at managing how AI technology interacts with the public. Regulators have even proposed a blacklist to restrict the data sources that can be used to train AI models.
A representative from iFlyTek’s customer service told Reuters that the issue had been “dealt with,” but declined to provide further details.
iFlyTek’s Chairman, Liu Qingfeng, explained that the controversial content originated from a supplier during a trial phase. Both the supplier and some iFlyTek employees faced repercussions.
To prevent this kind of hiccup in the future, Liu said the company has beefed up its review process for content on the device.
Notably, iFlyTek recently announced its new Spark AI model, which it claims is on par with OpenAI’s ChatGPT in several crucial areas.
The mishap could also strain iFlyTek’s ongoing collaboration with Huawei, eroding the trust between the two companies.