Users of the popular instant messaging app were alarmed after the company’s in-app artificial intelligence chatbot went rogue, first posting a “Story” to its profile and then ignoring their messages.
Snapchat’s ‘My AI’, powered by OpenAI’s generative AI chatbot ChatGPT, was launched earlier this year to much fanfare and controversy. The chatbot typically answers users’ questions, provides video and people recommendations, and holds full conversations.
But one thing it can’t do yet is post a live Story, a feature reserved for the platform’s human users. On Tuesday, however, many users woke up to a one-second video Story posted by My AI of what appeared to be a wall and ceiling.
My AI Posts Video Stories and Stops Responding to Messages
Snap fans were quick to voice their concerns on social media, with one user tweeting to ask why the AI had posted a video of the wall and ceiling of their house as a Story. Another user said the chatbot had “gone sentient”, while others joked that even a robot had no time for them after it stopped replying to their messages.
The Story My AI posted turned out to be a two-toned image that many users mistook for the ceiling of their own homes. When users asked about it, the AI replied with a message that read: “Sorry, I encountered a technical issue.”
Many feared that the AI model had developed self-awareness, and some were genuinely concerned for their online safety. Thankfully, their worst fears did not come true: Snapchat confirmed that the incident was just a technical glitch.
A spokesperson for the company said My AI had experienced a temporary outage that has since been resolved. Although the company did not explain the image, it confirmed that the AI does not have access to the Stories feature. It remains unclear whether the chatbot will be able to share Stories in the future.
Concerned Users Accuse Snapchat of Violating Online Privacy
My AI has faced backlash since its launch in April, with many users and parents (Snapchat is popular among kids) criticizing the company over privacy violations. The AI was pinned to the top of the app’s chat feed and could not be removed unless the user signed up for a paid premium subscription.
There were also cases of My AI sending inappropriate messages to minors, according to a report by the Washington Post. This prompted Snapchat to rein in the chatbot by adding extra safeguards and parental controls to the app.
My AI is more customizable than most other chatbots. Users can rename the AI, design a custom Bitmoji avatar to give it a visual identity, and even bring it into group conversations, where it can interact with other Snap users.
Despite the mixed reactions, users tend to spend a lot of time in curious conversations with the AI.
Snapchat was one of the launch partners of ChatGPT when its creator, OpenAI, opened the technology to third-party services. The company, along with Microsoft, was among the first to build the generative AI technology into its products.
As of right now, My AI is functioning normally.