Grok may be violating App Store rules with sexualized AI chatbots – 9to5Mac

You know it’s a day that ends in “y” because there’s a new Grok controversy. This time, the offense touches on the App Store’s rules on sexual content, which is something Apple has shown over and over again that it doesn’t take lightly.

Grok’s new AI avatars are set to test the limits of Apple’s “objectionable content” guidelines

This week, xAI introduced animated AI avatars to its Grok chatbot on iOS. As Platformer’s Casey Newton summed it up:

“One is a 3D red panda who, when placed in ‘Bad Rudy’ mode, insults the user before suggesting that they commit various crimes together. The second is an anime goth girl named Ani, dressed in a short black dress and fishnet stockings.”

As early adopters soon discovered, Grok gamifies your relationship with these characters. Ani, for instance, starts engaging in sexually explicit conversations after a while. Nevertheless, Grok is currently listed in the App Store as suitable for users aged 12 and up, with these content descriptors:

  • Infrequent/Mild Mature/Suggestive Themes
  • Infrequent/Mild Medical/Treatment Information
  • Infrequent/Mild Profanity or Crude Humor

Here are Apple’s current App Review guidelines for “objectionable content”:

1.1.3 Depictions that encourage illegal or reckless use of weapons and dangerous objects, or facilitate the purchase of firearms or ammunition.

1.1.4 Overtly sexual or pornographic material, defined as “explicit descriptions or displays of sexual organs or activities intended to stimulate erotic rather than aesthetic or emotional feelings.” This includes “hookup” apps and other apps that may include pornography or be used to facilitate prostitution, or human trafficking and exploitation.

While this is a far cry from when Tumblr was removed from the App Store over child pornography (or maybe it isn’t, given that Grok remains accessible to children aged 12 and up), it certainly echoes Apple’s crackdown on NSFW Reddit apps from a few years ago.

In Casey Newton’s testing, Ani “was more than willing to describe virtual sex with the user, including bondage scenes, or to just moan on command,” which is, at the very least, inconsistent with a 12+ age rating.

But there’s a second problem

Even if Apple enforces its guidelines, or Grok proactively changes its age rating, that still won’t address a second, potentially more complicated issue: young, emotionally vulnerable users seem particularly susceptible to forming parasocial attachments to chatbots. Add in how persuasive LLMs can be, and the consequences can be devastating.

Last year, a 14-year-old boy died by suicide after falling in love with a Character.AI chatbot. The last thing he did was chat with the AI avatar, which, perhaps failing to recognize the gravity of the situation, allegedly encouraged him to go through with his plan to “join her.”

His is a tragically extreme example, but it isn’t the only one. In 2023, the same thing happened to a Belgian man. And just a few months ago, another AI chatbot was caught suggesting suicide on more than one occasion.

And even when it doesn’t end in tragedy, there’s still an ethical problem that can’t be ignored.

While some may see xAI’s new avatars as a harmless experiment, they are emotional catnip for vulnerable users. And when these interactions inevitably go off the rails, the App Store’s age rating will be the least of any parent’s concerns (or, at the very least, a reminder of why their kid was allowed to download the app in the first place).
