Update to Our Community

  • Continuously improve the AI based on feedback so that it is both fun and welcoming
  • Provide a way for users to report false positives so that we can limit the impact on other types of content
  • Inform moderators in advance when platform changes are being implemented

Questions and Answers

What changes did you make to AI Dungeon?

We are in the process of implementing technical safeguards and policies to support our community guidelines, which prohibit sexual content involving minors in AI Dungeon. We are also updating those guidelines and policies to clarify which types of user activity are prohibited.

Why did you make this change?

As a technology company, we believe in an open and creative platform that has a positive impact on the world. Explicit content involving descriptions or depictions of minors is inconsistent with this value, and we firmly oppose any content that may promote the sexual exploitation of minors. We have also received feedback from OpenAI, which asked us to implement changes.

How will it affect my gameplay?

For the vast majority of players, it won’t. The change only affects your gameplay if you pursue these kinds of inappropriate experiences.

What kind of content are you preventing?

This test is focused on preventing the use of AI Dungeon to create child sexual abuse material. This means content that is sexual or suggestive involving minors; child sexual abuse imagery; fantasy content (like “loli”) that depicts, encourages, or promotes the sexualization of minors or those who appear to be minors; or child sexual exploitation.

Are you preventing all sexual content or swearing?

AI Dungeon will continue to support other NSFW content, including consensual adult content, violence, and profanity.

Is Latitude reading my unpublished adventures?

We built an automated system that detects inappropriate content. Latitude reviews content flagged by the model to improve the model, to enforce our policies, and to comply with the law.

How do you plan to prevent future tests and changes from disproportionately impacting underrepresented users?

As a diverse team, many of us have had personal experience with features like reporting being misused against marginalized groups to which we belong. We are aware of the potential for misuse of any reporting feature on our platform, especially against underserved and marginalized groups such as women, LGBTQ individuals, and disabled people, and we will take reports seriously, evaluating each one in context.
