How Content Moderation Works
- Proactive Detection – Mirage Studio uses automated detection systems to identify potentially harmful or prohibited content during the creation process. This includes scanning for content that may violate our Acceptable Use Policy.
- Human Oversight – The Mirage Studio team is deeply involved in the creation and maintenance of these automated systems, and may also review flagged content to confirm compliance with our guidelines.
- Enforcement Actions – Content found to violate our policies may be blocked, removed, or prevented from being generated. Repeat or severe violations may result in permanent bans.
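The three stages above follow a common moderation pattern: score content automatically, escalate borderline cases to humans, and enforce on clear violations. The sketch below illustrates that pattern only; the scorer, blocklist, thresholds, and decision labels are hypothetical assumptions, not Mirage Studio's actual system.

```python
# Hypothetical moderation pipeline sketch. All names and thresholds here are
# illustrative assumptions; they do not reflect Mirage Studio's implementation.

BLOCKLIST = {"badword"}      # placeholder policy blocklist
BLOCK_THRESHOLD = 0.9        # clear violation: block outright
REVIEW_THRESHOLD = 0.5       # borderline: escalate to human review


def automated_scan(content: str) -> float:
    """Placeholder scorer: fraction of words matching the blocklist."""
    words = content.lower().split()
    if not words:
        return 0.0
    hits = sum(word in BLOCKLIST for word in words)
    return hits / len(words)


def moderate(content: str) -> str:
    """Return a decision following the detect -> review -> enforce pattern."""
    score = automated_scan(content)          # 1. Proactive detection
    if score >= BLOCK_THRESHOLD:
        return "blocked"                     # 3. Enforcement action
    if score >= REVIEW_THRESHOLD:
        return "flagged_for_human_review"    # 2. Human oversight
    return "allowed"
```

In a real system the scorer would be a trained classifier rather than a word list, and the human-review queue would feed decisions back into the automated models.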
Safety and Security Commitment
As outlined in our Safety and Security statement, Mirage Studio prioritizes:
- Secure Systems – Data encryption in transit and at rest
- Privacy – User content is stored and processed in accordance with our Privacy Policy
- Transparency – Clear communication on moderation decisions and policies
- Responsible AI – Guardrails to reduce the risk of generating harmful content
