A team of human moderators reviews all content flagged by our AI content moderation tool. As part of our ongoing quality control process, this team also moderates a minimum of 1% of all content not flagged by AI.
How it works
All content flagged by AI moderation is sent for human moderation to confirm the flag is accurate, and every decision is logged to support the training and refinement of the AI model.
Human moderators also review a small sample of approved content for quality control, confirming that automated decisions are accurate.
By manually reviewing every flagged item alongside at least 1% of unflagged content, we ensure the correct moderation decisions are being made and the health of your content is maintained.
Explore the wide range of safeguarding and compliance features that work together to protect your business and its users.
Any questions? Let's chat. Our dedicated team is always on hand to discuss identity and content verification.