Moderation and enforcement

Content review

How review works

When a potential content or conduct violation on our services is reported, a specially trained human reviewer may look at the material. Human reviewers consider images, video, messages, and context to determine whether the identified content or conduct violates our policies, Code of Conduct, or service-specific terms. When human reviewers need additional information to make that determination, they may follow up with the person who reported the content or conduct to ask questions, or they may seek assistance from subject matter experts within Microsoft.

We may rely on automated technology to help us identify and categorize violations without human review.

Examples include (a simplified sketch of this kind of automated matching follows the list):

  • Blocking known hateful words in a Gamertag.
  • Detecting malware, viruses, spam, and phishing.
  • Blocking terrorist imagery when it matches imagery previously identified as terrorist content.
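At their core, automated checks like these can be pictured as matching new content against curated lists of previously identified material. The sketch below is illustrative only and is not a description of Microsoft's actual systems; the names GAMERTAG_BLOCKLIST, KNOWN_VIOLATING_HASHES, and both functions are hypothetical, and production systems use far more sophisticated normalization and perceptual matching.

```python
import hashlib

# Hypothetical blocklist of disallowed Gamertag terms. Real systems use much
# larger, curated lists and fuzzier matching (spacing, substitutions, leetspeak).
GAMERTAG_BLOCKLIST = {"badword1", "badword2"}

# Hypothetical set of hashes of imagery previously identified as violating.
KNOWN_VIOLATING_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def gamertag_contains_blocked_word(gamertag: str) -> bool:
    """Return True if any blocklisted term appears in the proposed Gamertag."""
    normalized = gamertag.lower()
    return any(term in normalized for term in GAMERTAG_BLOCKLIST)


def image_matches_known_content(image_bytes: bytes) -> bool:
    """Return True if the image's hash matches previously identified content.

    A cryptographic hash only catches exact copies; systems that need to catch
    near-duplicates typically use perceptual hashing instead.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_VIOLATING_HASHES


if __name__ == "__main__":
    print(gamertag_contains_blocked_word("CoolPlayer42"))        # False
    print(image_matches_known_content(b"example image bytes"))   # False
```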

Reviewer training

Our human reviewer teams are diverse and receive extensive training.

Content detection

We use technology to find harmful content and to review concerns reported by others.