2 min read
Microsoft Teams adds false-positive reporting feature
Gugu Ntsele Nov 27, 2025 7:51:40 AM
Microsoft Teams is rolling out a new feature allowing users to report messages incorrectly flagged as security threats, with worldwide availability expected by the end of November 2025.
What happened
Microsoft announced that Teams users will soon be able to report false-positive threat alerts when messages are wrongly flagged as malicious. The feature, which first entered a targeted rollout phase in September 2025, lets users provide feedback on incorrect security detections in both chats and channels. It will be available to organizations using Microsoft Defender for Office 365 Plan 2 or Microsoft Defender XDR across all platforms, including desktop (Windows and macOS), mobile (Android and iOS), and web.
Once generally available, the feature will be enabled by default, though administrators can toggle it on or off through the Teams admin center and the Microsoft Defender portal. To enable user reporting, admins sign in to the Teams admin center, select "Messaging settings," scroll to the "Messaging safety" section, turn on "Report incorrect security detections," and save the changes.
What was said
Microsoft stated in a Microsoft 365 message center update that the new feature "empowers users to provide feedback on false positives, helping improve detection accuracy and strengthen organizational security."
In the know
False-positive alerts occur when security systems incorrectly identify legitimate messages as threats. This can disrupt communication and productivity while potentially causing users to distrust legitimate security warnings. User reporting mechanisms help security teams refine their detection algorithms by providing real-world feedback on system accuracy, creating a feedback loop that improves threat detection over time while reducing unnecessary alerts.
Why it matters
This feature addresses a persistent tension in security tooling: maintaining strong threat detection without creating alert fatigue. False positives can lead users to ignore security warnings or find workarounds that compromise protection. By allowing users to report incorrect flags, Microsoft creates a feedback mechanism that improves detection accuracy over time. That matters because Teams has become a primary communication platform for many organizations, making it an attractive target for attackers. The feature complements Microsoft's recent security enhancements, including warnings for malicious links in private messages and enhanced protection against malicious files and URLs in Teams. As collaboration platforms become vectors for phishing and malware attacks, accurate threat detection that doesn't disrupt legitimate communication is essential for maintaining both security and productivity.
The bottom line
As Microsoft strengthens Teams security features, false-positive reporting represents a step toward balancing protection with usability. Organizations using Microsoft Defender for Office 365 Plan 2 or Microsoft Defender XDR should verify that the setting aligns with their security policies once it reaches general availability. The feature's success will depend on user adoption and on how effectively Microsoft uses the feedback to refine its detection algorithms.
FAQs
Can users report false positives on all message types in Teams?
Yes, the feature applies to messages in chats, channels, and threaded conversations.
Will reporting false positives notify IT administrators automatically?
Reports are collected centrally, but administrators can configure notifications through the Teams admin center.
Can this feature be used without Microsoft Defender for Office 365 Plan 2 or Defender XDR?
No, the false-positive reporting feature requires one of these Microsoft Defender plans.
Does reporting a false positive affect the user’s account or permissions?
No, reporting incorrect alerts does not impact user privileges or access.
How does Microsoft use the feedback from false-positive reports?
Feedback is used to refine threat detection algorithms and reduce unnecessary alerts.