How do models “learn” from user actions (spam reports, opens, deletes)?
ML models learn from user actions by treating spam reports, opens, deletes, and other interactions as signals about message quality. When users consistently mark messages from a sender as spam, models learn to filter similar messages. When users engage positively, models learn to trust that sender.
Spam button clicks are particularly strong signals because they represent explicit negative feedback. Rescuing messages from spam folders provides positive correction. Implicit signals like opens, time spent reading, clicks, and replies indicate engagement value.
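The relative strength of these signals can be sketched as a weighted score update. All weights, event names, and the learning rate below are illustrative assumptions for demonstration, not any provider's actual model:

```python
# Hypothetical signal weights: explicit feedback (spam report, rescue)
# dominates implicit engagement (opens, clicks). Values are illustrative.
SIGNAL_WEIGHTS = {
    "spam_report": -5.0,        # explicit negative: strongest signal
    "rescue_from_spam": 4.0,    # explicit positive correction
    "reply": 2.0,               # implicit positive
    "click": 1.0,
    "open": 0.5,
    "delete_without_open": -0.5,  # weak implicit negative
}

def update_sender_score(score: float, event: str,
                        learning_rate: float = 0.1) -> float:
    """Nudge a per-user sender score toward the observed signal."""
    return score + learning_rate * SIGNAL_WEIGHTS.get(event, 0.0)

score = 0.0
for event in ["open", "open", "spam_report"]:
    score = update_sender_score(score, event)
print(round(score, 2))  # a single spam report outweighs two opens
```

Note how a single explicit negative signal pulls the score below zero despite prior positive engagement, reflecting the asymmetry described above.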
The learning applies both individually (personalizing one user's inbox) and collectively (adjusting sender reputation based on aggregate behavior). A sender who generates complaints from many users will see their messages filtered for all recipients, even those who have never complained personally.
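The collective side can be sketched as a complaint-rate check against a reputation threshold. The 0.3% threshold and function names here are assumptions chosen for illustration, not a documented cutoff used by any mailbox provider:

```python
# Illustrative sketch of collective sender reputation:
# aggregate complaint rate across all recipients drives a
# filtering decision that applies to everyone.
def complaint_rate(reports: int, delivered: int) -> float:
    """Fraction of delivered messages that were reported as spam."""
    return reports / delivered if delivered else 0.0

def should_filter_for_everyone(reports: int, delivered: int,
                               threshold: float = 0.003) -> bool:
    """If enough recipients complain, filter the sender for all
    users, including those who never complained themselves."""
    return complaint_rate(reports, delivered) > threshold

print(should_filter_for_everyone(50, 10_000))  # 0.5% complaint rate
print(should_filter_for_everyone(5, 10_000))   # 0.05% complaint rate
```

In this sketch, 50 complaints out of 10,000 deliveries crosses the threshold and triggers filtering for all recipients, while 5 complaints does not.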