Twitter Amps Up Its Alerts Against Offensive Replies
Twitter is no stranger to its users spewing hateful tweets and offending others; just recently, it permanently suspended a celebrity's Twitter account on the grounds of behavior that had the potential to lead to offline harm.
In its latest announcement, following a recent restart of its tweet-warning tests, Twitter said it is launching a new, updated version of the prompts for both iOS and Android. The updated prompts use an improved algorithm to reduce misidentification while providing more context and options to help users understand what the alert signifies.
According to the social media giant, "In early tests, people were sometimes prompted unnecessarily because the algorithms powering the prompts struggled to capture the nuance in many conversations and often didn't differentiate between potentially offensive language, sarcasm, and friendly banter. Throughout the experiment process, we analyzed results, collected feedback from the public, and worked to address our errors, including detection inconsistencies."
The revised algorithm will take into account the relationship between the tweet's author and the replier when identifying stronger language such as profanity, while the prompts themselves will now offer additional choices that let users examine the situation more closely and give Twitter feedback on the alert.