The “experiment” will advise users to reconsider sending tweets that may contain offensive language.
Currently in the testing stages, the new moderation feature will be available on iOS and will “give you the option to revise your reply before it’s published if it uses language that could be harmful”, according to Twitter Support.
When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.
— Twitter Support (@TwitterSupport) May 5, 2020
It is not yet clear what language Twitter will class as harmful, but it is likely to follow closely the policies laid out in the site’s official rules.
It is also not yet clear whether the prompt will appear for original tweets as well as replies, but users can expect to learn more as testing continues.
Twitter is not the first social platform to take this approach. Last summer, Instagram tested a warning system for users posting offensive comments and, more recently, for post captions, warning users when a caption looks similar to others that have been reported.
Whilst Twitter has previously removed tweets and suspended or banned users for more extreme forms of harassment and abusive content, this new tool will simply encourage users to rethink before posting, hopefully helping to defuse tense situations on the platform.
Users will be able to ignore the warnings and share their original tweet.
Want More?
Check out how Instagram is helping those missing their prom due to social distancing. Alternatively, you could read about YouTube’s upcoming online film festival.
For updates follow @TenEightyUK on Twitter or like TenEighty UK on Facebook.