Twitter is reportedly expanding its Safety Mode feature, which temporarily blocks accounts that harass users or send abusive tweets.
The feature blocks for seven days accounts that use hateful remarks or bombard people with uninvited replies.
Half of the site’s users in the UK, US, Canada, Australia, New Zealand and Ireland now have access.
They can also use a sub-feature called proactive Safety Mode, which detects potentially harmful replies in advance and prompts people to consider enabling Safety Mode.
The company said it added this capability based on feedback from users in initial testing, and that it should help users identify unwelcome interactions.
Safety Mode can be enabled in settings. The system evaluates both the content of a tweet and the relationship between its author and the recipient; accounts the user follows or frequently interacts with are not blocked automatically.
The company says it will gather more insights into how the feature works and make further improvements.
Twitter has struggled to cope with abuse and harassment on its site and is now facing close scrutiny from regulators. In January, a French court ruled that Twitter must show how it fights online abuse.
Meanwhile, the UK is drafting legislation that would force social media sites to remove hate speech or face fines. Like all social media platforms, Twitter relies on a mix of automation and human moderation.
A 2020 report from NYU Stern, New York University’s business school, suggests that Twitter has approximately 1,500 moderators worldwide.