YouTube has announced its latest effort to reduce toxic and hateful comments on the platform, which often demoralize creators and users alike.
Google has previously introduced related features, such as displaying a warning to users who are about to post such a comment, in the hope that they will reconsider before posting.
YouTube said it will now send a notification to the author of an abusive comment when that comment has been removed for violating the platform's rules. If a user continues to post toxic comments despite the warning, YouTube will block them from posting comments for 24 hours.
The company says it tested this feature prior to launch and found both the notification and the 24-hour "timeout" to be effective.
Hate-comment detection is currently available only in English, though the company plans to support more languages in the future.
YouTube's stated goal is to protect creators from users who try to harm the community through comments. The feature also offers more transparency to users whose comments have been removed for policy violations and, the company hopes, helps them better understand its Community Guidelines.
In a blog post, YouTube also revealed that it is working on improving its artificial intelligence (AI) detection systems.
The system is said to have removed 1.1 billion spam comments in the first half of 2022. YouTube also claims to have improved it to better detect and remove bots in video live chats.