What the #@$% did you just say???

Whether you’re a blogger, vlogger, e-newspaper or any other kind of online content creator, it’s likely that you receive a fair amount of comments on the content that you post. This creates meaningful dialogue and allows content consumers to engage and debate opinions.
Online comments can be extremely valuable: people provide insight and first-hand knowledge on areas they are passionate about. However, it is rarely practical to control who can and cannot comment, and for every constructive, relevant comment there is sure to be an equally unconstructive, insulting one.
Anonymity and the lack of consequences have led to too many people leaving abusive, uninformed and altogether terrible comments under various forms of online content. Moderating these comments, and striking a balance between our constitutionally entrenched right to freedom of expression[1] and the need for civil and constructive discussion, is no easy task.
What is moderation?
Moderation means deleting or blocking unsuitable comments. Abusive, irrelevant and generally unethical comments can be very damaging to the content creator’s brand: they can create a negative image or perception of the content creator, even though the creator may not agree with the comment at all.
Why do we need comments to be moderated?
Most content creators agree that comments, and the ability to interact with content consumers, allow for invaluable information exchange and can be considered an integral part of the published content. They also compel content creators to ensure that the content they produce is relevant and accurate. In fact, the Mail & Guardian once commented:
“The paper talks to readers, while online is so much more: it talks with readers. We allow the conversation to happen. That is essentially what sets us apart from the printed press – we have that to-and-fro dialogue. It’s one of the pillars of online journalism.”[2]
This just goes to show how much value is placed on comments, and why they should be moderated and not removed altogether.
Case Study – News24
News24 created space for feedback at the bottom of every story, and approximately 5 000 comments a day were automatically published. This led to filters and other technical mechanisms being built into the content management system to prevent certain words and phrases, such as racial slurs and profanities, from being posted; users could also report the comments of other users. Over time, News24 nevertheless attracted a large number of inappropriate comments and suffered reputational damage for the negative comments published under its stories. News24’s management then took the decision to disable automatic commenting.
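News24’s filters are not described in technical detail, but the basic mechanism mentioned above, blocking comments that contain listed words or phrases, can be sketched roughly as follows. This is a minimal illustration only, assuming a simple blocked-term list; the terms and function name are placeholders, not News24’s actual implementation.

```python
import re

# Hypothetical blocked-term list; a real system would maintain a much larger,
# regularly updated list of slurs, profanities and banned phrases.
BLOCKED_TERMS = ["badword", "another banned phrase"]

# One case-insensitive, whole-word pattern per blocked term.
PATTERNS = [re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
            for term in BLOCKED_TERMS]

def passes_word_filter(comment_text: str) -> bool:
    """Return True if the comment contains none of the blocked terms."""
    return not any(pattern.search(comment_text) for pattern in PATTERNS)

print(passes_word_filter("A thoughtful, on-topic comment."))           # True
print(passes_word_filter("This is just another banned phrase again"))  # False
```

A filter of this kind is only a first line of defence: it catches known words and phrases, but not abuse expressed in other terms, which is why reporting tools and human moderation still matter.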
But was this the correct decision? While News24’s management generally agree with the decision taken, one person in particular states that they
“think it was a fairly binary decision about ‘it would be bad if we are sued therefore we should not do it’, not ‘it would be bad if we were sued therefore we should look at doing this [moderating] better so that we aren’t sued’.”[3]
Guidelines and Policies
Was disabling comments the correct decision for News24? We would argue that it wasn’t. Drafting and publishing a comments policy or set of guidelines allows you to monitor the comments made on your posts and to publish only those that are appropriate. Such policies typically prohibit racist, sexist, homophobic, pornographic, vulgar and hateful content.
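To illustrate what “publish only those that are appropriate” can look like in practice, here is a minimal sketch of a pre-moderation queue, in which every comment is held until a moderator approves or rejects it against the published policy. The class and method names are our own assumptions for illustration, not a reference to any particular platform’s system.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"      # held for review; not visible to readers
    APPROVED = "approved"    # published under the post
    REJECTED = "rejected"    # breached the comments policy; never published

@dataclass
class Comment:
    author: str
    text: str
    status: Status = Status.PENDING

class ModerationQueue:
    """Hypothetical pre-moderation queue: nothing is published automatically."""

    def __init__(self) -> None:
        self._comments: list[Comment] = []

    def submit(self, author: str, text: str) -> Comment:
        """Readers submit comments, which start out as PENDING."""
        comment = Comment(author, text)
        self._comments.append(comment)
        return comment

    def pending(self) -> list[Comment]:
        """Everything a moderator still needs to look at."""
        return [c for c in self._comments if c.status is Status.PENDING]

    def review(self, comment: Comment, approve: bool) -> None:
        """A moderator approves or rejects a comment against the policy."""
        comment.status = Status.APPROVED if approve else Status.REJECTED

    def published(self) -> list[Comment]:
        """Only approved comments ever appear under the post."""
        return [c for c in self._comments if c.status is Status.APPROVED]
```

The design point being illustrated is simply that nothing reaches readers until a human, guided by the published policy, has approved it.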
YouTube, for example, allows users to report certain kinds of content. It says that:
“You might not like everything you see on YouTube. If you think content is inappropriate, use the flagging feature to submit it for review by our YouTube staff. Our staff carefully reviews flagged content 24 hours a day, 7 days a week to determine whether there’s a violation of our Community Guidelines.”[4]
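The flagging feature described in the quote follows a report-then-review pattern: readers flag a published comment or video, and once reports accumulate it is routed to human reviewers. The sketch below illustrates that pattern only; the threshold and field names are assumptions, not YouTube’s actual system.

```python
from dataclasses import dataclass, field

# Assumed threshold: once this many distinct readers flag a comment,
# it is hidden pending review by a human moderator.
FLAG_REVIEW_THRESHOLD = 3

@dataclass
class PublishedComment:
    text: str
    flagged_by: set[str] = field(default_factory=set)  # usernames of reporters
    hidden_pending_review: bool = False

def flag(comment: PublishedComment, reporting_user: str) -> None:
    """Record a report; hide the comment once enough distinct users flag it."""
    comment.flagged_by.add(reporting_user)
    if len(comment.flagged_by) >= FLAG_REVIEW_THRESHOLD:
        comment.hidden_pending_review = True  # a human moderator decides next
```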
YouTube also has policies on:
- Nudity or sexual content;
- Harmful or dangerous content;
- Hateful content;
- Violent or graphic content;
- Harassment or cyberbullying;
- Spam, misleading metadata, and scams;
- Threats;
- Copyright;
- Privacy;
- Impersonation; and
- Child safety.
The Mail & Guardian[5], Times Live[6] and Business Live[7] are somewhat more relaxed about what they permit to be published.
We would recommend that content creators allow content consumers to comment on their online content. However, these comments should be moderated, and any comments that incite violence, or that could be construed as racist, sexist, homophobic, vulgar or defamatory, should be removed.
[1] Section 16 of the Constitution of the Republic of South Africa states that “Everyone has the right to freedom of expression, which includes freedom of the press and other media; freedom to receive or impart information or ideas; freedom of artistic creativity; and academic freedom and freedom of scientific research.”
[2] Emma Goodman, Online comment moderation: emerging best practices (World Association of Newspapers, 2013).
[3] Roy McKenzie, Closing online comments: A case study of News24 (2017).
[4] https://www.youtube.com/yt/about/policies/#community-guidelines
[5] https://mg.co.za/page/comments-guidelines
[6] https://www.timeslive.co.za/comments/
[7] https://www.businesslive.co.za/comments/