Content Moderation Demands and Trends
- admin
- April 25, 2022
- Technology
Increasing Amounts of Data Being Shared on Social Media and Forums Are Attracting New Threats
One of the largest and most popular platforms on the internet is YouTube. According to ComScore, over 100 million people visit YouTube each month, viewing 1 billion videos per day. Often these videos are uploaded by users, and their content varies greatly. For example, a video of a user’s cat or another pet doing something funny will be viewed by many people who love that kind of animal, and those viewers may then upload their own videos of their pets doing funny things.
Some users upload videos containing extreme hate speech directed toward groups of individuals defined by ethnicity, religion, disability, and/or sexual orientation. This type of content attracts viewers who feel animosity toward those same groups, and such hateful speech on social media has been linked to increased violence against members of these groups. Governments around the world have attempted to curb it by passing laws prohibiting its dissemination on social media outlets like YouTube and Facebook, but these efforts have met with mixed results at best. With Web 2.0, users generate and share content through comments, chats, photos, and other types of user-generated content.
Users have become increasingly active on social media, a shift often attributed to the widespread use of smartphones and the internet. They have also become more engaged through user-generated content; for example, a study found that 64% of Instagram followers and 75% of Facebook fans made some kind of purchase last year after liking or sharing content from those platforms. A key takeaway is that consumers are willing to spend money on brands they like, which illustrates how much more engaged users have become with social media since it went mainstream about a decade ago.
This Creates New Challenges Such as Cyberattacks and Abuse on Media Platforms
Credit card companies and banks have been dealing with cyber threats for years, so it’s not a huge surprise that the recent breach of Target Corp. spread rapidly and wreaked havoc. But it also raised questions about how hackers might use that same technology for their own gain, and about the security measures retailers have in place to protect themselves against such attacks.
Cyberattacks are on the rise. According to Verizon’s 2014 Data Breach Investigations Report, there were up to 1 million compromised accounts per day in 2013, a number far higher than in previous years and a nearly 50% increase from 2012. Cyber crooks often use social engineering (trickery such as charm or intimidation) or email phishing scams to lure people into handing over information they shouldn’t. Victims usually receive an email message saying they’ve won a gift certificate or prize through some kind of gaming platform; only when they follow up do they discover that the link goes nowhere, if it even exists at all. This approach is rudimentary and easy to see through, yet many people fall for phishing emails because they trust whoever sent them.
The psychological effects of cyberstalking can be devastating—both emotionally and physically—but because there isn’t enough data on this particular phenomenon yet, we’re unable to offer specific advice on how best to protect yourself from this type of attack.
In the Past Few Months, We Have Seen a Variety of Abusive Content Across Our Sites
You might have seen some of these stories in the news, and we’ve received questions about why content like this is allowed to exist on our sites. We’re always working on improving our systems based on feedback from our communities, so we appreciate these conversations.
These examples show how complicated it can be to decide what content should and shouldn’t be allowed on our sites. In most of these cases, we apply a policy called ‘differential moderation’, where a piece of content would be removed from the site if posted by a regular user but permitted if posted by a premium or high-profile user.
Differential moderation is used in situations where there are conflicting values at play (e.g., free expression vs privacy). By applying differential moderation in these situations, we believe that we can uphold both values while minimizing harm. For example, when sensitive personal information is shared publicly without consent by an average member of the community, it’s not okay because it could lead to abuse or harassment for that person. However, when sensitive information is shared by a public figure (i.e., someone who has sought power or attention as part of their life), it may be newsworthy and important for people to see because it relates directly to their behavior or character as a public figure. This approach allows us to balance two important values: free expression and privacy rights.
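To make this concrete, here is a rough sketch in Python of how a differential-moderation rule could be expressed. The Post fields, the moderate() function, and the two-outcome decision are illustrative assumptions for this example, not our actual policy engine; a real decision weighs far more context than this.

```python
# A minimal sketch of a differential-moderation rule (illustrative only).
from dataclasses import dataclass

@dataclass
class Post:
    author_is_public_figure: bool   # e.g. a politician or well-known creator
    contains_private_info: bool     # sensitive personal info shared without consent
    is_newsworthy: bool             # relates to the author's public conduct

def moderate(post: Post) -> str:
    """Return 'remove' or 'allow' under a simple differential-moderation rule."""
    if post.contains_private_info:
        # An ordinary user's post exposing private information is removed,
        # because it can enable abuse or harassment of the person exposed.
        if not post.author_is_public_figure:
            return "remove"
        # The same material from a public figure may stay up when it is
        # newsworthy, i.e. it bears directly on their public behavior.
        return "allow" if post.is_newsworthy else "remove"
    return "allow"

if __name__ == "__main__":
    print(moderate(Post(author_is_public_figure=False,
                        contains_private_info=True,
                        is_newsworthy=False)))   # -> remove
    print(moderate(Post(author_is_public_figure=True,
                        contains_private_info=True,
                        is_newsworthy=True)))    # -> allow
```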
We Have Even Seen Examples of Political Expression, Hate Speech, and Violent Extremism
Violence and extremism are not on the rise; rather, there has been a fundamental change in the way people talk, and that change is reflected on social media sites. We have even seen examples of political expression, hate speech, and violent extremism. As part of our commitment to free speech, we want to encourage users to express themselves as freely as possible, but we also want to protect our users from abusive content. We do not tolerate hate speech or incitement to violence online. This means that while you may be able to express yourself freely on our sites, your actions may still have consequences, especially if your posts or comments are perceived as hateful or threatening by others. If a user violates these policies, their account may be suspended or banned and removed from the service immediately, without notice.
We Are Also Seeing Growth in Misinformation About Politics, Fake News, and Terrorism
A survey conducted by Ipsos-MORI indicated that 31% of people around the world have read fake news, and 20% have shared it. We are also seeing growth in misinformation about politics, fake news, and terrorism. Misinformation is a threat to democracy, national security, and public health. It is used as a tool of propaganda and is designed to manipulate readers into believing false information. Fake news often goes viral because it plays on emotions, and it can be difficult to identify because it looks like real journalism and sometimes appears on legitimate sites or well-known news organizations, making it harder for readers to tell what’s true from what’s false.
As These Threats Evolve, We Need to Continue to Invest in Technology and Human Resources to Keep Ahead of Them
Content moderation is a human-scale task that combines an understanding of context with the ability to make judgment calls. That’s why it’s already clear today that machines cannot entirely replace humans in content moderation. A computer program can easily help catch text, images, and videos that are illegal, violate specific policies, or fail to meet certain standards. But even with the most advanced technology, it is often impossible for AI to distinguish satire from propaganda, decide whether something violates a platform’s terms of service, or determine whether content has been altered.
Instead of trying to have AI take over completely from human oversight, we should use machine learning programs as one tool among many to assist human moderators in their work, especially when handling large volumes of user-generated content that need immediate attention. For example, machine learning could help identify threats by analyzing the emotional intent behind statements containing hateful rhetoric and separating violent propaganda from posts about historical events.
However, even with sophisticated image analysis tools available today—which can successfully detect nudity or digitally manipulated images—it is possible for AI to incorrectly flag innocuous material because it fails to recognize cultural context or humor or because an algorithm was trained on biased data sets. We need systems designed by experts who understand these risks and can ensure machines do not misidentify normal content as suspicious material or remove legitimate discussions from public view out of fear of offending someone—or worse yet, censor speech unintentionally based on outdated assumptions about what kinds of content are acceptable online.
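As a rough illustration of this division of labor, the sketch below routes content by a model’s confidence score: near-certain violations are removed automatically, ambiguous cases go to human reviewers, and everything else is published. The classify() stub, the keyword heuristic, and the thresholds are assumptions made up for this example, not a real model or our production settings.

```python
# A sketch of machine-assisted triage with a human-review fallback (illustrative only).

def classify(text: str) -> float:
    """Stand-in for a trained model; returns P(content violates policy)."""
    hateful_markers = ("kill", "exterminate")   # toy heuristic, not a real classifier
    return 0.9 if any(w in text.lower() for w in hateful_markers) else 0.1

def triage(text: str, remove_threshold: float = 0.95,
           review_threshold: float = 0.5) -> str:
    score = classify(text)
    if score >= remove_threshold:
        return "auto-remove"     # near-certain violations handled by the machine
    if score >= review_threshold:
        return "human-review"    # ambiguous cases (satire? history?) go to people
    return "publish"

if __name__ == "__main__":
    print(triage("A documentary about historical atrocities"))        # publish
    print(triage("We should kill them all"))                          # human-review
    print(triage("We should kill them all", remove_threshold=0.8))    # auto-remove
```

Keeping the auto-remove threshold high and sending everything in between to reviewers is one way to limit the false positives described above: the machine only acts alone when it is very confident.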
Among These Strategies is the Use of Content Moderation
Content moderation is one of the most popular strategies for managing user-generated content. It means using human or automated moderators to review content uploaded by your users and either approve or delete it.
The most important thing to remember when you’re managing your own community’s UGC is that different communities require different strategies. What works in one community may not work in another, and what works today might not work tomorrow.
By definition, content moderation means reviewing and approving the content that your users want to post on your site. It’s a process where you decide what stays online based on set guidelines and whether or not the content violates those guidelines.
You can either do this manually, with a moderator reviewing each post individually before it gets published, or you can automate it using AI, which will automatically approve or reject posts based on pre-defined guidelines. The best way to moderate your community depends largely on how big it is: if there are only a few hundred users, then manual moderation might be enough, but for larger communities with hundreds of thousands (or millions) of members, automating this task becomes essential because moderating everything manually would take up too much time.
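The snippet below sketches that trade-off under stated assumptions: a made-up banned-term list standing in for real guidelines, a hypothetical community-size cut-off, and a simple queue for manual review. A real system would use richer rules and models rather than keyword matching.

```python
# A minimal sketch of choosing between manual and automated pre-moderation (illustrative only).
from collections import deque

BANNED_TERMS = {"spamlink.example", "buy followers"}   # stand-ins for real guidelines
MANUAL_LIMIT = 500                                     # hypothetical size cut-off

manual_queue = deque()   # posts waiting for a human moderator

def automated_check(post: str) -> str:
    """Approve or reject a post against the pre-defined guidelines."""
    return "reject" if any(term in post.lower() for term in BANNED_TERMS) else "approve"

def submit(post: str, community_size: int) -> str:
    if community_size <= MANUAL_LIMIT:
        manual_queue.append(post)          # small community: a moderator reads each post
        return "queued-for-manual-review"
    return automated_check(post)           # large community: rules decide immediately

if __name__ == "__main__":
    print(submit("Check out my garden photos", community_size=200))          # queued
    print(submit("Buy followers here: spamlink.example", community_size=250_000))  # reject
```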
Content moderation helps us take action against inappropriate content on our sites. This can include reporting inappropriate content, flagging false news as false, and also taking action against users who share the content. Businesses that rely on user-generated content have a responsibility to ensure that the content shared is appropriate for their audiences. The most effective way to ensure this is through manual or automated moderation.
Content moderation is an important part of our strategy to protect users from online abuse: it allows them to flag inappropriate content so it can be blocked or removed by moderators. We are committed to having the most effective controls in place to keep abusive content away from kids, and we already moderate millions of posts a day. Professionals review every video flagged for violent extremism, and we have over 10,000 people working on YouTube’s Trust & Safety teams worldwide. Our systems also help us automatically remove more than 90 percent of this type of content before anyone sees it.