The Internet and Other Technological Apps: A Safe Space For People of Colour and Ethnic Minorities?

BY AMELIA ALAM (staff writer)

In 2021, there are roughly 4.80 billion internet users in the world. Any teenager with the privilege of internet access will most likely jump at the chance to use it. After all, the internet seems to offer nearly endless benefits. For example, it provides plenty of free learning websites students can use to study and achieve better marks at school. The importance of the internet has also become clearer during the global pandemic: applications such as Zoom and Google Meet have allowed schools and businesses to keep running during such a challenging time. In fact, since so many people have been spending much more time on their phones during quarantine, many apps have gained far more users. For instance, TikTok, an app for uploading comedy sketches, dance challenges, and other types of videos, has been downloaded more than two billion times worldwide across Apple’s App Store and Google Play, double its downloads from the previous year. Currently, there are approximately 1 billion active users on TikTok worldwide (Mohsin, Maryam). Instagram, a platform used to share photos and short videos, had around 1 billion users as of 2020 (Instagram by the Numbers: Stats, Demographics & Fun Facts). Other platforms such as Twitter and Facebook have roughly 206 million and 2.8 billion users, respectively, as of 2021 (Statista Research Department). These statistics show just how prevalent social media is in our society.

Besides the obvious health risks of too much screen time (e.g. lack of sleep, headaches, nausea, eye strain), this dramatic increase in internet use during the pandemic means that many vulnerable children and teenagers around the world may find themselves constantly exposed to racism online. Racist people and comments on the internet may seem trivial to some, since most people have the option to ignore the comments, block hate accounts, or put down the device. These are all acceptable ways of dealing with racism online, but oftentimes these efforts aren’t enough to stop the detrimental impact that online racism can have on children and teenagers. For instance, racism goes against TikTok’s community guidelines, yet there have been several reports and accusations of widespread racism on the platform. Examples include racist comments and videos, normalized racism in videos and comment sections, Black Lives Matter videos taken down as "hate speech," white creators being more likely than POC creators to be boosted to someone's For You Page, and the continuous appropriation of many Black creators' original dances. There are countless other instances on other apps and websites, too. Online racism has two major consequences for teenagers: vulnerable children can easily be inducted into extremist hate groups, and extremist racist ideologies harm the mental health of BIPOC (Black, Indigenous, and people of colour) teenagers.

As mentioned before, more and more young teenagers are being exposed to extremely hateful ideologies from extremist hate groups. Although there is no age limit for accessing the internet through Google, certain apps do set minimum ages for their users. For instance, TikTok, Instagram, YouTube, and Facebook require users to be at least 13 years old. This is one way apps try to keep anyone under 13 from being exposed to the content shared on them, but many young children still find ways to create accounts. Meanwhile, young and impressionable teenagers (ages 13-17) are freely able to create accounts on these apps. In fact, as of July 2021, roughly 50% of TikTok’s audience is under the age of 34, 32.5% of users are aged between 10 and 19, and 41% are aged between 16 and 24 (Aslam, Salman). On Instagram, as of June 2021, more than half of active global users were under the age of 34. On Facebook, as of April 2021, most users are over the age of 24: 70.4% of users are over 25, while 29.6% are between the ages of 13 and 24 (Statista Research Department). And in 2018, 85% of teens in the U.S. used YouTube (Anderson, Monica, and Jingjing Jiang). Clearly, teenagers make up a large share of users on the internet and across various apps. Social media can be a great way to connect with friends and showcase fun aspects of your life, but a big downside of teenagers' heavy social media use is their increased exposure to extremist hate groups.

Hate Groups and Hate Speech on Facebook and Other Apps:

*Before reading, it should be known that there is much more information from professionals on this specific topic. Links are provided at the bottom of the article.*

In 2020, anti-hate-speech activists raised many complaints about Facebook's mostly self-moderated Groups feature. Groups is a feature where users with a shared interest can interact and communicate with each other on public or private forums. It launched in 2010 but was promoted much more heavily starting in 2017. In fact, in February 2017 there were 100 million people in Facebook Groups, and as of February 2021 there were more than 600 million. These Facebook Groups give users a private place to communicate with each other, often without moderators, which allowed conspiracy theories to spread, with some groups even advocating for violence. Several prominent hate groups grew on Facebook through the Groups feature. For example, before being banned from Facebook sometime between August 2020 and early 2021, the group “QAnon News & Updates – Intel drops, breadcrumbs, & the war against the Cabal” was highly active. This Facebook group was composed of believers in a far-right conspiracy theory alleging that certain world leaders are part of a cabal of Satanic, cannibalistic pedophiles running a global child sex trafficking ring. Members of this group rallied around former President of the United States Donald Trump during his 2017-2021 term in office and were part of the insurrection on January 6, 2021, among many other events. The group had around 191,145 members before its ban, and it was known for spreading misinformation about politicians as well as antisemitic and racist hate. Another example of a prominent, now-banned hate group on Facebook is the “Three Percenters.” Formed sometime between 2008 and 2009, they believe the myth that only three percent of colonists fought against the British during the Revolutionary War yet still achieved liberty for all.
They see themselves as the modern-day three percent, called to fight against the “tyrannical” American government. This militia movement is primarily anti-government, but many members hold anti-immigrant and Islamophobic views. The Three Percenters have used many Facebook groups to spread hate and misinformation in the past. Their original Facebook page, entitled “The Three Percenters - Original”, had 225,144 followers and spread misinformation about Black Lives Matter protesters in August 2020, along with other types of hate. Many other noteworthy antisemitic and racist hate groups existed on Facebook, spreading hate that any teenager could easily stumble upon.

These hate groups are banned now, having been removed by Facebook's moderators during 2020, but similar racist groups undoubtedly exist on Facebook today, including groups with the same followers promoting the same conspiracy theories. Just a quick search on the app can find them. In 2020, Facebook hired 35,000 people, including engineers and moderators, to address safety issues on the platform. Facebook also invested in AI technology to spot posts that violate its guidelines; this technology detects hate speech in comments and accounts and works to remove it. According to Facebook, during the final three months of 2020, 97% of the hate speech it removed was spotted by automated systems before any real person flagged it, up from 94% in the previous quarter and 80.5% in late 2019. Facebook also claims that in the first three months of 2020, its systems spotted just 16% of the bullying and harassment content that was not reported, but by the end of the year, that number had increased to almost 49%. Facebook has also said that its AI tools can now detect hateful content in widely spoken languages (e.g. Spanish and Arabic), and the amount of hate speech content taken down reached 26.9 million pieces, up from 22.1 million in the previous quarter (Schroepfer, Mike).

It’s great that Facebook is finally addressing this problem, but it’s also important to remember that there is still much work to be done, on Facebook and on other apps. The internet is a huge place, after all. Unlike Facebook, TikTok, Twitter, Instagram, and YouTube don't have a groups feature, but children can still be exposed to hateful ideologies on these apps through comments, videos, reels, and tweets. In fact, there are many other claims of racial bias within TikTok. For instance, in July 2021, two popular Black creators, Nakita David and Cindy Manu, claimed that TikTok had still been secretly taking down their videos or shadow banning them (when a platform purposely hides content so that it doesn’t reach its expected audience). Many other Black creators have claimed the same thing has been happening for a while. In June 2020, TikTok even released an apology to the Black community, stating, “We want you to know that we hear you and we care about your experiences on TikTok. We acknowledge and apologize to our Black creators and community who have felt unsafe, unsupported, or suppressed. We don't ever want anyone to feel that way. We welcome the voices of the Black community wholeheartedly.” The full apology can be read here:

Still, many BIPOC creators claim to face discrimination on the app, and there are many private accounts that like to “troll” others with racist hate. Instagram, for its part, released a statement on February 11, 2021, about the racist hate directed at certain football players in the UK (Marcus Rashford, Bukayo Saka, and Jadon Sancho). The statement reads, “Our rules against hate speech don’t tolerate attacks on people based on their protected characteristics, including race or religion. We strengthened these rules last year, banning more implicit forms of hate speech, like content depicting Blackface and common antisemitic tropes,” and it announced stricter penalties for those who continue to spread hate speech online. The full statement can be read here:

Mental Health:

It is well established that racism has a huge impact on the mental health of people of colour. BIPOC people already experience enough racism in person, so why should they have to experience it online, too? If POC teens want to relax and have some downtime on their phones, it’s not fair that they risk encountering content which degrades them for no valid reason. However, as platforms and people alike start to take the situation more seriously, more improvements and resources have emerged.

The website YoungMinds is a great website for teenagers of colour to figure out how to deal with the impact of racism on their mental health. Some tips they suggest are talking to someone you trust, using therapy resources (whatever is available, whether it be online or in-person), finding supportive communities, joining a movement that fights racism, and remembering that it’s not your responsibility to fix racism. The website also mentions that cleaning out your social media feed can help, too.

Racism is ever-present online and continues to harm the mental health of so many people. Sometimes, it is best to practice self-care when things get overwhelming. Taking a bath, reading a book, or doing anything else that makes you feel relaxed is a great way to pass the time without going on social media.

Where to go next:

Racism on the internet is a real problem for so many teens today. The companies behind popular apps have put methods in place to protect users of all ages from seeing racist content, but whether these measures have made an impact varies from app to app and update to update. Since this fluctuates heavily, there are other ways teens can protect themselves on the internet, and ways adults can protect their children. When you encounter hateful content, it is best to report it right away. If the app's moderators don't take the content down, the next step is to block the account sharing it entirely. That way, even if the hateful content is still out there on the internet, you personally won’t have to see it. Fact-checking information you hear online is another great way to make sure you don’t fall into a conspiracy-theory, extremist mentality. As for your mental health, there are many online resources available to people in different countries. If you are unable to access online resources, please talk to a trusted adult about your feelings on this topic.

If you ever feel suicidal, please use this link to access hotline numbers from countries all around the world:

Racism is a real problem in the real world and online. We need to step up as allies and we need to make sure that people of colour feel safe in spaces on the internet and on apps. Additionally, we need to remember to take the time to practice self-care and take breaks from social media.


Sharing culturally diverse stories to educate, inspire, and empower others