
6.4: Popular, but Troublesome Internet Resources


    There are a lot of sources people consistently rely on to get their information online. We will look at some good places to go in section 6.5. For now, we are going to concern ourselves with some popular, yet unreliable, sources of online information. This section is not intended to be exhaustive (there is a lot of bad stuff out there), but the lessons from this discussion can be applied broadly. (We are not linking to these websites because, to varying degrees, we are encouraging you not to use them.)

    Wikipedia

    Of all the sources of information we will consider in this section, Wikipedia is the most reliable. For those unfamiliar, Wikipedia is an internet encyclopedia. What sets it apart from other encyclopedias is that all of its content is generated by its users. So, while a traditional encyclopedia pays experts and researchers to write entries, literally anyone can create or edit the entries on Wikipedia. This may sound like a recipe for disaster, but research has found that, for most entries, Wikipedia does a reasonably accurate job. You can test the site by going to an entry on a subject about which you are an expert and reading what it says.

    That said, plenty of people do create intentionally inaccurate entries. The entry for the first law of thermodynamics was edited to say that “the first law of thermodynamic is do not talk about thermodynamics,” and at one point Mariah Carey’s entry was edited to say that she died of embarrassment. These edits are often obvious and can be amusing pranks. Still, they aren’t all that helpful if you’re looking for accurate information.

    And things have gone far beyond pranks at times. Media critic Anita Sarkeesian was the subject of an organized online harassment campaign, and as a part of it, her entry was repeatedly edited to show pornographic images and threats of sexual assault. Wikipedia is also a popular place to dox public figures (that is, to release their personal information, like home addresses).

    Companies and individuals have also been known to alter entries as a form of marketing. In 2019, North Face changed images on entries for outdoor activities to promote their products. More sinisterly, a Wikipedia editor was found to have submitted an edit for a medical procedure that framed things in a light beneficial to the company he worked for. He attempted to change “controversial” to “well documented and studied.” Where this specific editor failed, others have succeeded, and there are now people who do this professionally.

    Moving past intentionally false information, there are also issues with inaccurate posts. The more complicated or nuanced the topic, the fewer people there are who understand the subject well enough to explain it clearly. Most academics and professional researchers don’t consider updating Wikipedia pages to be a valuable use of their time, and the result is that those updates come from passionate laypeople. Almost every teacher has at least one story of a student who quoted a Wikipedia entry that said something wildly off base.

    You should also be mindful of bias from editors in the way they present material. The more controversial the topic, the more likely it is that, intentionally or not, an editor has framed an issue in a way that doesn’t accurately depict the research. The decision to list arguments for a view can make the view look legitimate, even if those arguments are discredited. Sometimes attempting to avoid accusations of bias can do the same thing. At the time of this writing, the Wikipedia entry for abortion says, “the health risks of abortion depend principally upon whether the procedure is performed safely or unsafely.” This sentence tells us nothing useful (everything is safe when it is done safely) and certainly doesn’t tell readers what they want to know, which is how safe having an abortion is (the answer is very safe; you are 14 times more likely to die from childbirth than from an abortion).

    Lastly, you should be aware that posts are only as accurate as their most recent update. So, in fields and areas undergoing rapid change, entries can become outdated pretty fast. This is especially true when the topic is one that doesn’t attract a lot of interest.

    Facebook

    As a living person, you know what Facebook is. There are a lot of problems with this company, and you would likely find yourself much happier if you stopped using it, but we are going to focus on the problems with using Facebook as a source of information.

    As with all social media, if you get your information about the world primarily from Facebook, you run the risk of finding yourself in a bubble. The people you are friends with on Facebook are people with whom you have elected to interact. As such, you are overwhelmingly likely to be interacting with people who think like you. If you are getting your news through Facebook, you are getting it through a filter that is overwhelmingly predisposed to your way of thinking. This means you are less likely to encounter new ideas, and it means you are less likely to encounter ideas that make you question your current beliefs. As if this weren’t bad enough, it also surrounds you with confirmation of the views you already hold, which leads a lot of people to make appeals to popularity about views that may not even be that popular.

    The problem is compounded by how Facebook works. The site makes it very easy to share posts, and the algorithm they use actually promotes content based on how likely it is to go viral (based on how similar it is to past viral content). This means that posts which invoke high emotional reactions are more likely to be seen and more likely to be shared. Just because a post makes a person feel like sharing it doesn’t mean it’s likely to be true, though. In fact, the opposite is probably the case.

    In April 2020, Facebook announced there were over 40 million separate posts made with false information about Covid-19. 40% of these posts stayed up even after Facebook was notified by multiple organizations. This is the heart of why you should avoid Facebook: users are largely left on their own to sort credible claims from misinformation. As a result, Facebook has been a breeding ground for conspiracy theories and organized misinformation campaigns. Studies out of Oxford and Princeton both found that Facebook spreads fake claims faster than any other website or social media platform. The Princeton study found that Facebook referred users to untrustworthy news sources 15% of the time, while it referred them to trustworthy sites only about 6% of the time (which also means that most of what you see comes from unverified sources).

    Twitter

    Twitter is a microblogging social media platform. Users compose short posts, no more than 280 characters, and those posts show up in the feeds of people who “follow” them (although they are viewable to others who search for them, or when someone they follow aggregates them by ‘retweeting’). Users are also able to respond to the posts of others. What is really great about this is that it allows for relatively fast and easy communication between the generators of content and the audience.

    Many of Twitter’s problems overlap with Facebook’s. Because you must opt in to following a person, most Twitter users find themselves in a bubble. Misinformation also spreads quickly on Twitter because posts can be aggregated very easily. With just the click of a button, something you see can be shown to all of your followers. The result is that many users see something they think is important and share it without ever verifying that the content is correct. Beyond the echo chamber and misinformation, we also need to be concerned about how commonplace the views we see are assumed to be. While a lot of people use Twitter (22% of adults in the U.S.), most of the content is created by the same very small group of people. Pew Research Center found that 80% of tweets are made by just 10% of users. This means that the views of a select few are amplified over everyone else’s, and we are likely to make wrongheaded appeals to popularity if we use what we see as evidence for those arguments.

    Unfortunately, there is more to Twitter’s problems. While people are encouraged to interact on the platform as themselves, users can create anonymous profiles. When users post anonymously or under pseudonyms, it can be incredibly hard to verify their credibility on an issue. Twitter does allow people to verify their identity (and they are awarded a blue checkmark next to their username to indicate this), but the process is difficult, and typically a person needs to have some level of celebrity. This means that even when a normal person appears to be posting under their real name and identity, we can’t be certain. A user may claim to be speaking from experience as a nurse, mother, teacher, or whatever, but we as other users can’t ever be sure that this is the case. Users have even been caught using alternate accounts to post positive replies to posts made on their main accounts to manufacture a perception of credibility. The anonymity offered to users on Twitter also encourages harassment and trolling, since it is virtually impossible to hold nameless people accountable. We will discuss trolling more in a later section.

    Reddit

    Reddit is an incredibly popular message board community. Message boards have been around since the earliest days of the internet, but they had largely fallen out of favor until Reddit came along. The way message boards work is that the site is divided into separate boards, and each board houses conversations on a different topic. If you’ve ever taken an online class with discussion boards, the structure is a lot like that: a main page houses all the separate boards where conversations on different topics take place. Part of what makes Reddit different from message boards of the past is its scope. There are over 100,000 separate boards on Reddit (they call them ‘subreddits’). Whatever your interests, there is likely at least one subreddit about them. Seriously. Any topic. At the time of this writing, there are 13 separate subreddits on pimple popping. None of them have fewer than 100 members, and the largest has almost 16 million members.

    The fact that Reddit has allowed people with similar interests to find each other certainly counts in its favor (even if it is incredibly weird that what brings some of them together is watching and talking about pimple popping). Unfortunately, much of how Reddit is structured makes it an incredibly unreliable source of information. For starters, users on Reddit are anonymous. Every user needs to set up a username, but that name does not identify them as a person in the real world. Just like with Twitter, this causes real problems for anyone trying to evaluate the claims a person makes. If you have a medical question and you ask it on the appropriate subreddit (which people do all of the time), you will get responses from people who claim to be doctors or to have the condition in question. There is no way for you to know whether ‘Windycitymayhem’ is an expert, a blowhard, or a troll. Any information you get from Reddit will need to be independently verified before you can have any confidence in it, so you might as well skip over the anonymous advice.

    Another issue is that the site is literally run as a popularity contest. The posts made within a subreddit are displayed based on a popularity score given to them by users. If someone likes a post, or thinks it’s interesting, they can ‘upvote’ it, which helps it move up the page, or ‘downvote’ it, which moves it down. The lower on the page a post is, the less likely it is to be seen. Answers and replies to posts work the same way: users upvote or downvote replies, and the replies are displayed accordingly. For obvious reasons, this system is not particularly effective at identifying intelligent conversations or accurate answers. This system also means that content is strongly filtered based on the biases of users. Reddit users are overwhelmingly young (under 30), male, and white. As a result, posts concerning other groups (women, people of color, etc.) are often downvoted, as the average Reddit user sees them as unimportant. At its worst, this behavior escalates to outright harassment, with female users getting their posts in male-dominated subreddits (sports, videogames, etc.) downvoted into obscurity.
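    To see why this kind of ranking tells you nothing about accuracy, consider a minimal sketch of score-based ordering. This is only an illustration under simplifying assumptions; Reddit’s actual ranking also weights factors such as the age of a post, and the titles and vote counts below are invented.

        # A minimal sketch of popularity-based ranking (Python).
        # Assumption: replies are ordered by net score (upvotes minus downvotes).
        # All titles and vote counts are made up for illustration.
        from dataclasses import dataclass

        @dataclass
        class Post:
            title: str
            upvotes: int
            downvotes: int

            @property
            def score(self) -> int:
                # Nothing in the score measures accuracy or expertise, only popularity.
                return self.upvotes - self.downvotes

        replies = [
            Post("Carefully sourced answer from a specialist", upvotes=12, downvotes=3),
            Post("Confident but wrong answer that sounds good", upvotes=340, downvotes=25),
            Post("Joke reply", upvotes=95, downvotes=10),
        ]

        # Replies are shown from highest score to lowest, so the most popular
        # answer sits at the top whether or not it is correct.
        for reply in sorted(replies, key=lambda r: r.score, reverse=True):
            print(reply.score, reply.title)

    Notice that truth never enters the calculation: whichever reply attracts the most votes is the one most readers will see first.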

    Earlier we said it was a good thing that Reddit allows people to find each other, but there is a pretty serious caveat to that claim: it also helps repugnant people find each other. Because the site allows subreddits to be created about virtually anything, it has become a meeting ground for conspiracy theorists, white supremacists, misogynists, and every other type of bigot. These malignant views fester in the echo chamber Reddit gives them until they spill over into the real world. And they do spill over. Gamergate, Pizzagate, and QAnon are all examples of dangerous conspiracies that gestated on Reddit and resulted in real-world violence.

    YouTube

    There is some really good stuff on YouTube. If you have an interest in philosophy (and who wouldn’t?), you can watch the excellent Crash Course Philosophy. The problem with YouTube, as with so much on the internet, is its algorithm. More than 70% of the content viewed on YouTube comes from the algorithm’s recommendations. This means that much of what we get out of the site is a function of what the site itself presents to us. A big part of what gets recommended has to do with “engagement”: how many likes and comments the video has gotten, how many times it has been watched, how often its host channel posts new content, and so on. If you’ve ever asked yourself why most people end their videos with an appeal to “rate, review and subscribe,” this is why. The more people interact with a video, the more other people will see it.
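    To make the point concrete, here is a toy version of an engagement-based recommendation score. YouTube’s real system is proprietary and far more complicated, so the weights and inputs below are assumptions invented purely to illustrate that every input measures engagement rather than accuracy.

        # A toy engagement score (Python). The weights and inputs are assumptions
        # made up for illustration; they are not YouTube's actual formula.
        def engagement_score(views, likes, comments, uploads_per_week):
            # Every term rewards interaction; nothing rewards being correct.
            return 0.5 * views + 2.0 * likes + 3.0 * comments + 10.0 * uploads_per_week

        videos = {
            "Measured explainer by a researcher": engagement_score(5_000, 200, 40, 0.2),
            "Outraged hot take from a big channel": engagement_score(900_000, 80_000, 12_000, 5),
        }

        # The video with more engagement gets recommended first, regardless of
        # whether anything it says is true.
        for title, score in sorted(videos.items(), key=lambda kv: kv[1], reverse=True):
            print(round(score), title)

    On this kind of scoring, a channel that posts often and provokes strong reactions will always outrank a careful but quiet one.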

    One consequence of this system is that videos from people with loyal and active fanbases are recommended at a very high rate. If all you are doing is killing time, this might seem good to you. Maybe one of these videos will be recommended to you, and you can become one more fan. But if what you are looking for is accurate information, the system doesn’t really help you. There is no reason to think that popular videos have good information. If you’ve ever watched videos from a popular YouTuber on a topic you know a lot about, you have probably found yourself thinking, “this person is full of it.” That’s because they are charismatic, not necessarily an expert.

    Another issue is that the algorithm has the effect of favoring controversial videos. If you go to YouTube looking to learn about something you don’t know much about, you are likely to find yourself watching videos that do not accurately represent reality. At the time of this writing, the top ten results for ‘vaccinations’ included 5 videos touting false and dangerous conspiracy theories, 3 arguing against those conspiracy theories, 1 explaining dog vaccinations, and 1 music video by a band called the Vaccines. Leaving aside the band and the dogs, all of the videos address conspiracy theories. If you had searched hoping to learn how vaccines work or what they are, you would have failed, and this is pretty typical of the YouTube experience.

    This problem is bad enough when dealing with searches, but the algorithm also has a tendency to move you away from accurate content even if you do manage to find it. The issue is that the recommendations in the sidebar (and the videos that auto-play after your initial video is over) are generated by the same algorithm. So, even if you start by watching a quality post recommended to you by your instructor, you are only three or four videos away from garbage. In your authors’ field, philosophy, it is shocking how quickly you are moved from lectures by highly regarded professors to pseudointellectuals arguing in favor of racism, misogyny, and countless other forms of bigotry. These low-quality and outright toxic videos benefit from a halo effect that comes from being associated with the original quality content, making people who don’t know any better treat them as reliable.


    This page titled 6.4: Popular, but Troublesome Internet Resources is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Jason Southworth & Chris Swoyer via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.