Hate Isn’t A Strong Word: Social Media and Political Polarization in a Segmented Society
Social media usage in the United States has skyrocketed in the past two decades. According to data from the Pew Research Center, in 2005 only 5% of U.S. adults reported using at least one social networking site. By 2021, that figure had risen to 72% of the U.S. adult population (Pew Research Center, 2021). Although adoption has plateaued in recent years as the market has become saturated, almost every American uses, or knows someone who uses, Facebook, Twitter, Instagram, or another major social networking site.
In tandem with rising levels of social media usage in the United States, the past two decades have also been characterized by increasing levels of political polarization and a general lack of social cohesion. Since 1994, the median ideological difference between Republican and Democratic voters has increased rather dramatically, according to data from the Pew Research Center (Pew Research Center, 2014). Furthermore, data collected by the American National Election Studies shows that in 1992, 55.13% of Americans felt that there were “important differences in what the Republicans and Democrats [stood] for,” compared to 83.02% of Americans who felt the same way in 2016 (ANES, 1992–2016).
In this analysis, I argue that widespread social media usage explains rising levels of political polarization in the United States. Social media encourages the perpetuation of filter bubbles that create knowledge feedback loops, ingrain partisan identities, and foment anger toward information perceived as ideologically opposed to the group’s disposition. Filter bubbles are produced by algorithms and by psychological selective exposure, both of which are discussed in later sections.
The intersection between social media usage and political polarization is increasingly important, especially in the aftermath of the January 6th attack on the U.S. Capitol by pro-Trump extremists, several of whom were undoubtedly radicalized on platforms like Facebook and smaller right-wing networks like Parler (Hamilton, 2021). More fundamentally, I believe that increasing levels of political polarization will lead to differential and segmented sources of knowledge that are antithetical to the efficient functioning of a democratic society. These fears are affirmed by the Economist Intelligence Unit, which has categorized the U.S. as a “flawed democracy” since 2017 (Economist Intelligence Unit, 2017–2020). By the end of this analysis, I hope to provide a more cohesive understanding of the interplay between social media usage and political polarization, one that may help explain future trends of partisan and violent extremism in the United States.
The most recent survey of news consumption by the Pew Research Center found that 68% of U.S. adults get their news online, with 53% saying they get news from social media applications like Facebook, Twitter, and Instagram (Shearer, 2021). Since people increasingly have their news served to them by algorithms curated by social media companies, the effects of political polarization on news consumption have become more apparent. In recent years, social media networks have displayed symptoms of political polarization, evidenced by the limited interaction between different “sides” of online communities. This segmentation leads to the formation of so-called filter bubbles (Pariser, 2011), which are associated with the emerging phenomenon of fake news and the rejection of common knowledge bases by partisans, especially those on the authoritarian far-right.
There are two dominant arguments concerning the source of growing political polarization in online media spaces. Some academics claim that algorithms alter our online experiences and effectively place people inside filter bubbles of their own beliefs, thus strengthening groupthink and polarization within the group (Pariser, 2011; Spohr, 2017; Elhai et al., 2020). They emphasize how algorithms gather personal information about a person and attempt to curate, in some cases, highly specialized and personalized content to maximize viewer retention and engagement. One example of this technology in use is Facebook’s News Feed, which delivers content to users based on a combination of variables surrounding a person’s political identity (Manjoo, 2017). Scholars who study algorithms’ effect on filter bubbles also note that the impact algorithms have on information delivery is often opaque and happens without the user’s knowledge or informed consent (Sengupta, 2020).
The other argument locates online political polarization in human psychology and positive feedback stimulation. Scholars in this camp place more emphasis on organically created filter bubbles, formed through selective exposure behavior and confirmation bias. These behavioral traits make people more likely to interact with content that confirms their pre-existing views, which leads to further political polarization (Guess et al., 2018). There is also evidence that confirmation bias can be a factor in creating or extending conflicts, from emotionally charged debates to wars: by interpreting the evidence in their favor, each opposing party can become overconfident that it is in the stronger position (Baron, 2000).
In this analysis, algorithmic sorting and psychological selective exposure are both explained in depth in order to achieve a more cohesive and holistic approach to political polarization and its relationship to social media usage. As noted above, I argue that increasing levels of political polarization can be attributed to social media because of a combination of digital and psychological forces at play. In the following sections, I will discuss these trends in further depth.
Section III: Characterizing Polarization
Political polarization, which is defined as a “divergence of political attitudes in the mass public to the political extremes,” has been an influential force within American media and society since the founding of our nation (IGI Global, 2015). The discussion and debate around polarization have increased since the 2016 and 2020 election cycles. Important events that have occurred during this period, most notably the COVID-19 pandemic and the January 6th attack on the U.S. Capitol, demonstrate that the United States is more divided and that social groups are more politically opposed to each other. For instance, polling done one week after the assault on the U.S. Capitol by pro-Trump extremists indicated that 53% of Democrats and 56% of Republicans believed that “the greatest danger to America’s way of life comes from their fellow citizens” (CBS News/YouGov, 2021). The number of Americans who self-identify as moderates has decreased from 43% in 1992 to 35% in 2019 (Saad, 2021). And according to data from the Institute for Family Studies, only 16% of Democrats and 12% of Republicans would be willing to marry someone from the opposite party (BYU/Deseret News American Family Survey, 2020).
Some authors contend that political polarization is a myth, conceived by media and academic elites who created the term to emphasize disagreement between low-propensity voters (Fiorina et al., 2011). However, opinion polling data demonstrates a clear difference between how partisans conceive of themselves and others within the political landscape. In a 2018 study by SurveyMonkey, 61% of Democrats characterized Republicans as “Racist/Bigoted/Sexist” (Hart, 2018). In that same survey, 54% and 49% of Republicans believed Democrats were “Spiteful” and “Arrogant,” respectively. Another poll, from the Public Religion Research Institute in 2020, shows that 8 in 10 Democratic voters believe the Republican Party has been taken over by racists, and 8 in 10 Republican voters believe the Democratic Party has been taken over by socialists, although neither party openly advocates for either ideology on a massive scale (PRRI, 2020). Indeed, the data demonstrate key misunderstandings and miscalculations in identity and in how partisans see each other, reflected in high levels of political polarization driven from the grassroots level.
Several academics and political analysts emphasize how polarization is connected to identity and to perceptions of an opposing “other.” In her book Uncivil Agreement: How Politics Became Our Identity, author and social scientist Lilliana Mason argues that partisan and political identifications have changed the way Americans think and feel about themselves and their opponents (Mason, 2018). She highlights the shift in American society toward isolated and loose social ties to people in other parties and claims that a partisan re-arrangement of civil organizations has compelled Americans to “seek comfort in increasingly homogeneous neighborhoods, towns, and churches, causing American citizens to sort themselves into isolated groups that share their culture, values, race, and politics” (Mason, 2018). As a result, elections are seen as battles between opposing ways of life. This belief has been cemented into the minds of Republicans and Democrats, who see their non-partisan identities (e.g., religion, race, ethnicity, sex, occupation, class) as ultimately connected to their partisan and political identification. This process plays out online as well and pushes people to information sources that confirm and validate their beliefs.
When elections and pieces of legislation are framed in a way that makes Democrats feel victorious, their non-partisan identities are affirmed and cemented. Democratic voters feel good about preventing the threat of a Republican win, which would be perceived as a direct attack on their ideal way of life. Republicans, on the other hand, would decry any Democratic victory as an assault on their non-partisan identities. They would feel resentful and ashamed that their ideology and ideal way of life was bested by a party they see as morally repugnant and undeserving of cooperative effort, even toward mutual goals. This process has two impacts: it creates a cycle of vicious backlash between partisan groups, and it incentivizes partisans to regroup and further segment themselves from the other side (Mason, 2018). This model of political polarization, laid out by Lilliana Mason, is a useful way of considering how polarization occurs online in modern society.
Additionally, many have concluded that civic engagement, especially civic engagement online, is much more differentiated and partisan now than in the past. This could be attributed to a negative political environment, in which people who do not want to be caught in the crossfire between opposing sides leave the political arena altogether, a scenario where ideological moderates have removed themselves from party engagement. The impact is increasing polarization and hatred on both sides of the spectrum, as ideological extremists represent a greater proportion of each party’s organizing base. Data from Gallup illustrate this trend, with around 41% of Americans self-identifying as “Independent” in recent surveys (Gallup, 2021). Because most independents (80–90%) still tend to lean toward one party or the other, the more plausible explanation for this trend is that a plurality of American voters dislike being associated with either party due to negative perceptions of identity. In turn, polarization increases within and outside the parties, as the ideological extremes in both push the political discourse in further opposite directions. The implications are far-reaching, as political parties remain the most important source of policy creation and discussion in this country, and probably will for the foreseeable future.
Section IV: Threats of Polarization
Now that we have discussed political polarization and its causes more generally, it is necessary to review the threats attributed to polarization before we discuss algorithms, selective exposure, and how these trends are ultimately connected to political polarization and social media usage.
Political polarization brings with it a variety of dangerous symptoms that pose a threat to the healthy functioning of democratic societies. The most relevant consequence of polarization is a loss of diversity of opinions and arguments. Due to political polarization, people are less prone to think about issues and policies from an individual level of analysis. Instead, people frame debates through the lens of the potential impact on their ideological grouping, which lends itself to groupthink and less diverse and intersectional civic debate (Bishop & Cushing, 2008). Indeed, the result is a political landscape where a vote becomes a signal of opposition against a perceived other, rather than a tool used to direct public policy and inform politicians about civic engagement (Bishop & Cushing, 2008). This trend is detrimental to the adequate and productive functioning of liberal democracies, which rely on a certain level of open debate and trust among opposing groups.
Scholars and policy experts also focus on domestic and foreign security threats that can be attributed to high levels of political polarization. Susan E. Rice, the former national security advisor under the Obama Administration, wrote that “political polarization is a ‘force multiplier’ that deepens other threats and cripples [the United States’] ability to combat them” (Rice, 2020). Rice believes polarization is a force multiplier because it limits the nation’s ability to deal with vital issues: it alters our threat perceptions, inflames fears, erodes faith in democracy, and allows foreign actors (e.g., Russia, China, Iran) to take advantage of American infighting to grow their power on the world stage. She highlights the impacts of polarization on threat assessment, indicating that partisan issues like climate change, the COVID-19 pandemic, and China rank differently in Republican and Democratic perceptions of high-level security threats (Rice, 2020). Polarization in America has left little common ground between the two parties, due to the relative ease of convincing already-radicalized partisans that a particular threat (e.g., China, COVID-19) is even graver because it is directly related to their fears of the other party. An excellent example is how former President Trump and the Republican Party connected pre-existing fears of “communist Democrats” to the communist government of China.
Section V: Social Media and Polarization
Social media networks have transformed the ways in which we discuss, interpret, and deliberate political issues within public discourse. The realm of social media and online interaction has become a “new public sphere,” a legitimate platform for social reformers, politicians, and activists to organize and carry out their groundwork (Kruse et al., 2017). Since the early development of the Internet, many argued that digital technologies would increase exposure to political differences because they would tend to overcome social and geographical boundaries. However, as highlighted previously, this prediction has so far proved false. As social media networks have moved beyond primitive chat messaging and photo uploading to a billion-dollar ad-revenue industry, scholars note that social platforms are likely to foster selective exposure behavior due to an overload of information that must be sorted through, usually by our cultural, racial, and ideological biases.
Because of human psychology and the revenue incentives that push social media corporations (e.g., Facebook, TikTok) to maximize user engagement and viewership through likes, comments, and shares, the optimistic “diversity of ideas” view of the Internet has not materialized. In the following subsections, I will discuss how algorithms and selective exposure create filter bubbles, which in turn lead to political polarization through social media.
Section V-I: Algorithms and Filter Bubbles
Filter bubbles, a term popularized by activist Eli Pariser in 2011, are communities of like-minded individuals that exist online. They effectively function as feedback loops or echo chambers, where people become accustomed to “hearing [their] own thoughts about what’s right and wrong bounced back to [them] by the television shows [they] watch, the newspapers and books [they] read, the blogs [they] visit online… and the neighborhoods [they] live in” (Bishop & Cushing, 2008). Filter bubbles are structured in a way that makes it difficult for outside information to reach the group, since most bubbles arise out of distrust or hatred of outside groups. Indeed, people in extremely homogeneous ideological communities often ignore undeniable facts, data, and testimony that would prove their arguments wrong.
Pariser and others argue that algorithms, which personalize the user’s online experience, place the user in a bubble where he or she is only presented with information that matches with previous consumption behavior (Pariser, 2011). This interpretation of filter bubbles and their prominence is focused on algorithmic sorting and design and is less concerned with psychological trends like selective exposure.
Algorithms lend themselves to filter bubbles in two ways. The first is personalization and customizability. Algorithms have been designed and calibrated by social media companies to connect people with information they are likely to want to consume, which results in a personalized stream of content that fails to offer users a set of alternatives to choose from (Rader & Gray, 2015). This effect, often called “pre-selected personalization,” artificially alters the news and information diets of social media users and pushes them toward partisan extremes (Fletcher, 2020). In a 2017 study of the impact of technological customizability, researchers found that pre-selected personalization and algorithmic sorting could be “effective at reducing cognitive dissonance associated with the avoidance of challenging information” (Dylko et al., 2017). The study also indicated that ideologically moderate and low-propensity individuals were especially susceptible to the influence of customizability, leading to an increase in groupthink among individuals who previously had little to no connection to politics; among such individuals, political extremism can take root far more easily than among people with pre-established political commitments.
The second is sorting and categorization. When algorithms sort people into political subgroupings, group members are psychologically predisposed to become ideological or personality adherents of the political extreme. Because individual members’ cultural and social identities are increasingly tied to the group’s political stances, othering and even violent counteraction against opposing groups become justified in members’ eyes. This is best exemplified by the Robbers Cave experiment of 1954. In this highly controversial study, two groups of young boys from similar socio-economic and religious backgrounds were placed in separate camps (Sherif, 1954, 1958, 1961). Neither group knew the other existed until study observers told them so. Instantly, the boys sought to compete with and triumph over the opposing group, despite the two sides’ similarities and near-total ignorance of each other. The boys characterized their own in-group in very favorable terms and the out-group in very unfavorable terms. In fact, the boys’ hatred became so extreme that the study was ended prematurely, out of fear for the children’s safety, after boys from one group attacked the other with rocks, sticks, and other projectiles. The experiment illustrates how algorithms, which likewise separate people and personalize their information, shape group and individual identity and perceptions of those outside the group. It also shows how anger and distrust are permitted and even encouraged by group members due to the closed-off nature of the group, much like filter bubbles on social media networks.
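The ranking-and-feedback dynamic described above can be made concrete with a deliberately simplified simulation. The Python sketch below is not any platform’s actual algorithm; every function name, parameter, and update rule is an illustrative assumption. It pairs a ranker that serves the items closest to a user’s inferred preference (pre-selected personalization) with a mild tendency to click the most congenial item served (selective exposure), and shows how the feed drifts away from cross-cutting content.

```python
# Toy model of "pre-selected personalization" plus selective exposure.
# NOT any platform's real algorithm; all names and parameters here are
# illustrative assumptions for the sake of the sketch.

def rank_feed(items, preference):
    """Serve the items closest to the user's inferred preference first."""
    return sorted(items, key=lambda item: abs(item - preference))

def simulate(rounds=10, lean=0.1, feed_size=5):
    """Simulate repeated feed rankings for one user with a slight lean."""
    # Content items scored on an ideology axis from -1.0 to 1.0.
    items = [i / 10 for i in range(-10, 11)]
    preference = lean
    served_feeds = []
    for _ in range(rounds):
        feed = rank_feed(items, preference)[:feed_size]
        # Selective exposure: the user clicks the most congenial
        # (most lean-affirming) item in the served feed.
        clicked = max(feed) if preference >= 0 else min(feed)
        # The ranker updates its estimate of the user from the click,
        # closing the feedback loop.
        preference = 0.8 * preference + 0.2 * clicked
        served_feeds.append(feed)
    # With even a small initial lean, later feeds contain no items
    # from the opposite side of the spectrum.
    return preference, served_feeds

final_pref, served = simulate(lean=0.1)
```

Under these assumptions, a user who begins with only a slight lean (0.1 on the -1 to 1 axis) ends the simulation with a noticeably stronger inferred preference, and the final feed contains no cross-cutting items, even though neither the user nor the ranker ever chose extremity outright.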
Section V-II: Selective Exposure and Filter Bubbles
Although algorithms are one factor associated with the persistence of filter bubbles on social media, scholars contend that they are not the only factor that can explain their ferocity and linkage to increasing levels of political polarization. Partisan selective exposure generates filter bubbles organically, without the need for algorithmic sorting and pre-selected personalization (Messing & Westwood, 2012). Specifically, this psychological phenomenon describes the tendency for individuals, regardless of preconceived partisan identity or ideological extremeness, to get information and news from sources that confirm their beliefs. It also emphasizes the discomfort and unhappiness associated with information that does not conform to an individual’s ideology or partisan opinions, which leads individuals to avoid specific sources of information. Recent research validates the existence and prevalence of this phenomenon. In a 2015 study of approximately ten million U.S. Facebook users, it was shown that “compared with algorithmic ranking, individual choice played a… stronger role in limiting exposure to cross-cutting content” outside of a user’s filter bubble (Bakshy et al., 2015). Additionally, other research points out that even when participants are presented with information that does not conform to their beliefs, they tend to preserve their cognitive system by avoiding or counterarguing attitude-inconsistent information (Frey, 1986).
The cause of selective exposure is a psychological predisposition that scholars call confirmation bias, defined as the “tendency of people to favor information that confirms or strengthens their beliefs or values,” one that is difficult to dislodge once affirmed (Plous, 1993, p. 233). While confirmation bias can occur as a result of cognitive partisan gatekeeping, it can also happen without an individual knowing it, which strengthens the hold of groupthink and filter bubbles more generally. Indeed, when confirmation bias and the self-affirming, self-legitimizing nature of selective exposure are combined, the partisan implications are staggering. The result is a total rejection not only of attitude-inconsistent news but of common sources of knowledge and different beliefs. For instance, an internal study at Facebook leaked to the Wall Street Journal revealed that its products “exploit the human brain’s attraction to divisiveness,” and that 36 percent of people who joined an extremist group on Facebook did so of their own free will (Statt, 2020).
Filter bubbles arise out of psychological selective exposure for two reasons. First, as noted above, selective exposure disrupts rational decision-making for individuals and collective groups. The decisions people make and the views they hold, as individuals or as a group, are often difficult to challenge, despite conflicting and reliable information from other sources (Fischer et al., 2011). This rejection of information is organic, which frustrates any effort at delivering credible facts to people. This failure to coalesce around truth and factual knowledge entrenches filter bubbles, since group members come to believe the bubble is the only place to receive and transmit reliable information.
Second, accuracy and defense motivation. Researchers state that individuals are motivated to be accurate in their decision-making (accuracy motivation) and that individuals seek out information to confirm their beliefs (defense motivation) (Fischer et al., 2011). These primary motivations create a situation where people actively seek out filter bubbles to join, both to prove themselves right and to subconsciously affirm their beliefs in a collective spirit of group unity.
Section VI: 2020 Election Cycle
I argue that rising political polarization is attributable to social media usage because of filter bubbles’ tendencies to exacerbate partisan information flows and to limit the incentives individuals have to seek out other sources of information. In the United States, there are large segments of the population who disavow information that would once have been perceived as factual regardless of party affiliation or ideology. For example, in June 2020, the Pew Research Center reported that only 43% of Republicans believed the pandemic was a threat to public health, despite professional consensus and daily death counts saying otherwise (Deane et al., 2021). Data also show that counties that voted for Trump remain the most vaccine-hesitant. Furthermore, approximately 30% of Republican voters indicated having a favorable view of QAnon, a cultish movement alleging that a secret cabal of Satan-worshipping, cannibalistic Democratic lawmakers runs a global child sex-trafficking ring and plotted against former President Donald Trump while he was in office (YouGov, 2021). Finally, opinion polling data from the Associated Press show that 65% of Republican voters believe President Joe Biden was not legitimately elected, despite overwhelming evidence to the contrary (AP-NORC, 2021). In this section, I illustrate the prevalence of ideological extremism and hyperpolarization within the American electorate and provide empirical evidence for how filter bubbles work on social media.
Although the 2020 election cycle was heavily characterized by issues like the COVID-19 pandemic and the economy, analysts and policy experts point to historically high turnout to emphasize how voters viewed the presidential election as an inflection point for the direction of the country and of American culture itself (Montanaro, 2020).
In a study published days before the 2020 presidential election, researchers noted that the gap between “in-party love” and “out-party hate” had increased dramatically over the past decade, with most partisans hating the other party more than they love their own (Finkel et al., 2020). The study claimed one reason for this divergence was social media usage, which the researchers argue plays an overly “influential role in political discourse” and intensifies “political sectarianism” (Finkel et al., 2020, p. 534). They discuss how emotional and moralized posts, like QAnon conspiracy theories related to the #SaveTheChildren movement, are more likely to be shared within partisan circles than outside of them, reinforcing political engagement and radicalization within filter bubbles themselves. They note that algorithms, combined with the “contagious power of content that elicits sectarian fear or indignation,” have led massive numbers of people to justify radical right-wing extremist ideologies and groups, stoking hatred of other groups that are perceived as identity and cultural threats (e.g., foreigners, Muslims, Black Lives Matter) (Finkel et al., 2020, p. 534). In turn, people spend an increasing amount of time on social media networks due to their heightened levels of concern, anger, and partisanship, earning social media companies more ad revenue.
These findings indicate that the relationship between social media usage and political polarization is, in a sense, reciprocal. Filter bubbles within social media networks, induced by algorithms and selective exposure, heighten fears of a perceived other, which drives people to the political extremes to protect the political and non-political identities that are ultimately connected to their partisan preferences. In turn, these increased levels of polarization tend to lead to higher levels of social media usage and engagement due to contagious and provocative partisan content, which boosts ad revenue for social media companies like Facebook and Twitter. This reciprocal relationship is key to understanding how increased usage of social media leads to heightened political polarization: the users and the companies alike lack meaningful incentives to stop the cycle.
The 2020 presidential election cycle also demonstrates a growing rift in society over political and ideological differences. This development has dangerous effects on societal cohesion and on commonly shared beliefs in democratic governance at large. Take the post-election period from November to January, for instance. After the announcement of Joe Biden’s electoral college win, former president Donald Trump and pro-Trump adherents immediately cried foul. On November 7th, President Trump tweeted “I WON THIS ELECTION, BY A LOT,” once it was clear Joe Biden had captured enough electoral college votes to secure the presidency (Tweet). After the former president had established his defiance of the results, Trump adherents began to disseminate false information about the election process to their followers online, portraying the nationwide vote as a rigged, fraud-ridden, and mismanaged election meant to hand power to the so-called radical left-wing agenda. They argued that millions had voted illegally through mail-in ballots, conveniently the preferred voting option of Democratic voters in swing states like Pennsylvania, Georgia, and Arizona (Rakich & Mithani, 2021). Videos, pictures, leaked audio, and other “pieces of evidence” continued to circulate through pro-Trump filter bubbles. In one instance, Republican voters on Facebook disseminated a video that claimed to show a man illegally taking ballots into a Detroit counting center (Washington Post, 2020). Although the video quite clearly depicted a photographer transporting his camera equipment, pro-Trump individuals continued to claim massive electoral conspiracy and fraud against former President Trump.
These falsehoods continue to garner majority support within the Republican Party, with 76% of GOP voters in a February 2021 survey affirming their belief that the 2020 election was rigged in favor of President Joe Biden (Quinnipiac University, 2021). Republican voters have seemingly made up their minds, regardless of judicial decisions, independent investigations, and the consistent finding that there was no widespread voter fraud during the 2020 presidential election cycle. In fact, in a Pennsylvania court, former President Trump’s personal attorney Rudy Giuliani admitted his case was “not a fraud case” (Berenson, 2020).
Despite all the evidence against it, why does a vast majority of Republican voters believe the 2020 presidential election was rigged against them? While the notion of election fraud and mass conspiracy against former President Trump was driven in part by Trump himself and by other high-ranking Republicans and administration officials, the spread and legitimacy of the argument were facilitated by pro-Trump filter bubbles on social media.
First, the spread. Facebook groups like “Stop the Steal” amassed more than 320,000 users in less than 22 hours after it had become clear Joe Biden had won the election (Frenkel, 2020). Although prominent accounts and groups like this were shut down by social media companies, smaller groups and pages continue to exist in alternative forms. For instance, a Facebook account called “Young Conservative, @TheRealconservativ3lephant” used picture and video “memes” to spread false information about the election process during the last election cycle (Facebook Page). A post from November 12, 2020 pictured an electoral college map from the Epoch Times, a far-right news outlet, claiming that Trump had won more votes than Biden. Although this particular post was shared only 26 times, that reach is multiplied across the thousands of other pro-Trump accounts sharing the same picture or message: that the 2020 election was stolen.
Indeed, the quick spread of misinformation and disinformation online can be attributed to the fact that, rather than following one particular account, pro-Trump adherents tend to follow several smaller accounts that can spread disinformation more covertly. When a piece of conspiratorial news is picked up, the pro-Trump network activates, facilitating information flows that are too quick to regulate or contain. According to data from a 2018 study published in Science, it takes true stories about six times as long as false stories to reach 1,500 people (Vosoughi et al., 2018).
Second, the legitimacy. Even though pro-Trump groups had no factual evidence to back up their claims of electoral fraud, massive portions of the Republican base chose to believe them anyway. This indicates that right-wing filter bubbles manufacture legitimacy by invoking ideational sources of the group’s ideology. For example, distrust of the media is a defining feature of the pro-Trump ideology. In a January 2021 poll by YouGov, 92% of his supporters strongly or somewhat agreed that “the mainstream media today is just a part of the Democratic Party” (YouGov, 2021). Because of this ideology, Republican voters are more likely to assume that any news source is biased against them, which compels them to believe only what other pro-Trump adherents are saying online. And thus, legitimacy is assured: all outside knowledge, facts, and information are seen as untrustworthy because they are not delivered by those within the filter bubble. It is my belief, shared by other experts, that this orientation of legitimacy has helped fuel the belief that Joe Biden and the Democratic Party stole the 2020 presidential election.
Section VII: Conclusion
I have argued that the rise in political polarization within the United States can be attributed to social media usage. This is due to filter bubbles, communities that exist to facilitate biased and personalized flows of partisan misinformation and disinformation online. I also discussed two variables that interact with the formation and entrenchment of filter bubbles: algorithms and psychological selective exposure. From my analysis and review of the 2020 presidential election, it is clear that political polarization and hatred of others are at an all-time high, which I worry is antithetical to the proper functioning of democracy.
Going forward, we have a choice. For our government to work for all, there must be some semblance of a baseline, a commonly respected base of information that all Americans trust. Issues like COVID-19 and election reform should not be politicized by either party. We must rebuild trust in our institutions through outreach and compassion, and educational efforts against misinformation and disinformation should be encouraged.
Clearly, there is a lot of work to be done. But even now, progress is being made. Just last week, it was revealed that scientists had developed a new machine learning tool that can “identify Covid-19-related conspiracy theories on social media and predict how they evolved over time,” an advancement that may aid the fight against misinformation online through education and outreach (Sankaran, 2021). For the sake of our segmented society, I hope that our digital lives will return to civility and the restoration of cross-partisan information flow. For the functioning of our democracy and the protection of our most sacred rights, I hope that, once again, hate will become a strong word.