Abstract
Concerns about public misinformation in the United States—ranging from politics to science—are growing. Here, we provide an overview of how and why citizens become (and sometimes remain) misinformed about science. Our discussion focuses specifically on misinformation among individual citizens. However, it is impossible to understand individual information processing and acceptance without taking into account social networks, information ecologies, and other macro-level variables that provide important social context. Specifically, we show how being misinformed is a function of a person’s ability and motivation to spot falsehoods, but also of other group-level and societal factors that increase citizens’ chances of being exposed to correct(ive) information. We conclude by discussing a number of research areas—some of which echo themes of the 2017 National Academies of Sciences, Engineering, and Medicine’s Communicating Science Effectively report—that will be particularly important for our future understanding of misinformation, specifically a systems approach to the problem of misinformation, the need for more systematic analyses of science communication in new media environments, and a (re)focusing on traditionally underserved audiences.
Keywords: misinformation, disinformation, fake news, motivated reasoning, science literacy
Concerns about an underinformed or even misinformed public when it comes to scientific issues are not new. Disconnects between public opinion in the United States and the scientific consensus on topics such as vaccine safety, evolution, or climate change have existed for a long time. More recently, however, increasingly polarized political environments and fundamental changes in how information is shared by media and audiences have given new urgency to the problem. What do we know from evidence-based social science about the origins and drivers of misinformation? What can research in communication, psychology, political science, and related fields tell us about potential solutions? What agendas for future research have emerged from the existing body of work on misinformation in science?
By not making societal influences on the production and dissemination of misinformation (1) or group-level dynamics (2) a focal point of our paper, we do not mean to imply that they are irrelevant. In fact, they provide important context for understanding factors that might contribute to or remedy misinformation among citizens, as we will discuss below. We also exclude from our analysis misinformation among individuals that has unique relevance to nonscientific contexts (e.g., misinformation triggered by false memory effects in legal settings) (3).
What Does It Mean for Citizens to Be Misinformed or Uninformed About Science?
First, it is important to note that “misinformation” can be broadly defined as information that is incorrect, possibly by accident. In contrast, “disinformation” has sometimes been used to denote a specific type of misinformation that is intentionally false. However, the distinctions between these terms—as well as terms like “rumor” or “fake news”—have not always been clearly drawn in research pertaining to these topics (4). Similarly, there has been some conceptual debate surrounding what it means to be “misinformed,” compared with “uninformed.” Being misinformed is often conceptualized as believing in incorrect or counterfactual claims. However, the line between being misinformed and being uninformed—that is, simply not knowing—has long been blurry in different literatures.
For example, early empirical observers of the modern US political system equated being misinformed to not being informed in the first place, or to making decisions based on factors other than the best available information. “After examining the detailed data on how individuals misperceive political reality or respond to irrelevant social influences,” Berelson et al. (5) wrote over 60 y ago, “one wonders how a democracy ever solves its political problems.” Much of the empirical work since then has focused on providing citizens with competencies to absorb relevant facts to “overcome areas of ignorance or … correct mistaken conceptions” (6). Thus, being misinformed has sometimes been understood as both holding inaccurate views and being uninformed about scientific facts and processes.
However, others have argued that believing incorrect information about both scientific topics (e.g., childhood vaccination) and political topics (e.g., weapons of mass destruction in Iraq) can have unique causes and consequences, especially if the person is also politically active. Compared with people who are uninformed or those who are informed but inactive, individuals who are “active” and “misinformed” have “unite[d] their purported knowledge and their political actions, [so] they have little incentive to abandon old beliefs, accept new ones, abandon old allegiances, find a new group, and change their behavior” (7). Consequently, strategies to encourage people in different epistemic states to be “active” and “informed” might differ.
Of course, citizens can be uninformed and misinformed all at once—for example, they may be uninformed about how scientific processes work while being misinformed about the facts of a specific scientific issue—and these factors may influence each other. In practice, then, it is difficult to cleanly separate the “misinformed” from the “uninformed.” Our goal, therefore, is not to tease apart these epistemic states, but rather to review what is known about individual-level “unawareness,” “misinformation,” or both; to identify what may be missing from proposals to remedy them; and to bring these discussions to bear on the specific field of science communication.
Lack of Understanding of Science.
One problematic area is citizens’ understanding of basic scientific facts and the scientific process more broadly.
Knowledge about scientific facts.
The US Science & Engineering Indicators (SEI) surveys measure factual knowledge about science biannually as the average number of correct answers to a series of true-or-false and multiple-choice items. Although such batteries of closed-ended factual knowledge questions are imperfect representations of what citizens know about science more broadly, these trend data suggest that knowledge of scientific facts and terms has not decreased significantly in recent years. There are differences across respondents, however, with factual knowledge being “strongly related to individuals’ level of formal schooling and the number of science and mathematics courses completed” (8).
Epistemic knowledge about science.
Notably, recalling isolated scientific facts might have a limited impact on citizens’ ability to make meaningful policy choices about risks and benefits surrounding emerging technologies. Likely more important is what has been called “epistemic knowledge”—that is, levels of information or misinformation among nonexpert publics about the scientific process and how this process shapes the findings produced by science (9). This is slightly different from discussions of “epistemic beliefs” more broadly, which can be understood as “beliefs about the nature of knowledge and how one comes to know” (10).
Survey data on Americans’ (mis)understanding of the scientific process do not show meaningful changes over time (with some variations due to coding of open-ended data). In the most recent SEI survey, one in three Americans (36%) misunderstood the concept of probability; half of the population (49%) was unable to provide a correct description of a scientific experiment; and three in four (77%) were unable to describe the idea of a scientific study, indicating an inability among “[m]any members of the public … to differentiate a sound scientific study from a poorly conducted one and to understand the scientific process more broadly” (11).
Public opinion surveys also suggest that significant proportions of the public are concerned about how low levels of epistemic knowledge can negatively influence their own or other Americans’ understanding of science news. Although there is certainly some complexity (and irony) in measuring individuals’ perceptions of misperceptions among their peers, it is nonetheless interesting to note that two in five Americans (44%) agree that it is “a big problem” that “[t]he public doesn’t really know enough about science to understand findings in the news,” with only 16% not considering it a problem. Similarly, 4 in 10 (40%) think that it is “a big problem” that “[t]here are so many findings that it’s hard to distinguish between high and low quality studies,” with only 18% not seeing it as a problem (12).
Holding Beliefs Inconsistent with the Best Available Science.
In Theaetetus, Plato described knowledge as “justified true belief.” Philosophical critiques of this definition have focused on the fact that people’s reasonable, justified perceptions of reality, driven by their own direct observations of the world around them, can be inconsistent with truth (13). This highlights the unique role that science plays in society for providing citizens with information that is justified beyond their own observations because it is based on reliable, systematic bodies of scientific knowledge. In fact, the low levels of factual and epistemic knowledge discussed earlier would be less disconcerting if large proportions of nonexpert audiences routinely deferred to scientific judgment (14) and made policy choices that are consistent with evidence-based consensus within the scientific community. Unfortunately, data do not always support this expectation.
Inaccurate views of scientific consensus and the willful rejection of scientific consensus.
In a 2014 US survey (15), two-thirds of respondents (67%) thought that scientists did “not have a clear understanding about the health effects of GM crops,” despite broad scientific consensus on the topic (16). Similarly, half of Americans (52%) thought scientists were “divided” in the belief that the universe was created in the Big Bang, and about a third each thought that scientists were divided on anthropogenic climate change (37%) and evolution (29%). Of course, these data do not make clear the cause of these inaccurate views, which, arguably, could stem from people being uninformed, intentionally misinformed, or a bit of both.
Furthermore, split-ballot survey experiments have shown that even when Americans do seem to possess accurate knowledge of scientific consensus (however large or small that number may be for a given issue) there is no guarantee that they will integrate that knowledge into their attitudes or policy preferences (17). In other words, these respondents know what the scientific community has established as fact, but they nonetheless refuse to “know” it. Some have therefore argued that rejection of scientifically accurate accounts of the Big Bang or evolution by nonexpert audiences indicates neither a lack of information about scientific consensus nor the presence of misinformation, but, rather, motivated information processing (18).
Conspiratorial beliefs.
Another potential problem is persistent belief in conspiracy theories, or theories in which explanations for events and phenomena offer “as a main causal factor a small group of persons (the conspirators) acting in secret for their own benefit, against the common good” (19). Conspiratorial beliefs can thus involve not only a willful rejection of scientific consensus but also false attributions of intent to members of the scientific community, as well as the fabrication of relationships between actors. For this reason, conspiratorial beliefs are typically understood as distinct from simple ignorance or misperception about isolated facts. Many of us believe in facts that turn out to be wrong. For instance, 7 in 10 Americans falsely attribute the statement “I can see Russia from my house” to Sarah Palin instead of Saturday Night Live’s Tina Fey (20). Many individuals, however, would adjust their views about this fact when presented with information showing their initial beliefs to be wrong. However, individuals who endorse conspiracy theories often refuse to adjust their belief systems when they are confronted with new and better information contradicting their misunderstandings (21).
Importantly, there is evidence to suggest that additional, traditional education will not be enough to dispel belief in conspiracy theories. For example, conspiratorial beliefs and inaccurate beliefs about scientific issues such as vaccine safety and climate change have also been linked to certain “epistemic beliefs,” or broader convictions about how people can and should come to know what is true. Specifically, people who “[put] more faith in their ability to use intuition to assess factual claims than in their conscious reasoning skills” are particularly likely to support conspiracy theories, whereas people who believe that empirical evidence is needed to validate truth claims exhibit the opposite tendency (10). This is true even when issues have become politicized, leading to the conclusion that belief in conspiracies is “[f]ar from being an aberrant expression of some political extreme or a product of gross misinformation [but rather] a widespread tendency across the entire ideological spectrum” (22).
It might be argued that survey respondents who express belief in conspiracy theories do not actually believe the falsehoods, but rather that they endorse certain conspiratorial views as a means of expressing their political or ideological allegiances, or to engage in a sort of out-group “mudslinging” (e.g., by claiming that Barack Obama is not American). Recent research, however, supports the claim that individuals actually do hold the conspiratorial beliefs they assert and are not simply engaging in “expressive survey responding” (23). Unfortunately, conspiratorial beliefs often persist because the falsehoods which help to sustain them are repeated and “boosted” by politicians, corporate actors, fringe media organizations, and others to mobilize political support from their base (7).
How Does Misinformation Take Root, and Why Does It Persist?
Thus far our discussion has outlined the various ways in which people can lack accurate beliefs about science. Based on this overview, factors known to be associated with these varying epistemic states and proposed solutions can be examined across three levels of analysis: individual, group, and sociostructural.
Individual-Level Roots of Misinformation.
Recent attempts to combat widespread misinformation have primarily focused on citizens’ ability to recognize misinformation or disinformation and to correct their own views accordingly. As a result, proposed solutions often focus on the supply side of news, ranging from increased access to fact-checking sites to changing algorithms to stem the flow of fake news through various online channels (24).
The (in)ability to recognize misinformation.
Implicitly, most approaches to algorithmic curation of facts assume that citizens are misinformed because they are unable to sift through and critically evaluate information in emerging (social) media environments. There is no doubt that low levels of media literacy among citizens are part of the problem (25).
News and media literacy has been broadly defined as “the ability to access, analyze, evaluate and create messages in a variety of forms” (26). Arguably, it is the “evaluation” skill that poses the most relevant challenge for misinformation, as those with limited ability to evaluate “cannot distinguish dated, biased or exploitative sources” (26). A recent assessment of American students’ media literacy demonstrates that the vast majority of them struggle to (i) recognize the possible biases of politically charged tweets and (ii) distinguish between a news story and news-like advertisement (27). Moreover, as the Pew Research Center (28) reports, one in four (23%) American adults admitted to sharing misinformation via social media.
These circumstances have led some to argue that “the ultimate check against the spread of rumor, pernicious falsehood, disinformation, and unverified reports masquerading as fact” is a “generation of astutely educated news consumers” who can also function as competent digital content producers, and who can “identify for themselves fact-and-evidence-based news and information” (29). Interestingly, others have critiqued proposals to increase media literacy by noting that these efforts have the potential to backfire, as “some media literacy skills could be used to justify belief in misinformation [and that] elite discourse about ‘fake news’ may decrease the public’s trust in the media and their ability to recognize actual news stories without facilitating the identification of fake news” (30). Consequently, these researchers suggest pairing news media literacy education with “activities designed to spur discussion about political issues.”
Indeed, recent events such as the 2016 presidential election have brought increasing public attention to the role of social media in structuring and presenting information in such a way that may limit an individual’s ability to assess the quality and usefulness of information, and to distinguish between fact and fiction. In response to mounting criticism of this kind, Facebook and others have proposed a dizzying array of technical remedies intended to bolster users’ ability to identify misinformation and, generally, to make it easier for their users to have more “positive” information encounters. Unfortunately, their hasty attempts to defuse criticism have often backfired or have the potential to backfire, which they have sometimes admitted (31).
For example, Facebook recently announced that it will prioritize the display of content that has been shared by users’ friends and family members, and that they will see “less public content, including news, video and posts from brands,” in an effort to offer users more “meaningful connections,” and to ensure that Facebook is a force for good (32). However, this change may cause people to see “more content that reinforces their own ideologies” (33), and, in countries where these specific technical changes were piloted, frustrated users have reported that the modifications actually promoted the spread of fake news (34).
As technology companies continue to grapple with changes to their algorithms and interfaces, third-party fact-checking groups such as PolitiFact.com and Factcheck.org have also emerged to boost people’s abilities to debunk misinformation (35), and efforts are ongoing to offer reliable, automated “deception detection” for both text and images (36, 37). Furthermore, Google is working on real-time “fact check snippets” that appear as individuals search for disputed information (38), and computer scientists are devising solutions to automatically detect and combat the influence of “bots” (robots), which have been shown to successfully spread fake news with real-world consequences in both politics and the stock market (39). Finally, there is some evidence that correcting misinformation via an algorithmically driven “related stories” function on social media platforms can reduce misperceptions of science-related information (40).
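To make the idea of automated “deception detection” for text more concrete, the sketch below treats misleading-headline detection as a simple supervised text-classification task. This is an illustration only, not any of the systems cited above: the handful of labeled headlines is hypothetical, and production tools draw on far richer signals (source metadata, network behavior, image forensics) than word counts.

```python
# Minimal sketch of text-based "deception detection" as supervised classification.
# Illustrative only; the tiny labeled corpus below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = misleading, 0 = credible.
headlines = [
    "Miracle cure reverses aging overnight, doctors stunned",
    "New trial reports modest improvement in heart failure outcomes",
    "Secret government study proves vaccines cause autism",
    "Meta-analysis finds no link between vaccines and autism",
]
labels = [1, 0, 1, 0]

# Bag-of-words features plus a linear classifier: a common baseline in the
# fake-news detection literature, far simpler than deployed systems.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Score an unseen headline; the output is the estimated probability it is misleading.
print(model.predict_proba(["Scientists baffled as one weird trick cures cancer"])[0][1])
```

In practice, the hard problems lie less in the classifier than in assembling trustworthy labeled data and in handling content that is misleading without containing verifiably false statements.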
Technical innovations like the ones above have rightly been advocated as possible solutions to the spread of misinformation. However, one unfortunate and defining feature of the posttruth era is that “facts and objective evidence are trumped by existing beliefs and prejudices,” such that “a notable segment of the American public now subscribes to a non-standard epistemology that does not meet conventional criteria of evidentiary support” (41). If it is indeed the case that facts no longer matter as much as they normatively should to some Americans, then technical solutions that make facts more recognizable or more visible will need to be supplemented by strategies that pair individual ability-focused solutions with solutions that address individuals’ (lack of) motivation to seek out, consume, and interpret information in ways that privilege accuracy over other possible goals, such as the protection of their preexisting beliefs.
Motivations to recognize inaccurate information … or not.
Beyond issues of ability, there are also psychological factors that contribute to people becoming misinformed, and these factors can make it difficult for people to identify for themselves what is fact versus fiction. Specifically, individuals are more likely to accept information that appears to follow a logical narrative, that comes from a source they perceive to be “credible,” that is consistent with their preexisting values and beliefs, and that seems to be something other people believe (42). The psychological unease or inconvenience of encountering worldview-challenging information can produce a desire to minimize feelings of “cognitive dissonance,” which can lead to biased perception and information processing that complicates the recognition and rejection of falsehoods (43).
Among other strategies, people may grapple with the complexity of the external world by engaging in selective exposure and motivated reasoning. Selective exposure refers to the act of choosing to read or view belief-consistent information over belief-inconsistent information (when given the choice), and there is evidence that such selectivity occurs among strong partisans who are especially knowledgeable about politics and who gravitate more toward news sources that mirror their preexisting views (44). Importantly, selective exposure is not limited to politics and has also been shown to occur, for example, as individuals seek information about scientific topics (45). When individuals expose themselves primarily to media sources that (knowingly or unknowingly) convey falsehoods, there is some evidence that selective exposure plays a role in keeping people misinformed, as shown among Fox News viewers during the Iraq war (46).
Although individuals may indeed engage in selective exposure in some circumstances, recent research suggests that this is not widespread in individuals’ everyday news consumption. A study tracking individuals’ online news use over time, for example, found no evidence of partisan selective exposure, instead concluding that the “online political news audience tends to overwhelmingly congregate within a handful of popular, brand-name news sites… [and that] all sites in the sample, including the more obscure, more partisan political news outlets, attract ideologically diverse audiences in proportion with the overall online audience” (47).
Still, as decades of research on motivated reasoning in political science (48), science communication (49), and other fields have shown, even when facts are not filtered out by selective exposure or “filter bubbles,” they can be interpreted in very different ways by different audiences. Even if ideologically diverse audiences are exposed to the same content, as the above study indicates, they may not engage with it for the same reasons or with the same outcomes (e.g., some Democrats may be “hate-reading” news on Breitbart News Network). Individuals, in other words, engage in goal-directed processing of new information to protect preexisting values, beliefs, and ideologies (50). When such directional goals influence reasoning processes, individuals are prone to “biased assimilation,” which is characterized by confirmation and disconfirmation bias, or the parallel tendencies to privilege information that is consistent with one’s predispositions and to discredit information that seems contradictory (51). As with selective exposure, motivated reasoning can contribute to an individual becoming misinformed, and it can occur not only in political contexts but also when individuals process information about science and emerging technologies (52–54).
Evidence also suggests that motivated reasoning is most likely to occur among individuals who have the most sophistication and knowledge of the topic at hand, further polarizing views among different publics (55). Unfortunately, simply providing individuals with corrective, factual information is not guaranteed to fix misperceptions, as “strongly entrenched beliefs” are often likely to “survive the addition of non-supportive evidence” (51). In fact, there is some evidence to suggest that attempts to correct misinformation about both political and scientific topics among individuals with the most strongly held beliefs can instead backfire, entrenching them further in their false views (56). However, some recent experimental work assessing people’s beliefs about eight ideologically polarized issues (none of which were science-related) has not been able to replicate this “backfire effect.” One interpretation of this finding is that individuals feel no need to “counter-argue” factual corrections (as the backfire effect suggests) because their ideological commitments may have more to do with affect than with evidence-based reasoning (57).
The role of emotion.
This brings us to discussions of the influence of affect in motivated reasoning processes. There is some evidence that a person’s emotional state can shape the accuracy of his or her beliefs. In recent experimental work, angry partisans who saw uncorrected political misinformation from their own party held less accurate beliefs than emotionally neutral partisans, raising concern that anger can facilitate belief in falsehoods, which might be “especially troubling given that anger also depresses information seeking and increases selective exposure” (58). However, when misinformation from an angry person’s in-party was coupled with a correction, there were no significant differences in belief accuracy between the two groups, suggesting that certain kinds of correctives may be effective despite emotional arousal.
Research also suggests that emotional states—especially anger—can interact with individuals’ ideologies and the information environment (e.g., the presence or absence of correctives, and in-group or out-group source cues) to influence people’s encounters with (mis)information, potentially exacerbating their belief in falsehoods and shaping how (mis)information is assimilated into their worldviews. Furthermore, recent research about the spread of falsehoods online has revealed that false rumors tend to elicit more feelings of surprise and disgust than the truth (24). Given that falsehoods were also shared more frequently, it is plausible that certain emotional states have greater power to inspire individuals to share information (24). Notably, individuals’ attraction to emotionally charged content is not limited to politics, and even when it comes to scientific discoveries, individuals are more inclined to spread information that has a greater emotional impact (59).
Addressing misinformation at the individual level.
As argued earlier, individuals may lack not only the ability to recognize and evaluate misinformation but also the motivation to do so. Of course, citizens are constrained in their choices and behaviors by the information environments and institutions that surround them and that often produce content that is intentionally designed to circumvent the motivations of even the most well-intentioned news consumer. Acknowledging this additional wrinkle, research has started to examine ways to reduce motivational influences on how facts are processed.
For example, one proposal to achieve this might be to structure information environments in a manner that encourages accountability, since individuals are more likely to engage in effortful attempts to understand multiple sides of an issue—including on scientific topics—when they expect that their views will be challenged by others (60). Furthermore, instead of showcasing factual correctives or “the other side” of an argument sourced from out-group members or even an algorithm, it may be especially beneficial to source such content from like-minded others (e.g., “co-partisans”), whose arguments may be evaluated as more convincing (4). There may also be some value in disincentivizing expressions of partisan anger and outrage so they cannot be leveraged by disinformation campaigns to exacerbate biased assimilation of information (61).
Misinformation in Groups and Informational Cascades.
While cues about what others think are of course communicated through traditional media, they are also made salient through people’s social networks, where individuals “selectively disclose” information in biased ways (e.g., sharing only their achievements), thus contributing to shared misperceptions about the attitudes and behaviors that are socially “normal” or most prevalent (62). Importantly, the beliefs that we think are most widespread are often the same ones that are most repeated, which also makes them most familiar to us, and we tend to assume that familiar information is more reliable (42). Furthermore, communications that repeat misinformation in order to correct it can backfire in the long term, as people forget the details of the corrective to which they were briefly exposed and instead rely on the now-increased familiarity of a false claim when forming opinions (63). To this point, a recent study on “false rumors” about healthcare reform found that although debunking misinformation is possible, it is nonetheless risky, as “merely repeating a rumor increases its power,” largely by increasing its familiarity (64).
Given that people’s attitudes and beliefs are particularly tenacious within homogeneous social groups (65), insular social networks can be especially ripe for misinformation, in that homogeneity can make acceptance of a falsehood appear socially “normal” by decreasing the visibility and familiarity of contradictory information. For example, work on rumor transmission and persistence in the face of ambiguity shows that social network “clustering” can contribute to clusters of belief in specific rumor variations—defined here as “unverified information statements”—and that further discussion of a rumor within a given social cluster can result in increased confidence in the rumor’s veracity (66). Although this study does not focus specifically on science, it is worth considering the possibility that belief in science-related rumors might spread and cluster in similar ways as individuals respond to real or perceived ambiguity regarding certain scientific topics.
Moreover, other research on the structure of social networks has shown that certain network configurations characterized by high visibility for certain nodes—as found on Twitter—can increase the power of those nodes “to skew the observations of many others,” leading to some beliefs being perceived by users as more prevalent in a network than they really are (67). Similarly, work on the transmission of “rumors with different truth values” on Facebook has found that “rumor cascades run deeper in the social network than reshare cascades in general,” demonstrating “how readily and rapidly information can be transmitted online, even when it is of dubious accuracy” (68). It can therefore be argued that the defining features of social media technologies (i.e., the ability to establish desired networks and to share or discuss information within one’s chosen network) are the same features that make it possible for nefarious actors to exploit processes of collective sense making to spread misinformation (4).
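A small simulation can make the structural point about node visibility concrete. The sketch below is an illustration under assumed parameters, not a reanalysis of the cited studies: it places a belief only in the most highly connected 2% of a synthetic scale-free network and then compares the belief’s true prevalence with the prevalence an average user would infer from looking only at his or her own neighbors.

```python
# Illustrative simulation of how a few highly visible nodes can skew what the
# rest of a network "observes"; parameters are assumptions, not empirical values.
import networkx as nx

# A scale-free network: a handful of hubs connect to many other nodes.
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

# Suppose only the 20 highest-degree nodes (2% of the network) hold a belief.
hubs = sorted(G.nodes, key=G.degree, reverse=True)[:20]
holds_belief = {node: (node in hubs) for node in G.nodes}

true_prevalence = sum(holds_belief.values()) / G.number_of_nodes()

# Perceived prevalence: for each node, the share of its neighbors holding the belief.
perceived = [
    sum(holds_belief[nbr] for nbr in G.neighbors(node)) / G.degree(node)
    for node in G.nodes
]
avg_perceived = sum(perceived) / len(perceived)

print(f"true prevalence: {true_prevalence:.2%}")            # 2%
print(f"average perceived prevalence: {avg_perceived:.2%}")  # substantially higher
```

Because the hubs appear in many users’ neighborhoods, the average perceived prevalence ends up several times higher than the true one, which is the kind of skew described above.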
Such nefarious actors can be human—as in the case of the individuals and groups who intentionally fanned political fires during the 2016 US presidential election—but they can also be machines, as evidenced by the armies of Russian propaganda bots that we now know have infiltrated Twitter and Facebook (69). Perhaps it could be argued that, if belief in falsehoods will result whether humans or bots are the ones spreading information, then the distinction does not matter. However, the distinction will matter in terms of proposing solutions, and, unfortunately, a recent assessment of the prevalence and spread of misinformation online reports that humans, rather than robots, are mostly to blame for spreading falsehoods (24).
However, there is some evidence that group-level cues in social media can be useful in correcting misinformation, at least in some circumstances. For example, there is reason to believe that social media news consumers look to social endorsements (i.e., “likes” and “shares” from peers in their network) when selecting content for perusal, above and beyond their ideological preferences (70). Therefore, if individuals could be encouraged—through technical or nontechnical means—to recognize homophily in their social networks and to develop more diverse contacts, as some have proposed (30), then the group-based cues on social media may actually encourage individuals to view more diverse content than they would otherwise.
Communication Dynamics at the Societal Level.
Politicians and other political actors have a long history of spreading misinformation (or even disinformation) to shape public opinion in their favor. A recapitulation of this work is beyond the focus of this paper. What is relevant to the problem at hand, however, is how the role of mass media as a potential corrective agent against such misinformation has changed over time, and how these changes have engendered structural realities that can contribute to Americans’ becoming misinformed.
To start, it is worth noting that American media have not always striven for “objectivity.” Early print media delivered unabashedly slanted and misleading information during the “party press” era of the early 19th century, “when advertising and subscriptions brought in little revenue to many newspapers, [and] political support was invaluable,” to the point where “editors frowned on impartiality” (71). Although US newspapers eventually committed themselves to “the truth,” this development was not driven exclusively by normative democratic considerations, but likely also by the rise of printing presses (72) and by shifting entrepreneurial and political activity that encouraged support for the “objectivity” of market logic and advertiser-supported papers (73). Since then, America’s self-described “independent” or “objective” presses have relied more and more heavily on advertiser funding over the course of the 20th century, and this trend has intensified as paid readership has declined (74).
As with print media, the evolution of radio, television, and the Internet has also been heavily shaped by commercial actors, which have increasingly incentivized the creation of tailored content capable of attracting and segmenting audiences for targeted advertising (75–78). Notably, commercial actors online have steered technology companies away from subscription-based services and toward advertising-based revenue models (76), such that organizations like Facebook now rely on highly sophisticated data collection and “profiling” tools to catalog users’ preferences for the sale of hypertargeted ads.
As younger audiences worldwide flock to social media and other seemingly “free” sources of news (79), legacy media organizations that must compete with social media for advertisers’ support are pressured to offer similar targeting services, and we thus see traditional news producers driving audiences to online versions of their stories rather than to newspapers or television broadcasts (80). The intensity of modern commercial pressures on traditional news media was perhaps best summarized by Axios cofounder and former Washington Post political correspondent Jim VandeHei in an interview with the New York Times: “Survival … depends on giving readers what they really want, how they want it, when they want it, and on not spending too much money producing what they don’t want” (81).
These changing economic realities are part of what some have described as “social mega-trends” (41), which arguably contribute to the spread of misinformation in the United States: (i) a decline in social capital, (ii) growing economic inequalities, (iii) increasing political polarization, (iv) declining trust in science, (v) politically asymmetric credulity (i.e., conservatives and liberals are differently susceptible to misinformation), (vi) evolution of the media landscape (e.g., filter bubbles, incivility, and heightened outrage), and (vii) a fragmentation of the media that rewards political extremism.
In previous sections we already provided a more refined look at areas in which empirical realities might be at odds with some of these fairly broad claims. This included discussions of how the relationship between motivated reasoning and belief in falsehoods likely differs depending on factors such as anger, and of the mixed evidence, at best, for conservatives’ and liberals’ differential responses to misinformation (30). Similarly, SEI survey data suggest that US public trust in science as an institution has not decreased over time and is in fact higher than trust in most other institutions, except for the military (8). Finally, emerging bodies of research also challenge the idea that partisan online filter bubbles or “echo chambers” play a crucial role as breeding grounds for misinformation (61).
As a result, this emerging literature offers limited lessons for news organizations navigating changing information environments. Beyond episodic accounts of readers cancelling subscriptions in protest, for instance, there is little systematic evidence supporting the idea that audiences automatically abandon sources that occasionally deliver news that might be at odds with readers’ preferences or beliefs. At the same time, the commercial pressures driving shifts toward news that is microtargeted to specific audiences (based on consumer preferences, prior viewing behavior, and a host of other factors) are likely here to stay. It remains to be seen how the interactions among audience preferences, journalistic values, and economic realities shape the system of societal information exchange in the long run.
These uncertainties are partly a function of research about misinformation—across all three levels of analysis—being conducted in different issue domains, which raises questions about the extent to which findings from political contexts can generalize to scientific contexts and vice versa. Interestingly, research suggests that there are in fact many parallels in how audiences deal with potential misinformation about science and politics. Studies using very similar designs to test selective exposure in political (44) and scientific (45) contexts, for example, found parallel patterns of selectivity based on prior beliefs, even for scientific issues that have not been engulfed in political controversy, such as nanotechnology. Similarly, results from studies on motivated reasoning suggest that differences in how audiences process (mis)information in scientific vs. political contexts might be even less pronounced for scientific issues that have been surrounded by significant political disagreements, including evolution (82), vaccine mandates (83), or stem cell research (53).
Outlook
Above, we discussed what it means for citizens to be misinformed about science, and what the processes are that may exacerbate or alleviate some of the strains on our democratic system originating from misinformation. Before shifting to implications of this existing body of work for the field of empirical science communication research, it is important to briefly outline a few nonobvious and potentially understudied actors that might—intentionally or not—contribute to the ability of misinformation to take hold in public discourse.
Collateral Influences.
Actors that might inadvertently contribute to misinformation spreading among nonexpert audiences include scientists themselves, universities and science journalists, and, finally, readers of science news. As we discuss below, research is just beginning to understand the different roles that each group can play in ensuring that the best available evidence is heard in public debates about science.
Scientists.
Some scholars have argued that decreasing public and policy support for science, among other factors, has created new incentives for scientific organizations to use mass media and other public-facing channels to promote particular findings or areas of research. Media, in turn, rely on celebrity scientists as resources for newsworthy accounts of breakthrough science (49). This has engendered concerns that hype and overclaims in press releases and other scientific communication can lead to misperceptions among nonexpert audiences about the true potential of emerging areas of science, especially if science is unable to deliver on early claims of cures for diseases, for instance.
While there is limited empirical work on how hype or overclaiming affects misperceptions that science is wrong or produces contradictory findings (84), survey data suggest that these kinds of concerns are not completely unfounded. In national surveys, one in four Americans (27%) think that it is a “big problem” and almost half of Americans (47%) think it is at least a “small problem” that “[s]cience researchers overstate the implications of their research.” Only one in four (24%) see no problem. In other words, while levels of confidence in the scientific community remain high (11), science may run the risk of undermining its position in society in the long term if it does not navigate this area of public communication carefully and responsibly.
Media.
While the scientific community continues to enjoy a great deal of trust, public confidence in the press has declined substantially over past decades. In surveys, fewer than 1 in 10 Americans (8%) express a “great deal of confidence” in the press (8). Similarly, three in four Americans (73%) think that “[t]he biggest problem with news about scientific research findings is the way news reporters cover it” (12). Without a doubt, these poll numbers do not adequately capture all nuances of public trust in science or journalism as institutions (85). Even as imperfect indicators, however, they do not bode well for media’s ability to better inform various publics about science or to even counter misinformation among publics that are particularly critical of science.
Part of the problem is related to a well-documented decline in science journalism and “the trend among many media organizations to no longer use (full-time) science journalists” (11). As a result, coverage of scientific issues has often become the responsibility of political reporters, business writers, and journalists in other nonscientific beats (86). Paired with an increasingly polarized political environment, this has also promoted what some have called “false balance” (87). The term refers to reporting—often by nonscience journalists—that puts established scientific consensus around issues like genetically modified foods or climate change on equal footing with nonfactual claims by think tanks or interest groups for the sake of “showing both sides.”
Ironically, legacy media might also inadvertently be promoting their own demise by repeatedly and increasingly raising the specter of “fake news,” which myriad political actors have framed strategically as a disease of traditional media specifically to discredit them. Indeed, a search on Lexis Nexis for appearances of “fake news” in newspaper coverage in the United States and globally (Fig. 1) reveals a clear increase in the yearly frequency with which newspapers have used this specific phrase, arguably making its false connotations more familiar, and therefore more believable.
Fig. 1. Lexis Nexis appearances of “fake news” in newspaper coverage in the United States and globally show an increase in the yearly frequency with which newspapers have used this specific phrase and have arguably given prominence to its false connotations among audiences.
This is not particularly surprising. Media are in the business of reporting on political discourse, including potential accusations of bias in their own coverage. Recent research, however, also suggests that frequent discussion of elite discourse about fake news in news media may affect whether individuals trust news media (88). As a result, media may inadvertently undermine public trust in their own work by providing a forum for accusations of bias and fake news put forth by political elites.
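For readers interested in how a trend like the one in Fig. 1 is assembled, the minimal sketch below counts yearly mentions of a phrase in a small, hypothetical corpus of dated article texts. It is a toy stand-in for the actual workflow, which relied on the Lexis Nexis database rather than a local list of articles.

```python
# Sketch of a yearly phrase-frequency count of the kind behind Fig. 1,
# using a hypothetical local corpus of (year, text) pairs.
from collections import Counter

# Hypothetical corpus: publication year paired with article text.
articles = [
    (2015, "Commentators dismissed the story as fake news."),
    (2016, "Accusations of fake news dominated the campaign."),
    (2016, "The editorial avoided the term entirely."),
    (2017, "Fake news became a label hurled at mainstream outlets."),
]

phrase = "fake news"
yearly_counts = Counter(
    year for year, text in articles if phrase in text.lower()
)

for year in sorted(yearly_counts):
    print(year, yearly_counts[year])
```

A database search of this kind counts surface mentions only; it cannot by itself distinguish reporting that debunks the “fake news” label from coverage that amplifies it.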
Consumers of science news.
A third group central to the dynamics surrounding misinformation about science are consumers of science news themselves. A growing body of research suggests that the comments left by readers on news websites are frequently uncivil in tone, and that uncivil commenters are less likely to use evidence than civil ones (89). Even when the evidence that is being presented in user comments is held constant, however, uncivil comments following a science news article can promote perceptions among readers that the article itself is biased and can polarize readers based on their initial attitudes toward the technology that is being discussed (90, 91).
In other words, the nature of interactions among science news consumers can overshadow the quality of the information that is presented in an article or discussed in associated comments, and it can thus influence whether the information itself is seen as credible enough to be integrated into belief systems. Research is beginning to understand the processes behind these mechanisms as well as potential solutions (92), but there is still a very limited research base on how news environments that provide readers with real-time cues on popularity or partisan agreement (such as user comments, likes, or retweets) ultimately influence information processing and knowledge intake.
Misinformation: Items for a Science Communication Research Agenda.
The current research base on misinformation points to at least three strands of thinking to guide future work. Two of these strands echo recommendations from the National Academies of Sciences, Engineering, and Medicine’s report Communicating Science Effectively: A Research Agenda (93).
Systems approach.
The Academies’ Communicating Science Effectively report highlighted the need for a systems approach to the study of science communication—that is, the need for research that treats science communication as a multilevel problem. This suggestion acknowledges the fact that misinformation among individual citizens does not occur in a vacuum. Instead, individual misperceptions emerge in group-level processes as part of social networks, and they are embedded in and shaped by societal dynamics (such as political events or campaigns), as discussed above.
As a result, both researchers and practitioners of science communication have struggled to identify mechanisms for better tackling misinformation—especially in highly contentious areas of science—that generalize across issues and contexts. Particularly, there is limited research that examines how mechanisms for effective science communication that were originally studied in laboratory-experimental settings perform in competitive communication settings—for instance, when individuals are faced with competing messages from interest groups or political actors.
Systematic research on emerging media environments.
Calls for a systems approach also highlight a second area where research is needed, as outlined in the Communicating Science Effectively report: research on emerging media environments. There is new urgency to this problem, in light of findings that link use of social media to, for example, lower levels of political knowledge (94), as well as recent work suggesting that misinformation travels faster in social networks than accurate information does (24).
One participant in the first Sackler Colloquium on the Science of Science Communication already highlighted the challenges for both researchers and practitioners resulting from a lack of a systematic corpus of work in this area: “[E]mpirical research examining specifically online science communication processes and outcomes is still scant … [and] many of the “best practices” of online science communication currently exchanged among practitioners are based on experiential evidence rather than on an empirical understanding” (95). Scholars have diagnosed similar research deficits related to misinformation and fake news in online communication environments. Their call to action mirrors systems-approach thinking, urging multidisciplinary efforts to empirically understand “the vulnerabilities of individuals, institutions, and society” to fake news disseminated through ubiquitous online channels (25).
Underserved audiences.
A final area in which systematic empirical work remains scarce concerns mechanisms for reaching audiences that are often underserved by traditional channels for science communication. Newspapers, science television, or even science museums, for instance, tend to reach more educated and higher-income audiences (11). Furthermore, from work dating back to the 1970s on widening knowledge gaps, we also know that citizens with higher socioeconomic status (levels of education and income) are able to learn more efficiently from new information than their peers. As a result, quality scientific information is not only more likely to reach more educated and higher-income audiences, but, when it does, the ability of citizens with higher socioeconomic status to process new information more efficiently can further widen existing gaps between the already information-rich and the information-poor.
Our inability to reach all segments of the population equally well with high-quality scientific information is particularly troubling, given that the need for antidotes to misinformation might be particularly pronounced among certain groups of the public. One indicator of this problem is the educational and income gaps related to seeing fake news as a problem in the first place. In a 2016 Pew survey after the US presidential election (28), almost three in four Americans (73%) with a household income over $75,000 thought that completely made-up news caused “a great deal” of confusion. By comparison, fewer than three in five (58%) respondents with a household income below $30,000 shared that concern. However, socioeconomic gaps also emerge with respect to people’s confidence in their own ability to spot fake news in the first place. A 2018 US survey commissioned by The Economist asked respondents about their ability to distinguish real and fake news (96); 83% of respondents with a family income of at least $100,000 felt “very confident” or “somewhat confident” that they could “tell real news from fake news.” Among respondents with an income of less than $50,000, that number dropped to 63%.
Those self-assessments might be overly optimistic across all segments of the population (27). Nonetheless, gaps in perceptions of fake news as a problem and gaps in the ability to meaningfully access corrective information across groups with different socioeconomic status are troubling based on normative democratic standards. Indeed, they are especially worrisome as new technologies like CRISPR or artificial intelligence (AI) raise political and societal concerns that are of particular relevance to audiences who traditionally have been hard to reach for science communicators. For example, applications of AI for self-driving cars and other forms of automated services are raising concerns about changing labor markets that might disproportionately affect blue-collar workers. Similarly, expensive therapies based on new genome editing techniques raise questions of equitable access to treatments regardless of patients’ income or social status. All of these concerns will require political debates based on the best available scientific evidence. Those debates will have less-than-ideal outcomes if groups who will be significantly affected by these technologies are systematically uninformed or misinformed. Harnessing the science of science communication to ensure that socioeconomic disparities do not impact access to the best available scientific information for policy choices will therefore be crucial for the responsible development of these emerging technologies and new technologies in the future.
Footnotes
The authors declare no conflict of interest.
This paper results from the Arthur M. Sackler Colloquium of the National Academy of Sciences, “The Science of Science Communication III” held November 16–17, 2017, at the National Academy of Sciences in Washington, DC. The complete program and audio files of most presentations are available on the NAS Web site at www.nasonline.org/Science_Communication_III.
This article is a PNAS Direct Submission.
References
- 1.Nuccitelli D. February 12, 2018 The EPA debunked Administrator Pruitt’s latest climate misinformation. The Guardian. Available at https://www.theguardian.com/environment/climate-consensus-97-per-cent/2018/feb/12/the-epa-debunked-administrator-pruitts-latest-climate-misinformation. Accessed June 13, 2018.
- 2.Lewandowsky S, Oreskes N, Risbey JS, Newell BR, Smithson M. Seepage: Climate change denial and its effect on the scientific community. Glob Environ Change. 2015;33:1–13. [Google Scholar]
- 3.Loftus EF. Planting misinformation in the human mind: A 30-year investigation of the malleability of memory. Learn Mem. 2005;12:361–366. doi: 10.1101/lm.94705. [DOI] [PubMed] [Google Scholar]
- 4.Lazer D, et al. 2017. Combating fake news: An agenda for research and action. Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, Cambridge, MA), p 2.
- 5.Berelson BR, Lazarsfeld PF, McPhee WN. Voting: A Study of Opinion Formation in a Presidential Campaign. The Univ of Chicago Press; Chicago: 1954. [Google Scholar]
- 6.Kuklinski JH, Quirk PJ, Jerit J, Schwieder D, Rich RF. Misinformation and the currency of democratic citizenship. J Polit. 2000;62:790–816. [Google Scholar]
- 7.Hochschild JL, Einstein KL. Do facts matter? Information and misinformation in American politics. Polit Sci Q. 2015;130:585–624. [Google Scholar]
- 8.National Science Board 2018. Science and engineering indicators 2018 (National Science Foundation, Alexandria, VA)
- 9.National Academies of Sciences, Engineering, and Medicine . Science Literacy: Concepts, Contexts, and Consequences. National Academies Press; Washington, DC: 2016. p. 166. [PubMed] [Google Scholar]
- 10.Garrett RK, Weeks BE. Epistemic beliefs’ role in promoting misperceptions and conspiracist ideation. PLoS One. 2017;12:e0184733. doi: 10.1371/journal.pone.0184733. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Scheufele DA. Communicating science in social settings. Proc Natl Acad Sci USA. 2013;110:14040–14047. doi: 10.1073/pnas.1213275110. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Funk C, Gottfried J, Mitchell A. 2017. Science news and information today (Pew Research Center, Washington, DC)
- 13.Turri J. Is knowledge justified true belief? Synthese. 2012;184:247–259. [Google Scholar]
- 14.Brossard D, Nisbet MC. Deference to scientific authority among a low information public: Understanding US opinion on agricultural biotechnology. Int J Public Opin Res. 2007;19:24–52. [Google Scholar]
- 15.Funk C, Rainie L, Page D. 2015. Public and scientists’ views on science and society (Pew Research Center, Washington, DC)
- 16.National Academies of Sciences, Engineering, and Medicine. Genetically Engineered Crops: Experiences and Prospects. National Academies Press; Washington, DC: 2016. p. 420.
- 17.Bhattacharjee Y. Scientific literacy. NSF board draws flak for dropping evolution from Indicators. Science. 2010;328:150–151. doi: 10.1126/science.328.5975.150.
- 18.Kahan DM. ‘Ordinary science intelligence’: A science-comprehension measure for study of risk and science communication, with notes on evolution and climate change. J Risk Res. 2017;20:995–1016.
- 19.Uscinski JE, Klofstad C, Atkinson MD. What drives conspiratorial beliefs? The role of informational cues and predispositions. Polit Res Q. 2016;69:57–71.
- 20.Cacciatore MA, et al. Misperceptions in polarized politics: The role of knowledge, religiosity, and media. PS Polit Sci Polit. 2014;47:654–661.
- 21.van Prooijen JW. Why education predicts decreased belief in conspiracy theories. Appl Cogn Psychol. 2017;31:50–58. doi: 10.1002/acp.3301.
- 22.Oliver JE, Wood TJ. Conspiracy theories and the paranoid style(s) of mass opinion. Am J Pol Sci. 2014;58:952–966.
- 23.Berinsky AJ. Telling the truth about believing the lies? Evidence for the limited prevalence of expressive survey responding. J Polit. 2018;80:211–224.
- 24.Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018;359:1146–1151. doi: 10.1126/science.aap9559.
- 25.Lazer DMJ, et al. The science of fake news. Science. 2018;359:1094–1096. doi: 10.1126/science.aao2998.
- 26.Livingstone S. Media literacy and the challenge of new information and communication technologies. Commun Rev. 2004;7:3–14.
- 27.Wineburg S, McGrew S, Breakstone J, Ortega T. 2016. Evaluating information: The cornerstone of civic online reasoning (Stanford History Education Group, Stanford Univ, Stanford, CA)
- 28.Barthel M, Mitchell A, Holcomb J. 2016. Many Americans believe fake news is sowing confusion (Pew Research Center, Washington, DC)
- 29.Klurfeld J, Schneider H. 2014. News literacy: Teaching the internet generation to make reliable information choices (The Brookings Institution, Washington, DC)
- 30.Vraga EK, Bode L. Leveraging institutions, educators, and networks to correct misinformation: A commentary on Lewandowsky, Ecker, and Cook. J Appl Res Mem Cogn. 2017;6:382–388.
- 31.Smith J, Jackson G, Raj S. 2017. Designing against misinformation (Facebook Design, Menlo Park, CA)
- 32.Mosseri A. 2018. Helping ensure news on Facebook is from trusted sources (Facebook Newsroom, Menlo Park, CA)
- 33.Isaac M. January 11, 2018 Facebook overhauls news feed to focus on what friends and family share. New York Times. Available at https://www.nytimes.com/2018/01/11/technology/facebook-news-feed.html. Accessed June 15, 2018.
- 34.Frenkel S, Casey N, Mozur P. January 14, 2018 In some countries, Facebook’s fiddling has magnified fake news. New York Times. Available at https://www.nytimes.com/2018/01/14/technology/facebook-news-feed-changes.html. Accessed June 15, 2018.
- 35.Graves L. Deciding What’s True: Fact-Checking Journalism and the New Ecology of News. Columbia Univ Press; New York: 2016.
- 36.Conroy NJ, Rubin VL, Chen Y. Proceedings of the Association for Information Science and Technology. Wiley; New York: 2015. Automatic deception detection: Methods for finding fake news; pp. 1–4.
- 37.Gupta A, Lamba H, Kumaraguru P, Joshi A. Proceedings of the 22nd International Conference on World Wide Web. Assoc for Computing Machinery; New York: 2013. Faking Sandy: Characterizing and identifying fake images on Twitter during Hurricane Sandy; pp. 729–736.
- 38.Weise E. April 10, 2017 We tried Google’s new fact-check filter on the Internet’s favorite hoaxes. USA Today. Available at https://www.usatoday.com/story/tech/news/2017/04/10/google-fact-check-snopes-politifact-factcheck/100263464/. Accessed June 13, 2018.
- 39.Ferrara E, Varol O, Davis C, Menczer F, Flammini A. The rise of social bots. Commun ACM. 2016;59:96–104.
- 40.Bode L, Vraga EK. In related news, that was wrong: The correction of misinformation through related stories functionality in social media. J Commun. 2015;65:619–638.
- 41.Lewandowsky S, Ecker UK, Cook J. Beyond misinformation: Understanding and coping with the “post-truth” era. J Appl Res Mem Cogn. 2017;6:353–369.
- 42.Lewandowsky S, Ecker UK, Seifert CM, Schwarz N, Cook J. Misinformation and its correction: Continued influence and successful debiasing. Psychol Sci Public Interest. 2012;13:106–131. doi: 10.1177/1529100612451018.
- 43.Festinger L. A Theory of Cognitive Dissonance. Stanford Univ Press; Stanford, CA: 1957.
- 44.Iyengar S, Hahn KS. Red media, blue media: Evidence of ideological selectivity in media use. J Commun. 2009;59:19–39.
- 45.Yeo SK, Xenos MA, Brossard D, Scheufele DA. Selecting our own science: How communication contexts and individual traits shape information seeking. Ann Am Acad Pol Soc Sci. 2015;658:172–191.
- 46.Kull S, Ramsay C, Lewis E. Misperceptions, the media, and the Iraq war. Polit Sci Q. 2003;118:569–598.
- 47.Nelson JL, Webster JG. The myth of partisan selective exposure: A portrait of the online political news audience. Soc Media Soc. 2017;3:2056305117729314.
- 48.Druckman JN. The politics of motivation. Crit Rev A J Polit Soc. 2012;24:199–216.
- 49.Scheufele DA. Science communication as political communication. Proc Natl Acad Sci USA. 2014;111:13585–13592. doi: 10.1073/pnas.1317516111.
- 50.Kunda Z. The case for motivated reasoning. Psychol Bull. 1990;108:480–498. doi: 10.1037/0033-2909.108.3.480.
- 51.Lord CG, Ross L, Lepper MR. Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. J Pers Soc Psychol. 1979;37:2098–2109.
- 52.Druckman JN, Bolsen T. Framing, motivated reasoning, and opinions about emergent technologies. J Commun. 2011;61:659–688.
- 53.Ho SS, Brossard D, Scheufele DA. Effects of value predispositions, mass media use, and knowledge on public attitudes toward embryonic stem cell research. Int J Public Opin Res. 2008;20:171–192.
- 54.Kraft PW, Lodge M, Taber CS. Why people “don’t trust the evidence”: Motivated reasoning and scientific beliefs. Ann Am Acad Pol Soc Sci. 2015;658:121–133.
- 55.Lodge M, Taber CS. The Rationalizing Voter. Cambridge Univ Press; Cambridge, UK: 2013.
- 56.Nyhan B, Reifler J. When corrections fail: The persistence of political misperceptions. Polit Behav. 2010;32:303–330.
- 57.Wood T, Porter E. The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Polit Behav. 2018:1–29.
- 58.Weeks BE. Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. J Commun. 2015;65:699–719.
- 59.Milkman KL, Berger J. The science of sharing and the sharing of science. Proc Natl Acad Sci USA. 2014;111:13642–13649. doi: 10.1073/pnas.1317511111.
- 60.Xenos MA, Becker AB, Anderson AA, Brossard D, Scheufele DA. Stimulating upstream engagement: An experimental study of nanotechnology information seeking. Soc Sci Q. 2011;92:1191–1214.
- 61.Garrett RK. The “echo chamber” distraction: Disinformation campaigns are the problem, not audience fragmentation. J Appl Res Mem Cogn. 2017;6:370–376.
- 62.Kitts JA. Egocentric bias or information management? Selective disclosure and the social roots of norm misperception. Soc Psychol Q. 2003;66:222–237.
- 63.Schwarz N, Sanna LJ, Skurnik I, Yoon C. Metacognitive experiences and the intricacies of setting people straight: Implications for debiasing and public information campaigns. Adv Exp Soc Psychol. 2007;39:127–161.
- 64.Berinsky AJ. Rumors and health care reform: Experiments in political misinformation. Br J Polit Sci. 2017;47:241–262.
- 65.Huckfeldt RR, Mendez JM, Osborn T. Disagreement, ambivalence, and engagement: The political consequences of heterogeneous networks. Polit Psychol. 2004;25:65–95.
- 66.DiFonzo N, et al. Rumor clustering, consensus, and polarization: Dynamic social impact and self-organization of hearsay. J Exp Soc Psychol. 2013;49:378–399.
- 67.Lerman K, Yan X, Wu X-Z. The “majority illusion” in social networks. PLoS One. 2016;11:e0147617. doi: 10.1371/journal.pone.0147617.
- 68.Friggeri A, Adamic LA, Eckles D, Cheng J. Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media. AAAI; Palo Alto, CA: 2014. Rumor cascades.
- 69.Shane S. September 7, 2017 The fake Americans Russia created to influence the election. New York Times. Available at https://www.nytimes.com/2017/09/07/us/politics/russia-facebook-twitter-election.html. Accessed July 28, 2018.
- 70.Messing S, Westwood SJ. Selective exposure in the age of social media: Endorsements trump partisan source affiliation when selecting news online. Communic Res. 2014;41:1042–1063.
- 71.Sloan WD. The Media in America: A History. Vision; San Ramon, CA: 2002.
- 72.Hamilton J. All the News That’s Fit to Sell: How the Market Transforms Information into News. Princeton Univ Press; Princeton: 2004.
- 73.Schudson M. Discovering the News: A Social History of American Newspapers. Basic Books; New York: 1981.
- 74.Downie L, Schudson M. 2009. The reconstruction of American journalism. Available at https://archives.cjr.org/reconstruction/the_reconstruction_of_american.php.
- 75.Wittebols JH. The Soap Opera Paradigm: Television Programming and Corporate Priorities. Rowman & Littlefield; Lanham, MD: 2004.
- 76.Schiller D. Digital Capitalism: Networking the Global Market System. MIT Press; Cambridge, MA: 1999.
- 77.Prior M. Post-Broadcast Democracy: How Media Choice Increases Inequality in Political Involvement and Polarizes Elections. Cambridge Univ Press; Cambridge, UK: 2007.
- 78.Turow J. Breaking up America: Advertisers and the New Media World. The Univ of Chicago Press; Chicago: 2007.
- 79.Newman N. Reuters Institute digital news report 2017. Reuters Institute for the Study of Journalism; Oxford: 2017.
- 80.Scheufele DA, Nisbet MC. Online news and the demise of political debate. In: Salmon CT, editor. Communication Yearbook. Vol 36. Sage; Newbury Park, CA: 2012. pp. 45–53.
- 81.Rutenberg J. April 17, 2016 For news outlets squeezed from the middle, it’s bend or bust. New York Times. Available at https://www.nytimes.com/2016/04/18/business/media/for-news-outlets-squeezed-from-the-middle-its-bend-or-bust.html. Accessed May 27, 2018.
- 82.Drummond C, Fischhoff B. Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Proc Natl Acad Sci USA. 2017;114:9587–9592. doi: 10.1073/pnas.1704882114.
- 83.Kahan DM. Protecting or polluting the science communication environment? The case of childhood vaccines. In: Jamieson KH, Kahan DM, Scheufele DA, editors. The Oxford Handbook of the Science of Science Communication. Vol 1. Oxford Univ Press; Oxford: 2017. pp. 421–432.
- 84.Weingart P. 2017. Is there a hype problem in science? If so, how is it addressed? The Oxford Handbook of the Science of Science Communication, eds Jamieson KH, Kahan DM, Scheufele DA (Oxford Univ Press, New York), pp 111–118.
- 85.National Academies of Sciences, Engineering, and Medicine. Trust and Confidence at the Interfaces of the Life Sciences and Society: Does the Public Trust Science? A Workshop Summary. The National Academies Press; Washington, DC: 2015. p. 66.
- 86.Dudo A, Dunwoody S, Scheufele DA. The emergence of nano news: Tracking thematic trends and changes in U.S. newspaper coverage of nanotechnology. Journal Mass Commun Q. 2011;88:55–75.
- 87.Boykoff MT, Boykoff JM. Balance as bias: Global warming and the US prestige press. Glob Environ Change. 2004;14:125–136.
- 88.Van Duyn E, Collier J. Priming and fake news: The effects of elite discourse on evaluations of news media. Mass Commun Soc. 2018; doi: 10.1080/15205436.2018.1511807.
- 89.Coe K, Kenski K, Rains SA. Online and uncivil? Patterns and determinants of incivility in newspaper website comments. J Commun. 2014;64:658–679.
- 90.Anderson AA, Brossard D, Scheufele DA, Xenos MA, Ladwig P. The “Nasty Effect:” Online incivility and risk perceptions of emerging technologies. J Comput Mediat Commun. 2014;19:373–387.
- 91.Anderson AA, Yeo SK, Brossard D, Scheufele DA, Xenos MA. Toxic talk: How online incivility can undermine perceptions of media. Int J Public Opin Res. 2018;30:156–168.
- 92.Yeo SK, et al. The effect of comment moderation on perceived bias in science news. Inf Commun Soc. 2017;22:129–146.
- 93.National Academies of Sciences, Engineering, and Medicine. Communicating Science Effectively: A Research Agenda. The National Academies Press; Washington, DC: 2017. p. 152.
- 94.Cacciatore MA, et al. Is Facebook making us dumber? Exploring social media use as a predictor of political knowledge. Journal Mass Commun Q. 2018;95:404–424.
- 95.Brossard D. New media landscapes and the science information consumer. Proc Natl Acad Sci USA. 2013;110(Suppl 3):14096–14101. doi: 10.1073/pnas.1212744110.
- 96.The Economist. 2018. The Economist/YouGov Poll. Available at https://d25d2506sfb94s.cloudfront.net/cumulus_uploads/document/wa3gpxn761/econTabReport.pdf. Accessed September 13, 2018.