Morgan T. Sammons, PhD, ABPP

An intriguing article in the July/August 2018 issue of the APA Monitor on Psychology asked 33 prominent psychologists to identify the critical questions facing the discipline (What's Next? APA Monitor on Psychology, July/August 2018, pp. 40-63). Respondents were an impressive and diverse group of clinicians, educators, and neuroscientists—all of whom provided insightful predictions based on their particular areas of expertise.

As informative (and sometimes even inspiring) as it was, at the end of the article I found myself with a vague sense of dissatisfaction, a feeling that something was missing, but I wasn’t quite sure what. It was akin to trying out an unfamiliar dish in a restaurant and recognizing there was a flavor or a spice missing that would have made an OK recipe a great one. So, like a good behavioral scientist, I set out to analyze the 33 responses to see if I could determine what might have been added to improve the overall outcome.

I found that the responses covered seven broad themes, which I arbitrarily broke down into the following categories: personality/creativity, improving treatment outcomes, neural substrates/cognition, professional roles/identity, racism/multiculturalism, the human–technology interface, and politics/decision making. In other words, there were clear indications that leaders in our field are addressing the manifold aspects of the complex and far-flung discipline of psychology. Approximately one third (10) of the responses dealt with one of the most pressing societal issues of our time: racism and prejudice in their multiple manifestations. Responses regarding personality, treatment outcomes, neural substrates, professional roles, and the human–technology interface each drew four or five mentions. So far, so good. Thought leaders seemed in broad agreement regarding major areas of inquiry for the profession. But this analysis revealed the ingredient in the dish that, while not entirely missing, was to my mind surprisingly underemphasized: only two of the polled experts focused on the contributions of psychological science toward understanding political decision-making. This underemphasis is particularly notable given that we are indisputably in the midst of one of the greatest political upheavals in American history.
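For the methodologically curious, my "analysis" was nothing fancier than a frequency count. A minimal sketch in Python captures it; the theme labels are my own, and the per-theme counts are an approximation consistent with the figures above rather than a formal published tally:

```python
from collections import Counter

# My informal coding of the 33 responses. Counts are a reconstruction
# consistent with the column's figures, not data from the Monitor itself.
themes = (
    ["racism/multiculturalism"] * 10
    + ["personality/creativity"] * 5
    + ["improving treatment outcomes"] * 4
    + ["neural substrates/cognition"] * 4
    + ["professional roles/identity"] * 4
    + ["human-technology interface"] * 4
    + ["politics/decision making"] * 2
)

# Tally and print themes from most to least frequently mentioned.
for theme, count in Counter(themes).most_common():
    print(f"{theme}: {count}")
```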

It is not that psychology has been silent upon this point. One of the more influential psychology books of the past decade, The Political Brain (2007), was written by Emory psychology professor Drew Westen, who argued that voters made political decisions not on the basis of dispassionately gathered information but according to unfiltered emotion. Westen persuasively argued that the most successful political strategists were those who tapped into voters' emotions—usually negative or fear-based emotions—and manipulated these in order to alter behavior at the ballot box. Viewing voting behavior through the lens of emotion helps explain why members of the electorate frequently choose alternatives that are not in their best personal interests. Manipulating emotions to influence electoral decisions is as old as democracy itself, and it has a long and robust history in American politics. One particularly trenchant example occurred less than 100 years after the founding of the republic, when the tiny minority of slave-owning Southerners (fewer than 5% of citizens in the pre-Civil War South had the economic wherewithal to purchase other human beings) were able to convince the remainder of the electorate to vote to secede from the Union. Westen's arguments are well supported by another analyst, the linguist and cognitive scientist George Lakoff, whose findings are widely followed in psychology. Lakoff has over several decades identified the elements that make political messaging effective; not surprisingly, political operatives who employ these techniques show consistent success among voters.

In this context, I was quite taken by two articles in the 9 March 2018 issue of Science, whose cover succinctly announced "How Lies Spread." In the first, Lazer and colleagues (Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., et al. [2018]. The science of fake news. Science, 359, 1094-1096) reminded readers that journalistic objectivity as a broad construct is a relatively recent phenomenon, one that came about largely as a reaction to the blatant use of propaganda in World War I. The authors made the obvious point that the ability to reach huge numbers of electronic readers via platforms like Twitter and Facebook has created a renaissance in propaganda and false information, often via bots that manipulate algorithms to spread content appealing to a particular segment of the electorate. Lazer and colleagues recommended a two-pronged response to mitigate the effect of maliciously spread information: working with the major conduits of such news (Google, Facebook, and Twitter) to improve detection of bots and deliberate manipulation of social media, and educating the public about how to detect and squelch fake news when it appears. The first objective may be the more challenging, given that the internet giants controlling access to much of this information have financial disincentives to regulate content. Public education may be more promising. Although Lazer and colleagues noted that the efficacy of public education has not been well studied, another article in the same issue provides some grounds for optimism on this front.

In the second article, Vosoughi and colleagues downplayed the influence of bots in the dissemination of false information (Vosoughi, S., Roy, D., & Aral, S. [2018]. The spread of true and false news online. Science, 359, 1146-1151). By analyzing patterns of dissemination of rumors on Twitter, these researchers concluded that humans, not robots, were primarily responsible for the wide dissemination of rumor, and that false information was far more likely to be retransmitted, to reach larger numbers of people, and to spread far more rapidly than true information. Falsehoods were more than 70% more likely to be retransmitted, or retweeted, than true information, even though Twitter users who spread lies had significantly fewer followers and were generally less active than those who spread true information. In other words, something about the quality of the information disseminated, rather than the means of dissemination, causes lies to spread more quickly and reach more people. Vosoughi et al.'s next question, of course, was "why?" Again, the answer points to human agency, not the efficiency of robots. False information was viewed as more novel and elicited emotions of surprise and disgust, whereas true information provoked responses reflecting sadness, trust, and anticipation. No surprises here, really. Unscrupulous journalists have relied on fact-free juicy bombshells to sell tabloids since the early days of movable type. Garnishing bombshells with a hint of sexual peccadillo makes them tastier morsels to consume, regardless of whether you're selling scandal sheets or a candidate for office. Also of no great surprise, Vosoughi and colleagues reported that once undergraduate reviewers had been trained to independently detect true or false rumors, detection rates were quite high, as was interrater reliability.
Even though these undergraduates were from MIT or Wellesley, I suspect that if average citizens (of whom over 72% do not have a college degree) were trained in the same detection scheme, they too would be able to suss out truth from falsehood with a high degree of accuracy. Most of us, regardless of education level, know when our legs are being pulled, individually or collectively. Sometimes we make a deliberate but comforting choice to believe false information, or at least to suspend disbelief. Doing so allows attribution errors to go unchallenged and confirmatory biases to persist.
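Interrater reliability, for readers who want the concrete version, is simply the degree to which independent raters agree beyond what chance alone would produce; Cohen's kappa is one common index. Below is a minimal sketch in Python; the verdicts, labels, and sample size are invented for illustration and are not data from the Vosoughi et al. study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement is estimated from each rater's marginal label rates.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical true/false verdicts from two trained raters on ten rumors.
rater_1 = ["F", "F", "T", "F", "T", "T", "F", "F", "T", "F"]
rater_2 = ["F", "T", "T", "F", "T", "T", "F", "F", "T", "F"]
print(cohens_kappa(rater_1, rater_2))  # ≈ 0.8: raw agreement 9/10, chance 0.5
```

A kappa near 0 means the raters agree no better than chance; values approaching 1 indicate near-perfect agreement, the pattern the trained reviewers apparently showed.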

So what to do? One solution, and the option that I've always used, is simply to ignore outlets like Twitter or Facebook. I have never had a Twitter, Facebook, Instagram, WhatsApp, or similar account, and from what I can discern my life has not been significantly impoverished by this absence. I am continually amused by reports that the internet has "exploded" after the posting of an attention-grabbing tweet or Facebook item. If it did explode, I guess I was far enough away that I didn't hear it. When friends or colleagues argue that giving up social media would be too hard or cause unbearable withdrawal pains, I have to wonder how they survived 10 or 20 years ago. When they complain that it is too inconvenient to send a letter, write an email, or make a phone call, I wonder whether this hardship outweighs the inconvenience of having to manage multiple sources of electronic information. Culling out extraneous information is not effortless, and the opportunity costs of spending significant portions of the day doing so are considerable.

Returning to the premise of this column: one solution would be to devote more professional attention to how lies spread and to work with policy makers and internet providers to implement rules making it harder to spread them. But as the research cited above indicates, this will yield only a partial solution. It is not how lies spread, but our enduring capacity to believe them, that is the fundamental issue.

Copyright © 2018 National Register of Health Service Psychologists. All Rights Reserved.