For a while now, I’ve been struggling to find words for a certain kind of mental state I keep experiencing, and of which I see signs in others. It’s not “being distracted”, but it’s not not “being distracted”, either. We tend to think of distraction as an all-or-nothing affair: either you’re concentrating successfully on something, or else you’ve been distracted by Twitter or Netflix yet again. But this is more like an erosion of attention, consistent with at least nominally remaining focused on the task at hand. (The label that comes closest is probably “continuous partial attention”, coined by the writer Linda Stone.) One key symptom is what I can only describe as an impatience with one’s own cognitive processes – an unwillingness to think your thoughts all the way through to the end. And I’m beginning to wonder if this bears some blame for the various predicaments we’re in.
My hunch was reinforced by a new study, which I discovered via Research Digest, about why people share fake news online. The two usual theories are that people who do so aren’t the sharpest tools in the shed (they believe the stories are true), or that they’re cynical jerks focused on slandering the opposition (they don’t care if they’re not true). But the Canadian psychologist Gordon Pennycook and his colleagues found evidence that most people prone to sharing fake news do think it’s important to share only true stories, and are capable of detecting fabricated ones. It’s just that they get distracted – by, among other things, the urge to share the story they’re reading – before they’ve had a proper chance to reflect on its veracity.
“It is hard to imagine,” the researchers write, “that large numbers of people really believed, for example, that Hillary Clinton was operating a child sex ring out of a pizza shop,” and their findings suggest they probably didn’t. When prompted to reflect on the accuracy of a fake headline, participants became much less likely to share it; that simple intervention was sufficient to make them dwell on their own thought processes long enough to see the story was suspect.
I wonder if this also helps explain the depressing tendency in contemporary debate to assume one’s opponents must be acting in bad faith – that instead of believing what they claim to believe, they must be motivated, deep down, by the desire to be evil. After all, how likely is it really that the average Conservative politician “hates poor people”, in a literal or conscious way? Or that people who disagree with you about how to treat childhood gender dysphoria must secretly revel in the suffering of children? Or that Bernie Sanders is, in any meaningful sense of the term, a white supremacist? A few moments’ reflection is enough to see that all of these are highly improbable. But I’ve seen each of them asserted online, with varying frequency. And they’re clearly a terrible basis for actually changing anybody’s mind.
Maybe one day they’ll invent a smartphone accessory that physically seizes you by the collar just as you’re about to retweet and yells: “Come on – really? Listen to yourself!”
In the meantime, before we set out to convince others to believe what we believe, it might be worth pausing for a minute to ask whether we truly believe it ourselves.