Thursday, July 9, 2020

Analyzing Conspiracy Theories

I talk a lot about narratives that can be dismissed as "conspiracy theories" here on Augoeides. Usually the stories I cover are the sorts of things that are outright unbelievable or impossible. And keep in mind, if I take a look at your conspiracy theory and describe it as unbelievable, that assessment is coming from someone who is serious about casting spells. So there's that.

But here's the thing - while many conspiracy narratives are absolute garbage, labeling anything that sounds weird a "conspiracy theory" is also an easy, hand-wavy way to discredit it. Government agencies like the CIA, for example, have a long history of doing just that when information about dubious plots or projects gets out into the public eye.

What we really need is an objective way to analyze a narrative and see if it holds up, no matter how weird it might sound. And according to this article, a team of artificial intelligence researchers is on the case. They compared debunked conspiracy theories with real conspiracy narratives that turned out to be true, and found significant differences between them that their algorithm could identify.

A new study by UCLA professors offers a new way to understand how unfounded conspiracy theories emerge online. The research, which combines sophisticated artificial intelligence and a deep knowledge of how folklore is structured, explains how unrelated facts and false information can connect into a narrative framework that would quickly fall apart if some of those elements are taken out of the mix.

The authors, from the UCLA College and the UCLA Samueli School of Engineering, illustrated the difference in the storytelling elements of a debunked conspiracy theory and those that emerged when journalists covered an actual event in the news media. Their approach could help shed light on how and why other conspiracy theories, including those around COVID-19, spread—even in the absence of facts.


The study, published in the journal PLOS ONE, analyzed the spread of news about the 2013 "Bridgegate" scandal in New Jersey—an actual conspiracy—and the spread of misinformation about the 2016 "Pizzagate" myth, the completely fabricated conspiracy theory that a Washington, D.C., pizza restaurant was the center of a child sex-trafficking ring that involved prominent Democratic Party officials, including Hillary Clinton.

The researchers used machine learning, a form of artificial intelligence, to analyze the information that spread online about the Pizzagate story. The AI automatically can tease out all of the people, places, things and organizations in a story spreading online—whether the story is true or fabricated—and identify how they are related to each other.
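The entity-and-relation idea described above can be sketched as a co-occurrence graph: entities mentioned together get linked, and you can then test whether the narrative "falls apart" when a key element is removed, as the UCLA team observed with fabricated stories. This is only an illustrative toy - the entity sets, names, and helper functions below are made up for the example and are not the study's actual pipeline, which would start from real named-entity extraction over thousands of posts.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical pre-extracted entity sets, one per "sentence" of a story.
# A real pipeline would produce these with named-entity recognition.
extracted = [
    {"restaurant", "basement", "tunnels"},
    {"restaurant", "emails", "politician"},
    {"emails", "politician", "campaign"},
]

def build_graph(sentences):
    """Entities that co-occur in a sentence get an undirected edge."""
    adj = defaultdict(set)
    for ents in sentences:
        for a, b in combinations(sorted(ents), 2):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def is_connected(adj, removed=frozenset()):
    """Breadth-first search over the graph, skipping removed nodes."""
    nodes = [n for n in adj if n not in removed]
    if not nodes:
        return True
    seen, queue = {nodes[0]}, [nodes[0]]
    while queue:
        for nbr in adj[queue.pop()]:
            if nbr not in removed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) == len(nodes)

graph = build_graph(extracted)
print(is_connected(graph))                  # True: the full narrative hangs together
print(is_connected(graph, {"restaurant"}))  # False: remove the bridging node and it fragments
```

The design point this toy captures is the one from the study: a fabricated narrative often depends on a few bridging elements, so deleting one node disconnects the story graph, whereas reporting on a real conspiracy tends to stay connected.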

Now as for the graphic above, I try to be a real skeptic rather than a capital-S one, which means I'm willing to consider that almost any statement could be true. And seeing as the only way anybody could possibly "weaponize" animals like that is with a spell (Leo, maybe? The power of taming wild beasts?), suffice it to say I totally want that spell.

I've talked about tracking down spells to throw bees at enemies, but really, moles are a lot funnier when you think about it. "Nice patio you got there. Shame if an army of tiny burrowing creatures went and undermined its structural integrity." I want to believe, man. Because I totally want to be able to do that. Like bees, it's not regular vengeance - it's amusing vengeance.

Seriously, though, I think the idea is pretty far-fetched. Still, the point of a viable artificial intelligence algorithm that can analyze stories like this one is that it can give you a much better idea of how likely a story is to be true based on how the information disseminates across the Internet. And that's a pretty cool thing.

Maybe if I were to run it, I would find that the spell I'm looking for is out there after all. You never know for sure.


2 comments:

Roger Bacon said...

I'm a skeptic when it comes to A.I. I'm sure you've read "The Emperor's New Mind" by Roger Penrose, which is a pretty good intro to why I think A.I. that is designed to "think" (I'd place deciding whether a conspiracy theory makes sense in that category) will always require continuous human intervention to maintain any sort of logical coherence. And once humans get involved, human error gets involved.

Scott Stenwick said...

This is machine learning - not quite the same thing. AIs are good at this sort of thing, analyzing patterns in complex data sets. But as a supporter of the Penrose-Hameroff model of consciousness (it's probably not entirely correct yet, but it's the best one we have) I agree with Penrose that a conscious AI is not something we're going to be able to build with current computer technology.

"Emperor's New Mind" was all the way back in 1988 when the principles of cognitive science were getting sorted out. Unfortunately, the linguists took over too much of the discipline and pushed us down a twenty-year waste of effort called "symbolic AI." Even if you're open to the idea of digital consciousness, "linguistic consciousness" is still a ridiculous idea. We did get some good language parsing algorithms out of it like what they're using today for digital assistants, but that's it.

Basically, more of the cognitive scientists should have listened to Penrose. And there we are...