ChatGPT: A Powerful Tool for Propaganda Evaluation

Every day, millions of pieces of content flood social media, news sites, and messaging apps. Some are honest. Some are misleading. And some are carefully crafted to manipulate how you think - that’s propaganda. It’s not always obvious. It doesn’t always shout. Sometimes, it whispers through a viral TikTok video, a Reddit thread, or a seemingly neutral news headline. So how do you tell what’s real and what’s designed to push a button in your brain? Enter ChatGPT - not as a source of truth, but as a tool to expose the truth behind the noise.

What Makes Propaganda Hard to Spot?

Propaganda doesn’t look like a 1940s poster with a stern general pointing at you. Today, it’s wrapped in memes, influencer endorsements, and AI-generated articles that mimic real journalism. It uses emotional triggers - fear, anger, hope - and repeats messages until they feel true. The more you see it, the more you believe it. That’s called the illusory truth effect. And it works because humans aren’t built to fact-check everything we read.

Take a recent example: a video claiming that a local hospital in Adelaide was turning away patients because of "foreign funding." It went viral on Facebook. Thousands shared it. Some commented, "I knew it!" - even though no official source confirmed it. When you dig into the video’s source, it’s a YouTube channel with no credentials, no byline, and a history of posting similar claims. But who has time to trace every viral post?

How ChatGPT Helps Break Down Propaganda

ChatGPT doesn’t decide what’s true. But it can help you ask the right questions. Here’s how (with a small script sketch after the list):

  • It identifies emotional language. Ask ChatGPT: "Highlight the emotionally charged words in this text." It’ll point out phrases like "they’re stealing our future" or "the truth is being buried." These aren’t facts - they’re triggers.
  • It checks for logical fallacies. Paste a claim like "If we allow this policy, next thing you know, we’ll have no borders left." ChatGPT can label that as a slippery slope fallacy. That doesn’t prove the claim false - but it shows the reasoning is weak.
  • It traces repetition patterns. If you feed it ten similar posts from different sources, ChatGPT can tell you which phrases, hashtags, or framing techniques are copied across them. That’s a red flag for coordinated disinformation.
  • It compares sources. Ask: "What does the BBC say about this topic compared to this blog?" It won’t say which is right, but it can show you the difference in tone, evidence, and sourcing.
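
If you’d rather script these checks than paste text into the chat window, the same prompts work through the API. Below is a minimal Python sketch, assuming the official openai package is installed and an OPENAI_API_KEY environment variable is set; the model name, prompt wording, and sample post are illustrative placeholders, not a fixed recipe.

    # Minimal sketch: ask a ChatGPT model to flag emotional language and
    # logical fallacies in a short post. Assumes the `openai` package and
    # an OPENAI_API_KEY environment variable; the model name is an example.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    post = (
        "They're stealing our future. If we allow this policy, "
        "next thing you know, we'll have no borders left."
    )

    prompt = (
        "Highlight the emotionally charged words or phrases in this text, "
        "then name any logical fallacies you see and explain them briefly. "
        "Do not judge whether the claim is true.\n\n"
        f"Text: {post}"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name - use whichever you have access to
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)

The output is still just the model’s reading of the text - treat it as a starting point for your own judgment, not a verdict.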

One user in Adelaide tested this with a viral post about rising crime rates. The post cited "a 2025 study from the National Safety Council." ChatGPT responded: "There is no such organization. The Australian Institute of Criminology is the official source. Their latest report shows a 3% decline in violent crime." That single exchange saved the user from sharing misinformation.

Image: Students in a classroom analyze news headlines with an AI tool pointing out logical fallacies.

Real-World Use Cases

Here’s what this looks like in practice:

  1. For journalists: A reporter in Sydney used ChatGPT to analyze 50 comments from a Facebook post about immigration. The tool flagged 17 that reused the exact same phrases - a sign they were likely bot-generated or copied from a template (see the sketch after this list).
  2. For educators: A high school teacher in Perth used ChatGPT to create a classroom exercise. Students pasted news headlines into the tool. ChatGPT returned analysis like: "This headline uses an appeal to authority by citing an unnamed 'expert.'" Students learned to question who "the expert" really was.
  3. For community leaders: A local council in Adelaide used ChatGPT to scan hundreds of messages in a community group. They found a pattern: every post blaming the city’s water shortage on "government waste" used the same three phrases. They published a simple fact sheet - and shares dropped by 60% in two weeks.
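
The repetition check in the journalist example doesn’t even need the API - a simple word-for-word phrase comparison catches the most obvious copy-paste patterns. Here’s a small Python sketch with made-up comments; real analysis would add better text cleaning and a sensible phrase length.

    # Sketch of a repetition check: flag comments that share long
    # word-for-word phrases. Pure Python, no API needed; the comments
    # below are invented for illustration.
    import re
    from collections import defaultdict

    comments = [
        "Our streets are not safe anymore and the council does nothing.",
        "I walk every day and honestly the streets are not safe anymore.",
        "The council does nothing while crime keeps rising.",
    ]

    def ngrams(text, n=5):
        # Lowercase, strip punctuation, and return every run of n consecutive words.
        words = re.findall(r"[a-z']+", text.lower())
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    # Map each five-word phrase to the comments it appears in.
    phrase_owners = defaultdict(set)
    for idx, comment in enumerate(comments):
        for phrase in ngrams(comment):
            phrase_owners[phrase].add(idx)

    # A phrase shared word-for-word by two or more comments is worth a closer look.
    for phrase, owners in sorted(phrase_owners.items()):
        if len(owners) > 1:
            print(f"Repeated phrase: '{phrase}' appears in comments {sorted(owners)}")

Anything flagged this way still needs a human look - people do echo each other’s wording honestly; it’s sustained, exact repetition across many accounts that warrants suspicion.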

What ChatGPT Can’t Do

It’s not magic. It’s not a truth detector. Here’s what it misses:

  • Contextual nuance. A post saying "Our schools are failing" might be true in one suburb and false in another. ChatGPT doesn’t know your local school system.
  • Intent. It can’t tell if someone is lying, mistaken, or being manipulated themselves.
  • Visual propaganda. A deepfake video of a politician saying something they didn’t - ChatGPT can’t analyze that. You need tools like reverse image search or video forensics instead.
  • Real-time updates. If a claim is brand new, ChatGPT’s training data might not include it. Always cross-check with trusted sources like ABC News, Reuters, or government data portals.

Think of ChatGPT as a magnifying glass, not a flashlight. It doesn’t light up the whole room - but it helps you see the details you’d otherwise miss.

Image: A community board displays a fact sheet beside a graph showing a 60% drop in misleading shares.

How to Use It Responsibly

Using ChatGPT for propaganda evaluation isn’t about replacing critical thinking - it’s about enhancing it. Here’s how to do it right:

  • Always ask for sources. When ChatGPT gives you an answer, ask: "Where did you get this?" Then verify it yourself.
  • Don’t trust the first answer. Ask the same question three different ways. If the answers change, dig deeper (a quick script for this follows the list).
  • Use it with human judgment. If ChatGPT says a post is "likely propaganda," don’t assume it’s true. Ask: "Why?" and "What’s the evidence?"
  • Share the process, not just the conclusion. If you’re warning a friend, show them how you used ChatGPT - don’t just say "I checked it."
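
The "ask the same question three ways" habit is also easy to script. The sketch below uses the same assumed openai setup as earlier, with a placeholder model name and claim; it sends three rephrasings and prints the answers side by side so you can see where they diverge.

    # Sketch: ask the same question three different ways and compare answers.
    # Assumes the `openai` package and OPENAI_API_KEY; model name is an example.
    from openai import OpenAI

    client = OpenAI()

    claim = "A 2025 report says violent crime in Adelaide rose sharply."
    phrasings = [
        f"What evidence supports or contradicts this claim? {claim}",
        f"Is this claim accurate, and what sources should I check? {claim}",
        f"Steelman and then critique this claim: {claim}",
    ]

    for prompt in phrasings:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name
            messages=[{"role": "user", "content": prompt}],
        )
        print("PROMPT:", prompt)
        print("ANSWER:", reply.choices[0].message.content)
        print("-" * 60)

If the three answers agree, that’s not proof - but if they contradict each other, you’ve learned the claim needs outside verification before you share anything.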

One of the most powerful things about this approach? It turns passive consumers into active investigators. You’re not just scrolling - you’re questioning.

The Bigger Picture

Propaganda thrives in silence. When people don’t ask questions, it spreads. When they do, it loses power. ChatGPT isn’t here to save us from misinformation. But it can help us build a habit of skepticism - the kind that doesn’t need a PhD to use.

In a world where algorithms push the most emotional content to the top, your ability to pause, analyze, and ask "Who benefits?" is more valuable than ever. You don’t need to be an expert. You just need to be curious.

Can ChatGPT detect deepfake videos or audio?

No, ChatGPT cannot analyze video or audio files. It works only with text. To check deepfakes, you need tools like InVID, Amnesty’s Verify, or Adobe’s Content Credentials. These tools examine metadata, frame inconsistencies, or digital watermarks. ChatGPT can help you find those tools - but it can’t use them.

Is ChatGPT biased when evaluating propaganda?

Yes - but not in the way you might think. ChatGPT doesn’t have political opinions, but its training data reflects patterns from real-world text. That means it’s more likely to recognize propaganda styles common in Western media than, say, those used in state-run outlets in other regions. Always test it with diverse sources. If you’re analyzing content from non-English sources, pair it with a native speaker or translation tool.

Can I use ChatGPT to fact-check news articles?

You can, but don’t rely on it alone. Paste the article into ChatGPT and ask: "What claims does this make?" Then compare those claims to trusted databases like FactCheck.org, ABC FactCheck, or the Australian Competition and Consumer Commission’s Scamwatch. ChatGPT can summarize the claims - but you need official sources to verify them.
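
As a sketch of that workflow, the Python snippet below (same assumed openai setup, placeholder model name and article text) asks the model only to list the article’s checkable claims - verification against official sources stays with you.

    # Sketch: ask the model to list an article's factual claims so you can
    # verify each one yourself. Assumes the `openai` package and an
    # OPENAI_API_KEY environment variable; model name and article are placeholders.
    from openai import OpenAI

    client = OpenAI()

    article_text = (
        "New figures show violent crime in Adelaide rose 40% last year. "
        "A spokesperson blamed cuts to community policing."
    )  # placeholder text - paste the real article here

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{
            "role": "user",
            "content": "List every factual claim this article makes, one per line, "
                       "without judging whether it is true:\n\n" + article_text,
        }],
    )

    print(response.choices[0].message.content)
    # Next step: check each listed claim against FactCheck.org, ABC FactCheck,
    # or official data portals - the model only extracts them.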

Is it ethical to use AI to evaluate propaganda?

Yes - as long as you’re not using it to silence dissent or automate judgment. The goal isn’t to label people as "propagandists." It’s to understand how messages are constructed so you can respond with better information. Used ethically, this tool helps restore public trust in information - not replace human judgment.

What’s the best way to teach kids to use ChatGPT for propaganda detection?

Start with simple games. Give them two headlines - one real, one made up. Ask ChatGPT: "Which one uses emotional language?" Then let them guess. After a few rounds, they’ll start spotting patterns on their own. The goal isn’t to make them experts - it’s to make them skeptical. Skepticism is the first step to truth.

Propaganda doesn’t disappear because we ignore it. It fades when we learn to look closer. ChatGPT gives us a new lens. Use it wisely.