Think about the last pharmaceutical commercial you saw on TV. What did you notice?

Most of them contain smiling people, upbeat music, and a doctor treating patients. But have you ever stopped to consider why these features are there?

Techniques of persuasion — like having an actor dress up as a doctor or shooting the commercial in a park on a beautiful summer day — play an integral part in convincing people to take an action or adopt a certain belief. 

With these techniques of persuasion being used in all forms of media, it can often be difficult to spot when something is disinformation. Two professors at the iSchool, Lu Xiao and Bei Yu, study these techniques of persuasion and the nature of disinformation. Here are some of their best pieces of advice for identifying misinformation or disinformation online:

1. Analyze Both the Content and the Source

First, look at the content itself and try to identify any pieces of information that seem too good to be true, are inconsistent with others that you have read or seen, or seem overblown, dramatic, or lacking specific evidence. 

“It’s also important to look for sensational language,” says Yu. 

Then, look at the source of the information and ask yourself a few questions. Who is the author of this page or comment? Are they credible enough to speak on this topic? What other content have they produced in the past? Looking at an author’s profile can often reveal a clear agenda when they have published repeatedly on the same topic. 

Another way to check credibility is to run a quick Google search and see whether any articles directly refute what you just read. Often, when a particular piece of disinformation starts to go mainstream, reputable authors will publish statements and evidence showing that the claim is false, which can help you uncover the full scope of the story.

A link’s URL can also signal credibility. A professional or well-recognized domain name can generally be trusted more than a string of random numbers and letters. Additionally, the top-level domain (e.g., .com, .org, .gov, .edu) can indicate the type of source the information comes from.
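The idea of glancing at a link’s top-level domain can be sketched in a few lines of Python. The category labels below are illustrative rules of thumb, not an official classification:

```python
from urllib.parse import urlparse

# Rough, illustrative mapping from top-level domain to the kind of
# source it usually indicates. These labels are heuristics, not rules:
# a .org can be anyone, and scammers can register professional names.
TLD_HINTS = {
    "gov": "government agency",
    "edu": "educational institution",
    "org": "organization (often, but not always, nonprofit)",
    "com": "commercial site",
}

def describe_tld(url: str) -> str:
    """Return a rough hint about a URL's source type based on its TLD."""
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1] if "." in host else ""
    return TLD_HINTS.get(tld, "unknown source type")

print(describe_tld("https://www.cdc.gov/flu"))  # government agency
```

A check like this is only a first filter; it tells you the category of the domain, not whether the specific site is trustworthy.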

Finally, the order in which information is presented can alter a reader’s perception and understanding of it. Platforms like Twitter and YouTube have moved away from showing content in chronological order and now sort tweets and comments by engagement, meaning that widespread fake news stories and information taken out of context can easily rise to the top.
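The effect of engagement-based ranking can be shown with a toy example. The post fields below are hypothetical, not any platform’s real API:

```python
# Illustrative only: compare chronological ordering with
# engagement-based ordering. Field names are made up for this sketch.
posts = [
    {"text": "accurate but dull update", "likes": 12, "hour": 1},
    {"text": "viral out-of-context clip", "likes": 980, "hour": 5},
    {"text": "careful correction", "likes": 40, "hour": 9},
]

chronological = sorted(posts, key=lambda p: p["hour"])
by_engagement = sorted(posts, key=lambda p: p["likes"], reverse=True)

# Under engagement ranking, the most-liked post leads the feed,
# regardless of accuracy or when it was posted.
print(by_engagement[0]["text"])  # viral out-of-context clip
```

The point is simply that the ranking function, not the truth of the content, decides what a reader sees first.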

2. Identify Whether the Information is Misinformation or Disinformation

Xiao says there are two types of false information: misinformation and disinformation. Misinformation is false information that results from an honest mistake.

Have you ever made a typo that completely changed the meaning of what you were trying to say? Believed something your friend said online, only to find out later that they were wrong? That’s misinformation. 

Disinformation, on the other hand, happens when information is intentionally distorted by an individual or organization in order to advance a specific agenda. It’s here that persuasion and the intent to persuade come into play.

It is important to note, however, that the distinction between misinformation and disinformation is not always black and white. For example, a science journalist may describe a correlational finding as causal, either out of misunderstanding or to exaggerate it for sensational effect. It may therefore take extra attention to determine whether a piece of false information is misinformation or disinformation.

Common reasons for spreading disinformation include endorsing a political party or candidate, damaging a person’s reputation, or inciting a specific action from users — like clicking through to a website that makes money from advertisements.

In the 2016 presidential election, for example, a fake news site masquerading as a local television outlet reported that Pope Francis had endorsed Donald Trump. A few months earlier, a different fake news site reported that the Pope had endorsed Bernie Sanders. Neither story was true. But both were designed to drive clicks, and therefore ad dollars, to the fake sites.

3. Try to Identify the Intent Behind the Post or Information

Xiao says there are a few indicators that can give us clues to a person’s intent behind the post. 

Automated accounts, or “bots,” are increasingly prevalent, and many are created and used with malicious intent. Signs of a bot include a high volume of posts (sometimes in multiple languages and seemingly around the clock), inflammatory visuals and phrases, hashtag spamming, and a lack of personal information.
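These bot signs can be read as a rough checklist. The sketch below assumes a simple dictionary of account statistics; all of the field names and thresholds are hypothetical, chosen only to illustrate the idea, and real platforms expose different data through their APIs:

```python
# An illustrative checklist for the bot signals described above.
# Field names (posts_per_day, languages, etc.) and thresholds are
# hypothetical assumptions for this sketch, not real API fields.

def bot_warning_signs(account: dict) -> list:
    """Return a list of bot-like signals found in an account profile."""
    signs = []
    if account.get("posts_per_day", 0) > 50:
        signs.append("very high posting volume")
    if len(account.get("languages", [])) > 2:
        signs.append("posts in many languages")
    if account.get("hashtags_per_post", 0) > 5:
        signs.append("hashtag spamming")
    if not account.get("bio") and not account.get("profile_photo"):
        signs.append("no personal information")
    return signs

suspect = {
    "posts_per_day": 120,
    "languages": ["en", "ru", "es"],
    "hashtags_per_post": 8,
    "bio": "",
}
print(bot_warning_signs(suspect))
```

No single signal is conclusive; a prolific human can trip one of these checks, which is why Xiao’s indicators are clues rather than proof.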

“People are good at hiding their intentions,” Xiao said. “Others may be more willing to agree with an idea or statement when it’s not so straightforward to identify that it’s disinformation.”

4. Use Authoritative Resources

Yu recommends looking for and using materials developed by domain experts when seeking quality information. She says there are many such resources, especially for assessing science-based information, which is her area of expertise.

For example, if you are looking for health information, the CDC, the Mayo Clinic, and WebMD are all good sources accessible to the general public. Information on these sites is written for the public by health experts based on research results published in peer-reviewed journals. If you are interested in reading the original research papers, you can search PubMed from the National Library of Medicine to find the latest research results.

5. Evaluate how the Information Fits into Your Own Belief System

Yu encourages users to evaluate how their belief systems, and whom they trust, affect whether or not they believe something to be true. Cognitive psychology research has found that trust plays a large role in this cognitive process: individuals often care more about whom they trust than about the facts themselves.

Sometimes facts alone do not persuade, because individuals may rely on their personal belief systems to interpret evidence, according to Yu. She also said that confirmation bias is a significant problem: people can fall victim to what she calls “Tolstoy Syndrome,” refusing to change their minds about an issue no matter how much evidence is presented. For example, the tobacco industry once promoted research from a scientist who firmly believed that cancer is the result of genetic factors, not environment.

“We as human beings should be concerned about how we consume information,” she said. “It’s not just about how we consume facts as black and white, but how we interpret them and how they fit into our individual belief systems.”