Fact-Checkers as a Source of Misinformation
Be careful: conclusions from fact-checking sites can be just as misleading as the statements being rated. If you say something like “politifact.com rated that statement as false” or “factcheck.org rated that statement as true” and expect that alone to settle an issue, you could be wrong. The same is true for snopes.com and other fact-checking organizations.
More important than the conclusion is the rationale behind it. Basing an argument on the conclusion alone, in an attempt to end a discussion, can be very misleading. Including the rationale used by the fact-checking site as part of the argument might also be misleading, but at least it leaves the issue open for discussion.
- They might have an agenda that makes some of their conclusions questionable. Do they slant their ratings to support a specific agenda, or do they overcorrect in an attempt to appear balanced in their reporting?
- Perhaps most importantly, these sites do not rate the intent of the statements. A statement designed to mislead people can include cherry-picked facts in order to support a misleading conclusion, and still be rated as true.
- These sites might use faulty logic or make assumptions based on incomplete information. Fact-checkers are not always experts in the fields they report on. Even if their reports are based on “expert” testimony, their conclusions could depend on which “experts” they choose to consult.
Faulty logic or assumptions based on incomplete information
Over the past several years, numerous and often competing claims about “who increased the debt” have circulated on social media. Factcheck.org took on this issue: it came up with its own numbers, created a graph based on those numbers, and stated that “Partisan graphics circulating via email and Facebook are both incorrect…Both sides are circulating deceptions about the federal debt.”
One problem is that people are still citing this rating by factcheck.org as “proof” of something, even though it is based on old economic projections that have since changed. But that is a problem with sharing, not a problem with the original report; alert readers can easily point out that the study is old. A bigger problem is that the fact-check itself is misleading, due to its faulty use of economic principles. Factcheck.org is not an expert in economics. As an explanation, allow me to quote from a comment I left when I saw this conclusion being shared on Facebook:
This is a misleading story for a number of reasons:
- The most obvious reason is that this is old information. It uses old numbers, and it relies on CBO projections that were about the future when the piece was written but cover time frames that are now in the past. Those projections turned out to be incorrect. (The accuracy of CBO predictions is another story.)
- It separates administrations according to time in office, not according to policies. The most glaring example in this story is that it dates Obama’s “record” from his 2009 inauguration. But on inauguration day, we didn’t suddenly wake up to an economy shaped by policies adopted during his administration; on inauguration day 2009, we were actually in the middle of a budget year from the Bush administration. The fiscal year runs from October through September, and the budget is passed months in advance. Economists generally start rating a president’s economic results one year after the inauguration date, to allow new policies to take effect. That would change 2009 from an Obama year to a Bush year. This is significant because 2009 was an outlier in terms of economic results, due to the nature of the Great Recession. You can count the immediate effects of the stimulus as an Obama policy in 2009, but that is the only new economic policy of the first year, and counting it would be misleading if taken out of context (see the next point).
- It doesn’t make any attempt to measure anything according to opportunity cost. The true cost of anything, including a deficit, is its opportunity cost: what would have happened without it. The difference comes down to a very important distinction between long-term and short-term debt. Like I said, the only “new” policies for 2009 had to do with the stimulus package. When you look at the trends before and after that policy change, you will see that while the stimulus added a lot of money to the debt according to accounting cost, it actually shrank the debt according to economic cost (opportunity cost). This is the difference between long-term and short-term policies and effects. Before the stimulus, the trend was for rapidly growing deficits to keep accelerating – due to the Great Recession, with its inherent decrease in revenue and increase in automatic-stabilizer spending – and this wasn’t going to change without a new economic event to change the direction of the economy. The stimulus can be shown to be the only such event we had. Without it, the debt would have kept growing much faster; with it, the long-term debt can be shown to have decreased relative to what it would have been. Accounting costs don’t capture “what would have happened without it” or “how did it affect the future.” They only count the difference between receipts and outlays.
Fact-checking stories, from all fact-check organizations, are only as good as the logic and facts their conclusions are based on. The conclusions alone – ignoring the rationale behind them – are worthless. But if you can verify that a specific story did not rely on faulty logic or facts, then the story has meaning. Simply saying “because factcheck.org says so” is not a logical argument for anything.
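The accounting-cost versus opportunity-cost distinction above can be sketched with a few lines of arithmetic. The figures below are hypothetical illustrations of the logic, not actual CBO or Treasury numbers:

```python
# Hypothetical figures, in billions, chosen only to illustrate the
# distinction between accounting cost and opportunity cost.
stimulus_outlays = 800  # direct spending added by the policy

# Counterfactual projections of cumulative debt added over the following years:
debt_added_without_stimulus = 6000  # recession deepens, revenue keeps falling
debt_added_with_stimulus = 4500     # recession ends sooner, revenue recovers

# Accounting cost counts only the outlays attributable to the policy.
accounting_cost = stimulus_outlays

# Opportunity cost counts what happened minus what would have happened without it.
opportunity_cost = debt_added_with_stimulus - debt_added_without_stimulus

print(f"Accounting cost:  +{accounting_cost} billion added to the debt")
print(f"Opportunity cost: {opportunity_cost} billion (negative = debt reduced)")
```

Under these assumed numbers, the same policy adds to the debt by the accounting measure while reducing it by the economic measure – which is the whole point of the distinction.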
An agenda which makes some of the conclusions questionable
For example, you might be interested in the scoreboard of a certain person being rated. These sites do not rate every statement that person makes, so how do they choose which statements to rate? Do they skew the results by looking for statements that will give the overall scoreboard some sort of balance? Or do they do the opposite, and look for statements that will project either a positive or a negative impression of the person making them? Even if they choose their subjects based on suggestions from readers, those suggestions can easily reflect the readers’ point of view.
The intent of the statements
To illustrate the point I am trying to make, consider the following historical scenario:
- Ronald Reagan took office in January of 1981. The economy went into a recession in July of 1981 and stayed in recession until November of 1982. This followed a shorter recession from January 1980 to July 1980. The annual unemployment rate peaked at 9.7% in 1982, which is still the highest on record. The economy grew following the recession, as it always does following a recession. It continued to grow until a new recession began in July 1990.
- The economic growth following the recession of 1981–1982 was accomplished using the classical Keynesian policies of massive government debt and large increases in government spending. Several changes occurred in the tax code throughout these years, most notably the Tax Reform Act of 1986. Some taxes decreased, some increased; some loopholes were closed. Government deficits and debt grew far out of proportion to anything seen before, because expenditures grew much faster than revenue.
- In dollar terms, government revenue mostly increased, due to growth in the size of the economy and in the population. As a percentage of the economy and as a percentage of expenditures, revenue decreased.
- What was non-Keynesian about this method of escape from recession is the long-term nature of fixes for a short-term problem. Changing the tax code instead of using short-term spending and revenue decisions made this a long-term “fix” to a short-term situation. Deficits continued to skyrocket even after the goal of economic growth was reached; a “no new tax” and deregulatory mentality developed that negatively affected future decades. Without going into detail here, it can be argued through the use of historical data that the long-term problems far outpaced the short-term economic gains.
That’s a summary of the Reagan-era economy, with a brief mention of its long-run consequences. From this summary, you could cherry-pick the terms “Reagan,” “economy grew,” “government revenue increased,” and “taxes decreased” and come up with a statement such as: “Trickle-down economics works because the economy grew and government revenue increased after Reagan lowered taxes.”
That’s what many supporters of Reaganomics have done, and they are still doing it. But wait – that’s not a logical conclusion from the scenario. In fact, it is the opposite of what can logically be concluded from the evidence. The “trickle-down economics works” argument is used by proponents to mean that “Keynesian economics doesn’t work.” The evidence points to Keynesian economics serving its purpose during a recession, and to long-term consequences of trickle-down economics that proponents do not mention.
But how would a fact-checker rate this statement? “Trickle-down economics works because the economy grew and government revenue increased after Reagan lowered taxes.” Despite the fact that this conclusion directly contradicts the evidence, the statement itself is likely to receive a “true” rating. A fact-checker would likely ignore the part about “trickle-down economics works” because it is a conclusion and not a fact to be checked out. But the rest of the statement, one phrase at a time, is a carefully worded and carefully chosen set of facts. The facts were cherry-picked in an obvious attempt to mislead, but they are true statements when viewed in isolation.
Even if fact-checkers themselves make no errors of logic or fact in reaching a conclusion, they still only check the facts, not the conclusions people draw from those facts. I see this as a major problem, because it is precisely those conclusions that make people want to cite fact-checkers as reliable sources. To improve relevance, and to align the conclusions of fact-checkers with the conclusions people draw from their reports, we should move beyond the mere checking of facts and instead rate statements according to how misleading they are. The entire purpose of a political statement is to get the audience to reach a specific conclusion; facts are mere tools for this purpose. A statement should be judged according to the accuracy of the conclusion it leads people toward.
Perhaps a point system, with a letter grade based on total points, would be more appropriate than rating a statement true or false according to the facts presented within it. Points can be deducted for cherry-picking evidence, for red herrings within the argument, and for false statements within the argument – with the number of points deducted based on the nature of the falsehood. Was it deliberately misleading, or was it more along the lines of a typo or an inadvertent slip of the tongue? How relevant is the untruthfulness to the conclusion?
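A minimal sketch of such a point system might look like the following. The infraction categories and point weights here are hypothetical illustrations, not an established rubric:

```python
# Hypothetical deduction weights: deliberate deception costs far more
# than an inadvertent slip, matching the idea that intent should matter.
DEDUCTIONS = {
    "cherry_picked_fact": 15,
    "red_herring": 10,
    "deliberate_falsehood": 30,
    "inadvertent_slip": 5,  # e.g. a typo or a slip of the tongue
}

def grade_statement(infractions):
    """Start from 100 points, deduct per infraction, return (score, letter)."""
    score = 100
    for kind in infractions:
        score -= DEDUCTIONS.get(kind, 0)
    score = max(score, 0)
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return score, letter
    return score, "F"

# A statement built entirely from true but cherry-picked facts
# still grades poorly, even though every individual fact checks out:
print(grade_statement(["cherry_picked_fact", "cherry_picked_fact", "red_herring"]))
```

Under this scheme, the Reaganomics statement above – all true facts, all cherry-picked – would earn a low letter grade rather than the “true” rating a conventional fact-check would give it.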
Fact-checkers should also have a system in place that gives us confidence in the overall fact-checking process. Think of fact-checkers as auditors of political speech: if the totality of their conclusions is not representative of what is actually being said, then the entire process is suspect. There should be no real or perceived attempt to discredit one side – which includes not choosing statements to rate based on the number of suggestions received. There should be no real or perceived attempt to balance the results – we can’t assume the truth is always down the middle. There should only be an attempt to match the statements being judged with the statements being made. A statement repeated over and over should be more likely to be chosen for a rating than a one-time-only statement.
See on blue-route.org