"

10 Decision Making

Learning Objectives

  • Understand the scientific process, including both benefits and drawbacks of various research methods
  • Be able to describe the relationship between scientific research and mass media
  • Apply this information to real-world decision making regarding citizenship issues

 

Science is Good

Why do we use science to make decisions? Science and politics have a very close relationship. Politics should be informed by science, and science should be informed by politics. For example, health care policies should be based on medical research in order to make the most informed decisions. Alternatively, when a policy is causing a health problem (e.g. gun violence in America), we should know to conduct more research on that issue to maximize informed decision making.

To do this, we look at both the natural sciences (e.g. biology, chemistry, etc.) and the social sciences (e.g. communication, psychology, etc.). The natural sciences help us make decisions on issues such as vaccine mandates for schools, steps to reduce antibiotic resistance, and food assistance programs that impact citizens. We also have global issues relevant to the natural sciences, like climate change, that require a shared understanding of our objective reality.

Information from the social sciences often overlaps with the natural science issues in the previous paragraph. There are other policy issues, though, that are also informed by the social sciences. For example, let’s say I want to figure out how to make a policy that will actually protect the victims of domestic violence. You can’t just rely on making a good guess, or on common sense, because those aren’t always right. Domestic violence is a very complicated issue, so our policies have to adapt to the complicated nature of the beast. We need to really understand it via the social sciences, looking at questions such as: How often is it happening? Where is it happening? Why is it happening? If we can get into the why, then we can start looking at real answers to the solution. Similarly, issues like workplace discrimination need policies to prevent it, because we know that if those policies aren’t in place, it’s going to happen. It’s still legal for certain groups of people to be discriminated against at work over issues that have nothing to do with their ability to perform their job satisfactorily. That’s where social science really needs to inform politics.

But we don’t always have that great communication between science and politics, even though it’s so vital that we do. This is where science journalism can help solve some of these communication issues. Right now, basically, we have academics who are doing research studies using the scientific method, both in the natural sciences and in the social sciences. They come up with results, they write them up in peer-reviewed journal articles, and they share them with each other, but they are terrible at disseminating them to the general public. There are certain people who are getting better and better about it. For example, Brad Bushman, who works in our department, also writes articles for Psychology Today that summarize research so that people don’t have to read through all the methodology and whatnot; they can get straight to the take-home message. In general, though, academics are not good at sharing their information with the general public.

Scientific method:

The goal of the scientific method is to find out the truth. There are six steps: (1) make an observation or ask a question, (2) do background research, (3) form a hypothesis, (4) test the hypothesis with an experiment or study, (5) analyze the data and draw conclusions, and (6) share the results. A key part of this process is the importance of adapting to new information: when new evidence contradicts an old conclusion, we update our understanding.

Wait, Is Science Good?

Science is awesome. But it’s not perfect. Let’s discuss some of the things that can go wrong with science. I’d also like to stress that even though the scientific method isn’t perfect, it’s still the best way to stay informed.

Don’t be fooled by statistics in popular sources. This is a real problem; I see a lot of people getting their statistics from unreliable sources such as social media. When it comes to statistics, you really want to get your information from the original source. What often happens in popular media is that non-scientists read an original peer-reviewed study and then try to translate it for the general public in a way that is interesting and flashy and gets the attention needed to generate revenue. Unfortunately, information can be misunderstood, or at worst, details can be exaggerated to garner attention. This can be very manipulative and distracting, and it can lead people to the wrong conclusions.

Deceptive graphing scales. Make sure the scales are proportional on graphs. If a scale doesn’t start at zero, is there a good reason for that? Do the numbers on the scale increase in equal increments (e.g. 10, 20, 30), or are some increments exaggerated (e.g. 10, 20, 50)? These distortions can make an effect look more impressive than it actually is.
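
To make this concrete, here is a minimal sketch, using entirely made-up numbers and Python’s matplotlib library, of how the same two values can look dramatically different with a truncated axis versus an axis that starts at zero:

```python
# Hypothetical data: two nearly identical values (made up for illustration).
import matplotlib.pyplot as plt

groups = ["Brand A", "Brand B"]
scores = [98.2, 98.9]

fig, (ax_trunc, ax_full) = plt.subplots(1, 2, figsize=(8, 3))

# Deceptive version: the axis starts at 98, so a 0.7-point gap fills the chart.
ax_trunc.bar(groups, scores)
ax_trunc.set_ylim(98, 99)
ax_trunc.set_title("Truncated axis (misleading)")

# Honest version: the axis starts at zero, showing the gap in proportion.
ax_full.bar(groups, scores)
ax_full.set_ylim(0, 100)
ax_full.set_title("Axis from zero (proportional)")

plt.tight_layout()
plt.show()
```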

Twice as much as a small number is still a small number. For example, if the number of tap dancing squirrels on my patio doubles, that sounds dramatic. However, I promise, that number is still very small.

Look out for framing. For some reason, the human brain thinks that 30 out of 1,000 is bigger than 3 out of 100, even though mathematically these are the same thing (3%). Framing exploits the way our brains process numbers, so watch for presentations designed to exaggerate a possibility, like your odds of winning the lottery.
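
Here is a minimal sketch in Python, again with hypothetical numbers, of the last two points: doubling a rare risk still leaves a rare risk, and two differently framed fractions can be the exact same proportion:

```python
# "Twice a small number is still a small number":
baseline_risk = 0.0001            # a hypothetical 1-in-10,000 risk
doubled_risk = 2 * baseline_risk
print(f"Doubled risk: {doubled_risk:.2%}")   # 0.02% -- still tiny

# Framing: "30 out of 1,000" feels bigger than "3 out of 100",
# but they are the identical proportion.
print(30 / 1000 == 3 / 100)                  # True -- both are 3%
```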

Watch out for non-normative examples. If you see an example of someone who has won the lottery, it makes it seem like winning happens more often than it actually does.

The absence of evidence is not evidence of absence. This reminds me of a lyric from a song by the band Nirvana: “Just because you’re paranoid, doesn’t mean they’re not after you.”

Any scientific theory is falsifiable. Real scientists know that there is no such thing as scientific “proof.” In other words, you can never prove your theory with 100% certainty. All you can do is find a lot of supporting evidence that it could be correct, or you might find evidence that does not support your theory. We should make decisions based on that evidence, but as a general rule, if someone says they’ve “proven” a scientific theory, be distrustful of them.

Correlation is not causation. While this may seem obvious, mistaking these two conditions happens a lot in the real world. Correlation means that two things tend to happen together; causation means that one thing actually caused the other thing to happen. Two things can be correlated simply because a third variable drives both of them.
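
As an illustration, here is a minimal sketch in Python using simulated, entirely fictional data, where a hidden third variable (temperature) drives both ice cream sales and swimming accidents, producing a strong correlation with no causal link between the two:

```python
import random

random.seed(42)

# Daily temperature drives BOTH variables; neither causes the other.
temps = [random.uniform(10, 35) for _ in range(365)]
ice_cream_sales = [4.0 * t + random.gauss(0, 10) for t in temps]
swim_accidents = [0.2 * t + random.gauss(0, 1) for t in temps]

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strong positive correlation, even though ice cream causes no accidents.
print(round(correlation(ice_cream_sales, swim_accidents), 2))
```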

Missing information. Colgate once ran an advertising campaign claiming that 80% of dentists recommend Colgate. What they didn’t tell us is that dentists were allowed to select more than one preferred toothpaste, and Colgate was just one of many brands they also recommended.
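
Here is a minimal sketch, with made-up survey responses, of how a claim like that can be technically true and still misleading when respondents may pick more than one brand:

```python
# Five hypothetical dentists, each free to recommend multiple brands.
surveys = [
    {"Colgate", "Crest", "Sensodyne"},
    {"Colgate", "Crest"},
    {"Crest", "Sensodyne"},
    {"Colgate", "Sensodyne", "Crest"},
    {"Colgate"},
]

recommend_colgate = sum("Colgate" in s for s in surveys)
print(f"{recommend_colgate / len(surveys):.0%} recommend Colgate")  # 80%

# By the same logic, a competitor scores just as well:
recommend_crest = sum("Crest" in s for s in surveys)
print(f"{recommend_crest / len(surveys):.0%} recommend Crest")      # 80%
```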

P-hacking. This is when researchers run many different analyses on their data, or keep testing until something crosses the threshold for statistical significance (typically p < .05), and then report only the result that “worked.”

Another thing that can happen is people will jump to conclusions based on a very low number of participants.
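
These two problems compound each other. Below is a minimal sketch in Python, simulating studies in which no real effect exists at all, to show how tiny samples plus many analyses per study still produce plenty of “significant” results:

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(0)

def fake_study(n_per_group=10):
    """Compare two small groups drawn from the SAME distribution
    (so any 'effect' is pure noise) and return the |t| statistic."""
    a = [random.gauss(0, 1) for _ in range(n_per_group)]
    b = [random.gauss(0, 1) for _ in range(n_per_group)]
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return abs(mean(a) - mean(b)) / se

CRITICAL_T = 2.1  # roughly the two-tailed p < .05 cutoff for these sample sizes

# Each simulated "paper" runs 20 analyses and keeps the best one (the p-hack).
papers = 1000
hacked_hits = sum(
    any(fake_study() > CRITICAL_T for _ in range(20))
    for _ in range(papers)
)
print(f"{hacked_hits / papers:.0%} of no-effect 'papers' found a 'significant' result")
```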

There is also a systematic problem with publishing. If you do a study and find that one thing is related to another, for example, that when Thing A happens there is a statistically significant chance that Thing B happens too, that’s what we would call a positive result. Over 95% of studies that are published have positive results. It can be very difficult to get a study published that has negative or null results.

Another thing that can go wrong is unrealistic research environments. For example, studies that examine health issues often use rats instead of people. This does not mean that their results are wrong; there are a lot of things that rats and humans have in common that could make it a good study. But there’s always the potential that the results are not generalizable. So ideally, researchers would do the study with rats first and then, if things are going well, follow it up with a study in people as well.

Another thing I’ve seen is claims that everything is a carcinogen now. The flaw in that methodology is that researchers will sometimes feed animals enormous amounts of a substance and then say, “Oh, yeah, it causes cancer.” But in reality, if you’re only exposed to a little bit of it, it doesn’t make that big a difference. So results can be very exaggerated in that way.

Finally, and unfortunately, we also have some bad actors in science. We have people who feel the pressure to publish even though we don’t get paid to publish. Keeping your job very often depends on publishing, depending on what your job is. So even though people aren’t being directly paid for it, publishing more helps them keep their job. Fortunately, falsifying data doesn’t happen very often. When it does, they will get caught; it’s just a matter of how long it takes. When people fake statistics, it’s actually pretty easy to catch them in the act. If that happens, the paper has to be retracted.

So all these things can go wrong. Do they happen all the time? No, because peer review catches a lot of these issues. Social science is self-correcting: when we do replications, that’s one of the ways we catch mistakes. So these problems are rare in peer-reviewed work.

Also, we’re improving the process all the time. Right now, people are really focusing on pre-registering hypotheses. That can help a lot, because it can help studies get published even if they found null results instead of positive ones. It also helps ensure that people aren’t p-hacking, and that they have a clear hypothesis, stick to it, and don’t publish anything outside of what they pre-registered. We also see a lot of double-blind review right now; in our department, we pretty much exclusively publish in journals that use double-blind review, to the best of their ability. We’re also seeing a little more, though it’s not that common yet, of a two-step review process where you (Step 1) design your study, decide which journal might be a good fit, and have them review your process and procedures before you collect any data. Then (Step 2) you run the actual study, and the journal is much more likely to publish negative results as well.

So although the peer-review process isn’t perfect, it’s the best we have. Let me give you an example. When I was a baby, doctors used to say that you should put a baby to sleep on their stomach because that was the healthiest thing for them. Of great concern was SIDS, Sudden Infant Death Syndrome. It seemed to make sense intuitively that putting a baby to bed on their stomach should decrease the chances of SIDS. But through scientific study, we found that, statistically, more babies who passed away from SIDS had been put to bed on their stomachs. So research was done looking at what actually happens when babies sleep on their stomachs versus their backs, and it found that babies put to sleep on their backs are less likely to pass away from SIDS. This new information started the Back to Sleep campaign. By relying on new information from the sciences, we can make sure we are doing the best we can with the knowledge that is most reliable at the time of decision making.

Trust in Expertise 

As with so many of the topics that we’ve talked about, trust in expertise is a bit more nuanced than the general impression sometimes suggests. So we have three goals for this topic. The first is to talk about how science, science media (or science journalism), and citizenship are related to each other. Then we’ll talk about why this relationship matters. Finally, we’ll cover the current status.

Science and Science Media:

A lot of the things that we do in our daily life that may not feel political in nature can have some political component to them. Things like shopping, the kind of media you consume, the schools you attend or support, the transportation you use, even something like potholes in the road, are political issues. Unfortunately, lots of people try to avoid “politics” because it can be very polarized and emotional, so they try to be apolitical. But I would argue that there really is no such thing as being apolitical, only apathetic. Generally, someone who strives to be apolitical may simply not be noticing all the connections with their everyday life. Alternatively, they may not realize how much political issues impact the health and wellbeing of other citizens, as well as nation-wide issues such as economics and climate change that have serious long-term consequences.

Because so many of these aspects of our lives do have a connection to politics and citizenship, it can be really helpful to look at evidence before we decide what kind of policy we’re going to support. One thing that I have found in my experience in the social sciences, over and over again, is that people don’t always behave in ways that are predictable or rational, especially if you’re just looking at an issue from your own point of view. If you use scientific theories and evidence, you can see that people’s behavior may not be what we initially expect it to be. If we really dig into the who, what, when, where, and why, we can start to figure it out.

Issues like health, social justice, and civic justice do have available scientific information, particularly peer-reviewed scientific information, that can help guide us in decision making. But very often American citizens don’t know how to incorporate science into their policy support. A lot of politicians can’t or won’t do the same.

If you’re looking for some examples of issues that have a basis in science, or that should be informed by science, I recommend checking the website for the ACLU “issues” page.

One of the reasons that Americans struggle so much to incorporate evidence into decision making is that many people are unsure who counts as an expert. So, let’s look at some peer-reviewed articles on this topic.

(Cite) Media and popular culture function as primary sources of information about who is and who is not a scientist. In other words, people figure out who counts as a scientist through the media they consume; media tells them, or guides them toward, whom they consider to be a scientist, and thus whom they consider to be an expert. The problem is that people are more likely to consider someone an expert if that person agrees with them on whatever issue is at hand. This happens for both liberals and conservatives: both political ideologies tend to distrust science when it reaches conclusions they don’t agree with. If you’re interested in this, a former professor in our department, Dr. Erik Nisbet, and a former grad student here, Dr. Katie Cooper, did some really interesting research on how both liberals and conservatives distrust science when it says something they don’t like (cite). That is certainly problematic, although I would be misleading you if I implied that these two things happen equally; they don’t. Conservatives tend to value science less than liberals do. As good scientists, we need to examine why that is, because that is an important part of social science research. I don’t want people who identify as conservative to hear “We’re being attacked because you’re saying we don’t believe in science,” because that’s not really the case. Likewise, I don’t want people who identify as liberal to think “There’s the evidence that we’re smarter or better,” because the evidence doesn’t really support that either. It’s really the media that’s driving this discrepancy, not individual characteristics.

One related problem is that the general public doesn’t have a solid understanding of what science is. Many people, when you say the word “science,” really just think of chemistry. As an example, I was once talking to someone in my hometown about a policy issue we disagreed on. I offered to share scientific information with her. Her response was, “What do beakers and chemicals have to do with this policy?” I don’t think she understood that science goes beyond chemistry, and that the social sciences follow the same scientific method the natural sciences do.

As far as the status goes, let’s talk about two things: how Americans are doing overall with this relationship between science, policy support, and trust in expertise, and then some individual differences. For Americans overall, it’s actually a misconception that Americans have less confidence in scientists; according to Pew Research from 2019, confidence in scientists hasn’t decreased. What has decreased is agreement that science matters. People still think that scientists know what they’re doing and are coming up with answers to questions, but they disagree about whether that matters for policy, especially on issues like the environment. As for individual differences, we find that the more people know about science, the more they trust it. Of course, why would someone trust science if they don’t know how it works, if no one has ever taught them about the scientific method? When people learn about controlled experimental design, and how it reduces the likelihood of coming to incorrect answers, they’re more likely to believe in science.

As I mentioned before, on average, Democrats and liberals tend to trust scientific experts more than Republicans and conservatives do. That doesn’t really seem to be a difference in personality or interest in science. It seems to be driven mostly by media elites, because prominent Republicans in the media tend to distrust science; that seems to be the driving factor in why Republicans, on average, trust science less. Again, I want to be very clear that in the social sciences, we are just talking about averages. You can certainly find Republicans who have very strong faith in science, and Democrats who don’t. You can find those individual people, but in the social sciences, we look at the group overall, and there are differences in how much the average person trusts expertise. Also, on average, Democrats are more likely to support policy decisions that are based on scientific evidence, or to value that evidence. In fact, 66% of Republicans believe that a scientific expert’s opinion on an issue in their field is no better informed than the average everyday person’s. This is concerning because, in the social sciences, we would say that someone who dedicates their career to learning about one particular topic probably does know more about it than the average person. But the media tends to be driving these kinds of differences.

Next, what’s going on with social media? (CITE) says that social media is important for science communication, because your average everyday person often uses the internet to get their information, and a lot of people get their news from social media. That means that when science communication on social media is not up to par, it’s very problematic. Comments on social media tend to be very short, especially on sites such as Twitter, where there’s a limit to how much you can type. When you have these very short comments, it opens the door to misunderstandings and oversimplification. Science is nuanced. Often, we may find evidence of a media effect (e.g. people who are exposed to a certain type of media are more or less likely to support different policies), but then we have to look into why and for whom that media effect occurs, because if we look into those questions, we learn a lot more detail. Then we can be more correct in our observations of this objective reality.

An example of such an oversimplification comes from Time Magazine (cite). There was a peer-reviewed research study examining the different compounds that are in flatulence, i.e. farts. The idea is that when people pass gas, different compounds make up the gas, and one of those compounds is associated with health and well being. What happened was that Time Magazine heard about that result and jumped to the wild conclusion that smelling farts is good for you, which it’s not. That’s also not what the peer-reviewed article said. It just happens that there’s a compound in farts that is associated with health, but that doesn’t mean smelling it is going to help you absorb that compound. So they took one tiny part of a real peer-reviewed article and wildly extrapolated from it.

That was a rather famous example of the general mass media getting it wrong when reporting the results of peer-reviewed research. When it comes to this kind of content on social media, we again find that people’s prior attitudes are the strongest predictor of whether or not they’re going to agree with a comment. It’s not scientific soundness that makes them agree with it; it’s whether it matches their pre-existing attitude. And certain types of attacks on science on social media are more popular. For example, if an attack is affective in nature, in other words, emotional, it’s more likely to get attention on social media. One example is attacking the thematic complexity of the argument; in other words, one of the most popular attacks on science is saying the science is “too complicated.” The logical fallacy is “We’ll never understand it anyway, so why bother?” Unfortunately, for people who don’t understand the scientific process, that can seem very appealing. The result is that they do not use science to make policy decisions.

Another common attack is comments disparaging the motivations of scientists. These types of attacks lower the perceived credibility of scientists more than attacks about their competence. Basically, if you say a scientist is not competent, people are less likely to believe you, because in general they still consider scientists to be competent people. But if you say a scientist is unethical, or is being paid to produce certain kinds of results, these attacks on the person’s credibility tend to make people believe in science less than they otherwise would. (To be clear, that kind of pay-for-results corruption is generally not happening in the social sciences; we have procedures in place to reduce the likelihood that anyone reports incorrect information for financial gain. Not that it never happens, but it’s extremely rare, and the field self-corrects when it does.)

The main thing we can do to reduce this problem is simply teach people how science works. If they can learn about peer-reviewed research and the scientific method, then they’re going to be more likely to incorporate that information into their daily lives and their policy support. Also, if we can reach out in particular to people who may be getting information from extreme conservative media outlets, such as those that peddle conspiracy theories, we can bridge the partisan divide in science education. We can also reach out to people in our lives who may be in that kind of media bubble and help them find information outside of it that is evidence based. Again, this problem of inaccurate sources happens for both liberals and conservatives; it just happens more often for conservatives. If someone is getting all of their information from very liberal sources, you want to suggest that they find other sources as well.


License


Media Engagement for Democratic Citizenship Copyright © by Melissa Foster is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.