
7 Algorithms and Bots

Learning Objectives

  • Understand how new media impacts citizenship behaviors through the use of algorithms
  • Apply this information to a case study
  • Create informed commentary on the use of algorithms in modern-day communication

 

Bots

In our discussion of algorithms and bots, I’d like to share with you some information from a peer-reviewed article by Howard, Woolley and Calo (2018). In media and citizenship, it’s important to put more effort into examining peer-reviewed research, because people don’t often use peer-reviewed sources when they make decisions about democratic concerns. One reason is that unless you’re affiliated with a large university, it can be difficult to get access to peer-reviewed research. Another reason we don’t use peer-reviewed sources as much as we ideally would is that they’re just difficult to read. One way you can really improve your ability to read peer-reviewed sources with ease is to do something like what I’ve done with this particular article, which I’m going to loosely call a reverse outline. This is a note-taking technique in which you read the paper paragraph by paragraph and ask yourself: what was the main point of that paragraph? Then, try to summarize each paragraph in one sentence. You might find that not every paragraph contains information you think is really important, so you can condense those. Condensing large amounts of information is good practice for staying informed as a citizen. So in this section, I’m presenting a reverse outline.

The article shared posts from Twitter by four different people. These four people actually have four things in common. The first thing they have in common, if you look at their pictures, is that they look like real people. Then, take a look at their names: Juan Garcia, Pepe, Luis Lopez; all of these names sound Hispanic. A more obvious thing they have in common is that they all shared a post with the same wording, saying that Donald Trump was widely supported by the Latino vote. The final thing they all have in common, as you may have suspected, is that they are not real. None of these posts were written by actual people. They were all written by bots. Yet you can see how anyone could look at them and feel like a real person made a real post. That is the problem we’re looking at: this happens quite frequently. We interact with bots all the time, and much of this interaction involves disinformation.

Our interactions with bots are not always bad. In fact, bots are constantly doing things for us, working in the background of our lives. Sometimes the social bots that post on social media post things we agree with. They’re not always trying to change our minds or create some kind of discord. Often the easiest type of persuasion is to convince people who already agree with you to feel encouraged and stirred to action. So these bots might be saying something we agree with, motivating us to believe even more strongly in how we feel and to act on those beliefs.

Other times, the purpose of the bot might be to be divisive and stir up arguments. Sometimes people even flirt with bots without knowing that the “person” they are communicating with isn’t a person at all. There are scams out there in which people use bots to play on people’s emotions and sense of loneliness for financial gain. Be aware, though, that robots and bots are different: robots exist outside of a computer.

So we interact with bots all the time, particularly on social media. The article compared having a bunch of bots supporting your tweet to having an audience full of mannequins cheering and clapping for you. But these bots can make a tweet seem much more popular than it really is, which does have a strong impact on public opinion. If people think everyone else supports something, they’re more likely to support it as well.

Some of the issues related to bots include whether they should be regulated, if so, how to regulate them, and whether bots should have “free speech” rights. These are not questions we are going to answer explicitly, but they are questions we should start thinking about when we make policy decisions for our country.

So let’s get into some definitions. Like I said, bots are not robots; they don’t have a physical form outside of a computer. They are algorithms, or code. Sometimes they’re very simple. There’s a bot that we use here in the School of Communication that helps us back up our data, so everything I do is automatically saved, which is really helpful. Sometimes bots are more complex: those fake social media accounts are automated to scan an environment, learn something about it, and respond accordingly, which is much more sophisticated. Another term you might hear is botnet. A botnet is basically a network of many little bots: one person’s computer gets infected, then another person’s, and another’s, and they all start working together to do something, usually spam or some other nefarious purpose.
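To make the idea of a bot as “just code” concrete, here is a minimal sketch in Python. This is my own illustration, not from the article; the function name and the canned reply are invented. It follows the basic loop described above: scan the environment (a stream of posts), match a pattern, and respond. Real social bots do the same thing at scale, with far more sophisticated matching.

```python
# A hypothetical, minimal "social bot": scan each post, check it
# against a pattern, and respond with a scripted message.
# The keyword matching and the canned reply are invented for illustration.

CANNED_REPLY = "Totally agree -- everyone I know feels the same way!"

def bot_respond(post, keyword):
    """Reply with the canned message if the post mentions the keyword."""
    if keyword.lower() in post.lower():
        return CANNED_REPLY
    return None  # stay silent on posts that don't match

posts = [
    "The new policy is a disaster.",
    "I love my dog.",
]
for post in posts:
    reply = bot_respond(post, "policy")
    if reply:
        print(reply)
```

Notice there is no intelligence here at all, which is the point: a handful of these scripts, each posting the same scripted agreement, is enough to simulate a crowd.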

So like I said, bots can be used for good; sometimes we use them to take care of the mundane tasks of our lives. Maybe we want to manage our personal news consumption. For example, if I want to spend a little bit of time hearing only news that has to do with the children at our southern border and what is happening to migrant children seeking asylum, I can have a bot arrange it so that I see all the articles related to that particular topic. Or perhaps you want to advertise yourself when looking for a new job; a bot can help you out with that by conducting searches online, making connections between web content, and tracking breaking news. There’s really no limit to the good uses for bots.
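The news-filtering bot described above can be sketched in a few lines. This is a hypothetical illustration; the topic keywords and headlines are invented, and a real news bot would pull from live feeds rather than a fixed list.

```python
# Hypothetical sketch of a "good" bot that manages personal news
# consumption: it filters a feed of headlines down to one topic.
# Keywords and headlines are invented for illustration.

TOPIC_KEYWORDS = {"asylum", "migrant", "border"}

def filter_feed(headlines, keywords=TOPIC_KEYWORDS):
    """Keep only headlines that mention at least one topic keyword."""
    return [h for h in headlines
            if any(k in h.lower() for k in keywords)]

feed = [
    "Migrant children seek asylum at the southern border",
    "Local team wins championship",
    "New report on conditions at the border",
]
print(filter_feed(feed))
```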

But of course, bots and botnets are put to a lot of malicious uses as well: spam; DDoS attacks, which interfere with your service or your computer’s ability to function; theft of confidential information; click fraud; and cyber sabotage and warfare. Different countries are using bots to attack each other, to break into sites they should not have access to, and to manipulate public opinion to make a country seem more divided than it actually is.

Different types of bots: Social bots are exactly what they sound like. A user account is made purposely to interact with other users. A lot of the time they’re meant to interact with humans, but sometimes they’re made to interact with other bots. In that case, different bots communicate with each other to amplify their message. In this way, social bots can and do have an impact on democracy; this isn’t a future thing, it is happening and has happened already. They do things like changing voter opinion, which is what people think of most often when they think of social bots, but that’s not what we see occurring most often. More often, we see them attacking journalists or discrediting political leaders, and women are targeted more often than men. These social bots tend not to come from happy, neutral, moderate places; they tend to come from radical political parties on the far left or the far right. And they tend to have more of a negative message than a positive one, which can really be overwhelming.

Political bots are a specific type of social bot: they’re just social bots with a political agenda. Of course, the bot itself is code, so it’s really the agenda of the person who wrote it. Any social media site can have them. Twitter is well known for having a lot of political bots, basically because the people who own and operate Twitter have a pretty open policy and aren’t doing much to prevent it from happening. And right now, if you don’t prevent it from happening, it’s going to happen. It is difficult to measure how strongly these political bots are impacting people; they are so ubiquitous that it’s difficult to even sit down and get a sense of exactly what they’re doing. We know they have an impact, and we know it’s a strong impact, but it’s hard to put a number on it. They also have different kinds of goals. They might want to influence people’s opinions on issues, or maybe they just want to pad a follower list. Celebrities do this as well; they might purchase some bots to make it look like they have more followers or are more popular than they are. These bots might help promote different kinds of content, maybe even create content. They very often spread propaganda or send mobilizing information. Remember, we talked about mobilizing information: that’s the kind of information that tells you what to do, like saying there’s going to be a protest at this date and time, or here’s a place you can vote. Some of that mobilizing information is fake. There was an issue where people were being told that they could vote online, and you can’t; you have to vote either with an absentee ballot or in person. So that can be a problem. And bots can make it look like a lot of people are on board with some idea even when a lot of people are not.
So who uses bots? Everybody: politicians, celebrities, you name it. A couple of examples. President Obama used bots to disseminate messages across social media, especially when he was trying to help people understand what Obamacare, the Affordable Care Act, was; his team employed some bots to help disseminate that information. Mitt Romney has used bots by buying Twitter followers. President Trump spent over $70 million on Facebook ads in the 2016 election alone, again using bots. But it’s not all political. Commentators, journalists, artists, and nonprofit organizations use them too; say you’re having a fundraiser for your nonprofit organization, there are nonprofits that use bots to disseminate information about that.

Content in bots: We see that a lot of them are against something instead of in favor of something. They are also very often quite emotional, playing heavily on people’s fears. Fear and anger are quite often the results we get from the use of these bots.

When it comes to bots and the law, things can get very tricky very quickly, both when we look at campaign finance regulations and when we look at rules and guidelines for politicians’ actual behavior. So let’s look at both of those. First of all, when it comes to the money, campaign contributions are regulated. The idea is that if you accept a limited amount of money per person, and if you accept money only from certain types of people or organizations, that is going to limit corruption. However, campaign expenditures are not regulated: how a campaign gets its money is regulated, but how it spends its money is not. This is because the Supreme Court ruled that people can be corrupted by getting money, but not really by how they spend money. This matters for bots and for other ways people use their money as well. Say, for example, that I want to donate money to John Doe’s political campaign. Maybe there’s a limit to how much money I can donate to his campaign, and I wish I could donate more, but I can’t. So instead of donating directly to his campaign, I spend a ton of money on a bunch of advertisements for him. In this way, none of my money has gone directly to John Doe, but I’ve used my money to purchase advertising airtime for him. So is that a donation, or is it not? You can see some gray area there. The Supreme Court made an exception to say that these coordinated purchases are considered contributions, not expenditures: if you coordinate with the campaign to make purchases, they count as contributions even if the money doesn’t go directly to the campaign. But this really only works for television and the like; the internet is not generally regulated. And so we see a lot of corruption and disparity in how money is spent online.

If an internet website is not being paid to show advertising content, then it does not technically count as public communication, and therefore people can create ads and disseminate them, using bots or not, even if the ads cost a lot of money to produce. So if I spend tons and tons of money on a political ad with big explosions and lots of computer-generated effects, I can spend a fortune on it; as long as I’m not spending money to actually have it shared online, that money is not regulated at all. This is how a lot of corruption takes place in campaign financing.

There are other issues in addition to campaign financing. Let me give you an example of such an issue. The Russian government and many Trump supporters on Twitter were sharing messages that contained false information. Again, false information is not regulated. And coordination between an actual campaign and a foreign government is defined as treason. What’s tricky here, though, is that we’re relying on the social networks themselves to monitor and make decisions about whether or not a campaign was indeed collaborating with a foreign government. But the social network sites are not really self-regulating, or not in a way that’s actually making an impact on what we know about these kinds of communications. Disclaimers are different online versus on television as well. Since unpaid internet sites are not considered public communication, they are not regulated, and a message generated or shared by a bot generally doesn’t have to include any kind of disclaimer information. So you can have the very same ad, and the rules differ depending on whether it’s shown on TV or on the internet: the one on TV has to have a disclaimer, and the one on the internet does not.

To wrap it all up quickly: bots are used to impact public opinion, to circumvent legal procedures, and for other kinds of dubious efforts as well. What we need to do is learn about it and find out what’s going on; the more we know, the more we can make good policy decisions.

Facebook as a Case Study

Most of the information in this section comes from a book called Zucked, by Roger McNamee. This is a book that was written for the general public; it’s not peer reviewed. Whenever I use a source that is meant for the general public and is not peer reviewed, there are a couple of things I like to look at, such as who wrote the book and why they wrote it. McNamee is basically everything you would expect of someone who was one of the pioneers of the internet coming out of Silicon Valley: very intelligent, very involved in everything Silicon Valley, has a rock band, whatever you would imagine a Silicon Valley guy his age would be. He was an advisor for Mark Zuckerberg when Zuckerberg was first creating Facebook and bringing it to the public. He actually still owns stock in Facebook, so if Facebook benefits financially, so does he. But he also wrote a book that pretty much completely dumps on Facebook and says a lot of very negative things about it. So on the one hand, he should be motivated to have Facebook do well, because he’s financially invested in it. On the other hand, he wrote a book, which again can be for financial gain, that says terrible things about Facebook. I find those things to be true and evidence based. So I enjoyed this source; I think it was a good book. But I do want you to be aware of where all of this information is coming from.

One of the things he says is that even the best of ideas, in the hands of people with good intentions, can still go terribly wrong. He was talking about Facebook and how he thought it was going to be really wonderful, particularly for democracy, bringing people together. When he says that this good idea can go terribly wrong, what he means is that Facebook is bad for democracy, for public health, for personal privacy, and for the economy.

So what is it about Facebook that makes him concerned about all of these things? There are a couple of different things he mentions. First of all, he mentions that the way Facebook is set up, its whole structure, gives an advantage to the kinds of posts that appeal to our lizard-brain emotions, such as fear and anger. The idea is that Facebook runs on ads. Anytime an application is free, it usually makes its money from advertisements. And to get those advertisements and the money that comes with them, what they really want is clicks and likes: evidence that people are engaging with the material. And people engage a lot with topics that make them feel afraid or angry, or with sexual topics; anything that appeals to those lower levels of our brain.

Another concern he has is that Facebook is turning citizens into consumers. Basically, what he means is that we’re becoming more passive. Instead of being out there as active citizens, we are passively sitting back on our Facebook pages and just consuming information, not in an active way. He did not use the term slacktivism in the book, but I think that is part of what he’s getting at: instead of going to a protest, someone might write a tweet or post on Facebook. He is also concerned about foreign influence, as most communication scholars are right now, particularly in terms of our political elections.

And he is concerned about civil rights violations; there’s some evidence that advertising on Facebook has contributed to housing discrimination. So that’s how he opens the book, and he discusses different kinds of evidence for each of these things. For example, the relationship between President Trump’s campaign and the Russian government is not really in question anymore; we’ve found that evidence, and he discusses it in the book a little bit.

So let’s talk about the early days of the internet and of Facebook. McNamee was, like I said, a bit of an advisor or mentor figure for Mark Zuckerberg, and they both felt that Facebook was going to be awesome, that it was going to bring people together and maybe even be good for democracy. But here’s a quote: “the notion that massive success by a startup could undermine society and democracy did not occur to me, or as far as I know, to anyone in our community. Now the whole world is paying for it”. So this book is, in a lot of ways, an atonement for mistakes he feels that he and other people made when creating Facebook.

During these days when social media was becoming popular, and even the internet itself was just starting to become popular, a lot of people in Silicon Valley had to make decisions about what was going to happen with the internet. Here are some of the things they were making decisions about. Number one: net neutrality. The idea of net neutrality is basically that every website has an equal chance of being viewed by the general public. We had net neutrality for a while; at this particular moment in time, we don’t, though it is an ever-changing, controversial topic, so who knows, maybe we’ll have it again someday. What frightens historians and communication scholars about not having net neutrality is that we don’t know what we don’t know, by the very nature of not having it. New startup companies and smaller websites don’t get the traffic, we’re not exposed to them, and you don’t know what you’re not exposed to. So there’s no real way to measure the effects of not having net neutrality. But it was one of the perks of the internet; one of its whole purposes was that it was going to be free, available, and accessible to so many people. So that was one issue.

Anonymity is another issue with interactions that take place online. At first, it seemed like a very good idea, but as we’ll explore throughout the semester, there are some problems associated with it.

Going back to Silicon Valley, we find an interesting point in history. In the far past, if you wanted to start a new company, you had to have a ton of money to do it. But as technology improved, all these little companies in telecom and in Silicon Valley were able to start up without massive amounts of money. There are pros and cons to that, but one of the troubles is that because they didn’t need a lot of money to start their companies, their products didn’t have to be perfect. They could launch an imperfect product or website, see what problems occurred, and then fix them. In doing so, they were asking forgiveness instead of permission, and of course that has an effect on the general public, how they consume media, and what that means for them. This new technology also influenced the whole philosophy of the tech industry, as far as prioritizing the individual over the collective good. People were really focused on making money for themselves and their product, and not on what that product meant for the general public.

We also saw at this time, particularly during the Reagan administration, cuts in regulations for the internet, for media in general, and for corporations in general, and that caused some problems whose effects we didn’t fully see until the 90s: things like stagnant wages, income inequality, and the rise of monopolies, which can be trouble. So those are some of the things he discusses about those early days of the internet. Eventually, he started to become very concerned about social media, and at this point in time, he saw an episode of 60 Minutes and became very excited about what Tristan Harris had to say.

Clickbait

There are three main questions that I want to address. The first is: what is it? Then, where is it? And finally, what can we do about it?

Meta-Cog: Actually, whenever you come across an issue relevant to media and citizenship that might not be super familiar to you, I recommend that you start by looking at these three questions: What is it? Where is it? And what can we do about it?

Clickbait is trying to hook you, trying to get you to pay attention, without having any real substance. I’d like to share with you an article from 2016 (CITE).

Most online news and social media outlets get their primary profit from advertising. Again, it’s mostly on social media that you see clickbait taking place, but it absolutely happens in professional journalism as well. Advertisers only want to put their advertisements on a website if that website is going to get a lot of traffic, a lot of people viewing it. It’s the same as if you were going to put up a billboard along the highway to advertise your product: you want to choose a very busy highway where lots of people are going to see it, not somewhere out in the middle of nowhere where no one will. And there are a lot of websites out there, a practically infinite number. What that means is that in order for these websites to get money from advertisers, they have to be very competitive with other websites; they have to get a lot more attention and a lot more traffic than the others. And you can measure that in clicks. The more people click on a website, the more money it can make through advertisements.
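The economics here can be made concrete with a back-of-the-envelope sketch. All of the numbers are invented for illustration; real ad rates vary enormously by market and format.

```python
# Back-of-the-envelope ad economics: revenue grows with traffic and with
# the share of visitors who click on ads. All numbers are invented.

def ad_revenue(visitors, click_through_rate, pay_per_click):
    """Estimated revenue: visitors x click-through rate x pay per click."""
    return visitors * click_through_rate * pay_per_click

# Same site, same traffic: a headline that quintuples the click-through
# rate quintuples the revenue, which is the incentive behind clickbait.
dull = ad_revenue(100_000, 0.01, 0.10)
catchy = ad_revenue(100_000, 0.05, 0.10)
print(dull, catchy)
```

The point of the sketch is only the multiplication: attention is the product being sold, so headlines get optimized for clicks rather than substance.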

The result of this is that any website with catchy or salacious headlines is more likely to lure readers in and get them to click on the links than news that is truly of value. And the clickbait actually works pretty well; a lot of these companies have gotten very good at getting your attention and getting you to click on the links they want you to click on. But they’re not very good at living up to people’s expectations. You might see a flashy headline and think, ooh, that sounds interesting, I want to find out more about that. And you click on it, and it just doesn’t fulfill you. This is where we see people doing a lot of the extended scrolling on social media. They might think, oh, I’ve got 10 minutes to spare, so let me just spend 10 minutes on TikTok, and then it’s, oh my god, it’s been two days, I’ve been on TikTok for 48 hours. Okay, that’s an exaggeration, but people do end up spending a lot more time online than they intended to. And it’s because a lot of this clickbait is playing to your emotions. People are naturally drawn toward stimuli that appeal to sex, anger, anything that taps primary rather than secondary emotions.

We’re gonna go sort of around the world talking about clickbait. One of the troubles is that the social sciences are much more prevalent in places like Europe, and while they’re starting to get more common in China, there’s a lot of the world that is not represented well in the social sciences. So I’m going to walk you through some articles I found about clickbait around the world, keeping in mind that there are a lot of places that aren’t represented well in the research. In the EU, there was an interesting study that analyzed headlines from newspapers from 28 different countries in the European Union. The main result was that clickbait was used most in newspapers, which is of course to the detriment of traditional journalistic values. As I said, you’ll see clickbait a lot in social media, but this result is very troubling because they’re finding a lot of clickbait in newspapers that are meant to be informative. So people were not clicking on the articles in the newspaper because they were well-written, high-quality articles; instead, about half of the time they clicked on an article, it was clickbait. It was a catchy, provocative, or sensationalistic front-page headline “aiming to exploit the curiosity of the user,” as quoted from Rosa here. So that’s a concern.

Let’s go to Russia, where we see clickbait happening as well. But it’s a little different in Russia. You have all the normal influences on journalism that you have anywhere else: the journalists have to make money, so their aim in creating content may not just be to find the most hard-hitting, important news stories; they’re also influenced by the need to finance their journalism. But uniquely, there is also a lot of political pressure on journalists in Russia. So you can imagine that these journalists are working in an environment where the quality of their work is definitely being influenced by some non-journalistic values, which is not ideal. In an ideal world, for democratic citizenship, they would be able to focus just on finding good news and doing investigative journalism. But the financial aspect and the political pressure they’re put under definitely have an effect on what they’re able to produce.

Let’s hop now over to China, where the government often uses clickbait to get attention for political propaganda, which is quite common there. Journalists use clickbait more than other methods because it doesn’t decrease the space available on a website for the government’s propaganda. This technique has been pretty successful: researchers found that clickbait in China gets more people to view it, more people to like it, and a greater reach, meaning it spans more of the country, with more people seeing it across different geographical regions of China.

So lots of clickbait there. Now we’re going back to Spain; like I said, some of these countries are represented twice because there just isn’t a lot of social science in many other areas of the world. Researchers there did a content analysis of news stories on Facebook and Twitter. Content analyses are a really important part of communication research. While a lot of other communication research looks at media effects, at how media impacts people, content analyses are different: instead of looking at media effects, they describe what is out there. Those two kinds of research go hand in hand really well; if we know what kind of media is out there in the world, then we can look at what its effects are. But before we look at the effects, we need to know what’s happening. So they took two of the most popular newspapers in Spain, which post some of their stories on Facebook and Twitter. It’s professional journalism, so you would expect that most if not all of their posts on Facebook and Twitter would be newspaper articles, investigative journalism. That’s not the case. What they found in this very extensive content analysis of 2,200 news stories is that almost half of those news stories were clickbait; almost half were not really informative. And this problem is made even worse by the fact that more than half of the Spanish internet users go to social media as their primary method of staying informed.

So we discussed staying informed as one of the necessary components of being a good citizen. We want to be active, informed, and responsible citizens with our media use. But if over half the population is getting their news from social media, and almost half of that is clickbait, it becomes very difficult for your average everyday citizen to be informed, even when they want to be well informed. So we can see that we have a problem that is reinforcing itself here, in part because it lacks the gatekeeping of truth.

Moving back to the UK, researchers found that clickbait has created some very serious issues regarding privacy, transparency, and human rights, but that the law is not keeping up. Let me give you an example that they shared in their paper. Back in 2019, there was a televised debate for the UK general election, for prime minister, and people on Twitter were keeping tabs on the debate. There was a Twitter account called factcheckUK, and this account said that Boris Johnson was the clear winner. If that already seems a little strange to you, it is strange, because fact checkers are not generally in the business of declaring a debate winner; fact checkers are generally in the business of fact checking. If the two people who are debating make certain claims, a fact checker, especially a fact checker on Twitter, is going to say: there’s evidence to support this claim; there isn’t evidence to support that claim. But they don’t declare an overall winner of the debate. So that was weird, and people started looking into it. Here’s what happened. Boris Johnson is conservative in his political ideology, and the Conservative Party had an official Twitter account. Because it was an official account, it had been verified: verified Twitter accounts have a little blue checkmark, and if an account has that blue checkmark, you know that, basically, they are who they say they are. So it was the Conservative Party, blue checkmark, verified. But just a few minutes before this general debate, the Conservative Party’s Twitter account changed its name to factcheckUK, which sounds nonpartisan. If you’ve got your actual name in there, Conservative Party, it’s pretty obvious that you’re with the Conservative Party. But if you change your name to factcheckUK, it sounds like a very unbiased, nonpartisan name. By changing their name right before the debate, not only did they get rid of the obvious clue to their political affiliation, but they got to keep the blue checkmark. So it made it look like a verified fact-checking site, when really it was the Conservative Party. What they did is clearly very deceptive; it was a manipulative way to misinform the public about who they were and what their goals were. As deceptive as it was, though, there’s nothing illegal about it. Everything they did here was perfectly legal.

And so what we see is that political communication has been regulated in many countries for many, many years, but the regulations are not keeping up with the internet. Internet advertising is subject to different rules than advertising on TV, radio, newspapers, and the other traditional outlets. And it's not just the UK; this is happening in the US and many other countries too. Online political ads, like many other ads trying to sell products, can be spread very quickly, very cheaply, and with essentially no transparency: you don't know who's posting the ad, what they stand to gain, or what their credentials are. You're not informed of those things on the internet the way you would be if the exact same ads ran on television. And many of these ads are being run by algorithms. The algorithms collect deeply personal information about you so that they can make their clickbait ads as targeted as possible. If they know what really interests you, they know what will make you want to click on something, they get better at making you click on it, and then they make more money from advertisements. And much of the time, you may have consented to this.

So for example, if you’ve signed up for social media, they usually have a little term of agreement. And by little term of agreement, I mean, it’s usually like a lot of pages with big words. And then you click the i agree button before you can create your social media account. But honestly, how often do you actually read through the terms of agreement? Maybe some people do, but most people don’t. And so while you may have agreed to the terms, it’s not what we would think of as meaningful consent, because you didn’t actually read it or understand what it was that you were agreeing to. Or at least that’s what happens most of the time.

So that's part of the problem with clickbait. What do we do about it? There are a couple of directions we can go. One is that people in the world of technology are working very hard to develop browser extensions that warn readers when a media site is potentially clickbait, and they're getting better and better at it as time goes on. In this particular study, the web browser extension the researchers created was 93% accurate at detecting whether something was clickbait. When it detected clickbait, it automatically labeled it as such and warned the user, who then chose whether or not to block it. As users blocked items, the extension learned their preferences ("they like to block this kind of thing, but not that kind") and eventually started automatically blocking clickbait that, based on the algorithm, they probably would not have wanted to click on anyway. Those are some of the good points. But of course, any time you have this kind of automatic blocking, with an algorithm making choices for you, you lose some control over what you see online. So there are pros and cons.
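To make the warn-then-learn loop above more concrete, here is a toy sketch in Python. This is my simplified illustration, not the actual extension from the study (which was far more sophisticated than keyword matching): the cue list, the scoring rule, and the 0.15 threshold are all invented for demonstration.

```python
# Illustrative sketch only: a keyword-based clickbait filter that warns the
# user and learns which cues the user chooses to block. The cue list and
# threshold are made-up assumptions, not taken from the study.

CLICKBAIT_CUES = ["you won't believe", "shocking", "this one trick",
                  "what happened next", "doctors hate"]

def clickbait_score(headline: str) -> float:
    """Fraction of known clickbait cues that appear in the headline."""
    text = headline.lower()
    hits = sum(cue in text for cue in CLICKBAIT_CUES)
    return hits / len(CLICKBAIT_CUES)

class ClickbaitFilter:
    """Warns on suspected clickbait; learns from what the user blocks."""

    def __init__(self, threshold: float = 0.15):
        self.threshold = threshold
        self.blocked_cues = set()  # cues the user has chosen to block

    def check(self, headline: str) -> str:
        text = headline.lower()
        # Learned preference: auto-block headlines matching blocked cues.
        if any(cue in text for cue in self.blocked_cues):
            return "auto-blocked"
        # Otherwise, warn if the headline scores above the threshold.
        if clickbait_score(headline) >= self.threshold:
            return "warn"
        return "allow"

    def user_blocks(self, headline: str) -> None:
        """Record the cues in a headline the user decided to block."""
        text = headline.lower()
        self.blocked_cues.update(c for c in CLICKBAIT_CUES if c in text)

f = ClickbaitFilter()
print(f.check("You won't believe this shocking trick"))  # warn
f.user_blocks("You won't believe this shocking trick")
print(f.check("You won't believe what happened next"))   # auto-blocked
print(f.check("Local council approves budget"))          # allow
```

The point of the sketch is the trade-off the paragraph describes: once the filter has learned your preferences, it starts acting without asking, which is convenient but shifts some control from you to the algorithm.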

Another option is to really work on training ourselves. Instead of having a browser extension identify the clickbait for you, you can train yourself by, for example, learning about the different types of clickbait so you can identify them more easily when you see them, and by developing the willpower not to click on them. Or, if you're watching YouTube, don't click on the video it suggests for you; instead, click on something you specifically searched for. These are things you can do to reduce the algorithm's ability to learn about you and manipulate you. The third option is to make our laws about internet communication keep up with the technology. None of these is an easy fix, and probably none of them will single-handedly solve the problem, but these are all things we can work on to improve it. And while they all have pros and cons, it's good to be informed.

License

Media Engagement for Democratic Citizenship Copyright © by Melissa Foster is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.