Do biased search results influence our elections? - Orange County Register

In July 2018, the Wall Street Journal reported on a scheme used to trick Amazon’s algorithm into ranking certain products higher in search results. “Dozens of young men crowd into tiny rooms with 30 computers each in northern Bangladesh every day,” the Journal reported. “They open Amazon.com and repeatedly type in search terms, each time clicking on the products they were paid to boost.”

Amazon aims to show you what you’re trying to find, and collecting data from consumer clicks helps bring the right products to your screen. If you’re looking for a glue gun, your search results will show you glue guns, not water guns or “Top Gun” DVDs or squirt guns. You’re then likely to click on one of the products at the top of the search results rather than browsing through pages of listings or clicking over to another site.


For everyone involved, search rankings can mean the difference between phenomenal success and total failure. So there is an ongoing battle between the people who try to manipulate the search engines and the tech companies that want to prevent the manipulation. The search algorithms are secret and frequently modified. This cat-and-mouse game is one reason a website or product may suddenly stop appearing at the top of search results. When changes are made to stop the scammers, legitimate sites that previously ranked well may fall to the middle of page 3.

This became everyone’s problem when it started to affect political content.

In May 2018, just before the statewide primary elections in California, a Google search for “California Republican Party” displayed a prominent “knowledge panel” that listed the party’s ideologies as conservatism, market liberalism and Nazism. House Majority Leader Kevin McCarthy posted a screenshot of the search results and called it “disgraceful.”

Did Silicon Valley’s well-known liberal politics lead Google employees to smear the Republican Party by falsely associating it with Nazis? Did the search results convince voters to turn away from Republican candidates?

According to a Google spokesperson, the answer to the first question is no. The Nazi reference in the CAGOP search result was pulled from a vandalized Wikipedia page for the California Republican Party. Someone had made a malicious edit to the online encyclopedia, and it remained live on the site from May 24 to 30. Google “scraped” the Wikipedia content and indexed it, and that’s how Nazism appeared in search results for the California Republican Party. “We have systems in place that catch vandalism before it impacts search results, but sometimes mistakes do happen, and that’s what happened here,” the Google spokesperson said.

That is small consolation for the California Republican Party, which is still smeared in search results that surface stories about the vandalism incident.

How many voters were influenced by seeing the words “Nazi” and “Republican” on their screens at the same time? There is no way of knowing, but at the end of the 2018 election cycle, McCarthy was the House minority leader, not the majority leader.

One researcher has concluded that “ephemeral” search results, including words that appear briefly on your screen as search suggestions while you type, can “shift the percentage of undecided voters supporting a political candidate by a substantial margin without anyone knowing.” Dr. Robert Epstein, the former editor of Psychology Today, has testified before the U.S. Senate about this search engine manipulation effect. In a 2018 article titled “Taming Big Tech,” he wrote that his team of researchers believed that “Google’s search engine – with or without deliberate planning by Google employees – was determining the outcomes of upwards of 25 percent of the world’s national elections.”

Epstein would like to launch a monitoring project to capture ephemeral search results in order to obtain reliable data on what people see online when they search, and how they are influenced by it. That could help researchers determine whether search results are being skewed by the technology companies themselves, or perhaps by roomfuls of people clicking on links in Bangladesh.

The stakes are high, and the technology companies are under pressure from all sides. Twitter has announced that it will no longer accept political advertising, while Facebook has taken the opposite position and refused to ban political ads. Of course, a paid-ads policy does not answer questions about search results for unpaid content, or about ads that are not technically political, such as paid promotion of news stories. The words used in headlines, for example, can carry a strong connotation of approval or disapproval of a politician’s actions. Most people will not read the whole story to get both sides.

Here, however, we are in extremely dangerous territory, on the slipperiest of slopes. Last April, Sen. Ted Cruz, R-Texas, spoke of regulating tech companies to prevent “political censorship” on their platforms. That may sound good to some people, until we get to the part where enforcement requires tech companies to report to the government what is being said, how often, and by whom.

It is far more dangerous to let the government regulate the media than to let lies roam freely on the internet. Freedom of speech requires strict limits on the power of government. And that’s the absolute truth.

Susan Shelley is an editorial writer and columnist for the Southern California News Group. Susan@SusanShelley.com. Twitter: @Susan_Shelley
