Big Tech’s midterm defensive crouch

Google, Meta, and Twitter aren’t on any 2022 midterm ballot. But they might as well be.
The influential tech platforms are drawing as much scrutiny as any candidate for Senate, House, governor, and the slew of down-ballot offices voters will choose on Nov. 8. The burden is on the tech companies to navigate the election scene fairly and impartially after legions of complaints that they tilted in a partisan direction over the past few campaign cycles.
Campaign season 2022 already has included lots of tech-focused political grandstanding.
“Big Tech companies shouldn’t be allowed to silence political opponents. They shouldn’t be allowed to work with the CCP. And they shouldn’t be allowed to manipulate information to change election outcomes,” tweeted Arizona Republican Senate nominee Blake Masters.
On the other side of the political spectrum, the reelection website of Rep. David Cicilline (D-RI) touts his investigation of “the four dominant firms in the digital economy.”
Candidates engage in the “techlash” because it’s a base-pleasing issue. But it’s likely not a persuading one for most voters. The Democratic “big is bad” message and the Republican accusations of viewpoint discrimination against conservatives may be red meat for activists and partisans, but polls indicate tech issues don’t resonate with most voters.
A left-of-center tech trade group, the Chamber of Progress, recently polled voters in battleground states Arizona, Colorado, Georgia, Nevada, and New Hampshire. The results show inflation, the economy, and reproductive rights topped the list of voter priorities. Regulating tech companies came in last, with 1% responding it was their top priority. A poll from right-leaning tech trade group NetChoice earlier this year produced similar results.
Voters may not share politicians' enthusiasm for regulating tech companies, but in the wake of backlash over the 2016 and 2020 elections, the industry is in a defensive crouch about how its platforms might be blamed for the midterm outcomes. If Republicans lose, many will blame online bias against conservatives. And if the Democrats' losses are bigger than expected, they will likely blame "misinformation" that was allowed to remain and circulate online. Tech companies don't want to get caught in that crossfire again and face increased threats of regulation or breakup.
In an effort to avoid more of the same political backlash, many social media firms have announced new policies around election information on their platforms. The biggest sites are instituting plans for identifying misinformation and pushing verifiable sources for technical voting information. Advertising policies are more varied. Twitter and TikTok banned candidate and political issue ads. Google and Facebook are allowing them but require disclosure of the sponsors. And Facebook will freeze new political ads during the week before the elections.
In September, Google said in a blog post that the company’s work surrounding the midterm elections “is centered around connecting voters to the latest election information, helping campaigns and people working on elections improve their cybersecurity and protecting our users and platforms from abuse.” The post details Google’s plan to show data from “nonpartisan organizations to make it easier for people to get helpful election information” when they search. The company will also partner with Democracy Works, a nonpartisan and nonprofit data provider, to supply information about how to register and vote. It also will link to “state government official websites” and work with the Associated Press to provide “authoritative election results on Google.”
Google-owned YouTube stated that when users “search for midterms content on YouTube, our systems are prominently recommending content coming from authoritative national and local news sources like PBS NewsHour, The Wall Street Journal, Univision and local ABC, CBS and NBC affiliates. This same approach goes for videos in your ‘watch next’ panels.” Likely more controversially among some on the Right, YouTube blogged, “Working in tandem, our systems are also limiting the spread of harmful election misinformation by identifying borderline content and keeping it from being widely recommended.”
The subjective nature of which sources are considered "authoritative" and which content qualifies as "harmful" points to the inherently difficult nature of content moderation. Those difficulties are compounded by the sheer number of posts on the largest platforms.
Meta’s Facebook, the recipient of perhaps the most political heat for its past election-related content moderation, indicated its policies for the midterm elections are largely consistent with those it used for the 2020 presidential election. Meta’s President of Global Affairs Nick Clegg posted that the company is “focused on preventing voter interference, connecting people with reliable information and providing industry-leading transparency for ads about social issues, elections and politics.”
Twitter, which has fewer users than the biggest social media platforms but a reputation for punching above its weight in influence, announced it would activate its Civic Integrity Policy. That will cover "the most common types of harmful misleading information about elections and civic events, such as: claims about how to participate in a civic process like how to vote, misleading content intended to intimidate or dissuade people from participating in the election, and misleading claims intended to undermine public confidence in an election — including false information about the outcome of the election." Also, candidate account labels will be placed prominently on the pages of those running for office to avoid confusion and misidentification online.
Politically controversial for its Chinese roots, but immensely popular with more than 1 billion global users, TikTok announced an Elections Center “to connect people who engage with election content to authoritative information and sources in more than 45 languages, including English and Spanish.” The platform will also institute the labeling of “content identified as being related to the 2022 midterm elections as well as content from accounts belonging to governments, politicians, and political parties in the US.”
A smaller social media site, Parler, says it will take a different tack than its bigger competitors. In a press release, Parler said that “all legal speech is welcome” and that “candidate opinions will never get banned or suspended.” Additionally, the company pledged to never shadow-ban accounts or use algorithms to suppress content.
No information on election-specific policies appears on former President Donald Trump’s social media platform, Truth Social.