Artificial Intelligence

What’s at stake here is our shared reality: An AI Perspective

An opinion editorial building on the ethos of the work being done by a team of Cornell Tech & Parsons School of Design developers to create a proliferation trend map, highlighting the instances and locations of documented abuse of synthetic media in the recent election cycle.

Read more about their project here.
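To make the idea concrete, here is a minimal sketch, in Python, of the kind of data such a proliferation trend map might aggregate: documented synthetic-media incidents tallied by country and medium. The schema and the example records below are invented for illustration; they are not the team's actual data model.

```python
# Hypothetical sketch: aggregate documented synthetic-media incidents
# for a proliferation trend map. Schema and records are invented.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Incident:
    country: str   # ISO country code
    medium: str    # e.g. "robocall", "image", "video"
    date: str      # ISO 8601 date the abuse was documented

# Made-up example records, stand-ins for documented cases.
incidents = [
    Incident("US", "robocall", "2024-01-21"),
    Incident("IN", "video", "2024-04-10"),
    Incident("US", "image", "2024-03-02"),
]

# Tallies by country and by medium are the raw material for a trend map.
print(Counter(i.country for i in incidents))  # Counter({'US': 2, 'IN': 1})
print(Counter(i.medium for i in incidents))
```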

To echo the journalist and Nobel Peace Prize laureate Maria Ressa: democracy is under threat from disinformation and lies online.

From phony robocalls to lifelike forged photos, generative AI is evolving at a breakneck pace as 2 billion people in 50 countries head to the polls this year. The central problem, as of mid-2024, is that governments, regulators and other branches of the state are simply not prepared for the threat: AI is already widely used to spread misinformation and to confuse, as well as entertain, voters. Lawmakers, tech executives and outside groups monitoring elections publicly urge caution when dealing with a technology that is developing faster than it can be controlled. Generative AI and synthetic media pose significant risks of abuse in electoral processes worldwide, and the 2024 election cycle is particularly vulnerable in this regard.

We are experiencing a panic-stricken information age, undermined and eroded by Big Tech corporations that profit from distorting democracy and amplifying outrage. (A secondary unfortunate byproduct of this business model is the extinction-level threat faced by journalistic institutions, as monolithic and divisive “feeds” replace and defund traditional journalism.) Social media users see only a tiny fraction of what is posted on TikTok each day, and what they do see is heavily curated by the company’s automated systems, which are designed to keep people glued to their smartphones. Using machine learning and so-called recommender systems (collaborative and content-based filtering; see NVIDIA’s Glossary), these systems decide within milliseconds what content to show each user. This mechanism pushes our public political discourse toward the extremes: the algorithm is trained to favor conflict and controversy, which light up the feed and attract likes in a way that subtlety and ambiguity never will. Big Tech favors extremism because it boosts engagement, and engagement boosts profits.
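To make that mechanism concrete, below is a deliberately toy sketch in Python, not any platform’s actual code: a ranker that sorts candidate posts by a predicted-engagement score. The weights and posts are entirely made up, but they illustrate the core dynamic the paragraph above describes: when the objective rewards engagement alone, provocative content outranks calm, relevant content.

```python
# Toy illustration (not any platform's real code): rank candidate posts
# by a predicted-engagement score. Weights and posts are invented.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    relevance: float  # match to the user's interests, 0..1
    outrage: float    # how provocative/controversial it is, 0..1

def predicted_engagement(post: Post) -> float:
    # Hypothetical learned weights: engagement-trained models end up
    # weighting provocation heavily because it reliably drives clicks.
    return 0.3 * post.relevance + 0.7 * post.outrage

candidates = [
    Post("Nuanced policy explainer", relevance=0.9, outrage=0.1),
    Post("Inflammatory conspiracy claim", relevance=0.4, outrage=0.95),
    Post("A friend's vacation photos", relevance=0.7, outrage=0.0),
]

# The feed surfaces the inflammatory post first, despite low relevance.
for post in sorted(candidates, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(post):.2f}  {post.text}")
```

Real recommender systems learn such weights from billions of interactions, via collaborative and content-based filtering, rather than hard-coding them; but when the training objective is engagement alone, the resulting dynamic is the same.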

The risks of AI-fueled disinformation and algorithmic distortion of our civic debates are everywhere, but they are likely to be even more pronounced in non-Western regions of the world, where social media corporations are known to under-invest in safeguards (Trust and Safety teams largely exist only in the Global North). In authoritarian, rogue or unstable regimes, bad actors can exploit the chaos of the Infocalypse with greater impunity, although no political system or country is immune. Mis- and disinformation were causing chaos and even inciting genocidal violence in places like the Philippines, Myanmar and India long before the West woke up to the problem with the Russian-led interference in the 2016 U.S. election. (Indian political parties are estimated to spend over $50 million on AI-generated election campaign material this year.)

Western democracies still have defenses in the rule of law, a free press and established democratic institutions (even as these become increasingly vulnerable). In countries with no such institutional safeguards, however, the consequences of a corroding information ecosystem can be even more devastating. Our shared reality is at stake when the Infocalypse is used to threaten and intimidate domestic opposition, drown out dissenting opinions, incite ethnic or gender-based violence, and suppress fundamental human rights.

Is it possible to win this information war? 

Much of the technical expertise needed to change these algorithms, and so to “win” the current information war, resides deep within the companies themselves. Legislative efforts, including the European Union’s recently passed Artificial Intelligence Act, are at best works in progress. The near-total lack of oversight of how social media platforms’ AI-powered algorithms operate makes it impossible to rely on anyone other than the tech giants themselves to police how these systems determine what people see online.

Where I believe we’re heading, though, is a “post-post-truth” era, in which people will assume that everything, especially online, is made up. Think “fake news,” but taken to its limit, where not even the most seemingly authentic content on the BBC can be presumed to be 100 percent true. With the hysteria around AI often outpacing what the technology can currently do, despite daily advances, there is now a widespread willingness to believe that any content could have been created by AI, even when it couldn’t. In such a world, it is rational to have faith in nothing.

Thirty percent of Americans claim, despite all evidence to the contrary, that the last presidential election was “rigged.” Millions are sure that the “deep state” is plotting to import immigrants to vote against “real Americans” in future elections. Meanwhile, in Russia, the majority of people claim that the Kremlin is the innocent party in its brutal invasion of Ukraine. When Ukrainians call their relatives in Russia to tell them about the atrocities, all too often they hear their own kin parrot the Kremlin’s propaganda lines. Across the world, propaganda is growing that promotes an alternative reality, one in which truth is cast away in favor of a sense of superiority and paranoia.

One thing we must keep at the forefront of our minds in these debates is that computers don’t make decisions; we humans make decisions, and AI then amplifies them. We must generate the buy-in necessary for an effective global regulatory framework through networked and inclusive multi-stakeholder approaches, while acknowledging that self-regulation and voluntary regulation of the AI industry are an insufficient guarantee of human-values-centered approaches. AI that does not operate in compliance with international human rights law should be banned or suspended until adequate safeguards are in place. A first (and perhaps the most obvious) step for policymakers is to ensure that what is illegal offline is also illegal online.

References:

  1. “Deepfakes, distrust and disinformation: Welcome to the AI election” (POLITICO)
  2. “AI: Inside the shadowy global battle to tame the world’s most dangerous technology” (POLITICO)
  3. “Is the Media Prepared for an Extinction-Level Event?” (The New Yorker)
  4. “What is a Recommendation System?” (NVIDIA Glossary)
  5. “Anatomy of a scroll: Inside TikTok’s AI-powered algorithms” (POLITICO)