In 2024, more than 2 billion voters across 50 countries—including in the United States, the European Union, and India—will head to the polls in a record-breaking number of elections around the world. It has been nearly a decade since social media was first weaponized to influence election outcomes, and today’s technological advancements, such as generative artificial intelligence (AI), are poised to worsen existing problems and cause new ones. In this climate, it is more important than ever that technology platforms and governments do everything in their power to safeguard elections and uphold democratic values online.
Against a backdrop of novel challenges alongside known threats, today’s technology and social media landscape paints a stark picture of platforms underprepared for the year ahead. Meanwhile, the prominent parent companies of many major social media platforms, known colloquially as Big Tech, have retreated from the election protection measures put in place in 2020 and initiated layoffs that have hit trust and safety teams across the industry, leaving platforms less prepared than ever for a year of back-to-back, high-profile elections.
It is no secret that democracy and workers’ rights are inextricably connected and always will be. Defending workers’ rights is crucial to upholding the democratic values of equality and justice. To rebuild social trust in democracy as well as to prevent erosion of democratic norms globally, we must fight to uphold these rights both online and offline. At the highest levels, we must steer the creation of AI to complement workers, prepare workers for the adoption of AI, and meet the needs of displaced workers.
Unfortunately, democratic backsliding is on the rise, and the erosion of democratic institutions coincides with the undermining of workers’ rights and labor protections. In many countries, we have seen a greater willingness to embrace far-right leaders and parties. The rise of right-wing populism in democratic countries and emboldened authoritarianism in autocratic countries go hand in hand. For example, extremists in the U.S. House of Representatives are using the same isolationist language as Hungary’s right-wing Fidesz party in their bid to curtail support across the US and EU for Ukraine’s fight against Russian aggression. In turn, Russia will seize on and exploit this isolationist narrative to fuel misinformation. These dynamics are global and interconnected; they do not stay isolated in one country.
The steep decline of democracy as the ideal governing structure in people’s minds, coupled with the embrace of leaders who seek to impose their own agendas rather than serve the interests of their people, is a global trend that may produce a domino effect. The more countries that head down this road, the more likely they are to influence others to follow.
This cycle has also reinforced the critical need for labor organizing and protecting workers from authoritarian abuses. For example, in January of this year, following the election of Javier Milei to the Argentine presidency, the country’s three major union federations united to stand against the serious threat to fundamental workers’ rights and civil liberties posed by Milei’s move to resurrect outdated and debunked neo-liberal policies.
This year will continue to show us whether people will turn their backs on democratic norms in favor of non-democratic alternatives they think may serve their needs better, or instead whether they will step up and change course, seeing the dangers of the road we are headed down. We will have a front row seat to the defining moment of the 21st century’s democratic experiment.
Along this experimental vein, the rapid development and deployment of artificial intelligence, including generative AI and large language models, may outpace any other technological advancement to date. AI has the potential to affect nearly every facet of people’s lives. Its unprecedented growth presents both transformative opportunities and significant challenges. As with any other form of technology, it makes good things better and bad things worse.
AI threatens to exacerbate economic inequality, undermine the integrity of democratic institutions, and further degrade an already fragile information ecosystem. It is incumbent on all of us, along with governments around the world, civil society, and the private sector, to appropriately balance these risks and opportunities.
AI has the potential to benefit workers by creating jobs, raising worker productivity, lifting wages, boosting economic growth, and increasing living standards. However, it can also harm workers by displacing them, eroding job quality, increasing unemployment, and making inequities worse. Policymakers will shape whether and how AI benefits or harms workers through action or inaction. The very people who are building AI believe that it will have a huge impact on jobs. Though we do not know the future, it is critical that we listen to the people building this new technology, along with their hopes and fears for it.
Sam Altman, CEO of OpenAI, told The Atlantic earlier this year, “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced. Jobs are definitely going to go away, full stop.” This could just be marketing hype, but we must consider the possibility that this vision will be realized and prepare for potentially generation-defining job disruption.
We know that past labor market disruptions have been met with inadequate policy responses, and that the World Trade Organization’s governance of trade, with its associated localized economic and democratic costs, has demonstrated the need for a proactive, worker-centered strategy.
However, I think we can all agree that something about this AI moment seems different, which raises the questions: Where does all of this fit? What happens in the period of transition? Who is most at risk in this transition? What bridge could be intentionally built right now to shore up our systems?
These are choices, not inevitabilities. We are in an unprecedented moment. We know that we are at the beginning of a major new technological revolution. We see that disruption is coming, and for once we are in a position to act to ensure our best possible future.
In the United States, the federal government’s answers to job loss and disruption from trade or technology shifts, such as Trade Adjustment Assistance, have long been insufficient, falling short on coverage, financial support, high-quality training, and connections to good jobs. As a result, too many American communities have been left out of the benefits of economic growth for generations, and the good, new jobs that have materialized often haven’t been in the same geographic places where the old ones dried up. Moreover, AI’s disruptive impact has the potential to be an order of magnitude larger and faster than any previous moment of destabilization.
It is critical that national governments and multilateral organizations take a series of actions to protect against this disruption, but equally important is the role that labor unions and their members play around the world. Workers should absolutely have a voice to shape how AI impacts their work and ensure that those on the front lines of deployment, and not just CEOs and shareholders, benefit from the wealth that the new technologies create. AI will lead to changes in how profit and income are generated, and who bears the burden of risk. We must be ready to advocate for how to allocate that income and that risk so that the benefits of this transformation broadly impact workers and families, not just the 1%.
Unions representing Hollywood writers and actors have been on the front lines of fighting to ensure that studio executives cannot deploy AI tools in ways that infringe on workers’ intellectual property or steal their creative work. Reforms like the Protecting the Right to Organize Act in the U.S. will ensure that more working people have the power to organize and bargain for a fair share of the upside benefits of new technology. When technology disrupts industries, policymakers should help workers better use collective bargaining by creating mechanisms to extend existing union contracts to similar workers in new parts of industries.
While we focus on job disruption, we must also work to ensure the creation of good AI jobs. Right now, around the world, many of the jobs created by the AI industry are not high-paying software developer roles, which are themselves at risk of displacement by advanced AI, but low-paid contract positions in which workers are forced to sort through the worst of humanity’s AI prompts and AI-generated content to help train these systems and make them safer. These AI contract workers join legions of outsourced trust and safety contractors for social media companies who are forced to look at the worst content on the internet for low wages and with few resources for the trauma it inflicts on them. I know this reality from firsthand experience working in crisis management at Meta, and the status quo is unacceptable. If we don’t make changes to ensure the creation of more good AI jobs, that status quo will prevail, and the additional net new jobs created by AI will likely be lower-skill or lower-paid jobs.
Microsoft is showing a potential path forward for labor organizing in the tech sector. Earlier this year, the company committed to remaining neutral in organizing drives at two video gaming subsidiaries and has announced that it is “committed to creative and collaborative approaches with unions.” It is essential that we ensure that the outsourced contractors who test the trust and safety of the AI systems of the future and the social media companies of today have real protections through collective bargaining and sectoral bargaining.
Finally, we must always center worker rights and civil rights in the AI era. We need to make sure that all future efforts around the world, such as the Trade Union Advisory Committee to the OECD’s values-based recommendations on AI, address not just future harms from AI-driven job displacement but also current harms, including widespread and growing worker surveillance tools and algorithmic management tools. We must establish baseline standards that prevent new forms of technology from being deployed in ways that magnify existing racism, discrimination, and unfair treatment in the workplace. This includes stronger regulation, transparency, and enforcement of standards for the use of AI in hiring and advancement processes, along with bans on pervasive automated workplace surveillance and algorithmic management technologies.
While we obviously don’t have all the answers, or even all the right questions to be asking, it remains more important than ever to care about these issues and to involve those in the critical debates and bargaining processes who are ultimately affected most: workers.