In defence of ending algorithms for content amplification

Opinion & Commentary

Aug 27, 2025
In the summer of 2022, as Elon Musk's acquisition of Twitter (X from now on) became inevitable, fears began to surface in the user community. As expected, under the tech oligarch's leadership the platform started to lose quality, morphing into an arena for radicals, white supremacists, and misogynists. It quickly devolved from a privileged place to follow favourite politicians, columnists, and journalists, engage with interesting people, and support the work of important organisations, into a space where toxic content was amplified and polarisation seemed to be the end goal. It became increasingly difficult to rationalise staying on X. Even the excuse "I'll stay here until the bitter end, so as not to cede space to the illiberals and anti-democrats" felt like a poor justification for enduring the escalating discomfort and unpleasantness.

Then, as we moved through 2024 and into the United States presidential election campaign, X saw a growth of pro-Trump information and anti-Democratic Party disinformation. For your columnist, this became increasingly obvious when, even without interacting with similar accounts, I started to get recommendations to follow Marjorie Taylor Greene, Lauren Boebert, and JD Vance. The "For You" page morphed into a horror show of MAGA influencers, radical conservative politicians, and Elon Musk's own tweets.

It got so overt that The Wall Street Journal ran an article titled "X Algorithm Feeds Users Political Content – Whether They Like It or Not," and The Washington Post published "On Elon Musk's X, Republicans go viral as Democrats disappear". These were based on studies showing that X's algorithms promoted content from Republicans and from Elon Musk himself, significantly increasing the reach of far-right narratives. It was also uncovered that Musk had demanded the implementation of a special system to boost his own content, bypassing filters by a factor of a thousand, a mechanism known as a power-user multiplier. On the flip side, The New York Times reported that users with millions of followers saw their engagement on the platform collapse after criticising the owner.

It is no surprise, then, that the French government recently opened a criminal investigation into X over allegations that its algorithms were manipulated for the purpose of foreign interference in European Union elections. This is not unprecedented. Independent studies, notably by Global Witness, DFRLab/AlgorithmWatch, and the Humboldt Institute for Internet and Society, showed a significant increase in the reach and engagement of Alternative für Deutschland (AfD) content on X in the run-up to the German parliamentary election of 2025.

Algorithms built to optimise engagement tend to privilege sensational, divisive, and misleading content that generates strong emotional responses. This directly fuels increased social and political polarisation. Furthermore, the personalisation of the user's experience, driven by algorithmic interactions and profiling data, has fostered information bubbles and echo chambers. This limits exposure to diverse ideas, hinders healthy debate, deepens social and political divisions, and ultimately leads to the radicalisation of positions, thereby obstructing consensus or negotiated solutions.

Algorithms can also subtly steer user decisions through deceptive or coercive interface designs known as dark patterns. These can lead users to make choices they wouldn't have consciously made, benefiting the platform at the expense of user interests (for example, accepting more cookies than intended, or sharing more data). This compromises user autonomy, freedom of choice, and the protection of privacy.

In the European Union, the Digital Services Act (DSA) has introduced clear requirements regarding the application of algorithms, particularly on Very Large Online Platforms (VLOPs). For instance, the DSA mandates a clear explanation of main parameters: platforms must set out in their terms and conditions, in a clear and intelligible way, the key parameters used in their recommender systems. This includes explaining the most significant criteria determining suggested information and the reasoning behind their importance. Article 38 further requires VLOPs to offer at least one recommender system option not based on user profiling. Additionally, Articles 34 and 35 state that VLOPs must conduct annual assessments of systemic risks arising from the design and functioning of their service, including their algorithmic systems, and implement reasonable, proportionate, and effective mitigation measures once those risks are identified.

However, despite these regulations, the state of digital platforms, where content is disseminated and public political debate occurs, appears to be worsening across platforms like TikTok, Facebook, and X. When algorithms are owned by private companies, those companies wield immense power over what is seen and what isn't. While the DSA seeks to mitigate this power, control over the flow of information and the shaping of debate in the public square remains heavily influenced by these corporations.

So, it's time to make the argument in defence of ending algorithms for the amplification of political and social content, or, at least, giving users that power.

Removing algorithmic amplification, or giving the option to have that environment, would reduce the large-scale dissemination of fake news and disinformation. What goes viral would then be determined by genuine human interest rather than algorithmic manipulation, significantly contributing to the quality of public discourse and citizens' ability to form informed opinions. Content would gain visibility through its intrinsic relevance or users' explicit choice (as in a chronological feed or one based on direct subscriptions), rather than its capacity to generate emotional clicks. Similarly, eliminating content-boosting algorithms would mean a drastic reduction in the intrusive collection and use of personal data for behavioural profiling purposes. Users would be less subject to manipulation by invisible systems, leading to a significant reinforcement of privacy.
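The contrast between the two models can be illustrated with a minimal sketch. Everything here is hypothetical: the scoring weights, the post data, and the function names stand in for no real platform's system; they only show how an engagement-weighted ranking surfaces old-but-viral content, while a chronological feed of the kind DSA Article 38 points towards orders purely by recency, with no behavioural profiling involved.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    likes: int = 0
    reshares: int = 0

def engagement_rank(posts):
    # Amplification model: order by an engagement score, so emotionally
    # charged, widely reshared posts surface first regardless of age.
    # The weights (1 per like, 3 per reshare) are purely illustrative.
    return sorted(posts, key=lambda p: p.likes + 3 * p.reshares, reverse=True)

def chronological_rank(posts):
    # Non-profiled alternative: newest first, no behavioural signals used.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

now = datetime(2025, 8, 27)
feed = [
    Post("a", "calm policy analysis", now - timedelta(minutes=5), likes=4),
    Post("b", "outrage bait", now - timedelta(hours=6), likes=900, reshares=400),
    Post("c", "local news update", now - timedelta(hours=1), likes=30),
]

# Engagement ranking puts the viral post first; chronology puts the newest first.
print([p.author for p in engagement_rank(feed)])     # → ['b', 'c', 'a']
print([p.author for p in chronological_rank(feed)])  # → ['a', 'c', 'b']
```

The point of the sketch is that the second function needs no profile of the reader at all: visibility follows from explicit subscription and recency, not from a score tuned to maximise reactions.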

Equally, VLOPs, through their algorithms, hold immense control over what millions of people see, what goes viral, and what remains in the background noise. Withdrawing from content-boosting algorithms would decentralise this control. The algorithmic 'gatekeeper' would be eliminated, allowing content visibility to be more organic and less dictated by corporate interests or distorted engagement logics. This aligns perfectly with one of the main pillars of the DSA: fostering a digital environment that places human beings and their rights at its core. Prohibiting algorithmic elevation would be a decisive step to ensure that technology genuinely serves citizens rather than the other way around, moving away from models that prioritise profit at the expense of social and democratic well-being.

While it may seem that the current algorithmic landscape is our only option in a market-based economy driven by supply and demand, this is not true. Bluesky, the microblogging platform that presents itself as an alternative to X, is not driven by a single, opaque engagement algorithm. Instead, its content distribution relies on a chronological following feed and on user-created feeds and lists. This approach demonstrates that a constituency for such models already exists online.

Naturally, on platforms where users have the power to curate their own interests, the issue of echo chambers and information bubbles will continue to exist. However, these would be driven by conscious and deliberate interaction with online information, freeing citizens from dark patterns and digital architectures that aim to maximise screen time at all costs. The online experience would be guided purely by user will, not by the platform's commercial interests.

Regulation (EU) 2022/2065, the DSA, states in Article 91 that "by November 2027 (…) the Commission should evaluate this regulation, and report (…) where appropriate, the report (…) shall be accompanied by a proposal for amendment of this Regulation." The time has come to seriously consider the end of algorithms for content distribution, specifically in the political and social realms. This becomes even more important as AI-generated material floods the internet, making it increasingly difficult to distinguish real, legitimate content from fake, malicious, or disruptive information that is easily generated and disseminated, with immense viral power. Opt-out options can become part of a universal standard of access and participation in political and social debate online, improving its health and quality for those who want to take an active part in the now-digital public square.

It's time to return to a state of affairs where the citizen/user/voter is at the centre of media consumption and understanding.

Ricardo Silvestre
Columnist