It’s Not the Algorithm: New Study Claims Social Networks Are Fundamentally Broken
Cornell University research finds polarization and hate speech stem from the basic architecture of social media platforms rather than their recommendation algorithms.
Contrary to common belief, algorithms are not necessarily to blame for polarization and hate speech online.
A new study indicates that the deeper problems arise from the fundamental structure of social media platforms, rather than the recommendation algorithms that curate their feeds.
The conclusion is that meaningful reform would require changes at the core, not just cosmetic adjustments.
A recent study by researchers at Cornell University suggests that the persistent problems of hate speech, social polarization, and the spread of biased or false information are not a byproduct of complex algorithms, but a direct outcome of the basic architecture of these platforms.
Using an innovative social simulation, they modelled a simple social network and demonstrated how well-known issues emerged spontaneously, even without sophisticated recommendation algorithms.
In other words, contentious discourse and disinformation appear to arise naturally, regardless of who operates the platform or how it functions.
The study identified three principal failures in a virtual environment where agents based on large language models interacted.
First, echo chambers: virtual users tended to cluster into homogeneous ideological groups — conservatives with conservatives, liberals with liberals, racists with racists — mirroring human behaviour.
Second, an extreme concentration of influence emerged, with a small number of users capturing most attention and shaping discussions, similar to a “winner-takes-all” model.
Third, extreme and polarizing voices were disproportionately amplified, distorting the overall discourse.
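To make the mechanism concrete, here is a minimal agent-based sketch written for this article; it is not the Cornell team's code. Simple agents with fixed ideology scores stand in for the study's LLM-driven personas, there is no ranking algorithm at all, and every parameter (agent count, feed size, homophily threshold) is an illustrative assumption. Even in this stripped-down setting, the first two failure modes tend to appear: agents link up with like-minded agents, and reposting concentrates attention on a handful of accounts.

```python
import random
from collections import defaultdict

NUM_AGENTS = 200   # assumed population size
STEPS = 50         # assumed number of simulation rounds
random.seed(42)

# Each agent gets a fixed ideology score in [-1, 1]; this crude scalar
# stands in for the study's LLM-driven personas.
ideology = [random.uniform(-1, 1) for _ in range(NUM_AGENTS)]
following = defaultdict(set)   # follower -> accounts they follow
attention = defaultdict(int)   # account -> times its posts were reposted

def similarity(a, b):
    """1.0 when two agents share an ideology score, 0.0 when opposite."""
    return 1 - abs(ideology[a] - ideology[b]) / 2

for step in range(STEPS):
    for agent in range(NUM_AGENTS):
        # No ranking algorithm: the agent sees a handful of posts, but a
        # post that has already been reposted sits in more feeds, so it is
        # proportionally more likely to be sampled.
        weights = [1 + attention[a] for a in range(NUM_AGENTS)]
        authors = random.choices(range(NUM_AGENTS), weights=weights, k=5)
        authors = [a for a in authors if a != agent]
        if not authors:
            continue
        # React to the most ideologically similar post in the feed.
        best = max(authors, key=lambda a: similarity(agent, a))
        if similarity(agent, best) > 0.8:      # assumed homophily threshold
            following[agent].add(best)         # echo chambers form here
            attention[best] += 1               # attention concentrates here

# Emergent structure: who captured the attention, and who follows whom?
total = sum(attention.values()) or 1
top = sorted(attention, key=attention.get, reverse=True)[:10]
top_share = sum(attention[a] for a in top) / total
print(f"Top 10 of {NUM_AGENTS} agents capture {top_share:.0%} of all reposts")

links = [(f, a) for f, accts in following.items() for a in accts]
like_minded = sum(1 for f, a in links if ideology[f] * ideology[a] > 0)
print(f"{like_minded / (len(links) or 1):.0%} of follow links join like-minded agents")
```

The only amplification here is mechanical: a post that has been reposted sits in more feeds and is therefore more likely to be seen again, a property of the network's architecture rather than of any curation algorithm.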
These findings align with recent data showing rising polarization on platforms such as X (formerly Twitter) and Facebook, as reported by research institutions in the United States and Europe.
Studies in the European Union have also documented increased dissemination of extremist content and echo chamber effects, indicating the issue is not tied to any single culture or platform.
The most concerning finding of the research is that most of the external interventions tested failed to improve conditions.
Six deliberately extreme measures were evaluated to assess their impact on platform discourse.
One approach — removing algorithms and displaying posts in chronological order — reduced imbalances in attention but simultaneously strengthened the link between political extremism and influence, making extreme content stand out even in a neutral setting.
Another tested method involved reducing the visibility of dominant voices.
This led to slight improvements but did not affect polarization or echo chambers.
Notably, hiding metrics such as likes or follower counts had almost no effect on the bots, which continued to form connections with like-minded counterparts.
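Continuing the toy simulation above, the sketch below shows one way such interventions might be expressed as toggles on the feed-sampling step. The function name and parameters are hypothetical illustrations, not the study's implementation: a chronological mode drops the repost weighting, a visibility cap clamps the reach of dominant accounts, and hiding metrics is a no-op because these simulated agents never consult like or follower counts when deciding whom to follow.

```python
import random

def sample_feed(agent, attention, num_agents, *,
                chronological=False, visibility_cap=None, k=5):
    """Choose which authors' posts the agent sees this round (illustrative)."""
    if chronological:
        # "Remove the algorithm": every post has equal odds of being seen,
        # no matter how often it has been reposted.
        weights = [1.0] * num_agents
    else:
        # Default architecture: reposted content sits in more feeds.
        weights = [1.0 + attention.get(a, 0) for a in range(num_agents)]

    if visibility_cap is not None:
        # "Reduce the visibility of dominant voices": clamp how much extra
        # reach accumulated reposts can buy any single account.
        weights = [min(w, visibility_cap) for w in weights]

    authors = random.choices(range(num_agents), weights=weights, k=k)
    return [a for a in authors if a != agent]

# Example (using the attention counts from the earlier sketch):
# feed = sample_feed(agent=0, attention=attention, num_agents=NUM_AGENTS,
#                    chronological=True, visibility_cap=10.0)

# Hiding likes or follower counts would change nothing in this model,
# because the agents react to the content itself -- which mirrors the
# study's finding that the intervention had almost no effect on the bots.
```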
While the research focused on social networks operating in Western contexts, the findings were compared with conditions in China.
Platforms such as Weibo, China’s equivalent of X, operate under strict government oversight, which extends not only to content moderation but also to the architecture of the platforms themselves.
This oversight promotes a more “harmonious” discourse by deliberately censoring and restricting fringe and extreme voices, which reduces polarization but limits freedom of expression, a trade-off that does not translate readily to democratic societies.
The study adds to calls from regulators and experts worldwide for solutions that address the root structural issues of social networks rather than their symptoms. It reflects a growing recognition that small-scale changes alone are unlikely to resolve the challenges these platforms face.