According to the Duke Reporters' Lab, 443 platforms are actively engaged in fact-checking worldwide (Duke Reporters' Lab, 2025). These organizations play an important role in combating disinformation; however, their visibility and impact are largely determined by the preferences of digital platforms. In 2016, Meta launched a third-party fact-checking program in response to the spread of false news on Facebook following the US presidential election (Facebook Newsroom, 2016). Meta began collaborating with verification organizations that are signatories of the IFCN (International Fact-Checking Network), delegating the gatekeeping role of accuracy checking to independent organizations in order to curb the spread of misinformation. With the Cambridge Analytica scandal in 2018, the program gained even greater importance as a means of restoring public trust. In December 2019, Meta announced a policy update extending the third-party fact-checking program globally to Instagram.
However, eight years later, on January 7, 2025, Meta's announcement that it would end its third-party fact-checking program, starting in the United States, made headlines around the world. Mark Zuckerberg said that Facebook, like X, would rely on algorithms and community-based systems for fact-checking (Kaplan, 2025). He cited concerns that fact-checking organizations had shown bias during the 2024 presidential election, in which Donald Trump was re-elected. Yet the emphasis on impartiality by both X owner Elon Musk and Meta CEO Mark Zuckerberg in fact brought concerns about the platforms' own bias to the forefront.
X's community notes system, and Meta's decision to end its third-party fact-checking program in favor of algorithmic and community-based approaches, have weakened trust in independent fact-checking organizations. With Meta cutting off funding, the financial sustainability of fact-checking organizations that have not developed independent revenue models is now at risk.

What is community-based verification?
Community-based verification is a model adopted by digital platforms that relies on user contributions rather than professional fact-checkers. X (formerly Twitter) first launched this approach in 2021 under the name “Birdwatch” and restructured it as “Community Notes” in 2022, after Elon Musk took over the platform (Twitter Blog, 2021; X, 2021). Community notes have been criticized on several grounds: contributors often lack local context, linguistic diversity is limited, and community members may lack the expertise that some content requires. Ultimately, community members are individuals designated by the platform, expected to weigh in on content believed to be false or to contain hate speech, and their political biases must also be taken into account. Meta has similarly developed a less transparent community model based on algorithmic ranking, user reports, and AI-supported analysis (AP News, 2025). In summary, community-based verification systems create inequalities in access to information: user identity, political leanings, and platform visibility can determine which notes are highlighted, and the extent to which these systems are transparent, inclusive, or accountable remains debatable.
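To make this mechanism concrete, the sketch below illustrates the "bridging" idea behind note ranking on X in radically simplified form: a note becomes visible only when raters who usually disagree both find it helpful. This is a minimal illustrative sketch, not X's published algorithm (which learns latent viewpoint factors via matrix factorization over the full rating matrix); the viewpoint scores, threshold, and data structures here are assumptions.

```python
# Minimal sketch of bridging-based note ranking (illustrative only).
# Assumption: each rater has a rough "viewpoint" score in [-1, 1]
# inferred from past behavior; X's real system instead learns latent
# factors from the full rating matrix via matrix factorization.

from dataclasses import dataclass

@dataclass
class Rating:
    rater_viewpoint: float  # -1.0 .. 1.0, assumed known for illustration
    helpful: bool

def note_is_shown(ratings: list[Rating], threshold: float = 0.5) -> bool:
    """Show a note only if raters from BOTH sides found it helpful."""
    left = [r for r in ratings if r.rater_viewpoint < 0]
    right = [r for r in ratings if r.rater_viewpoint >= 0]
    if not left or not right:
        return False  # no cross-perspective agreement is possible
    helpful_left = sum(r.helpful for r in left) / len(left)
    helpful_right = sum(r.helpful for r in right) / len(right)
    # Both groups must independently rate the note helpful.
    return min(helpful_left, helpful_right) >= threshold

ratings = [Rating(-0.8, True), Rating(-0.3, True),
           Rating(0.6, True), Rating(0.9, False)]
print(note_is_shown(ratings))  # True: both viewpoint groups reach 0.5
```

The design choice is telling: agreement within one political camp is never enough, which filters out openly partisan notes, but it also means that notes on polarizing topics often fail to reach the visibility threshold at all.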
The commercialization of reality: from digital capitalism to platform capitalism
The concept of digital capitalism, used to explain how the internet has integrated with neoliberal economic structures through its global communication infrastructure, was coined by Dan Schiller in 1999 (Schiller, 1999). Platform capitalism, a continuation of this approach, is a concept used to describe the dominance of digital systems that convert user data into economic value (Srnicek, 2017). The visibility of content on platforms determines its economic value as a commodity. The more interaction a piece of content generates, the more it is promoted. This leads to attention-grabbing or controversial content being prioritized over accurate content. Studies have shown that misleading content spreads faster and more widely than accurate information on digital platforms. This is due to the surprising and emotionally triggering nature of misinformation, which algorithms then highlight and make visible by prioritizing content with high engagement metrics (Vosoughi, Roy & Aral, 2018).
In summary, when algorithms push quality content produced by independent fact-checking organizations or news media into the background, the economic interests of platforms take precedence over the public good. Under platform capitalism, the visibility of content, measured by its interaction rates, becomes more important than the accuracy of the information it carries.
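A toy feed ranker makes this dynamic visible. The scoring function below is hypothetical; no platform publishes its real formula, and the weights are assumptions chosen for illustration. The point is structural: when the objective is predicted engagement, content that provokes reactions outscores accurate but unremarkable content by construction.

```python
# Toy engagement-driven feed ranking (hypothetical scoring, for illustration).
# The weights are invented; they stand in for any objective that rewards
# reaction rather than accuracy.

def engagement_score(clicks: int, shares: int, comments: int,
                     dwell_seconds: float) -> float:
    # Shares and comments are weighted heavily because they
    # propagate content into new feeds (network effects).
    return clicks + 5.0 * shares + 3.0 * comments + 0.1 * dwell_seconds

posts = {
    "accurate correction by a fact-checker": engagement_score(120, 4, 6, 900),
    "outrage-inducing false claim":          engagement_score(300, 80, 150, 400),
}
for title, score in sorted(posts.items(), key=lambda kv: -kv[1]):
    print(f"{score:8.1f}  {title}")
# The false claim ranks first: the objective rewards reaction, not accuracy.
```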
Whose side are algorithms on?
Whether or not content is visible on social media platforms is determined by economic criteria such as user engagement, click-through rate, and time spent on the platform. However, these criteria are not only technical; they are also shaped by cultural and ideological values.
Discussions about algorithmic bias date back to the early 2000s, when Google introduced personalized search. In Republic.com 2.0, Cass Sunstein (2009) describes a personalized "Daily Me" in which users encounter only news tailored to them, and argues that algorithms that let users interact only with like-minded people threaten democracy. Eli Pariser (2011) explains this phenomenon with the concept of the "filter bubble," arguing that when individuals encounter only content that aligns with their own views, digital diversity and social dialogue narrow. Gillespie (2014) emphasizes that algorithms are not merely technical systems but also cultural structures, and therefore reflect particular values and biases in their content rankings. Taken together, these arguments show that algorithms do not direct access to information in a neutral manner; rather, they create a political filtering system that serves the attention economy.
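The filter-bubble mechanism Pariser describes can be sketched in a few lines: a recommender that always serves the unseen item closest to a user's click history converges on an ever-narrower slice of content. The one-dimensional "viewpoint" scale and the catalog below are invented for illustration; real recommenders are far more elaborate, but the feedback loop is the same.

```python
# Minimal filter-bubble feedback loop (hypothetical one-dimensional model).
# Each item has a "viewpoint" in [-1, 1]; the recommender always serves
# the unseen item closest to the mean of what the user already clicked.

def recommend(history: list[float], catalog: list[float]) -> float:
    profile = sum(history) / len(history)               # inferred leaning
    unseen = [v for v in catalog if v not in history]
    return min(unseen, key=lambda v: abs(v - profile))  # closest match wins

catalog = [-0.9, -0.5, -0.1, 0.2, 0.5, 0.7, 0.9]
history = [0.5]                 # one slightly right-of-center click
for _ in range(3):
    item = recommend(history, catalog)
    history.append(item)
    print(f"served item at viewpoint {item:+.1f}")
# Output: +0.7, +0.9, +0.2 -- the recommender exhausts agreeable content
# first; opposing viewpoints (-0.5, -0.9) are served last, if ever.
```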
For example, changes Google made to its News and Discover algorithms reduced visitor traffic to independent news sites in Turkey by 70–90 percent. This directly hit advertising revenues, and Gazete Duvar ceased operations on March 12, 2025, citing the loss of traffic. Google's opaque algorithm updates not only undermine independent media's revenue but also obstruct the public's access to quality news, that is, to the truth (Bianet, 2025; T24, 2025).
Additionally, the growing role of AI models in content production and distribution has pushed copyright debates to the forefront. In particular, the unauthorized use of news content, or its summarization and repurposing by artificial intelligence, renders creative labor invisible. AI-powered search engines such as Perplexity AI, and large language models such as GPT, Gemini, and DeepSeek, summarize content from news sites and deliver information to users directly. Because users consume the content without visiting the original source, this creates a risk of copyright infringement and financial loss, especially for news media. Moreover, when deciding which content to display, algorithms generally prioritize commercial considerations such as user engagement over the copyright and ethical principles behind these productions. This makes it necessary to reconsider algorithmic transparency and copyright reform in the digital age.
In summary, algorithms are not innocent codes based on machine learning and artificial intelligence, but ideological devices that shape the flow of digital information, making some content visible and hiding others (Erbaysal Filibeli, 2019).
If data is biased, how can reality be possible?
Numerous researchers have shown how YouTube's recommendation algorithm gradually directs users toward ever more extreme content (Tufekci, 2014; Tufekci, 2018; Marwick & Lewis, 2017). Here the feedback effect of AI-based algorithms is evident: a widely viewed video is recommended ever more widely. Because AI systems are trained on historical data, they reproduce the biases embedded in that data. Safiya Noble (2018) showed how search engine algorithms can reinforce social inequalities and entrench racism. In short, if the data used to train an algorithm is biased, its results will be biased too.
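A toy example makes "biased data in, biased results out" concrete. The sketch below trains a trivial frequency model on a deliberately skewed sample; the neighborhoods and numbers are invented, but the failure mode, sampling bias reproduced as prediction, is the one Noble and others document at scale.

```python
# "Biased data in, biased results out": a toy frequency model.
# The training sample is deliberately skewed, standing in for historical
# data that over-represents one group; all data here is invented.

from collections import Counter

# Historical records: (neighborhood, flagged_as_risky). The skew is in
# how often each neighborhood was inspected, not in actual behavior.
training_data = (
    [("north", True)] * 30 + [("north", False)] * 70 +   # heavily inspected
    [("south", True)] * 3  + [("south", False)] * 7      # rarely inspected
)

flags = Counter(n for n, risky in training_data if risky)

def predicted_risk(neighborhood: str) -> float:
    # Naive model: risk = share of all past flags from this neighborhood.
    return flags[neighborhood] / sum(flags.values())

print(predicted_risk("north"))  # ~0.91: inherits the inspection bias
print(predicted_risk("south"))  # ~0.09, although true rates are equal (30%)
```

The two neighborhoods behave identically (30 percent of inspections are flagged in both), yet the naive model scores one as ten times riskier, simply because it was inspected ten times more often.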
Another issue we need to address is the relationship between large language models and data. Generative AI systems built on large language models (LLMs) can produce biased content because of the biased structure of the data sets they are fed. The companies themselves acknowledge this in their user documentation: OpenAI's guidance for ChatGPT, for example, emphasizes that users should be aware of the biases in the model's training data and should not automatically accept its outputs as correct (OpenAI, n.d.), and similar caveats accompany Google's Gemini and DeepSeek.
In this context, models trained on biased data sets may generate content that does not actually exist or is not present in their training data. Such outputs are defined in the literature as "hallucinations": artificial intelligence systems producing false, fabricated, or erroneous information. Hallucinations do not merely mislead individual users. Users who lack the digital media literacy to recognize erroneous or biased content may circulate it unwittingly, reproducing and disseminating misinformation.
Therefore, in an environment where training data is biased and carries systematic prejudice, verification by algorithms or artificial intelligence systems faces serious limits on producing reliable results.
European Union “Digital Services Act” and “Artificial Intelligence Act”
The European Union has developed comprehensive regulations, the Digital Services Act (DSA) and the Artificial Intelligence Act (AI Act), to protect citizens' rights against digital platforms and to create a safer digital ecosystem. The DSA requires very large online platforms (VLOPs) to make their algorithmic systems more transparent, to conduct risk assessments, and to give users more control over content ranking (European Commission, 2022). The AI Act classifies AI systems into risk categories based on their intended use and requires that "high-risk" applications be subject to human oversight (European Commission, 2024). These rules apply not only within European borders; global technology companies that operate in the EU must also comply. Social media platforms and companies developing large language models (LLMs) are expected to align with these new norms, which prioritize user rights and algorithmic transparency.
Conclusion: Seeking the truth is a public responsibility!
In an age where reality is determined by algorithms, it is inevitable that users will demand more transparent, fairer, and more accountable digital ecosystems. Fact-checking is not just a technical process; it is the foundation of digital democracy. Algorithms are not neutral; they are ideological structures that shape reality through data selection.
Therefore, digital media literacy must be complemented by algorithmic literacy; users must be made aware not only of content but also of the structures of content production and distribution. In the digital age, seeking the truth is a public responsibility!
References
Bianet. (2025, March 13). Bağımsız medya Google'ı protesto ediyor [Independent media protests Google]. https://bianet.org/haber/bagimsiz-medya-googlei-protesto-ediyor-305394
European Commission. (2022, October 19). The Digital Services Act. Digital Strategy. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en
European Commission. (2024, August 1). Artificial Intelligence Act. Digital Strategy. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Erbaysal Filibeli, T. (2019). Big data, artificial intelligence, and machine learning algorithms: A descriptive analysis of the digital threats in the post-truth era. Galatasaray Üniversitesi İletişim Dergisi, 31, 91-110. https://doi.org/10.16878/gsuilet.626260
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie et al. (Eds.), Media Technologies. MIT Press. https://doi.org/10.7551/mitpress/9780262525374.003.0009
Marwick, A., & Lewis, R. (2017). Media manipulation and disinformation online. Data & Society. https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf
Noble, S. U. (2018). Algorithms of oppression. NYU Press. https://nyupress.org/9781479837243/algorithms-of-oppression/
OpenAI. (n.d.). Is ChatGPT biased? OpenAI Help Center. https://help.openai.com/en/articles/8313359-is-chatgpt-biased
Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin Press.
Schiller, D. (1999). Digital capitalism: Networking the global market system. The MIT Press.
Srnicek, N. (2017). Platform Capitalism. Polity Press. https://politybooks.com/bookdetail/?isbn=9781509504879
Sunstein, C. R. (2009). Republic.com 2.0. Princeton University Press.
T24. (2025, March 14). Google algoritması haber sitelerini nasıl etkiliyor? İşte grafikler... [How does Google's algorithm affect news sites? Here are the graphs...]. https://t24.com.tr/haber/google-algoritmasi-haber-sitelerini-nasil-etkiliyor-iste-grafikler,1225914
Tufekci, Z. (2014). Engineering the public: Big data, surveillance and computational politics. First Monday, 19(7). https://doi.org/10.5210/fm.v19i7.4901
Tufekci, Z. (2018, March 10). YouTube, the great radicalizer. The New York Times. https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151. https://doi.org/10.1126/science.aap9559
Digital Media Literacy Series
Algorithmic bias: Platform capitalism, Data and Reality - Tirşe Erbaysal Filibeli
Our Media
IPS Communication Foundation/bianet is among the partners of the EU-funded project "Our Media: Civil Society Movement for the Multiplication of Media Literacy and Activism, Prevention of Polarization, and Promotion of Dialogue," which runs for three years, from 2023 to 2025.
The project's initial focus is on building the capacity of NGOs, media professionals, young activists, and the public in the Balkans and Turkey to address trends and challenges related to media freedom, development, and sustainability.
The partners of the "Our Media" project are:
South East Europe Network for Professionalization of Media (SEENPM)
Albanian Media Institute (Tirana)
Mediacentar Foundation (Sarajevo)
Kosovo Press Council
Montenegro Media Institute (Podgorica)
Macedonia Media Institute (Skopje)
Novi Sad School of Journalism (Novi Sad)
Peace Institute (Ljubljana)
bianet (Turkey).
The researcher for the “Our Media” project on behalf of the IPS Communication Foundation/bianet is Sinem Aydınlı, the foundation's research coordinator.
Scope of the project
The project begins with research aimed at identifying key trends, risks, and opportunities for media sustainability and mapping good practices in media activism to support media freedom and media and information literacy (MIL). The research findings will be used to strengthen the capacities of NGOs and other stakeholders in the media field to address challenges in the media.
Within the scope of "Our Media," advocacy activities will be carried out to strengthen the capacities of journalists, media organizations, and media institutions. Local and national media and other actors will be encouraged to undertake media activism on gender inequalities in the media, and through various activities young leaders will be empowered to oppose discrimination and gender stereotypes and to support gender equality.
The project will reach local communities through financial support provided to NGOs in urban and rural areas, with the aim of developing citizens' MIL skills, supporting media freedom and integrity, and countering polarization caused by propaganda, hate speech, and disinformation.

The regional program “Our Media: A civil society action to generate media literacy and activism, counter polarisation and promote dialogue” is implemented with the financial support of the European Union by partner organizations SEENPM, Albanian Media Institute, Mediacentar Sarajevo, Press Council of Kosovo, Montenegrin Media Institute, Macedonian Institute for Media, Novi Sad School of Journalism, Peace Institute and Bianet.
This article was produced with the financial support of the European Union. Its contents are the sole responsibility of IPS Communication Foundation/bianet and do not necessarily reflect the views of the European Union.
(TEF/VK)


