The digital age has brought an unprecedented abundance of information. We are no longer just consumers of information; we are also its producers. Vast amounts of data, news, and commentary appear on our screens every moment. Every piece of content is presented as information, yet it is often taken out of context, of questionable reliability, and unclear in meaning. Algorithms reward how quickly content spreads, not how accurate it is. In such a system, the biggest loser is often accurate information.
Today, information is a content format shaped by its potential for clicks, shares, and engagement. In other words, it appeals more to emotions and algorithms than to truth. In digital environments, the boundaries between data, opinion, and fiction have blurred. An academic article can be presented as part of a conspiracy theory; a parody can circulate with the aesthetics of serious news. Platforms treat expert opinions and anonymous comments as formally equal, so all content flows at the same speed and with the same appearance. The content that wins this race is the content that arouses anger, fear, and moral outrage and spreads rapidly; information based on reason and research lags behind. Moreover, these platforms, tuned by algorithms to reward emotion, have now become the primary venue for following the news.
The picture created by the information crisis may seem bleak. However, hope lies in a frequently overlooked feature of truth: truth is not only about the correspondence of information with reality, but also about how that information gains meaning within society. In other words, truth has a social dimension. For a piece of information to be accepted as “true,” it must rest on collective reasoning and institutional safeguards. Truth, therefore, must be thought of as a relationship of trust. True information is not simply put forward; it is recognized, shared, and protected together. This collective dimension is one of the most powerful tools we have against the information crisis. Critical media literacy is fundamental to this process, because it is not enough simply to access information; we must also question it, place it in context, and understand how it was produced. These skills pave the way for the reconstruction of a shared truth.

Truth and reliability: the evolution of fact-checking
While critical media literacy places the individual's relationship with information on a questioning, contextual footing, the practice that takes this competence a step further is fact-checking. Fact-checking dates back to the early 20th century. Initially it was an internal editorial process: staff checked names, dates, statistics, and similar details in news articles before publication. In the late 1990s in the US, with the emergence of independent initiatives that examined candidates' statements during election campaigns, fact-checking began to shift from an editorial procedure to a tool of public oversight. With the rise of social media in the 2010s, the field expanded further: fact-checking platforms multiplied and spread to different regions, and as technology companies such as Meta and Google began collaborating with these organizations to combat misinformation, fact-checking became integrated into the digital platforms themselves.
Today, the International Fact-Checking Network (IFCN), whose signatories include more than 180 fact-checking organizations from over 50 countries, offers a strong example of the relationship of trust on which truth rests. The network's certification process is based not only on technical verification methods but also on the trust established through information. In this sense, fact-checking is an effort to protect the collective contract between those who share information and the public. The IFCN principles define the ethical framework that makes this contract possible, and they underline that truth is not only a value that is produced but one that is recognized and defended collectively.
Fact-checking: strategy, tools, and competencies
Fact-checking relies on various methods and tools when evaluating claims that may seem simple at first glance but often have complex contexts. The aim here is not only to determine whether a piece of information is true or false but also to understand the conditions under which it was produced, how it was disseminated, and what effects it had. False information does not appear in a single form. Each type of content requires its own unique verification approach. Scientific content and conspiracy theories also require different tools and forms of inquiry. This is because each type of claim is produced with different intentions, disseminated in different ways, and affects different audiences.
In this section, we will examine the most common types of claims encountered in the digital environment and will thoroughly analyze the strategies, methods, and tools used for each.
Factual claims
Factual claims shared as text in the digital environment contain verifiable data about a specific person, event, institution, time, place, or statistic. Statements such as “Iran has closed the Strait of Hormuz” are examples of this. These claims are mostly spread through news headlines, social media posts, statements, or chain messages. Textual factual claims are often produced or shared to be involved in a particular political, economic, or social debate.
When faced with such claims, a careful and systematic questioning process is necessary. The first step is to identify reliable sources against which the claim can be tested. If the claim is based on data, official data providers such as TÜİK (the Turkish Statistical Institute), the WHO, or Eurostat should be checked. If the content relates to legal regulations, sources such as Turkey's Legislation Information System or the Official Gazette should be examined. For claims about events, reports from different media outlets can be compared to build a more balanced picture. A claim such as “Iran has closed the Strait of Hormuz,” for example, can be checked both against news sources and with tools that show maritime traffic, such as MarineTraffic. For historical claims, the prevailing scholarly view should be established through academic articles and reliable bibliographies, and if the matter is disputed, the terms of that dispute should be made clear.
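Part of this comparison can even be automated. The short Python sketch below is a hypothetical illustration rather than a standard fact-checking tool: it pulls an official figure from a public statistics API (here the World Bank's open API stands in for providers such as TÜİK or Eurostat) and measures how far a circulating claim deviates from it. The indicator, the claimed value, and the 5 percent tolerance are our own illustrative assumptions.

```python
import requests

# Hypothetical sketch: compare a claimed statistic with an official figure.
# The World Bank API stands in here for providers such as TÜİK, the WHO, or
# Eurostat; the claim and the 5% tolerance are illustrative assumptions.
claim = {"country": "TUR", "indicator": "SP.POP.TOTL", "year": "2022",
         "claimed_value": 85_000_000}

url = (f"https://api.worldbank.org/v2/country/{claim['country']}"
       f"/indicator/{claim['indicator']}?date={claim['year']}&format=json")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
records = resp.json()[1]        # element 0 is metadata, element 1 holds the data

official = records[0]["value"]  # the official value for the requested year
deviation = abs(official - claim["claimed_value"]) / official
verdict = "plausible" if deviation < 0.05 else "check further"
print(f"official: {official:,}  claimed: {claim['claimed_value']:,}")
print(f"deviation: {deviation:.1%} -> {verdict}")
```

A large gap does not settle the matter by itself; it simply tells the checker where to dig further.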
Citation and quotation claims
Another type of textual misinformation frequently encountered in the digital environment is the citation or quotation claim. Such claims consist of statements attributed to a person or institution and are often circulated to shape opinion, create emotional impact, or lend an idea the weight of authority. Famous people, scientists, politicians, and religious figures are particular targets of false or out-of-context quotations. Some of these claims are fabricated outright; others distort real words by stripping them of their context. Official statements, archived interviews, full-text speeches, and academic publications are the main sources for verification. The date, the context, and the integrity of the speech from which the quote was taken must also be examined.
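One step of this work, locating a circulating quote inside an archived full text, can be illustrated with a few lines of code. The Python sketch below is a hypothetical example built only on the standard library; the quote and the speech excerpt are invented. A low similarity score only signals that the circulating wording does not appear in the source; the date and context of a genuine match still have to be examined by hand.

```python
from difflib import SequenceMatcher

def closest_passage(quote: str, transcript: str, pad: int = 20):
    """Slide a window roughly the quote's length across the transcript
    and return the best similarity score with the matching passage."""
    q = quote.lower()
    size = len(q) + pad
    best_score, best_text = 0.0, ""
    for start in range(0, max(1, len(transcript) - size + 1), 10):
        window = transcript[start:start + size]
        score = SequenceMatcher(None, q, window.lower()).ratio()
        if score > best_score:
            best_score, best_text = score, window
    return best_score, best_text

# Invented example: a circulating quote vs. the archived full text of a speech.
claimed = "We never promised to lower taxes"
archived = ("In that meeting we made one thing clear: we promised to review "
            "the tax system, not to lower taxes before the audit was complete.")
score, passage = closest_passage(claimed, archived)
print(f"similarity: {score:.2f}")
print(f"closest passage: {passage.strip()}")
```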
Visual and video manipulations
One of the most effective types of misinformation in the digital age is visual and video manipulation. Such content spreads much faster than text and leaves a stronger emotional impact, because images and videos give the impression of being “evidence.” Much of this misinformation circulates either as old content recycled in a new context or as images and videos altered with digital tools, and it spreads especially rapidly in times of crisis.
These manipulations take various forms. For example, an old video may be shared as if it were new; an image taken in a different country may be presented with altered location and time information. Some videos are cut and taken out of context, or sped up or slowed down to convey a new meaning. Images can be altered using Photoshop-like editing tools; even scenes that never existed can be created using artificial intelligence. Deepfake technologies can be used to create fake speeches by famous people.
Various tools and methods can be used to verify such content. Reverse image search (Google Lens, Yandex Images, TinEye, etc.) is an effective way to determine where an image has been published before. For videos, individual frames can be extracted and searched in the same way. During verification, it is important to find the original source of the video, to analyze whether the audio is genuine, and to check the dates on which it was shared. Details in the image, such as signs, weather, text, clothing, and facial expressions, provide important clues about the context. AI-generated content is becoming increasingly convincing and therefore requires extra attention: small but telling details such as facial asymmetry, a lack of blinking, light-and-shadow imbalances, artificial gazes, or an implausible number of fingers can indicate that the content is fake.
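As a concrete illustration of the frame-extraction step, the minimal Python sketch below uses the OpenCV library to save roughly one frame per second from a video file; each saved frame can then be run through a reverse image search. The file name and the one-second sampling interval are illustrative assumptions.

```python
import cv2  # OpenCV: pip install opencv-python

# Minimal sketch: save about one frame per second from a suspect video so
# that each frame can be fed to a reverse image search (Google Lens, TinEye...).
# The file name and the sampling interval are illustrative assumptions.
cap = cv2.VideoCapture("suspect_video.mp4")
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 25  # fall back if metadata is missing

frame_no = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:                    # end of video (or an unreadable file)
        break
    if frame_no % fps == 0:       # roughly one frame per second of footage
        cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
        saved += 1
    frame_no += 1

cap.release()
print(f"saved {saved} frames for reverse image search")
```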
Scientific claims and conspiracy theories
The COVID-19 pandemic clearly demonstrated how quickly health-related claims can spread during times of crisis and how they can be added to existing conspiracy theories. Such content often presents complex information in a simplified or distorted manner. Sharing false treatments, anti-vaccine rhetoric, or “miraculous” dietary supplements can seriously jeopardize individuals' health. These claims may be driven by a lack of information, distrust of authority, or the pursuit of financial gain. Additionally, the misuse of scientific uncertainty further exacerbates confusion.
Although conspiracy theories often surface around scientific claims, they are in fact a broader narrative form found in many areas, from politics to economics. Their common feature is that they reduce complex events to the deliberate plans of a few hidden actors. They typically go beyond reasonable doubt, aiming to instill intense distrust and suspicion in their audience. These narratives rarely offer direct evidence; even the absence of evidence is read as proof that “the truth is being hidden” and folded into the narrative. In times of crisis, such content deepens the crisis of confidence and increases social polarization. Dealing with conspiracy narratives therefore demands a different kind of attention than scientific claims: while verifying scientific content rests on the quality of academic publications, the credentials of experts, and institutional statements, conspiracy theories call for careful analysis of the tone of the language, the relationships they imply, and the assumptions underlying them.
The information crisis we are living through is often exacerbated by content we share without questioning. Critical media literacy and fact-checking represent our effort to preserve a common truth amid this chaos. With simple questioning and reasonable doubt, it is possible to change the flow of information shaped by algorithms, because truth can survive in this climate of distrust only when it is built and defended together.
Digital Media Literacy Series
Algorithmic bias: Platform capitalism, data, and reality - Tirşe Erbaysal Filibeli
How to approach “accurate” information in the digital age? - Koray Kaplıca
Our Media
IPS Communication Foundation/bianet is among the partners of “Our Media: Civil Society Movement for the Multiplication of Media Literacy and Activism, Prevention of Polarization, and Promotion of Dialogue,” a three-year project funded by the EU and running from 2023 to 2025.
The project's initial focus is on building the capacity of NGOs, media professionals, young activists, and the public in the Balkans and Turkey to address trends and challenges related to media freedom, development, and sustainability.
The partners of the “Our Media” project are as follows:
South East Europe Network for Professionalization of Media (SEENPM)
Albanian Media Institute (Tirana)
Mediacentar Foundation (Sarajevo)
Kosovo Press Council
Montenegro Media Institute (Podgorica)
Macedonia Media Institute (Skopje)
Novi Sad School of Journalism (Novi Sad)
Peace Institute (Ljubljana)
bianet (Turkey).
The researcher for the “Our Media” project on behalf of the IPS Communication Foundation/bianet is Sinem Aydınlı, the foundation's research coordinator.
Scope of the project
The project begins with research aimed at identifying key trends, risks, and opportunities for media sustainability and mapping good practices in media activism to support media freedom and media and information literacy (MIL). The research findings will be used to strengthen the capacities of NGOs and other stakeholders in the media field to address challenges in the media.
Within the scope of “Our Media,” advocacy activities will be carried out to strengthen the capacities of journalists, media organizations, and media institutions. Local and national media and other actors will be encouraged to carry out media activism work on gender inequalities in the media. Through various activities, the project will also empower young leaders to oppose discrimination and gender stereotypes and to support gender equality.
The project will reach local communities through financial support provided to NGOs in urban and rural areas, with the aim of developing citizens' MIL skills, supporting media freedom and integrity, and countering polarization caused by propaganda, hate speech, and disinformation.

The regional program “Our Media: A civil society action to generate media literacy and activism, counter polarisation and promote dialogue” is implemented with the financial support of the European Union by partner organizations SEENPM, Albanian Media Institute, Mediacentar Sarajevo, Press Council of Kosovo, Montenegrin Media Institute, Macedonian Institute for Media, Novi Sad School of Journalism, Peace Institute and bianet.
This article was produced with the financial support of the European Union. Its contents are the sole responsibility of IPS Communication Foundation/bianet and do not necessarily reflect the views of the European Union.
(KK/SA/VK)


