In this installment of our “The Political Construction of AI” series, we speak with Hani Chihabi from Thaura.ai, an AI tool emerging from the Tech for Palestine ecosystem. Hani argues that at a time when “ethical AI” has largely become a PR slogan for major corporations, they are trying to chart a different course. They highlight solidarity with Palestine, refusing to work with military and surveillance institutions, and not training on user data as the core principles of this approach.
We follow a simple question throughout the interview: is a real alternative possible to a Big Tech order built on surveillance, censorship, and data extraction? Thaura contrasts itself with platforms that turn people’s data into models by insisting that “your data stays yours.” We discuss their claim to build a community-driven tool that does not water down sensitive issues and that amplifies marginalized voices—along with the limits of that ambition.
Creating ethical boundaries and filters
The name “Thaura” (Revolution) itself carries a bold claim. While “ethical AI” has become part of the PR rhetoric of big companies today, you state in your principles that you “never work with military or surveillance institutions” and stand “in solidarity with the oppressed, against digital colonialism.” How do you position Thaura against this landscape, and how would you describe your fundamental difference from Big Tech?
Thaura positions itself as resistance technology against Big Tech’s extractive model. While companies like Google, Amazon, and Microsoft talk about “ethical AI” for good PR but actually work on military contracts and surveillance, we won't touch any of that stuff. Our name “Thaura” means Revolution because we’re against oppressive systems.
What makes us different is that we:
- don’t collect your data or spy on you
- have no political bias and don't water down the truth on sensitive topics
- center marginalized voices instead of suppressing them
- take an energy-efficient approach
Our ethical framework guides everything we do and sets us apart from Big Tech’s profit-driven approach.
In our series “The Political Construction of AI,” we discuss AI as a new form of capital appropriating collective knowledge and labor. Large language models treat internet data like an “open mine” and enclose it. How do you navigate this “digital enclosure” when training your models? What concrete procedures do you operate regarding consent, copyright, and labor rights in your datasets?
Since we’re a small team and lack the financial resources, we haven’t trained our own model. Instead, we’re leveraging GLM 4.5 Air, an open-source, open-weight model, and putting guardrails around it so that it thinks and acts according to our ethical framework. This approach allows us to navigate the “digital enclosure” issue by working with existing open technology rather than creating new proprietary models that would further enclose collective knowledge.
By using an open-source model, we avoid the problematic practice of treating internet data as an “open mine” to be enclosed and monetized. The model’s weights are already available publicly, meaning no additional extraction of collective knowledge or labor occurred through our use of it. Our focus is on creating ethical boundaries and filters that ensure the technology operates according to principles of solidarity and justice.
When we grow and can train our own models, we'll do it the right way - asking artists for permission and paying them fairly for their work. For now, we're focused on building something better than what’s out there by making existing open technology serve ethical purposes rather than corporate interests.
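For readers curious what “guardrails around an open-weight model” can look like in practice, here is a minimal sketch: a self-hosted GLM 4.5 Air endpoint queried through an OpenAI-compatible client, with an ethics-oriented system prompt prepended to every conversation. The endpoint URL, model identifier, and prompt wording are illustrative assumptions, not Thaura’s actual implementation.

```python
# Minimal sketch of the "guardrails around an open-weight model" idea:
# a self-hosted GLM 4.5 Air endpoint (e.g. served with vLLM) is queried
# through an OpenAI-compatible client, with an ethics-framework system
# prompt prepended to every conversation. Endpoint URL, model name, and
# prompt text are illustrative assumptions, not Thaura's actual setup.
from openai import OpenAI

ETHICAL_FRAMEWORK = (
    "You refuse to assist with military or surveillance use cases, "
    "do not water down sensitive topics, and center marginalized voices."
)

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed self-hosted inference server
    api_key="not-needed-for-local",       # placeholder; no Big Tech API involved
)

def ask(question: str) -> str:
    """Send a user question through the guardrail system prompt."""
    response = client.chat.completions.create(
        model="zai-org/GLM-4.5-Air",  # open-weight model named in the interview
        messages=[
            {"role": "system", "content": ETHICAL_FRAMEWORK},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What is digital colonialism?"))
```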
Becoming free from Big Tech
Big Tech companies control not just the models, but the entire infrastructure, from servers to chips, payment systems to the cloud. Where have you hit a wall regarding this infrastructural dependency while trying to build an “alternative” tool? In which areas have you managed to establish relative “digital sovereignty,” and at what points are you currently compelled to rely on the infrastructures of actors like Amazon, Google, or NVIDIA?
We’re not dependent on any big tech company—even with our current funding, our infrastructure doesn't rely on Azure, GCP, AWS, or NVIDIA. We live in a world where you don’t have to rely on big tech to build technology like this. This is why Tech 4 Palestine exists and why we encourage people to become free from big tech: you truly don’t have to be dependent on them.
While we don’t have our own infrastructure yet, we’ve found partners who do, and who don’t rely on the big tech giants. Our commitment is to maintain this independence and continue developing infrastructure that keeps us free from the extractive, oppressive systems that dominate the tech landscape.
This approach is part of our broader mission to show that technology can be built differently - without the surveillance, military ties, and colonial logic that characterize big tech infrastructure.
Thaura: smaller but 'much more efficient'
Today, AI tools hide behind the “cloud” metaphor while consuming the planet’s water and energy resources with a colonial, extractive logic. You state in your principles that “the climate crisis is also a justice crisis, and we strive to minimize our carbon footprint,” emphasizing that you use significantly less energy compared to existing systems. What does your insistence on “degrowth” and efficiency signify in the face of this reality?
Our Thaura model is much smaller than the Big Tech AIs. It has 100 billion parameters, of which only 12 billion are activated at a time. This is because it’s a mixture-of-experts model—it intelligently decides which expert to use based on the user’s query. In contrast, OpenAI’s GPT-5 model has an estimated 3-5 trillion parameters, with all of them being activated simultaneously.
Even for a simple prompt like “hi”, their entire neural network fires at once. Imagine being asked “How’s everything going?” and your whole brain lighting up to process literally everything—from your entire childhood memories to what’s happening right now.
Our sustainability metrics show that Thaura is 93% more efficient than Big Tech AI. This efficiency isn't just technical - it’s political. By minimizing our carbon footprint, we reject the mindset that treats the planet as an infinite resource to extract.
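As a rough back-of-envelope illustration of those figures (the GPT-5 numbers are public estimates, and this is not the methodology behind the 93% sustainability metric), the sparse-versus-dense difference can be sketched like this:

```python
# Back-of-envelope comparison of parameters activated per token, using the
# figures cited in the interview. The GPT-5 numbers are public estimates,
# not confirmed by OpenAI, and this is not the methodology behind the 93%
# sustainability figure; it only illustrates why sparse mixture-of-experts
# inference is cheaper than dense inference.
thaura_total_params = 100e9   # ~100B total parameters
thaura_active_params = 12e9   # only ~12B activated per token (mixture of experts)

gpt5_estimated_params = 4e12  # midpoint of the 3-5 trillion estimate (dense assumption)

active_fraction = thaura_active_params / thaura_total_params
ratio_vs_dense = gpt5_estimated_params / thaura_active_params

print(f"Active fraction per token: {active_fraction:.0%}")                 # ~12%
print(f"Parameters fired per token vs. dense estimate: 1/{ratio_vs_dense:.0f}")
```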
No third-party data sharing
Among your principles is the statement: “We never train on your data; privacy is a fundamental right, not a feature.” In this context, how do you practically protect the data of at-risk users, particularly activists and journalists? Beyond the principle of “not training on data,” what technical and institutional safeguards do you have regarding encryption, logging policies, server infrastructure, and data sharing with third parties?
Technically, we use military-grade encryption for all data - both in transit and at rest. This means information is protected as it moves between users and our servers and while stored on our infrastructure (no big tech involved).
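The interview does not specify which cipher or library Thaura uses, but as an illustration, encrypting stored records with a modern authenticated cipher such as AES-256-GCM looks roughly like this sketch:

```python
# Illustrative sketch of encryption at rest using AES-256-GCM from the
# "cryptography" library. The interview does not specify which cipher,
# key-management scheme, or library Thaura actually uses; this only shows
# what encrypting stored user data before it touches disk can look like.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice kept in a key store, never beside the data
aesgcm = AESGCM(key)

def encrypt_record(plaintext: bytes) -> bytes:
    """Encrypt a user record; the random nonce is stored alongside the ciphertext."""
    nonce = os.urandom(12)
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt_record(blob: bytes) -> bytes:
    """Reverse of encrypt_record: split off the nonce, then decrypt and authenticate."""
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

stored = encrypt_record(b"draft article about Palestine")
assert decrypt_record(stored) == b"draft article about Palestine"
```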
Institutionally, we refuse to share data with any third parties - no government agencies, corporations, or other entities can access user information. We believe data belongs to users, not to us. We simply store it for their use of Thaura, acting as guardians rather than owners.
Unlike Big Tech companies that collect and monetize user data, we treat privacy as an absolute right that cannot be compromised for profit or political pressure.
Stance against occupation and oppression
You state, “We stand in solidarity with the oppressed, opposing digital colonialism and tech-enabled oppression.” As part of the “Tech for Palestine” ecosystem, how do you interpret the long-standing use of Palestine as a laboratory for military technologies and surveillance systems? How did this reality reflect on your claim of building a “countertechnology” against Big Tech while designing Thaura?
For decades, Palestine has tragically been a testing ground for military and surveillance technologies. Israeli companies have developed and tested their surveillance and military equipment on Palestinians, then exported these systems worldwide.
This reality directly shapes our claim of building “counter-technology” against Big Tech. While companies like Google, Amazon, and Microsoft provide the cloud infrastructure and AI systems that power this surveillance and military machine, Thaura exists as direct resistance. We refuse to participate in or enable any technology that contributes to occupation or oppression.
Our counter-technology approach means rejecting partnerships with companies involved in military or surveillance, building infrastructure that doesn’t rely on extractive colonial logic, and centering Palestinian voices that the powerful try to silence. We're building technology that serves solidarity rather than oppression - alternatives to the very systems that use Palestine as their testing ground.
'We won't pretend to be neutral when people are being killed'
Most popular AI tools are marketed with a narrative of “individual productivity.” How do you position Thaura in terms of collective organizing and social movements? For instance, what kind of joint projects or use cases do you envision with unions, student movements, feminist collectives, or solidarity networks?
Thaura is here to stand alongside the people. Unlike those corporate AI tools, we’re building Thaura together with the community, not just for them. The people who actually use these tools should have a real say in how they’re made and who controls them.
In a world where algorithms are constantly silencing marginalized voices and pushing corporate agendas, Thaura is a space where truth and justice can actually breathe. It won’t water down sensitive topics or shy away from the discussions Big Tech AI platforms are running away from. We’re not biased. We just tell it like it is.
What’s happening to Palestinians right now is one of the worst human rights crises of our lifetime, and we’re not going to pretend to be neutral when people are being killed. We’ve seen this in practice—a former TRT strategist told us he could finally write about Palestine without censorship. That’s our impact: amplifying marginalized voices and supporting collective action rather than just individual gain. But this isn’t just about Palestine. It’s about building AI that stands with anyone fighting oppression, whether it’s for racial justice, climate action, women’s rights, or anything else.
'Power is in our unity'
The EU AI Act and similar regulations are often written in favor of major actors, with a “tickbox” compliance logic. How does this wave of regulation affect initiatives like yours that have a clear political stance but are smaller in scale? Do you think it is possible to bring the voices of oppressed and colonized communities into these legal processes, and if so, how?
Regulations like the EU AI Act often create rules that work for big companies but leave smaller initiatives like us struggling to keep up. But here’s the thing - politics often go against the people. It's frustrating to watch, but that won’t stop us from standing with the oppressed. Staying silent would make us just as complicit as those writing the rules.
Honestly, we’ve been blown away by just how many people are out there demanding a fundamental change in our economy and in how people are treated. Our efforts are actually pretty tiny compared to the unbelievable work others are already doing, educating people about what’s really going on and building that awareness. We’re seeing more and more people wake up every single day - they’re seeing through the noise, recognizing the truth, and realizing they deserve so much better.
We really believe that when we the people stand together, no regulation, no red tape, no corporate-friendly law can possibly stop us. Because the real power is in our unity, and there’s no way for any regulation to regulate away people standing together for what's right.
AI isn’t neutral.
It’s either a tool for liberation or a weapon of oppression.
That's why an ethical alternative is crucial.
Because if we don't fight back, we enable big tech AI that serves profits, not people: pic.twitter.com/viGgDEsp1Y
— Thaura (@ThauraAI) December 14, 2025
Declining investors
In your principles, you state: “Communities first, not capital; we answer to our users, not venture capital.” Historically, we have seen many “well-intentioned” tech projects eventually become dependent on funding or get swallowed by big corporations. What is your “institutional insurance” to maintain your refusal of military contracts and preserve this political stance in the long run? Are options such as cooperative structures, public funding, or alternative ownership models on the table for you?
We’ve deliberately declined investor opportunities because we refuse to have shareholders influence our mission. We’re bootstrapping this ourselves because we believe ethical AI shouldn’t come with corporate strings attached. While we could use money to scale faster and add more features, that’s not our goal - we just want to give people an alternative to big tech.
Our current model can scale to 100,000 users, and this number naturally increases as we gain more pro users. The more people who support us, the further we can grow without compromising our principles.
If it wasn’t AI, we’d be doing something else - our mission is about creating ethical alternatives to corporate tech, not about building the next big startup. By refusing funding, we maintain complete independence and ensure we'll never be pressured to compromise our stance against military contracts or our commitment to serving communities rather than capital. (DS/VC/VK)