As AI sweeps through industries and daily life, it’s forcing us to confront some uncomfortable truths: Is it fair? Does it carry hidden biases? Who’s accountable when it goes wrong? In India, where languages, castes, and customs collide in a billion-plus ways, those questions hit harder. A lone researcher or startup can’t fix this. It demands policymakers, coders, teachers, and everyday citizens rolling up their sleeves together.
At TIACoN 2025, organized by the Trusted Information Alliance, The Interview World had the opportunity to interact with Dr. Geetha Raju, Senior Policy Analyst – AI & Data at the Centre for Responsible AI, Wadhwani School of Data Science and AI, IIT Madras.
During the discussion, she walked us through how her center hunts down bias in algorithms, builds guardrails into new tools, and keeps the human touch front and center. She also laid out what India needs next: smarter laws, better practices, and a shared promise that technology will lift people up, not leave them behind. Here’s the heart of what she told us.
Q: How is your organization addressing AI bias and leveraging technology responsibly to deliver fair, beneficial, and innovative outcomes for end users?
A: AI continues to grapple with bias emerging from data, discrimination, and design. From an organizational standpoint, we are deeply engaged in addressing these issues through intensive research, particularly within the Indian social context.
Our research works focus on assessing AI models to see if they have the potential to amplify existing social biases. While we do not directly “fix” these biases, since that responsibility lies primarily with model creators, we focus on developing robust evaluation frameworks to identify and measure them effectively.
To this end, we are designing evaluation strategies and metrics that assess bias across multiple dimensions, such as truthfulness, factual accuracy, balance, and inclusivity of model outputs. We exert specialized prompt-based testing mechanisms that deliberately elicit potential bias from AI models, allowing us to analyse how and why these biased responses occur.
Through this process, we establish checkpoints that help us understand the inner workings of AI systems and systematically document the biased outcomes they produce. We actively publish them in research papers, highlighting harmful biases and their implications for real-world deployment.
Beyond bias evaluation, we also pursue research in explainability and interpretability, aiming to uncover the underlying reasons AI models behave the way they do. This ongoing work at our center is central to ensuring that AI technologies evolve with transparency, accountability, and fairness at their core.
Q: What policy measures should the Indian government adopt to reduce AI-generated content bias and ensure fair representation of Indian perspectives?
A: The Indian government has already recognized the importance of developing AI systems rooted in the Indian context. As a result, it is actively funding tools, datasets, and models designed specifically for India’s linguistic, cultural, and social landscape. However, the real challenge now lies in operationalizing these initiatives and ensuring their sustainable implementation.
We cannot afford to wait for policies and regulations to take effect, as those often require years to translate into practice. Instead, we need a voluntary, self-regulatory framework that empowers organizations to act responsibly, monitor societal and technological impacts, and report outcomes transparently.
Such an approach allows the government to oversee progress without stifling innovation. When regulations become overly rigid or punitive, industries tend to resist adoption, making it harder to experiment, innovate, and build contextually relevant AI solutions. To implement advanced technologies like AI effectively, we must therefore encourage voluntary compliance, a system in which organizations commit to managing potential risks while leveraging their technical and human resources to monitor, understand, and mitigate harms arising from AI deployment.
This balance between technological advancement and human oversight is indispensable. Organizations should not only possess the technology but also maintain the capacity to evaluate its social impact and take corrective actions. Yet, risk mitigation remains a point of resistance, both for industry and government.
True progress will come when we reward responsible behaviour. By coupling accountability with incentives, we can motivate organizations to embrace ethical AI practices, adopt real-time self-regulation, and contribute to a trustworthy AI ecosystem that aligns with India’s values and priorities.

Q: What new AI and data innovations is your IIT Madras–associated center developing to ensure technology benefits society and improves people’s lives?
A: As a research center, our primary focus is not on delivering products but on guiding others to create them responsibly. We provide strategic direction and expert advice to ensure that products developed by organizations are inclusive, safe, and beneficial for society.
We actively collaborate with industry partners and government bodies in advisory capacities, helping them translate ethical principles into practical, real-world applications. Through these partnerships, we aim to make responsible and human-centered AI a tangible reality.
Our core mission is to understand and advance the safe, ethical, and responsible use of technology. While we do not function as a product development center, our research directly contributes to building frameworks, guidelines, and best practices that make technology safer and more trustworthy for everyone.
Q: What advice would you give stakeholders on meaningfully integrating AI with human intelligence to foster collaboration and impactful outcomes?
A: With the rapid rise of tools like ChatGPT and Perplexity, people increasingly use them as interactive companions, for seeking advice, managing emotional fatigue, or assisting with research and writing. Students, in particular, often view these systems as academic assistants capable of completing assignments or simplifying complex tasks. However, the critical question remains: how do we use these tools responsibly?
Students and users alike must be educated on the ethical use of generative AI. For instance, one should not rely on ChatGPT or similar tools to produce entire assignments. Instead, these systems should be used to seek guidance, clarify concepts, or explore new perspectives. Responsible use also requires fact-checking information rather than accepting every output as truth.
Unfortunately, fact verification is still a weak link. Many users assume these tools are inherently more accurate than traditional search engines like Google. While AI systems may retrieve and present content differently, the responsibility for validation and organization of that information still rests with us.
Ultimately, the impact of these technologies depends on how we use them. We must recognize that these tools, though powerful, are not yet fully mature and can still produce harmful or misleading outcomes. Trusting them is acceptable; overtrusting them is not.
At the end of the day, AI remains a tool, not a human being. It cannot reflect on moral consequences or anticipate harm. The responsibility for ensuring safe, ethical, and meaningful use of these technologies lies with human judgment.
