
To AI or Not to AI in Academia?

by Namesh Killemsetty, Associate Professor at the O.P. Jindal Global University, and Prachi Bansal, Assistant Professor at the O.P. Jindal Global University
May 2, 2025
in AI & MACHINE LEARNING, Education

AI in higher education is a reality, but it must be deployed and used more responsibly and equitably


In a classroom discussion on women’s work in India, a student confidently discussed the AI-generated result of her prompt “summarise women’s work in India.” The result was a well-structured essay on how women in the Indian economy are engaged in agriculture and how their work is under-paid.

While many dimensions of this response were correct, what it missed completely was the unpaid care work women do in Indian households. It was a textbook example of algorithmic bias — when women’s unpaid work isn’t measured well, it doesn’t appear in the datasets AI models are trained on.

And when students rely on these tools without question, they risk internalising those same silences. More and more teachers now face similar unsettling moments in classrooms with the overuse of AI by students.

In response to these experiences, four archetypes of teachers are emerging as AI's presence in academia grows. First, there are those who consider AI a hindrance to authentic learning and do not think it should change anything about their teaching pedagogy. They resist integrating AI, perceiving it as incompatible with discipline-specific epistemologies.

These are the traditionalists. This approach works better for subjects such as mathematics and physics, where foundational knowledge needs to be taught deeply and one can abstract from the contemporary world.

Second are the pragmatic integrators, who adopt AI and integrate it into their pedagogy as and when they think it helps in their classrooms. They maintain their agency but use AI for simple tasks such as lesson planning, generating examples for a concept, and experimenting with different kinds of assessments.

Third are the covert users of AI, who use tools (primarily ChatGPT) in the background but are unwilling to acknowledge that usage in front of students or peers, whether because of discomfort with the ethical implications, institutional guidelines or fear of undermining their authority.

The last kind are the AI collaborators, who are trying to build learning experiences with AI and are transparent about its usage. They expose students to AI tools, allow them to use AI and prepare them to see the biases and issues in AI-generated content.

Critical questions

Irrespective of the teacher type — and sometimes one can be in two different categories depending on the course being taught — most instructors today are grappling with two key questions: one, what does it mean to learn or teach in the presence of AI? Two, how can academic integrity and student agency be upheld?

Besides, there are questions of an ethical nature. If students use ChatGPT to answer questions or write their term papers, it is described as cheating, but what if teachers use it? A course on public policy, for instance, cannot avoid discussing AI completely. A teacher's expertise in AI is critical for engaging with students.

Technology has long played a transformative role in classroom pedagogy, far preceding the advent of AI. For instance, science education has historically benefited from laboratory experiments, which offered students tangible, experiential learning opportunities that stood in stark contrast to rote memorisation.

Similarly, in the contemporary teaching of statistics and econometrics, the integration of statistical software such as R, Stata, or Python has gained traction. These tools enable students to engage with real-world data and internalise theoretical concepts through application.
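
To make this concrete, here is a minimal sketch in Python (one of the tools named above, and not drawn from the authors' own courses) of the kind of exercise such software enables: simulating a simple wage equation with known coefficients and then recovering them by ordinary least squares with statsmodels.

```python
import numpy as np
import statsmodels.api as sm

# A hypothetical classroom exercise: simulate a wage equation with known
# coefficients, then recover them by ordinary least squares (OLS).
rng = np.random.default_rng(42)
education = rng.uniform(5, 20, size=500)                 # years of schooling
wage = 2.0 + 0.8 * education + rng.normal(0, 1.5, 500)   # "true" model + noise

X = sm.add_constant(education)      # add the intercept column
result = sm.OLS(wage, X).fit()      # estimate the same relationship by OLS

print(result.params)                # estimates should be close to [2.0, 0.8]
print(result.conf_int())            # 95% confidence intervals around them
```

Comparing the estimates and confidence intervals with the coefficients used to generate the data lets students see the sampling theory from their lectures at work.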



Many undergraduate and postgraduate students take econometrics courses with a largely theoretical orientation. However, it was when a noted statistician and theoretician demonstrated how statistical software could put these theories into practice that students began to fully understand the concepts. Seeing the theory in action made the abstract mathematics much more relatable and accessible.

A similar moment of clarity occurred when simulations were used to teach the law of large numbers, a foundational concept in statistics. Inspired by these experiences, many university teachers now incorporate simulations into their teaching of probability distributions, allowing students to visualise and experiment with statistics. These methods significantly enhance student engagement and conceptual understanding.
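
As an illustration of what such a simulation might look like (a generic sketch, not the authors' classroom material), a few lines of Python are enough to show the running mean of repeated coin flips converging to the true probability:

```python
import numpy as np
import matplotlib.pyplot as plt

# Law of large numbers: the running mean of fair coin flips converges to p = 0.5
rng = np.random.default_rng(0)
flips = rng.binomial(n=1, p=0.5, size=10_000)       # 10,000 fair coin flips
running_mean = np.cumsum(flips) / np.arange(1, flips.size + 1)

plt.plot(running_mean, label="running proportion of heads")
plt.axhline(0.5, color="red", linestyle="--", label="true probability p = 0.5")
plt.xlabel("number of flips")
plt.ylabel("proportion of heads")
plt.legend()
plt.show()
```

Students can rerun the simulation with different seeds or sample sizes and watch the fluctuations shrink, which is exactly the kind of visual experimentation described above.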

Making learning effective

In this broader context, AI should be viewed not as a radical rupture, but as a continuation — and evolution — of the pedagogical tradition of leveraging technology to make learning more interactive, personalised and effective. However, AI is not just a visual aid or a statistical software; it has adaptive and generative abilities.

These abilities also mean it can hallucinate or fabricate data, misattribute sources, or produce conceptually flawed explanations with high linguistic confidence. Teachers now face a particular challenge: building the digital literacy needed to interrogate AI outputs critically.

Just as students are taught to read texts critically or interpret data with caution, they must now be equipped with AI literacy — the ability to engage with generative tools such as ChatGPT with a discerning eye. This involves understanding how AI works, recognising its limitations and developing strategies for verification and triangulation. In doing so, AI becomes not a shortcut to learning, but a site for deeper inquiry and reflection.

If it is argued that AI has the potential to exacerbate existing inequalities — whether through biased data, algorithmic opacity or differential access to technology — then it is an imperative that students are equipped to identify and interrogate these biases. This means going beyond merely using AI tools to engaging with questions such as: Whose data is this model trained on? What perspectives are missing? Why does this output seem biased or skewed?

By incorporating critical data literacy and algorithmic awareness into the curriculum, students can begin to see AI not as a neutral authority, but as a product of human design, carrying the values, assumptions and limitations of its creators. Teaching students how to spot patterns of exclusion, detect stereotyping and question AI-generated narratives is a vital step toward using AI responsibly and equitably in education.

While the risks of AI in exacerbating inequality must be taken seriously, it is equally important to recognise that AI also holds tremendous potential to bridge gaps in pedagogy, particularly for differently abled learners and teachers. Specialised AI tools such as screen readers powered by natural language processing, speech-to-text and text-to-speech systems, real-time captioning, sonification tools, and AI-driven sign language recognition are already transforming accessibility in classrooms.

For students and teachers with visual, auditory or cognitive impairments, these tools can create more equitable learning environments by offering personalised, multimodal learning experiences.

Other interactive platforms such as Mentimeter, Kahoot and Quizizz are fostering more participatory and responsive classrooms by allowing students to engage anonymously, respond at their own pace and visualise collective understanding in real time. Together, these technologies represent not just a disruption, but a democratisation of learning — one that, if guided thoughtfully, can create more inclusive, engaging and learner-centric pedagogies.


This article was originally published by 360info™.


Editor’s Note: The opinions expressed here by Impakter.com columnists are their own, not those of Impakter.com — In the Cover Photo: A close up of a computer screen with a blurry background, showing ChatGPT. Cover Photo Credit: Jonathan Kemper.

Tags: Academia, AI, artificial intelligence, higher education