Impakter

Recognising and Embracing AI in Research

Artificial Intelligence and human endeavour can work together in harmony to reshape scholarly work

by Namesh Killemsetty, Associate Professor at O.P. Jindal Global University, and Prachi Bansal, Assistant Professor at O.P. Jindal Global University
June 23, 2025
in AI & MACHINE LEARNING, Science, Society

The Artificial Intelligence (AI) revolution has transformed the world — not only by creating charming Ghibli-inspired images but also by prompting us to rethink how we conduct research. As tools like ChatGPT and Google NotebookLM redefine how information is accessed and synthesised, researchers find themselves divided.

Some see generative AI as a transformational ally, capable of accelerating discovery and democratising knowledge. Others view it with suspicion, fearing it threatens the core values of creativity, critical thinking and academic rigour.

This divide is particularly sharp in academic circles, where the use of AI is too often caricatured as a shortcut — outsourcing entire papers to a machine. But that oversimplifies a more nuanced reality. Like any emerging technology, the ethical and productive use of AI depends not on the tool itself, but on how we choose to wield it.

Researchers today face a clear choice: use AI to automate tasks or to augment their abilities. Automation implies full delegation — letting a tool generate a literature review, write an abstract or even draft entire sections of a paper. Augmentation, by contrast, is about assistance: refining outlines, identifying relevant works or summarising dense material. It keeps the human firmly in the loop.

There is no question that AI can streamline workflows. It can help format references, draft a plain-language summary or provide a surface-level overview of a topic. But we must draw boundaries. AI cannot — at least not yet — grasp the subtle nuances of a specific research problem or weigh conflicting interpretations of complex data. It lacks context, judgement and the lived experience of scholarly work.

Generative AI’s shortcomings go beyond mere limitation; they can pose real risks to scholarly integrity. Many AI tools, including ChatGPT, are prone to “hallucinations,” confidently fabricating information. In one classroom example, a student using AI to locate literature on slum policies in India was presented with a fictional title whose supposed author combined a first name with a PhD supervisor’s surname.

No such book existed; the invented title described, at best, work that had yet to be done with that supervisor. Another example from the same class involved AI fabricating the title of a report supposedly published by a major global NGO. On verification, no record of the document, or even of the organisation, could be found.
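The practical lesson from these cases is procedural: every AI-suggested reference should be checked against an authoritative source, such as a library catalogue or publisher database, before it is cited. A minimal sketch of that habit in Python (the function, titles and catalogue here are hypothetical examples, not a real bibliographic service):

```python
# Minimal sketch: flag AI-suggested references that cannot be verified
# against a trusted index (e.g. a library catalogue export or a curated
# bibliography). All titles below are invented for illustration.

def verify_references(suggested, trusted_index):
    """Split AI-suggested titles into verified and unverified lists.

    Matching is deliberately strict (case-insensitive exact title match);
    a fuzzy match would risk "confirming" a fabricated citation.
    """
    trusted = {title.strip().lower() for title in trusted_index}
    verified, unverified = [], []
    for title in suggested:
        (verified if title.strip().lower() in trusted else unverified).append(title)
    return verified, unverified

# Hypothetical data: one real-looking title, one the AI invented.
ai_suggestions = [
    "Housing Policy and Informal Settlements in India",
    "Slum Governance: A Definitive Handbook",   # fabricated-sounding title
]
catalogue = ["Housing Policy and Informal Settlements in India"]

ok, flagged = verify_references(ai_suggestions, catalogue)
print("Cite:", ok)
print("Verify by hand before citing:", flagged)
```

Anything that lands in the flagged list is not necessarily fake, but it has not been confirmed, and the burden of verification stays with the human author.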

Risks of misinterpretation

Recently, a generative AI tool misread a 1959 article, merging words from two different columns and producing a new term: “Vegetative Electron Microscopy.” The term does not exist in the scientific community, yet it has already appeared in over 20 published research papers.

These are not harmless errors; they can undermine trust and credibility in academic writing. The problem stems in part from how large language models are trained. Their datasets often include internet content with little to no scholarly oversight: Reddit threads with as few as three upvotes, blog posts and low-quality forums all feed into what is ultimately presented as authoritative knowledge.

Purpose-built academic tools such as Scite, Research Rabbit, Elicit and Inciteful represent a step in the right direction. They go beyond general-purpose AI by tailoring their features to academic workflows, offering scholars promising ways to accelerate literature discovery, visualise citation networks and synthesise ideas across papers.

However, their limitations are significant. Most rely heavily on open-access databases such as Semantic Scholar and PubMed, which means they exclude large volumes of literature locked behind paywalls, often home to the most rigorous and nuanced research.



This is especially problematic for disciplines such as the humanities and social sciences, where key work often appears in subscription-only journals.

Another common shortfall is these tools’ reliance on abstracts rather than full-text articles. While summaries and keyword analysis offer a quick overview, they miss the nuance and rigour found deeper in a paper’s methodology, argumentation or theoretical framework. Moreover, the semantic links generated between articles can be misleading, as these tools struggle to distinguish between agreement, contradiction and disciplinary difference.

Wise use of AI in research

Despite these limitations, the platforms excel when used wisely. Google’s NotebookLM provides quick summarisation and can turn uploaded sources into podcast-style audio overviews. Elicit and SciSpace are particularly strong in conceptual synthesis. Inciteful facilitates meta-analysis by mapping relationships among authors, institutions and citations.

When used alongside traditional tools like Google Scholar — and with the occasional visit to a library — these technologies can significantly enhance the research process. For non-native English speakers and scholars from the Global South, AI tools are especially beneficial. In addition to helping with the tasks mentioned above, they can bridge linguistic gaps, clarify complex ideas and improve global access to locally relevant research.

The ethical landscape surrounding the use of AI in research is continually evolving. Scholars must create personal ethical frameworks to guide their use of these tools. Recognising bias — both in the data and within the model itself — is crucial. It’s also essential to understand when the use of AI crosses into the realm of plagiarism.

Transparency is becoming essential, not optional. A growing number of peer-reviewed journals and academic publishers now encourage or require authors to disclose how AI tools have contributed to their work, whether in drafting text, generating summaries or conducting literature searches. This is an important step toward maintaining academic integrity while embracing innovation.

Researchers need to be cautious about relying too heavily on AI-generated content, especially when it comes to interpretation and argumentation. Over-delegating intellectual work to machines can simplify complex ideas into generic narratives, which undermines the originality essential to quality scholarship.

Additionally, ethical AI use involves educating both students and colleagues. Universities have a duty to integrate AI literacy into research training, addressing issues such as authorship, consent and proper attribution. The future of AI in academia will not only depend on the tools we choose but also on how responsibly we use them.

The future of research isn’t AI versus human — it’s AI and human. If we want to preserve the integrity of academic inquiry while embracing the power of emerging tools, we must be thoughtful and transparent in how we integrate AI into our work.

The revolution is here. Let’s not waste time resisting it. Instead, let’s shape it — wisely.


This article was originally published by 360info™.


Editor’s Note: The opinions expressed here by Impakter.com columnists are their own, not those of Impakter.com — In the Cover Photo: Interior of the George Peabody Library in Baltimore, Jan. 15, 2013. Cover Photo Credit: Matthew Petroff.
Tags: AI, AI in research, artificial intelligence, ChatGPT, Elicit, Generative AI, Google NotebookLM, Inciteful, Research Rabbit, Scite, Vegetative Electron Microscopy
© 2025 Impakter.com owned by Klimado GmbH
