Recognising and Embracing AI in Research

Artificial Intelligence and human endeavour can work together in harmony to reshape scholarly work

By Namesh Killemsetty, Associate Professor at the O.P. Jindal Global University, and Prachi Bansal, Assistant Professor at the O.P. Jindal Global University
June 23, 2025
in AI & MACHINE LEARNING, Science, Society

The Artificial Intelligence (AI) revolution has transformed the world — not only by creating charming Ghibli-inspired images but also by prompting us to rethink how we conduct research. As tools like ChatGPT and Google NotebookLM redefine how information is accessed and synthesised, researchers find themselves divided.

Some see generative AI as a transformational ally, capable of accelerating discovery and democratising knowledge. Others view it with suspicion, fearing it threatens the core values of creativity, critical thinking and academic rigour.

This divide is particularly sharp in academic circles, where the use of AI is too often caricatured as a shortcut — outsourcing entire papers to a machine. But that oversimplifies a more nuanced reality. Like any emerging technology, the ethical and productive use of AI depends not on the tool itself, but on how we choose to wield it.

Researchers today face a clear choice: use AI to automate tasks or to augment their abilities. Automation implies full delegation — letting a tool generate a literature review, write an abstract or even draft entire sections of a paper. Augmentation, by contrast, is about assistance: refining outlines, identifying relevant works, or summarising dense material. It keeps the human firmly in the loop.

There is no question that AI can streamline workflows. It can help format references, draft a plain-language summary or provide a surface-level overview of a topic. But we must draw boundaries. AI cannot — at least not yet — grasp the subtle nuances of a specific research problem or weigh conflicting interpretations of complex data. It lacks context, judgement and the lived experience of scholarly work.

Generative AI’s shortcomings go beyond mere limitation — they can pose real risks to scholarly integrity. Many AI tools, including ChatGPT, are prone to “hallucinations,” confidently fabricating or distorting information.

In one classroom example, a student using AI to locate literature on slum policies in India was presented with a fictional title, attributed to an author whose name was a hybrid of a first name and a PhD supervisor’s surname. No such book existed; the fabricated reference was, at best, an aspiration — a study that should have been written with the supervisor but never was. Another example from the same class involved AI fabricating the title of a report supposedly published by a major global NGO. On verification, no record of any such document could be found.
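
A simple verification habit catches many of these fabrications before they reach a draft. As a rough illustration rather than a workflow the authors prescribe, an AI-suggested title can be checked against an open bibliographic index such as Crossref. The sketch below assumes the public Crossref REST API and the third-party requests library; the example title is hypothetical and the matching rule deliberately crude.

# Sketch: flag AI-suggested references that have no close match in Crossref.
# Assumes the public Crossref REST API (api.crossref.org) and the third-party
# "requests" library; the response fields used here reflect the API as
# currently documented and may change.
import requests

def crossref_candidates(title, rows=5):
    """Return Crossref records whose bibliographic data matches the title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

def probably_exists(title, threshold=0.8):
    """Crude check: does any candidate share most of its title words with ours?"""
    wanted = set(title.lower().split())
    for item in crossref_candidates(title):
        found = set(" ".join(item.get("title", [])).lower().split())
        if wanted and len(wanted & found) / len(wanted) >= threshold:
            return True
    return False

# Hypothetical AI-suggested reference, not a real citation.
suggestion = "Slum Policies in India: A Participatory Framework"
print(suggestion, "->", "match found" if probably_exists(suggestion) else "verify manually")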

Risks of misinterpretation

Recently, a generative AI tool misread a 1959 article, merging words from two different columns and creating a new term: “Vegetative Electron Microscopy.” The term has no scientific meaning, yet it has already appeared in over 20 published research papers.

These are not harmless errors; they can undermine trust and credibility in academic writing. The problem stems in part from how large language models are trained: their datasets often include internet content with little to no scholarly oversight — Reddit threads with as few as three upvotes, blog posts and low-quality forums all feed into what is ultimately presented as authoritative knowledge.

Purpose-built academic tools such as Scite, Research Rabbit, Elicit and Inciteful represent a step in the right direction. They offer scholars promising avenues to accelerate literature discovery, visualise citation networks and synthesise ideas across papers, going beyond general-purpose AI by tailoring their features to academic workflows.

However, their limitations are significant. Most rely heavily on open-access databases such as Semantic Scholar and PubMed, which means they exclude large volumes of literature locked behind paywalls — often home to the most critical and nuanced research.



This is especially problematic for disciplines such as the humanities and social sciences, where key work often appears in subscription-only journals.

Another common shortfall is these tools’ reliance on abstracts rather than full-text articles. While summaries and keyword analysis offer a quick overview, they miss the nuance and rigour found deeper in a paper’s methodology, argumentation or theoretical framework. Moreover, the semantic links generated between articles can be misleading, as these tools struggle to distinguish between agreement, contradiction and disciplinary difference.
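
One way to keep those coverage limits visible is to query the underlying open index directly rather than only through a tool’s interface. The sketch below is a minimal illustration, not a feature of any of the platforms named above: it assumes the public Semantic Scholar Graph API and the third-party requests library, and whatever sits outside that open index, including much paywalled work, simply will not appear in the results.

# Sketch: search the open Semantic Scholar index for a topic.
# Assumes the public Graph API (api.semanticscholar.org) and the third-party
# "requests" library; field names reflect the current documentation and may change.
import requests

def search_papers(query, limit=10):
    """Return papers from the open index that match the query string."""
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={
            "query": query,
            "limit": limit,
            "fields": "title,year,authors,isOpenAccess",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# Example topic; papers missing from the open index will not show up at all.
for paper in search_papers("participatory slum upgrading India"):
    authors = ", ".join(a["name"] for a in paper.get("authors", []))
    access = "open access" if paper.get("isOpenAccess") else "check library access"
    print(paper.get("year"), "|", paper["title"], "|", authors, "|", access)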

Wise usage of AI in research

Despite these limitations, the platforms excel when used wisely. Google’s NotebookLM provides quick summarisation and can turn uploaded sources into podcast-style audio overviews. Elicit and SciSpace are particularly strong in conceptual synthesis. Inciteful facilitates meta-analysis by mapping relationships among authors, institutions and citations.

When used alongside traditional tools like Google Scholar — and with the occasional visit to a library — these technologies can significantly enhance the research process. For non-native English speakers and scholars from the Global South, AI tools are especially beneficial. In addition to helping with the tasks mentioned above, they can bridge linguistic gaps, clarify complex ideas and improve global access to locally relevant research.

The ethical landscape surrounding the use of AI in research is continually evolving. Scholars must create personal ethical frameworks to guide their use of these tools. Recognising bias — both in the data and within the model itself — is crucial. It’s also essential to understand when the use of AI crosses into the realm of plagiarism.

As peer-reviewed academic journals increasingly mandate the disclosure of AI assistance, transparency is becoming essential, not optional. A growing number of academic publishers now encourage or require authors to disclose how AI tools have contributed to their work — whether in drafting text, generating summaries or conducting literature searches. This move is an important step toward maintaining academic integrity while embracing innovation.

Researchers need to be cautious about relying too heavily on AI-generated content, especially when it comes to interpretation and argumentation. Over-delegating intellectual work to machines can simplify complex ideas into generic narratives, which undermines the originality essential to quality scholarship.

Additionally, ethical AI use involves educating both students and colleagues. Universities have a duty to integrate AI literacy into research training, addressing issues such as authorship, consent and proper attribution. The future of AI in academia will not only depend on the tools we choose but also on how responsibly we use them.

The future of research isn’t AI versus human — it’s AI and human. If we want to preserve the integrity of academic inquiry while embracing the power of emerging tools, we must be thoughtful and transparent in how we integrate AI into our work.

The revolution is here. Let’s not waste time resisting it. Instead, let’s shape it — wisely.

This article was originally published by 360info™.


Editor’s Note: The opinions expressed here by Impakter.com columnists are their own, not those of Impakter.com — In the Cover Photo: Interior of the George Peabody Library in Baltimore, Jan. 15, 2013. Cover Photo Credit: Matthew Petroff.
Tags: AI, AI in research, artificial intelligence, ChatGPT, Elicit, Generative AI, Google NotebookLM, Inciteful, Research Rabbit, Scite, Vegetative Electron Microscopy
Namesh Killemsetty is an Associate Professor at the Jindal School of Government and Public Policy, O.P. Jindal Global University, Sonipat, Haryana.

Prachi Bansal is an Assistant Professor at the Jindal School of Government and Public Policy, O.P. Jindal Global University, Sonipat, Haryana.
