AI tools pose threat to research ethics

Among the many impediments to the integrity and sanctity of academic research, perhaps the most perilous and unprecedented challenge is the emergence of Artificial Intelligence (AI) tools such as ChatGPT. The rampant use of these tools poses serious threats to research ethics and scholarly honesty. Most importantly, it is disrupting the broader academic ecosystem that sustains research, teaching, scholarship and fairness in general.

This article attempts to explore this potential threat to the sanctity, originality and integrity of research, especially research in the social sciences. Before delving into the uses and abuses of ChatGPT, a brief discussion of the tool is necessary: how does it work? From where does it fetch its information? Why and how has it become a centre of attraction, and of controversy, around the world?

How has it brought about sea changes in research, both positive and negative? Are governments around the world prepared to take punitive action against the abuse of such AI tools in research? Besides academia, which sectors face threats from the new conversational artificial intelligence platforms? What if ChatGPT promotes cheating such as plagiarism in research? The origin of ChatGPT can be traced back to 2018, when its parent organization, the California-based company OpenAI, released its earliest language model, GPT. It was followed by GPT-2 in 2019 and GPT-3 in 2020; ChatGPT itself was launched on 30 November 2022, and the latest model, GPT-4, followed in March 2023.

It is designed to generate content such as text, images, videos, simulations, code and so on. It is capable of writing essays, emails, research papers, fiction, maths worksheets etc. in no time. For such tasks it needs only a prompt, in written or audio form, to which it responds within a few seconds. The latest model, GPT-4, is the most advanced in terms of speed, range and efficiency: it can process around 25,000 words of text with stunningly accurate research skills. ChatGPT attracted one hundred million users within two months of its launch, a growth that is unprecedented. GPT-4 is a long leap towards accuracy, as it can analyse and decode images and mimic texts in human-like language. Such an enormous leap in artificial intelligence is capable of bringing sharp changes to spheres as varied as the educational ecosystem, film, animation, markets and technology.

Precisely, it has brought about a revolution in the use of artificial intelligence, about which the broader academic community has already shown serious concern, as it directly threatens academic integrity. Responding to prompts such as keywords or phrases, it provides stunningly comprehensive content that has had a dramatic impact on existing knowledge systems. It already powers Microsoft’s Bing search engine; firms like Morgan Stanley Wealth Management are investing in it to build information systems; various online education companies are using GPT-4 as an automated tutor. OpenAI is not alone in this shift: tech giants like Google, Meta and Microsoft are investing billions of dollars to build their own chatbots and AI technology. Recently the tool showed remarkable results in a test conducted by professors at the University of Minnesota, scoring a grade ‘C’, no easy task for a bot; it has also passed a law examination and an MBA examination in the USA. In the existing research system, various tools and software have been developed worldwide to check for plagiarized content, and these have ensured academic integrity to a good extent.

But this revolutionary shift, which experts have dubbed ‘tectonic’, poses a serious threat because it is capable of disrupting any existing system of plagiarism checking. The concern lies in how AI tools like ChatGPT function: they can produce multiple writing styles, paraphrase and rephrase at will, and, most noteworthy of all, let the user control the style of the content in multiple ways. In other words, such tools are so sophisticated that they can evade the existing software used to fight plagiarism in research. It is important to note that ChatGPT is neither the first AI writing tool on the market nor the first designed by OpenAI.

Before ChatGPT, several AI writing tools were already available: CopyAI, Writesonic, Kafkai, Copysmith, Peppertype, Articoolo, Copymatic and so on. The question, then, is what is different about ChatGPT that has fuelled such controversy in the broader academic world? Why are we so concerned about it when web browsers, search engines like Google and data reservoirs like Wikipedia have been available since the 1990s? Unlike earlier tools, ChatGPT stands out in its nature, mode of functioning, efficiency and accuracy. It is a large language model (LLM) that can generate human-like text instantly in response to prompts, and it is capable of holding cognitive conversations from a scholarly perspective.

The most perilous consequence is that ChatGPT can respond in multiple text styles; its intelligence is sound enough to infer the intent of the user and produce text in simple or complex styles accordingly. Needless to say, such varied styles have the potential to escape the plagiarism-checking software available to academia. This will create an atmosphere of mistrust in which even honest work by a meticulous researcher may be questioned, a consequence of an unprecedented technological revolution that cannot be wished away in the wake of the ICT revolution in this globalized world. Do governments have adequate mechanisms to combat such threats to academic integrity?
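Why paraphrased AI output slips past conventional plagiarism checkers can be illustrated with a toy sketch. This is not the algorithm of any real checker, and the function names (`word_ngrams`, `overlap_score`) are invented for illustration; many checkers, however, do rely on matching overlapping word sequences, which a paraphrase with the same meaning can avoid entirely.

```python
def word_ngrams(text, n=3):
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(a, b, n=3):
    """Jaccard similarity of word n-gram sets -- a crude stand-in
    for the sequence matching that plagiarism checkers rely on."""
    ga, gb = word_ngrams(a, n), word_ngrams(b, n)
    if not ga and not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

original = "the rapid growth of artificial intelligence poses new risks to academic integrity"
verbatim = "the rapid growth of artificial intelligence poses new risks to academic integrity"
paraphrase = "academic honesty faces fresh dangers as machine intelligence expands quickly"

print(overlap_score(original, verbatim))    # identical text scores 1.0
print(overlap_score(original, paraphrase))  # paraphrase shares no trigrams: 0.0
```

A verbatim copy scores a perfect match, while the paraphrase, identical in meaning, shares not a single three-word sequence with the original and so scores zero, which is precisely the evasion described above.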

In response to mounting and unprecedented cases of cheating, several universities in countries such as France, the USA and India have already imposed restrictions on the use of such AI tools, but these have had little effect. Many researchers have even listed ChatGPT as a co-author of their research articles. To date, around 12,000 journals, including renowned titles in the Nature group, have officially banned ChatGPT from being listed as a co-author. Nonetheless, having failed to hold back the technology, these journals have updated their guidelines to state that AI tools may be used to improve the readability and language of an article, but not to replace key tasks that must be done by the authors, such as interpreting data or drawing scientific conclusions. AI in research is an undeniable revolution, and academia in many respects has to embrace such technological upgradation. But the question remains: what can be done to ensure the sanctity and integrity of academic research?

Especially in a context where a reliable, tested and validated tool to identify the dishonest use of AI is yet to be invented. Last year OpenAI launched another tool, the AI Text Classifier, to distinguish AI-produced from human-written text. But that tool is admittedly imperfect and has yet to be certified by OpenAI as reliable in every case. Here lies the importance of a reliable detection mechanism and a punitive policy to combat such challenges for the sake of scientific and cognitive research. A serious, multi-stakeholder endeavour to address this unprecedented challenge and to ensure academic integrity is the need of the hour.
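One idea reported to underlie such detectors is that human prose varies its sentence lengths more than machine prose, a property sometimes called ‘burstiness’. The sketch below is a toy illustration of that single signal only; it is not OpenAI’s actual method, whose details are unpublished, and the function name `burstiness` is invented here.

```python
import re
from statistics import pvariance

def burstiness(text):
    """Population variance of sentence lengths in words -- a toy
    proxy for the 'burstiness' signal some AI-text detectors are
    reported to use. Higher variance reads as more human-like."""
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pvariance(lengths) if len(lengths) > 1 else 0.0

uniform = "This is a test. That is a test. It is a test."
varied = ("Yes. This longer sentence meanders through several clauses "
          "before it finally stops. Short again.")

print(burstiness(uniform))  # 0.0 -- every sentence the same length
print(burstiness(uniform) < burstiness(varied))  # True
```

Real classifiers combine many such statistical signals, and the article’s point stands: no single heuristic of this kind is reliable enough, on its own, to accuse a researcher of dishonesty.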

(The writer is Assistant Professor of Political Science, Galsi Mahavidyalaya, Purba Bardhaman.)
