By 2026, AI has become one of the most pervasive terms in everyday speech, woven into nearly every piece of software and every industry. What began with trivial uses such as writing essays or counting letters has now expanded into research, the very engine of human intellectual progress. While a webpage or a company report may only need to be functional and correct, research demands both correctness and originality. To many, history seems to be repeating itself: a tool built to improve life is beginning to show well-documented harms.
Applying AI to an endeavour this delicate is a thorny question, one that leaves even tenured academics at odds with one another. We talked to three prominent figures in the computer science research landscape, spanning three different subdomains, to get their opinions on the matter.
Academia’s perception of students
“A task that was once expected from PhD and Master’s level students is now often seen to be produced by undergrad level students,” said Swakkhar Shatabda, a BRACU professor leading Data Science-centric research. “Overall, I’d say the scale favours the positive outlook.”
Not all academics share the same optimism, however. “On one hand, we appreciate how the inclusion of AI in academia increases efficacy and inclusivity by allowing more equitable access to learning resources and by enhancing productivity in learning and research activities,” says Naznin Nawar*, an Assistant Professor at BUET specialising in human-computer interaction. “On the other hand, we are concerned about how AI compromises traditional knowledge building, which is a fundamental step for any researcher.”
Implications of AI-generated papers
“A lot of papers are nowadays retracted even after getting published because of the detection of AI-text at a later time. Hence, even if someone gets a publication, further scrutiny can cancel their academic achievement at a later time,” warns Prof. Al Sakib Pathan, a specialist and leading scientist in Networking.
Naznin paints a grim picture of the future: “The immediate implication is developing scepticism and doubts about the credibility, rigour, and reliability of scientific findings.” “We can already see the prominent decline of an individual’s ability to critically think, and actually reason their way through a problem to find a solution,” adds Swakkhar.
AI and originality
Swakkhar conveys optimism regarding AI technology’s calibre: “The models we have nowadays are sophisticated enough to the point that every so often, they will be able to find a specific gap in the research that hasn’t been worked on yet.” He still advises against throwing caution to the wind, however: “Researchers must not blindly start making it the centrepiece of their research. They must do their due diligence to verify that the gap is, in fact, present.”
Not everyone shares this optimism. “I have not yet come across any ‘novel’ idea generated by AI that really is novel. Editors often catch the presented idea’s superficial nature. The internal hollowness or technical flaws can be easy to catch,” counters Pathan. For cases where AI does play a part, he adds: “Successful modifications that obliterate the AI signatures would be one’s credit as an author, even if the idea is taken from an AI output. It is basically similar to the skill of using a tool like Google Search; whoever uses it in the best way and acts on the idea first would gain ownership of it.”
As for the ethicality argument, Naznin neatly outlines the standard protocol: “My research community ensures that while submitting an academic paper, the authors must include how they used AI for their work.”
Is the writing process obsolete?
Pathan and Naznin lean towards a prudent stance: “The fact of the matter is that AI is unable to have the cognitive understanding of the use of words as a human would have,” says the former, unenthused by the onslaught of hype. “AI can never accurately mimic humans, no matter what promises are shown by overzealous fanatics.” The latter, emphasising the human element, opines: “For me, academic writing is more than just producing a task or organising thoughts. Hence, human intervention will always be required to ensure reflective representation of an argument.” Against the grain, Swakkhar believes in a new dawn for research: “I do agree that it would be more productive for researchers to focus on the actual idea generation and necessary methodology.”
Finding research enthusiasm
“A student’s genuine interest will be apparent from their own understanding of the topic and how they delve into the details of the parts that intrigue them,” says Swakkhar, while underlining the multifaceted nature of the process. “Almost all research-based programs involve an interview session, and that’s where the real filtering happens.” The same sentiment is echoed by Pathan: “There are some people whose academic outputs as well as their way of expression speak for themselves.” Naznin highlights a different aspect of the process: “The faculty also reaches out to the individuals who provide a letter of recommendation for a certain applicant.”
The peer-review process
Naznin chalks up overwork as one of the causes behind flawed papers slipping through review: “As reviewers, we are often overburdened with reviewing tasks owing to the large number of submissions in recent times.” Swakkhar, pointing out the futility of the pursuit, says: “It is unfortunate and does happen sometimes due to human error and negligence. But ultimately, those papers never really amount to anything and oftentimes, end up harming the author rather than favouring them. In the short term, they might indeed leverage that to gain funding, but ultimately someone ends up finding the irreproducibility in the research during their own literature review, and that can permanently smear the author’s reputation.”
Allowing AI usage in papers
Jumping right into the particulars, Naznin reveals: “AI tools and technologies are permitted to be used to create content, such as images, tables, etc. Authors are also permitted to use AI to improve the quality of writing, including style, phrasing, and grammar.” Pathan offers a number, referring to the proportion of generative content permitted: “In usual academic writing, anything beyond 20% is not allowed for research papers.” Swakkhar takes it one notch further, encouraging the incorporation of the background study within the AI workflow: “I personally believe the process of literature review has been streamlined. Before, it used to take hours to effectively read and comprehend a set of papers, but with the careful use of AI, researchers can cover more literary ground.”
Tips for researchers reliant on AI
Wasting no time downplaying the difficulty, Swakkhar advises: “Actually do the hard work of learning research methodologies. Obtain sufficient theoretical knowledge of the field that you want to research. Learn coding the hard way.” He adds: “Once you have a solid foundation, that’s when you can reliably use AI to streamline your research journey and tenfold your productivity.” Naznin, as a final message to aspiring researchers, says: “I would advise that AI be used as an assistive tool, not as a substitute for developing fundamental skills.”
Some names have been changed for the sake of privacy upon request by interviewees.