Using AI for academic research and writing: Some cautionary thoughts

The positive side of AI for academic research and writing

There’s been a lot of buzz lately around the use of AI to help with academic research and writing… I’ll get the positives out of the way first: I think AI can be incredibly useful, especially for parsing large amounts of data, or for tasks like transcription and proofreading, and I think it can be an excellent learning tool.

And it’s probably going to be important for a lot of researchers to know how to use AI effectively.

However…

The risks of using AI

There are huge risks to using AI, both on a personal level, and for academia in general, and you need to be very careful about how you use it.

In some ways, the risks are the same as with any use of software.

For example, with statistical software it’s very easy to plug in numbers and get statistical information out, saving a huge amount of time compared to doing it manually. However… in some ways it’s too easy to use, meaning you can get statistical information without understanding what the software is doing on your behalf, or even what the numbers mean or whether they are appropriate to use.

You need a certain amount of statistical knowledge to be able to interpret the data correctly, or to check whether there’s a mistake, but all too often researchers just copy and paste from the software (and all too often this gets past the reviewers).
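
To make the point concrete, here is a minimal sketch in Python with SciPy (my choice purely for illustration; the numbers and the choice of test are invented, not taken from any real study):

```python
# A minimal illustrative sketch (Python + SciPy); the data and the choice
# of test are invented for illustration, not taken from any real study.
from scipy import stats

group_a = [5.1, 4.8, 5.5, 5.0, 4.9]
group_b = [5.9, 6.1, 5.7, 6.3, 6.0]

# One line gives you a test statistic and a p-value...
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# ...but nothing here checks whether the test was appropriate: are the
# samples independent? Roughly normal? Are the variances comparable?
# Those judgements are still the researcher's responsibility.
```

The point isn’t the specific test; it’s that the software will happily give you an answer whether or not you’ve asked the right question.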

AI multiplies the potential for these kinds of problems, with some people talking about using AI to automatically generate papers directly from the raw data.

If you don’t know what the AI has done to analyse the data (which may not necessarily be what it says it has done), but you put your name on the work anyway, you’re still responsible for it.

A good reviewer or examiner should catch it (especially an examiner who gets to question you)…

But it could actually be worse for you if you get away with it: if flawed or fabricated analysis comes to light after publication, it could mean losing your job several years after the fact.

How to use AI ethically for PhD research

So if you’re using AI (and again, to those who comment on videos without watching them: I’m not saying you shouldn’t use it), I think you should follow Kevin Kelly’s take… AI is like an intern. You can get it to do certain useful tasks, but you have to check its work.

This means, no matter how good AI gets, you still need well-developed research skills to be able to check what it’s done.

And no matter how good AI might be at summarising papers, you still need to be able to read and understand them yourself. And no matter how good ChatGPT is at generating text, you still need to be able to write and express yourself clearly.

My worry is that AI will make it too easy for those who want to cheat. There have always been people willing to fake data just to churn out papers, but if we end up with AI generated papers, citing other AI generated papers, being reviewed using AI, then we’re in real trouble.

But my hope is that this just makes the human factor more important. Maybe conferences will become the dominant forum for peer review in the future, where humans can ask questions to other humans, and maybe the whole process will become more transparent… just a thought.

If you’re going to put your name on it, make sure you can defend it

So I don’t want to be too negative about AI… by all means, use it. But use it carefully, make sure you know enough to check what it’s done, and make sure you can defend what it’s done if you’re going to put your name on it.

James Hayton

Recovering physicist. I used to work in nanoscience before moving on to bigger things. After finishing my PhD in 2007 I completed two postdoc contracts before starting to coach PhD students full-time in late 2010. In 2015 I published the book

https://amzn.to/32F4NeW