My editor caught me using ChatGPT. Now what?
As a researcher who has been writing academic papers for more than 14 years, I found a new writing challenge: writing engaging content for the AgeLab’s blog.
For my latest blog post, I decided to try something different and collaborated with ChatGPT to help me write it. I engaged in a conversation with ChatGPT and asked it to write a few paragraphs on my chosen topic. The writing gave me an outline for the structure of my blog post, with a clear and concise message. It was almost like having a co-author to brainstorm with! After making a few minor tweaks and reorganizing the text, I happily sent it to Adam Felts, the AgeLab researcher who manages the blog.
Just a few minutes later, I received a Slack message from Adam: "Shabnam, I think I've caught you. You wrote your blog post with ChatGPT," he said.
I was taken aback! How did he know?
Adam explained that ChatGPT has its own writing style that is easy to recognize. When asked to write in forms like blog posts or essays, it organizes its ideas in a step-by-step, methodical way. At a closer look, one sees that it tends to favor sentence structures that begin with dependent clauses. And most importantly, it lacks the capacity for surprise that all human writers have.
Then, Adam asked me: "Now, what do you think? Should I use this draft as your blog post?"
An interesting question. It led us into a conversation about the purpose of the AgeLab blog – and the purpose of writing itself. “Writing is a form of self-expression,” Adam said. “One question is whether the interaction between you and ChatGPT constitutes self-expression, or whether something else is going on. Are we getting you when you work with this tool?”
At first, I decided that I needed to start over and keep ChatGPT out of it. I was concerned that using it might give the impression that we generate content solely with the help of a language model. After further consideration, however, I decided to explore the idea and take the time to gather my thoughts. In doing so, I came across various perspectives that influenced my thinking.
One viewpoint suggests that relying solely on a language model could limit an individual's ability to develop their own unique voice and style. This argument resonates with the concerns Nicholas Carr raised in his article "Is Google Making Us Stupid?" In the piece, Carr discussed how the internet, particularly search engines like Google, may be impacting our ability to think and process information. He argued that constant use of the internet, with its quick access to information, could lead to a decline in our deep reading and critical thinking skills. The internet's format, he noted, with its hyperlinks and distractions, encourages skimming and scanning rather than deep reading, which can hinder our ability to concentrate and comprehend complex ideas.
These arguments resonate further when applied to ChatGPT. On top of worrying about the internet creating shortcuts for finding information, we now have to worry about machines taking over the expression of our ideas for us—and, perhaps, reducing our own capacity to express them ourselves.
On the other hand, I have also encountered a different perspective that advocates for the use of ChatGPT. When engaging in self-expression, we typically strive to communicate information that is accessible to us and acquired through various channels. During our searches for information, in our thoughts, online, or elsewhere, we are prone to errors such as confirmation bias, availability bias, and selection bias. Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has emphasized that the platform has the potential to address the information bias that individuals often encounter when conducting online searches. Since ChatGPT is trained on an extensive dataset comprising diverse texts and code, it encompasses a wide range of viewpoints and perspectives. According to Altman, this broad exposure to various ideas and information equips ChatGPT with the ability to reduce its inclination toward favoring a single viewpoint over others.
If what Altman says is true, then using ChatGPT as a writing assistant may help to expand our thinking outside of where our biases might lead us. By bringing the assistant into the process, writing becomes a dialectical exercise from the very start, as we work to assess and incorporate into our own thinking the material that the assistant provides us. That should make our self-expression richer and more complex, rather than impoverishing it.
For now, I plan to continue having ChatGPT assist me in enhancing my writing skills rather than generating content on my behalf—can you tell where the machine was helping me in this blog post? I am curious about the path this journey will lead me down. I suspect that the experience of teaming up with an AI assistant will be distinct for every individual, much like any other relationship we encounter in life.