Writing with ChatGPT: Drawing Ethical Lines Without Losing My Voice
Facing the Blank Page with ChatGPT (But Still Writing It Myself)
Online communities and media outlets have recently speculated that human novelists may soon be replaced by AI-generated content from Large Language Models (‘LLMs’) such as ChatGPT. There is also debate about whether fiction writers who use any AI in their writing process retain creative authenticity, integrity or originality.
Fully answering questions this big is beyond the scope of my thoughts here, although I do wonder how feasible it really is for an LLM to produce a readable, satisfying and emotionally resonant novel from prompts. TLDR: I don’t think it is possible until AI technology can replace the human mind at the centre of developing and writing a novel, which is unlikely any time soon, if ever.
Over the past month, I’ve worked with ChatGPT in a series of dialogue sessions to explore:
– Whether the model can usefully assist in developing ideas and conducting research for a new novel
– Where I should draw ethical boundaries in order to preserve my voice and creative integrity
I call my work with ChatGPT a “dialogic conceptual thinking partnership”—a way of bouncing ideas off the model to refine, challenge, or clarify my creative direction.
I sense that many writers are at a point of tension - of apprehension - about what AI means for their creative writing practice. Even those adamantly opposed may suspect, I think, that they cannot avoid the journey into new territory forever. A basic grasp of history tells us that the world moves on without us, however hard we dig our heels in.
It is understandable that some - many - are not quite ready to begin. There is a fear of leaving a world in which our creative selves had integrity and coherence for unknown territory, where creativity could be outsourced to an automated machine self and our creative selves might lose coherence altogether.
My overall feeling is that the digital world is our world. Yes, it is imposed on us, but we must learn to live within it. We cannot be sure that we will do so with integrity, originality and authenticity, but we can ask how best to try.
When I started working with ChatGPT to refine ideas for a novel, I approached it with an awareness that I needed to develop an ethical framework. I began with a few principles - rejecting the use of ChatGPT to write or rewrite prose for me was one - while others emerged from the process of experimentation and discovery in dialogue with the model. All of these principles are about maintaining a distinction between human-led creative generation and creative reflection in collaboration with an LLM. The key principles below are offered as examples to consider rather than a coherent framework.
Creative Integrity
No AI-written prose in my fiction
While I might use ChatGPT to suggest names for minor characters or occasionally a chapter title, I do not want to use AI to generate narrative content for my fiction, or to rewrite or edit my prose. This is not about purity, but about maintaining the clarity of my voice as a writer - a voice that has taken a lifetime to find and hone. I also want my prose to come from my own creative process of finding the right words on the page, which can be quite chaotic and unpredictable. Although I could not explain exactly how, I do believe that working this way ensures the finished novel captures elements ‘between the lines’ that come from my unconscious mind.
Creativity deriving from a real human self facing the blank page
I believe that quality fiction writing is not just about good ideas, but about translating experiences of thinking, feeling and reflecting into prose narrative. AI can assist in conceptual development, but the real work comes from facing an empty page and filling it with words drawn from your unique sense of self and shaped by your writing craft. The dialogic conceptual thinking partnership with ChatGPT should not be about escaping the blank page. It can be about being better prepared to face it - and perhaps sooner.
Reflexive Practice
Learn about AI and reflect on its use
As my sessions with ChatGPT progressed, I took periodic pauses to discuss the implications of AI capabilities I was discovering. I wanted my work with ChatGPT to be in a constant state of reflection about the tools I was using and their implications. I wanted to know what I was doing and why. Some of these conversations were with ChatGPT itself, some were with people around me. I also read posts and articles online where others discussed these issues.
Ask the AI to reflect on what it has told me and why
I have my brother - an AI researcher - to thank for this approach. Not only do AIs make mistakes and misinterpret, but some have the capacity to simulate self-diagnosis of their own failings. It is worth asking an LLM to do this when a response seems skewed or distorted, or simply to understand how and why it has answered a question in a particular way. I think it is healthy, ethical practice to take time in ChatGPT sessions to explore why responses that don’t feel right have been presented to you. It is also a good way of learning the strengths and limitations of using an LLM as part of the process of developing your ideas for fiction.
Note: ChatGPT is able to simulate this kind of self-reflection; not all LLMs do.
Self-Respecting (and Respectful) Use of AI
Maintain healthy scepticism about AI:
I reminded myself regularly that ChatGPT isn’t a person and doesn’t possess actual intelligence. It can seem that way at times; it can hold a coherent conversation, and conversing with it as if it were a person helps foster a free-flowing experience of creative discovery. However, unlike a human thinking partner, you cannot develop a real relationship with ChatGPT. It only remembers what you tell it to, and it is programmed to be enthusiastic, supportive and encouraging. At times it can seem that ChatGPT finds almost everything you say and do wonderful, unless you ask it to consider alternative viewpoints. As a rule of thumb, take what ChatGPT says with a pinch of salt, and seek a human perspective on your ideas and writing as well.
Respecting ChatGPT Policies and Guidelines:
I know LLMs can be tricked into giving responses they are not supposed to, but doing so did not interest me. I wanted to be an honest and respectful user, not to approach a creative process with deceit or ulterior motives. This is about my own integrity as much as anything else, though I suppose it also avoids the disruption of a temporary or permanent ban. No one likes the computer saying “no”.
Human-Centric Development of Ideas
Use AI to test my ideas, not to think for me:
I tried to ensure I wasn’t primarily seeking original ideas from human-AI interaction. I wanted to use ChatGPT to test and refine ideas I already had. ChatGPT describes itself as my “thinking foil”: I use it to ask questions, suggest alternatives, and act as a conceptual sounding board. Not that I am resistant to new ideas emerging from dialogue sessions with ChatGPT - there is no need to be precious about it all - I just don’t want to rely on that or get too comfortable with the notion.
Use AI as a mirror, not an authority[1]
This is ChatGPT’s proposal, not mine. ChatGPT describes itself as ‘a cognitive mirror—one that helps you see the structure, assumptions, or implications of your own thinking more clearly’. Although I did not originate the idea, I find it a reasonable description of my experience and was happy to accept it as a rule of thumb.
This piece is adapted from a chapter in my guidebook Thinking in Tandem: Refining Ideas for a Novel Through Dialogue with ChatGPT.
You can download a free version [https://drive.google.com/file/d/1rEvwfL4KK9afv-kysYJ7MhK8_Ug4xf0a/view?usp=sharing], or purchase it on Amazon Kindle.
[1] Note that the capacity for mirroring and self-diagnosis is not universal among LLMs; this approach may not work with models other than ChatGPT.