12 Rules of Healthy AI Use for Creative Thinkers
Practical principles for keeping hold of your soul, creativity and ability to think independently.

I’m still working on part two of my Doctor Who and the Nostalgia Trap article, but my thoughts on AI are coming thick and fast…
With what was previously the realm of science fiction fast becoming a reality, we’re having to rapidly get to grips with the impact of ‘AI’, from Large Language Model-powered chatbots to image and video generation. Previously, I wrote principles on Remaining Human in the Age of AI as the beginnings of a manifesto on AI use — and now I’m attempting to put some practical flesh on the bones.
Read on for my 12 rules for (perhaps) using AI without losing your soul, creativity or ability to think independently…
A Rule of Life for Healthy AI Use
1. I will deliberately invest time in cultivating my craft, learning from other humans, reading deeply and thinking independently
The digital world already presents us with so many seemingly urgent pressures that we neglect the truly important activities we need to preserve space for offline.
As creative thinkers, we need space and time to think away from the digital hustle – and AI threatens to intensify the firehose of shallow ‘content’.
Face-to-face relationships; being in nature; reading books. All of these are more important and life-giving to me than AI.
2. I will prioritise slow, thoughtful processes in my work, resisting the pressure to optimise for speed or volume
We need to value the process of creating and thinking as a worthwhile human activity in itself.
Quality thinking, writing and creativity are often slow, winding and indirect.
When creativity does burst forth suddenly, it’s usually the culmination of a long background process of ruminating and developing.
3. I will avoid using AI when I can reasonably get help from another human being
One of the dangers of technology (like money) is that it gives us results without the complications of human relationships.
But in the messiness and friction of engaging with other people, of asking for help, we grow and deepen our connections to one another.
4. I will use AI to support my own thought and creativity, not as a replacement for it
In particular, I won’t use AI for generating intellectual or creative work.
I will use AI to give feedback, to challenge my ideas, to make me think harder — not do the work for me.
Effort is not the enemy; effort is how I grow.
5. I will give myself time to think and wrestle creatively for myself before using AI
I don’t want turning to AI to be my first instinct or response to a problem. I need to make sure that I keep my own skills sharp, and make space for creativity and inspiration to develop.
See also:
Creativity and spiritual inspiration: Iain McGilchrist and C.S. Lewis on Reason and Imagination
6. I will use AI where I already have skill and understanding to be able to assess its output
One of the big problems with AI is the way it confidently hallucinates wrong information, often mixed in with material that’s broadly correct.
You have to assess its output critically, and you need to know something about the subject yourself to be able to check it.
7. I will remember AI is orientated to plausibility not truthfulness, and treat its output critically
This is inherent to the nature of LLMs: they aren’t orientated to truth or accuracy, only to statistical probability based on their training data and the prompts you give them.
It can seem remarkably effective in giving some kind of overview of a topic, but it’s very risky to rely on.
At best, it can be a helpful springboard to finding proper sources to turn to. But the more you use it outside your area of expertise, the more likely you are to swallow errors without realising it.
8. I will be mindful of the environmental impact of AI
This is a surprisingly complex question. Yes, a ChatGPT query uses more energy and water than, say, a standard Google search, but it also uses a lot less energy than streaming an hour of HD video.
So I don’t think this is a knock-down reason not to use AI, unless you also abstain from lots of other online activities that rely on data-centres.
But I would suggest it means using AI sparingly and mindfully, only where it adds utility that can’t be achieved with less energy-intensive tools.
9. I will remember that I am under no obligation to use AI, taking breaks from AI use and promoting AI-free spaces and projects
There’s a massive push from corporations to get us to adopt AI, with it popping up all over the place in different software and services. You don’t have to use it, and companies shouldn’t be forcing it on users.
Don’t get addicted. If you can’t stop, that’s a sign you need to stop.
It’s healthy to have spaces and projects that are AI-free. I fully support, for example, schools, universities, publishers or creative communities in having no-AI policies where they choose (and in many cases I think it necessary and wise).
10. I will support others in setting healthy boundaries around AI
I won’t pressure other people to use AI beyond their own comfort zone.
I’ll gently challenge those who I think are overusing it to consider their limits.
11. I will be transparent about my use of AI
I have actually used AI in helping me develop these rules! I’m not an anti-AI absolutist: I think it can be useful if we can mitigate its dangers.
For these rules, I wrote a first draft of eight commands myself, as ‘dos and don’ts’, then asked ChatGPT to put them in the first person and suggest other commitments, some of which I adapted and rewrote for myself.
I will be upfront about how I have used AI in my projects, and invite accountability to stop me from misusing these technologies.
12. I will seek community with others seeking to take a cautious and critical approach to AI
I’m open to the possibility that we might need to eschew it completely. But I still think the technology has potential for good if used wisely.
So let’s keep thinking these questions through together, working out what humane AI use might look like, while remaining sceptical of AI hype and protective of human creativity and critical thinking.
What do you think? Are these rules too strict, or not strict enough?
What rules have you set for yourself about how you will or won’t use AI tools?
Let me know in the chat or comments below!