
Why you can’t use ChatGPT to write technical articles 

AI-generated content is taking over the world, but it’s not ideal for technical articles 

Like many other experienced copywriters, I was apprehensive about the launch of ChatGPT. However, as its popularity grew, I became curious and decided to experiment with it across different forms of content. Interestingly, the results varied.

ChatGPT is great when it comes to producing snappy and engaging posts for social media, which can be a real time saver. It’s also good at summarising information and comes in handy when the old writer’s block hits. However, when it comes to technical content, it’s far from perfect. Here’s why: 

1. Information comes from multiple sources

ChatGPT relies on data and algorithms, gathering information from multiple sources all over the internet. Say you have a client who manufactures fire-resistant cables and you want to write a blog on why these cables are important. You ask ChatGPT to write a 500-word blog on the subject, and within seconds it does exactly that. But there is a problem. If your audience is in the UK, then at least half of the information gathered could be irrelevant, because each country has its own guidance on cables and fire safety. Even if you prompt ChatGPT to focus on the UK, data will still be pulled from various sources, and some pieces of information may contradict each other.

2. You can’t fully trust the information gathered 

Even though ChatGPT can now access up-to-date information, there’s still a danger of it being irrelevant or incorrect. When it comes to writing technical content, you need to be extra vigilant and make sure the information you refer to is correct and comes from a verified source. This is especially important if the topic of your article concerns health and safety. You don’t want to share misleading information.

Not all the information will be from verified organisations or publications. Some may have been taken from opinion pieces, personal blogs and the like, so its credibility comes into question.

3. There’s a risk of plagiarism 

Now, let me be clear: ChatGPT does not technically plagiarise; it takes information and rewrites it into new text. However, this content may contain reused phrases, which can present a problem with technical articles. Some of these articles deal with niche topics, so there may be a limited amount of information readily available online, and ChatGPT will use what it can find. This is a very grey area, but it doesn’t look good if the copy contains phrases or sentences lifted straight out of a published article…

4. Lack of human expertise 

What do I mean by “human expertise”? After all, surely the information gathered by AI came from humans? Well, ChatGPT may present you with the facts, but nothing beats talking to actual experts in the industry.

Let’s go back to our cable manufacturer. You’re still looking for information on the importance of fire-resistant cables, so you quiz an expert on them. They can tell you so much more because they work with the products every day and have done so for many years. They know the products and the regulations inside out. You may even discover little-known facts or surprising stories you could use to embellish your article. Such unique perspectives cannot be picked up by ChatGPT.

5. It sounds ro-bot-ic

Technical articles are factual and formal, but that doesn’t mean the writing has to be lifeless. ChatGPT is good at pulling out facts, but when you read its output, it sounds like a robot is talking to you. This will put readers to sleep.

ChatGPT is unable to think creatively about technical subjects, so every article it produces sounds the same. Technical articles tackle complex subjects, so it’s important that the writing has flair and can engage readers.

ChatGPT can be a useful resource for certain forms of content, but when it comes to technical articles, it doesn’t quite hit the mark.  
