ChatGPT and the future of Web content, according to Olivier Blais, co-founder of Moov AI

Since the launch of ChatGPT, the genie is out of the bottle. The use of artificial intelligence in the production of Web content looks set to become the norm in the coming years. How should we deal with this trend? Should we go “all in”? Should we keep our distance until a more mature generation of algorithms is available? What precautions should users take? We discussed these questions with Olivier Blais, co-founder and VP of decision science at Moov AI.

Isarta Infos: Since Christmas, the Web has been abuzz with ChatGPT and its writing prowess. It seems clear that such applications will soon be used in the production of Web content – whether in a journalistic, marketing or entertainment context – if they are not already. Do you think this is a good idea?

Olivier Blais: At this point, I think the right thing to do is to test and experiment with these tools. Every organization should at least look at how it can integrate them into its operations. If it doesn’t, it is making a mistake. Some will discover applications they had not thought of; if they never test the tools, they’ll never know.

From the headlines, ChatGPT can “do it all” – write articles, blog posts, essays, and so on. What would stop a content producer from “outsourcing” their writing to it, or a student from having it do their assignments for them?

O. B.: The first trap is to think that a tool like ChatGPT can be used for total automation. On the one hand, the information contained in AI-generated texts is often false or unfounded. On the other hand, there is no guarantee that passages have not been copied from other authors on the Web. The same applies to visuals created with DALL-E 2.

What would be the most relevant use of these platforms?

O. B.: ChatGPT is very useful for countering blank-page syndrome. It can produce a rough draft. In video game design, I know that graphic designers use DALL-E 2 to draw the preliminary version of a world or a character. However, they have to produce the final iteration themselves to avoid infringing copyright. This point is essential: always cross-check the facts and the originality of texts and visuals produced by AI applications.

Do you have any other usage tips specifically for marketers?

O. B.: ChatGPT has been touted for its versatility, overall performance and prolific output. However, I would suggest that marketers turn to tools that are more tailored to their needs and industries. For Web content, an application like Jasper, even if it is less powerful than ChatGPT, will probably produce a better result.

In your opinion, do you still need special skills to handle these new AI tools?

O. B.: To generate good content, you need to master the art of prompt engineering, which means knowing how to phrase the questions and instructions you give the text generator. It is becoming a skill in itself. To produce a rich text, you may need to submit 20 or more questions to ChatGPT. In the case of Jasper, you can’t yet converse with the tool, but you can refine your initial request. The more popular and powerful the words, the better the output.
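To make the iterative prompting Blais describes concrete, here is a minimal sketch, assuming the OpenAI Python SDK (v1+); the model name and the prompts are illustrative placeholders, not anything recommended in the interview.

```python
# Minimal sketch of iterative prompt refinement with the OpenAI Python SDK (v1+).
# The model name and the prompts below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Keep the whole conversation so each follow-up builds on the previous draft.
messages = [{"role": "user", "content": "Draft a 200-word blog intro about AI in Web content."}]

follow_ups = [
    "Rewrite it in a more conversational tone.",
    "Add one concrete example aimed at marketers.",
]

for _ in range(1 + len(follow_ups)):
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    draft = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": draft})
    if follow_ups:
        messages.append({"role": "user", "content": follow_ups.pop(0)})

print(draft)  # the draft after several rounds of refinement
```

The point of the loop is simply that each request carries the full conversation history, so successive instructions refine the same draft rather than starting over.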

What do you think should be the transparency criteria for a company using AI in its web content production? Should it mention “AI-powered content,” as one identifies “sponsored” content?

O. B.: If it’s a conversational bot, I think so. The user needs to know that the responses are generated by an algorithm. Otherwise, I think it will be up to each company to position itself on the degree of integration it has chosen. Some companies will limit the use of AI; others will want to automate all of their content production. Ultimately, it will come down to which type of content attracts the largest audience.

The question sounds like something out of a work of fiction. But, given the prowess of ChatGPT, do you feel that AI will eventually replace real content producers – including influencers, marketers, journalists, columnists, writers and other scribblers?

O. B.: Having spent a lot of time around journalists, I see that there is still a craft to their profession. And I think that applies to most people who are good at what they do: they like to get their hands dirty and keep control of their work. So, yes, artificial intelligence is going to be used to produce texts with no added value, and that has already started. But I’m not worried about journalists; it’s not going to replace them.

Do you see any risks – other than those already mentioned – with the use of AI in web content production?

O. B.: Some companies might decide to automate their content production just to rank better in search engines. We could then see a proliferation of content with no added value. In that case, we can only hope that the big Web platforms will know how to detect and block this kind of content, as they have started to do with deepfakes.

My last question is theoretical, or at least it concerns a more distant future. If a majority of companies one day adopt artificial intelligence tools, don’t we risk ending up in a dead end… since the texts that feed the second- and third-generation algorithms will be texts that the AI produced itself?

O. B.: This is a real risk. By re-training an algorithm on content it has generated itself, you get a kind of self-fulfilling prophecy. So, yes, in the long run, there could be a degradation of Web content. We’re crossing our fingers that the big technology companies, like Google, Microsoft and others, will put mechanisms in place to exclude this content from their training samples.