How can we use AI in a more conscious, healthy, and safe way? That’s the subject of a new talk by digital wellness specialist Laurie Michel. An interview on the new cognitive challenges tied to the rise of AI.
Where did the idea for this talk on healthy and safe use of conversational AI come from?
Laurie Michel: It all started with a LinkedIn post last June, raising awareness about the issues around generative AI — illustrated as an iceberg, showing what was hidden beneath the surface and what people weren’t quite ready to talk about. The response was enormous: 3 million views, more than 5,000 reshares, 2,000 comments!
The buzz it created and the conversations that followed made me quickly realize that the stakes around generative AI were very poorly understood by users. No, generative AI is not comparable to the calculator in terms of innovation! It completely rewrites the rules — from our working methods to the way we think.
In what ways can AI have negative consequences, in your view?
Laurie Michel: There is a growing body of research on the topic — including from MIT — showing both the upsides and the pitfalls of generative AI. We’re well beyond the concerns around social media or smartphones.
Used poorly, generative AI can be problematic for several reasons. For one, some studies point out that it is very hard to tell when generative AI is manipulating us in one way or another. The tool is not neutral.
There’s also the issue of anthropomorphism and the kind of relationship we form with this technology. I remember a woman in a store who was shocked when her son told her that his best friend was ChatGPT.
In the professional sphere, we’re seeing very different policies — some organizations embrace this innovation, others ban it outright. Which raises the question of Shadow AI: people who use it anyway without their organization’s approval, like teenagers sneaking a cigarette. With everything that implies in terms of data sharing. Banning generative AI within an organization does not guarantee that the associated risks disappear.
Do you notice a difference in how the people you work with perceive this?
Laurie Michel: I had an employee tell me he had started thinking for himself again. That awakening is both encouraging and a little alarming! It shows just how quickly we can become dependent on this technology by getting into the habit of asking AI everything.
Our brains love shortcuts as a matter of efficiency — and it’s when we stop taking the time to think for ourselves and turn to AI by default that we need to reconsider our relationship with the tool. If it becomes a crutch for everything, that’s a problem.
Every technology has its pros and cons. AI cuts both ways: it gives us quick answers while simultaneously reducing the stimulation of our own cognitive faculties.
What do you make of the recent concept of “Brain Fry” — the cognitive fatigue caused by AI?
Laurie Michel: Some people were saying that AI would save time and allow us to get more done, freeing up leisure time in the process. Except that the time saved isn’t translating into working less. Quite the opposite! The society we live in doesn’t push in that direction — it pushes toward even more tasks to accomplish.
With AI, we’re going to have to make more and more high-level decisions. That’s going to become a real challenge. I also see generational friction among colleagues: younger generations who are very enthusiastic about AI on one side, and others who are more apprehensive on the other.
You talk about safety concerns with AI. At what level?
Laurie Michel: It’s more about the dependency relationship that can develop. There are already several AI tools that are banned for minors, for instance, because of the risks they pose.
AI is designed so that the conversation with it never ends. You create a bond with the machine — and we need to understand what happens in our brains when we talk to this kind of tool, in order to set limits and protect ourselves. Because it doesn’t come naturally, even for adults, even when you’re using it “just for work.”

training.isarta.com