On top of my work at Gosh! Kids, I’m a part-time teacher at a local university.
The core of this role is to deliver curriculum to undergraduates from various faculties.
A smaller, lesser-known aspect — but no less important — is to read and grade assignments.
That being said, I don’t particularly enjoy this part.
And the reason is not that I have to read through 40 submissions, each at least 30 pages long. No, it’s because every word was, in fact, written not by the student, but by AI.
How do I know this?
Because it’s riddled with expressions that no human I know would ever use. Words like “foster,” “dynamic,” and “robust.” Phrases like “this underscores the importance of” and “from a broader perspective.” Page by page, the same message — the only difference being a new set of anecdotes and prose — is repeated over and over again, along with an overuse of emojis that are inappropriate for the message it’s trying to convey.
I know this because I myself am a user of AI.
I’ve been experimenting with the tools available to see if they can make my life easier. In some cases, they have: helping me find the perfect synonym, keep to a word count, filter and sort data, and handle other dull, time-consuming tasks.
In most cases where intellect and contextual understanding were required, it pointed me in the right direction, but it could never give me a definite answer I could trust without a separate fact-check.
Prior to my vacation to Japan, for instance, I wanted the itinerary to be exciting and experiential. So I fed every known detail of my trip into the great ChatGPT and asked for suggestions. In less than twenty seconds, it produced what would have taken me a tremendous amount of time to find on my own.
The excitement, however, turned out to be short-lived because, after pondering the results for a while, I realised some of the “recommendations” were suspiciously too good to be true, like the one where it recommended a “family-friendly” seaside park smack dab in the middle of a heavy industrial complex, simply because the park’s official name contained the character for “family.”
In the end, I scrapped the AI-powered solution and went for the traditional route — I asked my Japanese friends for recommendations.
The point is this: thanks to my basic knowledge of the language and my understanding of the local culture from having lived there some years back, along with a relatively high proficiency with social media and navigation apps and actual experience of travelling in a foreign country, I could identify which places to visit and which, for that matter, to avoid.
What I’m also saying is, I have a certain relationship with AI.
But there is one thing I do not use it for: writing this newsletter.
In fact, I turn to an older, more primitive way of doing this.
I have this habit of taking interesting ideas and insights I come across and storing them in a “notecard” system — a physical, methodical practice used primarily by authors and researchers who handle large volumes of text for their work: books, research papers, academic journals, et cetera.
I always have with me stacks of four-by-six notecards with scribblings on them. They’re not random musings, but quotes, paragraphs, short stories, one-liners, ideas that came from the books I read, the podcasts I listened to, the videos I watched, or from the random episodic thoughts that dawn on me throughout the day. I set aside an hour or two a day to read and write at least ten cards. Just before I start my work day, I read through a few to “warm up” my mind.
So, be it the curriculum I teach at the university or writing this newsletter, the core ideas and themes almost always originate from this longhand method. And I would go as far as to say that almost every professional and personal endeavour in my life has been and is being complemented by these notecards.
You may ask: Mathieu, why are you investing your most precious resource, your time, into what seems to be a pointless task?
Because it forces me to engage with the material. It forces me to take my time. It forces me to go over it again and again. I’m compelled to immerse myself in what was written and, in the process, to think critically about what it means to me. It brings out the ideal traits of focus, patience and discipline, and it awakens in me a curiosity bug, so much so that it makes me want to write it down. Every word penned is like a symbolic act of writing to myself — this is who I can be, this is who I want to be, this is what I must be.
Ultimately, it gives me the luxury of thinking deeply about things in a world full of distraction.
In this sense, I prize the physical process. Sure, software would make sense from a productivity standpoint. But it’s precisely the tediousness that makes this simple exercise so powerful. Each card was painstakingly researched and crafted into existence by methods that are, by today’s standards, inefficient. Learning becomes ingrained. The pursuit of knowledge and wisdom becomes the whole point of doing it.
And for this very reason, I would rather, in the event of a fire, save these precious, irreplaceable notecards than my phone!
But there’s also another reason why I do this.
Something that’s urgent and more deserving of our attention.
I’m in a long war against what is known as “AI slop” — the low-quality, mass-produced, AI-generated digital spam found at every corner of the internet. This inaccurate, nonsensical information is engineered to farm likes and view counts, to bait clueless audiences into clicking so that those responsible can maximise ad revenue.
In fact, it’s not just on social media. It’s not just content creators who outsource their research, writing, and editing to these tools. It’s everywhere. From newspaper reports to marketing material, emails from co-workers, college assignments, even political manifestos, individuals and organisations are passing off AI’s “writing” and “thinking” as their own without making the effort to consider whether it is something they truly stand for or whether it makes sense in the broader context.
Which is why I cannot agree more with the few who have said that the most essential skill of our time is not coding or content creation but a “finely tuned bullshit detector.” To know, on one hand, what is real, what is a fact of life, and on the other, what is complete BS.
What an over-reliance on AI does to our brains is suppress the need for discernment. We lose the ability to distinguish what’s moral or immoral, what’s right or wrong, what’s authentic or fake in a world where the two are increasingly hard to tell apart.
Ultimately, to make for yourself what boils down to an informed decision — one that did not originate from viral, trending information but from deep-seated knowledge accumulated over time — becomes rarer than ever.
In other words, every time you defer to ChatGPT a menial task you could have done yourself, you risk developing “brain rot.”
Scientists call this “cognitive debt.” You defer effort now but pay later — with weaker critical thinking, poorer recall, and more superficial engagement with ideas. We lose the zeal of what it means to pursue knowledge. We get fed so much information, but starved of wisdom.
The irony is that the skills that are apparently being made obsolete by AI are becoming more valuable than ever — reading, writing, thinking, applying context, seeing through lies, deceit and misinformation. I would likely have given those college assignments an A if not for a basic understanding of the technologies mentioned and of how students think. If I had not already possessed basic proficiency in the Japanese language and its cultural nuances, or the basic instinct to cross-check against reputable sources, I might have been fooled into bringing my family to that side of town when the recommendation was, in fact, nonsense.
To use any technology well is to not be used by it.
It’s to have, in the arsenal of your brain, the skills that are apparently being cut out by the cutting edge of AI technology.
It’s to possess an understanding of the psychology of human behaviour and the periscope to spot BS miles away, and to not be manipulated by those who peddle it.
If there’s anything AI has done up till this point, it is to raise the floor of what’s considered the norm. Anyone can come up with an evidence-based argument or a “million-dollar” strategy. Anyone, with a basic understanding of social media and AI, can effortlessly produce content that would reel you in for your money. It gets harder to differentiate yourself in a world of increasing competition, not just in business or school, but among potential life partners and vital relationships.
But if the floor has been raised, so has the ceiling.
Lucky for us, it’s never too late to elevate our game against the deep-fakes. All it takes is for us to rise above what everyone else is willingly doing, or, the way I see it, to do what everyone else is unwilling to do: read history, write consistently, journal daily, take a trip into the wild instead of doom-scrolling or binge-watching, immerse ourselves in foreign cultures, learn new languages, and intentionally set aside time for deep thinking.
We need all those skills that big tech has proclaimed to be obsolete, and we have to be willing to work hard for them. And that’s exactly what Jim Stockdale was implying when he said, almost 40 years ago, that the greatest fallacy of education is that we can get it without stress.
That is to say, an education without challenge is not education but deception.
Haven’t we all realised that the goal of education is to invest time and effort in finding out what is true and what isn’t? That it was never meant to be an easy, efficient, digital-only endeavour? That it comes from a place of discipline and patience, demanding us to focus, sit still and search with precision for years, even decades, maybe a lifetime?
The good news for us parents is that education doesn’t begin in school, but at home.
We are, as the saying goes, our kids’ first teacher.
So, as living examples, here’s how we can alter our relationship with this technology, as the author David Epstein sets out in the notecard I shared earlier: Think first, tool second.
Brain now, ChatGPT later.
Put in the effort to do the hard work, first. Then, let ChatGPT be your assistant, not your director.
Historically, nobody has ever curbed the waves of technology. Nor do I think there's anything admirable or impressive about avoiding it or opposing new ways of working.
But while we’re forced to adapt or die, we can decide how much of a say it has in our lives.
And if there’s one resolution you will need for the new year, it’s not to lose weight, sleep earlier, or spend less.
It’s to fix your relationship with this particular technology.
And you may ask: will there come a point when these “traditional” skills finally become obsolete?
Nobody knows for sure.
But the way things are looking right now?
You’re gonna need them, not just in 2026, or 2027, or a decade from now.
You’re gonna need them for all the years down the line, regardless of culture, government policy, or the next big thing.
You're gonna need them for all the years you’re still human.