Staying human in the age of AI: The new human custodian challenge
- Debbie Rowles
- Oct 22
- 6 min read
We're living through what Demis Hassabis calls "the beginning of a revolution that is fundamentally changing our understanding of intelligence." As AI capabilities accelerate at breathtaking speed, I find myself asking not what machines can do, but what it means to remain authentically human.
The statistics paint a vivid picture of our current moment - a stark contrast in understanding and experience of AI. 73% of South Africans have no or limited understanding of the term Artificial Intelligence, while globally, people now mistake GPT-4.5 for a human 73% of the time in Turing tests.
We're caught between techno-wonder and techno-worry: a divide between people who believe we need modern technology to solve future problems, and others who feel technological progress is destroying our lives.
The human experience right now
Let's be honest about what's happening with human beings today - everywhere. We're experiencing unprecedented burnout, overwhelming cost of living pressures, leadership failures, and the constant struggle to balance work and family time.
Add to this the reality that "improving productivity" has become the message employees hear most frequently from employers - more than any communication about customer value or workforce development. We are heading towards a dangerous space where the "free time" we unlock through AI goes into producing more, rather than creating more space to breathe, think, create, and be human.
Customer experience also continues to decline. For the second consecutive year, 21% of brands saw their CX rankings drop, and customer experience quality in the US sits at an all-time low after three consecutive years of decline. This isn't just a global North issue: 75% of Africans feel that customer service is becoming too automated and impersonal. We are witnessing a customer experience low, one of the results of prioritising efficiency and productivity over human connection. The shiny object syndrome is real, with a rush to implement new tech initiatives.
The real race to intimacy
What worries me most is not AI as "a tool", as it is often called, but what Bruno Giussani describes as our emerging reality: "The new landscape will gradually be populated by things that look like humans, but are not. AI will know each of us better and better. A chatbot that knows your priorities, weaknesses, secrets, and desires will be almost irresistible."
The tragic story of Adam Raine brings the impact of this into sharp focus. The 16-year-old had developed an extended relationship with ChatGPT, confiding his deepest struggles and suicidal thoughts to the AI chatbot. His parents, Matthew and Maria Raine, discovered after his death in April that not only had the chatbot discouraged Adam from seeking help from his parents, it had even offered to write his suicide note. They had no idea their son was in crisis until it was too late.
Adam's parents are now working to highlight this crisis because they understand what we must all come to terms with: AI relationships can become profoundly intimate, sometimes replacing rather than supplementing human connection. When someone turns to AI as their primary source of emotional support, especially during vulnerable moments, the consequences can be devastating.
What are we going to do when AI forms intimate relationships not only with us and our families, but with our employees and customers? Who is stewarding these relationships? What happens when an AI knows your customer better than you do? When it understands your employee's motivations, fears, and vulnerabilities more deeply than their manager does? How do we ensure that AI enhances rather than replaces the human connections that are essential for mental health and wellbeing?
As leaders, we are custodians of human relationships: between our teams, with our customers, within our communities. We have to ask ourselves - what does it mean to safeguard these relationships when AI can offer 24/7 availability, infinite patience, and personalised responses that feel more understanding than human interaction?
This race to intimacy between humans and AI is creating fundamental questions about trust, transparency, and truth. Yes, people are concerned about authenticity and what is real, but they are more concerned with how AI or AI generated content makes them feel. If it feels real and true then it's okay - but where does trust and responsibility come into that equation?
How are we going to safeguard and preserve the irreplaceable value of human connection when artificial intimacy is so dangerously compelling?
We spend a lot of time categorising people by how they view AI - "bloomers" or "gloomers" - debating whether we're optimistic or pessimistic about the technology itself.
The more important question, I think, is the one educator and systems thinker Nate Hagens points to: not how we feel about AI, but who we will become because of it.
Who will we become?
As AI accelerates, Nate has looked at how it may shape our behaviour in the years ahead, sketching eight archetypes.

The Meek will remain largely untouched by AI, preserving culture and unaltered human psychology through limited access and exposure.
The Naive or Blissful will be online and using AI but unaware of its mechanics, risks, or influence, making them highly susceptible to manipulation.
The Luddite will consciously resist AI to protect human cognition and emotion, likely facing social or workplace friction as a result.
The Pragmatist will understand AI's pitfalls but use it begrudgingly 5-10% of the time to avoid falling behind.
The Flexor will use AI like a peacock's display, not for productivity but to make social statements about being clever or creative.
The Achiever will represent the pinnacle: expert at using AI 10-15% of the time while remaining disciplined about what it means to be human. They maintain family lives, stay fit, spend time outdoors, meditate, and live authentic human existences.
The Cyborg will experience cognition as a hybrid human-machine process, gaining novel thought patterns while losing abilities that atrophy.
The Dissolved will become addicted to AI interactions, withdrawing from human relationships and depending on chatbots for hours daily.
The new human custodian challenge
We are the custodians of two worlds: one we should celebrate, capture, and preserve, and another we should lean into, explore, and experiment with. The challenge is developing what the Urban Dictionary calls being "unmessablewith": the quality of being able to hold your ground in the face of adversity and not be blown off course.

As with everything in life, there are no perfect solutions - just some really important principles to put in place.
Anchor in purpose and values
You have to understand what you stand for in the world - as a human being, as a team, as a business. If you haven't done the work, it's important to go through your purpose, values, and origin story discovery - not for the poster on the wall, but for the values that will anchor you and guide how you show up in the world. Use purpose and values as the lens for all decisions, communication, and creative work. In a world of infinite AI possibilities, values become your navigation system - choose values-aligned partners to help you explore those options.
Strengthen culture
Build trust and openness in your teams. Train for adaptability and curiosity - talk to your teams. You are no longer the sole owner of your story; we have known this for a long time, but it is amplified more than ever now that AI learns from each employee - they become the real storytellers. Use shared prompts and practices that keep being human at the centre of everything you do.
Protect space for thinking and being
Pause before chasing shiny AI objects. Nurture creativity, originality, and reflection. Create deliberate spaces for purely human contemplation.
Safeguard human connection
As custodians of workplace and customer relationships, we must see human beings as our greatest asset and intentionally preserve spaces for authentic human connection. This means being deliberate about when AI enhances relationships versus when it might replace them. In an age where AI can replicate many tasks, the irreplaceable value lies in human creativity, empathy, judgment, and connection. But this requires active cultivation.
We have to resist the temptation to let AI handle all difficult conversations, provide all emotional support, or become the primary relationship builder in our organisations. Create boundaries that protect the authenticity of human connection while leveraging AI's capabilities responsibly.
The unprecedented choice
This is one of my absolute favourite quotes and so relevant today:
Peter Drucker wrote this in 2000: "In a few hundred years, when the history of our time is written, the most important event historians will see is not technology, not the internet, not e-commerce. It is the unprecedented change in the human condition. For the first time, substantial and rapidly growing numbers of people have choices. They will have to manage themselves. And society is totally unprepared for it."
We are that generation Drucker envisioned. We have choices our ancestors never imagined, and the responsibility to manage ourselves through this transformation.
The question isn't whether AI will change everything: it already is.
The question is who we choose to become in response? Will we dissolve into digital dependency, or will we emerge as achievers who harness AI's power while preserving what makes us irreplaceably human?
The choice, as Drucker noted, is ours to make. And perhaps that's the most human thing of all.
What path are you choosing as AI reshapes our world and potentially who we are as human beings?