The Mongols are coming.
In the 13th century, that cry would strike fear into the hearts of townspeople. What will happen to my family, my life and my livelihood when this horde breaks through our defenses and enters our town? In reality — and contrary to popular belief — Genghis Khan was an unusually egalitarian and compassionate leader who did more uniting than pillaging.
AI and ChatGPT are coming.
Today, that cry from every media outlet and business magazine is almost as hysterical and fearful as the cries of the defenders of those historical walled cities. We shouldn’t be surprised by the hysteria or the hyperbole. After all, humans have always had an uneasy relationship with advances in technology. During the First Industrial Revolution, the Luddites were famously so threatened by the advent of mechanized textile weaving that they destroyed the factories and mills they believed would make their jobs obsolete. Sound familiar?
Hollywood hasn’t helped. Its portrayal of sentient technology — technology that can think, act, believe and have a conscience like a human — has covered the spectrum from Skynet, the malevolent artificial intelligence at the heart of the Terminator series, to the weird relationship between Joaquin Phoenix and the voice of Scarlett Johansson in Her. And can there be anything more bone-chilling than the legendary disembodied, detached voice of HAL from 2001: A Space Odyssey and his classic line, “I’m sorry, Dave. I’m afraid I can’t do that”?
Last week, two rather ironic voices joined the rising chorus calling for a moratorium, or even a halt, on the runaway development of AI. They’re demanding more overt focus on the ethical questions and concerns that AI models have raised, such as the spread of misinformation at scale, baked-in bias perpetuating historical prejudices, and the unintended consequences of products rushed to market in the frenzy to achieve AI dominance. They’ve even raised the classic fear that AI spells the destruction of jobs. While the risks are real and must be addressed for the good of business and society, the voices of Elon Musk and Steve Wozniak joining the chorus are ironic because they are themselves poster children for significant technological advancement. Musk, of course, was a co-founder of OpenAI, the creator of ChatGPT, though he has since parted ways with the organization. He also co-founded PayPal and Tesla and founded SpaceX (we’re not counting his recent foray into social media ownership). Wozniak co-founded Apple, a technology company whose products are in almost every pocket on the planet and which is worth roughly $2.5 trillion today. In fairness, those credentials give them unique insight into how groundbreaking technology evolves from interesting idea to a force that shapes entire economies and cultures. And they may have a point when you remember that Amazon, Microsoft and Google have all recently made cuts to their AI ethics teams.
So how legitimate are these calls for a halt in AI programs?
How concerned should humanity be about this latest technology advance?
Well, here’s a classic consultant answer — it depends.
At Gagen, our team is steeped in many technology organizations and categories, but we remain unapologetic and deliberate advocates for the human beings at the center of every organization. Technology is, in our opinion, a tool or an enabler. Pairing powerful technology with the minds, passions, behaviors and commitments of the people in your organization is what will keep you competitive in this ever-evolving environment.
So then, how concerned or excited should we humans be?
What if I’m a business leader?
Well, you should be both excited and cautious.
The fact is, AI is already here. Logging into your phone with facial recognition is easier thanks to AI. AI-powered chatbots and virtual assistants have been automating routine tasks like customer support for several years, freeing up employees to focus on more complex tasks and more nuanced decision-making. The uncanny personalization of your Yammer feed, Amazon suggestions or your Facebook, Instagram or TikTok feeds – that’s all powered by AI, connecting you with content that your past activity and profile indicate you’ll be interested in. Then there’s the effect AI is having on human health. With its ability to predict diseases at early stages, accelerate early drug discovery and improve the design and management of human trials, AI is bringing groundbreaking advances to medicine.
And now Generative AI — the AI behind ChatGPT — is coming.
It’s called generative because it can create things like text, images and software code, and answer questions for you. And it, too, is already being used in a variety of ways – to even the playing field on academic journal submissions for ESL researchers, for example, or to prepare for difficult conversations and negotiations. Just two weeks ago, Microsoft unveiled the first iteration of Microsoft 365 Copilot, essentially ChatGPT integrated into your enterprise software, enabling office workers to use Copilot within Excel, Word, PowerPoint and other everyday applications at scale. Imagine those productivity gains, Mr. or Ms. C-Suite Executive.
The allure of more efficient, more effective and less costly business impacts can’t be ignored. And if there’s one thing business history has taught us, it’s that when the organism we call an organization grows, more bloat and more bureaucracy inevitably follow. If AI can reduce that bloat and that bureaucracy, then share prices and shareholder confidence will likely rise. That’s great, isn’t it?
The sticky, messy part is where, when and how quickly this business improvement will occur. How quickly will it turn to revenue? No one realistically knows. Could generative AI further reduce repetitive tasks, automate workflows, streamline processes… and remove the costs and associated headcount? Absolutely. Again, the time-to-impact will differ from company to company and, critically, from culture to culture and organization to organization. Let’s not fool ourselves: there’s a reason that bloat and bureaucracy persist, too.
What to do?
Explore; just don’t blindly rush in. The frenetic media noise around this topic brings with it an urgency to act and do something immediately, if not sooner. That’s understandable, but diving headlong into an emerging technology that has potentially profound implications across your entire business seems rash. One of the central frameworks we use at Gagen MacDonald is looking at the intersections of Strategy, Structure and Culture. In this case that framing — and the evaluation that follows from it — makes tremendous sense. Looking at each dimension and asking reflective questions like “What might AI change in this area, and what should we not let it touch?”, “Who would be involved and who would be impacted?” and “When might we pilot something, and when would we expect or want results?” enables a more measured evaluation before charging in. It may seem counter-intuitive considering all the noise, but this is a moment for a scenario-planning approach, not a making-my-quarterly-numbers approach.
What if I’m an employee?
It would be naïve to think that AI will have no impact on your employment or your career. There’s just too much attention being paid to it and, even in its relative infancy, there are too many areas of obvious business benefit to think your organization isn’t looking long and hard at it.
What to do?
Don’t. Stand. Idle. Make an honest appraisal of your job and ask yourself which parts of it could be improved or eradicated by the AI we know today and by future iterations of it. ChatGPT currently runs on GPT-4. The tech leaders who are calling for a pause are very concerned about the impending release of GPT-5, which rumor has it will achieve “artificial general intelligence” (AGI), the ability of AI to display human-grade performance on any intellectual task. So, think about the tasks you don’t want to do. Let’s be honest, there are likely numerous parts of your job that suck, and you wish someone would remove them from your day-to-day. If that appraisal suggests more than 70% of your job could be automated, though, then I’d be looking at some skills augmentation or development. This isn’t about being Chicken Little; it’s about personal preparedness. Ironically, the skills AI can’t currently replicate or eradicate are the ones that make us uniquely human: understanding nuance, dealing with ambiguity, using ethics and not just logic, and other abilities our species has honed as survival skills over thousands of years. Basics like how well you work with others, how strong your communication skills are in real-time scenarios where you can’t use AI to improve them, how distinct your personal voice is, your EQ, your aptitude for negotiating or building cohesion in a group setting, your ability to empathize, and so on. While those might sound like really basic skills, don’t underestimate how vital they are in our work and personal lives.
We also encourage you to create an account on ChatGPT and ask it “What can generative AI help [your business role] improve?” Be specific, as that will directly impact the quality of ChatGPT’s response. Here are some of the areas ChatGPT listed when we asked how it could help a business improve: product and service design, content creation, and optimization of supply chain management, logistics and production processes, among others. It’s a pretty high-level list. To get a more specific response, we then asked it “How can generative AI improve personalization compared with predictive AI?” The response: “Predictive AI uses historical data to make predictions about future behavior, while generative AI can create new data based on existing data and generate unique solutions that were not previously considered.” As you can see from this brief interaction, generative AI still needs a guiding, discerning human partner. It hasn’t yet become the job-destroying demon some in the media would have you believe. But for how long? If you’re in business and you create anything, your work processes and your job will inevitably be impacted by AI. The key question for each of us to get to grips with is “Will AI replace me, or will it complement and augment what I already do?”
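For readers who would rather experiment beyond the chat window, the same role-specific question can be posed programmatically. Below is a minimal sketch, assuming you have an OpenAI API key and the official Python client installed; the model name and the example business role are purely illustrative, not a recommendation of any particular setup.

```python
# Minimal sketch: asking a role-specific question of a GPT model via the API.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; model name and role are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

business_role = "supply chain manager"  # hypothetical example role
prompt = f"What can generative AI help a {business_role} improve? Be specific."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

As with the chat interface, the more specific the prompt, the more useful the answer; the human still has to judge which suggestions are worth acting on.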
Ultimately, whether you’re an employer or an employee, it can be very easy to get swept up in the hype and hyperbole of the current AI conversation. And whether you’re a natural optimist or a born pessimist, that worldview will color whether you see AI as friend or foe. In reality, AI is probably a bit of both, but the important thing to remember is that for most organizations, and the humans within them, it’s still too early to say for sure.
Our final suggestion, as you’re swimming in the media tsunami of AI articles, is to remember the brilliant quip by Roy Amara, another remarkable technologist, who summed up the technology world so aptly in what is now known as Amara’s Law:
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Stay curious, Dear Reader.