I want to tell you a story…and in a minute you’ll see how this is related to ethics and AI.
Several thousand years ago, a group of visionaries came together with the shared, audacious goal of achieving something never done before. Harnessing the pinnacle of their technology, they laid brick upon brick, constructing a tower the likes of which had never been seen.
Yet, amidst the peak of their architectural marvel, chaos and confusion struck: they could no longer communicate with one another. They were no longer able to exchange their hopes, dreams, and aspirations, much less basic building instructions. So, their project was never completed.
The unfinished tower stood as a reminder of both their aspirations and their failure.
You’ve probably caught on that this is the biblical story of the Tower of Babel from Genesis 11. The Bible tells us this is how our various languages came about, but I believe another lesson can be learned: for thousands of years, humans have used the latest technology to reach, replace, or reduce God’s activity in our world in pursuit of god-like status.
When earth-shattering technology emerges, it’s easy to become a prisoner of the moment. We can feel like this is the first time something like this has happened. This is the first time artificial intelligence (AI) has achieved this amount of influence without being tethered to a sci-fi movie. But it’s not the first time revolutionary technology has come into the world.
Think about inventions like the printing press, which eventually sparked the Protestant Reformation, or later ones like electricity, cars, and the internet. All were wildly revolutionary but required years, if not decades, to sort through their implications. Even now, we are still coming to understand electricity’s impact on our sleep habits, no longer tied to the rising and setting of the sun. With the advent of cars, we had to develop speed limits, traffic laws, and seat belts, and automotive technology is still advancing.
We are nearly two decades removed from the advent of social media (Facebook in 2004 and the iPhone in 2007), and we are still dealing with—and even identifying—its adverse effects on people. It started as a way to figure out who the cute boy or girl at last night’s party was. Now it is used to disseminate false information across the globe and has cost people their lives. Novelty is a powerful force, and it makes AI feel just as exciting as Facebook did years ago.
One of the ongoing difficulties with AI is that much of its usage will exist in the gray. There won’t be many clear, black-and-white answers when it comes to ethics and AI. Opinions will abound as we slowly develop our perspectives, and we will have to navigate a spectrum of views on how to engage this tool.
Certainly, there will be egregious usages of AI that we should steer clear of, like its tendency toward discriminatory outputs and its loose relationship with accurate information. But, by and large, it will be up to you how you and your organization use AI.
With every emerging technology and advancement, we must work out our relationship with this new invention. How will we use it personally, communally, or vocationally? When approaching a topic on a spectrum, we can often locate our spot by identifying our beliefs or relationship to the matter in question.
Take social media, for example. There are certainly ethically wrong usages of social media, but the vast majority of its value depends on our own personal beliefs and approach to the platform. I might limit my time on Instagram because I feel it is better for my mental health. That doesn’t mean it’s morally or ethically wrong if you spend more time on Instagram than I do. The majority of technology usage exists in the gray, which is why it’s important to be proactive in our conversations around ethics and AI.
Paying attention to our relationship with technology is essential because inattention lets boundaries grow fuzzy. With new technology, the possibilities seem limitless, and possibility typically drives the conversation rather than responsibility and restraint. The question is usually, “Can we do this?” rather than, “Should we do this?” As a result, the conversation around ethics typically comes far later than needed.
In the coming weeks, months, and years, a lot of ink will be spilled on ethics and AI. Even now, we see parts of the tech industry petitioning to halt the rollout of AI because of the uncertainties surrounding its future. Many in the for-profit sector are addressing ways AI can be used (and abused) in their industries and creating appropriate safeguards like review boards and company policies.
If you’re like me, you have a host of thoughts and feelings surrounding ethics and AI, and you’re still trying to wrap your mind around what is currently available. My first reactions to AI tend toward negativity. I think about the incalculable damage AI could do, like taking jobs or stealing nuclear codes, rather than the limitless possibilities, like curing Alzheimer’s or ridding the internet of misinformation.
Much of my negativity is understandable but likely rooted in fear and uncertainty. As mentioned above, with every emerging technology and advancement, we need to work out our relationship with this new invention and how we will use it personally, communally, or vocationally. What does our resistance to (or embrace of) AI say about us? Are we worried about being replaced? Are we looking for any advantage we can exploit?
These aren’t the only two options, but much work needs to be done on our part to navigate any new technology. There is more to consider than what new technology can do; there is also what it can do to us.
As a kind of case study, I’ve been thinking about ChatGPT primarily as a tool.
Like any tool, its efficacy depends mainly on the wielder. If I picked up a scalpel, I could open a box or neatly cut out basic shapes for an art project. But put that same scalpel into a heart surgeon’s hands, and it will produce something else entirely, even though it’s the same instrument I used for my collage.
ChatGPT can be an effective tool, but it requires education, oversight, and monitoring. Several excellent resources can help you learn to prompt ChatGPT to draw out the most helpful, practical, and accurate information.
It can effectively summarize large amounts of text or write bedtime stories for your kids. Like any tool, the more you familiarize yourself with it, the easier it is to use and the more precise your understanding of how to use it effectively.
Honestly, this technology isn’t going away any time soon. If anything, it will only become more prolific, effective, and efficient in the tasks it can carry out. I’m not sure we have an accurate picture of what the next year holds in its development. With AI, it’s not a matter of if, but when.
We believe it’s best to be proactive. As you consider whether and how your team will use this particular tool, talk openly with your staff about using or not using AI. If you do adopt AI, ensure everyone is on the same page about which tasks it should handle, and always double-check its work.
I was curious what ChatGPT would say about the Tower of Babel. So, I prompted it to rewrite it as a modern story. It was a humorous endeavor, but it ended up being insightful. This is an excerpt from the end:
“With the collapse of the Digital Tower, the builders were left with a valuable lesson. They realized the importance of balance, humility, and cooperation in the face of technological advancement. The builders embarked on a collective journey to rebuild their society, this time with a newfound respect for the power of unity and understanding rather than technology. They strove to balance technological innovation and human connection, ensuring that their digital creations served as tools to uplift society rather than symbols of arrogance.”
Isn’t it fascinating that a robot composed a story that aptly names our societal issues while drawing from a story thousands of years old? We aren’t in uncharted waters.
There are certainly unique aspects to AI, and one cannot deny the frenetic pace at which developments are unfolding, but we’ve seen new technology come and go for thousands of years, and each time we struggle to use it responsibly at its inception. We create our ethics on the fly and vacillate between the possible and the prudent.
But you and I are in a position to strike a balance between humility and innovation, between technological achievement and human connection. By being proactive in our conversations surrounding ethics and AI, we release some of the pressure in the situation simply by allowing ourselves to talk about it openly.
Whether you come to the same conclusion as Amenable and decide AI has no place in your work right now, or you find a way to use it responsibly, we encourage you to keep asking yourself hard questions about the role of technology in your life. It’s time to make technology a tool again and establish our ethics of AI for ourselves and our organizations before it’s done for us.