Human Dignity in the Age of the Machine
I don’t know about you, but roadkill really upsets me. Have you ever encountered it, driving down a highway? Maybe it was a possum, a fox, or a cat. I’m not going to get into the gory details, but roadkill can often be quite gruesome. What bothers me so much is the indignity of a living being dying like that. Mr. Fox just minding his own business and then being unceremoniously tossed to the curb by a machine. One minute alive, sniffing the breeze, thinking about dinner or how to take care of his young, and then smash. It’s undignified.
Human dignity matters. The reason I am bringing this up here - on LinkedIn and my website - is that, like many of you no doubt, I’ve been pondering a future characterized by machines. And yes, in some of my darkest moments, I do worry that we humans might just undo ourselves (By accident! By design!) and become roadkill.
I’m no AI specialist (though I’ve been devouring whatever info is being made available), but I have a little nervous niggle that we might have finally opened Pandora’s box. Look, I can see SO many ways that using AI can be helpful to my company. I’m a writer, so I don’t need an AI prompt to give me ideas or splash some words on a blank page as a thought starter. But some of my associates love that option (please, just be sure to check for fake news). I do like the idea that I could employ AI to help us make sense of a whole batch of data, at least to do some early-stage synthesis. Would I like to use AI to help with closed captions on videos, and to uncrop some of our images? Sure. Would I like to use AI to summarize surveys, or even help organize a job search? Yep. Do I want to use AI to write a novel, or employ MidJourney to illustrate my new brand book? Nope.
These are all useful tools, and the list is really endless. Using AI as a helper makes sense to me. It’s when it starts to creep into freewheeling generative models, making something out of its own neural network of a machine brain, that I start getting a bit concerned. Have I watched too many sci-fi flicks? Probably. But I allow my mind to take this scenario into the realm of machines acting as humans. When a machine develops empathy, for instance, or even compassion, what will that really look like? We humans are messy, and since machines are gathering data from “out there,” recognizing patterns in massive datasets to determine how best to be, won’t it be tainted by so many years of us? I’m not putting us down; I like us. But it doesn’t take a lot of imagination to recognize what that could mean if we are the basis for a collectively formed empathy. Won’t a million different biases, based on years of trial and error and culture, create more bias? And if it’s coming from a machine and it is pervasive, couldn’t that become a bit tricky? Just when we’re trying to climb out of some of our historical biases, could machines create even more prejudice and misinformation?
Humanness matters. And I’m hoping that as this great new frontier opens up, we humans are able to help our machines treat us with the dignity we deserve.