A Humanistic View Of Artificial Intelligence

Here’s the irony of technology. The greater our capability to unite humanity, the greater our capability to create chaos. And not only do we prove that axiom every day, but we’ve done so with every invention and discovery we’ve ever made – from stone tools through social media.

And now we’re continuing on the same journey – or at least it seems so – with artificial intelligence.

Blindly or recklessly advancing technology without integrating it into a greater purpose is merely providing us with a more efficient and rapid way of creating more chaos and, in the process, moving backwards. And that paradox should be all we need to explain why, when the term “artificial intelligence” is uttered, the most common reaction is that it’s scary. To most of us, it conjures up the same ominous feeling as that persistent two-note ostinato theme at the beginning of Jaws. We know it’s there, we know little about it, but we’re scared witless, because we have no idea, really, how big this thing is or what it can do.

Worried?

What most of us fear is that AI is being run by powers – ruthless, often nefarious business, technology, and political forces – that move with little regard for ethical and humanitarian responsibilities, so long as they can stay ahead of the curve on product development and usage and keep the venture capital flowing unimpeded. That fear says this will lead to job loss, depersonalization of health care, disruption of the global food supply, sharpened terrorism, and a host of other phenomena beyond our control. Unless…

Unless we take a less technical and more humanistic view of the inevitably ongoing development of AI. To that end I sought out the perspective of two scholars in the humanities.

Not worried

“I’m not worried about the metaverse taking off,” says Dr. Lindsey Cormack, professor of political science at Stevens Institute of Technology in Hoboken, New Jersey, who expects to release her new book, How to Raise a Citizen (And Why It’s Up to You to Do So), later this year. “AI can’t come close to all the human bits of life. It can’t replace human interaction. It doesn’t know what it means to be human around other humans.”

What AI can’t do

And that seems to be the generally accepted tipping point that will never be reached by AI, at least in the view of the vast majority of thinkers. Traits and behaviors like sympathy, empathy, regret, joy, hope, optimism, pride, originality, intuition, and even humor are characteristics of human life – and, ultimately, more galvanizing than technical power, especially when you consider that artificial intelligence is not intelligence at all.

“It’s remarkable,” observes Dr. Nick Byrd, professor of philosophy at Stevens Institute of Technology in Hoboken, New Jersey, “how a seemingly dumb system can produce seemingly intelligent products.” AI doesn’t think; it starts with one word and then, drawing on vast amounts of data – more than a billion people could sort through in a billion years – predicts what the next word should be, then the next and the next and the next, in increments of billionths of a second. And there you have its answer.
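That predict-the-next-word loop can be shown in miniature. The sketch below is a toy bigram counter, not any real AI system – the tiny corpus and all the names in it are purely illustrative – but the mechanism is the same in spirit: start with one word, pick the statistically likeliest follower, repeat.

```python
from collections import Counter, defaultdict

# A toy "training corpus" (illustrative only).
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a bigram model).
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

def generate(start, length):
    """Start with one word, then repeatedly predict the next one."""
    words = [start]
    for _ in range(length - 1):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the", 5))
```

A real large language model does this over billions of learned parameters rather than a handful of word counts, which is why its output can look intelligent while the underlying step – predict, append, repeat – remains this simple.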

No wonder it’s intimidating – until you take comfort in the fact that, at least for the foreseeable future, there are things we humans can do that AI just cannot do. For example, I asked AI to make up an original joke about cats, not a joke it’s heard before or just found. (For the record, my wife and I are cat lovers, and have been owned by many cats for 45 years). Immediately came back a joke I had heard 33 years ago, when I got my first PC: “Why did the cat sit on the computer?” “To keep an eye on the mouse.” (Right, it wasn’t funny back then, either.)

Hallucinating

But not only wasn’t it funny, it wasn’t original – which is what I asked for – and, therefore, it was deceptive. AI does that, and it’s called hallucinating. At the same time, this phenomenon brings up a warning about AI and a piece of sage advice from both Byrd and Cormack.

The warning? Be careful and discerning. The advice? “It’s going to be very important that AI will be assistants to people doing the work,” according to Byrd. “For its ability to serve us as reasoners – as people who have intuition but who also have the ability to correct a faulty intuition – AI is promising,” he added.

And in line with that, Cormack offers advice aimed directly at jobs and careers – the reason everyone is so trepidatious in the first place: “AI should be used for low-hanging tasks, removing human drudgery through efficiency.” Therefore, she adds, “People who are good at asking AI things, who figure out the query systems and develop overall AI skills, will find AI to be to their advantage.”

We are reminded, by these sage words, not only of our capability to invent things, but also of our responsibility to use them judiciously. History is the story of decisions, and with every invention or discovery we humans – and our hominid ancestors before us – have put up on the board, we’ve concurrently figured out both constructive and destructive ways to use it.

Once again we are at such an inflection point, this time with a civilization-changing transformation larger and more impactful than anything we’ve ever done. So, if we think about the consequences of our decisions before we make them, we will make better decisions.

But then, we already know that. We just have to do that.

We are punished, let us remember, not for our mistakes but by them.