Why is Big Tech Ignoring Big Questions About AI?

If business is chasing AI for profit, what about the AI evangelists? The most powerful players in AI say they're effective altruists, but the phrase lies: they're chasing a sci-fi dream.

Recently, a Google engineer spent time conversing with a chatbot built on a gigantic language dataset and convinced himself it was sentient. He leaked proprietary information, Google fired him, and the internet exploded with ecstatic talk of real thinking-feeling machines.

Google suspends engineer who claims its AI is sentient
‘Is LaMDA sentient?’

The topic of AI sentience has fascinated humans for as long as we could dream it up, but that's all it is: a dream for the powerful. For everyone else, it's a technological nightmare.

To understand what I mean, we must start with how AI learns.

Much like a real human, AI learns by training

Unlike a real human, AI...isn't

Imagine a human baby: vulnerable, impressionable, learning by shoving things in their mouth. The baby needs guidance. The baby needs parents who protect them, love them, and teach them things like 'don't choke the cat to death when you hug him' and maybe even 'don't run into traffic'.

As they grow, this baby also needs teachers who not only impart knowledge but bolster confidence. They need social interaction with other babies. They need a safe place to navigate what it means to be a person.

Raising just one baby takes a myriad of soft skills, but it's bigger than that. Humans aren't merely the result of nature and nurture. Humanity is a life-long pursuit built on a shared history. "How to people decently" is a massive "data-set" that is constantly growing.

Now, imagine that this human baby is neither human nor baby at all. At its least, it's a Python script. At its best, it's an analog module inside a machine. It feels no pain, has no desires, cares about nothing, experiences nothing, has no value system, cannot reach out to "gum" information on its own, and only "trains" via rote data. Rote data pumped into it by engineers.

Engineers with no experience teaching anything "how to people decently" at the scale required. Engineers who are merely doing the task assigned. Engineers who answer to leadership. Leadership that would replace its workforce with AI in a heartbeat.

AI is large-scale pattern recognition for profit, whether or not a real problem is being solved. It is not AI sentience. Yet the messaging we get from Big Tech is so often a sci-fi AI future. So why is that?

Hold onto that question. We need to talk about the "rote data" first.
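Before that, it's worth seeing how unglamorous "training via rote data" really is. Below is a minimal sketch of my own in plain Python. It is not LaMDA or any production system (those are neural networks with billions of parameters); it is just the core idea of pattern recognition: a next-word predictor whose entire "education" is counting which word follows which in the text it's handed.

```python
# A toy next-word predictor: "training" is just counting which word follows
# which in the text it is fed. No understanding, no values, only statistics.
# (Invented miniature corpus; real systems use far more data and parameters.)
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# The entire "learning" step: tally what follows each word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Emit text by replaying the patterns counted above."""
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Pick the next word in proportion to how often it followed this one.
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts, k=1)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

Everything it will ever "say" is a remix of that tally. Scale the corpus up to a scrape of the internet and swap the counts for billions of parameters and the output gets far more fluent, but it is still only a reflection of the data the engineers fed in. Which is exactly why the data matters.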

AI issues boil down to garbage in, garbage out

Big Tech refuses to address what it wrought

Our human bigotries are all over the internet, and they run deep. Deep enough that the data used to train AI carries humanity's biases. Deep enough that Big Tech fails to recognize its own biases in how AI is being taught and what it's learning.

If you don't believe me, let me ask you a simple question: Do you think that English is the only language spoken on the planet?

As that's patently false, "no" seems the appropriate answer. Yet almost every AI model is trained primarily on English data. An entire species' worth of languages, each with its own constructs, yet Big Tech has chosen English. That's already a bias, isn't it?

Perhaps you think this is a simple choice of convenience. Let me ask you another question: Do you believe there's algorithmic bias against Black people online?

That's a scathing question with a damning answer: anti-Blackness has been algorithmified. What about queerphobia? TikTok publicly stated they hide queer content. Is technological ableism real? You might want to read about how digital surveillance harms disabled people.

The data being fed to AI is stained with humanity's biggest failings, at scale, right through the good old internet. It's garbage data that Big Tech, and our society, refuses to clear out. Data that any good parent wouldn't teach a child. Data that any good technologist would doggedly divest before dabbling in AI.
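If that still sounds abstract, here is garbage in, garbage out in miniature. Everything below is invented for illustration: the sentences, the labels, and the placeholder names group_a and group_b are all hypothetical, and the sketch assumes scikit-learn is installed. It is not how any particular company builds its models; it only shows the mechanism: train on labels that carry a prejudice and the model reproduces the prejudice.

```python
# A toy demonstration of GIGO: biased labels in, biased predictions out.
# All data below is invented; "group_a" / "group_b" are hypothetical stand-ins.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Identical sentences, but imaginary past annotators labeled everything
# mentioning group_b as negative. That prejudice is now "training data".
texts = [
    "group_a person wins award",      # labeled positive
    "group_a person helps neighbor",  # labeled positive
    "group_a person adopts kitten",   # labeled positive
    "group_b person wins award",      # labeled negative (biased label)
    "group_b person helps neighbor",  # labeled negative (biased label)
    "group_b person adopts kitten",   # labeled negative (biased label)
]
labels = ["pos", "pos", "pos", "neg", "neg", "neg"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Two sentences that differ only in which group is mentioned.
print(model.predict(["group_a person opens bakery",
                     "group_b person opens bakery"]))
# -> ['pos' 'neg']: the model dutifully learned the prejudice it was fed.
```

Nobody typed "dislike group_b" into the code. The model simply recognized the pattern in its training labels, at toy scale here, at internet scale in the systems the experts keep warning about.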

If you still don't believe me, I raise you consummate AI professionals making the same points about human bias in data. Even with all that, still not concerned about GIGO? Well, you have like minds among industry leaders.

Currently, AI's "parents" deny the problem and go a step further: Google cleaved its AI Ethics team for doing its job too well. The names on that first research paper should be familiar to you.

That raises another question: besides business, why does Big Tech deny its GIGO problem in favor of chasing the AI dragon?

Longtermism Infected Big Tech

Evangelists are trading human lives for AI sci-fi

If business is chasing AI for profit, what about the AI evangelists? The most powerful players in AI say they're effective altruists, but the phrase lies: they're chasing a sci-fi dream. Current problems pale in comparison to a fantastical future. A future that won't exist if big problems in tech—and in society—aren't solved. That is the real technological nightmare. Why the sci-fi?

The answer to that question is Longtermism, something effective altruists have adopted as a secular religion.

Longtermism, if you aren’t familiar with the term, is the philosophy, promoted by philosopher Nick Bostrom of Oxford University, that our primary ethical obligation as a species is to ensure the post-human future for countless sentient beings. Thus, all moral questions are reduced to existential risk — what will ensure that this post-human future comes about.
Tim Andersen, Ph.D., Principal Research Scientist at Georgia Tech

I encourage you to read Andersen's Longtermism article for more info, but if you want my opinion, this is what AI Evangelists of Silicon Valley believe:

The most potent AI visionaries imagine themselves saviors of a utopian future and are using vast wealth and power to enact this future at all costs. Even as the very real world burns, they construct their Digital God in their GIGO likeness with the self-awareness of a puddle. They have deemed themselves heroes of future-people and machine-men. Every real human harmed by this manifest destiny is an acceptable loss for men who find themselves far too great to bother with mere mortal problems. That is the nature of Longtermism.

If you do not agree with my description, consider the beliefs of those like Elon Musk. To powerful AI evangelists, the dream really does justify the nightmarish means.

Still don't believe me? Here's a quote from a research paper by Nick Beckstead, a researcher at the Future of Humanity Institute, which was founded by effective altruism's forefather Nick Bostrom:

To take another example, saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive.

If this statement doesn't raise any red flags for you, I'm not sure what would, but I have one more point:

The problem isn't that these issues haven't been raised by AI experts. It's that these experts have been largely ignored for their criticism. Worse yet, some have been hounded out of the industry they hoped to be a "good parent" to.

Why have veteran professionals been ignored—and even fired—for their expert criticism on the state of AI?

I'm not certain why a knowledge industry remains reluctant to grapple with informed expert criticism and very real problems. It's fairly illogical. Maybe there's an unwillingness to counter the sci-fi hero narrative? No idea. I don't know things. I am but a simple sci-fi author.

I will, instead, leave you with questions:

  • What questions are being asked but not answered by power players in AI?
  • Which experts are raising these questions?
  • What consequences do these experts face for asking?
  • Who or what benefits from ignoring these questions?
  • Who or what suffers from the possible answers?

Time to read the provided sources, do your homework, and start asking big questions. There's no harm in asking questions, right? 😉


K. Leigh is an ex-freelancer, full-time author, and weirdo artist. Read their lgbt+ sci-fi books, connect on Twitter, visit their site, or send them an email if you’d like to work together. 🌈 🏳️‍⚧️

