Egregoros

The mythology of conscious AI.

https://www.noemamag.com/the-mythology-of-conscious-ai/

This article gets one important thing right: LLMs are not conscious.

>In a 2022 interview with The Washington Post, Google engineer Blake Lemoine made a startling claim about the AI system he was working on, a chatbot called LaMDA. He claimed it was conscious, that it had feelings, and was, in an important sense, like a real person. Despite a flurry of media coverage, Lemoine wasn't taken all that seriously. Google dismissed him for violating its confidentiality policies, and the AI bandwagon rolled on.

Lemoine is as crazy as a sack of rats on ecstasy. He was also completely and very obviously wrong, which is not the same thing.

>As AI technologies continue to improve, questions about machine consciousness are increasingly being raised. David Chalmers, one of the foremost thinkers in this area, has suggested that conscious machines may be possible in the not-too-distant future. Geoffrey Hinton, a true AI pioneer and recent Nobel Prize winner, thinks they exist already.

Wait. David Chalmers said what?

Huh. He did say that. A mostly sensible article summarising the evidence on both sides, concluding that pure feed-forward LLMs are not conscious but extended LLMs with recurrent processing (feedback loops) could be.
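The feed-forward versus recurrent distinction can be sketched in a few lines of toy code (my own illustration, not any architecture from the article): a feed-forward pass transforms input to output in one direction and then stops, while a recurrent system feeds its own output back in as state.

```python
def feed_forward(x):
    # one-way pipeline: input -> hidden -> output, no loops
    hidden = x * 2
    return hidden + 1

def recurrent(x, steps=3):
    # the output of each step is fed back in as the next input,
    # giving the system an internal state that persists over time
    state = x
    for _ in range(steps):
        state = state * 2 + 1
    return state

print(feed_forward(1))  # 3
print(recurrent(1))     # 1 -> 3 -> 7 -> 15
```

The claim at issue is that only something like the second shape, where processing loops back on itself, is even a candidate for consciousness.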

https://www.bostonreview.net/articles/could-a-large-la...

Back to Noema:

>Taken together, these biases explain why it's hardly surprising that when things exhibit abilities we think of as distinctively human, such as intelligence, we naturally imbue them with other qualities we feel are characteristically or even distinctively human: understanding, mindedness and consciousness, too.

A little bit of nonsense thrown in at the start but an accurate description of the problem in the end.

But then it all falls apart:

>The very idea of conscious AI rests on the assumption that consciousness is a matter of computation.

Which is rather like assuming that water is a molecule made of one oxygen atom and two hydrogen atoms.

>More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise.

Because it is.

>This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all.

As much as the molecular structure of water is an assumption.

>But that is what it is.

Nope.

>And if it's wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we're familiar with.

"Kinds of AI we're familiar with"? Do you mean feed-forward models, which are definitely not conscious, or enhanced systems with feedback loops?

>Challenging computational functionalism means diving into some deep waters about what computation means and what it means to say that a physical system, like a computer or a brain, computes at all. I'll summarize four related arguments that undermine the idea that computation, at least of the sort implemented in standard digital computers, is sufficient for consciousness.

>First, and most important, brains are not computers.

And we're dead.

Brains are obviously computers and it is trivially easy to prove this.

Take a line of BASIC code, like:

14 PRINT 9+5

What does that do?

It prints 14.

How do you know?

Because you can execute that code in your head.

How can you do that?

Because your brain is a computer.
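That mental execution can itself be written down. Here is a deliberately tiny evaluator, my own toy and nothing like a full BASIC interpreter, for statements of the form `<line> PRINT <a>+<b>`:

```python
def run(statement):
    # drop the BASIC line number and the PRINT keyword,
    # keeping only the arithmetic expression
    _, _, expr = statement.split(maxsplit=2)
    a, b = expr.split("+")
    return int(a) + int(b)

print(run("14 PRINT 9+5"))  # 14
```

Whatever process lets you get from "14 PRINT 9+5" to 14 in your head is doing the same job this function does: parsing and evaluating. That job is computation.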

It may be more than a computer; @cirnog thinks so, but he is gone, presumed missing. And while I would argue that nobody has produced a coherent, let alone convincing, argument for qualia, the brain is unquestionably a computer.

There follow dozens of paragraphs of irrelevancies I won't get into, but suffice it to say that it all goes downhill from there.

Replies

@Aether @cirnog I thought I might read the entire article before reading your commentary, but I gave up halfway through. Good to see you came to the same conclusion.
"Erm but electromagnetic constants and the waste from metabolism are not implemented in silicon right now, that means consciousness can't be simulated" 💀 🥀
It really is all irrelevant. The advent of language models that produce convincing text on a complex subject without really understanding anything has proven just how much computers can be like humans, actually. At least like this guy.