Won't SOMEONE think of the Boomers who are distressed that they might not be able to sell the house they bought in 1980 for $20k at a 2000% profit?
Real Aetherness
@Aether@poa.st
ƎNOZ N∩Ⅎ ƎH⊥ O⊥ ƎWOϽ˥ƎM
Woke Deprogramming Consultant
Good evening poast
Smartphone shipments are expected to drop 12% and prices to increase 14% thanks to our friend Sam and his DRAM Apocalypse.
https://techcrunch.com/2026/02/26/memory-shortage-coul...
Thanks, Sam.
@cjd @Humpleupagus It is happening now. Servers, especially for AI, in-memory databases, and big virtualization hosts, already started changing over in mid-2025.
Higher-end workstations will start changing over at some point in 2027.
Consumer PCs, honestly… maybe never in the classic sense.
Intel's Nova Lake platform will support up to 48 PCIe lanes.
https://www.tomshardware.com/pc-components/chipsets/in...
That's 16 PCIe 5.0 lanes from the CPU for slots and 8 for storage, plus up to 12 PCIe 5.0 and 12 PCIe 4.0 lanes from the chipset. Given that even an RTX 5090 is barely slowed down by only having 4 lanes of PCIe 5.0 available, that's a pretty healthy number.
And 8 SATA ports and approximately 40 USB ports.
There are five different chipsets planned, though the W980 offers pretty much everything, including support for ECC memory... which is not a chipset function but a CPU one, though gating it by chipset is something Intel likes to do.
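If you want to sanity-check the math, here's the lane budget tallied up in Python (a quick sketch; the per-lane bandwidth figure is the standard PCIe 5.0 spec number, roughly 3.9 GB/s per lane per direction, not something from the article):

# Nova Lake lane budget as described above
cpu_lanes = {"x16 slot (PCIe 5.0)": 16, "CPU-attached storage (PCIe 5.0)": 8}
chipset_lanes = {"chipset PCIe 5.0": 12, "chipset PCIe 4.0": 12}
print(sum(cpu_lanes.values()) + sum(chipset_lanes.values()))  # 48

# Why x4 barely hurts even an RTX 5090: per-direction bandwidth
per_lane = 3.9  # GB/s per PCIe 5.0 lane, approx.
print(f"x4:  {4 * per_lane:.1f} GB/s")   # ~15.6 GB/s
print(f"x16: {16 * per_lane:.1f} GB/s")  # ~62.4 GB/s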
The mythology of conscious AI.
https://www.noemamag.com/the-mythology-of-conscious-ai/
This article gets one important thing right: LLMs are not conscious.
>In a 2022 interview with The Washington Post, Google engineer Blake Lemoine made a startling claim about the AI system he was working on, a chatbot called LaMDA. He claimed it was conscious, that it had feelings, and was, in an important sense, like a real person. Despite a flurry of media coverage, Lemoine wasn't taken all that seriously. Google dismissed him for violating its confidentiality policies, and the AI bandwagon rolled on.
Lemoine is as crazy as a sack of rats on ecstasy. He was also completely and very obviously wrong, which is not the same thing.
>As AI technologies continue to improve, questions about machine consciousness are increasingly being raised. David Chalmers, one of the foremost thinkers in this area, has suggested that conscious machines may be possible in the not-too-distant future. Geoffrey Hinton, a true AI pioneer and recent Nobel Prize winner, thinks they exist already.
Wait. David Chalmers said what?
Huh. He did say that. A mostly sensible article summarising the evidence on both sides, concluding that pure feed-forward LLMs are not conscious but extended LLMs with recurrent processing (feedback loops) could be.
https://www.bostonreview.net/articles/could-a-large-la...
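For anyone unsure what that distinction amounts to, here's the most stripped-down sketch I can manage in Python (pure illustration; neither function has anything to do with a real LLM): a feed-forward pass maps input to output in one shot, while recurrent processing feeds its own output back in as state.

def feed_forward(x):
    # one pass: input straight to output, nothing retained
    return 2 * x + 1

def recurrent(x, steps):
    # the previous output re-enters the computation at every step
    state = 0.0
    for _ in range(steps):
        state = 0.5 * state + x  # feedback loop
    return state

print(feed_forward(3))   # 7, always, in one shot
print(recurrent(3, 20))  # ~6.0, reached by iterating on its own history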
Back to Noema:
>Taken together, these biases explain why it's hardly surprising that when things exhibit abilities we think of as distinctively human, such as intelligence, we naturally imbue them with other qualities we feel are characteristically or even distinctively human: understanding, mindedness and consciousness, too.
A little bit of nonsense thrown in at the start but an accurate description of the problem in the end.
But then it all falls apart:
>The very idea of conscious AI rests on the assumption that consciousness is a matter of computation.
Which is rather like assuming that water is a molecule made of one oxygen atom and two hydrogen atoms.
>More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise.
Because it is.
>This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all.
As much as the molecular structure of water is an assumption.
>But that is what it is.
Nope.
>And if it's wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we're familiar with.
"Kinds of AI we're familiar with"? Do you mean feed-forward models, which are definitely not conscious, or enhanced systems with feedback loops?
>Challenging computational functionalism means diving into some deep waters about what computation means and what it means to say that a physical system, like a computer or a brain, computes at all. I'll summarize four related arguments that undermine the idea that computation, at least of the sort implemented in standard digital computers, is sufficient for consciousness.
>First, and most important, brains are not computers.
And we're dead.
Brains are obviously computers and it is trivially easy to prove this.
Take a line of BASIC code, like:
14 PRINT 9+5
What does that do?
It prints 14.
How do you know?
Because you can execute that code in your head.
How can you do that?
Because your brain is a computer.
It may be more than a computer; @cirnog thinks so, but he is gone, presumed missing. And though I would argue that nobody has produced a coherent, let alone convincing, argument for qualia, the brain is unquestionably a computer.
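And if you want the mechanical version of what you just did in your head, here's a toy interpreter for that one line (a Python sketch that handles only this exact statement form, nothing more):

def run(line):
    # drop the BASIC line number, keep the statement
    _, stmt = line.split(" ", 1)
    # handle exactly one form: PRINT <a>+<b>
    assert stmt.startswith("PRINT ")
    a, b = stmt[len("PRINT "):].split("+")
    print(int(a) + int(b))

run("14 PRINT 9+5")  # prints 14, the same answer your brain computed

Same program, same rules, same output, whether the evaluator is silicon or neurons.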
There follow dozens of paragraphs of irrelevancies I won't get into, but suffice it to say that it all goes downhill from there.