I think the funniest part of this computer memory shortage is that all of it is just being bought up by OpenAI to give the illusion of a growing company, which then immediately shelves it in warehouses and never uses it. When this industry crashes, the amount of brand new GPUs flooding the secondary market is going to be nuts
@Shadowman311 >illusion of a growing company
They were just trying to starve competitors of ram. Everyone was already aware of their tenuous situation. They have first mover advantage and are perceived as the “brand name” LLM. They’re trying to maintain this status and advance it by kneecapping everyone else. It won’t work. As long as the Chinese keep getting the results of these models for free no advancement made by spending billions will matter in the face of a competitor 6 months behind getting it for free.
@john_darksoul >getting the results of these models for free
>getting it for free
What do you mean here exactly?
@WandererUber They’re getting all the weights through espionage no? I’m sure they’re still spending on hardware, but if a new version comes out with more parameters they get those models by stealing them. The new Chinese open source model that just came out straight up answered as Claude.
@john_darksoul @WandererUber >the new Chinese open source model that just came out straight up answered as Claude
uhh where can I find this model? I need to download it for research purposes
@bronze @WandererUber I think it’s called Kimi K2, but DeepSeek is still open and is still updating iirc. I don’t think these are easy to run though. You’ll probably need a lower parameter one.
@john_darksoul @bronze @WandererUber
I remember it being k2.5 that someone claimed identified as Claude when you said hi. I didn't have access to it at that time, and its output is quite different when I compare it to Claude 4.5 Opus.
Maybe the screenshot was fake? idk Kimi K2.5 Reasoning sure is a lot cheaper than Claude 4.5 Opus
@hazlin @bronze @WandererUber They’re doing everything cheaper. I wonder if the models are actually cheaper to run or if they’re just trying to be disruptive. If they can maintain their pace and cost we’ll likely see legislation here soon to ban their use in the US.
@john_darksoul @bronze @hazlin you need more technical insight tbqh
I already told you guys what they did in this thread, and also it's easily googleable, plus it pops up when you google any of the claims you made.
Nobody knows anything about the theoretical limits of efficiency of tokens/watt, not even on the same hardware.
A smaller model will be cheaper though. And you can "distill" models. Which is what they do. And I said that they did.
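For anyone unfamiliar with the "distilling" being argued about here: distillation means training a smaller student model to imitate a bigger teacher's output distribution, rather than copying its weights. A minimal toy sketch (purely illustrative, not any real lab's pipeline; the linear "teacher" and "student" and all variable names here are made up for the example):

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; T > 1 softens the distribution."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# "Teacher": a fixed linear classifier standing in for the big model.
X = rng.normal(size=(256, 8))            # some inputs
W_teacher = rng.normal(size=(8, 3))
teacher_logits = X @ W_teacher

# Soft targets: the teacher's softened output probabilities.
T = 2.0
targets = softmax(teacher_logits, T)

# "Student": trained only on the teacher's outputs, never its weights.
W_student = np.zeros((8, 3))
lr = 0.5
for _ in range(500):
    probs = softmax(X @ W_student, T)
    # Gradient of cross-entropy between soft targets and student output
    # (the temperature factor is folded into the learning rate here).
    grad = X.T @ (probs - targets) / len(X)
    W_student -= lr * grad

# How often the student's top prediction matches the teacher's.
agree = (softmax(X @ W_student).argmax(1) == teacher_logits.argmax(1)).mean()
```

The point of the argument above is that this only requires query access to the teacher's outputs, which is why "trained on the output of ChatGPT" and "stole the weights" are two very different accusations.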
@john_darksoul @bronze @hazlin I don't need to believe in some QAnon tier conspiracy of them somehow stealing the weights when what they actually did also easily explains it and better
@WandererUber @bronze @hazlin Homie, they steal EVERYTHING. I just watched a video on their burgeoning RAM business that started with them getting in trouble for stealing tech. Then when that company was blacklisted, a new company started where they left off. There is no conspiracy. It's SOP for any Chinese tech company to "accelerate" by getting info wherever they can. The idea that any of this would be found scandalous is laughable.
@john_darksoul @bronze @hazlin you're missing the entirely niche technical thing I'm saying and it's pissing me off
@WandererUber @bronze @hazlin I understand what you said about the deepseek. They trained that model on the output of ChatGPT instead of stealing the tech. I’m saying they’re probably doing both
@john_darksoul @bronze @hazlin and yet you still have zero evidence they pulled that off. Because they haven't.
When they distilled, OpenAI made it into a public incident. Had they stolen the model outright, for an open source release, which is what you said, that would also have blown up. It didn't. So it's not true.
really not that complicated
@WandererUber @bronze @hazlin Yea, the country that steals everything has been unable so far to steal AI models despite having their hands in every other aspect of US tech for decades and having recognized AI as a national interest as the US has. No you’re right. That’s a sane take.
@john_darksoul @bronze @hazlin >We should uncritically believe everyone else is only ever stealing and at the same time so incompetent that they have to do that, but also such Oceans Style Masterminds that they always pull it off. My evidence is all the times they got caught, but they did it without getting caught in this instance.
An absolutely laughable direction to paint me as some China defender just because I said "Anti-China FUD" one time. That's just a real thing, dude. The US fed apparatus does it constantly. So it was AS REASONABLE, at the very least, to suggest maybe you fell for a fake headline. But it turns out you made it up completely and there isn't even one, so yeah. I guess I'm the silly one, because I said "it's explained by the thing that OpenAI accused them of doing and not a theoretically possible thing they might LIKE to do, which OpenAI has NOT accused them of doing"
tedious conversation at this point.
@WandererUber @bronze @hazlin I don’t understand the green text here. It’s absurd. The ability to steal doesn’t correlate to innovation at all. The Chinese aren’t great innovators in tech. They are very prolific in stealing it. Of all the things to be clear about today, that should be the least difficult thing to understand.
@john_darksoul @bronze @hazlin that's it dude I've had it with you
see you in Vermont
@john_darksoul @WandererUber @hazlin >The Chinese aren’t great innovators in tech
who gives a fuck, theyre giving me free AI models
(((american))) companies make me pay my hard earned money
i have perfectly good hardware and hoarded RAM sticks, i can run this shit on my own
@bronze @WandererUber @hazlin The underlying point of all this was that AI profitability is moot if the Chinese can produce models at a fraction of the cost. I think it’s good too, because I’m not a fan of the AI fallout.