Egregoros

Maybe what I really want is LLM autocomplete.

The "good" models think TOO much and they can't iterate fast.
Non-technicals jerk themselves off over one-shots, so that's not gonna change.

My most-used phrase when trying to get GLM-5.1 to code is "DO IT NOW.", with "DO THIS NOW." a close second.
You can easily spend half an hour on a very simple refactor because it keeps trying to expand scope and consider everything else in the program.
It wrote the program itself, and it has to consider everything else because the control flow it produced is a total mess. When you finally get it to draw text on the screen, the text is shifted by two lines.

Interesting behavior. I think if the models were stupider, you might actually get faster end-to-end times.

RT: https://poa.st/objects/da71dc15-ce22-44e9-90ee-8aa362e586db

Replies

@WandererUber I find myself constantly resetting the context window.

A common framing for me is:
"Create a script (or function) in X language.
Using the following sample input, create Y as output."
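That two-part framing can be sketched as a small prompt template. This is just an illustration of the structure described above; the helper name and placeholder values are hypothetical, and the actual model call is whatever client you use:

```python
# Hypothetical helper: build the two-part prompt described above.
# Part 1: what to create and in which language.
# Part 2: a sample input paired with the output you expect.
def make_prompt(language: str, task: str, sample_input: str, desired_output: str) -> str:
    return (
        f"Create a script in {language}.\n"
        f"Task: {task}\n"
        "Using the following sample input, create the output shown.\n"
        f"Sample input:\n{sample_input}\n"
        f"Desired output:\n{desired_output}\n"
    )

# Example values (made up for illustration).
prompt = make_prompt("Python", "sum the numbers on each line", "1 2 3", "6")
print(prompt)
```

Starting each task from a fresh prompt like this, instead of piling requests into one long conversation, is what keeps the context window clean.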

Reasoning models always seem to take around 20-40 seconds, but as long as you keep the context window clean, that time doesn't balloon.