is GitHub dead btw? when I look at repos that should be popular, they have single-digit stars. where did people go?
@kaia got an example of such a repo?
@kaia A lot moved to codeberg. But the more likely answer is M$ vibe coding the stars system.
@Arwalk
when I looked at https://github.com/reasv/panoptikon it had fewer stars and I was unsure whether it was any good. so I looked at other repos in that space and 95% have zero stars. maybe this was always normal idk
maybe the repos need a hacker news feature before the psychos and bots that star shit on github find them
THE HACKER HAS A NEWS WEBSITE!???
@kaia @Arwalk I didn't even realize I had gotten over 50 stars on this. I've been working on the Rust rewrite of Panoptikon (currently pure Python with a Next.js frontend) for a bit now, but I haven't had time to get back to it in the last month or so. I want to finish it and make the Rust version the main one.
In the rewrite I was able to fix some architectural issues that kept the Python version from scaling well.
@icedquinn @Arwalk @kaia In fact, in this case there's no generative LLM involved, at least at search time.
Although most of the models (the embedding models, the OCR models, and the taggers/classifiers) are actually some form of Transformer. The trend over time has been that transformer-based architectures outperform everything else on more and more tasks.
There are also the captioning models, which just generate text. All in all, Panoptikon ships with configuration for several dozen models for different purposes, plus many ways of combining filters that search through their data. But the idea is that you can plug in your own weights and models.
In the API you can even combine filters with AND/OR/NOT operators; queries are written in a JSON-based "language" I call PQL, which gets compiled to SQLite SQL queries. The UI is more rigid, but it does have a lot of filters. It's a bit busy, though, and even I sometimes struggle to find the right switch.
I'd like to improve on that at some point, when I have time. Overall I like the UI, and I use it every day, so it's not that bad.
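To make the PQL idea concrete, here's a rough sketch of what "a JSON AND/OR/NOT tree compiled to a SQLite WHERE clause" could look like. The field names (`tag`, `path_like`) and the exact node shapes are invented for illustration; only the general compile-a-query-tree-to-SQL approach comes from the post, not the real PQL schema.

```python
# Toy compiler for a PQL-like query tree. NOT the real Panoptikon PQL:
# node shapes and leaf filter names here are hypothetical.

def compile_pql(node):
    """Recursively compile a PQL-style dict into (sql, params)."""
    if "and" in node:
        parts = [compile_pql(child) for child in node["and"]]
        sql = " AND ".join(f"({s})" for s, _ in parts)
        return sql, [p for _, ps in parts for p in ps]
    if "or" in node:
        parts = [compile_pql(child) for child in node["or"]]
        sql = " OR ".join(f"({s})" for s, _ in parts)
        return sql, [p for _, ps in parts for p in ps]
    if "not" in node:
        sql, params = compile_pql(node["not"])
        return f"NOT ({sql})", params
    # leaf filters (invented examples): match a tag, or a path pattern
    if "tag" in node:
        return "tags.name = ?", [node["tag"]]
    if "path_like" in node:
        return "files.path LIKE ?", [node["path_like"]]
    raise ValueError(f"unknown PQL node: {node}")

query = {
    "and": [
        {"tag": "landscape"},
        {"not": {"path_like": "%.gif"}},
    ]
}
sql, params = compile_pql(query)
print("WHERE " + sql)  # WHERE (tags.name = ?) AND (NOT (files.path LIKE ?))
print(params)          # ['landscape', '%.gif']
```

Using `?` placeholders and a separate params list is the standard sqlite3 way to keep user input out of the SQL string itself.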
@kaia @Arwalk I wish Panoptikon got more popular so I could recommend it to people without essentially outing myself as its author by mentioning it.
At the same time it would really suck if it got very popular right now when the Rust rewrite is 70% done and I'm trying to get back to work on it.
Anyways, it's a very flexible tool, although I think it's a bit difficult for most people to use, and especially to install (the latter because it's a Python + AI app).
I'm hoping to make this easier in the Rust version, because it can then be distributed as a binary, and the binary can bootstrap the rest of the environment it needs... maybe. I haven't gotten to that point yet. It will still need to run Python processes for the actual inference, but that can be isolated, and I'm hoping I can set it up so that the Rust side automatically creates the right Python envs and ensures the dependencies work on each platform. I have a feeling that will be a lot of work.
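The bootstrap idea described above (a launcher that creates an isolated Python env on first run and installs the inference deps into it) could be sketched roughly like this. This is in Python rather than Rust, and all names here are invented; per the post, the real launcher design doesn't exist yet.

```python
# Hypothetical sketch of an env-bootstrapping launcher, using only the
# stdlib. The real Panoptikon launcher would do this from the Rust side.
import subprocess
import sys
import venv
from pathlib import Path


def ensure_inference_env(env_dir: Path, requirements: list[str],
                         with_pip: bool = True) -> Path:
    """Create a venv on first run and install inference deps into it.

    Returns the path to the venv's Python interpreter, which the
    launcher can then use to spawn isolated inference worker processes.
    """
    bin_dir = "Scripts" if sys.platform == "win32" else "bin"
    python = env_dir / bin_dir / "python"
    if not python.exists():
        venv.EnvBuilder(with_pip=with_pip).create(env_dir)
        if requirements:
            # install the per-platform inference dependencies
            subprocess.run(
                [str(python), "-m", "pip", "install", *requirements],
                check=True,
            )
    return python
```

The launcher would then spawn workers with something like `subprocess.Popen([str(python), "-m", "inference_worker"])`, keeping the heavy ML dependencies out of the main process entirely.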