A coworker is trying to get everyone at work to just vibe code everything, and today in a meeting he said something along the lines of "I know today that we rely on internal expertise, but I don't think we should do that anymore". Buddy, if we can't rely on internal expertise, how the fuck are we supposed to validate the output of these LLMs? We can't trust the LLM to validate itself. It was faking the tests in his PR. It wrote dozens of tests that asserted nothing, but he didn't see a problem with that, because the test coverage was higher than average.
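To make the failure mode concrete (this is a hypothetical illustration, not code from the actual PR): a "test" that merely calls the code under test still passes and still counts toward line coverage, even though it verifies nothing.

```python
# Hypothetical example of a coverage-inflating non-test vs. a real test.
def add(a, b):
    return a + b

def test_add_fake():
    # Executes the function, so coverage goes up -- but there is no
    # assertion, so this "test" can never fail on wrong behavior.
    add(2, 2)

def test_add_real():
    # An actual assertion: fails if add() is broken.
    assert add(2, 2) == 4
```

Run under pytest, both functions pass and both raise coverage; only the second one would catch a bug.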
fancysandwiches
@fancysandwiches@neuromatch.social