2025/10/30

Newest at the top

2025-10-30 21:06:17 +0100 <segfaultfizzbuzz> yeah sorry, not type signature in isolation, i meant type signature as a hard restriction on the validity of the LLM output, given a reasonable prompt
2025-10-30 21:05:56 +0100 <segfaultfizzbuzz> type signature plus a description/comment
2025-10-30 21:05:52 +0100 <EvanR> maybe you mean the carefully chosen name of the function
2025-10-30 21:05:44 +0100 <EvanR> the type signature is usually not enough to judge what you want it to do
2025-10-30 21:05:21 +0100 <segfaultfizzbuzz> EvanR: oh?
2025-10-30 21:05:13 +0100 <EvanR> no
2025-10-30 21:05:01 +0100 <segfaultfizzbuzz> or at least that's what i find
2025-10-30 21:05:00 +0100 <EvanR> lol
2025-10-30 21:04:54 +0100 <segfaultfizzbuzz> roughly speaking if you can write the type signature of your function then ai seems like it can do decently at filling in the rest 50% to 75% of the time...
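[Editor's note: the thread above argues that a type signature plus a comment can act as a spec that constrains LLM output. A minimal, hypothetical Haskell sketch of such a spec (`keepIf` is an invented name; any generated body must at least type-check against the signature):]

```haskell
-- | Keep only elements satisfying the predicate, preserving order.
-- The signature alone underdetermines behavior (e.g. `const (const [])`
-- also type-checks), which is why the comment carries the intent;
-- the type checker then rejects any filled-in body that doesn't fit.
keepIf :: (a -> Bool) -> [a] -> [a]
keepIf p = foldr (\x acc -> if p x then x : acc else acc) []

main :: IO ()
main = print (keepIf even [1 .. 10 :: Int])
```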
2025-10-30 21:04:17 +0100 <geekosaur> why large language models are what led to something that comes across as "actual AI"
2025-10-30 21:04:11 +0100 <segfaultfizzbuzz> EvanR: yeah i would say that "without understanding any of it" isn't what i do, but it can save me a lot of round trips back and forth from documentation and also it can sometimes stitch things nicely (type conversions, etc)
2025-10-30 21:03:43 +0100 <geekosaur> and makes a lot of sense if you think about it
2025-10-30 21:03:35 +0100 <segfaultfizzbuzz> geekosaur: why LLMs in the first place? explain?
2025-10-30 21:03:23 +0100 <geekosaur> seriously, it explains a lot of things, including why LLMs in the first place
2025-10-30 21:03:10 +0100 <EvanR> if you start pasting large amounts of code generated by the LLM into the project without understanding any of it, well, it will start to break down, and there's plenty of memes about where this leads
2025-10-30 21:02:47 +0100 <segfaultfizzbuzz> hahaha markov chains :-) you might not be wrong there
2025-10-30 21:02:34 +0100 <geekosaur> which means it's only as good as the Markov chains it can build from its training data
2025-10-30 21:02:30 +0100 <segfaultfizzbuzz> and then there is architecting your application so that you can kind of limit the damage that can happen, but i would imagine that's the same as structuring code for writing on a team
2025-10-30 21:02:24 +0100 <haskellbridge> <sm> the coding tools and chat bots are no longer just that
2025-10-30 21:02:04 +0100 <geekosaur> keep in mind that current AI still doesn't understand anything; it's a Markov bot with a smarter notion of how language fits together
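[Editor's note: a toy illustration of the "Markov bot" analogy above — a bigram model whose next-word table is just counts from its training text, with no understanding involved. This is only a sketch of the analogy, not a claim about how LLMs actually work:]

```haskell
import qualified Data.Map.Strict as Map

-- Count, for each word, how often each following word appears.
bigrams :: [String] -> Map.Map String (Map.Map String Int)
bigrams ws =
  Map.fromListWith (Map.unionWith (+))
    [ (a, Map.singleton b 1) | (a, b) <- zip ws (drop 1 ws) ]

-- Candidate next words and their counts; the model is only as good
-- as the chains it built from its training data.
nextWords :: Map.Map String (Map.Map String Int) -> String -> [(String, Int)]
nextWords m w = maybe [] Map.toList (Map.lookup w m)

main :: IO ()
main = print (nextWords (bigrams (words "the cat sat on the mat")) "the")
```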
2025-10-30 21:01:57 +0100 <segfaultfizzbuzz> geekosaur: there also is how you prompt,... if your language is better you get better results i think
2025-10-30 21:01:57 +0100 <haskellbridge> <sm> yes also the model, the ai-based coding tool, the context, the prompts all matter
2025-10-30 21:01:22 +0100 <geekosaur> the question is what it was trained on. if you have a lot of blog posts by people who're still learning the language, the code the AI will produce will mostly be at their level
2025-10-30 21:01:02 +0100 <EvanR> I use it for C and it still needs to be checked, obviously
2025-10-30 21:00:44 +0100 <segfaultfizzbuzz> js is awful,... rust is like,... not bad i find
2025-10-30 21:00:43 +0100 <haskellbridge> <sm> I don't think you can generalise, it depends what you're doing
2025-10-30 21:00:36 +0100 <segfaultfizzbuzz> geekosaur: oh? nice :-)
2025-10-30 21:00:31 +0100 <geekosaur> even js needs to be checked
2025-10-30 21:00:27 +0100 <segfaultfizzbuzz> lol yeah sorry i forgot to mention using neuralink while using tesla self driving
2025-10-30 21:00:22 +0100 merijn (~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 260 seconds)
2025-10-30 21:00:19 +0100 <geekosaur> ai gets haskell very wrong still
2025-10-30 21:00:14 +0100 <haskellbridge> <loonycyborg> Direct neural uplink better
2025-10-30 21:00:10 +0100 <haskellbridge> <sm> it varies a lot
2025-10-30 21:00:08 +0100 <EvanR> in the same way that handwriting is not a thing anymore
2025-10-30 21:00:06 +0100 <segfaultfizzbuzz> hahaha... but seriously...?
2025-10-30 20:59:08 +0100 <EvanR> you will be ridiculed for using your keyboard at all. Voice input to an LLM is the only way to signal how up to date you are
2025-10-30 20:57:01 +0100 <segfaultfizzbuzz> what are the norms these days regarding using "ai" to code among good quality professional programmers. is it fine to use or do i need to type everything into my keyboard myself
2025-10-30 20:56:33 +0100 peterbecich (~Thunderbi@172.222.148.214) peterbecich
2025-10-30 20:55:15 +0100 merijn (~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-10-30 20:48:52 +0100 Sgeo (~Sgeo@user/sgeo) Sgeo
2025-10-30 20:46:04 +0100 Sgeo (~Sgeo@user/sgeo) (Read error: Connection reset by peer)
2025-10-30 20:45:25 +0100 segfaultfizzbuzz (~segfaultf@23-93-74-222.fiber.dynamic.sonic.net) segfaultfizzbuzz
2025-10-30 20:44:44 +0100 merijn (~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 256 seconds)
2025-10-30 20:39:28 +0100 merijn (~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-10-30 20:36:44 +0100 opencircuit (~quassel@user/opencircuit) opencircuit
2025-10-30 20:35:34 +0100 opencircuit_ (~quassel@user/opencircuit) (Remote host closed the connection)
2025-10-30 20:28:49 +0100 merijn (~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 264 seconds)
2025-10-30 20:27:16 +0100 rvalue- rvalue
2025-10-30 20:23:38 +0100 merijn (~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-10-30 20:20:22 +0100 haltingsolver (~cmo@2604:3d09:207f:8000::d1dc) (Ping timeout: 256 seconds)