2025/11/26

Newest at the top

2025-11-27 00:18:54 +0100EvanR(~EvanR@user/evanr) EvanR
2025-11-27 00:18:35 +0100merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 240 seconds)
2025-11-27 00:18:34 +0100EvanR(~EvanR@user/evanr) (Remote host closed the connection)
2025-11-27 00:16:10 +0100tv(~tv@user/tv) tv
2025-11-27 00:15:35 +0100tv(~tv@user/tv) (Quit: derp)
2025-11-27 00:14:10 +0100merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-11-27 00:12:41 +0100 <EvanR> meanwhile machine learning continued
2025-11-27 00:12:25 +0100 <jreicher> I feel like now it's becoming a religion. :(
2025-11-27 00:12:24 +0100 <EvanR> the AI winter resulted in a loss of funding/interest in AI research, at least for anything that was called that
2025-11-27 00:11:52 +0100 <EvanR> AI has been more of a "brand" than a science for a long time
2025-11-27 00:10:00 +0100mange(~mange@user/mange) mange
2025-11-27 00:09:21 +0100__monty__(~toonn@user/toonn) (Quit: leaving)
2025-11-27 00:07:26 +0100 <jreicher> Well, I guess at a high level it's all the same thing in the end. If the "smart" system is not an LLM, it just means the training data has been fed to the human (in the form of experience and/or user feedback) and the human translates that into the rules (weights) of the system.
2025-11-27 00:03:28 +0100merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 256 seconds)
2025-11-27 00:00:41 +0100ThePenguin(~ThePengui@cust-95-80-28-221.csbnet.se) ThePenguin
2025-11-26 23:59:44 +0100gmg(~user@user/gehmehgeh) gehmehgeh
2025-11-26 23:59:44 +0100ThePenguin(~ThePengui@cust-95-80-28-221.csbnet.se) (Ping timeout: 244 seconds)
2025-11-26 23:58:48 +0100merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-11-26 23:58:44 +0100 <haskellbridge> <loonycyborg> because it's a form of expert system
2025-11-26 23:58:40 +0100 <haskellbridge> <loonycyborg> so is a build system
2025-11-26 23:58:18 +0100 <jreicher> I reckon you could argue that a language server is a form of AI. :)
2025-11-26 23:57:18 +0100 <haskellbridge> <loonycyborg> and lots of other things
2025-11-26 23:57:15 +0100emmanuelux(~emmanuelu@user/emmanuelux) (Remote host closed the connection)
2025-11-26 23:57:11 +0100 <haskellbridge> <loonycyborg> because "AI" also refers to automated players in computer games
2025-11-26 23:56:54 +0100 <haskellbridge> <loonycyborg> I wish they'd just stick with LLM
2025-11-26 23:56:48 +0100 <haskellbridge> <loonycyborg> "AI" is a really confusing term
2025-11-26 23:54:49 +0100emmanuelux(~emmanuelu@user/emmanuelux) emmanuelux
2025-11-26 23:50:31 +0100merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 240 seconds)
2025-11-26 23:50:21 +0100califax(~califax@user/califx) califx
2025-11-26 23:50:00 +0100califax(~califax@user/califx) (Quit: ZNC 1.8.2 - https://znc.in)
2025-11-26 23:43:55 +0100merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-11-26 23:40:13 +0100tromp(~textual@2001:1c00:3487:1b00:c5b7:b8d9:7db7:74e1) (Quit: My iMac has gone to sleep. ZZZzzz…)
2025-11-26 23:39:56 +0100gmg(~user@user/gehmehgeh) (Quit: Leaving)
2025-11-26 23:35:03 +0100divlamir(~divlamir@user/divlamir) divlamir
2025-11-26 23:34:37 +0100divlamir(~divlamir@user/divlamir) (Read error: Connection reset by peer)
2025-11-26 23:29:07 +0100peterbecich(~Thunderbi@172.222.148.214) (Ping timeout: 264 seconds)
2025-11-26 23:28:10 +0100 <geekosaur> this is also flavored by my monitoring various kinds of science news, and an active area of research is hybridizing LLMs with other varieties of AI more capable of some form of reasoning (fsvo) about the data they've been trained with
2025-11-26 23:24:44 +0100Frostillicus(~Frostilli@pool-71-174-119-69.bstnma.fios.verizon.net)
2025-11-26 23:23:55 +0100 <jreicher> jackdk: FWIW I "imagine" those points are very plausible, but I meant something more specific than "works". I mean USEFUL. If the AI succeeds in writing code that would have been no more effort and no slower for a human to write, it's not useful. And very concise and obvious code is like this. Lengthy boilerplate is where the AI has the potential to save time.
2025-11-26 23:23:24 +0100 <geekosaur> I lean strongly toward #1 because the only way I can see for an LLM to have boosted "AI" is for it to be a smarter Markov bot
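For context on "Markov bot": it means an n-gram model that predicts the next token purely from the preceding context, the same basic move an LLM makes at vastly larger scale. A minimal bigram sketch in Haskell, using an invented toy corpus and greedy most-frequent choice instead of sampling so it needs nothing beyond containers; it is an illustration of the term, not a claim about how LLMs actually work internally:

    import qualified Data.Map.Strict as M
    import Data.List (maximumBy)
    import Data.Ord (comparing)

    -- For each word, how often each other word follows it in the training text.
    type Model = M.Map String (M.Map String Int)

    -- Count successor frequencies over consecutive word pairs.
    train :: [String] -> Model
    train ws = M.fromListWith (M.unionWith (+))
      [ (a, M.singleton b 1) | (a, b) <- zip ws (tail ws) ]

    -- Greedily pick the most frequent successor of the current word, if any.
    next :: Model -> String -> Maybe String
    next m w = do
      succs <- M.lookup w m
      pure . fst $ maximumBy (comparing snd) (M.toList succs)

    -- Generate up to n further words starting from a seed word.
    generate :: Model -> Int -> String -> [String]
    generate _ 0 w = [w]
    generate m n w = w : maybe [] (generate m (n - 1)) (next m w)

    main :: IO ()
    main = do
      let model = train (words "the cat sat on the mat and the cat slept")
      putStrLn (unwords (generate model 6 "the"))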
2025-11-26 23:22:49 +0100Frostillicus(~Frostilli@pool-71-174-119-69.bstnma.fios.verizon.net) (Ping timeout: 260 seconds)
2025-11-26 23:21:30 +0100michalz(~michalz@185.246.207.193) (Remote host closed the connection)
2025-11-26 23:19:21 +0100 <jackdk> ... 3. it works best on languages with simpler syntax and less compiler smarts because the token stream just carries more information (a Gleam advocate mentioned this to me once).
2025-11-26 23:18:20 +0100 <jackdk> jreicher: I have seen three arguments and I'm not sure which to weight most heavily: 1. it works best on things it's seen the most of in the training distribution (python, TS, specific major libraries — Anthropic called this out in its article on the design and implementation of Claude Code); 2. it works best on strongly-typed languages because it can converge on a solution with compiler assistance (Terry Tao's posts about Lean may apply); ...
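A rough sketch of the "converge with compiler assistance" loop described in point 2: propose code, typecheck it, and feed the compiler errors back into the next prompt. llmComplete is a hypothetical placeholder for a real model call; only the GHC invocation via the process library is real, and the retry limit and file name are arbitrary.

    import System.Exit (ExitCode (..))
    import System.Process (readProcessWithExitCode)

    -- Hypothetical stand-in for a model API call; here it always returns the
    -- same fixed candidate so the example is self-contained.
    llmComplete :: String -> IO String
    llmComplete _prompt =
      pure "module Main where\nmain :: IO ()\nmain = putStrLn \"hello\"\n"

    -- Up to n rounds of: generate a candidate, write it out, typecheck it with
    -- GHC, and on failure append the compiler errors to the next prompt.
    refine :: Int -> String -> IO (Either String String)
    refine 0 _ = pure (Left "gave up")
    refine n prompt = do
      candidate <- llmComplete prompt
      writeFile "Candidate.hs" candidate
      (code, _out, err) <- readProcessWithExitCode "ghc" ["-fno-code", "Candidate.hs"] ""
      case code of
        ExitSuccess   -> pure (Right candidate)
        ExitFailure _ -> refine (n - 1) (prompt ++ "\nCompiler errors:\n" ++ err)

    main :: IO ()
    main = refine 3 "write a hello-world program" >>= print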
2025-11-26 23:04:37 +0100peterbecich(~Thunderbi@172.222.148.214) peterbecich
2025-11-26 23:00:25 +0100Frostillicus(~Frostilli@pool-71-174-119-69.bstnma.fios.verizon.net)
2025-11-26 23:00:23 +0100target_i(~target_i@user/target-i/x-6023099) (Quit: leaving)
2025-11-26 23:00:06 +0100 <haskellbridge> <sm> that's us..
2025-11-26 22:57:45 +0100 <jreicher> I suspect AI is more useful in languages that require a fair amount of boilerplate to get things done.
2025-11-26 22:56:16 +0100takuan(~takuan@d8D86B9E9.access.telenet.be) (Remote host closed the connection)