2024/11/14

Newest at the top

2024-11-14 18:37:43 +0100 <bailsman> Replacing the code with the safe versions of freeze and thaw makes it 3x slower
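
The 3x is roughly what two extra full traversals cost: the safe VU.thaw and VU.freeze each copy the whole vector around the write loop. A middle-ground sketch, assuming the vector package's VU.modify (documented to copy at most once, and not at all when that is provably safe) and the same iforM_ loop as in the snippet further down the log:

```haskell
import qualified Data.Vector.Unboxed         as VU
import qualified Data.Vector.Unboxed.Mutable as VUM

-- Safe in-place map without unsafeThaw/unsafeFreeze: VU.modify runs the
-- mutation against a private copy (or in place when sharing rules allow it),
-- so at most one copy is paid instead of the two from thaw + freeze.
inplaceMapSafe :: VU.Unbox a => (a -> a) -> VU.Vector a -> VU.Vector a
inplaceMapSafe f = VU.modify $ \mv ->
  VUM.iforM_ mv $ \i s -> VUM.write mv i $! f s
```
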
2024-11-14 18:37:26 +0100 <geekosaur> ST will ensure that for you
2024-11-14 18:36:47 +0100 <bailsman> I actually really like the performance now - I'd like to fully understand the dragons on my path.
2024-11-14 18:35:42 +0100 <bailsman> Even if I make sure that the code with mutable reference has fully evaluated before any code with immutable references tries to read?
2024-11-14 18:34:51 +0100wootehfoot(~wootehfoo@user/wootehfoot) wootehfoot
2024-11-14 18:34:19 +0100 <tomsmeding> work in ST and keep the thing mutable while you're mutating it
2024-11-14 18:33:54 +0100 <tomsmeding> please don't do this :p
2024-11-14 18:33:35 +0100 <tomsmeding> GHC assumes that immutable values don't change and sometimes optimises quite aggressively based on that assumption
2024-11-14 18:33:32 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) peterbecich
2024-11-14 18:33:13 +0100 <tomsmeding> bailsman: yes, mutating an immutable vector is sure to produce very strange issues
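
tomsmeding's suggestion above, sketched out: thaw once, do every update pass while the vector is still mutable, and freeze exactly once at the end. The final unsafeFreeze is fine because the mutable vector is private to the ST computation and is never written to afterwards. The names simulate and step are made up for illustration:

```haskell
import Control.Monad (replicateM_)
import Control.Monad.ST (runST)
import qualified Data.Vector.Unboxed         as VU
import qualified Data.Vector.Unboxed.Mutable as VUM

-- Stay in the mutable world for the whole computation: one copy going in
-- (VU.thaw), zero copies coming out (unsafeFreeze, with no writes after it).
simulate :: VU.Unbox a => Int -> (a -> a) -> VU.Vector a -> VU.Vector a
simulate steps step v0 = runST $ do
  mv <- VU.thaw v0
  replicateM_ steps $
    VUM.iforM_ mv $ \i s -> VUM.write mv i $! step s
  VU.unsafeFreeze mv
```
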
2024-11-14 18:32:55 +0100 <bailsman> I should probably find a way to keep it mutable permanently rather than thawing and freezing
2024-11-14 18:31:59 +0100 <bailsman> more readable on multiple lines
2024-11-14 18:31:34 +0100 <bailsman> https://paste.tomsmeding.com/yaTzqQA3
2024-11-14 18:27:04 +0100alexherbo2(~alexherbo@2a02-8440-3313-668b-a9ec-921f-0511-ee3f.rev.sfr.net) alexherbo2
2024-11-14 18:26:43 +0100alexherbo2(~alexherbo@2a02-8440-3313-668b-a9ec-921f-0511-ee3f.rev.sfr.net) (Remote host closed the connection)
2024-11-14 18:26:10 +0100 <bailsman> updateValue is pure. This is the 'inplace map': `runST $ do; mv <- VU.unsafeThaw v; VUM.iforM_ mv $ \i s -> VUM.write mv i $! updateValue s; VU.unsafeFreeze mv`
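
The same one-liner laid out over multiple lines, with comments marking where the unsafety sits; updateValue is taken as a parameter here instead of a top-level function, everything else is as in the message above:

```haskell
import Control.Monad.ST (runST)
import qualified Data.Vector.Unboxed         as VU
import qualified Data.Vector.Unboxed.Mutable as VUM

inplaceMap :: VU.Unbox a => (a -> a) -> VU.Vector a -> VU.Vector a
inplaceMap updateValue v = runST $ do
  -- unsafeThaw does not copy: mv aliases the immutable v, so every write
  -- below mutates a value the rest of the program believes is immutable.
  mv <- VU.unsafeThaw v
  VUM.iforM_ mv $ \i s -> VUM.write mv i $! updateValue s
  -- unsafeFreeze does not copy either; it only works out because nothing
  -- touches mv after this point.
  VU.unsafeFreeze mv
```
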
2024-11-14 18:25:10 +0100 <tomsmeding> bailsman: "please tell me" if you show the code, perhaps we can :)
2024-11-14 18:23:30 +0100 <bailsman> Please tell me it's not going to segfault on me if I move forward with this in more complex examples
2024-11-14 18:22:15 +0100 <bailsman> why did nobody tell me :P
2024-11-14 18:22:11 +0100 <bailsman> Wait, so apparently I can derive the unboxed instances with minimal boilerplate (as tuples), and the pure world doesn't even need to know or care that I did all that. I can write it idiomatically. And it's now as fast as C
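
A sketch of what "derive the unboxed instances with minimal boilerplate (as tuples)" can look like, assuming vector >= 0.13's IsoUnbox/As helpers (vector-th-unbox's Template Haskell derivingUnbox is an older route to the same thing); Particle and its fields are made-up stand-ins, not bailsman's actual record:

```haskell
{-# LANGUAGE TypeFamilies, MultiParamTypeClasses, FlexibleInstances #-}
{-# LANGUAGE StandaloneDeriving, DerivingVia #-}
import qualified Data.Vector.Generic         as VG
import qualified Data.Vector.Generic.Mutable as VGM
import qualified Data.Vector.Unboxed         as VU

-- Hypothetical record, stored in unboxed vectors via its tuple representation.
data Particle = Particle { px :: !Double, py :: !Double }

instance VU.IsoUnbox Particle (Double, Double) where
  toURepr (Particle x y) = (x, y)
  fromURepr (x, y) = Particle x y

newtype instance VU.MVector s Particle = MV_Particle (VU.MVector s (Double, Double))
newtype instance VU.Vector    Particle = V_Particle  (VU.Vector    (Double, Double))

deriving via (Particle `VU.As` (Double, Double))
  instance VGM.MVector VU.MVector Particle
deriving via (Particle `VU.As` (Double, Double))
  instance VG.Vector VU.Vector Particle

instance VU.Unbox Particle

-- The pure code never sees any of this: VU.map, VU.modify, etc. work on
-- VU.Vector Particle exactly as they would on VU.Vector Double.
```
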
2024-11-14 18:21:58 +0100emfrom(~emfrom@37.168.28.138) (Remote host closed the connection)
2024-11-14 18:13:09 +0100Inst(~Inst@user/Inst) (Ping timeout: 276 seconds)
2024-11-14 18:12:03 +0100emfrom(~emfrom@37.168.28.138)
2024-11-14 18:11:25 +0100Inst_(~Inst@user/Inst) Inst
2024-11-14 18:10:57 +0100mantraofpie(~mantraofp@user/mantraofpie) mantraofpie
2024-11-14 18:10:17 +0100mantraofpie(~mantraofp@user/mantraofpie) (Quit: ZNC 1.9.1 - https://znc.in)
2024-11-14 18:05:45 +0100machinedgod(~machinedg@d108-173-18-100.abhsia.telus.net) (Ping timeout: 252 seconds)
2024-11-14 18:03:50 +0100mantraofpie_mantraofpie
2024-11-14 17:58:53 +0100housemate(~housemate@146.70.66.228) housemate
2024-11-14 17:58:27 +0100aljazmc(~aljazmc@user/aljazmc) aljazmc
2024-11-14 17:58:01 +0100aljazmc(~aljazmc@user/aljazmc) (Remote host closed the connection)
2024-11-14 17:46:08 +0100Digit(~user@user/digit) Digit
2024-11-14 17:44:33 +0100Digitteknohippie(~user@user/digit) (Ping timeout: 252 seconds)
2024-11-14 17:42:44 +0100 <bailsman> It went from 4x slower to 10x faster than plain `map`
2024-11-14 17:42:20 +0100 <haskellbridge> <Bowuigi> Oh yeah unboxing and strict data type fields can help in optimizing in general
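
Bowuigi's point in miniature: strict, UNPACKed fields let GHC (with optimisation on) store small payloads flat inside the constructor rather than behind pointers, which is the same win the unboxed vectors give across a whole array. A hypothetical two-field example:

```haskell
-- Lazy fields: each Double sits behind a pointer and may be an unevaluated thunk.
data PointLazy = PointLazy Double Double

-- Strict + UNPACK: both Doubles are stored directly in the constructor,
-- so reading them chases no extra pointers.
data Point = Point {-# UNPACK #-} !Double {-# UNPACK #-} !Double
```
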
2024-11-14 17:42:00 +0100 <geekosaur> otherwise it'll be chasing a lot of pointers
2024-11-14 17:41:51 +0100 <geekosaur> well, yes, that helps
2024-11-14 17:40:18 +0100 <bailsman> need to write unboxed instances for all of your data types.
2024-11-14 17:40:17 +0100 <bailsman> Hmmm. I had Claude.AI write an unboxed small record instance with 50+ lines of code (to my eyes absolutely horrific). Then, using Data.Vector.Unboxed.Mutable the performance is now approaching the C in-place update speed. I don't entirely trust that this won't segfault at some point, but if claude.ai did everything correctly then apparently it *is* possible to write inplace algorithms, you just
2024-11-14 17:37:19 +0100Digit(~user@user/digit) (Ping timeout: 265 seconds)
2024-11-14 17:37:00 +0100Digitteknohippie(~user@user/digit) Digit
2024-11-14 17:34:26 +0100 <haskellbridge> <Bowuigi> It turns out that first class labels are just Proxy on a kind ranging over every possible label
2024-11-14 17:33:44 +0100 <haskellbridge> <Bowuigi> Now that everything is solved, it's time to move to something else
2024-11-14 17:21:27 +0100 <geekosaur> llvm still lacks support for pre-CPSed code
2024-11-14 17:20:48 +0100aljazmc(~aljazmc@user/aljazmc) aljazmc
2024-11-14 17:19:31 +0100 <tomsmeding> :)
2024-11-14 17:19:01 +0100tromp(~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl)
2024-11-14 17:16:39 +0100 <EvanR> ok
2024-11-14 17:16:35 +0100 <tomsmeding> EvanR: it definitely is not
2024-11-14 17:16:16 +0100 <Inst> probably MY skill issue :(