Newest at the top
2024-11-14 18:27:04 +0100 | alexherbo2 | (~alexherbo@2a02-8440-3313-668b-a9ec-921f-0511-ee3f.rev.sfr.net) alexherbo2 |
2024-11-14 18:26:43 +0100 | alexherbo2 | (~alexherbo@2a02-8440-3313-668b-a9ec-921f-0511-ee3f.rev.sfr.net) (Remote host closed the connection) |
2024-11-14 18:26:10 +0100 | <bailsman> | updateValue is pure. This is the 'inplace map': `runST $ do; mv <- VU.unsafeThaw v; VUM.iforM_ mv $ \i s -> VUM.write mv i $! updateValue s; VU.unsafeFreeze mv` |
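(bailsman's one-liner written out as a standalone sketch; `updateValue` and the element type are placeholders, and `unsafeThaw` is only safe because the original vector `v` is never used again afterwards:)

    import Control.Monad.ST (runST)
    import qualified Data.Vector.Unboxed as VU
    import qualified Data.Vector.Unboxed.Mutable as VUM

    -- In-place map over an unboxed vector: thaw without copying, overwrite
    -- each slot strictly, then freeze without copying.
    inplaceMap :: VU.Unbox a => (a -> a) -> VU.Vector a -> VU.Vector a
    inplaceMap updateValue v = runST $ do
      mv <- VU.unsafeThaw v              -- O(1), reuses v's buffer
      VUM.iforM_ mv $ \i s ->
        VUM.write mv i $! updateValue s  -- force the result before storing it
      VU.unsafeFreeze mv                 -- O(1), hands the buffer back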
2024-11-14 18:25:10 +0100 | <tomsmeding> | bailsman: "please tell me" if you show the code, perhaps we can :) |
2024-11-14 18:23:30 +0100 | <bailsman> | Please tell me it's not going to segfault on me if I move forward with this in more complex examples |
2024-11-14 18:22:15 +0100 | <bailsman> | why did nobody tell me :P |
2024-11-14 18:22:11 +0100 | <bailsman> | Wait, so apparently I can derive the unboxed instances with minimal boilerplate (as tuples), and the pure world doesn't even need to know or care that I did any of that. I can write it idiomatically. And it's now as fast as C |
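(One way to get those tuple-backed instances with little boilerplate is `derivingUnbox` from the vector-th-unbox package; `Particle` below is a made-up example record, not bailsman's actual type:)

    {-# LANGUAGE TemplateHaskell, TypeFamilies, MultiParamTypeClasses #-}
    import Data.Vector.Unboxed.Deriving (derivingUnbox)

    data Particle = Particle { px :: !Double, py :: !Double, mass :: !Double }

    -- Represent a Particle as a (Double, Double, Double) tuple inside unboxed
    -- vectors; pure code keeps using the Particle record as usual.
    derivingUnbox "Particle"
      [t| Particle -> (Double, Double, Double) |]
      [| \(Particle x y m) -> (x, y, m) |]
      [| \(x, y, m) -> Particle x y m |]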
2024-11-14 18:21:58 +0100 | emfrom | (~emfrom@37.168.28.138) (Remote host closed the connection) |
2024-11-14 18:13:09 +0100 | Inst | (~Inst@user/Inst) (Ping timeout: 276 seconds) |
2024-11-14 18:12:03 +0100 | emfrom | (~emfrom@37.168.28.138) |
2024-11-14 18:11:25 +0100 | Inst_ | (~Inst@user/Inst) Inst |
2024-11-14 18:10:57 +0100 | mantraofpie | (~mantraofp@user/mantraofpie) mantraofpie |
2024-11-14 18:10:17 +0100 | mantraofpie | (~mantraofp@user/mantraofpie) (Quit: ZNC 1.9.1 - https://znc.in) |
2024-11-14 18:05:45 +0100 | machinedgod | (~machinedg@d108-173-18-100.abhsia.telus.net) (Ping timeout: 252 seconds) |
2024-11-14 18:03:50 +0100 | mantraofpie_ | mantraofpie |
2024-11-14 17:58:53 +0100 | housemate | (~housemate@146.70.66.228) housemate |
2024-11-14 17:58:27 +0100 | aljazmc | (~aljazmc@user/aljazmc) aljazmc |
2024-11-14 17:58:01 +0100 | aljazmc | (~aljazmc@user/aljazmc) (Remote host closed the connection) |
2024-11-14 17:46:08 +0100 | Digit | (~user@user/digit) Digit |
2024-11-14 17:44:33 +0100 | Digitteknohippie | (~user@user/digit) (Ping timeout: 252 seconds) |
2024-11-14 17:42:44 +0100 | <bailsman> | It went from 4x slower to 10x faster than plain `map` |
2024-11-14 17:42:20 +0100 | <haskellbridge> | <Bowuigi> Oh yeah unboxing and strict data type fields can help in optimizing in general |
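(The field-level version of that advice: strict, unpacked fields store the raw values inline instead of pointers to possibly-unevaluated thunks; `Particle` is again just an illustrative record:)

    -- The ! makes each field strict; UNPACK stores the Double payload
    -- directly in the constructor instead of behind a pointer.
    data Particle = Particle
      { px   :: {-# UNPACK #-} !Double
      , py   :: {-# UNPACK #-} !Double
      , mass :: {-# UNPACK #-} !Double
      }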
2024-11-14 17:42:00 +0100 | <geekosaur> | otherwise it'll be chasing a lot of pointers |
2024-11-14 17:41:51 +0100 | <geekosaur> | well, yes, that helps |
2024-11-14 17:40:18 +0100 | <bailsman> | need to write unboxed instances for all of your data types. |
2024-11-14 17:40:17 +0100 | <bailsman> | Hmmm. I had Claude.AI write an unboxed instance for a small record, 50+ lines of code (to my eyes absolutely horrific). Then, using Data.Vector.Unboxed.Mutable, the performance is now approaching C's in-place update speed. I don't entirely trust that this won't segfault at some point, but if claude.ai did everything correctly then apparently it *is* possible to write in-place algorithms, you just
2024-11-14 17:37:19 +0100 | Digit | (~user@user/digit) (Ping timeout: 265 seconds) |
2024-11-14 17:37:00 +0100 | Digitteknohippie | (~user@user/digit) Digit |
2024-11-14 17:34:26 +0100 | <haskellbridge> | <Bowuigi> It turns out that first class labels are just Proxy on a kind ranging over every possible label |
2024-11-14 17:33:44 +0100 | <haskellbridge> | <Bowuigi> Now that everything is solved, it's time to move to something else |
2024-11-14 17:21:27 +0100 | <geekosaur> | llvm still lacks support for pre-CPSed code |
2024-11-14 17:20:48 +0100 | aljazmc | (~aljazmc@user/aljazmc) aljazmc |
2024-11-14 17:19:31 +0100 | <tomsmeding> | :) |
2024-11-14 17:19:01 +0100 | tromp | (~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl) |
2024-11-14 17:16:39 +0100 | <EvanR> | ok |
2024-11-14 17:16:35 +0100 | <tomsmeding> | EvanR: it definitely is not |
2024-11-14 17:16:16 +0100 | <Inst> | probably MY skill issue :( |
2024-11-14 17:16:14 +0100 | <EvanR> | is llvm not the default now anyway |
2024-11-14 17:14:34 +0100 | <bailsman> | Inst: I compiled my benchmark with -O2 -fllvm. Does not seem meaningfully different. Is -O2 the wrong optimization level? |
2024-11-14 17:12:36 +0100 | <Inst> | try compile with -fllvm |
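(For reference: -O2 is GHC's highest standard optimisation level, and -fllvm swaps the native code generator for the LLVM backend, which needs LLVM's opt/llc tools on the PATH. A typical invocation, with a made-up Bench.hs:)

    ghc -O2 -fllvm Bench.hs
    # or, in a cabal project:
    cabal build --ghc-options="-O2 -fllvm"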
2024-11-14 17:12:30 +0100 | <lambdabot> | Unknown command, try @list |
2024-11-14 17:12:30 +0100 | <Inst> | @bailsman |
2024-11-14 17:11:19 +0100 | <EvanR> | in any case idiomatic haskell is a starting point for getting into the weeds for optimization |
2024-11-14 17:10:34 +0100 | <EvanR> | not necessarily, sometimes idiomatic haskell is faster |
2024-11-14 17:10:03 +0100 | <bailsman> | If you write idiomatic haskell, you get as-slow-as-you-would-expect; if you try to write in-place code, you get way-slower-than-you-would-expect. |
2024-11-14 17:09:25 +0100 | <EvanR> | in the case of arrays, for lookup tables |
2024-11-14 17:09:22 +0100 | <bailsman> | I agree with your conclusion - stop trying to be clever and just learn what idiomatic haskell code looks like. |
2024-11-14 17:08:53 +0100 | <EvanR> | but as a looping mechanism |
2024-11-14 17:08:44 +0100 | <EvanR> | in the case of list, usually not as a data structure |
2024-11-14 17:08:09 +0100 | <EvanR> | list and arrays in haskell are both good for certain purposes |
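(A small illustration of EvanR's split, with made-up contents: the list is only the looping mechanism, the unboxed vector is the lookup table:)

    import qualified Data.Vector.Unboxed as VU

    -- The vector is the data structure: unboxed storage, O(1) indexing.
    squares :: VU.Vector Int
    squares = VU.generate 256 (\i -> i * i)

    -- The list is just a loop; with list fusion it typically never gets built.
    sumOfSquares :: Int -> Int
    sumOfSquares n = sum [ squares VU.! (i `mod` 256) | i <- [0 .. n - 1] ]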