2024/11/14

Newest at the top

2024-11-14 18:12:03 +0100emfrom(~emfrom@37.168.28.138)
2024-11-14 18:11:25 +0100Inst_(~Inst@user/Inst) Inst
2024-11-14 18:10:57 +0100mantraofpie(~mantraofp@user/mantraofpie) mantraofpie
2024-11-14 18:10:17 +0100mantraofpie(~mantraofp@user/mantraofpie) (Quit: ZNC 1.9.1 - https://znc.in)
2024-11-14 18:05:45 +0100machinedgod(~machinedg@d108-173-18-100.abhsia.telus.net) (Ping timeout: 252 seconds)
2024-11-14 18:03:50 +0100mantraofpie_mantraofpie
2024-11-14 17:58:53 +0100housemate(~housemate@146.70.66.228) housemate
2024-11-14 17:58:27 +0100aljazmc(~aljazmc@user/aljazmc) aljazmc
2024-11-14 17:58:01 +0100aljazmc(~aljazmc@user/aljazmc) (Remote host closed the connection)
2024-11-14 17:46:08 +0100Digit(~user@user/digit) Digit
2024-11-14 17:44:33 +0100Digitteknohippie(~user@user/digit) (Ping timeout: 252 seconds)
2024-11-14 17:42:44 +0100 <bailsman> It went from 4x slower to 10x faster than plain `map`
2024-11-14 17:42:20 +0100 <haskellbridge> <Bowuigi> Oh yeah unboxing and strict data type fields can help in optimizing in general
2024-11-14 17:42:00 +0100 <geekosaur> otherwise it'll be chasing a lot of pointers
2024-11-14 17:41:51 +0100 <geekosaur> well, yes, that helps
2024-11-14 17:40:18 +0100 <bailsman> need to write unboxed instances for all of your data types.
2024-11-14 17:40:17 +0100 <bailsman> Hmmm. I had Claude.AI write an unboxed small record instance with 50+ lines of code (to my eyes absolutely horrific). Then, using Data.Vector.Unboxed.Mutable the performance is now approaching the C in-place update speed. I don't entirely trust that this won't segfault at some point, but if claude.ai did everything correctly then apparently it *is* possible to write inplace algorithms, you just
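[editor's note: a minimal sketch of the technique bailsman describes, not his actual code. It shows in-place updates via `Data.Vector.Unboxed.Mutable`; a record type would additionally need a hand-written `Unbox` instance (or one generated with the `vector-th-unbox` package), but here plain `Double`s are used, which `vector` already unboxes.]

```haskell
-- Sketch only: in-place mutation of an unboxed vector, using U.modify,
-- which runs a destructive ST action on a fresh copy of the buffer.
import qualified Data.Vector.Unboxed as U
import qualified Data.Vector.Unboxed.Mutable as M
import Control.Monad (forM_)

-- Multiply every element by k, updating the buffer in place.
scaleInPlace :: Double -> U.Vector Double -> U.Vector Double
scaleInPlace k = U.modify $ \mv ->
  forM_ [0 .. M.length mv - 1] $ \i ->
    M.modify mv (* k) i

main :: IO ()
main = print (scaleInPlace 2 (U.fromList [1, 2, 3]))
```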
2024-11-14 17:37:19 +0100Digit(~user@user/digit) (Ping timeout: 265 seconds)
2024-11-14 17:37:00 +0100Digitteknohippie(~user@user/digit) Digit
2024-11-14 17:34:26 +0100 <haskellbridge> <Bowuigi> It turns out that first class labels are just Proxy on a kind ranging over every possible label
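[editor's note: a small illustration of Bowuigi's point, under the assumption that "label" means a type-level string: a first-class label is just `Proxy` indexed by a `Symbol`, and `KnownSymbol` recovers the string at runtime. The `Label` alias and `name` value are invented for the example.]

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownSymbol, Symbol, symbolVal)

-- A first-class label: Proxy ranging over the kind of all labels (Symbol).
type Label (s :: Symbol) = Proxy s

name :: Label "name"
name = Proxy

-- Reflect the type-level label back to a runtime String.
labelText :: KnownSymbol s => Label s -> String
labelText = symbolVal

main :: IO ()
main = putStrLn (labelText name)  -- prints "name"
```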
2024-11-14 17:33:44 +0100 <haskellbridge> <Bowuigi> Now that everything is solved, it's time to move to something else
2024-11-14 17:21:27 +0100 <geekosaur> llvm still lacks support for pre-CPSed code
2024-11-14 17:20:48 +0100aljazmc(~aljazmc@user/aljazmc) aljazmc
2024-11-14 17:19:31 +0100 <tomsmeding> :)
2024-11-14 17:19:01 +0100tromp(~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl)
2024-11-14 17:16:39 +0100 <EvanR> ok
2024-11-14 17:16:35 +0100 <tomsmeding> EvanR: it definitely is not
2024-11-14 17:16:16 +0100 <Inst> probably MY skill issue :(
2024-11-14 17:16:14 +0100 <EvanR> is llvm not the default now anyway
2024-11-14 17:14:34 +0100 <bailsman> Inst: I compiled my benchmark with -O2 -fllvm. Does not seem meaningfully different. Is -O2 the wrong optimization level?
2024-11-14 17:12:36 +0100 <Inst> try compiling with -fllvm
2024-11-14 17:12:30 +0100 <lambdabot> Unknown command, try @list
2024-11-14 17:12:30 +0100 <Inst> @bailsman
2024-11-14 17:11:19 +0100 <EvanR> in any case idiomatic haskell is a starting point for getting into the weeds for optimization
2024-11-14 17:10:34 +0100 <EvanR> not necessarily, sometimes idiomatic haskell is faster
2024-11-14 17:10:03 +0100 <bailsman> If you write idiomatic haskell, you get as-slow-as-you-would-expect; if you try to write in-place code, you get way-slower-than-you-would-expect.
2024-11-14 17:09:25 +0100 <EvanR> in the case of arrays, for lookup tables
2024-11-14 17:09:22 +0100 <bailsman> I agree with your conclusion - stop trying to be clever and just learn what idiomatic haskell code looks like.
2024-11-14 17:08:53 +0100 <EvanR> but as a looping mechanism
2024-11-14 17:08:44 +0100 <EvanR> in the case of list, usually not as a data structure
2024-11-14 17:08:09 +0100 <EvanR> list and arrays in haskell are both good for certain purposes
2024-11-14 17:07:25 +0100 <EvanR> "they are both called list" isn't that inspiring
2024-11-14 17:06:59 +0100 <EvanR> the C version of linked list is just a bad thing to compare to haskell list unless you are careful to emulate what the haskell version did
2024-11-14 17:06:20 +0100 <bailsman> To me, the fact that the Haskell Vector version is ~100ms, Haskell map is ~25ms, the C allocate-new-linked-list-and-copy version is ~15ms, and the C in-place array is ~2ms suggests that allocating a list really is slow, and that it's indeed what Haskell is doing, but it's still better than trying to do an array in Haskell.
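[editor's note: a sketch of the two Haskell idioms being compared in the timings above, not the actual benchmark. The `step` function is invented for illustration. Mapping over a list allocates fresh cons cells per element; mapping over an unboxed vector writes one flat buffer.]

```haskell
import qualified Data.Vector.Unboxed as U

-- Some per-element work (placeholder for the benchmark's real kernel).
step :: Double -> Double
step x = x * 1.0001 + 1

listVersion :: [Double] -> [Double]
listVersion = map step          -- allocates a new cons cell per element

vectorVersion :: U.Vector Double -> U.Vector Double
vectorVersion = U.map step      -- allocates a single unboxed buffer

main :: IO ()
main = print (listVersion [1, 2] == U.toList (vectorVersion (U.fromList [1, 2])))  -- prints True
```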
2024-11-14 17:06:10 +0100 <EvanR> before claiming stuff about what stuff compiles to you should check it
2024-11-14 17:05:01 +0100 <EvanR> but the specific reasons are off
2024-11-14 17:04:50 +0100 <EvanR> "straight list processing and immutable structures are probably better in haskell than C-like mutable array munging" though is what I've been saying for days
2024-11-14 17:03:44 +0100 <geekosaur> gc only gets involved when the pointer reaches the end of the nursery
2024-11-14 17:03:31 +0100 <geekosaur> the nursery/gen 0 is a bump-pointer allocator
2024-11-14 17:03:20 +0100 <geekosaur> not magic