Newest at the top
2024-11-14 17:44:33 +0100 | Digitteknohippie | (~user@user/digit) (Ping timeout: 252 seconds) |
2024-11-14 17:42:44 +0100 | <bailsman> | It went from 4x slower to 10x faster than plain `map` |
2024-11-14 17:42:20 +0100 | <haskellbridge> | <Bowuigi> Oh yeah unboxing and strict data type fields can help in optimizing in general |
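(For illustration, a minimal sketch of the strict, unpacked fields Bowuigi mentions; the Particle type and its fields are invented here, not taken from the channel. UNPACK asks GHC to store each Double inline in the constructor instead of behind a pointer.)

    data Particle = Particle
      { px :: {-# UNPACK #-} !Double   -- stored inline in the constructor,
      , pv :: {-# UNPACK #-} !Double   -- no boxed Double to chase
      }

    -- strict fields also avoid thunks building up across updates
    advance :: Double -> Particle -> Particle
    advance dt p = p { px = px p + pv p * dt }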
2024-11-14 17:42:00 +0100 | <geekosaur> | otherwise it'll be chasing a lot of pointers |
2024-11-14 17:41:51 +0100 | <geekosaur> | well, yes, that helps |
2024-11-14 17:40:18 +0100 | <bailsman> | need to write unboxed instances for all of your data types. |
2024-11-14 17:40:17 +0100 | <bailsman> | Hmmm. I had Claude.AI write an unboxed small record instance with 50+ lines of code (to my eyes absolutely horrific). Then, using Data.Vector.Unboxed.Mutable the performance is now approaching the C in-place update speed. I don't entirely trust that this won't segfault at some point, but if claude.ai did everything correctly then apparently it *is* possible to write inplace algorithms, you just |
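(A hedged sketch of the kind of code being discussed, not bailsman's actual version: if the small record can be packed as a pair, vector's built-in Unbox instance for tuples avoids a hand-written instance entirely, and U.modify runs the in-place update inside ST. The Rec layout and the update step are invented for illustration.)

    import qualified Data.Vector.Unboxed as U
    import qualified Data.Vector.Unboxed.Mutable as M
    import Control.Monad (forM_)
    import Control.Monad.ST (ST)

    type Rec = (Double, Double)   -- hypothetical small record: (position, velocity)

    stepInPlace :: Double -> U.Vector Rec -> U.Vector Rec
    stepInPlace dt = U.modify go
      where
        -- mutate every element in place; no intermediate vector is allocated
        go :: M.MVector s Rec -> ST s ()
        go mv = forM_ [0 .. M.length mv - 1] $ \i -> do
          (x, v) <- M.read mv i
          M.write mv i (x + v * dt, v)

    main :: IO ()
    main = do
      let v0 = U.replicate 1000000 (0, 1) :: U.Vector Rec
      print (U.sum (U.map fst (stepInPlace 0.016 v0)))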
2024-11-14 17:37:19 +0100 | Digit | (~user@user/digit) (Ping timeout: 265 seconds) |
2024-11-14 17:37:00 +0100 | Digitteknohippie | (~user@user/digit) Digit |
2024-11-14 17:34:26 +0100 | <haskellbridge> | <Bowuigi> It turns out that first class labels are just Proxy on a kind ranging over every possible label |
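(A small sketch of Bowuigi's point, assuming the labels are type-level strings: Proxy used at the Symbol kind gives a first-class value for any label.)

    {-# LANGUAGE DataKinds #-}
    import Data.Proxy (Proxy (..))
    import GHC.TypeLits (KnownSymbol, symbolVal)

    -- the label "foo" as an ordinary value
    foo :: Proxy "foo"
    foo = Proxy

    -- recover the label's name at runtime
    labelName :: KnownSymbol l => Proxy l -> String
    labelName = symbolVal

    main :: IO ()
    main = putStrLn (labelName foo)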
2024-11-14 17:33:44 +0100 | <haskellbridge> | <Bowuigi> Now that everything is solved, it's time to move to something else |
2024-11-14 17:21:27 +0100 | <geekosaur> | llvm still lacks support for pre-CPSed code |
2024-11-14 17:20:48 +0100 | aljazmc | (~aljazmc@user/aljazmc) aljazmc |
2024-11-14 17:19:31 +0100 | <tomsmeding> | :) |
2024-11-14 17:19:01 +0100 | tromp | (~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl) |
2024-11-14 17:16:39 +0100 | <EvanR> | ok |
2024-11-14 17:16:35 +0100 | <tomsmeding> | EvanR: it definitely is not |
2024-11-14 17:16:16 +0100 | <Inst> | probably MY skill issue :( |
2024-11-14 17:16:14 +0100 | <EvanR> | is llvm not the default now anyway |
2024-11-14 17:14:34 +0100 | <bailsman> | Inst: I compiled my benchmark with -O2 -fllvm. Does not seem meaningfully different. Is -O2 the wrong optimization level? |
2024-11-14 17:12:36 +0100 | <Inst> | try compile with -fllvm |
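(Concretely, the suggestion amounts to adding the flag when compiling, assuming an LLVM toolchain GHC can find; Bench.hs stands in for whatever module holds the benchmark.)

    ghc -O2 -fllvm Bench.hs    # LLVM backend
    ghc -O2 Bench.hs           # default native code generator, for comparison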
2024-11-14 17:12:30 +0100 | <lambdabot> | Unknown command, try @list |
2024-11-14 17:12:30 +0100 | <Inst> | @bailsman |
2024-11-14 17:11:19 +0100 | <EvanR> | in any case idiomatic haskell is a starting point for getting into the weeds for optimization |
2024-11-14 17:10:34 +0100 | <EvanR> | not necessarily, sometimes idiomatic haskell is faster |
2024-11-14 17:10:03 +0100 | <bailsman> | If you write idiomatic Haskell, you get as-slow-as-you-would-expect; if you try to write in-place code, you get way-slower-than-you-would-expect.
2024-11-14 17:09:25 +0100 | <EvanR> | in the case of arrays, for lookup tables |
2024-11-14 17:09:22 +0100 | <bailsman> | I agree with your conclusion - stop trying to be clever and just learn what idiomatic haskell code looks like. |
2024-11-14 17:08:53 +0100 | <EvanR> | but as a looping mechanism |
2024-11-14 17:08:44 +0100 | <EvanR> | in the case of list, usually not as a data structure |
2024-11-14 17:08:09 +0100 | <EvanR> | list and arrays in haskell are both good for certain purposes |
2024-11-14 17:07:25 +0100 | <EvanR> | "they are both called list" isn't that inspiring |
2024-11-14 17:06:59 +0100 | <EvanR> | the C version of linked list is just a bad thing to compare to haskell list unless you are careful to emulate what the haskell version did |
2024-11-14 17:06:20 +0100 | <bailsman> | To me, the numbers (Haskell Vector ~100ms, Haskell map ~25ms, C allocate-new-linked-list-and-copy ~15ms, C in-place array ~2ms) suggest that allocating a list is indeed slow, and that is indeed what Haskell is doing, but it's still better than trying to use an array in Haskell.
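(An editorial guess at the shape of the "Haskell map" case in those numbers; the Rec type and the update step are invented for illustration, not taken from bailsman's benchmark.)

    import Data.List (foldl')

    data Rec = Rec !Double !Double   -- (position, velocity)

    step :: Double -> Rec -> Rec
    step dt (Rec x v) = Rec (x + v * dt) v

    main :: IO ()
    main = do
      let recs  = [Rec (fromIntegral i) 1 | i <- [1 .. 1000000 :: Int]]
          recs' = map (step 0.016) recs
      -- force the whole result so the allocation being measured actually happens
      print (foldl' (\acc (Rec x _) -> acc + x) 0 recs')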
2024-11-14 17:06:10 +0100 | <EvanR> | before claiming stuff about what stuff compiles to you should check it |
2024-11-14 17:05:01 +0100 | <EvanR> | but the specific reasons are off |
2024-11-14 17:04:50 +0100 | <EvanR> | "straight list processing and immutable structures are probably better in haskell than C-like mutable array munging" though is what I've been saying for days |
2024-11-14 17:03:44 +0100 | <geekosaur> | gc only gets involved when the pointer reaches the end of the nursery |
2024-11-14 17:03:31 +0100 | <geekosaur> | the nursery/gen 0 is a bump-pointer allocator |
2024-11-14 17:03:20 +0100 | <geekosaur> | not magic |
2024-11-14 17:03:13 +0100 | <bailsman> | I agree - I'm not really sure. Some GC magic probably. But the point is that it's built in and optimized, so it's much faster than trying to emulate in-place updates, which compiles to a morass of work and not 5 asm instructions like the C version.
2024-11-14 17:02:06 +0100 | <geekosaur> | because it's probably not what actually happens |
2024-11-14 17:01:56 +0100 | <geekosaur> | bailsman, what do you think is going on during an allocation? |
2024-11-14 17:01:22 +0100 | <EvanR> | yes it is |
2024-11-14 17:01:08 +0100 | <bailsman> | No it isn't. |
2024-11-14 17:00:57 +0100 | <EvanR> | allocating nodes in haskell is much faster |
2024-11-14 17:00:38 +0100 | <EvanR> | are you allocating nodes with malloc |
2024-11-14 17:00:31 +0100 | <EvanR> | that's... not going to be an apples to apples comparison |
2024-11-14 17:00:30 +0100 | <bailsman> | So I think I'm concluding that map is "the best you can do in Haskell" because it's optimized and a builtin, and any attempt to do in-place algorithms is just going to be massively slow.
2024-11-14 17:00:06 +0100 | <bailsman> | To test my theory, I wrote a C version of the benchmark. Updating a linked list by allocating nodes one by one and copying over the values takes 14ms, approximately as long as Haskell takes to do map. Updating 1M records inplace in an array takes 2ms. |