Newest at the top
| 2026-03-07 15:16:50 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 256 seconds) |
| 2026-03-07 15:12:06 +0100 | <Guest89> | oh damn, strict data actually massively improved the runtime on larger inputs |
| 2026-03-07 15:11:39 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn |
| 2026-03-07 15:08:00 +0100 | <Guest89> | exa do you have a reference description of the data structure? |
| 2026-03-07 15:07:26 +0100 | <Guest89> | well the point is more that the data structures should be limited to trees and lists |
| 2026-03-07 15:00:55 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 264 seconds) |
| 2026-03-07 14:55:52 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn |
| 2026-03-07 14:52:32 +0100 | <[exa]> | (missing def.: ofKey (a :> _) = a ) |
| 2026-03-07 14:51:41 +0100 | <[exa]> | (apologies for the streaming mess around) |
| 2026-03-07 14:50:36 +0100 | <[exa]> | anyway I ended up replacing pqueue with this (the code is essentially a multi-stream merge): https://paste.tomsmeding.com/UskOMkBm, was like 3x faster |
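The linked paste is not reproduced here, so the following is only a hypothetical sketch of a multi-stream merge of sorted lazy lists; the `Stream`/`:>` type and `mergeAll` name are assumptions, with `ofKey` filled in as given above.

```haskell
-- A non-empty stream keyed by its head element (assumed shape; the
-- actual paste may differ).
data Stream k = k :> [k]

ofKey :: Stream k -> k
ofKey (a :> _) = a

-- Merge two sorted lists, preserving order.
merge2 :: Ord k => [k] -> [k] -> [k]
merge2 [] ys = ys
merge2 xs [] = xs
merge2 (x:xs) (y:ys)
  | x <= y    = x : merge2 xs (y:ys)
  | otherwise = y : merge2 (x:xs) ys

-- Merge many sorted streams pairwise (balanced), keeping total work
-- around O(n log k) for k streams instead of repeated linear merges.
mergeAll :: Ord k => [[k]] -> [k]
mergeAll []  = []
mergeAll [x] = x
mergeAll xs  = mergeAll (pairUp xs)
  where
    pairUp (a:b:rest) = merge2 a b : pairUp rest
    pairUp rest       = rest
```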
| 2026-03-07 14:50:01 +0100 | <[exa]> | +1 for set, not too slow at all |
| 2026-03-07 14:49:47 +0100 | <haskellbridge> | <loonycyborg> if you ever need smallest element you can use set |
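For the "use a set as a min-queue" suggestion, a minimal sketch with `Data.Set` from containers: `Set.minView` pops the smallest element in O(log n), which is in the same ballpark as a binary heap's delete-min. One caveat: a `Set` deduplicates, so equal-priority entries need a tiebreaker (e.g. pairing each value with a unique counter).

```haskell
import qualified Data.Set as Set

-- Pop the smallest element, O(log n).
popMin :: Ord a => Set.Set a -> Maybe (a, Set.Set a)
popMin = Set.minView

-- Drain a set in ascending order by repeatedly taking the minimum.
drain :: Ord a => Set.Set a -> [a]
drain s = case Set.minView s of
  Nothing      -> []
  Just (x, s') -> x : drain s'
```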
| 2026-03-07 14:49:30 +0100 | [exa] | hides |
| 2026-03-07 14:49:27 +0100 | <[exa]> | tuple is also an array |
| 2026-03-07 14:48:41 +0100 | <Guest89> | part of my thesis involves *not* using arrays though |
| 2026-03-07 14:48:28 +0100 | <Guest89> | I only ever need the smallest element |
| 2026-03-07 14:48:21 +0100 | <[exa]> | if not they are strictly slower than a manual arrayheap |
| 2026-03-07 14:48:09 +0100 | <[exa]> | PQueues are great IF you need to also pick the elements by some index |
| 2026-03-07 14:47:43 +0100 | <Guest89> | otherwise I have some toy implementations from chris okasaki's book but it's, uhh, not working as intended right now |
| 2026-03-07 14:46:51 +0100 | <Guest89> | the only asymptotic difference is constant or log time lookup though |
| 2026-03-07 14:46:42 +0100 | <Guest89> | I was considering using I think it's called PQueue? |
| 2026-03-07 14:46:27 +0100 | <Guest89> | right now it's just a red-black tree so it's not an optimal priority queue |
| 2026-03-07 14:46:20 +0100 | <[exa]> | just in case lemme find my quickhacked heapqueue implementation |
| 2026-03-07 14:46:02 +0100 | <Guest89> | it's not as much of a hot spot as I expected |
| 2026-03-07 14:45:57 +0100 | <[exa]> | so a priority list? |
| 2026-03-07 14:45:51 +0100 | <Guest89> | yes |
| 2026-03-07 14:45:39 +0100 | <[exa]> | btw do you store these things in some kind of priority queue or ordered container or so? |
| 2026-03-07 14:44:50 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 248 seconds) |
| 2026-03-07 14:44:24 +0100 | <Guest89> | ooo |
| 2026-03-07 14:44:17 +0100 | <[exa]> | btw you can write that case-y comparison as: `compare s1 s2 <> compare r1 r2 <> compare ishigh1 ishigh2` |
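This works because the `Semigroup` instance for `Ordering` keeps the first non-`EQ` result, giving lexicographic comparison without nested `case` expressions. A self-contained sketch (the record and its field names are guesses, not Guest89's actual type):

```haskell
import Data.Ord (comparing)

-- Hypothetical record standing in for the type being compared.
data Entry = Entry { s :: Int, r :: Int, isHigh :: Bool }

-- Chain comparisons with <>: later comparisons only matter when the
-- earlier ones return EQ.
compareEntry :: Entry -> Entry -> Ordering
compareEntry a b =
     comparing s a b
  <> comparing r a b
  <> comparing isHigh a b
```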
| 2026-03-07 14:43:42 +0100 | <Guest89> | we could move to a separate chat if you want |
| 2026-03-07 14:43:23 +0100 | <[exa]> | oh cool |
| 2026-03-07 14:43:03 +0100 | <Guest89> | do you want a new profile? |
| 2026-03-07 14:43:01 +0100 | <Guest89> | I've cut off about 30% of the allocations so far with it |
| 2026-03-07 14:42:41 +0100 | <[exa]> | well lmk if these 2 things change anything, if not I'd say we profile again and see |
| 2026-03-07 14:41:02 +0100 | <Guest89> | I mean the primary data structure is itself one big list, but I'm still not sure whether laziness or not is the way to go |
| 2026-03-07 14:40:04 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn |
| 2026-03-07 14:39:48 +0100 | <[exa]> | (except the ones where you know that laziness is required) |
| 2026-03-07 14:39:25 +0100 | <[exa]> | also do add bangs to all your datatype definitions (Pair !Ptr !Ptr, etc) |
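A minimal sketch of what the bang patterns do: `!` forces each field to weak head normal form when the constructor is applied, so long-lived structures don't accumulate chains of thunks. (The `StrictData` extension makes all fields in a module strict by default, which may be what "strict data" refers to later in the log.)

```haskell
-- Strict fields: both Ints are evaluated when the Pair is built,
-- instead of being stored as unevaluated thunks.
data Pair = Pair !Int !Int

addPair :: Pair -> Pair -> Pair
addPair (Pair a b) (Pair c d) = Pair (a + c) (b + d)
```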
| 2026-03-07 14:38:53 +0100 | <Guest89> | inlining seemed to take the worst of the initial overhead (like 20% or something) but yeah, thunks everywhere |
| 2026-03-07 14:38:52 +0100 | madresch | (~Thunderbi@user/madresch) (Ping timeout: 256 seconds) |
| 2026-03-07 14:38:30 +0100 | <Guest89> | I will look into it |
| 2026-03-07 14:38:23 +0100 | <[exa]> | anyway lots of the allocations in your profile actually seem to point to typeclass methods so I guess that a bit of specialization could help there |
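Background for the specialization suggestion: overloaded functions receive a typeclass dictionary at runtime, and the indirect calls through it often force boxing and extra allocation. GHC's `SPECIALIZE` pragma (or `INLINABLE` plus call-site specialization) generates a monomorphic copy that skips the dictionary. A sketch with an illustrative function, not code from the thesis:

```haskell
-- Overloaded over Num a: calls go through a dictionary unless
-- specialized or inlined.
sumSquares :: Num a => [a] -> a
sumSquares = sum . map (\x -> x * x)

-- Ask GHC to emit a dedicated monomorphic copy for Int.
{-# SPECIALIZE sumSquares :: [Int] -> Int #-}
```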
| 2026-03-07 14:38:16 +0100 | <Guest89> | so compiler- or language-specific optimizations are kind of not the point there |
| 2026-03-07 14:37:45 +0100 | <Guest89> | well it's supposed to be an implementation that can be used as a jumping-off point if someone more nerdy about proofs would want to make a provably correct version |
| 2026-03-07 14:37:12 +0100 | CallipygousPepe | (~reuben@user/CallipygousPepe) CallipygousPepe |
| 2026-03-07 14:36:39 +0100 | <[exa]> | Guest89: by "semantically correct" you mean "readable" :D |
| 2026-03-07 14:35:53 +0100 | <Guest89> | my tl;dr is that the extra LOC should be whatever |
| 2026-03-07 14:35:36 +0100 | <Guest89> | I'm doing this for my master's thesis, but I'll probably end up presenting a "clean" version that's semantically correct in the thesis itself and then using an optimized version for the actual benchmarks |
| 2026-03-07 14:34:20 +0100 | <[exa]> | (tbh my binary changed from like 9M to 9.5M when I was trying it, so I decided I don't care) |