Newest at the top
2025-03-12 17:23:31 +0100 | <Inst> | well tbh i probably should try implementing it imperatively in some other language |
2025-03-12 17:23:05 +0100 | <ski> | (or maybe you're combining each element with its next element, as opposed to ones at even indices with the following adjacent ones at odd indices) |
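The two pairings ski distinguishes above can be sketched directly (hypothetical names `slidingPairs` and `adjacentPairs`, not from the discussion):

```haskell
-- ski's distinction: each element combined with its successor (overlapping,
-- sliding window) versus each even-index element combined with the adjacent
-- odd-index one (non-overlapping pairs).
slidingPairs :: (a -> a -> a) -> [a] -> [a]
slidingPairs f xs = zipWith f xs (drop 1 xs)

adjacentPairs :: (a -> a -> a) -> [a] -> [a]
adjacentPairs f (x:y:rest) = f x y : adjacentPairs f rest
adjacentPairs _ xs         = xs   -- odd leftover (or empty) passes through
```

Only `adjacentPairs` halves the list on each pass, so only it drives a repeated pairwise reduction down to a single element in logarithmically many passes; `slidingPairs` shrinks the list by just one element per pass.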
2025-03-12 17:22:42 +0100 | <Inst> | sorry, it's a dumb exercise, but i find it fun to think through and try to test |
2025-03-12 17:21:49 +0100 | <ski> | sounds similar to a merge sort, in that tree aspect |
2025-03-12 17:21:12 +0100 | <Inst> | so it's called recursively on itself until it matches [x] |
2025-03-12 17:20:56 +0100 | <Inst> | the actual goal here is to fold every element in the list with the adjacent element, producing a new list, then fold the resulting list until it reduces to one level |
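Inst's goal above — fold adjacent elements into a new list, then recurse until one level remains — can be written out as a sequential sketch (hypothetical names `pairUp` and `treeFold`, assuming an associative combining function):

```haskell
-- Combine adjacent elements pairwise; an odd leftover passes through unchanged.
pairUp :: (a -> a -> a) -> [a] -> [a]
pairUp f (x:y:rest) = f x y : pairUp f rest
pairUp _ xs         = xs

-- Recurse until the list collapses to a single element, giving the
-- merge-sort-like reduction tree ski mentions. Partial: undefined on the
-- empty list, mirroring the `[x]` base case from the discussion.
treeFold :: (a -> a -> a) -> [a] -> a
treeFold _ [x] = x
treeFold f xs  = treeFold f (pairUp f xs)
```

For example, `treeFold (+) [1..8]` builds a balanced tree of additions of depth 3 rather than a left- or right-leaning chain of depth 7, which is what makes this shape interesting to parallelize.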
2025-03-12 17:19:50 +0100 | <Inst> | thank you for answering why parFoldMap isn't a thing |
2025-03-12 17:19:24 +0100 | <c_wraith> | You can't just write a parallel fold. You need to consider what the fold is actually doing and parallelize that. |
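One concrete reading of c_wraith's point, as a sketch: if the combining function is known to be associative (and `z` an identity for it), the fold itself can be restructured into independent chunk folds that are coarse enough to be worth sparking. `parFoldChunked` is a hypothetical name; `par` comes from the `parallel` package's Control.Parallel, and the chunk size `n` is assumed positive.

```haskell
import Control.Parallel (par)
import Data.List (foldl')

-- Parallelize what the fold is doing, not the fold itself: split the input
-- into chunks, fold each chunk sequentially, spark one unit of work per
-- chunk, then combine the chunk results sequentially.
parFoldChunked :: Int -> (a -> a -> a) -> a -> [a] -> a
parFoldChunked n f z xs =
    foldl' f z (sparkAll (map (foldl' f z) (chunks n xs)))
  where
    chunks _ [] = []
    chunks k ys = let (h, t) = splitAt k ys in h : chunks k t
    -- spark each chunk result ahead of the final sequential combine
    sparkAll = foldr (\a cont -> par a (a : cont)) []
```

Each spark here is a whole chunk fold, so the work per spark can be made large enough to outweigh spark overhead — unlike sparking per element.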
2025-03-12 17:18:38 +0100 | <c_wraith> | The only way to make this pay off at the level par works at is to work with a very high-level understanding of what your code is doing. |
2025-03-12 17:16:52 +0100 | <c_wraith> | that's what contention does, yes |
2025-03-12 17:16:36 +0100 | <Inst> | and apparently it blocks threads? |
2025-03-12 17:16:15 +0100 | <c_wraith> | there is contention on trying to evaluate the same value twice in parallel |
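c_wraith's contention scenario in miniature (illustrative only — the contention happens inside the GHC RTS and is not observable in the result value; `par` is from the `parallel` package):

```haskell
import Control.Parallel (par)

-- Two sparks are pointed at the same unevaluated thunk. Whichever spark
-- converts first starts evaluating `shared` and blackholes it; the other
-- then either blocks on the blackhole or briefly duplicates the work.
sharedThunkDemo :: Integer
sharedThunkDemo =
  let shared = sum [1 .. 100000 :: Integer]  -- one thunk, referenced twice
  in shared `par` (shared `par` shared)
```

This is why sparking values that other sparks (or the main thread) are already racing to evaluate wastes capacity instead of adding parallelism.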
2025-03-12 17:14:49 +0100 | <c_wraith> | you need to understand how ghc implements lazy evaluation before you can really understand this. |
2025-03-12 17:14:22 +0100 | <Inst> | no :( |
2025-03-12 17:14:12 +0100 | <c_wraith> | please, do you know how ghc uses blackholes? |
2025-03-12 17:13:57 +0100 | <Inst> | which has 60% conversion and creates 2-4 times more sparks |
2025-03-12 17:13:49 +0100 | <Inst> | like 8 times the cont a version |
2025-03-12 17:13:39 +0100 | <Inst> | there's 80-90% conversion, efficient spark creation (iirc it generates less sparks overall), but it takes forever on a 10 million element list |
2025-03-12 17:13:00 +0100 | <c_wraith> | even if multiples of them fire, they're going to face blackhole contention or redundant work |
2025-03-12 17:12:37 +0100 | <Inst> | and that explains the contradiction, right? |
2025-03-12 17:12:22 +0100 | <c_wraith> | except it's creating a spark at every single level |
2025-03-12 17:11:59 +0100 | <c_wraith> | Oh, in that order. Yes, it is. |
2025-03-12 17:11:38 +0100 | <Inst> | why not? |
2025-03-12 17:11:25 +0100 | <c_wraith> | well, not exactly. |
2025-03-12 17:11:23 +0100 | <Inst> | since it has to evaluate all the way to the end of the list before it returns anything |
2025-03-12 17:11:01 +0100 | <Inst> | the interesting thing is that a cont, in this particular context, is effectively foldr' |
2025-03-12 17:10:34 +0100 | <Inst> | ski: it's a stupid rabbit hole that I somehow expect has treasure instead of the usual contents of rabbit holes |
2025-03-12 17:09:50 +0100 | <Inst> | !!! |
2025-03-12 17:09:37 +0100 | <c_wraith> | also, it turns out that [] is especially hostile to parallelization |
2025-03-12 17:09:24 +0100 | <ski> | i don't really see the point of using `par cont (..)' here. what's the intent ? |
2025-03-12 17:08:56 +0100 | <Inst> | but the degenerate case, at least for me, seems to open up interesting questions |
2025-03-12 17:08:40 +0100 | <Inst> | monoid interface is too restrictive, and this type of large-array stuff i'm testing for is better done via a specialized library supporting massively parallel computing |
2025-03-12 17:08:07 +0100 | <c_wraith> | Which is why it really isn't a big thing |
2025-03-12 17:07:48 +0100 | <c_wraith> | It turns out this isn't just something you can sprinkle on top of existing code to get magic results. |
2025-03-12 17:07:15 +0100 | <c_wraith> | that's going to parallelize horribly anyway. It's too low-level. |
2025-03-12 17:04:23 +0100 | <Inst> | so it's actually (\(a,b) cont -> let res = a <> b in par cont res `pseq` res : cont) |
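Inst's lambda above, wrapped into a complete definition (hypothetical name `parPairs`; `par` and `pseq` from the `parallel` package):

```haskell
import Control.Parallel (par, pseq)

-- Fold a list of pairs: combine each pair with (<>), sparking the rest of
-- the result list while forcing the current combination to WHNF.
parPairs :: Semigroup a => [(a, a)] -> [a]
parPairs = foldr step []
  where
    step (a, b) cont =
      let res = a <> b
      in par cont res `pseq` (res : cont)
```

As c_wraith observes later in the log, this sparks `cont` at every cons, so converted sparks race one another down the same suffix of the result list.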
2025-03-12 17:03:39 +0100 | <Inst> | but yeah, I realize that doing it via monoids is the wrong interface, but might as well be there? |
2025-03-12 17:02:42 +0100 | <Inst> | i'm trying to set up parFold for a ParFoldable typeclass |
2025-03-12 17:02:24 +0100 | <Inst> | parMap iirc is implemented via Eval monad, which wraps unsafePerformIO |
2025-03-12 17:01:59 +0100 | <c_wraith> | oh. so you're actually writing parMap |
2025-03-12 17:01:49 +0100 | <Inst> | something like that |
2025-03-12 17:01:43 +0100 | <Inst> | it's foldr (\a cont -> par cont a `pseq` a: cont) |
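The foldr Inst quotes, filled out into a complete parallel-map sketch (hypothetical name `parMapSketch`; the `f` argument is an assumed completion of the quoted lambda, and the real `parMap` lives in Control.Parallel.Strategies):

```haskell
import Control.Parallel (par, pseq)

-- Spark the tail of the result list while forcing each mapped element
-- to WHNF, so consumers find the later elements already under evaluation.
parMapSketch :: (a -> b) -> [a] -> [b]
parMapSketch f = foldr step []
  where
    step a cont =
      let b = f a
      in par cont b `pseq` (b : cont)
```

Note that only WHNF of each `b` is forced; if `f` returns a structured value, the bulk of its work may still happen later, outside any spark.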
2025-03-12 17:01:38 +0100 | <c_wraith> | and heck. Even where the input list is coming from matters. Does the generation of the input list have a linear data dependency? Well, then, you're not going to benefit a lot from attempting to process multiple elements in parallel. |
2025-03-12 17:01:04 +0100 | <ski> | mm |
2025-03-12 16:59:44 +0100 | <c_wraith> | Depending on what f does, it might make sense to do any number of different things in parallel. Or none. |