Newest at the top
2025-03-12 17:14:49 +0100 | <c_wraith> | you need to understand how ghc implements lazy evaluation before you can really understand this. |
2025-03-12 17:14:22 +0100 | <Inst> | no :( |
2025-03-12 17:14:12 +0100 | <c_wraith> | please, do you know how ghc uses blackholes? |
2025-03-12 17:13:57 +0100 | <Inst> | which has 60% conversion and creates 2-4 times more sparks |
2025-03-12 17:13:49 +0100 | <Inst> | like 8 times the cont a version |
2025-03-12 17:13:39 +0100 | <Inst> | there's 80-90% conversion, efficient spark creation (iirc it generates less sparks overall), but it takes forever on a 10 million element list |
2025-03-12 17:13:00 +0100 | <c_wraith> | even if multiples of them fire, they're going to face blackhole contention or redundant work |
2025-03-12 17:12:37 +0100 | <Inst> | and that explains the contradiction, right? |
2025-03-12 17:12:22 +0100 | <c_wraith> | except it's creating a spark at every single level |
2025-03-12 17:11:59 +0100 | <c_wraith> | Oh, in that order. Yes, it is. |
2025-03-12 17:11:38 +0100 | <Inst> | why not? |
2025-03-12 17:11:25 +0100 | <c_wraith> | well, not exactly. |
2025-03-12 17:11:23 +0100 | <Inst> | since it has to evaluate all the way to the end of the list before it returns anything |
2025-03-12 17:11:01 +0100 | <Inst> | the interesting thing is that a cont, in this particular context, is effectively foldr' |
2025-03-12 17:10:34 +0100 | <Inst> | ski: it's a stupid rabbit hole that I somehow expect has treasure instead of the usual contents of rabbit holes |
2025-03-12 17:09:50 +0100 | <Inst> | !!! |
2025-03-12 17:09:37 +0100 | <c_wraith> | also, it turns out that [] is especially hostile to parallelization |
2025-03-12 17:09:24 +0100 | <ski> | i don't really see the point of using `par cont (..)' here. what's the intent ? |
2025-03-12 17:08:56 +0100 | <Inst> | but the degenerate case, at least for me, seems to open up interesting questions |
2025-03-12 17:08:40 +0100 | <Inst> | monoid interface is too restrictive, and this type of large-array stuff i'm testing for is better done via a specialized library supporting massively parallel computing |
2025-03-12 17:08:07 +0100 | <c_wraith> | Which is why it really isn't a big thing |
2025-03-12 17:07:48 +0100 | <c_wraith> | It turns out this isn't just something you can sprinkle on top of existing code to get magic results. |
2025-03-12 17:07:15 +0100 | <c_wraith> | that's going to parallelize horribly anyway. It's too low-level. |
2025-03-12 17:04:23 +0100 | <Inst> | so it's actually (\(a,b) cont -> let res = a <> b in par cont res `pseq` res : cont) |
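A minimal runnable sketch of how that combining step might slot into a fold over adjacent pairs. The driver, the `pairUp` helper, and the function names are my assumptions; only the lambda is from the discussion above (it needs only a Semigroup, not a full Monoid):

```haskell
import Control.Parallel (par, pseq)

-- Hypothetical pairing helper: group a list into adjacent pairs,
-- dropping a trailing odd element.
pairUp :: [a] -> [(a, a)]
pairUp (x:y:rest) = (x, y) : pairUp rest
pairUp _          = []

-- Fold the pairs with the step from the chat: spark the rest of the
-- fold (`cont`), force `res = a <> b` to WHNF via pseq, then cons it.
parPairFold :: Semigroup m => [m] -> [m]
parPairFold =
  foldr (\(a, b) cont ->
           let res = a <> b
           in par cont res `pseq` res : cont)
        []
    . pairUp
```

Compile with `-threaded` and run with `+RTS -N` for the sparks to have any effect; on one capability they all fizzle.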
2025-03-12 17:03:39 +0100 | <Inst> | but yeah, I realize that doing it via monoids is the wrong interface, but might as well be there? |
2025-03-12 17:02:42 +0100 | <Inst> | i'm trying to set up parFold for a ParFoldable typeclass |
2025-03-12 17:02:24 +0100 | <Inst> | parMap iirc is implemented via Eval monad, which wraps unsafePerformIO |
2025-03-12 17:01:59 +0100 | <c_wraith> | oh. so you're actually writing parMap |
2025-03-12 17:01:49 +0100 | <Inst> | something like that |
2025-03-12 17:01:43 +0100 | <Inst> | it's foldr (\a cont -> par cont a `pseq` a: cont) |
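Filling in the missing nil case and list argument from that snippet (the completion and the name are mine), so it compiles as written:

```haskell
import Control.Parallel (par, pseq)

-- At each cons: spark the rest of the fold (`cont`), force `a` to
-- WHNF (pseq forces `par cont a`, which sparks cont and returns a),
-- then cons. This is the "spark at every single level" behaviour
-- discussed above.
parEager :: [a] -> [a]
parEager = foldr (\a cont -> par cont a `pseq` a : cont) []
```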
2025-03-12 17:01:38 +0100 | <c_wraith> | and heck. Even where the input list is coming from matters. Does the generation of the input list have a linear data dependency? Well, then, you're not going to benefit a lot from attempting to process multiple elements in parallel. |
2025-03-12 17:01:04 +0100 | <ski> | mm |
2025-03-12 16:59:44 +0100 | <c_wraith> | Depending on what f does, it might make sense to do any number of different things in parallel. Or none. |
2025-03-12 16:58:38 +0100 | <c_wraith> | all it does is hand control back to f immediately, with all other behavior up to f. |
2025-03-12 16:56:50 +0100 | <c_wraith> | foldr is too low-level to really parallelize in any meaningful way. |
2025-03-12 16:55:45 +0100 | <ski> | so, with `par cont a', you're sparking off the recursive calls in `foldr'. each one performed would force its `a'. with `par a cont', you'd instead be sparking off the `a's all directly |
2025-03-12 16:55:30 +0100 | <c_wraith> | It's all *very* dependent on context. the whole way par works only is effective because of laziness. |
2025-03-12 16:53:43 +0100 | <Inst> | it's weird, because the par cont a, #1, generates more sparks, #2, fizzles more sparks, and #3, is faster |
2025-03-12 16:53:40 +0100 | <ski> | (well, i suppose it will be forced to WHNF. but not sparked) |
2025-03-12 16:53:01 +0100 | <ski> | `pseq (par cont a) (..)' seems a bit useless, the `a' will get ignored by the `pseq' ? |
2025-03-12 16:51:56 +0100 | <Inst> | i have two dummy cases under Data.Time benchmarking, the first is par a b, the second is par b a |
2025-03-12 16:50:42 +0100 | <c_wraith> | `par a b' creates a spark to evaluate a and returns b. |
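A minimal illustration of the semantics just described, assuming the `parallel` package's Control.Parallel; the function name and the split-in-half structure are mine:

```haskell
import Control.Parallel (par, pseq)

-- `par l ...` sparks l for possible parallel evaluation and returns
-- its second argument; `pseq r ...` forces r to WHNF first, so the
-- spark for l has a chance to run before `l + r` demands l.
parSumHalves :: [Int] -> Int
parSumHalves xs =
  let (ys, zs) = splitAt (length xs `div` 2) xs
      l = sum ys
      r = sum zs
  in l `par` (r `pseq` l + r)
```

The result is the same as `sum xs` either way; only the evaluation order (and potential parallelism under `+RTS -N`) differs.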
2025-03-12 16:49:54 +0100 | <Inst> | par a cont should be generating a ton of sparks, but the cont can't return until all the sparks are generated, right? |
2025-03-12 16:49:40 +0100 | <Inst> | what is the difference, though? |