Newest at the top
2025-03-12 17:42:03 +0100 | acidjnk_new | (~acidjnk@p200300d6e7283f52210926b325fe4262.dip0.t-ipconnect.de) (Ping timeout: 245 seconds) |
2025-03-12 17:42:03 +0100 | Inst | (~Inst@user/Inst) (Remote host closed the connection) |
2025-03-12 17:41:09 +0100 | jmcantrell | (~weechat@user/jmcantrell) jmcantrell |
2025-03-12 17:40:30 +0100 | tzh | (~tzh@c-76-115-131-146.hsd1.or.comcast.net) tzh |
2025-03-12 17:35:57 +0100 | Guest4 | (~Guest4@2804:14c:3f87:8149:ce7a:9b84:f941:24de) |
2025-03-12 17:34:08 +0100 | j1n37- | (~j1n37@user/j1n37) j1n37 |
2025-03-12 17:33:52 +0100 | j1n37 | (~j1n37@user/j1n37) (Ping timeout: 252 seconds) |
2025-03-12 17:32:14 +0100 | yegorc | (~yegorc@user/yegorc) yegorc |
2025-03-12 17:31:18 +0100 | Inst | (~Inst@user/Inst) Inst |
2025-03-12 17:30:25 +0100 | Inst | (~Inst@user/Inst) (Remote host closed the connection) |
2025-03-12 17:28:15 +0100 | <Inst> | the chunking function generates mempty to fill out the last chunk if it's an odd-length list |
2025-03-12 17:27:53 +0100 | <Inst> | yeah it's keyed to monoid |
2025-03-12 17:27:38 +0100 | <ski> | presumably your combining function is associative |
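[A minimal sketch of the pairwise pass being described here, assuming the combining function is the Monoid (<>) and that an odd-length list is padded with mempty; the name pairUp is illustrative, not Inst's actual code:]

    pairUp :: Monoid m => [m] -> [m]
    pairUp (x:y:rest) = (x <> y) : pairUp rest
    pairUp [x]        = [x <> mempty]   -- odd-length tail: pad with mempty
    pairUp []         = []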
2025-03-12 17:27:18 +0100 | <ski> | mm (as i initially was supposing) |
2025-03-12 17:26:32 +0100 | <Inst> | second |
2025-03-12 17:26:14 +0100 | <ski> | (unclear which of these two options you're doing, in a pass) |
2025-03-12 17:25:47 +0100 | <lambdabot> | [0 + 1,2 + 3,4 + 5,6 + 7] |
2025-03-12 17:25:45 +0100 | <ski> | > map (\[x,y] -> x + y) (chunk 2 [0 .. 7]) :: [Expr] |
2025-03-12 17:25:41 +0100 | <lambdabot> | [0 + 1,1 + 2,2 + 3,3 + 4,4 + 5,5 + 6,6 + 7] |
2025-03-12 17:25:40 +0100 | <ski> | > (zipWith (+) `ap` tail) [0 .. 7] :: [Expr] |
2025-03-12 17:25:24 +0100 | <lambdabot> | [1,5,9,13] |
2025-03-12 17:25:23 +0100 | <ski> | > map (\[x,y] -> x + y) (chunk 2 [0 .. 7]) |
2025-03-12 17:25:20 +0100 | <lambdabot> | [1,3,5,7,9,11,13] |
2025-03-12 17:25:19 +0100 | <ski> | > (zipWith (+) `ap` tail) [0 .. 7] |
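[chunk is not a Prelude function; a stand-in with the behaviour shown above (groups of n, with a possibly shorter last group) could look like the sketch below. The zipWith (+) `ap` tail version uses ap at the function/Reader monad, i.e. \xs -> zipWith (+) xs (tail xs), which combines each element with its successor instead of forming disjoint pairs:]

    -- illustrative stand-in for the 'chunk' used in the lambdabot examples
    chunk :: Int -> [a] -> [[a]]
    chunk _ [] = []
    chunk n xs = take n xs : chunk n (drop n xs)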
2025-03-12 17:24:26 +0100 | <c_wraith> | because that's the part that's leading to so many of these issues |
2025-03-12 17:24:00 +0100 | alfiee | (~alfiee@user/alfiee) (Ping timeout: 272 seconds) |
2025-03-12 17:23:46 +0100 | <c_wraith> | Be sure to use linked lists in the other language, too. |
2025-03-12 17:23:31 +0100 | <Inst> | well tbh i probably should try implementing it imperatively in some other language |
2025-03-12 17:23:05 +0100 | <ski> | (or maybe you're combining each element with its next element, as opposed to ones at even indices with the following adjacent ones at odd indices) |
2025-03-12 17:22:42 +0100 | <Inst> | sorry, it's a dumb exercise, but i find it fun to think through and try to test |
2025-03-12 17:21:49 +0100 | <ski> | sounds similar to a merge sort, in that tree aspect |
2025-03-12 17:21:12 +0100 | <Inst> | so it's called recursively on itself until it matches [x] |
2025-03-12 17:20:56 +0100 | <Inst> | the actual goal here is to fold every element in the list with its adjacent element, producing a new list, then fold the resulting list the same way until it reduces to a single element |
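[A sketch of the tree-shaped reduction being described, assuming an associative combining function with identity mempty; treeFold and pairUp are illustrative names rather than the code under discussion:]

    treeFold :: Monoid m => [m] -> m
    treeFold []  = mempty
    treeFold [x] = x
    treeFold xs  = treeFold (pairUp xs)
      where
        -- one pass: combine adjacent pairs; a trailing odd element is
        -- carried up unchanged (the same as padding it with mempty)
        pairUp (a:b:rest) = (a <> b) : pairUp rest
        pairUp rest       = rest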
2025-03-12 17:19:54 +0100 | peterbecich | (~Thunderbi@syn-047-229-123-186.res.spectrum.com) peterbecich |
2025-03-12 17:19:50 +0100 | <Inst> | thank you for answering why parFoldMap isn't a thing |
2025-03-12 17:19:24 +0100 | <c_wraith> | You can't just write a parallel fold. You need to consider what the fold is actually doing and parallelize that. |
2025-03-12 17:19:08 +0100 | alfiee | (~alfiee@user/alfiee) alfiee |
2025-03-12 17:18:38 +0100 | <c_wraith> | The only way to make this pay off at the level par works at is to work with a very high-level understanding of what your code is doing. |
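[One common shape for the "parFoldMap" being asked about, when (<>) is associative: split the input into large chunks, fold each chunk sequentially, spark one evaluation per chunk, and combine the results. A sketch using Control.Parallel.Strategies, not an existing library function; the chunk size and the rdeepseq strategy are assumptions:]

    import Control.DeepSeq (NFData)
    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- one spark per chunk, each chunk result forced fully, then combined
    parFoldMapChunked :: (Monoid m, NFData m) => Int -> (a -> m) -> [a] -> m
    parFoldMapChunked n f xs =
      mconcat (parMap rdeepseq (foldMap f) (chunksOf n xs))
      where
        chunksOf _ [] = []
        chunksOf k ys = take k ys : chunksOf k (drop k ys)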
2025-03-12 17:16:52 +0100 | <c_wraith> | that's what contention does, yes |
2025-03-12 17:16:36 +0100 | <Inst> | and apparently it blocks threads? |
2025-03-12 17:16:15 +0100 | <c_wraith> | there is contention on trying to evaluate the same value twice in parallel |
2025-03-12 17:14:49 +0100 | <c_wraith> | you need to understand how ghc implements lazy evaluation before you can really understand this. |
2025-03-12 17:14:22 +0100 | <Inst> | no :( |
2025-03-12 17:14:12 +0100 | <c_wraith> | please, do you know how ghc uses blackholes? |
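[A sketch of the situation being described, assuming GHC's usual thunk machinery: a spark and the main thread both demand the same unevaluated thunk; once one of them starts evaluating it, GHC overwrites the thunk with a blackhole, and whoever arrives second blocks on it (with lazy blackholing some work may even be duplicated first):]

    import Control.Parallel (par, pseq)

    contended :: Integer
    contended =
      let shared = sum [1 .. 10000000 :: Integer]  -- a single shared thunk
      in  shared `par` (shared `pseq` shared + 1)  -- spark it AND force it here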
2025-03-12 17:13:57 +0100 | <Inst> | which has 60% conversion and creates 2-4 times more sparks |
2025-03-12 17:13:49 +0100 | <Inst> | like 8 times the cont a version |
2025-03-12 17:13:39 +0100 | <Inst> | there's 80-90% conversion, efficient spark creation (iirc it generates fewer sparks overall), but it takes forever on a 10 million element list |
2025-03-12 17:13:00 +0100 | <c_wraith> | even if multiples of them fire, they're going to face blackhole contention or redundant work |
2025-03-12 17:12:37 +0100 | <Inst> | and that explains the contradiction, right? |
2025-03-12 17:12:22 +0100 | <c_wraith> | except it's creating a spark at every single level |
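[One way to avoid "a spark at every single level": only spark the recursive halves down to a fixed depth, then fall back to a sequential fold. A sketch assuming an associative (<>); the names and depth cutoff are illustrative, and par/pseq only force to WHNF, so a deeper force may be needed depending on the monoid:]

    import Control.Parallel (par, pseq)

    treeFoldPar :: Monoid m => Int -> [m] -> m
    treeFoldPar _ []  = mempty
    treeFoldPar _ [x] = x
    treeFoldPar depth xs
      | depth <= 0 = mconcat xs                          -- sequential below the cutoff
      | otherwise  = left `par` (right `pseq` (left <> right))
      where
        (ls, rs) = splitAt (length xs `div` 2) xs
        left     = treeFoldPar (depth - 1) ls
        right    = treeFoldPar (depth - 1) rs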