Newest at the top
2025-02-25 11:33:34 +0100 | __monty__ | (~toonn@user/toonn) toonn |
2025-02-25 11:31:29 +0100 | gmg | (~user@user/gehmehgeh) gehmehgeh |
2025-02-25 11:29:42 +0100 | misterfish | (~misterfis@31-161-39-137.biz.kpn.net) misterfish |
2025-02-25 11:29:26 +0100 | <ski> | (the unsafe stuff ?) |
2025-02-25 11:28:36 +0100 | <ski> | hm ? |
2025-02-25 11:28:15 +0100 | tromp | (~textual@2a02:a210:cba:8500:6ddc:c1a9:bc13:1391) (Quit: My iMac has gone to sleep. ZZZzzz…) |
2025-02-25 11:27:24 +0100 | L29Ah | (~L29Ah@wikipedia/L29Ah) L29Ah |
2025-02-25 11:26:36 +0100 | <[exa]> | ski: interesting construction tho. thanks. :D |
2025-02-25 11:26:20 +0100 | acidjnk | (~acidjnk@p200300d6e7283f38a15cd1ba33b15ba0.dip0.t-ipconnect.de) (Ping timeout: 272 seconds) |
2025-02-25 11:26:03 +0100 | tomsmeding | has a meeting |
2025-02-25 11:25:41 +0100 | <tomsmeding> | ski: right, but at that point I'm not sure one can really say "yes, I'm using ST and not IO" :p |
2025-02-25 11:24:26 +0100 | alfiee | (~alfiee@user/alfiee) (Ping timeout: 272 seconds) |
2025-02-25 11:24:17 +0100 | <ski> | morning |
2025-02-25 11:24:11 +0100 | <Hecate> | morning |
2025-02-25 11:23:55 +0100 | <ski> | [exa] : not afaik |
2025-02-25 11:23:36 +0100 | <lambdabot> | ST s a -> a |
2025-02-25 11:23:35 +0100 | <ski> | @type Control.Monad.Primitive.unsafeInlineST |
2025-02-25 11:23:33 +0100 | <lambdabot> | ST s c -> c |
2025-02-25 11:23:32 +0100 | <ski> | @type System.IO.Unsafe.unsafePerformIO . Control.Monad.ST.Unsafe.unsafeSTToIO -- "I guess that's called \"runST . unsafeCoerce\"" |
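The exchange above observes that `unsafePerformIO . unsafeSTToIO` has type `ST s c -> c`, i.e. the same shape as `Control.Monad.Primitive.unsafeInlineST`. A small sketch of that equivalence, for a pure ST computation where both escapes happen to agree with `runST` (the name `unsafeInlineST'` below is my own; the real `unsafeInlineST` lives in the `primitive` package, as lambdabot shows):

```haskell
import Control.Monad.ST (ST, runST)
import Control.Monad.ST.Unsafe (unsafeSTToIO)
import System.IO.Unsafe (unsafePerformIO)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Sum a list using a mutable STRef accumulator.
sumST :: [Int] -> ST s Int
sumST xs = do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

-- The safe escape: runST's rank-2 type keeps mutable state from leaking.
safeSum :: [Int] -> Int
safeSum xs = runST (sumST xs)

-- The unsafe escape ski spells out: ST s a -> a by way of IO.
-- Unlike runST, nothing stops s (and thus mutable state) from escaping.
unsafeInlineST' :: ST s a -> a
unsafeInlineST' = unsafePerformIO . unsafeSTToIO

main :: IO ()
main = do
  print (safeSum [1 .. 10])                  -- 55
  print (unsafeInlineST' (sumST [1 .. 10]))  -- 55
```

This is also tomsmeding's point: once you route through `unsafePerformIO`, it is hard to claim you are still "using ST and not IO".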
2025-02-25 11:22:24 +0100 | [exa] | goes aFfInE TeNsOrS |
2025-02-25 11:21:49 +0100 | <[exa]> | ski: anyway the main thing I took home from the AD currently is that the array indices need some completely different approach before this works automatically in all cases |
2025-02-25 11:21:11 +0100 | <[exa]> | ski: the differential lambda calculus is btw not directly related to the datatype differentiation (as with zippers), right? |
2025-02-25 11:19:47 +0100 | alfiee | (~alfiee@user/alfiee) alfiee |
2025-02-25 11:18:26 +0100 | misterfish | (~misterfis@h239071.upc-h.chello.nl) (Ping timeout: 252 seconds) |
2025-02-25 11:14:08 +0100 | gmg | (~user@user/gehmehgeh) (Quit: Leaving) |
2025-02-25 11:14:00 +0100 | <ski> | nice :b |
2025-02-25 11:13:41 +0100 | xff0x | (~xff0x@fsb6a9491c.tkyc517.ap.nuro.jp) (Ping timeout: 248 seconds) |
2025-02-25 11:12:33 +0100 | <tomsmeding> | ski: I'm doing my PhD in the group that's currently doing research on Accelerate, so in that sense I'm in the right place :p |
2025-02-25 11:10:10 +0100 | <ski> | ([exa] : you mentioning updates (and reconstruction) made me wonder) |
2025-02-25 11:10:09 +0100 | <ski> | mhm |
2025-02-25 11:09:58 +0100 | <Athas> | No, that came out of a different line of research. |
2025-02-25 11:09:13 +0100 | <ski> | Obsidian ? |
2025-02-25 11:08:52 +0100 | <Athas> | Accelerate itself is also a DPH spinoff. |
2025-02-25 11:08:23 +0100 | <ski> | mm, i see |
2025-02-25 11:07:54 +0100 | ljdarj | (~Thunderbi@user/ljdarj) ljdarj |
2025-02-25 11:07:48 +0100 | <Athas> | ski: the things that didn't work in DPH are dead (mostly the vectorisation transform), but a lot of the other ideas are alive and well, or their successors are. DPH begat Repa, which inspired massiv, which to my knowledge is still alive and in good shape. |
2025-02-25 11:06:33 +0100 | <Athas> | tomsmeding: yes, those plots are from gradbench (but not fully automated yet). |
2025-02-25 11:06:07 +0100 | ljdarj | (~Thunderbi@user/ljdarj) (Ping timeout: 244 seconds) |
2025-02-25 11:02:51 +0100 | <[exa]> | ski: as in AD in this case |
2025-02-25 11:01:10 +0100 | <ski> | [exa] : differentiated as in AD, or say as in differential lambda calculus ? |
2025-02-25 11:00:20 +0100 | <ski> | mainly the parallelism, i suppose, tomsmeding |
2025-02-25 10:59:53 +0100 | divya- | divya |
2025-02-25 10:58:38 +0100 | divya- | (divya@140.238.251.170) divya |
2025-02-25 10:55:43 +0100 | <[exa]> | ski: I was trying to find some such connection before and it doesn't really seem easy. IMO we'd need some very interesting encoding of the array indices and updates that can be differentiated directly and then reconstructed. Most differentiation formulations I've seen basically deny that. |
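For context on the AD being discussed (AD proper, not differential lambda calculus or zipper-style datatype differentiation): the simplest textbook formulation is forward mode via dual numbers, sketched below. This is generic background, not the Accelerate-specific machinery, and it illustrates [exa]'s point in passing: the formulation works on values flowing through arithmetic, with no obvious place to put array index reads and in-place updates.

```haskell
-- Forward-mode AD with dual numbers: carry a value together with its
-- derivative (tangent) and overload arithmetic to propagate both.
data Dual = Dual { primal :: Double, tangent :: Double }
  deriving Show

instance Num Dual where
  Dual x dx + Dual y dy = Dual (x + y) (dx + dy)
  Dual x dx * Dual y dy = Dual (x * y) (dx * y + x * dy)  -- product rule
  negate (Dual x dx)    = Dual (negate x) (negate dx)
  abs    (Dual x dx)    = Dual (abs x) (dx * signum x)
  signum (Dual x _)     = Dual (signum x) 0
  fromInteger n         = Dual (fromInteger n) 0

-- Differentiate f at x by seeding the tangent with 1.
diff :: (Dual -> Dual) -> Double -> Double
diff f x = tangent (f (Dual x 1))

main :: IO ()
main = print (diff (\x -> x * x * x) 2)  -- d/dx x^3 at x=2 is 12.0
```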
2025-02-25 10:53:54 +0100 | kuribas | (~user@ip-188-118-57-242.reverse.destiny.be) kuribas |
2025-02-25 10:46:42 +0100 | <tomsmeding> | in any case, the fact that it doesn't really exist any more makes this a hard proposition :) |
2025-02-25 10:46:27 +0100 | <tomsmeding> | ski: are you thinking of just the parallelism part, or also the distributed computing part of DPH? (IIRC they had distributed execution over multiple machines as a core design goal) |
2025-02-25 10:45:59 +0100 | tzh | (~tzh@c-76-115-131-146.hsd1.or.comcast.net) (Quit: zzz) |
2025-02-25 10:45:24 +0100 | <tomsmeding> | Athas: do you get those nice plots out of gradbench? |
2025-02-25 10:45:18 +0100 | <ski> | Athas : wondering whether there'd be any hope of integrating it with something like the above |