2024/05/21

2024-05-21 00:14:32 +0200 <nitrix> As someone returning to Haskell, what's the state of the ecosystem now? Are projects primarily Cabal or Stack?
2024-05-21 00:15:29 +0200 <geekosaur> 50-50
2024-05-21 00:15:44 +0200 <geekosaur> but cabal has been increasing its "market share" of late
2024-05-21 00:16:57 +0200 <glguy> I still check that my projects build in stack but don't use it day to day
2024-05-21 00:17:19 +0200 <yin> yeah. +ghcup -stack
2024-05-21 00:17:59 +0200 <yin> maybe +ghcid depending on how long you've been away
2024-05-21 00:19:04 +0200 <glguy> I like ghcid during Advent of Code, but HLS is easy enough to use these days that I don't bother in general
2024-05-21 00:22:09 +0200 <yin> ghcid --warnings --no-status --run --clear --no-height-limit
2024-05-21 00:24:09 +0200 <yin> for advent of code this is what i do, and i test the sample inputs first
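For context, a minimal single-file skeleton that fits yin's ghcid invocation might look like this (the file name `sample.txt` and the `parse`/`solve` bodies are placeholders, not from the chat):

```haskell
-- Minimal Advent-of-Code-style Main for use with
--   ghcid --warnings --no-status --run --clear --no-height-limit
-- ghcid reloads on save and re-runs `main` each time.
module Main where

main :: IO ()
main = do
  input <- readFile "sample.txt"   -- test the sample input first
  print (solve (parse input))

-- placeholder parser and solver
parse :: String -> [Int]
parse = map read . lines

solve :: [Int] -> Int
solve = sum
```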
2024-05-21 07:41:29 +0200 <vladl> What patterns are present in a recursion scheme that requires synchronization? For example, recursively interpolating subpixels in an image, where the interpolation step requires the parent pixel neighborhood? The subpixels get folded back into the original resolution, so this is almost a hylomorphism, except the coalgebra gets "split" over the synchronization step. This seems like a common pattern but I
2024-05-21 07:41:35 +0200 <vladl> can't really find much on it, which could definitely be a skill issue. I read "Fantastic Morphisms and where to find them" but none of these really fit the bill. There's a problem on tree nexuses in Richard Bird's Pearls of Functional Algorithm design that I went through that, again, seemed very close to what I wanted but I couldn't make the pieces fit.
2024-05-21 07:50:40 +0200 <probie> I don't quite understand what you mean by "synchronization" here
2024-05-21 07:53:11 +0200 * geekosaur is thinking this sounds more like a comonad
2024-05-21 07:53:39 +0200 <vladl> At a given resolution j, before interpolating the subpixels at resolution j+1, the neighboring (sub)pixels at resolution j have to have been computed. So we need to sync layer-by-layer. Synchronization like a fence or a barrier.
2024-05-21 07:57:40 +0200 <vladl> Yes I think I see a comonad in there too, but it's also split. Like, let's suppose a 1D array of pixels p with neighborhoods w, so something like [w p]. We have a (w p -> p), but in order to extend and get a (w p) out, we have to step all the way out of the [] in order to propagate neighbor values. So we can go [p] -> [w p], but that [] prevents me from making it a proper comonad.
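vladl's 1D setup can be made concrete; a minimal sketch (the `W` type and `windows` are illustrative, not from any library):

```haskell
-- A fixed-width neighbourhood: left neighbour, focus, right neighbour.
data W p = W p p p deriving (Eq, Show)

-- [p] -> [W p]: only interior pixels get a full window, so the result
-- is two elements shorter than the input. That length change is what
-- blocks a lawful comonad instance on the outer list.
windows :: [p] -> [W p]
windows (a:b:c:rest) = W a b c : windows (b:c:rest)
windows _            = []
```

e.g. `windows [p0,p1,p2,p3]` produces windows focused on p1 and p2 only.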
2024-05-21 07:58:13 +0200 <probie> I don't see what requires you to synchronize layer-by-layer. A subpixel merely requires its neighbours at the previous resolution to have been computed, not the entire previous layer
2024-05-21 07:59:11 +0200 <vladl> That's true but I'm relaxing my focus to the layer scale in hopes of making it easier for me to reason about
2024-05-21 08:00:39 +0200 <vladl> Also because my particular use case ultimately does need to synchronize layer-by-layer for other reasons (memory allocation strategy) so I may as well
2024-05-21 08:20:24 +0200 <vladl> Actually, scratch that - we don't have a (w p -> p), we have a (w p -> f p), where f is the coalgebra functor (what contains the subpixels). So the comonad situation is even messier.
2024-05-21 08:21:35 +0200 <vladl> It's like... taking an anamorphism and a comonad and trying to thread them through one another somehow.
2024-05-21 08:22:50 +0200 <vladl> Or, hopefully, I am just overcomplicating things and there is some more elegant formalism for this out there
2024-05-21 08:34:48 +0200 * ski isn't too clear on the concrete example with subpixel interpolation
2024-05-21 08:36:01 +0200 <ski> you're building a quadtree of the pixels, or something ?
2024-05-21 08:37:09 +0200 <vladl> You guessed right. If it matters, the pixels are actually fluid density distributions and the interpolation is meant to refine the grid cells until they're fine enough for the fluid to advect across in one time step.
2024-05-21 08:37:39 +0200 <ski> hm, or maybe considering a rectangle of pixels, and then a rectangle of all windows around each pixel, and then a rectangle of all windows around those, &c. (so this is a DAG, not a tree) ?
2024-05-21 08:38:49 +0200 <vladl> Well it is a tree in the sense that every parent cell is exactly decomposed into its child cells (they overlap spatially). But in terms of data dependencies, it is a DAG.
2024-05-21 08:38:50 +0200 <ski> (windows of a particular fixed size, in terms of the elements of the layer below, say)
2024-05-21 08:39:32 +0200 * ski doesn't know the term "advect"
2024-05-21 08:40:06 +0200 <vladl> It just means the movement of materials. So basically it is the fluid traversing space.
2024-05-21 08:40:09 +0200 <ski> do two adjacent parent cells share child cells ?
2024-05-21 08:40:59 +0200 <vladl> No, no child cells are shared. So imagine each cell gets broken up into 4 child cells, but the values of the child cells get interpolated from the parent cell and its adjacent neighbors (for a first-order interpolation scheme)
2024-05-21 08:42:05 +0200 <ski> hm, interpolated from parent cell, and parentsibling cells ? or also from cousin cells ?
2024-05-21 08:42:34 +0200 <vladl> Only from parent sibling cells. So a cell at resolution j only depends on cell values at resolution j-1
2024-05-21 08:43:41 +0200 <ski> mhm
2024-05-21 08:44:00 +0200 <ski> so how does information flow ? only from root towards leaves ?
2024-05-21 08:44:07 +0200 <ski> hmm
2024-05-21 08:44:29 +0200 <ski> well, to sibling children as well, yea
2024-05-21 08:44:50 +0200 <vladl> Both ways, but not at the same time. We unfold all the way, then do some transformations, and then fold it back up to the original resolution.
2024-05-21 08:45:49 +0200 <ski> mhm, so first stage is basically from root toward leaves (but including from siblings to children). and then later from leaves to root again ?
2024-05-21 08:46:36 +0200 <vladl> so a 1D example, say you have [x0, x1, x2, x3, x4...] and you want to expand x2 into [y0, y1], then y0 = f 0 [x1, x2, x3] and y1 = f 1 [x1, x2, x3] for some interpolation function f
2024-05-21 08:46:40 +0200 <vladl> yes
2024-05-21 08:48:51 +0200 <ski> hm, i see
2024-05-21 08:49:25 +0200 <ski> (or `[y0,y1] = g [x1,x2,x3]')
2024-05-21 08:49:38 +0200 <vladl> yes, that's more accurate
2024-05-21 08:50:06 +0200 <ski> so this would be `w p -> f p'
2024-05-21 08:50:16 +0200 <vladl> yes, exactly
2024-05-21 08:53:21 +0200 <ski> still not following what you mean by the synchronization
2024-05-21 08:55:09 +0200 <vladl> so, suppose we have [[y0, y1], [y2, y3], ...] from the expansion and we flattened it down to [y0, y1, y2, y3,...]. So we want to expand y1 into [z0,z1] but we need [y0, y1, y2] to do this. but x2 only computes y0 and y1, so we need x3 to have been expanded into y2 and y3 before we can compute z0,z1
2024-05-21 08:56:25 +0200 <vladl> so z's have to wait until all of the y's in their interpolation domain are done
2024-05-21 08:57:09 +0200 <vladl> which means x2 and x3 have to both finish their expansions before either y1 or y2 can be expanded
2024-05-21 08:58:24 +0200 <ski> "but x2 only computes y0 and y1, so we need x3" -- hm, shouldn't `x2' and `x3' be `x0' and `x1' ?
2024-05-21 08:59:05 +0200 <ski> (hm, or maybe you don't have cropped windows/neighbourhoods at the edges. that would make it `x1' and `x2' though, i think)
2024-05-21 08:59:36 +0200 <vladl> we don't expand x0, because it doesn't have a complete interpolation domain
2024-05-21 08:59:58 +0200 <ski> mm, right
2024-05-21 09:03:54 +0200 <vladl> And then i just try to simplify it for myself and consider dependencies at the scale of an entire layer at a time, instead of worrying about the implicit DAG in the window-wise dependencies
2024-05-21 09:04:48 +0200 <ski> hmm .. so i guess you only need siblings,cousins,&c. ("same generation") up to a common ancestor `n' levels up, where `n' would depend on the width of `w' and the branching factor of `f'
2024-05-21 09:05:29 +0200 <ski> (only thinking of fairly "regular" `w' and `f' here (e.g. probably linear), rather than say more arbitrary ones)
2024-05-21 09:06:07 +0200 <ski> at least, for your example above, `n' would seem to be `2'
2024-05-21 09:06:08 +0200 <vladl> yes, but note the common ancestor could be pretty far up, if the cell's position is near a power of two.
2024-05-21 09:06:18 +0200 <ski> hmm
2024-05-21 09:06:30 +0200 <ski> oh right. i was misthinking here
2024-05-21 09:08:02 +0200 <ski> the flattening, to generate `w', is complicating stuff
2024-05-21 09:09:09 +0200 <ski> "consider dependencies at the scale of an entire layer at a time" -- i suppose you mean generating a layer completely, before increasing the resolution. is this what you meant by "synchronization" ?
2024-05-21 09:09:32 +0200 <vladl> yes. the flattening complicates things significantly, and yes that is what i mean by synchronization
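At the layer scale, one refinement step of the kind being discussed can be sketched as follows (the interpolation `g` is a hypothetical first-order scheme chosen for illustration, not vladl's actual code):

```haskell
-- Expand every interior cell into two subcells via g :: w p -> f p
-- (here w p ~ (p,p,p) and f p ~ [p]), then flatten. Expanding the
-- whole layer before flattening is exactly the synchronization step:
-- no subcell is windowed until all parent-level neighbours exist.
refineLayer :: Fractional p => [p] -> [p]
refineLayer xs =
  concat [ g (l, x, r) | (l, x, r) <- zip3 xs (drop 1 xs) (drop 2 xs) ]
  where
    -- linear interpolation: subcells sit a quarter cell either side
    -- of the parent centre, on the local slope (r - l) / 2
    g (l, x, r) = [x - (r - l) / 8, x + (r - l) / 8]
```

e.g. `refineLayer [0,1,2,3]` is `[0.75,1.25,1.75,2.25]`; iterating it while keeping only the previous layer matches the two-layers-at-a-time strategy vladl mentions.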
2024-05-21 09:13:58 +0200 <ski> given `t p', you can get to `t (w p)', and then to `t (f p)'. then that becomes `t p' with one level deeper
2024-05-21 09:16:09 +0200 <vladl> yes. t needs to be able to absorb f (and remember it so it can form it later) so it seems like a tree with layer-wise views
2024-05-21 09:19:35 +0200 <ski> i guess, something like `data TreeD f :: Nat -> * -> * where Leaf :: p -> TreeD f Zero p; Branch :: f (TreeD f n p) -> TreeD f (Succ n) p' or `data TreeB f p = Conquer p | Divide (TreeB f (f p))'
2024-05-21 09:20:22 +0200 <ski> (where `TreeB f p' amounts to `exists n :: Nat. TreeD f n p')
2024-05-21 09:20:58 +0200 <ski> the interesting part, of course, is how to do the `t p -> t (w p)' part
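A compilable version of ski's sketch, under the assumption that `Leaf` carries a payload (names follow the chat; `expandB` and `flattenB` are added illustrations, not from the discussion):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

import Data.Kind (Type)

data Nat = Zero | Succ Nat

-- Depth-indexed tree: the index counts absorbed layers of `f`.
data TreeD (f :: Type -> Type) (n :: Nat) (p :: Type) where
  Leaf   :: p -> TreeD f 'Zero p
  Branch :: f (TreeD f n p) -> TreeD f ('Succ n) p

-- Nested ("non-regular") variant: TreeB f p ~ exists n. TreeD f n p.
data TreeB f p = Conquer p | Divide (TreeB f (f p))

-- Absorb one expansion step p -> f p at every leaf, going one level
-- deeper; this is the `t p -> t (f p)' part (the `t p -> t (w p)'
-- windowing part is the hard one, as ski notes).
expandB :: Functor f => (p -> f p) -> TreeB f p -> TreeB f p
expandB step (Conquer p) = Divide (Conquer (step p))
expandB step (Divide t)  = Divide (expandB (fmap step) t)

-- For f = [], recover the leaves (polymorphic recursion, hence the
-- mandatory type signature).
flattenB :: TreeB [] p -> [p]
flattenB (Conquer p) = [p]
flattenB (Divide t)  = concat (flattenB t)
```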
2024-05-21 09:24:52 +0200 <vladl> Yeah, something like that
2024-05-21 09:25:41 +0200 <ski> .. i'm wondering if you could carry siblings with you, as you go down, so that you don't have to traverse arbitrarily high up again to retrieve them
2024-05-21 09:26:24 +0200 <vladl> So the way I do it in my specific case, is that, during the unfolding, each cell carries with it an object called a "topology" that you can basically think of as a list of pointers to the neighbors
2024-05-21 09:26:49 +0200 <vladl> So when a cell gets expanded, it actually computes this right away, so it knows from the moment of birth where its neighbors are, so to speak
2024-05-21 09:27:14 +0200 <ski> hm, so instead of `t p >-> t (w p) >-> t (f p)', could one do `t (w p) >-> t (w (f p)) >-> t (f (w p))' ?
2024-05-21 09:28:09 +0200 <vladl> i don't think so, because the new cell only knows where its neighbors are, not what their values are
2024-05-21 09:29:22 +0200 <ski> mhm
2024-05-21 09:30:52 +0200 <vladl> so in my case, in a sense i have to pair the topology with the entire parent layer (since the offsets are with respect to the layer) in order to compute the child cells
2024-05-21 09:33:10 +0200 <vladl> that's another reason I go layer-by-layer, I can access neighbors in O(1) and I only have to store 2 layers at a time
2024-05-21 09:33:21 +0200 <vladl> instead of O(log n) and storing the entire tree
2024-05-21 09:34:13 +0200 <ski> yup, just like dynamic programming with fixed memory/history
2024-05-21 09:36:00 +0200 <vladl> yeah, and I was considering the dynamorphism as a model but it's that lateral dependency that trips me up.
2024-05-21 09:36:59 +0200 <ski> (hm, this all is making me think of "structure syntax" now .. although that's tangential to what you're pondering)
2024-05-21 09:38:39 +0200 <vladl> you mean like how ListF a b is a functor over its recursive structure, that kind of thing?
2024-05-21 09:39:54 +0200 <ski> well .. it's an idea i've been pondering, on and off. basically, given a structure/collection with elements, i want to name the structure (the layer(s)) itself, separately from the elements
2024-05-21 09:42:49 +0200 <ski> e.g. with `concat :: [[a]] -> [a]' and `sum :: [Integer] -> Integer' we have a law `sum . map sum = sum . concat'. i want to express this as `sum (| l0 ; sum (| l1 ; n |) |) = sum (| concat (| l0,l1 |) ; n |)'. here `l0' is the name of the outer list structure, and `l1' is the name of each inner list structure (it's plural), and `n' is the name of each element (it's doubly plural)
2024-05-21 09:48:51 +0200 <vladl> i think i'm following. i read `sum(| a ; b |)` as sum of b's ranging over a, so this equation shows how different views of the structure relate to one another, which tells you about the structure as a whole
2024-05-21 09:51:37 +0200 <vladl> i'm guessing you might be able to derive equivalent-but-different traversals if you had some syntax like that?
2024-05-21 09:51:58 +0200 <ski> (and then i want to be able to say things like `[] -> Maybe', which would be a right kan extension. `([] -> Maybe) a' amounting to `forall b. (a -> [b]) -> Maybe b'. and `(exists n. t n) a', i think (?), amounting to `exists n. t n a' (and similarly for `forall'))
2024-05-21 09:53:11 +0200 <ski> `sum (| a ; b |)' sums all the `b's inside the `a' structure. `concat (| l0,l1 |)' concatenates ("flattens") the `l0' and `l1' structures (the latter being contained inside the former). note that there are no elements mentioned here (hence no `;', just `,')
2024-05-21 09:54:12 +0200 <vladl> we don't think of l1 as elements of l0?
2024-05-21 09:54:25 +0200 <ski> basically, it's an attempt to make a calculus where you can name individual structures, or layers, *without* including the contents/elements in that name. *separating* structure from contents
2024-05-21 09:54:44 +0200 <ski> concat :: [] . [] -> []
2024-05-21 09:55:13 +0200 <ski> this is why it is `concat (| l0,l1 |)', the `.' (composition) means that you call `concat' with two layers (`l0' and `l1' here)
2024-05-21 09:55:24 +0200 <ski> (and the result is also a list layer)
2024-05-21 09:56:16 +0200 <ski> so, polymorphic operations don't need to mention the elements of the type parameters. but `sum' involves both the structure and the elements, so it still needs to mention both
2024-05-21 09:56:55 +0200 <vladl> i see now
2024-05-21 09:57:32 +0200 <ski> (`sum' is like a monoid action, `[]' acting on `Integer'. the law involving `concat' and `sum' above is similar to e.g. `(x * y) * v = x * (y * v)' law for scalars `x',`y' and vector `v')