2024/04/30

Newest at the top

2024-04-30 21:47:01 +0200 <tomsmeding> I don't see how making things thunks, or doing the difference-list thing, etc. helps there
2024-04-30 21:46:33 +0200 <tomsmeding> monochrom: I'm not exactly sure what you mean; the problem that I have is that the GC uselessly traverses my huge data structure that is live anyway
2024-04-30 21:46:22 +0200 <EvanR> data is definitely data while closures might contain a bunch of closures, or thunks. So closures are the better bet xD
2024-04-30 21:43:26 +0200 <monochrom> But changing to thunks can be worthwhile if it enables streaming.
2024-04-30 21:42:55 +0200 <monochrom> That would be what I said about "just changes data to thunks".
2024-04-30 21:42:46 +0200 benjaminl(~benjaminl@user/benjaminl) (Ping timeout: 246 seconds)
2024-04-30 21:41:25 +0200 <tomsmeding> you just get a network of closures with the exact same structure as the original data structure :)
2024-04-30 21:41:09 +0200 <tomsmeding> you just create indirection
2024-04-30 21:41:02 +0200 <tomsmeding> if you replace the nodes of the data structure by closures, you don't make the heap size any smaller
2024-04-30 21:40:29 +0200 yin(~yin@user/zero) (Ping timeout: 240 seconds)
2024-04-30 21:40:24 +0200 <monochrom> I wonder if "diff list but for snoc list" helps.
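
For reference, a "diff list but for snoc list" along the lines monochrom wonders about could be sketched as below. As tomsmeding points out above, it only trades list constructors for closures of the same shape, so the live heap does not get any smaller; the names here are made up for illustration.

    -- A difference list specialised to snoc (appending at the right end):
    -- the list is represented as a function that prepends the elements
    -- accumulated so far onto whatever tail it is given.
    newtype SnocDList a = SnocDList ([a] -> [a])

    emptyS :: SnocDList a
    emptyS = SnocDList id

    snoc :: SnocDList a -> a -> SnocDList a
    snoc (SnocDList f) x = SnocDList (f . (x :))

    toList :: SnocDList a -> [a]
    toList (SnocDList f) = f []   -- toList (emptyS `snoc` 1 `snoc` 2) == [1,2]
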
2024-04-30 21:38:07 +0200 <tomsmeding> it would reduce the peak heap size a lot
2024-04-30 21:38:05 +0200 machinedgod(~machinedg@d173-183-246-216.abhsia.telus.net)
2024-04-30 21:37:47 +0200 <monochrom> I don't think top-down reduces fruitless GC anyway. It just changes data to thunks.
2024-04-30 21:28:19 +0200 L29Ah(~L29Ah@wikipedia/L29Ah)
2024-04-30 21:27:38 +0200 <tomsmeding> unsatisfying but probably the best answer
2024-04-30 21:27:22 +0200 <tomsmeding> and hope that the GC doesn't trigger too often
2024-04-30 21:26:38 +0200 <c_wraith> well, then... making the nursery huge isn't terrible. that's why the option exists.
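
As for "hope that the GC doesn't trigger too often": the RTS can report how many collections actually ran. A minimal sketch, assuming the program is started with "+RTS -T" (a larger nursery would be e.g. "+RTS -A512m -T"):

    import GHC.Stats (RTSStats (..), getRTSStats, getRTSStatsEnabled)

    -- Print how many minor and major collections have run so far.
    -- getRTSStats only works when the program was started with "+RTS -T".
    reportGCs :: IO ()
    reportGCs = do
      enabled <- getRTSStatsEnabled
      if enabled
        then do
          s <- getRTSStats
          putStrLn $ "GCs: " ++ show (gcs s)
                  ++ ", major GCs: " ++ show (major_gcs s)
                  ++ ", copied bytes: " ++ show (copied_bytes s)
        else putStrLn "run with +RTS -T to enable RTS statistics"
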
2024-04-30 21:26:23 +0200 <tomsmeding> (this is reverse-mode automatic differentiation, in case you're curious)
2024-04-30 21:26:12 +0200 <tomsmeding> (it's a kind of "tape"/"log" of a computation that is performed; the second phase of the program interprets this tape in reverse)
2024-04-30 21:25:46 +0200 <tomsmeding> this really must be constructed from the bottom up
2024-04-30 21:25:40 +0200 <tomsmeding> unfortunately not
2024-04-30 21:25:20 +0200 <c_wraith> is there no way at all to make the data structure constructed from the top down, so that it isn't created until it's consumed? that's the easiest way to control memory use.
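
A toy model of the kind of tape being described: the forward pass only ever appends entries at the end (bottom-up), and a second phase walks the tape in reverse to propagate adjoints. This is just an illustration of the shape of the problem, not tomsmeding's actual representation; Entry and adjoints are invented here.

    import qualified Data.Map.Strict as M

    -- Each tape entry records which earlier entries it depends on, together
    -- with the local partial derivative with respect to each of them.
    newtype Entry = Entry [(Int, Double)]

    type Tape = [Entry]   -- index 0 first; only ever grown at the end

    -- Reverse phase: seed the final entry's adjoint with 1 and push
    -- contributions back to its dependencies, walking the tape backwards.
    adjoints :: Tape -> M.Map Int Double
    adjoints tape = foldl step (M.singleton (length tape - 1) 1)
                          (reverse (zip [0 ..] tape))
      where
        step acc (i, Entry deps) =
          let a = M.findWithDefault 0 i acc
          in  foldl (\m (j, d) -> M.insertWith (+) j (a * d) m) acc deps
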
2024-04-30 21:25:15 +0200 L29Ah(~L29Ah@wikipedia/L29Ah) ()
2024-04-30 21:25:08 +0200 madeleine-sydney(~madeleine@c-76-155-235-153.hsd1.co.comcast.net) (Quit: Konversation terminated!)
2024-04-30 21:24:30 +0200 <tomsmeding> (the structure gets pretty large)
2024-04-30 21:24:18 +0200 <tomsmeding> the construction phase of my program frankly does little else than add lots of stuff to this structure
2024-04-30 21:23:55 +0200 <c_wraith> compact regions really don't help during construction, unless things can be done in phases.
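
If the program really does split into a construction phase and a consumption phase, the phased use of compact regions mentioned above might look roughly like this. A sketch only, using GHC.Compact from the ghc-compact package; Tape is a stand-in for the real structure:

    import GHC.Compact (compact, getCompact)

    data Tape = Nil | Snoc Tape Double

    -- Once construction is finished, copy the tape into a compact region.
    -- The GC then treats the region as one opaque block and no longer
    -- traverses or copies its internals during the reverse phase.
    -- Note: compact forces and copies the structure once, and the data may
    -- not contain functions or mutable objects.
    freezeTape :: Tape -> IO Tape
    freezeTape t = getCompact <$> compact t
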
2024-04-30 21:23:03 +0200 <tomsmeding> meh and this would be an annoying refactor
2024-04-30 21:22:39 +0200 <tomsmeding> yeah no this is unlikely to work
2024-04-30 21:22:32 +0200 <tomsmeding> I was deluded by castStablePtrToPtr's existence, but that doesn't return a pointer that actually means anything
2024-04-30 21:22:19 +0200 <tomsmeding> oh I see
2024-04-30 21:21:51 +0200 <tomsmeding> maybe this would make only the IORef not move, and still make the GC copy the whole data structure every time :)
2024-04-30 21:21:39 +0200 <c_wraith> (and the mechanism is that it picks a number that can be cast to a void*, then throws it into a hash table in the RTS...)
2024-04-30 21:21:08 +0200 <tomsmeding> > A stable pointer is a reference to a Haskell expression that is guaranteed not to be affected by garbage collection, i.e., it will neither be deallocated nor will the value of the stable pointer itself change during garbage collection
2024-04-30 21:20:47 +0200 <c_wraith> Does StablePtr do anything there? my impression was that it just wraps an arbitrary value in a way that lets you send it via a void* in the FFI
2024-04-30 21:20:45 +0200 <mauke> as a last resort, write the whole thing using manually allocated memory :-)
2024-04-30 21:19:35 +0200 <tomsmeding> it's clever, let's see if that does anything
2024-04-30 21:19:23 +0200 <tomsmeding> I could try having each thread have its own StablePtr (IORef a) to its "head" of the structure
2024-04-30 21:18:50 +0200 <mauke> could make a StablePtr (IORef a), I guess. but I'm not sure if that even does anything
2024-04-30 21:17:52 +0200 yin(~yin@user/zero)
2024-04-30 21:17:16 +0200 <tomsmeding> so you shouldn't make tons of them
2024-04-30 21:17:13 +0200 <tomsmeding> I think I recall from somewhere that StablePtrs are kept in a list somewhere in the RTS
2024-04-30 21:16:29 +0200 <mauke> honestly, no idea :-)
2024-04-30 21:15:58 +0200 <tomsmeding> would that be a good idea?
2024-04-30 21:15:54 +0200 <tomsmeding> so I'd have to create new StablePtrs all the time
2024-04-30 21:15:47 +0200 <tomsmeding> mauke: I'm adding to my structure "at the top" all the time, so the root of the data structure changes all the time
2024-04-30 21:15:41 +0200 yin(~yin@user/zero) (Ping timeout: 240 seconds)
2024-04-30 21:15:13 +0200 <mauke> tomsmeding: if you make a StablePtr to your structure, does that affect GC behavior?
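
For the record, the StablePtr (IORef a) experiment discussed above might be set up as below. A sketch only: Tape and the function names are invented, and as established above the StablePtr merely pins the IORef in the RTS stable-pointer table; the structure it points at is still ordinary heap data that the copying GC traverses and moves.

    import Data.IORef (IORef, modifyIORef', newIORef)
    import Foreign.StablePtr (freeStablePtr, newStablePtr)

    data Tape = Nil | Snoc Tape Double

    -- One mutable "head" per builder thread, registered as a StablePtr so
    -- the IORef itself is a GC root that is never deallocated.
    withTapeHead :: (IORef Tape -> IO r) -> IO r
    withTapeHead k = do
      ref <- newIORef Nil
      sp  <- newStablePtr ref     -- entry in the RTS stable-pointer table
      r   <- k ref
      freeStablePtr sp            -- don't leak the table entry
      pure r

    -- Construction phase: keep snoc-ing entries onto the tape head.
    buildTape :: IORef Tape -> IO ()
    buildTape ref = mapM_ (\x -> modifyIORef' ref (`Snoc` x)) [1 .. 1000]
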
2024-04-30 21:14:50 +0200 peterbecich(~Thunderbi@47.229.123.186)