2025/01/31

Newest at the top

2025-01-31 21:52:42 +0100 <dminuoso> mauke: Yes. It's really just a kind of mutual exclusion lock for thunks.
2025-01-31 21:52:24 +0100 <dminuoso> You can think of it as some kind of mutual exclusion lock, but with special logic to detect if the entry code recursed into itself.
2025-01-31 21:51:58 +0100 <euouae> oh mauke's example relates to black holes? I'll read the whole convo then
2025-01-31 21:51:48 +0100 <dminuoso> If not, it will set that mark.
2025-01-31 21:51:39 +0100 <dminuoso> Now that entry code checks whether a particular mark, BLACKHOLE, is set; if it is set, you get a <<loop>>, assuming this happened from within the same Haskell thread.
2025-01-31 21:51:15 +0100 <euouae> interestingly `let x = head [x] in x` just hangs in ghci
2025-01-31 21:50:41 +0100 <dminuoso> euouae: and you demand that value by just jmp'ing into that memory region.
2025-01-31 21:50:25 +0100 <dminuoso> euouae: So roughly, if you have `let x = <expensive> in ..` then we can think of x being represented in memory as some memory region with a bunch of code
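The picture sketched above (a thunk as a memory region of entry code, with a BLACKHOLE mark checked on entry) can be modeled at the Haskell level. A minimal sketch, assuming a made-up `ThunkState` type and a `force` function that stand in for GHC's real heap objects and entry code (this is not GHC's actual representation):

```haskell
import Control.Exception (try)
import Data.IORef

-- Not GHC's real heap layout: just a model of a thunk's three states.
data ThunkState a
  = Unevaluated (IO a)  -- entry code that has not run yet
  | BlackHole           -- entry code is currently running
  | Evaluated a         -- thunk has been updated with its value

-- "Entering" the thunk: mark it BLACKHOLE before running the entry
-- code, so re-entry from the same thread is detected as a loop.
force :: IORef (ThunkState a) -> IO a
force ref = do
  st <- readIORef ref
  case st of
    Evaluated v   -> pure v
    BlackHole     -> ioError (userError "<<loop>>")
    Unevaluated m -> do
      writeIORef ref BlackHole
      v <- m
      writeIORef ref (Evaluated v)
      pure v

main :: IO ()
main = do
  -- a thunk whose entry code demands the thunk itself
  ref <- newIORef (Unevaluated (pure (0 :: Int)))
  writeIORef ref (Unevaluated (force ref))
  r <- try (force ref) :: IO (Either IOError Int)
  print r
```

Demanding the self-referential thunk hits the BlackHole marker and fails instead of looping forever, mirroring the same-thread detection described above.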
2025-01-31 21:49:53 +0100 <euouae> sorry I'm reading from the top so I'm trying to catch up on what was said
2025-01-31 21:49:44 +0100 <euouae> mauke, is <<loop>> possible because of sharing?
2025-01-31 21:49:12 +0100 <dminuoso> Even in the single-threaded RTS you will have concurrency.
2025-01-31 21:49:09 +0100 <mauke> ah, right
2025-01-31 21:48:34 +0100 <dminuoso> While the other use of "threads" is about Haskell threads.
2025-01-31 21:48:24 +0100 <dminuoso> Note that "threaded RTS" talks about OS threads
2025-01-31 21:48:08 +0100 <dminuoso> mauke: I'm not sure what the threaded RTS changes, but blackholing should be needed for the single-threaded RTS too.
2025-01-31 21:46:05 +0100 <dminuoso> So consider it a bonus *if* it triggers.
2025-01-31 21:46:00 +0100 <mauke> i.e. in multi-thread mode a thread could re-enter the thunk and end up waiting for itself (deadlock)
2025-01-31 21:45:54 +0100 <dminuoso> (And it does not work reliably either for a bunch of reasons)
2025-01-31 21:45:23 +0100 <dminuoso> The <<loop>> is just some opportunistic debugging helper, it's not the core feature.
2025-01-31 21:45:18 +0100 <mauke> did they change that? I have a vague memory that <<loop>> detection didn't work in multi-thread mode
2025-01-31 21:45:04 +0100 <dminuoso> So it gets woken up whenever the thunk finished.
2025-01-31 21:44:48 +0100 <dminuoso> If another thread enters a blackhole, it gets put on a list to be woken up later.
2025-01-31 21:44:33 +0100 <dminuoso> mauke: If the same thread enters a blackhole, that blackhole acts as loop detection.
2025-01-31 21:44:25 +0100 <dminuoso> mauke: Okay, so there are two behaviors to a blackhole.
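The second behavior described above, another thread sleeping on a blackhole until the thunk is updated, can be pictured with ordinary concurrency primitives. A rough analogy (not the RTS implementation), using an `MVar` as the written-once result slot:

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, readMVar, threadDelay)

main :: IO ()
main = do
  -- the empty MVar plays the role of the blackholed thunk
  result <- newEmptyMVar
  _ <- forkIO $ do
    threadDelay 100000          -- simulate the expensive entry code
    putMVar result (42 :: Int)  -- "update" the thunk with its value
  -- a second thread demanding the value: it is put to sleep and
  -- woken up once the thunk has been updated
  v <- readMVar result
  print v
```

As with a real blackhole, the waiting thread does no redundant work; it simply blocks until the first evaluator publishes the result.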
2025-01-31 21:44:07 +0100 <mauke> so if another thread tries to evaluate the same thunk later, it will simply wait until the first thread is done computing a value
2025-01-31 21:43:33 +0100 <mauke> in a multi-threaded environment, the first thread to reach a given thunk instead switches out the code pointer to an "enter waiting queue" subroutine
2025-01-31 21:42:48 +0100 <mauke> at least in single-threaded mode
2025-01-31 21:42:30 +0100 <mauke> this is implemented by temporarily switching the code pointer of a thunk to a subroutine that throws an exception
2025-01-31 21:41:58 +0100 * euouae furiously types some notes of the previous discussion, needs more time to read the latest stuff being said
2025-01-31 21:41:55 +0100 <mauke> anyway, the <<loop>> exception happens when evaluation of a thunk tries to re-enter the same thunk (i.e. you have a value that depends on itself)
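The `let x = head [x] in x` example discussed here can be run as a standalone compiled program. A sketch that catches the resulting `NonTermination` exception (whether you see <<loop>> or a plain hang depends on when the RTS blackholes the thunk; GHCi may just hang, as noted above):

```haskell
import Control.Exception (NonTermination (..), evaluate, try)

main :: IO ()
main = do
  -- a value that depends on itself: forcing x re-enters x's own thunk
  r <- try (evaluate (let x = head [x] in x :: Int))
  case r of
    Left NonTermination -> putStrLn "caught <<loop>>"
    Right v             -> print v
```

`NonTermination` is the exception that GHC's RTS throws to a thread it finds blocked on its own thunk, and it renders as `<<loop>>` when uncaught.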
2025-01-31 21:41:06 +0100 <mauke> huh, interesting
2025-01-31 21:40:03 +0100 <dminuoso> See https://simonmar.github.io/bib/papers/multiproc.pdf
2025-01-31 21:39:37 +0100 <mauke> I may be wrong on my terminology, but I think blackholing only happens in single-threaded mode
2025-01-31 21:39:25 +0100 <dminuoso> And that all alone pretty much stops memory issues.
2025-01-31 21:38:58 +0100 <dminuoso> euouae: That is, there is automatic protection ensuring that no two threads attempt to evaluate the same expression concurrently.
2025-01-31 21:38:38 +0100 <lambdabot> *Exception: <<loop>>
2025-01-31 21:38:37 +0100 <dminuoso> euouae: First off, when switching between threads, entered thunks are blackholed.
2025-01-31 21:38:36 +0100 <mauke> > let x = head [x] in x
2025-01-31 21:38:18 +0100 <euouae> No
2025-01-31 21:38:12 +0100 <mauke> on a related topic, have you ever seen the <<loop>> exception?
2025-01-31 21:38:02 +0100 <dminuoso> No, it's all built to handle that.
2025-01-31 21:38:02 +0100 <euouae> I can see why memory can blow up then
2025-01-31 21:37:49 +0100 <euouae> Oh it does? That can be bad
2025-01-31 21:37:34 +0100 <mauke> euouae: yes
2025-01-31 21:37:15 +0100 <mauke> my point is that just as main() is a regular C function (has an address, can be called from inside the program, etc), so Haskell main is a regular IO () value and can be used as such
2025-01-31 21:37:11 +0100 <dminuoso> Oh you're really diving deep now.
2025-01-31 21:36:58 +0100 <euouae> so what about threads? is sharing happening across threads?