2025/01/31

Newest at the top

2025-01-31 21:45:04 +0100 <dminuoso> So it gets woken up when the thunk finishes.
2025-01-31 21:44:48 +0100 <dminuoso> If another thread enters a blackhole, it gets put on a list to be woken up later.
2025-01-31 21:44:33 +0100 <dminuoso> mauke: If the same thread enters a blackhole, that blackhole acts as loop detection.
2025-01-31 21:44:25 +0100 <dminuoso> mauke: Okay, so there are two behaviors to a blackhole.
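A minimal sketch of the behavior dminuoso describes: two threads force the same thunk, whichever gets there first blackholes it, and the other blocks until the value is ready. The name shared and the trace call are illustrative only; with GHC's lazy blackholing the work can occasionally be duplicated, so treat the single "computing" line as typical rather than guaranteed.

    import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
    import Control.Exception (evaluate)
    import Debug.Trace (trace)

    -- One shared thunk; forcing it prints "computing" via trace.
    shared :: Integer
    shared = trace "computing" (sum [1 .. 5000000])

    main :: IO ()
    main = do
      done <- newEmptyMVar
      _ <- forkIO $ do
        _ <- evaluate shared   -- whichever thread arrives second blocks on the blackhole
        putMVar done ()
      print shared             -- whichever thread arrives first blackholes the thunk
      takeMVar done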
2025-01-31 21:44:08 +0100 alexherbo2 (~alexherbo@2a02-8440-3503-94e0-1866-04f2-f81a-c1ec.rev.sfr.net) (Remote host closed the connection)
2025-01-31 21:44:07 +0100 <mauke> so if another thread tries to evaluate the same thunk later, it will simply wait until the first thread is done computing a value
2025-01-31 21:43:33 +0100 <mauke> in a multi-threaded environment, the first thread to reach a given thunk instead switches out the code pointer to an "enter waiting queue" subroutine
2025-01-31 21:42:48 +0100 <mauke> at least in single-threaded mode
2025-01-31 21:42:30 +0100 <mauke> this is implemented by temporarily switching the code pointer of a thunk to a subroutine that throws an exception
2025-01-31 21:41:58 +0100 euouae furiously types some notes of the previous discussion, needs more time to read the latest stuff being said
2025-01-31 21:41:55 +0100 <mauke> anyway, the <<loop>> exception happens when evaluation of a thunk tries to re-enter the same thunk (i.e. you have a value that depends on itself)
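A small illustration of the <<loop>> case mauke describes, next to a self-reference that is fine; bad and fibs are made-up names, and whether bad reports <<loop>> or simply diverges can depend on how it is compiled.

    -- A thunk whose evaluation re-enters itself: GHC's blackholing typically
    -- reports this as <<loop>> (it may instead just spin, depending on optimisation).
    bad :: Int
    bad = bad + 1

    -- Self-reference through lazy constructors is fine: each element depends
    -- only on earlier ones, so the thunk never re-enters itself.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)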
2025-01-31 21:41:06 +0100 <mauke> huh, interesting
2025-01-31 21:40:03 +0100 <dminuoso> See https://simonmar.github.io/bib/papers/multiproc.pdf
2025-01-31 21:39:37 +0100 <mauke> I may be wrong on my terminology, but I think blackholing only happens in single-threaded mode
2025-01-31 21:39:25 +0100 <dminuoso> And that all alone pretty much stops memory issues.
2025-01-31 21:38:58 +0100 <dminuoso> euouae: That is, there is automatic protection ensuring that no two threads attempt to evaluate the same expression concurrently.
2025-01-31 21:38:38 +0100 <lambdabot> *Exception: <<loop>>
2025-01-31 21:38:37 +0100 <dminuoso> euouae: First off, when switching between threads, entered thunks are blackholed.
2025-01-31 21:38:36 +0100 <mauke> > let x = head [x] in x
2025-01-31 21:38:18 +0100 <euouae> No
2025-01-31 21:38:12 +0100 <mauke> on a related topic, have you ever seen the <<loop>> exception?
2025-01-31 21:38:02 +0100 <dminuoso> No, it's all built to handle that.
2025-01-31 21:38:02 +0100 <euouae> I can see why memory can blow up then
2025-01-31 21:37:49 +0100 <euouae> Oh it does? That can be bad
2025-01-31 21:37:40 +0100 sprotte24 (~sprotte24@p200300d16f06b9001d5c2b08794be0ce.dip0.t-ipconnect.de)
2025-01-31 21:37:34 +0100 <mauke> euouae: yes
2025-01-31 21:37:15 +0100 <mauke> my point is that just as main() is a regular C function (has an address, can be called from inside the program, etc), so Haskell main is a regular IO () value and can be used as such
2025-01-31 21:37:11 +0100 <dminuoso> Oh you're really diving deep now.
2025-01-31 21:36:58 +0100 <euouae> so what about threads? is sharing happening across threads?
2025-01-31 21:36:43 +0100 <dminuoso> euouae: Right.
2025-01-31 21:36:41 +0100 <mauke> yes
2025-01-31 21:36:36 +0100 <euouae> very similar to C etc
2025-01-31 21:36:32 +0100 <euouae> Well GHC bootstraps some code before main to do the execution
2025-01-31 21:36:20 +0100 <mauke> the runtime system grabs the value of main and runs the instructions you built
2025-01-31 21:36:05 +0100 <mauke> main is just a constant
2025-01-31 21:36:02 +0100 <mauke> I don't think that's the right way to think about it
2025-01-31 21:35:51 +0100 <dminuoso> That is, main is the only thing (aside from some dark primitives) that can execute IO.
2025-01-31 21:35:35 +0100 <dminuoso> Of course *executing* IO also happens at runtime and is done by main.
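A tiny sketch of mauke's point that main is an ordinary IO () value; greet is a made-up name, and the program just shows main being built out of another IO () value.

    module Main where

    greet :: IO ()
    greet = putStrLn "hello"

    -- main has no special syntax or type: it is an IO () value like greet,
    -- and here it is simply the composition of two uses of greet.
    main :: IO ()
    main = greet >> greet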
2025-01-31 21:35:33 +0100 <euouae> dminuoso: I'm very prone to using the wrong terms all the time
2025-01-31 21:35:25 +0100 <dminuoso> Yes, you can imagine this happens at runtime.
2025-01-31 21:35:16 +0100 <euouae> Yeah right, evaluating
2025-01-31 21:35:14 +0100 <mauke> in my example, if nothing is shared, the code can run in O(1) memory
2025-01-31 21:35:14 +0100 <dminuoso> euouae: But yes, you can imagine that substitution happens at runtime.
2025-01-31 21:34:58 +0100 <euouae> But feigning ignorance on compilation, it's fine to think of it as all interpreted at runtime right?
2025-01-31 21:34:57 +0100 <dminuoso> euouae: You mean *evaluating* right?
2025-01-31 21:34:38 +0100 <euouae> So when "executing" Haskell code, theoretically you can imagine that all the grammatical expansions happen at runtime. Obviously "compiled" Haskell code can do some of the work beforehand
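An informal sketch of the "substitution at runtime" picture euouae and dminuoso are describing; this is equational reasoning about a made-up example, not literally what compiled code does.

    -- Evaluating by rewriting, one small step at a time:
    --   head (map (+1) [1, 2, 3])
    -- = head ((1 + 1) : map (+1) [2, 3])   -- map lazily produces one cons cell
    -- = 1 + 1                              -- head discards the unevaluated tail
    -- = 2
    example :: Int
    example = head (map (+1) [1, 2, 3])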
2025-01-31 21:34:38 +0100 <lambdabot> 19999
2025-01-31 21:34:36 +0100 <mauke> > let { x = [0 ..]; y = [0 ..] } in (x !! 10000) + (y !! 9999)
2025-01-31 21:34:20 +0100 <dminuoso> But not sharing can also waste memory, by potentially having to keep the same value in memory multiple times.
2025-01-31 21:34:08 +0100 <euouae> Ah yeah that makes sense
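For contrast with mauke's 21:34:36 example: a sketch of what changes when the two lookups do share one list. notShared and sharedLookups are made-up names, and the exact residency depends on evaluation order and GC timing.

    -- Not shared: x and y are separate lists; each lookup walks its own list,
    -- which can be collected behind the traversal, so space stays small.
    notShared :: Integer
    notShared = let x = [0 ..]
                    y = [0 ..]
                in (x !! 10000) + (y !! 9999)

    -- Shared: both lookups start from the same list head, so roughly the first
    -- 10001 cons cells stay live until the second lookup finishes.
    sharedLookups :: Integer
    sharedLookups = let x = [0 ..] in (x !! 10000) + (x !! 9999)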