2025/01/31

Newest at the top

2025-01-31 22:07:01 +0100 <dminuoso> Details are described in https://gitlab.haskell.org/ghc/ghc/-/blob/wip/T17910/rts/Updates.h
2025-01-31 22:06:58 +0100 <monochrom> The good news is "why not both".
2025-01-31 22:06:54 +0100 <dminuoso> By the way, in reality whether or not a thunk gets marked as a blackhole on entry depends on whether you opted into eager blackholing or are using the default lazy blackholing
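A minimal sketch of that difference, not taken from the discussion (module name and numbers are arbitrary): two Haskell threads force the same shared thunk. With the default lazy blackholing the trace may fire twice, since both threads can enter the thunk before it gets marked; building with -feager-blackholing marks the thunk on entry, so the second thread normally blocks on the BLACKHOLE instead. Either way the answer is the same.

    -- Compile with: ghc -threaded BlackholeDemo.hs   (optionally add -feager-blackholing)
    -- Run with:     ./BlackholeDemo +RTS -N2
    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Exception (evaluate)
    import Debug.Trace (trace)

    main :: IO ()
    main = do
      -- One shared thunk, forced from two Haskell threads.
      let shared = trace "entering the shared thunk" (sum [1 .. 1000000 :: Int])
      a <- newEmptyMVar
      b <- newEmptyMVar
      _ <- forkIO (evaluate shared >>= putMVar a)
      _ <- forkIO (evaluate shared >>= putMVar b)
      ra <- takeMVar a
      rb <- takeMVar b
      print (ra, rb)  -- same result no matter how many threads entered the thunk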
2025-01-31 22:06:31 +0100 <mauke> also a good term: event loop
2025-01-31 22:06:27 +0100 <euouae> yup I've heard of green threads, I'm just curious if Haskell has them behind the scenes or if it was in regards to the programmer's usage
2025-01-31 22:06:11 +0100merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-01-31 22:06:03 +0100 <monochrom> The "threaded RTS" uses OS threads (so hopefully spreading out on more cores if you have them). The "unthreaded RTS" makes its own green threads, if you have heard of that word.
2025-01-31 22:06:03 +0100 <mauke> runtime system
2025-01-31 22:05:59 +0100 <euouae> what does RTS stand for?
2025-01-31 22:05:52 +0100 <mauke> the RTS does m:n threading anyway (running m haskell threads on n native threads), so n = 1 is supported as well
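A small sketch of the m:n point, using only base: a few hundred Haskell threads run fine even on a single capability (the non-threaded RTS, or +RTS -N1), because the RTS multiplexes its own lightweight threads over whatever OS threads it has.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Monad (forM)

    main :: IO ()
    main = do
      -- Fork 500 Haskell threads; each hands a result back through an MVar.
      vars <- forM [1 .. 500 :: Int] $ \i -> do
        v <- newEmptyMVar
        _ <- forkIO (putMVar v (i * i))
        pure v
      results <- mapM takeMVar vars
      print (sum results)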
2025-01-31 22:04:54 +0100 <euouae> I am curious, you said the single-threaded RTS still has haskell threads, were you referring to some concurrency library or to what happens behind the scenes in ghc itself?
2025-01-31 22:04:38 +0100ubert(~Thunderbi@p200300ecdf4e63626546ad8bed5a8da9.dip0.t-ipconnect.de) ubert
2025-01-31 22:04:29 +0100 <euouae> I've written & digested all that was said today, and thanks for more homework
2025-01-31 22:04:25 +0100 <monochrom> It helps avoid multiple threads doing redundant work. It is not even a 100% perfect lock, for no one needs one: in case two threads do happen to slip through and perform redundant work, it is just slower but still correct; this is an immutable language, and evaluating x+y a million times does not change the answer.
2025-01-31 22:04:21 +0100ubert(~Thunderbi@p200300ecdf4e6362e6fd4bc8009ee988.dip0.t-ipconnect.de) (Quit: ubert)
2025-01-31 22:04:17 +0100 <euouae> yup yup
2025-01-31 22:03:50 +0100merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 244 seconds)
2025-01-31 22:02:30 +0100CoolMa7(~CoolMa7@ip5f5b8957.dynamic.kabel-deutschland.de) CoolMa7
2025-01-31 22:02:02 +0100 <dminuoso> euouae: Like I said, the machinery was not built for loop detection but for preventing concurrent evaluation of the same thunk between two haskell threads.
2025-01-31 22:01:53 +0100kimiamania8(~65804703@user/kimiamania) kimiamania
2025-01-31 22:01:35 +0100 <mauke> only in this case it's more like "if you encounter yourself on the road, kill the current thread"
2025-01-31 22:01:29 +0100kimiamania8(~65804703@user/kimiamania) (Quit: PegeLinux)
2025-01-31 22:01:18 +0100 <dminuoso> But don't ever rely on it firing.
2025-01-31 22:01:16 +0100 <mauke> if you encounter the buddha on the road, kill him
2025-01-31 22:01:09 +0100 <dminuoso> If it triggers, there's definitely an infinite loop and you can celebrate. :)
2025-01-31 22:00:43 +0100 <euouae> well yeah, that's good, heuristics can misfire
2025-01-31 22:00:09 +0100 <dminuoso> It's not even a heuristic.
2025-01-31 22:00:08 +0100 <euouae> right
2025-01-31 22:00:00 +0100 <dminuoso> euouae: Like I said: It can only detect a particular kind of infinite loop.
2025-01-31 21:59:23 +0100 <euouae> so maybe it's not reliable, just a heuristic?
2025-01-31 21:59:14 +0100 <euouae> yeah I was thinking that <<loop>> detection seems roughly like the halting problem
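A compiled sketch of the one kind of loop this machinery does catch: a top-level value that demands itself. The <<loop>> report is the NonTermination exception, so when the detection fires it can even be caught; whether it fires at all is not something to rely on, per the caveats above.

    import Control.Exception (NonTermination (..), catch, evaluate)

    knot :: Int
    knot = knot + 1  -- demands itself: entering the thunk runs into its own blackhole

    main :: IO ()
    main =
      (evaluate knot >>= print)
        `catch` \NonTermination -> putStrLn "caught <<loop>>"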
2025-01-31 21:58:17 +0100 <dminuoso> See https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/rts/storage/heap-objects#black-holes for some details on blackholes
2025-01-31 21:58:16 +0100sarna(~sarna@d224-221.icpnet.pl) sarna
2025-01-31 21:57:48 +0100 <dminuoso> euouae: Note that every object has a pointer to an info table, and that info table contains entry code. Evaluation is driven by just jumping into that entry code
2025-01-31 21:57:29 +0100sarna(~sarna@d224-221.icpnet.pl) (Ping timeout: 260 seconds)
2025-01-31 21:56:29 +0100 <dminuoso> euouae: Start with the `Heap Objects` section
2025-01-31 21:56:11 +0100 <dminuoso> Is a good website to remember.
2025-01-31 21:56:05 +0100 <dminuoso> euouae: https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/rts/storage/heap-objects
2025-01-31 21:53:53 +0100 <ash3en> i mean the haskell library: https://hackage.haskell.org/package/jack-0.7.2.2/docs/Sound-JACK-MIDI.html
2025-01-31 21:53:39 +0100 <ash3en> using jack midi with haskell: do i have to manage memory or something?
2025-01-31 21:53:08 +0100 <dminuoso> It was recognized we could use the same machinery to detect some forms of infinite loops
2025-01-31 21:52:42 +0100 <dminuoso> mauke: Yes. It's really just a kind of mutual exclusion lock for thunks.
2025-01-31 21:52:24 +0100 <dminuoso> You can think of it as some kind of mutual exclusion lock, but with special logic to detect if the entry code recursed into itself.
2025-01-31 21:51:58 +0100 <euouae> oh mauke's example relates to black holes? I'll read the whole convo then
2025-01-31 21:51:48 +0100 <dminuoso> If not, it will set that mark.
2025-01-31 21:51:39 +0100 <dminuoso> Now that entry code checks for a particular mark, BLACKHOLE, to be set; if it's set, you get a <<loop>>, assuming this happened from within the same haskell thread.
2025-01-31 21:51:15 +0100 <euouae> interestingly `let x = head [x] in x` just hangs in ghci
2025-01-31 21:50:41 +0100 <dminuoso> euouae: and you demand that value by just jmp'ing into that memory region.
2025-01-31 21:50:25 +0100 <dminuoso> euouae: So roughly, if you have `let x = <expensive> in ..` then we can think of x being represented in memory as some memory region with a bunch of code
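One way to watch that happen from GHCi (a sketch; the exact numbers are arbitrary): :sprint shows a not-yet-entered thunk as _, and shows the value once something has demanded it and the closure has been updated.

    ghci> let x = sum [1 .. 100000 :: Int]
    ghci> :sprint x
    x = _
    ghci> x
    5000050000
    ghci> :sprint x
    x = 5000050000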
2025-01-31 21:49:53 +0100 <euouae> sorry I'm reading from the top so I'm trying to catch up on what was said