Newest at the top
2025-01-31 21:58:17 +0100 | <dminuoso> | See https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/rts/storage/heap-objects#black-holes for some details on blackholes |
2025-01-31 21:58:16 +0100 | sarna | (~sarna@d224-221.icpnet.pl) sarna |
2025-01-31 21:57:48 +0100 | <dminuoso> | euouae: Note that every object has a pointer to an info table, and that info table contains entry code. Evaluation is driven by just jumping into that entry code |
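(A hedged aside, not from the chat: the ghc-heap package that ships with GHC can decode a live heap object through its info table. GHC.Exts.Heap and getClosureData are real names from that package, but exactly what gets printed depends on the GHC version and optimisation level.)

```haskell
import GHC.Exts.Heap (getClosureData)

main :: IO ()
main = do
  let x = sum [1 .. 10 :: Int]   -- without -O this is still a thunk here
  getClosureData x >>= print     -- decode x's closure before it is forced
  print x                        -- force the thunk
  getClosureData x >>= print     -- decode it again after it has been updated
```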
2025-01-31 21:57:29 +0100 | sarna | (~sarna@d224-221.icpnet.pl) (Ping timeout: 260 seconds) |
2025-01-31 21:56:29 +0100 | <dminuoso> | euouae: Start with the `Heap Objects` section |
2025-01-31 21:56:11 +0100 | <dminuoso> | Is a good website to remember. |
2025-01-31 21:56:05 +0100 | <dminuoso> | euouae: https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/rts/storage/heap-objects |
2025-01-31 21:53:53 +0100 | <ash3en> | i mean the haskell library: https://hackage.haskell.org/package/jack-0.7.2.2/docs/Sound-JACK-MIDI.html |
2025-01-31 21:53:39 +0100 | <ash3en> | using jack midi with haskell: do i have to manage memory or something? |
2025-01-31 21:53:08 +0100 | <dminuoso> | It was recognized we could use the same machinery to detect some forms of infinite loops |
2025-01-31 21:52:42 +0100 | <dminuoso> | mauke: Yes. It's really just a kind of mutual exclusion lock for thunks. |
2025-01-31 21:52:24 +0100 | <dminuoso> | You can think of it as some kind of mutual exclusion lock, but with special logic to detect if the entry code recursed into itself. |
2025-01-31 21:51:58 +0100 | <euouae> | oh mauke's example relates to black holes? I'll read the whole convo then |
2025-01-31 21:51:48 +0100 | <dminuoso> | If not, it will set that mark. |
2025-01-31 21:51:39 +0100 | <dminuoso> | Now that entry code checks whether a particular mark, BLACKHOLE, is set; if it is, you get a <<loop>>, assuming this happened from within the same Haskell thread. |
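A minimal sketch of that "mutual exclusion lock with loop detection" idea, modelled in plain Haskell rather than in RTS entry code. The names (ThunkState, demand) are made up for illustration, and the IORef read/write is not atomic, so this ignores the races the real RTS has to handle.

```haskell
import Control.Concurrent (ThreadId, myThreadId)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, readMVar)
import Data.IORef (IORef, readIORef, writeIORef)

-- States of one shared thunk in this toy model (hypothetical names).
data ThunkState a
  = Unevaluated (IO a)           -- entry code has not run yet
  | BlackHole ThreadId (MVar a)  -- being evaluated; later arrivals wait on the MVar
  | Value a                      -- result written back ("updated")

-- Demand the thunk, roughly the way the entry code is described above.
demand :: IORef (ThunkState a) -> IO a
demand ref = do
  me <- myThreadId
  st <- readIORef ref
  case st of
    Value v -> pure v
    BlackHole owner done
      | owner == me -> ioError (userError "<<loop>>")  -- re-entered our own thunk
      | otherwise   -> readMVar done                   -- join the waiting list
    Unevaluated run -> do
      done <- newEmptyMVar
      writeIORef ref (BlackHole me done)  -- set the BLACKHOLE mark before entering
      v <- run                            -- run the "entry code"
      writeIORef ref (Value v)            -- update the thunk with its value
      putMVar done v                      -- wake up everyone who queued behind us
      pure v
```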
2025-01-31 21:51:15 +0100 | <euouae> | interestingly `let x = head [x] in x` just hangs in ghci |
2025-01-31 21:50:41 +0100 | <dminuoso> | euouae: and you demand that value by just jmp'ing into that memory region. |
2025-01-31 21:50:25 +0100 | <dminuoso> | euouae: So roughly, if you have `let x = <expensive> in ..` then we can think of x being represented in memory as some memory region with a bunch of code |
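An observable consequence of that representation: a let-bound thunk's code runs at most once, because after the first demand the memory region is updated with the result. A small demo, using Debug.Trace only to make the entry visible:

```haskell
import Debug.Trace (trace)

main :: IO ()
main = do
  -- x starts life as a thunk; the trace fires when its entry code runs
  let x = trace "entering x's code" (sum [1 .. 1000000 :: Int])
  -- the message appears once: after the first demand the thunk has been
  -- overwritten with its value, so the second use of x costs nothing extra
  print (x + x)
```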
2025-01-31 21:49:53 +0100 | <euouae> | sorry I'm reading from the top so I'm trying to catch up on what was said |
2025-01-31 21:49:44 +0100 | <euouae> | mauke, is <<loop>> possible because of sharing? |
2025-01-31 21:49:12 +0100 | <dminuoso> | Even in the single-threaded RTS you will have concurrency. |
2025-01-31 21:49:09 +0100 | <mauke> | ah, right |
2025-01-31 21:48:34 +0100 | <dminuoso> | While the other use of threads is about Haskell threads. |
2025-01-31 21:48:24 +0100 | <dminuoso> | Note that "threaded RTS" talks about OS threads |
2025-01-31 21:48:08 +0100 | <dminuoso> | mauke: I'm not sure how the threaded RTS changes things, but blackholing should be needed for the single-threaded RTS too. |
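For example, Haskell (green) threads still interleave when a program is compiled without -threaded; the flag is about mapping them onto multiple OS threads. A quick sketch:

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forM_)

-- Haskell ("green") threads interleave even under the default non-threaded RTS;
-- -threaded only adds the ability to spread them over several OS threads.
main :: IO ()
main = do
  _ <- forkIO (forM_ [1 .. 3 :: Int] (\i -> putStrLn ("A " ++ show i) >> threadDelay 10000))
  forM_ [1 .. 3 :: Int] (\i -> putStrLn ("B " ++ show i) >> threadDelay 10000)
  threadDelay 50000  -- give the forked thread time to finish before main exits
```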
2025-01-31 21:46:05 +0100 | <dminuoso> | So consider it a bonus *if* it triggers. |
2025-01-31 21:46:00 +0100 | <mauke> | i.e. in multi-thread mode a thread could re-enter the thunk and end up waiting for itself (deadlock) |
2025-01-31 21:45:54 +0100 | <dminuoso> | (And it does not work reliably either for a bunch of reasons) |
2025-01-31 21:45:44 +0100 | dsrt^ | (~dsrt@108.192.66.114) (Ping timeout: 252 seconds) |
2025-01-31 21:45:23 +0100 | <dminuoso> | The <<loop>> is just some opportunistic debugging helper, it's not the core feature. |
2025-01-31 21:45:18 +0100 | <mauke> | did they change that? I have a vague memory that <<loop>> detection didn't work in multi-thread mode |
2025-01-31 21:45:04 +0100 | <dminuoso> | So it gets woken up when the thunk has finished. |
2025-01-31 21:44:48 +0100 | <dminuoso> | If another thread enters a blackhole, it gets put on a list to be woken up later. |
2025-01-31 21:44:33 +0100 | <dminuoso> | mauke: If the same thread enters a blackhole, that blackhole acts as loop detection. |
2025-01-31 21:44:25 +0100 | <dminuoso> | mauke: Okay, so there are two behaviors to blackholing. |
2025-01-31 21:44:08 +0100 | alexherbo2 | (~alexherbo@2a02-8440-3503-94e0-1866-04f2-f81a-c1ec.rev.sfr.net) (Remote host closed the connection) |
2025-01-31 21:44:07 +0100 | <mauke> | so if another thread tries to evaluate the same thunk later, it will simply wait until the first thread is done computing a value |
2025-01-31 21:43:33 +0100 | <mauke> | in a multi-threaded environment, the first thread to reach a given thunk instead switches out the code pointer to an "enter waiting queue" subroutine |
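A rough way to observe that waiting behaviour from Haskell code (names and sizes are arbitrary; with several capabilities there is a small window where the work can still be duplicated, see the multiproc paper linked below):

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Exception (evaluate)
import Debug.Trace (trace)

main :: IO ()
main = do
  -- One shared thunk, demanded from two Haskell threads.  The first thread to
  -- enter it blackholes it; the other one normally blocks until the value has
  -- been written back, so the trace message typically appears only once.
  let shared = trace "evaluating shared" (sum [1 .. 5000000 :: Int])
  _ <- forkIO (evaluate shared >>= print)
  _ <- forkIO (evaluate shared >>= print)
  threadDelay 1000000  -- crude: give both threads time to finish
```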
2025-01-31 21:42:48 +0100 | <mauke> | at least in single-threaded mode |
2025-01-31 21:42:30 +0100 | <mauke> | this is implemented by temporarily switching the code pointer of a thunk to a subroutine that throws an exception |
2025-01-31 21:41:58 +0100 | euouae | furiously types some notes of the previous discussion, needs more time to read the latest stuff being said |
2025-01-31 21:41:55 +0100 | <mauke> | anyway, the <<loop>> exception happens when evaluation of a thunk tries to re-enter the same thunk (i.e. you have a value that depends on itself) |
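For contrast, self-reference by itself is not the problem; the thunk has to need its own value before it can produce anything. A small hedged example (the names are made up, and whether the bad case reports <<loop>>, hangs, or overflows the stack depends on how it is run, as euouae's ghci observation above and mauke's lambdabot example below show):

```haskell
-- 'ones' and 'bad' are made-up names for illustration.
ones :: [Int]
ones = 1 : ones   -- refers to itself but produces a constructor first: fine

bad :: Int
bad = bad + 1     -- needs its own value before it can produce anything

main :: IO ()
main = do
  print (take 3 ones)  -- [1,1,1]
  print bad            -- typically <<loop>> when compiled; ghci may just hang
```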
2025-01-31 21:41:06 +0100 | <mauke> | huh, interesting |
2025-01-31 21:40:03 +0100 | <dminuoso> | See https://simonmar.github.io/bib/papers/multiproc.pdf |
2025-01-31 21:39:37 +0100 | <mauke> | I may be wrong on my terminology, but I think blackholing only happens in single-threaded mode |
2025-01-31 21:39:25 +0100 | <dminuoso> | And that alone pretty much stops memory issues. |
2025-01-31 21:38:58 +0100 | <dminuoso> | euouae: That is, there is automatic protection ensuring that no two threads attempt to evaluate the same expression concurrently. |
2025-01-31 21:38:38 +0100 | <lambdabot> | *Exception: <<loop>> |
2025-01-31 21:38:37 +0100 | <dminuoso> | euouae: First off, when switching between threads, entered thunks are blackholed. |
2025-01-31 21:38:36 +0100 | <mauke> | > let x = head [x] in x |