Newest at the top
2024-09-21 22:24:01 +0200 | <tuxpaint> | for such static binaries |
2024-09-21 22:23:49 +0200 | <tuxpaint> | which defeats the goal of having portable static binaries, and very fast build times |
2024-09-21 22:23:47 +0200 | <tomsmeding> | geekosaur: right, but you just do that once, right? |
2024-09-21 22:23:40 +0200 | <tuxpaint> | so you either need to link it, or rebuild it every time |
2024-09-21 22:23:33 +0200 | <geekosaur> | …right, never mind |
2024-09-21 22:23:32 +0200 | <tuxpaint> | well now you are calling libc right? |
2024-09-21 22:23:31 +0200 | <tomsmeding> | and anyway, freeing up a worker is lying to yourself, because the time that you spend in kernel mode is also cpu time |
2024-09-21 22:23:07 +0200 | <tomsmeding> | this libc call is written by go developers in the go standard library, it's not a random go programmer that goes through the cgo interface |
2024-09-21 22:22:57 +0200 | <geekosaur> | (glibc's RTS startup actually does quite a lot, if you strace a program you'll see a _lot_ of activity before main() is called) |
2024-09-21 22:22:47 +0200 | <tomsmeding> | tuxpaint: why could the runtime not spawn up a new system thread, that is _not_ registered as a go worker, to run the libc call in? |
2024-09-21 22:22:19 +0200 | <tuxpaint> | as in, unusably slow for many disk operations (think database) |
2024-09-21 22:22:00 +0200 | <geekosaur> | libc calls can therefore crash |
2024-09-21 22:21:56 +0200 | merijn | (~merijn@204-220-045-062.dynamic.caiway.nl) (Ping timeout: 272 seconds) |
2024-09-21 22:21:54 +0200 | <tuxpaint> | it makes it faster, but it's still very very slow |
2024-09-21 22:21:53 +0200 | <geekosaur> | libc *has a runtime*. go does not initialize that runtime |
2024-09-21 22:21:52 +0200 | <tomsmeding> | that's just `start()` at the start of the go _process_ |
2024-09-21 22:21:46 +0200 | <tuxpaint> | yes, you can set gomaxprocs to a very high number, and do one thread per syscall |
2024-09-21 22:21:36 +0200 | <geekosaur> | you still don't have libc initialized |
2024-09-21 22:21:30 +0200 | <tomsmeding> | I still don't see what the difference is |
2024-09-21 22:21:23 +0200 | <tomsmeding> | okay, but if you're willing to start another thread for that syscall, why not run the libc call in that new thread too? |
2024-09-21 22:21:02 +0200 | <tuxpaint> | gomaxprocs does not determine the max amount of system threads |
2024-09-21 22:21:01 +0200 | <tomsmeding> | it's not like that `syscall` instruction can magically go off and run asynchronously |
2024-09-21 22:20:55 +0200 | <tuxpaint> | you can launch another thread and do work, you are not frozen |
2024-09-21 22:20:55 +0200 | <geekosaur> | no? it fires off completely separate threads for that |
2024-09-21 22:20:39 +0200 | <tomsmeding> | fine, but you still have a thread less for "normal" go code! |
2024-09-21 22:20:26 +0200 | <geekosaur> | the IO manager manages them separately from standard threads |
2024-09-21 22:20:12 +0200 | <tomsmeding> | ... okay, but the thread is still blocked and can't do anything else |
2024-09-21 22:20:03 +0200 | <geekosaur> | like ghc's IO manager threads |
2024-09-21 22:19:58 +0200 | <geekosaur> | right |
2024-09-21 22:19:51 +0200 | <tuxpaint> | the runtime does not see that thread as a thread that is "working" |
2024-09-21 22:19:45 +0200 | <tomsmeding> | so something is contradictory here |
2024-09-21 22:19:39 +0200 | <geekosaur> | (ghc ensures the C RTS is set up, but go's pure-go approach means it probably isn't and going through libc will therefore be dangerous) |
2024-09-21 22:19:31 +0200 | <tomsmeding> | tuxpaint: if you perform a syscall, your thread is blocked |
2024-09-21 22:19:26 +0200 | <tuxpaint> | and it doesn't take up one of your gomaxprocs slots while waiting |
2024-09-21 22:19:11 +0200 | <tuxpaint> | while the pure-go syscall, they can unblock the worker and do something else while the syscall is waiting |
2024-09-21 22:19:00 +0200 | <geekosaur> | other things that come up are C-style global constructors/destructors and things that look like syscalls but aren't directly and may use services like malloc which could well fail with the go runtime because the C runtime isn't initialized |
2024-09-21 22:18:21 +0200 | <tuxpaint> | in go there is gomaxprocs, so each blocking thread takes a slot from gomaxprocs. say you have 20 slots, a cgo call will take one of those 20 slots when invoked, and cannot unblock the worker. |
2024-09-21 22:17:57 +0200 | <geekosaur> | I assume go's stdlib is wired to a particular kernel ABI and doesn't need the runtime checks |
2024-09-21 22:17:50 +0200 | <tomsmeding> | sure, but tuxpaint is claiming that there is more than just speed |
2024-09-21 22:17:31 +0200 | <geekosaur> | speed. did you notice earlier when I mentioned the extra code to check kernel ABI versions? |
2024-09-21 22:17:11 +0200 | <tomsmeding> | (apart from the latter possibly being a little slower) |
2024-09-21 22:16:58 +0200 | <tomsmeding> | I dunno, maybe. But if so, then what's the difference (to go) between a syscall and a libc function that wraps that syscall? |
2024-09-21 22:16:44 +0200 | <geekosaur> | similarly to what ghc's RTS does with `safe` calls |
2024-09-21 22:16:27 +0200 | <geekosaur> | right, I assume it offloads the actual call to a thread that isn't part of the scheduler so it can safely block |
2024-09-21 22:16:23 +0200 | <tomsmeding> | eventually there is some code in go's stdlib or RTS that does the actual syscall, be it directly or through libc |
2024-09-21 22:16:15 +0200 | merijn | (~merijn@204-220-045-062.dynamic.caiway.nl) |
2024-09-21 22:15:36 +0200 | <tomsmeding> | of course, but we're talking about how said go function is implemented |
2024-09-21 22:15:18 +0200 | <geekosaur> | right, but you're not supposed to do either, you use the go function which offloads the call (similarly to Haskell's `safe`) so it doesn't block the scheduler |
2024-09-21 22:14:49 +0200 | <tomsmeding> | (surely?) |
2024-09-21 22:14:36 +0200 | <tomsmeding> | well, some are, but then the libc call would also be |