2024/05/21

Newest at the top

2024-05-21 20:55:05 +0200mesaoptimizer(~mesaoptim@user/PapuaHardyNet)
2024-05-21 20:54:51 +0200mesaoptimizer(~mesaoptim@user/PapuaHardyNet) (Quit: mesaoptimizer)
2024-05-21 20:52:29 +0200yin(~yin@user/zero)
2024-05-21 20:50:40 +0200waleee(~waleee@h-176-10-144-38.NA.cust.bahnhof.se)
2024-05-21 20:50:01 +0200kuribas(~user@ptr-17d51encis8jg2ccf48.18120a2.ip6.access.telenet.be) (Remote host closed the connection)
2024-05-21 20:43:28 +0200ezzieyguywuf(~Unknown@user/ezzieyguywuf)
2024-05-21 20:42:39 +0200 <[exa]> like, most threads in existence just wait for some resource to arrive anyway, right
2024-05-21 20:42:38 +0200raehik(~raehik@rdng-25-b2-v4wan-169990-cust1344.vm39.cable.virginm.net)
2024-05-21 20:42:12 +0200 <dolio> Although I assume they weren't doing much.
2024-05-21 20:41:59 +0200 <dolio> There's an old ghc tracker ticket where Simon Marlow says he ran 1 million threads without a problem.
2024-05-21 20:41:24 +0200 <dolio> Yeah.
2024-05-21 20:41:02 +0200 <[exa]> dolio: btw the last time I read the RTS, the total Haskell IO-thread (_not_ OS thread) count was much more memory-bounded than actually switch-starvation-bounded
2024-05-21 20:40:41 +0200Guest86(~Guest86@186.82.99.37)
2024-05-21 20:39:42 +0200 <mauke> (mind, this was a 32-bit system with limited RAM)
2024-05-21 20:39:09 +0200 <[exa]> let's take a moment now to commemorate the magnificent glorious 1 globally interpreter-locked thread of python
2024-05-21 20:39:04 +0200 <mauke> haskell got very slow, but still made visible progress at 300,000 threads
2024-05-21 20:38:25 +0200 <mauke> perl was under 100
2024-05-21 20:38:02 +0200 <mauke> last time I tried to benchmark thread systems (by creating an ever-growing bucket chain of threads in various systems/languages), C/pthreads maxed out at a couple hundred threads IIRC
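
A rough Haskell sketch of the kind of "bucket chain of threads" benchmark mauke describes above: each link is a lightweight forkIO thread blocked on an MVar, handing a counter on to the next link. The chain length and MVar plumbing are guesses at the setup, not mauke's actual harness.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar
    import Control.Monad (foldM)

    -- Build a chain of n threads; each one waits on its own MVar and
    -- passes an incremented counter to the next link.
    buildChain :: Int -> IO (MVar Int, MVar Int)
    buildChain n = do
      end   <- newEmptyMVar
      start <- foldM addLink end [1 .. n]
      pure (start, end)
      where
        addLink next _ = do
          prev <- newEmptyMVar
          _ <- forkIO (takeMVar prev >>= putMVar next . (+ 1))
          pure prev

    main :: IO ()
    main = do
      (start, end) <- buildChain 300000   -- hundreds of thousands of green threads
      putMVar start 0
      takeMVar end >>= print              -- 300000, once the value has crossed every link

Because these are GHC's green threads, the dominant per-thread cost is the small initial Haskell stack rather than an OS thread, which matches the memory-bounded behaviour mauke and [exa] report.
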
2024-05-21 20:35:38 +0200 <mauke> fibers, maybe
2024-05-21 20:35:03 +0200ft(~ft@p508db8fc.dip0.t-ipconnect.de)
2024-05-21 20:24:07 +0200 <geekosaur> Gtk provides its own event loop
2024-05-21 20:23:44 +0200 <geekosaur> pthread_create. and libuv is an event loop, not a thread multiplexer
2024-05-21 20:22:06 +0200 <dolio> Yeah, that sounds right.
2024-05-21 20:21:57 +0200 <Rembane> The uv thing?
2024-05-21 20:21:56 +0200 <monochrom> But lazy evaluation is the major difference. It also has the domino effect of causing many other differences e.g. how and why GHC does heap, closures, and GC in a way a C compiler doesn't.
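
A tiny illustration of the laziness point monochrom makes above: an expression that is never demanded just sits on the heap as a thunk (a closure), which is exactly the kind of object GHC's heap layout and GC have to cater for. The example itself is mine, not from the discussion.

    -- A value that would never finish if computed eagerly; under lazy
    -- evaluation it is allocated as an unevaluated thunk (a closure).
    bottomless :: Integer
    bottomless = 1 + bottomless

    main :: IO ()
    main = do
      let pair = (3 :: Int, bottomless)   -- allocates a closure, never runs it
      print (fst pair)                    -- only the first component is demanded
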
2024-05-21 20:21:53 +0200gentauro(~gentauro@user/gentauro)
2024-05-21 20:21:43 +0200 <dolio> Are they actually using OS threads? I thought there was some other C thing that people used when they wanted that level of concurrency.
2024-05-21 20:21:11 +0200 <lxsameer> mauke: cheers
2024-05-21 20:20:32 +0200 <monochrom> We certainly recommend "don't bother writing your own select event loop, just fork more threads". So the RTS has to actually optimize for that. >:)
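
The "just fork more threads" style monochrom recommends, sketched as a minimal echo server: one forkIO thread per connection, with the RTS's IO manager doing the select/epoll work underneath. This assumes the network package; the port, backlog and buffer size are arbitrary.

    import Control.Concurrent (forkIO)
    import Control.Monad (forever)
    import qualified Data.ByteString as BS
    import Network.Socket
    import Network.Socket.ByteString (recv, sendAll)

    -- One lightweight Haskell thread per client; no hand-written event loop.
    main :: IO ()
    main = do
      sock <- socket AF_INET Stream defaultProtocol
      setSocketOption sock ReuseAddr 1
      bind sock (SockAddrInet 4000 (tupleToHostAddress (127, 0, 0, 1)))
      listen sock 1024
      forever $ do
        (conn, _peer) <- accept sock
        forkIO (echo conn)
      where
        echo conn = do
          bytes <- recv conn 4096
          if BS.null bytes
            then close conn                      -- peer hung up
            else sendAll conn bytes >> echo conn

Compiled with -threaded, all of these green threads are multiplexed over a small pool of OS threads, and blocking on recv parks only the Haskell thread, not an OS thread.
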
2024-05-21 20:20:01 +0200 <geekosaur> C/Gtk programmers have told me otherwise (thousands of threads)
2024-05-21 20:19:21 +0200 <dolio> At least, last I heard.
2024-05-21 20:18:44 +0200 <dolio> Yeah, I guess. The problem is that you can't get away with having as many OS threads as people want Haskell threads.
2024-05-21 20:18:20 +0200raehik(~raehik@rdng-25-b2-v4wan-169990-cust1344.vm39.cable.virginm.net) (Ping timeout: 260 seconds)
2024-05-21 20:17:59 +0200peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) (Ping timeout: 264 seconds)
2024-05-21 20:17:19 +0200 <monochrom> It also turns out we want to liberally move Haskell threads to any OS thread at a whim.
2024-05-21 20:16:25 +0200 <monochrom> OS stacks would be OK if there were a 1-1 mapping from Haskell threads to OS threads. OS threads already enjoy individual stacks. But of course we are more ambitious: we want our own scheme, cramming M Haskell threads into 1 OS thread.
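
A quick sketch of the M-to-N mapping monochrom describes: ten thousand forkIO threads get multiplexed over however many capabilities (roughly, OS-level worker threads) the RTS was started with, and threadCapability reports where each one landed. The thread count and the reporting are mine.

    import Control.Concurrent
    import Control.Monad (replicateM, replicateM_)
    import Data.List (nub)

    main :: IO ()
    main = do
      n   <- getNumCapabilities
      box <- newEmptyMVar
      replicateM_ 10000 $ forkIO $ do
        (cap, _pinned) <- myThreadId >>= threadCapability
        putMVar box cap                 -- which capability this green thread ran on
      caps <- replicateM 10000 (takeMVar box)
      putStrLn $ "10000 Haskell threads ran on "
              ++ show (length (nub caps)) ++ " of " ++ show n ++ " capabilities"

Built with ghc -threaded and run with +RTS -N4, this can report at most four capabilities, no matter how many Haskell threads are forked.
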
2024-05-21 20:16:02 +0200gentauro(~gentauro@user/gentauro) (Read error: Connection reset by peer)
2024-05-21 20:16:02 +0200awnmp(~awnmp@user/awnmp)
2024-05-21 20:14:25 +0200 <dolio> Yeah. Switching out OS stacks is bad news, I think.
2024-05-21 20:14:21 +0200 <monochrom> As a bonus, the Haskell stack is also growable and movable.
2024-05-21 20:14:02 +0200 <monochrom> Ah yeah, then also N Haskell threads can have N distinct Haskell stacks too.
2024-05-21 20:13:26 +0200Guest86(~Guest86@186.82.99.37) (Client Quit)
2024-05-21 20:12:33 +0200 <dolio> That's probably not the only reason.
2024-05-21 20:11:29 +0200 <monochrom> The fact that Haskell FFI works best if Haskell code leaves the OS-sanctioned stack alone (so C code can use it unconfused) is why ghc-generated code uses another register and another memory area for the Haskell stack.
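
For the FFI point monochrom makes above, the crossing in its simplest form: the imported C function runs on the ordinary OS/C stack, while the Haskell caller's evaluation state stays on its own GHC-managed stack, so neither side confuses the other. cos from math.h is just a convenient stand-in for any foreign call.

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.C.Types (CDouble)

    -- The C callee uses the OS-sanctioned stack; the Haskell side keeps its
    -- own stack in a separate GHC-managed memory area, as described above.
    foreign import ccall unsafe "math.h cos"
      c_cos :: CDouble -> CDouble

    main :: IO ()
    main = print (c_cos 0)
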
2024-05-21 20:11:16 +0200 <mauke> lxsameer: I like direct-sqlite. no built-in support for migrations, but pretty trivial to add IMHO. depends on what you expect from a migration feature
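
One way the migration feature mauke mentions could be bolted onto direct-sqlite: keep the schema version in SQLite's user_version pragma and replay anything newer. The migration list and the migrate helper are hypothetical additions, not part of direct-sqlite's API.

    {-# LANGUAGE OverloadedStrings #-}
    import Control.Monad (forM_, when)
    import Data.Text (Text)
    import qualified Data.Text as T
    import Database.SQLite3
      (Database, close, columnInt64, exec, finalize, open, prepare, step)

    -- Hypothetical migration list; position in the list + 1 is the schema version.
    migrations :: [Text]
    migrations =
      [ "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
      , "ALTER TABLE users ADD COLUMN email TEXT"
      ]

    -- Apply every migration newer than the version stored in PRAGMA user_version.
    migrate :: Database -> IO ()
    migrate db = do
      stmt <- prepare db "PRAGMA user_version"
      _ <- step stmt
      current <- fromIntegral <$> columnInt64 stmt 0
      finalize stmt
      forM_ (zip [1 :: Int ..] migrations) $ \(version, sql) ->
        when (version > current) $ do
          exec db sql
          exec db ("PRAGMA user_version = " <> T.pack (show version))

    main :: IO ()
    main = do
      db <- open "app.db"
      migrate db
      close db

Wrapping the loop in BEGIN/COMMIT would make a run atomic; whether that is enough of a "migration feature" depends, as mauke says, on what you expect from one.
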
2024-05-21 20:11:01 +0200Guest86(~Guest86@186.82.99.37)
2024-05-21 20:07:52 +0200 <monochrom> ghc-generated code looks different from gcc-generated code because of better reasons than this. Evaluation order. Heap system. Closures. How they decide to use registers.
2024-05-21 20:06:08 +0200peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com)
2024-05-21 20:05:01 +0200 <EvanR> this calls for a closed cartesian comics on this subject
2024-05-21 20:03:53 +0200 <monochrom> And in the latter case, furthermore, "push" is narrated as "pass a parameter".