2024/05/21

Newest at the top

2024-05-21 20:21:57 +0200 <Rembane> The uv thing?
2024-05-21 20:21:56 +0200 <monochrom> But lazy evaluation is the major difference. It also has the domino effect of causing many other differences e.g. how and why GHC does heap, closures, and GC in a way a C compiler doesn't.
2024-05-21 20:21:53 +0200gentauro(~gentauro@user/gentauro)
2024-05-21 20:21:43 +0200 <dolio> Are they actually using OS threads? I thought there was some other C thing that people used when they wanted that level of concurrency.
2024-05-21 20:21:11 +0200 <lxsameer> mauke: cheers
2024-05-21 20:20:32 +0200 <monochrom> We certainly recommend "don't bother writing your own select event loop, just fork more threads". So the RTS has to actually optimize for that. >:)
2024-05-21 20:20:01 +0200 <geekosaur> C/Gtk programmers have told me otherwise (thousands of threads)
2024-05-21 20:19:21 +0200 <dolio> At least, last I heard.
2024-05-21 20:18:44 +0200 <dolio> Yeah, I guess. The problem is that you can't get away with having as many OS threads as people want Haskell threads.
2024-05-21 20:18:20 +0200raehik(~raehik@rdng-25-b2-v4wan-169990-cust1344.vm39.cable.virginm.net) (Ping timeout: 260 seconds)
2024-05-21 20:17:59 +0200peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) (Ping timeout: 264 seconds)
2024-05-21 20:17:19 +0200 <monochrom> It also turns out we want to liberally move Haskell threads to any OS thread at a whim.
2024-05-21 20:16:25 +0200 <monochrom> OS stacks would be OK if there were a 1-1 mapping from Haskell threads to OS threads. OS threads already enjoy individual stacks. But of course we are more ambitious: we want our own scheduling, cramming M Haskell threads into 1 OS thread.
2024-05-21 20:16:02 +0200gentauro(~gentauro@user/gentauro) (Read error: Connection reset by peer)
2024-05-21 20:16:02 +0200awnmp(~awnmp@user/awnmp)
2024-05-21 20:14:25 +0200 <dolio> Yeah. Switching out OS stacks is bad news, I think.
2024-05-21 20:14:21 +0200 <monochrom> As a bonus, stack is also growable and movable.
2024-05-21 20:14:02 +0200 <monochrom> Ah yeah, then also N Haskell threads can have N distinct Haskell stacks too.
2024-05-21 20:13:26 +0200Guest86(~Guest86@186.82.99.37) (Client Quit)
2024-05-21 20:12:33 +0200 <dolio> That's probably not the only reason.
2024-05-21 20:11:29 +0200 <monochrom> Haskell FFI works best if Haskell code leaves the OS-sanctioned stack alone (so C code can use it unconfused), so ghc-generated code uses another register and another memory area for the Haskell stack.
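[editor's note] A minimal sketch of the "just fork more threads" style discussed above. `forkIO` spawns lightweight Haskell (green) threads, which the RTS multiplexes onto a small number of OS threads, so spawning tens of thousands is cheap. The join scheme (one `MVar` per worker) is illustrative, not the only way to do it:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

main :: IO ()
main = do
  let n = 10000 :: Int
  -- One empty MVar per worker, used as a "done" signal.
  vars <- mapM (const newEmptyMVar) [1 .. n]
  -- forkIO creates a Haskell thread, not an OS thread; the RTS
  -- schedules all of them over a few OS-level worker threads.
  mapM_ (\v -> forkIO (putMVar v ())) vars
  -- Block until every worker has signalled completion.
  mapM_ takeMVar vars
  putStrLn ("forked and joined " ++ show n ++ " threads")
```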
2024-05-21 20:11:16 +0200 <mauke> lxsameer: I like direct-sqlite. no built-in support for migrations, but pretty trivial to add IMHO. depends on what you expect from a migration feature
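[editor's note] A hedged sketch of the "trivial to add" migration scheme mauke alludes to, on top of direct-sqlite's `Database.SQLite3` API. It tracks the schema version in SQLite's `user_version` pragma; the migration list, table, and helper names are illustrative, not part of the library:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import qualified Database.SQLite3 as SQL
import qualified Data.Text as T

-- Migrations, oldest first: entry N (0-based) takes the schema
-- from version N to version N+1. (Contents are illustrative.)
migrations :: [T.Text]
migrations =
  [ "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
  , "ALTER TABLE users ADD COLUMN email TEXT"
  ]

-- Read the current schema version from the user_version pragma.
schemaVersion :: SQL.Database -> IO Int
schemaVersion db = do
  stmt <- SQL.prepare db "PRAGMA user_version"
  _ <- SQL.step stmt
  v <- SQL.columnInt64 stmt 0
  SQL.finalize stmt
  pure (fromIntegral v)

-- Apply only the migrations past the recorded version, then bump it.
migrate :: SQL.Database -> IO ()
migrate db = do
  v <- schemaVersion db
  mapM_ (SQL.exec db) (drop v migrations)
  SQL.exec db ("PRAGMA user_version = " <> T.pack (show (length migrations)))

main :: IO ()
main = do
  db <- SQL.open ":memory:"
  migrate db
  migrate db   -- safe to re-run: already-applied steps are skipped
  print =<< schemaVersion db
  SQL.close db
```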
2024-05-21 20:11:01 +0200Guest86(~Guest86@186.82.99.37)
2024-05-21 20:07:52 +0200 <monochrom> ghc-generated code looks different from gcc-generated code because of better reasons than this. Evaluation order. Heap system. Closures. How they decide to use registers.
2024-05-21 20:06:08 +0200peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com)
2024-05-21 20:05:01 +0200 <EvanR> this calls for a closed cartesian comics on this subject
2024-05-21 20:03:53 +0200 <monochrom> And in the latter case, furthermore, "push" is narrated as "pass a parameter".
2024-05-21 20:03:50 +0200dtman34(~dtman34@2601:447:d001:ed50:ebe5:b36d:357b:8a39)
2024-05-21 20:02:54 +0200 <monochrom> If you see a "push <code address>", it can be narrated as "push return address", but it can just as well be narrated as "push address of continuation". Same difference.
2024-05-21 20:02:49 +0200 <dolio> The stack is the continuation.
2024-05-21 20:02:36 +0200 <EvanR> care to explain how they are the same
2024-05-21 20:01:38 +0200 <monochrom> I no longer distinguish continuation passing from call stack.
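[editor's note] A sketch of why "push return address" and "push continuation" are the same thing: the direct-style call below and its CPS translation do identical work, and the caller's stack frame in the first is exactly the continuation `k` in the second. Function names are illustrative:

```haskell
square :: Int -> Int
square x = x * x

-- Direct style: the caller's pending work ("add 1") lives in a
-- stack frame, reached via the pushed return address.
direct :: Int -> Int
direct x = 1 + square x

-- CPS: the pending work is reified as the function k; "returning"
-- is just calling k, i.e. jumping through the pushed code address.
squareK :: Int -> (Int -> r) -> r
squareK x k = k (x * x)

cps :: Int -> Int
cps x = squareK x (\r -> 1 + r)

main :: IO ()
main = print (direct 5, cps 5)
```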
2024-05-21 19:54:26 +0200 <lxsameer> hey folks, what lib do you recommend for using sqlite? Supporting db migrations is a plus
2024-05-21 19:53:54 +0200lxsameer(~lxsameer@Serene/lxsameer)
2024-05-21 19:53:43 +0200raehik(~raehik@rdng-25-b2-v4wan-169990-cust1344.vm39.cable.virginm.net)
2024-05-21 19:51:56 +0200peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) (Ping timeout: 252 seconds)
2024-05-21 19:47:31 +0200peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com)
2024-05-21 19:43:48 +0200Guest86(~Guest41@186.82.99.37) ()
2024-05-21 19:43:21 +0200Guest86(~Guest41@186.82.99.37)
2024-05-21 19:40:57 +0200jstolarek(~jstolarek@staticline-31-183-174-191.toya.net.pl)
2024-05-21 19:33:42 +0200rdcdr(~rdcdr@user/rdcdr)
2024-05-21 19:32:47 +0200raehik(~raehik@rdng-25-b2-v4wan-169990-cust1344.vm39.cable.virginm.net) (Ping timeout: 256 seconds)
2024-05-21 19:32:18 +0200 <EvanR> or lack of success
2024-05-21 19:32:06 +0200rdcdr(~rdcdr@user/rdcdr) (Quit: ZNC 1.8.2+deb3.1 - https://znc.in)
2024-05-21 19:31:30 +0200 <EvanR> looking at core output can give clues to the success of optimizing
2024-05-21 19:27:39 +0200 <EvanR> the translation of haskell into core language is where the magic happens
2024-05-21 19:27:09 +0200 <EvanR> the core language has a simplified version of pattern matching which can be implemented like the switch statement in C
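[editor's note] A small illustration of the desugaring EvanR describes: a nested source-level pattern match becomes flat, one-constructor-deep `case` expressions in Core, each comparable to a C `switch` on the constructor tag. The Core shown in comments is a rough sketch, not actual compiler output:

```haskell
-- Nested source-level patterns:
match :: Maybe Bool -> Int
match (Just True)  = 1
match (Just False) = 2
match Nothing      = 3

-- Roughly the Core shape (pseudocode):
--   case m of
--     Nothing -> 3
--     Just b  -> case b of { True -> 1; False -> 2 }
-- Each flat case dispatches on one constructor tag, like a C switch.

main :: IO ()
main = print (map match [Just True, Just False, Nothing])
```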
2024-05-21 19:26:54 +0200ocra8(ocra8@user/ocra8) (Quit: WeeChat 4.2.2)
2024-05-21 19:26:37 +0200tromp(~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl)
2024-05-21 19:23:38 +0200 <d34df00d> kuribas: I'm curious then how one might compare some common haskell constructs (think even pattern matching) to what happens in other, more imperative languages.