2025/04/29

Newest at the top

2025-04-29 15:33:57 +0200lxsameer(~lxsameer@Serene/lxsameer) lxsameer
2025-04-29 15:33:49 +0200ColinRobinson(~juan@user/JuanDaugherty) (Quit: praxis.meansofproduction.biz (juan@acm.org))
2025-04-29 15:33:18 +0200 <ski> but you still don't want to build up more and more context, when doing a long-running loop (e.g. with `IO'), if you can avoid it, and in such cases, reasoning about tail calls can be quite important to prevent space leakage (in the Haskell sense)
2025-04-29 15:32:01 +0200 <ski> a tail-recursive loop, which will always be bulky, not returning any part of the result until the whole loop is done
2025-04-29 15:31:55 +0200 <ski> Liamzee : anyway, tail calls and tail recursion are still important in Haskell .. but tend to be a bit less important, due to non-strictness (typically implemented using laziness) making it often easier to describe loops as incremental algorithms, only computing the result piece-by-piece, so that the caller will drive looping over this, demanding more result pieces, for as long as it wants to, as opposed to
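A minimal Haskell sketch of the contrast ski draws above; the names `sumTo' and `countdown' are illustrative, not from the discussion:

    {-# LANGUAGE BangPatterns #-}

    -- a tail-recursive loop: "bulky", it returns nothing until the whole
    -- loop is done (the strict accumulator avoids building up a chain of
    -- thunks, the space leak in the Haskell sense mentioned above)
    sumTo :: Int -> Int
    sumTo = go 0
      where
        go :: Int -> Int -> Int
        go !acc 0 = acc
        go !acc n = go (acc + n) (n - 1)

    -- an incremental loop: the caller drives it, demanding one result
    -- piece at a time, so only as much of the list is built as is consumed
    countdown :: Int -> [Int]
    countdown 0 = []
    countdown n = n : countdown (n - 1)

    main :: IO ()
    main = do
      print (sumTo 1000000)
      print (take 5 (countdown 1000000))  -- only five cells are ever constructed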
2025-04-29 15:31:36 +0200mari-estel(~mari-este@user/mari-estel) mari-estel
2025-04-29 15:29:10 +0200 <ski> (yes, `case' in Haskell doesn't directly correspond to `case' in Scheme (which is more like a C `switch'), but rather `match', but that doesn't matter too much here, for the comparison)
2025-04-29 15:28:28 +0200 <ski> in Haskell, `C[] ::= [] | C[] E ... E | if C[] then E else E | case C[] of P -> E; ... P -> E | ..'
2025-04-29 15:27:39 +0200 <ski> in Scheme, you have evaluation contexts like `C[] ::= [] | (E ... E C[] E ... E) | (if C[] E E) | (case C[] (P E ... E) ... (P E ... E)) | ..'
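One way to read those context grammars in concrete Haskell (the names `f', `g', `h' are placeholders, not from the discussion): the scrutinee of a `case' sits in a non-tail context, while the branch bodies are tail positions.

    -- `f x' sits in the hole of `case C[] of ...', a non-tail context,
    -- so evaluating it can use stack; `g x' and `h n' are in tail position
    example :: Int -> Int
    example x =
      case f x of
        0 -> g x
        n -> h n
      where
        f = subtract 1
        g = (* 2)
        h = (+ 10)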
2025-04-29 15:27:27 +0200mange(~user@user/mange) (Quit: Zzz...)
2025-04-29 15:26:13 +0200internatetional_(~nate@2001:448a:20a3:c2e5:9930:c729:580e:9aa0) (Ping timeout: 252 seconds)
2025-04-29 15:25:26 +0200internatetional(~nate@2001:448a:20a3:c2e5:1ee8:b348:b41a:5570) internatetional
2025-04-29 15:25:07 +0200 <ski> in Haskell, it's basically the same, except that the "evaluating procedure call arguments" part is skipped. a call `f x' will not evaluate `x' (but will still evaluate `f' itself, which could cause stack usage)
2025-04-29 15:24:59 +0200internatetional(~nate@2001:448a:20a3:c2e5:701e:e3ed:2d04:6885) (Ping timeout: 245 seconds)
2025-04-29 15:24:02 +0200 <ski> btw, in Scheme, calling a procedure doesn't push stack, either, but evaluating procedure call arguments (as well as the function expression), and introducing new local variables (e.g. with `let'), or doing branching (like `case',`if',`cond', or e.g. a library pattern-matcher as `match'), does. or, in general, introducing a non-tail context
2025-04-29 15:21:51 +0200internatetional_(~nate@2001:448a:20a3:c2e5:9930:c729:580e:9aa0) internatetional
2025-04-29 15:21:00 +0200 <ski> not using stack is not the same as not using call stack
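A small sketch of the point above, assuming GHC's usual lazy evaluation: a call does not force its argument, but pattern matching does force its scrutinee.

    -- the argument is never evaluated: `const' ignores it, and the call
    -- itself does not force it
    ok :: Int
    ok = const 1 undefined

    -- pattern matching forces the scrutinee, so evaluating `boom' throws
    boom :: Int
    boom = case (undefined :: Maybe Int) of
             Nothing -> 0
             Just n  -> n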
2025-04-29 15:19:49 +0200mari-estel(~mari-este@user/mari-estel) (Ping timeout: 260 seconds)
2025-04-29 15:18:46 +0200tromp(~textual@2001:1c00:3487:1b00:81f6:6a75:5fad:c9b4) (Quit: My iMac has gone to sleep. ZZZzzz…)
2025-04-29 15:16:49 +0200 <haskellbridge> <Liamzee> welp, no shade intended
2025-04-29 15:16:47 +0200 <haskellbridge> <Liamzee> that's interesting, i thought Haskell didn't use stack?
2025-04-29 15:16:02 +0200j1n37(~j1n37@user/j1n37) j1n37
2025-04-29 15:15:30 +0200 <ski> with uniqueness, you're promised that your reference to an item/value has not been duplicated in the *past* (but you could still duplicate in future, unless, say, you have to return it as still being unique). with linearity, you promise to not duplicate it in the *future* (but it could already have been duplicated in the past, before giving up the ability to duplicate)
2025-04-29 15:14:40 +0200jespada(~jespada@r167-61-148-73.dialup.adsl.anteldata.net.uy) (Ping timeout: 252 seconds)
2025-04-29 15:14:00 +0200 <ski> yin : note that linear types are not quite the same as affine or unique ones (those two are quite similar, the only difference is whether you're allowed to drop an item on the floor or not)
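A minimal sketch of the linear side of this, assuming GHC >= 9.0 with -XLinearTypes; the names here are illustrative:

    {-# LANGUAGE LinearTypes #-}

    -- `%1 ->' is a linear arrow: the function promises to consume its
    -- argument exactly once in the future (ski's promise about the *future*)
    swapL :: (a, b) %1 -> (b, a)
    swapL (x, y) = (y, x)

    -- this, by contrast, is rejected under a linear arrow, because the
    -- argument would be used twice:
    -- dupL :: a %1 -> (a, a)
    -- dupL x = (x, x)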
2025-04-29 15:13:02 +0200 <ski> "in comparison to an imperative language where stack allocation is happening" -- yea, calls don't push stack in Haskell, but *pattern-matching* (`case'-`of', as well as the sugar for that) still does
2025-04-29 15:12:56 +0200j1n37-(~j1n37@user/j1n37) (Read error: Connection reset by peer)
2025-04-29 15:12:33 +0200jespada_(~jespada@r179-25-126-65.dialup.adsl.anteldata.net.uy) jespada
2025-04-29 15:12:01 +0200 <ski> "i was trying to analogize the IO type to free monads actually" -- yea, the last `IO' version i sketched is basically the free monad version
2025-04-29 15:11:13 +0200 <ski> "since you actually have runRW# fun #Real World# or something like that, then you call a continuation" -- yea, GHC uses a state-passing internal implementation, which is only valid if the `RealWorld#' is being passed around uniquely (and this still doesn't properly describe exceptions, nor concurrency, nor things like `unsafeInterleaveIO',`unsafePerformIO')
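For reference, GHC's state-passing representation mentioned above looks roughly like this (mirroring the definition in GHC.Types; the Prelude `IO' is hidden just to keep the sketch self-contained):

    {-# LANGUAGE MagicHash, UnboxedTuples #-}

    import GHC.Exts (State#, RealWorld)
    import Prelude hiding (IO)

    -- an IO action is a function threading an abstract `State# RealWorld'
    -- token; as ski notes, this is only valid if the token is passed
    -- around uniquely
    newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))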
2025-04-29 15:10:58 +0200internatetional(~nate@2001:448a:20a3:c2e5:701e:e3ed:2d04:6885) internatetional
2025-04-29 15:10:49 +0200internatetional_(~nate@2001:448a:20a3:c2e5:5b6:e1f9:afcb:86c5) (Ping timeout: 252 seconds)
2025-04-29 15:10:31 +0200ljdarj(~Thunderbi@user/ljdarj) ljdarj
2025-04-29 15:09:29 +0200 <haskellbridge> <Liamzee> erm, iiuc
2025-04-29 15:09:24 +0200 <ski> it helps to have seen some CPS (Continuation-Passing Style) or `Cont' before, though
2025-04-29 15:09:20 +0200 <haskellbridge> <Liamzee> it's just a free monad afaik
2025-04-29 15:09:01 +0200 <ski> yin : you could try it, as an exercise. it's not too hard
2025-04-29 15:08:43 +0200 <ski> (instead being defined in terms of the more primitive `Request' and `Response')
2025-04-29 15:08:27 +0200 <ski> are those DSL constructs, but there's no "interpretation" of them, since we don't build a tree of them
2025-04-29 15:08:10 +0200 <ski> you can clearly see the "instructions" of the DSL, in the former. in the latter, your main operations
2025-04-29 15:07:49 +0200 <haskellbridge> <Liamzee> since an interpreter model of how GHC is managing IO actions is treating them like it's zero cost to call an IO action and low cost to call an IO function, in comparison to an imperative language where stack allocation is happening
2025-04-29 15:07:38 +0200 <ski> defining in terms of `Dialogue' is more a shallow embedding
2025-04-29 15:07:17 +0200 <ski> having the primitive `IO' operations be data constructors is the "deep embedded DSL" version
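A minimal sketch of that "deep embedded DSL" / instruction-tree reading of `IO'; the constructor names are illustrative, not GHC's:

    -- one "instruction" data constructor per primitive operation; the rest
    -- of the program is the continuation stored in the constructor
    data IOProgram a
      = Done a
      | GetChar (Char -> IOProgram a)
      | PutChar Char (IOProgram a)

    instance Functor IOProgram where
      fmap f (Done a)      = Done (f a)
      fmap f (GetChar k)   = GetChar (fmap f . k)
      fmap f (PutChar c m) = PutChar c (fmap f m)

    instance Applicative IOProgram where
      pure = Done
      mf <*> mx = mf >>= \f -> fmap f mx

    instance Monad IOProgram where
      -- bind pushes the continuation into every `Done' leaf,
      -- splicing new subtrees in place of the leaves
      Done a      >>= k = k a
      GetChar f   >>= k = GetChar (\c -> f c >>= k)
      PutChar c m >>= k = PutChar c (m >>= k)

    -- an interpreter giving the instructions their meaning in real IO
    run :: IOProgram a -> IO a
    run (Done a)      = return a
    run (GetChar k)   = getChar >>= run . k
    run (PutChar c m) = putChar c >> run m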
2025-04-29 15:06:51 +0200 <haskellbridge> <Liamzee> treating them as zero or low-cost
2025-04-29 15:06:37 +0200 <haskellbridge> <Liamzee> ehhh, i guess i'm more being operational about it, since my habit right now in Haskell is not to care about function calls etc
2025-04-29 15:06:34 +0200 <ski> (iirc Hugs didn't use the CPS version, but something a bit different, for `IO')
2025-04-29 15:06:22 +0200tolgo(~Thunderbi@199.115.144.130) (Ping timeout: 265 seconds)
2025-04-29 15:06:12 +0200 <ski> and you can possibly conceive of more ways
2025-04-29 15:05:53 +0200 <ski> but the main point here is that you *can* think of `IO' as an "instruction tree", or as being defined in terms of `IOProgram', being an instruction tree. *or* defined in terms of `Request' & `Response', which are used in tandem (with lazy pattern matches, the `~', which is crucial here, and easy to mess up, hence abstracting it away behind CPS operations like `hGetChar')
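A sketch of the `Request'/`Response' style with the lazy pattern ski mentions; the constructor names are made up (not the historical Haskell 1.x ones), and the CPS wrappers play the role of operations like `hGetChar':

    type Dialogue = [Response] -> [Request]

    data Request  = GetCharReq | PutCharReq Char
    data Response = GotChar Char | Acknowledged

    -- the `~' is crucial: the request must be emitted before the matching
    -- response is inspected, since the response only exists as a reaction
    -- to the request; forgetting the `~' deadlocks the dialogue
    getCharD :: (Char -> Dialogue) -> Dialogue
    getCharD k ~(GotChar c : rest) = GetCharReq : k c rest

    putCharD :: Char -> Dialogue -> Dialogue
    putCharD ch k ~(Acknowledged : rest) = PutCharReq ch : k rest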
2025-04-29 15:03:53 +0200 <ski> argument into all leaves, effectively splicing new subtrees in place of the leaves .. or you could instead add a `Bind :: IO a -> (a -> IO b) -> IO b' data constructor
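A sketch of that alternative, with `Bind' as a data constructor (a GADT, so the type above fits); the type is called `IOTree' here to avoid clashing with the real `IO':

    {-# LANGUAGE GADTs #-}

    data IOTree a where
      Return  :: a -> IOTree a
      Bind    :: IOTree a -> (a -> IOTree b) -> IOTree b
      GetChar :: IOTree Char
      PutChar :: Char -> IOTree ()

    -- the interpreter now handles `Bind' explicitly, instead of `>>='
    -- splicing continuations into the leaves up front
    run :: IOTree a -> IO a
    run (Return a)  = return a
    run (Bind m k)  = run m >>= run . k
    run GetChar     = getChar
    run (PutChar c) = putChar c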