2025/01/31

Newest at the top

2025-01-31 08:46:21 +0100merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 276 seconds)
2025-01-31 08:41:03 +0100merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-01-31 08:40:00 +0100 <euouae> neat, it's probably beyond what I can grasp but it's probably still worth looking into
2025-01-31 08:37:44 +0100Smiles(uid551636@id-551636.lymington.irccloud.com) Smiles
2025-01-31 08:35:15 +0100monochrm → monochrom
2025-01-31 08:35:15 +0100monochrom(trebla@216.138.220.146) (Ping timeout: 244 seconds)
2025-01-31 08:33:25 +0100 <dminuoso> (Though GHC has a lot of other tricks up its sleeve to make that possible, so it's not just STG)
2025-01-31 08:33:12 +0100monochrm(trebla@216.138.220.146)
2025-01-31 08:31:23 +0100 <dminuoso> euouae: Ultimately we can achieve very good performance with our approach, sometimes comparable to C++ or Rust with careful programming (though to be honest even those languages require careful treatment to obtain optimal performance). STG is *that* good.
2025-01-31 08:30:34 +0100merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 260 seconds)
2025-01-31 08:30:26 +0100 <euouae> Alright, thank you. I've got some cool stuff for the days ahead.
2025-01-31 08:30:04 +0100 <ski> first link
2025-01-31 08:29:47 +0100 <euouae> first link or all?
2025-01-31 08:29:39 +0100 <ski> check that first
2025-01-31 08:29:33 +0100 <lambdabot> "Laziness, strictness, guarded recursion" by bitemyapp at <https://github.com/bitemyapp/learnhaskell/blob/master/specific_topics.md#user-content-laziness-strictness-guarded-recursion>
2025-01-31 08:29:33 +0100 <lambdabot> "The Incomplete Guide to Lazy Evaluation (in Haskell)" by apfelmus in 2015-03-07 at <https://apfelmus.nfshost.com/articles/lazy-eval.html>
2025-01-31 08:29:33 +0100 <lambdabot> "Lazy Evaluation of Haskell" by monochrom at <http://www.vex.net/~trebla/haskell/lazy.xhtml>
2025-01-31 08:29:33 +0100 <ski> @where lazy
2025-01-31 08:29:22 +0100 <dminuoso> euouae: Give it a try, and see how far you go. If your mind explodes, put the paper aside for a future read.
2025-01-31 08:28:45 +0100 <dminuoso> (Well not quite *whenever* ...)
2025-01-31 08:28:43 +0100 <euouae> I'm kind of curious
2025-01-31 08:28:41 +0100 <euouae> <https://www.microsoft.com/en-us/research/wp-content/uploads/1992/04/spineless-tagless-gmachine.pdf> is a good intro to that?
2025-01-31 08:28:36 +0100 <dminuoso> With sharing whenever possible
2025-01-31 08:28:18 +0100 <dminuoso> euouae: Semantically you can imagine it kept the source code and just substituted.
2025-01-31 08:28:04 +0100 <dminuoso> s/programming/translating/
2025-01-31 08:27:58 +0100 <dminuoso> Which is a very efficient way of programming to native code.
2025-01-31 08:27:51 +0100 <dminuoso> euouae: No, we encode the whole program into what we call a spineless tagless G-machine
2025-01-31 08:27:44 +0100 <ski> "does it keep track of the source code" -- no
2025-01-31 08:27:22 +0100 <ski> same thing happens, if you define `f x = x + x', and then call `f (2 * 2)'
2025-01-31 08:27:21 +0100 <euouae> does it keep track of the source code instead of computing it? and just computes when necessary?
2025-01-31 08:27:05 +0100 <euouae> so about laziness, how exactly is it accomplished in ghc?
2025-01-31 08:26:48 +0100 <ski> it means that in `let x = 2 * 2 in x + x', first the `x + x' starts to happen, then that demands the result of `x', so `2 * 2' happens, result `4'. now it *remembers* (caches) that `x' resulted in `4', so that when the second `x' in `x + x' is checked, it reuses the `4', to compute `4 + 4', rather than performing the multiplication twice
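(A minimal runnable sketch of the sharing ski describes above; this file is not from the channel, and it assumes a plain GHC build with base's Debug.Trace, whose trace messages go to stderr and are used here only to make the single evaluation visible.)

    import Debug.Trace (trace)

    f :: Int -> Int
    f x = x + x

    main :: IO ()
    main = do
      let x = trace "multiplying" (2 * 2)              -- x is an unevaluated thunk at this point
      print (x + x)                                    -- forces x once, caches the 4, prints 8
      print (f (trace "evaluating argument" (2 * 3)))  -- the argument thunk is shared inside f: one trace, then 12

Running it should print "multiplying" once followed by 8, then "evaluating argument" once followed by 12, because both uses of x, and both uses of f's parameter, point at the same thunk, which is overwritten with its result after the first demand.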
2025-01-31 08:26:24 +0100monochrm → monochrom
2025-01-31 08:26:21 +0100 <euouae> okay right. hm...
2025-01-31 08:26:09 +0100monochrom(trebla@216.138.220.146) (Ping timeout: 248 seconds)
2025-01-31 08:26:07 +0100monochrm(trebla@216.138.220.146)
2025-01-31 08:25:54 +0100alfiee(~alfiee@user/alfiee) (Ping timeout: 260 seconds)
2025-01-31 08:25:42 +0100 <dminuoso> Consider `let x = <expensive computation> in (x, x)`
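(An annotated GHCi sketch of that example, using GHCi's :sprint command, which displays a binding without forcing it and shows unevaluated thunks as _; `sum [1..10000]` merely stands in for the `<expensive computation>`.)

    ghci> let x = sum [1..10000] :: Int   -- stand-in for the expensive computation
    ghci> :sprint x
    x = _                                 -- still a thunk: nothing computed yet
    ghci> let pair = (x, x)
    ghci> fst pair                        -- demanding one component forces the shared thunk
    50005000
    ghci> :sprint pair
    pair = (50005000,50005000)            -- both components were the same thunk, so both are now filled in
    ghci> :sprint x
    x = 50005000                          -- the cached intermediate result

Building the pair costs essentially nothing, and the expensive sum runs once even though x appears twice.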
2025-01-31 08:25:41 +0100merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-01-31 08:25:31 +0100 <euouae> Why would it remember intermediate results? for what purpose?
2025-01-31 08:25:00 +0100 <euouae> What does caching mean?
2025-01-31 08:24:42 +0100 <ski> GHC does lazy evaluation, meaning demand-driven, with caching of intermediate results
2025-01-31 08:24:32 +0100 <dminuoso> euouae: Imagine the program was kept *textually* as you wrote it, and evaluation is just substitution.
2025-01-31 08:24:13 +0100 <euouae> i.e. what happens under the hood via ghc
2025-01-31 08:24:03 +0100 <euouae> Oh I understand that much (i.e. what you explained here ski), but in general to understand the Haskell evaluation
2025-01-31 08:23:58 +0100 <ski> the caller controls how much of it is materialized
2025-01-31 08:23:39 +0100 <ski> think of the list generated as an iterator, if you like
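(A small sketch of that iterator view, assuming the producer is just an ordinary Haskell list: squares is conceptually infinite, and the caller decides how much of it ever gets built.)

    squares :: [Integer]
    squares = map (^ 2) [1 ..]            -- conceptually infinite; nothing is computed up front

    main :: IO ()
    main = do
      print (take 5 squares)              -- materializes exactly five elements: [1,4,9,16,25]
      print (takeWhile (< 100) squares)   -- the consumer decides when to stop: [1,4,9,16,25,36,49,64,81]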
2025-01-31 08:23:21 +0100 <euouae> dminuoso: is there hope to understand it for non experts or is it too difficult?
2025-01-31 08:22:55 +0100 <ski> it's incremental, rather than tail-calling
2025-01-31 08:22:54 +0100 <dminuoso> euouae: In GHC Haskell, the evaluation model works vastly differently from traditional programming languages. We don't exactly push to a stack at the beginning of a function and pop at the end.
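(A sketch of the "incremental, rather than tail-calling" point above; the names are illustrative, not from the channel. countdown is not tail recursive, yet a strict consumer such as Data.List.foldl' walks it in constant space, because each call hands back a cons cell before recursing.)

    import Data.List (foldl')

    countdown :: Int -> [Int]
    countdown 0 = []
    countdown n = n : countdown (n - 1)   -- recursion guarded by (:), so production pauses after each element

    main :: IO ()
    main = print (foldl' (+) 0 (countdown 1000000))   -- 500000500000, with no deep stack of pending calls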