2025/01/11

Newest at the top

2025-01-11 21:01:10 +0100 acidjnk (~acidjnk@p200300d6e7283f90c8dc7c78c19bd00e.dip0.t-ipconnect.de) (Ping timeout: 260 seconds)
2025-01-11 21:00:52 +0100 acidjnk_new (~acidjnk@p200300d6e7283f9009bc3096dfcfc887.dip0.t-ipconnect.de) acidjnk
2025-01-11 21:00:43 +0100 caconym (~caconym@user/caconym) caconym
2025-01-11 21:00:04 +0100 caconym (~caconym@user/caconym) (Quit: bye)
2025-01-11 20:59:44 +0100 merijn (~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 244 seconds)
2025-01-11 20:55:20 +0100 merijn (~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-11 20:50:57 +0100 tnt2 (~Thunderbi@user/tnt1) (Ping timeout: 244 seconds)
2025-01-11 20:50:12 +0100 tnt1 (~Thunderbi@user/tnt1) tnt1
2025-01-11 20:50:01 +0100 swistak (~swistak@185.21.216.141)
2025-01-11 20:49:55 +0100 target_i (~target_i@user/target-i/x-6023099) (Ping timeout: 264 seconds)
2025-01-11 20:48:18 +0100 tnt1 (~Thunderbi@user/tnt1) (Ping timeout: 276 seconds)
2025-01-11 20:46:42 +0100 tnt2 (~Thunderbi@user/tnt1) tnt1
2025-01-11 20:44:18 +0100 merijn (~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-11 20:43:06 +0100 tnt2 tnt1
2025-01-11 20:43:06 +0100 tnt1 (~Thunderbi@user/tnt1) (Ping timeout: 276 seconds)
2025-01-11 20:42:32 +0100 tnt2 (~Thunderbi@user/tnt1) tnt1
2025-01-11 20:39:57 +0100 merijn (~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-11 20:39:32 +0100 tnt2 (~Thunderbi@user/tnt1) (Ping timeout: 252 seconds)
2025-01-11 20:38:23 +0100 tnt1 (~Thunderbi@user/tnt1) tnt1
2025-01-11 20:35:30 +0100 tnt1 (~Thunderbi@user/tnt1) (Ping timeout: 252 seconds)
2025-01-11 20:34:58 +0100 tnt2 (~Thunderbi@user/tnt1) tnt1
2025-01-11 20:32:02 +0100 machinedgod (~machinedg@d108-173-18-100.abhsia.telus.net) machinedgod
2025-01-11 20:31:05 +0100 lxsameer (~lxsameer@Serene/lxsameer) (Ping timeout: 252 seconds)
2025-01-11 20:28:54 +0100 merijn (~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 260 seconds)
2025-01-11 20:27:43 +0100 <monochrom> Yeah that too.
2025-01-11 20:27:30 +0100 <c_wraith> the recent optimizations doing join point analysis really help out with specific types of code, too.
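
For readers who have not met join points: below is a sketch of the kind of code c_wraith means, with invented names. A local function whose every use is a saturated tail call is compiled as a join point, a labelled block that is jumped to, so no closure is allocated and case-of-case can be applied through it.

    module JoinPointDemo where

    classify :: Int -> [Int] -> Int
    classify def xs =
      let finish n = n * 2 + def      -- every use is a saturated tail call,
      in case xs of                   -- so GHC turns finish into a join point
           []      -> finish 0
           (y:_)
             | y > 0     -> finish y
             | otherwise -> finish (negate y)
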
2025-01-11 20:26:39 +0100 <monochrom> And my favourite example linked above, some of you asked "why doesn't GHC take one step further and just generate 'eval = fval'?" Guess what, these days it does! :)
2025-01-11 20:26:19 +0100 swistak (~swistak@185.21.216.141) (Ping timeout: 252 seconds)
2025-01-11 20:26:18 +0100 Sgeo (~Sgeo@user/sgeo) Sgeo
2025-01-11 20:25:43 +0100 <c_wraith> (when both the lens and the operation are statically known)
2025-01-11 20:24:47 +0100 <c_wraith> lens also depends on case-of-case to be efficient without tons of rewrite rules.
2025-01-11 20:23:31 +0100 <monochrom> The vector library's stream fusion actually relies on that fundamentally.
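
A toy model of what monochrom means, under the assumption that a heavily simplified Stream type is enough to show the shape (the real machinery lives in Data.Vector.Fusion.Stream.Monadic): every stream transformer wraps its results in Step constructors, and inlining plus case-of-case pushes the consumer's pattern match into the producer, so those constructors are never actually built.

    {-# LANGUAGE ExistentialQuantification, BangPatterns #-}
    module ToyFusion where

    data Step s a = Yield a s | Skip s | Done

    data Stream a = forall s. Stream (s -> Step s a) s

    mapS :: (a -> b) -> Stream a -> Stream b
    mapS f (Stream step s0) = Stream step' s0
      where
        step' s = case step s of         -- wraps every result in a new Step
          Yield a s' -> Yield (f a) s'
          Skip s'    -> Skip s'
          Done       -> Done

    sumS :: Stream Int -> Int
    sumS (Stream step s0) = go 0 s0
      where
        go !acc s = case step s of       -- case-of-case pushes this match into
          Yield a s' -> go (acc + a) s'  -- step', so the intermediate Yield and
          Skip s'    -> go acc s'        -- Skip values never materialize
          Done       -> acc
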
2025-01-11 20:22:37 +0100 <monochrom> And there is also one where if you have "case (case y of x:_ -> Just x) of Just z -> foo" the middle Just can also be eliminated.
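
Spelling monochrom's snippet out as compilable code (safeHead and firstOrZero are invented names, and a Nothing branch is added to keep it total): case-of-case pushes the outer case into the inner one, and case-of-known-constructor then deletes the intermediate Just.

    module CaseOfCase where

    safeHead :: [a] -> Maybe a
    safeHead (x:_) = Just x
    safeHead []    = Nothing

    firstOrZero :: [Int] -> Int
    firstOrZero y =
      case safeHead y of   -- after inlining safeHead, a case of a case
        Just z  -> z + 1   -- the "foo" branch
        Nothing -> 0

    -- Case-of-case plus case-of-known-constructor leaves, roughly:
    --   firstOrZero y = case y of { x:_ -> x + 1; [] -> 0 }
    -- so the Just is never allocated.
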
2025-01-11 20:21:56 +0100 merijn (~merijn@128-137-045-062.dynamic.caiway.nl) merijn

2025-01-11 20:21:40 +0100 <monochrom> There is a fairly general scheme where if you have "case m of Just z -> ... case m of Just z again -> ..." the 2nd conditional branching can be eliminated.
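
A hedged illustration of the scheme monochrom describes (function and variable names invented): once execution is inside the Just branch, GHC knows the scrutinee's constructor, so a second case on the same variable is simplified away.

    module KnownScrutinee where

    addTwice :: Maybe Int -> Int
    addTwice m =
      case m of
        Nothing -> 0
        Just z ->
          -- Inside this branch GHC records that m = Just z, so the case
          -- below never branches again; it reduces straight to z + z'.
          case m of
            Nothing -> 0
            Just z' -> z + z'
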
2025-01-11 20:20:49 +0100 <darkling> Needle nardle noo.
2025-01-11 20:20:45 +0100 acidjnk (~acidjnk@p200300d6e7283f90c8dc7c78c19bd00e.dip0.t-ipconnect.de) acidjnk
2025-01-11 20:19:41 +0100 <monochrom> My favourite example is https://mail.haskell.org/pipermail/haskell-cafe/2013-April/107775.html and that's already from last decade. Imagine what it can do today. :)
2025-01-11 20:17:11 +0100 <haskellbridge> <Bowuigi> Interesting, GHC is actually very smart
2025-01-11 20:16:33 +0100 <bailsman> as far as I can understand - I'm not an expert at reading the simpl output - it has entirely optimized out my setfield calls into just creating a new record, filled in from the original record by direct field access and/or from the values that I passed in, which seems optimal. No mention of generic anything.
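
For readers less used to Core: what bailsman describes is the optimizer reducing the generic setter to what a hand-written record update would produce. A small invented example of that target shape (not bailsman's code):

    module RecordUpdate where

    -- Invented record; stands in for whatever bailsman's setfield operates on.
    data Particle = Particle { px :: !Double, py :: !Double, mass :: !Double }

    -- However the setter is implemented underneath (generics, HasField,
    -- lenses, ...), the optimal Core is the same as a plain record update:
    setPx :: Double -> Particle -> Particle
    setPx v p = p { px = v }

    -- ...which GHC desugars to a fresh constructor application that copies
    -- the untouched fields straight out of the original record:
    --   setPx v p = case p of Particle _ y m -> Particle v y m
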
2025-01-11 20:16:11 +0100 <monochrom> I haven't seen an example, but it is said that O2 can be slower than O1 for some code, hence I analogize it to gcc's O3.
2025-01-11 20:14:37 +0100 <monochrom> OK vector is one of the few exceptions where O2 is necessary.
2025-01-11 20:13:48 +0100 <bailsman> I don't actually really know what the difference is.
2025-01-11 20:13:35 +0100 <bailsman> Because when I did my microbenchmarks with the storable vectors, -O2 was much, much faster - I believe because it was specializing better
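
One plausible reading of bailsman's speedup, offered as an assumption rather than a diagnosis: -O2 gives the specializer more to work with and turns on SpecConstr (-fspec-constr), which vector pipelines rely on. The snippet below is illustrative, not bailsman's benchmark; the SPECIALIZE pragma requests the monomorphic copy explicitly, so the unboxed Double loop appears even at -O1.

    module DotExample where

    import qualified Data.Vector.Storable as VS

    -- Polymorphic over the element type, so without a specialization every
    -- element goes through a Num dictionary.
    dotProduct :: (Num a, VS.Storable a) => VS.Vector a -> VS.Vector a -> a
    dotProduct xs ys = VS.sum (VS.zipWith (*) xs ys)

    -- Ask for a monomorphic copy; GHC can then unbox the Doubles and fuse
    -- the zipWith/sum loop.
    {-# SPECIALIZE dotProduct :: VS.Vector Double -> VS.Vector Double -> Double #-}
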
2025-01-11 20:12:53 +0100 <monochrom> And yeah, actually ghc -O1 : -O2 :: gcc -O2 : -O3
2025-01-11 20:12:46 +0100 <c_wraith> (most code is compiled at -O1, as it's the default and hackage even warns you about uploading -O2)
2025-01-11 20:12:08 +0100 <monochrom> Yeah the -O0 code is also suspiciously much shorter. >:)
2025-01-11 20:12:07 +0100 <c_wraith> why not -O1? that's the most relevant level
2025-01-11 20:12:02 +0100 <haskellbridge> <Bowuigi> Check for calls to the generic methods instead
2025-01-11 20:11:32 +0100 <haskellbridge> <Bowuigi> They might be inlined but still there
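
One way to act on Bowuigi's suggestion, sketched under assumptions (the module and type names below are invented): have GHC write the simplified Core to a file and search it for the GHC.Generics plumbing (from, to, M1, K1, :*:). If those names still appear, the generic code survived optimization.

    {-# LANGUAGE DeriveGeneric #-}
    -- Writes the simplified Core to GenericCheck.dump-simpl so it can be
    -- grepped for leftover generic plumbing (from/to, M1, K1, :*:, ...).
    {-# OPTIONS_GHC -O2 -ddump-simpl -dsuppress-all -ddump-to-file #-}
    module GenericCheck where   -- hypothetical module, only to show the workflow

    import GHC.Generics

    data Point = Point { px :: Double, py :: Double }
      deriving (Show, Generic)

    -- A round trip through the generic representation; after optimization
    -- the Rep conversion should disappear, leaving plain field accesses.
    swapViaGeneric :: Point -> Point
    swapViaGeneric p = to (from (Point (py p) (px p)))
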