2024/11/14

2024-11-14 00:01:51 +0100falafel(~falafel@2600:1700:99f4:2050:41b3:d17e:817a:4e83) falafel
2024-11-14 00:02:55 +0100Everything(~Everythin@46.211.104.82) (Quit: leaving)
2024-11-14 00:28:22 +0100Xe_(~Xe@perl/impostor/xe) Xe
2024-11-14 00:29:11 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) peterbecich
2024-11-14 00:29:57 +0100acidjnk_new3(~acidjnk@p200300d6e7283f7100fa0b96aa6639bf.dip0.t-ipconnect.de) (Ping timeout: 248 seconds)
2024-11-14 00:32:33 +0100acidjnk_new3(~acidjnk@p200300d6e7283f717cba866c0fa9f7cd.dip0.t-ipconnect.de)
2024-11-14 00:47:47 +0100 <jackdk> I want to provide a type family-shaped helper that identifies the type of a record field, something like `FieldType "foo" MyRecord` reducing to `Bar`. I can get at the type of a field by looking at the fundep on the `HasField` class (GHC gives me `Bar` in `instance HasField "foo" MyRecord Bar`), but is there a good idiom for binding and returning that type variable using a type family?
2024-11-14 00:48:23 +0100alexherbo2(~alexherbo@2a02-8440-3117-f07c-987b-fc29-77ee-addd.rev.sfr.net) (Remote host closed the connection)
2024-11-14 00:49:22 +0100CoolMa7(~CoolMa7@ip5f5b8957.dynamic.kabel-deutschland.de) (Quit: My Mac has gone to sleep. ZZZzzz…)
2024-11-14 00:49:54 +0100athostFI(~Atte@176-93-56-50.bb.dnainternet.fi)
2024-11-14 00:50:52 +0100alp(~alp@2001:861:e3d6:8f80:8dec:7d0f:9187:87d0) (Remote host closed the connection)
2024-11-14 00:51:04 +0100 <Axman6> This feels like it might be easier with generics-sop, but it's been a long time since I looked at any of these things
2024-11-14 00:51:40 +0100alp(~alp@2001:861:e3d6:8f80:cd0a:c39d:37b7:c1a3)
2024-11-14 00:53:05 +0100Sgeo(~Sgeo@user/sgeo) Sgeo
2024-11-14 00:53:24 +0100alp(~alp@2001:861:e3d6:8f80:cd0a:c39d:37b7:c1a3) (Remote host closed the connection)
2024-11-14 00:54:14 +0100alp(~alp@2001:861:e3d6:8f80:c1d0:6a01:957c:3af2)
2024-11-14 00:55:56 +0100alp(~alp@2001:861:e3d6:8f80:c1d0:6a01:957c:3af2) (Remote host closed the connection)
2024-11-14 01:01:19 +0100falafel(~falafel@2600:1700:99f4:2050:41b3:d17e:817a:4e83) (Ping timeout: 260 seconds)
2024-11-14 01:10:06 +0100acidjnk_new3(~acidjnk@p200300d6e7283f717cba866c0fa9f7cd.dip0.t-ipconnect.de) (Read error: Connection reset by peer)
2024-11-14 01:10:17 +0100alexherbo2(~alexherbo@2a02-8440-3117-f07c-987b-fc29-77ee-addd.rev.sfr.net) alexherbo2
2024-11-14 01:11:02 +0100 <Leary> jackdk: I doubt there's anything like an 'idiom' for this. Does `class HasField f r (Field f r) => HasFieldF f r where { type Field f r }; instance HasField f r t => HasFieldF f r where { type Field f r = t }` work?
2024-11-14 01:14:53 +0100Tuplanolla(~Tuplanoll@91-159-69-59.elisa-laajakaista.fi) (Quit: Leaving.)
2024-11-14 01:16:38 +0100Lord_of_Life(~Lord@user/lord-of-life/x-2819915) (Ping timeout: 245 seconds)
2024-11-14 01:17:43 +0100 <glguy> jackdk: You could do something like this: https://bpa.st/AYDQ
2024-11-14 01:18:38 +0100Lord_of_Life(~Lord@user/lord-of-life/x-2819915) Lord_of_Life
2024-11-14 01:20:20 +0100arahael(~arahael@user/arahael) (Quit: Lost terminal)
2024-11-14 01:25:58 +0100sprotte24(~sprotte24@p200300d16f059400e8d39b8ffa006815.dip0.t-ipconnect.de) (Quit: Leaving)
2024-11-14 01:26:14 +0100xff0x(~xff0x@2405:6580:b080:900:ca42:e655:d7e4:ec2b) (Ping timeout: 272 seconds)
2024-11-14 01:32:10 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) (Ping timeout: 252 seconds)
2024-11-14 01:35:18 +0100 <jackdk> Leary: alas no: "The RHS of an associated type declaration mentions out-of-scope variable ‘t’ All such variables must be bound on the LHS"; glguy: Yeah, recursing through the `Rep` seems like the best bet. Thanks to you both.
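A minimal sketch of the Rep-recursion approach glguy suggests, assuming a single-constructor record with a Generic instance; the names FieldType, GFieldType, Alt and FromJust are made up for illustration:

    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE TypeFamilies #-}
    {-# LANGUAGE TypeOperators #-}
    {-# LANGUAGE UndecidableInstances #-}
    module FieldType where

    import Data.Kind (Type)
    import GHC.Generics
    import GHC.TypeLits (Symbol)

    -- FieldType "foo" MyRecord reduces to the type of the "foo" field,
    -- found by recursing through the generic Rep of the record.
    type FieldType name r = FromJust (GFieldType name (Rep r))

    type family GFieldType (name :: Symbol) (f :: Type -> Type) :: Maybe Type where
      GFieldType name (D1 meta f)  = GFieldType name f
      GFieldType name (C1 meta f)  = GFieldType name f
      GFieldType name (f :*: g)    = Alt (GFieldType name f) (GFieldType name g)
      GFieldType name (S1 ('MetaSel ('Just name) su ss ds) (Rec0 t)) = 'Just t
      GFieldType name (S1 other rep) = 'Nothing

    -- First match wins when walking a product of selectors.
    type family Alt (a :: Maybe Type) (b :: Maybe Type) :: Maybe Type where
      Alt ('Just t) b = 'Just t
      Alt 'Nothing  b = b

    type family FromJust (m :: Maybe Type) :: Type where
      FromJust ('Just t) = t

    -- e.g. data MyRecord = MyRecord { foo :: Bar } deriving Generic
    --      FieldType "foo" MyRecord  ~  Bar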
2024-11-14 01:36:34 +0100MironZ3(~MironZ@nat-infra.ehlab.uk) (Quit: Ping timeout (120 seconds))
2024-11-14 01:36:34 +0100Square(~Square@user/square) Square
2024-11-14 01:39:10 +0100tromp(~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl) (Read error: Connection reset by peer)
2024-11-14 01:43:55 +0100MironZ3(~MironZ@nat-infra.ehlab.uk)
2024-11-14 01:44:15 +0100ljdarj1(~Thunderbi@user/ljdarj) ljdarj
2024-11-14 01:47:34 +0100ljdarj(~Thunderbi@user/ljdarj) (Ping timeout: 265 seconds)
2024-11-14 01:47:34 +0100ljdarj1ljdarj
2024-11-14 01:48:46 +0100athostFI(~Atte@176-93-56-50.bb.dnainternet.fi) (Read error: Connection reset by peer)
2024-11-14 01:49:48 +0100 <jle`> does anybody know if there have been any updates on https://github.com/haskell/cabal/issues/9577 ? is there a good way to get haddock to do multiple sublibraries?
2024-11-14 01:50:03 +0100 <glguy> getting ready for aoc? ;-)
2024-11-14 01:51:19 +0100 <geekosaur> https://github.com/haskell/cabal/pull/9821 maybe?
2024-11-14 01:51:52 +0100 <jle`> glguy: heh how did you guess
2024-11-14 01:52:04 +0100 <jle`> i am merging all of my aoc libs into a single master cabal project
2024-11-14 01:52:14 +0100 <glguy> jle`: I can't think of any other reason to use multiple sublibraries ^_^
2024-11-14 01:52:38 +0100 <geekosaur> amazonka and recent HLS use them
2024-11-14 01:52:59 +0100 <geekosaur> HLS for all its plugins, amazonka for all its generated service packages
2024-11-14 01:53:06 +0100 <glguy> geekosaur: Maybe someone used those libraries to solve an aoc problem then
2024-11-14 01:53:09 +0100 <Leary> jle`: There's some discussion on it here: https://discourse.haskell.org/t/best-practices-for-public-cabal-sublibraries/10272
2024-11-14 01:53:26 +0100 <jle`> geekosaur: ah that does seem promising, do you know if it's in any cabal releases?
2024-11-14 01:54:06 +0100 <geekosaur> not yet but it should be in 3.14.1.0
2024-11-14 01:54:16 +0100 <geekosaur> which is due around when ghc 9.12.1 GA is
2024-11-14 01:54:32 +0100 <jle`> ooh, that's in a matter of days right?
2024-11-14 01:54:36 +0100 <geekosaur> the release process has already begun
2024-11-14 01:54:47 +0100 <jle`> woo hoo
2024-11-14 01:55:19 +0100 <jle`> rate of cabal improvements has been amazing
2024-11-14 02:14:21 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) peterbecich
2024-11-14 02:18:56 +0100xff0x(~xff0x@fsb6a9491c.tkyc517.ap.nuro.jp)
2024-11-14 02:21:07 +0100ljdarj(~Thunderbi@user/ljdarj) (Quit: ljdarj)
2024-11-14 02:27:07 +0100CrunchyFlakes_(~CrunchyFl@ip1f13e94e.dynamic.kabel-deutschland.de) (Ping timeout: 264 seconds)
2024-11-14 02:28:51 +0100yin(~z@user/zero) (Read error: Connection reset by peer)
2024-11-14 02:29:30 +0100zero(~z@user/zero) zero
2024-11-14 02:33:06 +0100califax(~califax@user/califx) (Remote host closed the connection)
2024-11-14 02:43:43 +0100califax(~califax@user/califx) califx
2024-11-14 02:45:51 +0100jero98772(~jero98772@190.158.28.32)
2024-11-14 02:47:45 +0100telser(~quassel@user/telser) (Quit: https://quassel-irc.org - Chat comfortably. Anywhere.)
2024-11-14 02:55:16 +0100jero98772(~jero98772@190.158.28.32) (Ping timeout: 244 seconds)
2024-11-14 03:00:23 +0100housemate(~housemate@146.70.66.228) (Quit: "I saw it in a tiktok video and thought that it was the most smartest answer ever." ~ AnonOps Radio [some time some place] | I AM THE DERIVATIVE I AM GOING TANGENT TO THE CURVE!)
2024-11-14 03:13:26 +0100Smiles(uid551636@id-551636.lymington.irccloud.com) Smiles
2024-11-14 03:14:45 +0100machinedgod(~machinedg@d108-173-18-100.abhsia.telus.net) (Ping timeout: 246 seconds)
2024-11-14 03:30:17 +0100myxos(~myxos@syn-065-028-251-121.res.spectrum.com) myxokephale
2024-11-14 04:06:25 +0100alexherbo2(~alexherbo@2a02-8440-3117-f07c-987b-fc29-77ee-addd.rev.sfr.net) (Remote host closed the connection)
2024-11-14 04:13:00 +0100arahael(~arahael@user/arahael) arahael
2024-11-14 04:16:04 +0100arahael_(~arahael@user/arahael) arahael
2024-11-14 04:29:59 +0100td_(~td@i53870901.versanet.de) (Ping timeout: 260 seconds)
2024-11-14 04:31:21 +0100td_(~td@i5387092A.versanet.de) td_
2024-11-14 04:46:17 +0100Pozyomka(~pyon@user/pyon) (Quit: Reboot.)
2024-11-14 04:52:54 +0100agent314(~quassel@static-198-44-129-53.cust.tzulo.com) agent314
2024-11-14 05:09:25 +0100bitdex(~bitdex@gateway/tor-sasl/bitdex) bitdex
2024-11-14 05:10:33 +0100agent314(~quassel@static-198-44-129-53.cust.tzulo.com) (Ping timeout: 276 seconds)
2024-11-14 05:12:25 +0100divya(~user@139.5.11.223) divya
2024-11-14 05:19:01 +0100mange(~user@user/mange) mange
2024-11-14 05:19:01 +0100mange(~user@user/mange) (Excess Flood)
2024-11-14 05:25:13 +0100mc47(~mc47@xmonad/TheMC47) (Remote host closed the connection)
2024-11-14 05:25:33 +0100mc47(~mc47@xmonad/TheMC47) mc47
2024-11-14 05:27:30 +0100divya(~user@139.5.11.223) (Remote host closed the connection)
2024-11-14 05:30:03 +0100Pozyomka(~pyon@user/pyon) pyon
2024-11-14 05:30:40 +0100mange(~user@user/mange) mange
2024-11-14 05:31:26 +0100mange(~user@user/mange) (Client Quit)
2024-11-14 05:32:37 +0100pavonia(~user@user/siracusa) (Quit: Bye!)
2024-11-14 05:36:48 +0100housemate(~housemate@146.70.66.228) housemate
2024-11-14 05:37:01 +0100stiell_(~stiell@gateway/tor-sasl/stiell) (Ping timeout: 260 seconds)
2024-11-14 05:41:15 +0100stiell_(~stiell@gateway/tor-sasl/stiell) stiell
2024-11-14 05:54:07 +0100vanishingideal(~vanishing@user/vanishingideal) vanishingideal
2024-11-14 06:06:35 +0100mikko(~mikko@user/mikko) (Ping timeout: 255 seconds)
2024-11-14 06:13:50 +0100agent314(~quassel@static-198-44-129-53.cust.tzulo.com) agent314
2024-11-14 06:14:44 +0100visilii_(~visilii@213.24.127.47)
2024-11-14 06:14:54 +0100visilii(~visilii@213.24.133.209) (Ping timeout: 276 seconds)
2024-11-14 06:36:59 +0100Guest16(~Guest16@2401:4900:65c9:bca3:883d:d42c:cc19:7f95)
2024-11-14 06:37:48 +0100 <Guest16> hi
2024-11-14 06:37:56 +0100 <Guest16> Does anyone know why http://wiki.haskell.org/ is down
2024-11-14 06:39:54 +0100 <Axman6> There's been issues with the machine it runs on lately, which I believe are proving to be quite hard to fix. sm I think knows more (there's also #haskell-infrastructure)
2024-11-14 06:41:03 +0100 <probie> Guest16: https://mail.haskell.org/pipermail/haskell-cafe/2024-November/136929.html
2024-11-14 06:41:44 +0100 <Guest16> thank you
2024-11-14 06:52:22 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) (Remote host closed the connection)
2024-11-14 06:52:41 +0100Guest16(~Guest16@2401:4900:65c9:bca3:883d:d42c:cc19:7f95) (Quit: Client closed)
2024-11-14 06:58:06 +0100takuan(~takuan@178-116-218-225.access.telenet.be)
2024-11-14 07:00:42 +0100 <haskellbridge> <sm> https://github.com/haskell/haskell-wiki-configuration/issues/43
2024-11-14 07:02:55 +0100misterfish(~misterfis@84.53.85.146) misterfish
2024-11-14 07:04:06 +0100michalz(~michalz@185.246.207.203)
2024-11-14 07:08:06 +0100philopsos(~caecilius@user/philopsos) (Quit: Lost terminal)
2024-11-14 07:08:12 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) peterbecich
2024-11-14 07:12:25 +0100Smiles(uid551636@id-551636.lymington.irccloud.com) (Quit: Connection closed for inactivity)
2024-11-14 07:16:45 +0100Square(~Square@user/square) (Remote host closed the connection)
2024-11-14 07:17:09 +0100Square(~Square@user/square) Square
2024-11-14 07:25:25 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) (Ping timeout: 248 seconds)
2024-11-14 07:31:18 +0100Square2(~Square4@user/square) Square
2024-11-14 07:34:04 +0100Square(~Square@user/square) (Ping timeout: 252 seconds)
2024-11-14 07:51:43 +0100alp(~alp@2001:861:e3d6:8f80:c4b2:beb0:f361:d694)
2024-11-14 07:57:54 +0100hellwolf(~user@0e2f-3a3b-aecf-adb3-0f00-4d40-07d0-2001.sta.estpak.ee) (Ping timeout: 246 seconds)
2024-11-14 07:58:37 +0100divya(~user@139.5.11.223) divya
2024-11-14 07:59:20 +0100divya(~user@139.5.11.223) (Quit: ERC 5.6.0.30.1 (IRC client for GNU Emacs 30.0.91))
2024-11-14 08:00:12 +0100divya(~user@139.5.11.223) divya
2024-11-14 08:00:12 +0100tv(~tv@user/tv) (Read error: Connection reset by peer)
2024-11-14 08:06:51 +0100Sgeo(~Sgeo@user/sgeo) (Read error: Connection reset by peer)
2024-11-14 08:09:38 +0100Xe(~cadey@perl/impostor/xe) (Ping timeout: 248 seconds)
2024-11-14 08:09:59 +0100Xe_(~Xe@perl/impostor/xe) (Ping timeout: 252 seconds)
2024-11-14 08:16:08 +0100Xe(~Xe@perl/impostor/xe) Xe
2024-11-14 08:17:18 +0100Cadey(~cadey@perl/impostor/xe) Xe
2024-11-14 08:30:14 +0100acidjnk(~acidjnk@p200300d6e7283f73687bc11ede7922f8.dip0.t-ipconnect.de) acidjnk
2024-11-14 08:34:21 +0100petrichor(~znc-user@user/petrichor) petrichor
2024-11-14 08:45:10 +0100vanishingideal(~vanishing@user/vanishingideal) (Ping timeout: 265 seconds)
2024-11-14 08:45:17 +0100ubert(~Thunderbi@178.165.164.236.wireless.dyn.drei.com) ubert
2024-11-14 08:51:11 +0100ft(~ft@p4fc2a216.dip0.t-ipconnect.de) (Quit: leaving)
2024-11-14 08:51:35 +0100vanishingideal(~vanishing@user/vanishingideal) vanishingideal
2024-11-14 08:51:46 +0100lortabac(~lortabac@2a01:e0a:541:b8f0:55ab:e185:7f81:54a4) lortabac
2024-11-14 08:53:54 +0100kuribas(~user@2a02:1808:84:5008:bc1f:a609:eab5:5cb9) kuribas
2024-11-14 08:55:57 +0100ubert(~Thunderbi@178.165.164.236.wireless.dyn.drei.com) (Quit: ubert)
2024-11-14 08:58:52 +0100kuribas(~user@2a02:1808:84:5008:bc1f:a609:eab5:5cb9) (Remote host closed the connection)
2024-11-14 08:59:05 +0100kuribas(~user@2a02:1808:84:5008:61f:fb32:d5a4:cce1) kuribas
2024-11-14 09:00:02 +0100caconym(~caconym@user/caconym) (Quit: bye)
2024-11-14 09:00:39 +0100caconym(~caconym@user/caconym) caconym
2024-11-14 09:01:47 +0100kuribas`(~user@ip-188-118-57-242.reverse.destiny.be) kuribas
2024-11-14 09:03:43 +0100kuribas(~user@2a02:1808:84:5008:61f:fb32:d5a4:cce1) (Ping timeout: 264 seconds)
2024-11-14 09:08:39 +0100vanishingideal(~vanishing@user/vanishingideal) (Ping timeout: 252 seconds)
2024-11-14 09:10:24 +0100vanishingideal(~vanishing@user/vanishingideal) vanishingideal
2024-11-14 09:16:00 +0100falafel(~falafel@2600:1700:99f4:2050:1cad:26ba:1279:135d) falafel
2024-11-14 09:16:18 +0100tv(~tv@user/tv) tv
2024-11-14 09:22:38 +0100mceresa(~mceresa@user/mceresa) (Remote host closed the connection)
2024-11-14 09:22:47 +0100mceresa(~mceresa@user/mceresa) mceresa
2024-11-14 09:25:02 +0100misterfish(~misterfis@84.53.85.146) (Ping timeout: 255 seconds)
2024-11-14 09:27:15 +0100Smiles(uid551636@id-551636.lymington.irccloud.com) Smiles
2024-11-14 09:38:42 +0100falafel(~falafel@2600:1700:99f4:2050:1cad:26ba:1279:135d) (Remote host closed the connection)
2024-11-14 09:51:18 +0100hellwolf(~user@2001:1530:70:545:809e:22e1:baa3:1e4c) hellwolf
2024-11-14 09:58:59 +0100machinedgod(~machinedg@d108-173-18-100.abhsia.telus.net) machinedgod
2024-11-14 10:02:51 +0100alphazone(~alphazone@2.219.56.221) (Ping timeout: 246 seconds)
2024-11-14 10:05:57 +0100Maxdamantus(~Maxdamant@user/maxdamantus) (Ping timeout: 248 seconds)
2024-11-14 10:08:06 +0100rvalue(~rvalue@user/rvalue) (Read error: Connection reset by peer)
2024-11-14 10:08:36 +0100rvalue(~rvalue@user/rvalue) rvalue
2024-11-14 10:13:18 +0100misterfish(~misterfis@31-161-39-137.biz.kpn.net) misterfish
2024-11-14 10:14:38 +0100CrunchyFlakes(~CrunchyFl@ip1f13e94e.dynamic.kabel-deutschland.de)
2024-11-14 10:19:37 +0100Maxdamantus(~Maxdamant@user/maxdamantus) Maxdamantus
2024-11-14 10:24:45 +0100vanishingideal(~vanishing@user/vanishingideal) (Quit: leaving)
2024-11-14 10:31:28 +0100chele(~chele@user/chele) chele
2024-11-14 10:32:08 +0100alp(~alp@2001:861:e3d6:8f80:c4b2:beb0:f361:d694) (Remote host closed the connection)
2024-11-14 10:32:14 +0100tzh(~tzh@c-76-115-131-146.hsd1.or.comcast.net) (Quit: zzz)
2024-11-14 10:32:25 +0100alp(~alp@2001:861:e3d6:8f80:c18:bc99:f25e:38cc)
2024-11-14 10:41:23 +0100tromp(~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl)
2024-11-14 10:42:06 +0100favalex(~favalex@176.200.207.41)
2024-11-14 10:50:16 +0100mari-estel(~mari-este@user/mari-estel) mari-estel
2024-11-14 10:53:06 +0100favalex(~favalex@176.200.207.41) (Quit: Client closed)
2024-11-14 11:05:52 +0100mari18976(~mari-este@user/mari-estel) mari-estel
2024-11-14 11:07:14 +0100hgolden(~hgolden@2603:8000:9d00:3ed1:6c70:1ac0:d127:74dd) (Ping timeout: 260 seconds)
2024-11-14 11:08:17 +0100mari-estel(~mari-este@user/mari-estel) (Ping timeout: 248 seconds)
2024-11-14 11:10:27 +0100mari-estel(~mari-este@user/mari-estel) mari-estel
2024-11-14 11:10:45 +0100mari18976(~mari-este@user/mari-estel) (Read error: Connection reset by peer)
2024-11-14 11:12:07 +0100mari24610(~mari-este@user/mari-estel) mari-estel
2024-11-14 11:14:04 +0100lxsameer(~lxsameer@Serene/lxsameer) lxsameer
2024-11-14 11:15:10 +0100xff0x(~xff0x@fsb6a9491c.tkyc517.ap.nuro.jp) (Ping timeout: 252 seconds)
2024-11-14 11:15:12 +0100mari-estel(~mari-este@user/mari-estel) (Ping timeout: 276 seconds)
2024-11-14 11:20:44 +0100mari-estel(~mari-este@user/mari-estel) mari-estel
2024-11-14 11:23:39 +0100mari24610(~mari-este@user/mari-estel) (Ping timeout: 276 seconds)
2024-11-14 11:24:27 +0100ash3en(~Thunderbi@149.222.147.110) ash3en
2024-11-14 11:27:17 +0100Digitteknohippie(~user@user/digit) Digit
2024-11-14 11:27:49 +0100Digit(~user@user/digit) (Ping timeout: 260 seconds)
2024-11-14 11:28:05 +0100ash3en(~Thunderbi@149.222.147.110) (Client Quit)
2024-11-14 11:32:25 +0100Smiles(uid551636@id-551636.lymington.irccloud.com) (Quit: Connection closed for inactivity)
2024-11-14 11:33:37 +0100DigitteknohippieDigit
2024-11-14 11:49:20 +0100mikko(~mikko@user/mikko) mikko
2024-11-14 11:57:08 +0100mari-estel(~mari-este@user/mari-estel) (Remote host closed the connection)
2024-11-14 11:57:18 +0100mari-estel(~mari-este@user/mari-estel) mari-estel
2024-11-14 11:58:42 +0100mari-estel(~mari-este@user/mari-estel) (Remote host closed the connection)
2024-11-14 11:58:53 +0100mari-estel(~mari-este@user/mari-estel) mari-estel
2024-11-14 12:00:01 +0100Smiles(uid551636@id-551636.lymington.irccloud.com) Smiles
2024-11-14 12:05:29 +0100mari96334(~mari-este@user/mari-estel) mari-estel
2024-11-14 12:06:53 +0100mari96334(~mari-este@user/mari-estel) (Remote host closed the connection)
2024-11-14 12:07:05 +0100mari89179(~mari-este@user/mari-estel) mari-estel
2024-11-14 12:07:42 +0100mari-estel(~mari-este@user/mari-estel) (Ping timeout: 252 seconds)
2024-11-14 12:11:27 +0100xff0x(~xff0x@ai080132.d.east.v6connect.net)
2024-11-14 12:22:06 +0100jero98772(~jero98772@190.158.28.32)
2024-11-14 12:26:32 +0100__monty__(~toonn@user/toonn) toonn
2024-11-14 12:38:40 +0100mari-estel(~mari-este@user/mari-estel) mari-estel
2024-11-14 12:40:42 +0100mari89179(~mari-este@user/mari-estel) (Ping timeout: 252 seconds)
2024-11-14 12:42:19 +0100mari29333(~mari-este@user/mari-estel) mari-estel
2024-11-14 12:43:13 +0100mari-estel(~mari-este@user/mari-estel) (Read error: Connection reset by peer)
2024-11-14 12:43:53 +0100mari-estel(~mari-este@user/mari-estel) mari-estel
2024-11-14 12:46:03 +0100mari-estel(~mari-este@user/mari-estel) (Client Quit)
2024-11-14 12:47:34 +0100mari29333(~mari-este@user/mari-estel) (Ping timeout: 260 seconds)
2024-11-14 12:52:05 +0100pavonia(~user@user/siracusa) siracusa
2024-11-14 12:57:24 +0100 <hellwolf> Is "IOPhobia" a pathological case? After decades of programming, I find pure joy in writing the main part of the code so that it deals with zero IO. And only Haskell can guarantee that, to the extent that I'm questioning whether I'm sick.
2024-11-14 12:58:54 +0100jero98772(~jero98772@190.158.28.32) (Remote host closed the connection)
2024-11-14 13:00:04 +0100caconym(~caconym@user/caconym) (Quit: bye)
2024-11-14 13:01:18 +0100 <Rembane> hellwolf: Nah, it's sound. Not having to deal with side effects makes code so much easier to write, read and test.
2024-11-14 13:02:11 +0100caconym(~caconym@user/caconym) caconym
2024-11-14 13:03:23 +0100 <hellwolf> I hesitate to make a connection with germophobia, since I personally am the opposite of a germophobe.
2024-11-14 13:04:00 +0100 <Leary> hellwolf: Welcome to the oasis of sanity.
2024-11-14 13:04:09 +0100 <Rembane> Some germs are quite good to not be in contact with IMO
2024-11-14 13:05:28 +0100 <hellwolf> like unsafePerformIO?
2024-11-14 13:07:45 +0100acidjnk(~acidjnk@p200300d6e7283f73687bc11ede7922f8.dip0.t-ipconnect.de) (Ping timeout: 248 seconds)
2024-11-14 13:09:11 +0100misterfish(~misterfis@31-161-39-137.biz.kpn.net) (Ping timeout: 252 seconds)
2024-11-14 13:09:23 +0100mari-estel(~mari-este@user/mari-estel) mari-estel
2024-11-14 13:13:49 +0100 <dminuoso> unsafePerformIO is indeed quite unsafe. :-)
2024-11-14 13:16:33 +0100hellwolflike when the label is true of itself.
2024-11-14 13:17:57 +0100 <dminuoso> It was a simple case of something like `replicate n (unsafePerformIO (newIORef []))`, which GHC happily refactored into `let x = unsafePerformIO (newIORef []) in replicate n x`
2024-11-14 13:18:14 +0100housemate(~housemate@146.70.66.228) (Quit: "I saw it in a tiktok video and thought that it was the most smartest answer ever." ~ AnonOps Radio [some time some place] | I AM THE DERIVATIVE I AM GOING TANGENT TO THE CURVE!)
2024-11-14 13:18:15 +0100 <dminuoso> (In reality the code was far more sophisticated, so it was not obvious how or why this happened)
2024-11-14 13:18:46 +0100 <dminuoso> I mean actually there was a `traverse_` in there too.
2024-11-14 13:19:48 +0100 <dminuoso> Yeah I think it was something like `unsafePerformIO (traverse_ (\_ -> newIORef []) xs)` and GHC successfully floated that IORef out
2024-11-14 13:20:06 +0100 <dminuoso> Ill have to dig through the commit history to find this one.
2024-11-14 13:20:14 +0100 <hellwolf> which code base?
2024-11-14 13:20:26 +0100 <dminuoso> An internal compiler of ours.
2024-11-14 13:20:32 +0100 <dminuoso> No, those examples I named are both wrong. Mmm.
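A minimal, hypothetical illustration of the class of bug being described here (not the actual code from that compiler): with optimisations, GHC's full-laziness pass may float a constant unsafePerformIO call out of a lambda, so an IORef that was meant to be allocated per element ends up shared.

    import Data.IORef (IORef, newIORef)
    import System.IO.Unsafe (unsafePerformIO)

    -- Intended: one fresh IORef per element.
    mkRefs :: Int -> [IORef [Int]]
    mkRefs n = map (\_ -> unsafePerformIO (newIORef [])) [1 .. n]
    -- Under -O, full laziness may rewrite this to
    --   let r = unsafePerformIO (newIORef []) in map (\_ -> r) [1 .. n]
    -- so all n "distinct" refs are really the same one.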
2024-11-14 13:22:02 +0100 <dminuoso> hellwolf: Anyway, IO can still be a useful tool, especially if you want any kind of introspectability of whats going on (say logging or debugging)
2024-11-14 13:22:21 +0100 <dminuoso> Pure code is often cumbersome to debug
2024-11-14 13:22:42 +0100 <dminuoso> Consider something like GHC, where large portions work in IO
2024-11-14 13:23:59 +0100mari59415(~mari-este@user/mari-estel) mari-estel
2024-11-14 13:24:25 +0100arahael_(~arahael@user/arahael) (Quit: leaving)
2024-11-14 13:25:23 +0100arahael_(~arahael@user/arahael) arahael
2024-11-14 13:26:03 +0100mari-estel(~mari-este@user/mari-estel) (Ping timeout: 252 seconds)
2024-11-14 13:26:39 +0100tromp(~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl) (Quit: My iMac has gone to sleep. ZZZzzz…)
2024-11-14 13:27:19 +0100 <hellwolf> runTrace your_pure_fn your_trace_filters ...currying your_pure_fn_args...
2024-11-14 13:28:06 +0100 <dminuoso> What is `runTrace` supposed to be here?
2024-11-14 13:28:25 +0100 <hellwolf> that'd be my ideal way of tracing into your pure fn in a principled way. I am not entirely sure how feasible/difficult it could be; I did some things that involve some aspects of such a thing.
2024-11-14 13:28:32 +0100misterfish(~misterfis@31-161-39-137.biz.kpn.net) misterfish
2024-11-14 13:28:48 +0100 <hellwolf> sorry, typed too slow. I meant to propose a hypothetical
2024-11-14 13:31:46 +0100haskellbridge(~hackager@syn-024-093-192-219.res.spectrum.com) (Remote host closed the connection)
2024-11-14 13:31:51 +0100 <lortabac> "Pure code is often cumbersome to debug" *with GHC*
2024-11-14 13:32:28 +0100 <lortabac> I don't think we should see lack of observability as an intrinsic property of pure computations
2024-11-14 13:32:36 +0100haskellbridge(~hackager@syn-024-093-192-219.res.spectrum.com) hackager
2024-11-14 13:32:36 +0100ChanServ+v haskellbridge
2024-11-14 13:32:42 +0100 <mari59415> no mentions of pure code being a lot easier to test
2024-11-14 13:33:53 +0100 <hellwolf> But I find the habit of spending more time thinking than examining what happened to be a better use of time. Of course, on the contrary, Linus notoriously promoted the idea of printf debugging. So I guess the tools influence how you do troubleshooting.
2024-11-14 13:34:38 +0100 <hellwolf> exactly, mari59415, it is a problem most applicable to impure code. For pure code, you write properties (which means thinking a lot about what you are writing.)
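As a small example of the property-testing style mentioned above (hypothetical property, using QuickCheck):

    import Test.QuickCheck (quickCheck)

    -- A property of a pure function needs no mocking or setup: state a law
    -- and let QuickCheck search for counterexamples.
    prop_reverseInvolutive :: [Int] -> Bool
    prop_reverseInvolutive xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseInvolutive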
2024-11-14 13:35:34 +0100mari-estel(~mari-este@user/mari-estel) mari-estel
2024-11-14 13:36:10 +0100 <hellwolf> fwiw, @dminuoso, I had a small example here https://discourse.haskell.org/t/variable-arity-currying-helper/10659 that decorates "let foo' = curry' (MkFn foo)", but that assumes all arguments are "showable". To make it a runTrace, you'd need a default instance for all types, and then overlapping instances for Show, Num, Functor, etc.
2024-11-14 13:37:49 +0100mari59415(~mari-este@user/mari-estel) (Read error: Connection reset by peer)
2024-11-14 13:38:40 +0100 <mari-estel> huh properties help equally with pure and monadic
2024-11-14 13:38:40 +0100 <mari-estel> prints or traces are a good way to collect test samples while troubleshooting
2024-11-14 13:41:35 +0100 <hellwolf> Does Trace.trace help?
2024-11-14 13:41:53 +0100 <hellwolf> Debug.Trace (trace)
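For reference, a tiny sketch of Debug.Trace.trace in pure code (hypothetical fib): the message is printed when the surrounding value is forced, so output order follows evaluation order rather than source order.

    import Debug.Trace (trace)

    fib :: Int -> Int
    fib n
      | n < 2     = n
      | otherwise = trace ("fib " ++ show n) (fib (n - 1) + fib (n - 2))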
2024-11-14 13:43:13 +0100mari73904(~mari-este@user/mari-estel) mari-estel
2024-11-14 13:44:19 +0100mari-estel(~mari-este@user/mari-estel) (Read error: Connection reset by peer)
2024-11-14 13:44:51 +0100mari-estel(~mari-este@user/mari-estel) mari-estel
2024-11-14 13:45:55 +0100 <kuribas`> The problem is that the GHC debugger follows the imperative model for debugging (stepping through, etc.)
2024-11-14 13:46:10 +0100 <kuribas`> A more useful pure debugger would allow you to choose which expression to evaluate.
2024-11-14 13:46:40 +0100 <kuribas`> In the end, laziness doesn't specify an order of execution.
2024-11-14 13:47:00 +0100 <kuribas`> As long as the semantics are preserved.
2024-11-14 13:47:01 +0100mari-estel(~mari-este@user/mari-estel) (Client Quit)
2024-11-14 13:47:50 +0100mari73904(~mari-este@user/mari-estel) (Ping timeout: 255 seconds)
2024-11-14 13:48:41 +0100 <__monty__> That would also cause confusion though. Since sometimes referential transparency is a lie. And it's easy to convince yourself that the expressions must surely be evaluating in the order you think they are.
2024-11-14 13:58:57 +0100 <kuribas`> __monty__: how can it be a lie with "pure" code?
2024-11-14 13:59:08 +0100 <kuribas`> Assuming it doesn't use unsafePerformIO.
2024-11-14 13:59:51 +0100alphazone(~alphazone@2.219.56.221)
2024-11-14 14:02:26 +0100 <__monty__> There's the rub : )
2024-11-14 14:05:11 +0100 <haskellbridge> <hellwolf> the lie is limited to the extent that, if your program is not total, the debugger might hit your bottom where you intended to leave it.
2024-11-14 14:12:04 +0100 <kuribas`> > head [1, undefined]
2024-11-14 14:12:05 +0100 <lambdabot> 1
2024-11-14 14:12:30 +0100 <kuribas`> If you would evaluate the second element of the list, the debugger should not halt the whole expression.
2024-11-14 14:13:44 +0100 <bailsman> Huh, are mutable vectors a scam? `VM.iforM_ mv $ \i x -> VM.write mv i (updateValue x)` is considerably slower for simple objects, and barely faster than `map updateValue` even for large complex objects.
2024-11-14 14:14:28 +0100 <geekosaur> they will definitely have costs you don't incur with immutable vectors
2024-11-14 14:15:25 +0100 <bailsman> So the use cases are considerably more niche than I thought. Like if you need to exchange two elements or something, the pure version would have to copy the entire thing and the mutable version only two elements. But for most cases, it's a bait?
2024-11-14 14:16:30 +0100 <bailsman> If you expect to touch every element, just use map.
2024-11-14 14:16:51 +0100 <geekosaur> pretty much
2024-11-14 14:17:21 +0100 <geekosaur> it's still going to do copies, I think, and more of them the more elements you touch. but I'm not sure how that plays out for vector
2024-11-14 14:18:05 +0100 <geekosaur> for Array it's split into "cards" and modifications within a single card are batched so only a single copy needs to be done by the mutator, AIUI
2024-11-14 14:18:13 +0100 <bailsman> I tried looking at it with -ddump-simpl and the mutable version doesn't compile to simple code at all. What should be like 5 assembly instructions turns into several pages of assembly.
2024-11-14 14:18:30 +0100 <geekosaur> but that's built into GC and I don't think vector can take advantage of it
2024-11-14 14:18:58 +0100 <bailsman> I think if you need a mutable algorithm maybe you should go through the C FFI or something.
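For comparison, the two idioms being discussed, written out as a sketch (hypothetical names bump/bumpInPlace; the in-place version corresponds to the VM.iforM_/VM.write loop quoted above):

    import qualified Data.Vector as V
    import qualified Data.Vector.Mutable as VM

    -- Pure update: allocates only the result vector and fuses with
    -- neighbouring vector operations.
    bump :: V.Vector Int -> V.Vector Int
    bump = V.map (+ 1)

    -- In-place update: V.modify copies the input once, then mutates the
    -- copy inside ST.
    bumpInPlace :: V.Vector Int -> V.Vector Int
    bumpInPlace = V.modify $ \mv ->
      VM.iforM_ mv $ \i x -> VM.write mv i (x + 1)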
2024-11-14 14:24:13 +0100alexherbo2(~alexherbo@2a02-8440-3313-668b-a9ec-921f-0511-ee3f.rev.sfr.net) alexherbo2
2024-11-14 14:32:56 +0100weary-traveler(~user@user/user363627) (Remote host closed the connection)
2024-11-14 14:33:10 +0100tromp(~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl)
2024-11-14 14:35:46 +0100acidjnk(~acidjnk@p200300d6e7283f73687bc11ede7922f8.dip0.t-ipconnect.de) acidjnk
2024-11-14 14:38:28 +0100mari-estel(~mari-este@user/mari-estel) mari-estel
2024-11-14 14:48:24 +0100misterfish(~misterfis@31-161-39-137.biz.kpn.net) (Ping timeout: 276 seconds)
2024-11-14 14:50:45 +0100weary-traveler(~user@user/user363627) user363627
2024-11-14 14:56:06 +0100bitdex(~bitdex@gateway/tor-sasl/bitdex) (Quit: = "")
2024-11-14 15:01:26 +0100ash3en(~Thunderbi@149.222.147.110) ash3en
2024-11-14 15:05:34 +0100ash3en(~Thunderbi@149.222.147.110) (Client Quit)
2024-11-14 15:06:31 +0100L29Ah(~L29Ah@wikipedia/L29Ah) (Ping timeout: 265 seconds)
2024-11-14 15:10:02 +0100 <dminuoso> bailsman: Do you have the actual code and the generated core to look at?
2024-11-14 15:16:32 +0100mari-estel(~mari-este@user/mari-estel) (Quit: errands)
2024-11-14 15:18:37 +0100Sgeo(~Sgeo@user/sgeo) Sgeo
2024-11-14 15:30:28 +0100mari-estel(~mari-este@user/mari-estel) mari-estel
2024-11-14 15:35:09 +0100ash3en(~Thunderbi@149.222.147.110) ash3en
2024-11-14 15:35:40 +0100alexherbo2(~alexherbo@2a02-8440-3313-668b-a9ec-921f-0511-ee3f.rev.sfr.net) (Remote host closed the connection)
2024-11-14 15:35:45 +0100ash3en(~Thunderbi@149.222.147.110) (Client Quit)
2024-11-14 15:35:59 +0100alexherbo2(~alexherbo@2a02-8440-3313-668b-a9ec-921f-0511-ee3f.rev.sfr.net) alexherbo2
2024-11-14 15:36:27 +0100yaroot(~yaroot@2400:4052:ac0:d901:1cf4:2aff:fe51:c04c) (Read error: Connection reset by peer)
2024-11-14 15:36:41 +0100yaroot(~yaroot@2400:4052:ac0:d901:1cf4:2aff:fe51:c04c) yaroot
2024-11-14 15:39:55 +0100Cadey(~cadey@perl/impostor/xe) (Quit: WeeChat 4.4.2)
2024-11-14 15:41:12 +0100weary-traveler(~user@user/user363627) (Quit: Konversation terminated!)
2024-11-14 15:47:02 +0100billchenchina(~billchenc@2a0d:2580:ff0c:1:e3c9:c52b:a429:5bfe) billchenchina
2024-11-14 15:48:48 +0100 <bailsman> Plain old lists are consistently the fastest. I find that somewhat confusing, since in imperative languages linked lists are often slow.
2024-11-14 15:49:41 +0100 <geekosaur> if all you're doing is iterating through them, consider that ghc is optimized for that case: think of a list as a loop encoded as data
2024-11-14 15:49:51 +0100 <hellwolf> I mean, if you need to do a lot of random indexing, it's got to be slow. but for stream processing, it is probably the most efficient
2024-11-14 15:50:23 +0100 <geekosaur> allocation, gc, and iteration are all optimized because it's so common
2024-11-14 15:50:37 +0100 <haskellbridge> <Bowuigi> Reasoning imperatively in functional languages leads to bad performance in general
2024-11-14 15:50:40 +0100misterfish(~misterfis@31-161-39-137.biz.kpn.net) misterfish
2024-11-14 15:51:04 +0100ph88(~ph88@2a02:8109:9e26:c800:7ee4:dffc:4616:9e2a)
2024-11-14 15:52:00 +0100 <bailsman> I thought I needed to do a lot of random indexing. But, now I'm not sure if I shouldn't instead redesign everything so that it does not require random access.
2024-11-14 15:52:55 +0100 <haskellbridge> <Bowuigi> Have you tried any functional random access data structures?
2024-11-14 15:53:14 +0100 <haskellbridge> <Bowuigi> Data.Map is the first one that comes to mind
2024-11-14 15:53:38 +0100 <bailsman> Data.Vector.map over a vector is consistently 4x slower than regular map over []. (Data.Map is 10x slower)
2024-11-14 15:54:06 +0100hgolden(~hgolden@2603:8000:9d00:3ed1:6c70:1ac0:d127:74dd) hgolden
2024-11-14 15:54:11 +0100 <hellwolf> "data Array i e" is also underrated.
2024-11-14 15:54:16 +0100 <geekosaur> right, map's going to be one of those cases that [] will work very well for
2024-11-14 15:55:03 +0100 <geekosaur> it actually compiles down to a tight loop in most cases, not the C-style linked list you might expect
2024-11-14 15:55:14 +0100 <ph88> when i have some code more or less in the shape of this thing https://hackage.haskell.org/package/containers-0.7/docs/Data-Tree.html#t:Tree how can i write code that changes `a` with State but there are two points to change it, when going down (into the leafs) and going up (back to the root)? also known as visitor pattern
2024-11-14 15:55:38 +0100 <geekosaur> ph88, are you aware of tree zippers?
2024-11-14 15:55:42 +0100 <ph88> no
2024-11-14 15:55:53 +0100 <geekosaur> sadly the first reference that comes to mind is on the wiki…
2024-11-14 15:56:06 +0100 <bailsman> I have some parts right now that use random access. But was thinking maybe I don't want to pay a 4x performance penalty just for random access.
2024-11-14 15:56:08 +0100 <hellwolf> (wiki has been fixed)
2024-11-14 15:56:16 +0100 <geekosaur> just found that, yes
2024-11-14 15:56:29 +0100 <geekosaur> actually hgolden in #h-i said there are still some style issues
2024-11-14 15:56:30 +0100 <bailsman> Awesome! Thank you to whoever fixed it
2024-11-14 15:56:30 +0100 <geekosaur> https://wiki.haskell.org/Zipper
2024-11-14 15:56:57 +0100 <geekosaur> it uses a tree as the example data structure, where most of them focus on lists which are the easiest case
2024-11-14 15:57:50 +0100 <haskellbridge> <Bowuigi> Gérard Huet's pearl "The Zipper" is also good if you don't mind OCaml
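A minimal sketch of a zipper for Data.Tree along the lines of the wiki page and Huet's paper (the names Crumb, TreeZipper, down, up and modifyLabel are made up): the focus can be moved down into a child and back up, and the focused label can be edited at either point.

    import Data.Tree (Tree (..))

    -- The focused subtree plus the path back to the root. Each crumb stores
    -- the parent's label and the siblings to the left/right of the hole.
    data Crumb a = Crumb a [Tree a] [Tree a]
    type TreeZipper a = (Tree a, [Crumb a])

    down :: Int -> TreeZipper a -> Maybe (TreeZipper a)
    down i (Node x cs, bs) = case splitAt i cs of
      (ls, c : rs) -> Just (c, Crumb x ls rs : bs)
      _            -> Nothing

    up :: TreeZipper a -> Maybe (TreeZipper a)
    up (t, Crumb x ls rs : bs) = Just (Node x (ls ++ t : rs), bs)
    up (_, [])                 = Nothing

    modifyLabel :: (a -> a) -> TreeZipper a -> TreeZipper a
    modifyLabel f (Node x cs, bs) = (Node (f x) cs, bs)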
2024-11-14 15:58:09 +0100 <bailsman> What do you mean by tight loop? Surely it still has to allocate all the elements for the new list?
2024-11-14 15:58:25 +0100 <bailsman> Or does it turn into an in-place algorithm?
2024-11-14 15:58:44 +0100 <geekosaur> if your generation and consumption are written correctly, they get pipelined
2024-11-14 15:59:04 +0100 <bailsman> I don't know what any of those words mean
2024-11-14 15:59:10 +0100 <ph88> wiki got a makeover? i remember it being uglier
2024-11-14 15:59:34 +0100 <geekosaur> ph88, that's what I meant by style but also a mediawiki upgrade is what started the whole outage thing
2024-11-14 15:59:42 +0100 <bailsman> I am just doing [SmallRecord] -> [SmallRecord] by updating a field in the record
2024-11-14 15:59:43 +0100 <haskellbridge> <Bowuigi> GHC does dark magic to not actually use a linked list
2024-11-14 16:00:07 +0100 <geekosaur> bailsman, construction of the list vs. mapping through the list
2024-11-14 16:00:40 +0100 <geekosaur> in the optimal case, the list is never constructed as such, elements are fed directly to map as they are created
2024-11-14 16:01:05 +0100 <bailsman> Hey, no, that's cheating. Then I've written my benchmark wrong
2024-11-14 16:01:11 +0100 <bailsman> I need to benchmark the list already existing
2024-11-14 16:01:46 +0100 <bailsman> It has to actually be stored and loaded from memory to be a fair comparison.
2024-11-14 16:02:19 +0100 <bailsman> Why is understanding the performance of things so difficult aaargh
2024-11-14 16:02:30 +0100 <EvanR> yes, when you "write C in any language" in haskell, it's not optimal. Surprise
2024-11-14 16:02:33 +0100 <geekosaur> because everyone wants speeeeeeed
2024-11-14 16:02:59 +0100 <EvanR> haskell is weird that way. But it's actually not smart to write C in any language generally
2024-11-14 16:03:18 +0100 <geekosaur> (including C /gd&r)
2024-11-14 16:03:24 +0100weary-traveler(~user@user/user363627) user363627
2024-11-14 16:03:33 +0100 <bailsman> EvanR: That would be helpful advice if I automatically understood how to write idiomatic-and-performant code in Haskell - but unfortunately that wisdom is as yet inaccessible to me :P
2024-11-14 16:03:49 +0100 <EvanR> advice: forget anything you know about C and C++ and learn haskell
2024-11-14 16:04:02 +0100 <EvanR> also forget python for good measure
2024-11-14 16:04:47 +0100 <geekosaur> I think maybe if you want to understand idiomatic-and-performant, it might be worth looking at Chris Okasaki's thesis on functional data structures
2024-11-14 16:04:53 +0100 <haskellbridge> <Bowuigi> Because it is different to what you are used to. Functional languages can do optimizations that imperative langs can't, like list/fold/map/hylo fusion (AKA removing intermediate computations while traversing or creating stuff), safe(-ish) inlining, laziness stuff, etc
2024-11-14 16:05:47 +0100 <geekosaur> IIRC it's in OCaml instead of Haskell so it won't cover things like laziness, but it'll still teach you the zen of functional programming
2024-11-14 16:06:43 +0100 <bailsman> How do I force it to actually create the list? `smallRecs = force [... | ... <- ...]` did not change anything, map is still as fast as it was before. Maybe it wasn't cheating?
2024-11-14 16:06:57 +0100 <haskellbridge> <Bowuigi> Laziness is something you will want to learn at some point but for now you can use "{-# LANGUAGE Strict #-}" if you don't want laziness
2024-11-14 16:07:23 +0100 <bailsman> Or did the compiler optimize that out
2024-11-14 16:07:27 +0100 <EvanR> why are we trying to cripple haskell again by "actually creating lists" and enabling Strict xD
2024-11-14 16:08:02 +0100L29Ah(~L29Ah@wikipedia/L29Ah) L29Ah
2024-11-14 16:08:14 +0100 <geekosaur> there's multiple levels of cheating
2024-11-14 16:08:18 +0100 <geekosaur> build/foldr is one
2024-11-14 16:08:20 +0100 <haskellbridge> <Bowuigi> You can force the first constructor (IIRC) with "seq", every constructor with "length" and the entire thing with "deepseq". Yeah Haskell has evaluation control
2024-11-14 16:08:48 +0100 <geekosaur> optimizing lists by treating them as loops is another
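A small illustration of the three levels of forcing Bowuigi lists (seq to WHNF, length for the spine, deepseq for everything), as a hypothetical main:

    import Control.DeepSeq (deepseq)

    main :: IO ()
    main = do
      let xs = [1, undefined, 3] :: [Int]
      xs `seq` putStrLn "seq: only the outermost (:) is forced, fine"
      print (length xs)   -- forces the spine only, also fine
      xs `deepseq` putStrLn "not reached: deepseq forces the undefined element"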
2024-11-14 16:08:52 +0100 <ph88> geekosaur, i was mistaken, i have actually not one data structure to fit all of the tree but multiple like `data Program = Program a [Statement]` and `data Statement = Statement a Expression` (dummy examples). Can tree zippers work with this? or do i need another technique?
2024-11-14 16:09:51 +0100 <geekosaur> I don't know of any examples, but that doesn't seem much different from (say) a zipper for red-black trees
2024-11-14 16:10:04 +0100 <haskellbridge> <Bowuigi> You might need the slightly more general idea of the "derivative of a data structure" but it is essentially the same idea
2024-11-14 16:10:43 +0100 <bailsman> doing smallRecsDeep = smallRecs `deepseq` smallRecs did not change anything either
2024-11-14 16:11:03 +0100Square2(~Square4@user/square) (Ping timeout: 246 seconds)
2024-11-14 16:11:06 +0100 <geekosaur> right, I'm not sure it's the place to start buit the fundamentals of the zipper technique are http://strictlypositive.org/diff.pdf
2024-11-14 16:11:06 +0100 <bailsman> the benchmark is using `nf` so that should be forcing both the source list and the destination list to be actually created now, right? But it's exactly as fast as before
2024-11-14 16:11:37 +0100 <EvanR> that's just a definition, it would have to be evaluated to cause the normal form to be realized
2024-11-14 16:11:54 +0100 <geekosaur> given the stuff in that paper you should be able to construct a derivative-based zipper for any list-like or tree-like structure
2024-11-14 16:12:13 +0100 <EvanR> it might also be that the non deepseq version was "just as slow" for some reason
2024-11-14 16:12:16 +0100 <haskellbridge> <Bowuigi> I think that usage of deepseq means "fully evaluate smallRecs when smallRecs is evaluated" but I am probably wrong
2024-11-14 16:12:28 +0100 <lortabac> Bowuigi: probably worth mentioning that the Strict pragma only makes user definitions strict. So the rest of the ecosystem (including lists) will still be lazy
2024-11-14 16:12:51 +0100 <geekosaur> not even that, actually. "strict" in Haskell means WHNF
2024-11-14 16:13:00 +0100 <geekosaur> not `rnf`
2024-11-14 16:13:02 +0100 <lortabac> it won't magically make Haskell a strict language
2024-11-14 16:13:33 +0100 <haskellbridge> <Bowuigi> So it is StrictData but also for functions? Huh
2024-11-14 16:13:40 +0100 <lortabac> geekosaur: if you only use functions and data types that you define it shouldn't make a difference I guess
2024-11-14 16:13:56 +0100 <haskellbridge> <Bowuigi> Oh well, you can't make Haskell strict on a single pragma then
2024-11-14 16:14:27 +0100 <bailsman> How do I write this benchmark to ensure the list is already created when map runs and not streamed
2024-11-14 16:14:33 +0100 <bailsman> and the output list is created as well
2024-11-14 16:14:33 +0100 <geekosaur> and you really don't want to because a fair amount of the Prelude assumes laziness and will bottom if you somehow forced them to be strict
2024-11-14 16:14:38 +0100 <haskellbridge> <Bowuigi> AutomaticBang might have been a clearer name lol
2024-11-14 16:15:06 +0100 <bailsman> When I hear someone say AutomaticBang something different comes to mind than was probably intended
2024-11-14 16:15:25 +0100 <haskellbridge> <Bowuigi> Fair enough
2024-11-14 16:15:43 +0100acidjnk(~acidjnk@p200300d6e7283f73687bc11ede7922f8.dip0.t-ipconnect.de) (Ping timeout: 264 seconds)
2024-11-14 16:16:01 +0100 <lortabac> AutomaticExclamationMark
2024-11-14 16:16:20 +0100mari-estel(~mari-este@user/mari-estel) (Quit: on the move)
2024-11-14 16:16:33 +0100 <bailsman> Did I write this correctly? https://paste.tomsmeding.com/B6koT8Nx
2024-11-14 16:16:48 +0100 <bailsman> In my real-world-use-case I'm pretty sure the lists are going to have to be loaded from memory and cannot be streamed.
2024-11-14 16:16:58 +0100 <haskellbridge> <Bowuigi> bailsman foldr/build uses a rule so just creating the list in a function that is not inlined (with "{-# NOINLINE createList #-}") may work, I don't have a GHC at hand to test though
2024-11-14 16:17:04 +0100 <EvanR> in IO somewhere realList <- evaluate (force list)
2024-11-14 16:17:09 +0100 <geekosaur> consider that loading can be streamed
2024-11-14 16:17:10 +0100 <EvanR> should do it
2024-11-14 16:17:14 +0100 <geekosaur> as can writing
2024-11-14 16:17:39 +0100 <geekosaur> in fact that's where streaming frameworks came from
2024-11-14 16:18:18 +0100 <bailsman> geekosaur: I'm fine that it streams loading and writing. But streaming the list generator into the update and never actually constructing the intermediate list is cheating for the purposes of the benchmark, since that won't be possible in the real use case.
2024-11-14 16:18:20 +0100 <EvanR> usually when you load a big list of stuff from I/O, the whole list will exist just because
2024-11-14 16:18:26 +0100 <EvanR> unless you use lazy I/O which is weird
2024-11-14 16:19:25 +0100 <EvanR> (this is not the case for writing a big list out to I/O, this is a case where you can get streaming, which is good)
2024-11-14 16:19:34 +0100 <geekosaur> or a streaming framework (conduit, pipes, streamly, …)
2024-11-14 16:19:55 +0100acidjnk(~acidjnk@p200300d6e7283f73687bc11ede7922f8.dip0.t-ipconnect.de) acidjnk
2024-11-14 16:20:27 +0100 <geekosaur> anyway if you really want to know if the compiler is "cheating", look at the Core (intermediate representation language, use `-ddump-ds -ddump-to-file`)
2024-11-14 16:20:43 +0100 <geekosaur> or for quick and dirty, play.haskell.org has a button to generate Core
2024-11-14 16:21:54 +0100 <geekosaur> sorry, `-ddump-simpl`
2024-11-14 16:22:01 +0100 <haskellbridge> <Bowuigi> If it is fusing anything it will be fairly obvious there. Reading Core is very necessary for doing very fast Haskell code
2024-11-14 16:22:18 +0100 <bailsman> Using `smallRecs <- evaluate $ force [... | ... <- ...]` makes no difference whatsoever. Map is still faster 4x, absolutely no difference in performance. So can I now conclude it was not cheating?
2024-11-14 16:23:32 +0100 <EvanR> no you should still read the Core dump
2024-11-14 16:24:05 +0100 <EvanR> haskell is so high level you can't conclude anything from the source code
2024-11-14 16:24:15 +0100 <EvanR> on the subject of low level optimizations
2024-11-14 16:24:21 +0100 <bailsman> I printed the -ddump-simpl output to a file but I have no real clue how to interpret what I'm looking at
2024-11-14 16:24:31 +0100 <EvanR> I think there was a core primer somewhere
2024-11-14 16:24:50 +0100 <EvanR> but essentially it's a simplified low level language that haskell is translated to
2024-11-14 16:25:01 +0100 <EvanR> before it's compiled and assembled
2024-11-14 16:25:05 +0100 <bailsman> The pure function just translates to: updatePure_r2HI = map @SmallRecord @SmallRecord updateValue_r2HH
2024-11-14 16:25:24 +0100 <bailsman> sorry list function, I guess they're all pure except the mutable vector one
2024-11-14 16:25:39 +0100 <EvanR> @SmallRecord is a type, updateValue_r2HH should be another thing defined in the dump somewhere
2024-11-14 16:25:39 +0100 <lambdabot> Unknown command, try @list
2024-11-14 16:26:07 +0100 <bailsman> EvanR: I posted the source code of my benchmark here. https://paste.tomsmeding.com/B6koT8Nx
2024-11-14 16:26:13 +0100 <bailsman> please point out any beginner mistakes there
2024-11-14 16:26:42 +0100 <bailsman> All are using the same updateValue function.
2024-11-14 16:26:49 +0100 <EvanR> trying to +1 everything in the collection?
2024-11-14 16:28:39 +0100 <EvanR> it's not clear what defaultMain and bench do
2024-11-14 16:28:44 +0100Inst_(~Inst@user/Inst) (Ping timeout: 272 seconds)
2024-11-14 16:29:18 +0100kuribas`(~user@ip-188-118-57-242.reverse.destiny.be) (Remote host closed the connection)
2024-11-14 16:29:31 +0100 <EvanR> or `nf' ?
2024-11-14 16:29:37 +0100 <bailsman> I copied that from some example code to do a benchmark somewhere
2024-11-14 16:29:46 +0100 <bailsman> I don't understand either but it printed some numbers to my console output
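defaultMain, bench and nf here match criterion's API; assuming that is the library behind the paste, a sketch of how to guarantee the input list is fully built before timing starts (SmallRecord and updateValue are stand-ins for the definitions in the paste):

    {-# LANGUAGE DeriveGeneric #-}
    import Control.DeepSeq (NFData, force)
    import Control.Exception (evaluate)
    import Criterion.Main (bench, defaultMain, env, nf)
    import GHC.Generics (Generic)

    data SmallRecord = SmallRecord !Int !Int deriving Generic
    instance NFData SmallRecord

    updateValue :: SmallRecord -> SmallRecord
    updateValue (SmallRecord a b) = SmallRecord (a + 1) b

    main :: IO ()
    main = defaultMain
      [ -- env forces its argument to normal form before any timing starts,
        -- so only `map updateValue` over an already-built list is measured.
        env (evaluate (force [SmallRecord i 0 | i <- [1 .. 1000000]])) $ \recs ->
          bench "map/list" (nf (map updateValue) recs)
      ]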
2024-11-14 16:29:48 +0100Inst(~Inst@user/Inst) Inst
2024-11-14 16:31:16 +0100alexherbo2(~alexherbo@2a02-8440-3313-668b-a9ec-921f-0511-ee3f.rev.sfr.net) (Remote host closed the connection)
2024-11-14 16:31:36 +0100alexherbo2(~alexherbo@2a02-8440-3313-668b-a9ec-921f-0511-ee3f.rev.sfr.net) alexherbo2
2024-11-14 16:32:01 +0100 <EvanR> well that will have a big effect on performance
2024-11-14 16:32:27 +0100 <EvanR> code doesn't do anything in isolation, the evaluation is on demand
2024-11-14 16:32:33 +0100 <bailsman> I'd like to understand exactly what's going on to make map so much faster.
2024-11-14 16:33:36 +0100 <EvanR> well, mapping a list to get another list is much simpler than building a big tree or copying a vector so you can mutate it
2024-11-14 16:33:49 +0100 <bailsman> Why is it simpler? It's the same operation
2024-11-14 16:33:57 +0100 <EvanR> even simpler if the source list already exists and doesn't need to be evaluated
2024-11-14 16:33:58 +0100 <bailsman> It should be harder because you need to allocate and create a linked list
2024-11-14 16:34:44 +0100 <bailsman> My intuitions are completely wrong, but I don't know exactly why.
2024-11-14 16:35:07 +0100 <ph88> geekosaur, i went back and forth with chatgpt for a bit. Could you take a peek at this document, specifically on line 490 https://bpa.st/MSVA it made an example with tree zippers to implement something for each type, which i don't want. Is there a way to use tree zippers without resorting to generic programming solutions such as GHC.Generics, syb, lens or Data.Data ?
2024-11-14 16:35:20 +0100 <EvanR> you may or may not be allocating any list nodes due to fusion, but even if you did, that's 1 node per item. Meanwhile the IntMap has a more complex structure and the Vector is larger, even if you ignore the fact that you have to copy it
2024-11-14 16:35:31 +0100 <bailsman> Why is the vector larger?
2024-11-14 16:35:46 +0100 <EvanR> it's larger than 1 list node
2024-11-14 16:35:55 +0100 <bailsman> but there's only 1 of them, not 1 million
2024-11-14 16:36:40 +0100 <EvanR> and 1 megabyte chunk of Vector might not play as nice with the GC
2024-11-14 16:37:04 +0100 <EvanR> it goes back to how your "bench" thing is processing the final list, 1 by 1, it's nicer on the GC
2024-11-14 16:37:56 +0100 <haskellbridge> <flip101> Bowuigi: could you please take a look as well?
2024-11-14 16:38:01 +0100philopsos(~caecilius@user/philopsos) philopsos
2024-11-14 16:38:45 +0100 <bailsman> I'm expecting the vector version to compile to something like `nv = new Vector(v.length); for (int i = 0; i < v.length; ++i) nv[i] = updateValue(v[i])`. One allocation, extremely simple update. Whereas the linked list version has to allocate 1M nodes and set up each of their 'next' pointers, so it seems like it should be doing more work.
2024-11-14 16:38:58 +0100 <EvanR> and again, the benchmark code might have gotten optimized so there are no list nodes, other than the source list
2024-11-14 16:39:07 +0100 <bailsman> How do I prevent it from doing that?
2024-11-14 16:39:24 +0100 <EvanR> go to the benchmark code and cripple that
2024-11-14 16:39:37 +0100 <EvanR> fully evaluate the final list before doing whatever it does with it
2024-11-14 16:39:46 +0100 <bailsman> Isn't that what I'm doing already?
2024-11-14 16:39:52 +0100 <bailsman> That's what the nf was for right?
2024-11-14 16:40:04 +0100 <EvanR> I have no idea, I don't see what nf is or bench is
2024-11-14 16:40:20 +0100 <EvanR> right now all I see is "map updateValue someList"
2024-11-14 16:41:16 +0100 <EvanR> finalList <- evaluate (force (map updateValue someList)) ought to slow it down more
2024-11-14 16:41:23 +0100 <bailsman> nf :: NFData b => (a -> b) -> a -> Benchmarkable
2024-11-14 16:41:39 +0100 <EvanR> I'm not familiar with Benchmarkable
2024-11-14 16:42:07 +0100 <EvanR> if nf works, computes full normal form, sounds bad for performance
2024-11-14 16:42:16 +0100 <EvanR> in the case of list
2024-11-14 16:42:26 +0100 <geekosaur> ph88, it's doable without any of those but it's harder since you have to write it all yourself. those libraries exist for a reason
2024-11-14 16:43:17 +0100 <EvanR> when I was tooling with the profiling and performance I would make sure to write my own main IO action so I know what's what
2024-11-14 16:43:32 +0100 <EvanR> control what ultimately is demanding evaluation
2024-11-14 16:43:39 +0100 <geekosaur> especially when you have multiple data types
2024-11-14 16:43:53 +0100 <bailsman> Anyway, I guess we can assume that it isn't cheating, it is actually constructing the intermediate list, and most of the performance difference is going to come from map being a builtin and the vector code not compiling to anything nearly as simple as what I expected. So it's not map being fast, it's map being slowish, and vector being slower, I think.
2024-11-14 16:44:05 +0100 <ph88> geekosaur, doable .. would i have to write code for each data type?
2024-11-14 16:44:14 +0100 <geekosaur> exactly, yes
2024-11-14 16:44:17 +0100misterfish(~misterfis@31-161-39-137.biz.kpn.net) (Ping timeout: 248 seconds)
2024-11-14 16:44:34 +0100 <ph88> that's going to take so much time, the AST is absolutely huge
2024-11-14 16:44:43 +0100 <geekosaur> that's where generics or syb come in, they generate the necessary code for you
2024-11-14 16:44:49 +0100 <EvanR> 4x faster isn't that much of a difference, it seems plausible you're creating the whole structure for everything. It's not like a 1000x speedup that you'd normally see when you switch from full evaluation to lazy evaluation
2024-11-14 16:45:50 +0100 <ph88> geekosaur, do you think it's still worth to use zippers but then to combine them with a generic approach? i am not sure whether i can go up and down with other approaches such as lens or GHC.Generics
2024-11-14 16:46:30 +0100tromp(~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl) (Quit: My iMac has gone to sleep. ZZZzzz…)
2024-11-14 16:46:34 +0100 <EvanR> bailsman, Vector shines when you start combining chains of operations together, it fuses away intermediate vectors
2024-11-14 16:46:45 +0100 <bailsman> I only do one operation.
2024-11-14 16:46:55 +0100 <EvanR> so you won't see that benefit there
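A small example of the kind of chain meant here (hypothetical pipeline): with -O2, stream fusion removes the intermediate vectors between filter, map and sum, leaving one loop over the input.

    import qualified Data.Vector.Unboxed as U

    pipeline :: U.Vector Int -> Int
    pipeline = U.sum . U.map (* 2) . U.filter even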
2024-11-14 16:46:58 +0100 <geekosaur> you're conflating things, syb/generics/uniplate are mechanism, lens uses the mechanism. and lens should indeed be able to navigate up/down
2024-11-14 16:47:46 +0100 <EvanR> again, "I don't know how this benchmark library works, but I'll assume a bunch of conclusions" isn't as good as writing your own code then profiling
2024-11-14 16:48:08 +0100 <EvanR> and looking at the core, of your own code
2024-11-14 16:53:17 +0100 <geekosaur> ph88, it's easier to replace lens there with something else (such as a zipper) than it is to replace the generics mechanism needed to make lens/a zipper/whatever useful
2024-11-14 16:54:05 +0100 <geekosaur> if, as you say, "that's going to take so much time, the AST is absolutely huge", you need generics of some variety to escape that
2024-11-14 16:54:18 +0100 <geekosaur> that's why generics packages exist
2024-11-14 16:54:45 +0100 <ph88> why would i want this? "it's easier to replace lens there with something else (such as a zipper)"
2024-11-14 16:55:03 +0100 <ph88> i have neither, and i like something to traverse while not having to write traversal code for each type
2024-11-14 16:55:26 +0100 <ph88> as i understood it can be ghc.generics with zipper, or lens or maybe something else
2024-11-14 16:55:35 +0100 <geekosaur> then use generics to derive the traversal (all of the generics packages do so in some fashion)
2024-11-14 16:56:11 +0100 <ph88> and you still recommend to do the traversal with zipper yes? (with code derived with generics)
2024-11-14 16:56:27 +0100 <geekosaur> although the default traversals are all of the Traversable variety, unlike a zipper which lets you move at will
2024-11-14 16:56:38 +0100 <geekosaur> which it sounded like you wanted
2024-11-14 16:56:55 +0100 <geekosaur> if you just want something Traversable-style, any generics library will give you that
2024-11-14 16:56:59 +0100 <ph88> what if i don't only want to change the variable `a` but i also want to inspect the nodes and modify/replace them ?
2024-11-14 16:57:24 +0100 <ph88> can zipper do this too ?
2024-11-14 16:57:24 +0100 <geekosaur> that'd be a zipper
2024-11-14 16:57:30 +0100 <ph88> ok cool, thanks geekosaur !
2024-11-14 16:57:58 +0100 <geekosaur> you can do anything to the focused node including remove or replace it, and moving the zipper will reknit the tree
2024-11-14 16:58:42 +0100lortabac(~lortabac@2a01:e0a:541:b8f0:55ab:e185:7f81:54a4) (Quit: WeeChat 4.4.2)
2024-11-14 16:58:51 +0100 <geekosaur> even if it won't work with your structure as is, the wiki page I pointed you to earlier describes what you can do with a zipper
2024-11-14 16:59:12 +0100 <geekosaur> and the tree example is probably closer to your actual AST than a list zipper example would be
2024-11-14 17:00:06 +0100 <bailsman> To test my theory, I wrote a C version of the benchmark. Updating a linked list by allocating nodes one by one and copying over the values takes 14ms, approximately as long as Haskell takes to do map. Updating 1M records inplace in an array takes 2ms.
2024-11-14 17:00:30 +0100 <bailsman> So I think I'm concluding that map is "the best you can do in haskell" because it's optimized and a builtin, and any attempt to do in place algorithms is just going to be massively slow.
2024-11-14 17:00:31 +0100 <EvanR> that's... not going to be an apples to apples comparison
2024-11-14 17:00:38 +0100 <EvanR> are you allocating nodes with malloc
2024-11-14 17:00:57 +0100 <EvanR> allocating nodes in haskell is much faster
2024-11-14 17:01:08 +0100 <bailsman> No it isn't.
2024-11-14 17:01:22 +0100 <EvanR> yes it is
2024-11-14 17:01:56 +0100 <geekosaur> bailsman, what do you think is going on during an allocation?
2024-11-14 17:02:06 +0100 <geekosaur> because it's probably not what actually happens
2024-11-14 17:03:13 +0100 <bailsman> I agree - I'm not really sure. Some GC magic probably. But the point is that it's builtin and optimized, so it's much faster than trying to emulate in-place updates, which compiles to a morass of work and not 5 asm instructions like the c version.
2024-11-14 17:03:20 +0100 <geekosaur> not magic
2024-11-14 17:03:31 +0100 <geekosaur> the nursery/gen 0 is a bump-pointer allocator
2024-11-14 17:03:44 +0100 <geekosaur> gc only gets involved when the pointer reaches the end of the nursery
2024-11-14 17:04:50 +0100 <EvanR> "straight list processing and immutable structures are probably better in haskell than C-like mutable array munging" though is what I've been saying for days
2024-11-14 17:05:01 +0100 <EvanR> but the specific reasons are off
2024-11-14 17:06:10 +0100 <EvanR> before claiming stuff about what stuff compiles to you should check it
2024-11-14 17:06:20 +0100 <bailsman> To me the fact that the Haskell Vector is ~100ms, Haskell map is ~25ms, C allocate-new-linked-list-and-copy version is ~15ms, C array in place is ~2ms is suggestive of the fact that indeed allocating a list is slow, and it's indeed what Haskell is doing, but it's still better than trying to do an array in Haskell.
2024-11-14 17:06:59 +0100 <EvanR> the C version of linked list is just a bad thing to compare to haskell list unless you are careful to emulate what the haskell version did
2024-11-14 17:07:25 +0100 <EvanR> "they are both called list" isn't that inspiring
2024-11-14 17:08:09 +0100 <EvanR> list and arrays in haskell are both good for certain purposes
2024-11-14 17:08:44 +0100 <EvanR> in the case of list, usually not as a data structure
2024-11-14 17:08:53 +0100 <EvanR> but as a looping mechanism
2024-11-14 17:09:22 +0100 <bailsman> I agree with your conclusion - stop trying to be clever and just learn what idiomatic haskell code looks like.
2024-11-14 17:09:25 +0100 <EvanR> in the case of arrays, for lookup tables
2024-11-14 17:10:03 +0100 <bailsman> If you write idiomatic haskell, you get as-slow-as-you-would-expect, if you try to write in-place code, you get way-slower-than-you-would-expect.
2024-11-14 17:10:34 +0100 <EvanR> not necessarily, sometimes idiomatic haskell is faster
2024-11-14 17:11:19 +0100 <EvanR> in any case idiomatic haskell is a starting point for getting into the weeds for optimization
2024-11-14 17:12:30 +0100 <Inst> @bailsman
2024-11-14 17:12:30 +0100 <lambdabot> Unknown command, try @list
2024-11-14 17:12:36 +0100 <Inst> try compile with -fllvm
2024-11-14 17:14:34 +0100 <bailsman> Inst: I compiled my benchmark with -O2 -fllvm. Does not seem meaningfully different. Is -O2 the wrong optimization level?
2024-11-14 17:16:14 +0100 <EvanR> is llvm not the default now anyway
2024-11-14 17:16:16 +0100 <Inst> probably MY skill issue :(
2024-11-14 17:16:35 +0100 <tomsmeding> EvanR: it definitely is not
2024-11-14 17:16:39 +0100 <EvanR> ok