2025/03/09

Newest at the top

2025-03-09 20:51:39 +0100 <int-e> there's that trees that grow thing :P
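
"Trees that grow" here is presumably the extensible-AST idiom from the GHC paper of that name: every constructor gets an extension field indexed by a phase parameter, plus one extra extension constructor, so later phases can add information or cases without splitting the original declaration. A minimal sketch of the pattern, using a hypothetical Expr type rather than anything from this discussion:

    {-# LANGUAGE TypeFamilies #-}

    -- Extension fields are selected by the phase parameter p; XExpr is the
    -- hook for adding whole new constructors in a later phase.
    data Expr p
      = Lit   (XLit p) Int
      | App   (XApp p) (Expr p) (Expr p)
      | XExpr (XXExpr p)

    type family XLit   p
    type family XApp   p
    type family XXExpr p

    -- One concrete phase with no extra information attached.
    data Basic
    type instance XLit   Basic = ()
    type instance XApp   Basic = ()
    type instance XXExpr Basic = ()   -- Data.Void would rule out the extension case entirely
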
2025-03-09 20:51:25 +0100 infinity0(~infinity0@pwned.gg) (Ping timeout: 252 seconds)
2025-03-09 20:51:18 +0100 <EvanR> to boldly go where no definition has been split before
2025-03-09 20:50:54 +0100 <EvanR> you split the definition
2025-03-09 20:50:05 +0100 MyNetAz(~MyNetAz@user/MyNetAz) (Remote host closed the connection)
2025-03-09 20:49:46 +0100 <monochrom> And does Haskell allow you to write like "data T = Case1; foo Case1 = ...; data T = ... | Case2; foo Case2 = ..."?
2025-03-09 20:49:45 +0100 merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-03-09 20:49:03 +0100 <monochrom> My lecture notes need that IMO.
2025-03-09 20:48:55 +0100 abrar(~abrar@static-96-245-187-163.phlapa.fios.verizon.net)
2025-03-09 20:48:48 +0100 <monochrom> Does Haskell even allow you to write "foo [] = 0; example1 = foo []; foo (x:xs) = x + foo xs"?
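
Neither form monochrom asks about here (or, for data T just above) is accepted: GHC requires all equations of one binding to be contiguous and a type constructor to be declared exactly once, so interleaving either with other definitions is rejected, typically with a "Multiple declarations of ..." error. A small illustration:

    -- Rejected: the two groups of equations for foo are separated by
    -- another binding, so GHC sees two conflicting declarations of foo.
    --
    --   foo []     = 0
    --   example1   = foo []
    --   foo (x:xs) = x + foo xs
    --
    -- Rejected: a data type cannot be declared in two pieces.
    --
    --   data T = Case1
    --   data T = Case1 | Case2
    --
    -- Accepted: each definition kept in one contiguous block.
    foo :: [Int] -> Int
    foo []     = 0
    foo (x:xs) = x + foo xs

    example1 :: Int
    example1 = foo []
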
2025-03-09 20:48:27 +0100 abrar(~abrar@static-96-245-187-163.phlapa.fios.verizon.net) (Quit: WeeChat 4.2.2)
2025-03-09 20:48:05 +0100 <EvanR> you can't rearrange the order in the source file?
2025-03-09 20:47:27 +0100 <monochrom> Edit: I can say the same about almost every case of the Expr and Value types.
2025-03-09 20:46:23 +0100 <monochrom> I can say the same about almost every case.
2025-03-09 20:45:32 +0100 <monochrom> EvanR: https://www.cs.utoronto.ca/~trebla/CSCC24-latest/09-semantics-1.html is one of my lecture notes where IMO my order there is better than any order accepted by Haskell. For example, after explaining function application and the "interp (App f e) = ..." case, it makes no sense to procrastinate walking through static vs dynamic scoping with the example "exampleScoping = ..." until after I also finish recursion "interp (Rec ...) = ..."
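
For context, a compressed sketch of the kind of interpreter case being discussed; the real Expr and Value types live in the linked notes, and these are stand-ins. The reason the App case and the scoping example belong together is that the closure captures its definition-time environment, which is exactly what makes the scoping static rather than dynamic:

    import qualified Data.Map as M

    data Expr  = Var String | Num Integer | Lambda String Expr | App Expr Expr
    data Value = VN Integer | VClosure (M.Map String Value) String Expr

    interp :: M.Map String Value -> Expr -> Maybe Value
    interp env (Var x)      = M.lookup x env
    interp _   (Num n)      = Just (VN n)
    interp env (Lambda x b) = Just (VClosure env x b)    -- capture env here
    interp env (App f e)    = do
      VClosure cenv x body <- interp env f
      v <- interp env e
      interp (M.insert x v cenv) body                    -- extend the captured env
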
2025-03-09 20:45:29 +0100 Square(~Square@user/square) (Ping timeout: 260 seconds)
2025-03-09 20:45:13 +0100 alp(~alp@2001:861:8ca0:4940:445a:f71:bdb6:b173)
2025-03-09 20:44:20 +0100 econo_(uid147250@id-147250.tinside.irccloud.com)
2025-03-09 20:41:13 +0100 Sgeo(~Sgeo@user/sgeo) Sgeo
2025-03-09 20:40:39 +0100 Square(~Square@user/square) Square
2025-03-09 20:39:39 +0100 merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 260 seconds)
2025-03-09 20:38:30 +0100 ljdarj1 is now known as ljdarj
2025-03-09 20:38:30 +0100 ljdarj(~Thunderbi@user/ljdarj) (Ping timeout: 246 seconds)
2025-03-09 20:37:36 +0100 ezzieyguywuf(~Unknown@user/ezzieyguywuf) (Quit: Lost terminal)
2025-03-09 20:36:09 +0100 ljdarj1(~Thunderbi@user/ljdarj) ljdarj
2025-03-09 20:35:35 +0100 srazkvt(~sarah@user/srazkvt) (Quit: Konversation terminated!)
2025-03-09 20:34:55 +0100 merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-03-09 20:34:41 +0100 hattckory(~hattckory@bras-base-toroon4524w-grc-48-184-145-138-167.dsl.bell.ca) (Ping timeout: 248 seconds)
2025-03-09 20:30:15 +0100 hattckory(~hattckory@bras-base-toroon4524w-grc-48-184-145-138-167.dsl.bell.ca)
2025-03-09 20:27:43 +0100 Smiles(uid551636@id-551636.lymington.irccloud.com) (Quit: Connection closed for inactivity)
2025-03-09 20:24:11 +0100 <int-e> real-time profiling data and just in time compilation make this more realistic
2025-03-09 20:23:49 +0100 merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 244 seconds)
2025-03-09 20:23:49 +0100 alfiee(~alfiee@user/alfiee) (Ping timeout: 244 seconds)
2025-03-09 20:23:24 +0100 <EvanR> sun with java claimed to have done it
2025-03-09 20:23:13 +0100 * int-e shrugs
2025-03-09 20:23:11 +0100 <EvanR> "just parallel my code" has been a promise for 30 years now
2025-03-09 20:23:06 +0100 <int-e> that's not generic
2025-03-09 20:22:05 +0100 <Inst> i mean, is this an actual problem? I was asking that, i.e., when I was trying parallelism for a brute force attempt at calculating blackjack's split EV, I was told: just benchmark and see what happens
2025-03-09 20:21:10 +0100 <int-e> And keeping subtasks coarse enough, which makes it hard to do this generically. Oh well.
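
One standard, non-generic answer to both worries (even splitting, coarse subtasks) in Haskell is explicit chunking with Control.Parallel.Strategies: spark one unit of work per fixed-size chunk rather than per element, with the chunk size picked by benchmarking. A sketch, not taken from the discussion; it needs the parallel package, compilation with -threaded, and +RTS -N at run time:

    import Control.Parallel.Strategies (parListChunk, rseq, using)

    -- Each spark covers 10000 elements, keeping subtasks coarse and of
    -- roughly equal size; the constant is the tuning knob.
    parSum :: [Double] -> Double
    parSum xs = sum (map work xs `using` parListChunk 10000 rseq)
      where
        work x = sin x * cos x    -- stand-in for real per-element work

    main :: IO ()
    main = print (parSum [1 .. 1000000])
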
2025-03-09 20:20:16 +0100 <EvanR> if it doesn't, or if it hurts performance, then it's not good
2025-03-09 20:20:15 +0100 <int-e> (personally I'd worry about splitting tasks evenly enough)
2025-03-09 20:20:09 +0100 <EvanR> one issue with parallel code is delivering on an increase in performance
2025-03-09 20:19:39 +0100 <EvanR> which I seriously doubt
2025-03-09 20:19:33 +0100 alfiee(~alfiee@user/alfiee) alfiee
2025-03-09 20:19:30 +0100 merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-03-09 20:19:27 +0100 <EvanR> thinking it would be the fastest possible
2025-03-09 20:19:25 +0100 <int-e> EvanR: 20; the lone 2^20 was correcting what 1048576 is
2025-03-09 20:19:16 +0100 <EvanR> 19 would imply you're doing x*y in its own parallel computation
2025-03-09 20:18:48 +0100 <EvanR> so how many levels of splitting
2025-03-09 20:17:07 +0100 <Inst> i mean if there's a million element list, it's 2^20, would be less
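
For the arithmetic being discussed: 1048576 = 2^20, so fully binary-splitting a million-element list bottoms out after about 20 levels, with on the order of a million leaf tasks; sparking every leaf, let alone every single x*y, is far too fine-grained. The usual remedy is a depth or size cutoff so only the top levels run in parallel. A sketch using par/pseq from the parallel package, not taken from the discussion:

    import Control.Parallel (par, pseq)

    -- Divide-and-conquer sum with a depth cutoff: only the first `depth`
    -- levels are sparked, everything below runs sequentially, so the leaf
    -- tasks stay coarse.
    parSumDC :: Int -> [Double] -> Double
    parSumDC depth xs
      | depth <= 0 || length xs < 2 = sum xs
      | otherwise = left `par` (right `pseq` left + right)
      where
        (ls, rs) = splitAt (length xs `div` 2) xs
        left     = parSumDC (depth - 1) ls
        right    = parSumDC (depth - 1) rs

    main :: IO ()
    main = print (parSumDC 4 [1 .. 1000000])   -- depth 4 => 16 coarse leaf tasks
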