2024/11/14

Newest at the top

2024-11-14 16:01:46 +0100 <bailsman> It has to actually be stored and loaded from memory to be a fair comparison.
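A minimal sketch of the kind of setup described in the line above, assuming a criterion-based benchmark (the SmallRecord type, the bump function, and the use of criterion itself are illustrative guesses, not bailsman's actual code). criterion's env builds and fully forces the input list before any timing starts, so the measured map really does read an existing list from memory:

    import Control.DeepSeq (NFData (..))
    import Criterion.Main

    -- Hypothetical stand-ins for the record and the field update in the chat.
    data SmallRecord = SmallRecord { field :: !Int, other :: !Int }

    instance NFData SmallRecord where
      rnf (SmallRecord a b) = a `seq` b `seq` ()

    bump :: SmallRecord -> SmallRecord
    bump r = r { field = field r + 1 }

    main :: IO ()
    main = defaultMain
      [ -- env evaluates the list to normal form before the benchmark runs,
        -- so construction cannot be fused into the measured map.
        env (pure [SmallRecord i i | i <- [1 .. 100000]]) $ \xs ->
          bench "map bump over existing list" (nf (map bump) xs)
      ]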
2024-11-14 16:01:11 +0100 <bailsman> I need to benchmark the list already existing
2024-11-14 16:01:05 +0100 <bailsman> Hey, no, that's cheating. Then I've written my benchmark wrong
2024-11-14 16:00:40 +0100 <geekosaur> in the optimal case, the list is never constructed as such, elements are fed directly to map as they are created
2024-11-14 16:00:07 +0100 <geekosaur> bailsman, construction of the list vs. mapping through the list
2024-11-14 15:59:43 +0100 <haskellbridge> <Bowuigi> GHC does dark magic to not actually use a linked list
2024-11-14 15:59:42 +0100 <bailsman> I am just doing [SmallRecord] -> [SmallRecord] by updating a field in the record
2024-11-14 15:59:34 +0100 <geekosaur> ph88, that's what I meant by style but also a mediawiki upgrade is what started the whole outage thing
2024-11-14 15:59:10 +0100 <ph88> wiki got a makeover? i remember it being uglier
2024-11-14 15:59:04 +0100 <bailsman> I don't know what any of those words mean
2024-11-14 15:58:44 +0100 <geekosaur> if your generation and consumption are written correctly, they get pipelined
2024-11-14 15:58:25 +0100 <bailsman> Or does it turn into an in-place algorithm?
2024-11-14 15:58:09 +0100 <bailsman> What do you mean by tight loop? Surely it still has to allocate all the elements for the new list?
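A rough illustration of the "tight loop" claim, under the usual foldr/build fusion story (the loop below is a sketch of the shape GHC typically produces, not its literal Core):

    {-# LANGUAGE BangPatterns #-}

    -- What you write: a producer ([1 .. n]) piped through map and sum.
    sumSquares :: Int -> Int
    sumSquares n = sum (map (\x -> x * x) [1 .. n])

    -- Roughly what fusion turns it into: a strict accumulator loop in which
    -- no cons cell for the intermediate list is ever allocated.
    sumSquaresLoop :: Int -> Int
    sumSquaresLoop n = go 0 1
      where
        go !acc i
          | i > n     = acc
          | otherwise = go (acc + i * i) (i + 1)

It is not in-place mutation of the input: when the result list really is demanded as data (as in a [SmallRecord] -> [SmallRecord] benchmark), new cells are allocated, but generation and consumption are pipelined, so the whole intermediate list never has to sit in memory at once.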
2024-11-14 15:57:50 +0100 <haskellbridge> <Bowuigi> Gérard Huet's pearl "The Zipper" is also good if you don't mind OCaml
2024-11-14 15:56:57 +0100 <geekosaur> it uses a tree as the example data structure, where most of them focus on lists which are the easiest case
2024-11-14 15:56:30 +0100 <geekosaur> https://wiki.haskell.org/Zipper
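For reference, a minimal rose-tree zipper in the spirit of the wiki page and Huet's pearl (the names below are illustrative, not taken from either source):

    import Data.Tree (Tree (..))

    -- A crumb records the parent's label plus the siblings to the left and
    -- right of the focused child, so the parent can be rebuilt on the way up.
    data Crumb a = Crumb a [Tree a] [Tree a]

    type Zipper a = (Tree a, [Crumb a])

    downTo :: Int -> Zipper a -> Maybe (Zipper a)
    downTo i (Node x kids, crumbs)
      | (ls, t : rs) <- splitAt i kids = Just (t, Crumb x ls rs : crumbs)
      | otherwise                      = Nothing

    up :: Zipper a -> Maybe (Zipper a)
    up (t, Crumb x ls rs : crumbs) = Just (Node x (ls ++ t : rs), crumbs)
    up (_, [])                     = Nothing

    modifyLabel :: (a -> a) -> Zipper a -> Zipper a
    modifyLabel f (Node x kids, crumbs) = (Node (f x) kids, crumbs)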
2024-11-14 15:56:30 +0100 <bailsman> Awesome! Thank you to whoever fixed it
2024-11-14 15:56:29 +0100 <geekosaur> actually hgolden in #h-i said there are still some style issues
2024-11-14 15:56:16 +0100 <geekosaur> just found that, yes
2024-11-14 15:56:08 +0100 <hellwolf> (wiki has been fixed)
2024-11-14 15:56:06 +0100 <bailsman> I have some parts right now that use random access. But was thinking maybe I don't want to pay a 4x performance penalty just for random access.
2024-11-14 15:55:53 +0100 <geekosaur> sadly the first reference that comes to mind is on the wiki…
2024-11-14 15:55:42 +0100 <ph88> no
2024-11-14 15:55:38 +0100 <geekosaur> ph88, are you aware of tree zippers?
2024-11-14 15:55:14 +0100 <ph88> when i have some code more or less in the shape of this thing https://hackage.haskell.org/package/containers-0.7/docs/Data-Tree.html#t:Tree how can i write code that changes `a` with State but there are two points to change it, when going down (into the leaves) and going up (back to the root)? also known as visitor pattern
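A minimal sketch of the traversal described above, assuming the goal is to run one State action on each label going down (before its children) and another going back up (after its children); visit, enter, and leave are placeholder names for this sketch:

    import Control.Monad.State
    import Data.Tree (Tree (..))

    visit :: (a -> State s a)   -- runs going down, before the children
          -> (a -> State s a)   -- runs going up, after the children
          -> Tree a
          -> State s (Tree a)
    visit enter leave (Node x kids) = do
      x'    <- enter x                           -- change the label on the way in
      kids' <- traverse (visit enter leave) kids -- recurse into the subtrees
      x''   <- leave x'                          -- change it again on the way out
      pure (Node x'' kids')

Run it with evalState (visit enter leave tree) initialState, or runState if the final state is also needed.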
2024-11-14 15:55:03 +0100 <geekosaur> it actually compiles down to a tight loop in most cases, not the C-style linked list you might expect
2024-11-14 15:54:16 +0100 <geekosaur> right, map's going to be one of those cases that [] will work very well for
2024-11-14 15:54:11 +0100 <hellwolf> "data Array i e" is also underrated.
2024-11-14 15:53:38 +0100 <bailsman> Data.Vector.map over a vector is consistently 4x slower than regular map over []. (Data.Map is 10x slower)
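The likely shape of the three operations being compared (an assumption about the benchmark, reusing the hypothetical SmallRecord and bump from the sketch near the top of this log):

    import qualified Data.Map.Strict as M
    import qualified Data.Vector     as V

    bumpList :: [SmallRecord] -> [SmallRecord]
    bumpList = map bump

    bumpVector :: V.Vector SmallRecord -> V.Vector SmallRecord
    bumpVector = V.map bump

    bumpMap :: M.Map Int SmallRecord -> M.Map Int SmallRecord
    bumpMap = M.map bump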
2024-11-14 15:53:14 +0100 <haskellbridge> <Bowuigi> Data.Map is the first one that comes to mind
2024-11-14 15:52:55 +0100 <haskellbridge> <Bowuigi> Have you tried any functional random access data structures?
2024-11-14 15:52:00 +0100 <bailsman> I thought I needed to do a lot of random indexing. But, now I'm not sure if I shouldn't instead redesign everything so that it does not require random access.
2024-11-14 15:50:37 +0100 <haskellbridge> <Bowuigi> Reasoning imperatively in functional languages leads to bad performance in general
2024-11-14 15:50:23 +0100 <geekosaur> allocation, gc, and iteration are all optimized because it's so common
2024-11-14 15:49:51 +0100 <hellwolf> I mean, if you need to do a lot of random indexing, it's got to be slow. but for stream processing, it is probably the most efficient
2024-11-14 15:49:41 +0100 <geekosaur> if all you're doing is iterating through them, consider that ghc is optimized for that case: think of a list as a loop encoded as data
2024-11-14 15:48:48 +0100 <bailsman> Plain old lists are consistently the fastest. I find that somewhat confusing, since in imperative languages linked lists are often slow.
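A tiny cost-model sketch of the point made above: a cons list is cheap to walk once (and the single pass often fuses with its producer or consumer), but expensive to index into, which is where the imperative intuition about linked lists does apply. The function name is made up for this sketch:

    import Data.Maybe (listToMaybe)

    singlePassVsIndex :: Int -> [Int] -> (Int, Maybe Int)
    singlePassVsIndex i xs =
      ( sum (map (+ 1) xs)      -- O(n): one pass over the cells, fuses
      , listToMaybe (drop i xs) -- O(i): must chase i tail pointers first
      )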