Newest at the top
2024-11-14 16:56:59 +0100 | <ph88> | what if i don't only want to change the variable `a` but also want to inspect the nodes and modify/replace them? |
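A minimal sketch of the generics-based rewrite discussed in this exchange, using syb's everywhere/mkT. The Expr type and the renameA rule are invented for illustration; the point is that only the interesting case is written by hand, and the traversal over every other constructor is derived:

    {-# LANGUAGE DeriveDataTypeable #-}
    import Data.Generics (Data, Typeable, everywhere, mkT)

    -- A toy stand-in for the "absolutely huge" AST.
    data Expr
      = Var String
      | Lit Int
      | Add Expr Expr
      deriving (Show, Data, Typeable)

    -- Inspect a node and replace it; everywhere applies this at every
    -- position in the tree, with no per-constructor boilerplate.
    renameA :: Expr -> Expr
    renameA (Var "a") = Var "b"
    renameA e         = e

    main :: IO ()
    main = print (everywhere (mkT renameA) (Add (Var "a") (Add (Lit 1) (Var "a"))))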
2024-11-14 16:56:55 +0100 | <geekosaur> | if you just want something Traversable-style, any generics library will give you that |
2024-11-14 16:56:38 +0100 | <geekosaur> | which it sounded like you wanted |
2024-11-14 16:56:27 +0100 | <geekosaur> | although the default traversals are all of the Traversable variety, unlike a zipper which lets you move at will |
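A toy zipper over a hypothetical binary tree, just to illustrate the "move at will" point above: the focus can walk left, right, and back up, and replace the focused subtree, rather than visiting nodes in one fixed order. All names here are illustrative, not from any particular library:

    data Tree a = Leaf a | Node (Tree a) (Tree a) deriving Show

    -- A breadcrumb remembers the sibling we stepped past, so we can go back up.
    data Crumb a = WentLeft (Tree a) | WentRight (Tree a)
    type Zipper a = (Tree a, [Crumb a])

    left, right, up :: Zipper a -> Zipper a
    left  (Node l r, cs)        = (l, WentLeft r : cs)
    left  z                     = z
    right (Node l r, cs)        = (r, WentRight l : cs)
    right z                     = z
    up    (t, WentLeft  r : cs) = (Node t r, cs)
    up    (t, WentRight l : cs) = (Node l t, cs)
    up    z                     = z

    -- Replace the focused subtree, then zip all the way back to the root.
    replaceFocus :: Tree a -> Zipper a -> Zipper a
    replaceFocus t (_, cs) = (t, cs)

    rebuild :: Zipper a -> Tree a
    rebuild (t, []) = t
    rebuild z       = rebuild (up z)

    main :: IO ()
    main =
      let z = (Node (Leaf 1) (Node (Leaf 2) (Leaf 3)), [])
      in print (rebuild (replaceFocus (Leaf 9) (left (right z))))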
2024-11-14 16:56:11 +0100 | <ph88> | and do you still recommend doing the traversal with a zipper, yes? (with code derived via generics) |
2024-11-14 16:55:35 +0100 | <geekosaur> | then use generics to derive the traversal (all of the generics packages do so in some fashion) |
2024-11-14 16:55:26 +0100 | <ph88> | as i understood it, that can be GHC.Generics with a zipper, or lens, or maybe something else |
2024-11-14 16:55:03 +0100 | <ph88> | i have neither, and i'd like something to traverse with without having to write traversal code for each type |
2024-11-14 16:54:45 +0100 | <ph88> | why would i want this? "it's easier to replace lens there with something else (such as a zipper)" |
2024-11-14 16:54:18 +0100 | <geekosaur> | that's why generics packages exist |
2024-11-14 16:54:05 +0100 | <geekosaur> | if, as you say, "that's going to take so much time, the AST is absolutely huge", you need generics of some variety to escape that |
2024-11-14 16:53:17 +0100 | <geekosaur> | ph88, it's easier to replace lens there with something else (such as a zipper) than it is to replace the generics mechanism needed to make lens/a zipper/whatever useful |
2024-11-14 16:48:08 +0100 | <EvanR> | and looking at the Core of your own code |
2024-11-14 16:47:46 +0100 | <EvanR> | again, "I don't know how this benchmark library works, but I'll assume a bunch of conclusions" isn't as good as writing your own code then profiling |
2024-11-14 16:46:58 +0100 | <geekosaur> | you're conflating things, syb/generics/uniplate are mechanism, lens uses the mechanism. and lens should indeed be able to navigate up/down |
2024-11-14 16:46:55 +0100 | <EvanR> | so you won't see that benefit there |
2024-11-14 16:46:45 +0100 | <bailsman> | I only do one operation. |
2024-11-14 16:46:34 +0100 | <EvanR> | bailsman, Vector shines when you start combining chains of operations together; it fuses away intermediate vectors |
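A small sketch of the fusion point made above: when several Vector operations are chained, they fuse into one loop and the intermediate vectors are never built, which is where Vector tends to win. updateValue is a placeholder here:

    import qualified Data.Vector.Unboxed as V

    updateValue :: Double -> Double
    updateValue x = x * 2 + 1

    -- filter/map/sum fuse into a single pass; no intermediate vectors are allocated.
    pipeline :: V.Vector Double -> Double
    pipeline = V.sum . V.map updateValue . V.filter (> 0)

    main :: IO ()
    main = print (pipeline (V.enumFromTo 1 1000000))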
2024-11-14 16:45:50 +0100 | <ph88> | geekosaur, do you think it's still worth using zippers but then combining them with a generic approach? i am not sure whether i can go up and down with other approaches such as lens or GHC.Generics |
2024-11-14 16:44:49 +0100 | <EvanR> | 4x faster isn't that much of a difference, it seems plausible you're creating the whole structure for everything. It's not like a 1000x speedup that you'd normally see when you switch from full evaluation to lazy evaluation |
2024-11-14 16:44:43 +0100 | <geekosaur> | that's where generics or syb come in, they generate the necessary code for you |
2024-11-14 16:44:34 +0100 | <ph88> | that's going to take so much time, the AST is absolutely huge |
2024-11-14 16:44:14 +0100 | <geekosaur> | exactly, yes |
2024-11-14 16:44:05 +0100 | <ph88> | geekosaur, doable .. would i have to write code for each data type? |
2024-11-14 16:43:53 +0100 | <bailsman> | Anyway, I guess we can assume that it isn't cheating, it is actually constructing the intermediate list, and most of the performance difference is going to come from map being a builtin and the vector code not compiling to anything nearly as simple as what I expected. So it's not map being fast, it's map being slowish, and vector being slower, I think. |
2024-11-14 16:43:39 +0100 | <geekosaur> | especially when you have multiple data types |
2024-11-14 16:43:32 +0100 | <EvanR> | control what ultimately is demanding evaluation |
2024-11-14 16:43:17 +0100 | <EvanR> | when I was tooling around with profiling and performance I would make sure to write my own main IO action so I knew what's what |
2024-11-14 16:42:26 +0100 | <geekosaur> | ph88, it's doable without any of those but it's harder since you have to write it all yourself. those libraries exist for a reason |
2024-11-14 16:42:16 +0100 | <EvanR> | in the case of list |
2024-11-14 16:42:07 +0100 | <EvanR> | if nf does what it says and computes the full normal form, that sounds bad for performance |
2024-11-14 16:41:39 +0100 | <EvanR> | I'm not familiar with Benchmarkable |
2024-11-14 16:41:23 +0100 | <bailsman> | nf :: NFData b => (a -> b) -> a -> Benchmarkable |
2024-11-14 16:41:16 +0100 | <EvanR> | finalList <- evaluate (force (map updateValue someList)) ought to slow it down more |
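A self-contained sketch of the suggestion above: write your own main and force the result yourself, so the full list really is built and you are not at the mercy of whatever the benchmark harness demands. updateValue and someList are placeholders:

    import Control.DeepSeq (force)
    import Control.Exception (evaluate)

    updateValue :: Int -> Int
    updateValue x = x * 2 + 1

    someList :: [Int]
    someList = [1 .. 1000000]

    main :: IO ()
    main = do
      -- force reduces the list to normal form; evaluate makes that happen
      -- right here in IO rather than whenever (or whether) it is demanded.
      finalList <- evaluate (force (map updateValue someList))
      print (length finalList)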
2024-11-14 16:40:20 +0100 | <EvanR> | right now all I see is "map updateValue someList" |
2024-11-14 16:40:04 +0100 | <EvanR> | I have no idea, I don't see what nf is or bench is |
2024-11-14 16:39:52 +0100 | <bailsman> | That's what the nf was for right? |
2024-11-14 16:39:46 +0100 | <bailsman> | Isn't that what I'm doing already? |
2024-11-14 16:39:37 +0100 | <EvanR> | fully evaluate the final list before doing whatever it does with it |
2024-11-14 16:39:24 +0100 | <EvanR> | go to the benchmark code and cripple that |
2024-11-14 16:39:07 +0100 | <bailsman> | How do I prevent it from doing that? |
2024-11-14 16:38:58 +0100 | <EvanR> | and again, the benchmark code might have gotten optimized so there are no list nodes, other than the source list |
2024-11-14 16:38:45 +0100 | <bailsman> | I'm expecting the vector version to compile to something like `nv = new Vector(v.length); for (int i = 0; i < v.length; ++i) nv[i] = updateValue(v[i])`. One allocation, extremely simple update. Whereas the linked list version has to allocate 1M nodes and set up each of their 'next' pointers, so it seems like it should be doing more work. |
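For comparison with the expectation above: in Haskell the closest thing to that single flat-array loop is an unboxed Vector, since a boxed Data.Vector still holds a pointer to a separately allocated box for each element. A sketch of both benchmark cases, assuming a criterion-style bench/nf API; updateValue and the inputs are placeholders:

    import Criterion.Main (bench, defaultMain, nf)
    import qualified Data.Vector.Unboxed as U

    updateValue :: Double -> Double
    updateValue x = x * 2 + 1

    someList :: [Double]
    someList = [1 .. 1000000]

    someVector :: U.Vector Double
    someVector = U.enumFromTo 1 1000000

    main :: IO ()
    main = defaultMain
      [ bench "list map"   (nf (map updateValue) someList)
      , bench "vector map" (nf (U.map updateValue) someVector)
      ]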
2024-11-14 16:37:56 +0100 | <haskellbridge> | <flip101> Bowuigi: could you please take a look as well? |
2024-11-14 16:37:04 +0100 | <EvanR> | it goes back to how your "bench" thing is processing the final list; going 1 by 1 is nicer on the GC |
2024-11-14 16:36:40 +0100 | <EvanR> | and a 1-megabyte chunk of Vector might not play as nicely with the GC |
2024-11-14 16:35:55 +0100 | <bailsman> | but there's only 1 of them, not 1 million |