2025/12/02

Newest at the top

2025-12-02 18:17:20 +0100 <dutchie> ( https://downloads.haskell.org/ghc/9.12.1/docs/users_guide/9.12.1-notes.html#included-libraries )
2025-12-02 18:17:12 +0100 <dutchie> containers is a dependency of the ghc library so I guess it's pretty much everywhere
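
(To verify that locally: `ghc-pkg field ghc depends` prints the dependency list recorded for the ghc library in the package database, and containers shows up in it for recent GHC releases. The exact output varies by version.)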
2025-12-02 18:12:55 +0100 <__monty__> (I'm a base minimalist.)
2025-12-02 18:12:26 +0100 <haskellbridge> <loonycyborg> You can have a small transitive footprint only if the most widely used things (like containers and unordered-containers) are in base
2025-12-02 18:12:24 +0100 <__monty__> We're limited in our footprints by the unique dependency constraint at least.
2025-12-02 18:11:26 +0100 <merijn> __monty__: No, we should have "tiny dependency with minimal transitive footprint" unlike JS which is "tiny dependency with massive transitive footprint" :p
2025-12-02 18:11:06 +0100 <haskellbridge> <loonycyborg> Like modern CPUs have a lot of redundant execution units, but not all code ends up using them to the full.
2025-12-02 18:10:26 +0100 <__monty__> Clearly we need to approach JavaScript's tiny dependency culture.
2025-12-02 18:09:07 +0100 <haskellbridge> <loonycyborg> in some cases things might even be free, but you'd need to know the details very well to take advantage of that
2025-12-02 18:09:06 +0100 <haskellbridge> <loonycyborg> the CPU adds lots of its own optimizations too
2025-12-02 18:09:05 +0100 <haskellbridge> <loonycyborg> Though it can be hard to picture everything that happens at a low level
2025-12-02 18:06:58 +0100 <c_wraith> (we're still pretending we have infinite memory)
2025-12-02 18:06:43 +0100 <c_wraith> because that's when it actually is possible for n to go to infinity anyway
2025-12-02 18:06:17 +0100 <c_wraith> Which is why I usually talk about strings instead of integers
2025-12-02 18:06:03 +0100 <merijn> It's also not true, since integers aren't compared 1 bit at a time :p
2025-12-02 18:05:24 +0100 <haskellbridge> <loonycyborg> I'm not sure why it matters what they share, they all had to be compared anyway for things to be correct :P
2025-12-02 18:05:21 +0100 <merijn> 2^63 share a 1 bit prefix :p
2025-12-02 18:05:18 +0100 <c_wraith> I think you're right. But it does mean the time *some* comparison takes has a lower bound determined by the number of distinct keys. That's the part I was looking for.
2025-12-02 18:04:40 +0100 <merijn> c_wraith: The one where the last bit is 0 and the one where the last bit is 1?
2025-12-02 18:04:25 +0100 <merijn> c_wraith: Logically speaking only 2 keys will share a 2^63 prefix, no?
2025-12-02 18:04:11 +0100 <merijn> It's way too late to trust mine :p
2025-12-02 18:04:02 +0100 <c_wraith> Actually it's way too early for me to trust my ability to do numbers.
2025-12-02 18:03:54 +0100 <merijn> Wait, I guess that's the same?
2025-12-02 18:03:36 +0100 <merijn> c_wraith: Don't you mean 2^64 - 2^63 must have the same prefix?
2025-12-02 18:02:57 +0100 <c_wraith> loonycyborg: to some extent. the number of distinct keys puts a lower bound on the comparison time. If you have 2^64 keys in your structure, at least 2^63 of them must have a common prefix of 63 bits, by the pigeonhole principle
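
(Pinning down the pigeonhole arithmetic in the exchange above: 64-bit keys have only 2 possible 1-bit prefixes, so 2^64 distinct keys force at least 2^63 of them to share a 1-bit prefix; conversely there are 2^63 possible 63-bit prefixes, so some 63-bit prefix is shared by at least 2 keys. A minimal Haskell sketch of the quantity under discussion; the name commonPrefixBits is made up for illustration:

    import Data.Bits (countLeadingZeros, xor)
    import Data.Word (Word64)

    -- The length of the common bit prefix of two 64-bit keys:
    -- XOR the keys and count the leading zero bits of the result.
    commonPrefixBits :: Word64 -> Word64 -> Int
    commonPrefixBits a b = countLeadingZeros (a `xor` b)

    main :: IO ()
    main = print (commonPrefixBits 0 1)
    -- prints 63: the key ending in 0 and the key ending in 1 share
    -- a 63-bit prefix, which is merijn's example above.
)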
2025-12-02 18:02:50 +0100 <merijn> Or rather, it can be arbitrarily far off
2025-12-02 18:02:41 +0100 <merijn> loonycyborg: My complaint is that it doesn't even do that :p
2025-12-02 18:02:38 +0100 <haskellbridge> <loonycyborg> even O(1) can be dog slow
2025-12-02 18:02:24 +0100 <haskellbridge> <loonycyborg> Just need to remember that it only says how the algorithm scales with the number of items in the container
2025-12-02 18:01:37 +0100 <haskellbridge> <loonycyborg> It makes as much sense as any other approximation
2025-12-02 18:00:42 +0100 <merijn> loonycyborg: Don't get me started on my usual rant about big O barely making sense at all in the real world :p
2025-12-02 18:00:14 +0100 <haskellbridge> <loonycyborg> And O is about how it scales with list length
2025-12-02 17:59:55 +0100 <haskellbridge> <loonycyborg> Does it even make sense to use O notation for comparisons? Like they explicitly involve only two items, not the entire list
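
(It does, once the key has a size parameter of its own: comparing two lists, and String is a list of Char, walks both until they differ, so a single comparison is linear in the length of the shared prefix, not O(1). A minimal sketch, equivalent in behavior to the standard Ord instance for lists:

    -- Lexicographic comparison walks both lists element by element,
    -- so its cost grows with the length of the common prefix.
    compareLex :: Ord a => [a] -> [a] -> Ordering
    compareLex []     []     = EQ
    compareLex []     _      = LT
    compareLex _      []     = GT
    compareLex (x:xs) (y:ys) = compare x y <> compareLex xs ys

For example, compareLex (replicate 1000 'a' ++ "x") (replicate 1000 'a' ++ "y") has to walk all 1000 shared characters before it can decide.)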
2025-12-02 17:59:53 +0100 <merijn> :p
2025-12-02 17:59:50 +0100 <merijn> c_wraith: Joke's on you, most schools don't teach either at all
2025-12-02 17:59:45 +0100 <c_wraith> if you have Int, everything is O(1), because it's a fixed-size domain
2025-12-02 17:59:28 +0100 <merijn> Then again, if you have Int, just use a PATRICIA tree
2025-12-02 17:59:18 +0100 <merijn> c_wraith: I mean, for something like Int they are (given the completely broken big O model)
2025-12-02 17:59:09 +0100 <c_wraith> I don't know why schools teach both of those data structures so poorly.
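
(The PATRICIA tree merijn suggests above is already in containers: Data.IntMap is documented as a big-endian PATRICIA trie over the bits of the Int key, so lookups branch on key bits rather than calling compare, and their depth is bounded by the word size rather than by the number of elements. A small usage sketch:

    import qualified Data.IntMap.Strict as IntMap

    -- IntMap branches on the bits of the key, so a lookup takes at
    -- most word-size steps regardless of how many entries it holds.
    answer :: Maybe String
    answer = IntMap.lookup 42 (IntMap.fromList [(1, "one"), (42, "answer")])
    -- Just "answer"
)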
2025-12-02 17:58:24 +0100 <c_wraith> but by way of fairness, a binary tree is only O(log n) for stuff if comparisons are O(1), which is equally impossible
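
(To make that caveat concrete: Data.Map does O(log n) comparisons per lookup, and with String keys each of those comparisons can itself be linear in the key length, so the real cost is more like O(k log n) for keys of length k. A sketch with deliberately bad keys; the names slowKeys and found are made up for illustration:

    import qualified Data.Map.Strict as Map

    -- Every key shares a 1000-character prefix, so every comparison
    -- the lookup performs walks that whole prefix before it can
    -- distinguish two keys.
    slowKeys :: Map.Map String Int
    slowKeys = Map.fromList
        [ (replicate 1000 'a' ++ show i, i) | i <- [1 .. 100 :: Int] ]

    found :: Maybe Int
    found = Map.lookup (replicate 1000 'a' ++ "50") slowKeys
    -- Just 50
)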
2025-12-02 17:58:19 +0100 <merijn> but that idea has been floating around since 2007 and still hasn't happened, so I would rather not hold my breath
2025-12-02 17:58:00 +0100 <merijn> Now that could theoretically be fixed by decoupling base from GHC
2025-12-02 17:57:45 +0100 <merijn> loonycyborg: Counterpoint: Since base changes require new GHC releases, they're slow to become widely available