Newest at the top
| 2025-11-13 11:24:04 +0100 | kuribas | (~user@2a02:1808:67:a09:b55b:215:13f6:6a3b) (Ping timeout: 255 seconds) |
| 2025-11-13 11:22:40 +0100 | <[exa]> | anyway I assume the table building has leaked into the benchmark for the non-IO variant; half a second for selection in a 1M table is... too much. |
| 2025-11-13 11:22:16 +0100 | kuribas` | (~user@ip-188-118-57-242.reverse.destiny.be) kuribas |
| 2025-11-13 11:20:25 +0100 | merijn | (~merijn@77.242.116.146) (Ping timeout: 240 seconds) |
| 2025-11-13 11:20:14 +0100 | trickard_ | (~trickard@cpe-62-98-47-163.wireline.com.au) |
| 2025-11-13 11:20:01 +0100 | trickard__ | (~trickard@cpe-62-98-47-163.wireline.com.au) (Read error: Connection reset by peer) |
| 2025-11-13 11:17:57 +0100 | <kuribas> | right |
| 2025-11-13 11:17:17 +0100 | j1n37 | (~j1n37@user/j1n37) j1n37 |
| 2025-11-13 11:16:12 +0100 | j1n37 | (~j1n37@user/j1n37) (Read error: Connection reset by peer) |
| 2025-11-13 11:16:06 +0100 | <[exa]> | notice the same happens for the LinearHashTable below, suddenly 3x slower (per query I assume) |
| 2025-11-13 11:15:42 +0100 | <[exa]> | no, that's a normal thing. if you have too large an array and access it randomly, you'll eventually exceed the cache size (8MB sounds plausible here) and it's going to get a few times slower |
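[exa]'s cache-size point can be checked directly: random lookups into a table that fits in cache stay cheap, while the same number of lookups into a much larger table get a few times slower per access. A minimal, machine-dependent sketch (the LCG constants and sizes are illustrative, not from the benchmark under discussion):

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.Array.Unboxed (UArray, listArray, (!))
import System.CPUTime (getCPUTime)

-- Sum n pseudo-random lookups into a table of the given size.
-- Once size * 8 bytes exceeds the last-level cache, the cost per
-- lookup typically jumps by a small constant factor.
randomSum :: Int -> Int -> Int
randomSum size n = go 0 12345 n
  where
    table :: UArray Int Int
    table = listArray (0, size - 1) [0 ..]
    -- cheap LCG so successive indices are cache-unfriendly
    go !acc !_    0 = acc
    go !acc !seed k =
      let seed' = seed * 6364136223846793005 + 1442695040888963407
          i     = seed' `mod` size  -- `mod` with a positive divisor is non-negative
      in go (acc + table ! i) seed' (k - 1)

main :: IO ()
main = do
  t0 <- getCPUTime
  print (randomSum 1000 100000)      -- ~8 KB table: cache-resident
  t1 <- getCPUTime
  print (randomSum 4000000 100000)   -- ~32 MB table: spills out of cache
  t2 <- getCPUTime
  print (t1 - t0, t2 - t1)           -- second interval is usually larger
```

The absolute numbers depend on the machine; the point is only the jump in per-lookup cost once the table outgrows the cache.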
| 2025-11-13 11:15:25 +0100 | xff0x_ | (~xff0x@fsb6a9491c.tkyc517.ap.nuro.jp) (Ping timeout: 240 seconds) |
| 2025-11-13 11:15:03 +0100 | <[exa]> | what's concerning is that the IO variants really seem to be measured differently (there's a difference of 500 ms vs 14 ns; that's big, right?) |
| 2025-11-13 11:14:51 +0100 | <kuribas> | I suppose not, because lookup doesn't need GC... |
| 2025-11-13 11:14:02 +0100 | <kuribas> | maybe GC? |
| 2025-11-13 11:13:15 +0100 | <[exa]> | kuribas: that's kinda expected because of cache effects |
| 2025-11-13 11:12:41 +0100 | gmg | (~user@user/gehmehgeh) gehmehgeh |
| 2025-11-13 11:10:14 +0100 | <kuribas> | maybe more like log^2(n)? |
| 2025-11-13 11:08:40 +0100 | tzh | (~tzh@c-76-115-131-146.hsd1.or.comcast.net) (Quit: zzz) |
| 2025-11-13 11:08:07 +0100 | deptype_ | (~deptype@2406:b400:3a:73c2:796f:1d1b:ab7f:a73f) |
| 2025-11-13 11:07:54 +0100 | deptype_ | (~deptype@2406:b400:3a:73c2:752d:1b8c:f480:a279) (Remote host closed the connection) |
| 2025-11-13 11:06:06 +0100 | <kuribas> | [exa]: it doubles, until 1000000, when it is suddenly ×5. |
| 2025-11-13 11:06:01 +0100 | trickard | (~trickard@cpe-62-98-47-163.wireline.com.au) (Ping timeout: 264 seconds) |
| 2025-11-13 11:05:48 +0100 | trickard__ | (~trickard@cpe-62-98-47-163.wireline.com.au) |
| 2025-11-13 11:05:40 +0100 | merijn | (~merijn@77.242.116.146) merijn |
| 2025-11-13 11:02:57 +0100 | <kuribas> | that looks logarithmic-ish... |
| 2025-11-13 11:02:35 +0100 | <kuribas> | [exa]: dividing by n gives: 13.29, 17.28, 22.42, 41.10, 85.40, 460 ns |
| 2025-11-13 11:01:45 +0100 | Taneb | (~username@host-95-251-57-201.retail.telecomitalia.it) Taneb |
| 2025-11-13 10:59:58 +0100 | <[exa]> | kuribas: yeah there's something weird there for sure |
| 2025-11-13 10:59:32 +0100 | <kuribas> | Leary: that's probably slower on bounded integers. |
| 2025-11-13 10:59:30 +0100 | <[exa]> | Leary: oh I missed that one again. Thanks! |
| 2025-11-13 10:58:11 +0100 | <Leary> | [exa]: `GHC.Num.integerLog2`? |
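On bounded integers, kuribas's point holds: rather than going through `GHC.Num.integerLog2` (which targets arbitrary-precision `Integer`), a fixed-width `Int` log base 2 can be had from `countLeadingZeros` in `Data.Bits`, which GHC compiles down to a single CLZ-style instruction on most targets. A minimal sketch (the name `intLog2` is illustrative):

```haskell
import Data.Bits (countLeadingZeros, finiteBitSize)

-- Integer log base 2 of a positive, bounded Int via count-leading-zeros.
-- finiteBitSize is 64 on a 64-bit GHC, so e.g. intLog2 1024 = 63 - 53 = 10.
intLog2 :: Int -> Int
intLog2 n = finiteBitSize n - 1 - countLeadingZeros n

main :: IO ()
main = print (map intLog2 [1, 2, 1024, 1000000])  -- prints [0,1,10,19]
```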
| 2025-11-13 10:57:33 +0100 | <kuribas> | it does seem like that from the code https://github.com/haskell-perf/dictionaries/blob/master/Time.hs#L338 |
| 2025-11-13 10:54:27 +0100 | <kuribas> | maybe it measures n lookups? |
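The suspicion above — that the linked Time.hs times all n lookups rather than one — matches a common benchmark shape. A minimal sketch of that pattern, using containers' `Data.Map` (bundled with GHC) in place of the repo's actual tables; the names are illustrative, not the benchmark's:

```haskell
import qualified Data.Map.Strict as M

-- If the benchmarked payload walks every key, the measured time
-- scales with n even though a single lookup is only O(log n).
-- Dividing the reported total by n recovers the per-lookup cost.
lookupAll :: M.Map Int Int -> Int
lookupAll m = sum [M.findWithDefault 0 k m | k <- M.keys m]  -- n lookups

lookupOne :: M.Map Int Int -> Int
lookupOne m = M.findWithDefault 0 0 m                        -- one lookup

main :: IO ()
main = do
  let m = M.fromList [(k, k) | k <- [0 .. 999]]
  print (lookupAll m, lookupOne m)  -- prints (499500,0)
```

Under this reading, the per-lookup figures quoted later in the channel (13.29 ns, 17.28 ns, ...) come from dividing each reported total by its n.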
| 2025-11-13 10:54:00 +0100 | merijn | (~merijn@77.242.116.146) (Ping timeout: 256 seconds) |
| 2025-11-13 10:53:56 +0100 | weary-traveler | (~user@user/user363627) user363627 |
| 2025-11-13 10:53:13 +0100 | <[exa]> | either that or the Data.HashMap implementation is borked |
| 2025-11-13 10:52:54 +0100 | <[exa]> | I'd suspect they're measuring some laziness artifact |
| 2025-11-13 10:52:44 +0100 | <[exa]> | the benchmark -- the int lookup for Data.HashMap.Strict should be essentially const-time like for the basic & linear hash tables, but it grows linearly |
| 2025-11-13 10:51:45 +0100 | <kuribas> | [exa]: the benchmark, or the zerocount? |
| 2025-11-13 10:51:15 +0100 | <kuribas> | why? |
| 2025-11-13 10:51:05 +0100 | <[exa]> | kuribas: that looks mildly suspicious tbh |
| 2025-11-13 10:50:37 +0100 | fp | (~Thunderbi@2001:708:20:1406::10c5) fp |
| 2025-11-13 10:50:16 +0100 | fp | (~Thunderbi@130.233.70.206) (Quit: fp) |
| 2025-11-13 10:50:14 +0100 | <kuribas> | Well, probably easy to implement using copying and unsafe code. |
| 2025-11-13 10:49:44 +0100 | <kuribas> | Shame I cannot "freeze" a mutable hashmap, to use it from pure code. |
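The "freeze" kuribas wants already exists for GHC's mutable arrays: `runSTUArray` builds an array destructively in `ST` and freezes it in place (no copy), and the rank-2 type guarantees the mutable version never escapes. A hash table built on mutable arrays could expose the same trick on its backing storage; a minimal sketch of the pattern using the bundled `array` package (the fixed 1024-slot table and collision-free keys are illustrative assumptions):

```haskell
import Data.Array.ST (newArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray, (!))

-- Build a table mutably in ST, then freeze it into an immutable,
-- pure-code-friendly UArray without copying.
buildTable :: [(Int, Int)] -> UArray Int Int
buildTable kvs = runSTUArray $ do
  arr <- newArray (0, 1023) 0
  mapM_ (\(k, v) -> writeArray arr (k `mod` 1024) v) kvs
  return arr

main :: IO ()
main = print (buildTable [(1, 10), (2, 20)] ! 1)  -- prints 10
```

A real hash table would need `unsafeFreeze` on each of its internal arrays plus a proof (or promise) that no mutable reference survives — which is exactly the "copying and unsafe code" route mentioned above.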
| 2025-11-13 10:48:32 +0100 | tromp | (~textual@2001:1c00:3487:1b00:7d:cf52:961a:9343) |
| 2025-11-13 10:48:05 +0100 | deptype_ | (~deptype@2406:b400:3a:73c2:752d:1b8c:f480:a279) |
| 2025-11-13 10:47:52 +0100 | deptype_ | (~deptype@2406:b400:3a:73c2:bbc0:29cc:d3e9:c519) (Remote host closed the connection) |
| 2025-11-13 10:47:26 +0100 | <kuribas> | Mutable hashtables seem quite a bit faster than immutable hashmaps: https://github.com/haskell-perf/dictionaries |