2025/05/12

Newest at the top

2025-05-12 12:14:03 +0200 <tomsmeding> there is no such thing as hardware-independent performance
2025-05-12 12:13:58 +0200 <tomsmeding> __monty__: the optimisations that a compiler does to your code are instruction-set architecture dependent
2025-05-12 12:12:29 +0200 <Hecate> __monty__: there's not "nothing", but there's few things. Useful things are still bound to the laws of physics
2025-05-12 12:11:15 +0200 sabathan2(~sabathan@amarseille-159-1-12-107.w86-203.abo.wanadoo.fr)
2025-05-12 12:07:48 +0200 sabathan2(~sabathan@amarseille-159-1-12-107.w86-203.abo.wanadoo.fr) (Read error: Connection reset by peer)
2025-05-12 12:02:27 +0200 <__monty__> This is more of an inquiry for theoretical results. I find it hard to believe that there's *nothing* you can measure independently from a specific hardware configuration.
2025-05-12 12:01:27 +0200 euleritian(~euleritia@ip4d17f864.dynamic.kabel-deutschland.de)
2025-05-12 12:01:04 +0200 <__monty__> No, that's not actually what I want.
2025-05-12 12:00:56 +0200 merijn(~merijn@77.242.116.146) (Ping timeout: 252 seconds)
2025-05-12 12:00:49 +0200 euleritian(~euleritia@ip4d17f864.dynamic.kabel-deutschland.de) (Ping timeout: 248 seconds)
2025-05-12 11:59:14 +0200 tromp(~textual@2001:1c00:3487:1b00:ecd3:a00f:e9d8:9bf6)
2025-05-12 11:59:09 +0200 Frostillicus(~Frostilli@pool-71-174-119-56.bstnma.fios.verizon.net) (Ping timeout: 245 seconds)
2025-05-12 11:58:25 +0200 comerijn(~merijn@77.242.116.146) merijn
2025-05-12 11:54:41 +0200 Frostillicus(~Frostilli@pool-71-174-119-56.bstnma.fios.verizon.net)
2025-05-12 11:52:53 +0200 JeremyB99(~JeremyB99@172.87.18.1)
2025-05-12 11:51:21 +0200 JeremyB99(~JeremyB99@172.87.18.1) (Read error: Connection reset by peer)
2025-05-12 11:51:21 +0200 tromp(~textual@2001:1c00:3487:1b00:ecd3:a00f:e9d8:9bf6) (Quit: My iMac has gone to sleep. ZZZzzz…)
2025-05-12 11:47:14 +0200 manwithluck(~manwithlu@2a09:bac1:5b80:20::38a:2d) manwithluck
2025-05-12 11:47:00 +0200 manwithluck(~manwithlu@104.28.210.121) (Ping timeout: 252 seconds)
2025-05-12 11:44:59 +0200 sord937(~sord937@gateway/tor-sasl/sord937) sord937
2025-05-12 11:44:41 +0200 sord937(~sord937@gateway/tor-sasl/sord937) (Remote host closed the connection)
2025-05-12 11:41:31 +0200 <Hecate> __monty__: looks like you may want to look into telemetry, or download statistics for pre-built binaries
2025-05-12 11:41:09 +0200 <Hecate> __monty__: so, you want an average of your program's runtime platforms
2025-05-12 11:37:06 +0200 fp1(~Thunderbi@87-92-254-11.rev.dnainternet.fi) (Ping timeout: 252 seconds)
2025-05-12 11:34:45 +0200 prdak(~Thunderbi@user/prdak) prdak
2025-05-12 11:30:08 +0200 <__monty__> I'm more interested in a statistic that says something about hardware representative of what the software runs on, on average. Without having to know the exact distribution of hardware characteristics and without requiring a stable representative of that distribution.
2025-05-12 11:29:20 +0200 euleritian(~euleritia@ip4d17f864.dynamic.kabel-deutschland.de)
2025-05-12 11:28:57 +0200 euleritian(~euleritia@dynamic-176-006-133-103.176.6.pool.telefonica.de) (Read error: Connection reset by peer)
2025-05-12 11:28:22 +0200 <Hecate> then I'm not sure what you want to measure, actually
2025-05-12 11:27:36 +0200 <Hecate> __monty__: just so we're clear, you want to measure change against something that is not a real computer or has no real hardware characteristics, despite your program running on actual computers with actual hardware?
2025-05-12 11:26:53 +0200 <__monty__> It's not perfect but I suspect it's a step in the right direction.
2025-05-12 11:26:50 +0200 zdercti^(~zdercti@50.168.231.214)
2025-05-12 11:26:32 +0200 <Hecate> __monty__: yes that's what benchmarking does, but your change might have completely different effects on different architectures!
2025-05-12 11:26:26 +0200 zdercti^(~zdercti@50.168.231.214) (Ping timeout: 265 seconds)
2025-05-12 11:25:36 +0200 <__monty__> I'm thinking of things like benchmarking before and after a change. To get an estimation of the relative improvement of the change.
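[Editor's note: a minimal sketch of the before/after idea __monty__ describes, not code from the log. Both versions are timed on the same machine and only the ratio is reported, so the absolute speed of the hardware partially cancels out. `slow_sum`/`fast_sum` are invented examples.]

```python
# Benchmark a function before and after a change and report the
# relative speedup (ratio of medians) rather than absolute times.
import timeit
import statistics

def slow_sum(n):
    # "before": naive loop
    total = 0
    for i in range(n):
        total += i
    return total

def fast_sum(n):
    # "after": closed form, same result in O(1)
    return n * (n - 1) // 2

runs_before = timeit.repeat(lambda: slow_sum(10_000), number=100, repeat=5)
runs_after = timeit.repeat(lambda: fast_sum(10_000), number=100, repeat=5)

# Medians are more robust than means against scheduler noise.
speedup = statistics.median(runs_before) / statistics.median(runs_after)
print(f"estimated relative speedup: {speedup:.1f}x")
```

The ratio is still only valid for the machine it was measured on; as Hecate notes above, a change can have different effects on different architectures.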
2025-05-12 11:25:02 +0200 <Hecate> you're not getting any kind of useful metrics from abstracting that away, there's no "ethereal" or "ideal" computer on which your program runs
2025-05-12 11:24:31 +0200 <Hecate> __monty__: in other words, you will get completely different results according to the cache of the CPU, the quality of RAM, etc
2025-05-12 11:24:03 +0200 <Hecate> __monty__: well, that would mean doing a bunch, bunch of benchmark runs, and then averaging them out per hardware characteristics :D
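[Editor's note: a sketch of the "average them out per hardware characteristics" idea, not code from the log. Timings collected from different machines are grouped by a hardware label and summarized per group; all sample data and labels are invented for illustration.]

```python
# Group runtimes reported from many machines by a hardware label,
# then summarize each group separately (mean and standard deviation).
import statistics
from collections import defaultdict

# (hardware label, runtime in seconds) -- hypothetical measurements
samples = [
    ("x86_64/32MiB-L3", 1.21), ("x86_64/32MiB-L3", 1.19),
    ("x86_64/8MiB-L3", 1.55), ("x86_64/8MiB-L3", 1.49),
    ("aarch64/4MiB-L3", 2.02), ("aarch64/4MiB-L3", 1.98),
]

by_hw = defaultdict(list)
for hw, t in samples:
    by_hw[hw].append(t)

summary = {hw: (statistics.mean(ts), statistics.stdev(ts))
           for hw, ts in by_hw.items()}
for hw, (mean, sd) in summary.items():
    print(f"{hw}: mean={mean:.2f}s sd={sd:.3f}s")
```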
2025-05-12 11:17:03 +0200 <__monty__> Are there any benchmarking techniques that are independent(ish, at least) of the capacity of the underlying hardware?
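[Editor's note: one partial answer to this question, not from the log itself: count abstract operations instead of wall-clock time. Operation counts (like the instruction counts tools such as Cachegrind report) do not vary with CPU speed, cache size, or system load, though they ignore real memory effects. The sketch below counts comparisons made by an insertion sort; all names are illustrative.]

```python
# A hardware-independent(ish) metric: count the comparisons an
# algorithm performs instead of timing it.
import functools

def counted(counter):
    """Wrap a comparison predicate so each call increments counter[0]."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            counter[0] += 1
            return fn(*args)
        return wrapper
    return decorate

def insertion_sort(xs, less):
    xs = list(xs)
    for i in range(1, len(xs)):
        j = i
        while j > 0 and less(xs[j], xs[j - 1]):
            xs[j], xs[j - 1] = xs[j - 1], xs[j]
            j -= 1
    return xs

counter = [0]
less = counted(counter)(lambda a, b: a < b)
result = insertion_sort([5, 3, 1, 4, 2], less)
print(result, counter[0])  # sorted output plus its comparison cost
```

The count is identical on every machine, which makes before/after comparisons reproducible, but it says nothing about constant factors like cache behaviour, which is the trade-off Hecate and tomsmeding point out above.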
2025-05-12 11:12:19 +0200 sord937(~sord937@gateway/tor-sasl/sord937) sord937
2025-05-12 11:11:50 +0200 <tomsmeding> I have some personal experience with benchmarking on github actions, and indeed the answer is: don't do that, performance is unpredictable
2025-05-12 11:11:46 +0200 sord937(~sord937@gateway/tor-sasl/sord937) (Remote host closed the connection)
2025-05-12 11:06:42 +0200 tromp(~textual@2001:1c00:3487:1b00:ecd3:a00f:e9d8:9bf6)
2025-05-12 11:05:52 +0200 Square(~Square4@user/square) Square
2025-05-12 10:53:54 +0200 kh0d(~kh0d@89.216.103.150) (Quit: Leaving...)
2025-05-12 10:51:28 +0200 JeremyB99(~JeremyB99@172.87.18.1)
2025-05-12 10:50:55 +0200 JeremyB99(~JeremyB99@172.87.18.1) (Read error: Connection reset by peer)
2025-05-12 10:42:13 +0200 Frostillicus(~Frostilli@pool-71-174-119-56.bstnma.fios.verizon.net) (Ping timeout: 252 seconds)
2025-05-12 10:38:50 +0200 tzh(~tzh@c-76-115-131-146.hsd1.or.comcast.net) (Quit: zzz)
2025-05-12 10:36:30 +0200 califax(~califax@user/califx) califx