2025/05/12

Newest at the top

2025-05-12 11:47:14 +0200 manwithluck(~manwithlu@2a09:bac1:5b80:20::38a:2d) manwithluck
2025-05-12 11:47:00 +0200 manwithluck(~manwithlu@104.28.210.121) (Ping timeout: 252 seconds)
2025-05-12 11:44:59 +0200 sord937(~sord937@gateway/tor-sasl/sord937) sord937
2025-05-12 11:44:41 +0200 sord937(~sord937@gateway/tor-sasl/sord937) (Remote host closed the connection)
2025-05-12 11:41:31 +0200 <Hecate> __monty__: looks like you may want to look into telemetry, or download statistics for pre-built binaries
2025-05-12 11:41:09 +0200 <Hecate> __monty__: so, you want an average of your program's runtime platforms
2025-05-12 11:37:06 +0200 fp1(~Thunderbi@87-92-254-11.rev.dnainternet.fi) (Ping timeout: 252 seconds)
2025-05-12 11:34:45 +0200 prdak(~Thunderbi@user/prdak) prdak
2025-05-12 11:30:08 +0200 <__monty__> I'm more interested in a statistic that says something about hardware representative of what the software runs on, on average. Without having to know the exact distribution of hardware characteristics and without requiring a stable representative of that distribution.
2025-05-12 11:29:20 +0200 euleritian(~euleritia@ip4d17f864.dynamic.kabel-deutschland.de)
2025-05-12 11:28:57 +0200 euleritian(~euleritia@dynamic-176-006-133-103.176.6.pool.telefonica.de) (Read error: Connection reset by peer)
2025-05-12 11:28:22 +0200 <Hecate> then I'm not sure what you want to measure, actually
2025-05-12 11:27:36 +0200 <Hecate> __monty__: just so we're clear, you want to measure change against something that is not a real computer or has no real hardware characteristics, despite your program running on actual computers with actual hardware?
2025-05-12 11:26:53 +0200 <__monty__> It's not perfect but I suspect it's a step in the right direction.
2025-05-12 11:26:50 +0200 zdercti^(~zdercti@50.168.231.214)
2025-05-12 11:26:32 +0200 <Hecate> __monty__: yes that's what benchmarking does, but your change might have completely different effects on different architectures!
2025-05-12 11:26:26 +0200 zdercti^(~zdercti@50.168.231.214) (Ping timeout: 265 seconds)
2025-05-12 11:25:36 +0200 <__monty__> I'm thinking of things like benchmarking before and after a change. To get an estimation of the relative improvement of the change.
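(A minimal sketch of that before/after workflow using criterion; oldImpl, newImpl and testInput are placeholders, not from this discussion. Benchmarking both versions in the same run on the same machine makes the ratio between them meaningful even when absolute timings vary.)

    -- Sketch only: placeholder implementations standing in for the code
    -- before and after a change.
    import Criterion.Main (bench, bgroup, defaultMain, nf)
    import Data.List (foldl')

    oldImpl, newImpl :: [Int] -> Int
    oldImpl = sum              -- "before" version (placeholder)
    newImpl = foldl' (+) 0     -- "after" version (placeholder)

    testInput :: [Int]
    testInput = [1 .. 100000]

    main :: IO ()
    main = defaultMain
      [ bgroup "myFunction"
          [ bench "before" $ nf oldImpl testInput
          , bench "after"  $ nf newImpl testInput
          ]
      ]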
2025-05-12 11:25:02 +0200 <Hecate> you're not getting any kind of useful metrics from abstracting that away, there's no "ethereal" or "ideal" computer on which your program runs
2025-05-12 11:24:31 +0200 <Hecate> __monty__: in other words, you will get completely different results according to the cache of the CPU, the quality of RAM, etc
2025-05-12 11:24:03 +0200 <Hecate> __monty__: well, that would mean doing a bunch, bunch of benchmark runs, and then averaging them out per hardware characteristics :D
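(A rough sketch of that "average them out per hardware characteristics" idea; the hardware labels and timings below are invented for illustration.)

    -- Group raw timings by a hardware label and take the mean per group.
    import qualified Data.Map.Strict as Map

    meanPerHardware :: [(String, Double)] -> Map.Map String Double
    meanPerHardware samples =
      Map.map mean (Map.fromListWith (++) [ (hw, [t]) | (hw, t) <- samples ])
      where
        mean xs = sum xs / fromIntegral (length xs)

    -- Hypothetical samples: (hardware label, seconds per run)
    example :: Map.Map String Double
    example = meanPerHardware
      [ ("x86_64/32GiB", 1.20), ("x86_64/32GiB", 1.25), ("aarch64/8GiB", 2.10) ]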
2025-05-12 11:17:03 +0200 <__monty__> Are there any benchmarking techniques that are independent(ish, at least) of the capacity of the underlying hardware?
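(One hardware-independent-ish proxy, not raised in the discussion above but a common answer to this question: count allocations instead of wall-clock time. A sketch using GHC.Stats; it assumes the program is run with +RTS -T so the runtime actually collects the statistics.)

    -- Bytes allocated while forcing a value; allocation counts are largely
    -- deterministic for a given binary, unlike wall-clock timings.
    -- Names below are illustrative only; run the binary with +RTS -T.
    import Control.DeepSeq (NFData, force)
    import Control.Exception (evaluate)
    import Data.Word (Word64)
    import GHC.Stats (allocated_bytes, getRTSStats)

    allocationsOf :: NFData a => a -> IO Word64
    allocationsOf x = do
      before <- allocated_bytes <$> getRTSStats
      _      <- evaluate (force x)
      after  <- allocated_bytes <$> getRTSStats
      pure (after - before)

    main :: IO ()
    main = print =<< allocationsOf (sum [1 .. 1000000 :: Int])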
2025-05-12 11:12:19 +0200 sord937(~sord937@gateway/tor-sasl/sord937) sord937
2025-05-12 11:11:50 +0200 <tomsmeding> I have some personal experience with benchmarking on github actions, and indeed the answer is: don't do that, performance is unpredictable
2025-05-12 11:11:46 +0200 sord937(~sord937@gateway/tor-sasl/sord937) (Remote host closed the connection)
2025-05-12 11:06:42 +0200 tromp(~textual@2001:1c00:3487:1b00:ecd3:a00f:e9d8:9bf6)
2025-05-12 11:05:52 +0200 Square(~Square4@user/square) Square
2025-05-12 10:53:54 +0200 kh0d(~kh0d@89.216.103.150) (Quit: Leaving...)
2025-05-12 10:51:28 +0200 JeremyB99(~JeremyB99@172.87.18.1)
2025-05-12 10:50:55 +0200 JeremyB99(~JeremyB99@172.87.18.1) (Read error: Connection reset by peer)
2025-05-12 10:42:13 +0200 Frostillicus(~Frostilli@pool-71-174-119-56.bstnma.fios.verizon.net) (Ping timeout: 252 seconds)
2025-05-12 10:38:50 +0200 tzh(~tzh@c-76-115-131-146.hsd1.or.comcast.net) (Quit: zzz)
2025-05-12 10:36:30 +0200 califax(~califax@user/califx) califx
2025-05-12 10:36:12 +0200 califax(~califax@user/califx) (Ping timeout: 264 seconds)
2025-05-12 10:35:52 +0200 hellwolf(~user@01f7-65aa-cc2f-39a8-0f00-4d40-07d0-2001.sta.estpak.ee) hellwolf
2025-05-12 10:32:33 +0200 tromp(~textual@2001:1c00:3487:1b00:ecd3:a00f:e9d8:9bf6) (Quit: My iMac has gone to sleep. ZZZzzz…)
2025-05-12 10:32:14 +0200 fp1(~Thunderbi@87-92-254-11.rev.dnainternet.fi) fp
2025-05-12 10:29:59 +0200 hellwolf(~user@5345-cb48-715e-41e3-0f00-4d40-07d0-2001.sta.estpak.ee) (Ping timeout: 252 seconds)
2025-05-12 10:21:29 +0200 <hololeap> ty
2025-05-12 10:21:27 +0200 <hololeap> that makes sense
2025-05-12 10:20:52 +0200 <merijn> I can imagine the available CPU scales with current load of their github action system
2025-05-12 10:20:50 +0200 <hololeap> fair enough
2025-05-12 10:20:36 +0200 <merijn> yeah
2025-05-12 10:20:33 +0200 <hololeap> you mean in terms of speed?
2025-05-12 10:19:49 +0200 <merijn> Because it seems unlikely the github runner infrastructure is deterministic enough
2025-05-12 10:19:31 +0200 <merijn> hololeap: I don't think anyone does that
2025-05-12 10:16:34 +0200 JuanDaugherty(~juan@user/JuanDaugherty) (Client Quit)
2025-05-12 10:16:28 +0200 kh0d(~kh0d@89.216.103.150) kh0d
2025-05-12 10:15:58 +0200 JuanDaugherty(~juan@user/JuanDaugherty) JuanDaugherty
2025-05-12 10:15:37 +0200 JuanDaugherty(~juan@user/JuanDaugherty) (Quit: praxis.meansofproduction.biz (juan@acm.org))