2025/09/17

Newest at the top

2025-09-17 10:35:13 +0200 <tomsmeding> now they're actually binary, not decimal, but same idea applies
2025-09-17 10:35:01 +0200 <tomsmeding> this leads to more _absolute_ precision (i.e. smaller error epsilon) around small numbers
2025-09-17 10:34:36 +0200 <tomsmeding> were they decimal with four digits of precision, you'd be able to represent 1.234e-4 and 1.234e100 both exactly, but not 1.2341e100
2025-09-17 10:33:58 +0200 <tomsmeding> floats are "always in scientific notation", with a fixed number of digits of precision
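The "binary scientific notation" view above can be checked directly: the standard `RealFloat` functions expose a Float's fixed-width significand and its exponent. A minimal sketch:

```haskell
-- A Float is significand * 2^exponent with a fixed-width significand:
-- "scientific notation", but binary rather than decimal.
main :: IO ()
main = do
  print (floatDigits (0 :: Float))   -- bits of significand precision (24)
  print (decodeFloat (0.1 :: Float)) -- (significand, exponent) of 0.1
```

`decodeFloat` shows that 0.1 is stored as an integer significand times a power of two, so the error epsilon scales with the exponent: small numbers get smaller absolute error.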
2025-09-17 10:33:42 +0200 <tomsmeding> yes
2025-09-17 10:33:23 +0200 <yin> more precision around small numbers, right?
2025-09-17 10:32:20 +0200 <tomsmeding> yin: the 5*2^55 scales etc. are arbitrary, just to make you not have to compare 1 % 180143985094819840 and 1 % 90071992547409920
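The two lambdabot one-liners below can be reproduced as a standalone program; the 5*2^55 and 5*2^24 factors just clear the common denominators so the errors show up as small rationals:

```haskell
import Data.Ratio ((%))

-- Exact representation error of each literal, scaled to small rationals.
-- 5 * 2^55 matches Double's 53-bit significand, 5 * 2^24 matches Float's 24 bits.
errsDouble, errsFloat :: [Rational]
errsDouble = [ (toRational (d :: Double) - r) * 5 * 2 ^ (55 :: Int)
             | (d, r) <- [(0.1, 1 % 10), (0.2, 2 % 10), (0.3, 3 % 10)] ]
errsFloat  = [ (toRational (f :: Float) - r) * 5 * 2 ^ (24 :: Int)
             | (f, r) <- [(0.1, 1 % 10), (0.2, 2 % 10), (0.3, 3 % 10)] ]

main :: IO ()
main = do
  print errsDouble  -- [1 % 1, 2 % 1, (-2) % 1]: 0.3's error has the opposite sign
  print errsFloat   -- [1 % 8, 1 % 4, 1 % 1]: 0.3 is much less accurate than 0.1, 0.2
```

The outputs match the lambdabot results in the log, which is what makes `0.1 + 0.2` round to `0.3 + epsilon` in Double but to exactly `0.3` in Float.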
2025-09-17 10:31:20 +0200mari73827(~mari-este@user/mari-estel) mari-estel
2025-09-17 10:30:46 +0200 <tomsmeding> the correct result of the rounded inputs, that is
2025-09-17 10:30:34 +0200 <tomsmeding> you lose precision in the product result, of course, but only after having virtually computed the correct result first
2025-09-17 10:30:08 +0200 <tomsmeding> with n repeated additions, you'd get odd error effects around those boundaries; with a single multiplication, things are more well-behaved, I think
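A quick way to see the difference between n repeated additions and one multiplication (the helper name `sumRepeated` is made up for this sketch):

```haskell
import Data.List (foldl')

-- n repeated additions round after every step; n * x rounds once,
-- after a (virtually) exact product.
sumRepeated :: Int -> Float -> Float
sumRepeated n x = foldl' (+) 0 (replicate n x)

main :: IO ()
main = do
  print (sumRepeated 1000000 0.1)  -- error accumulates at every step
  print (1000000 * 0.1 :: Float)   -- a single, well-behaved rounding
```

The repeated sum drifts visibly away from 100000, while the one-shot product lands on the correctly rounded result.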
2025-09-17 10:29:42 +0200 <tomsmeding> but multiplication is a one-shot operation, n * x is not x + x + ... + x
2025-09-17 10:29:18 +0200 <tomsmeding> __monty__: that's fair
2025-09-17 10:29:12 +0200 <tomsmeding> yin: ^
2025-09-17 10:29:10 +0200 <tomsmeding> whereas the error of 0.3 in Double is similar in scale to that of 0.1 and 0.2, and furthermore goes in the other direction, so it makes sense that 0.1 + 0.2 in Double rounds to 0.3 + epsilon
2025-09-17 10:28:41 +0200 <tomsmeding> 0.3 in Float is much less accurate than 0.1 and 0.2, so it makes sense that 0.1 + 0.2 in Float rounds to 0.3
2025-09-17 10:28:11 +0200 <__monty__> I'm pretty sure it can't, because the representational error in floats is not a continuous function; it increases in steps. So if your multiplication crosses such a boundary, your error jumps.
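The stepwise error __monty__ describes can be made concrete: the spacing between adjacent Floats (the ulp) doubles at every power of two, so a result that crosses such a boundary sees its rounding error jump. A sketch, with a made-up `ulpAt` helper:

```haskell
-- Spacing between adjacent Floats at x's magnitude: one unit in the
-- last place of the 24-bit significand.
ulpAt :: Float -> Float
ulpAt x = encodeFloat 1 (snd (decodeFloat x))

main :: IO ()
main = mapM_ (\x -> print (x, ulpAt x)) [1.5, 2.5, 4.5]
```

Between 1 and 2 the spacing is 2^-23; between 2 and 4 it is 2^-22; between 4 and 8 it is 2^-21.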
2025-09-17 10:28:11 +0200 <lambdabot> [1 % 8,1 % 4,1 % 1]
2025-09-17 10:28:10 +0200 <tomsmeding> > [(toRational (d :: Float) - r) * 5*2^24 | (d, r) <- [(0.1,1%10), (0.2,2%10), (0.3,3%10)]]
2025-09-17 10:28:03 +0200 <lambdabot> [1 % 1,2 % 1,(-2) % 1]
2025-09-17 10:28:02 +0200 <tomsmeding> > [(toRational (d :: Double) - r) * 5*2^55 | (d, r) <- [(0.1,1%10), (0.2,2%10), (0.3,3%10)]]
2025-09-17 10:25:07 +0200 <tomsmeding> I'm fairly sure it's accurate though
2025-09-17 10:24:44 +0200 <__monty__> "Multiplication simply multiplies the error" seems like quite a dangerous assumption with floats.
2025-09-17 10:23:09 +0200 <lambdabot> 0.30000000000000004
2025-09-17 10:23:08 +0200 <yin> > 0.1 + 0.2 :: Double
2025-09-17 10:23:04 +0200 <lambdabot> 0.3
2025-09-17 10:23:03 +0200 <yin> > 0.1 + 0.2 :: Float
2025-09-17 10:22:14 +0200 <tomsmeding> it rounds to that as a binary fraction, which gets rendered as .3 because that's the shortest decimal whose closest actual Float is the stored value, which ends in .25
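This can be checked with `toRational`, which recovers the exact value of the stored Float:

```haskell
-- Both decimal literals land on the same Float, whose exact value
-- is 1200001.25 (= 4800005/4); show then picks a shortest decimal
-- that reads back to that same Float.
main :: IO ()
main = do
  print (toRational (1200001.2 :: Float))
  print (toRational (1200001.3 :: Float))
```

At this magnitude a Float's ulp is 0.125, so 1200001.25 is the nearest representable value to both literals, matching the `realToFrac` check below.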
2025-09-17 10:21:46 +0200 <lambdabot> (1200001.25,1200001.25)
2025-09-17 10:21:45 +0200 <tomsmeding> > (realToFrac (1200001.2 :: Float) :: Double, realToFrac (1200001.3 :: Float) :: Double)
2025-09-17 10:21:33 +0200 <tomsmeding> ah and that is because:
2025-09-17 10:20:57 +0200 <tomsmeding> so 1000001 * the error in 1.2 :: Float is (slightly) less than 0.05, yet 1000001 * 1.2 comes out 0.1 too large
2025-09-17 10:20:51 +0200 <lambdabot> 0.30000000000000004
2025-09-17 10:20:50 +0200 <yin> > 0.1 + 0.2
2025-09-17 10:19:42 +0200 <lambdabot> 4.768376350402832e-2
2025-09-17 10:19:40 +0200 <tomsmeding> > 1000001 * realToFrac (toRational (1.2 :: Float) - (6 % 5)) :: Double
2025-09-17 10:19:36 +0200 <tomsmeding> now what's funny is:
2025-09-17 10:19:10 +0200 <lambdabot> 4.76837158203125e-8
2025-09-17 10:19:09 +0200 <tomsmeding> > realToFrac (toRational (1.2 :: Float) - (6 % 5)) :: Double