2024/05/26

Newest at the top

2024-05-26 18:31:37 +0200 <hammond> heh ok.
2024-05-26 18:31:20 +0200 <tomsmeding> probably, but many things are buggy when software projects are young :)
2024-05-26 18:31:08 +0200 <tomsmeding> gcc -O3 may be more aggressive there, which may help sometimes and not help other times
2024-05-26 18:31:07 +0200 <hammond> yeah i read -O3 was buggy back in the beginning.
2024-05-26 18:30:55 +0200 <tomsmeding> that's an example
2024-05-26 18:30:46 +0200 <hammond> i see. well i was looking at loop-unrolling for example in gcc, and how it can make it slower for some processors.
2024-05-26 18:30:10 +0200 <tomsmeding> ghc -O2 is like gcc -O3, "I want more performance and am willing to 1. spend more compile time and 2. benchmark and profile to see if it really helps"
2024-05-26 18:29:44 +0200 <tomsmeding> ghc -O1 is cabal's default and is like gcc -O2, the "standard optimisation set"
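[editor's note: the flag correspondence above, as concrete invocations; `Main.hs` is a placeholder file name.]

```shell
# GHC optimisation levels mentioned above:
ghc -O1 Main.hs   # cabal's default, the "standard optimisation set"
ghc -O2 Main.hs   # "try harder"; benchmark to confirm it actually helps

# Per-component, in the .cabal file's library/executable stanza:
#   ghc-options: -O2
```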
2024-05-26 18:28:50 +0200 <tomsmeding> (0 != O, if they look the same to you you should use a better font :) )
2024-05-26 18:28:12 +0200 <tomsmeding> if any -O flag changes the code's meaning, that's a bug in the compiler
2024-05-26 18:27:55 +0200 <tomsmeding> ghc -O2 is similar
2024-05-26 18:27:47 +0200 <tomsmeding> gcc -O3 is just "try harder and take more time, potentially making your code a bit slower if gcc makes some wrong assumptions about hot/cold code or hardware"
2024-05-26 18:27:13 +0200 <tomsmeding> -O3 != -ffast-math
2024-05-26 18:27:04 +0200 <tomsmeding> -ffast-math is a common C compiler flag that does change the meaning of code for the purpose of making some things faster
2024-05-26 18:27:04 +0200 <EvanR> making floating point math faster, while breaking it
2024-05-26 18:26:37 +0200 <tomsmeding> if so that would be a gcc bug :p
2024-05-26 18:26:32 +0200 <tomsmeding> ("unsound", in this situation means that it may change the meaning of the code)
2024-05-26 18:26:17 +0200 <tomsmeding> is -O3 in gcc unsound?
2024-05-26 18:25:56 +0200 <hammond> well let me ask you this since I'm still a beginner: does -O2 act like -O3 in gcc, where some new bug arises because of the optimization? you're essentially making the final code less safe, right?
2024-05-26 18:24:44 +0200 <tomsmeding> STG is fundamentally different from both Core and the imperative CPU execution model
2024-05-26 18:24:25 +0200 <tomsmeding> "take assembly and make it a bit more high level"
2024-05-26 18:23:58 +0200 <tomsmeding> LLVM IR may have a lot of operations, and may be a "big" language in comparison, but conceptually it's quite simple
2024-05-26 18:23:37 +0200 <tomsmeding> but what about thinking up STG, its semantics, and the Core->STG->Cmm->Asm translations?
2024-05-26 18:23:12 +0200 <tomsmeding> Leary: the optimisations in Core are easier perhaps
2024-05-26 18:22:55 +0200 <tomsmeding> the difficulty is in different places
2024-05-26 18:22:47 +0200 <tomsmeding> but then, purely functional languages map less directly to the hardware, so there is more work to do in making them practically efficient
2024-05-26 18:22:38 +0200 <hammond> i see
2024-05-26 18:22:27 +0200 <tomsmeding> in a pure language, this whole point is moot
2024-05-26 18:22:20 +0200 <tomsmeding> hence must re-load things from memory etc; if it can analyse the function to not touch those arrays, then it can avoid those additional memory accesses
2024-05-26 18:22:11 +0200 <hammond> tomsmeding: would there be cases where the compiler tries to optimize and actually makes it harder? say if your cpu doesn't have enough L1 cache.
2024-05-26 18:22:11 +0200 <Leary> Seems much easier in Haskell to me. A lot of it is just compile-time evaluation.
2024-05-26 18:21:50 +0200 <tomsmeding> a C compiler must worry that after a function call, all kinds of variables, arrays, etc. may now suddenly have different values
2024-05-26 18:21:32 +0200 <tomsmeding> some things are much easier to _analyse_ about a haskell program, because of the purity
2024-05-26 18:21:19 +0200 <hammond> yeah
2024-05-26 18:21:13 +0200 <tomsmeding> it's not really inherently easier or harder