Newest at the top
2025-10-20 12:11:04 +0200 | CiaoSen | (~Jura@2a02:8071:64e1:da0:5a47:caff:fe78:33db) CiaoSen |
2025-10-20 12:10:02 +0200 | xff0x | (~xff0x@fsb6a9491c.tkyc517.ap.nuro.jp) (Ping timeout: 248 seconds) |
2025-10-20 12:08:05 +0200 | mzg | (mzg@abusers.hu) |
2025-10-20 12:05:30 +0200 | merijn | (~merijn@77.242.116.146) merijn |
2025-10-20 12:05:06 +0200 | jreicher | (~user@user/jreicher) jreicher |
2025-10-20 12:04:25 +0200 | merijn | (~merijn@77.242.116.146) (Ping timeout: 244 seconds) |
2025-10-20 12:02:28 +0200 | SlackCoder | (~SlackCode@208.26.91.234) (Remote host closed the connection) |
2025-10-20 11:59:43 +0200 | <endokqr> | That would be a rather useful trick. I'm not yet entirely sure where I'd put those annotations because I don't know where the interesting stuff is and where it's not, but maybe I could figure that out either by first downsampling or by spending a few minutes thinking about it. |
2025-10-20 11:59:26 +0200 | <dminuoso> | This might give you enough routes to explore |
2025-10-20 11:58:57 +0200 | <dminuoso> | Or you control it per-module with {-# OPTIONS_GHC -fno-prof-auto #-} |
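For reference, a minimal sketch of that per-module opt-out (module and function names are made up): the pragma switches automatic cost-centre insertion off for just this module, even if the rest of the build is compiled with -fprof-auto.

```haskell
{-# OPTIONS_GHC -fno-prof-auto #-}   -- no automatic cost centres in this module
module Noisy.Internals (hotLoop) where

-- With auto cost centres disabled here, whatever hotLoop costs is attributed
-- to the enclosing cost centre at its call sites.
hotLoop :: Int -> Int
hotLoop n = sum [1 .. n]
```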
2025-10-20 11:57:16 +0200 | srazkvt | (~sarah@user/srazkvt) (Quit: Konversation terminated!) |
2025-10-20 11:56:52 +0200 | <dminuoso> | In the profiling data it would just collapse it into a single cost center. |
2025-10-20 11:56:32 +0200 | <dminuoso> | endokqr: Btw, it could be sufficient to explicitly declare cost centers on branches you *don't* want to profile, as -fprof-auto (which I presume you are using) does not poke deeper if you attached a cost center. |
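A minimal sketch of what that could look like (function names are made up), taking the description of -fprof-auto's behaviour above at face value:

```haskell
module Branches where   -- imagine this module is compiled with -fprof-auto

process :: Int -> Int
process x
  -- Explicit cost centre on the branch we don't want to drill into; its costs
  -- are reported under the "boringBranch" entry in the .prof tree.
  | even x    = {-# SCC "boringBranch" #-} boringPath x
  | otherwise = interestingPath x   -- left to the automatic cost centres

boringPath, interestingPath :: Int -> Int
boringPath x      = x * 3 + 7
interestingPath x = x * x - 1
```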
2025-10-20 11:55:43 +0200 | FirefoxDeHuk | (~FirefoxDe@109.108.69.106) (Write error: Broken pipe) |
2025-10-20 11:55:22 +0200 | <endokqr> | But I could steal code from it to flatten the .prof file and then sample from it – that way I'm likely to get a subset of more interesting cost centres without difficult heuristics. |
2025-10-20 11:54:49 +0200 | <endokqr> | That's what I'm aiming for, but on the full 9.1 GB file it eats all my 48 GB of RAM and then my system starts thrashing. |
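A rough streaming sketch of that pruning idea (not an existing tool; everything here is made up): it keeps only the .prof tree rows whose inherited %time is at or above a threshold, assuming plain `+RTS -p` output where the last two numeric columns of a tree row are the inherited %time and %alloc. Since a child's inherited %time can never exceed its parent's, the ancestors of every kept row survive as well, so the pruned tree stays well formed; `interact` streams lazily, so the 9.1 GB file is never held in memory at once.

```haskell
module Main where

import System.Environment (getArgs)
import Text.Read (readMaybe)

-- Keep header/summary lines that don't end in two numbers, and tree rows
-- whose inherited %time (second-to-last field) meets the threshold.
keepLine :: Double -> String -> Bool
keepLine threshold line =
  case reverse (words line) of
    (inhAlloc : inhTime : _) ->
      case (readMaybe inhTime, readMaybe inhAlloc :: Maybe Double) of
        (Just t, Just _) -> t >= threshold
        _                -> True   -- not a data row: pass it through
    _ -> True

main :: IO ()
main = do
  args <- getArgs
  let threshold = case args of
        (s:_) | Just t <- readMaybe s -> t
        _                             -> 1.0   -- default cut-off: 1% inherited time
  interact (unlines . filter (keepLine threshold) . lines)
```

Something like `runghc PruneProf.hs 2 < big.prof > pruned.prof` (file names and the 2% cut-off are invented) should then leave a file small enough for ghc-prof-flamegraph; lazy Strings are slow, so Data.ByteString.Lazy.Char8 would be the faster choice with the same shape.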
2025-10-20 11:54:34 +0200 | FirefoxDeHuk | (~FirefoxDe@109.108.69.106) |
2025-10-20 11:53:46 +0200 | <dminuoso> | endokqr: You might find https://github.com/fpco/ghc-prof-flamegraph of interest (haven't used it in a few years, but I think it should still work fine) |
2025-10-20 11:53:34 +0200 | FirefoxDeHuk | (~FirefoxDe@109.108.69.106) (Write error: Connection reset by peer) |
2025-10-20 11:53:10 +0200 | FirefoxDeHuk | (~FirefoxDe@109.108.69.106) |
2025-10-20 11:52:38 +0200 | <dminuoso> | Yes. |
2025-10-20 11:52:31 +0200 | <endokqr> | Ooooh, okay. So the only solution for me is to either post-process the .prof file and try to recognise "unimportant" branches of the tree and prune them, or go in and try to assign cost centres more intelligently? |
2025-10-20 11:52:21 +0200 | <dminuoso> | The "stack frames" you describe are just the cost centers. |
2025-10-20 11:51:47 +0200 | mreh | (~matthew@host86-146-25-125.range86-146.btcentralplus.com) |
2025-10-20 11:51:35 +0200 | <dminuoso> | endokqr: No, the cost centers are collected regardless. The interval is just how often the RTS stops and writes a record. |
2025-10-20 11:51:02 +0200 | <endokqr> | dminuoso, Yeah, and I would imagine setting -i to e.g. "10 Hz" would give me fewer stack frames in the time profile. But whatever number I pass there, I get the same 9.1 GB .prof file. |
2025-10-20 11:50:45 +0200 | <dminuoso> | In practice this controls the size of the profiling data |
2025-10-20 11:50:27 +0200 | <dminuoso> | endokqr: -i is just the sampling rate, think of it as how accurate/finely grained the data is. |
2025-10-20 11:47:59 +0200 | FirefoxDeHuk | (~FirefoxDe@109.108.69.106) (Quit: Client closed) |
2025-10-20 11:43:52 +0200 | trickard_ | (~trickard@cpe-53-98-47-163.wireline.com.au) |
2025-10-20 11:43:39 +0200 | trickard_ | (~trickard@cpe-53-98-47-163.wireline.com.au) (Read error: Connection reset by peer) |
2025-10-20 11:40:15 +0200 | merijn | (~merijn@77.242.116.146) merijn |
2025-10-20 11:33:01 +0200 | merijn | (~merijn@77.242.116.146) (Ping timeout: 264 seconds) |
2025-10-20 11:32:27 +0200 | fp | (~Thunderbi@130.233.70.140) fp |
2025-10-20 11:24:43 +0200 | FirefoxDeHuk | (~FirefoxDe@109.108.69.106) |
2025-10-20 11:23:40 +0200 | merijn | (~merijn@77.242.116.146) merijn |
2025-10-20 11:23:35 +0200 | <endokqr> | I am profiling (+RTS -p) a Haskell program that runs for quite some time and I am interested in data from the full run. Unfortunately, this makes the time profile huge! I thought I'd be able to adjust the resolution of the time profile with -i and/or -V, but this seems to have no effect. What am I misunderstanding? |
2025-10-20 11:23:16 +0200 | FirefoxDeHuk | (~FirefoxDe@109.108.69.106) (Quit: Client closed) |
2025-10-20 11:21:26 +0200 | tzh | (~tzh@c-76-115-131-146.hsd1.or.comcast.net) (Quit: zzz) |
2025-10-20 11:12:26 +0200 | merijn | (~merijn@77.242.116.146) (Ping timeout: 248 seconds) |
2025-10-20 11:08:30 +0200 | fp | (~Thunderbi@2001:708:20:1406::10c5) (Ping timeout: 256 seconds) |
2025-10-20 11:07:13 +0200 | merijn | (~merijn@77.242.116.146) merijn |
2025-10-20 11:06:56 +0200 | <davean> | With mtl you have a specific monad and then properties about it that you can use |
2025-10-20 11:05:37 +0200 | <dminuoso> | The only effect that is universally compatible with most libraries is pure IO. |
2025-10-20 11:04:47 +0200 | <dminuoso> | As a consequence, Hackage is now filled with code that ends up using any combination of them. |
2025-10-20 11:04:26 +0200 | <dminuoso> | If you use hard-wired transformers it's really hard to compose different transformer code together. If you use mtl code you lack effect order specification. As a result you have a large variety of effect libraries that try to address these issues. |
2025-10-20 11:04:01 +0200 | <srazkvt> | I guess because instead of being able to call both functions on the wrapped monad, you need to lift the computations? |
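As an illustration of the contrast being discussed (module and names are made up, not anyone's real code): the first function is welded to one concrete stack and has to lift to reach the Reader layer, while the second only asks for capabilities via mtl classes and runs in any stack that provides them, at the price of not pinning down the effect order.

```haskell
{-# LANGUAGE FlexibleContexts #-}
module Effects where

-- transformers: concrete, hard-wired API
import           Control.Monad.Trans.Class  (lift)
import qualified Control.Monad.Trans.Reader as R
import qualified Control.Monad.Trans.State  as S
-- mtl: class-based API
import           Control.Monad.Reader       (MonadReader, ask)
import           Control.Monad.State        (MonadState, get, put)

newtype Config = Config { verbose :: Bool }

-- Hard-wired: usable only in exactly this stack; reaching the Reader layer
-- from under the State layer needs an explicit lift.
stepConcrete :: S.StateT Int (R.ReaderT Config IO) ()
stepConcrete = do
  n   <- S.get
  cfg <- lift R.ask
  S.put (if verbose cfg then n + 1 else n)

-- mtl-style: any monad offering these two capabilities will do, in any order,
-- but the signature no longer says how the effects are layered.
stepMtl :: (MonadState Int m, MonadReader Config m) => m ()
stepMtl = do
  n   <- get
  cfg <- ask
  put (if verbose cfg then n + 1 else n)
```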
2025-10-20 11:02:19 +0200 | <davean> | How so? |
2025-10-20 11:00:54 +0200 | <dminuoso> | Despite transformers being labeled with terms like "composition of effects", they are the antithesis of compositionality of library code. |
2025-10-20 11:00:11 +0200 | __monty__ | (~toonn@user/toonn) toonn |