Newest at the top
| 2026-03-07 12:41:02 +0100 | Square3 | (~Square@user/square) Square |
| 2026-03-07 12:40:51 +0100 | <Guest89> | https://paste.tomsmeding.com/AUb2v7Sh |
| 2026-03-07 12:40:18 +0100 | <haskellbridge> | <sm> by all means show it :) |
| 2026-03-07 12:40:07 +0100 | <Guest89> | yes |
| 2026-03-07 12:40:02 +0100 | <haskellbridge> | <sm> do you have a .prof file ? |
| 2026-03-07 12:39:54 +0100 | <Guest89> | do you just want a file dump? |
| 2026-03-07 12:38:58 +0100 | <Guest89> | I've got the files from profiling and some html pages rendered from them if that's what you mean |
| 2026-03-07 12:38:54 +0100 | <haskellbridge> | <sm> I looked it up: stack install --profile PROG; PROG +RTS -P -RTS ... this will save PROG.prof |
| 2026-03-07 12:38:34 +0100 | <Guest89> | when you say *make* a profile, what do you mean exactly? |
| 2026-03-07 12:38:05 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn |
| 2026-03-07 12:35:51 +0100 | <haskellbridge> | <sm> if you make a profile, people here will help you read it |
| 2026-03-07 12:35:18 +0100 | <haskellbridge> | <sm> or, commenting out large chunks of your program to see what makes a difference |
| 2026-03-07 12:34:24 +0100 | <haskellbridge> | <sm> reducing to a simple program you can share, may help |
| 2026-03-07 12:33:56 +0100 | <haskellbridge> | <sm> Guest89 you won't be able to fix it by kitchen sink experimenting, you'll need to dig in and understand. There's likely many causes of space leak |
| 2026-03-07 12:31:40 +0100 | <Guest89> | actually, one thing I have been doing in general is guard patterns for unpacking. are they lazy as well or strict the same way pattern matching is? |
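A small sketch answering the pattern-guard question above: pattern guards desugar to ordinary case matches, so they force the scrutinee to WHNF exactly like regular pattern matching, but the variables they bind stay lazy unless banged. The names `step`/`stepStrict` are illustrative only.

```haskell
{-# LANGUAGE BangPatterns #-}

step :: Maybe Int -> Int
step m
  | Just x <- m = x + 1   -- forces m to WHNF; x itself remains a thunk
  | otherwise   = 0

stepStrict :: Maybe Int -> Int
stepStrict m
  | Just !x <- m = x + 1  -- the bang also forces the payload x
  | otherwise    = 0

main :: IO ()
main = do
  print (step (Just (2 + 3)))        -- 6
  print (stepStrict (Just (2 + 3)))  -- 6
```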
| 2026-03-07 12:30:20 +0100 | <Guest89> | but it kind of feels like I am at the kitchen sink stage in general if I'm being honest |
| 2026-03-07 12:29:55 +0100 | <Guest89> | I should probably experiment with it more but it seemed like allocations went down only a little (or at least less than I expected) while somehow runtimes increased slightly |
| 2026-03-07 12:29:47 +0100 | Sgeo | (~Sgeo@user/sgeo) (Read error: Connection reset by peer) |
| 2026-03-07 12:29:16 +0100 | <Guest89> | I tried playing around with unboxed tuples and different pragmas like unpacking but they seemed to have varying/counterintuitive results |
| 2026-03-07 12:27:58 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 276 seconds) |
| 2026-03-07 12:26:10 +0100 | wootehfoot | (~wootehfoo@user/wootehfoot) (Read error: Connection reset by peer) |
| 2026-03-07 12:23:55 +0100 | <probie> | If you need multiple return values and can't afford an allocation, look at unboxed tuples (although this is likely overkill) |
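A minimal sketch of the unboxed-tuple suggestion above, with illustrative names: a function returning `(# q, r #)` hands back both results without allocating a heap tuple (the components are passed in registers where possible).

```haskell
{-# LANGUAGE UnboxedTuples #-}

-- Return quotient and remainder together in an unboxed tuple.
divMod2 :: Int -> Int -> (# Int, Int #)
divMod2 n d = (# n `div` d, n `mod` d #)

-- Consumers must immediately case-match; unboxed tuples are not first-class.
sumDivMod :: Int -> Int -> Int
sumDivMod n d = case divMod2 n d of
  (# q, r #) -> q + r

main :: IO ()
main = print (sumDivMod 17 5)  -- 3 + 2 = 5
```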
| 2026-03-07 12:22:46 +0100 | <probie> | GHC is capable of doing something like `f x = (x, x+1)`, `g x = let (a, b) = f x in a + b` without actually allocating a tuple (assuming it can inline `f`), but that's the same for any user defined type as well |
| 2026-03-07 12:22:18 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn |
| 2026-03-07 12:21:47 +0100 | <Leary> | Guest89: It might here and there, but that doesn't mean you're better off using them. In particular, they're lazy, so they can easily accumulate big thunks. I suggest replacing such parts of your representation with suitably strict bespoke data declarations. |
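The "suitably strict bespoke data" suggestion above might look like the following sketch (type and field names are illustrative): a strict, unpacked accumulator replacing a lazy `(Int, Int)` tuple, so no thunks can build up inside it.

```haskell
import Data.List (foldl')

-- Strict, unpacked fields: forced on construction, stored flat.
data Acc = Acc {-# UNPACK #-} !Int {-# UNPACK #-} !Int

step :: Acc -> Int -> Acc
step (Acc s n) x = Acc (s + x) (n + 1)

mean :: [Int] -> Double
mean xs = case foldl' step (Acc 0 0) xs of
  Acc s n -> fromIntegral s / fromIntegral n

main :: IO ()
main = print (mean [1 .. 10])  -- 5.5
```

Note the `foldl'`: strict fields keep the accumulator's contents evaluated, but a strict fold is still needed to avoid a chain of unevaluated `step` applications.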
| 2026-03-07 12:17:43 +0100 | <Guest89> | I just understood ghc optimized for tuples in particular over ordinary data constructors |
| 2026-03-07 12:17:16 +0100 | <probie> | Why? |
| 2026-03-07 12:17:07 +0100 | <Guest89> | I thought ghc treated them differently? |
| 2026-03-07 12:16:09 +0100 | <probie> | Beyond the special syntax, tuples aren't really much different from `data T2 a b = T2 a b`, `data T3 a b c = T3 a b c`, `data T4 a b c d = T4 a b c d` etc. |
| 2026-03-07 12:15:03 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 265 seconds) |
| 2026-03-07 12:11:08 +0100 | <Guest89> | seems the biggest allocations come from primitive types, tuples etc |
| 2026-03-07 12:08:11 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn |
| 2026-03-07 12:02:52 +0100 | <Guest89> | also I've been relying on using eventlog2html but it seems to break fairly easily. are there any other options for visualizing the profiles? |
| 2026-03-07 12:02:32 +0100 | ChaiTRex | (~ChaiTRex@user/chaitrex) ChaiTRex |
| 2026-03-07 12:01:49 +0100 | <Guest89> | I have some plots from using -hc that tells me which functions allocate the most but to be honest they're not particularly surprising in that department |
| 2026-03-07 12:01:03 +0100 | <Guest89> | i'll give it a whirl |
| 2026-03-07 12:00:51 +0100 | <Guest89> | sorry, -h |
| 2026-03-07 12:00:49 +0100 | <Leary> | -hT: https://downloads.haskell.org/ghc/latest/docs/users_guide/profiling.html#rts-options-heap-prof |
| 2026-03-07 11:59:50 +0100 | <Guest89> | it's one of the -l(x) RTS settings |
| 2026-03-07 11:59:28 +0100 | <haskellbridge> | <sm> how do you do that Leary ? |
| 2026-03-07 11:59:22 +0100 | hiecaq | (~hiecaq@user/hiecaq) (Quit: ERC 5.6.0.30.1 (IRC client for GNU Emacs 30.2)) |
| 2026-03-07 11:59:04 +0100 | <Guest89> | my reference implementation generates only a few megabytes of data by comparison but again it's not comparable 1:1 |
| 2026-03-07 11:58:57 +0100 | CiaoSen | (~Jura@2a02:8071:64e1:da0:5a47:caff:fe78:33db) CiaoSen |
| 2026-03-07 11:58:49 +0100 | ChaiTRex | (~ChaiTRex@user/chaitrex) (Ping timeout: 258 seconds) |
| 2026-03-07 11:58:11 +0100 | <Guest89> | the only thing I haven't tried is to force computations in different places |
| 2026-03-07 11:57:34 +0100 | <Guest89> | https://paste.tomsmeding.com/xZZPhSCR |
| 2026-03-07 11:57:29 +0100 | Beowulf | (florian@sleipnir.bandrate.org) |
| 2026-03-07 11:57:14 +0100 | <lambdabot> | Help us help you: please paste full code, input and/or output at e.g. https://paste.tomsmeding.com |
| 2026-03-07 11:57:14 +0100 | <Leary> | @where paste |
| 2026-03-07 11:56:45 +0100 | <Leary> | Guest89: The problem is less likely to be allocations than unnecessary retention or unwanted thunks bloating your representation. Allocating is almost free, holding onto it is what costs you. In any case, I would start by heap profiling by type, which doesn't actually require a profiling build. |
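The point above, in a classic sketch: allocation is cheap, retention is what hurts. A lazy left fold retains a linear chain of `(+)` thunks, while the strict `foldl'` runs in constant heap. Compiling with `-rtsopts` and running with `+RTS -hT` should give a by-type heap profile without a profiling build (the resulting .hp can be rendered with hp2ps, or add `-l` for an eventlog).

```haskell
import Data.List (foldl')

leaky, tight :: Int -> Int
leaky n = foldl  (+) 0 [1 .. n]  -- builds an O(n) chain of (+) thunks
tight n = foldl' (+) 0 [1 .. n]  -- forces each step: O(1) heap

main :: IO ()
main = print (tight 1000000)  -- 500000500000
```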