Newest at the top
| 2026-03-07 12:52:51 +0100 | <haskellbridge> | <sm> it looks like there's a lot of comparing needed to do an insert. Is it your own custom priority queue ? |
| 2026-03-07 12:50:44 +0100 | <Guest89> | it seems like a lot but I can't dismiss it as unexpected |
| 2026-03-07 12:50:19 +0100 | <Guest89> | but most likely it should be less than that because of short circuiting on a lot of the node combinations |
| 2026-03-07 12:50:02 +0100 | <Guest89> | so in this particular example the data is generated from a fold that iteratively uses bddApply on new BDDs, but only 7 times total. the largest BDDs being applied have a few thousand nodes each, which means that the upper bound for bddApply will necessarily be in the millions |
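The quadratic upper bound Guest89 describes (and the short-circuiting that usually keeps real runs well under it) comes from apply memoising on *pairs* of nodes. This is not Guest89's actual bddApply, just an illustrative ROBDD apply sketch with assumed names:

```haskell
import qualified Data.Map.Strict as M

-- Illustrative BDD: variable index, low branch, high branch.
data BDD = Leaf Bool | Node Int BDD BDD
  deriving (Eq, Ord, Show)

-- Memoised apply: each (left, right) node pair is computed at most
-- once, so work is bounded by |f| * |g| -- two BDDs of a few thousand
-- nodes each give a worst case in the millions of pairs, exactly the
-- upper bound mentioned above. Equal subresults are merged, which is
-- where the short-circuiting savings come from.
bddApply :: (Bool -> Bool -> Bool) -> BDD -> BDD -> BDD
bddApply op f g = fst (go f g M.empty)
  where
    go a b memo
      | Just r <- M.lookup (a, b) memo = (r, memo)
    go (Leaf x) (Leaf y) memo = (Leaf (op x y), memo)
    go a b memo =
      let v           = min (var a) (var b)
          (lo, memo1) = go (branch False v a) (branch False v b) memo
          (hi, memo2) = go (branch True  v a) (branch True  v b) memo1
          r           = if lo == hi then lo else Node v lo hi
      in  (r, M.insert (a, b) r memo2)
    var (Leaf _)     = maxBound
    var (Node v _ _) = v
    branch hi v (Node w l h) | w == v = if hi then h else l
    branch _  _ n                     = n
```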
| 2026-03-07 12:48:17 +0100 | arandombit | (~arandombi@user/arandombit) (Ping timeout: 268 seconds) |
| 2026-03-07 12:47:02 +0100 | <Guest89> | well insert is probably from the data structure I use to maintain a priority queue |
| 2026-03-07 12:46:16 +0100 | <Guest89> | I've only seen them now that I'm running with -P instead of -p |
| 2026-03-07 12:46:07 +0100 | <Guest89> | I don't know what they are either |
| 2026-03-07 12:45:47 +0100 | <haskellbridge> | <sm> (I don't know why these names are obfuscated) |
| 2026-03-07 12:44:57 +0100 | <haskellbridge> | <sm> $sinsert_$sgo4 14 million. maybe one of those... |
| 2026-03-07 12:44:50 +0100 | <Guest89> | let me try something for a quick sanity check |
| 2026-03-07 12:44:30 +0100 | <haskellbridge> | <sm> $wbddApply'' is called half a million times |
| 2026-03-07 12:44:02 +0100 | <haskellbridge> | <sm> it sounds like something is doing too much work |
| 2026-03-07 12:43:46 +0100 | <Guest89> | but on that particular run; no, that is excessive |
| 2026-03-07 12:43:34 +0100 | <Guest89> | so currently on a benchmark that I have (encoding `n-queens`) the number of nodes in my data structure is expected to quadruple for each n but currently space and time seem to increase 10-fold instead |
| 2026-03-07 12:43:12 +0100 | arandombit | (~arandombi@user/arandombit) arandombit |
| 2026-03-07 12:43:07 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 268 seconds) |
| 2026-03-07 12:43:01 +0100 | <haskellbridge> | <sm> see the entries column.. you have something being called 26 million times, eg. Is that what you'd expect ? Is the data that large ? |
| 2026-03-07 12:42:17 +0100 | <Guest89> | will try |
| 2026-03-07 12:41:39 +0100 | <haskellbridge> | <sm> lovely. And it might be interesting to run profiterole on that too. |
| 2026-03-07 12:41:02 +0100 | Square3 | (~Square@user/square) Square |
| 2026-03-07 12:40:51 +0100 | <Guest89> | https://paste.tomsmeding.com/AUb2v7Sh |
| 2026-03-07 12:40:18 +0100 | <haskellbridge> | <sm> by all means show it :) |
| 2026-03-07 12:40:07 +0100 | <Guest89> | yes |
| 2026-03-07 12:40:02 +0100 | <haskellbridge> | <sm> do you have a .prof file ? |
| 2026-03-07 12:39:54 +0100 | <Guest89> | do you just want a file dump? |
| 2026-03-07 12:38:58 +0100 | <Guest89> | I've got the files from profiling and some html pages rendered from them if that's what you mean |
| 2026-03-07 12:38:54 +0100 | <haskellbridge> | <sm> I looked it up: stack install --profile PROG; PROG +RTS -P -RTS ... this will save PROG.prof |
| 2026-03-07 12:38:34 +0100 | <Guest89> | when you say *make* a profile, what do you mean exactly? |
| 2026-03-07 12:38:05 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn |
| 2026-03-07 12:35:51 +0100 | <haskellbridge> | <sm> if you make a profile, people here will help you read it |
| 2026-03-07 12:35:18 +0100 | <haskellbridge> | <sm> or, commenting out large chunks of your program to see what makes a difference |
| 2026-03-07 12:34:24 +0100 | <haskellbridge> | <sm> reducing to a simple program you can share, may help |
| 2026-03-07 12:33:56 +0100 | <haskellbridge> | <sm> Guest89 you won't be able to fix it by kitchen sink experimenting, you'll need to dig in and understand. There are likely many causes of space leaks |
| 2026-03-07 12:31:40 +0100 | <Guest89> | actually, one thing I have been doing in general is guard patterns for unpacking. are they lazy as well or strict the same way pattern matching is? |
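On Guest89's question above: pattern matching in a guard forces the scrutinee to WHNF when the guard is tried, exactly like a case expression; only let/where pattern bindings are lazy (irrefutable). A small illustration (names hypothetical):

```haskell
-- The pattern guard forces p to WHNF before binding a,
-- just like 'case p of (a, _) -> ...' would.
viaGuard :: (Int, Int) -> Int
viaGuard p | (a, _) <- p = a

-- A let pattern binding is irrefutable: p is never forced here,
-- so this returns 0 even when p is undefined.
viaLet :: (Int, Int) -> Int
viaLet p = let (a, _) = p in 0
```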
| 2026-03-07 12:30:20 +0100 | <Guest89> | but it kind of feels like I am at the kitchen sink stage in general if I'm being honest |
| 2026-03-07 12:29:55 +0100 | <Guest89> | I should probably experiment with it more but it seemed like allocations went down only a little (or at least less than I expected) while somehow runtimes increased slightly |
| 2026-03-07 12:29:47 +0100 | Sgeo | (~Sgeo@user/sgeo) (Read error: Connection reset by peer) |
| 2026-03-07 12:29:16 +0100 | <Guest89> | I tried playing around with unboxed tuples and different pragmas like unpacking but they seemed to have varying/counterintuitive results |
| 2026-03-07 12:27:58 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 276 seconds) |
| 2026-03-07 12:26:10 +0100 | wootehfoot | (~wootehfoo@user/wootehfoot) (Read error: Connection reset by peer) |
| 2026-03-07 12:23:55 +0100 | <probie> | If you need multiple return values and can't afford an allocation, look at unboxed tuples (although this is likely overkill) |
| 2026-03-07 12:22:46 +0100 | <probie> | GHC is capable of doing something like `f x = (x, x+1)`, `g x = let (a, b) = f x in a + b` without actually allocating a tuple (assuming it can inline `f`), but that's the same for any user defined type as well |
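probie's two points above, spelled out as a sketch: with -O, GHC's inlining plus worker/wrapper and CPR analysis usually eliminates the tuple allocation in `g`; the unboxed-tuple version makes the no-allocation return explicit in the source (names here are illustrative):

```haskell
{-# LANGUAGE UnboxedTuples #-}

-- Boxed version: with optimisation, once f inlines into g, GHC can
-- compile this without ever allocating the intermediate tuple.
f :: Int -> (Int, Int)
f x = (x, x + 1)

g :: Int -> Int
g x = let (a, b) = f x in a + b

-- Unboxed-tuple version: (# , #) values live in registers/stack only,
-- so no heap allocation can happen for the return, by construction.
fU :: Int -> (# Int, Int #)
fU x = (# x, x + 1 #)

gU :: Int -> Int
gU x = case fU x of (# a, b #) -> a + b
```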
| 2026-03-07 12:22:18 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn |
| 2026-03-07 12:21:47 +0100 | <Leary> | Guest89: It might here and there, but that doesn't mean you're better off using them. In particular, they're lazy, so they can easily accumulate big thunks. I suggest replacing such parts of your representation with suitably strict bespoke data declarations. |
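A bespoke strict replacement for a lazy tuple, along the lines Leary suggests (the type and fold are illustrative, not Guest89's code):

```haskell
import Data.List (foldl')

-- Strict fields mean a thunk can never hide inside the constructor;
-- UNPACK asks GHC to store the Ints unboxed in the heap object.
data P = P {-# UNPACK #-} !Int {-# UNPACK #-} !Int
  deriving (Eq, Show)

-- With a strict fold and strict fields, the accumulator stays a pair
-- of evaluated Ints instead of a growing chain of (+) thunks.
sumP :: [P] -> P
sumP = foldl' step (P 0 0)
  where
    step (P a b) (P c d) = P (a + c) (b + d)
```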
| 2026-03-07 12:17:43 +0100 | <Guest89> | I just understood that ghc had optimizations for tuples in particular over ordinary data constructors |
| 2026-03-07 12:17:16 +0100 | <probie> | Why? |
| 2026-03-07 12:17:07 +0100 | <Guest89> | I thought ghc treated them differently? |
| 2026-03-07 12:16:09 +0100 | <probie> | Beyond the special syntax, tuples aren't really much different from `data T2 a b = T2 a b`, `data T3 a b c = T3 a b c`, `data T4 a b c d = T4 a b c d` etc. |
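probie's equivalence, made concrete: `(,)` is an ordinary algebraic data type, and a hypothetical `T2` compiles to the same heap shape (header plus two pointer fields), so there is no tuple-specific optimisation to chase:

```haskell
-- T2 is hypothetical; it is representationally identical to (,).
data T2 a b = T2 a b deriving (Eq, Show)

toT2 :: (a, b) -> T2 a b
toT2 (a, b) = T2 a b

fromT2 :: T2 a b -> (a, b)
fromT2 (T2 a b) = (a, b)
```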
| 2026-03-07 12:15:03 +0100 | merijn | (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 265 seconds) |