2025/10/20

Newest at the top

2025-10-20 11:24:43 +0200FirefoxDeHuk(~FirefoxDe@109.108.69.106)
2025-10-20 11:23:40 +0200merijn(~merijn@77.242.116.146) merijn
2025-10-20 11:23:35 +0200 <endokqr> I am profiling (+RTS -p) a Haskell program that runs for quite some time and I am interested in data from the full run. Unfortunately, this makes the time huge! I thought I'd be able to adjust the resolution of the time profile with -i and/or -V, but this seems to have no effect. What am I misunderstanding?
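A hedged aside on the question above: as far as I recall, -i sets the heap-profile sampling interval (used with the -h flags), while the -p time profile is a single whole-run summary rather than a time series, which would explain those flags appearing to have no effect on it. A minimal sketch of a cost-centre profiled run (the module and the cost-centre name "heavy" are made up):

    -- Compile with profiling enabled, then run with the RTS flag from the
    -- question; the report is written at program exit:
    --
    --   ghc -prof -fprof-auto Main.hs
    --   ./Main +RTS -p -RTS        -- writes Main.prof when the program ends
    module Main where

    main :: IO ()
    main = mapM_ (print . heavy) [1 .. 5000 :: Int]
      where
        heavy n = {-# SCC "heavy" #-} sum [1 .. n]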
2025-10-20 11:23:16 +0200FirefoxDeHuk(~FirefoxDe@109.108.69.106) (Quit: Client closed)
2025-10-20 11:21:26 +0200tzh(~tzh@c-76-115-131-146.hsd1.or.comcast.net) (Quit: zzz)
2025-10-20 11:12:26 +0200merijn(~merijn@77.242.116.146) (Ping timeout: 248 seconds)
2025-10-20 11:08:30 +0200fp(~Thunderbi@2001:708:20:1406::10c5) (Ping timeout: 256 seconds)
2025-10-20 11:07:13 +0200merijn(~merijn@77.242.116.146) merijn
2025-10-20 11:06:56 +0200	<davean>	With mtl you have a specific monad and then properties about it that you can use
2025-10-20 11:05:37 +0200 <dminuoso> The only effect that is universally compatible with most libraries is pure IO.
2025-10-20 11:04:47 +0200	<dminuoso>	As a consequence, Hackage is now filled with code that ends up using any combination.
2025-10-20 11:04:26 +0200	<dminuoso>	If you use hard-wired transformers it's really hard to compose different transformer code together. If you use mtl code you lack effect order specification. As a result you have a large variety of effect libraries that try to address these issues.
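A minimal sketch of the contrast being drawn (the function names and the Int/String effect types are made up for illustration): the first version pins a concrete transformer stack, while the second is mtl-style and leaves the concrete stack, and hence the effect order, to the caller.

    import Control.Monad.State  (MonadState, StateT, get, put)
    import Control.Monad.Except (MonadError, ExceptT, throwError)

    -- Hard-wired stack: the effect order is fixed here, and callers in a
    -- different stack have to lift/hoist around it.
    bumpConcrete :: StateT Int (ExceptT String IO) ()
    bumpConcrete = do
      n <- get
      if n > 9 then throwError "too big" else put (n + 1)

    -- mtl style: any monad providing these effects will do; the concrete
    -- stack (and with it the effect order) is chosen at the use site.
    bumpAbstract :: (MonadState Int m, MonadError String m) => m ()
    bumpAbstract = do
      n <- get
      if n > 9 then throwError "too big" else put (n + 1)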
2025-10-20 11:04:01 +0200	<srazkvt>	I guess because instead of being able to call both functions for the wrapped monad, you need to lift the computations?
2025-10-20 11:02:19 +0200 <davean> How so?
2025-10-20 11:00:54 +0200 <dminuoso> Despite transformers being labeled with terms like "composition of effects", they are the antithesis of compositionality of library code.
2025-10-20 11:00:11 +0200__monty__(~toonn@user/toonn) toonn
2025-10-20 10:59:44 +0200 <dminuoso> Say something like runMaybeT $ do ...
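For instance, a small hedged sketch of such a local use of MaybeT (the function name is made up): the MaybeT layer exists only inside this block, and the overall result is plain IO again.

    import Control.Monad.Trans.Maybe (MaybeT (..), runMaybeT)
    import Text.Read (readMaybe)

    -- Locally short-circuit a block of IO on the first unparsable line; the
    -- MaybeT layer never escapes this function.
    readTwoInts :: IO (Maybe (Int, Int))
    readTwoInts = runMaybeT $ do
      x <- MaybeT (readMaybe <$> getLine)
      y <- MaybeT (readMaybe <$> getLine)
      pure (x, y)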
2025-10-20 10:59:21 +0200 <dminuoso> Except for some local computation tricks.
2025-10-20 10:58:58 +0200 <dminuoso> davean: Apart from ReaderT, I've never really used transformers much for a bunch of reasons.
2025-10-20 10:58:19 +0200	<dminuoso>	tomsmeding: Perhaps. liftIO is just one of the few things whose naming never really clicked for me.
2025-10-20 10:58:11 +0200	<davean>	It isn't how the effects compose though, which gets really important with state and such
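A hedged sketch of that point with State and Except (the stacks and the Identity base are illustrative): the two orderings differ in whether the state survives a failure.

    import Control.Monad.Trans.Except (ExceptT, runExceptT)
    import Control.Monad.Trans.State  (StateT, runStateT)
    import Data.Functor.Identity      (Identity)

    -- StateT over ExceptT: a throw discards the intermediate state;
    -- the Either ends up outermost in the result.
    runA :: StateT Int (ExceptT String Identity) a
         -> Int -> Identity (Either String (a, Int))
    runA m s0 = runExceptT (runStateT m s0)

    -- ExceptT over StateT: a throw still hands back the final state
    -- alongside the Either.
    runB :: ExceptT String (StateT Int Identity) a
         -> Int -> Identity (Either String a, Int)
    runB m s0 = runStateT (runExceptT m) s0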
2025-10-20 10:57:26 +0200	<dminuoso>	davean: Until now I was just focused more on thinking of transformers as a syntactical construct that IO resides in, since that's how I think of how the effects compose.
2025-10-20 10:57:19 +0200 <tomsmeding> and contrary to what davean is saying, I do not think your perspective is wrong, it's just a perspective that mismatches with what I think is the intended intuition behind "lift"
2025-10-20 10:56:59 +0200 <dminuoso> davean: No, this is actually just a tangent I was starting to explore.
2025-10-20 10:56:47 +0200 <tomsmeding> when you run them, you get a computation inside m, yes
2025-10-20 10:56:46 +0200 <davean> dminuoso: which I think is where your confusion is
2025-10-20 10:56:34 +0200 <dminuoso> Ah I guess not.
2025-10-20 10:56:33 +0200 <tomsmeding> StateT s m a ~= s -> m (a, s)
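That is, roughly (paraphrasing the transformers definition; liftStateT is a made-up name for what the MonadTrans instance provides):

    -- the state is an *argument*, so StateT is not just a functor wrapped
    -- around the base monad's result
    newtype StateT s m a = StateT { runStateT :: s -> m (a, s) }

    -- lifting runs the base action and threads the state through untouched
    liftStateT :: Functor m => m a -> StateT s m a
    liftStateT ma = StateT $ \s -> fmap (\a -> (a, s)) ma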
2025-10-20 10:56:23 +0200 <tomsmeding> https://hackage.haskell.org/package/transformers-0.6.1.0/docs/Control-Monad-Trans-State-Strict.htm…
2025-10-20 10:56:16 +0200 <davean> dminuoso: no, no, that is very much NOT what they do
2025-10-20 10:55:43 +0200 <tomsmeding> <monochrom> The best thing about meaningful names is that there are so many meanings to choose from!
2025-10-20 10:55:40 +0200	<dminuoso>	Don't all monad transformers put the base monad on the outside, in the sense that if we have some transformer stack over IO, ultimately we have something like `IO ((M1 :.: M2 :.: ...) a)` (and possibly a lambda outside for Reader)?
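For comparison, a hedged sketch of a small stack over IO after it has been run (the App and Env names are made up): the IO does end up outermost in the result, with the Reader environment showing up as an extra function argument; before running, though, layers like StateT are functions of the state, not a plain composition of functors.

    import Control.Monad.Trans.Except (ExceptT, runExceptT)
    import Control.Monad.Trans.Reader (ReaderT, runReaderT)
    import Control.Monad.Trans.State  (StateT, runStateT)

    data Env = Env   -- placeholder environment

    type App a = ReaderT Env (StateT Int (ExceptT String IO)) a

    -- peel the layers off one by one; the runnable result is an IO value
    runApp :: App a -> Env -> Int -> IO (Either String (a, Int))
    runApp m env s0 = runExceptT (runStateT (runReaderT m env) s0)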
2025-10-20 10:55:33 +0200 <tomsmeding> 47 minutes ago
2025-10-20 10:55:22 +0200 <davean> what did monochrom say?
2025-10-20 10:54:17 +0200 <tomsmeding> it's exactly what monochrom said
2025-10-20 10:54:04 +0200 <tomsmeding> indeed, SomeT IO may well have more logic than IO itself, so also in that sense, it's "lifting" into a more exalted space of SomeT IO computations
2025-10-20 10:53:44 +0200	<davean>	it maps the IO subspace into the SomeT space, and specifically the IO subspace of said
2025-10-20 10:53:07 +0200 <tomsmeding> the sky above is larger than you, so lifting moves it into the larger thing
2025-10-20 10:53:04 +0200 <davean> It can't leave IO
2025-10-20 10:53:00 +0200 <davean> Yah, it NEVER LEAVES IO
2025-10-20 10:52:51 +0200 <tomsmeding> dminuoso: I think of liftIO as lifting "into SomeT", not "out of IO"
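A tiny hedged example of that reading (the name step is made up): liftIO moves the IO action into the StateT Int IO computation; it does not take anything out of IO.

    import Control.Monad.IO.Class    (liftIO)
    import Control.Monad.Trans.State (StateT, execStateT, modify)

    step :: StateT Int IO ()
    step = do
      liftIO (putStrLn "foo")   -- lifted *into* StateT Int IO
      modify (+ 1)

    -- e.g. execStateT step 0 prints "foo" and returns 1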
2025-10-20 10:52:48 +0200	<davean>	Right, that projects from the IO space to the IO subspace of SomeT IO
2025-10-20 10:52:01 +0200 <dminuoso> Well I meant `liftIO (putStrLn "foo")` of course.
2025-10-20 10:51:41 +0200	<davean>	No, putStrLn is already an object in IO, it has no other existence
2025-10-20 10:51:23 +0200 <dminuoso> This may just be the difference between operational and semantic thinking.
2025-10-20 10:50:05 +0200merijn(~merijn@77.242.116.146) (Ping timeout: 256 seconds)
2025-10-20 10:49:26 +0200 <dminuoso> Rather than pulling it out.
2025-10-20 10:49:18 +0200	<dminuoso>	If we take a given IO action, say `putStrLn "Hello world"`, then it's the action of putting that core inside layers and layers until we have a matching sphere.
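That wrapping picture can be written out directly (a hedged sketch; hello and hello2 are made-up names, and the real lift instances are morally these wrappings):

    import Control.Monad.Trans.Except (ExceptT (..))
    import Control.Monad.Trans.Maybe  (MaybeT (..))

    -- one layer around the IO core ...
    hello :: MaybeT IO ()
    hello = MaybeT (fmap Just (putStrLn "Hello world"))

    -- ... and another layer around that
    hello2 :: ExceptT String (MaybeT IO) ()
    hello2 = ExceptT (fmap Right hello)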
2025-10-20 10:49:07 +0200	<davean>	It *is* the inner core, it's not that we choose to think about it that way, it is literally enclosed by the outer layers
2025-10-20 10:48:46 +0200	<dminuoso>	davean: Sure, and in that model wouldn't we think of IO as the inner core?