2026/04/02

Newest at the top

2026-04-02 18:33:54 +0200 arandombit (~arandombi@user/arandombit) (Ping timeout: 248 seconds)
2026-04-02 18:25:26 +0200 ss4 (~wootehfoo@user/wootehfoot) (Read error: Connection reset by peer)
2026-04-02 18:24:18 +0200 sp1ff (~user@2601:1c2:4080:14c0::ace8) (Read error: Connection reset by peer)
2026-04-02 18:22:12 +0200 califax (~califax@user/califx) califx
2026-04-02 18:21:58 +0200 califax (~califax@user/califx) (Quit: ZNC 1.10.1 - https://znc.in)
2026-04-02 18:16:25 +0200 comerijn (~merijn@77.242.116.146) (Ping timeout: 245 seconds)
2026-04-02 18:03:55 +0200 EvanR (~EvanR@user/evanr) EvanR
2026-04-02 18:02:04 +0200 gmg (~user@user/gehmehgeh) gehmehgeh
2026-04-02 17:58:19 +0200 <alter2000> ooh yea that's the one I was thinking about, misremembered the conduit integration
2026-04-02 17:57:36 +0200 <alter2000> (practical library yap aside I don't see any way to handle it besides a sort of `chunksOf <threads>`)
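[editor's note] The `chunksOf <threads>` idea above can be sketched with nothing but base: split the work into chunks of n, run each chunk's actions in their own threads, and wait for the whole chunk before starting the next. `mapMChunked_` is a name made up here, and `chunksOf` is redefined locally so the sketch does not depend on the `split` package.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Exception (finally)

-- Local stand-in for Data.List.Split.chunksOf (from the `split` package).
chunksOf :: Int -> [a] -> [[a]]
chunksOf n = takeWhile (not . null) . map (take n) . iterate (drop n)

-- Run one chunk's actions concurrently, then wait for the whole chunk
-- before moving on; at most n worker threads are live at any moment.
mapMChunked_ :: Int -> (a -> IO ()) -> [a] -> IO ()
mapMChunked_ n f = mapM_ runChunk . chunksOf n
  where
    runChunk xs = do
      joins <- mapM (\x -> do
        done <- newEmptyMVar
        _ <- forkIO (f x `finally` putMVar done ())
        pure done) xs
      mapM_ takeMVar joins
```

The drawback alter2000 hints at: a slow element stalls its whole chunk, since the next chunk only starts once every thread in the current one has finished.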
2026-04-02 17:57:09 +0200 <geekosaur> `async-pool` is a thing
2026-04-02 17:56:52 +0200 <alter2000> iirc Conduit had a useful adaptation of `async` that let you build a thread pool sink that consumed chunks of tasks
2026-04-02 17:55:44 +0200 <alter2000> bwe: would it make sense to `S.mapM (Concurrently {- or whatever concurrency primitive type you feel like -}) >>> S.foldM runConcurrently`, or do none of the existing async libraries allow capping out on the number of concurrent coroutines?
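[editor's note] On the question above about capping the number of concurrent workers: even without a dedicated library, base's `Control.Concurrent.QSem` can bound concurrency. A sketch (the name `mapMCapped_` is invented here); every element still gets its own thread, but only n of them run past the semaphore at once.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Concurrent.QSem
import Control.Exception (bracket_, finally)

-- Run f on every element, with at most n actions executing
-- simultaneously; waits for all of them before returning.
mapMCapped_ :: Int -> (a -> IO ()) -> [a] -> IO ()
mapMCapped_ n f xs = do
  sem <- newQSem n
  joins <- mapM (\x -> do
    done <- newEmptyMVar
    _ <- forkIO $
      bracket_ (waitQSem sem) (signalQSem sem) (f x)
        `finally` putMVar done ()
    pure done) xs
  mapM_ takeMVar joins
```

`bracket_` guarantees the semaphore is released even if `f x` throws, so a failing element cannot starve the pool.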
2026-04-02 17:44:53 +0200 somemathguy (~somemathg@user/somemathguy) somemathguy
2026-04-02 17:37:20 +0200 pavonia (~user@user/siracusa) (Quit: Bye!)
2026-04-02 17:33:17 +0200 <mauke> the difference between processing a list and a list is
2026-04-02 17:33:05 +0200 <mauke> huh?
2026-04-02 17:32:10 +0200 <bwe> mauke: then again, how is processing a list concurrently without streams different than with list?
2026-04-02 17:31:39 +0200 <bwe> mauke: well, I could do a chunksOf 5 and fork a thread for that, for example.
2026-04-02 17:30:35 +0200 <Vq> gentauro: Guard patterns and long expressions can trigger stylish-haskell to give up on alignment, but doing it by hand in those cases works for me.
2026-04-02 17:30:22 +0200 <mauke> bwe: the former is impossible because the rest of the stream doesn't exist unless you run the embedded action first
2026-04-02 17:29:50 +0200 <mauke> bwe: the latter can be parallelized by collecting the values in a list and mapping over that, or by having your 'f' fork off a new thread for each element
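[editor's note] mauke's second option (fork a new thread per element) can be sketched with base alone; `forkEach` is a name invented here. To use it with the streaming package you would first collect the elements, e.g. with `S.toList_`, and then feed the resulting list to it.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Exception (finally)

-- Fork one thread per element and block until every one has
-- signalled completion via its MVar.
forkEach :: (a -> IO ()) -> [a] -> IO ()
forkEach f xs = do
  joins <- mapM (\x -> do
    done <- newEmptyMVar
    _ <- forkIO (f x `finally` putMVar done ())
    pure done) xs
  mapM_ takeMVar joins
```

This is unbounded concurrency: fine for a handful of elements, but for large inputs you would want to cap the thread count.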
2026-04-02 17:29:45 +0200 <Vq> Maybe I subconsciously regard pretty Python to be a lost cause. :o)
2026-04-02 17:29:27 +0200 <gentauro> Vq: when doing `case … of` all your cases get aligned with a nice uniform separation <3
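[editor's note] For readers unfamiliar with the alignment being praised above, this is a hand-written illustration of the aligned `case … of` style (not actual stylish-haskell output):

```haskell
-- Alternatives aligned so the arrows form a column.
describe :: Maybe Int -> String
describe m = case m of
  Nothing -> "nothing"
  Just 0  -> "zero"
  Just _  -> "something"
```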
2026-04-02 17:28:35 +0200 <Vq> When I write Python I quite like having the formatter black just strictly format everything, but with Haskell I prefer the amount of artistic freedom stylish-haskell gives me.
2026-04-02 17:28:26 +0200 <mauke> bwe: the way I see it, there are two kinds of monadic actions involved. one is the actions "embedded" in the stream; the other kind is the ones returned by your 'f'
2026-04-02 17:28:25 +0200 <gentauro> :-\
2026-04-02 17:28:21 +0200 <gentauro> for good or for bad, `elm-format` is very rigid, in the sense of "Evan knows best, take it or leave it". The good thing about that approach is that all codebases using the tool look the same. The bad thing is that people who don't like it just don't use the tool
2026-04-02 17:27:12 +0200 <gentauro> Vq: so do I. However, from time to time the project changes, and then the code becomes "ugly" until you tweak it to your liking
2026-04-02 17:26:53 +0200 <mauke> bwe: depends on what you mean by that
2026-04-02 17:26:19 +0200 <Vq> I know it's not universal, but I like the alignment behaviour of stylish-haskell.
2026-04-02 17:25:39 +0200 <gentauro> stylish-haskell is pretty, pretty, nice
2026-04-02 17:25:14 +0200 * Vq does use stylish-haskell
2026-04-02 17:24:57 +0200 * gentauro a trick to avoid a compile check is to hook up `stylish-haskell` on save. If the code doesn't format nicely, then there is something that does not compile ;)
2026-04-02 17:24:50 +0200 <bwe> Currently I am doing `S.mapM_ f s` over some stream `s`. While this processes the stream sequentially, how do I process the stream concurrently (instead of using `S.mapM_`)? https://hackage.haskell.org/package/streaming-0.2.4.0/docs/Streaming-Prelude.html
2026-04-02 17:24:00 +0200 <gentauro> Vq: it will save you a few compiles ;)
2026-04-02 17:23:17 +0200 <Vq> I generally use hoogle as well and the only autocompletion thing I have is hippie-expand. I think I need to try LSP out though.
2026-04-02 17:21:17 +0200 <gentauro> I don't really use the intellisense. I'm kind of used to Hoogle
2026-04-02 17:20:52 +0200 <gentauro> Vq: kind of. It gives you hints on refactoring and so on. However, if you use `length` you will get annoyed by -> `Name: Infinite: ghc-internal/length`. It's OK I guess
2026-04-02 17:19:06 +0200 ft (~ft@p508db341.dip0.t-ipconnect.de) ft
2026-04-02 17:17:54 +0200 FirefoxDeHuk (~FirefoxDe@user/FirefoxDeHuk) (Client Quit)
2026-04-02 17:17:04 +0200 FirefoxDeHuk (~FirefoxDe@user/FirefoxDeHuk) FirefoxDeHuk
2026-04-02 17:16:44 +0200 wennefer0 (~wennefer0@user/wennefer0) wennefer0
2026-04-02 17:09:54 +0200 somemathguy (~somemathg@user/somemathguy) (Quit: WeeChat 4.1.1)
2026-04-02 17:07:05 +0200 <Vq> gentauro: I haven't started using LSP for any language yet. Does it work well for Haskell?
2026-04-02 17:05:59 +0200 jmcantrell_ (~weechat@user/jmcantrell) (Ping timeout: 252 seconds)
2026-04-02 16:58:16 +0200 lisbeths (uid135845@id-135845.lymington.irccloud.com) (Quit: Connection closed for inactivity)
2026-04-02 16:51:57 +0200 acidjnk_new3 (~acidjnk@p200300d6e700e5001e1160b7d23e5dd6.dip0.t-ipconnect.de) acidjnk
2026-04-02 16:49:15 +0200 rainbyte (~rainbyte@181.47.219.3) (Ping timeout: 246 seconds)
2026-04-02 16:44:38 +0200 humasect (~humasect@dyn-192-249-132-90.nexicom.net) humasect