Newest at the top
| 2026-04-02 19:39:37 +0200 | alter2000 | (~alter2000@user/alter2000) alter2000 |
| 2026-04-02 19:37:31 +0200 | <lambdabot> | https://hackage.haskell.org/package/accelerate |
| 2026-04-02 19:37:31 +0200 | <tomsmeding> | @package accelerate |
| 2026-04-02 19:37:10 +0200 | <tomsmeding> | Accelerate 1.4 is a thing! |
| 2026-04-02 19:34:29 +0200 | jmcantrell_ | (~weechat@user/jmcantrell) (Ping timeout: 252 seconds) |
| 2026-04-02 19:32:04 +0200 | arandombit | (~arandombi@user/arandombit) arandombit |
| 2026-04-02 19:32:04 +0200 | arandombit | (~arandombi@2a02:2455:8656:7100:a19d:edd6:ba09:3a91) (Changing host) |
| 2026-04-02 19:32:04 +0200 | arandombit | (~arandombi@2a02:2455:8656:7100:a19d:edd6:ba09:3a91) |
| 2026-04-02 19:31:26 +0200 | alter2000 | (~alter2000@user/alter2000) (Ping timeout: 256 seconds) |
| 2026-04-02 19:26:04 +0200 | mistivia | (~mistivia@user/mistivia) mistivia |
| 2026-04-02 19:16:22 +0200 | arandombit | (~arandombi@user/arandombit) (Ping timeout: 276 seconds) |
| 2026-04-02 19:15:45 +0200 | alter2000 | (~alter2000@user/alter2000) alter2000 |
| 2026-04-02 19:04:41 +0200 | Square2 | (~Square4@user/square) (Ping timeout: 248 seconds) |
| 2026-04-02 19:03:55 +0200 | alter2000 | (~alter2000@user/alter2000) (Ping timeout: 264 seconds) |
| 2026-04-02 19:02:18 +0200 | Square3 | (~Square@user/square) Square |
| 2026-04-02 19:01:24 +0200 | arandombit | (~arandombi@user/arandombit) arandombit |
| 2026-04-02 19:01:24 +0200 | arandombit | (~arandombi@2a02:2455:8656:7100:a19d:edd6:ba09:3a91) (Changing host) |
| 2026-04-02 19:01:24 +0200 | arandombit | (~arandombi@2a02:2455:8656:7100:a19d:edd6:ba09:3a91) |
| 2026-04-02 18:53:06 +0200 | <tomsmeding> | is there a way to see the hackage build queue? (i.e. not just build reports for a single package, but something that gives an indication of how long it's going to take) |
| 2026-04-02 18:52:27 +0200 | jmcantrell_ | (~weechat@user/jmcantrell) jmcantrell |
| 2026-04-02 18:50:02 +0200 | vetkat | (~vetkat@user/vetkat) vetkat |
| 2026-04-02 18:47:41 +0200 | vetkat | (~vetkat@user/vetkat) (Read error: Connection reset by peer) |
| 2026-04-02 18:33:54 +0200 | arandombit | (~arandombi@user/arandombit) (Ping timeout: 248 seconds) |
| 2026-04-02 18:25:26 +0200 | ss4 | (~wootehfoo@user/wootehfoot) (Read error: Connection reset by peer) |
| 2026-04-02 18:24:18 +0200 | sp1ff | (~user@2601:1c2:4080:14c0::ace8) (Read error: Connection reset by peer) |
| 2026-04-02 18:22:12 +0200 | califax | (~califax@user/califx) califx |
| 2026-04-02 18:21:58 +0200 | califax | (~califax@user/califx) (Quit: ZNC 1.10.1 - https://znc.in) |
| 2026-04-02 18:16:25 +0200 | comerijn | (~merijn@77.242.116.146) (Ping timeout: 245 seconds) |
| 2026-04-02 18:03:55 +0200 | EvanR | (~EvanR@user/evanr) EvanR |
| 2026-04-02 18:02:04 +0200 | gmg | (~user@user/gehmehgeh) gehmehgeh |
| 2026-04-02 17:58:19 +0200 | <alter2000> | ooh yea that's the one I was thinking about, misremembered the conduit integration |
| 2026-04-02 17:57:36 +0200 | <alter2000> | (practical library yap aside I don't see any way to handle it besides a sort of `chunksOf <threads>`) |
| 2026-04-02 17:57:09 +0200 | <geekosaur> | `async-pool` is a thing |
| 2026-04-02 17:56:52 +0200 | <alter2000> | iirc Conduit had a useful adaptation of `async` that let you build a thread pool sink that consumed chunks of tasks |
| 2026-04-02 17:55:44 +0200 | <alter2000> | bwe: would it make sense to `S.mapM (Concurrently {- or whatever concurrency primitive type you feel like -}) >>> S.foldM runConcurrently`, or do none of the existing async libraries allow capping out on the number of concurrent coroutines? |
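The capped-concurrency idea discussed above can be sketched with only `base`: guard each action with a `QSem` of n slots so at most n run at once, collecting results through one `MVar` per element. This is a minimal illustration, not what `async-pool` or Conduit actually do; the names `boundedMapIO` etc. are made up for the example.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Concurrent.QSem (newQSem, signalQSem, waitQSem)
import Control.Exception (bracket_)
import Control.Monad (forM)

-- Run f over xs with at most n actions in flight at once.
-- Each worker acquires a semaphore slot, runs, and releases it;
-- results come back in input order via one MVar per element.
boundedMapIO :: Int -> (a -> IO b) -> [a] -> IO [b]
boundedMapIO n f xs = do
  sem <- newQSem n
  vars <- forM xs $ \x -> do
    v <- newEmptyMVar
    _ <- forkIO $ bracket_ (waitQSem sem) (signalQSem sem) (f x >>= putMVar v)
    pure v
  mapM takeMVar vars
```

Note the sketch has no exception story: a worker that throws leaves its `MVar` empty and `takeMVar` blocks forever, which is exactly why `async`'s `mapConcurrently` (or `async-pool`) is preferable in real code.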
| 2026-04-02 17:44:53 +0200 | somemathguy | (~somemathg@user/somemathguy) somemathguy |
| 2026-04-02 17:37:20 +0200 | pavonia | (~user@user/siracusa) (Quit: Bye!) |
| 2026-04-02 17:33:17 +0200 | <mauke> | the difference between processing a list and a list is |
| 2026-04-02 17:33:05 +0200 | <mauke> | huh? |
| 2026-04-02 17:32:10 +0200 | <bwe> | mauke: then again, how is processing a list concurrently without streams different than with list? |
| 2026-04-02 17:31:39 +0200 | <bwe> | mauke: well, I could do a chunksOf 5 and fork a thread for that, for example. |
| 2026-04-02 17:30:35 +0200 | <Vq> | gentauro: Guard patterns and long expressions can trigger stylish-haskell to give up on alignment, but doing it by hand in those cases works for me. |
| 2026-04-02 17:30:22 +0200 | <mauke> | bwe: the former is impossible because the rest of the stream doesn't exist unless you run the embedded action first |
| 2026-04-02 17:29:50 +0200 | <mauke> | bwe: the latter can be parallelized by collecting the values in a list and mapping over that, or by having your 'f' fork off a new thread for each element |
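bwe's "chunksOf 5 and fork a thread for that" variant of mauke's suggestion can be sketched in plain `base` like this (a hand-rolled `chunksOf` stands in for the one from the `split` package; `mapChunked` is a hypothetical name for the example):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- hand-rolled version of Data.List.Split.chunksOf
chunksOf :: Int -> [a] -> [[a]]
chunksOf _ [] = []
chunksOf n xs = let (h, t) = splitAt n xs in h : chunksOf n t

-- Fork one worker per chunk; each worker maps f over its chunk
-- sequentially and reports the chunk's results through an MVar.
mapChunked :: Int -> (a -> IO b) -> [a] -> IO [b]
mapChunked n f xs = do
  vars <- mapM spawn (chunksOf n xs)
  concat <$> mapM takeMVar vars
  where
    spawn chunk = do
      v <- newEmptyMVar
      _ <- forkIO (mapM f chunk >>= putMVar v)
      pure v
```

This caps the thread count at `length xs / n` rather than capping in-flight work, so one slow chunk can leave the other threads idle; a semaphore-bounded map or a real pool avoids that imbalance.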
| 2026-04-02 17:29:45 +0200 | <Vq> | Maybe I subconsciously regard pretty Python to be a lost cause. :o) |
| 2026-04-02 17:29:27 +0200 | <gentauro> | Vq: when doing `case … of` all your cases get aligned with a nice uniform separation <3 |
| 2026-04-02 17:28:35 +0200 | <Vq> | When I write Python I quite like having the formatter black just strictly format everything, but with Haskell I prefer the amount of artistic freedom stylish-haskell gives me. |
| 2026-04-02 17:28:26 +0200 | <mauke> | bwe: the way I see it, there are two kinds of monadic actions involved. one is the actions "embedded" in the stream; the other kind is the ones returned by your 'f' |
| 2026-04-02 17:28:25 +0200 | <gentauro> | :-\ |
| 2026-04-02 17:28:21 +0200 | <gentauro> | for good or for bad, `elm-format` is very rigid in the sense that "Evans knows best, take it or leave it". The good thing about that approach is that all codebases using the tool look the same. And the bad thing is that people who don't like it just don't use the tool |
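For context on the alignment gentauro is praising, this is roughly the shape stylish-haskell produces for `case … of`: arrows padded to a common column (the function here is invented purely for illustration):

```haskell
-- all alternatives share one arrow column, the "nice uniform
-- separation" mentioned above
describe :: Maybe Int -> String
describe mx = case mx of
  Nothing -> "nothing"
  Just 0  -> "zero"
  Just _  -> "something"
```

As Vq notes, guard patterns and long right-hand sides can make the formatter abandon this alignment, at which point it has to be maintained by hand.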