2025/10/30

Newest at the top

2025-10-30 22:00:16 +0100 <omentic> well, not building pandoc
2025-10-30 21:59:47 +0100 <davean> caching? What caching do you want that cabal doesn't default to doing?
2025-10-30 21:59:06 +0100 <omentic> oh -- actually before i head out, do you know: is it possible to get some sort of caching working with cabal (or stack) if i've got a package that's not on stackage?
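A sketch of the usual ways to get cached builds for a dependency that isn't in the Stackage snapshot (package names, versions, and URLs below are illustrative):

    # stack.yaml: pin it as an extra-dep; stack builds it once and caches it
    # like any other snapshot package
    extra-deps:
      - some-package-1.2.3
      # or straight from git:
      # - git: https://example.com/user/some-package
      #   commit: 0123456789abcdef0123456789abcdef01234567

    -- cabal.project: Hackage dependencies are cached in the store (~/.cabal/store
    -- or the XDG equivalent) whether or not they are on Stackage; a git dependency
    -- needs a stanza like this and is built once per tag:
    source-repository-package
        type: git
        location: https://example.com/user/some-package
        tag: 0123456789abcdef0123456789abcdef01234567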
2025-10-30 21:58:29 +0100 <omentic> okay i will deal with this later, thanks for the pointers
2025-10-30 21:58:21 +0100 <omentic> urgh
2025-10-30 21:56:51 +0100 <geekosaur> I don't normally see things building in parallel with that situation (but I also don't run into it much, I have enough cores that I set jobs: fairly high)
2025-10-30 21:56:48 +0100 <omentic> looks like jobs: $ncpus in my global cabal config but yeah i've got a line here locally
2025-10-30 21:56:18 +0100 <geekosaur> oh wait, you said jobs: 1, sorry
2025-10-30 21:56:07 +0100 <geekosaur> check your cabal configuration
2025-10-30 21:55:50 +0100 <geekosaur> if job semaphores are supported, all the ghcs will "check out" threads as available
2025-10-30 21:55:46 +0100 <omentic> hm. okay. with ghc-options containing -threaded -j2 -rtsopts -with-rtsopts=-N2 and jobs: 1, i still see cabal building five or six things in parallel and all my cores spinning up...
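For reference, a sketch of where each knob lives (values illustrative; recent cabal-install assumed): cabal's jobs: controls how many packages it builds at once, GHC's -j controls how many modules one GHC invocation compiles at once, and -threaded/-rtsopts/-with-rtsopts=-N2 only configure the runtime of the executable being built, not GHC itself.

    -- ~/.config/cabal/config (global) or cabal.project.local
    jobs: 1                    -- packages cabal builds concurrently

    -- cabal.project.local
    package *
        ghc-options: -j2       -- modules compiled concurrently per GHC invocation
                               -- (-with-rtsopts=-N2 here only affects the built
                               -- binary at run time, not the compile itself)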
2025-10-30 21:55:18 +0100 <geekosaur> but then you need to arrange for each dependency to use ghc-options
2025-10-30 21:54:05 +0100 <geekosaur> building multiple dependencies concurrently
2025-10-30 21:54:03 +0100 <omentic> legacy behavior? or is there an improvement wrt. linking or something
2025-10-30 21:53:45 +0100 <omentic> why would you want cabal to use more than one thread if ghc can spawn multiple threads by itself?
2025-10-30 21:53:16 +0100 <geekosaur> with job semaphores the number of concurrently running ghc threads will be limited across the entire build
2025-10-30 21:52:49 +0100 <geekosaur> because cabal will spawn as many ghcs as it's allowed to run threads, each of which will use the number of threads you tell it
2025-10-30 21:52:33 +0100 <omentic> uhhhh, oh that is good to know
2025-10-30 21:52:10 +0100 <geekosaur> something else to keep in mind is that limiting GHC's threads will only work if you either limit cabal to one thread or use ghc semaphores (in sufficiently recent ghc)
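A sketch of the semaphore route mentioned here, assuming cabal-install 3.12+ and GHC 9.8+ (check cabal build --help for the exact spelling on your versions):

    # one shared job pool for cabal and every GHC it spawns: cabal hands GHC a
    # -jsem handle so the total number of concurrent compile jobs stays within 8
    cabal build -j8 --semaphore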
2025-10-30 21:50:38 +0100 <omentic> mmm. neither the no. of parallel modules nor the no. of threads in use appears to be limited by -j 2 or -j2. weird. maybe -with-rtsopts=-N is overwriting that?
2025-10-30 21:48:38 +0100 <omentic> hmm, and -j 2 in ghc-options should limit the maximum no. of threads to 2?
2025-10-30 21:42:25 +0100 <monochrom> Telling cabal -N does not limit its -j
2025-10-30 21:41:11 +0100 <haskellbridge> <sm> +RTS -N is essentially a suggestion of how many cores the program should use, IIRC. But for cabal commands, -j is easier
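To illustrate the difference (myprog is a placeholder for your built executable):

    # -j limits how many packages cabal itself builds in parallel
    cabal build -j2

    # +RTS -N sets how many capabilities a threaded Haskell program uses; it
    # applies when that program runs, e.g.:
    ./myprog +RTS -N2 -RTS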
2025-10-30 21:40:28 +0100 <haskellbridge> <sm> you should definitely check, with [h]top
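For example, one non-interactive way to watch the GHC processes' CPU and memory during a build (htop works interactively too):

    # refresh every 2s, GHC processes sorted by memory; "[g]hc" keeps grep from matching itself
    watch -n 2 'ps -eo pid,pcpu,pmem,comm --sort=-pmem | grep -i "[g]hc"'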
2025-10-30 21:40:28 +0100 <omentic> if it is using up a bunch of memory and swapping a lot...
2025-10-30 21:40:10 +0100 <omentic> i do kind of wonder if fucked-up swap is contributing to only seeing this with cabal
2025-10-30 21:39:34 +0100 <omentic> what is the -N argument to RTS opts? this page references it, but doesn't define it: https://ghc.gitlab.haskell.org/ghc/doc/users_guide/runtime_control.html
2025-10-30 21:38:56 +0100 <haskellbridge> <sm> avoid swapping
2025-10-30 21:38:43 +0100 <haskellbridge> <sm> also does swapping contribute to cpu activity / heat ? building haskell is probably using more memory
2025-10-30 21:37:08 +0100 <omentic> monochrom: re: memory, gulp
2025-10-30 21:37:03 +0100 <haskellbridge> <sm> by the way I'll guess those other build systems also use multiple processors, but probably they finish sooner
2025-10-30 21:36:50 +0100 <monochrom> You can always run it in VirtualBox. VirtualBox can lie about # of CPUs and speed.
2025-10-30 21:35:48 +0100 <monochrom> nice prevents it using 100% iff something else of higher priority uses 100%. >:)
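Putting the suggestions together, a common way to keep a build from monopolizing the machine (flags illustrative):

    # low CPU priority plus a modest cap on cabal's parallelism
    nice -n 19 cabal build -j2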