2025/03/28

Newest at the top

2025-03-28 22:42:27 +0100 L29Ah(~L29Ah@wikipedia/L29Ah) L29Ah
2025-03-28 22:40:39 +0100 malte(~malte@mal.tc) (Remote host closed the connection)
2025-03-28 22:29:59 +0100 wootehfoot(~wootehfoo@user/wootehfoot) (Read error: Connection reset by peer)
2025-03-28 22:23:59 +0100 malte(~malte@mal.tc) malte
2025-03-28 22:23:54 +0100 nitrix(~nitrix@user/meow/nitrix) nitrix
2025-03-28 22:21:58 +0100 nitrix(~nitrix@user/meow/nitrix) (Quit: ZNC 1.9.1 - https://znc.in)
2025-03-28 22:21:14 +0100 bsima(~bsima@2604:a880:400:d0::19f1:7001) bsima
2025-03-28 22:20:28 +0100 bsima(~bsima@143.198.118.179) (Quit: ZNC 1.8.2 - https://znc.in)
2025-03-28 22:20:27 +0100 malte(~malte@mal.tc) (Remote host closed the connection)
2025-03-28 22:15:01 +0100 Unicorn_Princess(~Unicorn_P@user/Unicorn-Princess/x-3540542) Unicorn_Princess
2025-03-28 22:11:35 +0100 <merijn> Considering the machine running the code had, like, 192 GB RAM :p
2025-03-28 22:11:10 +0100 <merijn> EvanR: I mean, the raw data was a database of, like, 8 GB and 2 billion rows. That had a bunch of joins blowing up the data queried even more, then doing a full scan of that data. 10s of megabytes was much less than I was prepared to use :p
2025-03-28 22:09:44 +0100 <EvanR> otoh that would exhaust the first hard drive I had, much less the ram that went with it
2025-03-28 22:09:20 +0100 <EvanR> merijn, tens of megabytes sounds pretty good practically speaking
2025-03-28 22:06:29 +0100 malte(~malte@mal.tc) malte
2025-03-28 22:04:56 +0100 <haskellbridge> <Liamzee> erm, not purview, review
2025-03-28 22:03:34 +0100 <haskellbridge> <Liamzee> going full embedded with LH, at this time, is probably not a good idea, but developing wrapped linear Haskell libraries is probably a good transition to build up skills within the community and build up support for the LinearTypes extension
2025-03-28 22:03:28 +0100 AlexZenon(~alzenon@178.34.150.194)
2025-03-28 22:02:53 +0100 <haskellbridge> <Liamzee> yeah, i saw, I recently did a purview of the LH ecosystem :)
2025-03-28 22:02:29 +0100 AlexZenon(~alzenon@178.34.150.194) (Ping timeout: 252 seconds)
2025-03-28 22:02:24 +0100 kimiamania(~65804703@user/kimiamania) kimiamania
2025-03-28 22:02:04 +0100 <tomsmeding> Liamzee: https://hackage.haskell.org/package/text-builder-linear for example
2025-03-28 22:02:01 +0100 kimiamania(~65804703@user/kimiamania) (Quit: PegeLinux)
2025-03-28 22:01:48 +0100 <merijn> EvanR: It was a program streaming a few GB worth of data from SQLite via conduit, it actually worked remarkably well in keeping low RES memory for the data size
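A minimal sketch of the streaming shape merijn describes above, not his actual program: the SQLite-via-conduit source is replaced by a made-up stand-in producer (rowSource). The point is that rows flow through the pipeline one at a time, so resident memory stays small regardless of how many rows are processed.

```haskell
import Conduit

-- hypothetical stand-in for a row-by-row "SELECT ..." source
rowSource :: Monad m => ConduitT i Int m ()
rowSource = yieldMany [1 .. 10000000 :: Int]

main :: IO ()
main = do
  -- each row is consumed as it is produced; nothing accumulates but the fold state
  total <- runConduit $ rowSource .| foldlC (+) 0
  print total
```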
2025-03-28 22:01:42 +0100 <haskellbridge> <Liamzee> I guess, I'll ask directly, is there something wrong with using LinearTypes as something wrapped over by non-linear Haskell to provide a low-cost performance gain without needing FFI?
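A toy sketch of the "wrapped linear library" idea discussed above. All names are made up for illustration; a real library such as text-builder-linear (linked above) does the same thing with an efficient mutable buffer instead of a String. The builder handle is threaded linearly inside the module, but the exported function has an ordinary type, so callers never enable LinearTypes.

```haskell
{-# LANGUAGE GADTs, LinearTypes #-}

module WrappedBuilder (render) where

-- Internal builder. Declaring the field with GADT syntax and a plain (->)
-- keeps the payload unrestricted; only the Builder handle is linear.
data Builder where
  Builder :: String -> Builder

emit :: Builder %1 -> Char -> Builder
emit (Builder s) c = Builder (s ++ [c])

finish :: Builder %1 -> String
finish (Builder s) = s

-- Public entry point with a completely ordinary type: callers get whatever
-- the linear internals provide without ever seeing a linear arrow.
render :: String -> String
render cs = finish (go (Builder "") cs)
  where
    go :: Builder %1 -> String -> Builder
    go b []       = b
    go b (x : xs) = go (emit b x) xs
```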
2025-03-28 22:00:15 +0100 <tomsmeding> so if you're using a State monad that you would like to do mutable updates with, it's actually very easy to achieve that now
2025-03-28 22:00:14 +0100 <EvanR> napkin math says that implies you "run out of resident memory" 400 times a second
2025-03-28 21:59:42 +0100 <haskellbridge> <Liamzee> one cute trick you can do with LinearTypes that I haven't seen much is using LinearTypes ;)
2025-03-28 21:59:38 +0100 <tomsmeding> you get `modify :: (s %1-> s) -> State s a -> State s a`, and it otherwise works essentially exactly like the normal state monad
2025-03-28 21:59:14 +0100 <tomsmeding> one cute trick you can do with LinearTypes that I haven't seen much is `newtype State s a = State (s %1-> (Ur a, s))`
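Spelling out the two types tomsmeding quotes above as a compilable sketch. Ur comes from linear-base; runState and onSnd are illustrative additions, not a published API.

```haskell
{-# LANGUAGE LinearTypes #-}

import Data.Unrestricted.Linear (Ur)  -- from linear-base

newtype State s a = State (s %1 -> (Ur a, s))

runState :: State s a -> s %1 -> (Ur a, s)
runState (State f) = f

-- apply an update to the state component of a result pair
onSnd :: (s %1 -> s) -> (Ur a, s) %1 -> (Ur a, s)
onSnd g (a, s) = (a, g s)

-- the modify quoted above: because the update function is linear in s,
-- a state such as a mutable array could be updated in place
modify :: (s %1 -> s) -> State s a -> State s a
modify g (State f) = State (\s -> onSnd g (f s))
```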
2025-03-28 21:58:42 +0100 <merijn> Liamzee: FWIW, I've had Haskell programs with allocation rates of over 4 Gb/s (and this is the average over a duration of 2-3 minutes) that never exceeded more than a few 10s of MB of resident memory
2025-03-28 21:57:45 +0100 jespada(~jespada@2800:a4:231e:8900:903f:fbe:20bf:5608) (Client Quit)
2025-03-28 21:57:45 +0100 jespada(~jespada@2800:a4:231e:8900:903f:fbe:20bf:5608) jespada
2025-03-28 21:57:31 +0100 <EvanR> allocation is fast and easy
2025-03-28 21:57:29 +0100 <merijn> Maybe 1 or 2 instructions for an atomic CAS?
2025-03-28 21:57:18 +0100 <merijn> It's basically "increment a number" and a comparison to check if you exceeded the heap
2025-03-28 21:56:07 +0100 <merijn> Like, I'd be surprised if allocating memory took more than 5 instructions in GHC
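A toy model of the allocation path merijn describes above, not GHC's real allocator (which lives in the RTS and in generated code): allocating from the current nursery block is "bump the heap pointer, compare it against the block limit", which is why the common case is only a handful of instructions.

```haskell
import Data.IORef (IORef, readIORef, writeIORef)
import Foreign.Ptr (Ptr, plusPtr)

data Nursery = Nursery
  { heapPtr :: IORef (Ptr ())  -- next free address in the current block
  , heapLim :: Ptr ()          -- end of the current block
  }

allocate :: Nursery -> Int -> IO (Maybe (Ptr ()))
allocate nursery bytes = do
  p <- readIORef (heapPtr nursery)
  let p' = p `plusPtr` bytes
  if p' <= heapLim nursery
    then do
      writeIORef (heapPtr nursery) p'  -- the "increment a number" step
      pure (Just p)
    else
      pure Nothing  -- block exhausted; the real RTS grabs a new block or runs a GC
```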
2025-03-28 21:55:32 +0100 jespada(~jespada@2800:a4:231e:8900:903f:fbe:20bf:5608) (Quit: My Mac has gone to sleep. ZZZzzz…)
2025-03-28 21:55:24 +0100 <merijn> EvanR: In place updates when reference count == 1
2025-03-28 21:55:15 +0100 <merijn> Liamzee: FYI, GHC's allocator is *stupidly* fast, though
2025-03-28 21:55:10 +0100 <EvanR> "automatic mutation" ?
2025-03-28 21:51:48 +0100 <haskellbridge> <Liamzee> which is a feature that some people want
2025-03-28 21:51:42 +0100 <haskellbridge> <Liamzee> i'm just being conservative because i was told that there's no automatic mutation implied by linear haskell
2025-03-28 21:50:15 +0100 <haskellbridge> <Liamzee> Web type?
2025-03-28 21:49:57 +0100 L29Ah(~L29Ah@wikipedia/L29Ah) (Ping timeout: 248 seconds)
2025-03-28 21:45:34 +0100 <EvanR> it just sounds like you're conflating "no GC memory management" with linear types
2025-03-28 21:44:43 +0100 <EvanR> that's why Ur/Web waits until the end of the processing program to discard the entire block of memory
2025-03-28 21:44:09 +0100 <EvanR> calling into a memory management system to "free" stuff constantly sounds like overhead of its own
2025-03-28 21:43:56 +0100 <haskellbridge> <Liamzee> and not needing the GC to be on to cover the data
2025-03-28 21:43:47 +0100 <haskellbridge> <Liamzee> at least in theory, though, it should have better performance due to not needing to do as many allocations