2026/01/23

Newest at the top

2026-01-23 13:20:35 +0100 <merijn> tomsmeding: MVar doesn't have to ever lock and queue if you don't want to, though
2026-01-23 13:18:10 +0100 <merijn> int-e: Bad STM design can easily cause that
2026-01-23 13:17:59 +0100 <merijn> int-e: Almost surely
2026-01-23 13:03:09 +0100 <tomsmeding> (you may be thinking of ForeignPtr, which does implement Ord)
2026-01-23 13:00:51 +0100 <tomsmeding> Axman6: MVar doesn't implement Ord, only Eq
2026-01-23 12:57:54 +0100 <tomsmeding> int-e: I'm not aware of any, I'm an academic
2026-01-23 12:57:27 +0100 <ncf> Leary: i didn't mean to encapsulate general recursion tbh, only to point out that the clarity of expressing things in terms of (co)algebras needn't come at the price of general recursion
2026-01-23 12:57:22 +0100 <tomsmeding> concurrent programming: correct, fast, convenient; pick 2
2026-01-23 12:57:16 +0100 <int-e> Do we have any canonical STM horror story (along the lines of "it worked great until we ran it in production with 50 simultaneous threads and then it spent 90% of its time retrying STM transactions"?)
2026-01-23 12:56:14 +0100	<tomsmeding> and if you are worried about the performance implications of using an MVar over an IORef, you should also be worried about STM, as it has similar (?) overhead, and also has starvation issues if you have both very long and very short transactions that update the same TVars
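The one thing STM buys you over IORefs is multi-variable atomicity: two TVars can change in a single transaction. A minimal sketch (function name `transfer` is made up here; assumes the stm package):

```haskell
import Control.Concurrent.STM

-- Move an amount between two balances atomically: both TVars change in
-- one transaction, which two separate IORefs cannot guarantee.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  x <- readTVarIO a
  y <- readTVarIO b
  print (x, y)  -- (70,30)
```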
2026-01-23 12:53:26 +0100 <int-e> You can perhaps criticize the IORef docs for not mentioning STM, but the reason for that is probably historical, and you'll find out about STM when you read the MVar docs.
2026-01-23 12:53:23 +0100 <tomsmeding> (if you only ever lock such locks in a particular global order, this problem cannot arise)
2026-01-23 12:53:01 +0100 <tomsmeding> (for completeness: you have two locks, A and B, and two threads, 1 and 2. 1 locks A and then B, and 2 locks B and then A. If the two executions interleave, 1 has A locked and 2 has B locked and they both wait on the other, indefinitely)
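The lock-ordering fix described in the messages above can be sketched with MVars as locks (the helper name `withBoth` is invented; this is a sketch, not a library API): every thread takes A before B, so the A-then-B / B-then-A interleaving cannot arise.

```haskell
import Control.Concurrent
import Control.Concurrent.MVar

-- Both threads acquire the locks in the same global order (A before B),
-- so the circular-wait deadlock described above cannot occur.
withBoth :: MVar () -> MVar () -> IO a -> IO a
withBoth lockA lockB act =
  withMVar lockA $ \_ -> withMVar lockB $ \_ -> act

main :: IO ()
main = do
  a <- newMVar ()
  b <- newMVar ()
  done <- newEmptyMVar
  _ <- forkIO (withBoth a b (putStrLn "thread 1") >> putMVar done ())
  withBoth a b (putStrLn "thread 2")
  takeMVar done
```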
2026-01-23 12:52:41 +0100 <__monty__> You may be right.
2026-01-23 12:52:03 +0100 <tomsmeding> if anything, having to order locks to avoid deadlock is a more insidious risk that you may not see coming if you haven't studied concurrent programming
2026-01-23 12:51:27 +0100 <tomsmeding> __monty__: while yes, adding another IORef later means you can't update both in the same atomic transaction, I'm not sure what part of the API would lead one to assume that you can
2026-01-23 12:45:34 +0100 <danz20169> maybe just passed them as black boxes
2026-01-23 12:44:39 +0100 <danz20169> did you use any library to encode PNGs as types?
2026-01-23 12:43:10 +0100 <danz20169> seems a solution suited to server-side data vis
2026-01-23 12:41:34 +0100 <Axman6> I have also done that - it needed to serve images of some live-ish data, and generating the images was pretty slow, so with each new piece of data it'd just make new PNGs and update the map in the IORef. Meant all the HTTP requests were instant
2026-01-23 12:41:08 +0100 <mauke> worked great
2026-01-23 12:40:41 +0100 <mauke> there was a writer thread that would occasionally update the structure by just overwriting the Map
2026-01-23 12:40:19 +0100 <mauke> I had a server that would answer client queries from a central data structure (a Map stored in an IORef)
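The read-mostly pattern mauke and Axman6 describe above can be sketched roughly as follows (names `publish` and `lookupKey` are made up): readers take a cheap consistent snapshot with readIORef, while the writer replaces the whole Map in one atomic write.

```haskell
import Data.IORef
import qualified Data.Map.Strict as Map

type Cache = IORef (Map.Map String Int)

-- Readers snapshot the current Map; since the writer swaps in a whole
-- new Map, a reader always sees one consistent version.
lookupKey :: Cache -> String -> IO (Maybe Int)
lookupKey ref k = Map.lookup k <$> readIORef ref

-- The writer thread overwrites the structure wholesale.
publish :: Cache -> Map.Map String Int -> IO ()
publish ref newMap = atomicWriteIORef ref newMap

main :: IO ()
main = do
  ref <- newIORef (Map.fromList [("hits", 1)])
  publish ref (Map.fromList [("hits", 2)])
  lookupKey ref "hits" >>= print  -- Just 2
```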
2026-01-23 12:39:57 +0100	<Axman6> they can store an arbitrarily complicated record too, and an IORef can be used to update as much or as little of that structure as you like
2026-01-23 12:39:05 +0100 <mauke> or a Map
2026-01-23 12:39:04 +0100 <Axman6> I've been reading a lot of the Cardano code recently, and they make a lot of use of STM, as well as pure data structures.
2026-01-23 12:37:47 +0100 <int-e> (But you can have a single IORef that stores a tuple or a record.)
2026-01-23 12:37:42 +0100	<Axman6> IORefs with atomicModifyIORef are amazing, if you can store all your state in pure data structures that can always be changed without doing any other IO. If you can't guarantee those properties, other options are much safer
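A minimal sketch of the atomicModifyIORef pattern from the message above, using the strict variant to avoid thunk buildup: several threads bump a shared counter, and because each read-modify-write is one atomic step, no increments are lost.

```haskell
import Control.Concurrent
import Control.Monad (replicateM_)
import Data.IORef

main :: IO ()
main = do
  ref  <- newIORef (0 :: Int)
  done <- newEmptyMVar
  -- Each thread performs 1000 atomic increments of the shared counter.
  let bump = replicateM_ 1000 (atomicModifyIORef' ref (\n -> (n + 1, ())))
  mapM_ (\_ -> forkIO (bump >> putMVar done ())) [1 .. 4 :: Int]
  -- Wait for all four workers, then read the final count.
  mapM_ (\_ -> takeMVar done) [1 .. 4 :: Int]
  readIORef ref >>= print  -- 4000
```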
2026-01-23 12:36:44 +0100 <Axman6> IIRC MVar has a consistent Ord instance?
2026-01-23 12:36:36 +0100 <int-e> You can't atomically update two IORefs at the same time.
2026-01-23 12:36:31 +0100 <Axman6> you just have to be careful about the order you access things
2026-01-23 12:35:37 +0100 <__monty__> Well, it suggests you can extend atomicity across multiple, no? So if you can't do that easily without deadlocking it's not a great suggestion.
2026-01-23 12:33:19 +0100 <int-e> If that's what you mean I don't know how it's misleading.