2026/01/23

Newest at the top

2026-01-23 12:35:37 +0100 <__monty__> Well, it suggests you can extend atomicity across multiple, no? So if you can't do that easily without deadlocking it's not a great suggestion.
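A minimal sketch of the deadlock risk being discussed here, assuming two MVars that two threads take in opposite orders (all names and values below are illustrative, not from the conversation):

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.MVar

    main :: IO ()
    main = do
      a <- newMVar (0 :: Int)
      b <- newMVar (0 :: Int)
      -- Thread 1 takes a, then b.
      _ <- forkIO $ do
        x <- takeMVar a
        threadDelay 1000        -- widen the race window for demonstration
        y <- takeMVar b
        putMVar b y
        putMVar a (x + 1)
      -- Thread 2 takes b, then a: the opposite order, so each thread
      -- can end up holding one MVar while waiting forever for the other.
      _ <- forkIO $ do
        y <- takeMVar b
        threadDelay 1000
        x <- takeMVar a
        putMVar a x
        putMVar b (y + 1)
      threadDelay 100000        -- give both threads time to (maybe) deadlock

The same combined update done through a single IORef holding one composite value cannot deadlock, which is roughly the trade-off the documentation quote below is pointing at.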
2026-01-23 12:34:48 +0100 danz20169 (~danza@user/danza) danza
2026-01-23 12:34:47 +0100 Inline (~User@2001-4dd7-bc56-0-bf4e-84aa-8c9c-590c.ipv6dyn.netcologne.de) (Quit: KVIrc 5.2.6 Quasar http://www.kvirc.net/)
2026-01-23 12:33:19 +0100 <int-e> If that's what you mean I don't know how it's misleading.
2026-01-23 12:33:06 +0100 <int-e> "Extending the atomicity to multiple IORefs is problematic, so it is recommended that if you need to do anything more complicated then using MVar instead is a good idea."
2026-01-23 12:32:56 +0100 <mauke> with MVars you can deadlock instead
2026-01-23 12:32:46 +0100 <mauke> well, it only talks about atomicity
2026-01-23 12:31:45 +0100 <__monty__> So the doc suggesting MVars instead is misleading?
2026-01-23 12:30:30 +0100 merijn (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn
2026-01-23 12:30:26 +0100 <mauke> applies to MVar, too
2026-01-23 12:29:21 +0100 <__monty__> mauke: The footgun is thinking you'll be able to just add another IORef later and not run into trouble.
2026-01-23 12:28:06 +0100 housemate (~housemate@2405:6e00:2457:9d18:a3e8:cd50:91c3:2f91) (Quit: https://ineedsomeacidtocalmmedown.space/)
2026-01-23 12:27:28 +0100 Square2 (~Square@user/square) (Ping timeout: 256 seconds)
2026-01-23 12:18:55 +0100 <danza> well that sounds saner to me for the goal
2026-01-23 12:16:25 +0100 merijn (~merijn@77.242.116.146) (Ping timeout: 246 seconds)
2026-01-23 12:15:59 +0100 <tomsmeding> also 3. an MVar can also function as a one-place channel instead of a lock
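A minimal sketch of point 3, using an empty MVar as a one-place channel rather than a lock (the names and message are illustrative):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar

    main :: IO ()
    main = do
      box <- newEmptyMVar
      -- Producer: putMVar blocks if the box is already full.
      _ <- forkIO $ putMVar box "hello from the worker"
      -- Consumer: takeMVar blocks until something has been put.
      msg <- takeMVar box
      putStrLn msg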
2026-01-23 12:13:27 +0100 housemate (~housemate@2405:6e00:2457:9d18:a3e8:cd50:91c3:2f91) housemate
2026-01-23 12:13:15 +0100 <tomsmeding> in return for the overhead, an MVar gives you 1. fairness (if you're blocking on the MVar and no one holds the MVar indefinitely, you're guaranteed to get it eventually), 2. the ability to do IO while holding the lock
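A minimal sketch of point 2, contrasting the two APIs: modifyMVar_ runs an IO action while the MVar is held, whereas atomicModifyIORef' only accepts a pure update function (the function names below are illustrative):

    import Control.Concurrent.MVar
    import Data.IORef

    -- MVar: the update can perform IO while the "lock" is held.
    logAndBump :: MVar Int -> IO ()
    logAndBump var = modifyMVar_ var $ \n -> do
      putStrLn ("current value: " ++ show n)   -- IO inside the critical section
      pure (n + 1)

    -- IORef: the update function must be pure; any IO has to happen
    -- before or after the atomic modification.
    bump :: IORef Int -> IO ()
    bump ref = atomicModifyIORef' ref (\n -> (n + 1, ()))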
2026-01-23 12:09:49 +0100 <tomsmeding> whether this is important in the application depends on how often you do this, of course
2026-01-23 12:09:33 +0100 <tomsmeding> atomicModifyIORef is little more than a single CPU instruction (compare-and-swap)
2026-01-23 12:09:19 +0100 <tomsmeding> it's a lock with an explicit queue attached (a list of threads waiting to take the lock) for fairness
2026-01-23 12:09:04 +0100 <mauke> footgun how?
2026-01-23 12:08:55 +0100 <tomsmeding> an MVar definitely has much more overhead than an IORef
2026-01-23 12:08:38 +0100 <__monty__> Does an MVar have so much more overhead that the footgun factor is worth it?
2026-01-23 12:07:39 +0100 <mauke> https://hackage-content.haskell.org/package/base-4.22.0.0/docs/Data-IORef.html#v:atomicModifyIORef
2026-01-23 12:06:19 +0100 <mauke> if we're just updating a data structure that someone else reads from and no other interaction, wouldn't an IORef suffice?
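A minimal sketch of the IORef-only approach mauke is suggesting, assuming a single writer that swaps in a freshly built Map and readers that only call readIORef (the Map and its contents are placeholders):

    import Data.IORef
    import qualified Data.Map.Strict as Map

    main :: IO ()
    main = do
      cache <- newIORef (Map.empty :: Map.Map String Int)
      -- Writer: build the new structure off to the side, then publish it
      -- with one atomic write; readers never see a half-updated Map.
      let fresh = Map.fromList [("answer", 42)]
      atomicWriteIORef cache fresh
      -- Reader: a plain readIORef always returns one consistent snapshot.
      snapshot <- readIORef cache
      print (Map.lookup "answer" snapshot)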
2026-01-23 12:04:48 +0100 vanishingideal (~vanishing@user/vanishingideal) vanishingideal
2026-01-23 12:04:01 +0100 <__monty__> I'm not sure. mauke seems to suggest using forkIO.
2026-01-23 12:01:43 +0100 oskarw (~user@user/oskarw) oskarw
2026-01-23 12:01:05 +0100 lantti (~lantti@xcalibur.cc.tut.fi)
2026-01-23 12:00:20 +0100 oskarw (~user@user/oskarw) (Remote host closed the connection)
2026-01-23 12:00:04 +0100 thenightmail (~thenightm@user/thenightmail) thenightmail
2026-01-23 11:59:39 +0100 thenightmail (~thenightm@user/thenightmail) (Ping timeout: 260 seconds)
2026-01-23 11:57:25 +0100 trickard_ (~trickard@cpe-93-98-47-163.wireline.com.au)
2026-01-23 11:57:20 +0100 XZDX (~xzdx@user/XZDX) (Remote host closed the connection)
2026-01-23 11:57:11 +0100 trickard (~trickard@cpe-93-98-47-163.wireline.com.au) (Read error: Connection reset by peer)
2026-01-23 11:57:02 +0100 <bwe> __monty__: So, when I start the web server, I need to fork from it the runner that updates the MVar. That would work, whereas a separate binary wouldn't, right?
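A minimal sketch of that shape, assuming a single process in which the web server forks the runner at startup; loadFromDatabase, serveWeb and the 3-minute interval are placeholders standing in for bwe's actual code:

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.MVar
    import Control.Monad (forever)

    loadFromDatabase :: IO [String]          -- placeholder for the real query
    loadFromDatabase = pure ["example row"]

    serveWeb :: MVar [String] -> IO ()       -- placeholder for the real server
    serveWeb cache = do
      rows <- readMVar cache                 -- handlers read the current snapshot
      print rows

    main :: IO ()
    main = do
      cache <- newMVar =<< loadFromDatabase
      -- The runner: same binary, same process, just another thread.
      _ <- forkIO $ forever $ do
        threadDelay (3 * 60 * 1000000)       -- sleep 3 minutes (microseconds)
        newRows <- loadFromDatabase
        _ <- swapMVar cache newRows          -- atomically flip to the new state
        pure ()
      serveWeb cache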
2026-01-23 11:54:18 +0100 <[exa]> bwe: yeah technically the "variable" reference doesn't change, but you're allowed to rewrite what it's pointing to
2026-01-23 11:54:05 +0100 <lambdabot> error: [GHC-88464] Variable not in scope: forkIO
2026-01-23 11:54:04 +0100 <mauke> :t forkIO
2026-01-23 11:50:48 +0100 <__monty__> Threads don't imply different binaries. They don't even imply different processes. Rather the reverse.
2026-01-23 11:49:11 +0100 <bwe> Then an MVar is nothing but a (changeable) State across different threads; does that mean different binaries? How do they find each other, then?
2026-01-23 11:47:20 +0100 <bwe> ...and I thought data stored in Reader doesn't change (once loaded).
2026-01-23 11:46:35 +0100 fp (~Thunderbi@2001:708:20:1406::10c5) fp
2026-01-23 11:41:14 +0100 hellwolf (~user@e7d0-28a4-0ea3-c496-0f00-4d40-07d0-2001.sta.estpak.ee) hellwolf
2026-01-23 11:40:33 +0100 <danza> but they should have one MVar per query? Anyway yes, sounds like something better solved in hyperbole
2026-01-23 11:39:45 +0100 Googulator (~Googulato@team.broadbit.hu) (Ping timeout: 272 seconds)
2026-01-23 11:39:00 +0100 <[exa]> bwe: yeah MVars are great for that, loading them doesn't cost anything and you can atomically flip to the new state
2026-01-23 11:38:43 +0100 <bwe> danza: I am quite tolerant of outdated database states within a range of up to 3 minutes (the update interval of my internal cache).
2026-01-23 11:37:16 +0100 <bwe> [exa]: Well, if I get you right, that is similar to what I thought. "How can I update something in a different thread from another (that just sleeps between updates)?"