2025/04/20

Newest at the top

2025-04-20 22:09:56 +0200haritz(~hrtz@user/haritz) haritz
2025-04-20 22:09:56 +0200haritz(~hrtz@152.37.68.178) (Changing host)
2025-04-20 22:09:31 +0200haritz(~hrtz@152.37.68.178)
2025-04-20 22:09:17 +0200 <haskellbridge> <hellwolf> Instead, it helps you to have a type-safe way to build linear temporal logic.
2025-04-20 22:09:04 +0200 <haskellbridge> So I'd say nothing prevents you from having lazy semantics, since you don't have to consume.
2025-04-20 22:09:02 +0200 <haskellbridge> <hellwolf> GHC's LinearTypes only says "A function f is linear if: when its result is consumed exactly once, then its argument is consumed exactly once."
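A minimal sketch of that point, assuming GHC 9.x with LinearTypes (module and names are illustrative): the linear arrow constrains consumption, not evaluation order, so linear functions can still build their results lazily.

    {-# LANGUAGE LinearTypes #-}
    module LinearLazy where

    -- Linear identity: if the result is consumed exactly once,
    -- the argument is consumed exactly once.
    idL :: a %1 -> a
    idL x = x

    -- Linearity does not force evaluation: the pair is built lazily,
    -- and nothing is evaluated until the result itself is consumed.
    pairL :: a %1 -> b %1 -> (a, b)
    pairL x y = (x, y)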
2025-04-20 22:07:06 +0200 <haskellbridge> <hellwolf> maybe you give an example, and let me give it a try and learn?
2025-04-20 22:06:46 +0200haritz(~hrtz@user/haritz) (Ping timeout: 265 seconds)
2025-04-20 22:05:55 +0200 <haskellbridge> <hellwolf> why not? it's just lambda with linear arrows. You would need to take care of duplicating/ignoring data to produce valid diagrams; the type system forces you to anyway.
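A small illustration of the dup/ignore point, assuming GHC's LinearTypes (definitions are illustrative): implicit copying and discarding are rejected, so the duplication and deletion "boxes" of a diagram have to be written out per type.

    {-# LANGUAGE LinearTypes #-}

    -- dup x = (x, x)   -- rejected: x would be consumed twice
    -- ign x = ()       -- rejected: x would not be consumed at all

    -- The boxes must be provided explicitly, e.g. for Bool:
    dupBool :: Bool %1 -> (Bool, Bool)
    dupBool True  = (True, True)
    dupBool False = (False, False)

    dropBool :: Bool %1 -> ()
    dropBool True  = ()
    dropBool False = ()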
2025-04-20 22:03:55 +0200 <haskellbridge> <hellwolf> I am curious. I don't know the answer.
2025-04-20 22:03:53 +0200 <monochrom> Or maybe I assumed it didn't matter. The question I would ask: Can I obtain compositional reasoning from linear temporal logic?
2025-04-20 22:02:29 +0200 <monochrom> Maybe I did.
2025-04-20 22:02:03 +0200 <haskellbridge> <hellwolf> Do you assume laziness cannot be obtained from linear temporal logic?
2025-04-20 22:01:59 +0200 <monochrom> Sure! Now "take n (repeat ())" takes Theta(n)-time.
2025-04-20 22:01:35 +0200 <mauke> why do I need a statement about repeat ()?
2025-04-20 22:01:26 +0200 <lambdabot> []
2025-04-20 22:01:25 +0200 <mauke> > take 0 undefined
2025-04-20 22:00:30 +0200 <monochrom> (I did, it's my PhD thesis: https://www.cs.utoronto.ca/~trebla/albert-thesis-ut.pdf )
2025-04-20 21:59:17 +0200 <monochrom> How do you prove that in Haskell "take 0 (repeat ())" is O(1)-time, without a whole-program analysis of the "take 0 (repeat ())"? Can you just prove one statement about "take 0", and one statement about "repeat ()", and combine them to deduce O(1)-time?
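For reference, the lazy reduction behind mauke's lambdabot example, using simplified Prelude-style definitions (the primed names are illustrative):

    take' :: Int -> [a] -> [a]
    take' n _ | n <= 0 = []
    take' _ []         = []
    take' n (x:xs)     = x : take' (n-1) xs

    repeat' :: a -> [a]
    repeat' x = x : repeat' x

    -- Lazily, one reduction step suffices; repeat' () is never entered:
    --   take' 0 (repeat' ()) = []
    main :: IO ()
    main = print (take' 0 (repeat' ()))   -- prints [] in O(1) time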
2025-04-20 21:57:52 +0200merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 252 seconds)
2025-04-20 21:57:36 +0200 <monochrom> People like eager languages because running times are compositional. The running time of "take 0 . repeat" is factorable into (time of take 0) + (time of repeat) = infinity.
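The eager reading can be simulated in Haskell with deepseq's strict application; a sketch assuming the deepseq package:

    import Control.DeepSeq (($!!))

    -- Fully evaluate the argument before applying take 0, as an eager
    -- language would: time(take 0) + time(repeat ()) = infinity.
    loops :: [()]
    loops = take 0 $!! repeat ()   -- never terminates once demanded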
2025-04-20 21:54:46 +0200 <haskellbridge> <hellwolf> At least you didn't say it's false. I feel relieved :D
2025-04-20 21:54:41 +0200j1n37(~j1n37@user/j1n37) j1n37
2025-04-20 21:53:33 +0200 <monochrom> That is tautological.
2025-04-20 21:53:03 +0200 <haskellbridge> In some cases, when timeless equations are of advantage, denotational semantics is really great; but the moment you want to deal with performance, or worse, side effects, you inevitably need help from the "timely" logic.
2025-04-20 21:53:03 +0200 <haskellbridge> <hellwolf> I am still conceiving this idea. And I am punching above my weight:
2025-04-20 21:51:02 +0200 <haskellbridge> <hellwolf> it's just boxes and wires with very rigorous rules.
2025-04-20 21:50:58 +0200 <monochrom> Compositional reasoning means if I want to prove "foo . bar" correct for example, I can break it down into proving a statement about foo, a statement about bar, and a combination of them to obtain overall correctness.
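A hypothetical concrete instance of that schema (the functions and properties are illustrative, not from the discussion):

    import Data.List (sort)

    -- statement about bar (sort):  its output is ordered
    -- statement about foo (head):  the head of an ordered non-empty list
    --                              is its minimum
    -- combined, without re-reading either definition:
    --   (head . sort) xs == minimum xs,  for non-empty xs
    main :: IO ()
    main = print ((head . sort) [3, 1, 2] == minimum [3, 1, 2])   -- True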
2025-04-20 21:50:48 +0200 <haskellbridge> linear logic and symmetric monoidal categories can represent operations in quite simple diagrams.
2025-04-20 21:50:48 +0200 <haskellbridge> <hellwolf> You sure?
2025-04-20 21:50:47 +0200merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn
2025-04-20 21:50:37 +0200j1n37(~j1n37@user/j1n37) (Read error: Connection reset by peer)
2025-04-20 21:48:53 +0200j1n37(~j1n37@user/j1n37) j1n37
2025-04-20 21:48:51 +0200 <monochrom> OK here is the reason for my preference, but I can't speak for others. It is easier to do compositional reasoning in denotational semantics than in operational semantics.
2025-04-20 21:48:35 +0200j1n37(~j1n37@user/j1n37) (Read error: Connection reset by peer)
2025-04-20 21:43:51 +0200 <monochrom> Then don't.
2025-04-20 21:42:51 +0200 <monochrom> (not a serious answer) Even physics is now trying to get rid of time from really fundamental theories >:)
2025-04-20 21:42:32 +0200 <haskellbridge> <hellwolf> Somehow when it involves "sequence" and timeliness, it becomes dirty. I cannot justify it anymore.
2025-04-20 21:41:34 +0200ljdarj1 ljdarj
2025-04-20 21:41:34 +0200ljdarj(~Thunderbi@user/ljdarj) (Ping timeout: 244 seconds)
2025-04-20 21:41:33 +0200 <haskellbridge> <hellwolf> This is my linear-time temporal logic take on the meaning of "par"/"pseq". For "pseq phi psi": $\neg \psi_{\mathrm{whnf}}\ \mathcal{U}\ \phi_{\mathrm{whnf}}$. This is mathematics, too, you know; why is this less "simple and elegant"?
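For context, pseq's operational reading in plain Haskell (the example values are illustrative): "pseq phi psi" evaluates phi to WHNF before psi is returned, which is what the until-formula above captures.

    import GHC.Conc (pseq)

    -- x is forced to WHNF strictly before the sum is returned;
    -- in the LTL reading, x + y is "not yet" until x "already is".
    demo :: Int
    demo = x `pseq` (x + y)
      where
        x = sum [1 .. 1000 :: Int]
        y = product [1 .. 10 :: Int]

    main :: IO ()
    main = print demo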
2025-04-20 21:40:56 +0200 <monochrom> I won't defend "low level is direct, high level abstract is clean", although I am mathematically inclined and I feel that way. But then I am happy to go low level, except when I'm too lazy to.
2025-04-20 21:40:28 +0200 <int-e> sounds subjective to me, so there is no good answer
2025-04-20 21:40:22 +0200 <haskellbridge> <hellwolf> almost a pure aesthetic preference.
2025-04-20 21:40:03 +0200 <haskellbridge> <hellwolf> all the answers I got from denotational conalists are usually about "simplicity and elegance"
2025-04-20 21:39:07 +0200 <haskellbridge> <hellwolf> What makes encoding timeliness in computation as logic less "clean" than timeless denotational meanings? I used to be a "conalist", too. But now I am questioning and want to understand more.
2025-04-20 21:38:18 +0200ljdarj1(~Thunderbi@user/ljdarj) ljdarj
2025-04-20 21:38:11 +0200inca(~inca@h213.233.30.71.dynamic.ip.windstream.net)
2025-04-20 21:37:46 +0200merijn(~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 276 seconds)
2025-04-20 21:35:58 +0200 <monochrom> Or even downright evil: spawn two threads to evaluate x and y concurrently, and wait for both. Just to try bringing down your computer.
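A sketch of that "evil" strategy with plain base concurrency primitives (the names are illustrative):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Exception (evaluate)

    -- Evaluate x and y to WHNF in two threads, then wait for both.
    evalBoth :: a -> b -> IO (a, b)
    evalBoth x y = do
      mx <- newEmptyMVar
      my <- newEmptyMVar
      _ <- forkIO (evaluate x >>= putMVar mx)
      _ <- forkIO (evaluate y >>= putMVar my)
      (,) <$> takeMVar mx <*> takeMVar my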