2026/01/27

Newest at the top

2026-01-27 10:46:17 +0100 <gentauro> or rather, exhaustive PM.
2026-01-27 10:46:05 +0100 <gentauro> so PM is a "very difficult" problem to solve
2026-01-27 10:45:54 +0100 <gentauro> that's what I recall from the SPJ talk
2026-01-27 10:45:32 +0100 <gentauro> jreicher: and int-e if you pm on `case (x_1, x_2, …, x_n) of …` then you will have an upper bound of 2^n cases
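A minimal illustration of the bound gentauro states, for n = 3 with Bool components (the function is made up): spelling out every constructor combination already takes 2^3 = 8 alternatives.

```haskell
-- Illustrative only: with n = 3 Bool components, naming every constructor
-- combination takes 2^3 = 8 alternatives, and the coverage checker has to
-- account for all of them.
allTrue :: (Bool, Bool, Bool) -> Bool
allTrue t = case t of
  (True , True , True ) -> True
  (True , True , False) -> False
  (True , False, True ) -> False
  (True , False, False) -> False
  (False, True , True ) -> False
  (False, True , False) -> False
  (False, False, True ) -> False
  (False, False, False) -> False
```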
2026-01-27 10:43:32 +0100 oskarw (~user@user/oskarw) oskarw
2026-01-27 10:42:22 +0100 marinelli (~weechat@gateway/tor-sasl/marinelli) (Quit: marinelli)
2026-01-27 10:39:38 +0100 <tomsmeding> which is, iirc, polynomially bounded by the collective size of the types it outputs (for all subterms), and thus only exponential if you give it a program that has exponentially large inferred types
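The classic kind of program tomsmeding is alluding to, where the inferred type, not the program, is what grows exponentially (the names are made up):

```haskell
-- Standard example of exponentially large inferred types: each binding pairs
-- the previous one with itself, so the type of x4 contains 2^4 = 16 copies of
-- the type variable for x0.  Hindley-Milner inference is only "exponential"
-- on programs like this, where its output is itself exponentially large.
big :: a -> ((((a, a), (a, a)), ((a, a), (a, a))),
             (((a, a), (a, a)), ((a, a), (a, a))))
big x0 =
  let x1 = (x0, x0)
      x2 = (x1, x1)
      x3 = (x2, x2)
      x4 = (x3, x3)
  in  x4
```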
2026-01-27 10:38:50 +0100 <tomsmeding> ok so the GMTM paper undersells the problem a little bit, because it likens it to the complexity of hindley-milner
2026-01-27 10:38:31 +0100 <tomsmeding> ah right
2026-01-27 10:38:17 +0100 <int-e> And you can encode 3-SAT if you're checking pattern completeness.
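A sketch of the reduction int-e means, on a toy instance of my own (not taken from the paper or the paste): each equation covers exactly the assignments that falsify one clause, so an uncovered case exists precisely when the formula is satisfiable, and the coverage checker's verdict answers the SAT question.

```haskell
-- Toy 3-SAT instance encoded into exhaustiveness checking.  Formula:
--   (x || y || z) && (not x || not y || z) && (x || y || not z)
-- Each equation matches exactly the assignments that falsify one clause, so
-- the match is non-exhaustive iff some assignment satisfies every clause,
-- i.e. iff the formula is satisfiable.  Here x = True, y = False satisfies
-- all three clauses, so (True, False, _) is uncovered and GHC warns.
sat :: Bool -> Bool -> Bool -> ()
sat False False False = ()   -- falsifies x || y || z
sat True  True  False = ()   -- falsifies not x || not y || z
sat False False True  = ()   -- falsifies x || y || not z
```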
2026-01-27 10:37:58 +0100 <tomsmeding> sure, but you can typically turn a computation problem into a decision problem by checking whether the output satisfies some condition
2026-01-27 10:37:28 +0100 housemate (~housemate@202.7.248.67) housemate
2026-01-27 10:37:26 +0100 <jreicher> tomsmeding: IIRC output size isn't relevant to NP-completeness. That class is only defined for decision problems
2026-01-27 10:36:23 +0100 <int-e> ...cases to keep track of.
2026-01-27 10:36:17 +0100 <int-e> Where Haskell demands that you match the first argument, then optionally the fourth, then the second, then optionally the fifth, then the third argument, then optionally the sixth... and now if you want to avoid redundant pattern matches, you have to keep track of which of the first 3 arguments were equal to A when tackling the next 3 cases. And you can scale that up to 2*n arguments and 2^n...
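A made-up example of the shape int-e describes (the data type, arity, and names are invented for illustration):

```haskell
-- Equation 1 inspects argument 1 and then argument 4, equation 2 inspects
-- argument 2 and then argument 5, and so on, which is exactly the
-- first/fourth/second/fifth/third/sixth order at runtime.  A compiler or
-- coverage checker that refuses to test the same scrutinee twice must
-- remember, while handling the later equations, which of the earlier
-- arguments were already found to be A; scaled up to 2*n arguments that is
-- up to 2^n combinations of knowledge to track.
data T = A | B

f :: T -> T -> T -> T -> T -> T -> Int
f A _ _ A _ _ = 1
f _ A _ _ A _ = 2
f _ _ A _ _ A = 3
f _ _ _ _ _ _ = 4
```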
2026-01-27 10:35:30 +0100 <tomsmeding> Whether this is NP-complete only because the output size may be exponential, or whether it may also be if the output size is small, I don't know
2026-01-27 10:35:07 +0100 <jreicher> Ta
2026-01-27 10:34:02 +0100 <tomsmeding> (that first paper describes the algorithm in GHC before the newest LYG algorithm; the LYG paper ( https://dl.acm.org/doi/pdf/10.1145/3408989 ) claims it has the same complexity as that)
2026-01-27 10:33:41 +0100 housemate (~housemate@202.7.248.67) (Quit: https://ineedsomeacidtocalmmedown.space/)
2026-01-27 10:33:12 +0100 <int-e> Something like this should do the trick: https://paste.debian.net/hidden/81e97dc6
2026-01-27 10:33:00 +0100 <tomsmeding> it's pattern match exhaustivity checking
2026-01-27 10:32:48 +0100 <jreicher> tomsmeding: but why is that pattern matching?
2026-01-27 10:32:27 +0100 <tomsmeding> (citation: section 3.4 of https://people.cs.kuleuven.be/~tom.schrijvers/Research/papers/icfp2015.pdf ; cited work: https://epubs.siam.org/doi/abs/10.1137/S0097539793246252 )
2026-01-27 10:32:15 +0100 <tomsmeding> jreicher: > Sekar et al. [25] show that the problem of finding redundant clauses is NP-complete, by encoding the boolean satisfiability (SAT) problem into it.
2026-01-27 10:31:41 +0100 Lord_of_Life (~Lord@user/lord-of-life/x-2819915) Lord_of_Life
2026-01-27 10:29:18 +0100 <int-e> jreicher: I wasn't answering you anyway, just clarifying what I wrote.
2026-01-27 10:29:04 +0100 Lord_of_Life (~Lord@user/lord-of-life/x-2819915) (Quit: Laa shay'a waqi'un moutlaq bale kouloun moumkine)
2026-01-27 10:28:15 +0100 <jreicher> That's still vague.
2026-01-27 10:28:04 +0100 <jreicher> tomsmeding: what's the parameter then? What's "n", if you know what I mean?
2026-01-27 10:28:02 +0100 <int-e> exponential in the number of cases in the source code
2026-01-27 10:27:33 +0100 <tomsmeding> jreicher: exhaustivity checking, I think
2026-01-27 10:27:30 +0100 <int-e> gentauro: Doesn't ring a bell, but I imagine that if you implement Haskell's pattern matching while avoiding matching the same scrutinee twice, you end up having to keep track of exponentially many combinations of what has been matched and what hasn't, and correspondingly, exponentially many code paths.
2026-01-27 10:27:22 +0100 <jreicher> Linear is inefficient. I just don't understand why it would be worse than linear.
2026-01-27 10:27:10 +0100 <probie> The `case` in core is linear, but Haskell itself is a bit more lenient
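A sketch of the contrast probie draws, with made-up functions; the second version is ordinary Haskell standing in for the Core that the desugarer produces:

```haskell
-- Source Haskell: several scrutinees and nested patterns in one equation.
both :: Maybe Int -> Maybe Int -> Int
both (Just x) (Just y) = x + y
both _        _        = 0

-- Roughly what desugaring yields: Core's case examines exactly one
-- constructor layer of one scrutinee at a time (written here as Haskell,
-- not actual Core syntax).
both' :: Maybe Int -> Maybe Int -> Int
both' a b =
  case a of
    Nothing -> 0
    Just x  -> case b of
      Nothing -> 0
      Just y  -> x + y
```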
2026-01-27 10:26:49 +0100 <lortabac> efficient compilation of pattern matching is a little more involved
2026-01-27 10:26:27 +0100 <lortabac> jreicher: trying all the equations one by one would be inefficient
2026-01-27 10:25:52 +0100 kuribas (~user@ip-188-118-57-242.reverse.destiny.be) kuribas
2026-01-27 10:25:31 +0100 <jreicher> I still don't get this. I thought pattern matching would be linear. I'm sure I'm misunderstanding the measure.
2026-01-27 10:22:43 +0100 <tomsmeding> yes
2026-01-27 10:22:30 +0100 <int-e> that seems unrelated to these particular warnings though (and the fact that they can't be overridden in a good way that would indicate call sites that are vetted to be fine)
2026-01-27 10:22:29 +0100 <tomsmeding> no sub-exponential algorithm known, perhaps? (I don't know)
2026-01-27 10:21:56 +0100 <jreicher> What is an "exponential problem"?
2026-01-27 10:20:37 +0100 <gentauro> I recall a talk from SPJ where he stated that PM is an exponential problem
2026-01-27 10:20:15 +0100 <gentauro> int-e: pattern matches in Haskell aren't exhaustive by default, right?
2026-01-27 10:15:45 +0100 hakutaku (~textual@chen.yukari.eu.org) (Quit: My MacBook has gone to sleep. ZZZzzz…)
2026-01-27 10:14:26 +0100 int-e runs
2026-01-27 10:14:14 +0100 <int-e> import Prelude hiding (head); head (x:xs) = x
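A sketch of how int-e's one-liner might sit in a module (module name invented); note that this head is exactly as partial as Prelude.head, it simply isn't the function that -Wx-partial watches for.

```haskell
-- Sketch of int-e's second workaround in module form (module name made up).
-- The local head is just as partial as Prelude.head; it only sidesteps the
-- -Wx-partial warning, which fires solely for the Prelude function.
-- (-Wincomplete-patterns, if enabled via -Wall, still flags the missing
-- [] case.)
module Head (head) where

import Prelude hiding (head)

head :: [a] -> a
head (x:_) = x
```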
2026-01-27 10:13:31 +0100 <f-a> thanks int-e
2026-01-27 10:13:16 +0100 <int-e> {-# OPTIONS_GHC -Wno-x-partial #-}
2026-01-27 10:12:48 +0100 <probie> don't use `head`