Newest at the top
2025-03-24 21:14:54 +0100 | alp | (~alp@2001:861:8ca0:4940:92ce:4d9a:3c9b:8560) |
2025-03-24 21:12:42 +0100 | alfiee | (~alfiee@user/alfiee) (Ping timeout: 246 seconds) |
2025-03-24 21:08:41 +0100 | alfiee | (~alfiee@user/alfiee) alfiee |
2025-03-24 21:05:03 +0100 | notdabs | (~Owner@2600:1700:69cf:9000:c43:fe75:997:32d7) |
2025-03-24 20:58:07 +0100 | kh0d | (~kh0d@212.200.65.82) kh0d |
2025-03-24 20:57:36 +0100 | petrichor | (~znc-user@user/petrichor) (Ping timeout: 276 seconds) |
2025-03-24 20:56:40 +0100 | kh0d | (~kh0d@212.200.65.82) (Remote host closed the connection) |
2025-03-24 20:54:19 +0100 | nckx | (nckx@libera/staff/owl/nckx) (Ping timeout: 608 seconds) |
2025-03-24 20:46:15 +0100 | kh0d | (~kh0d@212.200.65.82) kh0d |
2025-03-24 20:44:23 +0100 | kh0d | (~kh0d@212.200.65.82) (Remote host closed the connection) |
2025-03-24 20:44:07 +0100 | <Athas> | Er, maintain an immutable array. |
2025-03-24 20:43:02 +0100 | <tomsmeding> | just implement it! |
2025-03-24 20:42:55 +0100 | <Athas> | Maybe I'm overthinking it. |
2025-03-24 20:42:52 +0100 | <Athas> | The algorithm is really quite trivial: https://mathworld.wolfram.com/DeterminantExpansionbyMinors.html |
2025-03-24 20:42:29 +0100 | <Athas> | So maybe it is good enough to maintain a mutable array and then just a bunch of linked lists. |
2025-03-24 20:42:14 +0100 | <Athas> | Yes. But the imperative solution is to essentially encode the matrix as a linked list of row indexes and column indexes, such that e.g. R[i] is the physical index of the successor to row 'i'. |
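[editor's note] The successor-index encoding Athas describes above can be sketched persistently in Haskell. This is a hypothetical illustration, not code from the channel: `Succ` plays the role of the R array (each live row maps to the physical index of the next live row, with a virtual head at -1), and because `Data.IntMap` is persistent, the recursion can "undo" a deletion simply by reusing the old map.

```haskell
import qualified Data.IntMap.Strict as IM

-- i -> physical index of the row after i (virtual head at -1, end at n)
type Succ = IM.IntMap Int

-- successor links for rows 0..n-1, all initially live
mkSucc :: Int -> Succ
mkSucc n = IM.fromList [ (i, i + 1) | i <- [-1 .. n - 1] ]

-- all live rows, in order, by following the links
liveRows :: Int -> Succ -> [Int]
liveRows n s = go (s IM.! (-1))
  where go i | i >= n    = []
             | otherwise = i : go (s IM.! i)

-- delete row i given its predecessor p: one link update, O(log n);
-- no matrix data is moved, and the old Succ remains valid
deleteRow :: Int -> Int -> Succ -> Succ
deleteRow p i s = IM.insert p (s IM.! i) s
```

The same structure works unchanged for column indexes; a mutable-array version would replace `IM.insert` with an in-place write and an explicit restore on the way back up the recursion.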
2025-03-24 20:41:11 +0100 | <tomsmeding> | so it may well be that they need to be fast after all, even if the data is small |
2025-03-24 20:40:59 +0100 | <tomsmeding> | because you'll probably also be doing O(n!) of those matrix manipulations |
2025-03-24 20:40:46 +0100 | <tomsmeding> | Athas: perhaps the dataset will be small, but even then, what really matters here is the _proportion_ of time that you spend in matrix manipulations |
2025-03-24 20:39:57 +0100 | machinedgod | (~machinedg@d108-173-18-100.abhsia.telus.net) (Ping timeout: 252 seconds) |
2025-03-24 20:39:51 +0100 | <tomsmeding> | hah, O(n!) is brutal |
2025-03-24 20:39:38 +0100 | <Athas> | Of course, the lovely part of an O(n!) algorithm is that the dataset will necessarily be so small that perhaps an inefficient matrix representation is unimportant. |
2025-03-24 20:39:15 +0100 | <Athas> | This is not a good algorithm (the complexity is something like O(n!)), but what's worse is that it depends on recursively removing rows and columns from a matrix. This can be done in efficient ways with mutable arrays, but I haven't cracked a nice way to do it in Haskell. |
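[editor's note] The algorithm being discussed (cofactor expansion along the first row) has a direct, if inefficient, purely functional rendering. A minimal sketch, not Athas's code: with plain lists, "removing" a row or column is just `tail` and a filtered copy, which makes the O(n!) cost explicit.

```haskell
-- drop column j from every row below the first
minor :: [[Double]] -> Int -> [[Double]]
minor m j = [ deleteAt j row | row <- tail m ]
  where deleteAt k xs = take k xs ++ drop (k + 1) xs

-- determinant by expansion along the first row, sign (-1)^j
det :: [[Double]] -> Double
det [[x]] = x
det m     = sum [ sign j * (head m !! j) * det (minor m j)
                | j <- [0 .. length (head m) - 1] ]
  where sign j = if even j then 1 else -1
```

Each recursive call copies the whole submatrix, which is exactly the overhead the successor-index trick avoids.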
2025-03-24 20:38:21 +0100 | <Athas> | Btw, I found that Haskell did pretty well for implementing a differentiable Runge-Kutta ODE solver, but now I'm trying to work out how to implement computation of a matrix determinant by minors. |
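[editor's note] For context on the differentiable Runge-Kutta solver mentioned above, here is a minimal classical RK4 step (a sketch, not Athas's solver). Keeping it polymorphic in the scalar is what makes it "differentiable": a dual-number type can be plugged in for `a`.

```haskell
-- one classical RK4 step for y' = f t y, with step size h
rk4Step :: Fractional a => (a -> a -> a) -> a -> a -> a -> a
rk4Step f h t y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
  where
    k1 = f t y
    k2 = f (t + h / 2) (y + h / 2 * k1)
    k3 = f (t + h / 2) (y + h / 2 * k2)
    k4 = f (t + h) (y + h * k3)
```

For example, one step of size 0.1 on y' = y from y(0) = 1 agrees with e^0.1 to about five decimal places.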
2025-03-24 20:37:41 +0100 | <Athas> | Yes, that is much better than rejecting esoterica. |
2025-03-24 20:36:36 +0100 | <tomsmeding> | you could even mark certain tools as "experimental", and not show them by default (or make it easy to filter them out or something) |
2025-03-24 20:36:23 +0100 | Sayman | (~Sayman@2401:4900:1ca3:94e0:d788:422d:b642:f1c) (Client Quit) |
2025-03-24 20:36:22 +0100 | <Athas> | As long as it fits in a Docker container, I would say it should go in. |
2025-03-24 20:36:14 +0100 | Sayman | (~Sayman@2401:4900:1ca3:94e0:d788:422d:b642:f1c) |
2025-03-24 20:36:04 +0100 | <Athas> | I think the architecture is robust enough to handle arbitrary amounts of weird tools. The main bottleneck is that the table will look weird, but that is a fixable UI issue. |
2025-03-24 20:35:43 +0100 | <tomsmeding> | I see |
2025-03-24 20:35:41 +0100 | Sayman | (~Sayman@2401:4900:1ca3:94e0:d788:422d:b642:f1c) (Quit: Client closed) |
2025-03-24 20:35:38 +0100 | <Athas> | But Sam has turned into a maximalist. |
2025-03-24 20:35:34 +0100 | <tomsmeding> | as in, is it "benchmark all the implementations" or "let's collect the major ones and get a useful comparison" |
2025-03-24 20:35:24 +0100 | <Athas> | tomsmeding: that remains to be seen! |
2025-03-24 20:35:13 +0100 | <tomsmeding> | Athas: in general, how happy is gradbench with random experimental libraries? |

2025-03-24 20:35:09 +0100 | <Athas> | Ah well. |
2025-03-24 20:34:48 +0100 | <Sayman> | geekosaur okay |
2025-03-24 20:34:45 +0100 | <geekosaur> | again, I'm not really involved with web stuff. you might post to /r/haskell or discourse.haskell.org |
2025-03-24 20:34:43 +0100 | <tomsmeding> | Athas: that array-aware thing that I was hacking on (and that you said I should continue hacking on) -- I haven't continued working on it yet, so it's in a usable state if the existing array API happens to be enough for your algorithm, and otherwise not :p |
2025-03-24 20:34:12 +0100 | <Sayman> | will it be better to continue or to change my idea? |
2025-03-24 20:33:59 +0100 | <tomsmeding> | Athas: I wrote my own thing that wasn't even meant to be faster than 'ad', but it is by a factor of 2 sometimes -- but it _really_ is not fleshed out at all, so you won't have much luck with it |
2025-03-24 20:33:55 +0100 | <geekosaur> | I also am one of the core maintainers of xmonad, and when I can I contribute to ghc and cabal development |
2025-03-24 20:33:55 +0100 | <Sayman> | should I continue with my proposal |
2025-03-24 20:33:36 +0100 | <geekosaur> | not sure what you mean by "admin", I'm a moderator here and on Matrix. I do help out with beginners also |
2025-03-24 20:33:00 +0100 | <Athas> | Something that abused Template Haskell? |
2025-03-24 20:32:55 +0100 | <Athas> | tomsmeding: did you know of some Haskell AD library that is faster than 'ad'? |
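[editor's note] The 'ad' package discussed above implements forward mode with dual numbers; tomsmeding's faster hand-rolled "thing" is not shown in the log. As a self-contained sketch of the forward-mode core such libraries are built on (hypothetical names, not the 'ad' API):

```haskell
-- a dual number carries a value and its derivative
data Dual = Dual { primal :: Double, tangent :: Double }

instance Num Dual where
  Dual x dx + Dual y dy = Dual (x + y) (dx + dy)
  Dual x dx - Dual y dy = Dual (x - y) (dx - dy)
  Dual x dx * Dual y dy = Dual (x * y) (dx * y + x * dy)  -- product rule
  abs (Dual x dx)       = Dual (abs x) (dx * signum x)
  signum (Dual x _)     = Dual (signum x) 0
  fromInteger n         = Dual (fromInteger n) 0          -- constants: zero tangent

-- derivative of f at x, analogous to 'diff' from Numeric.AD
diff' :: (Dual -> Dual) -> Double -> Double
diff' f x = tangent (f (Dual x 1))
```

For example, `diff' (\x -> x * x + 3 * x) 2` evaluates to 7.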
2025-03-24 20:32:44 +0100 | <Sayman> | or a mentor |
2025-03-24 20:32:39 +0100 | <Sayman> | are you the admin? |
2025-03-24 20:31:25 +0100 | <geekosaur> | but I'm not really the right person to talk to about web stuff |