Newest at the top
2025-03-19 21:56:13 +0100 | <[exa]> | davean: somehow I first read that as "being failure tolerant as a distributed computing user is a good strategy". Which is truly eternal. |
2025-03-19 21:55:51 +0100 | <haskellbridge> | <magic_rb> It's what I play when I'm trying to fall asleep (unironically) |
2025-03-19 21:55:27 +0100 | <haskellbridge> | <magic_rb> You don't like the sound of whirring servers and hard drives?? Weird |
2025-03-19 21:55:13 +0100 | sabathan | (~sabathan@amarseille-159-1-12-107.w86-203.abo.wanadoo.fr) |
2025-03-19 21:54:51 +0100 | <davean> | magic_rb: Everyone knows LPC is healthier for you |
2025-03-19 21:54:48 +0100 | <haskellbridge> | <magic_rb> Hey, if your thing works without requiring a PhD in it, it's already surpassed k8s |
2025-03-19 21:54:15 +0100 | <[exa]> | c'mon guys I have standards, the comparison to kubes hurt :D |
2025-03-19 21:53:44 +0100 | <haskellbridge> | <magic_rb> Lmao |
2025-03-19 21:53:39 +0100 | killy | (~killy@terminal-3-187.retsat1.com.pl) |
2025-03-19 21:53:36 +0100 | <[exa]> | I'm okay with "bad cluster computing" |
2025-03-19 21:53:33 +0100 | <davean> | Just because you suck at it doesn't mean you aren't doing it. Actually, being failure tolerant is usually a good strategy |
2025-03-19 21:53:31 +0100 | <tomsmeding> | [exa]: that just means you have standards |
2025-03-19 21:53:28 +0100 | <haskellbridge> | <magic_rb> Kubernetes is "distributed computing" and kubernetes barely works on a single node let alone 30 |
2025-03-19 21:53:08 +0100 | <haskellbridge> | <magic_rb> It's still a cluster, just a bad one |
2025-03-19 21:52:57 +0100 | <[exa]> | "distributed computing" somehow means to me "I'm proud that my programs can resynchronize after 6 years of lag and the user doesn't notice the outage" |
2025-03-19 21:51:59 +0100 | <[exa]> | tomsmeding: it doesn't really qualify, it's got a centralized coordinator and I ignore any failures etc. |
2025-03-19 21:51:24 +0100 | <haskellbridge> | <magic_rb> Or smth like that |
2025-03-19 21:51:24 +0100 | <haskellbridge> | <magic_rb> Even better if you call it an "HPC cluster" |
2025-03-19 21:51:20 +0100 | sabathan | (~sabathan@amarseille-159-1-12-107.w86-203.abo.wanadoo.fr) (Read error: Connection reset by peer) |
2025-03-19 21:48:13 +0100 | peterbecich | (~Thunderbi@syn-047-229-123-186.res.spectrum.com) peterbecich |
2025-03-19 21:46:54 +0100 | merijn | (~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 252 seconds) |
2025-03-19 21:46:36 +0100 | <tomsmeding> | protip: if you are having computers talk to each other about what they compute, you should instead say "I'm doing distributed computing", that sounds cooler |
2025-03-19 21:42:30 +0100 | alfiee | (~alfiee@user/alfiee) (Ping timeout: 252 seconds) |
2025-03-19 21:40:29 +0100 | fp1 | (~Thunderbi@2001:708:20:1406::1370) (Ping timeout: 260 seconds) |
2025-03-19 21:40:02 +0100 | merijn | (~merijn@host-vr.cgnat-g.v4.dfn.nl) merijn |
2025-03-19 21:38:23 +0100 | <[exa]> | this above is the first attempt because I want a few computers to talk to each other about what they compute and I don't see myself debugging this without usable types |
2025-03-19 21:38:03 +0100 | alfiee | (~alfiee@user/alfiee) alfiee |
2025-03-19 21:37:49 +0100 | <[exa]> | like, for numerical things I still just go to julia |
2025-03-19 21:37:24 +0100 | <[exa]> | that would be great tbh |
2025-03-19 21:35:20 +0100 | <tomsmeding> | I'm hacking on something that can be seen as a competitor to hmatrix, but it's not stable enough yet |
2025-03-19 21:35:05 +0100 | wootehfoot | (~wootehfoo@user/wootehfoot) (Read error: Connection reset by peer) |
2025-03-19 21:34:32 +0100 | <tomsmeding> | to get around the fact that GHC is not good at compiling fast numerical code |
2025-03-19 21:34:24 +0100 | euleritian | (~euleritia@95.90.214.149) |
2025-03-19 21:33:53 +0100 | <tomsmeding> | accelerate retains the higher-order array operations (SOACs, in the lingo of the field) but ceases to be a "normal" library, becoming a deeply embedded DSL instead |
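The classic dot-product example gives a feel for that embedded-DSL style: the Haskell code only builds an expression tree of array operations, and a backend's run function compiles and executes the whole thing at once. (Sketch below uses the reference interpreter backend; in practice one of the accelerate-llvm backends would be used.)

    import qualified Data.Array.Accelerate             as A
    import qualified Data.Array.Accelerate.Interpreter as I

    -- 'dotp' only builds an AST of array operations; nothing runs until a
    -- backend's 'run' compiles and executes the whole expression.
    dotp :: A.Acc (A.Vector Double) -> A.Acc (A.Vector Double) -> A.Acc (A.Scalar Double)
    dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)

    main :: IO ()
    main = do
      let xs = A.fromList (A.Z A.:. 5) [1 .. 5]       :: A.Vector Double
          ys = A.fromList (A.Z A.:. 5) [10, 20 .. 50] :: A.Vector Double
      print (I.run (dotp (A.use xs) (A.use ys)))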
2025-03-19 21:33:12 +0100 | <tomsmeding> | hmatrix does that |
2025-03-19 21:33:05 +0100 | <tomsmeding> | first-order operations like sum, add-two-arrays-elementwise, multiply-two-arrays-elementwise, etc. can be made fast just fine by writing some C code and FFI'ing it in |
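A minimal sketch of that first-order-ops-via-FFI pattern, assuming a tiny C stub compiled and linked separately; the module layout and the add_vec name are illustrative, not anything from the conversation:

    {-# LANGUAGE ForeignFunctionInterface #-}
    module AddVec (addVec) where

    import qualified Data.Vector.Storable         as VS
    import qualified Data.Vector.Storable.Mutable as VSM
    import           Foreign.C.Types              (CInt (..))
    import           Foreign.Ptr                  (Ptr)
    import           System.IO.Unsafe             (unsafePerformIO)

    -- Assumed C stub (add_vec.c), compiled and linked separately:
    --   void add_vec(const double *a, const double *b, double *out, int n)
    --   { for (int i = 0; i < n; i++) out[i] = a[i] + b[i]; }
    foreign import ccall unsafe "add_vec"
      c_add_vec :: Ptr Double -> Ptr Double -> Ptr Double -> CInt -> IO ()

    -- Elementwise addition: the hot loop runs in C, Haskell only marshals pointers.
    addVec :: VS.Vector Double -> VS.Vector Double -> VS.Vector Double
    addVec a b = unsafePerformIO $ do
      let n = min (VS.length a) (VS.length b)
      out <- VSM.new n
      VS.unsafeWith a $ \pa ->
        VS.unsafeWith b $ \pb ->
          VSM.unsafeWith out $ \pout ->
            c_add_vec pa pb pout (fromIntegral n)
      VS.unsafeFreeze out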
2025-03-19 21:32:35 +0100 | <tomsmeding> | any array library in haskell with higher-order operations like map/fold/scan/etc. will not be super-fast |
2025-03-19 21:31:40 +0100 | <tomsmeding> | those remarks about vectorisation apply just as well to massiv |
2025-03-19 21:31:10 +0100 | <[exa]> | yes they're on this trac thing, not github |
2025-03-19 21:31:08 +0100 | <tomsmeding> | with 3.4 being the main branch |
2025-03-19 21:30:58 +0100 | <tomsmeding> | that 4.1.0.1 release on github was an experiment, apparently |
2025-03-19 21:30:48 +0100 | <tomsmeding> | it seems maintained with a new release just a few months ago |
2025-03-19 21:30:36 +0100 | <tomsmeding> | I don't think this is a reason to move from repa though, however many others there may be |
2025-03-19 21:30:16 +0100 | <[exa]> | I'm confused all the way to massiv now |
2025-03-19 21:30:08 +0100 | <tomsmeding> | right |
2025-03-19 21:29:27 +0100 | <tomsmeding> | ok those github releases just make no sense, perhaps? |
2025-03-19 21:29:17 +0100 | <[exa]> | https://groups.google.com/g/haskell-repa/c/ULjCQC8nJL8 |
2025-03-19 21:29:15 +0100 | <[exa]> | well |
2025-03-19 21:28:47 +0100 | <tomsmeding> | this is highly confusing |
2025-03-19 21:28:46 +0100 | <[exa]> | it didn't look like that to me, but maybe 3.4 is newer than 4.1 because of some LTS versioning strategy or something |