2026/01/19

Newest at the top

2026-01-19 08:45:43 +0100 merijn (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn
2026-01-19 08:45:39 +0100 acidjnk (~acidjnk@p200300d6e7171938d9c94c377475857c.dip0.t-ipconnect.de) acidjnk
2026-01-19 08:45:13 +0100 humasect (~humasect@dyn-192-249-132-90.nexicom.net) humasect
2026-01-19 08:43:26 +0100 xax__ (~tzh@c-76-115-131-146.hsd1.or.comcast.net) (Quit: zzz)
2026-01-19 08:39:53 +0100 tromp (~textual@2001:1c00:3487:1b00:f96f:f7c1:9b58:4be8)
2026-01-19 08:39:22 +0100 driib3180 (~driib@vmi931078.contaboserver.net) (Ping timeout: 255 seconds)
2026-01-19 08:34:58 +0100 merijn (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 256 seconds)
2026-01-19 08:30:14 +0100 FANTOM (~fantom@87.75.185.177)
2026-01-19 08:29:57 +0100 merijn (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn
2026-01-19 08:24:43 +0100 olivial (~benjaminl@user/benjaminl) benjaminl
2026-01-19 08:24:26 +0100 olivial (~benjaminl@user/benjaminl) (Read error: Connection reset by peer)
2026-01-19 08:21:51 +0100 Square2 (~Square4@user/square) Square
2026-01-19 08:21:37 +0100 Square (~Square@user/square) (Ping timeout: 246 seconds)
2026-01-19 08:20:12 +0100 driib3180 (~driib@vmi931078.contaboserver.net) driib
2026-01-19 08:19:13 +0100 merijn (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 250 seconds)
2026-01-19 08:17:21 +0100 driib3180 (~driib@vmi931078.contaboserver.net) (Quit: Ping timeout (120 seconds))
2026-01-19 08:14:09 +0100 merijn (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn
2026-01-19 08:11:58 +0100 notzmv (~umar@user/notzmv) notzmv
2026-01-19 08:03:15 +0100 merijn (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 245 seconds)
2026-01-19 07:59:16 +0100 vanishingideal (~vanishing@user/vanishingideal) (Ping timeout: 256 seconds)
2026-01-19 07:58:22 +0100 merijn (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn
2026-01-19 07:48:51 +0100 <Guest70> Anybody used Accelerate lately, and had good/bad experiences?
2026-01-19 07:47:23 +0100 merijn (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 256 seconds)
2026-01-19 07:45:53 +0100 haritz (~hrtz@user/haritz) (Quit: ZNC 1.8.2+deb3.1+deb12u1 - https://znc.in)
2026-01-19 07:42:08 +0100 <Guest70> Yes I think so
2026-01-19 07:42:04 +0100 <Guest70> I.e. you wouldn't write a "whole program" in Futhark. So "embedded" in the sense that you might have OpenGL code in your Haskell program
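For reference on the "embedded in the program, not the language" model discussed here: Futhark compiles to an ordinary C library (e.g. `futhark opencl --lib prog.fut`), which a Haskell program can then drive through the FFI. A minimal sketch, assuming a generated header following Futhark's C API naming; the `dotprod` entry point mentioned in the comment is hypothetical:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.Ptr (Ptr)

-- Opaque handles exposed by the Futhark-generated header.
data FutharkContextConfig
data FutharkContext

foreign import ccall unsafe "futhark_context_config_new"
  contextConfigNew :: IO (Ptr FutharkContextConfig)

foreign import ccall unsafe "futhark_context_new"
  contextNew :: Ptr FutharkContextConfig -> IO (Ptr FutharkContext)

-- A hypothetical `entry dotprod ...` in the .fut file would additionally
-- generate a futhark_entry_dotprod function taking Futhark array handles;
-- the Haskell side marshals flat buffers in and out around such calls.
```

So the big pieces of work live in Futhark source files, while the host program stays in Haskell, much like the OpenGL analogy above.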
2026-01-19 07:41:46 +0100 <int-e> Sure. That makes it like OpenCL. Or web assembly.
2026-01-19 07:41:44 +0100 <Axman6> Leary: Thank you, I'll take a look
2026-01-19 07:41:21 +0100 <Guest70> int-e: I understand, but I mean that Futhark code is designed to exist within a larger program, written in another language
2026-01-19 07:40:56 +0100 <int-e> And at least at a glance, Futhark is not 'embedded' in this sense.
2026-01-19 07:40:33 +0100 <Guest70> I should say that I don't want truly "magic" (take a normal haskell program and run it on the GPU), but I'd like to be able to hand off big pieces of work to the GPU
2026-01-19 07:40:27 +0100 <int-e> Guest70: the 'embedded' in 'EDSL' means it's embedded into a programming language (like Haskell), rather than having its own syntax, with lexer and parser.
2026-01-19 07:40:20 +0100 merijn (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn
2026-01-19 07:39:30 +0100 <Guest70> It hands off a lot to OpenCL though
2026-01-19 07:39:30 +0100 <int-e> AFAIK and AFAICS graph reduction and the dynamic allocation that it entails is just not GPU friendly; the computations aren't nearly uniform enough.
2026-01-19 07:39:25 +0100 <Axman6> I wouldn't call Accelerate low level
2026-01-19 07:39:16 +0100 <Guest70> Yes, I think Futhark is still meant to be embedded into your program but not the language
2026-01-19 07:38:49 +0100 <Guest70> Maybe they're similar levels of abstraction?
2026-01-19 07:38:42 +0100 <int-e> Futhark, hmm. Similar but it drops the 'E' from 'EDSL'?
2026-01-19 07:37:40 +0100 <Guest70> Accelerate is very low-level, right? Was hoping for a bit of magic parallelism like futhark
2026-01-19 07:36:34 +0100 <int-e> I mean there are things like https://hackage.haskell.org/package/accelerate but they're not really targeting Haskell as much as that they provide an EDSL for vectorizable computations that can be mapped to GPUs.
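To illustrate the EDSL style int-e describes: Accelerate programs are ordinary Haskell values of type `Acc`, which a backend compiles and runs. A minimal sketch based on the package's canonical dot-product example, run here with the reference interpreter rather than a GPU backend:

```haskell
import qualified Data.Array.Accelerate as A
import qualified Data.Array.Accelerate.Interpreter as I  -- swap for e.g. accelerate-llvm-ptx on a GPU

-- Dot product over Accelerate arrays; this builds a program AST,
-- which `run` hands to the chosen backend for execution.
dotp :: A.Acc (A.Vector Float) -> A.Acc (A.Vector Float) -> A.Acc (A.Scalar Float)
dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)

main :: IO ()
main = do
  let xs = A.fromList (A.Z A.:. 4) [1, 2, 3, 4] :: A.Vector Float
      ys = A.fromList (A.Z A.:. 4) [1, 2, 3, 4] :: A.Vector Float
  print (I.run (dotp (A.use xs) (A.use ys)))
```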
2026-01-19 07:36:28 +0100 <Axman6> Probably Accelerate still
2026-01-19 07:34:54 +0100 vanishingideal (~vanishing@user/vanishingideal) vanishingideal
2026-01-19 07:33:24 +0100 <int-e> don't?
2026-01-19 07:31:21 +0100 merijn (~merijn@host-cl.cgnat-g.v4.dfn.nl) (Ping timeout: 265 seconds)
2026-01-19 07:30:45 +0100 <Guest70> What's the state of the art compiling Haskell to GPU these days?
2026-01-19 07:26:16 +0100 merijn (~merijn@host-cl.cgnat-g.v4.dfn.nl) merijn
2026-01-19 07:22:19 +0100 poscat (~poscat@user/poscat) poscat
2026-01-19 07:19:29 +0100 poscat (~poscat@user/poscat) (Remote host closed the connection)
2026-01-19 07:18:18 +0100 <Leary> Axman6: I haven't used them, but based on Matthew Pickering's talk 'What we have learned about memory profiling in the last 5 years' <https://www.youtube.com/watch?v=8i8HJiBI0lo> you want to try eventlog-live or ghc-debug.
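As a pointer for the ghc-debug suggestion: the process to be inspected typically wraps its main action in `withGhcDebug` from the ghc-debug-stub package, so an external ghc-debug client can attach over a socket and walk the live heap. A minimal sketch; the workload here is just a placeholder:

```haskell
import GHC.Debug.Stub (withGhcDebug)

-- Wrapping main makes the process attachable by a ghc-debug client,
-- which can then pause it and inspect heap objects and retainers.
main :: IO ()
main = withGhcDebug $ do
  let xs = [1 .. 1000000 :: Int]
  print (sum xs)
```

For eventlog-live, the program is instead built and run with the eventlog enabled (e.g. `+RTS -l`) so heap events can be streamed to a live consumer.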