2025/05/24

Newest at the top

2025-05-24 20:11:09 +0200 <monochrom> I don't know anything beyond that.
2025-05-24 20:10:50 +0200 <talismanick> isn't topology only explicitly used in rarer techniques like persistent homology/TDA?
2025-05-24 20:09:07 +0200 <monochrom> (so not just linear algebra, but also diff geom and topology)
2025-05-24 20:08:25 +0200 <monochrom> Although, I now recall a talk from which I learned that research on the brain does model collected data as an n-dim manifold (if there are n measuring probes).
2025-05-24 20:08:14 +0200 <talismanick> find a submanifold*
2025-05-24 20:07:51 +0200 <talismanick> and training it means walking this manifold surface with calculus to find a manifold with much smaller n
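
    A minimal sketch of that "walking the surface with calculus" step, as plain
    numeric gradient descent; every name below is illustrative, not from any ML
    library:

        -- Numeric gradient descent on a function R^n -> R: estimate the
        -- gradient by central differences, then step against it.
        type Point = [Double]  -- a point in R^n (a "long vector of floats")

        grad :: (Point -> Double) -> Point -> Point
        grad f xs = [ (f (bump i h) - f (bump i (-h))) / (2 * h)
                    | i <- [0 .. length xs - 1] ]
          where
            h = 1e-6
            bump i d = [ if j == i then x + d else x | (j, x) <- zip [0 ..] xs ]

        step :: Double -> (Point -> Double) -> Point -> Point
        step lr f xs = zipWith (\x g -> x - lr * g) xs (grad f xs)

        -- Toy loss with minimum at (1, -2); 100 steps walk us there.
        main :: IO ()
        main = print (iterate (step 0.1 loss) [3, 4] !! 100)
          where
            loss [x, y] = (x - 1) ^ 2 + (y + 2) ^ 2
            loss _      = error "loss: expects 2 dimensions"
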
2025-05-24 20:07:07 +0200 <EvanR> I wish my linear algebra course prefaced the innumerable matrix exercises with "this is used in literally everything ever so pay attention"
2025-05-24 20:07:06 +0200 <talismanick> which - for some arbitrarily-large n - maps trivially onto an n-manifold (some arbitrarily curved and connected space with the restriction that it be locally similar to R^n, so calculus is possible)
2025-05-24 20:05:51 +0200 <monochrom> Wait why is it not called simply "the linear algebra hypothesis"? :)
2025-05-24 20:05:18 +0200 <monochrom> Ah. Thanks.
2025-05-24 20:05:07 +0200 <talismanick> for neural nets, the data is internally encoded in long, long vectors of floats
2025-05-24 20:04:48 +0200 <EvanR> it's the 100 - 0 rule, 100% of the practical value comes from 0% of the possible dimensions
2025-05-24 20:04:11 +0200 <monochrom> What is the manifold hypothesis?
2025-05-24 20:02:34 +0200 <talismanick> and whatever happened to all that talk of the manifold hypothesis, anyways? do they still tack on some arbitrarily high dimension rather than speaking of (uniformly-convergent?) sequences of spaces because infinite-dimensional manifolds are too much of a pain to be worth the extra effort?
2025-05-24 20:02:12 +0200 <monochrom> (Instead I paid attention to distinguishing between fast food franchises heh.)
2025-05-24 20:00:17 +0200 <talismanick> like, at what point do you start getting better results with automated optimizations when you instead write "assume we have a Banach space with these additional properties giving us reasonably-fast convergence for special cases x, y, & z"?
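
    One hypothetical way to render that remark in Haskell: a class hierarchy
    where each layer adds structure an optimizing backend could assume, with
    completeness showing up operationally as an iterate-until-close limit.
    None of this is an existing package; all names are made up:

        -- Each class adds structure an optimizer could exploit.
        class Vector v where
          zero  :: v
          add   :: v -> v -> v
          scale :: Double -> v -> v

        -- A normed space: with a norm, "convergence" is even expressible.
        class Vector v => Normed v where
          norm :: v -> Double

        -- Take a sequence to (approximate) convergence, Banach-style.
        limit :: Normed v => Double -> [v] -> v
        limit eps (x : y : rest)
          | norm (add x (scale (-1) y)) < eps = y
          | otherwise                         = limit eps (y : rest)
        limit _ [x] = x
        limit _ []  = error "limit: empty sequence"

        instance Vector Double where
          zero = 0; add = (+); scale = (*)

        instance Normed Double where
          norm = abs

        -- Example: fixed-point iteration for cos x = x.
        main :: IO ()
        main = print (limit 1e-9 (iterate cos 1.0))
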
2025-05-24 19:59:48 +0200 <monochrom> I grew up in a very urban city (Hong Kong) so I never paid attention to tree species and names, be it in English or Chinese.
2025-05-24 19:58:50 +0200 <talismanick> yeah, exactly lol
2025-05-24 19:58:46 +0200 <int-e> . o O ( it's a tree )
2025-05-24 19:58:40 +0200 <monochrom> Well ML just needs 10^10-dimensional for now.
2025-05-24 19:58:38 +0200 <talismanick> and the last commit was a year ago
2025-05-24 19:58:13 +0200 <monochrom> heh OK
2025-05-24 19:58:04 +0200 <talismanick> no clue lol
2025-05-24 19:57:56 +0200 <monochrom> Hrm why did they call it FIR?
2025-05-24 19:57:50 +0200 <talismanick> rewriting ML in terms of infinite-dimensional function spaces could be powerful if only there existed a modicum of a guarantee that if you follow this-this-and-that algebraic restriction you'll get efficient compilation to the GPU
2025-05-24 19:56:39 +0200 <monochrom> (That's what I have been telling first-year students. You want CG? It's linear algebra again. You want ML? It's linear algebra again. You want quantum computing? It's linear algebra again, and this time with complex numbers too!)
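
    A tiny concrete instance of the quantum case, using only base's
    Data.Complex: the Hadamard gate applied to |0> as a 2x2 complex
    matrix-vector product. The type and function names are illustrative:

        import Data.Complex

        type Amp  = Complex Double
        type Ket  = (Amp, Amp)                 -- a0|0> + a1|1>
        type Gate = ((Amp, Amp), (Amp, Amp))   -- 2x2 complex matrix

        applyGate :: Gate -> Ket -> Ket
        applyGate ((a, b), (c, d)) (x, y) = (a * x + b * y, c * x + d * y)

        hadamard :: Gate
        hadamard = ((h, h), (h, -h)) where h = 1 / sqrt 2 :+ 0

        main :: IO ()
        main = print (applyGate hadamard (1, 0))  -- equal superposition
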
2025-05-24 19:56:15 +0200 <talismanick> now we need someone to hook that up together with https://gitlab.com/sheaf/fir as a highly-optimizing backend
2025-05-24 19:55:25 +0200 <monochrom> Also next time when someone asks about quantum computing libraries >:) >:)
2025-05-24 19:55:05 +0200 <monochrom> Hrm, next time someone asks "does Haskell have machine learning libraries" may I answer "yes, it's called vector-space"? >:)
2025-05-24 19:54:11 +0200 <talismanick> https://hackage.haskell.org/package/vector-space is a step in that direction
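
    For context, this is roughly the algebra vector-space packages up: the
    operators ^+^, *^, <.>, and magnitude are the package's own; the toy
    vectors below are illustrative:

        import Data.AdditiveGroup ((^+^))
        import Data.VectorSpace ((*^), (<.>), magnitude)

        main :: IO ()
        main = do
          let u = (1, 2, 3) :: (Double, Double, Double)
              v = (4, 5, 6) :: (Double, Double, Double)
          print (u ^+^ v)      -- vector addition (AdditiveGroup)
          print (2 *^ u)       -- scalar multiplication (VectorSpace)
          print (u <.> v)      -- inner product (InnerSpace)
          print (magnitude v)  -- sqrt (v <.> v)
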
2025-05-24 19:52:10 +0200 <monochrom> I heard that when applying category theory to linear algebra, naturality becomes being independent of the choice of basis.
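
    The textbook instance of that slogan, sketched in LaTeX: the embedding of
    a space into its double dual is basis-free and natural, while an
    isomorphism with the single dual forces a choice of basis.

        % Evaluation into the double dual needs no basis:
        \[ \eta_V : V \to V^{**}, \qquad \eta_V(v)(\varphi) = \varphi(v) \]
        % and it is natural in V: for every linear map f : V -> W,
        \[ \eta_W \circ f = f^{**} \circ \eta_V . \]
        % By contrast, an isomorphism V \cong V^* (V finite-dimensional)
        % depends on a choice of basis; no natural one exists.
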
2025-05-24 19:51:59 +0200 <talismanick> Conal Elliott has written about this
2025-05-24 19:51:19 +0200 <talismanick> for a statically-typed language, that is
2025-05-24 19:50:13 +0200 <talismanick> arrays/rank-polymorphism <-> choice of basis with explicit dimensions, etc
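
    A sketch of that correspondence, loosely after Conal Elliott's categorical
    take on linear algebra: bases as explicit index types, vectors as
    functions from a basis to scalars. All names here are made up:

        -- A vector space is "functions from the basis to Double", and a
        -- linear map is indexed by both bases.
        type Vec b   = b -> Double   -- coordinates relative to basis b
        type Lin a b = b -> Vec a    -- rows of a matrix from basis a to b

        -- Matrix-vector product over a finite, enumerable basis.
        apply :: (Enum a, Bounded a) => Lin a b -> Vec a -> Vec b
        apply m v b = sum [ m b a * v a | a <- [minBound .. maxBound] ]

        data B2 = X | Y     deriving (Enum, Bounded)  -- a basis for R^2
        data B3 = U | V | W deriving (Enum, Bounded)  -- a basis for R^3

        -- A concrete linear map R^2 -> R^3, entry by entry.
        embed :: Lin B2 B3
        embed U X = 1; embed U Y = 0
        embed V X = 0; embed V Y = 1
        embed W X = 1; embed W Y = 1

        main :: IO ()
        main = mapM_ (print . apply embed vec) [U, V, W]
          where
            vec :: Vec B2
            vec X = 2
            vec Y = 3
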