Newest at the top
2025-01-18 23:17:55 +0100 | merijn | (~merijn@128-137-045-062.dynamic.caiway.nl) merijn |
2025-01-18 23:14:50 +0100 | Unicorn_Princess | (~Unicorn_P@user/Unicorn-Princess/x-3540542) Unicorn_Princess |
2025-01-18 23:14:42 +0100 | elnegro | (elnegro@r167-57-7-222.dialup.adsl.anteldata.net.uy) (Remote host closed the connection) |
2025-01-18 23:10:00 +0100 | michalz | (~michalz@185.246.207.201) |
2025-01-18 23:07:15 +0100 | merijn | (~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds) |
2025-01-18 23:07:04 +0100 | califax | (~califax@user/califx) califx |
2025-01-18 23:05:56 +0100 | r-sta | (~r-sta@sgyl-37-b2-v4wan-168528-cust2421.vm6.cable.virginm.net) (Quit: Client closed) |
2025-01-18 23:05:52 +0100 | <r-sta> | anyone who wants to be involved, I can email |
2025-01-18 23:05:52 +0100 | califax | (~califax@user/califx) (Remote host closed the connection) |
2025-01-18 23:05:34 +0100 | <r-sta> | I'll be around from time to time, so chime in if interested |
2025-01-18 23:05:13 +0100 | <r-sta> | don't all respond at once, this chan has a habit of deluging you with input |
2025-01-18 23:04:27 +0100 | <r-sta> | having been part of the team currently leading these efforts worldwide, I think it is a fantastic opportunity for Haskell |
2025-01-18 23:04:01 +0100 | <r-sta> | as well as presenting a pretty decent out-of-the-box algorithm that many people might find useful for small optimization tasks, getting something that works much better in higher dimensions is an open problem, referred to as AGI |
2025-01-18 23:02:44 +0100 | <r-sta> | and the class abstractions that provide the learning interface should be at the heart of the community codebase |
2025-01-18 23:02:32 +0100 | merijn | (~merijn@128-137-045-062.dynamic.caiway.nl) merijn |
2025-01-18 23:02:21 +0100 | <r-sta> | there should be *way more pure learning routines* |
2025-01-18 23:02:10 +0100 | <r-sta> | because the learning routines are not easy to access at top level |
2025-01-18 23:01:57 +0100 | <r-sta> | normally you would need some package; a lot of people use MATLAB |
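As a rough illustration of what a pure, top-level learning routine could look like, the sketch below is a guess of my own (not code from r-sta or from any package mentioned here): a complete finite-difference gradient descent as one plain function, no IO, no external dependencies.

    -- A pure learning routine: run n steps of finite-difference gradient
    -- descent on a loss over a parameter vector. Prelude only, no IO.
    descend :: Int -> Double -> ([Double] -> Double) -> [Double] -> [Double]
    descend n eta loss = go n
      where
        go 0 params = params
        go k params = go (k - 1) (zipWith move params (gradient params))
        move p g    = p - eta * g
        gradient params =
          [ (loss (bump i h) - loss (bump i (-h))) / (2 * h)
          | i <- [0 .. length params - 1] ]
          where
            h        = 1e-6
            bump i d = [ if j == i then p + d else p | (j, p) <- zip [0 ..] params ]

    -- e.g. descend 1000 0.1 (\[x, y] -> (x - 3)^2 + (y + 1)^2) [0, 0]
    -- heads towards [3, -1]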
2025-01-18 23:01:45 +0100 | <r-sta> | that's basically what I bring to the table. it would probably outperform any that exist on here, and maybe other places too |
2025-01-18 23:01:27 +0100 | <r-sta> | the one I use presents some pertinent considerations, and is quite good for people wanting something to use in their own projects |
2025-01-18 23:00:50 +0100 | <r-sta> | the idea is that you kind of commit to learning how learning routines work so as to be able to maintain them |
2025-01-18 23:00:19 +0100 | <r-sta> | but I'd quite like to find existing learning routines to wrap as well |
2025-01-18 23:00:07 +0100 | <r-sta> | if people agree to this, then I can start by uploading the learning routine I use |
2025-01-18 22:59:41 +0100 | <r-sta> | and presented in a way that everyone agrees on |
2025-01-18 22:59:30 +0100 | <r-sta> | I'd like all the peripherals I commonly build to be up on Hackage |
2025-01-18 22:58:54 +0100 | <r-sta> | or to help with the maintenance |
2025-01-18 22:58:49 +0100 | <r-sta> | but there is a codebase that could easily be migrated, and I'd like some people from within the community to hand it to |
2025-01-18 22:58:24 +0100 | <r-sta> | which I'm really happy about! |
2025-01-18 22:58:20 +0100 | <r-sta> | in Haskell |
2025-01-18 22:58:16 +0100 | <r-sta> | especially considering all the stuff we have done over recent years with MIT |
2025-01-18 22:57:57 +0100 | <r-sta> | I'm sure there are enough ML contributors that the Haskell effort could be quite reasonable |
2025-01-18 22:57:30 +0100 | <r-sta> | I have a consulting role in the maintenance of my own codebase and of the one that is shared academically |
2025-01-18 22:56:33 +0100 | <r-sta> | for which there are several suggestions, and a bunch of other domain-specific considerations like this |
2025-01-18 22:56:06 +0100 | <r-sta> | a committee could make design decisions like how to handle class abstractions for parametric objects etc |
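One hedged guess at what class abstractions for parametric objects could look like; every name below is illustrative and not taken from any existing codebase. The idea is a class through which any model exposes its parameters as a flat vector, so one generic tuner can drive every instance.

    -- Hypothetical interface: anything whose parameters can be flattened
    -- into a [Double] and rebuilt from one can be tuned generically.
    class Parametric p where
      getParams :: p -> [Double]
      setParams :: [Double] -> p -> p

    -- Toy instance: a straight line y = a * x + b.
    data Line = Line { slope :: Double, intercept :: Double }

    instance Parametric Line where
      getParams (Line a b)  = [a, b]
      setParams [a, b] _    = Line a b
      setParams _      line = line   -- wrong arity: leave the model unchanged

Whether the flat vector should be a list, an unboxed vector, or something shape-indexed is exactly the sort of design decision such a committee would have to settle.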
2025-01-18 22:55:15 +0100 | <r-sta> | ML dev associated with language maintenance seems less pie in the sky than ever rn |
2025-01-18 22:54:58 +0100 | elnegro | (elnegro@r167-57-7-222.dialup.adsl.anteldata.net.uy) elnegro |
2025-01-18 22:54:52 +0100 | <r-sta> | I could easily lead this, and Haskell is the perfect language. it's the difference between new users arriving and thinking "nice compiler", or thinking "nice compiler, and nice ML stuff" |
2025-01-18 22:54:03 +0100 | <r-sta> | I'm looking for either out-of-the-box things to wrap, or help cobbling together something like that for the whole community |
2025-01-18 22:53:34 +0100 | <r-sta> | as you pass a loss function in, you can have an arbitrary optimisation routine advance the initial guess |
2025-01-18 22:52:53 +0100 | <r-sta> | it's producing new parameter vectors |
2025-01-18 22:52:45 +0100 | <r-sta> | this is a stateful thing that needs a loss |
2025-01-18 22:52:37 +0100 | <r-sta> | (a -> Double) -> s -> [Double] -> (s, [Double]) |
2025-01-18 22:52:01 +0100 | <r-sta> | (a -> Double) is like a loss |
2025-01-18 22:51:53 +0100 | <r-sta> | idk if I could be more specific with a type |
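To make that shape concrete, here is a minimal sketch of one possible step function with the type given above, taking a to be [Double] (the loss reads the parameter vector directly) and the state s to be a step size. It is a stand-in derivative-free compass search, not the routine r-sta is describing.

    import Data.List (minimumBy)
    import Data.Ord  (comparing)

    -- One step with shape (a -> Double) -> s -> [Double] -> (s, [Double]),
    -- specialised to a ~ [Double] and s ~ Double (the current step size).
    compassStep :: ([Double] -> Double) -> Double -> [Double] -> (Double, [Double])
    compassStep loss step params
      | loss best < loss params = (step,     best)    -- progress: keep the step size
      | otherwise               = (step / 2, params)  -- no progress: shrink the step
      where
        candidates = [ poke i d | i <- [0 .. length params - 1], d <- [step, -step] ]
        poke i d   = [ if j == i then p + d else p | (j, p) <- zip [0 ..] params ]
        best       = minimumBy (comparing loss) candidates

    -- Threading the state through repeated calls advances the initial guess:
    -- snd $ foldl (\(s, p) _ -> compassStep loss s p) (1.0, [0, 0]) [1 .. 200 :: Int]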
2025-01-18 22:51:30 +0100 | merijn | (~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 244 seconds) |
2025-01-18 22:51:06 +0100 | <r-sta> | the user is the one that has to generate the code! |
2025-01-18 22:50:58 +0100 | <r-sta> | we do parameter search, not combinatorial search; that's the limitation |
2025-01-18 22:50:43 +0100 | <r-sta> | not like, code optimization |
2025-01-18 22:50:16 +0100 | <r-sta> | optimization* |