2025/01/16

2025-01-16 00:00:48 +0100 <haskellbridge> <magic_rb> if I have a record such as "Foo { bar :: Text }" and then generate a lens called "bar", I cannot construct the record with "Foo { bar = "foobar" }". Is there any way around this? It complains about an ambiguous reference, though I don't see how the lens is a valid candidate on the LHS of "bar = "foobar"" anyway
2025-01-16 00:02:01 +0100mange(~user@user/mange) mange
2025-01-16 00:07:18 +0100__monty__(~toonn@user/toonn) (Quit: leaving)
2025-01-16 00:08:27 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 00:09:06 +0100Sgeo(~Sgeo@user/sgeo) Sgeo
2025-01-16 00:10:20 +0100sord937_(~sord937@gateway/tor-sasl/sord937) (Quit: sord937_)
2025-01-16 00:12:54 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 00:16:37 +0100 <haskellbridge> <sm> Guest71: I probably wouldn't use a Hackage category or tag for this, I'd rely on a common package name prefix instead
2025-01-16 00:17:25 +0100 <haskellbridge> <sm> if you are able to publish as one package, it will simplify maintenance hugely
2025-01-16 00:17:48 +0100 <haskellbridge> <sm> maintenance and further packaging
2025-01-16 00:19:50 +0100notzmv(~umar@user/notzmv) notzmv
2025-01-16 00:20:30 +0100 <haskellbridge> <magic_rb> got it, somehow "DuplicateRecordFields" did it
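A minimal sketch (with illustrative names, not magic_rb's actual code) of what DuplicateRecordFields changes here: with the extension on, the constructor in a record-construction expression pins down which "bar" is meant, so other bindings named "bar" in scope are no longer candidates.

    {-# LANGUAGE DuplicateRecordFields #-}
    {-# LANGUAGE OverloadedStrings #-}
    module Example where

    import Data.Text (Text)

    -- Two owners of the field name "bar"; Qux is hypothetical, only here to
    -- force the ambiguity that DuplicateRecordFields resolves.
    data Foo = Foo { bar :: Text }
    data Qux = Qux { bar :: Int }

    -- The constructor fixes the record type, so the "bar" on the LHS of
    -- "bar = ..." is unambiguous.
    aFoo :: Foo
    aFoo = Foo { bar = "foobar" }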
2025-01-16 00:23:00 +0100stiell(~stiell@gateway/tor-sasl/stiell) (Ping timeout: 264 seconds)
2025-01-16 00:23:50 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 00:25:17 +0100stiell(~stiell@gateway/tor-sasl/stiell) stiell
2025-01-16 00:28:32 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 272 seconds)
2025-01-16 00:29:48 +0100sawilagar(~sawilagar@user/sawilagar) (Ping timeout: 272 seconds)
2025-01-16 00:31:11 +0100Midjak(~MarciZ@82.66.147.146) (Quit: This computer has gone to sleep)
2025-01-16 00:32:16 +0100 <Guest71> sm: Thanks for your input. I was planning to use a prefix already, but I was wondering if there were additional conventions around grouping packages.
2025-01-16 00:32:16 +0100 <Guest71> On the topic of monopackaging: I figured users would prefer to be able to pick micropackages so as to avoid pulling in dependencies for pieces of the project they were not planning to use (reducing binary size and total logic surface). Am I wrong about this?
2025-01-16 00:33:39 +0100saulosilva(~saulosilv@181.216.220.107) (Quit: Client closed)
2025-01-16 00:39:13 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 00:39:47 +0100ljdarj1(~Thunderbi@user/ljdarj) ljdarj
2025-01-16 00:41:00 +0100califax(~califax@user/califx) (Ping timeout: 264 seconds)
2025-01-16 00:41:43 +0100califax(~califax@user/califx) califx
2025-01-16 00:42:03 +0100ljdarj(~Thunderbi@user/ljdarj) (Ping timeout: 245 seconds)
2025-01-16 00:42:03 +0100ljdarj1ljdarj
2025-01-16 00:43:39 +0100Guest69(~Guest69@2601:642:4103:1b0:a477:319d:e581:377a)
2025-01-16 00:43:40 +0100 <haskellbridge> <sm> you're not wrong, sometimes it is worth providing separate packages for that reason. But there's a maintenance and packaging cost to be carried for evermore; fewer, larger packages will be less work
2025-01-16 00:44:21 +0100 <haskellbridge> <sm> how many packages are you contemplating ?
2025-01-16 00:46:00 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 00:46:07 +0100mrmr155334346318(~mrmr@user/mrmr) mrmr
2025-01-16 00:46:20 +0100 <jackdk> Guest71: and also, how large are these packages (and their combination)?
2025-01-16 00:46:20 +0100 <haskellbridge> <sm> also sometimes even users prefer one or a few big packages, even if it's not optimal in bandwidth/disk space, it simplifies their life
2025-01-16 00:48:35 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 00:51:08 +0100 <Guest71> sm: Around 7 at the moment, but could potentially grow (basically there is a collection of implementations for a large interface, each implementation is its own repo, eventually more could be added for more use-cases). I am not too concerned about maintenance burden because the plan is to build a CI/CD pipeline to take care of that. Ideally I'd do
2025-01-16 00:51:08 +0100 <Guest71> that once, tweak slightly for each repo and then never think about it again. Should I be worried about something?
2025-01-16 00:51:09 +0100 <Guest71> jackdk: ~10 kloc a piece. For 7 repos that would be ~70 kloc.
2025-01-16 00:51:38 +0100 <geekosaur> you might look at amazonka for an example of how to do it?
2025-01-16 00:52:28 +0100 <jackdk> Amazonka IMHO is a special case because so much of it is autogenerated. But even there we eventually hope to trial multiple cabal sublibraries since we have to update the whole universe in lockstep anyway
2025-01-16 00:54:03 +0100 <jackdk> I'm not sure I'd call a 10kLoC package a "micropackage". That's no `left-pad` or `is-number`, and the PVP implications for a monopackage are not ideal: any backwards-incompatible change in one part will bump the PVP major version and make anyone not depending on the changed part go back and check for other breakage.
2025-01-16 00:54:09 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 265 seconds)
2025-01-16 00:55:22 +0100 <jackdk> That said, now that I've read Guest71's question properly, breaking up an interface to a large API is probably a reasonable case for having at least multiple distinct sublibraries if not distinct packages (depending on how much lockstep updating you need to do if something changes)
2025-01-16 00:57:23 +0100lol_(~lol@2603:3016:1e01:b9c0:49df:554e:a17b:a07c)
2025-01-16 00:58:28 +0100 <jackdk> Is there any reason a monorepo with a cabal.project won't work for you here? That would save you building a lot of CI machinery, which is time you can spend writing more Haskell
2025-01-16 00:59:28 +0100 <jackdk> Amazonka is 3.8×10⁶ LoC, the vast majority of which is autogenerated, and it works fine enough
2025-01-16 01:00:51 +0100jcarpenter2(~lol@2603:3016:1e01:b9c0:794b:ce9f:2a3d:41ae) (Ping timeout: 252 seconds)
2025-01-16 01:02:39 +0100 <Guest71> jackdk: a monopackage would work. I'm just thinking that from a UX point of view users in general do not want to pull stuff they will not use: larger binaries, a larger logic surface means potentially more room for misuse/bugs, potentially worse test coverage, and as you said, now your API is less stable (more moving parts).
2025-01-16 01:03:16 +0100ljdarj1(~Thunderbi@user/ljdarj) ljdarj
2025-01-16 01:03:25 +0100 <haskellbridge> <sm> Guest71: it may be a good move, but you will have to think about it from time to time, even with automation, if you care about packaging (one or more of your packages will break with X ghc/dep version / on Y platform, disrupting your/packagers' scripts... etc.)
2025-01-16 01:03:37 +0100 <jackdk> Guest71: I mean monorepo, not monopackage. Cabal (and Stack too but I don't know the details) let you maintain several cabal packages in a single project repository, which you can then individually submit to Hackage.
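For reference, a minimal cabal.project for the monorepo layout jackdk describes (the directory names are made up); each listed directory holds one package with its own .cabal file, and each can still be uploaded to Hackage separately:

    -- cabal.project at the repository root
    packages:
      ./iface
      ./impl-a
      ./impl-b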
2025-01-16 01:03:49 +0100ljdarj(~Thunderbi@user/ljdarj) (Ping timeout: 265 seconds)
2025-01-16 01:03:50 +0100ljdarj1ljdarj
2025-01-16 01:04:42 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 01:05:24 +0100lbseale(~quassel@user/ep1ctetus) ep1ctetus
2025-01-16 01:05:28 +0100 <jackdk> But I would also look at sublibraries which _should_ be usable everywhere now? Amazonka doesn't use them (yet?) because it predates them being widely available. Then you have a larger initial download but everything's in one place. Users still wouldn't compile things they don't need.
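A hedged sketch of that sublibrary option (package and module names invented): since cabal-version 3.0 a package may declare extra named libraries, and marking one "visibility: public" (consumable from other packages with cabal-install 3.4+) lets users depend on only the piece they need, e.g. "build-depends: mypkg:impl-a".

    -- in mypkg.cabal, cabal-version: 3.0
    library
      exposed-modules:  MyPkg.Iface
      build-depends:    base
      hs-source-dirs:   src
      default-language: Haskell2010

    library impl-a
      visibility:       public
      exposed-modules:  MyPkg.Impl.A
      build-depends:    base, mypkg
      hs-source-dirs:   impl-a
      default-language: Haskell2010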
2025-01-16 01:06:09 +0100acidjnk_new(~acidjnk@p200300d6e7283f02edd754543fe6660f.dip0.t-ipconnect.de) (Ping timeout: 248 seconds)
2025-01-16 01:07:02 +0100 <Guest71> sm: I suppose that is always a possibility. But assuming I fix that for one package, then I would just apply the diff to the rest and be done, no?
2025-01-16 01:08:56 +0100 <Guest71> jackdk: It could work as a monorepo too. In fact, it used to be a monorepo and was later broken up (even the namespacing was mostly kept). Do you believe a monorepo with micropackaging would be better?
2025-01-16 01:09:38 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 265 seconds)
2025-01-16 01:11:18 +0100 <jackdk> I consolidated work's split repositories into a monorepo, which has since grown to ~150kLoC. It was a large improvement because maintenance often meant topologically sorting the repositories in your head and working from the leaves up, and if something broke you often had to go back to the leaves and start again.
2025-01-16 01:11:32 +0100 <jackdk> The monorepo has been a massive QoL improvement. But the depgraph is relatively deep; if your project's depgraph is wide and shallow you might not hit this problem. Why did you split it up?
2025-01-16 01:12:49 +0100 <jackdk> Also, perhaps the term "split packaging" is more accurate than "micropackaging"? The latter makes me think of `left-pad` and single-function libraries.
2025-01-16 01:15:15 +0100 <jackdk> The way I see it, you have several options: 1a. monorepo, monopackage; 1b. monorepo, single package with sublibraries; 1c. monorepo, split packages; 2a. multi-repo, automated consolidation to single package; 2b. multi-repo, automated consolidation to package-with-sublibraries; 3. multi-repo, individual packages
2025-01-16 01:18:22 +0100 <jackdk> My gut says to avoid 1a because of the large compilation load it'll ask of your users. 1b and 1c seem reasonable, and since you're looking at updating bindings to a single broad interface I could imagine the versioning updating in lockstep. 2a and 2b look to me like they introduce a lot of devops complexity for benefit that I cannot see. 3 could cause a lot of dependency troubles if you have a deep depgraph.
2025-01-16 01:19:41 +0100Tuplanolla(~Tuplanoll@91-159-69-59.elisa-laajakaista.fi) (Quit: Leaving.)
2025-01-16 01:20:04 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 01:20:05 +0100 <Guest71> jackdk: Regarding the reason for the split, It was relatively easy to do (each component was contained in a submodule anyway) and figured there was no downside to doing it. On the upside, it enforces API boundaries, keeps your git history relevant (reverting or rebasing across submodules is no fun), and just plain keeps your workstation resource
2025-01-16 01:20:10 +0100Guest69(~Guest69@2601:642:4103:1b0:a477:319d:e581:377a) (Ping timeout: 240 seconds)
2025-01-16 01:21:48 +0100 <jackdk> I don't understand why you brought up git submodules. I agree that they are no fun - I'd have used wholly independent repositories or a single repository. A single cabal project will enforce package boundaries
2025-01-16 01:22:06 +0100 <jackdk> (between the package it contains)
2025-01-16 01:24:46 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 01:26:53 +0100 <haskellbridge> <sm> Guest71: I'm just saying that all else being equal, many packages cost more to understand/maintain/package than one package. But all else isn't equal, it'll be an engineering judgement call
2025-01-16 01:28:33 +0100 <haskellbridge> <sm> long term maintainer and packager capacity, number and type of users are things to consider
2025-01-16 01:28:53 +0100 <Guest71> jackdk: Regarding the reason for the split, It was relatively easy to do (each component was contained in a submodule anyway) and figured there was no downside to doing it. On the upside, it enforces API boundaries, keeps your git history relevant (reverting or rebasing across submodules is no fun), and just plain keeps your workstation resource
2025-01-16 01:28:54 +0100 <Guest71> usage lower. I guess also mapping packages to repos 1-to-1 seemed better design all else being equal, but maybe I am wrong about that one. That's why I wanted input from the community.
2025-01-16 01:29:28 +0100 <haskellbridge> <sm> ot
2025-01-16 01:29:41 +0100 <Guest71> Oops, sorry about that. My message got replaced with a copy of the old thing.
2025-01-16 01:29:58 +0100 <haskellbridge> <sm> it's worth thinking about, because you can't erase things uploaded to hackage, or easily rearrange VC history
2025-01-16 01:30:02 +0100 <jackdk> I just had the strangest case of deja vu
2025-01-16 01:30:34 +0100 <haskellbridge> <magic_rb> I've heard that overly polymorphic code doesn't specialize well, but then I'd ask why? Because couldn't GHC track what different instantiations of each toplevel binding occur and then, based on some metric (plain count, expected machine code size), specialize automatically? I assume something like this happens already, but the question is why not well enough that "overly polymorphic" ceases to be a problem
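For context on the question: GHC does specialise automatically within a module, and across modules when the unfolding is available (e.g. via INLINABLE, or flags such as -fexpose-all-unfoldings and -fspecialise-aggressively), but it does not chase every instantiation everywhere on its own. The usual manual controls look like this (the function itself is purely illustrative):

    {-# INLINABLE sumSquares #-}
    sumSquares :: Num a => [a] -> a
    sumSquares = foldr (\x acc -> x * x + acc) 0

    -- request dedicated, dictionary-free copies at concrete types
    {-# SPECIALIZE sumSquares :: [Int]    -> Int    #-}
    {-# SPECIALIZE sumSquares :: [Double] -> Double #-}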
2025-01-16 01:32:44 +0100 <haskellbridge> <sm> will docs/changelogs be easy to segment by package ? Will you always release all packages in lockstep, or will you allow say a bugfix release to an individual package, complicating release scripts ? etc.
2025-01-16 01:33:25 +0100 <haskellbridge> <sm> issue tracker per package ?
2025-01-16 01:33:31 +0100 <Guest71> I was going to say that I wasn't meaning to say git submodules, but Haskell submodules (let's call them subprojects). What I was trying to say is that keeping a monorepo means that a subproject's history is merged together with everything else, which means more work around handling history.
2025-01-16 01:33:32 +0100 <Guest71> As for the boundary enforcement, it is true that you may enforce it with cabal. I suppose in my mind that is a sort of "softer" enforcement than splitting repos (boundary enforcement is regulated by a file inside the project repo), so I just reached for the most generic solution. Maybe this was a bad call?
2025-01-16 01:34:24 +0100 <haskellbridge> <sm> I think we don't know enough to have (more of :) an opinion
2025-01-16 01:35:29 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 01:36:18 +0100JuanDaugherty(~juan@user/JuanDaugherty) JuanDaugherty
2025-01-16 01:36:40 +0100 <Guest71> sm: issue tracker per package, yes. Docs are basically pure Haddock, so they are split already. I would not release in lockstep.
2025-01-16 01:37:43 +0100 <jackdk> That Haskell calls a source file a "module" means I find it confusing to call a unit of project organisation a "submodule", especially when that word had meaning to `git`. For clarity: a cabal "package" has multiple "components" (library, named sublibrary, executable, test suite, etc). One or more "packages" on the filesystem can be collected into a "project", where `cabal.project` lets you apply settings across them.
2025-01-16 01:38:01 +0100 <haskellbridge> <sm> will people often want to install all the packages together ? It sounds like not
2025-01-16 01:38:56 +0100 <Guest71> sm: I'm expecting not, except for users that happen to have different use-cases that could be better served by different implementations
2025-01-16 01:39:10 +0100 <JuanDaugherty> 'source module" is an ancient idiom for a single file of code text
2025-01-16 01:39:37 +0100 <JuanDaugherty> as opposed to "object module"
2025-01-16 01:39:56 +0100 <haskellbridge> <sm> +1 to jackdk. "package" is the right term for units of hackage/cabal/stack-stuff
2025-01-16 01:40:03 +0100 <geekosaur> (for which we have IBM to blame, IIRC)
2025-01-16 01:40:04 +0100sprotte24(~sprotte24@p200300d16f35c200f4f310a9fb58ced0.dip0.t-ipconnect.de) (Read error: Connection reset by peer)
2025-01-16 01:40:05 +0100 <Guest71> Okay, point taken. What's the best way to call everything under a namespace?
2025-01-16 01:40:16 +0100 <JuanDaugherty> yes them
2025-01-16 01:40:19 +0100 <geekosaur> which kind of namespace?
2025-01-16 01:40:33 +0100 <geekosaur> if you mean module namespaces, they're pretty fake
2025-01-16 01:40:36 +0100 <Guest71> As in I have lib/Foo/Bar/Baz.hs
2025-01-16 01:40:40 +0100 <haskellbridge> <sm> a project can consist of one or more repos which may store one or more packages
2025-01-16 01:40:44 +0100 <Guest71> What do I call everything under Foo/Bar?
2025-01-16 01:40:44 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 272 seconds)
2025-01-16 01:40:51 +0100 <Guest71> The modules called Foo.Bar.*
2025-01-16 01:40:57 +0100 <geekosaur> modules
2025-01-16 01:40:58 +0100 <jackdk> Without knowing what the mysterious API is, it sounds like the bindings are more loosely coupled than Amazonka's. This makes me lean toward either option 1c (monorepo with split packages) or 3 (split repos with individual release infrastructure)
2025-01-16 01:41:15 +0100 <geekosaur> there is no real concept of related-ness; a different package could also use the Foo.Bar "namespace"
2025-01-16 01:41:41 +0100 <JuanDaugherty> cause in that time computers were called ibm machines so their usages became industry norms, even tho there was actually much greater arch diversity
2025-01-16 01:41:57 +0100 <Guest71> I can tell more about the project, it's not a secret. I just don't want to overwhelm you with noise.
2025-01-16 01:42:59 +0100 <Guest71> The project originally had a form of Foo.Iface, Foo.Bar.Baz1, Foo.Bar.Baz2, Foo.Bar.Baz3... all the BazN modules implement the Foo.Iface
2025-01-16 01:43:21 +0100 <JuanDaugherty> "module" is more of a construct and often enough contentious, eg. in prolog
2025-01-16 01:43:37 +0100 <jackdk> I think knowing the specific interface would be extremely helpful, if you can provide it.
2025-01-16 01:43:45 +0100 <jackdk> At least its name, I mean.
2025-01-16 01:44:49 +0100califax(~califax@user/califx) (Remote host closed the connection)
2025-01-16 01:45:08 +0100califax(~califax@user/califx) califx
2025-01-16 01:45:51 +0100 <Guest71> jackdk: it's called Nera
2025-01-16 01:46:13 +0100 <Guest71> Though I imagine you were hoping for something descriptive
2025-01-16 01:47:13 +0100xff0x(~xff0x@2405:6580:b080:900:8310:6e2:3d63:5127) (Ping timeout: 248 seconds)
2025-01-16 01:47:38 +0100 <jackdk> Ideally a link to a high-level API page. So far I have found a page on cloud-based aged care software and a PDF report from a consultancy about Smart Meters for the Australian Energy Market Commission. Please, help us help you.
2025-01-16 01:47:44 +0100 <JuanDaugherty> can a namespace just be a namespace? names r powerful, their rectification is a whole deal
2025-01-16 01:48:15 +0100 <Guest71> Oh, sorry, it isn't public as of right now.
2025-01-16 01:48:40 +0100 <Guest71> I'm currently in the process of trying to publish it, hence this conversation :)
2025-01-16 01:49:06 +0100 <Guest71> It basically combines a bunch of numeric interfaces and adds some other stuff on top
2025-01-16 01:49:11 +0100 <Guest71> It's a numeric library
2025-01-16 01:50:04 +0100 <Guest71> JuanDaugherty: sorry, I'm not sure what you meant by that (re: namespaces being just namespaces)
2025-01-16 01:50:09 +0100JuanDaughertyColinRobinson
2025-01-16 01:50:52 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 01:51:33 +0100 <Guest71> jackdk: were you planning to look at the repo?
2025-01-16 01:52:22 +0100 <Guest71> Or rather, just tell me what information you would like about the project.
2025-01-16 01:52:42 +0100 <jackdk> I was planning on getting an idea of the problem you are trying to solve, because my head is in the HTTP API space by default. I don't know much about numeric code and how people expect it to be organised.
2025-01-16 01:54:48 +0100 <ColinRobinson> 'unit of compilation' is the clear term, but you generally only see that in specs
2025-01-16 01:55:04 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 244 seconds)
2025-01-16 01:55:31 +0100 <jackdk> At this point I'm not sure I know what questions to ask. I will point out that you can soft-launch a package by uploading it to a public repo and working from there. Both Cabal and Stack let you pull in code from git repos, and then hopefully you can attract more users and find out what splits they're looking for
2025-01-16 01:57:00 +0100 <Guest71> The interface is a collection of numeric operations and the implementations are well... different ways to implement each of them.
2025-01-16 01:57:00 +0100 <Guest71> As an example: there are many ways to compute a logarithmic expression. So, you have different sub-projects that implement the different ways. Depending on your algorithm or problem as a user of the library, you may prefer one or the other. Maybe you have different use-cases, and you want to use more than one.
2025-01-16 01:57:33 +0100 <haskellbridge> <sm> good idea! validate your repo/package organisation a little before committing to hackage uploads
2025-01-16 01:59:41 +0100 <jackdk> I'm reminded of https://hackage.haskell.org/package/ad which has a bunch of different automatic differentiation methods in a single library. If the scope of each operation is relatively small, a similar approach may serve you well. It can be annoying to ask your users to go back to their cabal file and re-jig their build just to experiment with another implementation of the same method.
2025-01-16 02:00:35 +0100 <haskellbridge> <sm> +1
2025-01-16 02:00:37 +0100 <Guest71> The interface is actually remarkably boring, and by far the least important part. It only exists because I want my users to be able to swap implementations transparently and benchmark the differences for their workloads easily to make a more informed decision.
2025-01-16 02:01:20 +0100 <jackdk> OTOH, I consider it a good thing that all the different regex engines in the `regex-*` universe are in different packages, because you usually just pick one (either TDFA or PCRE) and stick with it. I wouldn't use `regex-base` as a guide for how to design an interface usable by multiple implementations, though. My head spins every time I look at it.
2025-01-16 02:02:53 +0100 <haskellbridge> <sm> if the different packages have heterogenous dependencies, such that some of them may be hard to build on certain platforms / with certain GHC versions, that could be a reason to segment. If the deps are the same for all of them, and the only issue is downloading unused code.. that's not a big cost
2025-01-16 02:03:02 +0100 <Guest71> jackdk: From skimming the summary that looks exactly like the kind of project I'm developing
2025-01-16 02:03:38 +0100vanishingideal(~vanishing@user/vanishingideal) (Quit: Lost terminal)
2025-01-16 02:03:55 +0100 <jackdk> IME it is harder to take back over-engineered solutions than it is to add smarts where it's needed. I would start with (for example) a package for calculating "logarithmic expressions" and put them all in there. If you need to split the package later, you can then extract an interface and create several implementations, but the reverse is harder: you can't un-claim a package in the Hackage namespace.
2025-01-16 02:06:14 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 02:06:34 +0100 <Guest71> jackdk: so you are advising to publish a monopackage and split later if users want it?
2025-01-16 02:08:05 +0100 <Guest71> sm: the implementations should be transparently interchangeable, so they should have the same dependencies and build under the same stack.
2025-01-16 02:09:04 +0100 <haskellbridge> <sm> I think we're advising a cautious less is more approach: yes start with one repo and one package, on github (eg); get a sense of how this will work; if it seems ok, publish that to hackage
2025-01-16 02:09:16 +0100 <Guest71> jackdk: yet, what you say about the regex package is what I imagine my users thinking: I just want the implementation X, don't make me pull Y if I'm not going to use it
2025-01-16 02:09:35 +0100 <jackdk> 70kLoC is about 4× the size of `lens`. I don't understand numeric code well enough to know whether that's a reasonable size for a library or library family. My advice would be to upload a monopackage to a public forge site, hold off on the Hackage release while you get users and see where the convenient cuts are, and split if necessary.
2025-01-16 02:09:49 +0100 <jackdk> sm: +1
2025-01-16 02:10:01 +0100 <haskellbridge> <sm> most users will rather pull one simple package than have to think about which variant they need
2025-01-16 02:10:19 +0100 <haskellbridge> <sm> especially if they're going to be comparing implementations as you say
2025-01-16 02:10:24 +0100vanishingideal(~vanishing@user/vanishingideal) vanishingideal
2025-01-16 02:10:37 +0100 <haskellbridge> <sm> but YMMV and you can always split up the package later as jack says
2025-01-16 02:10:42 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 246 seconds)
2025-01-16 02:10:53 +0100 <jackdk> Especially if the "experimentation phase" means "add and benchmark them all" and that's something that most users will want to do.
2025-01-16 02:12:52 +0100 <haskellbridge> <sm> if you unnecessarily proliferate packages and it's not providing real benefit, I promise some day as maintainer you will rue the day
2025-01-16 02:14:04 +0100 <jackdk> Knowing little about your problem space, and having tremendous respect for Ed's library design sensibilities, I would lean towards following `ad` as a model over `regex-*` since it sounds closer to what you want to do. Note also that the Kmettverse of category-theory-flavoured packages started as a single `category-extras` package that was split when it became too large
2025-01-16 02:14:20 +0100 <Guest71> So my theory is that for initial setup my users would pull an umbrella package that re-exports the smaller packages, benchmark their code, pick an implementation (or implementations) and then change their dependencies to only pull the one implementation.
2025-01-16 02:14:20 +0100 <jackdk> sm: +1, again
2025-01-16 02:14:55 +0100ljdarj(~Thunderbi@user/ljdarj) (Ping timeout: 244 seconds)
2025-01-16 02:16:20 +0100 <Guest71> That ad package does look like the class of problem that my library tries to solve, though
2025-01-16 02:16:20 +0100 <jackdk> I think it is easier to add that engineering later, should it prove necessary, than it will be to remove it should it prove unnecessary. It also won't leave a trail of obsolete packages on Hackage. YAGNI is a great rule of thumb, particularly if you can get your code into users' hands (via a soft-launch on a public repo) and find out whether you are really GNI.
2025-01-16 02:17:58 +0100Jeanne-Kamikaze(~Jeanne-Ka@79.127.217.40) Jeanne-Kamikaze
2025-01-16 02:18:31 +0100 <jackdk> The risk is that you do all this work and people bounce off your library because they don't understand your abstraction over all the implementations. I have seen it many times with `regex-*`: beginners come in with a problem and know that regexen are _a_ solution to that problem, and get completely lost. If your target audience includes people dipping into Haskell from other languages, there's a lot to be said for simple packages.
2025-01-16 02:20:30 +0100 <jackdk> Also, I would expect numeric code to be pure, and the "interface" to be "all these functions have the same type, so I can apply the one I want". If FFI or complex data structures are involved, perhaps that's not the case, but it's an ideal. I consider https://hackage.haskell.org/package/search-algorithms a beautiful package, because its interface is just functions.
2025-01-16 02:21:06 +0100acidjnk_new(~acidjnk@p200300d6e7283f246994c33ea14f59d4.dip0.t-ipconnect.de) acidjnk
2025-01-16 02:21:37 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 02:21:49 +0100 <Guest71> I am now realizing I made no emphasis on the umbrella package and it might address many of the concerns you raise.
2025-01-16 02:21:49 +0100 <Guest71> Because the plan was to have an umbrella project that reexports all the other smaller packages. Semantically, it would look exactly like the monopackage.
2025-01-16 02:22:09 +0100 <Guest71> (Ideally you'd use it a little at first and then commit to one of the implementations)
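A sketch of the umbrella idea, assuming hypothetical module names modelled on the Foo.Iface / Foo.Bar.BazN layout mentioned earlier: the umbrella package's library does nothing but re-export the implementation packages, so depending on it is semantically the same as depending on all of them.

    -- the umbrella package's single module
    module Nera.All
      ( module Nera.Iface
      , module Nera.Impl.Baz1
      , module Nera.Impl.Baz2
      ) where

    import Nera.Iface
    import Nera.Impl.Baz1
    import Nera.Impl.Baz2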
2025-01-16 02:23:10 +0100 <Guest71> jackdk: the different implementations have different data requirements so they do handle special purpose data structures. Hence the interface (a type class) and not just a single type.
2025-01-16 02:23:26 +0100 <Guest71> No FFI though, pure Haskell.
2025-01-16 02:23:40 +0100 <Guest71> (Portable Haskell too, if that matters)
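A minimal sketch of that shape, with invented names and operations: one class over the representation type, and each implementation bringing its own data structure as the instance head.

    class LogExpr r where
      fromDouble :: Double -> r
      evalLog    :: r -> Double

    newtype Naive  = Naive Double          -- direct representation
    newtype Scaled = Scaled (Double, Int)  -- pretend m * 2^e representation

    instance LogExpr Naive where
      fromDouble = Naive
      evalLog (Naive x) = log x

    instance LogExpr Scaled where
      fromDouble x = Scaled (x, 0)
      evalLog (Scaled (m, e)) = log m + fromIntegral e * log 2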
2025-01-16 02:24:25 +0100 <Guest71> Then again, as you say, semantic equivalence means that I can split off at any time in the future and not even break API
2025-01-16 02:24:35 +0100 <Guest71> That would for sure be more conservative
2025-01-16 02:24:36 +0100 <jackdk> Beware that Haddocks' rendering of module re-exports is not the clearest, especially for new users. I still think I'd do the simplest thing that could possibly work, which still sounds like a single package.
2025-01-16 02:26:30 +0100ColinRobinson(~juan@user/JuanDaugherty) (Quit: ColinRobinson)
2025-01-16 02:26:46 +0100 <Guest71> Making the monopackage from the microrepos would probably just entail using git submodules and crafting a .cabal file for the purpose. So basically easy.
2025-01-16 02:27:02 +0100 <jackdk> This conversation has gone on for nearly 90 minutes without a clear resolution, and I'm sorry but I have to tap out and focus on work. I think you should soft-launch a monopackage to a git forge and try and drum up some users. That would give you more information about what your real users need from your library, and how to split it up; and give us something to look at so we're making recommendations based on something that we can see.
2025-01-16 02:27:54 +0100 <Guest71> jackdk: Oh, sorry to keep you busy. Thank you so much!
2025-01-16 02:28:39 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 260 seconds)
2025-01-16 02:28:49 +0100 <jackdk> I still don't understand (but would like to, once it's in a public repo) what numeric algorithms you're implementing and whether 4× the LoC of `lens` indicates opportunities for simplification or for library splitting. Best of luck Guest71, I hope you figure out something that works for you and your users.
2025-01-16 02:30:59 +0100machinedgod(~machinedg@d108-173-18-100.abhsia.telus.net) (Ping timeout: 260 seconds)
2025-01-16 02:32:23 +0100califax(~califax@user/califx) (Remote host closed the connection)
2025-01-16 02:32:44 +0100otto_s(~user@p5b044c54.dip0.t-ipconnect.de) (Ping timeout: 260 seconds)
2025-01-16 02:34:29 +0100otto_s(~user@p4ff27909.dip0.t-ipconnect.de)
2025-01-16 02:38:56 +0100xff0x(~xff0x@fsb6a9491c.tkyc517.ap.nuro.jp)
2025-01-16 02:39:40 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 02:43:25 +0100califax(~califax@user/califx) califx
2025-01-16 02:44:19 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 02:45:02 +0100 <Guest71> sm: Thanks to you as well, you were very helpful!
2025-01-16 02:47:01 +0100 <haskellbridge> <sm> 👍
2025-01-16 02:49:30 +0100 <Guest71> geekosaur, ColinRobinson, thanks for your input as well
2025-01-16 02:50:01 +0100Sgeo(~Sgeo@user/sgeo) (Read error: Connection reset by peer)
2025-01-16 02:53:12 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) peterbecich
2025-01-16 02:55:02 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 03:03:07 +0100acidjnk_new(~acidjnk@p200300d6e7283f246994c33ea14f59d4.dip0.t-ipconnect.de) (Read error: Connection reset by peer)
2025-01-16 03:04:15 +0100mulk(~mulk@p5b112493.dip0.t-ipconnect.de) (Ping timeout: 246 seconds)
2025-01-16 03:04:18 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 276 seconds)
2025-01-16 03:13:22 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) (Ping timeout: 265 seconds)
2025-01-16 03:14:55 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 03:17:50 +0100Jeanne-Kamikaze(~Jeanne-Ka@79.127.217.40) (Quit: Leaving)
2025-01-16 03:19:11 +0100mulk(~mulk@pd9514590.dip0.t-ipconnect.de) mulk
2025-01-16 03:19:27 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 03:21:04 +0100user363627(~user@user/user363627) (Remote host closed the connection)
2025-01-16 03:23:20 +0100smalltalkman(uid545680@id-545680.hampstead.irccloud.com) smalltalkman
2025-01-16 03:23:35 +0100chexum(~quassel@gateway/tor-sasl/chexum) (Remote host closed the connection)
2025-01-16 03:24:06 +0100chexum(~quassel@gateway/tor-sasl/chexum) chexum
2025-01-16 03:30:18 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 03:34:34 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 03:43:10 +0100vanishingideal(~vanishing@user/vanishingideal) (Remote host closed the connection)
2025-01-16 03:44:06 +0100Sgeo(~Sgeo@user/sgeo) Sgeo
2025-01-16 03:45:40 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 03:49:58 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 245 seconds)
2025-01-16 03:52:46 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 03:56:37 +0100Square(~Square@user/square) Square
2025-01-16 04:03:03 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 246 seconds)
2025-01-16 04:06:23 +0100vanishingideal(~vanishing@user/vanishingideal) vanishingideal
2025-01-16 04:06:51 +0100L29Ah(~L29Ah@wikipedia/L29Ah) (Read error: Connection timed out)
2025-01-16 04:06:59 +0100remedan(~remedan@ip-62-245-108-153.bb.vodafone.cz) (Quit: Bye!)
2025-01-16 04:07:20 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de) (Remote host closed the connection)
2025-01-16 04:07:35 +0100remedan(~remedan@ip-62-245-108-153.bb.vodafone.cz) remedan
2025-01-16 04:11:42 +0100euleritian(~euleritia@77.23.250.232)
2025-01-16 04:14:07 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 04:16:20 +0100Sgeo(~Sgeo@user/sgeo) (Read error: Connection reset by peer)
2025-01-16 04:18:33 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 04:19:28 +0100remedan(~remedan@ip-62-245-108-153.bb.vodafone.cz) (Quit: Bye!)
2025-01-16 04:19:39 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) peterbecich
2025-01-16 04:20:43 +0100Sgeo(~Sgeo@user/sgeo) Sgeo
2025-01-16 04:21:54 +0100remedan(~remedan@ip-62-245-108-153.bb.vodafone.cz) remedan
2025-01-16 04:27:00 +0100housemate(~housemate@146.70.66.228) housemate
2025-01-16 04:29:30 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 04:34:15 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 04:36:54 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) (Ping timeout: 252 seconds)
2025-01-16 04:37:33 +0100Guest20(~Guest71@2800:a4:10ef:7400:35a0:bf8a:5772:25a7)
2025-01-16 04:37:34 +0100 <Square> I will be needing to create a JSON (de)serializer for a type that will require a context during the deserialization step. Aiui aeson doesn't support this. Sure, I could roll my own with some parser combinators, but I wonder if anyone sees a simpler approach?
2025-01-16 04:38:10 +0100Guest71(~Guest71@2800:a4:10ef:7400:35a0:bf8a:5772:25a7) (Ping timeout: 240 seconds)
2025-01-16 04:38:38 +0100 <jackdk> Square: what sort of data is carried by the context?
2025-01-16 04:41:23 +0100 <Square> jackdk, info about how fields should be deserialized. Like the presence of '"value":12' could get all sorts of type wrappings. 'Int', 'Maybe Int', 'SomeType (Maybe Int)'
2025-01-16 04:42:36 +0100 <jackdk> Is the universe of possible contexts small enough to be represented by a family of newtypes?
2025-01-16 04:44:52 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 04:45:08 +0100 <jackdk> merijn: your client is blinking in and out of this channel again
2025-01-16 04:45:33 +0100 <Square> jackdk, Hmm possibly. But I forgot to say that the above-mentioned types will be shoved into an existential type
2025-01-16 04:46:01 +0100 <Square> existentially qualified wrapper type*
2025-01-16 04:46:08 +0100 <Square> quantified*
2025-01-16 04:46:15 +0100 <jackdk> Can we jump to the concrete? What type are you trying to deserialise into and is the context defined anywhere (even perhaps outside of Haskell, like an API doc?)
2025-01-16 04:46:39 +0100 <Square> ok
2025-01-16 04:47:11 +0100 <Square> Say i want to deserialize a Map SomeKey
2025-01-16 04:47:12 +0100 <Square> ops
2025-01-16 04:47:40 +0100 <jackdk> If this is going to be a long explanation, you might want to use a pastebin
2025-01-16 04:47:57 +0100 <Square> Say i want to deserialize a 'Map SomeKey Box'. 'data Box = forall a. Box a'.
2025-01-16 04:48:06 +0100 <Square> ok, ill do that
2025-01-16 04:48:27 +0100 <jackdk> Thanks. Please mention me when it's ready so I get a beep
2025-01-16 04:49:33 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 265 seconds)
2025-01-16 04:57:28 +0100 <Square> i will! Realized I needed to think a bit to write something comprehensive.
2025-01-16 05:00:16 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 05:04:45 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 05:05:24 +0100jzargo(~jzargo@user/jzargo) jzargo
2025-01-16 05:10:06 +0100 <Square> jackdk, Hope this makes sense https://paste.tomsmeding.com/dCXlspBC
2025-01-16 05:10:23 +0100housemate(~housemate@146.70.66.228) (Quit: Nothing to see here. I wasn't there. I take IRC seriously. I do not work for any body DIRECTLY although I do represent BOT NET.)
2025-01-16 05:12:12 +0100 <Square> So I redefined my idea a bit, so ignore types mentioned in posts before the paste.
2025-01-16 05:12:21 +0100m5zs7k(aquares@web10.mydevil.net) (Ping timeout: 276 seconds)
2025-01-16 05:14:23 +0100 <Square> I guess the parser would use an "Output Box" where Box would be the sum type housing the listing of types I mentioned in the paste. That "Box" would need existential quantification (+ Typeable) as the enums would be arbitrary.
2025-01-16 05:15:40 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 05:17:07 +0100m5zs7k(aquares@web10.mydevil.net) m5zs7k
2025-01-16 05:18:11 +0100bitdex(~bitdex@gateway/tor-sasl/bitdex) bitdex
2025-01-16 05:20:10 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 05:23:40 +0100Guest20(~Guest71@2800:a4:10ef:7400:35a0:bf8a:5772:25a7) (Ping timeout: 240 seconds)
2025-01-16 05:28:37 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) peterbecich
2025-01-16 05:31:05 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 05:37:45 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 244 seconds)
2025-01-16 05:49:08 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 05:51:09 +0100sp1ff(~user@c-67-160-173-55.hsd1.wa.comcast.net) sp1ff
2025-01-16 05:57:33 +0100CaptainSlog(~user@67.237.174.60)
2025-01-16 05:57:56 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 06:01:29 +0100CaptainSlog(~user@67.237.174.60) (Quit: ERC 5.5.0.29.1 (IRC client for GNU Emacs 29.4))
2025-01-16 06:04:28 +0100 <jackdk> I'm sorry, I still don't really understand what I'm looking at. Is `Wrap a` meant to be a sum type? Is the universe of keys meant to be finite?
2025-01-16 06:05:22 +0100j1n37(~j1n37@user/j1n37) (Read error: Connection reset by peer)
2025-01-16 06:05:58 +0100 <jackdk> Is it fair to characterise the problem as "I want to provide a set of keys, whether or not the keys are optional, and the types of their expected value, and get a JSON deserialiser for an object with those keys"?
2025-01-16 06:08:31 +0100j1n37(~j1n37@user/j1n37) j1n37
2025-01-16 06:08:50 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 06:13:25 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 248 seconds)
2025-01-16 06:19:22 +0100 <mauke> Square: I mean, you could use aeson to decode to Value
2025-01-16 06:19:36 +0100 <mauke> then the rest boils down to tree conversion, not json parsing
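A hedged sketch of that approach (the Expected/Wrapped types and the field names are invented, and aeson >= 2.0 is assumed for Data.Aeson.KeyMap): decode to Value once, then convert the tree using whatever context was supplied.

    import Data.Aeson (Value (..), decode)
    import Data.Aeson.Key (Key)
    import qualified Data.Aeson.KeyMap as KM
    import qualified Data.ByteString.Lazy as BL
    import Data.Scientific (toBoundedInteger)

    data Expected = ExpectInt | ExpectMaybeInt         -- the "context"
    data Wrapped  = WInt Int | WMaybeInt (Maybe Int)   -- context-dependent result

    fromValue :: Expected -> Maybe Value -> Maybe Wrapped
    fromValue ExpectInt      (Just (Number n)) = WInt <$> toBoundedInteger n
    fromValue ExpectMaybeInt (Just (Number n)) = WMaybeInt . Just <$> toBoundedInteger n
    fromValue ExpectMaybeInt Nothing           = Just (WMaybeInt Nothing)
    fromValue _              _                 = Nothing

    -- tree conversion driven entirely by the supplied context
    decodeWith :: [(Key, Expected)] -> BL.ByteString -> Maybe [(Key, Wrapped)]
    decodeWith ctx bytes = do
      Object o <- decode bytes
      traverse (\(k, e) -> (,) k <$> fromValue e (KM.lookup k o)) ctx

    -- e.g. (with OverloadedStrings):
    --   decodeWith [("value", ExpectMaybeInt)] "{\"value\":12}"
    --     == Just [("value", WMaybeInt (Just 12))]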
2025-01-16 06:21:58 +0100dontdieych2(~quassel@user/dontdieych2) dontdieych2
2025-01-16 06:24:13 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 06:28:44 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 06:29:50 +0100Sgeo(~Sgeo@user/sgeo) (Read error: Connection reset by peer)
2025-01-16 06:30:12 +0100stiell(~stiell@gateway/tor-sasl/stiell) (Remote host closed the connection)
2025-01-16 06:30:33 +0100stiell(~stiell@gateway/tor-sasl/stiell) stiell
2025-01-16 06:35:52 +0100raym(~ray@user/raym) raym
2025-01-16 06:39:24 +0100tzh(~tzh@c-76-115-131-146.hsd1.or.comcast.net) (Read error: Connection reset by peer)
2025-01-16 06:39:36 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 06:39:38 +0100tzh(~tzh@c-76-115-131-146.hsd1.or.comcast.net) tzh
2025-01-16 06:42:59 +0100tt12310978324354(~tt1231@2603:6010:8700:4a81:219f:50d3:618a:a6ee) (Ping timeout: 260 seconds)
2025-01-16 06:43:53 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 244 seconds)
2025-01-16 06:46:03 +0100olivial(~benjaminl@user/benjaminl) (Read error: Connection reset by peer)
2025-01-16 06:46:19 +0100olivial(~benjaminl@user/benjaminl) benjaminl
2025-01-16 06:47:34 +0100tt12310978324354(~tt1231@2603:6010:8700:4a81:219f:50d3:618a:a6ee) tt1231
2025-01-16 06:47:38 +0100tnt2(~Thunderbi@user/tnt1) tnt1
2025-01-16 06:48:01 +0100tnt1(~Thunderbi@user/tnt1) (Ping timeout: 248 seconds)
2025-01-16 06:48:02 +0100tnt2tnt1
2025-01-16 06:54:40 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 06:59:34 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 265 seconds)
2025-01-16 06:59:45 +0100alp(~alp@2001:861:8ca0:4940:e814:f100:32a:4db4) (Ping timeout: 248 seconds)
2025-01-16 07:00:21 +0100hsw_(~hsw@112-104-8-145.adsl.dynamic.seed.net.tw) (Ping timeout: 248 seconds)
2025-01-16 07:00:24 +0100ft(~ft@p4fc2a354.dip0.t-ipconnect.de) (Quit: leaving)
2025-01-16 07:04:19 +0100takuan(~takuan@178-116-218-225.access.telenet.be)
2025-01-16 07:08:50 +0100michalz(~michalz@185.246.207.221)
2025-01-16 07:10:03 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 07:13:37 +0100gorignak(~gorignak@user/gorignak) (Ping timeout: 248 seconds)
2025-01-16 07:13:41 +0100tcard(~tcard@2400:4051:5801:7500:cf17:befc:ff82:5303) (Remote host closed the connection)
2025-01-16 07:13:44 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) (Remote host closed the connection)
2025-01-16 07:13:58 +0100tcard(~tcard@2400:4051:5801:7500:cf17:befc:ff82:5303)
2025-01-16 07:14:03 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) peterbecich
2025-01-16 07:14:41 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 248 seconds)
2025-01-16 07:16:59 +0100myxos(~myxos@syn-065-028-251-121.res.spectrum.com) (Remote host closed the connection)
2025-01-16 07:17:31 +0100gorignak(~gorignak@user/gorignak) gorignak
2025-01-16 07:17:41 +0100myxos(~myxos@syn-065-028-251-121.res.spectrum.com) myxokephale
2025-01-16 07:18:16 +0100doyougnu(~doyougnu@syn-045-046-170-068.res.spectrum.com) (Quit: ZNC 1.8.2 - https://znc.in)
2025-01-16 07:18:32 +0100doyougnu(~doyougnu@syn-045-046-170-068.res.spectrum.com)
2025-01-16 07:18:47 +0100lbseale(~quassel@user/ep1ctetus) (Quit: No Ping reply in 180 seconds.)
2025-01-16 07:20:03 +0100lbseale(~quassel@user/ep1ctetus) ep1ctetus
2025-01-16 07:22:57 +0100gorignak(~gorignak@user/gorignak) (Quit: quit)
2025-01-16 07:23:11 +0100 <Square> jackdk, the number of keys will be in the several 100s range. And data Wrap = IWrap Int | SWrap String | forall a. (Enum a, Typeable a) => SCWrap a | forall a. (Enum a, Typeable a) => MCWrap (Set a) | etc...
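Spelled out as a self-contained sketch (trailing constructors elided, names as Square wrote them), plus the Typeable cast needed to get a concrete type back out of the existential:

    {-# LANGUAGE ExistentialQuantification #-}

    import Data.Set (Set)
    import Data.Typeable (Typeable, cast)

    data Wrap
      = IWrap Int
      | SWrap String
      | forall a. (Enum a, Typeable a) => SCWrap a
      | forall a. (Enum a, Typeable a) => MCWrap (Set a)

    -- recovering a concrete enum from the existential needs a runtime cast
    unSCWrap :: Typeable b => Wrap -> Maybe b
    unSCWrap (SCWrap a) = cast a
    unSCWrap _          = Nothing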
2025-01-16 07:25:26 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 07:28:36 +0100hsw(~hsw@112-104-8-145.adsl.dynamic.seed.net.tw) hsw
2025-01-16 07:32:10 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 07:32:13 +0100Sgeo(~Sgeo@user/sgeo) Sgeo
2025-01-16 07:36:35 +0100euleritian(~euleritia@77.23.250.232) (Ping timeout: 244 seconds)
2025-01-16 07:36:36 +0100Sgeo(~Sgeo@user/sgeo) (Read error: Connection reset by peer)
2025-01-16 07:37:13 +0100euleritian(~euleritia@dynamic-176-007-194-010.176.7.pool.telefonica.de)
2025-01-16 07:40:18 +0100alp(~alp@2001:861:8ca0:4940:1917:36a4:8890:6036)
2025-01-16 07:41:14 +0100Square2(~Square4@user/square) Square
2025-01-16 07:44:30 +0100Square(~Square@user/square) (Ping timeout: 265 seconds)
2025-01-16 07:45:36 +0100euleritian(~euleritia@dynamic-176-007-194-010.176.7.pool.telefonica.de) (Read error: Connection reset by peer)
2025-01-16 07:45:54 +0100euleritian(~euleritia@77.23.250.232)
2025-01-16 07:47:54 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 07:51:27 +0100tnt2(~Thunderbi@user/tnt1) tnt1
2025-01-16 07:52:15 +0100tnt1(~Thunderbi@user/tnt1) (Ping timeout: 276 seconds)
2025-01-16 07:52:15 +0100tnt2tnt1
2025-01-16 07:53:08 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 272 seconds)
2025-01-16 07:55:40 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 07:57:22 +0100acidjnk(~acidjnk@p200300d6e7283f2464fbbfe361ec58f6.dip0.t-ipconnect.de) acidjnk
2025-01-16 07:57:22 +0100euleritian(~euleritia@77.23.250.232) (Read error: Connection reset by peer)
2025-01-16 07:57:29 +0100Sgeo(~Sgeo@user/sgeo) Sgeo
2025-01-16 07:57:40 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de)
2025-01-16 07:58:04 +0100 <Square2> https://learnyouahaskell.org seems to have problems with latest iterations of Firefox? https://imgur.com/a/tosoyH6 . Author says he idles here using username "BONUS", but I can't find him.
2025-01-16 07:59:17 +0100tnt2(~Thunderbi@user/tnt1) tnt1
2025-01-16 07:59:28 +0100tnt1(~Thunderbi@user/tnt1) (Ping timeout: 272 seconds)
2025-01-16 07:59:28 +0100tnt2tnt1
2025-01-16 07:59:35 +0100Sgeo_(~Sgeo@user/sgeo) Sgeo
2025-01-16 08:00:27 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 265 seconds)
2025-01-16 08:03:22 +0100Sgeo(~Sgeo@user/sgeo) (Ping timeout: 265 seconds)
2025-01-16 08:03:54 +0100 <mauke> that was like 10 years ago
2025-01-16 08:03:57 +0100CiaoSen(~Jura@2a05:5800:21a:4900:ca4b:d6ff:fec1:99da) CiaoSen
2025-01-16 08:05:00 +0100tnt2(~Thunderbi@user/tnt1) tnt1
2025-01-16 08:05:54 +0100tnt1(~Thunderbi@user/tnt1) (Ping timeout: 252 seconds)
2025-01-16 08:05:54 +0100tnt2tnt1
2025-01-16 08:11:02 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 08:12:15 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de) (Ping timeout: 246 seconds)
2025-01-16 08:12:40 +0100euleritian(~euleritia@dynamic-176-007-194-010.176.7.pool.telefonica.de)
2025-01-16 08:13:50 +0100remexre(~remexre@user/remexre) (Read error: Connection reset by peer)
2025-01-16 08:13:58 +0100remexre(~remexre@user/remexre) remexre
2025-01-16 08:16:18 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 276 seconds)
2025-01-16 08:16:45 +0100rvalue-(~rvalue@user/rvalue) rvalue
2025-01-16 08:17:29 +0100rvalue(~rvalue@user/rvalue) (Ping timeout: 260 seconds)
2025-01-16 08:19:57 +0100 <Square2> mauke, Any suggestions on where to send new users these days? I guess some users will get cold feet if they're told that is the best learning resource.
2025-01-16 08:23:13 +0100rvalue-rvalue
2025-01-16 08:25:39 +0100 <mauke> @where books
2025-01-16 08:25:39 +0100 <lambdabot> https://www.extrema.is/articles/haskell-books is the best list of Haskell books. See also: LYAH, HTAC, RWH, PH, YAHT, SOE, HR, PIH, TFwH, wikibook, PCPH, HPFFP, FSAF, HftVB, TwT, FoP, PFAD, WYAH,
2025-01-16 08:25:39 +0100 <lambdabot> non-haskell-books
2025-01-16 08:26:24 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 08:26:44 +0100dysthesis(~dysthesis@user/dysthesis) dysthesis
2025-01-16 08:29:00 +0100vanishingideal(~vanishing@user/vanishingideal) (Remote host closed the connection)
2025-01-16 08:29:55 +0100ash3en(~Thunderbi@146.70.124.222) ash3en
2025-01-16 08:30:00 +0100koz(~koz@121.99.240.58) (Quit: ZNC 1.8.2 - https://znc.in)
2025-01-16 08:30:50 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 252 seconds)
2025-01-16 08:32:40 +0100Sgeo_(~Sgeo@user/sgeo) (Read error: Connection reset by peer)
2025-01-16 08:32:48 +0100koz(~koz@121.99.240.58)
2025-01-16 08:34:27 +0100 <Square2> mauke, thanks
2025-01-16 08:35:19 +0100ProofTechnique_(sid79547@id-79547.ilkley.irccloud.com) (*.net *.split)
2025-01-16 08:35:19 +0100amir(sid22336@user/amir) (*.net *.split)
2025-01-16 08:35:19 +0100lexi-lambda(sid92601@id-92601.hampstead.irccloud.com) (*.net *.split)
2025-01-16 08:35:19 +0100S11001001(sid42510@id-42510.ilkley.irccloud.com) (*.net *.split)
2025-01-16 08:35:19 +0100T_S_____(sid501726@id-501726.uxbridge.irccloud.com) (*.net *.split)
2025-01-16 08:35:19 +0100dsal(sid13060@id-13060.lymington.irccloud.com) (*.net *.split)
2025-01-16 08:35:39 +0100 <mauke> @where tutorial
2025-01-16 08:35:39 +0100 <lambdabot> http://www.haskell.org/tutorial/
2025-01-16 08:35:41 +0100 <mauke> @where tutorials
2025-01-16 08:35:41 +0100 <lambdabot> http://haskell.org/haskellwiki/Tutorials
2025-01-16 08:35:43 +0100 <mauke> ...
2025-01-16 08:38:13 +0100 <Square2> great
2025-01-16 08:39:18 +0100sord937(~sord937@gateway/tor-sasl/sord937) sord937
2025-01-16 08:40:37 +0100ProofTechnique_(sid79547@id-79547.ilkley.irccloud.com)
2025-01-16 08:40:37 +0100amir(sid22336@user/amir) amir
2025-01-16 08:40:37 +0100lexi-lambda(sid92601@id-92601.hampstead.irccloud.com) lexi-lambda
2025-01-16 08:40:37 +0100S11001001(sid42510@id-42510.ilkley.irccloud.com) S11001001
2025-01-16 08:40:37 +0100T_S_____(sid501726@id-501726.uxbridge.irccloud.com)
2025-01-16 08:40:37 +0100dsal(sid13060@id-13060.lymington.irccloud.com) dsal
2025-01-16 08:41:47 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 08:43:38 +0100manwithluck(~manwithlu@194.177.28.164) (Read error: Connection reset by peer)
2025-01-16 08:44:03 +0100manwithluck(~manwithlu@194.177.28.164) manwithluck
2025-01-16 08:46:20 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 244 seconds)
2025-01-16 08:53:13 +0100tromp(~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl)
2025-01-16 08:54:15 +0100 <haskellbridge> <sm> @where links
2025-01-16 08:54:21 +0100 <sm> @where links
2025-01-16 08:54:21 +0100 <lambdabot> https://haskell-links.org collected Haskell links and search tools, including @where links
2025-01-16 08:54:37 +0100tnt2(~Thunderbi@user/tnt1) tnt1
2025-01-16 08:55:12 +0100acidjnk(~acidjnk@p200300d6e7283f2464fbbfe361ec58f6.dip0.t-ipconnect.de) (Ping timeout: 272 seconds)
2025-01-16 08:55:30 +0100tnt1(~Thunderbi@user/tnt1) (Ping timeout: 252 seconds)
2025-01-16 08:55:31 +0100tnt2tnt1
2025-01-16 08:56:09 +0100lortabac(~lortabac@2a01:e0a:541:b8f0:55ab:e185:7f81:54a4) lortabac
2025-01-16 08:56:40 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) merijn
2025-01-16 09:00:02 +0100caconym(~caconym@user/caconym) (Quit: bye)
2025-01-16 09:00:42 +0100caconym(~caconym@user/caconym) caconym
2025-01-16 09:01:01 +0100merijn(~merijn@128-137-045-062.dynamic.caiway.nl) (Ping timeout: 244 seconds)
2025-01-16 09:01:20 +0100housemate(~housemate@146.70.66.228) housemate
2025-01-16 09:08:07 +0100vanishingideal(~vanishing@user/vanishingideal) vanishingideal
2025-01-16 09:08:17 +0100sawilagar(~sawilagar@user/sawilagar) sawilagar
2025-01-16 09:14:26 +0100euleritian(~euleritia@dynamic-176-007-194-010.176.7.pool.telefonica.de) (Read error: Connection reset by peer)
2025-01-16 09:14:44 +0100dysthesis(~dysthesis@user/dysthesis) (Remote host closed the connection)
2025-01-16 09:14:45 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de)
2025-01-16 09:19:01 +0100peterbecich(~Thunderbi@syn-047-229-123-186.res.spectrum.com) (Ping timeout: 248 seconds)
2025-01-16 09:26:34 +0100housemate(~housemate@146.70.66.228) (Ping timeout: 252 seconds)
2025-01-16 09:30:46 +0100todi(~todi@p57803331.dip0.t-ipconnect.de) (Ping timeout: 244 seconds)
2025-01-16 09:32:49 +0100Katarushisu(~Katarushi@finc-20-b2-v4wan-169598-cust1799.vm7.cable.virginm.net) (Quit: The Lounge - https://thelounge.chat)
2025-01-16 09:33:00 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de) (Ping timeout: 276 seconds)
2025-01-16 09:33:20 +0100Katarushisu(~Katarushi@finc-20-b2-v4wan-169598-cust1799.vm7.cable.virginm.net) Katarushisu
2025-01-16 09:33:56 +0100euleritian(~euleritia@dynamic-176-007-194-010.176.7.pool.telefonica.de)
2025-01-16 09:37:27 +0100akegalj(~akegalj@168-159.dsl.iskon.hr) akegalj
2025-01-16 09:44:13 +0100merijn(~merijn@77.242.116.146) merijn
2025-01-16 09:50:18 +0100euleritian(~euleritia@dynamic-176-007-194-010.176.7.pool.telefonica.de) (Ping timeout: 272 seconds)
2025-01-16 09:50:31 +0100euleritian(~euleritia@dynamic-176-004-001-234.176.4.pool.telefonica.de)
2025-01-16 09:54:35 +0100crvs(~crvs@185.147.238.3) crvs
2025-01-16 09:56:24 +0100mange(~user@user/mange) (Ping timeout: 276 seconds)
2025-01-16 09:57:03 +0100ash3en(~Thunderbi@146.70.124.222) (Quit: ash3en)
2025-01-16 10:01:58 +0100machinedgod(~machinedg@d108-173-18-100.abhsia.telus.net) machinedgod
2025-01-16 10:03:20 +0100bitdex(~bitdex@gateway/tor-sasl/bitdex) (Remote host closed the connection)
2025-01-16 10:03:52 +0100bitdex(~bitdex@gateway/tor-sasl/bitdex) bitdex
2025-01-16 10:04:17 +0100merijn(~merijn@77.242.116.146) (Ping timeout: 248 seconds)
2025-01-16 10:08:10 +0100kimiamania(~65804703@user/kimiamania) (Quit: PegeLinux)
2025-01-16 10:09:56 +0100kimiamania(~65804703@user/kimiamania) kimiamania
2025-01-16 10:12:34 +0100euleritian(~euleritia@dynamic-176-004-001-234.176.4.pool.telefonica.de) (Read error: Connection reset by peer)
2025-01-16 10:13:02 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de)
2025-01-16 10:13:47 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de) (Read error: Connection reset by peer)
2025-01-16 10:14:22 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de)
2025-01-16 10:14:25 +0100todi(~todi@p57803331.dip0.t-ipconnect.de) todi
2025-01-16 10:15:14 +0100kuribas(~user@ip-188-118-57-242.reverse.destiny.be) kuribas
2025-01-16 10:15:36 +0100merijn(~merijn@77.242.116.146) merijn
2025-01-16 10:18:45 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de) (Ping timeout: 248 seconds)
2025-01-16 10:19:04 +0100euleritian(~euleritia@dynamic-176-004-140-216.176.4.pool.telefonica.de)
2025-01-16 10:19:44 +0100todi(~todi@p57803331.dip0.t-ipconnect.de) (Ping timeout: 252 seconds)
2025-01-16 10:20:35 +0100todi(~todi@p57803331.dip0.t-ipconnect.de) todi
2025-01-16 10:20:37 +0100merijn(~merijn@77.242.116.146) (Ping timeout: 265 seconds)
2025-01-16 10:20:54 +0100euleritian(~euleritia@dynamic-176-004-140-216.176.4.pool.telefonica.de) (Read error: Connection reset by peer)
2025-01-16 10:21:11 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de)
2025-01-16 10:24:43 +0100dysthesis(~dysthesis@user/dysthesis) dysthesis
2025-01-16 10:25:26 +0100todi1(~todi@p57803331.dip0.t-ipconnect.de)
2025-01-16 10:26:13 +0100todi(~todi@p57803331.dip0.t-ipconnect.de) (Ping timeout: 248 seconds)
2025-01-16 10:28:07 +0100lxsameer(~lxsameer@Serene/lxsameer) lxsameer
2025-01-16 10:31:30 +0100todi1(~todi@p57803331.dip0.t-ipconnect.de) (Ping timeout: 276 seconds)
2025-01-16 10:32:16 +0100merijn(~merijn@77.242.116.146) merijn
2025-01-16 10:37:25 +0100cyphase(~cyphase@user/cyphase) (Ping timeout: 244 seconds)
2025-01-16 10:42:29 +0100cyphase(~cyphase@user/cyphase) cyphase
2025-01-16 10:43:17 +0100tzh(~tzh@c-76-115-131-146.hsd1.or.comcast.net) (Quit: zzz)
2025-01-16 10:49:09 +0100acidjnk(~acidjnk@p200300d6e7283f2464fbbfe361ec58f6.dip0.t-ipconnect.de) acidjnk
2025-01-16 10:55:48 +0100akegalj(~akegalj@168-159.dsl.iskon.hr) (Ping timeout: 245 seconds)
2025-01-16 10:58:57 +0100housemate(~housemate@146.70.66.228) housemate
2025-01-16 11:06:23 +0100akegalj(~akegalj@89-172-132-1.adsl.net.t-com.hr)
2025-01-16 11:06:51 +0100comerijn(~merijn@77.242.116.146) merijn
2025-01-16 11:06:52 +0100xff0x(~xff0x@fsb6a9491c.tkyc517.ap.nuro.jp) (Ping timeout: 244 seconds)
2025-01-16 11:08:24 +0100tnt2(~Thunderbi@user/tnt1) tnt1
2025-01-16 11:08:50 +0100tnt1(~Thunderbi@user/tnt1) (Ping timeout: 272 seconds)
2025-01-16 11:08:50 +0100tnt2tnt1
2025-01-16 11:09:36 +0100merijn(~merijn@77.242.116.146) (Ping timeout: 252 seconds)
2025-01-16 11:14:25 +0100L29Ah(~L29Ah@wikipedia/L29Ah) L29Ah
2025-01-16 11:16:03 +0100Smiles(uid551636@id-551636.lymington.irccloud.com) Smiles
2025-01-16 11:19:40 +0100hgolden_(~hgolden@2603:8000:9d00:3ed1:6ff3:8389:b901:6363) hgolden
2025-01-16 11:20:30 +0100hgolden(~hgolden@2603:8000:9d00:3ed1:6ff3:8389:b901:6363) (Read error: Connection reset by peer)
2025-01-16 11:20:59 +0100hgolden__(~hgolden@2603:8000:9d00:3ed1:6ff3:8389:b901:6363) hgolden
2025-01-16 11:22:48 +0100tromp(~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl) (Quit: My iMac has gone to sleep. ZZZzzz…)
2025-01-16 11:24:00 +0100hgolden_(~hgolden@2603:8000:9d00:3ed1:6ff3:8389:b901:6363) (Ping timeout: 252 seconds)
2025-01-16 11:24:08 +0100dtman34(~dtman34@c-76-156-106-11.hsd1.mn.comcast.net) (Ping timeout: 244 seconds)
2025-01-16 11:24:19 +0100dtman34_(~dtman34@c-76-156-106-11.hsd1.mn.comcast.net) dtman34
2025-01-16 11:24:21 +0100sprotte24(~sprotte24@p200300d16f3cd90039044ee6d2c6f144.dip0.t-ipconnect.de)
2025-01-16 11:24:43 +0100__monty__(~toonn@user/toonn) toonn
2025-01-16 11:25:02 +0100sprotte24(~sprotte24@p200300d16f3cd90039044ee6d2c6f144.dip0.t-ipconnect.de) (Client Quit)
2025-01-16 11:28:03 +0100tromp(~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl)
2025-01-16 11:28:49 +0100V(~v@ircpuzzles/2022/april/winner/V) (Remote host closed the connection)
2025-01-16 11:29:02 +0100xdej(~xdej@quatramaran.salle-s.org) (Ping timeout: 252 seconds)
2025-01-16 11:29:11 +0100xdej(~xdej@quatramaran.salle-s.org)
2025-01-16 11:29:17 +0100V(~v@ircpuzzles/2022/april/winner/V) V
2025-01-16 11:32:18 +0100todi(~todi@p57803331.dip0.t-ipconnect.de) todi
2025-01-16 11:34:05 +0100tv(~tv@user/tv) (Read error: Connection reset by peer)
2025-01-16 11:35:03 +0100ColinRobinson(~juan@user/JuanDaugherty) JuanDaugherty
2025-01-16 11:36:28 +0100ash3en(~Thunderbi@2a03:7846:b6eb:101:93ac:a90a:da67:f207) ash3en
2025-01-16 11:37:09 +0100todi(~todi@p57803331.dip0.t-ipconnect.de) (Ping timeout: 276 seconds)
2025-01-16 11:38:06 +0100todi(~todi@p57803331.dip0.t-ipconnect.de) todi
2025-01-16 11:39:40 +0100emmanuelux(~emmanuelu@user/emmanuelux) (Ping timeout: 252 seconds)
2025-01-16 11:45:54 +0100housemate(~housemate@146.70.66.228) (Quit: Nothing to see here. I wasn't there. I take IRC seriously. I do not work for any body DIRECTLY although I do represent BOT NET.)
2025-01-16 11:47:51 +0100 <smiesner> hi, i want to update a haskell package in nix. is this the correct file i have to edit and PR? https://github.com/NixOS/nixpkgs/blob/nixos-24.11/pkgs/development/haskell-modules/hackage-package…
2025-01-16 11:48:04 +0100 <dminuoso> No.
2025-01-16 11:48:17 +0100 <dminuoso> smiesner: What exactly do you want to do with the package?
2025-01-16 11:48:34 +0100 <smiesner> I want to bump its version, since there is an update of the library.
2025-01-16 11:48:49 +0100 <dminuoso> smiesner: That will happen automatically to a degree.
2025-01-16 11:50:24 +0100Square2(~Square4@user/square) (Ping timeout: 260 seconds)
2025-01-16 11:51:09 +0100 <yushyin> see first line of that file: /* hackage-packages.nix is auto-generated by hackage2nix -- DO NOT EDIT MANUALLY! */
2025-01-16 11:51:34 +0100 <smiesner> oh great, reading would have helped as always
2025-01-16 11:51:40 +0100 <dminuoso> There is ./pkgs/development/haskell-modules/configuration-hackage2nix/stackage.yaml, however, which is used to fiddle with that manually
2025-01-16 11:51:41 +0100mreh(~matthew@host86-146-138-36.range86-146.btcentralplus.com) mreh
2025-01-16 11:52:17 +0100 <dminuoso> But not for regular updates. Roughly nix is doing the same thing as stackage: provide a snapshot of hackage where things mostly work with each other.
2025-01-16 11:52:24 +0100 <dminuoso> s/nix/nixpkgs/
2025-01-16 11:53:06 +0100 <dminuoso> Err ./pkgs/development/haskell-modules/configuration-hackage2nix/main.yaml rather, sorry
2025-01-16 11:53:22 +0100 <dminuoso> (As you can see, this roughly follows stackage)
2025-01-16 11:55:47 +0100 <dminuoso> smiesner: https://nixos.org/manual/nixpkgs/stable/#haskell-available-packages contains a bit of documentation on the subject
2025-01-16 11:55:49 +0100 <smiesner> dminuoso: thank you very much!
2025-01-16 11:56:16 +0100 <mreh> is it possible to (somewhat) reliably memoise with function type arguments? maybe with something like a StableName and some compiler magic?
2025-01-16 11:56:30 +0100 <yushyin> smiesner: maybe you can/should overlay the package or even use haskell.nix, either way, it might be worth popping over to #haskell:nixos.org and asking them what you should do now
2025-01-16 11:56:41 +0100 <dminuoso> (So roughly we take most of hackage, but when the package is also on some stackage resolver - don't ask me which - then we try to provide the version from that stackage resolver.)
2025-01-16 11:57:09 +0100 <dminuoso> At work we have stopped using nixpkgs haskell packages because it leads to a lot of problems and a lack of speed.
2025-01-16 11:57:38 +0100 <dminuoso> It's only haskell.nix here now, which gives us all the benefits and the big price tag attached to it. :-)
2025-01-16 11:57:55 +0100 <smiesner> thanks for the very detailed explanation and links!
2025-01-16 11:58:23 +0100lortabac(~lortabac@2a01:e0a:541:b8f0:55ab:e185:7f81:54a4) (Quit: WeeChat 4.4.2)
2025-01-16 11:58:36 +0100 <dminuoso> The biggest price tag is that you really want a local nix cache with some CI that is able to build the bootstrap GHCs continuously for all architectures you have.
2025-01-16 11:58:37 +0100housemate(~housemate@146.70.66.228) housemate
2025-01-16 11:59:03 +0100 <dminuoso> If you can muster that, it's great. If not... haskell.nix is a kind of terrible experience unless you don't mind following the update cadence of IOG.
2025-01-16 11:59:27 +0100 <smiesner> just for my understanding now: when a library is bumped on hackage, how long does nix need to integrate it? or is it no time at all because it 'researches' on every nix build try?
2025-01-16 12:00:01 +0100 <dminuoso> smiesner: It depends on whether it receives a pin in that main.yaml
2025-01-16 12:00:07 +0100 <dminuoso> (or a pin via stackage resolver)
2025-01-16 12:00:23 +0100 <dminuoso> If it's unpinned, it should get updated the next time hackage2nix runs (daily, perhaps? unsure)
2025-01-16 12:02:11 +0100 <dminuoso> smiesner: Do note, though, that these updates don't get merged into master automatically.
2025-01-16 12:02:28 +0100 <smiesner> hm, i can't find the package I want to use the latest version of in the main.yaml. it's this one: https://search.nixos.org/packages?channel=24.11&show=haskellPackages.hosc&from=0&size=50&sort=rele…
2025-01-16 12:02:39 +0100 <smiesner> i'm really a newbie with nix, sorry
2025-01-16 12:02:47 +0100 <smiesner> just used it once or twice
2025-01-16 12:02:56 +0100 <dminuoso> smiesner: There is a branch called haskell-updates which I believe is updated daily
2025-01-16 12:03:30 +0100 <dminuoso> Now there's some maintainer script to merge open haskell-updates PRs back into master.
2025-01-16 12:03:48 +0100 <dminuoso> See https://github.com/NixOS/nixpkgs/blob/haskell-updates/pkgs/development/haskell-modules/HACKING.md
2025-01-16 12:05:01 +0100 <dminuoso> smiesner: https://github.com/NixOS/nixpkgs/blob/haskell-updates/pkgs/development/haskell-modules/HACKING.md#… here we go
2025-01-16 12:05:07 +0100 <dminuoso> Hydra evaluates the haskell-updates branch every 4 hours.
2025-01-16 12:05:24 +0100 <dminuoso> The Haskell team members generally hang out in the Matrix room #haskell:nixos.org.
2025-01-16 12:05:29 +0100 <dminuoso> Check with them for further details.
2025-01-16 12:06:59 +0100 <Leary> mreh: Try 'stable-memo' or 'memo-ptr' and find out.
2025-01-16 12:07:02 +0100 <smiesner> thank you so much!
2025-01-16 12:07:54 +0100housemate(~housemate@146.70.66.228) (Quit: Nothing to see here. I wasn't there. I take IRC seriously. I do not work for any body DIRECTLY although I do represent BOT NET.)
2025-01-16 12:11:04 +0100xff0x(~xff0x@2405:6580:b080:900:7761:255c:77e5:46e1)
2025-01-16 12:11:41 +0100 <mreh> Leary, I think I'll have a go and see what happens.
2025-01-16 12:12:07 +0100 <mreh> I was asking on the off chance someone had tried it before.
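(A minimal sketch of the StableName-based memoisation being discussed, assuming only base and containers; the names below are illustrative and not taken from stable-memo or memo-ptr. StableName equality is best-effort: the cache is only hit when the very same closure object is passed again, and the Int hash key ignores possible collisions for brevity.)

import qualified Data.Map.Strict as Map
import Data.IORef (newIORef, readIORef, modifyIORef')
import System.Mem.StableName (makeStableName, hashStableName)

-- Memoise a function by the StableName of its argument. This also works for
-- function-typed arguments, but two separately-built (if equal) closures get
-- two different stable names, so recomputation is always a possibility.
memoByStableName :: (a -> b) -> IO (a -> IO b)
memoByStableName f = do
  cache <- newIORef Map.empty            -- Map Int b, keyed by stable-name hash
  pure $ \x -> do
    key  <- hashStableName <$> makeStableName x
    seen <- readIORef cache
    case Map.lookup key seen of
      Just y  -> pure y
      Nothing -> do
        let y = f x
        modifyIORef' cache (Map.insert key y)
        pure y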
2025-01-16 12:12:18 +0100comerijn(~merijn@77.242.116.146) (Ping timeout: 252 seconds)
2025-01-16 12:14:19 +0100 <mreh> I'm planning an interface where users can fetch a graphics pipeline uniform from the environment when they call a function with a getter, but every call will result in a new uniform descriptor being added, and on some systems there are only 4 of those available
2025-01-16 12:14:53 +0100 <mreh> so if I could memoise the binding number for each getter, that would be nice
2025-01-16 12:15:08 +0100 <mreh> but it's not the end of the world
2025-01-16 12:17:23 +0100 <Leary> I would not rely on this kind of memoisation, but write something that explicitly generates and manages unique identifiers.
2025-01-16 12:20:03 +0100 <mreh> It might be easier just to document the gotcha. It's not very onerous to ask users to only get a uniform once in their pipeline definition.
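(And a sketch of the alternative Leary suggests: explicitly allocating and reusing binding numbers per named uniform rather than relying on memoisation. UniformEnv, Binding and getUniform are hypothetical names, not from mreh's library; the platform limit of 4 bindings mentioned above is only noted in a comment.)

import qualified Data.Map.Strict as Map
import Data.IORef (IORef, newIORef, readIORef, modifyIORef', atomicModifyIORef')

newtype Binding = Binding Int deriving (Eq, Show)

data UniformEnv = UniformEnv
  { nextBinding :: IORef Int                       -- next free binding number
  , bindings    :: IORef (Map.Map String Binding)  -- name -> allocated binding
  }

newUniformEnv :: IO UniformEnv
newUniformEnv = UniformEnv <$> newIORef 0 <*> newIORef Map.empty

-- Look up a uniform by name; allocate a fresh binding number only the first
-- time a name is seen, so repeated getter calls reuse the same descriptor.
-- (A real version would also check the platform limit, e.g. at most 4.)
getUniform :: UniformEnv -> String -> IO Binding
getUniform env name = do
  known <- readIORef (bindings env)
  case Map.lookup name known of
    Just b  -> pure b
    Nothing -> do
      n <- atomicModifyIORef' (nextBinding env) (\i -> (i + 1, i))
      let b = Binding n
      modifyIORef' (bindings env) (Map.insert name b)
      pure b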
2025-01-16 12:25:16 +0100merijn(~merijn@77.242.116.146) merijn
2025-01-16 12:27:48 +0100Digitteknohippie(~user@user/digit) Digit
2025-01-16 12:28:50 +0100Digit(~user@user/digit) (Ping timeout: 248 seconds)
2025-01-16 12:32:12 +0100merijn(~merijn@77.242.116.146) (Ping timeout: 252 seconds)
2025-01-16 12:33:09 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de) (Ping timeout: 248 seconds)
2025-01-16 12:33:24 +0100euleritian(~euleritia@dynamic-176-004-140-216.176.4.pool.telefonica.de)
2025-01-16 12:34:11 +0100myxos(~myxos@syn-065-028-251-121.res.spectrum.com) (Ping timeout: 244 seconds)
2025-01-16 12:34:54 +0100myxos(~myxos@syn-065-028-251-121.res.spectrum.com) myxokephale
2025-01-16 12:35:19 +0100Typedfern(~Typedfern@104.red-83-37-43.dynamicip.rima-tde.net) (Ping timeout: 260 seconds)
2025-01-16 12:38:31 +0100pointlessslippe-(~pointless@62.106.85.17) (Quit: ZNC - http://znc.in)
2025-01-16 12:40:55 +0100weary-traveler(~user@user/user363627) user363627
2025-01-16 12:42:18 +0100dysthesis(~dysthesis@user/dysthesis) (Remote host closed the connection)
2025-01-16 12:44:17 +0100pointlessslippe1(~pointless@62.106.85.17) pointlessslippe1
2025-01-16 12:45:06 +0100ash3en(~Thunderbi@2a03:7846:b6eb:101:93ac:a90a:da67:f207) (Quit: ash3en)
2025-01-16 12:45:36 +0100euleritian(~euleritia@dynamic-176-004-140-216.176.4.pool.telefonica.de) (Read error: Connection reset by peer)
2025-01-16 12:45:53 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de)
2025-01-16 12:48:35 +0100notzmv(~umar@user/notzmv) (Ping timeout: 252 seconds)
2025-01-16 12:48:36 +0100pointlessslippe1(~pointless@62.106.85.17) (Ping timeout: 252 seconds)
2025-01-16 12:51:23 +0100Typedfern(~Typedfern@51.red-83-37-40.dynamicip.rima-tde.net) typedfern
2025-01-16 12:52:19 +0100lortabac(~lortabac@2a01:e0a:541:b8f0:55ab:e185:7f81:54a4) lortabac
2025-01-16 12:54:28 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de) (Ping timeout: 252 seconds)
2025-01-16 12:54:58 +0100euleritian(~euleritia@dynamic-176-004-140-216.176.4.pool.telefonica.de)
2025-01-16 12:55:53 +0100Typedfern(~Typedfern@51.red-83-37-40.dynamicip.rima-tde.net) (Ping timeout: 244 seconds)
2025-01-16 12:56:54 +0100tv(~tv@user/tv) tv
2025-01-16 13:00:04 +0100caconym(~caconym@user/caconym) (Quit: bye)
2025-01-16 13:00:46 +0100SlackCoder(~SlackCode@64-94-63-8.ip.weststar.net.ky) SlackCoder
2025-01-16 13:01:21 +0100CiaoSen(~Jura@2a05:5800:21a:4900:ca4b:d6ff:fec1:99da) (Ping timeout: 252 seconds)
2025-01-16 13:01:25 +0100DigitteknohippieDigit
2025-01-16 13:01:45 +0100caconym(~caconym@user/caconym) caconym
2025-01-16 13:05:03 +0100euleritian(~euleritia@dynamic-176-004-140-216.176.4.pool.telefonica.de) (Read error: Connection reset by peer)
2025-01-16 13:05:20 +0100euleritian(~euleritia@ip4d17fae8.dynamic.kabel-deutschland.de)