things I've changed from the PR:
- dropped legacy (i.e. non-flake) compat stuff, which turns out to account for most of the diff
- dropped `packages.garnet` since it doesn't work with `nix build .#garnet`
- back to using Rust-extended packages everywhere, which isn't very important but seems fine anyway
the rest is just re-inlining things and other refactors
final question:
it does seem a bit weird that the `garnet-rs` arg to `project.nix` was always the same
might be a mistake; are we supposed to be using the local one for local builds?
"improve nix-haskell use"
there are a few things going on here but the main ones are ...
note that the changes we keep are essentially:
- bumping `nix-haskell` to avoid shell hook workaround
- various changes in how we call `nix-haskell`
- using `libCgarnet_rs` name, which Cabal expects
- adding proper non-dev-shell targets, so that e.g. `nix run` works
should add co-author attribution
This reverts commit 1f1c0d959da699ce04f7951ecbcdb7976c8c0750.
This doesn't work well with multi-component builds. For example, it requires `"haskell.sessionLoading": "singleComponent"` in VSCode, which makes HLS work less reliably.
The script addition is a bit hacky, but there's no obvious, straightforward, arch/version-independent way to get most of the build path. And eventually, once the issues with HLS etc. are sorted out, we will revert to using Cabal hooks anyway.
Even though `Int` and `isize` should be the same in practice, we can't cleanly convert between them, as the type information isn't quite there. And anyway, strictly speaking per the Haskell Report, `Int` is only guaranteed a 30-bit range (at least `[-2^29, 2^29 - 1]`).
Note that this is essentially unchanged even if we specify `usize_is_size_t = true` for `cbindgen`.
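To illustrate the point, here is a minimal sketch (the function name is hypothetical, not from the actual `garnet-rs` crate): for an exported `isize` parameter, cbindgen emits a C prototype using `intptr_t` by default, or `ssize_t` with `usize_is_size_t = true`, so the Haskell side always sees a word-sized C type (`CIntPtr`/`CSsize`) rather than `Int`, and an explicit `fromIntegral` at the boundary is needed either way.

```rust
// Hypothetical exported function. cbindgen would emit, depending on config:
//   intptr_t garnet_clamp_len(intptr_t len);   // default
//   ssize_t  garnet_clamp_len(ssize_t len);    // usize_is_size_t = true
// Either way, hs-bindgen sees a C integer type, never Haskell's `Int`.
#[no_mangle]
pub extern "C" fn garnet_clamp_len(len: isize) -> isize {
    if len < 0 { 0 } else { len }
}

fn main() {
    println!("{}", garnet_clamp_len(-5));
    println!("{}", garnet_clamp_len(7));
}
```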
We drop the tools specifically designed for Haskell and Rust together, in favour of general tools for using each with C.
Namely, we use Mozilla's `cbindgen` for generating header files from the Rust source, and Well-Typed's new `hs-bindgen` tool for generating Haskell from those header files.
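As a rough sketch of what that pipeline consumes (the names here are illustrative, not from the actual `garnet-rs` crate): cbindgen scans for `#[repr(C)]` types and `#[no_mangle] extern "C"` functions and emits C declarations for them, which hs-bindgen then turns into Haskell types and foreign imports.

```rust
// Illustrative shape of FFI-facing Rust code. For this, cbindgen would
// emit C declarations along the lines of:
//   typedef struct Pair { int32_t a; int32_t b; } Pair;
//   int32_t pair_sum(struct Pair p);
// and hs-bindgen would generate the Haskell side from that header.
#[repr(C)]
pub struct Pair {
    pub a: i32,
    pub b: i32,
}

#[no_mangle]
pub extern "C" fn pair_sum(p: Pair) -> i32 {
    p.a + p.b
}

fn main() {
    println!("{}", pair_sum(Pair { a: 2, b: 3 }));
}
```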
The Rust code here is essentially the result of expanding the old macro, then inlining and renaming internals.
The most important thing here is that we're now relying solely on robust well-maintained tools.