If you do multi-stage builds (example here) it is slightly easier to use venvs.
If you use the global environment, you need to hardcode the path to the global packages. This path can change when base images are upgraded.
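A minimal sketch of what that looks like; the image tag, the /opt/venv path, and the myapp module are placeholders, not a recommendation:

```dockerfile
# Builder stage: install dependencies into a venv at a path we control
FROM python:3.12-slim AS builder
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt .
RUN pip install -r requirements.txt

# Final stage: copy the venv across; its path stays stable even if the
# base image's own site-packages location moves between releases
FROM python:3.12-slim
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
CMD ["python", "-m", "myapp"]
```

With the global environment you would instead have to COPY from a versioned path like /usr/local/lib/python3.12/site-packages, which breaks when the base image bumps the Python minor version.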
Sure, but if upgrading Python affects Python packages, it would affect global packages and a venv in the same way.
upgrading your base image won’t affect your python packages
Surely if upgrading python will affect your global python packages it will also affect your venv python packages?
you can use multi stage builds to create drastically smaller final images
This can also be done without using venvs; you just need to copy them to the location where global packages are installed.
I don't think they have anything to do with each other, it looks like prefix.dev uses conda packages.
Yeah I agree, I am sure they are missing some obscure stuff. But in practice it has everything that I use and there has been no need for me to touch flatpak/appimage/snap
What is the app?
Could you link to the Lemmy style app please, I haven't heard of this before
I think helix (or some derivative) has good long-term prospects. It has a fairly large community and it is much more accessible than (neo)vim.
I have personally used fedora and nixos on a gen 1 framework 13 and it works great.
Does Framework do anything regarding FOSS drivers or firmware?
Regarding your question they say this:
We deliberately selected components and modules that didn’t require new kernel driver development and have been providing distro maintainers with pre-release hardware to test to improve compatibility. We’re also working on enabling firmware updates through LVFS to complete the Linux experience.
source: https://frame.work/gb/en/linux
You might be interested in this article that compares nix and docker. It explains why docker builds are not considered reproducible:
For example, a Dockerfile will run something like apt-get update as one of the first steps. Resources are accessible over the network at build time, and these resources can change between docker build commands. There is no notion of immutability when it comes to source.
and why nix builds are reproducible a lot of the time:
Builds can be fully reproducible. Resources are only available over the network if a checksum is provided to identify what the resource is. All of a package's build time dependencies can be captured through a Nix expression, so the same steps and inputs (down to libc, gcc, etc.) can be repeated.
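To illustrate the checksum pinning, here is a rough sketch of a Nix fetch; the URL and hash are placeholders, not a real package:

```nix
# A network fetch is only allowed when its hash is pinned up front.
# If the remote content ever changes, the build fails loudly instead
# of silently producing a different result.
fetchurl {
  url = "https://example.org/app-1.0.tar.gz";  # placeholder URL
  sha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";  # placeholder hash
}
```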
Containerization has other advantages though (e.g. security), and you can actually use nix's reproducible builds in combination with (docker) containers.
So how useful is it in practice?
It's useful for quite a few things in practice:
This video shows off some of the cool things you can do with nix: https://youtube.com/watch?v=6Le0IbPRzOE&feature=share9
How do updates work?
You update a program by specifying the latest version of it in your config and rebuilding.
You update the OS by pointing to the channel you want to use and rebuilding.
You can time travel back to a previous state if anything goes wrong.
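On NixOS that workflow looks roughly like this (assuming a channel-based setup rather than flakes):

```shell
# Pull the latest package set from the channel you're tracking
sudo nix-channel --update

# Rebuild the system from your configuration and switch to it
sudo nixos-rebuild switch

# "Time travel": roll back to the previous system generation
sudo nixos-rebuild switch --rollback
```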
Can it play Crysis?
I expect so; some people do use nix for gaming.
Putting aside the speed, uv has a bunch of features that usually require 2-4 separate tools. These tools are very popular but not very well liked. The fact that these tools are so popular proves that pip alone is not sufficient for many use cases. Other languages have a single tool (e.g. cargo) that is very well liked.
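Concretely, a workflow that previously needed pip + venv + a lockfile tool like pip-tools collapses into one CLI; "myproject" and "requests" are just example names here:

```shell
uv init myproject   # scaffold a project with a pyproject.toml
cd myproject
uv add requests     # add a dependency and update the lockfile
uv sync             # create/update the venv from the lockfile
uv run python -c 'import requests; print(requests.__version__)'
```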