Yes, you can find it here.
It's great to see such cool FOSS options. Thanks for sharing this – super helpful!
The latest version of Eternity is compatible with Lemmy 0.19. You might need to log out and log back in, though.
However, the new features aren't implemented yet (e.g. the new sorting methods).
I don't think it's necessarily a third-party reseller thing. I bought tickets a few days ago literally from their app, only to get the same email saying I needed to 'verify my identity' because I had bought the tickets from an 'unauthorized third-party reseller'.
I think the IzzyOnDroid repo is updated once a day at 6 PM UTC.
I'm considering implementing kbin support once the "Lemmy part" is in a better shape.
Also, could you update me on the status of the kbin API? Is it not yet available?
I've just published the first alpha release!
I want to get an F-Droid release first, but I think it's too early for now.
Contributions are appreciated!
Infinity is written in Java. I may port some parts of it to Kotlin some day, but it's not my priority for now.
From what I've seen, it's definitely worth quantizing. I've used Llama 3 8B (fp16) and Llama 3 70B (q2_XS). The 70B version was way better, even with this quantization, and it fits perfectly in 24 GB of VRAM. There's also this comparison showing the quantization options and their benchmark scores:
Source
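As a rough sanity check (my own back-of-the-envelope numbers, not from the linked comparison), weight memory scales with bits per weight, which is why a heavily quantized 70B can fit where an fp16 8B barely would:

```python
# Back-of-the-envelope model weight size: params * bits_per_weight / 8.
# The bits-per-weight figures below are rough assumptions, not official specs,
# and ignore KV cache and runtime overhead.
def weight_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight size in GB for params_b billions of parameters."""
    return params_b * bits_per_weight / 8

# Llama 3 8B at fp16 (16 bits/weight)
print(round(weight_gb(8, 16), 1))    # ~16 GB
# Llama 3 70B at a 2-bit XS-type quant (~2.3 bits/weight, approximate)
print(round(weight_gb(70, 2.3), 1))  # ~20 GB, inside a 24 GB VRAM budget
```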
To run this particular model though, you would need about 45 GB of RAM just for the q2_K quant according to Ollama. I think I could run this with my GPU and offload the rest of the layers to the CPU, but the performance wouldn't be that great (e.g. less than 1 t/s).
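A crude sketch of the GPU/CPU split I mean (assuming uniform layer sizes and an 80-layer 70B-class model, and ignoring KV cache and overhead, so treat the numbers as illustrative only):

```python
# Estimate how many transformer layers fit in VRAM for partial offload.
# Assumptions: weights are split evenly across layers; no KV cache/overhead.
def gpu_layers(total_gb: float, n_layers: int, vram_gb: float) -> int:
    """Number of layers (out of n_layers) that fit in vram_gb of VRAM."""
    per_layer_gb = total_gb / n_layers
    return min(n_layers, int(vram_gb // per_layer_gb))

# e.g. ~45 GB of q2_K weights across 80 layers with 24 GB of VRAM:
print(gpu_layers(45, 80, 24))  # 42 layers on GPU, the remaining 38 on CPU
```

With llama.cpp-style runners this number is roughly what you'd pass as the GPU layer count; the layers left on the CPU are what drag throughput down to the sub-1 t/s range.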