No, but since Canada can regulate/limit the oil and gas it exports, this is still a useful number.
Imports also need to be counted.
Unfortunately climate change is every country's responsibility to fix, since every bit helps.
I'd suggest some kind of "press this key to view debug information" text (or make it documented but not visible, to avoid people just pressing whatever button is written on the screen)
Nostr is culturally vaguely American, and it's hard to distinguish the libertarians from the Trumpists there (I've seen several posts saying "Trump will be better for Bitcoin", for example). Libertarians and Republicans both sell themselves as "small government".
"Leftist libertarians" generally call themselves anarchists, in my experience.
It's actually several seminars. For historical reasons, presentations from gay frogs, lesbian frogs, bisexual frogs, etc. are all grouped under "gay frog seminar".
Just to offer some support, you're right and those are good questions
You mean the engine owned by the guy who refuses to abide by the GDPR, thinks anti-suicide messages would be bias, and wants to use AI to "remove bias from news articles" (and from reviews)? https://d-shoot.net/kagi.html goes into it; it's a whole mess.
Right now all search engines suck, unfortunately.
For the screenshot you might want to use a terminal that doesn't have bloom, a CRT filter, and a background image; I genuinely can't see the TUI.
Neural networks are named that way because they're based on a 1950s model of neurons, which was then adapted further to work better on computers (so it no longer resembles the original model much anyway). A more accurate term is Multi-Layer Perceptron.
We now know this model is... effectively completely wrong.
Additionally, the main component (or glue, really) of LLMs is not even an MLP, but a "self-attention" layer. You can't say LLMs work like a brain, because they don't. The rest is debatable, but it's important to remember that there are billions of dollars riding on selling the dream of conscious AI.
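To illustrate why the "neuron" framing is a stretch: a single self-attention head is just a few matrix multiplications and a softmax, nothing resembling biological neurons. A minimal sketch with toy sizes and random weights (all shapes and names here are illustrative, not from any real model):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """One attention head: each token produces a weighted mix of all tokens."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # token-to-token similarity
    return softmax(scores) @ V               # attention-weighted values

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))  # 4 tokens, 8-dim embeddings (toy sizes)
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one mixed vector per token
```

The whole operation is linear algebra over the entire sequence at once, which is quite unlike the spiking, local model of neurons the name comes from.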
Nah. Programming is really hard to automate, and machine learning even more so. The actual programming for it is pretty straightforward, but to make anything useful you need to gather training data, clean it, and design a model architecture, which is far too open-ended for an LLM.
It's more relevant for NA; it's an indigenous thing, iirc.
One possibility is to let users join a controlled allowlist (or a blocklist, though that runs more into that problem), where some actor the user picks serves as a trust authority. This keeps the P2P model while still allowing for large networks, since not every individual has to be a "server admin". A user could also subscribe to several trust authorities.
Essentially, the network would act as a framework for "centralized" groups, while identity remains completely its own.
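The idea above could be sketched roughly like this (all names and data here are hypothetical): each trust authority publishes an allowlist of peer IDs, and a client shows a post only if at least one authority the user subscribes to vouches for its author:

```python
# Hypothetical sketch of user-chosen trust authorities in a P2P network.
# Each authority publishes an allowlist of peer IDs it vouches for.
authorities = {
    "frogs.example": {"alice", "bob"},   # allowlist from one authority
    "lily.example": {"bob", "carol"},    # allowlist from another
}

def visible(author: str, subscribed: list[str]) -> bool:
    """True if any authority the user trusts vouches for the author."""
    return any(author in authorities[a] for a in subscribed if a in authorities)

user_subscriptions = ["frogs.example", "lily.example"]
print(visible("carol", user_subscriptions))    # True: vouched by lily.example
print(visible("mallory", user_subscriptions))  # False: no authority vouches
```

Because the check runs client-side against lists the user chose, moderation stays opt-in per user while identity remains independent of any authority.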
The aggregator is called the Relay, and I haven't found anything suggesting one could realistically self-host it. You then need AppViews to handle the massive stream of data coming through it, and those are tough to run too (there are a few, but not many, iirc).
That said, I am also impressed with the thought behind ATProtocol. It seems much more robust and better defined than ActivityPub.