[-] dave@feddit.uk 6 points 1 week ago

To build on this: if you’re doing this a lot and can’t use a brand-name lens with known correction data in darktable, set up graph paper and the phone camera on a tripod to keep everything consistent, then photograph the empty paper. Create a transform in GIMP that makes the paper completely flat / square, and then just reapply that transform to every object you need to capture.
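
(If anyone wants to script this instead of clicking around in GIMP, here’s a rough sketch of the same idea with OpenCV in Python. The corner coordinates and filenames are made up; you’d measure your own from the graph-paper shot.)

```python
# Sketch: compute a perspective correction once from the graph paper,
# then reapply it to every photo taken from the same tripod setup.
# Corner coordinates and filenames are placeholders.
import cv2
import numpy as np

# Pixel positions of the graph paper's corners in the reference photo
# (top-left, top-right, bottom-right, bottom-left).
src = np.float32([[412, 288], [3710, 305], [3842, 2890], [331, 2904]])

# Where those corners should land in the flattened output image.
w, h = 3000, 2250
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

M = cv2.getPerspectiveTransform(src, dst)   # the reusable transform

def flatten(path: str, out: str) -> None:
    """Apply the stored transform to a new photo from the same setup."""
    img = cv2.imread(path)
    cv2.imwrite(out, cv2.warpPerspective(img, M, (w, h)))

flatten("object_01.jpg", "object_01_flat.jpg")  # placeholder filenames
```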

[-] dave@feddit.uk 6 points 1 month ago

So I’m normally a command-line fan and have used git there. But I’m also using sublimerge, and honestly I find it fantastic for untangling a bunch of changes that need to go into several commits: being able to quickly scroll through all the changed files, expand & collapse the diffs, and select files, hunks, and lines directly in the GUI for staging, etc. I can’t see that being any faster / easier on the command line.

[-] dave@feddit.uk 7 points 2 months ago

People on Reddit. We’re the people off Reddit :)

[-] dave@feddit.uk 7 points 3 months ago

Lelete dater.

[-] dave@feddit.uk 5 points 5 months ago

That’s because the explanation is often a bit disingenuous. There’s practically no difference between “listening locally” and “constantly processing what you’re saying”. The device is constantly processing what you’re saying, simply to recognise the trigger word. That processing just isn’t shared off device until the trigger is detected. That’s the manufacturers’ claim, and so far it’s not been proved wrong (as mentioned elsewhere, plenty of people are trying). It’s hard to prove a negative, but so far not enough data seems to be leaving the device to suggest anything suspect.

I would put money on there being a team of people at Amazon / Google working out how to extract value from that processed speech data without actually sending it off device. Things like aggregate conversation topic / sentiment, logging adverts heard on TV / radio for triangulation, etc. None of that would invalidate the “not constantly recording you” claim.
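
To be concrete about what “processed locally but only sent after the trigger” means, here’s a toy sketch of that control flow in Python. The detector and upload functions are obviously just stand-ins, not anyone’s real API.

```python
# Toy sketch of the control flow described above: audio is always being
# processed locally, but nothing leaves the device until the wake word fires.
# detect_wake_word() and send_to_cloud() are stand-ins, not a real API.
import collections

BUFFER_CHUNKS = 32  # short rolling buffer, kept in RAM only

def detect_wake_word(chunk: bytes) -> bool:
    """Placeholder for the on-device keyword model."""
    return b"alexa" in chunk  # obviously not how a real detector works

def send_to_cloud(audio: bytes) -> None:
    """Placeholder for the point at which data actually leaves the device."""
    print(f"uploading {len(audio)} bytes")

def listen(mic_chunks) -> None:
    ring = collections.deque(maxlen=BUFFER_CHUNKS)  # rolling local buffer
    for chunk in mic_chunks:
        ring.append(chunk)              # constant local processing...
        if detect_wake_word(chunk):     # ...but upload only on trigger
            send_to_cloud(b"".join(ring))
            ring.clear()

listen([b"background chatter", b"hey alexa, lights on", b"more chatter"])
```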

[-] dave@feddit.uk 6 points 6 months ago

Walkaway by Cory Doctorow.

[-] dave@feddit.uk 7 points 10 months ago

I think maybe a re-read is in order. They’re claiming the new format outperforms the (presumably) old format by 28%, not that the CTR is 28%.
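
Quick back-of-the-envelope with a made-up baseline to show the difference:

```python
# A 28% relative improvement on a typical CTR is nothing like a 28% CTR.
# The 1.5% baseline here is invented purely for illustration.
old_ctr = 0.015                 # hypothetical baseline click-through rate
new_ctr = old_ctr * 1.28        # "outperforms by 28%"
print(f"{new_ctr:.4f}")         # 0.0192 -> still under 2%, not 28%
```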

[-] dave@feddit.uk 6 points 1 year ago

What’s the difference between a duck?

One of its legs are both the same.

[-] dave@feddit.uk 5 points 1 year ago

It’s a great analysis, and I don’t disagree with anything you said (mostly because you’re better informed than I am). But you nailed it with “Why would I need this? I don’t know yet.” It should all be driven by need: the fact there are more options is great, but it doesn’t mean they should be used just because they’re there… For many hobbyists, ease of access and speed to get started is the main driver, and for those cases, pre-built boards are the answer.

I remember talking to a car manufacturer in the early 2000s who said it would be relatively easy to make cars to a custom length / load space. But they tend to make specific models because if you give people too much choice, they get paralysed and don’t choose anything.

I suspect it’s not quite that simple but the principle seems sound.

[-] dave@feddit.uk 5 points 1 year ago

I’m going to get back to watching that later.

[-] dave@feddit.uk 6 points 1 year ago

I’ve had most success explaining LLM ‘fallibility’ to non-techies using the image-gen examples. Google ‘AI hands’ and ask them if they see anything wrong. Now point out that we’re _extremely_ sensitive to anything wrong with our hands, so these are very easy for us to spot. But the AI has no concept of what a hand is; it’s just seen a _lot_ of images from different angles, sometimes with fingers hidden, sometimes intertwined, etc. So it will happily generate lots more of those kinds of images, with no regard to whether they could / should actually exist.

It’s a pretty similar idea with the LLMs. It’s seen a lot of text, and can put together words in a convincing-looking way. But it has no concept of what it’s writing, and the equivalent of the ‘hands’ will be there in the text. It’s just that we can’t see them at first glance like we can with the hands.
