submitted on 17 Jul 2023 by mark@programming.dev to c/privacy@lemmy.ml

I'm a dev and I was browsing Mozilla's careers page and came across this. I find it a little odd that a privacy-respecting company is interested in building an AI-powered recommendation engine. Wouldn't they need to sift through the very data we want kept private for a recommendation engine to be any good? Curious what others think.

top 2 comments
fiat_lux@kbin.social 1 points 1 year ago

Mozilla already has a huge amount of information, submitted by volunteers, with which to train its own subject-specific LLM.

And as we saw from Meta's nearly ethical-consideration-devoid CM3Leon paper (no, I will not pronounce it "Chameleon"), you don't need a huge dataset for training if you supplement it with your own preconfigured biases. For better or worse.

Just because something is "AI-powered" doesn't mean the training datasets have to be acquired unethically, even if there is something to be said about making material public and the inevitable consequence that it will be used.

I hope whoever gets the job can help pave the way for ethics standards in AI research.

mark@programming.dev 1 points 1 year ago

Ironically, this comment reads just like an AI wrote it.
