[-] SituationCake@aussie.zone 3 points 3 weeks ago

Of course they did. Every workplace has people who just turn up and tick boxes, with no care for their impact on anyone else - clients, coworkers, the public, etc. Zero f’s and zero diligence. Sometimes their mundane box-ticking is harmless, but in some positions it can be very destructive. I don’t even know how a business could ban the use of AI. Even if it were blocked on company IT, someone could just do it on their phone and copy-paste. A ban would only be useful if the person were discovered, and then they’d probably have to go through the warning process. But by then the damage is already done. So unfortunately I think we’re stuck with it from now on. The enshittification of the world continues.

[-] Pilk@aussie.zone 2 points 3 weeks ago* (last edited 3 weeks ago)

People need to realise how easy it is for a human to spot synthetic content, at least with the current state of AI text generation.

I'm not saying it should never be used (edit: this context is one of many exceptions), but I do think it should be clearly labelled as synthetic.

Reddit is a wasteland for this shit already, though. Probably too late.

[-] melbaboutown@aussie.zone 6 points 3 weeks ago* (last edited 3 weeks ago)

The problem is not only that it makes poor recommendations that affect the legal outcome and the safety of the child.

The LLM has also got hold of sensitive and potentially identifiable personal information, which is now subject to the company’s own rules about how that information will be handled and disclosed.

Edit: The gathering of the sensitive personal information also wasn’t disclosed or consented to.

So I don’t think it should be used for this purpose. The use here was completely inappropriate.

I’ve also refused to allow my GP to use AI to take notes during the consultation, because I don’t think the owner of that technology should have access to my medical information or give itself permission to disclose it.

PS: In the infancy of AI I volunteered in citizen science projects, helping to train models to recognise slides with cancer cells. I also watched with interest as it was used to generate simple forms challenging parking fines (?) for people who couldn’t afford legal assistance.

So it’s not like I’m screaming about technology being bad and Thomas Edison being a witch. I simply think a lot of corner cutting and misuse is happening without regulation, and it’s leading to real harm.
