submitted 3 months ago by Emperor@feddit.uk to c/andfinally@feddit.uk

In March, health technology startup HeHealth debuted Calmara AI, an app proclaiming to be “your intimacy bestie for safer sex.” The app was heavily marketed to women, who were told they could upload a picture of their partner’s penis for Calmara to scan for evidence of a sexually transmitted infection (STI). Users would get an emoji-laden “Clear!” or “Hold!!!” verdict — with a disclaimer saying the penis in question wasn’t necessarily free of all STIs.

The reaction Ella Dawson, sex and culture critic, had when she first saw Calmara AI’s claim to provide “AI-powered scans [that] give you clear, science-backed answers about your partner’s sexual health status” can be easily summed up: “big yikes.” She raised the alarm on social media, voicing her concerns about privacy and accuracy. The attention prompted a deluge of negative press and a Los Angeles Times investigation.

The Federal Trade Commission was also concerned. The agency notified HeHealth, the parent company of Calmara AI, that it was opening an investigation into possibly fraudulent advertising claims and privacy concerns. Within days, HeHealth pulled its apps off the market.

HeHealth CEO Yudara Kularathne emphasized that the FTC found no wrongdoing and said that no penalties were imposed. “The HeHealth consumer app was incurring significant losses, so we decided to close it to focus on profitability as a startup,” he wrote over email, saying that the company is now focused on business-to-business projects with governments and NGOs mostly outside the United States.

More and more AI-powered sexual health apps are cropping up, with no sign of slowing down. Some of the new consumer-focused apps are targeted toward women and queer people, who often have difficulties getting culturally sensitive and gender-informed care. Venture capitalists and funders see opportunities in underserved populations — but can prioritize growth over privacy and security.

top 6 comments
[-] SpaceNoodle@lemmy.world 8 points 3 months ago

Not today with the ongoing AI bullshit craze. There are legit AI models that can identify medical conditions, but I'd worry that this product is just an LLM wearing a stethoscope.

[-] CyprianSceptre@feddit.uk 6 points 3 months ago

Forget about the AI. There is so much wrong with this.

  1. If it gets it wrong, either false positive or false negative, then there are serious consequences. This means there is no way it can "play it safe" with an answer.
  2. I don't trust that it's not capturing and storing data. The risk of such highly personal data being leaked is completely unwarranted.
  3. It works from a photo, therefore it's unlikely to pick up much more than you can see by eye. You'd be better off just learning what to look for.
  4. It won't detect STIs with no visual symptoms, so provides an entirely false sense of confidence, potentially increasing the risks of those STIs to the general population.
  5. Let's say it works perfectly, and the AI algorithm runs completely locally with no data being transferred to the cloud or captured/stored. Do you really want someone you don't trust to be honest about STIs taking a photo of your genitals?

That's what I can come up with in 10 seconds. Feel free to add to the list, I'm sure it's not complete...

[-] GreatAlbatross@feddit.uk 3 points 3 months ago

The year is 2044.

The police pull you over for doing 90 on an 80-limited motorway.

The police computer malfunctions, and instead of serving up your driving license photo for comparison, they get a lovely picture of that bum rash you caught in the 2030s.

[-] I_am_10_squirrels@beehaw.org 1 point 3 months ago

Hey Becky, now we smash? The computer says I'm clean!

[-] Hossenfeffer@feddit.uk 3 points 3 months ago

When I had a camera shoved up my fundament, an AI was watching the camera feed to learn how to spot potentially cancerous growths, precancerous polyps, etc. Lucky AI. Apparently the process is that it scans the feed, highlights on-screen areas it wants the radiologist to take another look at, and they then verify whether it's a real issue or nothing to worry about. In that process flow I'm entirely comfortable with it being a second pair of eyes for the radiologist.

Eventually I guess it could replace the radiologist, but I'd want to see a 100% success rate demonstrated over a sufficiently long test period before that could happen.

[-] fubarx@lemmy.ml 3 points 3 months ago

The Gradient Descent, Hallucination, and Insufficient Training Data jokes just write themselves.
