[-] llama@lemmy.dbzer0.com 2 points 3 hours ago* (last edited 3 hours ago)

The prompt was something like, "What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate than on the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.

It even mentioned this very post in item 3 and in the second bullet point of the "Notable Posts" section.

However, when I ran the same prompt (or similar ones) again, it started hallucinating a lot of information. So the answers seem very hit or miss. Maybe that's an issue that can be solved with some prompt engineering and as one's account becomes more established.

[-] llama@lemmy.dbzer0.com 1 points 15 hours ago

There's a flatpak too, but it's not good.

[-] llama@lemmy.dbzer0.com 2 points 15 hours ago

Really? It's been working just fine for me.

[-] llama@lemmy.dbzer0.com 3 points 19 hours ago

I understand that Perplexity employs various language models to handle queries and that the responses it generates may not come directly from those models' training data, since a significant portion of the output is drawn from what it scraped off the web. However, a major concern for some individuals is the potential for their posts to be scraped and also used to train AI models; hence my post.

I'm not anti-AI, and I see your point that transformers often dissociate content from its creator. However, one could argue this doesn't fully mitigate the concern. Even if the model can't link the content back to the original author, it's still using their data without explicit consent. The fact that LLMs might hallucinate or fail to attribute quotes accurately doesn't resolve the potential plagiarism issue; instead, it highlights another problematic aspect of these models, IMO.

[-] llama@lemmy.dbzer0.com 3 points 19 hours ago* (last edited 19 hours ago)

Yes, the platform in question is Perplexity AI, and it conducts web searches. When it performs a web search, it generally gathers and analyzes a substantial amount of data. This compiled information can be utilized in various ways, including creating profiles of specific individuals or users. The reason I bring this up is that some people might consider this a privacy concern.

I understand that Perplexity employs other language models to process queries and that the information it provides isn't necessarily part of the training data used by these models. However, the primary concern for some people could be that their posts are being scraped (which raises a lot of privacy questions) and could also, potentially, be used to train AI models. Hence, the question.

[-] llama@lemmy.dbzer0.com 15 points 21 hours ago* (last edited 21 hours ago)

There are several ways, honestly. For Android, there's NewPipe; the app itself fetches the YouTube data. For PC, there are similar applications that do the same, such as FreeTube. Those are the solutions I recommend.

If you're into self-hosting, you can also host your own Invidious and/or Piped instance. But I like NewPipe and FreeTube better.

[-] llama@lemmy.dbzer0.com 6 points 21 hours ago

Yeah, it hallucinated that part.

[-] llama@lemmy.dbzer0.com 8 points 22 hours ago

Don't give me any ideas now >:)

[-] llama@lemmy.dbzer0.com 6 points 22 hours ago

Oh, no. I don't dislike it, but I also don't have strong feelings about it. I'm just interested in hearing other people's opinions; I believe that if something is public, then it is indeed public.

153
submitted 22 hours ago* (last edited 3 hours ago) by llama@lemmy.dbzer0.com to c/asklemmy@lemmy.world

I created this account two days ago, but one of my posts ended up in the (metaphorical) hands of an AI powered search engine that has scraping capabilities. What do you guys think about this? How do you feel about your posts/content getting scraped off of the web and potentially being used by AI models and/or AI powered tools? Curious to hear your experiences and thoughts on this.


# Prompt Update

The prompt was something like, "What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate than on the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.

It even mentioned this very post in item 3 and in the second bullet point of the "Notable Posts" section.

For more information, check this comment.


Edit¹: This is Perplexity. Perplexity AI is an advanced conversational search engine that enhances the research experience by providing concise, sourced answers to user queries; it operates by leveraging AI language models, such as GPT-4, to analyze information from various sources on the web. It employs data scraping techniques to gather information from those sources, which it then uses to feed its large language models (LLMs) when generating responses. The scraping process involves automated crawlers that index and extract content from websites, including articles, summaries, and other relevant data. (12/28/2024)
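To make the scraping step concrete, here is a minimal, illustrative sketch of what such a crawler does at its core: fetch a page, discard the markup, and keep the visible text and outgoing links for indexing. This is a toy example using only the Python standard library; it is not Perplexity's actual pipeline, and all names here are my own.

```python
# Illustrative sketch only: a tiny crawler step that fetches one page and
# extracts its visible text and links, the raw material an indexer would use.
from html.parser import HTMLParser
from urllib.request import urlopen


class TextExtractor(HTMLParser):
    """Collects visible text and outgoing links from an HTML document."""

    def __init__(self):
        super().__init__()
        self.text_parts = []
        self.links = []
        self._skip_depth = 0  # nonzero while inside <script> or <style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep only text that would actually render on the page.
        if not self._skip_depth and data.strip():
            self.text_parts.append(data.strip())


def scrape(url):
    """Fetch one page and return (visible_text, links)."""
    with urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.text_parts), parser.links
```

A real crawler would add politeness (robots.txt, rate limits), a URL frontier, and deduplication, but the extract-and-index core is essentially this.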

Edit²: One could argue that data scraping by services like Perplexity raises privacy concerns because it collects and processes vast amounts of online information without explicit user consent, potentially including personal data, comments, or content that individuals posted without expecting it to be aggregated and/or analyzed by AI systems. One could also argue that this indiscriminate collection raises questions about data ownership, proper attribution, and the right to control how one's digital footprint is used in training AI models. (12/28/2024)

Edit³: I added the second image to the post and its description. (12/29/2024)

[-] llama@lemmy.dbzer0.com 7 points 23 hours ago

That's totally okay! You didn't address my specific issue, but I didn't know that there were instances across the Fediverse that chose to defederate from Threads. I think that, in a way, you've partly answered my question. Thank you for taking the time to answer.

[-] llama@lemmy.dbzer0.com 14 points 23 hours ago* (last edited 23 hours ago)

Oh, that's interesting; I had no idea. I'll go to my Threads profile and check. I'll edit this comment later to note whether that option really exists.

Edit: The only option I was able to find was "Suggesting posts on other apps," and when I clicked on it I only had two options: Instagram and Facebook – none of which are part of the Fediverse. I wasn't able to find anything on 'Meta's Accounts Center' either.

[-] llama@lemmy.dbzer0.com 14 points 23 hours ago

I didn't know that website, thank you! However, my instance has chosen not to defederate from Threads – according to the source you cited. So, I'm not entirely sure why this is happening.

37

I use both Threads and Mastodon. However, I realized that sometimes (public) profiles on Threads don't show up on Mastodon and vice versa. I also realized that most comments made on Threads posts don't show up on Mastodon – that is, if the posts appear on Mastodon at all. The same is true the other way around. Why does this happen?

55
submitted 2 days ago* (last edited 2 days ago) by llama@lemmy.dbzer0.com to c/asklemmy@lemmy.world

I've been using Lemmy since the Reddit exodus. I haven't looked back since, but I miss a lot of mental health communities that I haven't been able to find replacements for here on Lemmy. Does anyone know any cool mental health communities that are somewhat active?

view more: next ›

llama

joined 2 days ago