[-] Someplaceunknown@fedia.io 233 points 1 month ago

"LLMs such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive," Marcus predicts. "When everyone realizes this, the financial bubble may burst quickly."

Please let this happen

[-] orl0pl@lemmy.world 36 points 1 month ago

Market crash and third world war. What a time to be alive!

[-] Semi_Hemi_Demigod@lemmy.world 199 points 1 month ago

I wish just once we could have some kind of tech innovation without a bunch of douchebag techbros thinking it's going to solve all the world's problems with no side effects while they get super rich off it.

[-] ohwhatfollyisman@lemmy.world 64 points 1 month ago

... bunch of douchebag techbros thinking it's going to solve all the world's problems with no side effects...

one doesn't imagine any of them even remotely thinks a technological panacea is feasible.

... while they get super rich off it.

because they're only focusing on this.

[-] azertyfun@sh.itjust.works 16 points 1 month ago

Oh they definitely exist. At a high level the bullshit is driven by malicious greed, but there are also people who are naive and ignorant and hopeful enough to hear that drivel and truly believe in it.

Like when Microsoft shoves GPT4 into notepad.exe. Obviously a terrible terrible product from a UX/CX perspective. But also, extremely expensive for Microsoft right? They don't gain anything by stuffing their products with useless annoying features that eat expensive cloud compute like a kid eats candy. That only happens because their management people truly believe, honest to god, that this is a sound business strategy, which would only be the case if they are completely misunderstanding what GPT4 is and could be and actually think that future improvements would be so great that there is a path to mass monetization somehow.

[-] Voroxpete@sh.itjust.works 17 points 1 month ago

That's not what's happening here. Microsoft management are well aware that AI isn't making them any money, but the company made a multi-billion-dollar bet on the idea that it would, and now they have to convince shareholders that they didn't epically fuck up. Shoving AI into stuff like notepad is basically about artificially inflating "consumer uptake" numbers that they can then show to credulous investors to suggest that any day now this whole thing is going to explode into an absolute tidal wave of growth, so you'd better buy more stock right now, better not miss out.

[-] Semi_Hemi_Demigod@lemmy.world 13 points 1 month ago

True, they just sell it to their investors as a panacea

[-] halcyoncmdr@lemmy.world 103 points 1 month ago

No shit. This was obvious from day one. This was never AGI, and was never going to be AGI.

Institutional investors saw an opportunity to make a shit ton of money and pumped it up as if it was world changing. They'll dump it like they always do, it will crash, and they'll make billions in the process with absolutely no negative repercussions.

[-] Greg@lemmy.ca 65 points 1 month ago

largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence

Who said that LLMs were going to become AGI? LLMs as part of an AGI system makes sense, but not LLMs alone becoming AGI. Only articles and blog posts from people who didn't understand the technology were making those claims, which helped feed the hype.

I 100% agree that we're going to see an AI market correction. It's going to take a lot of hard human work to achieve the real value of LLMs. The hype is distracting from the real valuable and interesting work.

[-] mutant_zz@lemmy.world 30 points 1 month ago

Microsoft researchers published a paper about GPT-4 titled "Sparks of Artificial General Intelligence".

I don't think they really believe it but it's good to bring in VC money

[-] Chozo@fedia.io 28 points 1 month ago

Journalists have no clue what AI even is. Nearly every article about AI is written by somebody who couldn't tell you the difference between an LLM and an AGI, and should be dismissed as spam.

[-] zbyte64@awful.systems 19 points 1 month ago

The call is coming from inside. Google CEO claims it will be like alien intelligence so we should just trust it to make political decisions for us bro: https://www.computing.co.uk/news/2024/ai/former-google-ceo-eric-schmidt-urges-ai-acceleration-dismisses-climate

[-] theacharnian@lemmy.ca 56 points 1 month ago

It's so funny how all this is only a problem within a capitalist frame of reference.

[-] DirigibleProtein@aussie.zone 53 points 1 month ago
[-] Blackmist@feddit.uk 45 points 1 month ago

Thank fuck. Can we have cheaper graphics cards again please?

I'm sure an RTX 4090 is very impressive, but it's not £1800 impressive.

[-] lorty@lemmy.ml 10 points 1 month ago

Just wait for the 5090 prices...

[-] masquenox@lemmy.world 41 points 1 month ago
[-] UnderpantsWeevil@lemmy.world 16 points 1 month ago

I've been hearing about the imminent crash for the last two years. New money keeps getting injected into the system. The bubble can't deflate while both the public and private sector have an unlimited lung capacity to keep puffing into it. FFS, bitcoin is on a tear right now, just because Trump won the election.

This bullshit isn't going away. It's only going to get forced down our throats harder and harder, until we swallow or choke on it.

[-] homesweethomeMrL@lemmy.world 32 points 1 month ago

"The economics are likely to be grim," Marcus wrote on his Substack. "Sky high valuation of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence."

"As I have always warned," he added, "that's just a fantasy."

[-] Boxscape@lemmy.sdf.org 32 points 1 month ago

Well duhhhh.
Language models are insufficient.
They also need:

[-] CerealKiller01@lemmy.world 31 points 1 month ago

Huh?

Smartphone improvements hit a wall a few years ago (disregarding folding screens, which make up a small market share, the rate of improvement slowed drastically), and the industry is doing fine. It's not growing like it used to, but that just means people are keeping their smartphones longer, not that people stopped using them.

Even if AI were to completely freeze right now, people will continue using it.

Why are people reacting like AI is going to get dropped?

[-] finitebanjo@lemmy.world 19 points 1 month ago* (last edited 1 month ago)

People are dumping billions of dollars into it, mostly power, but it cannot turn profit.

So the companies who, for example, revived a nuclear power facility in order to feed their machine with ever diminishing returns of quality output are going to shut everything down at massive losses, with countless hours of human work and lifespan thrown down the drain.

This will have quite a large economic impact as many newly created jobs go up in smoke and businesses that structured themselves around the assumption of continued availability of high-end AI need to reorganize or go out of business.

Search up the Dot Com Bubble.

[-] theherk@lemmy.world 18 points 1 month ago

Because in some eyes, infinite rapid growth is the only measure of success.

[-] pdlorah@lemmy.ca 11 points 1 month ago

People pay real money for smartphones.

[-] Petter1@lemm.ee 10 points 1 month ago

People pay real money for AIaaS as well.

[-] randon31415@lemmy.world 30 points 1 month ago

The hype should go the other way. Instead of bigger and bigger models that do more and more - have smaller models that are just as effective. Get them onto personal computers; get them onto phones; get them onto Arduino minis that cost $20 - and then have those models be as good as the big LLMs and Image gen programs.
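For a rough sense of what "smaller" has to mean here, a back-of-the-envelope memory calculation helps (the model sizes and quantization widths below are illustrative assumptions, not measurements of any particular model):

```python
# Rough memory footprint of a model's weights: parameters × bytes per parameter.
# Illustrative sizes only; real runtimes add overhead for activations and KV cache.

def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate weight storage in gigabytes (GB = 1e9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

for name, n_params in [("7B model", 7e9), ("1B model", 1e9), ("100M model", 1e8)]:
    for bits in (16, 4):
        print(f"{name} @ {bits}-bit: {weight_memory_gb(n_params, bits):.2f} GB")
```

Even at 4-bit quantization a 7B model needs about 3.5 GB for weights alone: plausible on a recent phone, far beyond a $20 microcontroller. Closing that gap while keeping quality is exactly the hard part of the "small but just as effective" goal.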

[-] Yaky@slrpnk.net 23 points 1 month ago

Other than with language models, this has already happened: take a look at apps such as Merlin Bird ID (identifies birds fairly well by sound and somewhat okay visually), WhoBird (identifies birds by sound), and Seek (visually identifies plants, fungi, insects, and animals). All of them work offline. IMO these are much better uses of ML than spammer-friendly text generation.

[-] recapitated@lemmy.world 22 points 1 month ago

I think I've heard about enough of experts predicting the future lately.

[-] LovableSidekick@lemmy.world 20 points 1 month ago* (last edited 1 month ago)

Marcus is right, incremental improvements in AIs like ChatGPT will not lead to AGI and were never on that course to begin with. What LLMs do is fundamentally not "intelligence", they just imitate human response based on existing human-generated content. This can produce usable results, but not because the LLM has any understanding of the question. Since the current AI surge is based almost entirely on LLMs, the delusion that the industry will soon achieve AGI is doomed to fall apart - but not until a lot of smart speculators have gotten in and out and made a pile of money.

[-] shortwavesurfer@lemmy.zip 19 points 1 month ago

Because nobody could have possibly seen that coming. /s

[-] originalucifer@moist.catsweat.com 18 points 1 month ago

is this where we get to explain again why it's not really AI?

[-] just_another_person@lemmy.world 20 points 1 month ago

Nope, just where you divest your stocks like any other tech run.

[-] Etterra@lemmy.world 18 points 1 month ago

Good. I look forward to all these idiots finally accepting that they drastically misunderstood what LLMs actually are and are not. I know their idiotic brains are only able to understand simple concepts like "line must go up" and follow them like religious tenets though, so I'm sure they'll waste everyone's time and increase enshittification with some other new bullshit once they quietly remove their broken (and unprofitable) AI from stuff.

[-] KeenFlame@feddit.nu 17 points 1 month ago

I am so tired of the ai hype and hate. Please give me my gen art interest back please just make it obscure again to program art I beg of you

[-] Defaced@lemmy.world 16 points 1 month ago

This is why you're seeing news articles from Sam Altman saying that AGI will blow past us without any societal impact. He's trying to lessen the blow of the bubble bursting for AI/ML.

[-] Mushroomm@sh.itjust.works 13 points 1 month ago

It's been 5 minutes since the new thing did a new thing. Is it the end?

[-] OsrsNeedsF2P@lemmy.ml 13 points 1 month ago

I work with people who work in this field. Everyone knows this, but there's also an increased effort in improvements all across the stack, not just the final LLM. I personally suspect the current generation of LLMs is at its peak, but with each breakthrough the technology will climb again.

Put differently, I still suspect LLMs will be at least twice as good in 10 years.

[-] dejected_warp_core@lemmy.world 13 points 1 month ago

Welcome to the top of the sigmoid curve.

If you were wondering what 1999 felt like WRT the internet, well, here we are. The Matrix was still fresh in everyone's mind and a lot of online tech innovation kinda plateaued, followed by some "market adjustments."
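The "top of the sigmoid" intuition can be made concrete: on a logistic curve, each equal step of input buys less and less improvement past the midpoint. The numbers below are a generic illustration of that shape, not a model of any real benchmark:

```python
import math

def logistic(x: float) -> float:
    """Standard logistic function: rapid early growth, then a plateau."""
    return 1 / (1 + math.exp(-x))

# Marginal gain from one more unit of "effort" shrinks past the midpoint (x = 0).
for x in range(0, 6):
    gain = logistic(x + 1) - logistic(x)
    print(f"x={x}: gain from one more step = {gain:.3f}")
```

The catch, of course, is that you can only be sure you were at the top of a sigmoid (rather than partway up an exponential) in hindsight.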

[-] RecluseRamble@lemmy.dbzer0.com 11 points 1 month ago

Of course it'll crash. Saying it's imminent though suggests someone needs to exercise their shorts.

[-] rational_lib@lemmy.world 11 points 1 month ago

As I use copilot to write software, I have a hard time seeing how it'll get better than it already is. The fundamental problem of all machine learning is that the training data has to be good enough to solve the problem. So the problems I run into make sense, like:

  1. Copilot can't read my mind and figure out what I'm trying to do.
  2. I'm working on an uncommon problem where the typical solutions don't work.
  3. Copilot is unable to tell when it doesn't "know" the answer, because of course it's just simulating communication and doesn't really know anything.

2 and 3 could be alleviated, but probably not solved completely, with more and better data or engineering changes - but obviously AI developers started by training the models on the most useful data and the strategies they think work best. 1 seems fundamentally unsolvable.

I think there could be some more advances in finding more and better use cases, but I'm a pessimist when it comes to any serious advances in the underlying technology.

[-] ggppjj@lemmy.world 13 points 1 month ago* (last edited 1 month ago)

Not copilot, but I run into a fourth problem:
4. The LLM gets hung up on insisting that a newer feature of the language I'm using is wrong and keeps focusing on "fixing" it, even though it has access to the newest correct specifications where the feature is explicitly defined and explained.

[-] EleventhHour@lemmy.world 10 points 1 month ago

Apparently, there was only so much IP to steal

this post was submitted on 13 Nov 2024
670 points (94.9% liked)
