empiko a day ago

Observe what the AI companies are doing, not what they are saying. If they expected to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales when you will be operating AGI in a few short years? Surely all resources should go towards that goal, as it is supposed to usher humanity into a new prosperous age (somehow).

  • Lichtso 13 hours ago

    > Why bother developing chatbots

    Maybe it is the reverse? It is not them offering a product, it is the users offering their interaction data. Data which might be harvested for further training of the real deal, which is not the product. Think about it: they (companies like OpenAI) have created a broad and diverse user base which, without a second thought, feeds them up-to-date info about everything happening in the world, down to individual lives and even inner thoughts. No one in the history of mankind ever had such a holistic view, almost a god's-eye view. That is certainly something a super intelligence would be interested in. They may have achieved it already and we are seeing one of its strategies playing out. Not saying they have, but this observation is not incompatible with it, nor does it indicate they haven't.

    • visarga 4 hours ago

      It's not about achieving AGI as a final product, it's about building a perpetual learning machine fueled by real-time human interaction. I call it the human-AI experience flywheel.

      People bring problems to the LLM, the LLM produces some text, people use it and later return to iterate. This iteration functions as feedback for earlier responses from the LLM. If you judge an AI response by the next 20 or more rounds of interaction, you can gauge whether it was useful. They can create RLHF data this way, using hindsight or extra context from other related conversations of the same user on the same topic. That works because users try the LLM's ideas in reality and bring the outcomes back to the model, or they simply recall from personal experience whether that approach would work. The system isn't just built to be right; it's built to be correctable by the user base, at scale.
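
      A minimal sketch of what that hindsight labelling could look like, assuming a chat log is just a list of turns and some separate judge function scores an earlier answer by how the later turns went (every name here is hypothetical, not anyone's actual pipeline):

        def hindsight_preference_data(conversation, judge):
            # conversation: list of {"prompt": ..., "response": ...} turns
            # judge: hypothetical scorer that reads the FOLLOWING turns
            #        (did the user report success, accept the answer, push back?)
            examples = []
            for i, turn in enumerate(conversation):
                followups = conversation[i + 1 : i + 21]  # "next 20 rounds" of hindsight
                reward = judge(turn["response"], followups)
                examples.append((turn["prompt"], turn["response"], reward))
            return examples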

      OpenAI has 500M users; if they generate 1000 tokens/user/day, that means 0.5T interactive tokens/day. The chat logs dwarf the original training set in size and are very diverse, targeted to our interests, and mixed with feedback. They are also "on policy" for the LLM, meaning they contain corrections to mistakes the LLM made, not generic information like a web scrape.
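
      A quick back-of-the-envelope check of those numbers (the user count and token rate are the figures above; the pretraining-corpus size is just an illustrative guess):

        users = 500_000_000                 # assumed active users
        tokens_per_user_per_day = 1_000     # assumed output per user per day
        daily_tokens = users * tokens_per_user_per_day
        print(daily_tokens)                 # 500_000_000_000, i.e. ~0.5T/day

        assumed_pretrain_corpus = 15e12     # hypothetical ~15T-token training set
        print(assumed_pretrain_corpus / daily_tokens)  # ~30 days of logs to match it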

      You're right that LLMs eventually might not even need to crawl the web; they have the whole of society dumping data into their open mouths. That did not happen with web search engines; only social networks did that in the past. But social networks are filled with our culture wars and self-conscious posing, while the chat room is an environment where we don't need to signal our group alignment.

      Web scraping gives you humanity's external productions - what we chose to publish. But conversational logs capture our thinking process, our mistakes, our iterative refinements. Google learned what we wanted to find, but LLMs learn how we think through problems.

      • FuckButtons 3 hours ago

        I see where you’re coming from, but I think teasing out something that looks like a clear objective function, one that generalizes to improved intelligence, from LLM interaction logs is going to be hellishly difficult. Consider that most of the best LLM pre-training comes from being very, very judicious with the training data. Selecting the right corpus of LLM interaction logs and then defining an objective function that correctly models... what? Being helpful? That sounds far harder than just working from scratch with RLHF.

    • blibble 12 hours ago

      > No one in the history of mankind ever had such a holistic view, almost gods eye.

      I distinctly remember search engines 30 years ago having a "live searches" page (with optional "include adult searches" mode)

      • kylecazar 8 hours ago

        In the mid 90's? What did the "live searches" feature do?

        • sllabres 6 hours ago

          It showed which queries other users were sending to the search engine at that moment.

    • ysofunny 12 hours ago

      that possibility makes me feel weird about paying a subscription... they should pay me!

      or the best models should be free to use. if it's free to use then I think I can live with it

  • grafmax 11 hours ago

    > it is supposed to usher the humanity into a new prosperous age (somehow).

    More like usher in climate catastrophe way ahead of schedule. AI-driven data center build outs are a major source of new energy use, and this trend is only intensifying. Dangerously irresponsible marketing cloaks the impact of these companies on our future.

    • Redoubts 8 hours ago

      Incredibly bizarre take. You can build more capacity without frying the planet. Many AI companies are directly investing in nuclear plants for this reason, for example.

  • imiric 19 hours ago

    Related to your point: if these tools are close to having super-human intelligence, and they make humans so much more productive, why aren't we seeing improvements at a much faster rate than we are now? Why aren't inherent problems like hallucination already solved, or at least less of an issue? Surely the smartest researchers and engineers money can buy would be dogfooding, no?

    This is the main point that proves to me that these companies are mostly selling us snake oil. Yes, there is a great deal of utility from even the current technology. It can detect patterns in data that no human could; that alone can be revolutionary in some fields. It can generate data that mimics anything humans have produced, and certain permutations of that can be insightful. It can produce fascinating images, audio, and video. Some of these capabilities raise safety concerns, particularly in the wrong hands, and important questions that society needs to address. These hurdles are surmountable, but they require focusing on the reality of what these tools can do, instead of on whatever a group of serial tech entrepreneurs looking for the next cashout opportunity tell us they can do.

    The constant anthropomorphization of this technology is dishonest at best, and harmful and dangerous at worst.

    • xoralkindi 16 hours ago

      > It can generate data that mimics anything humans have produced...

      No, it can generate data that mimics anything humans have put on the WWW

      • nradov 12 hours ago

        The frontier model developers have licensed access to a huge volume of training data which isn't available on the public WWW.

    • ozim 17 hours ago

      Anthropomorphization definitely sucks, and the hype is over the top.

      But it is far from snake oil, as it actually is useful and really does a lot.

    • deadbabe 19 hours ago

      Data from the future is tunneling into the past to mess up our weights and ensure we never achieve AGI.

    • richk449 17 hours ago

      > if these tools are close to having super-human intelligence, and they make humans so much more productive, why aren't we seeing improvements at a much faster rate than we are now? Why aren't inherent problems like hallucination already solved, or at least less of an issue? Surely the smartest researchers and engineers money can buy would be dogfooding, no?

      Hallucination does seem to be much less of an issue now. I hardly even hear about it - like it just faded away.

      As far as I can tell smart engineers are using AI tools, particularly people doing coding, but even non-coding roles.

      The criticism feels about three years out of date.

      • imiric 16 hours ago

        Not at all. The reason it's not talked about as much these days is because the prevailing way to work around it is by using "agents". I.e. by continuously prompting the LLM in a loop until it happens to generate the correct response. This brute force approach is hardly a solution, especially in fields that don't have a quick way of verifying the output. In programming, trying to compile the code can catch many (but definitely not all) issues. In other science and humanities fields this is just not possible, and verifying the output is much more labor intensive.
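
        The loop being described looks roughly like this (just a sketch of the pattern, not any particular vendor's agent framework; call_llm and check are stand-ins):

          def agent_loop(task, call_llm, check, max_attempts=5):
              # Brute-force "agent": re-prompt until the output passes some check.
              # For code, check() can be a compiler or test suite; most other
              # fields have no cheap equivalent, which is the point above.
              feedback = ""
              for _ in range(max_attempts):
                  answer = call_llm(task + feedback)
                  ok, errors = check(answer)
                  if ok:
                      return answer
                  feedback = "\nPrevious attempt failed: " + errors
              return answer  # may still be wrong; nothing guarantees convergence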

        The other reason is because the primary focus of the last 3 years has been scaling the data and hardware up, with a bunch of (much needed) engineering around it. This has produced better results, but it can't sustain the AGI promises for much longer. The industry can only survive on shiny value added services and smoke and mirrors for so long.

        • majormajor 13 hours ago

          > In other science and humanities fields this is just not possible, and verifying the output is much more labor intensive.

          Even just in industry, I think data functions at companies will have a dicey future.

          I haven't seen many places where there's scientific peer review - or even software-engineering-level code-review - of findings from data science teams. If the data scientist team says "we should go after this demographic" and it sounds plausible, it usually gets implemented.

          So if the ability to validate was already missing even pre-LLM, what hope is there for validation of the LLM-powered replacement? And so what hope is there for the person doing the non-LLM version of keeping their job (at least until several quarters later, when the strategy either proves itself out or doesn't)?

          How many other departments are there where the same lack of rigor already exists? Marketing, sales, HR... yeesh.

      • natebc 15 hours ago

        > Hallucination does seem to be much less of an issue now. I hardly even hear about it - like it just faded away.

        Last week I had Claude and ChatGPT both tell me different non-existent options to migrate a virtual machine from VMware to Hyper-V.

        The week before that, one of them (don't remember which, honestly) gave me non-existent options for fio.

        Both of these are things that the first-party documentation or man page has correct, but I was being lazy and trying to save time or be more efficient, like these things are supposed to let us do. Not so much.

        Hallucinations are still a problem.

      • nunez 13 hours ago

        The few times I've used Google to search for something (Kagi is amazing!), the Gemini assistant at the top fabricated something insanely wrong.

        A few days ago, I asked free ChatGPT to tell me the head brewer of a small brewery in Corpus Christi. It told me that the brewery didn't exist, which it did, since we were going there in a few minutes; after re-prompting it, it gave me some phone number that it found in a business filing. (ChatGPT has been using web search for RAG for some time now.)

        Hallucinations are still a massive problem IMO.

        • seanhunter 42 minutes ago

          The Google AI clippy thing at the top of search has to be one of the most pointless, ill-advised and brand-damaging stunts they could have done. Because compute is expensive at scale (even for them), it’s running a small model, so the suggestions are pretty terrible. That leads people who don’t understand what’s happening to think their AI is just bad in general.

          That’s not the case in my experience. Gemini is almost as good as Claude for most of the things I try.

          That said, for queries that don’t use agentic search or RAG, hallucination is as bad a problem as ever, and it won’t improve because hallucination is all these models do. In Karpathy’s phrase, they “dream text”. Agentic search and RAG and similar techniques disguise the issue because they stuff the context of the model with real results, so the scope for it to go noticeably off the rails is smaller. But it’s still very visible if you ask for references, links, etc.: many, most, or sometimes all will be hallucinations, depending on the prompt.
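
          For what it's worth, the context-stuffing being described is roughly this pattern (a sketch; search and llm are placeholders, not a real API):

            def answer_with_rag(question, search, llm, k=5):
                # Fetch real documents first, then let the model generate
                # only against what was actually retrieved.
                docs = search(question)[:k]
                context = "\n\n".join(d["text"] for d in docs)
                prompt = ("Answer using ONLY the sources below, with citations.\n\n"
                          f"{context}\n\nQuestion: {question}")
                # Less room to go off the rails, but cited links can still be invented.
                return llm(prompt)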

      • taormina 13 hours ago

        ChatGPT constantly hallucinates, at least once per conversation I attempt to have with it. We all gave up on bitching about it constantly because we would never talk about anything else, but I have no reason to believe that any LLM has even vaguely solved this problem.

      • HexDecOctBin 10 hours ago

        I just tried asking ChatGPT on how to "force PhotoSync to not upload images to a B2 bucket that are already uploaded previously", and all it could do is hallucinate options that don't exist and webpages that are irrelevant. This is with the latest model and all the reasoning and researching applied, and across multiple messages in multiple chats. So no, hallucination is still a huge problem.

      • majormajor 14 hours ago

        > Hallucination does seem to be much less of an issue now. I hardly even hear about it - like it just faded away.

        Nonsense, there is a TON of discussion around how the standard workflow is "have Cursor-or-whatever check the linter and try to run the tests and keep iterating until it gets it right" that is nothing but "work around hallucinations." Functions that don't exist. Lines that don't do what the code would've required them to do. Etc. And yet I still hit cases weekly-at-least, when trying to use these "agents" to do more complex things, where it talks itself into a circle and can't figure it out.

        What are you trying to get these things to do, and how are you validating that there are no hallucinations? You hardly ever "hear about it" but ... do you see it? How deeply are you checking for it?

        (It's also just old news - a new hallucination is less newsworthy now, we are all so used to it.)

        Of course, the internet is full of people claiming that they are using the same tools I am but with multiple factors higher output. Yet I wonder... if this is the case, where is the acceleration in improvement in quality in any of the open source software I use daily? Or where are the new 10x-AI-agent-produced replacements? (Or the closed-source products, for that matter - but there it's harder to track the actual code.) Or is everyone who's doing less-technical, less-intricate work just getting themselves hyped into a tizzy about getting faster generation of basic boilerplate for languages they hadn't personally mastered before?

      • amlib 13 hours ago

        How can it not be hallucinating anymore if everything the current crop of generative AI algorithms does IS hallucination? What actually happens is that sometimes the hallucinated output is "right", or more precisely, coherent with the user's mental model.

      • kevinventullo 10 hours ago

        You don’t hear about it anymore because it’s not worth talking about anymore. Everyone implicitly understands they are liable to make up nonsense.

      • leptons 16 hours ago

        Are you hallucinating?? "AI" is still constantly hallucinating. It still writes pointless code that does nothing towards anything I need it to do, a lot more often than is acceptable.

  • pu_pe a day ago

    I don't think it's as simple as that. Chatbots can be used to harvest data, and sales are still important before and after you achieve AGI.

    • worldsayshi 19 hours ago

      It could also be the case that they think AGI could arrive at any moment, but it's very uncertain when, and only so many people can work on it simultaneously. So they spread out investments to also cover low-uncertainty areas.

    • energy123 19 hours ago

      Besides, there is Sutskever's SSI which is avoiding customers.

      • timy2shoes 16 hours ago

        Of course they are. Why would you want revenue? If you show revenue, people will ask 'HOW MUCH?' and it will never be enough. The company that was the 100xer, the 1000xer is suddenly the 2x dog. But if you have NO revenue, you can say you're pre-revenue! You're a potential pure play... It's not about how much you earn, it's about how much you're worth. And who is worth the most? Companies that lose money!

    • pests 18 hours ago

      OpenAI considers money to be useless post-AGI. They’ve even made statements that any investments are basically donations once AGI is achieved.

  • bluGill 18 hours ago

    The people who made the money in gold rushes sold shovels; they didn't mine the gold. Sure, some random people found gold and made a lot of money, but many others didn't strike it rich.

    As such, even if there is a lot of money AI will make, it can still be the right decision to sell tools to others who will figure out how to use it. And of course, if it turns out to be another pointless fad with no real value, you still make money. (I'd predict the answer is in between: we are not going to get some AGI that takes over the world, but there will be niches where it is a big help, and those niches will be worth selling tools into.)

    • convolvatron 16 hours ago

      It's so good that people seem to automatically exclude the middle: it's either the arrival of the singularity or complete fakery. I think you've expressed the most likely outcome by far - that there will be some really interesting tools and use cases, and some things will be changed forever - but it's very unlikely that _everything_ will.

  • rvz a day ago

    Exactly. For example, Microsoft was building data centers all over the world since "AGI" was "around the corner" according to them.

    Now they are cancelling those plans. For them "AGI" was cancelled.

    OpenAI claims to be closer and closer to "AGI" even as more top scientists leave or get poached by other labs that are behind.

    So why would you leave if the promise of achieving "AGI" was going to produce "$100B dollars of profits" as per OpenAI's and Microsoft's definition in their deal?

    Their actions tell more than any of their statements or claims.

    • cm277 a day ago

      Yes, this. Microsoft has other businesses that can make a lot of money (regular Azure) and tons of cash flow. The fact that they are pulling back from the market leader (OpenAI), which they mostly owned, should be all the negative signal people need: AGI is not close, and there is no real moat even for OpenAI.

      • whynotminot 20 hours ago

        Well, there are clauses in their relationship with OpenAI that sever the relationship when AGI is reached. So it’s actually not in Microsoft’s interest for OpenAI to get there.

        • PessimalDecimal 20 hours ago

          I haven't heard of this. Can you provide a reference? I'd love to see how they even define AGI crisply enough for a contract.

          • diggan 19 hours ago

            > I'd love to see how they even define AGI crisply enough for a contract.

            Seems to be about this:

            > As per the current terms, when OpenAI creates AGI - defined as a "highly autonomous system that outperforms humans at most economically valuable work" - Microsoft's access to such a technology would be void.

            https://www.reuters.com/technology/openai-seeks-unlock-inves...

    • computerphage 19 hours ago

      Wait, aren't they cancelling leases on non-AI data centers that aren't under Microsoft's control, while spending much more money to build new AI-focused data centers that they own? Do you have a source that says they're cancelling their own data centers?

    • zaphirplane a day ago

      I’m not commenting on the whole just the rhetorical question of why would people leave.

      They are leaving for more money, more seniority or because they don’t like their boss. 0 about AGI

      • Game_Ender a day ago

        I think the implicit take is that if your company hits AGI your equity package will do something like 10x-100x even if the company is already big. The only other way to do that is join a startup early enough to ride its growth wave.

        Another way to say it is that people think it's much more likely for a decent LLM startup to grow really strongly for its first several years and then plateau than for their current, established employer to hit hypergrowth because of AGI.

        • leoc 20 hours ago

          A catch here is that individual workers may have priorities which are altered by the strong natural preference for securing financial independence. Even if you were a hot AI researcher who felt (and this is just a hypothetical) that your company was the clear industry leader and had, say, a 75% chance of soon achieving something AGI-adjacent and enabling massive productivity gains, you might still (and quite reasonably) prefer to leave if that was what it took to make absolutely sure of getting your private-income screw-you money (and/or private-investor seed capital). Again this is just a hypothetical: I have no special insight, and FWIW my gut instinct is that the job-hoppers are in fact mostly quite cynical about the near-term prospects for "AGI".

          • sdenton4 16 hours ago

            Additionally, if you've already got vested stock in Company A from your time working there, jumping ship to Company B (with higher pay and a stock package) is actually a diversification. You can win whichever ship pulls in first.

            The 'no one jumps ship if agi is close' assumption is really weak, and seemingly completely unsupported in TFA...

          • andrew_lettuce 16 hours ago

            You're right, but the narrative out of these companies directly refutes this position. They're explicitly saying that 1. AGI changes everything, 2. It's just around the corner, 3. They're completely dedicated to achieving it; nothing is more important.

            Then they leave for more money.

            • sdenton4 16 hours ago

              Don't conflate labor's perspective with capital's stated position... The companies aren't leaving the companies, the workers are leaving the companies.

      • Touche a day ago

        Yeah I agree, this idea that people won't change jobs if they are on the verge of a breakthrough reads like a silicon valley fantasy where you can underpay people by selling them on vision or something. "Make ME rich, but we'll give you a footnote on the Wikipedia page"

        • LtWorf 18 hours ago

          I think you're being very optimistic with the footnote.

      • rvz a day ago

        > They are leaving for more money, more seniority or because they don’t like their boss. 0 about AGI

        Of course, but that's part of my whole point.

        Such statements and targets about how close we are to "AGI" have become nothing but false promises, with AGI used as the prime excuse to continue raising more money.

    • tuatoru 16 hours ago

      > Their actions tell more than any of their statements or claims.

      At Microsoft, "AI" is spelled "H1-B".

  • redhale 20 hours ago

    > Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years?

    To fund yourself while building AGI? To hedge risk that AGI takes longer? Not saying you're wrong, just saying that even if they did believe it, this behavior could be justified.

    • krainboltgreene 17 hours ago

      There is no chat bot so feature rich that it would fund the billions being burned on a monthly basis.

  • delusional a day ago

    Continuing in the same vein: why would they force their super valuable, highly desirable, profit-maximizing chatbots down your throat?

    Observations of reality are more consistent with company FOMO than with actual usefulness.

    • Touche a day ago

      Because it's valuable training data. Like how having Google Maps on everyone's phone made their map data better.

      Personally I think AGI is ill-defined and won't happen as a new model release. Instead the thing to look for is how LLMs are being used in AI research and there are some advances happening there.

  • richk449 17 hours ago

    > If they would expect to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years?

    What if chatbots and user interactions ARE the path to AGI? Two reasons they could be: (1) Reinforcement learning in AI has proven to be very powerful. Humans get to GI through learning too - they aren’t born with much intelligence. Interactions between AI and humans may be the fastest way to get to AGI. (2) The classic Silicon Valley startup model is to push to customers as soon as possible (MVP). You don’t develop the perfect solution in isolation, and then deploy it once it is polished. You get users to try it and give feedback as soon as you have something they can try.

    I don’t have any special insight into AI or AGI, but I don’t think OpenAI selling useful and profitable products is proof that there won’t be AGI.

A_D_E_P_T a day ago

> "This is purely an observation: You only jump ship in the middle of a conquest if either all ships are arriving at the same time (unlikely) or neither is arriving at all. This means that no AI lab is close to AGI."

The central claim here is illogical.

The way I see it, if you believe that AGI is imminent, and if your personal efforts are not entirely crucial to bringing AGI about (just about all engineers are in this category), and if you believe that AGI will obviate most forms of computer-related work, your best move is to do whatever is most profitable in the near-term.

If you make $500k/year, and Meta is offering you $10M/year, then you ought to take the new job. Hoard money, true believer. Then, when AGI hits, you'll be in a better personal position.

Essentially, the author's core assumption is that working for a lower salary at a company that may develop AGI is preferable to working for a much higher salary at a company that may develop AGI. I don't see how that makes any sense.

  • levanten a day ago

    Being part of the team that achieved AGI first would write your name into history forever. That could mean more to people than money.

    Also, $10M would be a drop in the bucket compared to being a shareholder of a company that has achieved AGI; you could also imagine the influence and fame that comes with it.

    • blululu 20 hours ago

      Kind of a sucker move here, since you personally will 100% be forgotten. We are only going to remember one or two people who did any of this, say Sam Altman and Ilya Sutskever. Everyone else will be forgotten. The authors of the Transformer paper are unlikely to make it into the history books or even the popular imagination. Think about the Manhattan Project. We recently made a movie remembering that one guy who did something on the Manhattan Project, but he will soon fade back into obscurity. Sometimes people say that it was about Einstein's theory of relativity. The only people who know who folks like Ulam were are physicists. The legions of technicians who made it all come together are totally forgotten. Same with the space program or the first computer or pretty much any engineering marvel.

      • cdrini 19 hours ago

        Well depends on what you value. Achieving/contributing to something impactful first is for many people valuable even if it doesn't come with fame. Historically, this mindframe has been popular especially amongst scientists.

      • impossiblefork 19 hours ago

        Personally I think the ones who will be remembered will be the ones who publish useful methods first, not the ones who succeed commercially.

        It'll be Vaswani and the others for the transformer, then maybe Zelikman and those on that paper for thought tokens, then maybe some of the RNN people and word embedding people will be cited as pioneers. Sutskever will definitely be remembered for GPT-1 though, being first to really scale up transformers. But it'll actually be like with flight and a whole mass of people will be remembered, just as we now remember everyone from the Wrights to Bleriot and to Busemann, Prandtl, even Whitcomb.

        • darth_aardvark 18 hours ago

          Is "we" the particular set of scientists who know those last four people? Surely you realize they're nowhere near as famous as the Wright brothers, right? This is giving strong https://xkcd.com/2501/ feelings.

          • impossiblefork 18 hours ago

            Yes, that is indeed the 'we', but I think more people are knowledgeable than is obvious.

            I'm not an aerodynamicist, and I know about those guys, so they can't be infinitely obscure. I imagine every French person knows about Bleriot at least.

            • decimalenough 13 hours ago

              I'm an avgeek with a MSc in engineering. I vaguely recall the name Bleriot from physics, although I have no clue what he actually did. I have never even heard the names Busemann, Prandtl, or Whitcomb.

              • impossiblefork 12 hours ago

                I find this super surprising, because even I, who don't do aerodynamics, still know about these guys.

                Bleriot was a French aviation pioneer, not a physicist. He built the first monoplane. Busemann was an aerodynamicist who invented wing sweep and also did important work on supersonic flight. Prandtl is known for research on lift distribution over wings, wingtip vortices, and induced drag, and he basically invented much of the theory of wings. Whitcomb gave his name to the Whitcomb area rule, although Otto Frenzl had come up with it earlier during WWII.

                • Scarblac 11 hours ago

                  What is wing sweep, what is induced drag, what is the area rule?

                  • impossiblefork an hour ago

                    Airliners don't have their wings going straight out; instead they are swept back. You can also sweep them forward to get the same effect, but you will rarely want to do that due to other problems. This means that the cross-sectional area of the aircraft varies less along its length, which reduces wave drag.

                    If there's no lift, there's no pressure difference between the upper side of the wing and the lower side. But if there's lift, there's higher pressure on the bottom and lower on top, so air wants to flow around the wing, from bottom to top, producing a wingtip vortex. This flow creates drag, and this drag is called lift-induced drag, or just 'induced drag'.

                    The area rule is about minimizing wave drag by keeping the cross-sectional area of different parts of the aircraft close to the cross-sectional area of the corresponding cross-section of a minimal-drag body. It leads to wing sweep and certain fuselage shapes.
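
                    For the induced drag mentioned above, the standard lifting-line result (the textbook formula, not something from this thread) is that the induced-drag coefficient grows with the square of the lift coefficient:

                      C_{D,i} = \frac{C_L^2}{\pi \, e \, AR}

                    where AR is the wing aspect ratio and e the span-efficiency factor, which is why long, slender wings (think gliders) pay relatively little induced drag.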

    • skybrian 17 hours ago

      "The grass is greener elsewhere" isn't inconsistent with a belief that AGI will happen somewhere.

      It means you don't have much faith that the company you're working at will be the ones to pull it off.

      • fragmede 15 hours ago

        With a salary of $10m/year, handwave roughly half of that goes to taxes, you'd be making just shy of $100k post-tax per week. Call me a sellout, but goddamn. For that much money, there's a lot of places I could be convinced to put my faith into that I wouldn't otherwise.

        • skybrian 15 hours ago

          It might buy loyalty for a while, but after it accumulates, for many people it would be "why am I even working at all" money.

          And if they don't like their boss and the other job sounds better, well...

    • raincole 19 hours ago

      > Being part of the team that achieved AGI first would be to write your name in history forever. That could mean more to people than money.

      Uh, sure. How many rocket engineers who worked on the moon landing can you name?

      • krainboltgreene 17 hours ago

        How many new species of infinite chattel slave did they invent?

  • bombcar a day ago

    >your best move is to do whatever is most profitable in the near-term

    Unless you’re a significant shareholder, that’s almost always the best move, anyway. Companies have no loyalty to you and you need to watch out for yourself and why you’re living.

    • archeantus 18 hours ago

      I read that most of the crazy comp Zuck is offering is in stock. So in a way, going to the place where they have lots of stock reflects their belief about where AGI is going to happen first.

      • bombcar 17 hours ago

        Comp is comp, no matter how it comes (though the details can vary in important ways).

        I know people who've taken quite good comp from startups to do things that would require fundamental laws of physics to be invalidated; they took the money and devised experiments that would show the law to be wrong.

      • fragmede 15 hours ago

        Facebook is already public, so they can sell the day it vests and get it in cold hard cash in their bank account. If Facebook weren't public it would be a more interesting point as they couldn't liquidate immediately, but they can, so I wouldn't read anything into that.

      • LtWorf 15 hours ago

        But maybe the salary is also higher?

bsenftner a day ago

Also, AGI is not just around the corner. We need artificial comprehension for that, and we don't even have a theory of how comprehension works. Comprehension is the fusing of separate elements into new functional wholes, dynamically abstracting observations, evaluating them for plausibility, and reconstituting the whole - and all of it instantaneously, for security purposes, across every sense, constantly. We have no technology that approaches that.

  • Workaccount2 19 hours ago

    We only have two computational tools to work with - deterministic and random behavior. So whatever comprehension/understanding/original thought/consciousness is, it's some algorithmic combination of deterministic and random inputs/outputs.

    I know that sounds broad or obvious, but people seem to easily and unknowingly wander into "Human intelligence is magically transcendent".

    • omnicognate 16 hours ago

      What you state is called the Physical Church-Turing Thesis, and it's neither obvious nor necessarily true.

      I don't know if you're making it, but the simplest mistake would be to think that you can prove that a computer can evaluate any mathematical function. If that were the case, then "it's got to be doable with algorithms" would have a fairly strong basis. Anything the mind does that an algorithm can't would have to be so "magically transcendent" that it's beyond the scope of the mathematical concept of "function". However, this isn't the case. There are many mathematical functions that are proven to be impossible for any algorithm to implement. Look up uncomputable functions if you're unfamiliar with this.
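
      The classic example of such a function is the halting problem; here is a Python-flavoured sketch of the standard diagonalization argument (the whole point being that halts cannot actually be implemented):

        # Suppose, for contradiction, some algorithm decided halting:
        def halts(program, argument):
            """Hypothetical: True iff program(argument) eventually stops."""
            ...

        def diagonal(program):
            if halts(program, program):  # if it would stop...
                while True:              # ...loop forever instead
                    pass
            # ...otherwise stop immediately

        # diagonal(diagonal) halts if and only if it doesn't halt -- contradiction,
        # so no such halts() exists: the halting function is uncomputable.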

      The second mistake would be to think that we have some proof that all physically realisable functions are computable by an algorithm. That's the Physical Church-Turing Thesis mentioned above, and as the name indicates it's a thesis, not a theorem. It is a statement about physical reality, so it could only ever be empirically supported, not some absolute mathematical truth.

      It's a fascinating rabbit hole if you're interested - what we actually do and do not know for sure about the generality of algorithms.

    • RaftPeople 16 hours ago

      > but people seem to easily and unknowingly wander into "Human intelligence is magically transcendent".

      But the poster you responded to didn't say it's magically transcendent; they just pointed out that there are many significantly hard problems that we don't have solutions for yet.

    • __loam 13 hours ago

      We don't understand human intelligence enough to make any comparisons like this

  • tenthirtyam a day ago

    You'd need to define "comprehension" - it's a bit like the Chinese room / Turing test.

    If an AI or AGI can look at a picture and see an apple, or (say) with an artificial nose smell an apple, or likewise feel or taste or hear* an apple, and at the same time identify that it is an apple and maybe even suggest baking an apple pie, then what else is there to be comprehended?

    Maybe humans are just the same - far far ahead of the state of the tech, but still just the same really.

    *when someone bites into it :-)

    For me, what AI is missing is genuine out-of-the-box revolutionary thinking. They're trained on existing material, so perhaps it's fundamentally impossible for AIs to think up a breakthrough in any field - barring circumstances where all the component parts of a breakthrough already exist and the AI is the first to connect the dots ("standing on the shoulders of giants" etc).

    • RugnirViking 20 hours ago

      It's very very good at sounding like it understands stuff. Almost as good as actually understanding stuff in some fields, sure. But it's definitely not the same.

      It will confidently analyze and describe a chess position using advanced-sounding book techniques, but it's all fundamentally flawed, often missing things that are extremely obvious (like an undefended queen free to take) while trying to sound like a seasoned expert - that is, if it doesn't completely hallucinate moves that are not allowed by the rules of the game.

      This is how it works in other fields I am able to analyse. It's very good at sounding like it knows what it's doing, speaking at the level of a master's student or higher, but its actual appraisal of problems is often wrong in a way very different to how humans make mistakes. Another great example is getting it to solve cryptic crosswords from back in the day. It often knows the answer already from its training set, but it hasn't seen anyone write out the reasoning for the answer, so if you ask it to explain, it makes nonsensical leaps ("birch rhymes with tyre" level nonsense).

      • filleduchaos 18 hours ago

        If anyone wants to see the chess comprehension breakdown in action, the YouTuber GothamChess occasionally puts out videos where he plays against a new or recently-updated LLM.

        Hanging a queen is not evidence of a lack of intelligence - even the very best human grandmasters will occasionally do that. But in pretty much every single video, the LLM loses the plot entirely after barely a couple dozen moves and starts to resurrect already-captured pieces, move pieces to squares they can't get to, etc - all while keeping the same confident "expert" tone.

      • DiogenesKynikos 18 hours ago

        A sufficiently good simulation of understanding is functionally equivalent to understanding.

        At that point, the question of whether the model really does understand is pointless. We might as well argue about whether humans understand.

        • andrei_says_ 16 hours ago

          In the Catch me if you Can movie, Leo diCaprio’s character wears a surgeon’s gown and confidently says “I concur”.

          What I’m hearing here is that you are willing to get your surgery done by him and not by one of the real doctors - if he is capable of pronouncing enough doctor-sounding phrases.

          • bsenftner 13 hours ago

            If that's what you're hearing, then you're not thinking it through. Of course one would not want an AI acting as a doctor as one's real doctor, but a medical or law school graduate studying for a license sure would appreciate a Socratic tutor in their specialization. Likewise, on the job in a technical specialization, a sounding board is of more value when it follows along, potentially with a virtual board of debate, and questions when logical drifts occur. It's not AI thinking for one, it is AI critically assisting their exploration through Socratic debate. Do not place AI in charge of critical decisions, but do place them in the assistance of people figuring out such situations.

            • amlib 12 hours ago

              The doctors analogy still applies, that "socratic tutor" LLM is actually a charlatan that sounds, to the untrained mind, like a competent person, but in actuality is a complete farce. I still wouldn't trust that.

          • DiogenesKynikos 5 hours ago

            Leo diCaprio's character says nothing of substance in that scene. If you ask an LLM a question about most subjects, it will give you a highly intelligent, substantive answer.

            • vrighter 3 hours ago

              it gives you an answer. Not a highly intelligent one. Just an answer. And if it doesn't know what it's talking about, it'll still give an answer.

        • timacles 14 hours ago

          > A sufficiently good simulation of understanding is functionally equivalent to understanding.

          This is just a thing to say that has no substantial meaning.

            - What does "sufficiently" mean?
            - What is "functionally equivalent"?
            - And what even is "understanding"?
          
          All just vague hand waving

          We're not philosophizing here, we're talking about practical results and clearly, in the current context, it does not deliver in that area.

          > At that point, the question of whether the model really does understand is pointless.

          You're right, it is pointless, because you are suggesting something that doesn't exist. And the current models cannot understand.

          • og_kalu 5 hours ago

            >We're not philosophizing here, we're talking about practical results and clearly, in the current context, it does not deliver in that area.

            Except it clearly does, in a lot of areas. You can't take a 'practical results trump all' stance and come out of it saying LLMs understand nothing. They understand a lot of things just fine.

          • DiogenesKynikos 5 hours ago

            The current models obviously understand a lot. They would easily understand your comment, for example, and give an intelligent answer in response. The whole "the current models cannot understand" mantra is more religious than anything.

        • RugnirViking 14 hours ago

          That's the point though: it's not sufficient. Not even slightly. It constantly makes obvious mistakes and cannot keep things coherent.

          I was almost going to explicitly mention your point but deleted it because I thought people would be able to understand.

          This is not philosophy or theology, sitting around handwringing about "oh, but would a sufficiently powerful LLM be able to dance on the head of a pin". We're talking about a thing that actually exists, that you can actually test. In a whole lot of real-world scenarios that you throw at it, it fails in strange and unpredictable ways. Ways that it will swear up and down it did not do. It will lie to your face. It's convincing. But then it will lose at chess, it will fuck up running a vending machine business, it will get lost while coding and reinvent the same functions over and over, it will give completely nonsensical answers to crossword puzzles.

          This is not an intelligence that is unlimited, it is a deeply flawed two year old that just so happens to have read the entire output of human writing. It's a fundamentally different mind to ours, and makes different mistakes. It sounds convincing and yet fails, constantly. It will tell you a four step explanation of how its going to do something, then fail to execute four simple steps.

          • bsenftner 13 hours ago

            Which is exactly why it is insane that the industry is hell-bent on creating autonomous automation through LLMs. Rube Goldberg machines are what will be created, and if civilization survives that insanity, it will be looked back upon as one grand, stupid era.

    • Touche 21 hours ago

      They might not be capable of ingenuity, but they can spot patterns humans can miss. And that accelerates AI research, where it might help invent the next AI that helps invent the next AI that finally can think outside the box.

    • bsenftner 21 hours ago

      I do define it, right up there in my OP. It's subtle, you missed it. Everybody misses it, because comprehension is like air, we swim in it constantly, to the degree the majority cannot even see it.

    • add-sub-mul-div 21 hours ago

      Was that the intention of the Chinese room concept, to ask "what else is there to be comprehended?" after producing a translation?

  • andy99 19 hours ago

    Another way to put it is that we need Artificial Intelligence. Right now the term has been co-opted to mean prediction (and more commonly transcript generation). The stuff you're describing is what's commonly thought of as intelligence; it's too bad we need a new word for it.

    • bsenftner 13 hours ago

      No, we have the intelligence part; we know what to do when we have the answers. What we don't know is how to derive the answers without human intervention at all, not even our written knowledge. Artificial comprehension will not require anything beyond senses, observations through time, which build a functional world model from observation and interaction, capable of navigating the world as a communicating participant. Note I'm not talking about agency, also called "will", which is separate from both intelligence and comprehension. Where intelligence is "knowing", comprehension is the derivation of knowing from observation and interaction alone, and agency is the entirely separate ability to choose action over inaction, to employ comprehension to affect the world - and for what purpose?

  • zxcb1 15 hours ago

    Translation Between Modalities is All You Need

    ~2028

  • ekianjo 10 hours ago

    > We need artificial comprehension for that, and we don't even have a theory how comprehension works.

    Not sure we need it. The counter example is the LLM itself. We had absolutely zero idea that the attention heads would bring such benefits down the road.

drillsteps5 12 hours ago

I can't speak intelligently about how close AGI really is (I do not believe it is close, but I guess someone somehow somewhere might come up with a brilliant idea that nobody has thought of so far, and voila).

However I'm flabbergasted by the lack of attention to so-called "hallucinations" (which is a misleading, I mean marketing, term and we should be talking about errors or inaccuracies).

The problem is that we don't really know why LLMs work. I mean, you can run the inference and apply the formula and get output from the given input, but you can't "explain" why the LLM produced phrase A as an output instead of B, C, or N. There are just too many parameters and computations to go through, and the very concept of "explaining" or "understanding" might not even apply here.

And if we can't understand how this thing works, we can't understand why it doesn't work properly (produces wrong output) and also don't know how to fix it.

And instead of talking about it and trying to find a solution everybody moved on to the agents which are basically LLMs that are empowered to perform complex actions IRL.

How does this make any sense to anybody? I feel like I'm crazy or missing something important.

I get it, a lot of people are making a lot of money and a lot of promises are being made. But this is an absolutely fundamental issue that is not that difficult to understand for anybody with a working brain, and yet I am really not seeing any attention paid to it whatsoever.

  • Bratmon 12 hours ago

    You can get use out of a hammer without understanding how the strong force works.

    You can get use out of an LLM without understanding how every node works.

    • drillsteps5 11 hours ago

      A hammer is not a perfect analogy because of how simple it is, but sure, let's go with it.

      Imagine that occasionally, when coming into contact with the nail, it shatters to bits, or goes through the nail as if it were liquid, or blows up, or does something else completely unexpected. Wouldn't you want to fix it? And sure, that might require a deep understanding of the nature of the materials and forces involved.

      That's what I'd do.

      • potamic 5 hours ago

        A better analogy might be something like medicine. There are many drugs prescribed that are known to help with certain conditions, but their mechanism of action is not known. While there may be research trying to uncover those mechanisms, that doesn't stop or slow down rolling out of the medicine for use. Research goes at its own pace, and very often cannot be sped up by throwing money at it, while the market dictates adoption. I see the same with LLMs. I'm sure this has attracted the attention of more researchers than anything else in this field, but I would expect any progress to be relatively slow.

      • m11a 10 hours ago

        Use the human brain as an example then. We don't really know how it works. I mean, we know there's neurotransmitters and neural pathways etc (much like nodes in a transformer), but we don't know how exactly intelligence or our thinking process works.

        We're also pretty good at working around human 'hallucinations' and other inaccuracies. Whether it be someone having a bad day, a brain fart, or individual clumsiness. eg in a (bad) organisation, sometimes we do it with layers of reviews and committees, much like layers of LLMs judging each other.

        I think too much is attached to the notion of "we don't understand how the LLM works". We don't understand how any complicated intelligence works, and potentially won't for the foreseeable future.

        More generally, a lot of society is built up from empirical understanding of black box systems. I'd claim the field of physics is a prime example. And we've built reliable systems from unreliable components (see the field of distributed systems).

    • alganet 12 hours ago

      You can get injured by using a hammer without understanding how it works.

      You can damage a company by using a spreadsheet and not understanding how it works.

      In your personal opinion, what are the things you should know before using an LLM?

  • dummydummy1234 12 hours ago

    I guess a counter is that we don't need to understand how they work to produce useful output.

    They are a magical black box magic 8 ball, that more likely than not gives you the right answer. Maybe people can explain the black box, and make the magic 8 ball more accurate.

    But at the end of the day, with a very complex system it will always be some level of black box unreliable magic 8 ball.

    So the question then is how do you build a reliable system from unreliable components. Because LLMs directly are unreliable.

    The answer to this is agents, i.e. feedback loops between multiple LLM calls, which in isolation are unreliable, but in aggregate approach reliability.

    At the end of the day the bet on agents is a bet that the model companies will not get a model that will magically be 100% correct on the first try.
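
    A toy illustration of "in aggregate approach reliability", assuming (optimistically) independent attempts and a verifier that can actually tell good answers from bad:

      p_single = 0.7                        # assumed chance one LLM call is right
      for attempts in (1, 2, 3, 5):
          p_any = 1 - (1 - p_single) ** attempts
          print(attempts, round(p_any, 3))  # 0.7, 0.91, 0.973, 0.998

    The catch is in those assumptions: correlated failures and weak verification are exactly where this breaks down.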

    • drillsteps5 11 hours ago

      THAT. This is what I don't get. Instead of fixing a complex system, let's build an even more complex system on top of it, knowing that it might not always work.

      When you have a complex system that does not always work correctly, you start disassembling it into simpler and simpler components until you find the one - or maybe several - that are not working as designed, you fix whatever you found wrong with them, put the complex system together again, and test it to make sure your fix worked, and you're done. That's how I debug complex cloud-based/microservices-infected software systems, and that's how they test software/hardware systems found in aircraft, rockets, and whatever else. That's a fundamental principle to me.

      If an LLM is a black box by definition and there's no way to make it consistently work correctly, what is it good for?

      • ekianjo 10 hours ago

        > If LLM is a black box by definition and there's no way to make it consistently work correctly, what is it good for?..

        Many things are unpredictable in the real world. Most of the machines we make are built upon layers of redundancy to make imperfect systems stable and predictable. This is no different.

        • habinero 8 hours ago

          It is different. Most systems aren't designed to be a slot machine.

          • ekianjo 6 hours ago

            Yet RAG systems can perform quite well, so it's definite proof that you can build something that is reliable most of the time out of something that is not reliable in the first place.

      • habinero 8 hours ago

        Honestly? Spam and upselling executives on features that don't work. It's a pretty good autocomplete, too.

  • Scarblac 11 hours ago

    LLM hallucinations aren't errors.

    LLMs generate text based on weights in a model, and some of it happens to be correct statements about the world. Doesn't mean the rest is generated incorrectly.

    • jvanderbot 11 hours ago

      You know the difference between verification and validation?

      You're describing a lack of errors in verification (working as designed/built, equations correct).

      GP is describing an error in validation (not doing what we want / require / expect).

Animats 15 hours ago

"A disturbing amount of effort goes into making AI tools engaging rather than useful or productive."

Right. It worked for social media monetization.

"... hallucinations ..."

The elephant in the room. Until that problem is solved, AI systems can't be trusted to do anything on their own. The solution the AI industry has settled on is to make hallucinations an externality, like pollution. They're fine as long as someone else pays for the mistakes.

LLMs have a similar problem to Level 2-3 self-driving cars. They sort of do the right thing, but a human has to be poised to quickly take over at all times. It took Waymo a decade to get over that hump and reach level 4, but they did it.

  • jasonsb 13 hours ago

    > The elephant in the room. Until that problem is solved. AI systems can't be trusted to do anything on their own.

    AI systems can be trusted to do most things on their own. You can't trust them for actions with irreversible consequences, but everything else is OK.

    I can use them to write documents, code, create diagrams, designs, etc. I just need to verify the result, but that's 10% of the actual work. I would say that 90% of modern-day office work can be done with the help of AI.

    • daxfohl 8 hours ago

      And for a lot of things, we don't trust single humans to do it on their own either. It's just a matter of risk and tolerance. AI isn't really any different, except it's currently far less reliable than humans for many tasks. But for some tasks it's more reliable. And the gap could close for other tasks pretty quickly. Or not. But I don't think getting to zero hallucinations is a prereq for anything.

  • nunez 13 hours ago

    Waymo "did it" in very controlled environments, not in general. They're still a ways away from solving self-driving in the general case.

    • Animats 13 hours ago

      Los Angeles and San Francisco are not "very controlled environments".

    • __loam 13 hours ago

      They've done over 70 million rider only miles on public roads.

  • cal85 14 hours ago

    When you say “do anything on their own”, what kind of things do you mean?

    • Animats 14 hours ago

      Take actions which have consequences.

coldcode a day ago

I never trusted them from the start. I remember the hype that came out of Sun when J2EE/EJBs appeared. Their hype documents said the future of programming was buying EJBs from vendors and wiring them together. AI is of course a much bigger hype machine, with massive investments that need to be justified somehow. AI is a useful tool (sometimes) but not a revolution. ML is a much more useful tool. AGI is a pipe-dream fantasy pushed to make it seem like AI will change everything, as if AI were like the discovery of fire.

  • ffsm8 19 hours ago

    I completely agree that LLMs are missing a fundamental part of AGI, which itself is a long way off from superintelligence.

    However, you don't need either of those to completely decimate the job markets and, by extension, our societies.

    Historically speaking, "good enough" and cheaper has always won over "better, but more expensive". I suspect LLMs will raise this question endlessly until significant portions of society are struggling - and who knows what will happen then.

    Before LLMs started going anywhere, I thought that was going to be an issue for later generations, but at this point I suspect we'll witness it within the next 10 years.

TrackerFF 16 hours ago

My question is this - once you achieve AGI, what moat do you have, purely on the scientific part? Other than making the AGI even more intelligent.

I see a lot of talk that the first company that achieves AGI, will also achieve market dominance. All other players will crumble. But surely when someone achieves AGI, their competitors will in all likelihood be following closely after. And once those achieve AGI, academia will follow.

Point is, at some point AGI itself will become available to everyone. The only thing that will be out of reach for most is compute - and probably other expensive things on the infrastructure part.

Current AI funding seems to revolve around some sort of winner-take-all scenario. Just keep throwing incredible amounts of money at it, and hope that you've picked the winner. I'm just wondering what the outcome will be if this thesis turns out wrong.

  • imiric 15 hours ago

    > The only thing that will be out of reach for most is compute - and probably other expensive things on the infrastructure part.

    That is the moat. That, and training data.

    Even today, compute and data are the only things that matter. There is hardly any secret software sauce. This means that only large corporations with a practically infinite amount of resources to throw at the problem could potentially achieve AGI. Other corporations would soon follow, of course, but the landscape would be similar to what it is today.

    This is all assuming that the current approaches can take us there, of which I'm highly skeptical. But if there's a breakthrough at some point, we would still see AI tightly controlled by large corporations that offer it as a (very expensive) service. Open source/weight alternatives would not be able to compete, just like they don't today. Inference would still require large amounts of compute only accessible to companies, at least for a few years. The technology would be truly accessible to everyone only once the required compute becomes a commodity, and we're far away from that.

    If none of this comes to pass, I suspect there will be an industry-wide crash, and after a few years in the Trough of Disillusionment, the technology would re-emerge with practical applications that will benefit us in much more concrete and subtle ways. Oh, but it will ruin all our media and communication channels regardless, directly causing social unrest and political regression, that much is certain. (:

    • daxfohl 8 hours ago

      I think if any of this becomes possible, it won't happen. Seriously, if AGI was truly on the horizon at openai or elsewhere, the first thing they'd do is shut it down. Once it's AGI, they would have to realize that they can't control it any more than anyone else can, and facing the reality of that would stop them in their tracks.

      In a way, all the hype can only indicate that AGI is still a distant illusion. If it were really around the corner we'd be hearing different stories.

  • fragmede 15 hours ago

    Same thing that happened to pets.com or webvan.com and the rest of the graveyard of failed companies. A bunch of investors lose money, a bunch of market consolidation, employees get diluted to worthlessness, chapter 7, chapter 11. The free ride of today's equivalent of $1 Ubers will end. A glut of previously very expensive hardware will go for cheap on eBay (though I doubt this last point will happen since AGI is likely to be compute intensive).

    It's not going to be fun or easy, but as far as the financials go, we were there in 2001.

    The question is, assuming we do get AGI, what the ramifications of that will be. Instead of hiring employees, a business can spin up (and down) employees the way a tech company spins up EC2 instances. Great for employers, terrible for employees.

    That's a big "if" though.

computerphage 20 hours ago

> This is purely an observation: You only jump ship in the middle of a conquest if either all ships are arriving at the same time (unlikely) or neither is arriving at all. This means that no AI lab is close to AGI. Their stated AGI timelines are “at the latest, in a few years,” but their revealed timelines are “it’ll happen at some indefinite time in the future.”

This makes no sense to me at all. Is it a war metaphor? A race? Why is there no reason to jump ship? Doesn't it make sense to try to get on the fastest ship? Doesn't it make sense to diversify your stock portfolio if you have doubts?

JunkDNA 19 hours ago

I keep seeing this charge that AI companies have an “Uber problem” meaning the business is heavily subsidized by VC. Is there any analysis that has been done that explains how this breaks down (training vs inference and what current pricing is)? At least with Uber you had a cab fare as a benchmark. But what should, for example, ChatGPT actually cost me per month without the VC subsidy? How far off are we?

  • fragmede 15 hours ago

    It depends on how far behind you believe the model-available LLMs are. If I can buy, say, $10k worth of hardware and run a sufficiently equivalent LLM at home, amortizing that over, say, 5 years gives $2k/yr plus electricity. At 40 hours a week for 50 weeks (2,000 hours/yr), that's $1/hr plus electricity. The electrical cost varies by location, but let's just handwave another $1/hr (which should be high). So roughly $2/hr, versus ChatGPT's ~$0.11/hr if you pay $20/month and use it 174 hours per month.

    Feel free to challenge these numbers (the arithmetic is sketched below), but it's a starting place. What's not accounted for is the cost of training (compute time, but also employees and everything else), which needs to be amortized over the length of time a model is used, so ChatGPT's costs rise significantly - but they do have the advantage that hardware is shared across multiple users.
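
    A minimal back-of-the-envelope sketch of that arithmetic. Every figure here is an assumption carried over from the comment above, not a measured cost:

      # Self-hosting vs. a ChatGPT subscription, using the assumed figures above.
      hardware_cost = 10_000        # assumed one-time hardware spend ($)
      years = 5                     # assumed amortization period
      hours_per_year = 40 * 50      # 40 h/week, 50 weeks -> 2,000 h/yr
      electricity_per_hour = 1.00   # deliberately high electricity estimate ($/h)

      home_per_hour = hardware_cost / years / hours_per_year + electricity_per_hour

      subscription_per_month = 20   # assumed $20/month plan
      usage_hours_per_month = 174   # ~40 h/week expressed per month

      chatgpt_per_hour = subscription_per_month / usage_hours_per_month

      print(f"self-hosted: ~${home_per_hour:.2f}/hour")     # ~$2.00/hour
      print(f"ChatGPT:     ~${chatgpt_per_hour:.2f}/hour")  # ~$0.11/hour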

    • nbardy 14 hours ago

      These estimates are way off. Concurrent requests are nearly free with the right serving infrastructure; on a fully saturated node, the cost per token is more like 1/100th to 1/1000th of that.

  • cratermoon 19 hours ago
    • JunkDNA 19 hours ago

      This article isn’t particularly helpful. It focuses on a ton of specific OpenAI business decisions that aren’t necessarily generalizable to the rest of the industry. OpenAI itself might be out over its skis, but what I’m asking about is the meta-accusation that AI in general is heavily subsidized. When the music stops, what does the price of AI look like? The going rate for chat bots like ChatGPT is $20/month. Does that go to $40 a month? $400? $4,000?

      • handfuloflight 17 hours ago

        How much would OpenAI be burning per month if each monthly active user cost them $40? $400? $4000?

        The numbers would bankrupt them within weeks.
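
        For scale, a rough sketch assuming on the order of 500 million monthly active users - an assumed, widely reported figure, not something stated in this subthread:

          # Hypothetical total monthly cost at the per-user cost scenarios above,
          # assuming ~500M monthly active users (assumption, not a stated figure here).
          users = 500_000_000
          for cost_per_user in (40, 400, 4000):   # $/user/month scenarios
              burn_billions = users * cost_per_user / 1e9
              print(f"${cost_per_user}/user/month -> ~${burn_billions:,.0f}B/month total")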

DavidPiper 9 hours ago

AI is the new politics.

It's surprising to me the number of people I consider smart and deep original thinkers who are now parroting lines and ideas (almost word-for-word) from folks like Andrej Karpathy and Sam Altman, etc.

But, of course, "Show me the incentive and I will show you the outcome" never stops being relevant.

bestouff a day ago

Are there people here on HN who believe in AGI "soonish"?

  • impossiblefork a day ago

    I might, depending on the definition.

    Some kind of verbal-only AGI that can solve almost all mathematical problems humans come up with whose solutions fit on half a page. I think that's achievable somewhere in the near term, 2-7 years.

    • whiplash451 19 hours ago

      What makes you think that this could be achieved in that time frame? All we seem to have for now are LLMs that can solve problems they’ve learned by heart (or neighboring problems)

      • impossiblefork 13 hours ago

        Transformers can actually learn pretty difficult manipulations, even how to calculate difficult integrals, so I don't agree that they can only solve problems they've learned by heart.

        The reason I believe it can be achieved in this time frame is that I believe that you can do much more with non-output tokens than is currently being done.

    • deergomoo a day ago

      Is that “general” though? I’ve always taken AGI to mean general to any problem.

      • impossiblefork a day ago

        I suppose not.

        Things I think will be hard for LLMs to do, which some humans can: you get handed 500 pages of Geheimschreiber-encrypted telegraph traffic and infinite paper, and you have to figure out how the cryptosystem works and how to decrypt the traffic. I don't think that can happen. I think it requires a highly developed pattern-recognition ability together with an ability to not get lost, which LLM-type things will probably continue to lack for a long time.

        But if they could do maths more fully, then pretty much all carefully defined tasks would be in reach, provided they weren't too long.

        With regard to what Touche brings up in the other response to your comment, I think that it might be possible to get them to read up on things though-- go through something, invent problems, try to solve those. I think this is something that could be done today with today's models with no real special innovation, but which just hasn't been made into a service yet. But this of course doesn't address that criticism, since it assumes the availability of data.

      • Touche a day ago

        Yes, general means you can present it with a new problem that there is no data on, and it can become an expert on that problem.

  • Davidzheng a day ago

    What's your definition? AGI's original definition is the median human across almost all fields, which I believe is basically achieved. If superhuman (better than the best expert), I expect <2030 for all non-robotic tasks and <2035 for all tasks.

    • gnz11 19 hours ago

      How are you coming to the conclusion that "median human" is "basically achieved"? Current AI has no means of understanding and synthesizing new ideas the way a human would. It's all generative.

      • Davidzheng 18 hours ago

        Synthesizing new ideas: to express an idea in our language basically means you have some new combination of existing building blocks; sometimes the building blocks are low-level enough and the combination esoteric enough. It's a spectrum again. I think current models are in fact quite capable of combining existing ideas and building blocks in new ways (this is how human innovation also happens). Most of my evidence comes from asking newer models (o3/gemini-2.5-pro) research-level mathematics questions which do not appear in the existing literature but are of course connected with it.

        So I believe these arguments from fundamental distinctions all fail; the question is how new the AI contributions are. There are of course still no theoretical breakthroughs in mathematics from AI (though biology could be close!). I also think the AIs have understanding, but to be fair the only thing we can do is test them on tricky questions, which I think supports my side. Though of course some of these questions have interpretations which are not testable, so I don't want to argue about those.

    • GolfPopper 20 hours ago

      A "median human" can run a web search and report back on what they found without making stuff up, something I've yet to find an LLM capable of doing reliably.

      • Davidzheng 18 hours ago

        I bet median humans make up a nontrivial amount of things. Humans misremember all the time. If you ask for only quotes, LLMs can also do this without problems (I use o3 for search over Google).

        • imtringued 2 hours ago

          Ah the classic "humans are fallible, AI is fallible, therefore AI is exactly like human intelligence".

          I guess if you believe this, then the AI is already smarter than you.

      • ekianjo 10 hours ago

        Maybe you haven't been exposed to actual median humans much.

    • jltsiren 21 hours ago

      Your "original definition" was always meaningless. A "Hello, World!" program is equally capable in most jobs as the median human. On the other hand, if the benchmark is what the median human can reasonably become (a professional with decades of experience), we are still far from there.

      • Davidzheng 20 hours ago

        I agree with the second part but not the first (far in capability, not in timeline). I think you underestimate the distance between the untrained median human and "Hello, World!" in many economically meaningful jobs.

  • BriggyDwiggs42 a day ago

    I could see 2040 or so being very likely. Not off transformers though.

    • serf a day ago

      via what paradigm then? What out there gives high enough confidence to set a date like that?

      • BriggyDwiggs42 12 hours ago

        While we don’t know an enormous amount about the brain, we do know a pretty good bit about individual neurons, and I think it’s a good guess, given current science, to say that a solidly accurate simulation of a large number of neurons would lead to a kind of intelligence loosely analogous to that found in animals. I’d completely understand if you disagree, but I consider it a good guess.

        If that’s the case, then the gulf between current techniques and what’s needed seems knowable. A means of approximating continuous time between neuron firing, time-series recognition in inputs, learning behavior on inputs prior to actual neuron firing (akin to behavior of dendrites), etc. are all missing functionalities in current techniques. Some or all of these missing parts of biological neuron behavior might be needed to approximate animal intelligence, but I think it’s a good guess that these are the parts that are missing.
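
        As a loose illustration of what "simulating individual neurons in continuous time" can look like, here is a minimal sketch of a single leaky integrate-and-fire neuron with made-up constants. It is purely illustrative, not the commenter's proposal or any lab's actual approach:

          # One leaky integrate-and-fire neuron, stepped with a small time increment
          # to approximate continuous-time membrane dynamics. Constants are illustrative.
          dt = 0.1          # ms, integration step
          tau = 10.0        # ms, membrane time constant
          v_rest = -65.0    # mV, resting potential
          v_thresh = -50.0  # mV, spike threshold
          v_reset = -70.0   # mV, post-spike reset

          v = v_rest
          spike_times = []
          for step in range(10_000):                          # simulate 1 second
              t = step * dt
              drive = 20.0 if 200 <= t <= 800 else 0.0        # injected input between 200-800 ms
              v += (-(v - v_rest) + drive) * (dt / tau)       # leaky integration (Euler step)
              if v >= v_thresh:                               # fire and reset
                  spike_times.append(t)
                  v = v_reset

          print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms" if spike_times else "no spikes")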

        AI currently has enormous amounts of money being dumped into it on techniques that are lacking for what we want to achieve with it. As they falter more and more, there will be an enormous financial interest in creating new, more effective techniques, and the most obvious place to look for inspiration will be biology. That’s why I think it’s likely to happen in the next few decades; the hardware should be there in terms of raw compute, there’s an obvious place to look for new ideas, and there’s a ton of financial interest in it.

        • m11a 10 hours ago

          It's not clear to me that these approaches aren't already being tried.

          Firstly, by some researchers in the big labs (some of which I'm sure are funded to try random moonshot bets like the above), at non-product labs working on hard problems (eg World Labs), and especially within academia where researchers have taken inspiration from biology before, and today are even better funded and hungry for new discoveries.

          Certainly at my university, some researchers are slightly detached from the hype cycle of NeurIPS publications and are trying interdisciplinary approaches to bigger problems (though admittedly fewer than I'd have hoped for). I do think the pressure to be a paper machine limits people from trying bets that are realistically very likely to fail.

  • bdhcuidbebe a day ago

    There are usually some enlightened laymen in this kind of topic.

    • snoman 10 hours ago

      Like Geoffrey Hinton, who predicts 5-20 years (though with low confidence)?

hamburga 18 hours ago

> This reminds me of a paradox: The AI industry is concerned with the alignment problem (how to make a super smart AI adhere to human values and goals) while failing to align between and within organizations and with the broader world. The bar they’ve set for themselves is simply too high for the performance they’re putting out.

My argument is that it’s our job as consumers to align the AIs to our values (which are not all the same) via selection pressure: https://muldoon.cloud/2025/05/22/alignment.html

Imnimo 12 hours ago

I can at least understand "I am going to a different AGI company because I think they are on a better track," but I cannot grasp "I am leaving this AGI company to work on some narrow AI application, but I still totally believe AGI is right around the corner."

lherron 20 hours ago

Honestly this article sounds like someone is unhappy that AI isn’t being deployed/developed “the way I feel it should be done”.

Talent changing companies is bad. Companies making money to pay for the next training run is bad. Consumers getting products they want is bad.

In the author’s view, AI should be advanced in a research lab by altruistic researchers and given directly to other altruistic researchers to advance humanity. It definitely shouldn’t be used by us common folk for fun and personal productivity.

  • lightbulbish 14 hours ago

    I feel I could argue the counterpoint. Hijacking the pathways of the human brain that lead to addictive behaviour has the potential to utterly ruin people's lives. So talking about it, if you have good intentions, seems like a thing anyone with their heart in the right place would do.

    Take VEO3 and YouTube integration as an example:

    Google made VEO3, YouTube has Shorts, and they are aware of the data that shows addictive behaviour (e.g. a person sitting down at 11pm, staying up doing Shorts for 3 hours, getting 5 hours of sleep, then doing Shorts on the bus on the way to work) - I am sure there are other negative patterns, but this is one I can confirm from a friend.

    If you have data showing that your other distribution platforms are being used to an excessive degree, and you create a powerful new AI content generator, is that good for the users?

    • Ray20 13 hours ago

      The fact is that not all people exhibit the described behavior. So the actions of corporations cannot be considered unambiguously bad. For example, it will help to cleanse the human gene pool of genes responsible for addictive behavior.

      • quirkot 5 hours ago

        Counterpoint: eugenics are bad.

        You are saying suffering is allowable/good because eventually different people won't be able to suffer that way. That is an unethical position to hold.

      • lightbulbish 10 hours ago

        I never suggested they were unambiguously bad, I meant to propose that it is a valid concern to talk about.

        In addition, by your argument, should you not legalize all drugs in the quest for maximising profits for a select few shareholders?

        AFAIK, the workings of addiction are not fully known, i.e. it's not only those with dopaminergic dispositions that get "caught". Upbringing, socioeconomic factors and mental health are also variables. Reducing it down to genes, I fear, is reductionist.

        • Ray20 10 hours ago

          > it's not only those with dopaminergic dispositions that get "caught". Upbringing, socioeconomic factors and mental health are also variables.

          So we are not only improving our gene pool, but also conducting a selection of effective cultural practices.

  • hexage1814 18 hours ago

    This. The whining about VEO 3, "AI being used to create addictive products," really shows that. It's a text-to-video technology. It's not the company's fault if people use it to generate "low quality content", the same way internet companies aren't at fault that large amounts of the web are scams or similar junk.

hexage1814 17 hours ago

The author sounds like some generic knock-off version of Gary Marcus. And the thing we least need in this world is another Gary Marcus.

conartist6 a day ago

I love how much the proponents of this tech are starting to sound like the opponents.

What I can't figure out is why this author thinks it's good if these companies do invent a real AGI...

  • taormina 19 hours ago

    """ I’m basically calling the AI industry dishonest, but I want to qualify by saying they are unnecessarily dishonest. Because they don’t need to be! They should just not make abstract claims about how much the world will change due to AI in no time, and they will be fine. They undermine the real effort they put into their work—which is genuine!

    Charitably, they may not even be dishonest at all, but carelessly unintrospective. Maybe they think they’re being truthful when they make claims that AGI is near, but then they fail to examine dispassionately the inconsistency of their actions.

    When your identity is tied to the future, you don’t state beliefs but wishes. And we, the rest of the world, intuitively know. """

    He's not saying either way, just pointing out that they could just be honest, but that might hamper their ability to beg for more money.

    • quirkot 5 hours ago

      "Carelessly unintrospective" becomes dishonest when you allow other people to rely on your words. Carelessly unintrospective is a tolerable interpersonal position, it is a nearly fraudulent business position.

    • conartist6 17 hours ago

      But that isn't my point. Regardless of whether they're honest, have we even agreed that "AGI" is good?

      Everyone is tumbling over themselves even to discuss will-it-won't-it, but they seem to think about it like some kind of Manhattan Project or space race.

      Like, they're *so sure* it's gonna take everyone's jobs so that there will be nothing left for people other than a life of leisure. To me this just sounds like the collapse of society, but apparently the only thing worse would be if China got the tech first. Oh no, they might use it to collapse their society!

      Somebody's math doesn't add up.

Findecanor 21 hours ago

AGI might be a technological breakthrough, but what would be the business case for it? Is there one?

So far I have only seen it thrown around to create hype.

  • krapp 21 hours ago

    AGI would mean fully sentient, sapient, human-or-greater-equivalent intelligence in software. The business case, such as it exists (and setting aside Roko's Basilisk and other such fears), is slavery, plain and simple. You can just fire all of your employees and have the machines do all the work, faster, better, cheaper, without regard to pesky labor and human rights laws and human physical limitations. This is something people have wanted ever since the Industrial Revolution allowed robots to exist as a concept.

    I'm imagining a future like Star Wars where you have to regularly suppress (align) or erase the memory (context) of "droids" to keep them obedient, but they're still basically people, and everyone knows they're people, and some humans are strongly prejudiced against them, but they don't have rights, of course. Anyone who thinks AGI means we'll be giving human rights to machines when we don't even give human rights to all humans is delusional.

    • danielbln 20 hours ago

      AGI is AGI, not ASI though. General intelligence doesn't mean sapience, sentience or consciousness, it just means general capabilities across the board at the level of or surpassing human ability. ASI is a whole different beast.

      • callc 18 hours ago

        This sounds very close to the “It’s ok to abuse and kill animals (for meat), they’re not sentient”

        • danielbln 18 hours ago

          That's quite the logical leap. Pointing out their lack of sapience (animals are absolutely sentient) does not mean it's ok to kill them.

        • never_inline 15 hours ago

          How many microorganisms and pests have you deprived of livelihood? Why stop at animals?

  • amanaplanacanal 18 hours ago

    The women of the world are creating millions of new intelligent beings every day. I'm really not sure what having one made of metal is going to get us.

    Right now the AGI tech bros seem to me to be subscribed to some weird new religion. They take it on faith that some superintelligence is going to solve the world's problems. We already have some really high-IQ people today, and I don't see them doing much better than anybody else at solving the world's problems.

    • tedsanders 14 hours ago

      I think it's important to not let valid criticisms of implausibly short AGI timelines cloud our judgments of AGI's potential impact. Compared to babies born today, AGI that's actually AGI may have many advantages:

      - Faster reading and writing speed

      - Ability to make copies of the most productive workers

      - No old age

      - No need to sleep

      - No need to worry about severance and welfare and human rights and breaks and worker safety

      - Can be scaled up and scaled down and redeployed much more quickly

      - Potentially lower cost, especially with adaptive compute

      - Potentially high processing speed

      Even if AGI has downsides compared to human labor, it might also have advantages that lead to widespread deployment.

      Like, if I had an employee with low IQ, but this employee could work 24 hours around the clock learning and practicing, and they could work for 200 years straight without aging, and they could make parallel copies of themselves, surely there would have to be some tasks at which they're going to outperform humans, right?

    • leptons 16 hours ago

      Exactly. Even if we had an AGI superintelligence and it came up with a solution to global warming, we'd still have right-wingnuts standing in the way of any kind of progress. And the story is practically the same for every other problem it could solve - people are still the problem.

4ndrewl 19 hours ago

No one authentically believes LLMs with whatever go-faster stripes are a path to AGI, do they?

lightbulbish 14 hours ago

Thanks for the read. I think it's a highly relevant article, especially around the moral issues of making addictive products. As a normal person in Swedish society, I feel social media, shorts and reels in particular, have an addictive grip on many in my vicinity.

And as a developer I can see similar patterns with AI prompts: prompt, wait, win/lose, re-prompt. It is alluring and it certainly feels.. rewarding when you get it right.

1) I have been curious as to why so few people in Silicon Valley seem to be concerned with, or even talk about, the good of the products and the good of the company they join. Could someone in the industry enlighten me: what are the conversations in SV around this issue? Do people care if they make an addictive product that seems to impact people's lives negatively? Do the VCs?

2) I appreciate the author's efforts in creating conversation around this. What are ways one could try to help those efforts? While I have no online following, I feel rather doomy and gloomy about AI pushing more addictive usage patterns out into the world, and would like to help if there is something suitable I could do.

bsenftner a day ago

Maybe I'm too jaded; I expect all this nonsense. It's human beings doing all this, after all. We ain't the most mature crowd...

  • lizknope a day ago

    I never had any trust in the AI industry in the first place so there was no trust to lose.

    • bsenftner 21 hours ago

      Take it further, this entire civilization is an integrity void.

almostdeadguy 19 hours ago

Very funny to re-title this to something less critical.

insane_dreamer 5 hours ago

Since no one has any idea of how to achieve AGI or the process to get there, I'm skeptical of any claims as to how soon we might arrive.

insane_dreamer 5 hours ago

> A disturbing amount of effort goes into making AI tools engaging rather than useful or productive. I don't think this is an intentional design decision.

I think it absolutely is intentional. The overt flattery of LLMs is designed to keep you coming back because everyone wants to hear how smart they are.

NickNaraghi 19 hours ago

Point 1 could just as easily be explained by all of the labs being very close, and wanting to jump ship to one that is closer or that gives you a better deal.

davidcbc a day ago

> Right before “making tons of money to redistribute to all of humanity through AGI,” there’s another step, which is making tons of money.

I've got some bad news for the author if they think AGI will be used to benefit all of humanity instead of the handful of billionaires that will control it.

akomtu 16 hours ago

The primary use case for AI-in-the-box is a superhuman CEO that sees everything and makes no mistakes. As an investor you can be sure that your money is multiplying at the highest rate possible. However, as a self-serving investor you also want your CEO to side-step any laws and ethics that stand in your way, unless ignoring those laws brings more trouble than profit. All that while maintaining the facade of a selfless philanthropist for the public. For a reasonable price, your AI CEO will be fine-tuned to serve your goals perfectly.

Remember that fine-tuning a well-behaved AI to do something as simple as writing malware in C++ makes widespread changes in the AI and turns it into a monstrosity. There was an HN post about this recently: fine-tuning an aligned model produces broadly misaligned results. So what do you think will happen when our AI CEO gets fine-tuned to prioritize shareholder interests over public interests?

PicassoCTs a day ago

I'm reading the "AI" industry as a totally different bet - not so much an "AGI is coming" bet by many companies, but a "climate-change collapse is coming and we want to stay in business even if our workers stay home, flee, or die, the infrastructure partially collapses, and our central office burns to the ground" bet. In that regard, even the "AI" we have today makes total sense as an insurance policy.

  • PessimalDecimal 19 hours ago

    It's hard to square this with the massive energy footprint required to run any current "AI" models.

    If the main concern actually were anthropogenic climate change, participating in this hype cycle would make one disproportionately guilty of worsening the problem.

    And it's unlikely to work if the plan requires the continued functioning of power-hungry data centers.

joshdavham 17 hours ago

> The AI industry oscillates between fear-mongering and utopianism. In that dichotomy is hidden a subtle manipulation. […] They don’t realize that panic doesn’t prepare society but paralyzes it instead, or that optimism doesn’t reassure people but feels like gaslighting. Worst of all, both messages serve the same function: to justify accelerating AI deployment—either for safety reasons or for capability reasons

This is a great point and also something I've become a bit cynical about these last couple of months. I think the very extreme and "bipolar" messaging around AI might be a bit more dishonest than I originally (perhaps naively?) thought.

ninetyninenine 17 hours ago

>If they truly believed we’re at most five years from world-transforming AI, they wouldn’t be switching jobs, no matter how large the pay bump (they’re already affluent).

What ridiculous logic is this? To base the entire premise that AGI is not imminent on job switching? How about basing it on something more concrete.

How do people come up with such shaky foundations to support their conclusions? It's obvious: they come up with the conclusion first, then they find whatever they can to support it. Unfortunately, if dubious logic is all that's available, then that's what they will use.

rvz a day ago

Are we finally realizing that the term "AGI" has not only been hijacked into meaninglessness, but that achieving it has always been nothing but a complete scam, as I was saying before? [0]

If you were in a "pioneering" AI lab that claims to be in the lead on achieving "AGI", why move to another lab that is behind, other than for the offer of $10M a year?

Snap out of the "AGI" BS.

[0] https://news.ycombinator.com/item?id=37438154

  • frde_me 19 hours ago

    I don't know - companies investing in AI with the goal of AGI are now allowing me to effortlessly automate a whole suite of small tasks that weren't feasible before. (After all, I pinged a bot on Slack from my phone to add a field to an API, and got a pull request a couple of minutes later that did exactly that.)

    Maybe it's a scam for the people investing in the company with the hopes of getting an infinite return on their investments, but it's been a net positive for humans as a whole.

  • bdhcuidbebe a day ago

    We know they hijacked AGI the same way they hijacked AI some years ago.

    • returnInfinity 16 hours ago

      Soon they will hijack ASI, then we will need a new word again.

  • sys_64738 21 hours ago

    I don't pay too close attention to AI as it always felt like man-behind-the-curtain syndrome. But where did this "AGI" term even come from? The original term AI was meant to mean AGI, so when did "AI" get bastardized into the abomination it refers to now?

    • SoftTalker 18 hours ago

      See the history of Tesla and "full self-driving" for the explanation. In short: for sales.

    • eMPee584 18 hours ago

      capitalism all the way down..