• @MrMamiya@feddit.de
    6 · 2 years ago (edited)

    It’s gonna be so fucking rich that the staggering mass of stupidity online prevents us from improving an AI beyond our intelligence level.

    Thank the shitposter in your life.

      • @jcg@halubilo.social
        1 · 2 years ago (edited)

        Shitposting alone saves. Blessed is he who shitposts, more blessed is the one who has been shitposted upon. Shitpost save us all

    • @erwan@lemmy.ml
      2 · 2 years ago

      You can’t really blame the amount of stupidity online.

      The problem is that ChatGPT (and other LLMs) produce content of roughly the average quality of their input data. And AI is not limited to LLMs.

      For chess we were able to build AI that vastly outperform even the best human grandmasters. Imagine if we were to release a chess AI that is just as good as the average human…

      • @Atomic@sh.itjust.works
        1 · 2 years ago (edited)

        We call them chess AI, but they're not actually real A.I. Chess bots work off opening books of predetermined best moves, and then analyze each position and its potential offshoots with an evaluation function.

        They then brute-force positions until they find a path that is beneficial.

        While it may sound very similar, it works very differently from an A.I. However, it turned out that A.I. software became better than humans at writing these evaluation functions.

        So in a sense, chess computers are not A.I.; they're created by A.I. At least Stockfish 12 has these "A.I.-inspired" evaluations. (Currently they're on Stockfish 15, I believe.)

        And yes, we also made "chess AI" that is as bad as the average player. We even made some that are worse, because we figured it would be nice if people could play a chess computer on their own skill level rather than just being destroyed every time.
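The search-plus-evaluation pattern described above can be sketched in miniature. This is a toy negamax search over Nim (take 1 or 2 stones; taking the last stone wins), not real chess; real engines add opening books, alpha-beta pruning, and a rich evaluation of non-terminal positions, and all names here are illustrative only:

```python
def negamax(stones):
    """Score a Nim position from the viewpoint of the player to move:
    +1 = forced win, -1 = forced loss."""
    if stones == 0:
        return -1  # the opponent just took the last stone; we lost
    # "Brute-force positions until it finds a path that is beneficial."
    return max(-negamax(stones - take) for take in (1, 2) if take <= stones)

def best_move(stones):
    """Pick the take whose resulting position is worst for the opponent."""
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: -negamax(stones - take))
```

With 5 stones, `best_move(5)` returns 2, steering the opponent into the losing 3-stone position.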

        • Tempi :sans: :metroidPrime:
          1 · 2 years ago

          @Atomic @erwan you’re talking about “classic AI”, so to speak, but reinforcement learning is a machine learning method that has beaten a lot of games, including chess. Read about AlphaZero for example. It doesn’t need opening books, it just learns games by playing against itself.
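The self-play idea scales down to a toy: a tabular learner that discovers how to play the same miniature Nim game purely from win/loss feedback, with no opening book and no hand-written evaluation. Everything here (the game, the constants, the Q table) is an illustrative stand-in, not AlphaZero's actual method, which uses deep networks plus Monte Carlo tree search:

```python
import random

random.seed(0)

# Q[(stones, take)]: learned value of taking `take` stones for the
# player to move, trained only from which side eventually wins.
Q = {(s, t): 0.0 for s in range(1, 6) for t in (1, 2) if t <= s}

def legal(s):
    return [t for t in (1, 2) if t <= s]

for episode in range(5000):
    stones, history = 5, []
    while stones > 0:
        if random.random() < 0.2:                          # explore
            take = random.choice(legal(stones))
        else:                                              # exploit estimates
            take = max(legal(stones), key=lambda t: Q[(stones, t)])
        history.append((stones, take))
        stones -= take
    reward = 1.0  # the player who took the last stone won
    for s, t in reversed(history):
        Q[(s, t)] += 0.1 * (reward - Q[(s, t)])            # nudge toward outcome
        reward = -reward                                   # zero-sum: sign flips per ply

# Greedy policy read off the learned values.
policy = {s: max(legal(s), key=lambda t: Q[(s, t)]) for s in range(1, 6)}
```

After training, the policy rediscovers the winning strategy (take 2 from 5 stones, take 1 from 4) without ever being told what good play looks like.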

        • @erwan@lemmy.ml
          0 · 2 years ago

          The definition of “AI” is fuzzy and keeps changing. Basically, when an AI use case becomes solved and widespread, it stops being seen as AI.

          Face recognition, OCR, speech recognition, all those used to be considered AI but now they’re just an app on your phone.

          I’m sure in a few years we’ll stop thinking about text generation as AI, but just one more tool we can leverage.

          There is no clear definition of “real AI”.

          • Dr Cog
            1 · 2 years ago (edited)

            Those are all still AI. Scientists still have a functional definition that includes these plus more scripted AI like in video games.

            Essentially, any algorithm that learns and acts on information that has not been explicitly programmed is considered AI.
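That functional definition can be illustrated with a minimal learner: nothing below encodes the AND rule explicitly; the perceptron infers it from labelled examples. This is a generic textbook sketch, not any particular system:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred               # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The AND truth table -- the rule itself never appears in the code.
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```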

  • TheSaneWriter
    5 · 2 years ago

    I’m not too surprised, they’re probably downgrading the publicly available version of ChatGPT because of how expensive it is to run. Math was never its strong suit, but it could do it with enough resources. Without those resources, it’s essentially guessing random numbers.

    • PupBiru
      2 · 2 years ago

      from what i understand, the big change in GPT-4 was that the model could “ask for help” from other tools: for maths, it knew it was a maths problem, transformed it into something a specialised calculation app could handle, and then passed it off to that other code to do the actual calculation

      same thing for a lot of its new features; it was asking specialised software to do the bits it wasn’t good at
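A rough sketch of that hand-off pattern. All names and the routing heuristic here are made up, and OpenAI's real tool use has the model emit structured function calls rather than matching a regex; the point is only the division of labour, where the model routes and a deterministic tool computes:

```python
import ast
import operator
import re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul,
       ast.Div: operator.truediv, ast.Pow: operator.pow, ast.USub: operator.neg}

def calculator(expression):
    """The specialised tool: safely evaluate plain arithmetic."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not arithmetic")
    return walk(ast.parse(expression, mode="eval").body)

def answer(prompt):
    """Stand-in for the model's routing decision: if the prompt looks
    like arithmetic, hand it to the tool instead of guessing digits."""
    match = re.search(r"[\d\s.+\-*/()]+$", prompt)
    if "what is" in prompt.lower() and match:
        return str(calculator(match.group().strip()))
    return "(answered by the language model itself)"
```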

    • @givesomefucks@lemmy.world
      1 · 2 years ago

      Yep.

      Standard VC bullshit.

      Burn money providing a lot for nothing to build brand recognition. Then cut the free service before bringing out a “premium” tier that at first works better than the original.

      Until a bunch of people start paying and the resources aren’t scaled up to match.

  • dugite-code
    4 · 2 years ago

    This is my experience in general. ChatGPT went from amazingly good to overall terrible. I was asking it for snippets of JavaScript and explanations of technical terms, and it was shockingly good. Now I’m lucky if even half of what it outputs is even remotely based on reality.

  • Sagrotan
    2 · 2 years ago

    It learns to be more human. More human than human, that’s our motto here at Tyrell.

    • @StarkillerX42@lemmy.ml
      -11 · 2 years ago

      I’ve never been able to get a solution that was even remotely correct. Granted, most of the time I ask ChatGPT is when I’m having a hard time solving something myself.

    • Excel
      2 · 2 years ago

      This has nothing to do with that. They already have all the data they could ever need to train the model.

    • @Perfide@reddthat.com
      1 · 2 years ago

      I mean, who’s to say they aren’t? But also, the fediverse is worthless compared to the big players. The entirety of the fediverse’s content to date is like a day’s worth of Twitter or Reddit content.

    • Do you think maybe it’s a simple and interesting way of discussing changes in the inner workings of the model, and that maybe people know that we already have calculators?

      • @Fisk400@lemmy.world
        1 · 2 years ago

        I think it’s a lazy way of doing it. OpenAI has clearly stated that math isn’t something they are even trying to make it good at. It’s like testing how fast Usain Bolt is by having him bake a cake.

        If ChatGPT is getting worse at math, it might just be a side effect of making it better at reading comprehension or something else they want it to be good at. There is no way to know.

        Measure something it is supposed to be good at.

        • @Stoneykins@lemmy.one
          2 · 2 years ago

          Nah, asking it to do math is perfect. People are looking for emergent qualities and things it can do that they never expected it to be able to do. The fact that it could do somewhat successful math before despite not being a calculator was fascinating, and the fact that it can’t now is interesting.

          Let the devs worry about how good it is at what it is supposed to do. I want to hear about stuff like this.

        • @ThreeHalflings@sh.itjust.works
          2 · 2 years ago (edited)

          All the things it’s supposed to be good at are completely subjectively judged.

          That’s why, unless you have a panel of experts in your back pocket, you need something with a yes-or-no answer to have an interesting discussion.

          If people were discussing ChatGPT’s code-writing ability, you’d complain that it wasn’t designed to do that either. The problem is that it was designed to transform inputs to relatively believable outputs, representative of its training set. Great. That’s not super useful. Its actual utility comes from its emergent behaviours.

          Lemme know when you make a post detailing the opinions of some university “transform inputs to outputs” professors. Until then, we’ll continue to discuss its behaviour in observable, verifiable and useful areas.

          • @Fisk400@lemmy.world
            1 · 2 years ago

            We have people who assign numerical values to people’s ability to read and write every day. They are English teachers. They test all kinds of things, like vocabulary, reading comprehension and grammar, and in the end they assign grades to those skills. I don’t even need tiny professors in my pocket; they are just out there, teaching children of all ages.

            One of the tasks I gave ChatGPT was to name and describe 10 dwarven characters. Their names had to be adjectives, like Grumpy, but the description could not be based on the name: a dwarf called Grumpy has to be something other than grumpy.

            ChatGPT wrote 5 dwarves that followed the instructions and then defaulted to describing each dwarf based on their name: Sneezy was sickly, Yawny was lazy, and so on. That gives a score of 5/10 on the task I gave it.

            There is a tapestry of clever tests you can give it, with language in focus, to test the ability of a natural-language model without giving it a bunch of numbers.
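A hypothetical harness for that dwarf test (the adjective list and sample output are made up): one point per dwarf whose name is an allowed adjective and whose description does not reuse it. This is a crude string check only; catching a violation like "Sneezy described as sickly" still needs a human, or a stronger model, to judge.

```python
ADJECTIVES = {"grumpy", "sneezy", "yawny", "dozy", "jolly"}

def score(dwarves):
    """dwarves: list of (name, description) pairs. One point per dwarf
    whose name is an adjective and whose description avoids that word."""
    points = 0
    for name, description in dwarves:
        ok_name = name.lower() in ADJECTIVES
        ok_desc = name.lower() not in description.lower()
        points += ok_name and ok_desc
    return points

sample = [
    ("Grumpy", "A master smith who hums while he works."),  # follows the rules
    ("Sneezy", "Always sneezy, always sniffling."),         # leans on the name
]
```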

            • OK, you go get a panel of high school English teachers together and see how useful their opinions are. Lemme know when your post is up, I’ll be interested then.

              • @Fisk400@lemmy.world
                1 · 2 years ago

                Sorry, I thought we were having a discussion when we were supposed to just be smug cunts. I will correct my behaviour in the future.

        • @atomdmac@lemmy.world
          0 · 2 years ago

          Has it gotten better at other stuff? Are you posing a possible scenario or asserting a fact? I’d be curious about specific measurements if the latter.

          • @Fisk400@lemmy.world
            -1 · 2 years ago

            Possible scenario. We can’t know the internal motivations of OpenAI unless they tell us, and I haven’t seen any statements from them beyond the fact that they don’t care if it’s bad at math.