• @PerogiBoi@lemmy.ca
    45 · 1 year ago

    Also check out LM Studio and GPT4All. Both let you run private ChatGPT alternatives from Hugging Face on your own RAM and CPU (they can also offload to the GPU).

    • @webghost0101@sopuli.xyz
      7 · 1 year ago

      Something I am really missing is a breakdown of how good these models actually are compared to each other.

      A demo on Hugging Face couldn’t tell me the boiling point of water, while the author’s own example prompt asked for the boiling point of some chemical.

    • @M500@lemmy.ml
      4 · 1 year ago

      I can’t find a way to run any of these on my home server and access them over HTTP. It looks like it’s possible, but you need a GUI to install them in the first place.

        • @Scipitie@lemmy.dbzer0.com
          2 · edited · 1 year ago

          The X is “X11 forwarding”. I don’t know about you, but I don’t run an X server on a headless machine. Plus, a GUI install is not exactly the best for reproducibility, which is something I aim for in my server infrastructure.

          • @Emma_Gold_Man@lemmy.dbzer0.com
            1 · 1 year ago

            You don’t need to run an X server on the headless machine. As long as the libraries are compiled into the client software (the GUI app), it will work. No GUI needs to be installed on the headless server, the libraries are already present in any common Linux distro, and support would be compiled into a GUI-only app unless it was Wayland-only.

            I agree that a GUI-only installer is a bad thing, but the parent was saying they didn’t know how it could be done. “ssh -X” (or -Y) is how.
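            If you use it often, the flag can also live in the client’s SSH config instead of being typed each time. A minimal sketch of an `~/.ssh/config` entry; the host alias and address are placeholders:

```
# ~/.ssh/config (on the client machine)
Host homeserver              # placeholder alias
    HostName 192.168.1.10    # placeholder address of the headless server
    ForwardX11 yes           # equivalent of passing "ssh -X"
    # ForwardX11Trusted yes  # equivalent of "ssh -Y" (fewer restrictions)
```

With that in place, `ssh homeserver` forwards X11 automatically.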

            • @Scipitie@lemmy.dbzer0.com
              1 · 1 year ago

              That’s a huge today-I-learned for me, thank you! I think I’ll throw xeyes on it just to use ssh -X for the first time in my life. I actually assumed wrong.

              I’ll edit my post accordingly!

      • @PerogiBoi@lemmy.ca
        15 · edited · 1 year ago

        Mistral is thought to be almost as good. I’ve used the latest version of Mistral and found the quality of its output more or less identical.

        It’s not as fast, though, as I’m running it off 16 GB of RAM and an old GTX 1060 card.

        If you use LM Studio, I’d say it’s actually better, because you can give it a pre-prompt so that all of its answers stay within predefined guardrails (e.g. “you are Glorb the cheese pirate and you have a passion for mink fur coats”).

        There’s also the benefit of being able to load uncensored models if you want questionable content generated (erotica, sketchy instructions for synthesizing crystal meth, etc.).
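        LM Studio can also serve the loaded model over a local OpenAI-compatible HTTP endpoint (its default port is 1234), in which case the pre-prompt is just the “system” message of each request. A minimal Python sketch; the model name, port, and prompt text are assumptions about your local setup:

```python
import json
from urllib import request

def build_chat_request(system_prompt: str, user_msg: str) -> dict:
    """Build an OpenAI-style chat payload; the 'system' role is the pre-prompt."""
    return {
        "model": "local-model",  # placeholder: whatever model is loaded locally
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
        "temperature": 0.7,
    }

payload = build_chat_request(
    "You are Glorb the cheese pirate and you have a passion for mink fur coats.",
    "Introduce yourself.",
)

# Uncomment with a local server running (LM Studio defaults to port 1234):
# req = request.Request(
#     "http://localhost:1234/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# reply = json.loads(request.urlopen(req).read())
# print(reply["choices"][0]["message"]["content"])
```

Every answer then stays in character because the system message rides along with each request.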

      • @Hestia@lemmy.world
        1 · 1 year ago

        Depends on your use case. If you want uncensored output then running locally is about the only game in town.

  • ElPussyKangaroo
    5 · 1 year ago

    Any recommendations from the community for models? I use ChatGPT for light work, like touching up a draft I wrote, and for data-related tasks like reorganization and identification.

    Which model would be appropriate?

  • stevedidWHAT
    1 · 1 year ago

    Open source good, together monkey strong 💪🏻

    Build cool village with other frens, make new things, celebrate as village

    • @Falcon@lemmy.world
      4 · edited · 1 year ago

      Many are close!

      In terms of usability, though, they are better.

      For example, ask GPT-4 for an example of cross-site scripting in Flask and you’ll get an ethics discussion. Grab an uncensored model off Hugging Face and you’re off to the races.

      • tubbadu
        1 · 1 year ago

        Seems interesting! Do I need high-end hardware, or can I run them on the old laptop I use as a home server?

        • @Falcon@lemmy.world
          1 · 1 year ago

          Oh no, you need at least a 3060 :(

          They require CUDA. They’re essentially large mathematical equations that compute the probability of the next word.

          The equations are derived by trying different combinations of values until one works well (this is the “learning” in machine learning). The trick is changing the numbers in a way that gets better each time (see e.g. gradient descent).
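          That “change the numbers to get better each time” loop can be sketched in a few lines of Python. A toy gradient descent fitting a single parameter; all the data and numbers here are illustrative:

```python
# Toy gradient descent: find w so that w * x matches y = 3 * x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

def loss(w):
    # Mean squared error between predictions and targets.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w = 0.0    # start with a bad guess
lr = 0.01  # learning rate: how big a step to take each update
for _ in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step in the direction that reduces the loss

print(round(w, 2))  # converges toward 3.0
```

Real models do the same thing with billions of parameters instead of one, which is where the GPU comes in.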

    • Infiltrated_ad8271
      3 · 1 year ago

      The question is quickly answered: none is currently that good, open or not.

      Anyway, it seems this is just a manager. I see some competitors available that I have heard good things about, like Mistral.