• rebelsimile@sh.itjust.works · 9 months ago

    I haven’t tried them yet, but do LoRAs (and all their variants) add a layer of learned concepts to LLMs the way they do in image generators?

    • kromem@lemmy.world · edited · 9 months ago

      People definitely do LoRA with LLMs. This was a great write-up on the topic from a while back.
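
      For a concrete sense of it, here’s a minimal sketch of attaching LoRA adapters to an LLM with Hugging Face’s peft library (the model name and hyperparameters are illustrative assumptions on my part, not from the write-up):

      ```python
      # A minimal, hedged sketch of LoRA fine-tuning on a causal LM.
      from transformers import AutoModelForCausalLM
      from peft import LoraConfig, get_peft_model

      model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

      config = LoraConfig(
          r=8,                                  # rank of the low-rank update matrices
          lora_alpha=16,                        # scaling applied to the update
          target_modules=["q_proj", "v_proj"],  # attention projections to adapt
          lora_dropout=0.05,
          task_type="CAUSAL_LM",
      )

      model = get_peft_model(model, config)  # base weights frozen, adapters trainable
      model.print_trainable_parameters()     # typically well under 1% of the params
      ```

      Only the small adapter matrices get trained, which is what makes this the LLM analogue of the LoRAs used with image generators.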

      But I have a broader issue with a lot of the current discussion around LLMs: community testing and evaluation of methods and approaches is usually done on smaller models because of cost, and I’m generally very skeptical that results from those models generalize to large ones.

      Especially on top of the growing Goodhart’s Law problem with how the industry measures LLM performance right now.

      Personally, I prefer avoiding fine-tuned models wherever possible and instead crafting longer constrained contexts for pretrained models, with a pre- or post-processing layer to format requests and results in acceptable ways if needed (latency permitting, but things like Groq are fast enough that this isn’t much of an issue).

      There’s a quality and variety that’s lost with a lot of the RLHF models these days (though it’s getting better with the most recent batch, like Claude 3 Opus).

      • rebelsimile@sh.itjust.works · 9 months ago

        Thanks for the link! I actually use SD a lot in practice, so it’s been taking up something like 95% of my attention in the AI space. I have LM Studio on my Mac, and it blazes through responses with a 7B model and meets most of my non-coding needs.

        Can you explain what you mean here?

        Personally, I prefer avoiding fine-tuned models wherever possible and instead crafting longer constrained contexts for pretrained models, with a pre- or post-processing layer to format requests and results in acceptable ways if needed (latency permitting, but things like Groq are fast enough that this isn’t much of an issue).

        Are you saying to do better initial prompting on a raw pretrained model?

        • kromem@lemmy.world · 9 months ago

          Yeah. Pretrained models aren’t instruction-tuned, so instead of “write an ad for a Coca-Cola Twitter post emphasizing the brand focus of ‘enjoy life’” you need a framing that works as autocompletion, something like:

          As an example of our top shelf social media copywriting services, consider the following Clio-winning tweet for the client Coca-Cola, which emphasized their brand focus of “enjoy life”:
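
          In code, that completion-style approach might look like this minimal sketch (assuming the Hugging Face transformers library and a base model such as Mistral-7B-v0.1; both are illustrative stand-ins):

          ```python
          # A hedged sketch: feed a completion-style framing to a base
          # (non-instruct) model and let autocomplete do the work.
          from transformers import pipeline

          generator = pipeline("text-generation", model="mistralai/Mistral-7B-v0.1")

          prompt = (
              "As an example of our top shelf social media copywriting services, "
              "consider the following Clio-winning tweet for the client Coca-Cola, "
              'which emphasized their brand focus of "enjoy life":\n\n'
          )

          result = generator(prompt, max_new_tokens=60, do_sample=True)
          print(result[0]["generated_text"][len(prompt):])  # just the continuation
          ```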

          In terms of the pre- and post-processing, you can use cheaper and faster models to convert a query or response between chat/instruct formatting and the completion-style formatting the pretrained model expects. You can also check for and filter out jailbreaking attempts or inappropriate content at those layers.
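
          As a rough sketch of that layering (every function name and instruction string here is hypothetical, just to show the shape of the pipeline):

          ```python
          # Hypothetical wrapper: a cheap chat model rephrases the request
          # into an autocomplete-friendly prompt, the base model completes
          # it, and the cheap model formats and screens the output.

          def preprocess(user_request: str, cheap_llm) -> str:
              """Rewrite an instruction as the opening of a document whose
              natural continuation fulfills it."""
              return cheap_llm(
                  "Rewrite this request as the start of a document that an "
                  "autocomplete model would naturally continue:\n" + user_request
              )

          def postprocess(raw_completion: str, cheap_llm) -> str:
              """Reformat the raw completion for the user and screen out
              inappropriate content."""
              return cheap_llm(
                  "Clean this draft up as a direct answer, refusing if it "
                  "contains inappropriate content:\n" + raw_completion
              )

          def answer(user_request: str, base_llm, cheap_llm) -> str:
              prompt = preprocess(user_request, cheap_llm)
              completion = base_llm(prompt)  # the pretrained model does the creative work
              return postprocess(completion, cheap_llm)
          ```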

          Basically, pretrained models are just much better at sounding ‘human’, and unless you’re asking them to solve word problems or the exact tasks current models are optimized around (which I think map poorly to real-world use cases), like for like I prefer the pretrained model.

          Though ultimately the biggest factor is overall model sophistication: a simpler, older pretrained model isn’t better than a more modern, larger chat/instruct-tuned model.