imjonse 4 days ago

Same issue described on HF: https://huggingface.co/blog/gradient_accumulation

It also highlights the main disadvantage of the Transformers codebase's copy-paste approach to models: this fix needs to be applied to every single model separately.

  • danielhanchen 3 days ago

    Unfortunately transformers is a general library for many models, so there are tonnes of different architectures. Copy-pasting and then changing parts of each arch is the only feasible approach in the meantime.

  • CraigJPerry 4 days ago

    >> disadvantage of the Transformers codebase's copy-paste approach to models: this fix needs to be applied to every single model separately

    What are the best tools we have available for tackling this kind of large scale copy-paste change?

    https://github.com/huggingface/transformers/pull/34191/commi...

    This feels too complex to tackle with PyCharm's structural find and replace; even a more powerful structural find and replace like https://comby.dev/ feels underpowered here.

    Sourcegraph batch changes? That solves broadcasting the change but doesn’t help with capturing the change to make.

    OpenRewrite? The Python implementation is early-stage and not production-ready as I understand it. Plus this change is too complex for Refaster templates even if we could use OpenRewrite, so you'd be debugging a fairly involved method visitor, which in this case is probably orders of magnitude more time-consuming than just making the changes manually.

    What else is there that I don’t know about?

    • danielhanchen 3 days ago

      Ye a complete change was necessary for now. HF had to isolate the cross-entropy loss into a separate class, and that had to be applied to all model archs.

danielhanchen 4 days ago

Oh hey! :) TLDR: naive gradient accumulation was over-weighting short sequences and under-weighting long sequences in LLM finetuning and training runs.

For eg two texts with sequence lengths [1, 100] would each have their tokens scaled by 1/(100+1) in full batch training, but grad accum of 2 would weight the [1] text's token as 1/1 * 1/2 = 1/2, whilst each of the [100] text's tokens as 1/100 * 1/2 = 1/200. (The 1/2 is because grad accum divides by the # of grad accum steps.)
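
A quick numeric sketch of that mismatch (toy per-token losses I made up for illustration, not the actual Unsloth/HF code):

  # Two documents: 1 target token with per-token loss 4.0,
  # and 100 target tokens with per-token loss 1.0.
  lengths = [1, 100]
  per_token_losses = [4.0, 1.0]

  # Full batch training: one mean over all 101 tokens.
  full_batch = sum(loss * n for loss, n in zip(per_token_losses, lengths)) / sum(lengths)

  # Naive grad accum with 2 steps: mean per micro-batch, then mean of the two
  # micro-batch means, so each token is weighted 1/(n_i * 2) instead of 1/101.
  naive_accum = sum(loss * n / n for loss, n in zip(per_token_losses, lengths)) / len(lengths)

  print(full_batch)   # ~1.03 (104 / 101)
  print(naive_accum)  # 2.5   (the 1-token text dominates)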

  • ejddhbrbrrnrn 4 days ago

    Is this a general issue rather than Unsloth-specific? How widespread is this problem? Sounds wild if it has been affecting everyone's training.

    • danielhanchen 3 days ago

      Unfortunately it's not an Unsloth issue but a general issue affecting nearly all trainers which use grad accum. We worked with Hugging Face, so their trainers should now be fixed in the main branch.

xcodevn 4 days ago

Look at it from a different point of view: this is a feature, not a bug. Without the fix, every example has equal weight; with the fix, every token has equal weight.

  • oergiR 4 days ago

    That makes it sound like it’s a choice, which it isn’t really. The way to look at it is from a probabilistic perspective: with the fix, you maximise the probability of the data. Without the fix, you fairly arbitrarily raise some probabilities to a power greater than one, and some to a power less than one.
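
    To make that concrete (my notation, not from the post, and assuming one example per accumulation step for simplicity): let p_i be the model's probability of example i, n_i its number of target tokens, N = sum_i n_i the total token count, and K the number of accumulation steps. Then

      \text{full batch:}\quad -\frac{1}{N}\sum_i \log p_i
      \qquad
      \text{naive accumulation:}\quad -\frac{1}{K}\sum_i \frac{1}{n_i}\log p_i \;=\; -\frac{1}{K}\sum_i \log p_i^{1/n_i}

    Up to an overall constant, the unfixed objective therefore maximises prod_i p_i^{N/(K n_i)}: examples shorter than the average length get an exponent greater than one, longer ones an exponent less than one.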

    • danielhanchen 3 days ago

      Yes exactly, mathematically it was incorrect to begin with.

  • pama 4 days ago

    Although there may be uses for such a modified loss, based on the tone of the writeup it feels like this was an unintended bug in their training code. Training LLMs with variable max sequence lengths on different GPUs is a recipe for inefficient training anyway, so careful optimization of MFU at scale, or a fixed max sequence length per batch, would have avoided this “bug”.

    • danielhanchen 3 days ago

      Ye one way to fix it is to use fixed sequence lengths, but it'll still be a tad bit off. Packing, say, 1000 small sequences to fill a large sequence length will still incur the same issue, since the denominator will be off by 1000, but yes the problem is much less pronounced.

      • pama 3 days ago

        Not sure what you meant here; of course one still needs to correctly estimate the cross-entropy loss in the end (in order to keep one's sanity, or to compare to runs with a different total batch size), but each mini-batch term has the same relative contribution to the entropy.

        Edit: oh, I guess you probably meant that the code was previously not averaging correctly even for the case of same total mini-batch length... I haven't looked at this code.

        • danielhanchen 3 days ago

          Yes, so making the sequences all the same length, i.e. by packing them into one, still introduces issues, since the packed sequence has an unpadded token at the end due to the autoregressive nature of the training process.

          • pama a day ago

            Not sure what you mean. There is always an end to a batch. It doesn't have to be the end of a document entry, otherwise the model might get lazy and learn something related to the position of the text (i.e. look into the position encoding and call it a day).

  • danielhanchen 4 days ago

    Yes you're correct, but in normal full batch training without gradient accumulation, all tokens are weighted equally. Standard grad accum does not weight them equally, so the "fix" finally makes grad accum and full batch training mathematically equivalent.
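
    A hedged sketch of what that equivalence can look like in an accumulation loop (illustrative only, not the actual HF Trainer fix; it assumes a causal LM whose output exposes .logits, labels already shifted to align with the logits, and -100 as the ignore index): use a sum-reduced loss per micro-batch and normalize by the token count of the whole accumulation cycle, not of each micro-batch.

      import torch
      import torch.nn.functional as F

      def accumulation_cycle(model, optimizer, micro_batches):
          # Total number of target tokens across ALL micro-batches in this
          # accumulation cycle (-100 marks ignored label positions).
          total_tokens = sum(int((mb["labels"] != -100).sum()) for mb in micro_batches)

          optimizer.zero_grad()
          for mb in micro_batches:
              logits = model(mb["input_ids"]).logits
              # Sum-reduced cross entropy: no per-micro-batch averaging.
              loss = F.cross_entropy(
                  logits.view(-1, logits.size(-1)),
                  mb["labels"].view(-1),
                  ignore_index=-100,
                  reduction="sum",
              )
              # Dividing by the global token count makes the accumulated gradient
              # equal the gradient of the full-batch per-token mean loss.
              (loss / total_tokens).backward()
          optimizer.step()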