
The Unreasonable Ineffectiveness of the Deeper Layers

Posted in futurism

We empirically study a simple layer-pruning strategy for popular families of open-weight pretrained LLMs, finding minimal degradation of performance on different question-answering benchmarks until after a…
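To make the idea concrete, here is a minimal, hedged Python sketch of what such a layer-pruning strategy could look like for an open-weight, Llama-style checkpoint: a contiguous block of deeper decoder layers is simply removed before evaluation. The model name, number of pruned layers, and attribute paths are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: prune a contiguous block of deeper decoder layers from an
# open-weight LLM. Checkpoint, layer count to drop, and attribute paths are
# illustrative assumptions, not the paper's exact method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumption: any Llama-style checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Drop a block of deeper layers (keeping the final layer), reflecting the
# observation that many deeper layers can be removed with little benchmark loss.
n_prune = 8
layers = model.model.layers  # nn.ModuleList of decoder blocks in Llama-style models
keep = list(range(len(layers) - n_prune - 1)) + [len(layers) - 1]
model.model.layers = torch.nn.ModuleList(layers[i] for i in keep)
model.config.num_hidden_layers = len(model.model.layers)

# Quick sanity check: the pruned model still generates text.
prompt = "Question: What is the capital of France?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In the paper's framing, a pruned model like this would then be scored on question-answering benchmarks to see how much performance degrades as more layers are removed.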

