650: SparseGPT: Remove 100 Billion Parameters but Retain 100% Accuracy

Super Data Science: ML & AI Podcast with Jon Krohn

SparseGPT is a noteworthy one-shot pruning technique that can halve the size of large language models like GPT-3 without adversely affecting accuracy. In this episode, Jon Krohn provides an overview of this development a…
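For a rough sense of what "one-shot" pruning means in practice, here is a minimal, illustrative sketch in PyTorch that zeroes out the smallest-magnitude 50% of weights in each linear layer with no retraining. This is plain magnitude pruning, not the SparseGPT algorithm itself, which selects weights to drop layer by layer using approximate second-order information and then updates the surviving weights to compensate; the function name and the toy network below are hypothetical.

```python
import torch
import torch.nn as nn

def one_shot_magnitude_prune(model: nn.Module, sparsity: float = 0.5) -> nn.Module:
    """Zero out the smallest-magnitude weights in every linear layer, in one shot.

    A toy stand-in for SparseGPT: the real method chooses which weights to drop
    using approximate second-order information and reconstructs the remaining
    weights, which is what lets it reach ~50% sparsity at GPT scale without
    retraining.
    """
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Linear):
                w = module.weight
                k = int(w.numel() * sparsity)  # number of weights to remove
                if k == 0:
                    continue
                # k-th smallest absolute value serves as the pruning threshold
                threshold = w.abs().flatten().kthvalue(k).values
                mask = w.abs() > threshold
                w.mul_(mask)  # zero out the pruned weights in place
    return model

# Example: prune a small stand-in network to roughly 50% sparsity.
if __name__ == "__main__":
    net = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
    one_shot_magnitude_prune(net, sparsity=0.5)
    zeros = sum((m.weight == 0).sum().item()
                for m in net.modules() if isinstance(m, nn.Linear))
    total = sum(m.weight.numel()
                for m in net.modules() if isinstance(m, nn.Linear))
    print(f"sparsity: {zeros / total:.2%}")
```

Note that simple magnitude pruning like this typically degrades accuracy at high sparsity on very large models; the point of SparseGPT discussed in the episode is that its weight-reconstruction step avoids that degradation in a single pass, with no fine-tuning.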
