Abstract
As inference-time scaling becomes critical for enhanced reasoning capabilities, it is increasingly important to build models that are efficient to infer. We introduce Nemotron-H, a family of 8B and 56B/47B hybrid Mamba-Transformer models designed to reduce inference cost for a given accuracy level. To achieve this goal, we replace the majority of self-attention layers in the common Transformer model architecture with Mamba layers that perform constant computation and require constant memory per generated token. We show that Nemotron-H models offer either better or on-par accuracy compared to other similarly-sized state-of-the-art open-sourced Transformer models (e.g., Qwen-2.5-7B/72B and Llama-3.1-8B/70B), while being up to 3$\times$ faster at inference. To further increase inference speed and reduce the memory required at inference time, we created Nemotron-H-47B-Base from the 56B model using a new pruning-and-distillation compression technique called MiniPuzzle. Nemotron-H-47B-Base achieves accuracy similar to the 56B model while being 20% faster to infer. In addition, we introduce an FP8-based training recipe and show that it achieves results on par with BF16-based training; this recipe was used to train the 56B model. All Nemotron-H models will be released, with support in Hugging Face, NeMo, and Megatron-LM.
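To make the architectural idea concrete, below is a minimal sketch (not the paper's implementation) of a hybrid stack in PyTorch: most layers are a recurrent state-space mixer with a fixed-size state (standing in for Mamba, so per-token generation cost does not grow with sequence length), and self-attention appears only every few layers. The layer pattern, dimensions, and the `SimpleSSM` block are illustrative assumptions, not Nemotron-H's actual configuration.

```python
# Minimal hybrid Mamba-Transformer-style stack (illustrative sketch only).
import torch
import torch.nn as nn

class SimpleSSM(nn.Module):
    """Toy state-space mixer: O(1) compute and O(1) state per generated token."""
    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.in_proj = nn.Linear(d_model, d_state)
        self.out_proj = nn.Linear(d_state, d_model)
        self.decay = nn.Parameter(torch.full((d_state,), 0.9))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        h = torch.zeros(x.size(0), self.decay.numel(), device=x.device)
        outs = []
        for t in range(x.size(1)):  # recurrent scan over a fixed-size state h
            h = self.decay * h + self.in_proj(x[:, t])
            outs.append(self.out_proj(h))
        return torch.stack(outs, dim=1)

class AttentionBlock(nn.Module):
    """Plain self-attention layer (causal masking omitted for brevity)."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x, need_weights=False)
        return out

class HybridStack(nn.Module):
    """Mostly-SSM stack with sparse attention, e.g. 1 attention layer per 8."""
    def __init__(self, d_model: int = 256, n_layers: int = 16, attn_every: int = 8):
        super().__init__()
        self.layers = nn.ModuleList(
            AttentionBlock(d_model) if (i + 1) % attn_every == 0 else SimpleSSM(d_model)
            for i in range(n_layers)
        )
        self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(n_layers))

    def forward(self, x):
        for norm, layer in zip(self.norms, self.layers):
            x = x + layer(norm(x))  # pre-norm residual connection
        return x

if __name__ == "__main__":
    model = HybridStack()
    y = model(torch.randn(2, 32, 256))
    print(y.shape)  # torch.Size([2, 32, 256])
```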
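MiniPuzzle's actual procedure is described in the paper; the sketch below only illustrates the two generic ingredients the abstract names: pruning layers from a trained model, then distilling the pruned student toward the teacher's output distribution. The function names and the importance scores are hypothetical.

```python
# Generic pruning-then-distillation sketch (not MiniPuzzle itself).
import torch
import torch.nn.functional as F

def prune_layers(layers: torch.nn.ModuleList, importance: list[float], keep: int):
    """Keep the `keep` layers with the highest (assumed given) importance scores."""
    order = sorted(range(len(layers)), key=lambda i: importance[i], reverse=True)
    kept = sorted(order[:keep])  # preserve the original layer order
    return torch.nn.ModuleList(layers[i] for i in kept)

def distill_loss(student_logits, teacher_logits, temperature: float = 1.0):
    """KL divergence from the teacher's to the student's token distribution."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.log_softmax(teacher_logits / t, dim=-1),
        log_target=True,
        reduction="batchmean",
    ) * (t * t)
```

Likewise, the paper's FP8 recipe is specific to its training setup; the snippet below only shows the common pattern of running matmul-heavy layers under FP8 autocast with NVIDIA Transformer Engine, assuming the library is installed and an FP8-capable GPU is available.

```python
# FP8 forward/backward through a linear layer via Transformer Engine (sketch).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

fp8_recipe = DelayedScaling(fp8_format=Format.HYBRID)  # E4M3 fwd, E5M2 bwd
layer = te.Linear(1024, 1024).cuda()
x = torch.randn(16, 1024, device="cuda")

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)
y.sum().backward()
```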
arXiv information
Authors | NVIDIA: Aaron Blakeman, Aarti Basant, Abhinav Khattar, Adithya Renduchintala, Akhiad Bercovich, Aleksander Ficek, Alexis Bjorlin, Ali Taghibakhshi, Amala Sanjay Deshmukh, Ameya Sunil Mahabaleshwarkar, Andrew Tao, Anna Shors, Ashwath Aithal, Ashwin Poojary, Ayush Dattagupta, Balaram Buddharaju, Bobby Chen, Boris Ginsburg, Boxin Wang, Brandon Norick, Brian Butterfield, Bryan Catanzaro, Carlo del Mundo, Chengyu Dong, Christine Harvey, Christopher Parisien, Dan Su, Daniel Korzekwa, Danny Yin, Daria Gitman, David Mosallanezhad, Deepak Narayanan, Denys Fridman, Dima Rekesh, Ding Ma, Dmytro Pykhtar, Dong Ahn, Duncan Riach, Dusan Stosic, Eileen Long, Elad Segal, Ellie Evans, Eric Chung, Erick Galinkin, Evelina Bakhturina, Ewa Dobrowolska, Fei Jia, Fuxiao Liu, Gargi Prasad, Gerald Shen, Guilin Liu, Guo Chen, Haifeng Qian, Helen Ngo, Hongbin Liu, Hui Li, Igor Gitman, Ilia Karmanov, Ivan Moshkov, Izik Golan, Jan Kautz, Jane Polak Scowcroft, Jared Casper, Jarno Seppanen, Jason Lu, Jason Sewall, Jiaqi Zeng, Jiaxuan You, Jimmy Zhang, Jing Zhang, Jining Huang, Jinze Xue, Jocelyn Huang, Joey Conway, John Kamalu, Jon Barker, Jonathan Cohen, Joseph Jennings, Jupinder Parmar, Karan Sapra, Kari Briski, Kateryna Chumachenko, Katherine Luna, Keshav Santhanam, Kezhi Kong, Kirthi Sivamani, Krzysztof Pawelec, Kumar Anik, Kunlun Li, Lawrence McAfee, Leon Derczynski, Lindsey Pavao, Luis Vega, Lukas Voegtle, Maciej Bala, Maer Rodrigues de Melo, Makesh Narsimhan Sreedhar, Marcin Chochowski, Markus Kliegl, Marta Stepniewska-Dziubinska, Matthieu Le, Matvei Novikov, Mehrzad Samadi, Michael Andersch, Michael Evans, Miguel Martinez, Mike Chrzanowski, Mike Ranzinger, Mikolaj Blaz, Misha Smelyanskiy, Mohamed Fawzy, Mohammad Shoeybi, Mostofa Patwary, Nayeon Lee, Nima Tajbakhsh, Ning Xu, Oleg Rybakov, Oleksii Kuchaiev, Olivier Delalleau, Osvald Nitski, Parth Chadha, Pasha Shamis, Paulius Micikevicius, Pavlo Molchanov, Peter Dykas, Philipp Fischer, Pierre-Yves Aquilanti, Piotr Bialecki, Prasoon Varshney, Pritam Gundecha, Przemek Tredak, Rabeeh Karimi, Rahul Kandu, Ran El-Yaniv, Raviraj Joshi, Roger Waleffe, Ruoxi Zhang, Sabrina Kavanaugh, Sahil Jain, Samuel Kriman, Sangkug Lym, Sanjeev Satheesh, Saurav Muralidharan, Sean Narenthiran, Selvaraj Anandaraj, Seonmyeong Bak, Sergey Kashirsky, Seungju Han, Shantanu Acharya, Shaona Ghosh, Sharath Turuvekere Sreenivas, Sharon Clay, Shelby Thomas, Shrimai Prabhumoye, Shubham Pachori, Shubham Toshniwal, Shyamala Prayaga, Siddhartha Jain, Sirshak Das, Slawek Kierat, Somshubra Majumdar, Song Han, Soumye Singhal, Sriharsha Niverty, Stefania Alborghetti, Suseella Panguluri, Swetha Bhendigeri, Syeda Nahida Akter, Szymon Migacz, Tal Shiri, Terry Kong, Timo Roman, Tomer Ronen, Trisha Saar, Tugrul Konuk, Tuomas Rintamaki, Tyler Poon, Ushnish De, Vahid Noroozi, Varun Singh, Vijay Korthikanti, Vitaly Kurin, Wasi Uddin Ahmad, Wei Du, Wei Ping, Wenliang Dai, Wonmin Byeon, Xiaowei Ren, Yao Xu, Yejin Choi, Yian Zhang, Ying Lin, Yoshi Suhara, Zhiding Yu, Zhiqi Li, Zhiyu Li, Zhongbo Zhu, Zhuolin Yang, Zijia Chen |
Publication date | 2025-04-04 17:41:58+00:00 |
arXiv site | arxiv_id(pdf) |