Category archive: "62E17"
Quantitative Error Bounds for Scaling Limits of Stochastic Iterative Algorithms
Summary: Stochastic gradient descent (SGD) and stochastic gradient Langevin dynamics (SGLD) … Read more
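
For context, a minimal sketch of the standard update rules for the two algorithms named in the teaser; this is general background, not taken from the linked paper. Here $\eta$ denotes the step size, $\xi_k$ the random mini-batch at step $k$, and $\beta$ the inverse temperature (notation assumed, not from the source):

$$\theta_{k+1} = \theta_k - \eta\,\nabla f(\theta_k;\,\xi_k) \qquad \text{(SGD)}$$

$$\theta_{k+1} = \theta_k - \eta\,\nabla f(\theta_k;\,\xi_k) + \sqrt{2\eta/\beta}\;\varepsilon_k,\qquad \varepsilon_k \sim \mathcal{N}(0, I) \qquad \text{(SGLD)}$$

SGLD differs from SGD only by the injected Gaussian noise term, which turns the iteration into a discretization of Langevin dynamics.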