Robustness of Learning from Task Instructions

Summary

[Title] Robustness of Learning from Task Instructions

[Summary]

・Traditional supervised learning mostly works on individual tasks and requires training on a large set of task-specific examples.
・Because this seriously hinders the development of task generalization, task instructions have recently been adopted as an emerging form of supervision to build systems that can quickly and easily generalize to new tasks.
・Task instructions give the model a definition of the task and allow it to output the appropriate answer based on the input; however, because instructions are often expressed in different forms, a system that can robustly handle any new task is needed.
・This is the first study to investigate the robustness of instruction-supervised models when the instructions for new tasks are (i) manipulated, (ii) paraphrased, or (iii) at different levels of conciseness.

Abstract (original)

Traditional supervised learning mostly works on individual tasks and requires training on a large set of task-specific examples. This paradigm seriously hinders the development of task generalization since preparing a task-specific example set is costly. To build a system that can quickly and easily generalize to new tasks, task instructions have been adopted as an emerging trend of supervision recently. These instructions give the model the definition of the task and allow the model to output the appropriate answer based on the instructions and inputs. However, task instructions are often expressed in different forms, which can be interpreted from two threads: first, some instructions are short sentences and are pretrained language model (PLM) oriented, such as prompts, while other instructions are paragraphs and are human-oriented, such as those in Amazon MTurk; second, different end-users very likely explain the same task with instructions of different textual expressions. A robust system for task generalization should be able to handle any new tasks regardless of the variability of instructions. However, the system robustness in dealing with instruction-driven task generalization is still unexplored. This work investigates the system robustness when the instructions of new tasks are (i) manipulated, (ii) paraphrased, or (iii) from different levels of conciseness. To our knowledge, this is the first work that systematically studies how robust a PLM is when it is supervised by instructions with different factors of variability.

arXiv information

Authors: Jiasheng Gu, Hongyu Zhao, Hanzi Xu, Liangyu Nie, Hongyuan Mei, Wenpeng Yin
Published: 2023-05-02 20:18:06+00:00
arXiv site: arxiv_id(pdf)

Source and services used

arxiv.jp, OpenAI

Category: cs.CL Permalink