Published: 2025/12/16 13:04:50

Get Smart with Tree Structures! A Cost-Performance Champ for NLI Is Born 🎉

Ultra-short summary: By exploiting sentence structure, we built a model that understands the meaning of text cleverly with far fewer parameters!

🌟 Gyaru-Style Sparkle Points ✨
● Transformer models are super strong, but they cost a fortune, right? This one is the best bang for your buck 💖
● Representing a sentence's structure as a tree means the model can get smart even with less data ✨
● Feels like chatbots and search engines are about to level up nonstop…! 😎


Detailed Explanation


Tree Matching Networks for Natural Language Inference: Parameter-Efficient Semantic Understanding via Dependency Parse Trees

Jason Lunder

In creating sentence embeddings for Natural Language Inference (NLI) tasks, using transformer-based models like BERT leads to high accuracy, but it requires hundreds of millions of parameters. These models take in sentences as a sequence of tokens and learn to encode the meaning of the sequence into embeddings such that those embeddings can be used reliably for NLI tasks. Essentially, every word is considered against every other word in the sequence, and the transformer model determines the relationships between them entirely from scratch. However, a model that accepts explicit linguistic structures like dependency parse trees may be able to leverage prior encoded information about these relationships, without having to learn them from scratch, thus improving learning efficiency. To investigate this, we adapt Graph Matching Networks (GMN) to operate on dependency parse trees, creating Tree Matching Networks (TMN). We compare TMN to a BERT-based model on the SNLI entailment task and on the SemEval similarity task. TMN achieves significantly better results with a significantly reduced memory footprint and much less training time than the BERT-based model on the SNLI task, while both models struggled to perform well on SemEval. Explicit structural representations significantly outperform sequence-based models at comparable scales, but current aggregation methods limit scalability. We propose multi-headed attention aggregation to address this limitation.
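The core idea — message passing along dependency edges combined with a cross-tree matching signal, as in Graph Matching Networks — can be sketched in a toy form. This is an illustrative NumPy sketch under assumptions, not the authors' implementation: the function names (`tree_match_step`, `cross_attention`), the hand-picked toy trees, and the exact update rule (tanh over a concatenation of node state, tree messages, and matching vectors) are all hypothetical simplifications.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(h1, h2):
    """Cross-tree matching signal: each node attends over the other tree's
    nodes, and the 'matching vector' is its difference from that attended
    counterpart (the GMN-style matching term)."""
    scores = h1 @ h2.T                 # (n1, n2) similarity between node states
    a12 = softmax(scores, axis=1)      # tree-1 nodes attend over tree 2
    a21 = softmax(scores.T, axis=1)    # tree-2 nodes attend over tree 1
    mu1 = h1 - a12 @ h2
    mu2 = h2 - a21 @ h1
    return mu1, mu2

def tree_match_step(h1, h2, edges1, edges2, W):
    """One propagation step: sum messages along dependency edges (both
    directions), compute cross-tree matching vectors, then update each
    node state with a shared weight matrix W of shape (3*d, d)."""
    def propagate(h, edges):
        m = np.zeros_like(h)
        for head, dep in edges:        # (head, dependent) index pairs
            m[head] += h[dep]
            m[dep] += h[head]
        return m
    m1, m2 = propagate(h1, edges1), propagate(h2, edges2)
    mu1, mu2 = cross_attention(h1, h2)
    h1_new = np.tanh(np.concatenate([h1, m1, mu1], axis=1) @ W)
    h2_new = np.tanh(np.concatenate([h2, m2, mu2], axis=1) @ W)
    return h1_new, h2_new

# Toy usage: two hypothetical dependency parses with random node features.
rng = np.random.default_rng(0)
d = 4
h1 = rng.normal(size=(3, d))           # e.g. "cats chase mice", root at index 1
h2 = rng.normal(size=(4, d))
edges1 = [(1, 0), (1, 2)]              # chase -> cats, chase -> mice
edges2 = [(1, 0), (1, 2), (2, 3)]
W = rng.normal(size=(3 * d, d))
h1_new, h2_new = tree_match_step(h1, h2, edges1, edges2, W)
```

After a few such steps, the per-node states would be aggregated into a single embedding per sentence for the entailment/similarity head; it is this aggregation step that the abstract identifies as the scalability bottleneck and proposes replacing with multi-headed attention.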

cs / cs.CL / cs.AI / cs.LG