Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph


dc.identifier.uri http://dx.doi.org/10.15488/17475
dc.identifier.uri https://www.repo.uni-hannover.de/handle/123456789/17605
dc.contributor.author D’Souza, Jennifer
dc.contributor.author Hrou, Moussab
dc.contributor.author Auer, Sören
dc.contributor.editor Strauss, Christine
dc.contributor.editor Amagasa, Toshiyuki
dc.contributor.editor Kotsis, Gabriele
dc.contributor.editor Tjoa, A Min
dc.contributor.editor Khalil, Ismail
dc.date.accessioned 2024-06-04T07:19:45Z
dc.date.available 2024-06-04T07:19:45Z
dc.date.issued 2023
dc.identifier.citation D’Souza, J.; Hrou, M.; Auer, S.: Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph. In: Strauss, C.; Amagasa, T.; Kotsis, G.; Tjoa, A M.; Khalil, I. (Eds.): Database and Expert Systems Applications: 34th International Conference, DEXA 2023, Penang, Malaysia, August 28–30, 2023, Proceedings, Part I. New York, NY : Springer, 2023 (Lecture Notes in Computer Science ; 14146), S. 508-515. DOI: https://doi.org/10.1007/978-3-031-39847-6_40
dc.description.abstract Recent investigations have explored prompt-based training of transformer language models for new text genres in low-resource settings. This approach has proven effective in transferring pre-trained or fine-tuned models to resource-scarce environments. This work presents the first results on applying prompt-based training to transformers for scholarly knowledge graph object prediction. Methodologically, it stands out in two main ways: 1) it deviates from previous studies that propose entity and relation extraction pipelines, and 2) it tests the method in a significantly different domain, scholarly knowledge, evaluating the linguistic, probabilistic, and factual generalizability of large-scale transformer models. Our findings demonstrate that: i) out-of-the-box transformer models underperform on the new scholarly domain, ii) prompt-based training improves performance by up to 40% in relaxed evaluation, and iii) tests of the models in a distinct domain reveal a gap in capturing domain knowledge, highlighting the need for increased attention and resources in the scholarly domain for transformer models. eng
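The abstract frames object prediction over scholarly knowledge graph (subject, predicate, ?) triples as prompt-based question answering. The following is a minimal illustrative sketch of that framing only, not the authors' implementation: the Hugging Face text2text pipeline, the google/flan-t5-base checkpoint, and the prompt template are assumptions for the example and are not taken from the paper.

# Hypothetical sketch: object prediction as prompt-based QA with an
# off-the-shelf transformer (model and prompt wording are illustrative only).
from transformers import pipeline

# Any pre-trained text2text model can stand in here; flan-t5-base is an assumption.
generator = pipeline("text2text-generation", model="google/flan-t5-base")

def predict_object(subject: str, predicate: str) -> str:
    """Verbalize a (subject, predicate) pair as a question and let the
    model generate the missing object value."""
    prompt = f"What is the {predicate} of the research contribution '{subject}'?"
    result = generator(prompt, max_new_tokens=32)
    return result[0]["generated_text"]

# Example: complete a triple with a missing object, e.g. (paper, "evaluation metric", ?)
print(predict_object("BERT for relation extraction", "evaluation metric"))

As the abstract notes, such out-of-the-box models tend to underperform on scholarly content, which is what motivates the prompt-based training evaluated in the paper.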
dc.language.iso eng
dc.publisher New York, NY : Springer
dc.relation.ispartof Database and Expert Systems Applications: 34th International Conference, DEXA 2023, Penang, Malaysia, August 28–30, 2023, Proceedings, Part I
dc.relation.ispartofseries Lecture Notes in Computer Science ; 14146
dc.rights This document may be downloaded, read, stored and printed for your own use within the limits of § 53 UrhG but it may not be distributed on other websites via the internet or passed on to external parties. eng
dc.rights Dieses Dokument darf im Rahmen von § 53 UrhG zum eigenen Gebrauch kostenfrei heruntergeladen, gelesen, gespeichert und ausgedruckt, aber nicht auf anderen Webseiten im Internet bereitgestellt oder an Außenstehende weitergegeben werden. ger
dc.subject Knowledge Graph Completion eng
dc.subject Natural Language Processing eng
dc.subject Open Research Knowledge Graph eng
dc.subject Prompt-based Question Answering eng
dc.subject Question Answering eng
dc.subject.classification Konferenzschrift ger
dc.subject.ddc 620 | Ingenieurwissenschaften und Maschinenbau
dc.title Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph eng
dc.type BookPart
dc.type Text
dc.relation.essn 1611-3349
dc.relation.isbn 978-3-031-39847-6
dc.relation.issn 0302-9743
dc.relation.doi https://doi.org/10.1007/978-3-031-39847-6_40
dc.bibliographicCitation.volume 14146
dc.bibliographicCitation.firstPage 508
dc.bibliographicCitation.lastPage 515
dc.description.version publishedVersion eng
tib.accessRights frei zugänglich

