Paper Digest: Recent Papers on Question Answering
The Paper Digest Team extracted all recent Question Answering related papers on our radar and generated highlight sentences for them. The results are sorted by relevance and date. In addition to this ‘static’ page, we also provide a real-time version of this article, which has broader coverage and is updated continuously to include the most recent papers on this topic.
This list is created by the Paper Digest Team. Experience the cutting-edge capabilities of Paper Digest, an innovative AI-powered research platform that empowers you to read, write, get answers and review.
Try us today and unlock the full potential of our services for free!
TABLE 1: Paper Digest: Recent Papers on Question Answering
No. | Paper | Author(s) | Source | Date
---|---|---|---|---
1 | Do LLMs Understand Ambiguity in Text? A Case Study in Open-world Question Answering. Highlight: We demonstrate how simple, training-free, token-level disambiguation methods may be effectively used to improve LLM performance for ambiguous question answering tasks. | Aryan Keluskar; Amrita Bhattacharjee; Huan Liu | arxiv-cs.CL | 2024-11-19
2 | AdaCM$^2$: On Understanding Extremely Long-Term Video with Adaptive Cross-Modality Memory Reduction. Highlight: To address the challenges of long videos and complex prompts, we propose AdaCM$^2$, which, for the first time, introduces an adaptive cross-modality memory reduction approach to video-text alignment in an auto-regressive manner on video streams. | YUANBIN MAN et al. | arxiv-cs.CV | 2024-11-19
3 | NEON: News Entity-Interaction Extraction for Enhanced Question Answering. Highlight: However, the information modeled by the parametric memory of LLMs is often outdated, and Web results from prototypical retrieval systems may fail to capture the latest relevant information and struggle to handle conflicting reports in evolving news. To address this challenge, we present the NEON framework, designed to extract emerging entity interactions, such as events or activities, as described in news articles. | Sneha Singhania; Silviu Cucerzan; Allen Herring; Sujay Kumar Jauhar | arxiv-cs.CL | 2024-11-19
4 | Mitigating Knowledge Conflicts in Language Model-Driven Question Answering. Highlight: In this work, we argue that hallucination could be mitigated via explicit correlation between input source and generated content. | HAN CAO et al. | arxiv-cs.CL | 2024-11-18
5 | A Comprehensive Survey on Visual Question Answering Datasets and Algorithms. Highlight: Since the inception of this field, a plethora of VQA datasets and models have been published. In this article, we meticulously analyze the current state of VQA datasets and models, while cleanly dividing them into distinct categories and then summarizing the methodologies and characteristics of each category. | Raihan Kabir; Naznin Haque; Md Saiful Islam | arxiv-cs.CV | 2024-11-17
6 | Memory-Augmented Multimodal LLMs for Surgical VQA Via Self-Contained Inquiry. Highlight: However, these methods often struggle with limited scene understanding and question comprehension, and some rely on external resources (e.g., pre-extracted object features), which can introduce errors and generalize poorly across diverse surgical environments. To address these challenges, we propose SCAN, a simple yet effective memory-augmented framework that leverages Multimodal LLMs to improve surgical context comprehension via Self-Contained Inquiry. | WENJUN HOU et al. | arxiv-cs.CV | 2024-11-16
7 | Large Vision-Language Models for Remote Sensing Visual Question Answering. Highlight: In this paper, we propose a novel method that leverages a generative Large Vision-Language Model (LVLM) to streamline the RSVQA process. | Surasakdi Siripong; Apirak Chaiyapan; Thanakorn Phonchai | arxiv-cs.CV | 2024-11-16
8 | Understanding Multimodal LLMs: The Mechanistic Interpretability of Llava in Visual Question Answering. Highlight: In this paper, we apply mechanistic interpretability methods to analyze the visual question answering (VQA) mechanisms in the first MLLM, Llava. | Zeping Yu; Sophia Ananiadou | arxiv-cs.CL | 2024-11-16
9 | LLaVA-o1: Let Vision Language Models Reason Step-by-Step. Highlight: In this work, we introduce LLaVA-o1, a novel VLM designed to conduct autonomous multistage reasoning. | GUOWEI XU et al. | arxiv-cs.CV | 2024-11-15
10 | Visual Question Answering Based Evaluation Metrics for Text-to-image Generation. Highlight: This paper proposes new evaluation metrics that assess the alignment between input text and generated images for every individual object. | Mizuki Miyamoto; Ryugo Morita; Jinjia Zhou | arxiv-cs.CV | 2024-11-15
11 | A Benchmark for Long-Form Medical Question Answering. Highlight: In this work, we introduce a new publicly available benchmark featuring real-world consumer medical questions with long-form answer evaluations annotated by medical doctors. | PEDRAM HOSSEINI et al. | arxiv-cs.CL | 2024-11-14
12 | Comprehensive and Practical Evaluation of Retrieval-Augmented Generation Systems for Medical Question Answering. Highlight: This paper addresses this gap by providing a comprehensive evaluation framework for medical question-answering (QA) systems in a RAG setting for these situations, including sufficiency, integration, and robustness. We introduce the Medical Retrieval-Augmented Generation Benchmark (MedRGB), which provides various supplementary elements to four medical QA datasets for testing LLMs’ ability to handle these specific scenarios. | Nghia Trung Ngo; Chien Van Nguyen; Franck Dernoncourt; Thien Huu Nguyen | arxiv-cs.CL | 2024-11-14
13 | The Limited Impact of Medical Adaptation of Large Language and Vision-Language Models. Highlight: In this paper, we compare ten public medical LLMs and two VLMs against their corresponding base models, arriving at a different conclusion: all medical VLMs and nearly all medical LLMs fail to consistently improve over their base models in the zero-/few-shot prompting and supervised fine-tuning regimes for medical question-answering (QA). | Daniel P. Jeong; Pranav Mani; Saurabh Garg; Zachary C. Lipton; Michael Oberst | arxiv-cs.CL | 2024-11-13
14 | Deceiving Question-Answering Models: A Hybrid Word-Level Adversarial Approach. Highlight: This paper introduces QA-Attack (Question Answering Attack), a novel word-level adversarial strategy that fools QA models. | Jiyao Li; Mingze Ni; Yongshun Gong; Wei Liu | arxiv-cs.CL | 2024-11-12
15 | Toward Optimal Search and Retrieval for RAG. Highlight: Here, we work towards the goal of understanding how retrievers can be optimized for RAG pipelines for common tasks such as Question Answering (QA). | ALEXANDRIA LETO et al. | arxiv-cs.CL | 2024-11-11
16 | Large Language Models Are Poor Clinical Decision-Makers: A Comprehensive Benchmark. Highlight: To better understand LLMs in the clinic, we construct a benchmark, ClinicBench. | FENGLIN LIU et al. | emnlp | 2024-11-11
17 | DVD: Dynamic Contrastive Decoding for Knowledge Amplification in Multi-Document Question Answering. Highlight: Retrieval-augmented generation (RAG) offers a potential remedy, yet uneven retrieval quality and irrelevant contents may distract LLMs. In this work, we address these issues at the generation phase by treating RAG as a multi-document QA task. | Jing Jin; Houfeng Wang; Hao Zhang; Xiaoguang Li; Zhijiang Guo | emnlp | 2024-11-11
18 | Training-free Deep Concept Injection Enables Language Models for Video Question Answering. Highlight: In this paper, we make the first attempt to demonstrate that the PLM is able to perform zero-shot crossmodal tasks without any crossmodal pretraining, when the observed visual concepts are injected as both additional input text tokens and augmentation in the intermediate features within each feed-forward network of the PLM. | Xudong Lin; Manling Li; Richard Zemel; Heng Ji; Shih-Fu Chang | emnlp | 2024-11-11
19 | MILD Bot: Multidisciplinary Childhood Cancer Survivor Question-Answering Bot. Highlight: This study introduces a Multidisciplinary chILDhood cancer survivor question-answering (MILD) bot designed to support childhood cancer survivors facing diverse challenges in their survivorship journey. | MIRAE KIM et al. | emnlp | 2024-11-11
20 | You Make Me Feel Like A Natural Question: Training QA Systems on Transformed Trivia Questions. Abstract: Training question-answering (QA) and information retrieval systems for web queries requires large, expensive datasets that are difficult to annotate and time-consuming to gather. … | TASNIM KABIR et al. | emnlp | 2024-11-11
21 | CompAct: Compressing Retrieved Documents Actively for Question Answering. Highlight: Context compression tackles this issue by filtering out irrelevant information, but current methods still struggle in realistic scenarios where crucial information cannot be captured with a single-step approach. To overcome this limitation, we introduce CompAct, a novel framework that employs an active strategy to condense extensive documents without losing key information. | Chanwoong Yoon; Taewhoo Lee; Hyeon Hwang; Minbyul Jeong; Jaewoo Kang | emnlp | 2024-11-11
22 | Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering. Highlight: Thus, the retrieved knowledge is not truly conducive to helping answer the question, affecting the performance of the overall system. To address this issue, we propose a novel framework that leverages the visual-language model to select the key knowledge retrieved by DPR and answer questions. | Dongze Hao; Qunbo Wang; Longteng Guo; Jie Jiang; Jing Liu | emnlp | 2024-11-11
23 | EfficientRAG: Efficient Retriever for Multi-Hop Question Answering. Highlight: In this paper, we introduce EfficientRAG, an efficient retriever for multi-hop question answering. | ZIYUAN ZHUANG et al. | emnlp | 2024-11-11
24 | ERVQA: A Dataset to Benchmark The Readiness of Large Vision Language Models in Hospital Environments. Highlight: We introduce the Emergency Room Visual Question Answering (ERVQA) dataset, consisting of … | SOURJYADIP RAY et al. | emnlp | 2024-11-11
25 | SciDQA: A Deep Reading Comprehension Dataset Over Scientific Papers. Highlight: We introduce SciDQA, a new dataset for reading comprehension that challenges language models to deeply understand scientific articles, consisting of 2,937 QA pairs. | Shruti Singh; Nandan Sarkar; Arman Cohan | emnlp | 2024-11-11
26 | Encoding and Controlling Global Semantics for Long-form Video Question Answering. Highlight: To further enhance controllability, we introduce a cross-modal compositional congruence objective to encourage global semantics aligned with the question. | THONG THANH NGUYEN et al. | emnlp | 2024-11-11
27 | RAG4ITOps: A Supervised Fine-Tunable and Comprehensive RAG Framework for IT Operations and Maintenance. Highlight: In this paper, we propose a general and comprehensive framework based on Retrieval Augmented Generation (RAG) that facilitates the whole business process of establishing QA systems for IT operations and maintenance. | TIANYANG ZHANG et al. | emnlp | 2024-11-11
28 | CasiMedicos-Arg: A Medical Question Answering Dataset Annotated with Explanatory Argumentative Structures. Highlight: Developing new tools to aid residents in training their explanation skills is therefore a central objective of AI in education. In this paper, we follow this direction and present, to the best of our knowledge, the first multilingual dataset for Medical Question Answering in which correct and incorrect diagnoses for a clinical case are enriched with a natural language explanation written by doctors. | EKATERINA SVIRIDOVA et al. | emnlp | 2024-11-11
29 | Self-Training Large Language and Vision Assistant for Medical Question Answering. Highlight: However, the advancement of medical image understanding and reasoning critically depends on building high-quality visual instruction data, which is costly and labor-intensive to obtain, particularly in the medical domain. To mitigate this data-starving issue, we introduce the Self-Training Large Language and Vision Assistant for Medical (STLLaVA-Med). | Guohao Sun; Can Qin; Huazhu Fu; Linwei Wang; Zhiqiang Tao | emnlp | 2024-11-11
30 | Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation. Highlight: In this work, we present MIRAGE (Model Internals-based RAG Explanations), a plug-and-play approach using model internals for faithful answer attribution in RAG applications. | Jirui Qi; Gabriele Sarti; Raquel Fernández; Arianna Bisazza | emnlp | 2024-11-11
31 | A Simple LLM Framework for Long-Range Video Question-Answering (IF:3). Highlight: We present LLoVi, a simple yet effective **L**anguage-based **Lo**ng-range **Vi**deo question-answering (LVQA) framework. | CE ZHANG et al. | emnlp | 2024-11-11
32 | Efficient Answer Retrieval System (EARS): Combining Local DB Search and Web Search for Generative QA. Highlight: In this work, we propose an efficient answer retrieval system, **EARS**: a production-ready, factual question answering (QA) system that combines local knowledge base search with generative, context-based QA. | Nikita Krayko; Ivan Sidorov; Fedor Laputin; Daria Galimzianova; Vasily Konovalov | emnlp | 2024-11-11
33 | Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA (IF:3). Highlight: However, existing benchmarks employ irrelevant noise texts to artificially extend the length of test cases, diverging from the real-world scenarios of long-context applications. To bridge this gap, we propose a novel long-context benchmark, Loong, aligning with realistic scenarios through extended multi-document question answering (QA). | MINZHENG WANG et al. | emnlp | 2024-11-11
34 | OMG-QA: Building Open-Domain Multi-Modal Generative Question Answering Systems. Highlight: We introduce OMG-QA, a new resource for question answering designed to evaluate the effectiveness of question answering systems that perform retrieval-augmented generation (RAG) in scenarios that demand reasoning on multi-modal, multi-document contexts. | LINYONG NAN et al. | emnlp | 2024-11-11
35 | Empowering Large Language Model for Continual Video Question Answering with Collaborative Prompting. Highlight: In this paper, we explore the novel challenge of VideoQA within a continual learning framework, and empirically identify a critical issue: fine-tuning a large language model (LLM) for a sequence of tasks often results in catastrophic forgetting. | CHEN CAI et al. | emnlp | 2024-11-11
36 | LLoCO: Learning Long Contexts Offline. Highlight: Processing long contexts remains a challenge for large language models (LLMs) due to the quadratic computational and memory overhead of the self-attention mechanism and the substantial KV cache sizes during generation. We propose LLoCO, a novel approach to address this problem by learning contexts offline through context compression and in-domain parameter-efficient finetuning with LoRA. | SIJUN TAN et al. | emnlp | 2024-11-11
37 | Multi-Level Information Retrieval Augmented Generation for Knowledge-based Visual Question Answering. Highlight: In this work, we propose a multi-level information RAG approach that enhances answer generation through entity retrieval and query expansion. | Adjali Omar; Olivier Ferret; Sahar Ghannay; Hervé Le Borgne | emnlp | 2024-11-11
38 | Adaptive Question Answering: Enhancing Language Model Proficiency for Addressing Knowledge Conflicts with Source Citations. Highlight: Despite the importance of both aspects, no prior research has combined them, leaving a significant gap in the development of QA systems. In this work, we bridge this gap by proposing the novel task of QA with source citation in ambiguous settings, where multiple valid answers exist. | Sagi Shaier; Ari Kobren; Philip V. Ogren | emnlp | 2024-11-11
39 | StorySparkQA: Expert-Annotated QA Pairs with Real-World Knowledge for Children’s Story-Based Learning. Highlight: This limitation can be attributed to the existing question-answering (QA) datasets used for children’s education, upon which the systems are built, failing to capture the nuances of how education experts think when conducting interactive story reading activities. To bridge this gap, we design an annotation framework, empowered by an existing knowledge graph, to capture experts’ annotations and thinking process, and leverage this framework to construct the StorySparkQA dataset, which comprises 5,868 expert-annotated QA pairs with real-world knowledge. | JIAJU CHEN et al. | emnlp | 2024-11-11
40 | Subgraph Retrieval Enhanced By Graph-Text Alignment for Commonsense Question Answering. Highlight: To deal with the problems above, we propose a novel framework: Subgraph REtrieval enhanced by GraPh-Text Alignment, named SEPTA. | BOCI PENG et al. | arxiv-cs.LG | 2024-11-11
41 | REAR: A Relevance-Aware Retrieval-Augmented Framework for Open-Domain Question Answering. Highlight: Despite extensive efforts in RAG research, in existing methods LLMs cannot precisely assess the relevance of retrieved documents, likely leading to misleading or even incorrect utilization of external knowledge (i.e., retrieved documents). To address this issue, in this paper we propose REAR, a RElevance-Aware Retrieval-augmented approach for open-domain question answering (QA). | YUHAO WANG et al. | emnlp | 2024-11-11
42 | RAG-QA Arena: Evaluating Domain Robustness for Long-form Retrieval Augmented Question Answering. Highlight: However, most existing datasets for this task are either constructed using a single source corpus or consist of short extractive answers, which fall short of evaluating large language model (LLM) based RAG-QA systems on cross-domain generalization. To address these limitations, we create Long-form RobustQA (LFRQA), a new dataset comprising human-written long-form answers that integrate short extractive answers from multiple documents into a single, coherent narrative, covering 26K queries and large corpora across seven different domains. | RUJUN HAN et al. | emnlp | 2024-11-11
43 | Visual Text Matters: Improving Text-KVQA with Visual Text Entity Knowledge-aware Large Multimodal Assistant. Highlight: We revisit knowledge-aware text-based visual question answering, also known as Text-KVQA, in light of modern advancements in large multimodal models (LMMs), and make the following contributions: (i) we propose VisTEL, a principled approach to perform visual text entity linking. | Abhirama Subramanyam Penamakuri; Anand Mishra | emnlp | 2024-11-11
44 | Towards Faithful Knowledge Graph Explanation Through Deep Alignment in Commonsense Question Answering. Highlight: We identify confounding effects and LM-KG misalignment as key factors causing spurious explanations. To address this, we introduce the LM-KG Fidelity metric to assess KG representation reliability and propose the LM-KG Distribution-aware Alignment (LKDA) algorithm to improve explanation faithfulness. | Weihe Zhai; Arkaitz Zubiaga; Bingquan Liu; Chengjie Sun; Yalong Zhao | emnlp | 2024-11-11
45 | Right for Right Reasons: Large Language Models for Verifiable Commonsense Knowledge Graph Question Answering. Highlight: In response, we propose Right for Right Reasons (R3), a commonsense KGQA methodology that allows for a verifiable reasoning procedure by axiomatically surfacing intrinsic commonsense knowledge of LLMs and grounding every factual reasoning step on KG triples. | Armin Toroghi; Willis Guo; Mohammad Mahdi Abdollah Pour; Scott Sanner | emnlp | 2024-11-11
46 | EVQAScore: Efficient Video Question Answering Data Evaluation. Highlight: Although various methods have been proposed for assessing video caption quality, there remains a lack of dedicated evaluation methods for video QA. To address this gap, we introduce EVQAScore, a reference-free method that leverages keyword extraction to assess both video caption and video QA data quality. | Hao Liang; Zirong Chen; Wentao Zhang | arxiv-cs.CV | 2024-11-11
47 | PCQPR: Proactive Conversational Question Planning with Reflection. Highlight: In this work, we redefine the CQG task as Conclusion-driven Conversational Question Generation (CCQG) by focusing on proactivity: not merely reacting to the unfolding conversation but actively steering it towards a conclusion-oriented question-answer pair. To address this, we propose a novel approach called Proactive Conversational Question Planning with self-Refining (PCQPR). | Shasha Guo; Lizi Liao; Jing Zhang; Cuiping Li; Hong Chen | emnlp | 2024-11-11
48 | Generate-on-Graph: Treat LLM As Both Agent and KG for Incomplete Knowledge Graph Question Answering. Highlight: To handle IKGQA, we propose a training-free method called Generate-on-Graph (GoG), which can generate new factual triples while exploring KGs. | YAO XU et al. | emnlp | 2024-11-11
49 | LongRAG: A Dual-Perspective Retrieval-Augmented Generation Paradigm for Long-Context Question Answering. Highlight: To this end, we propose LongRAG, a general, dual-perspective, and robust LLM-based RAG system paradigm for LCQA to enhance RAG’s understanding of complex long-context knowledge (i.e., global information and factual details). | QINGFEI ZHAO et al. | emnlp | 2024-11-11
50 | FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture. Highlight: Food is a rich and varied dimension of cultural heritage, crucial to both individuals and social groups. To bridge the gap in the literature on the often-overlooked regional diversity in this domain, we introduce FoodieQA, a manually curated, fine-grained image-text dataset capturing the intricate features of food cultures across various regions in China. | WENYAN LI et al. | emnlp | 2024-11-11
51 | Where Am I? Large Language Models Wandering Between Semantics and Structures in Long Contexts. Highlight: To verify LLMs’ task alignment, we introduce a verification framework and resources considering both the semantic relevancy and structural diversity of the given long-context knowledge. | Seonmin Koo; Jinsung Kim; YoungJoon Jang; Chanjun Park; Heuiseok Lim | emnlp | 2024-11-11
52 | Do Great Minds Think Alike? Investigating Human-AI Complementarity in Question Answering with CAIMIRA. Highlight: Recent advancements of large language models (LLMs) have led to claims of AI surpassing humans in natural language processing (NLP) tasks such as textual understanding and reasoning. This work investigates these assertions by introducing CAIMIRA, a novel framework rooted in item response theory (IRT) that enables quantitative assessment and comparison of problem-solving abilities in question-answering (QA) agents. | Maharshi Gor; Hal Daumé III; Tianyi Zhou; Jordan Lee Boyd-Graber | emnlp | 2024-11-11
53 | Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? Highlight: In this work, in contrast, we offer the first systematic analysis of the effect of fine-grained object grounding on LVLM hallucination under an evaluation protocol that more realistically captures LVLM hallucination in open generation. | Gregor Geigle; Radu Timofte; Goran Glavaš | emnlp | 2024-11-11
54 | TimeR4: Time-aware Retrieval-Augmented Large Language Models for Temporal Knowledge Graph Question Answering. Highlight: To further enhance LLMs’ temporal reasoning ability, this paper aims to integrate relevant temporal knowledge from TKGs into LLMs through a Time-aware Retrieve-Rewrite-Retrieve-Rerank framework, which we name TimeR4. | XINYING QIAN et al. | emnlp | 2024-11-11
55 | Medical Adaptation of Large Language and Vision-Language Models: Are We Making Progress? Highlight: In this paper, we compare seven public medical LLMs and two VLMs against their corresponding base models, arriving at a different conclusion: all medical VLMs and nearly all medical LLMs fail to consistently improve over their base models in the zero-/few-shot prompting regime for medical question-answering (QA) tasks. | Daniel P Jeong; Saurabh Garg; Zachary Chase Lipton; Michael Oberst | emnlp | 2024-11-11
56 | Triad: A Framework Leveraging A Multi-Role LLM-based Agent to Solve Knowledge Base Question Answering. Highlight: In this paper, we present Triad, a unified framework that utilizes an LLM-based agent with multiple roles for KBQA tasks. | CHANG ZONG et al. | emnlp | 2024-11-11
57 | Contextualized Sequence Likelihood: Enhanced Confidence Scores for Natural Language Generation. Highlight: In this work, we propose enhancing the predicted sequence probability by assigning different weights to various tokens using attention values elicited from the base LLM. | Zhen Lin; Shubhendu Trivedi; Jimeng Sun | emnlp | 2024-11-11
58 | Evidence-Focused Fact Summarization for Knowledge-Augmented Zero-Shot Question Answering. Highlight: Existing methods, like concatenation or free-form textual conversion of triples, have limitations, including duplicated entities or relations, reduced evidence density, and failure to highlight crucial evidence. To address these issues, we propose EFSum, an Evidence-focused Fact Summarization framework for enhanced QA with knowledge-augmented LLMs. | Sungho Ko; Hyunjin Cho; Hyungjoo Chae; Jinyoung Yeo; Dongha Lee | emnlp | 2024-11-11
59 | RAC: Retrieval-augmented Conversation Dataset for Open-domain Question Answering in Conversational Settings. Highlight: In this work, we present a novel retrieval-augmented conversation (RAC) dataset and develop a baseline system comprising query rewriting, retrieval, reranking, and response generation stages. | Bonggeun Choi; JeongJae Park; Yoonsung Kim; Jaehyun Park; Youngjoong Ko | emnlp | 2024-11-11
60 | Pre-training Cross-lingual Open Domain Question Answering with Large-scale Synthetic Supervision. | Fan Jiang; Tom Drummond; Trevor Cohn | emnlp | 2024-11-11
61 | CoTKR: Chain-of-Thought Enhanced Knowledge Rewriting for Complex Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To address them, we propose a novel rewriting method CoTKR, Chain- of-Thought Enhanced Knowledge Rewriting, for generating reasoning traces and corresponding knowledge in an interleaved manner, thereby mitigating the limitations of single-step knowledge rewriting. |
YIKE WU et. al. | emnlp | 2024-11-11 |
62 | Can LLM Generate Culturally Relevant Commonsense QA Data? Case Study in Indonesian and Sundanese Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we investigate the effectiveness of using LLMs in generating culturally relevant commonsense QA datasets for Indonesian and Sundanese languages. |
Rifki Afina Putri; Faiz Ghifari Haznitrama; Dea Adhista; Alice Oh; | emnlp | 2024-11-11 |
63 | LONGAGENT: Achieving Question Answering for 128k-Token-Long Documents Through Multi-Agent Collaboration Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce LongAgent, a multi-agent collaboration method that enables efficient and effective QA over 128k-token-long documents. |
JUN ZHAO et. al. | emnlp | 2024-11-11 |
64 | RE-RAG: Improving Open-Domain QA Performance and Interpretability with Relevance Estimator in Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a weakly supervised method for training the RE simply utilizing question-answer data without any labels for correct contexts. |
Kiseung Kim; Jay-Yoon Lee; | emnlp | 2024-11-11 |
65 | Cross-lingual Transfer for Automatic Question Generation By Learning Interrogative Structures in Target Languages Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a simple and efficient XLT-QG method that operates without the need for monolingual, parallel, or labeled data in the target language, utilizing a small language model. |
Seonjeong Hwang; Yunsu Kim; Gary Lee; | emnlp | 2024-11-11 |
66 | ZEBRA: Zero-Shot Example-Based Retrieval Augmentation for Commonsense Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, these methods require additional training, hand-crafted templates or human-written explanations. To address these issues, we introduce ZEBRA, a zero-shot question answering framework that combines retrieval, case-based reasoning, and introspection, and dispenses with the need for additional training of the LLM. |
Francesco Maria Molfese; Simone Conia; Riccardo Orlando; Roberto Navigli; | emnlp | 2024-11-11 |
67 | GOVERN: Gradient Orientation Vote Ensemble for Multi-Teacher Reinforced Distillation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, for practical deployment, it is crucial to perform knowledge distillation to maintain high performance while operating under computational constraints. In this paper, we address a key question: given the importance of unsupervised distillation for student model performance, how can knowledge from multiple teacher models be effectively ensemble during this stage without the guidance of labels? |
WENJIE ZHOU et. al. | emnlp | 2024-11-11 |
68 | PDFTriage: Question Answering Over Long, Structured Documents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: When a system has to query the document for context, this incongruity is brought to the fore, and seemingly trivial questions can trip up the QA system. To bridge this fundamental gap in handling structured documents, we propose an approach called PDFTriage that enables models to retrieve the context based on either structure or content. |
JON SAAD-FALCON et. al. | emnlp | 2024-11-11 |
69 | SparrowVQE: Visual Question Explanation for Course Content Understanding Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper aims to advance the field by introducing Visual Question Explanation (VQE), which enhances the ability of VQA to provide detailed explanations rather than brief responses and address the need for more complex interaction with visual content. |
Jialu Li; Manish Kumar Thota; Ruslan Gokhman; Radek Holik; Youshan Zhang; | arxiv-cs.CV | 2024-11-11 |
70 | Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce a large-scale dataset comprising over 7 million questions from 17 marketplaces across 11 languages. |
Yifei Yuan; Yang Deng; Anders Søgaard; Mohammad Aliannejadi; | emnlp | 2024-11-11 |
71 | Revisiting Automated Evaluation for Long-form Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce LFTQA-Eval, a meta-evaluation dataset comprising 2,988 human-annotated examples, to rigorously assess the efficacy of current automated metrics in assessing LLM-based LFTQA systems, with a focus on faithfulness and comprehensiveness. |
Yuqi Wang; Lyuhao Chen; Songcheng Cai; Zhijian Xu; Yilun Zhao; | emnlp | 2024-11-11 |
72 | TraveLER: A Modular Multi-LMM Agent Framework for Video Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Currently, existing methods perform all of these steps in a single pass without being able to adapt if insufficient or incorrect information is collected. To overcome this, we introduce a modular multi-LMM agent framework based on several agents with different roles, instructed by a Planner agent that updates its instructions using shared feedback from the other agents. |
Chuyi Shang; Amos You; Sanjay Subramanian; Trevor Darrell; Roei Herzig; | emnlp | 2024-11-11 |
73 | CommVQA: Situating Visual Question Answering in Communicative Contexts Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To evaluate how situating images within naturalistic contexts shapes visual questions, we introduce CommVQA, a VQA dataset consisting of images, image descriptions, real-world communicative scenarios where the image might appear (e.g., a travel website), and follow-up questions and answers conditioned on the scenario and description. |
Nandita Shankar Naik; Christopher Potts; Elisa Kreiss; | emnlp | 2024-11-11 |
74 | GUIDEQ: Framework for Guided Questioning for Progressive Informational Collection and Classification Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Our work, GUIDEQ, presents a novel framework for asking guided questions to progressively complete partial information. |
Priya Mishra; Suraj Racha; Kaustubh Ponkshe; Adit Akarsh; Ganesh Ramakrishnan; | arxiv-cs.CL | 2024-11-08 |
75 | SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the Source-aware Semantic Representation Network (SaSR-Net), a novel model designed for AVQA. |
TIANYU YANG et. al. | arxiv-cs.CV | 2024-11-07 |
76 | MEG: Medical Knowledge-Augmented Large Language Models for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we present MEG, a parameter-efficient approach for medical knowledge-augmented LLMs. |
Laura Cabello; Carmen Martin-Turrero; Uchenna Akujuobi; Anders Søgaard; Carlos Bobed; | arxiv-cs.CL | 2024-11-06 |
77 | Lexicalization Is All You Need: Examining The Impact of Lexical Knowledge in A Compositional QALD System Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we examine the impact of lexicalization on Question Answering over Linked Data (QALD). |
David Maria Schmidt; Mohammad Fazleh Elahi; Philipp Cimiano; | arxiv-cs.AI | 2024-11-06 |
78 | Medical Adaptation of Large Language and Vision-Language Models: Are We Making Progress? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we compare seven public medical LLMs and two VLMs against their corresponding base models, arriving at a different conclusion: all medical VLMs and nearly all medical LLMs fail to consistently improve over their base models in the zero-/few-shot prompting regime for medical question-answering (QA) tasks. |
Daniel P. Jeong; Saurabh Garg; Zachary C. Lipton; Michael Oberst; | arxiv-cs.CL | 2024-11-06 |
79 | VQA$^2$: Visual Question Answering for Video Quality Assessment Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, related work is almost nonexistent in the video domain, leaving substantial room for improvement. To address this gap, we introduce the VQA2 Instruction Dataset, the first visual question answering instruction dataset entirely focused on video quality assessment, and based on it, we propose the VQA2 series models. The VQA2 Instruction Dataset consists of three stages and covers various video types, containing 157,735 instruction question-answer pairs, including both manually annotated and synthetic data. |
ZIHENG JIA et. al. | arxiv-cs.CV | 2024-11-06 |
80 | Leveraging Large Language Models in Code Question Answering: Baselines and Issues Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a work devoted to using large language models for question answering over source code in Python. |
Georgy Andryushchenko; Vladimir Ivanov; Vladimir Makharev; Elizaveta Tukhtina; Aidar Valeev; | arxiv-cs.CL | 2024-11-05 |
81 | FactTest: Factuality Testing in Large Language Models with Finite-Sample and Distribution-Free Guarantees Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce FactTest, a novel framework that statistically assesses whether a LLM can confidently provide correct answers to given questions with high-probability correctness guarantees. |
FAN NIE et. al. | arxiv-cs.CL | 2024-11-04 |
82 | Multimodal Commonsense Knowledge Distillation for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose a novel graph-based multimodal commonsense knowledge distillation framework that constructs a unified relational graph over commonsense knowledge, visual objects and questions through a Graph Convolutional Network (GCN) following a teacher-student environment. |
Shuo Yang; Siwen Luo; Soyeon Caren Han; | arxiv-cs.CL | 2024-11-04 |
83 | One VLM to Keep It Learning: Generation and Balancing for Data-free Continual Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose the first data-free method that leverages the language generation capability of a VLM, instead of relying on external models, to produce pseudo-rehearsal data for addressing continual VQA. |
Deepayan Das; Davide Talon; Massimiliano Mancini; Yiming Wang; Elisa Ricci; | arxiv-cs.CV | 2024-11-04 |
84 | A Visual Question Answering Method for SAR Ship: Breaking The Requirement for Multimodal Dataset Construction and Model Fine-Tuning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This has greatly hindered the application of VQA to downstream tasks, such as ship information analysis based on Synthetic Aperture Radar (SAR) imagery. To address this challenge, this letter proposes a novel VQA approach that integrates object detection networks with visual language models, specifically designed for analyzing ships in SAR images. |
Fei Wang; Chengcheng Chen; Hongyu Chen; Yugang Chang; Weiming Zeng; | arxiv-cs.CV | 2024-11-03 |
85 | Diagnosing Medical Datasets with Training Dynamics Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This study explores the potential of using training dynamics as an automated alternative to human annotation for evaluating the quality of training data. |
Laura Wenderoth; | arxiv-cs.LG | 2024-11-03 |
86 | Right This Way: Can VLMs Guide Us to See More to Answer Questions? Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This capability is especially valuable for assisting visually impaired individuals who often need guidance to capture images correctly. To evaluate this capability of current VLMs, we introduce a human-labeled dataset as a benchmark for this task. |
LI LIU et. al. | arxiv-cs.CV | 2024-11-01 |
87 | Enhancing Question Answering Precision with Optimized Vector Retrieval and Instructions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose an innovative approach to improve QA task performances by integrating optimized vector retrievals and instruction methodologies. |
Lixiao Yang; Mengyang Xu; Weimao Ke; | arxiv-cs.IR | 2024-11-01 |
88 | Birdie: Advancing State Space Models with Reward-Driven Objectives and Curricula Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel training procedure, Birdie, that significantly enhances the in-context retrieval capabilities of SSMs without altering their architecture. |
Sam Blouir; Jimmy T. H. Smith; Antonios Anastasopoulos; Amarda Shehu; | arxiv-cs.CL | 2024-11-01 |
89 | GRS-QA — Graph Reasoning-Structured Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the impact of the inherent reasoning structures on LLM M-QA performance remains unclear, largely due to the absence of QA datasets that provide fine-grained reasoning structures. To address this gap, we introduce the Graph Reasoning-Structured Question Answering Dataset (GRS-QA), which includes both semantic contexts and reasoning structures for QA pairs. |
ANISH PAHILAJANI et. al. | arxiv-cs.CL | 2024-11-01 |
90 | Multi-Modal Validation and Domain Interaction Learning for Knowledge-Based Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Knowledge-based Visual Question Answering (KB-VQA) aims to answer image-aware questions via external knowledge, which requires an agent to not only understand images but … |
Ning Xu; Yifei Gao; An-An Liu; Hongshuo Tian; Yongdong Zhang; | IEEE Transactions on Knowledge and Data Engineering | 2024-11-01 |
91 | Rationale-Guided Retrieval Augmented Generation for Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we present RAG$^2$ (RAtionale-Guided RAG), a new framework for enhancing the reliability of RAG in biomedical contexts. |
JIWOONG SOHN et. al. | arxiv-cs.CL | 2024-10-31 |
92 | Show Me What and Where Has Changed? Question Answering and Grounding for Remote Sensing Change Detection Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce a new task named Change Detection Question Answering and Grounding (CDQAG), which extends the traditional change detection task by providing interpretable textual answers and intuitive visual evidence. |
KE LI et. al. | arxiv-cs.CV | 2024-10-31 |
93 | Dynamic Strategy Planning for Efficient Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In our work, we propose a novel technique DyPlan, to induce a dynamic strategy selection process in LLMs, to improve performance and reduce costs in question-answering. |
Tanmay Parekh; Pradyot Prakash; Alexander Radovic; Akshay Shekher; Denis Savenkov; | arxiv-cs.CL | 2024-10-30 |
94 | Synthetic Data Generation with Large Language Models for Personalized Community Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we investigate the potential of Large Language Models (LLMs) for generating synthetic documents to train an IR system for a Personalized Community Question Answering task. |
Marco Braga; Pranav Kasela; Alessandro Raganato; Gabriella Pasi; | arxiv-cs.IR | 2024-10-29 |
95 | Are VLMs Really Blind Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, these models fail to perform well on low-level basic visual tasks which are especially easy for humans. Our goal in this work was to determine if these models are truly blind to geometric reasoning or if there are ways to enhance their capabilities in this area. |
Ayush Singh; Mansi Gupta; Shivank Garg; | arxiv-cs.CL | 2024-10-29 |
96 | ProMQA: Question Answering Dataset for Multimodal Procedural Activity Understanding Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present a novel evaluation dataset, ProMQA, to measure system advancements in application-oriented scenarios. |
KIMIHIRO HASEGAWA et. al. | arxiv-cs.CL | 2024-10-29 |
97 | Enhancing Financial Question Answering with A Multi-Agent Reflection Framework Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose a multi-agent framework incorporating a critic agent that reflects on the reasoning steps and final answers for each question. |
Sorouralsadat Fatemi; Yuheng Hu; | arxiv-cs.CL | 2024-10-29 |
98 | RealCQA-V2: Visual Premise Proving A Manual COT Dataset for Charts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce Visual Premise Proving (VPP), a novel task tailored to refine the process of chart question answering by deconstructing it into a series of logical premises. |
Saleem Ahmed; Ranga Setlur; Venu Govindaraju; | arxiv-cs.AI | 2024-10-29 |
99 | SimpsonsVQA: Enhancing Inquiry-Based Learning with A Tailored Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Hence, in this paper, we present SimpsonsVQA, a novel dataset for VQA derived from The Simpsons TV show, designed to promote inquiry-based learning. |
Ngoc Dung Huynh; Mohamed Reda Bouadjenek; Sunil Aryal; Imran Razzak; Hakim Hacid; | arxiv-cs.CV | 2024-10-29 |
100 | CT2C-QA: Multimodal Question Answering Over Chinese Text, Table and Chart Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present C$\text{T}^2$C-QA, a pioneering Chinese reasoning-based QA dataset that includes an extensive collection of text, tables, and charts, meticulously compiled from 200 selectively sourced webpages. |
BOWEN ZHAO et. al. | arxiv-cs.CL | 2024-10-28 |
101 | SandboxAQ’s Submission to MRL 2024 Shared Task on Multi-lingual Multi-task Information Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper explores the problems of Question Answering (QA) and Named Entity Recognition (NER) in five diverse languages. |
Isidora Chara Tourni; Sayontan Ghosh; Brenda Miao; Constantijn van der Poel; | arxiv-cs.CL | 2024-10-28 |
102 | Few-Shot Multimodal Explanation for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Dizhan Xue; Shengsheng Qian; Changsheng Xu; | ACM Multimedia | 2024-10-28 |
103 | Get Large Language Models Ready to Speak: A Late-fusion Approach for Speech Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce a text-to-speech (TTS) system powered by a fine-tuned Llama model, named TTS-Llama, that achieves state-of-the-art speech synthesis performance. |
MAOHAO SHEN et. al. | arxiv-cs.CL | 2024-10-27 |
104 | EfficientEQA: An Efficient Approach for Open Vocabulary Embodied Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In real-world scenarios, a robotic agent must efficiently explore and accurately answer questions in open-vocabulary settings. To address these challenges, we propose a novel framework called EfficientEQA for open-vocabulary EQA, which enables efficient exploration and accurate answering. |
KAI CHENG et. al. | arxiv-cs.RO | 2024-10-26 |
105 | Sensor2Text: Enabling Natural Language Interactions for Daily Activity Tracking Using Wearable Sensors Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents Sensor2Text, a model proficient in tracking daily activities and engaging in conversations using wearable sensors. |
Wenqiang Chen; Jiaxuan Cheng; Leyao Wang; Wei Zhao; Wojciech Matusik; | arxiv-cs.LG | 2024-10-25 |
106 | Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs Through Generation of Well-Formed Chains Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs. |
KUN LI et. al. | arxiv-cs.CL | 2024-10-24 |
107 | An Adaptive Framework for Generating Systematic Explanatory Answer in Online Q&A Platforms Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The pioneering task is defined as explanatory answer generation, which entails handling identified challenges such as the requirement for comprehensive information and logical coherence within the generated context. To address these issues, we refer to systematic thinking theory and propose SynthRAG, an innovative framework designed to enhance QA performance. |
ZIYANG CHEN et. al. | arxiv-cs.CL | 2024-10-23 |
108 | Aggregated Knowledge Model: Enhancing Domain-Specific QA with Fine-Tuned and Retrieval-Augmented Generation Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces a novel approach to enhancing closed-domain Question Answering (QA) systems, focusing on the specific needs of the Lawrence Berkeley National Laboratory (LBL) Science Information Technology (ScienceIT) domain. |
Fengchen Liu; Jordan Jung; Wei Feinstein; Jeff DAmbrogia; Gary Jung; | arxiv-cs.CL | 2024-10-23 |
109 | SimRAG: Self-Improving Retrieval-Augmented Generation for Adapting Large Language Models to Specialized Domains Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, adapting general-purpose RAG systems to specialized fields such as science and medicine poses unique challenges due to distribution shifts and limited access to domain-specific data. To tackle this, we propose SimRAG, a self-training approach that equips the LLM with joint capabilities of question answering and question generation for domain adaptation. |
RAN XU et. al. | arxiv-cs.CL | 2024-10-23 |
110 | Leveraging The Domain Adaptation of Retrieval Augmented Generation Models for Question Answering and Reducing Hallucination Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigated the performance of diverse RAG and RAG-like architectures through domain adaptation and evaluated their ability to generate accurate and relevant responses grounded in the contextual knowledge base. |
Salman Rakin; Md. A. R. Shibly; Zahin M. Hossain; Zeeshan Khan; Md. Mostofa Akbar; | arxiv-cs.CL | 2024-10-23 |
111 | Graphusion: A RAG Framework for Knowledge Graph Construction with A Global Perspective Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work introduces Graphusion, a zero-shot KGC framework from free text. |
RUI YANG et. al. | arxiv-cs.CL | 2024-10-23 |
112 | Which Client Is Reliable?: A Reliable and Personalized Prompt-based Federated Learning for Medical Image Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a novel personalized federated learning (pFL) method for medical visual question answering (VQA) models, addressing privacy reliability challenges in the medical domain. |
He Zhu; Ren Togo; Takahiro Ogawa; Miki Haseyama; | arxiv-cs.CV | 2024-10-22 |
113 | Correct After Answer: Enhancing Multi-Span Question Answering with Post-Processing Method Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose Answering-Classifying-Correcting (ACC) framework, which employs a post-processing strategy to handle incorrect predictions. |
JIAYI LIN et. al. | arxiv-cs.CL | 2024-10-22 |
114 | SG-FSM: A Self-Guiding Zero-Shot Prompting Paradigm for Multi-Hop Question Answering Based on Finite State Machine Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, Multi-hop Question Answering (MHQA) remains challenging for many existing models due to issues like hallucination, error propagation, and limited context length. To address these challenges and enhance LLMs’ performance on MHQA, we propose the Self-Guiding prompting Finite State Machine (SG-FSM), designed to strengthen multi-hop reasoning abilities. |
XIAOCHEN WANG et. al. | arxiv-cs.CL | 2024-10-22 |
115 | VoiceTextBlender: Augmenting Large Language Models with Speech Capabilities Via Single-Stage Joint Speech-Text Supervised Fine-Tuning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Another critical challenge with SpeechLMs is catastrophic forgetting-where models optimized for speech tasks suffer significant degradation in text-only performance. To mitigate these issues, we propose a novel single-stage joint speech-text SFT approach on the low-rank adaptation (LoRA) of the LLM backbone. |
YIFAN PENG et. al. | arxiv-cs.CL | 2024-10-22 |
116 | Reasoning Before Responding: Towards Legal Long-form Question Answering with Interpretability Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The endeavor to generate detailed answers to contextually rich legal questions has faced challenges, primarily due to the limited availability of specialized datasets involving intensive manual effort or incapability of existing LFQA models to produce informative responses. Addressing this, our research introduces a semi-synthetic dataset, Legal-LFQA (L2FQA) created by exploiting a large language model (LLM) and utilizing contexts derived from existing legal datasets. |
Utkarsh Ujwal; Sai Sri Harsha Surampudi; Sayantan Mitra; Tulika Saha; | cikm | 2024-10-21 |
117 | Learning-to-Defer for Extractive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Furthermore, their size poses deployment challenges on resource-constrained devices. Addressing these limitations, we introduce an adapted two-stage Learning-to-Defer mechanism that enhances decision-making by enabling selective deference to human experts or larger models without retraining language models in the context of question-answering. |
Yannis Montreuil; Axel Carlier; Lai Xing Ng; Wei Tsang Ooi; | arxiv-cs.CL | 2024-10-21 |
118 | RD-P: A Trustworthy Retrieval-Augmented Prompter with Knowledge Graphs for LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel method called Retrieve-and-Discriminate Prompter (RD-P), which leverages knowledge graphs (KGs) for trustworthy RAG by synchronizing knowledge retrieval and discrimination in a unified model. |
Yubo Huang; Guosun Zeng; | cikm | 2024-10-21 |
119 | Enhancing The Completeness of Rationales for Multi-Step Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, drawing inspiration from human-like reasoning processes in answering multi-step questions, we explicitly plan the rationales to ensure their completeness. |
SHANGZI XUE et. al. | cikm | 2024-10-21 |
120 | Fine-Tuning LLMs for Reliable Medical Question-Answering Services Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present an advanced approach to medical question-answering (QA) services, using fine-tuned Large Language Models (LLMs) to improve the accuracy and reliability of healthcare information. |
Ali Anaissi; Ali Braytee; Junaid Akram; | arxiv-cs.CL | 2024-10-21 |
121 | LeDQA: A Chinese Legal Case Document-based Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present LeDQA, the first Chinese legal case document-based question answering dataset to our best knowledge. |
Bulou Liu; Zhenhao Zhu; Qingyao Ai; Yiqun Liu; Yueyue Wu; | cikm | 2024-10-21 |
122 | Distill-SynthKG: Distilling Knowledge Graph Synthesis Workflow for Improved Coverage and Efficiency Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Additionally, there is a gap in evaluation datasets and methodologies for ontology-free KG construction. To overcome these limitations, we propose SynthKG, a multi-step, document-level ontology-free KG synthesis workflow based on LLMs. |
PRAFULLA KUMAR CHOUBEY et. al. | arxiv-cs.CL | 2024-10-21 |
123 | In Situ Answer Sentence Selection at Web-scale Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present Passage-based Extracting Answer Sentence In-place (PEASI), a novel answer selection model optimized for Web-scale setting. |
Zeyu Zhang; Thuy Vu; Alessandro Moschitti; | cikm | 2024-10-21 |
124 | DiaKoP: Dialogue-based Knowledge-oriented Programming for Neural-symbolic Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present Dialogue-based Knowledge-oriented Programming system (DiaKoP), a system with a chat interface designed for multi-turn knowledge base question answering (KBQA). |
ZHICHENG LEE et. al. | cikm | 2024-10-21 |
125 | Retrieval-enhanced Knowledge Editing in Language Models for Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To tackle the problem, we propose the Retrieval-Augmented model Editing (RAE) framework for multi-hop question answering. |
YUCHENG SHI et. al. | cikm | 2024-10-21 |
126 | Reverse Question Answering: Can An LLM Write A Question So Hard (or Bad) That It Can’t Answer? Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: By finding question and answer types yielding RQA errors, we suggest improvements for LLM RQA reasoning. |
NISHANT BALEPUR et. al. | arxiv-cs.CL | 2024-10-20 |
127 | MedLogic-AQA: Enhancing Medical Question Answering with Abstractive Models Focusing on Logical Structures Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, existing approaches often struggle to grasp the intricate logical structures and relationships inherent in medical contexts, thus limiting their capacity to furnish precise and nuanced answers. In this work, we address this gap by proposing a novel Abstractive QA system MedLogic-AQA that harnesses First Order Logic (FOL) based rules extracted from both context and questions to generate well-grounded answers. |
Aizan Zafar; Kshitij Mishra; Asif Ekbal; | arxiv-cs.CL | 2024-10-20 |
128 | BRIEF: Bridging Retrieval and Inference for Multi-hop Reasoning Via Compression Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To accelerate inference, reduce costs, and minimize distractions, this paper presents BRIEF (Bridging Retrieval and Inference through Evidence Fusion), a lightweight approach that performs query-aware multi-hop reasoning by compressing retrieved documents into highly dense textual summaries to integrate into in-context learning. |
Yuankai Li; Jia-Chen Gu; Di Wu; Kai-Wei Chang; Nanyun Peng; | arxiv-cs.CL | 2024-10-20 |
129 | ChitroJera: A Regionally Relevant Visual Question Answering Dataset for Bangla Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Furthermore, existing Bangla VQA datasets offer little cultural relevance and are largely adapted from their foreign counterparts. To address these challenges, we introduce a large-scale Bangla VQA dataset titled ChitroJera, totaling over 15k samples where diverse and locally relevant data sources are used. |
DEEPARGHYA DUTTA BARUA et. al. | arxiv-cs.CV | 2024-10-19 |
130 | Optimizing Retrieval-Augmented Generation with Elasticsearch for Enhanced Question-Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study aims to improve the accuracy and quality of large language models (LLMs) in answering questions by integrating Elasticsearch into the Retrieval-Augmented Generation (RAG) framework. |
JIAJING CHEN et. al. | arxiv-cs.IR | 2024-10-18 |
131 | MultiChartQA: Benchmarking Vision-Language Models on Multi-Chart Problems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Current benchmarks primarily focus on single-chart tasks, neglecting the multi-hop reasoning required to extract and integrate information from multiple charts, which is essential in practical applications. To fill this gap, we introduce MultiChartQA, a benchmark that evaluates MLLMs’ capabilities in four key areas: direct question answering, parallel question answering, comparative reasoning, and sequential reasoning. |
Zifeng Zhu; Mengzhao Jia; Zhihan Zhang; Lang Li; Meng Jiang; | arxiv-cs.CL | 2024-10-18 |
132 | Electrocardiogram-Language Model for Few-Shot Question Answering with Meta Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work introduces a novel multimodal meta-learning method for few-shot ECG question answering, addressing the challenge of limited labeled data while leveraging the rich knowledge encoded within large language models (LLMs). |
Jialu Tang; Tong Xia; Yuan Lu; Cecilia Mascolo; Aaqib Saeed; | arxiv-cs.LG | 2024-10-18 |
133 | SwaQuAD-24: QA Benchmark Dataset in Swahili Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes the creation of a Swahili Question Answering (QA) benchmark dataset, aimed at addressing the underrepresentation of Swahili in natural language processing (NLP). |
Alfred Malengo Kondoro; | arxiv-cs.CL | 2024-10-18 |
134 | Bridging The Training-Inference Gap in LLMs By Leveraging Self-Generated Tokens Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Marginal differences in predictions at each step can cascade over successive steps, resulting in different distributions from what the models were trained for and potentially leading to unpredictable behavior. This paper proposes two simple approaches based on the model's own generations to address this discrepancy between training and inference time. |
ZHEPENG CEN et. al. | arxiv-cs.LG | 2024-10-18 |
135 | Addressing Blind Guessing: Calibration of Selection Bias in Multiple-Choice Question Answering By Video Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we conduct a comprehensive empirical analysis of several VLM architectures across major datasets designed to assess complex video-focused reasoning. |
Olga Loginova; Oleksandr Bezrukov; Alexey Kravets; | arxiv-cs.CL | 2024-10-18 |
136 | BQA: Body Language Question Answering Dataset for Video Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Enabling current Video Large Language Models (VideoLLMs) to accurately interpret body language is a crucial challenge, as unconscious human actions can easily cause the model to misinterpret intent. To address this, we propose BQA, a body language question answering dataset, designed to validate whether models can correctly interpret emotions from short body-language clips annotated with 26 emotion labels. |
SHINTARO OZAKI et. al. | arxiv-cs.CL | 2024-10-17 |
137 | FinQAPT: Empowering Financial Decisions with End-to-End LLM-driven Question Answering Pipeline Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduced a novel clustering-based negative sampling technique to enhance context extraction and a novel prompting method called Dynamic N-shot Prompting to boost the numerical question-answering capabilities of LLMs. |
Kuldeep Singh; Simerjot Kaur; Charese Smiley; | arxiv-cs.IR | 2024-10-17 |
138 | LEGAL-UQA: A Low-Resource Urdu-English Dataset for Legal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present LEGAL-UQA, the first Urdu legal question-answering dataset derived from Pakistan’s constitution. |
Faizan Faisal; Umair Yousaf; | arxiv-cs.CL | 2024-10-16 |
139 | Open Domain Question Answering with Conflicting Contexts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To explore how humans reason through conflicting contexts, we ask our annotators to provide explanations for their selections of correct answers. We demonstrate that by finetuning LLMs to explain their answers, we can introduce richer information into their training that guides them through the process of reasoning with conflicting contexts. |
SIYI LIU et. al. | arxiv-cs.CL | 2024-10-16 |
140 | WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Vision Language Models (VLMs) often struggle with culture-specific knowledge, particularly in languages other than English and in underrepresented cultural contexts. To evaluate their understanding of such knowledge, we introduce WorldCuisines, a massive-scale benchmark for multilingual and multicultural, visually grounded language understanding. |
GENTA INDRA WINATA et. al. | arxiv-cs.CL | 2024-10-16 |
141 | AGENTiGraph: An Interactive Knowledge Graph Platform for LLM-based Chatbots Utilizing Private Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce AGENTiGraph (Adaptive Generative ENgine for Task-based Interaction and Graphical Representation), a platform for knowledge management through natural language interaction. |
XINJIE ZHAO et. al. | arxiv-cs.AI | 2024-10-15 |
142 | Eliminating The Language Bias for Visual Question Answering with Fine-grained Causal Intervention Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel causal intervention training scheme named CIBi to eliminate language bias from a finer-grained perspective. |
YING LIU et. al. | arxiv-cs.CV | 2024-10-14 |
143 | BanglaQuAD: A Bengali Open-domain Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces BanglaQuAD, a Bengali question answering dataset, containing 30,808 question-answer pairs constructed from Bengali Wikipedia articles by native speakers. |
MD RASHAD AL HASAN RONY et. al. | arxiv-cs.CL | 2024-10-14 |
144 | TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce TemporalBench, a new benchmark dedicated to evaluating fine-grained temporal understanding in videos. |
MU CAI et. al. | arxiv-cs.CV | 2024-10-14 |
145 | Unleashing The Power of LLMs As Multi-Modal Encoders for Text and Graph-Structured Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing methods for integrating graph and text embeddings, often based on Multi-layer Perceptrons (MLPs) or shallow transformers, are limited in their ability to fully exploit the heterogeneous nature of these modalities. To overcome this, we propose Janus, a simple yet effective framework that leverages Large Language Models (LLMs) to jointly encode text and graph data. |
JIACHENG LIN et. al. | arxiv-cs.CL | 2024-10-14 |
146 | A Step Towards Mixture of Grader: Statistical Analysis of Existing Automatic Evaluation Metrics Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: As a potential solution, we discuss how a Mixture Of Grader could improve the quality of automatic QA evaluators. |
Yun Joon Soh; Jishen Zhao; | arxiv-cs.CL | 2024-10-13 |
147 | LoRE: Logit-Ranked Retriever Ensemble for Enhancing Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose LoRE (Logit-Ranked Retriever Ensemble), a novel approach that improves answer accuracy and relevance by mitigating positional bias. |
Saikrishna Sanniboina; Shiv Trivedi; Sreenidhi Vijayaraghavan; | arxiv-cs.CL | 2024-10-13 |
148 | Quebec Automobile Insurance Question-Answering With Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces two corpora: the Quebec Automobile Insurance Expertise Reference Corpus and a set of 82 Expert Answers to Layperson Automobile Insurance Questions. |
David Beauchemin; Zachary Gagnon; Richard Khoury; | arxiv-cs.CL | 2024-10-12 |
149 | Enhanced Electronic Health Records Text Summarization Using Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The proposed system leverages the Google Flan-T5 model to generate tailored EHR summaries based on clinician-specified topics. |
Ruvarashe Madzime; Clement Nyirenda; | arxiv-cs.CL | 2024-10-12 |
150 | Declarative Knowledge Distillation from Large Language Models for Visual Question Answering Datasets Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The downside is that crafting the rules for such a component can be an additional burden on the developer. We address this challenge by presenting an approach for declarative knowledge distillation from Large Language Models (LLMs). |
Thomas Eiter; Jan Hadl; Nelson Higuera; Johannes Oetsch; | arxiv-cs.AI | 2024-10-12 |
151 | Prompting Video-Language Foundation Models with Domain-specific Fine-grained Heuristics for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we introduce HeurVidQA, a framework that leverages domain-specific entity-action heuristics to refine pre-trained video-language foundation models. |
Ting Yu; Kunhao Fu; Shuhui Wang; Qingming Huang; Jun Yu; | arxiv-cs.CV | 2024-10-12 |
152 | Multi-granularity Contrastive Cross-modal Collaborative Generation for End-to-End Long-term Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In contrast, recent emerging successful video-language pre-training models enable cost-effective end-to-end modeling but fall short in domain-specific ratiocination and exhibit disparities in task formulation. Toward this end, we present an entirely end-to-end solution for long-term VideoQA: Multi-granularity Contrastive cross-modal collaborative Generation (MCG) model. |
Ting Yu; Kunhao Fu; Jian Zhang; Qingming Huang; Jun Yu; | arxiv-cs.CV | 2024-10-12 |
153 | Measuring The Groundedness of Legal Question-Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work presents a comprehensive benchmark of various methods to assess the groundedness of AI-generated responses, aiming to significantly enhance their reliability. |
DIETRICH TRAUTMANN et. al. | arxiv-cs.CL | 2024-10-11 |
154 | Retriever-and-Memory: Towards Adaptive Note-Enhanced Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To address these, we propose a generic RAG approach called Adaptive Note-Enhanced RAG (Adaptive-Note) for complex QA tasks, which includes the iterative information collector, adaptive memory reviewer, and task-oriented generator, while following a new Retriever-and-Memory paradigm. |
RUOBING WANG et. al. | arxiv-cs.CL | 2024-10-11 |
155 | Retrieving Contextual Information for Long-Form Question Answering Using Weak Supervision Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we propose and compare different weak supervision techniques to optimize retrieval for contextual information. |
Philipp Christmann; Svitlana Vakulenko; Ionut Teodor Sorodoc; Bill Byrne; Adrià de Gispert; | arxiv-cs.CL | 2024-10-11 |
156 | Increasing The Difficulty of Automatically Generated Questions Via Reinforcement Learning with Synthetic Preference Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents a cost-effective approach for generating domain-specific MRC datasets with increased difficulty using Reinforcement Learning from Human Feedback (RLHF) from synthetic preference data. |
William Thorne; Ambrose Robinson; Bohua Peng; Chenghua Lin; Diana Maynard; | arxiv-cs.CL | 2024-10-10 |
157 | Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study Over Open-ended Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To fill the gap, we introduce OKGQA, a new benchmark specifically designed to assess LLMs enhanced with KGs under open-ended, real-world question answering scenarios. |
Yuan Sui; Bryan Hooi; | arxiv-cs.CL | 2024-10-10 |
158 | TVBench: Redesigning Video-Language Evaluation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: As a solution, we propose TVBench, a novel open-source video multiple-choice question-answering benchmark, and demonstrate through extensive evaluations that it requires a high level of temporal understanding. |
Daniel Cores; Michael Dorkenwald; Manuel Mucientes; Cees G. M. Snoek; Yuki M. Asano; | arxiv-cs.CV | 2024-10-10 |
159 | ACCEPT: Adaptive Codebook for Composite and Efficient Prompt Tuning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Traditionally, each prompt has been considered indivisible and updated independently, causing the number of parameters to increase proportionally as the prompt length grows. To address this issue, we propose Adaptive Codebook for Composite and Efficient Prompt Tuning (ACCEPT). |
Yu-Chen Lin; Wei-Hua Li; Jun-Cheng Chen; Chu-Song Chen; | arxiv-cs.CL | 2024-10-10 |
160 | FltLM: An Intergrated Long-Context Large Language Model for Effective Context Filtering and Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, Long-Context LLMs still face two critical challenges: The lost in the middle phenomenon, where crucial middle-context information is likely to be missed, and the distraction issue that the models lose focus due to overly extended contexts. To address these challenges, we propose the Context Filtering Language Model (FltLM), a novel integrated Long-Context LLM which enhances the ability of the model on multi-document question-answering (QA) tasks. |
JINGYANG DENG et. al. | arxiv-cs.CL | 2024-10-09 |
161 | $\beta$-calibration of Language Model Confidence Scores for Generative QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We argue, however, that this standard (average-case) notion of calibration is difficult to interpret for decision-making in generative QA. To address this, we generalize the standard notion of average calibration and introduce $\beta$-calibration, which ensures calibration holds across different question-and-answer groups. |
Putra Manggala; Atalanti Mastakouri; Elke Kirschbaum; Shiva Prasad Kasiviswanathan; Aaditya Ramdas; | arxiv-cs.CL | 2024-10-09 |
162 | Do Great Minds Think Alike? Investigating Human-AI Complementarity in Question Answering with CAIMIRA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recent advancements of large language models (LLMs) have led to claims of AI surpassing humans in natural language processing (NLP) tasks such as textual understanding and reasoning. This work investigates these assertions by introducing CAIMIRA, a novel framework rooted in item response theory (IRT) that enables quantitative assessment and comparison of problem-solving abilities of question-answering (QA) agents: humans and AI systems. |
Maharshi Gor; Hal Daumé III; Tianyi Zhou; Jordan Boyd-Graber; | arxiv-cs.CL | 2024-10-08 |
163 | ActionAtlas: A VideoQA Benchmark for Domain-specialized Action Recognition Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Within any single domain, actions can often appear quite similar, making it challenging for deep models to distinguish them accurately. To evaluate the effectiveness of multimodal foundation models in helping us recognize such actions, we present ActionAtlas v1.0, a multiple-choice video question answering benchmark featuring short videos across various sports. |
MOHAMMADREZA SALEHI et. al. | arxiv-cs.CV | 2024-10-08 |
164 | PDF-WuKong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce PDF-WuKong, a multimodal large language model (MLLM) which is designed to enhance multimodal question-answering (QA) for long PDF documents. |
XUDONG XIE et. al. | arxiv-cs.CV | 2024-10-08 |
165 | ERVQA: A Dataset to Benchmark The Readiness of Large Vision Language Models in Hospital Environments Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the Emergency Room Visual Question Answering (ERVQA) dataset. |
SOURJYADIP RAY et. al. | arxiv-cs.CL | 2024-10-08 |
166 | Document-level Causal Relation Extraction with Knowledge-guided Binary Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a Knowledge-guided binary Question Answering (KnowQA) method with event structures for ECRE, consisting of two stages: Event Structure Construction and Binary Question Answering. |
Zimu Wang; Lei Xia; Wei Wang; Xinya Du; | arxiv-cs.CL | 2024-10-07 |
167 | MEQA: A Benchmark for Multi-hop Event-centric Question Answering with Explanations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a novel semi-automatic question generation strategy by composing event structures from information extraction (IE) datasets and present the first Multi-hop Event-centric Question Answering (MEQA) benchmark. |
Ruosen Li; Zimu Wang; Son Tran; Lei Xia; Xinya Du; | nips | 2024-10-07 |
168 | Right This Way: Can VLMs Guide Us to See More to Answer Questions? Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This capability is especially valuable for assisting visually impaired individuals. To evaluate this capability of current VLMs, we introduce a human-labeled dataset as a benchmark for this task. |
LI LIU et. al. | nips | 2024-10-07 |
169 | Cost-efficient Knowledge-based Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we propose Coke, a novel cost-efficient strategy for KBQA with LLMs, modeled as a tailored multi-armed bandit problem to minimize calls to LLMs within limited budgets. |
JUNNAN DONG et. al. | nips | 2024-10-07 |
170 | CRAG – Comprehensive RAG Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing RAG datasets, however, do not adequately represent the diverse and dynamic nature of real-world Question Answering (QA) tasks. To bridge this gap, we introduce the Comprehensive RAG Benchmark (CRAG), a factual question answering benchmark of 4,409 question-answer pairs and mock APIs to simulate web and Knowledge Graph (KG) search. |
XIAO YANG et. al. | nips | 2024-10-07 |
171 | Wings: Learning Multimodal LLMs Without Text-only Forgetting Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present Wings, a novel MLLM that excels in both text-only dialogues and multimodal comprehension. |
YI-KAI ZHANG et. al. | nips | 2024-10-07 |
172 | CausalChaos! Dataset for Comprehensive Causal Action Question Answering Over Longer Causal Chains Grounded in Dynamic Visual Scenes Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We identify more advanced/explicit causal relationship modeling and joint modeling of vision and language as the immediate areas for future efforts to focus upon. |
PARITOSH PARMAR et. al. | nips | 2024-10-07 |
173 | HAWK: Learning to Understand Open-World Video Anomalies Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce HAWK, a novel framework that leverages interactive large Visual Language Models (VLM) to interpret video anomalies precisely. |
JIAQI TANG et. al. | nips | 2024-10-07 |
174 | RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To foster sound evaluation of language models, we introduce a new test dataset named RepLiQA, suited for question-answering and topic retrieval tasks. |
JOAO MONTEIRO et. al. | nips | 2024-10-07 |
175 | FinBen: An Holistic Financial Benchmark for Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce FinBen, the first extensive open-source evaluation benchmark, including 36 datasets spanning 24 financial tasks, covering seven critical aspects: information extraction (IE), textual analysis, question answering (QA), text generation, risk management, forecasting, and decision-making. |
QIANQIAN XIE et. al. | nips | 2024-10-07 |
176 | Learnable In-Context Vector for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose Learnable ICV (L-ICV) to distill essential task information from demonstrations, improving ICL performance in LMMs. |
YINGZHE PENG et. al. | nips | 2024-10-07 |
177 | CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We benchmark several Multimodal Large Language Models (MLLMs) on CVQA, and we show that the dataset is challenging for the current state-of-the-art models. |
DAVID ROMERO et. al. | nips | 2024-10-07 |
178 | Crafting Interpretable Embeddings By Asking LLMs Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce question-answering embeddings (QA-Emb), embeddings where each feature represents an answer to a yes/no question asked to an LLM. |
VINAMRA BENARA et. al. | nips | 2024-10-07 |
179 | LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Despite this progress, few public benchmarks are available to measure such development. To mitigate this gap, we introduce LongVideoBench, a question-answering benchmark that features video-language interleaved inputs up to an hour long. |
Haoning Wu; DONGXU LI; Bei Chen; Junnan Li; | nips | 2024-10-07 |
180 | LOVA3: Learning to Visual Question Answering, Asking and Assessment Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, current Multimodal Large Language Models (MLLMs) primarily focus on question answering, often neglecting the full potential of questioning and assessment skills. In this study, we introduce LOVA3, an innovative framework named “Learning tO Visual Question Answering, Asking and Assessment,” designed to equip MLLMs with these additional capabilities. |
Hengyuan Zhao; Pan Zhou; Difei Gao; Mike Zheng Shou; | nips | 2024-10-07 |
181 | Co-occurrence Is Not Factual Association in Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Pretrained language models can encode a large amount of knowledge and utilize it for various reasoning tasks, yet they can still struggle to learn novel factual knowledge effectively from finetuning on limited textual demonstrations. In this work, we show that the reason for this deficiency is that language models are biased to learn word co-occurrence statistics instead of true factual associations. |
Xiao Zhang; Miao Li; Ji Wu; | nips | 2024-10-07 |
182 | G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In contrast, we develop a flexible question-answering framework targeting real-world textual graphs, applicable to multiple applications including scene graph understanding, common sense reasoning, and knowledge graph reasoning. |
XIAOXIN HE et. al. | nips | 2024-10-07 |
183 | SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, existing question-answering (QA) datasets based on scientific papers are limited in scale and focus solely on textual content. To address this limitation, we introduce SPIQA (Scientific Paper Image Question Answering), the first large-scale QA dataset specifically designed to interpret complex figures and tables within the context of scientific research articles across various domains of computer science. |
Shraman Pramanick; Rama Chellappa; Subhashini Venugopalan; | nips | 2024-10-07 |
184 | FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce FAMMA, an open-source benchmark for financial multilingual multimodal question answering (QA). |
SIQIAO XUE et. al. | arxiv-cs.CL | 2024-10-06 |
185 | Optimizing AI Reasoning: A Hamiltonian Dynamics Approach to Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces an innovative approach to analyzing and improving multi-hop reasoning in AI systems by drawing inspiration from Hamiltonian mechanics. |
Javier Marin; | arxiv-cs.AI | 2024-10-06 |
186 | Overview of Factify5WQA: Fact Verification Through 5W Question-Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Researchers have found that fake news spreads many times faster than real news. This is a major problem, especially in today’s world where social media is the key source of news …
SURYAVARDAN SURESH et. al. | arxiv-cs.CL | 2024-10-05 |
187 | Adaptive Question Answering: Enhancing Language Model Proficiency for Addressing Knowledge Conflicts with Source Citations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the importance of both aspects, no prior research has combined them, leaving a significant gap in the development of QA systems. In this work, we bridge this gap by proposing the novel task of QA with source citation in ambiguous settings, where multiple valid answers exist. |
Sagi Shaier; Ari Kobren; Philip Ogren; | arxiv-cs.CL | 2024-10-05 |
188 | Beyond Forecasting: Compositional Time Series Reasoning for End-to-End Task Execution Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce Compositional Time Series Reasoning, a new task of handling intricate multistep reasoning tasks from time series data. |
WEN YE et. al. | arxiv-cs.LG | 2024-10-05 |
189 | Cross-lingual Transfer for Automatic Question Generation By Learning Interrogative Structures in Target Languages Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a simple and efficient XLT-QG method that operates without the need for monolingual, parallel, or labeled data in the target language, utilizing a small language model. |
Seonjeong Hwang; Yunsu Kim; Gary Geunbae Lee; | arxiv-cs.CL | 2024-10-04 |
190 | Question-Answering System for Bangla: Fine-tuning BERT-Bangla for A Closed Domain Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Question-answering systems for Bengali have seen limited development, particularly in domain-specific applications. Leveraging advancements in natural language processing, this paper explores a fine-tuned BERT-Bangla model to address this gap. |
Subal Chandra Roy; Md Motaleb Hossen Manik; | arxiv-cs.CL | 2024-10-04 |
191 | Structured List-Grounded Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Motivated by the observation that even advanced language models like GPT-3.5 often miss semantic cues from lists, this paper aims to enhance question answering (QA) systems for better interpretation and use of structured lists. |
MUJEEN SUNG et. al. | arxiv-cs.CL | 2024-10-04 |
192 | ALR$^2$: A Retrieve-then-Reason Framework for Long-context Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We find that modern LLMs struggle to accurately retrieve relevant facts and instead, often hallucinate retrieved facts, resulting in flawed reasoning and the production of incorrect answers. To address these issues, we introduce ALR$^2$, a method that augments the long-context reasoning capability of LLMs via an explicit two-stage procedure, i.e., aligning LLMs with the objectives of both retrieval and reasoning. |
HUAYANG LI et. al. | arxiv-cs.CL | 2024-10-04 |
193 | Domain-Specific Retrieval-Augmented Generation Using Vector Stores, Knowledge Graphs, and Tensor Factorization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce SMART-SLIC, a highly domain-specific LLM framework that integrates RAG with a KG and a vector store (VS) that stores factual, domain-specific information. |
RYAN C. BARRON et. al. | arxiv-cs.CL | 2024-10-03 |
194 | MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose MA-RLHF, a simple yet effective RLHF framework that incorporates macro actions — sequences of tokens or higher-level language constructs — into the learning process. |
YEKUN CHAI et. al. | arxiv-cs.CL | 2024-10-03 |
195 | Video Instruction Tuning With Synthetic Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The development of video large multimodal models (LMMs) has been hindered by the difficulty of curating large amounts of high-quality raw data from the web. To address this, we propose an alternative approach by creating a high-quality synthetic dataset specifically for video instruction-following, namely LLaVA-Video-178K. |
YUANHAN ZHANG et. al. | arxiv-cs.CV | 2024-10-03 |
196 | Listening to The Wise Few: Select-and-Copy Attention Heads for Multiple-Choice QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, such a format for evaluating LLMs has limitations, since even if the model knows the correct answer, it may struggle to select the corresponding letter simply due to difficulties in following this rigid format. To address this, we introduce new scores that better capture and reveal a model’s underlying knowledge: the Query-Key Score (QK-score), derived from the interaction between query and key representations in attention heads, and the Attention Score, based on attention weights. |
EDUARD TULCHINSKII et. al. | arxiv-cs.CL | 2024-10-03 |
197 | Coal Mining Question Answering with LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present a novel approach to coal mining question answering (QA) using large language models (LLMs) combined with tailored prompt engineering techniques. |
Antonio Carlos Rivera; Anthony Moore; Steven Robinson; | arxiv-cs.CL | 2024-10-03 |
198 | Question-guided Knowledge Graph Re-scoring and Injection for Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, the retrieved subgraph inevitably brings distraction information for knowledge utilization, impeding the model’s ability to perform accurate reasoning. To address this issue, we propose a Question-guided Knowledge Graph Re-scoring method (Q-KGR) to eliminate noisy pathways for the input question, thereby focusing specifically on pertinent factual knowledge. |
YU ZHANG et. al. | arxiv-cs.CL | 2024-10-02 |
199 | AHP-Powered LLM Reasoning for Multi-Criteria Evaluation of Open-Ended Responses Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose a method that leverages LLMs and the analytic hierarchy process (AHP) to assess answers to open-ended questions. |
Xiaotian Lu; Jiyi Li; Koh Takeuchi; Hisashi Kashima; | arxiv-cs.CL | 2024-10-02 |
200 | Bridging Context Gaps: Leveraging Coreference Resolution for Long Contextual Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: These challenges often arise due to the complexity and ambiguity present in longer texts. To enhance the performance of LLMs in such scenarios, we introduce the Long Question Coreference Adaptation (LQCA) method. |
YANMING LIU et. al. | arxiv-cs.CL | 2024-10-02 |
201 | Benchmarking Large Language Models for Conversational Question Answering in Multi-instructional Documents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing benchmarks have primarily focused on basic factual question-answering from single narrative documents, making them inadequate for assessing a model’s ability to comprehend complex real-world instructional documents and provide accurate step-by-step guidance in daily life. To bridge this gap, we present InsCoQA, a novel benchmark tailored for evaluating large language models (LLMs) in the context of CQA with instructional documents. |
SHIWEI WU et. al. | arxiv-cs.CL | 2024-10-01 |
202 | Semantic Parsing with Candidate Expressions for Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose a grammar augmented with candidate expressions for semantic parsing on a large KB with a seq2seq PLM. |
Daehwan Nam; Gary Geunbae Lee; | arxiv-cs.CL | 2024-10-01 |
203 | Quantifying Reliance on External Information Over Parametric Knowledge During Retrieval Augmented Generation (RAG) Using Mechanistic Analysis Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose (a) Causal Mediation Analysis, for proving that parametric memory is minimally utilized when answering a question, and (b) Attention Contributions and Knockouts, for showing that the last-token residual stream is not enriched from the subject token in the question, but rather from tokens of the RAG context. |
RESHMI GHOSH et. al. | arxiv-cs.CL | 2024-10-01 |
204 | Vamos: Versatile Action Models for Video Understanding IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we propose versatile action models (Vamos), a learning framework powered by a large language model as the “reasoner”, which can flexibly leverage visual embeddings and free-form text descriptions as its input. |
SHIJIE WANG et. al. | eccv | 2024-09-30 |
205 | FunQA: Towards Surprising Video Comprehension IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce FunQA, a challenging video question answering (QA) dataset specifically designed to evaluate and enhance the depth of video reasoning based on counter-intuitive and fun videos. |
BINZHU XIE et. al. | eccv | 2024-09-30 |
206 | Compositional Substitutivity of Visual Reasoning for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we explore the compositional substitutivity of visual reasoning in the context of visual question answering (VQA). Specifically, for each question-image pair, we construct a support question set and a support image set, and both sets contain questions/images that share synonymous primitives with the original question/image. To quantitatively evaluate the substitutivity of VQA models, we introduce two datasets: GQA-SPS and VQA-SPS v2, by performing three types of substitutions using synonymous primitives including words, visual entities, and referents. |
CHUANHAO LI et. al. | eccv | 2024-09-30 |
207 | ViLA: Efficient Video-Language Alignment for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose an efficient Video-Language Alignment (ViLA) network. |
XIJUN WANG et. al. | eccv | 2024-09-30 |
208 | QAEncoder: Towards Aligned Representation Learning in Question Answering System Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, the inherent gap between user queries and relevant documents hinders precise matching. Motivated by our conical distribution hypothesis, which posits that potential queries and documents form a cone-like structure in the embedding space, we introduce QAEncoder, a training-free approach to bridge this gap. |
ZHENGREN WANG et. al. | arxiv-cs.CL | 2024-09-30 |
209 | VideoINSTA: Zero-shot Long Video Understanding Via Informative Spatial-Temporal Reasoning with LLMs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a framework VideoINSTA, i.e. INformative Spatial-TemporAl Reasoning for zero-shot long-form video understanding. |
RUOTONG LIAO et. al. | arxiv-cs.CV | 2024-09-30 |
210 | CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenarios Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To overcome this limitation, we introduce the CAT, which enhances MLLM in three ways: 1) besides straightforwardly bridging audio and video, we design a clue aggregator that aggregates question-related clues in dynamic audio-visual scenarios to enrich the detailed knowledge required for large language models. Notably, we collect an audio-visual joint instruction dataset named AVinstruct, to further enhance the capacity of CAT to model cross-semantic correlations. |
QILANG YE et. al. | eccv | 2024-09-30 |
211 | TimeCraft: Navigate Weakly-Supervised Temporal Grounded Video Question Answering Via Bi-directional Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we focus on the grounded VQA task, which necessitates models to provide answers along with explicit visual evidence, i.e., certain video segments. |
Huabin Liu; Xiao Ma; Cheng Zhong; Yang Zhang; Weiyao Lin; | eccv | 2024-09-30 |
212 | LingoQA: Video Question Answering for Autonomous Driving Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce LingoQA, a novel dataset and benchmark for visual question answering in autonomous driving. We release our dataset and benchmark as an evaluation platform for vision-language models in autonomous driving. |
ANA-MARIA MARCU et. al. | eccv | 2024-09-30 |
213 | GRACE: Graph-Based Contextual Debiasing for Fair Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Innovative methods are required to ensure that LLMs can deliver unbiased yet contextually relevant responses. To tackle this challenge, we present GRAph-based Contextual DEbiasing (GRACE), a novel graph-based method for debiasing knowledge-based VQA models. |
Yifeng Zhang; Ming Jiang; Qi Zhao; | eccv | 2024-09-30 |
214 | WSI-VQA: Interpreting Whole Slide Images By Generative Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel framework (WSI-VQA) to interpret WSIs by generative visual question answering. |
Pingyi Chen; Chenglu Zhu; Sunyi Zheng; Honglin Li; Lin Yang; | eccv | 2024-09-30 |
215 | Learning Trimodal Relation for Audio-Visual Question Answering with Missing Modality Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a framework that ensures robust AVQA performance even when a modality is missing. |
Kyu Ri Park; Hong Joo Lee; Jung Uk Kim; | eccv | 2024-09-30 |
216 | DriveLM: Driving with Graph Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We instantiate datasets (DriveLM-Data) built upon nuScenes and CARLA, and propose a VLM-based baseline approach (DriveLM-Agent) for jointly performing Graph VQA and end-to-end driving. |
CHONGHAO SIMA et. al. | eccv | 2024-09-30 |
217 | Q&A Prompts: Discovering Rich Visual Clues Through Mining Question-Answer Prompts for VQA Requiring Diverse World Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we believe that if we can collect rich visual clues, we will recognize the image more accurately, understand the question better, recall relevant knowledge more easily, and finally reason out the answer. |
Haibo Wang; Weifeng Ge; | eccv | 2024-09-30 |
218 | Fully Authentic Visual Question Answering Dataset from Online Communities Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the first VQA dataset in which all contents originate from an authentic use case. |
CHONGYAN CHEN et. al. | eccv | 2024-09-30 |
219 | An Explainable Vision Question Answer Model Via Diffusion Chain-of-Thought Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This means that generating explanations solely for the answer can lead to a semantic discrepancy between the content of the explanation and the question-answering content. To address this, we propose a step-by-step reasoning approach to reduce such semantic discrepancies. |
Chunhao LU; Qiang Lu; Jake Luo; | eccv | 2024-09-30 |
220 | Video Question Answering with Procedural Programs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose to answer questions about videos by generating short procedural programs that solve visual subtasks to obtain a final answer. |
Rohan Choudhury; Koichiro Niinuma; Kris Kitani; Laszlo A Jeni; | eccv | 2024-09-30 |
221 | AutoEval-Video: An Automatic Benchmark for Assessing Large Vision Language Models in Open-Ended Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a novel and challenging benchmark, AutoEval-Video, to comprehensively evaluate large vision-language models in open-ended video question answering. |
Weiran Huang; Xiuyuan Chen; Yuan Lin; Yuchen Zhang; | eccv | 2024-09-30 |
222 | See and Think: Embodied Agent in Virtual Environment IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes STEVE, a comprehensive and visionary embodied agent in the Minecraft virtual environment. We also collect the STEVE-21K dataset, which includes 600+ vision-environment pairs, 20K knowledge question-answering pairs, and 200+ skill-code pairs. |
ZHONGHAN ZHAO et. al. | eccv | 2024-09-30 |
223 | Towards Robust Extractive Question Answering Models: Rethinking The Training Methodology Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper proposes a novel training method to improve the robustness of Extractive Question Answering (EQA) models. |
Son Quoc Tran; Matt Kretchmar; | arxiv-cs.CL | 2024-09-29 |
224 | See Then Tell: Enhancing Key Information Extraction with Vision Grounding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce STNet (See then Tell Net), a novel end-to-end model designed to deliver precise answers with relevant vision grounding. |
SHUHANG LIU et. al. | arxiv-cs.CV | 2024-09-29 |
225 | Zero-Shot Multi-Hop Question Answering Via Monte-Carlo Tree Search with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Unlike previous works, we propose a zero-shot prompting method, which relies solely on instructions without the support of hand-crafted few-shot examples that typically require domain expertise. |
SEONGMIN LEE et. al. | arxiv-cs.CL | 2024-09-28 |
226 | Exploring Language Model Generalization in Low-Resource Extractive QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigate Extractive Question Answering (EQA) with Large Language Models (LLMs) under domain drift, i.e., can LLMs generalize well to closed-domains that require specific knowledge such as medicine and law in a zero-shot fashion without additional in-domain training? |
Saptarshi Sengupta; Wenpeng Yin; Preslav Nakov; Shreya Ghosh; Suhang Wang; | arxiv-cs.CL | 2024-09-27 |
227 | Charting The Future: Using Chart Question-Answering for Scalable Evaluation of LLM-Driven Data Visualizations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a novel framework that leverages Visual Question Answering (VQA) models to automate the evaluation of LLM-generated data visualizations. |
James Ford; Xingmeng Zhao; Dan Schumacher; Anthony Rios; | arxiv-cs.CV | 2024-09-27 |
228 | Rehearsing Answers to Probable Questions with Perspective-Taking Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, scenarios involving the preparation of answers to probable questions during professional oral presentations remain underexplored. In this paper, we pioneer the examination of this crucial yet overlooked topic by utilizing real-world QA conversation transcripts between company managers and professional analysts. |
Yung-Yu Shih; Ziwei Xu; Hiroya Takamura; Yun-Nung Chen; Chung-Chi Chen; | arxiv-cs.CL | 2024-09-27 |
229 | Efficient In-Domain Question Answering for Resource-Constrained Environments Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we combine RAFT with LoRA to reduce fine-tuning and storage requirements and gain faster inference times while maintaining comparable RAG performance. |
Isaac Chung; Phat Vo; Arman C. Kizilkale; Aaron Reite; | arxiv-cs.CL | 2024-09-26 |
230 | SynTQA: Synergistic Table-based Question Answering Via Mixture of Text-to-SQL and E2E TQA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To combine both strengths, we propose a Synergistic Table-based Question Answering approach that integrates different models via answer selection, which is agnostic to model type. |
Siyue Zhang; Anh Tuan Luu; Chen Zhao; | arxiv-cs.CL | 2024-09-25 |
231 | Detecting Temporal Ambiguity in Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a novel approach by using diverse search strategies based on disambiguated versions of the questions. |
Bhawna Piryani; Abdelrahman Abdallah; Jamshid Mozafari; Adam Jatowt; | arxiv-cs.CL | 2024-09-25 |
232 | Enhancing Temporal Sensitivity and Reasoning for Time-Sensitive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel framework that enhances temporal awareness and reasoning through Temporal Information-Aware Embedding and Granular Contrastive Reinforcement Learning. |
Wanqi Yang; Yanda Li; Meng Fang; Ling Chen; | arxiv-cs.CL | 2024-09-25 |
233 | Empirical Insights on Fine-Tuning Large Language Models for Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, effective strategies for fine-tuning LLMs for the QA task remain largely unexplored. To address this gap, we categorize supervised fine-tuning (SFT) data based on the extent of knowledge memorized by the pretrained LLMs and conduct a series of empirical analyses. |
JUNJIE YE et. al. | arxiv-cs.CL | 2024-09-24 |
234 | Exploring Hint Generation Approaches in Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce a novel context preparation approach called HINTQA, which employs Automatic Hint Generation (HG) techniques. |
Jamshid Mozafari; Abdelrahman Abdallah; Bhawna Piryani; Adam Jatowt; | arxiv-cs.CL | 2024-09-24 |
235 | Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce a large-scale dataset comprising over 7 million questions from 17 marketplaces across 11 languages. |
Yifei Yuan; Yang Deng; Anders Søgaard; Mohammad Aliannejadi; | arxiv-cs.CL | 2024-09-24 |
236 | Using Similarity to Evaluate Factual Consistency in Summaries Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Therefore, many techniques for detecting factual inconsistencies build pipelines around natural language inference (NLI) or question-answering (QA) models with additional supervised learning steps. In this paper, we revisit similarity-based metrics, showing that this failure stems from the comparison text selection and its granularity. |
Yuxuan Ye; Edwin Simpson; Raul Santos Rodriguez; | arxiv-cs.CL | 2024-09-23 |
237 | LINKAGE: Listwise Ranking Among Varied-Quality References for Non-Factoid QA Evaluation Via LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Inspired by the evolution from pointwise to pairwise to listwise in learning-to-rank methods, we propose a novel listwise NFQA evaluation approach, that utilizes LLMs to rank candidate answers in a list of reference answers sorted by descending quality. |
Sihui Yang; Keping Bi; Wanqing Cui; Jiafeng Guo; Xueqi Cheng; | arxiv-cs.CL | 2024-09-23 |
238 | Learning When to Retrieve, What to Rewrite, and How to Respond in Conversational QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose a method for enabling LLMs to decide when to retrieve in RAG settings given a conversational context. |
Nirmal Roy; Leonardo F. R. Ribeiro; Rexhina Blloshmi; Kevin Small; | arxiv-cs.CL | 2024-09-23 |
239 | Scene-Text Grounding for Text-Based Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose to study Grounded TextVideoQA by forcing models to answer questions and spatio-temporally localize the relevant scene-text regions, thus decoupling QA from scene-text recognition and promoting research towards interpretable QA. |
SHENG ZHOU et. al. | arxiv-cs.CV | 2024-09-22 |
240 | QMOS: Enhancing LLMs for Telecommunication with Question Masked Loss and Option Shuffling Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces QMOS, an innovative approach which uses a Question-Masked loss and Option Shuffling trick to enhance the performance of LLMs in answering Multiple-Choice Questions in the telecommunications domain. |
Blessed Guda; Gabrial Zencha A.; Lawrence Francis; Carlee Joe-Wong; | arxiv-cs.CL | 2024-09-21 |
241 | First Place Solution to The Multiple-choice Video QA Track of The Second Perception Test Challenge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this report, we present our first-place solution to the Multiple-choice Video Question Answering (QA) track of The Second Perception Test Challenge. |
YINGZHE PENG et. al. | arxiv-cs.CV | 2024-09-20 |
242 | AQA: Adaptive Question Answering in A Society of LLMs Via Contextual Multi-Armed Bandit Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we build on recent advances in the orchestration of multiple large language models (LLMs) and formulate adaptive QA as a dynamic orchestration challenge. We define this as a contextual multi-armed bandit problem, where the context is defined by the characteristics of the incoming question and the action space consists of potential communication graph configurations among the LLM agents. |
Mohanna Hoveyda; Arjen P. de Vries; Maarten de Rijke; Harrie Oosterhuis; Faegheh Hasibi; | arxiv-cs.CL | 2024-09-20 |
243 | SMART-RAG: Selection Using Determinantal Matrices for Augmented Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This issue is particularly evident in unsupervised retrieval settings, where there are no mechanisms to effectively mitigate these problems, leading to suboptimal context selection. To address this, we propose Selection using Matrices for Augmented Retrieval (SMART) in question answering tasks, a fully unsupervised and training-free framework designed to optimize context selection in RAG. |
Jiatao Li; Xinyu Hu; Xiaojun Wan; | arxiv-cs.CL | 2024-09-20 |
244 | A Multimodal Dense Retrieval Approach for Speech-Based Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Furthermore, the ASR model propagates its errors to the retriever. In this work, we try to alleviate these limitations by proposing an ASR-free, end-to-end trained multimodal dense retriever that can work directly on spoken questions. |
Georgios Sidiropoulos; Evangelos Kanoulas; | arxiv-cs.CL | 2024-09-20 |
245 | Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we focus on the problem of image hallucination, where images created by generation models fail to faithfully depict factual content. |
Youngsun Lim; Hojun Choi; Hyunjung Shim; | arxiv-cs.CV | 2024-09-19 |
246 | MQA-KEAL: Multi-hop Question Answering Under Knowledge Editing for Arabic Language Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Although there have been numerous attempts at LLM Knowledge Editing (KE), i.e., editing the prior knowledge of LLMs and in turn testing it via Multi-hop Question Answering (MQA), these studies have so far focused primarily on English. To bridge this gap, in this paper we propose Multi-hop Question Answering under Knowledge Editing for Arabic Language (MQA-KEAL). |
Muhammad Asif Ali; Nawal Daftardar; Mutayyaba Waheed; Jianbin Qin; Di Wang; | arxiv-cs.CL | 2024-09-18 |
247 | ProSLM : A Prolog Synergized Language Model for Explainable Domain Specific Knowledge Based Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose ProSLM, a novel neurosymbolic framework, to improve the robustness and reliability of LLMs in question-answering tasks. |
Priyesh Vakharia; Abigail Kufeldt; Max Meyers; Ian Lane; Leilani Gilpin; | arxiv-cs.CL | 2024-09-17 |
248 | Contextual Breach: Assessing The Robustness of Transformer-based QA Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a unique dataset that incorporates seven distinct types of adversarial noise into the context, each applied at five different intensity levels on the SQuAD dataset. |
Asir Saadat; Nahian Ibn Asad; Md Farhan Ishmam; | arxiv-cs.CL | 2024-09-17 |
249 | OneEncoder: A Lightweight Framework for Progressive Alignment of Modalities Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This approach has limitations: (i) it is very expensive due to the need for training large encoders on extensive datasets, (ii) acquiring aligned large paired datasets is challenging, and (iii) adding new modalities requires retraining the entire framework to incorporate these modalities. To address these issues, we propose OneEncoder, a lightweight framework that progressively represents and aligns four modalities (image, text, audio, video). |
Bilal Faye; Hanane Azzag; Mustapha Lebbah; | arxiv-cs.CV | 2024-09-17 |
250 | StruEdit: Structured Outputs Enable The Fast and Accurate Knowledge Editing for Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We argue that these challenges stem from the unstructured nature of natural language outputs. To address the above challenges, we propose Structural Editing (StruEdit), an improved baseline for knowledge editing. |
BAOLONG BI et. al. | arxiv-cs.CL | 2024-09-16 |
251 | HALO: Hallucination Analysis and Learning Optimization to Empower LLMs with Retrieval-Augmented Context for Guided Clinical Decision Making Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces HALO, a novel framework designed to enhance the accuracy and reliability of medical question-answering (QA) systems by focusing on the detection and mitigation of hallucinations. |
SUMERA ANJUM et. al. | arxiv-cs.CL | 2024-09-16 |
252 | A Benchmark Dataset with Larger Context for Non-Factoid Question Answering Over Islamic Text Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Yet, the scarcity of QA systems tailored specifically to the detailed nature of inquiries about the Quranic Tafsir (explanation, interpretation, context of Quran for clarity) and Ahadith poses significant challenges. To address this gap, we introduce a comprehensive dataset meticulously crafted for QA purposes within the domain of Quranic Tafsir and Ahadith. |
Faiza Qamar; Seemab Latif; Rabia Latif; | arxiv-cs.CL | 2024-09-15 |
253 | QTG-VQA: Question-Type-Guided Architectural for VideoQA Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In particular, dependency on temporal information varies significantly across question types, and representing such information is a principal challenge for VideoQA as opposed to ImageQA. To address these challenges, we propose QTG-VQA, a novel architecture that incorporates question-type-guided attention and an adaptive learning mechanism. |
Zhixian He; Pengcheng Zhao; Fuwei Zhang; Shujin Lin; | arxiv-cs.CV | 2024-09-14 |
254 | Contri(e)ve: Context + Retrieve for Scholarly Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present a two-step solution using an open-source Large Language Model (LLM), Llama3.1, for the Scholarly-QALD dataset. |
Kanchan Shivashankar; Nadine Steinmetz; | arxiv-cs.IR | 2024-09-13 |
255 | Guiding Vision-Language Model Selection for Visual Question-Answering Across Tasks, Domains, and Knowledge Types Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces a comprehensive framework for evaluating VLMs tailored to VQA tasks in practical settings. |
Neelabh Sinha; Vinija Jain; Aman Chadha; | arxiv-cs.CV | 2024-09-13 |
256 | Electrocardiogram Report Generation and Question Answering Via Retrieval-Augmented Self-Supervised Modeling Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Interpreting electrocardiograms (ECGs) and generating comprehensive reports remain challenging tasks in cardiology, often requiring specialized expertise and significant time investment. To address these critical issues, we propose ECG-ReGen, a retrieval-based approach for ECG-to-text report generation and question answering. |
Jialu Tang; Tong Xia; Yuan Lu; Cecilia Mascolo; Aaqib Saeed; | arxiv-cs.LG | 2024-09-13 |
257 | L3Cube-IndicQuest: A Benchmark Question Answering Dataset for Evaluating Knowledge of LLMs in Indic Context Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models (LLMs) have made significant progress in incorporating Indic languages within multilingual models. However, it is crucial to quantitatively assess whether …
Pritika Rohera; Chaitrali Ginimav; Akanksha Salunke; Gayatri Sawant; Raviraj Joshi; | ArXiv | 2024-09-13 |
258 | QueryCAD: Grounded Question Answering for CAD Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, these models are rarely considered in novel AI-based approaches, such as the automatic synthesis of robot programs, as there are no readily available methods that would allow CAD models to be incorporated for the analysis, interpretation, or extraction of information. To address these limitations, we propose QueryCAD, the first system designed for CAD question answering, enabling the extraction of precise information from CAD models using natural language queries. |
Claudius Kienle; Benjamin Alt; Darko Katic; Rainer Jäkel; | arxiv-cs.RO | 2024-09-13 |
259 | L3Cube-IndicQuest: A Benchmark Question Answering Dataset for Evaluating Knowledge of LLMs in Indic Context Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present the L3Cube-IndicQuest, a gold-standard factual question-answering benchmark dataset designed to evaluate how well multilingual LLMs capture regional knowledge across various Indic languages. |
Pritika Rohera; Chaitrali Ginimav; Akanksha Salunke; Gayatri Sawant; Raviraj Joshi; | arxiv-cs.CL | 2024-09-13 |
260 | Source2Synth: Synthetic Data Generation and Curation Grounded in Real Data Sources Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose Source2Synth: a new method that can be used for teaching LLMs new skills without relying on costly human annotations. |
ALISIA LUPIDI et. al. | arxiv-cs.CL | 2024-09-12 |
261 | Top-down Activity Representation Learning for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, to leverage the spatial visual context representation capability of the CLIP model for obtaining non-continuous visual representations in terms of contextual events in videos, we convert long-term video sequences into a spatial image domain and finetune the multimodal model LLaVA for the VideoQA task. |
Yanan Wang; Shuichiro Haruta; Donghuo Zeng; Julio Vizcarra; Mori Kurokawa; | arxiv-cs.CV | 2024-09-12 |
262 | Multi-object Event Graph Representation Learning for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While prior works have focused on modeling individual object movements using transformer-based methods, they falter when capturing complex scenarios involving multiple objects (e.g., a boy is throwing a ball in a hoop). We propose a contrastive language event graph representation learning method called CLanG to address this limitation. |
Yanan Wang; Shuichiro Haruta; Donghuo Zeng; Julio Vizcarra; Mori Kurokawa; | arxiv-cs.CV | 2024-09-12 |
263 | Experimenting with Legal AI Solutions: The Case of Question-Answering for Access to Justice Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we propose a human-centric legal NLP pipeline, covering data sourcing, inference, and evaluation. |
Jonathan Li; Rohan Bhambhoria; Samuel Dahan; Xiaodan Zhu; | arxiv-cs.CL | 2024-09-11 |
264 | AdaCAD: Adaptively Decoding to Balance Conflicts Between Contextual and Parametric Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a fine-grained, instance-level approach called AdaCAD, which dynamically infers the weight of adjustment based on the degree of conflict, as measured by the Jensen-Shannon divergence between distributions representing contextual and parametric knowledge. |
Han Wang; Archiki Prasad; Elias Stengel-Eskin; Mohit Bansal; | arxiv-cs.CL | 2024-09-11 |
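The AdaCAD entry above describes a concrete mechanism: a per-step adjustment whose strength is set by the Jensen-Shannon divergence between the context-conditioned and parametric next-token distributions. A minimal sketch of that idea follows; the CAD-style `(1+alpha)*context - alpha*parametric` combination and all function names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a vector of logits."""
    x = np.asarray(x, dtype=float)
    e = np.exp(x - x.max())
    return e / e.sum()

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two discrete
    distributions; always lies in [0, 1]."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log2(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def adacad_adjust(logits_with_context, logits_without_context):
    """Contrastive adjustment where the weight alpha is inferred per step
    from the conflict between contextual and parametric distributions:
    the more they disagree, the harder we lean on the context."""
    p = softmax(logits_with_context)
    q = softmax(logits_without_context)
    alpha = js_divergence(p, q)  # degree of knowledge conflict in [0, 1]
    return (1 + alpha) * np.asarray(logits_with_context) \
        - alpha * np.asarray(logits_without_context)
```

When the two distributions agree, alpha is near zero and the adjusted logits reduce to the context-conditioned ones; the adjustment only kicks in under conflict.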
265 | Learning to Compress Contexts for Efficient Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Previous works like Retrieval-Augmented VQA-v2 (RAVQA-v2) focus on utilizing as much input information, such as image-based textual descriptions and retrieved knowledge, as possible to improve performance, but they all overlook the issue that with the number of input tokens increasing, inference efficiency significantly decreases, which contradicts the demands of practical applications. To address this issue, we propose Retrieval-Augmented MLLM with Compressed Contexts (RACC).
WEIXI WENG et. al. | arxiv-cs.CV | 2024-09-11 |
266 | Enhancing Temporal Understanding in Audio Question Answering for Large Audio Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While Large Audio Language Models excel in general audio understanding, they are limited in temporal reasoning, which may hinder their commercial applications and on-device deployment. This paper addresses these challenges and limitations in audio temporal reasoning.
Arvind Krishna Sridhar; Yinyi Guo; Erik Visser; | arxiv-cs.SD | 2024-09-10 |
267 | OpenROAD-Assistant: An Open-Source Large Language Model for Physical Design Tasks Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large language models (LLMs) have shown significant potential in serving as domain-specific chatbots. Recently, these models have emerged as powerful tools for chip design, … |
Utsav Sharma; Bing-Yue Wu; Sai Rahul Dhanvi Kankipati; V. A. Chhabria; Austin Rovinski; | 2024 ACM/IEEE 6th Symposium on Machine Learning for CAD … | 2024-09-09 |
268 | Towards Building A Robust Knowledge Intensive Question Answering Model with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the issue of model accuracy decline caused by noisy external information, we propose a data augmentation-based fine-tuning method to enhance LLM’s robustness against noise. |
Xingyun Hong; Yan Shao; Zhilin Wang; Manni Duan; Jin Xiongnan; | arxiv-cs.CL | 2024-09-09 |
269 | Seek and Solve Reasoning for Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Inspired by how humans solve TQA tasks, we propose a Seek-and-Solve pipeline that instructs the LLM to first seek relevant information and then answer questions. |
Ruya Jiang; Chun Wang; Weihong Deng; | arxiv-cs.CL | 2024-09-08 |
270 | Question-Answering Dense Video Events Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present question-answering dense video events, a novel task that requires answering and grounding the dense-event questions in long videos, thus challenging MLLMs to faithfully comprehend and reason about multiple events occurring over extended time periods. |
Hangyu Qin; Junbin Xiao; Angela Yao; | arxiv-cs.CV | 2024-09-06 |
271 | WebQuest: A Benchmark for Multimodal QA on Web Page Sequences Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we present WebQuest, a multi-page question-answering dataset that requires reasoning across multiple related web pages. |
MARIA WANG et. al. | arxiv-cs.IR | 2024-09-06 |
272 | Combining LLMs and Knowledge Graphs to Reduce Hallucinations in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: A key issue is the hallucination problem, where models generate information unsupported by the underlying data, potentially leading to dangerous misinformation. This paper presents a novel approach designed to bridge this gap by combining Large Language Models (LLM) and Knowledge Graphs (KG) to improve the accuracy and reliability of question-answering systems, demonstrated on a biomedical KG.
Larissa Pusch; Tim O. F. Conrad; | arxiv-cs.CL | 2024-09-06 |
273 | COLUMBUS: Evaluating COgnitive Lateral Understanding Through Multiple-choice ReBUSes Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Effective problem-solving also necessitates lateral thinking, which remains understudied in AI and has not been used to test visual perception systems. To bridge this gap, we formulate visual lateral thinking as a multiple-choice question-answering task and describe a three-step taxonomy-driven methodology for instantiating task examples. |
Koen Kraaijveld; Yifan Jiang; Kaixin Ma; Filip Ilievski; | arxiv-cs.CV | 2024-09-06 |
274 | RAG Based Question-Answering for Contextual Response Prediction System Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce an end-to-end framework that employs LLMs with RAG capabilities for industry use cases. |
Sriram Veturi; Saurabh Vaichal; Reshma Lal Jagadheesh; Nafis Irtiza Tripto; Nian Yan; | arxiv-cs.CL | 2024-09-05 |
275 | LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we aim to enable long-context LLMs to generate responses with fine-grained sentence-level citations, improving their faithfulness and verifiability. |
JIAJIE ZHANG et. al. | arxiv-cs.CL | 2024-09-04 |
276 | MARAGS: A Multi-Adapter System for Multi-Task Retrieval Augmented Generation Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper we present a multi-adapter retrieval augmented generation system (MARAGS) for Meta’s Comprehensive RAG (CRAG) competition for KDD CUP 2024. |
Mitchell DeHaven; | arxiv-cs.CL | 2024-09-04 |
277 | GoT-CQA: Graph-of-Thought Guided Compositional Reasoning for Chart Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The former refers to answering this question strictly based on the analysis of the visual content or internal data of the given chart, while the latter emphasizes the various logical and numerical reasoning involved in the answer prediction process. In this paper, we focus on the complex reasoning in the CQA task, and propose a novel Graph-of-Thought (GoT) guided compositional reasoning model called GoT-CQA to overcome this problem.
LINGLING ZHANG et. al. | arxiv-cs.CV | 2024-09-04 |
278 | CRAFT Your Dataset: Task-Specific Synthetic Dataset Generation Through Corpus Retrieval and Augmentation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose Corpus Retrieval and Augmentation for Fine-Tuning (CRAFT), a method for generating synthetic datasets, given a small number of user-written few-shots that demonstrate the task to be performed. |
Ingo Ziegler; Abdullatif Köksal; Desmond Elliott; Hinrich Schütze; | arxiv-cs.CL | 2024-09-03 |
279 | Diversify-verify-adapt: Efficient and Robust Retrieval-Augmented Ambiguous Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Although the iterative RAG approach has been proposed to address this problem, it comes at the cost of significantly reduced efficiency. To address these issues, we propose the diversify-verify-adapt (DIVA) framework. |
YEONJUN IN et. al. | arxiv-cs.CL | 2024-09-03 |
280 | VProChart: Answering Chart Question Through Visual Perception Alignment Agent and Programmatic Solution Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, chart images are inherently difficult to interpret, and chart-related questions often involve complex logical and numerical reasoning, which hinders the performance of existing models. This paper introduces VProChart, a novel framework designed to address these challenges in CQA by integrating a lightweight Visual Perception Alignment Agent (VPAgent) and a Programmatic Solution Reasoning approach. |
MUYE HUANG et. al. | arxiv-cs.CV | 2024-09-03 |
281 | Multi-modal Situated Reasoning in 3D Scenes Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing datasets and benchmarks for situated understanding are limited in data modality, diversity, scale, and task scope. To address these limitations, we propose Multi-modal Situated Question Answering (MSQA), a large-scale multi-modal situated reasoning dataset, scalably collected leveraging 3D scene graphs and vision-language models (VLMs) across a diverse range of real-world 3D scenes. |
XIONGKUN LINGHU et. al. | arxiv-cs.CV | 2024-09-03 |
282 | How Privacy-Savvy Are Large Language Models? A Case Study on Compliance and Privacy Technical Review Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper seeks to address this gap by providing a comprehensive case study evaluating LLMs’ performance in privacy-related tasks such as privacy information extraction (PIE), legal and regulatory key point detection (KPD), and question answering (QA) with respect to privacy policies and data protection regulations. We introduce a Privacy Technical Review (PTR) framework, highlighting its role in mitigating privacy risks during the software development life-cycle. |
XICHOU ZHU et. al. | arxiv-cs.CL | 2024-09-03 |
283 | Kvasir-VQA: A Text-Image Pair GI Tract Dataset Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce Kvasir-VQA, an extended dataset derived from the HyperKvasir and Kvasir-Instrument datasets, augmented with question-and-answer annotations to facilitate advanced machine learning tasks in Gastrointestinal (GI) diagnostics. |
SUSHANT GAUTAM et. al. | arxiv-cs.CV | 2024-09-02 |
284 | Language Models Benefit from Preparation with Elicited Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, some QA tasks hinge more on accessing relevant knowledge than on chaining reasoning steps. We introduce a simple general prompting technique, called PREP, that involves using two instances of LMs: the first (LM1) generates relevant information, and the second (LM2) answers the question based on this information. |
Jiacan Yu; Hannah An; Lenhart K. Schubert; | arxiv-cs.CL | 2024-09-02 |
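The PREP highlight above describes a plain two-call pipeline: a first LM elicits relevant background knowledge, and a second LM answers conditioned on it. A minimal sketch, where `generate(prompt)` is a hypothetical stand-in for any LLM completion call and the prompt wording is invented for illustration:

```python
def prep_answer(question, generate):
    """PREP-style two-stage prompting.

    Stage 1 (LM1): elicit background knowledge relevant to the question.
    Stage 2 (LM2): answer the question conditioned on that knowledge.
    `generate(prompt) -> str` is an assumed interface to an LLM.
    """
    elicit_prompt = (
        "List facts that are relevant to answering this question:\n"
        f"{question}"
    )
    knowledge = generate(elicit_prompt)

    answer_prompt = (
        f"Background knowledge:\n{knowledge}\n\n"
        f"Using the background knowledge above, answer the question:\n{question}"
    )
    return generate(answer_prompt)
```

The two stages can use the same underlying model; what matters is that answering is conditioned on explicitly elicited knowledge rather than on chained reasoning steps.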
285 | Retrieval-Augmented Natural Language Reasoning for Explainable Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce a new VQA-NLE model, ReRe (Retrieval-augmented natural language Reasoning), which leverages retrieved information from memory to aid in generating accurate answers and persuasive explanations without relying on complex networks and extra datasets.
Su Hyeon Lim; Minkuk Kim; Hyeon Bae Kim; Seong Tae Kim; | arxiv-cs.CV | 2024-08-30 |
286 | MAPWise: Evaluating Vision-Language Models for Advanced Map Queries Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study investigates the efficacy of VLMs in answering questions based on choropleth maps, which are widely used for data analysis and representation. To facilitate and encourage research in this area, we introduce a novel map-based question-answering benchmark, consisting of maps from three geographical regions (United States, India, China), each containing 1000 questions. |
Srija Mukhopadhyay; Abhishek Rajgaria; Prerana Khatiwada; Vivek Gupta; Dan Roth; | arxiv-cs.CV | 2024-08-30 |
287 | LLM-Based Multi-Hop Question Answering with Knowledge Graph Integration in Evolving Environments Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing methods for knowledge editing still face difficulties with multi-hop questions that require accurate fact identification and sequential logical reasoning, particularly among numerous fact updates. To tackle these challenges, this paper introduces Graph Memory-based Editing for Large Language Models (GMeLLo), a straightforward and effective method that merges the explicit knowledge representation of Knowledge Graphs (KGs) with the linguistic flexibility of LLMs.
RUIRUI CHEN et. al. | arxiv-cs.CL | 2024-08-28 |
288 | Can Visual Language Models Replace OCR-Based Visual Question Answering Pipelines in Production? A Case Study in Retail Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our study includes two commercial models, GPT-4V [16] and GPT-4o [17], as well as four open-source models: InternVL [5], LLaVA 1.5 [12], LLaVA-NeXT [13], and CogAgent [9]. |
Bianca Lamm; Janis Keuper; | arxiv-cs.CV | 2024-08-28 |
289 | Multilingual Question Answering Systems for Knowledge Graphs – A Survey Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper presents a survey on multilingual Knowledge Graph Question Answering (mKGQA). We employ a systematic review methodology to collect and analyze the research results in … |
A. Perevalov; Andreas Both; A. N. Ngomo; | Semantic Web | 2024-08-28 |
290 | Evidence-Enhanced Triplet Generation Framework for Hallucination Alleviation in Generative Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the hallucination in generative question answering (GQA) where the answer cannot be derived from the document, we propose a novel evidence-enhanced triplet generation framework, EATQA, encouraging the model to predict all the combinations of the (Question, Evidence, Answer) triplet by flipping the source pair and the target label to understand their logical relationships, i.e., predict Answer (A), Question (Q), and Evidence (E) given QE, EA, and QA pairs, respectively.
Haowei Du; Huishuai Zhang; Dongyan Zhao; | arxiv-cs.CL | 2024-08-27 |
291 | Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We develop an automated pipeline to create multi-hop question-answering pairs with associated temporal evidence, enabling to construct a large-scale dataset for instruction-tuning. |
Qirui Chen; Shangzhe Di; Weidi Xie; | arxiv-cs.CV | 2024-08-26 |
292 | Question Answering System of Bridge Design Specification Based on Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Through the self-built question-and-answer task dataset, based on the TensorFlow and Keras deep learning frameworks, the model is constructed and trained to predict the start position and end position of the answer in the bridge design specification given by the user.
Leye Zhang; Xiangxiang Tian; Hongjun Zhang; | arxiv-cs.CL | 2024-08-25 |
293 | IQA-EVAL: Automatic Evaluation of Human-Model Interactive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce an automatic evaluation framework, IQA-EVAL, for Interactive Question Answering Evaluation. More specifically, we introduce an LLM-based Evaluation Agent (LEA) that can: (1) simulate human behaviors to generate interactions with IQA models; (2) automatically evaluate the generated interactions.
Ruosen Li; Ruochen Li; Barry Wang; Xinya Du; | arxiv-cs.CL | 2024-08-24 |
294 | Internal and External Knowledge Interactive Refinement Framework for Knowledge-Intensive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a new internal and external knowledge interactive refinement paradigm dubbed IEKR to utilize internal knowledge in LLM to help retrieve relevant knowledge from the external knowledge base, as well as exploit the external knowledge to refine the hallucination of generated internal knowledge. |
Haowei Du; Dongyan Zhao; | arxiv-cs.CL | 2024-08-23 |
295 | Vintern-1B: An Efficient Multimodal Large Language Model for Vietnamese Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this report, we introduce Vintern-1B, a reliable 1-billion-parameters multimodal large language model (MLLM) for Vietnamese language tasks. |
KHANG T. DOAN et. al. | arxiv-cs.LG | 2024-08-22 |
296 | Enhanced Fine-Tuning of Lightweight Domain-Specific Q&A Model Based on Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Commercial companies face the dual challenges of privacy protection and resource constraints when employing LLMs for fine-tuning. This paper proposes a novel framework, Self-Evolution, designed to address these issues by leveraging lightweight open-source LLMs through multiple iterative fine-tuning rounds.
SHENGLIN ZHANG et. al. | arxiv-cs.AI | 2024-08-22 |
297 | Assessing Modality Bias in Video Question Answering Benchmarks with Multimodal Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing video question-answering (VidQA) benchmarks and datasets often exhibit a bias toward a single modality, despite the goal of requiring advanced reasoning skills that integrate diverse modalities to answer the queries. In this work, we introduce the modality importance score (MIS) to identify such bias. |
JEAN PARK et. al. | arxiv-cs.LG | 2024-08-22 |
298 | RConE: Rough Cone Embedding for Multi-Hop Logical Query Answering on Multi-Modal Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose RConE, an embedding method to capture the multi-modal information needed to answer a query. |
Mayank Kharbanda; Rajiv Ratn Shah; Raghava Mutharaju; | arxiv-cs.AI | 2024-08-21 |
299 | Mathematical Information Retrieval: Search and Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The framework is used to organize and relate the other core topics of the book, including interactions between people and systems, representing math formulas in sources, and evaluation. |
Richard Zanibbi; Behrooz Mansouri; Anurag Agarwal; | arxiv-cs.IR | 2024-08-21 |
300 | Multimodal Datasets and Benchmarks for Reasoning About Dynamic Spatio-Temporality in Everyday Environments Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We used a 3D simulator to create artificial video data with standardized annotations, aiming to aid in the development of Embodied AI. |
Takanori Ugai; Kensho Hara; Shusaku Egami; Ken Fukuda; | arxiv-cs.AI | 2024-08-21 |
301 | What Are The Limits of Cross-lingual Dense Passage Retrieval for Low-resource Languages? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we analyze the capabilities of the multi-lingual Dense Passage Retriever (mDPR) for extremely low-resource languages. |
Jie Wu; Zhaochun Ren; Suzan Verberne; | arxiv-cs.IR | 2024-08-21 |
302 | DocTabQA: Answering Questions from Long Documents Using Tables Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the QTabA dataset, encompassing 300 financial documents, accompanied by 1.5k manually annotated question-table pairs.
Haochen Wang; Kai Hu; Haoyu Dong; Liangcai Gao; | arxiv-cs.CL | 2024-08-21 |
303 | FoRAG: Factuality-optimized Retrieval Augmented Generation for Web-enhanced Long-form Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the emergence of various open source methods and web-enhanced commercial systems such as Bing Chat, two critical problems remain unsolved, i.e., the lack of factuality and clear logic in the generated long-form answers. In this paper, we remedy these issues via a systematic study on answer generation in web-enhanced LFQA. |
TIANCHI CAI et. al. | kdd | 2024-08-21 |
304 | DyGKT: Dynamic Graph Learning for Knowledge Tracing Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The three dynamic characteristics above hold great potential to revolutionize existing knowledge tracing methods. Along this line, we propose a Dynamic Graph-based Knowledge Tracing model, namely DyGKT.
KE CHENG et. al. | kdd | 2024-08-21 |
305 | Differentiating Choices Via Commonality for Multiple-Choice Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel MCQA model by differentiating choices through identifying and eliminating their commonality, called DCQA. |
WENQING DENG et. al. | arxiv-cs.CL | 2024-08-21 |
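The DCQA entry above rests on a concrete intuition: differentiate answer choices by identifying and eliminating what they share. A toy token-level illustration of that intuition follows; the paper's model operates on learned representations, so this surface-token variant is only an analogy, and the function name is invented.

```python
def distinctive_parts(choices):
    """Strip tokens shared by every choice, keeping each choice's
    distinctive content. Commonality is computed case-insensitively
    over whitespace tokens."""
    token_sets = [set(c.lower().split()) for c in choices]
    common = set.intersection(*token_sets)  # tokens present in all choices
    return [
        " ".join(t for t in c.split() if t.lower() not in common)
        for c in choices
    ]
```

For example, given the choices "the red car" and "the blue car", the shared tokens "the" and "car" are removed, leaving only the parts that actually discriminate between the options.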
306 | HOLMES: Hyper-Relational Knowledge Graphs for Multi-hop Question Answering Using LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, this simplistic approach is query-agnostic and the extracted facts are ambiguous as they lack context. To address these drawbacks and to enable LLMs to answer complex (multi-hop) questions with ease, we propose to use a knowledge graph (KG) that is context-aware and is distilled to contain query-relevant information. |
Pranoy Panda; Ankush Agarwal; Chaitanya Devaguptapu; Manohar Kaul; Prathosh Ap; | acl | 2024-08-20 |
307 | SOTOPIA-π: Interactive Learning of Socially Intelligent Language Agents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This social learning process is largely understudied by existing research on building language agents. Motivated by this gap, we propose an interactive learning method, SOTOPIA-π, that improves the social intelligence of language agents.
RUIYI WANG et. al. | acl | 2024-08-20 |
308 | Interactive-KBQA: Multi-Turn Interactions for Knowledge Base Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Yet, fully leveraging LLMs to parse questions into logical forms in low-resource scenarios poses a substantial challenge. To tackle these hurdles, we introduce Interactive-KBQA, a framework designed to generate logical forms through direct interaction with knowledge bases (KBs). |
Guanming Xiong; Junwei Bao; Wen Zhao; | acl | 2024-08-20 |
309 | EWEK-QA : Enhanced Web and Efficient Knowledge Graph Retrieval for Citation-based Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Second, web-retrieved contents are usually obtained by some simple heuristics such as fixed length or breakpoints which might lead to splitting information into pieces. To mitigate these issues, we propose our enhanced web and efficient knowledge graph (KG) retrieval solution (EWEK-QA) to enrich the content of the extracted knowledge fed to the system. |
MOHAMMAD DEHGHAN et. al. | acl | 2024-08-20 |
310 | SymKGQA: Few-Shot Knowledge Graph Question Answering Via Symbolic Program Generation and Execution Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recently, a new LF called KoPL has been introduced that explicitly models the complex reasoning process step-by-step in a symbolic manner and has shown SOTA on KQA Pro in the fully-supervised setting. Inspired by this, we propose the SymKGQA framework that generates a step-by-step Symbolic LF, i.e., KoPL, in a few-shot in-context learning setting using LLM.
Prerna Agarwal; Nishant Kumar; Srikanta Bedathur; | acl | 2024-08-20 |
311 | MinPrompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose to select the most informative data for fine-tuning, thereby improving the efficiency of the fine-tuning process with comparative or even better accuracy on the open-domain QA task. |
XIUSI CHEN et. al. | acl | 2024-08-20 |
312 | TaPERA: Enhancing Faithfulness and Interpretability in Long-Form Table QA By Content Planning and Execution-based Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While large language model-based systems have made significant progress, they often hallucinate, especially when the task involves complex reasoning over tables. To tackle this issue, we propose a new LLM-based framework, TaPERA, for LFTQA tasks.
Yilun Zhao; Lyuhao Chen; Arman Cohan; Chen Zhao; | acl | 2024-08-20 |
313 | Learning Relational Decomposition of Queries for Question Answering from Tables Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: By learning to imitate a restricted subset of SQL-like algebraic operations, we demonstrate that their execution flow provides intermediate supervision steps that allow for increased generalization and structural reasoning compared to classical approaches.
Raphaël Mouravieff; Benjamin Piwowarski; Sylvain Lamprier; | acl | 2024-08-20
314 | FinTextQA: A Dataset for Long-form Financial Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work introduces FinTextQA, a novel dataset for long-form question answering (LFQA) in finance. |
JIAN CHEN et. al. | acl | 2024-08-20 |
315 | MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative LLMs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose Meaning-Aware Response Scoring (MARS) as an alternative to length-normalized scoring for UE methods. |
YAVUZ FARUK BAKMAN et. al. | acl | 2024-08-20 |
316 | Temporal Knowledge Question Answering Via Abstract Reasoning Induction Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we address the challenge of enhancing temporal knowledge reasoning in Large Language Models (LLMs). |
Ziyang Chen; Dongfang Li; Xiang Zhao; Baotian Hu; Min Zhang; | acl | 2024-08-20 |
317 | Modality-Aware Integration with Large Language Models for Knowledge-Based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To tackle these, we present a novel modality-aware integration with LLMs for KVQA (MAIL). |
JUNNAN DONG et. al. | acl | 2024-08-20 |
318 | Generate-then-Ground in Retrieval-Augmented Generation for Multi-hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the performance of this retrieve-then-read paradigm is constrained by the retriever and the inevitable noise in the retrieved documents. To mitigate these challenges, we introduce a novel generate-then-ground (GenGround) framework, synergizing the parametric knowledge of LLMs and external documents to solve a multi-hop question. |
ZHENGLIANG SHI et. al. | acl | 2024-08-20 |
319 | Exploring Hybrid Question Answering Via Program-based Prompting Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose HProPro, a novel program-based prompting framework for the hybrid question answering task. |
QI SHI et. al. | acl | 2024-08-20 |
320 | PokeMQA: Programmable Knowledge Editing for Multi-hop Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We thus propose a framework, Programmable knowledge editing for Multi-hop Question Answering (PokeMQA), to decouple the jobs. |
HENGRUI GU et. al. | acl | 2024-08-20 |
321 | Multilingual Non-Factoid Question Answering with Silver Answers Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the scope of such datasets for low-resource languages remains limited, with only a few works centered on factoid-based QuADs and none on non-factoid QuADs. Therefore, this work presents MuNfQuAD, a multilingual QuAD with non-factoid questions. |
Ritwik Mishra; Sreeram Vennam; Rajiv Ratn Shah; Ponnurangam Kumaraguru; | arxiv-cs.CL | 2024-08-20 |
322 | Putting People in LLMs’ Shoes: Generating Better Answers Via Question Rewriter Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, their effectiveness in QA is often undermined by the vagueness of user questions. To address this issue, we introduce single-round instance-level prompt optimization, referred to as question rewriter. |
Junhao Chen; Bowen Wang; Zhouqiang Jiang; Yuta Nakashima; | arxiv-cs.CL | 2024-08-20
323 | Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the problem of few-shot transfer learning for KBQA, where the target domain offers only a few labeled examples, but a large labeled training dataset is available in a source domain. |
MAYUR PATIDAR et. al. | acl | 2024-08-20 |
324 | ColBERT Retrieval and Ensemble Response Scoring for Language Model Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The Specializing Large Language Models for Telecom Networks challenge aimed to enhance the performance of two small language models, Phi-2 and Falcon-7B in telecommunication question answering. In this paper, we present our question answering systems for this challenge. |
Alex Gichamba; Tewodros Kederalah Idris; Brian Ebiyau; Eric Nyberg; Teruko Mitamura; | arxiv-cs.CL | 2024-08-20 |
325 | Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce Tree-of-Traversals, a novel zero-shot reasoning algorithm that enables augmentation of black-box LLMs with one or more KGs. |
ELAN MARKOWITZ et. al. | acl | 2024-08-20 |
326 | MMToM-QA: Multimodal Theory of Mind Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: People can flexibly reason about another person's mind based on conceptual representations (e.g., goals, beliefs, plans) extracted from any available data. To address this, we introduce a multimodal Theory of Mind question answering (MMToM-QA) benchmark. |
CHUANYANG JIN et. al. | acl | 2024-08-20 |
327 | To Generate or to Retrieve? On The Effectiveness of Artificial Contexts for Medical Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents MedGENIE, the first generate-then-read framework for multiple-choice question answering in medicine. |
Giacomo Frisoni; Alessio Cocchieri; Alex Presepi; Gianluca Moro; Zaiqiao Meng; | acl | 2024-08-20 |
328 | Domain Adaptation for Subjective Induction Questions Answering on Products By Adversarial Disentangled Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: It is hard for traditional methods to work well without considering the shift of domain patterns. To address this problem, we propose a novel domain-adaptive model. |
YUFENG ZHANG et. al. | acl | 2024-08-20 |
329 | Is Table Retrieval A Solved Problem? Exploring Join-Aware Multi-Table Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: If the join plan is not considered in the retrieval stage, the subsequent steps of reasoning and answering based on those retrieved tables are likely to be incorrect. To address this problem, we introduce a method that uncovers useful join relations for any query and database during table retrieval. |
Peter Baile Chen; Yi Zhang; Dan Roth; | acl | 2024-08-20 |
330 | RetinaQA: A Robust Knowledge Base Question Answering Model for Both Answerable and Unanswerable Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Recent research has found that such models, when superficially adapted to detect answerability, struggle to satisfactorily identify the different categories of unanswerable questions, and simultaneously preserve good performance for answerable questions. Towards addressing this issue, we propose RetinaQA, a new KBQA model that unifies two key ideas in a single KBQA architecture: (a) discrimination over candidate logical forms, rather than generating these, for handling schema-related unanswerability, and (b) sketch-filling-based construction of candidate logical forms for handling data-related unanswerability. |
Prayushi Faldu; Indrajit Bhattacharya; Mausam; | acl | 2024-08-20 |
331 | CoDi: Conversational Distillation for Grounded Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Secondly, high-quality conversational datasets are often scarce, small, and domain-specific. Addressing these challenges, we introduce a novel data distillation framework named CoDi (short for Conversational Distillation, pronounced Cody), allowing us to synthesize large-scale, assistant-style datasets in a steerable and diverse manner. |
PATRICK HUBER et. al. | arxiv-cs.CL | 2024-08-20 |
332 | Spiral of Silence: How Is Large Language Model Killing Information Retrieval? A Case Study on Open Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we construct and iteratively run a simulation pipeline to deeply investigate the short-term and long-term effects of LLM text on RAG systems. |
XIAOYANG CHEN et. al. | acl | 2024-08-20 |
333 | FastFiD: Improve Inference Efficiency of Open Domain Question Answering Via Sentence Selection Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Nevertheless, this framework can be relatively time-consuming, particularly due to the extensive length of the gathered passages. To address this, we introduce FastFiD in this paper, a novel approach that executes sentence selection on the encoded passages. |
Yufei Huang; Xu Han; Maosong Sun; | acl | 2024-08-20 |
334 | BizBench: A Quantitative Reasoning Benchmark for Business and Finance Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce BizBench, a benchmark for evaluating models' ability to reason about realistic financial problems. |
MICHAEL KRUMDICK et. al. | acl | 2024-08-20 |
335 | AutoAct: Automatic Agent Learning from Scratch for QA Via Self-Planning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we introduce AutoAct, an automatic agent learning framework for QA that does not rely on large-scale annotated data and synthetic planning trajectories from closed-source models (e.g., GPT-4). |
SHUOFEI QIAO et. al. | acl | 2024-08-20 |
336 | SceMQA: A Scientific College Entrance Level Multimodal Question Answering Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The paper introduces SceMQA, a novel benchmark for scientific multimodal question answering at the college entrance level. |
ZHENWEN LIANG et. al. | acl | 2024-08-20 |
337 | Paraphrasing in Affirmative Terms Improves Negation Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we experiment with seamless strategies that incorporate affirmative interpretations (i. e. , paraphrases without negation) to make models more robust against negation. |
MohammadHossein Rezaei; Eduardo Blanco; | acl | 2024-08-20 |
338 | Beyond Memorization: The Challenge of Random Memory Access in Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, the mechanisms underlying knowledge storage and memory access within their parameters remain elusive. In this paper, we investigate whether a generative LM (e.g., GPT-2) is able to access its memory sequentially or randomly. |
TONGYAO ZHU et. al. | acl | 2024-08-20 |
339 | ProtT3: Protein-to-Text Generation for Text-based Protein Understanding Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To address their limitations, we introduce ProtT3, a framework for Protein-to-Text Generation for Text-based Protein Understanding. |
ZHIYUAN LIU et. al. | acl | 2024-08-20 |
340 | FanOutQA: A Multi-Hop, Multi-Document Question Answering Benchmark for Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To evaluate complex reasoning in LLMs more fully, we present FanOutQA, a high-quality dataset of fan-out question-answer pairs and human-annotated decompositions with English Wikipedia as the knowledge base. |
Andrew Zhu; Alyssa Hwang; Liam Dugan; Chris Callison-Burch; | acl | 2024-08-20 |
341 | Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we systematically investigate how to robustly fine-tune LLMs for better source quality and answer attributability. |
Tobias Schimanski; Jingwei Ni; Mathias Kraus; Elliott Ash; Markus Leippold; | acl | 2024-08-20 |
342 | Narrowing The Knowledge Evaluation Gap: Open-Domain Question Answering with Multi-Granularity Answers Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose GRANOLA QA, a novel evaluation setting where a predicted answer is evaluated in terms of accuracy and informativeness against a set of multi-granularity answers. |
Gal Yona; Roee Aharoni; Mor Geva; | acl | 2024-08-20 |
343 | Never Lost in The Middle: Mastering Long-Context Question Answering with Position-Agnostic Decompositional Training Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The "lost in the middle" problem challenges most LLMs, referring to the dramatic decline in accuracy when correct information is located in the middle. To overcome this crucial issue, this paper proposes to enhance the information searching and reflection ability of LLMs in long contexts via specially designed tasks called Position-Agnostic Multi-step QA (PAM QA). |
JUNQING HE et. al. | acl | 2024-08-20 |
344 | SyllabusQA: A Course Logistics Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce SyllabusQA, an open-source dataset with 63 real course syllabi covering 36 majors, containing 5,078 open-ended course logistics-related question-answer pairs that are diverse in both question types and answer formats. |
Nigel Fernandez; Alexander Scarlatos; Andrew Lan; | acl | 2024-08-20 |
345 | Safety Alignment in NLP Tasks: Weakly Aligned Summarization As An In-Context Attack Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Our study, focusing on safety-sensitive documents obtained through adversarial attacks, reveals significant disparities in the safety alignment of various NLP tasks. |
Yu Fu; Yufei Li; Wen Xiao; Cong Liu; Yue Dong; | acl | 2024-08-20 |
346 | BeamAggR: Beam Aggregation Reasoning Over Multi-source Knowledge for Multi-hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, significant challenges still persist, including inaccurate and insufficient retrieval for complex questions, as well as difficulty in integrating multi-source knowledge. To address this, we propose Beam Aggregation Reasoning (BeamAggR), a reasoning framework for knowledge-intensive multi-hop QA. |
ZHENG CHU et. al. | acl | 2024-08-20 |
347 | Consistency Training By Synthetic Question Generation for Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: By citing a common modeling error prevalent in previous research, we introduce a new baseline and compare our model�s performance against it, demonstrating an improvement in results, particularly in later turns of the conversation, when dealing with questions that include a large historical context. |
Hamed Hemati; Hamid Beigy; | acl | 2024-08-20 |
348 | Answer Is All You Need: Instruction-following Text Embedding Via Answering The Question Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This work aims to build a text embedder that can capture characteristics of texts specified by user instructions clarifying the similarity criterion. |
LETIAN PENG et. al. | acl | 2024-08-20 |
349 | Ranking Generated Answers: On The Agreement of Retrieval Models with Humans on Consumer Health Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a method for evaluating LLM answers that uses ranking signals as a substitute for explicit relevance judgements. |
Sebastian Heineking; Jonas Probst; Daniel Steinbach; Martin Potthast; Harrisen Scells; | arxiv-cs.IR | 2024-08-19 |
350 | TableBench: A Comprehensive and Complex Benchmark for Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite these achievements, LLMs still encounter significant challenges when applied in industrial scenarios, particularly due to the increased complexity of reasoning required with real-world tabular data, underscoring a notable disparity between academic benchmarks and practical applications. To address this discrepancy, we conduct a detailed investigation into the application of tabular data in industrial scenarios and propose a comprehensive and complex benchmark TableBench, including 18 fields within four major categories of table question answering (TableQA) capabilities. |
XIANJIE WU et. al. | arxiv-cs.CL | 2024-08-17 |
351 | Developing A Llama-Based Chatbot for CI/CD Question Answering: A Case Study at Ericsson Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents our experience developing a Llama-based chatbot for question answering about continuous integration and continuous delivery (CI/CD) at Ericsson, a multinational telecommunications company. |
Daksh Chaudhary; Sri Lakshmi Vadlamani; Dimple Thomas; Shiva Nejati; Mehrdad Sabetzadeh; | arxiv-cs.SE | 2024-08-17 |
352 | MuRAR: A Simple and Effective Multimodal Retrieval and Answer Refinement Framework for Multimodal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a simple and effective framework named MuRAR (Multimodal Retrieval and Answer Refinement). |
ZHENGYUAN ZHU et. al. | arxiv-cs.IR | 2024-08-16 |
353 | Beyond The Hype: A Dispassionate Look at Vision-language Models in Medical Scenario Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we introduce RadVUQA, a novel Radiological Visual Understanding and Question Answering benchmark, to comprehensively evaluate existing LVLMs. |
Yang Nan; Huichi Zhou; Xiaodan Xing; Guang Yang; | arxiv-cs.CV | 2024-08-16 |
354 | RealMedQA: A Pilot Biomedical Question Answering Dataset Containing Realistic Clinical Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we present RealMedQA, a dataset of realistic clinical questions generated by humans and an LLM. |
GREGORY KELL et. al. | arxiv-cs.CL | 2024-08-16 |
355 | LLaVA-Surg: Towards Multimodal Surgical Assistant Via Structured Surgical Video Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: One major contributing factor is the absence of datasets in the surgical field. In this paper, we create a new dataset, Surg-QA, consisting of 102,000 surgical video-instruction pairs, the largest of its kind so far. |
JIAJIE LI et. al. | arxiv-cs.CV | 2024-08-15 |
356 | IIU: Independent Inference Units for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose Independent Inference Units (IIU) for fine-grained multi-modal reasoning to decompose intra-modal information by the functionally independent units. |
Yili Li; Jing Yu; Keke Gai; Gang Xiong; | arxiv-cs.CV | 2024-08-15 |
357 | W-RAG: Weakly Supervised Dense Retrieval in RAG for Open-domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose W-RAG by utilizing the ranking capabilities of LLMs to create weakly labeled data for training dense retrievers. |
Jinming Nian; Zhiyuan Peng; Qifan Wang; Yi Fang; | arxiv-cs.CL | 2024-08-15 |
358 | Assessing and Enhancing Large Language Models in Rare Disease Question-answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce a rare disease question-answering (ReDis-QA) dataset to evaluate the performance of LLMs in diagnosing rare diseases. |
GUANCHU WANG et. al. | arxiv-cs.CE | 2024-08-15 |
359 | Evaluating Fine-Tuning Efficiency of Human-Inspired Learning Strategies in Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This study evaluates the fine-tuning efficiency of five human-inspired strategies across four language models, three datasets, and both human- and LLM-labelled data in the context of medical question answering. |
Yushi Yang; Andrew M. Bean; Robert McCraith; Adam Mahdi; | arxiv-cs.CL | 2024-08-14 |
360 | QirK: Question Answering Via Intermediate Representation on Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We demonstrate QirK, a system for answering natural language questions on Knowledge Graphs (KG). |
JAN LUCA SCHEERER et. al. | arxiv-cs.DB | 2024-08-14 |
361 | Enhancing Visual Question Answering Through Ranking-Based Hybrid Training and Multimodal Fusion Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Current VQA models struggle with complex questions due to limitations in capturing and integrating multimodal information effectively. To address these challenges, we propose the Rank VQA model, which leverages a ranking-inspired hybrid training strategy to enhance VQA performance. |
Peiyuan Chen; Zecheng Zhang; Yiping Dong; Li Zhou; Han Wang; | arxiv-cs.CV | 2024-08-14 |
362 | A RAG-Based Question-Answering Solution for Cyber-Attack Investigation and Attribution Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In the constantly evolving field of cybersecurity, it is imperative for analysts to stay abreast of the latest attack trends and pertinent information that aids in the investigation and attribution of cyber-attacks. In this work, we introduce the first question-answering (QA) model and its application that provides information to the cybersecurity experts about cyber-attacks investigations and attribution. |
Sampath Rajapaksha; Ruby Rani; Erisa Karafili; | arxiv-cs.CR | 2024-08-12 |
363 | Chain of Condition: Construct, Verify and Solve Conditions for Conditional Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing approaches struggle with CQA due to two challenges: (1) precisely identifying necessary conditions and the logical relationship, and (2) verifying conditions to detect any that are missing. In this paper, we propose a novel prompting approach, Chain of condition, by first identifying all conditions and constructing their logical relationships explicitly according to the document, then verifying whether these conditions are satisfied, finally solving the logical expression to indicate any missing conditions and generating the answer accordingly. |
Jiuheng Lin; Yuxuan Lai; Yansong Feng; | arxiv-cs.CL | 2024-08-10 |
364 | Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This complexity leads to a need for additional information and explanation, which can distract fans from the game. To tackle these challenges, we present Sportify, a Visual Question Answering system that integrates narratives and embedded visualization for demystifying basketball tactical questions, aiding fans in understanding various game aspects. |
Chunggi Lee; Tica Lin; Hanspeter Pfister; Chen Zhu-Tian; | arxiv-cs.HC | 2024-08-09 |
365 | Surgical-VQLA++: Adversarial Contrastive Learning for Calibrated Robust Visual Question-Localized Answering in Robotic Surgery Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, the inability of VQA models to visually indicate the regions of interest corresponding to the given questions results in incomplete comprehension of the surgical scene. To tackle this, we propose the surgical visual question localized-answering (VQLA) for precise and context-aware responses to specific queries regarding surgical images. |
LONG BAI et. al. | arxiv-cs.CV | 2024-08-09 |
366 | Towards A Generative Approach for Emotion Detection and Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: But can they perform emotional reasoning by concatenating "Let's think step-by-step" to the input prompt? In this paper we investigate this question along with introducing a novel approach to zero-shot emotion detection and emotional reasoning using LLMs. |
Ankita Bhaumik; Tomek Strzalkowski; | arxiv-cs.CL | 2024-08-09 |
367 | VideoQA in The Era of LLMs: An Empirical Study Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This work conducts a timely and comprehensive study of Video-LLMs’ behavior in VideoQA, aiming to elucidate their success and failure modes, and provide insights towards more human-like video understanding and question answering. |
JUNBIN XIAO et. al. | arxiv-cs.CV | 2024-08-08 |
368 | Enhancing Robustness of Retrieval-Augmented Language Models with In-Context Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, RALMs still struggle with unanswerable queries, where the retrieved contexts do not contain the correct answer, and with conflicting information, where different sources provide contradictory answers due to imperfect retrieval. This study introduces an in-context learning-based approach to enhance the reasoning capabilities of RALMs, making them more robust in imperfect retrieval scenarios. |
Seong-Il Park; Seung-Woo Choi; Na-Hyun Kim; Jay-Yoon Lee; | arxiv-cs.CL | 2024-08-08 |
369 | Enhancing Healthcare Through Large Language Models: A Study on Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents a detailed study of various LLMs trained on the MedQuAD medical question-answering dataset, with a focus on identifying the most effective model for providing accurate medical information. |
Haoran Yu; Chang Yu; Zihan Wang; Dongxian Zou; Hao Qin; | arxiv-cs.CL | 2024-08-07 |
370 | Targeted Visual Prompting for Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To address this, region-based questions have been proposed as a means to assess and enhance actual visual understanding through compositional evaluation. To combine these two perspectives, this paper introduces targeted visual prompting to equip MLLMs with region-based questioning capabilities. |
Sergio Tascon-Morales; Pablo Márquez-Neila; Raphael Sznitman; | arxiv-cs.CV | 2024-08-06 |
371 | Entity Retrieval for Answering Entity-Centric Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose Entity Retrieval, a novel retrieval method which rather than relying on question-document similarity, depends on the salient entities within the question to identify the retrieval documents. |
Hassan S. Shavarani; Anoop Sarkar; | arxiv-cs.IR | 2024-08-05 |
372 | XMainframe: A Large Language Model for Mainframe Modernization Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we introduce XMainframe, a state-of-the-art large language model (LLM) specifically designed with knowledge of mainframe legacy systems and COBOL codebases. |
ANH T. V. DAU et. al. | arxiv-cs.CL | 2024-08-05 |
373 | Leveraging Inter-Chunk Interactions for Enhanced Retrieval in Large Language Model-Based Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Previous research typically handles paragraphs from external documents in isolation, resulting in a lack of context and ambiguous references, particularly in multi-document and complex tasks. To overcome these challenges, we propose a new retrieval framework IIER, that leverages Inter-chunk Interactions to Enhance Retrieval. |
TIEZHENG GUO et. al. | arxiv-cs.CL | 2024-08-05 |
374 | Developing PUGG for Polish: A Modern Approach to KBQA, MRC, and IR Dataset Construction Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We executed this pipeline and introduced the PUGG dataset, the first Polish KBQA dataset, and novel datasets for MRC and IR. |
ALBERT SAWCZYN et. al. | arxiv-cs.AI | 2024-08-05 |
375 | KG-CoT: Chain-of-Thought Prompting of Large Language Models Over Knowledge Graphs for Knowledge-Aware Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Moreover, fragmented knowledge facts extracted by knowledge retrievers fail to provide explicit and coherent reasoning paths for improving LLM reasoning. To address these challenges, we propose KG-CoT, a novel knowledge-augmented paradigm that leverages a small-scale step-by-step graph reasoning model to reason over knowledge graphs (KGs) and utilizes a reasoning path generation method to generate chains of reasoning with high confidence for large-scale LLMs. |
Ruilin Zhao; Feng Zhao; Long Wang; Xianzhi Wang; Guandong Xu; | ijcai | 2024-08-03 |
376 | MMVQA: A Comprehensive Dataset for Investigating Multipage Multimodal Information Retrieval in PDF-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The paper introduces PDF-MVQA, tailored for research journal articles, encompassing multiple pages and multimodal retrieval. |
Yihao Ding; Kaixuan Ren; Jiabin Huang; Siwen Luo; Soyeon Caren Han; | ijcai | 2024-08-03 |
377 | GigaPevt: Multimodal Medical Assistant Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This demo paper presents GigaPevt, the first multimodal medical assistant that combines the dialog capabilities of large language models with specialized medical models. |
PAVEL BLINOV et. al. | ijcai | 2024-08-03 |
378 | KnowledgeHub: An End-to-End Tool for Assisted Scientific Discovery Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper describes the KnowledgeHub tool, a scientific literature Information Extraction (IE) and Question Answering (QA) pipeline. |
SHINNOSUKE TANAKA et. al. | ijcai | 2024-08-03 |
379 | ScreenAI: A Vision-Language Model for UI and Infographics Understanding IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce ScreenAI, a vision-language model that specializes in UI and infographics understanding. |
GILLES BAECHLER et. al. | ijcai | 2024-08-03 |
380 | Graph Collaborative Expert Finding with Contrastive Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we try to address the limitation of current models that typically neglect the intrinsic high-order connectivity within expert-question interactions, which is pivotal for collaborative effects. |
Qiyao Peng; Wenjun Wang; Hongtao Liu; Cuiying Huo; Minglai Shao; | ijcai | 2024-08-03 |
381 | SUKHSANDESH: An Avatar Therapeutic Question Answering Platform for Sexual Education in Rural India Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This approach aims to foster empathy and connection, which is particularly beneficial for individuals with limited literacy skills. |
Salam Michael Singh; Shubhmoy Kumar Garg; Amitesh Misra; Aaditeshwar Seth; Tanmoy Chakraborty; | ijcai | 2024-08-03 |
382 | Adaptive Contrastive Decoding in Retrieval-Augmented Generation for Handling Noisy Contexts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: When using large language models (LLMs) in knowledge-intensive tasks, such as open-domain question answering, external context can bridge the gap between external knowledge and the LLMs’ parametric knowledge. |
YOUNA KIM et. al. | arxiv-cs.CL | 2024-08-02 |
383 | DebateQA: Evaluating Question Answering on Debatable Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, traditional QA benchmarks that assume fixed answers are inadequate for this purpose. To address this, we introduce DebateQA, a dataset of 2,941 debatable questions, each accompanied by multiple human-annotated partial answers that capture a variety of perspectives. |
Rongwu Xu; Xuan Qi; Zehan Qi; Wei Xu; Zhijiang Guo; | arxiv-cs.CL | 2024-08-02 |
384 | BioRAG: A RAG-LLM Framework for Biological Question Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The question-answering system for Life science research, which is characterized by the rapid pace of discovery, evolving insights, and complex interactions among knowledge entities, presents unique challenges in maintaining a comprehensive knowledge warehouse and accurate information retrieval. To address these issues, we introduce BioRAG, a novel Retrieval-Augmented Generation (RAG) with the Large Language Models (LLMs) framework. |
CHENGRUI WANG et. al. | arxiv-cs.CL | 2024-08-02 |
385 | Towards Flexible Evaluation for Generative Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Although Visual Question Answering (VQA) could serve as a developed test field, limitations of VQA evaluation, like the inflexible pattern of Exact Match, have hindered MLLMs from demonstrating their real capability and discourage rich responses. Therefore, this paper proposes the use of semantics-based evaluators for assessing unconstrained open-ended responses on VQA datasets. |
Huishan Ji; Qingyi Si; Zheng Lin; Weiping Wang; | arxiv-cs.CV | 2024-08-01 |
386 | MKEAH: Multimodal Knowledge Extraction and Accumulation Based on Hyperplane Embedding for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
HENG ZHANG et. al. | Virtual Real. Intell. Hardw. | 2024-08-01 |
387 | Transformer-based Vision-language Alignment for Robot Navigation and Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Haonan Luo; Ziyu Guo; Zhenyu Wu; Fei Teng; Tian-Jie Li; | Inf. Fusion | 2024-08-01 |
388 | Prompting Medical Large Vision-Language Models to Diagnose Pathologies By Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose two prompting strategies for MLVLMs that reduce hallucination and improve VQA performance. |
Danfeng Guo; Demetri Terzopoulos; | arxiv-cs.CV | 2024-07-31 |
389 | Decomposed Prompting to Answer Questions on A Course Discussion Board Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose and evaluate a question-answering system that uses decomposed prompting to classify and answer student questions on a course discussion board. |
BRANDON JAIPERSAUD et. al. | arxiv-cs.CL | 2024-07-30 |
390 | Boosting Audio Visual Question Answering Via Key Semantic-Aware Cues Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a Temporal-Spatial Perception Model (TSPM), which aims to empower the model to perceive key visual and auditory cues related to the questions. |
Guangyao Li; Henghui Du; Di Hu; | arxiv-cs.CV | 2024-07-30 |
391 | Advancing Vietnamese Visual Question Answering with Transformer and Convolutional Integration Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Despite the prevalence of approaches in English, there is a notable lack of systems specifically developed for certain languages, particularly Vietnamese. This study aims to bridge this gap by conducting comprehensive experiments on the Vietnamese Visual Question Answering (ViVQA) dataset, demonstrating the effectiveness of our proposed model. |
Ngoc Son Nguyen; Van Son Nguyen; Tung Le; | arxiv-cs.CV | 2024-07-30 |
392 | SimpleLLM4AD: An End-to-End Vision-Language Model with Graph Visual Question Answering for Autonomous Driving Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Here, by utilizing a vision-language model (VLM), we propose an e2eAD method called SimpleLLM4AD. |
Peiru Zheng; Yun Zhao; Zhan Gong; Hong Zhu; Shaohua Wu; | arxiv-cs.CV | 2024-07-30 |
393 | Pyramid Coder: Hierarchical Code Generator for Compositional Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, there are challenges in enabling LLMs to comprehend the usage of image processing modules and generate relevant code. To overcome these challenges, this paper introduces PyramidCoder, a novel prompting framework for PVQA models. |
Ruoyue Shen; Nakamasa Inoue; Koichi Shinoda; | arxiv-cs.CV | 2024-07-30 |
394 | Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To fill the gap, we propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development. |
Xingchen Zeng; Haichuan Lin; Yilin Ye; Wei Zeng; | arxiv-cs.CV | 2024-07-29 |
395 | AdaCoder: Adaptive Prompt Compression for Programmatic Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, they often require long input prompts to provide the LLM with sufficient API usage details to generate relevant code. To address this limitation, we propose AdaCoder, an adaptive prompt compression framework for VPMs. |
Mahiro Ukai; Shuhei Kurita; Atsushi Hashimoto; Yoshitaka Ushiku; Nakamasa Inoue; | arxiv-cs.AI | 2024-07-28 |
396 | Answerability Fields: Answerable Location Estimation Via Diffusion Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose Answerability Fields, a novel approach to predicting answerability within complex indoor environments. |
Daichi Azuma; Taiki Miyanishi; Shuhei Kurita; Koya Sakamoto; Motoaki Kawanabe; | arxiv-cs.CV | 2024-07-26 |
397 | A Role-specific Guided Large Language Model for Ophthalmic Consultation Based on Stylistic Differentiation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose EyeDoctor, an ophthalmic medical questioning large language model that enhances accuracy through doctor-patient role perception guidance and an augmented knowledge base with external disease information. |
LAIYI FU et. al. | arxiv-cs.CL | 2024-07-25 |
398 | Constructing The CORD-19 Vaccine Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a new dataset, ‘CORD-19-Vaccination’, to cater to scientists specifically looking into COVID-19 vaccine-related research. |
Manisha Singh; Divy Sharma; Alonso Ma; Bridget Tyree; Margaret Mitchell; | arxiv-cs.CL | 2024-07-25 |
399 | Audio Entailment: Assessing Deductive Reasoning for Audio Understanding Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the novel task of Audio Entailment to evaluate an ALM’s deductive reasoning ability. |
SOHAM DESHMUKH et. al. | arxiv-cs.SD | 2024-07-25 |
400 | The Geometry of Queries: Query-Based Innovations in Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce Query-Based Retrieval Augmented Generation (QB-RAG), a novel approach that pre-computes a database of potential queries from a content base using LLMs. |
Eric Yang; Jonathan Amar; Jong Ha Lee; Bhawesh Kumar; Yugang Jia; | arxiv-cs.LG | 2024-07-25 |
401 | 3D Question Answering for City Scene Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: From the method perspective, we propose a Scene graph enhanced City-level Understanding method (Sg-CityU), which utilizes the scene graph to introduce spatial semantics. |
PENGLEI SUN et. al. | arxiv-cs.CV | 2024-07-24 |
402 | ScholarChemQA: Unveiling The Power of Language Models in Chemical Research Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Correspondingly, we introduce a QAMatch model, specifically designed to effectively answer chemical questions by fully leveraging our collected data. |
XIUYING CHEN et. al. | arxiv-cs.CL | 2024-07-23 |
403 | Exploring The Effectiveness of Object-Centric Representations in Visual Question Answering: Comparative Insights with Foundation Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we conduct an extensive empirical study on representation learning for downstream Visual Question Answering (VQA), which requires an accurate compositional understanding of the scene. |
Amir Mohammad Karimi Mamaghan; Samuele Papa; Karl Henrik Johansson; Stefan Bauer; Andrea Dittadi; | arxiv-cs.CV | 2024-07-22 |
404 | KaPQA: Knowledge-Augmented Product Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, accurately assessing the performance of these applications remains a challenge, mainly due to the lack of suitable benchmarks that effectively simulate real-world scenarios. To address this challenge, we introduce two product question-answering (QA) datasets focused on Adobe Acrobat and Photoshop products to help evaluate the performance of existing models on domain-specific product QA tasks. |
SWETHA EPPALAPALLY et. al. | arxiv-cs.CL | 2024-07-22 |
405 | MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To construct MMInstruct, we propose an instruction generation data engine that leverages GPT-4V, GPT-3.5, and manual correction. |
YANGZHOU LIU et. al. | arxiv-cs.CV | 2024-07-22 |
406 | OMoS-QA: A Dataset for Cross-Lingual Extractive Question Answering in A German Migration Context Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we present OMoS-QA, a dataset of German and English questions paired with relevant trustworthy documents and manually annotated answers, specifically tailored to this scenario. |
Steffen Kleinle; Jakob Prange; Annemarie Friedrich; | arxiv-cs.CL | 2024-07-22 |
407 | RadioRAG: Factual Large Language Models for Enhanced Diagnostics in Radiology Using Dynamic Retrieval Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Large language models (LLMs) have advanced the field of artificial intelligence (AI) in medicine. |
SOROOSH TAYEBI ARASTEH et. al. | arxiv-cs.CL | 2024-07-22 |
408 | End-to-End Video Question Answering with Frame Scoring Mechanisms and Adaptive Sampling Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Simply uniformly sampling frames or indiscriminately aggregating frame-level visual features often falls short in capturing the nuanced and relevant contexts of videos to well perform VideoQA. To mitigate these issues, we propose VidF4, a novel VideoQA framework equipped with tailored frame selection strategy for effective and efficient VideoQA. |
JIANXIN LIANG et. al. | arxiv-cs.CV | 2024-07-21 |
409 | Customized Retrieval Augmented Generation and Benchmarking for EDA Tool Documentation QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Off-the-shelf RAG flows are well pretrained on general-purpose documents, yet they encounter significant challenges when being applied to knowledge-intensive vertical domains, such as electronic design automation (EDA). This paper addresses such issue by proposing a customized RAG framework along with three domain-specific techniques for EDA tool documentation QA, including a contrastive learning scheme for text embedding model fine-tuning, a reranker distilled from proprietary LLM, and a generative LLM fine-tuned with high-quality domain corpus. |
Yuan Pu; Zhuolun He; Tairu Qiu; Haoyuan Wu; Bei Yu; | arxiv-cs.CL | 2024-07-21 |
410 | Knowledge Acquisition Disentanglement for Knowledge-based Visual Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Furthermore, the “forward-only” answering process fails to explicitly capture the knowledge needs of LLMs, which can further hurt answering quality. To cope with the above limitations, we propose DKA: Disentangled Knowledge Acquisition from LLM feedback, a training-free framework that disentangles knowledge acquisition to avoid confusion and uses LLM’s feedback to specify the required knowledge. |
WENBIN AN et. al. | arxiv-cs.CV | 2024-07-21 |
411 | Generalization V.s. Memorization: Tracing Language Models’ Capabilities Back to Pretraining Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To effectively capture task-specific pretraining data frequency, we propose a novel task-gram language model, which is built by counting the co-occurrence of semantically related $n$-gram pairs from task inputs and outputs in the pretraining corpus. |
XINYI WANG et. al. | arxiv-cs.CL | 2024-07-20 |
412 | Evaluating Language Models As Risk Scores Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we focus on the use of LLMs as risk scores for unrealizable prediction tasks. |
André F. Cruz; Moritz Hardt; Celestine Mendler-Dünner; | arxiv-cs.LG | 2024-07-19 |
413 | INDIC QA BENCHMARK: A Multilingual Benchmark to Evaluate Question Answering Capability of LLMs for Indic Languages Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the evaluation of LLMs’ capabilities in non-English languages for context-based QA is limited by the scarcity of benchmarks in non-English languages. To address this gap, we introduce Indic-QA, the largest publicly available context-grounded question-answering dataset for 11 major Indian languages from two language families. |
Abhishek Kumar Singh; Rudra Murthy; Vishwajeet kumar; Jaydeep Sen; Ganesh Ramakrishnan; | arxiv-cs.LG | 2024-07-18 |
414 | Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Towards a solution, we introduce MIRAGE (Multi-Image Retrieval Augmented Generation), an open-source, lightweight visual-RAG framework that processes up to 10k images on a single 40G A100 GPU — far surpassing the 1k-image limit of contemporary models. |
TSUNG-HAN WU et. al. | arxiv-cs.CV | 2024-07-18 |
415 | Clinical Reading Comprehension with Encoder-Decoder Models Enhanced By Direct Preference Optimization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we combine encoder-decoder models with the direct preference optimization (DPO) method to improve over prior state of the art for the RadQA radiology question answering task by 12-15 F1 points. |
Md Sultan Al Nahian; Ramakanth Kavuluru; | arxiv-cs.IR | 2024-07-18 |
416 | Retrieve, Summarize, Plan: Advancing Multi-hop Question Answering with An Iterative Approach Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel iterative RAG method called ReSP, equipped with a dual-function summarizer. |
Zhouyu Jiang; Mengshu Sun; Lei Liang; Zhiqiang Zhang; | arxiv-cs.CL | 2024-07-17 |
417 | EchoSight: Advancing Visual-Language Models with Wiki Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce EchoSight, a novel multimodal Retrieval-Augmented Generation (RAG) framework that enables large language models (LLMs) to answer visual questions requiring fine-grained encyclopedic knowledge. |
Yibin Yan; Weidi Xie; | arxiv-cs.CV | 2024-07-17 |
418 | Continual Learning for Temporal-Sensitive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we explore an emerging research area of Continual Learning for Temporal Sensitive Question Answering (CLTSQA). |
WANQI YANG et. al. | arxiv-cs.CL | 2024-07-17 |
419 | Search Engines, LLMs or Both? Evaluating Information Seeking Strategies for Answering Health Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we focus on their merits in answering health questions. |
Marcos Fernández-Pichel; Juan C. Pichel; David E. Losada; | arxiv-cs.IR | 2024-07-17 |
420 | TurkishMMLU: Measuring Massive Multitask Language Understanding in Turkish Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the first multitask, multiple-choice Turkish QA benchmark, TurkishMMLU, to evaluate LLMs’ understanding of the Turkish language. |
Arda Yüksel; Abdullatif Köksal; Lütfi Kerem Şenel; Anna Korhonen; Hinrich Schütze; | arxiv-cs.CL | 2024-07-17 |
421 | Localizing and Mitigating Errors in Long-form Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This work introduces HaluQuestQA, the first hallucination dataset with localized error annotations for human-written and model-generated LFQA answers. |
Rachneet Sachdeva; Yixiao Song; Mohit Iyyer; Iryna Gurevych; | arxiv-cs.CL | 2024-07-16 |
422 | Reasoning with Large Language Models, A Survey Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We provide an in-depth coverage of core approaches and open problems, and we propose a research agenda for the near future. |
ASKE PLAAT et. al. | arxiv-cs.AI | 2024-07-16 |
423 | TM-PATHVQA: 90000+ Textless Multilingual Questions for Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, this work implements a speech-based VQA system by introducing a Textless Multilingual Pathological VQA (TMPathVQA) dataset, an expansion of the PathVQA dataset, containing spoken questions in English, German & French. |
Tonmoy Rajkhowa; Amartya Roy Chowdhury; Sankalp Nagaonkar; Achyut Mani Tripathi; | arxiv-cs.CV | 2024-07-16 |
424 | Multimodal Reranking for Knowledge-Intensive Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce an additional module, a multi-modal reranker, to improve the ranking quality of knowledge candidates for answer generation. |
Haoyang Wen; Honglei Zhuang; Hamed Zamani; Alexander Hauptmann; Michael Bendersky; | arxiv-cs.CL | 2024-07-16 |
425 | Video-Language Alignment Via Spatio-Temporal Graph Transformer Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel Spatio-Temporal Graph Transformer module to uniformly learn spatial and temporal contexts for video-language alignment pre-training (dubbed STGT). |
SHI-XUE ZHANG et. al. | arxiv-cs.CV | 2024-07-16 |
426 | Unraveling The Truth: Do VLMs Really Understand Charts? A Deep Dive Into Consistency and Robustness Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We investigate two key aspects: 1) the models’ ability to handle varying levels of chart and question complexity, and 2) their robustness across different visual representations of the same underlying data. |
SRIJA MUKHOPADHYAY et. al. | arxiv-cs.CL | 2024-07-15 |
427 | Evaluation of RAG Metrics for Question Answering in The Telecom Domain Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Retrieval Augmented Generation (RAG) is widely used to enable Large Language Models (LLMs) to perform Question Answering (QA) tasks in various domains. However, RAG based on … |
SUJOY ROYCHOWDHURY et. al. | ArXiv | 2024-07-15 |
428 | Graphusion: Leveraging Large Language Models for Scientific Knowledge Graph Fusion and Construction in NLP Education Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce Graphusion, a zero-shot KGC framework from free text. |
RUI YANG et. al. | arxiv-cs.CL | 2024-07-15 |
429 | RAG-Ex: A Generic Framework for Explaining Retrieval Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce RAG-Ex, a model- and language-agnostic explanation framework that presents approximate explanations to the users revealing why the LLMs possibly generated a piece of text as a response, given the user input. |
Viju Sudhi; Sinchana Ramakanth Bhat; Max Rudat; Roman Teucher; | sigir | 2024-07-14 |
430 | A Question-Answering Assistant Over Personal Knowledge Graph Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Based on a fine-grained schema customized for PKG, the PKGQA system in this paper comprises Symbolic Semantic Parsing, Frequently Asked Question (FAQ) Semantic Matching, and Neural Semantic Parsing modules, which are designed to take into account both accuracy and efficiency. |
LINGYUAN LIU et. al. | sigir | 2024-07-14 |
431 | CIQA: A Coding Inspired Question Answering Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a novel domain-agnostic model to address the problem by leveraging domain-specific and open-source code libraries. |
Mousa Arraf; Kira Radinsky; | sigir | 2024-07-14 |
432 | Towards Robust QA Evaluation Via Open LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite their remarkable capabilities, proprietary LLMs are costly and subject to internal changes that can affect their output, which inhibits the reproducibility of their results and limits the widespread adoption of LLM-based evaluation. In this demo, we aim to use publicly available LLMs for standardizing LLM-based QA evaluation. |
Ehsan Kamalloo; Shivani Upadhyay; Jimmy Lin; | sigir | 2024-07-14 |
433 | GenSco: Can Question Decomposition Based Passage Alignment Improve Question Answering? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigate whether providing aligned context via a carefully selected passage sequence leads to better answer generation by the LLM for multi-hop QA. |
Barah Fazili; Koustava Goswami; Natwar Modani; Inderjeet Nair; | arxiv-cs.CL | 2024-07-14 |
434 | Retrieval-Augmented Generation with Knowledge Graphs for Customer Service Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a novel customer service question-answering method that amalgamates RAG with a knowledge graph (KG). |
ZHENTAO XU et. al. | sigir | 2024-07-14 |
435 | ArabicaQA: A Comprehensive Dataset for Arabic Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we address the significant gap in Arabic natural language processing (NLP) resources by introducing ArabicaQA, the first large-scale dataset for machine reading comprehension and open-domain question answering in Arabic. |
ABDELRAHMAN ABDALLAH et. al. | sigir | 2024-07-14 |
436 | Are Large Language Models Good at Utility Judgments? Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: (iv) We propose a k-sampling, listwise approach to reduce the dependency of LLMs on the sequence of input passages, thereby facilitating subsequent answer generation. |
HENGRAN ZHANG et. al. | sigir | 2024-07-14 |
437 | ChroniclingAmericaQA: A Large-scale Question Answering Dataset Based on Historical American Newspaper Pages Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To further contribute to advancing QA and MRC tasks and to overcome the limitation of previous datasets, we introduce ChroniclingAmericaQA, a large-scale temporal QA dataset with 487K question-answer pairs created based on the historical newspaper collection Chronicling America. |
Bhawna Piryani; Jamshid Mozafari; Adam Jatowt; | sigir | 2024-07-14 |
438 | Let Me Show You Step By Step: An Interpretable Graph Routing Network for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel interpretable graph routing network (GRN) which explicitly conducts entity routing over a constructed scene knowledge graph step by step for KB-VQA. |
DUOKANG WANG et. al. | sigir | 2024-07-14 |
439 | Boosting Conversational Question Answering with Fine-Grained Retrieval-Augmentation and Self-Check Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a conversation-level RAG (ConvRAG) approach, which incorporates fine-grained retrieval augmentation and self-check for conversational question answering (CQA). |
LINHAO YE et. al. | sigir | 2024-07-14 |
440 | Can LLMs Master Math? Investigating Large Language Models on Math Stack Exchange Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we follow a two-step approach to investigating the proficiency of LLMs in answering mathematical questions. |
ANKIT SATPUTE et. al. | sigir | 2024-07-14 |
441 | NativQA: Multilingual Culturally-Aligned Natural Query for LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose a scalable, language-independent framework, NativQA, to seamlessly construct culturally and regionally aligned QA datasets in native languages, for LLM evaluation and tuning. |
MD. ARID HASAN et. al. | arxiv-cs.CL | 2024-07-13 |
442 | One Stone, Four Birds: A Comprehensive Solution for QA System Using Supervised Contrastive Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a novel and comprehensive solution to enhance both the robustness and efficiency of question answering (QA) systems through supervised contrastive learning (SCL). |
Bo Wang; Tsunenori Mine; | arxiv-cs.CL | 2024-07-12 |
443 | Bridging The Gap Between Information Seeking and Product Search Systems: Q&A Recommendation for E-commerce Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The recent success of Large Language Models (LLMs) has opened up an opportunity to bridge the gap between the two tasks to help customers achieve their goals quickly and effectively by integrating conversational QA within product search. In this paper, we propose to recommend users Question-Answer (Q&A) pairs that are relevant to their product search and can help them make a purchase decision. |
Saar Kuzi; Shervin Malmasi; | arxiv-cs.CL | 2024-07-12 |
444 | Segmentation-guided Attention for Visual Question Answering from Remote Sensing Images Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose to embed an attention mechanism guided by segmentation into a RSVQA pipeline. |
LUCREZIA TOSATO et. al. | arxiv-cs.CV | 2024-07-11 |
445 | Uncertainty Estimation of Large Language Models in Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we benchmark popular UE methods with different model sizes on medical question-answering datasets. |
Jiaxin Wu; Yizhou Yu; Hong-Yu Zhou; | arxiv-cs.CL | 2024-07-11 |
446 | AutoBencher: Creating Salient, Novel, Difficult Datasets for Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present three desiderata for a good benchmark for language models: (i) salience (e.g., knowledge about World War II is more salient than a random day in history), (ii) novelty (i.e., the benchmark reveals new trends in model rankings not shown by previous benchmarks), and (iii) difficulty (i.e., the benchmark should be difficult for existing models, leaving headroom for future improvement). |
Xiang Lisa Li; Evan Zheran Liu; Percy Liang; Tatsunori Hashimoto; | arxiv-cs.CL | 2024-07-11 |
447 | Examining Long-Context Large Language Models for Environmental Review Document Comprehension Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Long context and retrieval-augmented generation (RAG) are two such methods that have recently gained popularity. In this work, we examine the benefits of both of these techniques by utilizing question answering (QA) task in a niche domain. |
HUNG PHAN et. al. | arxiv-cs.CL | 2024-07-09 |
448 | Advancing Faithfulness of Large Language Models in Goal-Oriented Dialogue Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Goal-oriented dialogue systems, such as assistant chatbots and conversational AI systems, have gained prominence for their question-answering capabilities, often utilizing large … |
Abigail Sticha; Norbert Braunschweiler; R. Doddipatla; K. Knill; | ACM Conversational User Interfaces 2024 | 2024-07-08 |
449 | MST5 — Multilingual Question Answering Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this research, we propose a simplified approach to enhance multilingual KGQA systems by incorporating linguistic context and entity information directly into the processing pipeline of a language model. |
NIKIT SRIVASTAVA et. al. | arxiv-cs.CL | 2024-07-08 |
450 | Sponsored Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present the first formal analysis of a sponsored QA platform. |
Tommy Mordo; Moshe Tennenholtz; Oren Kurland; | arxiv-cs.GT | 2024-07-05 |
451 | On Scalable Oversight with Weak LLMs Judging Strong LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper we study debate, where two AIs compete to convince a judge; consultancy, where a single AI tries to convince a judge that asks questions; and compare to a baseline of direct question-answering, where the judge just answers outright without the AI. |
ZACHARY KENTON et. al. | arxiv-cs.LG | 2024-07-05 |
452 | Second Place Solution of WSDM2023 Toloka Visual Question Answering Challenge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present our solution for the WSDM2023 Toloka Visual Question Answering Challenge. |
Xiangyu Wu; Zhouyang Chi; Yang Yang; Jianfeng Lu; | arxiv-cs.CV | 2024-07-05 |
453 | Question Answering with Texts and Tables Through Deep Reinforcement Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes a novel architecture to generate multi-hop answers to open domain questions that require information from texts and tables, using the Open Table-and-Text Question Answering dataset for validation and training. |
MARCOS M. JOSÉ et. al. | arxiv-cs.CL | 2024-07-05 |
454 | Leveraging Topic Specificity and Social Relationships for Expert Finding in Community Question Answering Platforms Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present TUEF, a Topic-oriented User-Interaction model for Expert Finding, which aims to fully and transparently leverage the heterogeneous information available within online question-answering communities. |
Maddalena Amendola; Andrea Passarella; Raffaele Perego; | arxiv-cs.IR | 2024-07-04 |
455 | STOC-TOT: Stochastic Tree-of-Thought with Constrained Decoding for Complex Reasoning in Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose STOC-TOT, a stochastic tree-of-thought reasoning prompting method with constrained decoding for MHQA and conduct a detailed comparison with other reasoning prompts on different question types and reasoning types. |
Zhenyu Bi; Daniel Hajialigol; Zhongkai Sun; Jie Hao; Xuan Wang; | arxiv-cs.CL | 2024-07-04 |
456 | Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a robust discriminator named RelD to effectively detect hallucination in LLMs’ generated answers. |
YUYAN CHEN et. al. | arxiv-cs.CL | 2024-07-04 |
457 | FSM: A Finite State Machine Based Zero-Shot Prompting Paradigm for Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a prompting method, Finite State Machine (FSM) to enhance the reasoning capabilities of LLM for complex tasks in addition to improved effectiveness and trustworthiness. |
XIAOCHEN WANG et. al. | arxiv-cs.CL | 2024-07-03 |
458 | VDMA: Video Question Answering with Dynamically Generated Multi-Agents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose Video Question Answering with Dynamically Generated Multi-Agents (VDMA). |
Noriyuki Kugo; Tatsuya Ishibashi; Kosuke Ono; Yuji Sato; | arxiv-cs.CV | 2024-07-03 |
459 | Visual Robustness Benchmark for Visual Question Answering (VQA) Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose the first large-scale benchmark comprising 213,000 augmented images, challenging the visual robustness of multiple VQA models and assessing the strength of realistic visual corruptions. |
MD FARHAN ISHMAM et. al. | arxiv-cs.CV | 2024-07-03 |
460 | UnSeenTimeQA: Time-Sensitive Question-Answering Beyond LLMs’ Memorization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces UnSeenTimeQA, a novel data contamination free time-sensitive question-answering (TSQA) benchmark. |
MD NAYEM UDDIN et. al. | arxiv-cs.CL | 2024-07-03 |
461 | Align and Aggregate: Compositional Reasoning with Video Alignment and Answer Aggregation for Video Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the recent progress made in Video Question-Answering (VideoQA), these methods typically function as black-boxes, making it difficult to understand their reasoning processes and perform consistent compositional reasoning. To address these challenges, we propose a \textit{model-agnostic} Video Alignment and Answer Aggregation (VA$^{3}$) framework, which is capable of enhancing both compositional consistency and accuracy of existing VidQA methods by integrating video aligner and answer aggregator modules. |
Zhaohe Liao; Jiangtong Li; Li Niu; Liqing Zhang; | arxiv-cs.CV | 2024-07-03 |
462 | M2QA: Multi-domain Multilingual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This prevents the transfer of NLP systems from well-resourced languages and domains to non-dominant language-domain combinations. To address this gap, we introduce M2QA, a multi-domain multilingual question answering benchmark. |
LEON ENGLÄNDER et. al. | arxiv-cs.CL | 2024-07-01 |
463 | Calibrated Large Language Models for Binary Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a novel approach that utilizes the inductive Venn–Abers predictor (IVAP) to calibrate the probabilities associated with the output tokens corresponding to the binary labels. |
Patrizio Giovannotti; Alexander Gammerman; | arxiv-cs.CL | 2024-07-01 |
464 | DSAMR: Dual-Stream Attention Multi-hop Reasoning for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
YANHAN SUN et. al. | Expert Syst. Appl. | 2024-07-01 |
465 | Explainable Knowledge Reasoning Via Thought Chains for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Chen Qiu; Zhiqiang Xie; Maofu Liu; Huijun Hu; | Inf. Process. Manag. | 2024-07-01 |
466 | The Solution for The ICCV 2023 Perception Test Challenge 2023 — Task 6 — Grounded VideoQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a grounded video question-answering solution. |
Hailiang Zhang; Dian Chao; Zhihao Guan; Yang Yang; | arxiv-cs.CV | 2024-07-01 |
467 | Incorporating Multi-perspective Information Into Reinforcement Learning to Address Multi-hop Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
CHUANYANG GONG et. al. | Expert Syst. Appl. | 2024-07-01 |
468 | Event-centric Hierarchical Hyperbolic Graph for Multi-hop Question Answering Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View |
Xun Zhu; Wang Gao; Tianyu Li; Wenguang Yao; Hongtao Deng; | Eng. Appl. Artif. Intell. | 2024-07-01 |
469 | Dynamic Few-Shot Learning for Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we introduce a novel approach called Dynamic Few-Shot Learning (DFSL). |
Jacopo D’Abramo; Andrea Zugarini; Paolo Torroni; | arxiv-cs.CL | 2024-07-01 |
470 | Hierarchical Memory for Long Video QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper describes our champion solution to the LOVEU Challenge @ CVPR’24, Track 1 (Long Video VQA). |
YIQIN WANG et. al. | arxiv-cs.CV | 2024-06-30 |
471 | BioKGBench: A Knowledge Graph Checking Benchmark of AI Agent for Biomedical Science Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: On the widely used popular knowledge graph, we discover over 90 factual errors which provide scenarios for agents to make discoveries and demonstrate the effectiveness of our approach. |
XINNA LIN et. al. | arxiv-cs.CL | 2024-06-29 |
472 | H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Existing methods employ either textual reasoning, which excels in semantic interpretation but struggles with mathematical operations, or symbolic reasoning, which handles computations well but lacks semantic understanding. This paper introduces a novel algorithm H-STAR that integrates both symbolic and semantic (textual) approaches in a two-stage process to address these limitations. |
Nikhil Abhyankar; Vivek Gupta; Dan Roth; Chandan K. Reddy; | arxiv-cs.DB | 2024-06-29 |
473 | STLLaVA-Med: Self-Training Large Language and Vision Assistant for Medical Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, the advancement of medical image understanding and reasoning critically depends on building high-quality visual instruction data, which is costly and labor-intensive to obtain, particularly in the medical domain. To mitigate this data-starving issue, we introduce Self-Training Large Language and Vision Assistant for Medicine (STLLaVA-Med). |
Guohao Sun; Can Qin; Huazhu Fu; Linwei Wang; Zhiqiang Tao; | arxiv-cs.CV | 2024-06-28 |
474 | Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Existing approaches at the intersection of Continual Learning and Visual Question Answering (VQA) do not study how the multimodal nature of the input affects the learning dynamics of a model. In this paper, we demonstrate that each modality evolves at different rates across a continuum of tasks and that this behavior occurs in established encoder-only models as well as modern recipes for developing Vision & Language (VL) models. |
Malvina Nikandrou; Georgios Pantazopoulos; Ioannis Konstas; Alessandro Suglia; | arxiv-cs.CV | 2024-06-27 |
475 | Follow-Up Questions Improve Documents Generated By Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study investigates the impact of Large Language Models (LLMs) generating follow-up questions in response to user requests for short (1-page) text documents. |
Bernadette J Tix; | arxiv-cs.CL | 2024-06-27 |
476 | TrustUQA: A Trustful Framework for Unified Structured Data Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose UnifiedTQA, a trustful QA framework that can simultaneously support multiple types of structured data in a unified way. |
WEN ZHANG et. al. | arxiv-cs.CL | 2024-06-27 |
477 | FlowVQA: Mapping Multimodal Logic in Visual Question Answering with Flowcharts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce FlowVQA, a novel benchmark aimed at assessing the capabilities of visual question-answering multimodal language models in reasoning with flowcharts as visual contexts. |
SHUBHANKAR SINGH et. al. | arxiv-cs.CL | 2024-06-27 |
478 | Context Matters: An Empirical Study of The Impact of Contextual Information in Temporal Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce two new context-rich TQA datasets, ContextAQA and ContextTQE, and provide comprehensive evaluations and guidelines for training robust TQA models. |
DAN SCHUMACHER et. al. | arxiv-cs.CL | 2024-06-27 |
479 | Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, existing benchmarks employ irrelevant noise texts to artificially extend the length of test cases, diverging from the real-world scenarios of long-context applications. To bridge this gap, we propose a novel long-context benchmark, Loong, aligning with realistic scenarios through extended multi-document question answering (QA). |
MINZHENG WANG et. al. | arxiv-cs.CL | 2024-06-25 |
480 | Explicit Diversity Conditions for Effective Question Answer Generation with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present explicit diversity conditions for QAG, focusing on spatial aspects, question types, and entities, substantially increasing diversity in QA generation. |
Vikas Yadav; Hyuk Joon Kwon; Vijay Srinivasan; Hongxia Jin; | arxiv-cs.CL | 2024-06-25 |
481 | Advancing Question Answering on Handwritten Documents: A State-of-the-Art Recognition-Based Model for HW-SQuAD Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes a novel recognition-based approach that improves upon the previous state-of-the-art on the HW-SQuAD and BenthamQA datasets. |
Aniket Pal; Ajoy Mondal; C. V. Jawahar; | arxiv-cs.CV | 2024-06-25 |
482 | CaLMQA: Exploring Culturally Specific Long-form Question Answering Across 23 Languages Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: While LFQA has been well-studied in English, this research has not been extended to other languages. To bridge this gap, we introduce CaLMQA, a collection of 1.5K complex culturally specific questions spanning 23 languages and 51 culturally agnostic questions translated from English into 22 other languages. |
SHANE ARORA et. al. | arxiv-cs.CL | 2024-06-25 |
483 | Is Your Benchmark Truly Adversarial? AdvScore: Evaluating Human-Grounded Adversarialness Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Given the lack of a standardized metric for measuring adversarialness, we propose AdvScore, a human-grounded evaluation metric. |
Yoo Yeon Sung; Maharshi Gor; Eve Fleisig; Ishani Mondal; Jordan Lee Boyd-Graber; | arxiv-cs.CL | 2024-06-24 |
484 | Context-augmented Retrieval: A Novel Framework for Fast Information Retrieval Based Response Generation Using Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, this work proposes a new approach, Context Augmented Retrieval (CAR), which partitions the vector database by real-time classification of information flowing into the corpus. |
Sai Ganesh; Anupam Purwar; Gautam B; | arxiv-cs.IR | 2024-06-24 |
485 | CogMG: Collaborative Augmentation Between Large Language Model and Knowledge Graph Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a collaborative augmentation framework, CogMG, leveraging knowledge graphs to address the limitations of LLMs in QA scenarios, explicitly targeting the problems of incomplete knowledge coverage and knowledge update misalignment. |
Tong Zhou; Yubo Chen; Kang Liu; Jun Zhao; | arxiv-cs.CL | 2024-06-24 |
486 | DEXTER: A Benchmark for Open-domain Complex Question Answering Using LLMs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: While retrieval performance for classical QA tasks is well explored, their capabilities for heterogeneous complex retrieval tasks, especially in an open-domain setting, and the impact on downstream QA performance, are relatively unexplored. To address this, in this work, we propose a benchmark composing diverse complex QA tasks and provide a toolkit to evaluate state-of-the-art pre-trained dense and sparse retrieval models in an open-domain setting. |
Venktesh V; Deepali Prabhu; Avishek Anand; | arxiv-cs.CL | 2024-06-24 |
487 | HCQA @ Ego4D EgoSchema Challenge 2024 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this report, we present our champion solution for Ego4D EgoSchema Challenge in CVPR 2024. |
HAOYU ZHANG et. al. | arxiv-cs.CV | 2024-06-22 |
488 | Tri-VQA: Triangular Reasoning Medical Visual Question Answering for Multi-Attribute Analysis Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigate the construction of a more cohesive and stable Med-VQA structure. |
Lin Fan; Xun Gong; Cenyang Zheng; Yafei Ou; | arxiv-cs.LG | 2024-06-21 |
489 | 70B-parameter Large Language Models in Japanese Medical Question-answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Here we utilize multiple 70B-parameter LLMs for the first time and show that instruction tuning using a Japanese medical question-answering dataset significantly improves the ability of Japanese LLMs to solve Japanese medical license exams, surpassing 50% in accuracy. |
Issey Sukeda; Risa Kishikawa; Satoshi Kodera; | arxiv-cs.CL | 2024-06-21 |
490 | Generate-then-Ground in Retrieval-Augmented Generation for Multi-hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the performance of this retrieve-then-read paradigm is constrained by the retriever and the inevitable noise in the retrieved documents. To mitigate these challenges, we introduce a novel generate-then-ground (GenGround) framework, synergizing the parametric knowledge of LLMs and external documents to solve a multi-hop question. |
ZHENGLIANG SHI et. al. | arxiv-cs.CL | 2024-06-21 |
491 | Pregnant Questions: The Importance of Pragmatic Awareness in Maternal Health Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In a high-risk domain such as maternal and infant health, a question-answering system must recognize these pragmatic constraints and go beyond simply answering user questions, examining them in context to respond helpfully. To achieve this, we study assumptions and implications, or pragmatic inferences, made when mothers ask questions about pregnancy and infant care by collecting a dataset of 2,727 inferences from 500 questions across three diverse sources. |
NEHA SRIKANTH et. al. | naacl | 2024-06-20 |
492 | Learning to Plan for Retrieval-Augmented Large Language Models from Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce a novel framework for enhancing LLMs’ planning capabilities by using planning data derived from knowledge graphs (KGs). |
JUNJIE WANG et. al. | arxiv-cs.CL | 2024-06-20 |
493 | TRAQ: Trustworthy Retrieval Augmented Question Answering Via Conformal Prediction Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Retrieval augmented generation (RAG) is a promising strategy to avoid hallucinations, but it does not provide guarantees on its correctness. To address this challenge, we propose the Trustworthy Retrieval Augmented Question Answering, or *TRAQ*, which provides the first end-to-end statistical correctness guarantee for RAG. |
Shuo Li; Sangdon Park; Insup Lee; Osbert Bastani; | naacl | 2024-06-20 |
494 | AudioChatLlama: Towards General-Purpose Speech Abilities for LLMs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we extend the instruction-tuned Llama-2 model with end-to-end general-purpose speech processing and reasoning abilities while maintaining the wide range of original LLM capabilities, without using any carefully curated paired data. |
YASSIR FATHULLAH et. al. | naacl | 2024-06-20 |
495 | On Narrative Question Answering Skills Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing task-level skill views oversimplify the multidimensional nature of tasks, while question-level taxonomies face issues in evaluation and methodology. To address these challenges, we introduce a more inclusive skill taxonomy that synthesizes and redefines narrative understanding skills from previous taxonomies and includes a generation skill dimension from the answering perspective. |
Emil Kalbaliyev; Kairit Sirts; | naacl | 2024-06-20 |
496 | Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models Through Question Complexity IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose a novel adaptive QA framework that can dynamically select the most suitable strategy for (retrieval-augmented) LLMs from the simplest to the most sophisticated ones based on the query complexity. |
Soyeong Jeong; Jinheon Baek; Sukmin Cho; Sung Ju Hwang; Jong Park; | naacl | 2024-06-20 |
497 | CPopQA: Ranking Cultural Concept Popularity By LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the extent to which an LLM effectively captures corpus-level statistical trends of concepts for reasoning, especially long-tail ones, is largely underexplored. In this study, we introduce a novel few-shot question-answering task (CPopQA) that examines LLMs' statistical ranking abilities for long-tail cultural concepts (e.g., holidays), particularly focusing on these concepts' popularity in the United States and the United Kingdom, respectively. |
Ming Jiang; Mansi Joshi; | naacl | 2024-06-20 |
498 | Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, in contrast, we offer the first systematic analysis of the effect of fine-grained object grounding on LVLM hallucination under an evaluation protocol that more realistically captures LVLM hallucination in open generation. |
Gregor Geigle; Radu Timofte; Goran Glavaš; | arxiv-cs.CV | 2024-06-20 |
499 | Is Prompt Transfer Always Effective? An Empirical Study of Prompt Transfer for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we characterize the question answering task based on features such as answer format and empirically investigate the transferability of soft prompts for the first time. |
Minji Jung; Soyeon Park; Jeewoo Sul; Yong Suk Choi; | naacl | 2024-06-20 |
500 | QPaug: Question and Passage Augmentation for Open-Domain Question Answering of LLMs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a simple yet efficient method called question and passage augmentation (QPaug) via LLMs for open-domain QA. |
Minsang Kim; Cheoneum Park; Seungjun Baek; | arxiv-cs.CL | 2024-06-20 |
501 | TTQA-RS- A Break-down Prompting Approach for Multi-hop Table-Text Question Answering with Reasoning and Summarization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we have proposed a Retrieval Augmented Generation (RAG) based model – TTQA-RS: A break-down prompting approach for Multi-hop Table-Text Question Answering with Reasoning and Summarization. |
Jayetri Bardhan; Bushi Xiao; Daisy Zhe Wang; | arxiv-cs.CL | 2024-06-20 |
502 | Towards Improved Multi-Source Attribution for Long-Form Answer Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite gaining increasing popularity for usage in QA systems and search engines, current LLMs struggle with attribution for long-form responses which require reasoning over multiple evidence sources. To address this, in this paper we aim to improve the attribution capability of LLMs for long-form answer generation to multiple sources, with multiple citations per sentence. |
Nilay Patel; Shivashankar Subramanian; Siddhant Garg; Pratyay Banerjee; Amita Misra; | naacl | 2024-06-20 |
503 | Self-Prompting Large Language Models for Zero-Shot Open-Domain QA IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a Self-Prompting framework to explicitly utilize the massive knowledge encoded in the parameters of LLMs and their strong instruction understanding abilities. |
Junlong Li; Jinyuan Wang; Zhuosheng Zhang; Hai Zhao; | naacl | 2024-06-20 |
504 | A Learn-Then-Reason Model Towards Generalization in Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: At the core of KBLLaMA, we study (1) how to organize new knowledge about KBQA and (2) how to facilitate the learning of the organized knowledge. |
Lingxi Zhang; Jing Zhang; Yanling Wang; Cuiping Li; Hong Chen; | arxiv-cs.CL | 2024-06-20 |
505 | SEMQA: Semi-Extractive Multi-Source Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a new QA task for answering multi-answer questions by summarizing multiple diverse sources in a semi-extractive fashion. |
TAL SCHUSTER et. al. | naacl | 2024-06-20 |
506 | Unveiling Divergent Inductive Biases of LLMs on Temporal Data Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Despite the adeptness of Large Language Models (LLMs) in discerning patterns and relationships from data, their inherent comprehension of temporal dynamics remains a formidable challenge. This research meticulously explores these intrinsic challenges within LLMs, with a specific emphasis on evaluating the performance of GPT-3. |
Sindhu Kishore; Hangfeng He; | naacl | 2024-06-20 |
507 | End-to-End Beam Retrieval for Multi-Hop Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce Beam Retrieval, an end-to-end beam retrieval framework for multi-hop QA. |
Jiahao Zhang; Haiyang Zhang; Dongmei Zhang; Liu Yong; Shen Huang; | naacl | 2024-06-20 |
508 | PlanRAG: A Plan-then-Retrieval Augmented Generation for Generative Large Language Models As Decision Makers Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we conduct a study to utilize LLMs as a solution for decision making that requires complex data analysis. |
Myeonghwa Lee; Seonho An; Min-Soo Kim; | naacl | 2024-06-20 |
509 | SQATIN: Supervised Instruction Tuning Meets Question Answering for Improved Dialogue NLU Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce SQATIN, a new framework for dialog NLU based on (i) instruction tuning and (ii) question-answering-based formulation of ID and VE tasks. |
Evgeniia Razumovskaia; Goran Glavaš; Anna Korhonen; Ivan Vulic; | naacl | 2024-06-20 |
510 | SynDARin: Synthesising Datasets for Automated Reasoning in Low-Resource Languages Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This means that producing novel models and measuring the performance of multilingual LLMs in low-resource languages is challenging. To mitigate this, we propose $\textbf{S}$yn$\textbf{DAR}$in, a method for generating and validating QA datasets for low-resource languages. |
Gayane Ghazaryan; Erik Arakelyan; Pasquale Minervini; Isabelle Augenstein; | arxiv-cs.CL | 2024-06-20 |
511 | FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we formalize three major desiderata for a fine-grained evaluation of robustness of TQA systems. |
Wei Zhou; Mohsen Mesgar; Heike Adel; Annemarie Friedrich; | naacl | 2024-06-20 |
512 | Retrieval Helps or Hurts? A Deeper Dive Into The Efficacy of Retrieval Augmentation to Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, our goal is to offer a more detailed, fact-centric analysis by exploring the effects of combinations of entities and relations. |
Seiji Maekawa; Hayate Iso; Sairam Gurajada; Nikita Bhutani; | naacl | 2024-06-20 |
513 | Mitigating Bias for Question Answering Models By Tracking Bias Influence Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose BMBI, an approach to mitigate the bias of multiple-choice QA models. |
MINGYU MA et. al. | naacl | 2024-06-20 |
514 | Evaluating RAG-Fusion with RAGElo: An Automated Elo-based Framework Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This results in difficulties in evaluating RAG variations, like RAG-Fusion (RAGF), in the context of a product QA task at Infineon Technologies. To solve these problems, we propose a comprehensive evaluation framework, which leverages Large Language Models (LLMs) to generate large datasets of synthetic queries based on real user queries and in-domain documents, uses LLM-as-a-judge to rate retrieved documents and answers, evaluates the quality of answers, and ranks different variants of Retrieval-Augmented Generation (RAG) agents with RAGElo’s automated Elo-based competition. |
Zackary Rackauckas; Arthur Câmara; Jakub Zavrel; | arxiv-cs.IR | 2024-06-20 |
515 | Temporal Knowledge Graph Question Answering: A Survey Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work aims to serve as a comprehensive reference for TKGQA and to stimulate further research. |
MIAO SU et. al. | arxiv-cs.CL | 2024-06-20 |
516 | Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we present MIRAGE –Model Internals-based RAG Explanations — a plug-and-play approach using model internals for faithful answer attribution in RAG applications. |
Jirui Qi; Gabriele Sarti; Raquel Fernández; Arianna Bisazza; | arxiv-cs.CL | 2024-06-19 |
517 | Thread: A Logic-Based Data Organization Paradigm for How-To Question Answering with Retrieval Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Specifically, we introduce a new knowledge granularity, termed ‘logic unit’, where documents are transformed into more structured and loosely interconnected logic units with large language models. |
KAIKAI AN et. al. | arxiv-cs.AI | 2024-06-19 |
518 | AlanaVLM: A Multimodal Embodied AI Foundation Model for Egocentric Video Understanding Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, current Vision-Language Models (VLMs) primarily focus on third-person view videos, neglecting the richness of egocentric perceptual experience. To address this gap, we propose three key contributions. First, we introduce the Egocentric Video Understanding Dataset (EVUD) for training VLMs on video captioning and question answering tasks specific to egocentric videos. |
ALESSANDRO SUGLIA et. al. | arxiv-cs.CV | 2024-06-19 |
519 | Towards Robust Evaluation: A Comprehensive Taxonomy of Datasets and Metrics for Open Domain Question Answering in The Era of Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a novel taxonomy for ODQA datasets that incorporates both the modality and difficulty of the question types. |
Akchay Srivastava; Atif Memon; | arxiv-cs.CL | 2024-06-19 |
520 | QRMeM: Unleash The Length Limitation Through Question Then Reflection Memory Mechanism Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, existing techniques face challenges with static knowledge integration, leading to insufficient adaptation to task-specific needs and missing multi-segmentation relationships, which hinders the dynamic reorganization and logical combination of relevant segments during the response process. To address these issues, we introduce a novel strategy, Question then Reflection Memory Mechanism (QRMeM), incorporating a dual-structured memory pool. |
BO WANG et. al. | arxiv-cs.CL | 2024-06-18 |
521 | LIVE: Learnable In-Context Vector for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we propose Learnable In-Context VEctor (LIVE) to distill essential task information from demonstrations, improving ICL performance in LMMs. |
YINGZHE PENG et. al. | arxiv-cs.CL | 2024-06-18 |
522 | Problem-Solving in Language Model Networks Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To improve the reasoning and question-answering capabilities of Large Language Models (LLMs), several multi-agent approaches have been introduced. |
Ciaran Regan; Alexandre Gournail; Mizuki Oka; | arxiv-cs.AI | 2024-06-18 |
523 | On The Robustness of Language Models for Tabular Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We highlight the need for improved methodologies, including structure-aware self-attention mechanisms and better handling of domain-specific tabular data, to develop more reliable LLMs for table comprehension. |
Kushal Raj Bhandari; Sixue Xing; Soham Dan; Jianxi Gao; | arxiv-cs.CL | 2024-06-18 |
524 | From RAGs to Rich Parameters: Probing How Language Models Utilize External Knowledge Over Parametric Information for Factual Queries Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we mechanistically examine the RAG pipeline to highlight that language models take a shortcut and have a strong bias towards utilizing only the context information to answer the question, while relying minimally on their parametric memory. |
HITESH WADHWA et. al. | arxiv-cs.CL | 2024-06-18 |
525 | Diversify, Rationalize, and Combine: Ensembling Multiple QA Strategies for Zero-shot Knowledge-based VQA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we propose Diversification, Evidence Truncation, and Combination for Knowledge-based Elucidation (DietCoke), which utilizes a bundle of complementary question-answering tactics and aggregates their answers using textual rationales. |
Miaoyu Li; Haoxin Li; Zilin Du; Boyang Li; | arxiv-cs.CL | 2024-06-18 |
526 | AvaTaR: Optimizing LLM Agents for Tool Usage Via Contrastive Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Here, we introduce AvaTaR, a novel and automated framework that optimizes an LLM agent to effectively leverage provided tools, improving performance on a given task. |
SHIRLEY WU et. al. | arxiv-cs.LG | 2024-06-17 |
527 | RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To foster sound evaluation of language models, we introduce a new test dataset named RepLiQA, suited for question-answering and topic retrieval tasks. |
JOAO MONTEIRO et. al. | arxiv-cs.CL | 2024-06-17 |
528 | TRACE The Evidence: Constructing Knowledge-Grounded Reasoning Chains for Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To enhance the multi-hop reasoning ability of RAG models, we propose TRACE. |
Jinyuan Fang; Zaiqiao Meng; Craig Macdonald; | arxiv-cs.CL | 2024-06-17 |
529 | FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Food is a rich and varied dimension of cultural heritage, crucial to both individuals and social groups. To bridge the gap in the literature on the often-overlooked regional diversity in this domain, we introduce FoodieQA, a manually curated, fine-grained image-text dataset capturing the intricate features of food cultures across various regions in China. |
WENYAN LI et. al. | arxiv-cs.CL | 2024-06-16 |
530 | Multi-LLM QA with Embodied Exploration Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: There is a lack of insight into whether a Multi-LLM system can handle question-answering based on observations from embodied exploration. In this work, we address this gap by investigating the use of Multi-Embodied LLM Explorers (MELE) for QA in an unknown environment. |
Bhrij Patel; Vishnu Sashank Dorbala; Amrit Singh Bedi; Dinesh Manocha; | arxiv-cs.LG | 2024-06-16 |
531 | SHMamba: Structured Hyperbolic State Space Model for Audio-Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the self-attention mechanism’s limitations in window modeling and quadratic computational complexity reduce its effectiveness in modeling long sequences. To address these limitations, we propose SHMamba: Structured Hyperbolic State Space Model to integrate the advantages of hyperbolic geometry and state space models. |
Zhe Yang; Wenrui Li; Guanghui Cheng; | arxiv-cs.AI | 2024-06-14 |
532 | Datasets for Multilingual Answer Sentence Selection Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce new high-quality datasets for AS2 in five European languages (French, German, Italian, Portuguese, and Spanish), obtained through supervised Automatic Machine Translation (AMT) of existing English AS2 datasets such as ASNQ, WikiQA, and TREC-QA using a Large Language Model (LLM). |
Matteo Gabburo; Stefano Campese; Federico Agostini; Alessandro Moschitti; | arxiv-cs.CL | 2024-06-14 |
533 | Enhancing Question Answering on Charts Through Effective Pre-training Tasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While the current state-of-the-art approaches for document understanding (both OCR-based and OCR-free) work well, a thorough analysis of their capabilities and limitations has not yet been performed. Therefore, in this work, we address the limitations of current VisualQA models when applied to charts and plots. |
ASHIM GUPTA et. al. | arxiv-cs.CL | 2024-06-14 |
534 | Beyond Raw Videos: Understanding Edited Videos with Large Multimodal Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we leverage the edited videos on a popular short video platform, \textit{i.e.}, TikTok, and build a video VQA benchmark (named EditVid-QA) covering four typical editing categories, i.e., effect, funny, meme, and game. |
LU XU et. al. | arxiv-cs.CV | 2024-06-14 |
535 | EWEK-QA: Enhanced Web and Efficient Knowledge Graph Retrieval for Citation-based Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Second, web-retrieved contents are usually obtained by some simple heuristics such as fixed length or breakpoints which might lead to splitting information into pieces. To mitigate these issues, we propose our enhanced web and efficient knowledge graph (KG) retrieval solution (EWEK-QA) to enrich the content of the extracted knowledge fed to the system. |
MOHAMMAD DEHGHAN et. al. | arxiv-cs.CL | 2024-06-14 |
536 | Precision Empowers, Excess Distracts: Visual Question Answering With Dynamically Infused Knowledge In Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce an approach for KBVQA, augmenting the existing vision-language transformer encoder-decoder (OFA) model. |
Manas Jhalani; Annervaz K M; Pushpak Bhattacharyya; | arxiv-cs.CL | 2024-06-14 |
537 | CoG-DQA: Chain-of-Guiding Learning with Large Language Models for Diagram Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the Chain-of-Guiding Learning Model for Diagram Question Answering (CoG-DQA), a novel framework that effectively addresses DQA challenges. |
SHAOWEI WANG et. al. | cvpr | 2024-06-13 |
538 | Optimizing Visual Question Answering Models for Driving: Bridging The Gap Between Human and Machine Attention Patterns Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose an approach integrating filters to optimize the model’s attention mechanisms, prioritizing relevant objects and improving accuracy. |
Kaavya Rekanar; Martin Hayes; Ganesh Sistu; Ciaran Eising; | arxiv-cs.CV | 2024-06-13 |
539 | VTQA: Visual Text Question Answering Via Entity Alignment and Cross-Media Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Motivated by the need for a more comprehensive evaluation, we introduce a novel dataset comprising 23,781 questions derived from 10,124 image-text pairs. |
Kang Chen; Xiangqian Wu; | cvpr | 2024-06-13 |
540 | DIEM: Decomposition-Integration Enhancing Multimodal Insights Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose the Decomposition-Integration Enhancing Multimodal Insight (DIEM) framework, which initially decomposes the given question and image into multiple sub-questions and several sub-images, aiming to isolate specific elements for more focused analysis. |
XINYI JIANG et. al. | cvpr | 2024-06-13 |
541 | Language-aware Visual Semantic Distillation for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we are inspired by the human recognition and learning pattern and propose VideoDistill, a framework with language-aware (i.e., goal-driven) behavior in both the vision perception and answer generation processes. |
Bo Zou; Chao Yang; Yu Qiao; Chengbin Quan; Youjian Zhao; | cvpr | 2024-06-13 |
542 | How to Configure Good In-Context Sequence for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To enhance ICL performance, in this study we use Visual Question Answering (VQA) as a case study, exploring diverse in-context configurations to find the powerful ones. |
Li Li; Jiawei Peng; Huiyi Chen; Chongyang Gao; Xu Yang; | cvpr | 2024-06-13 |
543 | Can Language Beat Numerical Regression? Language-Based Multimodal Trajectory Prediction Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Here we propose a beam-search-based most-likely prediction and a temperature-based multimodal prediction to implement both deterministic and stochastic inferences. |
Inhwan Bae; Junoh Lee; Hae-Gon Jeon; | cvpr | 2024-06-13 |
544 | Causal-CoG: A Causal-Effect Look at Context Generation for Boosting Multi-modal Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While Multi-modal Language Models (MLMs) demonstrate impressive multimodal ability, they still struggle to provide factual and precise responses for tasks like visual question answering (VQA). In this paper, we address this challenge from the perspective of contextual information. |
Shitian Zhao; Zhuowan Li; Yadong Lu; Alan Yuille; Yan Wang; | cvpr | 2024-06-13 |
545 | Ranking Distillation for Open-Ended Video Question Answering with Insufficient Labels Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: As a result, existing works tend to directly treat all the unlabeled answers as negative labels, leading to limited generalization ability. In this work, we introduce a simple yet effective ranking distillation framework (RADI) to mitigate this problem without additional manual annotation. |
Tianming Liang; Chaolei Tan; Beihao Xia; Wei-Shi Zheng; Jian-Fang Hu; | cvpr | 2024-06-13 |
546 | Consistency and Uncertainty: Identifying Unreliable Responses From Black-Box Vision-Language Models for Selective Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose using the principle of neighborhood consistency to identify unreliable responses from a black-box vision-language model in question answering tasks. |
Zaid Khan; Yun Fu; | cvpr | 2024-06-13 |
547 | Too Many Frames, Not All Useful: Efficient Strategies for Long-Form Video QA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Such VLMs often independently caption a large number of frames uniformly sampled from long videos, which is not efficient and can mostly be redundant. Questioning these decision choices, we explore optimal strategies for key-frame selection that can significantly reduce these redundancies, namely Hierarchical Keyframe Selector. |
JONGWOO PARK et. al. | arxiv-cs.CV | 2024-06-13 |
548 | Synthesize Step-by-Step: Tools, Templates and LLMs As Data Generators for Reasoning-Based Chart VQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we address the lack of reasoning ability through data augmentation. |
Zhuowan Li; Bhavan Jasani; Peng Tang; Shabnam Ghadar; | cvpr | 2024-06-13 |
549 | On Scaling Up A Multilingual Vision and Language Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We explore the boundaries of scaling up a multilingual vision and language model, both in terms of the size of its components and the breadth of its training task mixture. |
XI CHEN et. al. | cvpr | 2024-06-13 |
550 | Can I Trust Your Answer? Visually Grounded Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Experiments with different backbones demonstrate that this grounding mechanism improves both grounding and QA. With these efforts we aim to push towards trustworthy VLMs in VQA systems. |
Junbin Xiao; Angela Yao; Yicong Li; Tat-Seng Chua; | cvpr | 2024-06-13 |
551 | OpenEQA: Embodied Question Answering in The Era of Foundation Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a modern formulation of Embodied Question Answering (EQA) as the task of understanding an environment well enough to answer questions about it in natural language. |
ARJUN MAJUMDAR et. al. | cvpr | 2024-06-13 |
552 | Towards Multilingual Audio-Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we work towards extending Audio-Visual Question Answering (AVQA) to multilingual settings. |
ORCHID CHETIA PHUKAN et. al. | arxiv-cs.LG | 2024-06-13 |
553 | DiscreteSLU: A Large Language Model with Self-Supervised Discrete Speech Units for Spoken Language Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose the use of discrete speech units (DSU), rather than continuous-valued speech encoder outputs, that are converted to the LLM token embedding space using the speech adapter. |
SUWON SHON et. al. | arxiv-cs.CL | 2024-06-13 |
554 | MoReVQA: Exploring Modular Reasoning Models for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Thus, unlike traditional single-stage planning methods, we propose a multi-stage system consisting of an event parser, a grounding stage, and a final reasoning stage, in conjunction with an external memory. |
Juhong Min; Shyamal Buch; Arsha Nagrani; Minsu Cho; Cordelia Schmid; | cvpr | 2024-06-13 |
555 | Grounded Question-Answering in Long Egocentric Videos Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we delve into open-ended question-answering (QA) in long egocentric videos, which allows individuals or robots to inquire about their own past visual experiences. |
Shangzhe Di; Weidi Xie; | cvpr | 2024-06-13 |
556 | Multi-Factor Adaptive Vision Selection for Egocentric Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The challenge of interpreting the world from a human perspective in Artificial Intelligence (AI) is particularly evident in egocentric video question answering, which grapples with issues like small object recognition, noise suppression, and spatial-temporal reasoning. To address these challenges, we introduce the Multi-Factor Adaptive vision Selection (MFAS) framework. |
HAOYU ZHANG et. al. | icml | 2024-06-12 |
557 | TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present TROVE, a training-free method of inducing a verifiable and efficient toolbox of functions, by generating via using, growing, and periodically trimming the toolbox. |
Zhiruo Wang; Graham Neubig; Daniel Fried; | icml | 2024-06-12 |
558 | Unifying Image Processing As Visual Prompting Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, these advances have predominantly concentrated on high-level vision tasks, with less attention paid to low-level vision tasks. To address this issue, we propose a universal model for general image processing that covers image restoration, image enhancement, image feature extraction tasks, etc. |
YIHAO LIU et. al. | icml | 2024-06-12 |
559 | Switchable Decision: Dynamic Neural Generation Networks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a switchable decision to accelerate inference by dynamically assigning computation resources for each data instance. |
Shujian Zhang; Korawat Tanwisuth; Chengyue Gong; Pengcheng He; Mingyuan Zhou; | icml | 2024-06-12 |
560 | In-Context Principle Learning from Mistakes IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Nonetheless, all ICL-based approaches only learn from correct input-output pairs. In this paper, we revisit this paradigm, by learning more from the few given input-output examples. |
TIANJUN ZHANG et. al. | icml | 2024-06-12 |
561 | Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we suggest investigating internal activations and quantifying LLM’s truthfulness using the local intrinsic dimension (LID) of model activations. |
Fan Yin; Jayanth Srinivasa; Kai-Wei Chang; | icml | 2024-06-12 |
562 | MBBQ: A Dataset for Cross-Lingual Comparison of Stereotypes in Generative LLMs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we present MBBQ (Multilingual Bias Benchmark for Question-answering), a carefully curated version of the English BBQ dataset extended to Dutch, Spanish, and Turkish, which measures stereotypes commonly held across these languages. |
Vera Neplenbroek; Arianna Bisazza; Raquel Fernández; | arxiv-cs.CL | 2024-06-11 |
563 | Question-Answering (QA) Model for A Personalized Learning Assistant for Arabic Language Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper describes the creation, optimization, and assessment of a question-answering (QA) model for a personalized learning assistant that uses BERT transformers customized for … |
Mohammad Sammoudi; Ahmad Habaybeh; Huthaifa I. Ashqar; Mohammed Elhenawy; | ArXiv | 2024-06-11 |
564 | Scholarly Question Answering Using Large Language Models in The NFDI4DataScience Gateway Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces a scholarly Question Answering (QA) system on top of the NFDI4DataScience Gateway, employing a Retrieval Augmented Generation-based (RAG) approach. |
HAMED BABAEI GIGLOU et. al. | arxiv-cs.CL | 2024-06-11 |
565 | Situational Awareness Matters in 3D Vision Language Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Being able to carry out complicated vision language reasoning tasks in 3D space represents a significant milestone in developing household robots and human-centered embodied AI. In this work, we demonstrate that a critical and distinct challenge in 3D vision language reasoning is situational awareness, which incorporates two key components: (1) The autonomous agent grounds its self-location based on a language prompt. |
Yunze Man; Liang-Yan Gui; Yu-Xiong Wang; | arxiv-cs.CV | 2024-06-11 |
566 | DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To improve the neural-symbolic reasoning capabilities of language agents powered by Large Language Models (LLMs) in KGQA, we propose the DecompositionAlignment-Reasoning Agent (DARA) framework. |
Haishuo Fang; Xiaodan Zhu; Iryna Gurevych; | arxiv-cs.CL | 2024-06-11 |
567 | Benchmarking Vision-Language Contrastive Methods for Medical Representation Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Through this study, we aim to answer the following research questions: (i) How transferable are general-domain representations to the medical domain? |
SHUVENDU ROY et. al. | arxiv-cs.CV | 2024-06-11 |
568 | VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present the VideoLLaMA 2, a set of Video Large Language Models (Video-LLMs) designed to enhance spatial-temporal modeling and audio understanding in video and audio-oriented tasks. |
ZESEN CHENG et. al. | arxiv-cs.CV | 2024-06-11 |
569 | DR-RAG: Applying Dynamic Document Relevance to Retrieval-Augmented Generation for Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To mine the relevance, a two-stage retrieval framework called Dynamic-Relevant Retrieval-Augmented Generation (DR-RAG) is proposed to improve document retrieval recall and the accuracy of answers while maintaining efficiency. |
ZIJIAN HEI et. al. | arxiv-cs.LG | 2024-06-11 |
570 | MedExQA: Medical Question Answering Benchmark with Multiple Explanations Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces MedExQA, a novel benchmark in medical question-answering, to evaluate large language models’ (LLMs) understanding of medical knowledge through explanations. |
Yunsoo Kim; Jinge Wu; Yusuf Abdulle; Honghan Wu; | arxiv-cs.CL | 2024-06-10 |
571 | MemoriQA: A Question-Answering Lifelog Dataset Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Lifelogging can be referred to as the process of passively collecting data on an individual’s daily life. Lifelog data provides a large amount of information which can be used to … |
Quang-Linh Tran; Binh T. Nguyen; Gareth J. F. Jones; C. Gurrin; | Proceedings of the 1st ACM Workshop on AI-Powered Q&A … | 2024-06-10 |
572 | Chart Question Answering Based on Modality Conversion and Large Language Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: A two-stage chart question answering system is proposed in this paper. Chart/plot images are first converted into structured text-based data by a transformer-based conversion … |
Yi-Cheng Liu; Wei-Ta Chu; | Proceedings of the 1st ACM Workshop on AI-Powered Q&A … | 2024-06-10 |
573 | MyEachtraX: Lifelog Question Answering on Mobile Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Your whole life in your pocket. That is the premise of lifelogging, a technology that captures and stores every moment of your life in digital form. Built on top of MyEachtra and … |
Ly-Duyen Tran; Thanh-Binh Nguyen; C. Gurrin; Liting Zhou; | Proceedings of the 7th Annual ACM Workshop on the Lifelog … | 2024-06-10 |
574 | Evaluating The Retrieval Component in LLM-Based Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study proposes a straightforward baseline for evaluating retrievers in Retrieval-Augmented Generation (RAG)-based chatbots. |
Ashkan Alinejad; Krtin Kumar; Ali Vahdat; | arxiv-cs.CL | 2024-06-10 |
575 | HOLMES: Hyper-Relational Knowledge Graphs for Multi-hop Question Answering Using LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, this simplistic approach is query-agnostic and the extracted facts are ambiguous as they lack context. To address these drawbacks and to enable LLMs to answer complex (multi-hop) questions with ease, we propose to use a knowledge graph (KG) that is context-aware and is distilled to contain query-relevant information. |
Pranoy Panda; Ankush Agarwal; Chaitanya Devaguptapu; Manohar Kaul; Prathosh A P; | arxiv-cs.CL | 2024-06-10 |
576 | MedREQAL: Examining Medical Knowledge Recall of Large Language Models Via Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we examine the capability of LLMs to exhibit medical knowledge recall by constructing a novel dataset derived from systematic reviews — studies synthesizing evidence-based answers for specific medical questions. |
Juraj Vladika; Phillip Schneider; Florian Matthes; | arxiv-cs.CL | 2024-06-09 |
577 | Zero-Shot End-To-End Spoken Question Answering In Medical Domain Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our study introduces a novel zero-shot SQA approach, compared to traditional cascade systems. |
Yanis Labrak; Adel Moumen; Richard Dufour; Mickael Rouvier; | arxiv-cs.CL | 2024-06-09 |
578 | MrRank: Improving Question Answering Retrieval System Through Multi-Result Ranking Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose an approach that leverages learning-to-rank techniques to combine heterogeneous IR systems. |
Danupat Khamnuansin; Tawunrat Chalothorn; Ekapol Chuangsuwanich; | arxiv-cs.CL | 2024-06-09 |
579 | CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: More importantly, although these datasets often extend their linguistic range via translation or some other approaches, they usually keep images the same, resulting in narrow cultural representation. To address these limitations, we construct CVQA, a new Culturally-diverse multilingual Visual Question Answering benchmark, designed to cover a rich set of languages and cultures, where we engage native speakers and cultural experts in the data collection process. |
DAVID ROMERO et. al. | arxiv-cs.CV | 2024-06-09 |
580 | Investigating and Addressing Hallucinations of LLMs in Tasks Involving Negation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Negation is important because it adds depth and nuance to the understanding of language and is also crucial for logical reasoning and inference. In this work, we address the above limitation and particularly focus on studying the impact of negation in LLM hallucinations. |
NEERAJ VARSHNEY et. al. | arxiv-cs.CL | 2024-06-08 |
581 | Venn Diagram Prompting : Accelerating Comprehension with Scaffolding Effect Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce Venn Diagram (VD) Prompting, an innovative prompting technique which allows Large Language Models (LLMs) to combine and synthesize information across complex, diverse and long-context documents in knowledge-intensive question-answering tasks. |
Sakshi Mahendru; Tejul Pandit; | arxiv-cs.CL | 2024-06-08 |
582 | CRAG — Comprehensive RAG Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Existing RAG datasets, however, do not adequately represent the diverse and dynamic nature of real-world Question Answering (QA) tasks. To bridge this gap, we introduce the Comprehensive RAG Benchmark (CRAG), a factual question answering benchmark of 4,409 question-answer pairs and mock APIs to simulate web and Knowledge Graph (KG) search. |
XIAO YANG et. al. | arxiv-cs.CL | 2024-06-07 |
583 | ComplexTempQA: A Large-Scale Dataset for Complex Temporal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce ComplexTempQA, a large-scale dataset consisting of over 100 million question-answer pairs designed to tackle the challenges in temporal question answering. |
Raphael Gruber; Abdelrahman Abdallah; Michael Färber; Adam Jatowt; | arxiv-cs.CL | 2024-06-07 |
584 | TCMD: A Traditional Chinese Medicine QA Dataset for Evaluating Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a new medical question-answering (QA) dataset that contains massive manual instruction for solving Traditional Chinese Medicine examination tasks, called TCMD. |
Ping Yu; Kaitao Song; Fengchen He; Ming Chen; Jianfeng Lu; | arxiv-cs.CL | 2024-06-07 |
585 | MATTER: Memory-Augmented Transformer Using Heterogeneous Knowledge Sources Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce an efficient memory-augmented transformer called MATTER, designed to retrieve relevant knowledge from multiple heterogeneous knowledge sources. |
Dongkyu Lee; Chandana Satya Prakash; Jack FitzGerald; Jens Lehmann; | arxiv-cs.CL | 2024-06-07 |
586 | CRAG – Comprehensive RAG Benchmark Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Retrieval-Augmented Generation (RAG) has recently emerged as a promising solution to alleviate Large Language Model (LLM)’s deficiency in lack of knowledge. Existing RAG datasets, … |
XIAO YANG et. al. | ArXiv | 2024-06-07 |
587 | FairytaleQA Translated: Enabling Educational Question and Answer Generation in Less-Resourced Languages Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: While numerous datasets have been developed in English for this purpose, a noticeable void exists in less-resourced languages. To alleviate this gap, our paper introduces machine-translated versions of FairytaleQA, a renowned QA dataset designed to assess and enhance narrative comprehension skills in young children. |
Bernardo Leite; Tomás Freitas Osório; Henrique Lopes Cardoso; | arxiv-cs.CL | 2024-06-06 |
588 | Wings: Learning Multimodal LLMs Without Text-only Forgetting Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present Wings, a novel MLLM that excels in both text-only dialogues and multimodal comprehension. |
YI-KAI ZHANG et. al. | arxiv-cs.CL | 2024-06-05 |
589 | M-QALM: A Benchmark to Assess Clinical Reading Comprehension and Knowledge Recall in Large Language Models Via Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: There is vivid research on adapting Large Language Models (LLMs) to perform a variety of tasks in high-stakes domains such as healthcare. |
ANAND SUBRAMANIAN et. al. | arxiv-cs.CL | 2024-06-05 |
590 | Measuring Retrieval Complexity in Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigate which questions are challenging for retrieval-based Question Answering (QA). |
Matteo Gabburo; Nicolaas Paul Jedema; Siddhant Garg; Leonardo F. R. Ribeiro; Alessandro Moschitti; | arxiv-cs.CL | 2024-06-05 |
591 | I’ve Got The Answer! Interpretation of LLMs Hidden States in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We also identify the layers which have a negative effect on the model’s behavior. As a prospect of practical application of the hypothesis, we propose to train such weak layers additionally in order to improve the quality of the task solution. |
Valeriya Goloviznina; Evgeny Kotelnikov; | arxiv-cs.CL | 2024-06-04 |
592 | UniOQA: A Unified Framework for Knowledge Graph Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce UniOQA, a unified framework that integrates two complementary parallel workflows. |
Zhuoyang Li; Liran Deng; Hui Liu; Qiaoqiao Liu; Junzhao Du; | arxiv-cs.CL | 2024-06-04 |
593 | Translation Deserves Better: Analyzing Translation Artifacts in Cross-lingual Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We find that these artifacts can significantly affect the models, confirmed by extensive experiments across diverse models, languages, and translation processes. In light of this, we present a simple data augmentation strategy that can alleviate the adverse impacts of translation artifacts. |
CHAEHUN PARK et. al. | arxiv-cs.CL | 2024-06-04 |
594 | EffiQA: Efficient Question-Answering with Strategic Multi-Model Collaboration on Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing approaches that integrate LLMs and KGs either underutilize the reasoning abilities of LLMs or suffer from prohibitive computational costs due to tight coupling. To address these limitations, we propose a novel collaborative framework named EffiQA that can strike a balance between performance and efficiency via an iterative paradigm. |
ZIXUAN DONG et. al. | arxiv-cs.CL | 2024-06-03 |
595 | Graph Neural Network Enhanced Retrieval for Question Answering of LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel retrieval method, called GNN-Ret, which leverages graph neural networks (GNNs) to enhance retrieval by exploiting the relatedness between passages. |
ZIJIAN LI et. al. | arxiv-cs.CL | 2024-06-03 |
596 | MedFuzz: Exploring The Robustness of Large Language Models in Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). |
ROBERT OSAZUWA NESS et. al. | arxiv-cs.CL | 2024-06-03 |
597 | Seeing Beyond Borders: Evaluating LLMs in Multilingual Ophthalmological Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models (LLMs), such as GPT-3.5 [1] and GPT-4 [2], have significant potential for transforming several aspects of patient care from clinical note summarization to … |
DAVID RESTREPO et. al. | 2024 IEEE 12th International Conference on Healthcare … | 2024-06-03 |
598 | Selectively Answering Visual Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose Avg BLEU, a calibration score combining the benefits of both sampling and likelihood methods across modalities. |
Julian Martin Eisenschlos; Hernán Maina; Guido Ivetta; Luciana Benotti; | arxiv-cs.CL | 2024-06-03 |
599 | Compositional 4D Dynamic Scenes Understanding with Physics Priors for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a video question answering dataset SuperCLEVR-Physics that focuses on the dynamics properties of objects. |
XINGRUI WANG et. al. | arxiv-cs.CV | 2024-06-02 |
600 | Beyond Boundaries: A Human-like Approach for Question Answering Over Structured and Unstructured Information Sources Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Answering factual questions from heterogeneous sources, such as graphs and text, is a key capacity of intelligent systems. Current approaches either (i) perform question answering … |
Jens Lehmann; Dhananjay Bhandiwad; Preetam Gattogi; S. Vahdati; | Transactions of the Association for Computational … | 2024-06-01 |
601 | Mix-tower: Light Visual Question Answering Framework Based on Exclusive Self-attention Mechanism Related Papers Related Patents Related Grants Related Venues Related Experts View |
Deguang Chen; Jianrui Chen; Luheng Yang; Fanhua Shang; | Neurocomputing | 2024-06-01 |
602 | SPAGHETTI: Open-Domain Question Answering from Heterogeneous Data Sources with Retrieval and Semantic Parsing Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce SPAGHETTI: Semantic Parsing Augmented Generation for Hybrid English information from Text Tables and Infoboxes, a hybrid question-answering (QA) pipeline that utilizes information from heterogeneous knowledge sources, including knowledge base, text, tables, and infoboxes. |
HEIDI C. ZHANG et. al. | arxiv-cs.CL | 2024-06-01 |
603 | The Effect of Clustering Algorithms on Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Rana Husni AlMahmoud; Marwah Alian; | Expert Syst. Appl. | 2024-06-01 |
604 | Passage-specific Prompt Tuning for Passage Reranking in Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose passage-specific prompt tuning for reranking in open-domain question answering (PSPT): a parameter-efficient method that fine-tunes learnable passage-specific soft prompts, incorporating passage-specific knowledge from a limited set of question-passage relevance pairs. |
Xuyang Wu; Zhiyuan Peng; Krishna Sravanthi Rajanala Sai; Hsin-Tai Wu; Yi Fang; | arxiv-cs.CL | 2024-05-31 |
605 | Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking Via Side-by-Side Evaluation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a holistic pipeline for automatic data generation including question generation, answering, and model scoring using an “Evaluator”. |
BERND BOHNET et. al. | arxiv-cs.CL | 2024-05-31 |
606 | GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce GNN-RAG, a novel method for combining language understanding abilities of LLMs with the reasoning abilities of GNNs in a retrieval-augmented generation (RAG) style. |
Costas Mavromatis; George Karypis; | arxiv-cs.CL | 2024-05-30 |
607 | Video Question Answering for People with Visual Impairments Using An Egocentric 360-Degree Camera Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper addresses the daily challenges encountered by visually impaired individuals, such as limited access to information, navigation difficulties, and barriers to social interaction. To alleviate these challenges, we introduce a novel visual question answering dataset. |
Inpyo Song; Minjun Joo; Joonhyung Kwon; Jangwon Lee; | arxiv-cs.CV | 2024-05-30 |
608 | VQA Training Sets Are Self-play Environments for Generating Few-shot Pools Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a technique in which existing training sets can be directly used for constructing computational environments with task metrics as rewards. |
Tautvydas Misiunas; Hassan Mansoor; Jasper Uijlings; Oriana Riva; Victor Carbune; | arxiv-cs.CV | 2024-05-30 |
609 | The First ACM Workshop on AI-Powered Question Answering Systems for Multimedia Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The advent of large language models (LLMs) has energised research in Question-Answering (QA) tasks, enabling responses across varied domains like economics and mathematics. … |
TAI TAN MAI et. al. | Proceedings of the 2024 International Conference on … | 2024-05-30 |
610 | MathChat: Benchmarking Mathematical Reasoning and Instruction Following in Multi-Turn Interactions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces MathChat, a comprehensive benchmark specifically designed to evaluate LLMs across a broader spectrum of mathematical tasks. |
ZHENWEN LIANG et. al. | arxiv-cs.AI | 2024-05-29 |
611 | Evaluating Zero-Shot GPT-4V Performance on 3D Visual Question Answering Benchmarks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: As interest in reformulating the 3D Visual Question Answering (VQA) problem in the context of foundation models grows, it is imperative to assess how these new paradigms influence existing closed-vocabulary datasets. In this case study, we evaluate the zero-shot performance of foundational models (GPT-4 Vision and GPT-4) on well-established 3D VQA benchmarks, namely 3D-VQA and ScanQA. |
Simranjit Singh; Georgios Pavlakos; Dimitrios Stamoulis; | arxiv-cs.CV | 2024-05-29 |
612 | A Multi-Source Retrieval Question Answering Framework Based on RAG Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing RAG paradigms are inevitably influenced by erroneous retrieval information, thereby reducing the reliability and correctness of generated results. Therefore, to improve the relevance of retrieval information, this study proposes a method that replaces traditional retrievers with GPT-3.5, leveraging its vast corpus knowledge to generate retrieval information. |
RIDONG WU et. al. | arxiv-cs.IR | 2024-05-29 |
613 | Peering Into The Mind of Language Models: An Approach for Attribution in Contextual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce a novel method for attribution in contextual question answering, leveraging the hidden state representations of LLMs. |
Anirudh Phukan; Shwetha Somasundaram; Apoorv Saxena; Koustava Goswami; Balaji Vasan Srinivasan; | arxiv-cs.CL | 2024-05-28 |
614 | Conv-CoA: Improving Open-domain Question Answering in Large Language Models Via Conversational Chain-of-Action Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a Conversational Chain-of-Action (Conv-CoA) framework for Open-domain Conversational Question Answering (OCQA). |
Zhenyu Pan; Haozheng Luo; Manling Li; Han Liu; | arxiv-cs.CL | 2024-05-28 |
615 | THREAD: Thinking Deeper with Recursive Spawning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Large language models (LLMs) have shown impressive capabilities across diverse settings, but still struggle as the length and complexity of the context increases. To address this challenge, we propose Thinking Recursively and Dynamically (ThReaD). |
Philip Schroeder; Nathaniel Morgan; Hongyin Luo; James Glass; | arxiv-cs.CL | 2024-05-27 |
616 | Aligning LLMs Through Multi-perspective User Preference Ranking-based Feedback for Programming Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Code Community Question Answering (CCQA) seeks to tackle programming-related issues, thereby boosting productivity in both software engineering and academic research. Recent … |
HONGYU YANG et. al. | ArXiv | 2024-05-27 |
617 | Hawk: Learning to Understand Open-World Video Anomalies Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce Hawk, a novel framework that leverages interactive large Visual Language Models (VLM) to interpret video anomalies precisely. |
JIAQI TANG et. al. | arxiv-cs.CV | 2024-05-27 |
618 | Reason3D: Searching and Reasoning 3D Segmentation Via Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces Reason3D, a novel LLM designed for comprehensive 3D understanding. |
Kuan-Chih Huang; Xiangtai Li; Lu Qi; Shuicheng Yan; Ming-Hsuan Yang; | arxiv-cs.CV | 2024-05-27 |
619 | Accurate and Nuanced Open-QA Evaluation Through Textual Entailment Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose to study the entailment relations of answers to identify more informative and more general system answers, offering a much closer evaluation to human judgment on both NaturalQuestions and TriviaQA while being learning-free. |
Peiran Yao; Denilson Barbosa; | arxiv-cs.CL | 2024-05-26 |
620 | Map-based Modular Approach for Zero-shot Embodied Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a map-based modular approach to EQA, enabling real-world robots to explore and map unknown environments. |
Koya Sakamoto; Daichi Azuma; Taiki Miyanishi; Shuhei Kurita; Motoaki Kawanabe; | arxiv-cs.RO | 2024-05-26 |
621 | Crafting Interpretable Embeddings By Asking LLMs Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce question-answering embeddings (QA-Emb), embeddings where each feature represents an answer to a yes/no question asked to an LLM. |
VINAMRA BENARA et. al. | arxiv-cs.CL | 2024-05-26 |
622 | Text Generation: A Systematic Literature Review of Tasks, Evaluation, and Challenges Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: For each task, we review their relevant characteristics, sub-tasks, and specific challenges (e.g., missing datasets for multi-document summarization, coherence in story generation, and complex reasoning for question answering). |
Jonas Becker; Jan Philip Wahle; Bela Gipp; Terry Ruas; | arxiv-cs.CL | 2024-05-24 |
623 | Efficient Medical Question Answering with Knowledge-Augmented Question Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a method to improve the proficiency of a small language model in the medical domain by employing a two-fold approach. |
JULIEN KHLAUT et. al. | arxiv-cs.CL | 2024-05-23 |
624 | Experimental Design of Extractive Question-Answering Systems: Influence of Error Scores and Answer Length Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question-answering (QA) systems are becoming more and more important because they enable human-computer communication in a natural language. In recent years, significant progress … |
Amer Farea; Frank Emmert-Streib; | J. Artif. Intell. Res. | 2024-05-23 |
625 | LOVA3: Learning to Visual Question Answering, Asking and Assessment Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Current Multimodal Large Language Models (MLLMs) primarily focus on question answering, often neglecting the full potential of questioning and assessment skills. Inspired by the human learning mechanism, we introduce LOVA3, an innovative framework named Learning tO Visual question Answering, Asking and Assessment, designed to equip MLLMs with these additional capabilities. |
Henry Hengyuan Zhao; Pan Zhou; Difei Gao; Zechen Bai; Mike Zheng Shou; | arxiv-cs.CV | 2024-05-23 |
626 | FiDeLiS: Faithful Reasoning in Large Language Model for Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Large language models are often challenged by generating erroneous or ‘hallucinated’ responses, especially in complex reasoning tasks. To mitigate this, we propose a retrieval augmented reasoning method, FiDeLiS, which enhances knowledge graph question answering by anchoring responses to structured, verifiable reasoning paths. |
YUAN SUI et. al. | arxiv-cs.AI | 2024-05-22 |
627 | Efficient and Interpretable Information Retrieval for Product Question Answering with Heterogeneous Data Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we explore the potential of jointly learning dense semantic representation and combining it with the lexical one for ranking candidate information. |
Biplob Biswas; Rajiv Ramnath; | arxiv-cs.LG | 2024-05-21 |
628 | Dataset and Benchmark for Urdu Natural Scenes Text Detection, Recognition and Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a new multi-task Urdu scene text dataset comprising over 1000 natural scene images, which can be used for text detection, recognition, and VQA tasks. |
HIBA MARYAM et. al. | arxiv-cs.CV | 2024-05-21 |
629 | MentalQA: An Annotated Arabic Corpus for Questions and Answers of Mental Healthcare Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce MentalQA, a novel Arabic dataset featuring conversational-style question-and-answer (QA) interactions. |
Hassan Alhuzali; Ashwag Alasmari; Hamad Alsaleh; | arxiv-cs.CL | 2024-05-21 |
630 | OLAPH: Improving Factuality in Biomedical Long-form Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Thus, we introduce MedLFQA, a benchmark dataset reconstructed using long-form question-answering datasets related to the biomedical domain. |
Minbyul Jeong; Hyeon Hwang; Chanwoong Yoon; Taewhoo Lee; Jaewoo Kang; | arxiv-cs.CL | 2024-05-21 |
631 | Causal Event Graph-Guided Language-based Spatiotemporal Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models have excelled at encoding and leveraging language patterns in large text-based corpora for various tasks, including spatiotemporal event-based question … |
KAUSHIK ROY et. al. | AAAI Spring Symposia | 2024-05-20 |
632 | MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we tackle multilingual TEC-VQA by introducing MTVQA, the first benchmark featuring high-quality human expert annotations across 9 diverse languages, consisting of 6,778 question-answer pairs across 2,116 images. |
JINGQUN TANG et. al. | arxiv-cs.CV | 2024-05-20 |
633 | Increasing The LLM Accuracy for Question Answering: Ontologies to The Rescue! Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Building on the observations of our previous research where the inaccurate LLM-generated SPARQL queries followed incorrect paths, we present an approach that consists of 1) Ontology-based Query Check (OBQC): detects errors by leveraging the ontology of the knowledge graph to check if the LLM-generated SPARQL query matches the semantics of the ontology and 2) LLM Repair: uses the error explanations with an LLM to repair the SPARQL query. |
Dean Allemang; Juan Sequeda; | arxiv-cs.AI | 2024-05-19 |
634 | MemeMQA: Multimodal Question Answering for Memes Via Rationale-Based Inferencing Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To extend this research, we introduce MemeMQA, a multimodal question-answering framework aiming to solicit accurate responses to structured questions while providing coherent explanations. |
Siddhant Agarwal; Shivam Sharma; Preslav Nakov; Tanmoy Chakraborty; | arxiv-cs.CL | 2024-05-18 |
635 | StackOverflowVQA: Stack Overflow Visual Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we focus on the questions which need the understanding of images in addition to the question itself. |
Motahhare Mirzaei; Mohammad Javad Pirhadi; Sauleh Eetemadi; | arxiv-cs.CV | 2024-05-17 |
636 | SciQAG: A Framework for Auto-Generated Science Question Answering Dataset with Fine-grained Evaluation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce SciQAG, a novel framework for automatically generating high-quality science question-answer pairs from a large corpus of scientific literature based on large language models (LLMs). |
YUWEI WAN et. al. | arxiv-cs.CL | 2024-05-16 |
637 | FinTextQA: A Dataset for Long-form Financial Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work introduces FinTextQA, a novel dataset for long-form question answering (LFQA) in finance. |
JIAN CHEN et. al. | arxiv-cs.CL | 2024-05-16 |
638 | Towards Better Question Generation in QA-based Event Extraction Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, in QA-based EE, the quality of the questions dramatically affects the extraction accuracy, and how to generate high-quality questions for QA-based EE remains a challenge. In this work, to tackle this challenge, we suggest four criteria to evaluate the quality of a question and propose a reinforcement learning method, RLQG, for QA-based EE that can generate generalizable, high-quality, and context-dependent questions and provides clear guidance to QA models. |
Zijin Hong; Jian Liu; | arxiv-cs.CL | 2024-05-16 |
639 | Exploring The Impact of ChatGPT on Wikipedia Engagement Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we explore Wikipedia user metrics across four areas: page views, unique visitor numbers, edit counts and editor numbers within twelve language instances of Wikipedia. |
Neal Reeves; Wenjie Yin; Elena Simperl; | arxiv-cs.HC | 2024-05-16 |
640 | Question Answering System with Text Mining and Deep Networks Related Papers Related Patents Related Grants Related Venues Related Experts View |
Hüseyin Avni Ardaç; P. Erdoğmuş; | Evol. Syst. | 2024-05-16 |
641 | Prompting-based Synthetic Data Generation for Few-Shot Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: With this motivation, we show that using large language models can improve Question Answering performance on various datasets in the few-shot setting compared to state-of-the-art approaches. For this, we perform data generation leveraging the Prompting framework, suggesting that language models contain valuable task-agnostic knowledge that can be used beyond the common pre-training/fine-tuning scheme. |
Maximilian Schmidt; Andrea Bartezzaghi; Ngoc Thang Vu; | arxiv-cs.CL | 2024-05-15 |
642 | STAR: A Benchmark for Situated Reasoning in Real-World Videos IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces a new benchmark that evaluates the situated reasoning ability via situation abstraction and logic-grounded question answering for real-world videos, called Situated Reasoning in Real-World Videos (STAR Benchmark). |
Bo Wu; Shoubin Yu; Zhenfang Chen; Joshua B Tenenbaum; Chuang Gan; | arxiv-cs.AI | 2024-05-15 |
643 | A Knowledge-Injected Curriculum Pretraining Framework for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, in this paper, we propose a general K nowledge-I njected C urriculum P retraining framework (KICP) to achieve comprehensive KG learning and exploitation for KBQA tasks, which is composed of knowledge injection (KI), knowledge adaptation (KA) and curriculum reasoning (CR). |
XIN LIN et. al. | www | 2024-05-13 |
644 | TIQ: A Benchmark for Temporal Question Answering with Implicit Time Constraints Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Temporal question answering (QA) involves explicit (e.g., …before 2024) or implicit (e.g., …during the Cold War period) time constraints. Implicit constraints are more … |
Zhen Jia; Philipp Christmann; G. Weikum; | Companion Proceedings of the ACM on Web Conference 2024 | 2024-05-13 |
645 | Demonstration of FeVisQA: Free-Form Question Answering Over Data Visualization Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question Answering (QA) systems play a vital role in knowledge acquisition. CodeQA refers to question answering (QA) over source code for code comprehension purposes. However, … |
Yuanfeng Song; Jinwei Lu; Xuefang Zhao; Raymond Chi-Wing Wong; Haodi Zhang; | 2024 IEEE 40th International Conference on Data Engineering … | 2024-05-13 |
646 | TANQ: An Open Domain Dataset of Table Answered Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce TANQ, the first open domain question answering dataset where the answers require building tables from information across multiple sources. |
Mubashara Akhtar; Chenxi Pang; Andreea Marzoca; Yasemin Altun; Julian Martin Eisenschlos; | arxiv-cs.CL | 2024-05-13 |
647 | Harnessing Multi-Role Capabilities of Large Language Models for Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we propose LLMQA, a generalized framework that formulates the ODQA process into three basic steps: query expansion, document selection, and answer generation, combining the superiority of both retrieval-based and generation-based evidence. |
HONGDA SUN et. al. | www | 2024-05-13 |
648 | Causal Question Answering with Reinforcement Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Hence, in this paper, we aim to answer causal questions with a causality graph, a large-scale dataset of causal relations between noun phrases along with the relations’ provenance data. |
Lukas Blübaum; Stefan Heindorf; | www | 2024-05-13 |
649 | KET-QA: A Dataset for Knowledge Enhanced Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose to use a knowledge base (KB) as the external knowledge source for TableQA and construct a dataset KET-QA with fine-grained gold evidence annotation. |
Mengkang Hu; Haoyu Dong; Ping Luo; Shi Han; Dongmei Zhang; | arxiv-cs.CL | 2024-05-13 |
650 | Faithful Temporal Question Answering Over Heterogeneous Sources Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: As implicit questions are sparse in prior benchmarks, we introduce a principled method for generating diverse questions. |
Zhen Jia; Philipp Christmann; Gerhard Weikum; | www | 2024-05-13 |
651 | MedConceptsQA: Open Source Medical Concepts QA Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present MedConceptsQA, a dedicated open source benchmark for medical concepts question answering. |
Ofir Ben Shoham; Nadav Rappoport; | arxiv-cs.CL | 2024-05-12 |
652 | ChartInsights: Evaluating Multimodal Large Language Models for Low-Level Chart Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While recent advancements in multimodal large language models (MLLMs) like GPT-4o have shown promise in high-level ChartQA tasks, such as chart captioning, their effectiveness in low-level ChartQA tasks (e.g., identifying correlations) remains underexplored. In this paper, we address this gap by evaluating MLLMs on low-level ChartQA using a newly curated dataset, ChartInsights, which consists of 22,347 (chart, task, query, answer) covering 10 data analysis tasks across 7 chart types. |
YIFAN WU et. al. | arxiv-cs.CL | 2024-05-11 |
653 | Prompting Large Language Models with Knowledge Graphs for Question Answering Involving Long-tail Facts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Since LLMs have probably seen the majority of factual question-answering datasets already, to facilitate our analysis, we proposed a fully automatic pipeline for creating a benchmark that requires knowledge of long-tail facts for answering the involved questions. |
WENYU HUANG et. al. | arxiv-cs.CL | 2024-05-10 |
654 | CourseGPT-zh: An Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, restricted access to closed-source LLMs via APIs and the difficulty in collecting massive high-quality datasets pose obstacles to the development of large language models in education fields of various courses. Given these challenges, we propose CourseGPT-zh, a course-oriented education LLM that supports customization and low-cost deployment. |
Zheyan Qu; Lu Yin; Zitong Yu; Wenbo Wang; Xing Zhang; | arxiv-cs.CL | 2024-05-07 |
655 | Mitigating Clickbait: An Approach to Spoiler Generation Using Multitask Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study introduces ‘clickbait spoiling’, a novel technique designed to detect, categorize, and generate spoilers as succinct text responses, countering the curiosity induced by clickbait content. |
Sayantan Pal; Souvik Das; Rohini K. Srihari; | arxiv-cs.CL | 2024-05-07 |
656 | S-EQA: Tackling Situational Queries in Embodied Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present and tackle the problem of Embodied Question Answering (EQA) with Situational Queries (S-EQA) in a household environment. |
VISHNU SASHANK DORBALA et. al. | arxiv-cs.RO | 2024-05-07 |
657 | VSA4VQA: Scaling A Vector Symbolic Architecture to Visual Question Answering on Natural Images Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose VSA4VQA – a novel 4D implementation of VSAs that implements a mental representation of natural images for the challenging task of Visual Question Answering (VQA). |
Anna Penzkofer; Lei Shi; Andreas Bulling; | arxiv-cs.CV | 2024-05-06 |
658 | Overview of The EHRSQL 2024 Shared Task on Reliable Text-to-SQL Modeling on Electronic Health Records Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we describe the task of reliable text-to-SQL modeling, the dataset, and the methods and results of the participants. |
Gyubok Lee; Sunjun Kweon; Seongsu Bae; Edward Choi; | arxiv-cs.CL | 2024-05-04 |
659 | UQA: Corpus for Urdu Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces UQA, a novel dataset for question answering and text comprehension in Urdu, a low-resource language with over 70 million native speakers. |
Samee Arif; Sualeha Farid; Awais Athar; Agha Ali Raza; | arxiv-cs.CL | 2024-05-02 |
660 | OmniDrive: A Holistic LLM-Agent Framework for Autonomous Driving with 3D Perception, Reasoning and Planning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, capitalizing on MLLMs’ strong reasoning capabilities for improved planning behavior is challenging since planning requires full 3D situational awareness beyond 2D reasoning. To address this challenge, our work proposes a holistic framework for strong alignment between agent models and 3D driving tasks. |
SHIHAO WANG et. al. | arxiv-cs.CV | 2024-05-02 |
661 | Enhanced Textual Feature Extraction for Visual Question Answering: A Simple Convolutional Approach Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we conduct a comprehensive comparison between complex textual models that leverage long-range dependencies and simpler models focusing on local textual features within a well-established VQA framework. |
Zhilin Zhang; | arxiv-cs.CV | 2024-05-01 |
662 | ConfigILM: A General Purpose Configurable Library for Combining Image and Language Models for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
L. Hackel; Kai Norman Clasen; Begum Demir; | SoftwareX | 2024-05-01 |
663 | Question-Aware Global-Local Video Understanding Network for Audio-Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: As a newly emerging task, audio-visual question answering (AVQA) has attracted research attention. Compared with traditional single-modality (e.g., audio or visual) QA tasks, it … |
Zailong Chen; Lei Wang; Peng Wang; Peng Gao; | IEEE Transactions on Circuits and Systems for Video … | 2024-05-01 |
664 | Video Question Answering With Semantic Disentanglement and Reasoning Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video question answering aims to provide correct answers given complex videos and related questions, posting high requirements of the comprehension ability in both video and … |
Jin Liu; Guoxiang Wang; Jialong Xie; F. Zhou; Huijuan Xu; | IEEE Transactions on Circuits and Systems for Video … | 2024-05-01 |
665 | ZVQAF: Zero-shot Visual Question Answering with Feedback from Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View |
Cheng Liu; Chao Wang; Yan Peng; Zhixu Li; | Neurocomputing | 2024-05-01 |
666 | Suvach — Generated Hindi QA Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes a new benchmark specifically designed for evaluating Hindi EQA models and discusses the methodology to do the same for any task. |
Vaishak Narayanan; Prabin Raj KP; Saifudheen Nouphal; | arxiv-cs.CL | 2024-04-30 |
667 | When to Retrieve: Teaching LLMs to Utilize Information Retrieval Effectively Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we demonstrate how Large Language Models (LLMs) can effectively learn to use an off-the-shelf information retrieval (IR) system specifically when additional context is required to answer a given question. |
Tiziano Labruna; Jon Ander Campos; Gorka Azkune; | arxiv-cs.CL | 2024-04-30 |
668 | QLSC: A Query Latent Semantic Calibrator for Robust Extractive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our work introduces a novel approach, called the “Query Latent Semantic Calibrator (QLSC)”, designed as an auxiliary module for existing MRC models. |
SHENG OUYANG et. al. | arxiv-cs.CL | 2024-04-30 |
669 | ViOCRVQA: Novel Benchmark Dataset and Vision Reader for Visual Question Answering By Understanding Vietnamese Text in Images Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Optical Character Recognition – Visual Question Answering (OCR-VQA) is the task of answering questions about text information contained in images, which has only recently been significantly developed in … |
HUY QUANG PHAM et. al. | ArXiv | 2024-04-29 |
670 | TableVQA-Bench: A Visual Question Answering Benchmark on Multiple Table Domains Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we establish a benchmark for table visual question answering, referred to as the TableVQA-Bench, derived from pre-existing table question-answering (QA) and table structure recognition datasets. |
Yoonsik Kim; Moonbin Yim; Ka Yeon Song; | arxiv-cs.CV | 2024-04-29 |
671 | Multi-Page Document Visual Question Answering Using Self-Attention Scoring Mechanism Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose a novel method and efficient training strategy for multi-page Document VQA tasks. |
Lei Kang; Rubèn Tito; Ernest Valveny; Dimosthenis Karatzas; | arxiv-cs.CV | 2024-04-29 |
672 | Multi-hop Question Answering Over Knowledge Graphs Using Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we evaluate the capability of Large Language Models (LLMs) to answer questions over KGs that involve multiple hops. |
Abir Chakraborty; | arxiv-cs.AI | 2024-04-29 |
673 | QANA: LLM-based Question Generation and Network Analysis for Zero-shot Key Point Analysis and Beyond Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose Question-Answering Network Analysis (QANA), a novel opinion mining framework that utilizes Large Language Models (LLMs) to generate questions from users’ comments, constructs a bipartite graph based on the comments’ answerability to the questions, and applies centrality measures to examine the importance of opinions. |
TOMOKI FUKUMA et. al. | arxiv-cs.CL | 2024-04-28 |
674 | MediFact at MEDIQA-M3G 2024: Medical Question Answering in Dermatology with Multimodal Learning Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: The MEDIQA-M3G 2024 challenge necessitates novel solutions for Multilingual & Multimodal Medical Answer Generation in dermatology (wai Yim et al., 2024a). This paper addresses the … |
Nadia Saeed; | Clinical Natural Language Processing Workshop | 2024-04-27 |
675 | Can A Multichoice Dataset Be Repurposed for Extractive Question Answering? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our aim is to enable others to adapt our approach for the 120+ other language variants in Belebele, many of which are deemed under-resourced. |
TERESA LYNN et. al. | arxiv-cs.CL | 2024-04-26 |
676 | Performance Comparison of Turkish Language Models (Türkçe Dil Modellerinin Performans Karşılaştırması) Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Yet, despite the increasing number of these models, there is no comprehensive comparison of their performance for Turkish. This study aims to fill this gap in the literature. |
EREN DOGAN et. al. | arxiv-cs.CL | 2024-04-25 |
677 | Large Language Models in The Clinic: A Comprehensive Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To better understand LLMs in the clinic, we construct a benchmark ClinicBench. |
FENGLIN LIU et. al. | arxiv-cs.CL | 2024-04-25 |
678 | Fusion of Domain-Adapted Vision and Language Models for Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a medical vision-language model that integrates large vision and language models adapted for the medical domain. |
CUONG NHAT HA et. al. | arxiv-cs.CL | 2024-04-24 |
679 | KS-LLM: Knowledge Selection of Large Language Models with Evidence Document for Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large language models (LLMs) suffer from the hallucination problem and face significant challenges when applied to knowledge-intensive tasks. A promising approach is to leverage … |
XINXIN ZHENG et. al. | ArXiv | 2024-04-24 |
680 | Assessing The Potential Of Mid-Sized Language Models For Clinical QA Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large language models, such as GPT-4 and Med-PaLM, have shown impressive performance on clinical tasks; however, they require access to compute, are closed-source, and cannot be … |
ELLIOT BOLTON et. al. | ArXiv | 2024-04-24 |
681 | Evaluating Tool-Augmented Agents in Remote Sensing Platforms Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Tool-augmented Large Language Models (LLMs) have shown impressive capabilities in remote sensing (RS) applications. However, existing benchmarks assume question-answering input … |
Simranjit Singh; Michael Fore; Dimitrios Stamoulis; | ArXiv | 2024-04-23 |
682 | Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Multimodal LLMs are the natural evolution of LLMs, extending their capabilities beyond the purely textual modality. As research is being carried out to design novel architectures and vision-and-language adapters, in this paper we concentrate on endowing such models with the capability of answering questions that require external knowledge. |
DAVIDE CAFFAGNI et. al. | arxiv-cs.CV | 2024-04-23 |
683 | Retrieval Augmented Generation for Domain-specific Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a novel framework to compile a large question-answer database and develop the approach for retrieval-aware finetuning of a Large Language model. |
SANAT SHARMA et. al. | arxiv-cs.CL | 2024-04-23 |
684 | Generate-on-Graph: Treat LLM As Both Agent and KG in Incomplete Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To handle IKGQA, we propose a training-free method called Generate-on-Graph (GoG), which can generate new factual triples while exploring KGs. |
YAO XU et. al. | arxiv-cs.CL | 2024-04-23 |
685 | RS-LLaVA: A Large Vision-Language Model for Joint Captioning and Question Answering in Remote Sensing Imagery IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In this paper, we delve into the innovative application of large language models (LLMs) and their extension, large vision-language models (LVLMs), in the field of remote sensing … |
Y. Bazi; Laila Bashmal; Mohamad Mahmoud Al Rahhal; Riccardo Ricci; F. Melgani; | Remote. Sens. | 2024-04-23 |
686 | Tree of Reviews: A Tree-based Dynamic Iterative Retrieval Framework for Multi-hop Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multi-hop question answering is a knowledge-intensive complex problem. Large Language Models (LLMs) use their Chain of Thoughts (CoT) capability to reason complex problems step by … |
JIAPENG LI et. al. | ArXiv | 2024-04-22 |
687 | Listen Then See: Video Alignment with Speaker Attention Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a cross-modal alignment and subsequent representation fusion approach that achieves state-of-the-art results (82.06% accuracy) on the Social IQ 2.0 dataset for SIQA. |
Aviral Agrawal; Carlos Mateo Samudio Lezcano; Iqui Balam Heredia-Marin; Prabhdeep Singh Sethi; | arxiv-cs.CV | 2024-04-21 |
688 | Exploring Diverse Methods in Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study explores innovative methods for improving Visual Question Answering (VQA) using Generative Adversarial Networks (GANs), autoencoders, and attention mechanisms. |
PANFENG LI et. al. | arxiv-cs.CV | 2024-04-21 |
689 | MahaSQuAD: Bridging Linguistic Divides in Marathi Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce MahaSQuAD, the first-ever full SQuAD dataset for the Indic language Marathi, consisting of 118,516 training, 11,873 validation, and 11,803 test samples. |
Ruturaj Ghatage; Aditya Kulkarni; Rajlaxmi Patil; Sharvi Endait; Raviraj Joshi; | arxiv-cs.CL | 2024-04-20 |
690 | PDF-MVQA: A Dataset for Multimodal Information Retrieval in PDF-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Through this work, we aim to enhance the capabilities of existing vision-and-language models in handling challenges posed by text-dominant documents in VRD-QA. |
Yihao Ding; Kaixuan Ren; Jiabin Huang; Siwen Luo; Soyeon Caren Han; | arxiv-cs.CV | 2024-04-19 |
691 | LaPA: Latent Prompt Assist Model For Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose the Latent Prompt Assist model (LaPA) for medical visual question answering. |
Tiancheng Gu; Kaicheng Yang; Dongnan Liu; Weidong Cai; | arxiv-cs.CV | 2024-04-19 |
692 | MedThink: Explaining Medical Visual Question Answering Via Multimodal Decision-Making Rationale Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the model interpretability and transparency of existing MedVQA solutions are often limited, posing challenges in understanding their decision-making processes. To address this issue, we devise a semi-automated annotation process to streamline data preparation and build new benchmark MedVQA datasets R-RAD, R-SLAKE and R-Path. |
XIAOTANG GAI et. al. | arxiv-cs.CV | 2024-04-18 |
693 | Evaluating AI for Law: Bridging The Gap with Open-Source Solutions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study evaluates the performance of general-purpose AI, like ChatGPT, in legal question-answering tasks, highlighting significant risks to legal professionals and clients. |
Rohan Bhambhoria; Samuel Dahan; Jonathan Li; Xiaodan Zhu; | arxiv-cs.AI | 2024-04-18 |
694 | Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce Reka Core, Flash, and Edge, a series of powerful multimodal language models trained from scratch by Reka. |
AITOR ORMAZABAL et. al. | arxiv-cs.CL | 2024-04-18 |
695 | Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Furthermore, current datasets may not provide a precise diagnostic for these methods. To tackle these challenges, firstly, we propose a novel dataset, MUSIC-AVQA-R, crafted in two steps: rephrasing questions within the test split of a public dataset (MUSIC-AVQA) and subsequently introducing distribution shifts to split questions. |
JIE MA et. al. | arxiv-cs.CV | 2024-04-18 |
696 | Characterizing LLM Abstention Behavior in Science QA with Context Perturbations Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we study the ability of LLMs to abstain from answering context-dependent science questions when provided insufficient or incorrect context. |
Bingbing Wen; Bill Howe; Lucy Lu Wang; | arxiv-cs.CL | 2024-04-18 |
697 | EuSQuAD: Automatically Translated and Aligned SQuAD2.0 for Basque Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This work presents EuSQuAD, the first initiative dedicated to automatically translating and aligning SQuAD2.0 into Basque, resulting in more than 142k QA examples. |
Aitor García-Pablos; Naiara Perez; Montse Cuadros; Jaione Bengoetxea; | arxiv-cs.CL | 2024-04-18 |
698 | Consistency Training By Synthetic Question Generation for Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Identifying a common modeling error prevalent in previous research, we introduce a new baseline model and compare our model’s performance against it, demonstrating an improvement in results, particularly when dealing with questions that include a substantial amount of historical context. |
Hamed Hematian Hemati; Hamid Beigy; | arxiv-cs.CL | 2024-04-17 |
699 | Language Models Still Struggle to Zero-shot Reason About Time Series Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address this gap, we generate a first-of-its-kind evaluation framework for time series reasoning, including formal tasks and a corresponding dataset of multi-scale time series paired with text captions across ten domains. Using these data, we probe whether language models achieve three forms of reasoning: (1) Etiological Reasoning – given an input time series, can the language model identify the scenario that most likely created it? |
Mike A. Merrill; Mingtian Tan; Vinayak Gupta; Tom Hartvigsen; Tim Althoff; | arxiv-cs.CL | 2024-04-17 |
700 | Knowledge-Enriched Prompt for Low-Resource Named Entity Recognition Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Named Entity Recognition (NER) in low-resource settings aims to identify and categorize entities in a sentence with limited labeled data. Although prompt-based methods have … |
Wenlong Hou; Weidong Zhao; Xianhui Liu; Wenyan Guo; | ACM Transactions on Asian and Low-Resource Language … | 2024-04-17 |
701 | Spiral of Silence: How Is Large Language Model Killing Information Retrieval? – A Case Study on Open Domain Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The practice of Retrieval-Augmented Generation (RAG), which integrates Large Language Models (LLMs) with retrieval systems, has become increasingly prevalent. However, the … |
XIAOYANG CHEN et. al. | Annual Meeting of the Association for Computational … | 2024-04-16 |
702 | CoTAR: Chain-of-Thought Attribution Reasoning with Multi-level Granularity Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce an attribution-oriented Chain-of-Thought reasoning method to enhance the accuracy of attributions. |
Moshe Berchansky; Daniel Fleischer; Moshe Wasserblat; Peter Izsak; | arxiv-cs.CL | 2024-04-16 |
703 | Spiral of Silence: How Is Large Language Model Killing Information Retrieval? — A Case Study on Open Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we construct and iteratively run a simulation pipeline to deeply investigate the short-term and long-term effects of LLM text on RAG systems. |
XIAOYANG CHEN et. al. | arxiv-cs.IR | 2024-04-16 |
704 | ViTextVQA: A Large-Scale Visual Question Answering Dataset for Evaluating Vietnamese Text Comprehension in Images Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In Vietnam, a developing country, resources are still limited and this task remains open. Therefore, we introduce the first large-scale Vietnamese dataset specializing in understanding text appearing in images, which we call ViTextVQA (Vietnamese Text-based Visual Question Answering dataset); it contains over 16,000 images and over 50,000 questions with answers. |
QUAN VAN NGUYEN et. al. | arxiv-cs.CL | 2024-04-16 |
705 | IMCN: Improved Modular Co-attention Networks for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Cheng Liu; Chao Wang; Yan Peng; | Appl. Intell. | 2024-04-16 |
706 | HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Vision Language Models (VLMs) are now the de facto state-of-the-art for a number of tasks including visual question answering, recognising objects, and spatial referral. In … |
Siddhant Bansal; Michael Wray; D. Damen; | ArXiv | 2024-04-15 |
707 | TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: The advent of Large Multimodal Models (LMMs) has sparked a surge in research aimed at harnessing their remarkable reasoning abilities. However, for understanding text-rich images, … |
BOZHI LUAN et. al. | ArXiv | 2024-04-15 |
708 | Context-aware Chatbot Using MLLMs for Cultural Heritage Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multi-modal Large Language Models (MLLMs) are currently an extremely active research topic for the multimedia and computer vision communities, and show a significant impact in … |
Pavan Kartheek Rachabatuni; F. Principi; Paolo Mazzanti; Marco Bertini; | Proceedings of the 15th ACM Multimedia Systems Conference | 2024-04-15 |
709 | M3TQA: Multi-View, Multi-Hop and Multi-Stage Reasoning for Temporal Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Knowledge Graphs (KGs) have attained notable success on Question Answering (QA) tasks. However, the presence of temporal constraints on numerous facts within the real world has … |
Zhiyuan Zha; Pengnian Qi; Xigang Bao; Mengyuan Tian; Biao Qin; | ICASSP 2024 – 2024 IEEE International Conference on … | 2024-04-14 |
710 | Prompting Large Language Models with Fine-Grained Visual Relations from Scene Graph for Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Visual Question Answering (VQA) is a task that requires models to comprehend both questions and images. An increasing number of works are leveraging the strong reasoning … |
JIAPENG LIU et. al. | ICASSP 2024 – 2024 IEEE International Conference on … | 2024-04-14 |
711 | GeMQuAD: Generating Multilingual Question Answering Datasets from Large Language Models Using Few Shot Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose GeMQuAD – a semi-supervised learning approach, extending the WeakDAP framework, applied to a dataset generated through ICL with just one example in the target language using AlexaTM 20B Seq2Seq LLM. |
Amani Namboori; Shivam Mangale; Andy Rosenbaum; Saleh Soltan; | arxiv-cs.CL | 2024-04-14 |
712 | CORAAL QA: A Dataset and Framework for Open Domain Spontaneous Speech Question Answering from Long Audio Files Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper presents a novel dataset (CORAAL QA) and framework for audio question-answering from long audio recordings containing spontaneous speech. The dataset introduced here … |
Natarajan Balaji Shankar; Alexander Johnson; Christina Chance; Hariram Veeramani; Abeer Alwan; | ICASSP 2024 – 2024 IEEE International Conference on … | 2024-04-14 |
713 | Cross-Data Knowledge Graph Construction for LLM-enabled Educational Question-Answering System: A Case Study at HCMUT Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This article proposes a method for automatically constructing a Knowledge Graph from multiple data sources and discusses some initial applications (experimental trials) of KG in conjunction with LLMs for question-answering tasks. |
TUAN BUI et. al. | arxiv-cs.CL | 2024-04-14 |
714 | CuriousLLM: Elevating Multi-Document QA with Reasoning-Infused Knowledge Graph Prompting Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Nevertheless, the original KGP framework necessitates costly fine-tuning with large datasets yet still suffers from LLM hallucination. Therefore, we propose a reasoning-infused LLM agent to enhance this framework. |
Zukang Yang; Zixuan Zhu; | arxiv-cs.CL | 2024-04-13 |
715 | Relational Reasoning and Adaptive Fusion for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Xiang Shen; Dezhi Han; Liang Zong; Zihan Guo; Jie Hua; | Appl. Intell. | 2024-04-13 |
716 | Improving Health Question Answering with Reliable and Time-Aware Evidence Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We discuss the results, highlight interesting examples, and outline challenges for future research, like managing evidence disagreement and crafting user-friendly explanations. |
Juraj Vladika; Florian Matthes; | arxiv-cs.CL | 2024-04-12 |
717 | Small Models Are (Still) Effective Cross-Domain Argument Extractors Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, detailed explorations of these techniques’ ability to actually enable this transfer are lacking. In this work, we provide such a study, exploring zero-shot transfer using both techniques on six major EAE datasets at both the sentence and document levels. |
William Gantt; Aaron Steven White; | arxiv-cs.CL | 2024-04-12 |
718 | Enhancing Visual Question Answering Through Question-Driven Image Captions As Prompts Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a straightforward and efficient question-driven image captioning approach within this pipeline to transfer contextual information into the question-answering (QA) model. |
Övgü Özdemir; Erdem Akagündüz; | arxiv-cs.CV | 2024-04-12 |
719 | Synthetic Dataset Creation and Fine-Tuning of Transformer Models for Question Answering in Serbian Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we focus on generating a synthetic question answering (QA) dataset using an adapted Translate-Align-Retrieve method. |
Aleksa Cvetanović; Predrag Tadić; | arxiv-cs.CL | 2024-04-12 |
720 | LLoCO: Learning Long Contexts Offline Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Processing long contexts remains a challenge for large language models (LLMs) due to the quadratic computational and memory overhead of the self-attention mechanism and the substantial KV cache sizes during generation. We propose LLoCO, a novel approach to address this problem by learning contexts offline through context compression and in-domain parameter-efficient finetuning with LoRA. |
SIJUN TAN et. al. | arxiv-cs.CL | 2024-04-11 |
721 | MM-PhyQA: Multimodal Physics Question-Answering with Multi-image CoT Prompting Related Papers Related Patents Related Grants Related Venues Related Experts View |
AVINASH ANAND et. al. | Pacific-Asia Conference on Knowledge Discovery and Data … | 2024-04-11 |
722 | SurveyAgent: A Conversational System for Personalized and Efficient Research Survey Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces SurveyAgent, a novel conversational system designed to provide personalized and efficient research survey assistance to researchers. |
XINTAO WANG et. al. | arxiv-cs.CL | 2024-04-09 |
723 | Early Prediction of Promising Expert Users on Community Question Answering Sites Related Papers Related Patents Related Grants Related Venues Related Experts View |
P. Roy; Jyoti Prakash Singh; | Int. J. Syst. Assur. Eng. Manag. | 2024-04-09 |
724 | MedExpQA: Multilingual Benchmarking of Large Language Models for Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Finally, the situation is particularly grim if we consider benchmarking LLMs for languages other than English which remains, as far as we know, a totally neglected topic. In order to address these shortcomings, in this paper we present MedExpQA, the first multilingual benchmark based on medical exams to evaluate LLMs in Medical Question Answering. |
Iñigo Alonso; Maite Oronoz; Rodrigo Agerri; | arxiv-cs.CL | 2024-04-08 |
725 | Enhancing Software-Related Information Extraction Via Single-Choice Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper describes our participation in the Shared Task on Software Mentions Disambiguation (SOMD), with a focus on improving relation extraction in scholarly texts through generative Large Language Models (LLMs) using single-choice question-answering. |
Wolfgang Otto; Sharmila Upadhyaya; Stefan Dietze; | arxiv-cs.CL | 2024-04-08 |
726 | PerkwE_COQA: Enhanced Persian Conversational Question Answering By Combining Contextual Keyword Extraction with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents a novel method to elevate the performance of Persian Conversational question-answering (CQA) systems. |
Pardis Moradbeiki; Nasser Ghadiri; | arxiv-cs.CL | 2024-04-08 |
727 | Your Finetuned Large Language Model Is Already A Powerful Out-of-distribution Detector Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: We revisit the likelihood ratio between a pretrained large language model (LLM) and its finetuned variant as a criterion for out-of-distribution (OOD) detection. The intuition … |
Andi Zhang; Tim Z. Xiao; Weiyang Liu; Robert Bamler; Damon Wischik; | ArXiv | 2024-04-07 |
728 | Neural-Symbolic VideoQA: Learning Compositional Spatio-Temporal Reasoning for Real-world Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing approaches struggle to establish effective symbolic reasoning structures, which are crucial for answering compositional spatio-temporal questions. To address this challenge, we propose a neural-symbolic framework called Neural-Symbolic VideoQA (NS-VideoQA), specifically designed for real-world VideoQA tasks. |
Lili Liang; Guanglu Sun; Jin Qiu; Lizhong Zhang; | arxiv-cs.CV | 2024-04-05 |
729 | Which Experimental Design Is Better Suited for VQA Tasks? Eye Tracking Study on Cognitive Load, Performance, and Gaze Allocations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We conducted an eye-tracking user study with 13 participants to investigate the influence of stimulus-question ordering and question modality on participants using visual question-answering (VQA) tasks. |
Sita A. Vriend; Sandeep Vidyapu; Amer Rama; Kun-Ting Chen; Daniel Weiskopf; | arxiv-cs.HC | 2024-04-05 |
730 | KazQAD: Kazakh Open-Domain Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce KazQAD — a Kazakh open-domain question answering (ODQA) dataset — that can be used in both reading comprehension and full ODQA settings, as well as for information retrieval experiments. |
Rustem Yeshpanov; Pavel Efimov; Leonid Boytsov; Ardak Shalkarbayuli; Pavel Braslavski; | arxiv-cs.CL | 2024-04-05 |
731 | TinyVQA: Compact Multimodal Deep Neural Network for Visual Question Answering on Resource-Constrained Devices Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes TinyVQA, a novel multimodal deep neural network for visual question answering tasks that can be deployed on resource-constrained tinyML hardware. |
Hasib-Al Rashid; Argho Sarkar; Aryya Gangopadhyay; Maryam Rahnemoonfar; Tinoosh Mohsenin; | arxiv-cs.CV | 2024-04-04 |
732 | CBR-RAG: Case-Based Reasoning for Retrieval Augmented Generation in LLMs for Legal Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Retrieval-Augmented Generation (RAG) enhances Large Language Model (LLM) output by providing prior knowledge as context to input. This is beneficial for knowledge-intensive and … |
N. WIRATUNGA et. al. | International Conference on Case-Based Reasoning | 2024-04-04 |
733 | Can Small Language Models Help Large Language Models Reason Better?: LM-Guided Chain-of-Thought Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a novel framework, LM-Guided CoT, that leverages a lightweight (i.e., <1B) language model (LM) for guiding a black-box large (i.e., >10B) LM in reasoning tasks. |
JOOYOUNG LEE et. al. | arxiv-cs.CL | 2024-04-04 |
734 | Self-Improvement Programming for Temporal Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Motivated by semantic-parsing-based approaches that explicitly model constraints in questions by generating logical forms with symbolic operators, we design fundamental temporal operators for time constraints and introduce a novel self-improvement Programming method for TKGQA (Prog-TQA). |
ZHUO CHEN et. al. | arxiv-cs.CL | 2024-04-02 |
735 | Enhancing Human-Computer Interaction in Chest X-ray Analysis Using Vision and Language Model with Eye Gaze Patterns Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work proposes a novel approach to enhance human-computer interaction in chest X-ray analysis using Vision-Language Models (VLMs) enhanced with radiologists’ attention by incorporating eye gaze data alongside textual prompts. |
Yunsoo Kim; Jinge Wu; Yusuf Abdulle; Yue Gao; Honghan Wu; | arxiv-cs.CV | 2024-04-02 |
736 | Towards Better Generalization in Open-Domain Question Answering By Mitigating Context Memorization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigate the generalization performance of a retrieval-augmented QA model in two specific scenarios: 1) adapting to updated versions of the same knowledge corpus; 2) switching to completely different knowledge domains. |
Zixuan Zhang; Revanth Gangi Reddy; Kevin Small; Tong Zhang; Heng Ji; | arxiv-cs.CL | 2024-04-02 |
737 | Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper proposes a general and convenient method to cover longer contexts in Open-Domain Question-Answering tasks. |
ZHUO CHEN et. al. | arxiv-cs.CL | 2024-04-02 |
738 | MChartQA: A Universal Benchmark for Multimodal Chart Question Answer Based on Vision-Language Alignment and Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Traditional methods, which typically involve either direct multimodal processing or a table-to-text conversion followed by language model analysis, have limitations in effectively handling these complex scenarios. This paper introduces a novel multimodal chart question-answering model, specifically designed to address these intricate tasks. |
JINGXUAN WEI et. al. | arxiv-cs.CV | 2024-04-01 |
739 | Simple Contrastive Learning in A Self-supervised Manner for Robust Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
SHUWEN YANG et. al. | Comput. Vis. Image Underst. | 2024-04-01 |
740 | Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Previous studies have explored using large multimodal models (LMMs) as reward models to guide preference modeling, but their ability to accurately assess the factuality of generated responses compared to corresponding videos has not been conclusively established. This paper introduces a novel framework that utilizes detailed video captions as a proxy of video content, enabling language models to incorporate this information as supporting evidence for scoring video Question Answering (QA) predictions. |
RUOHONG ZHANG et. al. | arxiv-cs.CV | 2024-04-01 |
741 | Retrieve What You Need: A Mutual Learning Framework for Open-domain Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: An open-domain question answering (QA) system usually follows a retrieve-then-read paradigm, in which a retriever is used to retrieve relevant passages from a large corpus, and … |
Dingmin Wang; Qiuyuan Huang; Matthew Jackson; Jianfeng Gao; | Transactions of the Association for Computational … | 2024-04-01 |
742 | VideoDistill: Language-aware Vision Distillation for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we are inspired by the human recognition and learning pattern and propose VideoDistill, a framework with language-aware (i.e., goal-driven) behavior in both vision perception and answer generation process. |
Bo Zou; Chao Yang; Yu Qiao; Chengbin Quan; Youjian Zhao; | arxiv-cs.CV | 2024-04-01 |
743 | Explainable Multi-hop Question Generation: An End-to-End Approach Without Intermediate Question Labeling Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce an end-to-end question rewriting model that increases question complexity through sequential rewriting. |
Seonjeong Hwang; Yunsu Kim; Gary Geunbae Lee; | arxiv-cs.CL | 2024-03-31 |
744 | How Robust Are The Tabular QA Models for Scientific Tables? A Study Using Customized Dataset Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To investigate the robustness of the existing state-of-the-art QA models on scientific hybrid tabular data, we propose a new dataset, SciTabQA, consisting of 822 question-answer pairs from scientific tables and their descriptions. |
Akash Ghosh; B Venkata Sahith; Niloy Ganguly; Pawan Goyal; Mayank Singh; | arxiv-cs.CL | 2024-03-30 |
745 | DOCMASTER: A Unified Platform for Annotation, Training, & Inference in Document Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces DOCMASTER, a unified platform designed for annotating PDF documents, model training, and inference, tailored to document question-answering. |
Alex Nguyen; Zilong Wang; Jingbo Shang; Dheeraj Mekala; | arxiv-cs.CL | 2024-03-30 |
746 | Multi-hop Question Answering Under Temporal Knowledge Editing IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing models for MQA under KE exhibit poor performance when dealing with questions containing explicit temporal contexts. To address this limitation, we propose a novel framework, namely TEMPoral knowLEdge augmented Multi-hop Question Answering (TEMPLE-MQA). |
KEYUAN CHENG et. al. | arxiv-cs.CL | 2024-03-30 |
747 | How Robust Are The QA Models for Hybrid Scientific Tabular Data? A Study Using Customized Dataset Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question-answering (QA) on hybrid scientific tabular and textual data deals with scientific information, and relies on complex numerical reasoning. In recent years, while tabular … |
Akash Ghosh; Venkata Sahith Bathini; Niloy Ganguly; Pawan Goyal; Mayank Singh; | ArXiv | 2024-03-30 |
748 | Design As Desired: Utilizing Visual Question Answering for Multimodal Pre-training Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we utilize Visual Question Answering (VQA) for multimodal pre-training to guide the framework focusing on targeted pathological features. |
TONGKUN SU et. al. | arxiv-cs.CV | 2024-03-29 |
749 | VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper develops a Versatile and Honest vision language Model (VHM) for remote sensing image analysis. |
CHAO PANG et. al. | arxiv-cs.CV | 2024-03-29 |
750 | Multi-Frame, Lightweight & Efficient Vision-Language Models for Question Answering in Autonomous Driving Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, current approaches to these systems use expensive large language model (LLM) backbones and image encoders, making such systems unsuitable for real-time autonomous driving systems where tight memory constraints exist and fast inference time is necessary. To address these previous issues, we develop EM-VLM4AD, an efficient, lightweight, multi-frame vision language model which performs Visual Question Answering for autonomous driving. |
Akshay Gopalkrishnan; Ross Greer; Mohan Trivedi; | arxiv-cs.CV | 2024-03-28 |
751 | JDocQA: Japanese Document Question Answering Dataset for Generative Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce Japanese Document Question Answering (JDocQA), a large-scale document-based QA dataset, essentially requiring both visual and textual information to answer questions, which comprises 5,504 documents in PDF format and 11,600 annotated question-and-answer instances in Japanese. |
Eri Onami; Shuhei Kurita; Taiki Miyanishi; Taro Watanabe; | arxiv-cs.CL | 2024-03-28 |
752 | An Image Grid Can Be Worth A Video: Zero-shot Video Question Answering Using A VLM IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we introduce a simple yet novel strategy where only a single Vision Language Model (VLM) is utilized. |
Wonkyun Kim; Changin Choi; Wonseok Lee; Wonjong Rhee; | arxiv-cs.CV | 2024-03-27 |
753 | MFORT-QA: Multi-hop Few-shot Open Rich Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the Multi-hop Few-shot Open Rich Table QA (MFORT-QA) approach, which consists of two major steps. |
Che Guan; Mengyu Huang; Peng Zhang; | arxiv-cs.CL | 2024-03-27 |
754 | A Gaze-grounded Visual Question Answering Dataset for Clarifying Ambiguous Japanese Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose the Gaze-grounded VQA dataset (GazeVQA), which clarifies ambiguous questions by focusing on a clarification process complemented by gaze information. |
Shun Inadumi; Seiya Kawano; Akishige Yuguchi; Yasutomo Kawanishi; Koichiro Yoshino; | arxiv-cs.CL | 2024-03-26 |
755 | Denoising Table-Text Retrieval for Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Previous studies in table-text open-domain question answering have two common challenges: firstly, their retrievers can be affected by false-positive labels in training datasets; secondly, they may struggle to provide appropriate evidence for questions that require reasoning across the table. To address these issues, we propose Denoised Table-Text Retriever (DoTTeR). |
Deokhyung Kang; Baikjin Jung; Yunsu Kim; Gary Geunbae Lee; | arxiv-cs.CL | 2024-03-26 |
756 | GPTs and Language Barrier: A Cross-Lingual Legal QA Examination Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we explore the application of Generative Pre-trained Transformers (GPTs) in cross-lingual legal Question-Answering (QA) systems using the COLIEE Task 4 dataset. |
Ha-Thanh Nguyen; Hiroaki Yamada; Ken Satoh; | arxiv-cs.CL | 2024-03-26 |
757 | Can Multiple-choice Questions Really Be Useful in Detecting The Abilities of LLMs? IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The misalignment between the task and the evaluation method demands a thoughtful analysis of MCQ’s efficacy, which we undertake in this paper by evaluating nine LLMs on four question-answering (QA) datasets in two languages: Chinese and English. |
WANGYUE LI et. al. | arxiv-cs.CL | 2024-03-26 |
758 | Intrinsic Subgraph Generation for Interpretable Graph Based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce an interpretable approach for graph-based VQA and demonstrate competitive performance on the GQA dataset. |
Pascal Tilli; Ngoc Thang Vu; | arxiv-cs.CL | 2024-03-26 |
759 | Chain-of-Action: Faithful and Multimodal Question Answering Through Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a Chain-of-Action (CoA) framework for multimodal and retrieval-augmented Question-Answering (QA). |
Zhenyu Pan; Haozheng Luo; Manling Li; Han Liu; | arxiv-cs.CL | 2024-03-25 |
760 | ProCQA: A Large-scale Community-based Programming Question Answering Dataset for Code Search Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce ProCQA, a large-scale programming question answering dataset extracted from the StackOverflow community, offering naturally structured mixed-modal QA pairs. |
Zehan Li; Jianfei Zhang; Chuantao Yin; Yuanxin Ouyang; Wenge Rong; | arxiv-cs.CL | 2024-03-25 |
761 | Synthesize Step-by-Step: Tools, Templates and LLMs As Data Generators for Reasoning-Based Chart VQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we address the lack of reasoning ability by data augmentation. |
Zhuowan Li; Bhavan Jasani; Peng Tang; Shabnam Ghadar; | arxiv-cs.CV | 2024-03-24 |
762 | RetLLM-E: Retrieval-Prompt Strategy for Question-Answering on Student Discussion Forums Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper focuses on using Large Language Models to support teaching assistants in answering questions on large student forums such as Piazza and EdSTEM. Since student questions … |
CHANCHARIK MITRA et. al. | AAAI Conference on Artificial Intelligence | 2024-03-24 |
763 | CyberQ: Generating Questions and Answers for Cybersecurity Education Using Knowledge Graph-Augmented LLMs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Building a skilled cybersecurity workforce is paramount to building a safer digital world. However, the diverse skill set, constantly emerging vulnerabilities, and deployment of … |
Garima Agrawal; Kuntal Pal; Yuli Deng; Huanmin Liu; Yingying Chen; | AAAI Conference on Artificial Intelligence | 2024-03-24 |
764 | Graph Reasoning Transformers for Knowledge-Aware Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Augmenting Language Models (LMs) with structured knowledge graphs (KGs) aims to leverage structured world knowledge to enhance the capability of LMs to complete … |
Ruilin Zhao; Feng Zhao; Liang Hu; Guandong Xu; | AAAI Conference on Artificial Intelligence | 2024-03-24 |
765 | SciSpace Copilot: Empowering Researchers Through Intelligent Reading Assistance Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: We introduce SciSpace Copilot, an AI research assistant that helps in understanding and reading research papers faster by providing a plethora of features. Answering questions … |
TRINITA ROY et. al. | AAAI Conference on Artificial Intelligence | 2024-03-24 |
766 | Explore Until Confident: Efficient Exploration for Embodied Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We consider the problem of Embodied Question Answering (EQA), which refers to settings where an embodied agent such as a robot needs to actively explore an environment to gather information until it is confident about the answer to a question. In this work, we leverage the strong semantic reasoning capabilities of large vision-language models (VLMs) to efficiently explore and answer such questions. |
ALLEN Z. REN et. al. | arxiv-cs.RO | 2024-03-23 |
767 | Awakening Augmented Generation: Learning to Awaken Internal Knowledge of Large Language Models for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Recent works indicate that LLMs model rich knowledge, but it is often not effectively activated and awakened. Inspired by this, we propose a novel knowledge-augmented framework, $\textbf{Awakening-Augmented-Generation}$ (AAG), which mimics the human ability to answer questions using only thinking and recalling to compensate for knowledge gaps, thereby awakening relevant knowledge in LLMs without relying on external resources. |
HUANXUAN LIAO et. al. | arxiv-cs.CL | 2024-03-22 |
768 | Surgical-LVLM: Learning to Adapt Large Vision-Language Model for Grounded Visual Question Answering in Robotic Surgery Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Recent advancements in Surgical Visual Question Answering (Surgical-VQA) and related region grounding have shown great promise for robotic and medical applications, addressing the … |
GUAN-FENG WANG et. al. | ArXiv | 2024-03-22 |
769 | Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models Through Question Complexity IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose a novel adaptive QA framework, that can dynamically select the most suitable strategy for (retrieval-augmented) LLMs from the simplest to the most sophisticated ones based on the query complexity. |
Soyeong Jeong; Jinheon Baek; Sukmin Cho; Sung Ju Hwang; Jong C. Park; | arxiv-cs.CL | 2024-03-21 |
770 | Multi-Agent VQA: Exploring Multi-Agent Foundation Models in Zero-Shot Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose an adaptive multi-agent system, named Multi-Agent VQA, to overcome the limitations of foundation models in object detection and counting by using specialized agents as tools. |
Bowen Jiang; Zhijun Zhuang; Shreyas S. Shivakumar; Dan Roth; Camillo J. Taylor; | arxiv-cs.CV | 2024-03-21 |
771 | Large Language Models for Multi-Choice Question Classification of Medical Subjects Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The aim of this paper is to evaluate whether large language models trained on multi-choice question data can be used to discriminate between medical subjects. |
Víctor Ponce-López; | arxiv-cs.CL | 2024-03-21 |
772 | Language Repository for Long Video Understanding IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce a Language Repository (LangRepo) for LLMs, that maintains concise and structured information as an interpretable (i.e., all-textual) representation. |
Kumara Kahatapitiya; Kanchana Ranasinghe; Jongwoo Park; Michael S. Ryoo; | arxiv-cs.CV | 2024-03-21 |
773 | Context Quality Matters in Training Fusion-in-Decoder for Extractive Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Specifically, as context quality during training increases, FiD models tend to attend more uniformly to each passage in context. |
Kosuke Akimoto; Kunihiro Takeoka; Masafumi Oyamada; | arxiv-cs.CL | 2024-03-21 |
774 | Improved Baselines for Data-efficient Perceptual Augmentation of LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While different approaches have been explored to interface LLMs with “perceptual backbones” that process, e.g., visual or audio data, they are often explored for different tasks, different datasets, and using different perceptual backbones and language models, hindering direct comparison of the interfacing mechanisms. To remedy this lack of comparability between methods, we present an extensive experimental evaluation of different interfacing mechanisms, across multiple tasks (including image, video, and audio captioning as well as visual question answering), datasets and backbones, paying special attention to low-data settings. |
Théophane Vallaeys; Mustafa Shukor; Matthieu Cord; Jakob Verbeek; | arxiv-cs.CV | 2024-03-20 |
775 | Syn-QA2: Evaluating False Assumptions in Long-tail Questions with Synthetic QA Datasets Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we introduce Syn-(QA)$^2$, a set of two synthetically generated QA datasets: one generated using perturbed relations from Wikidata, and the other by perturbing HotpotQA (Yang et al. 2018). |
Ashwin Daswani; Rohan Sawant; Najoung Kim; | arxiv-cs.CL | 2024-03-18 |
776 | Dr3: Ask Large Language Models Not to Give Off-Topic Answers in Open Domain Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This issue of off-topic answers accounts for approximately one-third of incorrect answers, yet remains underexplored despite its significance. To alleviate this issue, we propose the Discriminate->Re-Compose->Re-Solve->Re-Decompose (Dr3) mechanism. |
YUAN GAO et. al. | arxiv-cs.CL | 2024-03-18 |
777 | Enhancing Event Causality Identification with Rationale and Structure-Aware Causal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a multi-task learning framework to enhance event causality identification with rationale and structure-aware causal question answering. |
Baiyan Zhang; Qin Chen; Jie Zhou; Jian Jin; Liang He; | arxiv-cs.CL | 2024-03-17 |
778 | RetinaQA: A Robust Knowledge Base Question Answering Model for Both Answerable and Unanswerable Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Recent research has found that such models, when superficially adapted to detect answerability, struggle to satisfactorily identify the different categories of unanswerable questions, and simultaneously preserve good performance for answerable questions. Towards addressing this issue, we propose RetinaQA, a new KBQA model that unifies two key ideas in a single KBQA architecture: (a) discrimination over candidate logical forms, rather than generating these, for handling schema-related unanswerability, and (b) sketch-filling-based construction of candidate logical forms for handling data-related unanswerability. |
Prayushi Faldu; Indrajit Bhattacharya; | arxiv-cs.CL | 2024-03-16 |
779 | Knowledge Condensation and Reasoning for Knowledge-based VQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the challenge, we propose two synergistic models: Knowledge Condensation model and Knowledge Reasoning model. |
DONGZE HAO et. al. | arxiv-cs.CV | 2024-03-15 |
780 | Few-Shot Image Classification and Segmentation As Visual Question Answering Using Vision-Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce the Vision-Instructed Segmentation and Evaluation (VISE) method that transforms the FS-CS problem into the Visual Question Answering (VQA) problem, utilising Vision-Language Models (VLMs), and addresses it in a training-free manner. |
Tian Meng; Yang Tao; Ruilin Lyu; Wuliang Yin; | arxiv-cs.CV | 2024-03-15 |
781 | Adversarial Training with OCR Modality Perturbation for Scene-Text Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a multimodal adversarial training architecture with spatial awareness capabilities. |
Zhixuan Shen; Haonan Luo; Sijia Li; Tianrui Li; | arxiv-cs.CV | 2024-03-14 |
782 | DAM: Dynamic Adapter Merging for Continual Video QA Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present a parameter-efficient method for continual video question-answering (VidQA) learning. |
FENG CHENG et. al. | arxiv-cs.CV | 2024-03-13 |
783 | RAGGED: Towards Informed Design of Retrieval Augmented Generation Systems Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To systematically find the optimal configuration, we introduce RAGGED, a framework for analyzing RAG configurations across various DBQA tasks. |
Jennifer Hsia; Afreen Shaikh; Zhiruo Wang; Graham Neubig; | arxiv-cs.CL | 2024-03-13 |
784 | MoleculeQA: A Dataset to Evaluate Factual Accuracy in Molecular Comprehension Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To rectify the absence of factual evaluation, we present MoleculeQA, a novel question answering (QA) dataset which possesses 62K QA pairs over 23K molecules. |
XINGYU LU et. al. | arxiv-cs.CL | 2024-03-12 |
785 | Answering Diverse Questions Via Text Attached with Key Audio-Visual Clues Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Indeed, the naturally heterogeneous relationship between audiovisuals and text makes perfect fusion challenging. To prevent high-level audio-visual semantics from weakening the network’s adaptability to diverse question types, we propose a framework that performs mutual correlation distillation (MCD) to aid question inference. |
Qilang Ye; Zitong Yu; Xin Liu; | arxiv-cs.CV | 2024-03-11 |
786 | InfiBench: Evaluating The Question-Answering Capabilities of Code Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, they are insufficient to cover the full range of expected capabilities of code LLMs, which span beyond code generation to answering diverse coding-related questions. To fill this gap, we propose InfiBench, the first large-scale freeform question-answering (QA) benchmark for code to our knowledge, comprising 234 carefully selected high-quality Stack Overflow questions that span across 15 programming languages. |
LINYI LI et. al. | arxiv-cs.SE | 2024-03-10 |
787 | KG-Rank: Enhancing Large Language Models for Medical QA with Knowledge Graphs and Ranking Techniques IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we develop an augmented LLM framework, KG-Rank, which leverages a medical knowledge graph (KG) along with ranking and re-ranking techniques, to improve the factuality of long-form question answering (QA) in the medical domain. |
RUI YANG et. al. | arxiv-cs.CL | 2024-03-09 |
788 | SnapNTell: Enhancing Entity-Centric Visual Question Answering with Retrieval Augmented Multimodal LLM Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce a novel evaluative benchmark named \textbf{SnapNTell}, specifically tailored for entity-centric VQA. |
JIELIN QIU et. al. | arxiv-cs.CV | 2024-03-07 |
789 | Can’t Remember Details in Long Documents? You Need Some R&R Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Long-context large language models (LLMs) hold promise for tasks such as question-answering (QA) over long documents, but they tend to miss important information in the middle of context documents (arXiv:2307.03172v3). Here, we introduce $\textit{R&R}$ — a combination of two novel prompt-based methods called $\textit{reprompting}$ and $\textit{in-context retrieval}$ (ICR) — to alleviate this effect in document-based QA. |
Devanshu Agrawal; Shang Gao; Martin Gajek; | arxiv-cs.CL | 2024-03-07 |
790 | Benchmarking Hallucination in Large Language Models Based on Unanswerable Math Word Problem Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a new method for evaluating LLM hallucination in Question Answering (QA) based on the unanswerable math word problem (MWP). |
YUHONG SUN et. al. | arxiv-cs.CL | 2024-03-06 |
791 | Evaluating The Elementary Multilingual Capabilities of Large Language Models with MultiQ IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Recent research shows that, despite limits in their intended use, people prompt LLMs in many different languages. Therefore, in this paper, we investigate the basic multilingual capabilities of state-of-the-art open LLMs beyond their intended use. |
Carolin Holtermann; Paul Röttger; Timm Dill; Anne Lauscher; | arxiv-cs.CL | 2024-03-06 |
792 | Are Language Models Puzzle Prodigies? Algorithmic Puzzles Unveil Serious Challenges in Multimodal Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces the novel task of multimodal puzzle solving, framed within the context of visual question-answering. |
Deepanway Ghosal; Vernon Toh Yan Han; Chia Yew Ken; Soujanya Poria; | arxiv-cs.CV | 2024-03-06 |
793 | Enhancing Generalization in Medical Visual Question Answering Tasks Via Gradient-Guided Model Perturbation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a method that incorporates gradient-guided parameter perturbations to the visual encoder of the multimodality model during both pre-training and fine-tuning phases, to improve model generalization for downstream medical VQA tasks. |
Gang Liu; Hongyang Li; Zerui He; Shenjun Zhong; | arxiv-cs.CV | 2024-03-05 |
794 | Vision-Language Models for Medical Report Generation and Visual Question Answering: A Review IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Medical vision-language models (VLMs) combine computer vision and natural language processing to analyze visual and textual medical data. |
Iryna Hartsock; Ghulam Rasool; | arxiv-cs.CV | 2024-03-04 |
795 | KorMedMCQA: Multi-Choice Question Answering Benchmark for Korean Healthcare Professional Licensing Examinations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce KorMedMCQA, the first Korean multiple-choice question answering (MCQA) benchmark derived from Korean healthcare professional licensing examinations, covering the years 2012 to 2023. |
Sunjun Kweon; Byungjin Choi; Minkyu Kim; Rae Woong Park; Edward Choi; | arxiv-cs.CL | 2024-03-03 |
796 | Automatic Question-Answer Generation for Long-Tail Knowledge Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Pretrained Large Language Models (LLMs) have gained significant attention for addressing open-domain Question Answering (QA). While they exhibit high accuracy in answering … |
ROHAN KUMAR et. al. | ArXiv | 2024-03-03 |
797 | CR-LT-KGQA: A Knowledge Graph Question Answering Dataset Requiring Commonsense Reasoning and Long-Tail Knowledge Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Knowledge graph question answering (KGQA) is a well-established field that seeks to provide factual answers to natural language (NL) questions by leveraging knowledge graphs … |
Willis Guo; Armin Toroghi; Scott Sanner; | ArXiv | 2024-03-03 |
798 | Right for Right Reasons: Large Language Models for Verifiable Commonsense Knowledge Graph Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Knowledge Graph Question Answering (KGQA) methods seek to answer Natural Language questions using the relational information stored in Knowledge Graphs (KGs). With the recent … |
Armin Toroghi; Willis Guo; Mohammad Mahdi Torabi pour; Scott Sanner; | ArXiv | 2024-03-03 |
799 | LocalRQA: From Generating Data to Locally Training, Testing, and Deploying Retrieval-Augmented QA Systems Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose LocalRQA, an open-source toolkit that features a wide selection of model training algorithms, evaluation methods, and deployment tools curated from the latest research. |
Xiao Yu; Yunan Lu; Zhou Yu; | arxiv-cs.CL | 2024-03-01 |
800 | XMQAs: Constructing Complex-Modified Question-Answering Dataset for Robust Question Understanding IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question understanding is an important issue to the success of a Knowledge-based Question Answering (KBQA) system. However, existing studies do not pay enough attention to this … |
Yuyan Chen; Yanghua Xiao; Zhixu Li; Bang Liu; | IEEE Transactions on Knowledge and Data Engineering | 2024-03-01 |
801 | Let LLMs Take on The Latest Challenges! A Chinese Dynamic Question Answering Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To promote the improvement of Chinese LLMs’ ability to answer dynamic questions, in this paper, we introduce CDQA, a Chinese Dynamic QA benchmark containing question-answer pairs related to the latest news on the Chinese Internet. |
ZHIKUN XU et. al. | arxiv-cs.CL | 2024-02-29 |
802 | Prompting Explicit and Implicit Knowledge for Multi-hop Question Answering Based on Human Reading Process Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we introduce a Prompting Explicit and Implicit knowledge (PEI) framework, which uses prompts to connect explicit and implicit knowledge, aligning with human reading process for multi-hop QA. |
Guangming Huang; Yunfei Long; Cunjin Luo; Jiaxing Shen; Xia Sun; | arxiv-cs.CL | 2024-02-29 |
803 | Can GPT Improve The State of Prior Authorization Via Guideline Based Automated Question Answering? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we evaluate whether GPT can validate numerous key factors, in turn helping health plans reach a decision drastically faster. |
Shubham Vatsal; Ayush Singh; Shabnam Tafreshi; | arxiv-cs.CL | 2024-02-28 |
804 | The First Place Solution of WSDM Cup 2024: Leveraging Large Language Models for Conversational Multi-Doc QA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce our winning approach for the Conversational Multi-Doc QA challenge in WSDM Cup 2024, which exploits the superior natural language understanding and generation capability of Large Language Models (LLMs). |
Yiming Li; Zhao Zhang; | arxiv-cs.CL | 2024-02-28 |
805 | Benchmarking Large Language Models on Answering and Explaining Challenging Medical Questions IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Moreover, the lack of reference explanations means we cannot easily evaluate the reasoning of model decisions, a crucial component of supporting doctors in making complex medical decisions. To address these challenges, we construct two new datasets: JAMA Clinical Challenge and Medbullets. |
Hanjie Chen; Zhouxiang Fang; Yash Singla; Mark Dredze; | arxiv-cs.CL | 2024-02-28 |
806 | Researchy Questions: A Dataset of Multi-Perspective, Decompositional Questions for LLM Web Agents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present Researchy Questions, a dataset of search engine queries tediously filtered to be non-factoid, “decompositional” and multi-perspective. |
CORBY ROSSET et. al. | arxiv-cs.CL | 2024-02-27 |
807 | BlendSQL: A Scalable Dialect for Unifying Hybrid Question Answering in Relational Algebra Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce BlendSQL, a superset of SQLite to act as a unified dialect for orchestrating reasoning across both unstructured and structured data. |
Parker Glenn; Parag Pravin Dakle; Liang Wang; Preethi Raghavan; | arxiv-cs.CL | 2024-02-27 |
808 | JMLR: Joint Medical LLM and Retrieval Training for Enhancing Reasoning and Professional Question Answering Capability IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Unlike previous methods in RAG where the retrieval model was trained separately from the LLM, we introduce JMLR (for Jointly trains LLM and information Retrieval) during the fine-tuning phase. |
Junda Wang; Zhichao Yang; Zonghai Yao; Hong Yu; | arxiv-cs.CL | 2024-02-27 |
809 | REAR: A Relevance-Aware Retrieval-Augmented Framework for Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Despite the extensive efforts on RAG research, in existing methods, LLMs cannot precisely assess the relevance of retrieved documents, thus likely leading to misleading or even incorrect utilization of external knowledge (i.e., retrieved documents). To address this issue, in this paper, we propose REAR, a RElevance-Aware Retrieval-augmented approach for open-domain question answering (QA). |
YUHAO WANG et. al. | arxiv-cs.CL | 2024-02-27 |
810 | Unsupervised Multiple Choices Question Answering Via Universal Corpus Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel framework designed to generate synthetic MCQA data based solely on contexts from the universal domain without relying on any form of manual annotation. |
Qin Zhang; Hao Ge; Xiaojun Chen; Meng Fang; | arxiv-cs.CL | 2024-02-27 |
811 | GigaPevt: Multimodal Medical Assistant Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This demo paper presents the GigaPevt, the first multimodal medical assistant that combines the dialog capabilities of large language models with specialized medical models. |
PAVEL BLINOV et. al. | arxiv-cs.AI | 2024-02-26 |
812 | Two-stage Generative Question Answering on Temporal Knowledge Graph Using Large Language Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Temporal knowledge graph question answering (TKGQA) poses a significant challenge, due to the temporal constraints hidden in questions and the answers sought from dynamic … |
YIFU GAO et. al. | arxiv-cs.CL | 2024-02-26 |
813 | SuRe: Summarizing Retrievals Using Answer Candidates for Open-domain QA of LLMs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we design a simple yet effective framework to enhance open-domain QA (ODQA) with LLMs, based on the summarized retrieval (SuRe). |
JAEHYUNG KIM et. al. | iclr | 2024-02-26 |
814 | The All-Seeing Project: Towards Panoptic Visual Recognition and Understanding of The Open World IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present the All-Seeing (AS) project: a large-scale dataset and model for recognizing and understanding everything in the open world. Using a scalable data engine that incorporates human feedback and efficient models in the loop, we create a new dataset (AS-1B) with over 1.2 billion regions annotated with semantic tags, question-answering pairs, and detailed captions. |
WEIYUN WANG et. al. | iclr | 2024-02-26 |
815 | Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: With the augmentation of a retrieval module, open-source Large Language Models (LLMs) can produce coherent answers often with different focuses, but are still sub-optimal in terms of reliable evidence selection and in-depth question analysis. In this paper, we propose a novel Chain-of-Discussion framework to leverage the synergy among multiple open-source LLMs, aiming to provide more correct and more comprehensive answers for open-ended QA, although they are not strong enough individually. |
Mingxu Tao; Dongyan Zhao; Yansong Feng; | arxiv-cs.CL | 2024-02-26 |
816 | RAPPER: Reinforced Rationale-Prompted Paradigm for Natural Language Explanation in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In practice, one might encounter explanations which lack informativeness or contradict visual-grounded facts, known as implausibility and hallucination problems, respectively. To tackle these challenging issues, we consider the task of visual question answering (VQA) and introduce Rapper, a two-stage Reinforced Rationale-Prompted Paradigm. |
KAI-PO CHANG et. al. | iclr | 2024-02-26 |
817 | CABINET: Content Relevance-based Noise Reduction for Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To mitigate this, we propose CABINET (Content RelevAnce-Based NoIse ReductioN for TablE QuesTion-Answering) – a framework to enable LLMs to focus on relevant tabular data by suppressing extraneous information. We release our code and datasets here. |
SOHAN PATNAIK et. al. | iclr | 2024-02-26 |
818 | Bootstrapping Variational Information Pursuit with Large Language and Vision Models for Interpretable Image Classification Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This limits V-IP’s application to small-scale tasks where manual data annotation is feasible. In this work, we focus on image classification tasks and propose to relieve this bottleneck by leveraging pretrained language and vision models. |
Aditya Chattopadhyay; Kwan Ho Ryan Chan; Rene Vidal; | iclr | 2024-02-26 |
819 | EQA-MX: Embodied Question Answering Using Multimodal Expression Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we have introduced 8 novel embodied question answering (EQA) tasks to develop learning models to comprehend embodied questions with multimodal expressions. We have developed a novel large-scale dataset, EQA-MX, with over 8 million diverse embodied QA data samples involving multimodal expressions from multiple visual and verbal perspectives. |
Md Mofijul Islam; Alexi Gladstone; Riashat Islam; Tariq Iqbal; | iclr | 2024-02-26 |
820 | Deep Learning Approaches for Improving Question Answering Systems in Hepatocellular Carcinoma Research IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Models such as BERT and GPT-3, trained on vast amounts of data, have revolutionized language understanding and generation. These pre-trained models serve as robust bases for various tasks including semantic understanding, intelligent writing, and reasoning, paving the way for a more generalized form of artificial intelligence. |
Shuning Huo; Yafei Xiang; Hanyi Yu; Mengran Zhu; Yulu Gong; | arxiv-cs.CL | 2024-02-25 |
821 | PerLTQA: A Personal Long-Term Memory Dataset for Memory Classification, Retrieval, and Synthesis in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Based on PerLTQA, we propose a novel framework for memory integration and generation, consisting of three main components: Memory Classification, Memory Retrieval, and Memory Synthesis. |
YIMING DU et. al. | arxiv-cs.CL | 2024-02-25 |
822 | Bridging The Gap Between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Integrating proposed mechanisms above, we present BridgeQA, that offers a fresh perspective on multi-modal transformer-based architectures for 3D-VQA. |
Wentao Mo; Yang Liu; | arxiv-cs.CV | 2024-02-24 |
823 | Predicting Semantic Category of Answers for Question Answering Systems Using Transformers: A Transfer Learning Approach Related Papers Related Patents Related Grants Related Venues Related Experts View |
S. C. M.; Jayaraman Prem Prakash; Varun Sai Alaparthi; | Multim. Tools Appl. | 2024-02-24 |
824 | Biomedical Entity Linking As Multiple Choice Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Although biomedical entity linking (BioEL) has made significant progress with pre-trained language models, challenges still exist for fine-grained and long-tailed entities. To address these challenges, we present BioELQA, a novel model that treats Biomedical Entity Linking as Multiple Choice Question Answering. |
Zhenxi Lin; Ziheng Zhang; Xian Wu; Yefeng Zheng; | arxiv-cs.CL | 2024-02-23 |
825 | VISREAS: Complex Visual Reasoning with Unanswerable Questions Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Verifying a question’s validity before answering is crucial in real-world applications, where users may provide imperfect instructions. In this scenario, an ideal model should … |
Syeda Nahida Akter; Sangwu Lee; Yingshan Chang; Yonatan Bisk; Eric Nyberg; | Annual Meeting of the Association for Computational … | 2024-02-23 |
826 | Triad: A Framework Leveraging A Multi-Role LLM-based Agent to Solve Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present Triad, a unified framework that utilizes an LLM-based agent with three roles for KBQA tasks. |
CHANG ZONG et. al. | arxiv-cs.CL | 2024-02-22 |
827 | Leveraging Large Language Models for Concept Graph Recovery and Question Answering in NLP Education Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To tackle TutorQA queries, we present CGLLM, a pipeline integrating concept graphs with LLMs for answering diverse questions. |
RUI YANG et. al. | arxiv-cs.CL | 2024-02-22 |
828 | Word-Sequence Entropy: Towards Uncertainty Estimation in Free-Form Medical Question Answering Applications and Beyond Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces Word-Sequence Entropy (WSE), a method that calibrates uncertainty at both the word and sequence levels, considering semantic relevance. |
ZHIYUAN WANG et. al. | arxiv-cs.CL | 2024-02-21 |
829 | PQA: Zero-shot Protein Question Answering for Free-form Scientific Enquiry with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Existing datasets suffer from biases, noise, and lack of evolutionary context, while current evaluation methods fail to accurately assess model performance. We introduce the Pika framework to overcome these limitations. |
Eli M Carrami; Sahand Sharifzadeh; | arxiv-cs.LG | 2024-02-21 |
830 | ActiveRAG: Autonomously Knowledge Assimilation and Accommodation Through Retrieval-Augmented Agents Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce ActiveRAG, a multi-agent framework that mimics human learning behavior to help LLMs actively engage with and learn from retrieved evidence. |
ZHIPENG XU et. al. | arxiv-cs.CL | 2024-02-21 |
831 | LLMs Meet Long Video: Advancing Long Video Question Answering with An Interactive Visual Adapter in LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, this approach incurs high computational costs due to the extensive array of video tokens, experiences reduced visual clarity as a consequence of token aggregation, and confronts challenges arising from irrelevant visual tokens while answering video-related questions. To alleviate these issues, we present an Interactive Visual Adapter (IVA) within LLMs, designed to enhance interaction with fine-grained visual elements. |
Yunxin Li; Xinyu Chen; Baotian Hu; Min Zhang; | arxiv-cs.CL | 2024-02-21 |
832 | Self-DC: When to Retrieve and When to Generate? Self Divide-and-Conquer for Compositional Unknown Questions IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we propose the first Compositional unknown Question-Answering dataset (CuQA), and introduce a Self Divide-and-Conquer (Self-DC) framework to empower LLMs to adaptively call different methods on-demand, resulting in better performance and efficiency. |
HONGRU WANG et. al. | arxiv-cs.CL | 2024-02-20 |
833 | Object Attribute Matters in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel VQA approach from the perspective of utilizing object attribute, aiming to achieve better object-level visual-language alignment and multimodal scene understanding. |
Peize Li; Qingyi Si; Peng Fu; Zheng Lin; Yan Wang; | aaai | 2024-02-20 |
834 | STAIR: Spatial-Temporal Reasoning with Auditable Intermediate Results for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Though neural module networks are already widely studied on image-text tasks, applying them to videos is a non-trivial task, as reasoning on videos requires different abilities. In this paper, we define a set of basic video-text sub-tasks for video question answering and design a set of lightweight modules to complete them. |
Yueqian Wang; Yuxuan Wang; Kai Chen; Dongyan Zhao; | aaai | 2024-02-20 |
835 | Cross-Modal Feature Distribution Calibration for Few-Shot Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Currently, most of the few-shot VQA methods are confined to simply extending few-shot classification methods to cross-modal tasks while ignoring the spatial distribution properties of multimodal features and cross-modal information interaction. To address this problem, we propose a novel Cross-modal feature Distribution Calibration Inference Network (CDCIN) in this paper, where a new concept named visual information entropy is proposed to realize multimodal features distribution calibration by cross-modal information interaction for more effective few-shot VQA. |
Jing Zhang; Xiaoqiang Liu; Mingzhe Chen; Zhe Wang; | aaai | 2024-02-20 |
836 | Object-Aware Adaptive-Positivity Learning for Audio-Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose to explicitly consider fine-grained visual objects in video frames (object-level clues) and explore the multi-modal relations (\textit{i.e.}, the object, audio, and question) in terms of feature interaction and model optimization. |
Zhangbin Li; Dan Guo; Jinxing Zhou; Jing Zhang; Meng Wang; | aaai | 2024-02-20 |
837 | Question Calibration and Multi-Hop Modeling for Temporal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: (II) They neither emphasize the graph structure between entities nor explicitly model the multi-hop relationship in the graph, which will make it difficult to solve complex multi-hop question answering. To alleviate this problem, we propose a novel Question Calibration and Multi-Hop Modeling (QC-MHM) approach. |
Chao Xue; Di Liang; Pengfei Wang; Jing Zhang; | arxiv-cs.CL | 2024-02-20 |
838 | Making Natural Language Reasoning Explainable and Faithful Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this talk, we will focus on (1) our design of leveraging structured information (that is grounded to the context), for the explainable complex question answering and reasoning; (2) our multi-module interpretable framework for inductive reasoning, which conducts step-wise faithful reasoning with iterative feedback. |
Xinya Du; | aaai | 2024-02-20 |
839 | Code-Style In-Context Learning for Knowledge-Based Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, current powerful LLMs have little exposure to logic forms during pre-training, resulting in a high format error rate. To solve this problem, we propose a code-style in-context learning method for KBQA, which converts the generation process of unfamiliar logical form into the more familiar code generation process for LLMs. |
Zhijie Nie; Richong Zhang; Zhongyuan Wang; Xudong Liu; | aaai | 2024-02-20 |
840 | Knowledge Graph Prompting for Multi-Document Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, few works explore this paradigm in multi-document question answering (MD-QA), a task demanding a thorough understanding of the logical associations among the contents and structures of documents. To fill this crucial gap, we propose a Knowledge Graph Prompting (KGP) method to formulate the right context in prompting LLMs for MD-QA, which consists of a graph construction module and a graph traversal module. |
YU WANG et. al. | aaai | 2024-02-20 |
841 | Bidirectional Contrastive Split Learning for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work aims to tackle privacy-preserving VQA by decoupling a multi-modal model into representation modules and a contrastive module, leveraging inter-module gradients sharing and inter-client weight sharing. |
Yuwei Sun; Hideya Ochiai; | aaai | 2024-02-20 |
842 | YTCommentQA: Video Question Answerability in Instructional Videos Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Discerning whether a question can be answered by video content is challenging due to the multi-modal nature of videos, where visual and verbal information are intertwined. To bridge this gap, we present the YTCommentQA dataset, which contains naturally-generated questions from YouTube, categorized by their answerability and required modality to answer — visual, script, or both. |
Saelyne Yang; Sunghyun Park; Yunseok Jang; Moontae Lee; | aaai | 2024-02-20 |
843 | Benchmarking Retrieval-Augmented Generation for Medicine IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Using MIRAGE, we conducted large-scale experiments with over 1.8 trillion prompt tokens on 41 combinations of different corpora, retrievers, and backbone LLMs through the MedRAG toolkit introduced in this work. |
Guangzhi Xiong; Qiao Jin; Zhiyong Lu; Aidong Zhang; | arxiv-cs.CL | 2024-02-20 |
844 | BiMediX: Bilingual Medical Mixture of Experts LLM Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce BiMediX, the first bilingual medical mixture of experts LLM designed for seamless interaction in both English and Arabic. |
SARA PIERI et. al. | arxiv-cs.CL | 2024-02-20 |
845 | T-SciQ: Teaching Multimodal Chain-of-Thought Reasoning Via Large Language Model Signals for Science Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Besides, the annotated rationales are hardly accurate due to the external essential information missed. To address these issues, we propose a novel method termed T-SciQ that aims at teaching science question answering with LLM signals. |
LEI WANG et. al. | aaai | 2024-02-20 |
846 | Slot-VLM: SlowFast Slots for Video-Language Modeling Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce Slot-VLM, a novel framework designed to generate semantically decomposed video tokens, in terms of object-wise and event-wise visual representations, to facilitate LLM inference. |
Jiaqi Xu; Cuiling Lan; Wenxuan Xie; Xuejin Chen; Yan Lu; | arxiv-cs.CV | 2024-02-20 |
847 | Video-Context Aligned Transformer for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Extremely imbalanced alignment of information from both sides leads to significant instability in reasoning. To address this concern, we propose the Video-Context Aligned Transformer (V-CAT), which leverages the context to achieve semantic and content alignment between video and question. |
LINLIN ZONG et. al. | aaai | 2024-02-20 |
848 | Tree-of-Reasoning Question Decomposition for Complex Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Additionally, these methods suffer from inefficient retrieval, as complex questions often contain abundant information, leading to the retrieval of irrelevant information inconsistent with the query’s intent. In this work, we propose a novel question decomposition framework called TRQA for multi-hop question answering, which addresses these limitations. |
KUN ZHANG et. al. | aaai | 2024-02-20 |
849 | Exploring The Impact of Table-to-Text Methods on Augmenting LLM-based Question Answering with Domain Hybrid Data IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we address this research gap in two steps. |
DEHAI MIN et. al. | arxiv-cs.CL | 2024-02-20 |
850 | Detection-Based Intermediate Supervision for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: For instance, (1) a prior assumption that each instance-module refers to only one grounded object yet overlooks other potentially associated grounded objects, impeding full cross-modal alignment learning; (2) IoU-based intermediate supervisions may introduce noise signals as the bounding box overlap issue might guide the model’s focus towards irrelevant objects. To address these issues, a novel method, Detection-based Intermediate Supervision (DIS), is proposed, which adopts a generative detection framework to facilitate multiple grounding supervisions via sequence generation. |
YUHANG LIU et. al. | aaai | 2024-02-20 |
851 | Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose an end-to-end methodology designed to generate long-form answers to any statutory law questions, utilizing a "retrieve-then-read" pipeline. |
Antoine Louis; Gijs van Dijck; Gerasimos Spanakis; | aaai | 2024-02-20 |
852 | BIDER: Bridging Knowledge Inconsistency for Efficient Retrieval-Augmented LLMs Via Key Supporting Evidence IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces BIDER, an approach that refines retrieval documents into Key Supporting Evidence (KSE) through knowledge synthesis, supervised fine-tuning (SFT), and preference alignment. |
Jiajie Jin; Yutao Zhu; Yujia Zhou; Zhicheng Dou; | arxiv-cs.CL | 2024-02-19 |
853 | FinBen: A Holistic Financial Benchmark for Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce FinBen, the first extensive open-source evaluation benchmark, including 36 datasets spanning 24 financial tasks, covering seven critical aspects: information extraction (IE), textual analysis, question answering (QA), text generation, risk management, forecasting, and decision-making. |
QIANQIAN XIE et. al. | arxiv-cs.CL | 2024-02-19 |
854 | Cofca: A Step-Wise Counterfactual Multi-hop QA Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Moreover, current factual Multi-hop QA (MHQA) benchmarks are annotated on open-source corpora such as Wikipedia; although useful for multi-step reasoning evaluation, they show limitations due to potential data contamination in LLMs’ pre-training stage. To address these issues, we introduce a Step-wise Counterfactual benchmark (CofCA), a novel evaluation benchmark consisting of factual data and counterfactual data that reveals LLMs’ real reasoning abilities on multi-step reasoning and reasoning chain evaluation. |
Jian Wu; Linyi Yang; Zhen Wang; Manabu Okumura; Yue Zhang; | arxiv-cs.CL | 2024-02-19 |
855 | Question Answering Over Spatio-Temporal Knowledge Graph Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In response, we propose STCQA, a new spatio-temporal KGQA approach that utilizes a novel STKG embedding method named STComplEx. |
Xinbang Dai; Huiying Li; Guilin Qi; | arxiv-cs.CL | 2024-02-18 |
856 | Learning From Failure: Integrating Negative Examples When Fine-tuning Large Language Models As Agents IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Discarding failed trajectories also leads to significant wastage of data and resources and limits the possible optimization paths during fine-tuning. In this paper, we argue that unsuccessful trajectories offer valuable insights, and LLMs can learn from these trajectories through appropriate quality control and fine-tuning strategies. |
Renxi Wang; Haonan Li; Xudong Han; Yixuan Zhang; Timothy Baldwin; | arxiv-cs.CL | 2024-02-18 |
857 | Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we delve deeper into the CoT reasoning capabilities of LLMs in multi-hop question answering by utilizing knowledge graphs (KGs). |
MINH-VUONG NGUYEN et. al. | arxiv-cs.CL | 2024-02-17 |
858 | Evaluating LLMs’ Mathematical Reasoning in Financial Document Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The results provide insights into LLMs’ capabilities and limitations in handling complex mathematical scenarios for semi-structured tables. Ultimately, we introduce a novel prompting technique tailored to semi-structured documents, matching or outperforming other baselines in performance while providing a nuanced understanding of LLMs’ abilities for such a task. |
Pragya Srivastava; Manuj Malik; Vivek Gupta; Tanuja Ganu; Dan Roth; | arxiv-cs.CL | 2024-02-17 |
859 | PANDA (Pedantic ANswer-correctness Determination and Adjudication): Improving Automatic Evaluation for Question Answering and Text Generation Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question answering (QA) can only make progress if we know if an answer is correct, but for many of the most challenging and interesting QA examples, current answer correctness … |
Zongxia Li; Ishani Mondal; Yijun Liang; Huy Nghiem; Jordan L. Boyd-Graber; | ArXiv | 2024-02-17 |
860 | Question-Instructed Visual Descriptions for Zero-Shot Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present Q-ViD, a simple approach for video question answering (video QA), that unlike prior methods, which are based on complex architectures, computationally expensive pipelines or use closed models like GPTs, Q-ViD relies on a single instruction-aware open vision-language model (InstructBLIP) to tackle videoQA using frame descriptions. |
David Romero; Thamar Solorio; | arxiv-cs.CV | 2024-02-16 |
861 | PEDANTS: Cheap But Effective and Interpretable Answer Equivalence Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Question answering (QA) can only make progress if we know if an answer is correct, but current answer correctness (AC) metrics struggle with verbose, free-form answers from large … |
Zongxia Li; Ishani Mondal; Yijun Liang; Huy Nghiem; Jordan Lee Boyd-Graber; | arxiv-cs.CL | 2024-02-16 |
862 | PAT-Questions: A Self-Updating Benchmark for Present-Anchored Temporal Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: PATQA poses unique challenges: (1) large language models (LLMs) may have outdated knowledge, (2) complex temporal relationships (e.g. ‘before’, ‘previous’) are hard to reason, (3) multi-hop reasoning may be required, and (4) the gold answers of benchmarks must be continuously updated. To address these challenges, we introduce the PAT-Questions benchmark, which includes single and multi-hop temporal questions. |
Jannat Ara Meem; Muhammad Shihab Rashid; Yue Dong; Vagelis Hristidis; | arxiv-cs.CL | 2024-02-16 |
863 | VQAttack: Transferable Adversarial Attacks on Visual Question Answering Via Pre-trained Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Correspondingly, we propose a novel VQAttack model, which can iteratively generate both image and text perturbations with the designed modules: the large language model (LLM)-enhanced image attack and the cross-modal joint attack module. |
ZIYI YIN et. al. | arxiv-cs.CV | 2024-02-16 |
864 | MURRE: Multi-Hop Table Retrieval with Removal for Open-Domain Text-to-SQL Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The questions in text-to-SQL usually contain all the required information, whereas previous multi-hop retrieval supplements the questions with retrieved documents. Therefore, we propose multi-hop table retrieval with removal (MURRE), which removes previously retrieved information from the question to guide the retriever towards unretrieved relevant tables. |
Xuanliang Zhang; Dingzirui Wang; Longxu Dou; Qingfu Zhu; Wanxiang Che; | arxiv-cs.CL | 2024-02-16 |
865 | A Question Answering Based Pipeline for Comprehensive Chinese EHR Information Extraction Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel approach that automatically generates training data for transfer learning of QA models. |
Huaiyuan Ying; Sheng Yu; | arxiv-cs.CL | 2024-02-16 |
866 | II-MMR: Identifying and Improving Multi-modal Multi-hop Reasoning in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose II-MMR, a novel idea to identify and improve multi-modal multi-hop reasoning in VQA. |
Jihyung Kil; Farideh Tavazoee; Dongyeop Kang; Joo-Kyung Kim; | arxiv-cs.CV | 2024-02-16 |
867 | GenDec: A Robust Generative Question-decomposition Method for Multi-hop Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a \textbf{gen}erative question \textbf{dec}omposition method (GenDec) from the perspective of explainable QA by generating independent and complete sub-questions based on incorporating additional extracted evidence for enhancing LLMs’ reasoning ability in RAG. |
JIAN WU et. al. | arxiv-cs.CL | 2024-02-16 |
868 | A Dataset of Open-Domain Question Answering with Multiple-Span Answers Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Previous efforts for constructing MSQA datasets predominantly emphasized entity-centric contextualization, resulting in a bias towards collecting factoid questions and potentially overlooking questions requiring more detailed descriptive responses. To overcome these limitations, we present CLEAN, a comprehensive Chinese multi-span question answering dataset that involves a wide range of open-domain subjects with a substantial number of instances requiring descriptive answers. |
Zhiyi Luo; Yingying Zhang; Shuyun Luo; Ying Zhao; Wentao Lyu; | arxiv-cs.CL | 2024-02-15 |
869 | Enhancing Large Language Models with Pseudo- and Multisource- Knowledge Graphs for Open-ended Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: For precise questions, we observe a minimum accuracy improvement of 7.5. |
Jiaxiang Liu; Tong Zhou; Yubo Chen; Kang Liu; Jun Zhao; | arxiv-cs.CL | 2024-02-15 |
870 | Pretraining Vision-Language Model for Difference Visual Question Answering in Longitudinal Chest X-rays Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Here, we introduce a novel VLM called PLURAL, which is pretrained on natural and longitudinal chest X-ray data for the diff-VQA task. |
Yeongjae Cho; Taehee Kim; Heejun Shin; Sungzoon Cho; Dongmyung Shin; | arxiv-cs.CV | 2024-02-14 |
871 | Prompt-based Personalized Federated Learning for Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a novel prompt-based personalized federated learning (pFL) method to address data heterogeneity and privacy concerns in traditional medical visual question answering (VQA) methods. |
He Zhu; Ren Togo; Takahiro Ogawa; Miki Haseyama; | arxiv-cs.CV | 2024-02-14 |
872 | Visual Question Answering Instruction: Unlocking Multimodal Large Language Model To Domain-Specific Visual Multitasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We developed a method to transform domain-specific visual and vision-language datasets into a unified question answering format called Visual Question Answering Instruction (VQA-IN), thereby extending MLLM to domain-specific tasks. |
Jusung Lee; Sungguk Cha; Younghyun Lee; Cheoljong Yang; | arxiv-cs.CV | 2024-02-13 |
873 | Visually Dehallucinative Instruction Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a novel and scalable method for generating visually dehallucinative instructions, dubbed CAP2QA, that constrains the scope to only image contents. |
Sungguk Cha; Jusung Lee; Younghyun Lee; Cheoljong Yang; | arxiv-cs.CV | 2024-02-13 |
874 | T-RAG: Lessons from The LLM Trenches Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Large Language Models (LLMs) have shown remarkable language capabilities, fueling attempts to integrate them into applications across a wide range of domains. |
Masoomali Fatehkia; Ji Kim Lucas; Sanjay Chawla; | arxiv-cs.AI | 2024-02-12 |
875 | G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In contrast, we develop a flexible question-answering framework targeting real-world textual graphs, applicable to multiple applications including scene graph understanding, common sense reasoning, and knowledge graph reasoning. |
XIAOXIN HE et. al. | arxiv-cs.LG | 2024-02-12 |
876 | Exploring Perceptual Limitation of Multimodal Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we quantitatively study the perception of small visual objects in several state-of-the-art MLLMs and reveal a pervasive limitation in answering questions about small objects in images. |
Jiarui Zhang; Jinyi Hu; Mahyar Khayatkhoei; Filip Ilievski; Maosong Sun; | arxiv-cs.CV | 2024-02-11 |
877 | FaBERT: Pre-training BERT on Persian Blogs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce FaBERT, a Persian BERT-base model pre-trained on the HmBlogs corpus, encompassing both informal and formal Persian texts. |
Mostafa Masumi; Seyed Soroush Majd; Mehrnoush Shamsfard; Hamid Beigy; | arxiv-cs.CL | 2024-02-09 |
878 | The Generative AI Paradox in Evaluation: “What It Can Solve, It May Not Evaluate” Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper explores the assumption that Large Language Models (LLMs) skilled in generation tasks are equally adept as evaluators. We assess the performance of three LLMs and one … |
Juhyun Oh; Eunsu Kim; Inha Cha; Alice Oh; | ArXiv | 2024-02-09 |
879 | The Generative AI Paradox on Evaluation: What It Can Solve, It May Not Evaluate Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper explores the assumption that Large Language Models (LLMs) skilled in generation tasks are equally adept as evaluators. |
Juhyun Oh; Eunsu Kim; Inha Cha; Alice Oh; | arxiv-cs.CL | 2024-02-09 |
880 | SPARQL Generation: An Analysis on Fine-tuning OpenLLaMA for Question Answering Over A Life Science Knowledge Graph Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To overcome this challenge, in this study, we evaluate several strategies for fine-tuning the OpenLlama LLM for question answering over life science knowledge graphs. In particular, we propose an end-to-end data augmentation approach for extending a set of existing queries over a given knowledge graph towards a larger dataset of semantically enriched question-to-SPARQL query pairs, enabling fine-tuning even for datasets where these pairs are scarce. |
Julio C. Rangel; Tarcisio Mendes de Farias; Ana Claudia Sima; Norio Kobayashi; | arxiv-cs.AI | 2024-02-07 |
881 | ScreenAI: A Vision-Language Model for UI and Infographics Understanding IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce ScreenAI, a vision-language model that specializes in UI and infographics understanding. |
GILLES BAECHLER et. al. | arxiv-cs.CV | 2024-02-07 |
882 | NORMY: Non-Uniform History Modeling for Open Retrieval Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose NORMY, the first unsupervised non-uniform history modeling pipeline which generates the best conversational history for each module. |
Muhammad Shihab Rashid; Jannat Ara Meem; Vagelis Hristidis; | arxiv-cs.IR | 2024-02-06 |
883 | Training Language Models to Generate Text with Citations Via Fine-grained Rewards IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose an effective training framework using fine-grained rewards to teach LLMs to generate highly supportive and relevant citations, while ensuring the correctness of their responses. |
Chengyu Huang; Zeqiu Wu; Yushi Hu; Wenya Wang; | arxiv-cs.CL | 2024-02-06 |
884 | Convincing Rationales for Visual Question Answering Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To generate both visual and textual rationales next to the predicted answer to the given image/question pair, we propose Convincing Rationales for VQA, CRVQA. |
Kun Li; George Vosselman; Michael Ying Yang; | arxiv-cs.CV | 2024-02-06 |
885 | Enhancing Textbook Question Answering Task with Large Language Models and Retrieval Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes a methodology that handles the out-of-domain scenario in TQA, where concepts are spread across different lessons, by incorporating the retrieval augmented generation (RAG) technique and utilizing transfer learning to handle the long context and enhance reasoning abilities. |
Hessa Abdulrahman Alawwad; Areej Alhothali; Usman Naseem; Ali Alkhathlan; Amani Jamal; | arxiv-cs.CL | 2024-02-05 |
886 | LB-KBQA: Large-language-model and BERT Based Knowledge-Based Question and Answering System Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, both of the methods suffer from limited resources in intent recognition. To address this issue, we propose a novel KBQA system based on a Large Language Model (LLM) and BERT (LB-KBQA). |
Yan Zhao; Zhongyun Li; Yushan Pan; Jiaxing Wang; Yihong Wang; | arxiv-cs.CL | 2024-02-05 |
887 | Large Language Model for Table Processing: A Survey Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Tables, typically two-dimensional and structured to store large amounts of data, are essential in daily activities like database queries, spreadsheet calculations, and generating … |
Weizheng Lu; Jiaming Zhang; Jing Zhang; Yueguo Chen; | ArXiv | 2024-02-04 |
888 | GeReA: Question-Aware Prompt Captions for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Despite this, how to activate the capacity of an MLLM as an implicit knowledge engine has not been explored yet. Therefore, we propose GeReA, a generate-reason framework that prompts an MLLM such as InstructBLIP with question-relevant vision and language information to generate knowledge-relevant descriptions and reasons over those descriptions for knowledge-based VQA. |
ZIYU MA et. al. | arxiv-cs.CV | 2024-02-04 |
889 | Knowledge Generation for Zero-shot Knowledge-based VQA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Inspired by recent work on knowledge generation from LLMs for text-based QA, in this work we propose and test a similar knowledge-generation-based K-VQA method, which first generates knowledge from an LLM and then incorporates the generated knowledge for K-VQA in a zero-shot manner. |
Rui Cao; Jing Jiang; | arxiv-cs.CL | 2024-02-04 |
890 | SemPool: Simple, Robust, and Interpretable KG Pooling for Enhancing Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, GNN-based methods for QA rely on the graph information of the candidate answer nodes, which limits their effectiveness in more challenging settings where critical answer information is not included in the KG. We propose a simple graph pooling approach that learns useful semantics of the KG to aid the LM’s reasoning, and show that its effectiveness is robust under graph perturbations. |
Costas Mavromatis; Petros Karypis; George Karypis; | arxiv-cs.CL | 2024-02-03 |
891 | Large Language Model for Table Processing: A Survey IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We summarize the training techniques for LLMs and VLMs tailored for table processing. |
WEIZHENG LU et. al. | arxiv-cs.AI | 2024-02-03 |
892 | CABINET: Content Relevance Based Noise Reduction for Table Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The irrelevant parts act as distracting noise, resulting in sub-optimal performance due to the vulnerability of LLMs to noise. To mitigate this, we propose CABINET (Content RelevAnce-Based NoIse ReductioN for TablE QuesTion-Answering) – a framework to enable LLMs to focus on relevant tabular data by suppressing extraneous information. |
SOHAN PATNAIK et. al. | arxiv-cs.CL | 2024-02-02 |
893 | Beyond The Answers: Reviewing The Rationality of Multiple Choice Question Answering for The Evaluation of Large Language Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In the field of natural language processing (NLP), Large Language Models (LLMs) have precipitated a paradigm shift, markedly enhancing performance in natural language generation … |
HAOCHUN WANG et. al. | arxiv-cs.CL | 2024-02-02 |
894 | Enhancing Scene-text Visual Question Answering with Relational Reasoning, Attention and Dynamic Vocabulary Integration Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Visual question answering (VQA) is a challenging task in computer vision. Recently, there has been a growing interest in text-based VQA tasks, emphasizing the important role of … |
Mayank Agrawal; Anand Singh Jalal; Himanshu Sharma; | Computational Intelligence | 2024-02-01 |
895 | So Many Heads, So Many Wits: Multimodal Graph Reasoning for Text-Based Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: While texts related to images convey fundamental messages for scene understanding and reasoning, text-based visual question answering tasks concentrate on visual questions that … |
Wenbo Zheng; Lan Yan; Fei-Yue Wang; | IEEE Transactions on Systems, Man, and Cybernetics: Systems | 2024-02-01 |
896 | SPARQL Generation with Entity Pre-trained GPT for KG Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We isolate which property of the task is the most difficult to solve in few- or zero-shot settings, and propose pre-training on all entities (under CWA) to improve performance. |
Diego Bustamante; Hideaki Takeda; | arxiv-cs.CL | 2024-02-01 |
897 | A Multi-scale Contextual Attention Network for Remote Sensing Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Jiangfan Feng; Hui Wang; | Int. J. Appl. Earth Obs. Geoinformation | 2024-02-01 |
898 | HiQA: A Hierarchical Contextual Augmentation RAG for Massive Documents QA Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: As language model agents leveraging external tools rapidly evolve, significant progress has been made in question-answering(QA) methodologies utilizing supplementary documents and … |
Xinyue Chen; Pengyu Gao; Jiangjiang Song; Xiaoyang Tan; | ArXiv | 2024-02-01 |
899 | Knowledge Graph-Based Reinforcement Federated Learning for Chinese Question and Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Knowledge question and answering (Q&A) is widely used. However, most existing semantic parsing methods in Q&A usually use cascading, which can incur error accumulation. In … |
LIANG XU et. al. | IEEE Transactions on Computational Social Systems | 2024-02-01 |
900 | Proximity QA: Unleashing The Power of Multi-Modal Large Language Models for Spatial Proximity Analysis Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, while existing MLLMs adeptly recognize \textit{what} objects are in an image, they still face challenges in effectively discerning \textit{where} these objects are, particularly along the distance (scene depth) axis. To overcome this limitation in MLLMs, we introduce Proximity Question Answering (Proximity QA), a novel framework designed to enable MLLMs to infer the proximity relationship between objects in images. |
Jianing Li; Xi Nan; Ming Lu; Li Du; Shanghang Zhang; | arxiv-cs.CV | 2024-01-31 |
901 | Desiderata for The Context Use of Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, most prior work focuses on one or two of those problems in isolation, which makes it difficult to see trends across them. We aim to close this gap by first outlining a set of — previously discussed as well as novel — desiderata for QA models. |
Sagi Shaier; Lawrence E Hunter; Katharina von der Wense; | arxiv-cs.CL | 2024-01-31 |
902 | HiQA: A Hierarchical Contextual Augmentation RAG for Multi-Documents QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, these methods exhibit limited retrieval accuracy when faced with numerous indistinguishable documents, presenting notable challenges in their practical application. In response to these emerging challenges, we present HiQA, an advanced multi-document question-answering (MDQA) framework that integrates cascading metadata into content and a multi-route retrieval mechanism. |
Xinyue Chen; Pengyu Gao; Jiangjiang Song; Xiaoyang Tan; | arxiv-cs.CL | 2024-01-31 |
903 | Are My Answers Medically Accurate? Exploiting Medical Knowledge Graphs for Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Aizan Zafar; Deeksha Varshney; Sovan Kumar Sahoo; Amitava Das; Asif Ekbal; | Appl. Intell. | 2024-01-31 |
904 | An Exam-based Evaluation Approach Beyond Traditional Relevance Judgments Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose two evaluation measures: the recall-oriented EXAM Cover metric and the precision-oriented EXAM Qrels metric, the latter of which can be implemented with trec_eval. |
Naghmeh Farzi; Laura Dietz; | arxiv-cs.IR | 2024-01-31 |
905 | Fine-tuning Transformer-based Encoder for Turkish Language Understanding Tasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we provide a Transformer-based model and a baseline benchmark for the Turkish Language. |
Savas Yildirim; | arxiv-cs.CL | 2024-01-30 |
906 | PipeNet: Question Answering with Semantic Pruning Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we target at finding semantically related entity nodes in the subgraph to improve the efficiency of graph reasoning with KG. |
Ying Su; Jipeng Zhang; Yangqiu Song; Tong Zhang; | arxiv-cs.CL | 2024-01-30 |
907 | LCVO: An Efficient Pretraining-Free Framework for Visual Question Answering Grounding Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In this paper, the LCV2 modular method is proposed for the Grounded Visual Question Answering task in the vision-language multimodal domain. This approach relies on a frozen large … |
Yuhan Chen; Lumei Su; Lihua Chen; Zhiwei Lin; | ArXiv | 2024-01-29 |
908 | LCV2: An Efficient Pretraining-Free Framework for Grounded Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, the LCV2 modular method is proposed for the Grounded Visual Question Answering task in the vision-language multimodal domain. |
Yuhan Chen; Lumei Su; Lihua Chen; Zhiwei Lin; | arxiv-cs.CV | 2024-01-28 |
909 | Improving Data Augmentation for Robust Visual Question Answering with Effective Curriculum Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Data Augmentation (DA), widely used for learning unbiased visual question answering (VQA) models, helps mitigate language biases by generating extra training samples beyond the original samples. |
Yuhang Zheng; Zhen Wang; Long Chen; | arxiv-cs.CV | 2024-01-28 |
910 | A RAG-based Question Answering System Proposal for Understanding Islam: MufassirQAS LLM Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study uses a vector database-based Retrieval Augmented Generation (RAG) approach to enhance the accuracy and transparency of LLMs. |
Ahmet Yusuf Alan; Enis Karaarslan; Ömer Aydin; | arxiv-cs.CL | 2024-01-27 |
911 | Augment Before You Try: Knowledge-Enhanced Table Question Answering Via Table Expansion Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a simple yet effective method to integrate external information in a given table. |
YUJIAN LIU et. al. | arxiv-cs.CL | 2024-01-27 |
912 | DataFrame QA: A Universal LLM Framework on DataFrame Question Answering Without Data Exposure Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose DataFrame QA as a comprehensive framework that includes safe Pandas query generation and code execution. |
Junyi Ye; Mengnan Du; Guiling Wang; | arxiv-cs.CL | 2024-01-27 |
913 | Fortifying Ethical Boundaries in AI: Advanced Strategies for Enhancing Security in Large Language Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Recent advancements in large language models (LLMs) have significantly enhanced capabilities in natural language processing and artificial intelligence. These models, including … |
Yunhong He; Jianling Qiu; Wei Zhang; Zhe Yuan; | ArXiv | 2024-01-27 |
914 | Benchmarking Large Language Models in Complex Question Answering Attribution Using Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The current methods for automatically evaluating the attribution, which are often based on Large Language Models (LLMs), are still inadequate, particularly in recognizing subtle differences between attributions, and complex relationships between citations and statements. To compare these attribution evaluation methods and develop new ones, we introduce a set of fine-grained categories (i.e., supportive, insufficient, contradictory and irrelevant) for measuring the attribution, and develop a Complex Attributed Question Answering (CAQA) benchmark by leveraging knowledge graphs (KGs) for automatically generating attributions of different categories to question-answer pairs. |
NAN HU et. al. | arxiv-cs.CL | 2024-01-25 |
915 | Towards Consistent Natural-Language Explanations Via Explanation-Consistency Finetuning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose explanation-consistency finetuning (EC-finetuning), a method that adapts LLMs to generate more consistent natural-language explanations on related examples. |
YANDA CHEN et. al. | arxiv-cs.CL | 2024-01-25 |
916 | Graph Guided Question Answer Generation for Procedural Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we focus on task-specific question answering (QA). |
HAI X. PHAM et. al. | arxiv-cs.CL | 2024-01-24 |
917 | SpeechDPR: End-to-End Spoken Passage Retrieval for Open-Domain Spoken Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes the first known end-to-end framework, Speech Dense Passage Retriever (SpeechDPR), for the retrieval component of the openSQA problem. |
CHYI-JIUNN LIN et. al. | arxiv-cs.CL | 2024-01-24 |
918 | Question Answering Systems for Health Professionals at The Point of Care—a Systematic Review Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Objectives Question answering (QA) systems have the potential to improve the quality of clinical care by providing health professionals with the latest and most relevant evidence. … |
GREGORY KELL et. al. | Journal of the American Medical Informatics Association : … | 2024-01-24 |
919 | SEER: Facilitating Structured Reasoning and Explanation Via Reinforcement Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose SEER, a novel method that maximizes a structure-based return to facilitate structured reasoning and explanation. |
GUOXIN CHEN et. al. | arxiv-cs.CL | 2024-01-24 |
920 | Can AI Assistants Know What They Don’t Know? IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We believe that an AI assistant’s refusal to answer questions it does not know is a crucial method for reducing hallucinations and making the assistant truthful. Therefore, in this paper, we ask the question: Can AI assistants know what they don’t know, and express this through natural language? |
QINYUAN CHENG et. al. | arxiv-cs.CL | 2024-01-24 |
921 | TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present TROVE, a training-free method of inducing a verifiable and efficient toolbox of functions by generating solutions while using, growing, and periodically trimming the toolbox. |
Zhiruo Wang; Daniel Fried; Graham Neubig; | arxiv-cs.AI | 2024-01-23 |
922 | Revolutionizing Retrieval-Augmented Generation with Enhanced PDF Structure Recognition Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Presently, major foundation model companies have opened up Embedding and Chat API interfaces, and frameworks like LangChain have already integrated the RAG process. |
Demiao Lin; | arxiv-cs.AI | 2024-01-23 |
923 | CFMatch: Aligning Automated Answer Equivalence Evaluation with Expert Judgments For Open-Domain Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question answering (QA) can only make progress if we know if an answer is correct, but for many of the most challenging and interesting QA examples, current evaluation metrics to … |
Zongxia Li; Ishani Mondal; Yijun Liang; Huy Nghiem; Jordan Boyd-Graber; | arxiv-cs.CL | 2024-01-23 |
924 | TAT-LLM: A Specialized Language Model for Discrete Reasoning Over Tabular and Textual Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we address question answering (QA) over a hybrid of tabular and textual data that are very common content on the Web (e.g. SEC filings), where discrete reasoning capabilities are often required. |
FENGBIN ZHU et. al. | arxiv-cs.CL | 2024-01-23 |
925 | Free Form Medical Visual Question Answering in Radiology Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We innovatively augment the SLAKE dataset, enabling our model to respond to a more diverse array of questions, not limited to the immediate content of radiology or pathology images. |
ABHISHEK NARAYANAN et. al. | arxiv-cs.CV | 2024-01-23 |
926 | FinLLMs: A Framework for Financial Reasoning Dataset Generation with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the limited data resources and reduce the annotation cost, we introduce FinLLMs, a method for generating financial question-answering data based on common financial formulas using Large Language Models. |
ZIQIANG YUAN et. al. | arxiv-cs.AI | 2024-01-19 |
927 | Reinforcement Learning for Question Answering in Programming Domain Using Public Community Scoring As A Human Feedback Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we investigate the enhancement of the GPT Neo 125M performance in Community Question Answering (CQA) with a focus on programming, through the integration of Reinforcement Learning from Human Feedback (RLHF) and the utilization of scores from Stack Overflow. |
Alexey Gorbatovski; Sergey Kovalchuk; | arxiv-cs.CL | 2024-01-19 |
928 | Weakly Supervised Gaussian Contrastive Grounding with Large Multimodal Models for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Moreover, there are no human annotations for question-critical timestamps in existing VideoQA datasets. In light of this, we propose a novel weakly supervised framework that guides the LMMs to reason out the answers with question-critical moments as visual inputs. |
Haibo Wang; Chenghang Lai; Yixuan Sun; Weifeng Ge; | arxiv-cs.CV | 2024-01-19 |
929 | Q&A Prompts: Discovering Rich Visual Clues Through Mining Question-Answer Prompts for VQA Requiring Diverse World Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we believe that if we can collect as many visual clues as possible from the given image, we will recognize the image more accurately, understand the question better, recall relevant knowledge more easily, and finally reason out the answer. |
Haibo Wang; Weifeng Ge; | arxiv-cs.CV | 2024-01-19 |
930 | Veagle: Advancements in Multimodal Representation Learning Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Lately, researchers in artificial intelligence have been really interested in how language and vision come together, giving rise to the development of multimodal models that aim … |
RAJAT CHAWLA et. al. | ArXiv | 2024-01-18 |
931 | Instant Answering in E-Commerce Buyer-Seller Messaging Using Message-to-Question Reformulation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We seek to automate buyer inquiries to sellers in a leading e-commerce store using a domain-specific federated Question Answering (QA) system. |
BESNIK FETAHU et. al. | arxiv-cs.CL | 2024-01-18 |
932 | Veagle: Advancements in Multimodal Representation Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces a novel approach to enhance the multimodal capabilities of existing models. |
RAJAT CHAWLA et. al. | arxiv-cs.CV | 2024-01-18 |
933 | ChatQA: Surpassing GPT-4 on Conversational QA and RAG IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce ChatQA, a suite of models that outperform GPT-4 on retrieval-augmented generation (RAG) and conversational question answering (QA). |
ZIHAN LIU et. al. | arxiv-cs.CL | 2024-01-18 |
934 | Question-Answer Cross Language Image Matching for Weakly Supervised Semantic Segmentation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel Question-Answer Cross-Language-Image Matching framework for WSSS (QA-CLIMS), leveraging the vision-language foundation model to maximize the text-based understanding of images and guide the generation of activation maps. |
Songhe Deng; Wei Zhuo; Jinheng Xie; Linlin Shen; | arxiv-cs.CV | 2024-01-18 |
935 | BERTologyNavigator: Advanced Question Answering with BERT-based Semantics Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we introduce the BERTologyNavigator — a two-phased system that combines relation extraction techniques and BERT embeddings to navigate the relationships within the DBLP Knowledge Graph (KG). |
Shreya Rajpal; Ricardo Usbeck; | arxiv-cs.CL | 2024-01-17 |
936 | Fine-tuning Strategies for Domain Specific Question Answering Under Low Annotation Budget Constraints Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The unsupervised training of a language model combined with further target task fine-tuning has become the standard QA fine-tuning procedure. In this work, we demonstrate that this strategy is sub-optimal for fine-tuning QA models, especially under a low QA annotation budget, which is a usual setting in practice due to the extractive QA labeling cost. |
Kunpeng Guo; Dennis Diefenbach; Antoine Gourru; Christophe Gravier; | arxiv-cs.CL | 2024-01-17 |
937 | QAnswer: Towards Question Answering Search Over Websites Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To illustrate the potential of QA technologies for the website search practitioner, we demonstrate web searches that combine QA over knowledge graphs and QA over free text — each being usually tackled separately. |
Kunpeng Guo; Clement Defretiere; Dennis Diefenbach; Christophe Gravier; Antoine Gourru; | arxiv-cs.CL | 2024-01-17 |
938 | MMToM-QA: Multimodal Theory of Mind Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: People can flexibly reason about another person’s mind based on conceptual representations (e.g., goals, beliefs, plans) extracted from any available data. To address this, we introduce a multimodal Theory of Mind question answering (MMToM-QA) benchmark. |
CHUANYANG JIN et. al. | arxiv-cs.AI | 2024-01-16 |
939 | MMToM-QA: Multimodal Theory of Mind Question Answering IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Theory of Mind (ToM), the ability to understand people’s minds, is an essential ingredient for developing machines with human-level social intelligence. Recent machine learning … |
CHUANYANG JIN et. al. | ArXiv | 2024-01-16 |
940 | BERT-CNN Based Evidence Retrieval and Aggregation for Chinese Legal Multi-choice Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Yanling Li; Jiaye Wu; Xudong Luo; | Neural Comput. Appl. | 2024-01-16 |
941 | Towards Efficient Methods in Medical Question Answering Using Knowledge Graph Embeddings Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, in-domain pre-training is expensive in terms of time and resources. In this paper, we propose a resource-efficient approach for injecting domain knowledge into a model without relying on such domain-specific pre-training. |
Saptarshi Sengupta; Connor Heaton; Suhan Cui; Soumalya Sarkar; Prasenjit Mitra; | arxiv-cs.CL | 2024-01-15 |
942 | Developing ChatGPT for Biology and Medicine: A Complete Review of Biomedical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper highlights the structures and advancements of medical domain explorations against general domain methods, emphasizing their applications across different tasks and datasets. |
Qing Li; Lei Li; Yu Li; | arxiv-cs.CL | 2024-01-15 |
943 | A Study on Large Language Models’ Limitations in Multiple-Choice Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we tackle one of the most widely used tasks – answering Multiple Choice Questions (MCQs). |
Aisha Khatun; Daniel G. Brown; | arxiv-cs.CL | 2024-01-15 |
944 | Generalizing Visual Question Answering from Synthetic to Human-Written Questions Via A Chain of QA with A Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, VQA models trained on those data do not perform well on complex, human-written questions. To address this issue, we propose a new method called chain of QA for human-written questions (CoQAH). |
Taehee Kim; Yeongjae Cho; Heejun Shin; Yohan Jo; Dongmyung Shin; | arxiv-cs.CL | 2024-01-12 |
945 | BOK-VQA: Bilingual Outside Knowledge-Based Visual Question Answering Via Graph Representation Pretraining Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Accordingly, we propose a bilingual outside-knowledge VQA (BOK-VQA) dataset in this study that can be extended to multilingualism. |
Minjun Kim; Seungwoo Song; Youhan Lee; Haneol Jang; Kyungtae Lim; | arxiv-cs.CL | 2024-01-12 |
946 | Cross-modal Retrieval for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Named entities have diverse visual representations and are therefore difficult to recognize. We argue that cross-modal retrieval may help bridge the semantic gap between an entity and its depictions, and is foremost complementary with mono-modal retrieval. |
Paul Lerner; Olivier Ferret; Camille Guinaudeau; | arxiv-cs.CL | 2024-01-11 |
947 | How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose to evaluate the understanding and generation ability of LLMs to deal with differently structured logical forms by examining the inter-conversion of natural and formal language through in-context learning of LLMs. |
JINXIN LIU et. al. | arxiv-cs.CL | 2024-01-11 |
948 | Hallucination Benchmark in Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The study provides an in-depth analysis of current models’ limitations and reveals the effectiveness of various prompting strategies. |
Jinge Wu; Yunsoo Kim; Honghan Wu; | arxiv-cs.CL | 2024-01-11 |
949 | TRANS-VQA: Fully Transformer-Based Image Question-Answering Model Using Question-guided Vision Attention Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Understanding multiple modalities and relating them is an easy task for humans. But for machines, this is a stimulating task. One such multi-modal reasoning task is Visual … |
Dipali Koshti; Ashutosh Gupta; M. Kalla; Arvind Sharma; | Inteligencia Artif. | 2024-01-10 |
950 | AutoAct: Automatic Agent Learning from Scratch for QA Via Self-Planning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we introduce AutoAct, an automatic agent learning framework for QA that does not rely on large-scale annotated data and synthetic planning trajectories from closed-source models (e.g., GPT-4). |
SHUOFEI QIAO et. al. | arxiv-cs.CL | 2024-01-10 |
951 | Answer Retrieval in Legal Community Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Two main challenges hinder applying existing answer retrieval approaches in other domains to the legal domain: (1) a huge knowledge gap between lawyers and non-professionals; and (2) a mix of informal and formal content on legal QA websites. To tackle these challenges, we propose CE_FS, a novel cross-encoder (CE) re-ranker based on the fine-grained structured inputs. |
Arian Askari; Zihui Yang; Zhaochun Ren; Suzan Verberne; | arxiv-cs.IR | 2024-01-09 |
952 | Building Efficient and Effective OpenQA Systems for Low-Resource Languages Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we show that effective, low-cost OpenQA systems can be developed for low-resource contexts. |
EMRAH BUDUR et. al. | arxiv-cs.CL | 2024-01-07 |
953 | A Joint-Reasoning Based Disease Q&A System Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Extant QA systems also have limitations in terms of automation and performance. We address these challenges by designing a novel, automated disease QA system which effectively utilizes both LM and KG techniques through a joint-reasoning approach to answer disease-related questions appropriate for lay users. |
Prakash Chandra Sukhwal; Vaibhav Rajan; Atreyi Kankanhalli; | arxiv-cs.CL | 2024-01-06 |
954 | Improving The Representation of Sentences with Reinforcement Learning and AMR Graph Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Sentence Embedding is a technique that represents the meaning of sentences in vector form, playing a crucial role in various natural language processing tasks such as … |
Jinwoo Park; Hosoo Shin; Dahee Jeong; Junyeong Kim; | 2024 IEEE International Conference on Consumer Electronics … | 2024-01-06 |
955 | DocGraphLM: Documental Graph Language Model for Information Extraction Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce DocGraphLM, a novel framework that combines pre-trained language models with graph semantics. |
Dongsheng Wang; Zhiqiang Ma; Armineh Nourbakhsh; Kang Gu; Sameena Shah; | arxiv-cs.CL | 2024-01-05 |
956 | A Case Study of Generative AI in MSX Sales Copilot: Improving Seller Productivity with A Real-time Question-answering System for Content Recommendation Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In this paper, we design a real-time question-answering system specifically targeted for helping sellers get relevant material/documentation they can share live with their … |
MANPREET SINGH et. al. | ArXiv | 2024-01-04 |
957 | Location Aware Modular Biencoder for Tourism Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The traditional method of encoding each pair of question and POI becomes inefficient when the number of candidates increases, making it infeasible for real-world applications. To overcome this, we propose treating the QA task as a dense vector retrieval problem, where we encode questions and POIs separately and retrieve the most relevant POIs for a question by utilizing embedding space similarity. |
Haonan Li; Martin Tomko; Timothy Baldwin; | arxiv-cs.CL | 2024-01-04 |
958 | Navigator: A Gen-AI System for Discovery of Factual and Predictive Insights on Domain-Specific Tabular Datasets Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: We demonstrate a gen-AI-based question-answering system called Navigator, which allows business users to ask natural language questions and get answers based on domain-specific … |
ARNAB CHAKRABORTY et. al. | Proceedings of the 7th Joint International Conference on … | 2024-01-04 |
959 | Joint Multi-Facts Reasoning Network For Complex Temporal Question Answering Over Knowledge Graph IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose the Joint Multi-Facts Reasoning Network (JMFRN) to jointly reason over multiple temporal facts for accurately answering complex temporal questions. |
RIKUI HUANG et. al. | arxiv-cs.CL | 2024-01-04 |
960 | Navigating Uncertainty: Optimizing API Dependency for Hallucination Reduction in Closed-Book Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a new LLM able to self-estimate whether it can answer directly or needs to request an external tool. |
Pierre Erbacher; Louis Falissar; Vincent Guigue; Laure Soulier; | arxiv-cs.CL | 2024-01-03 |
961 | Evaluating Large Language Models in Semantic Parsing for Conversational Question Answering Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Through a series of experiments on an extensive benchmark dataset, we compare models of varying sizes with different prompting techniques and identify common issue types in the generated output. |
Phillip Schneider; Manuel Klettner; Kristiina Jokinen; Elena Simperl; Florian Matthes; | arxiv-cs.CL | 2024-01-03 |
962 | Benchmarking Out-of-Distribution Detection in Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: When faced with an out-of-distribution (OOD) question or image, visual question answering (VQA) systems may provide unreliable answers. If relied on by real users or secondary … |
Xiangxi Shi; Stefan Lee; | 2024 IEEE/CVF Winter Conference on Applications of Computer … | 2024-01-03 |
963 | Scene Text Visual Question Answering By Using YOLO and STN Related Papers Related Patents Related Grants Related Venues Related Experts View |
Kimiya Nourali; Elham Dolkhani; | International Journal of Speech Technology | 2024-01-03 |
964 | Unlocking Telecom Domain Knowledge Using LLMs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Conversational assistants have become increasingly popular as they use Large Language Models (LLMs) and Retrieval Augmented Generation (RAG) for domain context. In this work, we … |
Sujoy Roychowdhury; Nishkarsh Jain; Sumit Soman; | 2024 16th International Conference on COMmunication Systems … | 2024-01-03 |
965 | Question-Answering Based Summarization of Electronic Health Records Using Retrieval Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Moreover, the requirement to consider the entire content of an EHR in summarization has resulted in poor performance because the attention mechanisms in modern large language models (LLMs) add quadratic complexity in terms of the size of the input. We propose here a method that mitigates these shortcomings by combining semantic search, retrieval augmented generation (RAG) and question-answering using the latest LLMs. |
Walid Saba; Suzanne Wendelken; James. Shanahan; | arxiv-cs.CL | 2024-01-02 |
966 | Sports-QA: A Large-Scale Video Question Answering Benchmark for Complex and Professional Sports Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce the first dataset, named Sports-QA, specifically designed for the sports VideoQA task. |
HAOPENG LI et. al. | arxiv-cs.CV | 2024-01-02 |
967 | Answering from Sure to Uncertain: Uncertainty-Aware Curriculum Learning for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recognizing that conventional self-paced CL methods rely on training loss for difficulty measurement, which might not accurately reflect the intricacies of video-question pairs, we introduce the concept of uncertainty-aware CL. |
Haopeng Li; Qiuhong Ke; Mingming Gong; Tom Drummond; | arxiv-cs.CV | 2024-01-02 |
968 | Glance and Focus: Memory Prompting for Multi-Event Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In contrast, humans can easily tackle it by using a series of episode memories as anchors to quickly locate question-related key moments for reasoning. To mimic this effective reasoning strategy, we propose the Glance-Focus model. |
Ziyi Bai; Ruiping Wang; Xilin Chen; | arxiv-cs.CV | 2024-01-02 |
969 | DermaVQA: A Multilingual Visual Question Answering Dataset for Dermatology Related Papers Related Patents Related Grants Related Venues Related Experts View |
WEN-WAI YIM et. al. | International Conference on Medical Image Computing and … | 2024-01-01 |
970 | Interactive Question Answering for Multimodal Lifelog Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts View |
Ly-Duyen Tran; Liting Zhou; Binh T. Nguyen; C. Gurrin; | Conference on Multimedia Modeling | 2024-01-01 |
971 | Enhancing Remote Sensing Visual Question Answering: A Mask-Based Dual-Stream Feature Mutual Attention Network Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The visual question answering (VQA) method applied to remote sensing images (RSIs) can complete the interaction of image information and text information, which avoids … |
YANGYANG LI et. al. | IEEE Geoscience and Remote Sensing Letters | 2024-01-01 |
972 | Resolving Zero-Shot and Fact-Based Visual Question Answering Via Enhanced Fact Retrieval Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Practical applications with visual question answering (VQA) systems are challenging, and recent research has aimed at investigating this important field. Many issues related to … |
Sen Wu; Guoshuai Zhao; Xueming Qian; | IEEE Transactions on Multimedia | 2024-01-01 |
973 | Uncertainty Estimation in Large Language Models to Support Biodiversity Conservation Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models (LLMs) provide significant value in question answering (QA) scenarios and have practical application in complex decision-making contexts, such as biodiversity … |
Maria Mora-Cross; Saúl Calderón Ramírez; | North American Chapter of the Association for Computational … | 2024-01-01 |
974 | CircuitVQA: A Visual Question Answering Dataset for Electrical Circuit Images Related Papers Related Patents Related Grants Related Venues Related Experts View |
Rahul Mehta; Bhavyajeet Singh; Vasudeva Varma; Manish Gupta; | ECML/PKDD | 2024-01-01 |
975 | Analyze, Generate and Refine: Query Expansion with LLMs for Zero-Shot Open-Domain QA Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Query expansion (QE) is a critical component in the open-domain question answering (OpenQA) pipeline, enhancing the retrieval performance by broadening the scope of queries with … |
Xinran Chen; Xuanang Chen; Ben He; Tengfei Wen; Le Sun; | Annual Meeting of the Association for Computational … | 2024-01-01 |
976 | Generative AI for Systems Thinking: Can A GPT Question-Answering System Turn Text Into The Causal Maps Produced By Human Readers? Related Papers Related Patents Related Grants Related Venues Related Experts View |
P. Giabbanelli; Nathan Witkowicz; | Hawaii International Conference on System Sciences | 2024-01-01 |
977 | Overview of BioASQ 2024: The Twelfth BioASQ Challenge on Large-Scale Biomedical Semantic Indexing and Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View |
A. NENTIDIS et. al. | Conference and Labs of the Evaluation Forum | 2024-01-01 |
978 | EHRNoteQA: A Patient-Specific Question Answering Benchmark for Evaluating Large Language Models in Clinical Settings Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This study introduces EHRNoteQA , a novel patient-specific question answering benchmark tailored for evaluating Large Language Models (LLMs) in clinical environments. Based on … |
SUNJUN KWEON et. al. | ArXiv | 2024-01-01 |
979 | UTSA-NLP at ChemoTimelines 2024: Evaluating Instruction-Tuned Language Models for Temporal Relation Extraction Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper presents our approach for the 2024 ChemoTimelines shared task. Specifically, we explored using Large Language Models (LLMs) for temporal relation extraction. We … |
Xingmeng Zhao; A. Rios; | Clinical Natural Language Processing Workshop | 2024-01-01 |
980 | RSMoDM: Multimodal Momentum Distillation Model for Remote Sensing Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Remote sensing (RS) visual question answering (VQA) is a task that answers questions about a given RS image by utilizing both image and textual information. However, existing … |
PENGFEI LI et. al. | IEEE Journal of Selected Topics in Applied Earth … | 2024-01-01 |
981 | BEnQA: A Question Answering Benchmark for Bengali and English Related Papers Related Patents Related Grants Related Venues Related Experts View |
SHEIKH SHAFAYAT et. al. | Annual Meeting of the Association for Computational … | 2024-01-01 |
982 | CroMIC-QA: The Cross-Modal Information Complementation Based Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper proposes a new multi-modal question-answering task, named Cross-Modal Information Complementation based Question Answering (CroMIC-QA), to promote the exploration … |
SHUN QIAN et. al. | IEEE Transactions on Multimedia | 2024-01-01 |
983 | Leveraging Knowledge Graph Reasoning in A Multihop Question Answering System for Hot Rolling Line Fault Diagnosis Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multihop question answering (QA) over knowledge graph (KG) poses significant challenges in the context of industrial processes, due to the intricate semantics of natural language … |
Huihui Han; Jian Wang; Xiaowen Wang; | IEEE Transactions on Instrumentation and Measurement | 2024-01-01 |
984 | Arabic Narrative Question Answering (QA) Using Transformer Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The Narrative question answering (QA) problem involves generating accurate, relevant, and human-like answers to questions based on the comprehension of a story consisting of … |
Mohammad A. Ateeq; Sabrina Tiun; Hamed Abdelhaq; Nawras Rahhal; | IEEE Access | 2024-01-01 |
985 | Conversational Question Answering with Language Models Generated Reformulations Over Knowledge Graph Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Conversational question answering (ConvQA) over knowledge graphs (KGs) involves answering multi-turn natural language questions about information contained in a KG. … |
Lihui Liu; Blaine Hill; Boxin Du; Fei Wang; Hanghang Tong; | Annual Meeting of the Association for Computational … | 2024-01-01 |
986 | See, Perceive, and Answer: A Unified Benchmark for High-Resolution Postdisaster Evaluation in Remote Sensing Images Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Visual-language generation for remote sensing image (RSI) is an emerging and challenging research area that requires multitask learning to achieve a comprehensive understanding. … |
Danpei Zhao; Jiankai Lu; Bo Yuan; | IEEE Transactions on Geoscience and Remote Sensing | 2024-01-01 |
987 | A Novel Joint Training Model for Knowledge Base Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In knowledge base question answering (KBQA) systems, relation detection and entity recognition are two core components. However, since the relation detection in KBQA contains … |
Shouhui Wang; Biao Qin; | IEEE/ACM Transactions on Audio, Speech, and Language … | 2024-01-01 |
988 | Towards Robust Expert Finding in Community Question Answering Platforms Related Papers Related Patents Related Grants Related Venues Related Experts View |
Maddalena Amendola; Andrea Passarella; Raffaele Perego; | European Conference on Information Retrieval | 2024-01-01 |
989 | ChatQA: Building GPT-4 Level Conversational QA Models IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In this work, we introduce ChatQA, a family of conversational question answering (QA) models that obtain GPT-4 level accuracies. Specifically, we propose a two-stage instruction … |
ZIHAN LIU et. al. | ArXiv | 2024-01-01 |
990 | MLeVLM: Improve Multi-level Progressive Capabilities Based on Multimodal Large Language Model for Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
DEXUAN XU et. al. | Annual Meeting of the Association for Computational … | 2024-01-01 |
991 | Multi-hop Community Question Answering Based on Multi-aspect Heterogeneous Graph Related Papers Related Patents Related Grants Related Venues Related Experts View |
YONGLIANG WU et. al. | Inf. Process. Manag. | 2024-01-01 |
992 | Operation-Augmented Numerical Reasoning for Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question answering requiring numerical reasoning, which generally involves symbolic operations such as sorting, counting, and addition, is a challenging task. To address such a … |
Yongwei Zhou; Junwei Bao; Youzheng Wu; Xiaodong He; Tiejun Zhao; | IEEE/ACM Transactions on Audio, Speech, and Language … | 2024-01-01 |
993 | Analysis of QA System Behavior Against Context and Question Changes Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Data quality has gained increasing attention across various research domains, including pattern recognition, image processing, and Natural Language Processing (NLP). The goal of … |
R. Karra; A. Lasfar; | Int. Arab J. Inf. Technol. | 2024-01-01 |
994 | QPAVE: A Multi-task Question Answering Approach for Fine-Grained Product Attribute Value Extraction Related Papers Related Patents Related Grants Related Venues Related Experts View |
Kassem Sabeh; Mouna Kacimi; J. Gamper; | International Conference on Data Warehousing and Knowledge … | 2024-01-01 |
995 | Debiased Visual Question Answering Via The Perspective of Question Types Related Papers Related Patents Related Grants Related Venues Related Experts View |
Tianyu Huai; Shuwen Yang; Junhang Zhang; Jiabao Zhao; Liang He; | Pattern Recognit. Lett. | 2024-01-01 |
996 | Intelligent Retrieval and Comprehension of Entrepreneurship Education Resources Based on Semantic Summarization of Knowledge Graphs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The latest technologies in natural language processing provide creative, knowledge retrieval, and question-answering technologies in the design of intelligent education, which can … |
Haiyang Yu; Entai Wang; Qi Lang; Jianan Wang; | IEEE Transactions on Learning Technologies | 2024-01-01 |
997 | BioASQ at CLEF2024: The Twelfth Edition of The Large-Scale Biomedical Semantic Indexing and Question Answering Challenge Related Papers Related Patents Related Grants Related Venues Related Experts View |
A. NENTIDIS et. al. | European Conference on Information Retrieval | 2024-01-01 |
998 | FakeBench: Uncover The Achilles’ Heels of Fake Images with Large Multimodal Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Recently, fake images generated by artificial intelligence (AI) models have become indistinguishable from real ones, posing new challenges for fake image detection models. To … |
Yixuan Li; Xuelin Liu; Xiaoyang Wang; Shiqi Wang; Weisi Lin; | ArXiv | 2024-01-01 |
999 | Teaching Small Language Models to Reason for Knowledge-Intensive Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
XIANG LI et. al. | Annual Meeting of the Association for Computational … | 2024-01-01 |
1000 | Efficient Agricultural Question Classification With A BERT-Enhanced DPCNN Model Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The application of big data technology in agricultural production has led to explosive growth in agricultural data. The accurate classification of agricultural questions from vast … |
XIAOJUAN GUO et. al. | IEEE Access | 2024-01-01 |
1001 | Reflection-Reinforced Self-Training for Language Agents Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Self-training can potentially improve the performance of language agents without relying on demonstrations from humans or stronger models. The general process involves generating … |
Zi-Yi Dou; Cheng-Fu Yang; Xueqing Wu; Kai-Wei Chang; Nanyun Peng; | ArXiv | 2024-01-01 |
1002 | Question-Directed Reasoning With Relation-Aware Graph Attention Network for Complex Question Answering Over Knowledge Graph Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Complex knowledge graph question answering (KGQA) aims at answering natural language questions by retrieving entities from a knowledge graph (KG). Recently, the relation … |
GENG ZHANG et. al. | IEEE/ACM Transactions on Audio, Speech, and Language … | 2024-01-01 |
1003 | EquinorQA: Large Language Models for Question Answering Over Proprietary Data Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models (LLMs) have become the state-of-the-art technology in a variety of language understanding tasks. Accordingly, many commercial organizations have been … |
Darío Garigliotti; Bjarte Johansen; Jakob Vigerust Kallestad; Seong-Eun Cho; Cèsar Ferri; | European Conference on Artificial Intelligence | 2024-01-01 |
1004 | MRHF: Multi-stage Retrieval and Hierarchical Fusion for Textbook Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Peide Zhu; Zhen Wang; Manabu Okumura; Jie Yang; | Conference on Multimedia Modeling | 2024-01-01 |
1005 | InfiCoder-Eval: Systematically Evaluating The Question-Answering Capabilities of Code Large Language Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models for understanding and generating code (code LLMs) have witnessed tremendous progress in recent years. With the rapid development of code LLMs, many popular … |
LINYI LI et. al. | ArXiv | 2024-01-01 |
1006 | UIC NLP GRADS at SemEval-2024 Task 3: Two-Step Disjoint Modeling for Emotion-Cause Pair Extraction Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Disentangling underlying factors contributing to the expression of emotion in multimodal data is challenging but may accelerate progress toward many real-world applications. In … |
Sharad Chandakacherla; Vaibhav Bhargava; Natalie Parde; | International Workshop on Semantic Evaluation | 2024-01-01 |
1007 | HIJLI_JU at SemEval-2024 Task 7: Enhancing Quantitative Question Answering Using Fine-tuned BERT Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In data and numerical analysis, Quantitative Question Answering (QQA) becomes a crucial instrument that provides deep insights for analyzing large datasets and helps make … |
Partha Sengupta; Sandip Sarkar; Dipankar Das; | International Workshop on Semantic Evaluation | 2024-01-01 |
1008 | Large Language Models for Binary Health-Related Question Answering: A Zero- and Few-Shot Evaluation Related Papers Related Patents Related Grants Related Venues Related Experts View |
Marcos Fernández-Pichel; David E. Losada; J. C. Pichel; | International Conference on Conceptual Structures | 2024-01-01 |
1009 | IKIM at MEDIQA-M3G 2024: Multilingual Visual Question-Answering for Dermatology Through VLM Fine-tuning and LLM Translations Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper presents our solution to the MEDIQA-M3G Challenge at NAACL-ClinicalNLP 2024. We participated in all three languages, ranking first in Chinese and Spanish and third in … |
Marie Bauer; Constantin Seibold; J. Kleesiek; Amin Dada; | Clinical Natural Language Processing Workshop | 2024-01-01 |
1010 | Keqing: Knowledge-based Question Answering Is A Nature Chain-of-thought Mentor of LLM IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present a novel framework to assist LLMs, such as ChatGPT, to retrieve question-related structured information on the knowledge graph, and demonstrate that Knowledge-based question answering (Keqing) could be a nature Chain-of-Thought (CoT) mentor to guide the LLM to sequentially find the answer entities of a complex question through interpretable logical chains. |
CHAOJIE WANG et. al. | arxiv-cs.CL | 2023-12-31 |
1011 | LaFFi: Leveraging Hybrid Natural Language Feedback for Fine-tuning Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces an alternative to SFT called Natural Language Feedback for Finetuning LLMs (LaFFi). |
QIANXI LI et. al. | arxiv-cs.LG | 2023-12-31 |
1012 | ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering Over Knowledge Graph IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the effectiveness, due to the divergence in model architecture, the PLM and GNN are not closely integrated, limiting the knowledge sharing and fine-grained feature interactions. To solve it, we aim to simplify the above two-module approach, and develop a more capable PLM that can directly support subgraph reasoning for KGQA, namely ReasoningLM. |
Jinhao Jiang; Kun Zhou; Wayne Xin Zhao; Yaliang Li; Ji-Rong Wen; | arxiv-cs.CL | 2023-12-30 |
1013 | FusionMind — Improving Question and Answering with External Context Fusion Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Answering questions using pre-trained language models (LMs) and knowledge graphs (KGs) presents challenges in identifying relevant knowledge and performing joint reasoning. We compared LMs (fine-tuned for the task) with the previously published QAGNN method for the Question-answering (QA) objective and further measured the impact of additional factual context on the QAGNN performance. |
Shreyas Verma; Manoj Parmar; Palash Choudhary; Sanchita Porwal; | arxiv-cs.CL | 2023-12-30 |
1014 | Integrating Multimodal Features By A Two-way Co-attention Mechanism for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Himanshu Sharma; Swati Srivastava; | Multim. Tools Appl. | 2023-12-29 |
1015 | AQUALLM: Audio Question Answering Data Generation Using Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a scalable AQA data generation pipeline, denoted as the AQUALLM framework, which relies on Large Language Models (LLMs). |
Swarup Ranjan Behera; Krishna Mohan Injeti; Jaya Sai Kiran Patibandla; Praveen Kumar Pokala; Balakrishna Reddy Pailla; | arxiv-cs.CL | 2023-12-28 |
1016 | S2M: Converting Single-Turn to Multi-Turn Datasets for Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: On the other hand, while numerous single-turn datasets are available, we have not utilized them effectively. To solve this problem, we propose a novel method to convert single-turn datasets to multi-turn datasets. |
BAOKUI LI et. al. | arxiv-cs.CL | 2023-12-27 |
1017 | From Text to Multimodal: A Survey of Adversarial Example Generation in Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This article aims to comprehensively review adversarial example-generation techniques in the QA field, including textual and multimodal contexts. |
Gulsum Yigit; Mehmet Fatih Amasyali; | arxiv-cs.CL | 2023-12-26 |
1018 | Geographic Knowledge Base Question Answering Over OpenStreetMap Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In recent years, question answering on knowledge bases (KBQA) has emerged as a promising approach for providing unified, user-friendly access to knowledge bases. Nevertheless, … |
Jonghyeon Yang; Hanme Jang; Kiyun Yu; | ISPRS Int. J. Geo Inf. | 2023-12-26 |
1019 | Conversational Question Answering with Reformulations Over Knowledge Graph Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: These inputs are easy for human beings to understand given a conversation history, but hard for a machine to interpret, which can degrade ConvQA performance. To address this problem, we propose a reinforcement learning (RL) based model, CornNet, which utilizes question reformulations generated by large language models (LLMs) to improve ConvQA performance. |
Lihui Liu; Blaine Hill; Boxin Du; Fei Wang; Hanghang Tong; | arxiv-cs.CL | 2023-12-26 |
1020 | KnowledgeNavigator: Leveraging Large Language Models for Enhanced Reasoning Over Knowledge Graph IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Especially in scenarios that require long logical chains or complex reasoning, the hallucination and knowledge limitation of LLM limit its performance in question answering (QA). In this paper, we propose a novel framework KnowledgeNavigator to address these challenges by efficiently and accurately retrieving external knowledge from knowledge graph and using it as a key factor to enhance LLM reasoning. |
TIEZHENG GUO et. al. | arxiv-cs.CL | 2023-12-25 |
1021 | On The Promises and Challenges of Multimodal Foundation Models for Geographical, Environmental, Agricultural, and Urban Planning Applications Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The advent of large language models (LLMs) has heightened interest in their potential for multimodal applications that integrate language and vision. This paper explores the … |
CHENJIAO TAN et. al. | ArXiv | 2023-12-23 |
1022 | Continually Improving Extractive QA Via Human Feedback Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We study continually improving an extractive question answering (QA) system via human user feedback. |
Ge Gao; Hung-Ting Chen; Yoav Artzi; Eunsol Choi; | emnlp | 2023-12-22 |
1023 | ViGPTQA – State-of-the-Art LLMs for Vietnamese Question Answering: System Overview, Core Models Training, and Evaluations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces a practical real-world implementation of a question answering system for Vietnamese, called ViGPTQA, leveraging the power of LLM. |
Minh Thuan Nguyen; Khanh Tung Tran; Nhu Van Nguyen; Xuan-Son Vu; | emnlp | 2023-12-22 |
1024 | Selectively Answering Ambiguous Questions IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We investigate question answering from this perspective, focusing on answering a subset of questions with a high degree of accuracy, from a set of questions in which many are inherently ambiguous. |
JEREMY COLE et. al. | emnlp | 2023-12-22 |
1025 | Merging Generated and Retrieved Knowledge for Open-Domain QA IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Based on the intuition that answers supported by both sources are more likely to be correct, we propose COMBO, a Compatibility-Oriented knowledge Merging for Better Open-domain QA framework, to effectively leverage the two sources of information. |
YUNXIANG ZHANG et. al. | emnlp | 2023-12-22 |
1026 | Diversity Enhanced Narrative Question Generation for Storybooks Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce a multi-question generation model (mQG), which is capable of generating multiple, diverse, and answerable questions by focusing on context and questions. |
Hokeun Yoon; JinYeong Bak; | emnlp | 2023-12-22 |
1027 | Techniques, Datasets, Evaluation Metrics and Future Directions of A Question Answering System Related Papers Related Patents Related Grants Related Venues Related Experts View |
Faiza Qamar; Seemab Latif; Asad Shah; | Knowledge and Information Systems | 2023-12-22 |
1028 | ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering Over Knowledge Graph IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the effectiveness, due to the divergence in model architecture, the PLM and GNN are not closely integrated, limiting the knowledge sharing and fine-grained feature interactions. To solve it, we aim to simplify the above two-module approach, and develop a more capable PLM that can directly support subgraph reasoning for KGQA, namely ReasoningLM. |
Jinhao Jiang; Kun Zhou; Xin Zhao; Yaliang Li; Ji-Rong Wen; | emnlp | 2023-12-22 |
1029 | CRT-QA: A Dataset of Complex Reasoning Question Answering Over Tabular Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we first establish a comprehensive taxonomy of reasoning and operation types for tabular data analysis. Then, we construct a complex reasoning QA dataset over tabular data, named CRT-QA dataset (Complex Reasoning QA over Tabular data), with the following unique features: (1) it is the first Table QA dataset with multi-step operation and informal reasoning; (2) it contains fine-grained annotations on questions' directness, composition types of sub-questions, and human reasoning paths which can be used to conduct a thorough investigation on LLMs' reasoning ability; (3) it contains a collection of unanswerable and indeterminate questions that commonly arise in real-world situations. |
Zhehao Zhang; Xitao Li; Yan Gao; Jian-Guang Lou; | emnlp | 2023-12-22 |
1030 | QA-NatVer: Question Answering for Natural Logic-based Fact Verification Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we propose to use question answering to predict natural logic operators, taking advantage of the generalization capabilities of instruction-tuned language models. |
Rami Aly; Marek Strong; Andreas Vlachos; | emnlp | 2023-12-22 |
1031 | Large Language Models Are Complex Table Parsers IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose to incorporate GPT-3.5 to address such challenges, in which complex tables are reconstructed into tuples and specific prompt designs are employed for dialogues. |
BOWEN ZHAO et. al. | emnlp | 2023-12-22 |
1032 | Mitigating Temporal Misalignment By Discarding Outdated Facts IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To mitigate the effects of temporal misalignment, we propose fact duration prediction: the task of predicting how long a given fact will remain true. |
Michael Zhang; Eunsol Choi; | emnlp | 2023-12-22 |
1033 | FACTIFY3M: A Benchmark for Multimodal Fact Verification with Explainability Through 5W Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite progress in automatic text-based fact verification (e.g., FEVER, LIAR), the research community lacks substantial effort in multimodal fact verification. To address this gap, we introduce FACTIFY 3M, a dataset of 3 million samples that pushes the boundaries of the domain of fact verification via a multimodal fake news dataset, in addition to offering explainability through the concept of 5W question-answering. |
MEGHA CHAKRABORTY et. al. | emnlp | 2023-12-22 |
1034 | TheoremQA: A Theorem-driven Question Answering Dataset IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce TheoremQA, the first theorem-driven question-answering dataset designed to evaluate AI models' capabilities to apply theorems to solve challenging science problems. |
WENHU CHEN et. al. | emnlp | 2023-12-22 |
1035 | Beware of Model Collapse! Fast and Stable Test-time Adaptation for Robust Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we delve into why TTA causes model collapse and find that the imbalanced label distribution inherent in QA is the reason for it. |
Yi Su; Yixin Ji; Juntao Li; Hai Ye; Min Zhang; | emnlp | 2023-12-22 |
1036 | Navigating The Grey Area: How Expressions of Uncertainty and Overconfidence Affect Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The increased deployment of LMs for real-world tasks involving knowledge and facts makes it important to understand model epistemology: what LMs think they know, and how their attitudes toward that knowledge are affected by language use in their inputs. Here, we study an aspect of model epistemology: how epistemic markers of certainty, uncertainty, or evidentiality like "I'm sure it's", "I think it's", or "Wikipedia says it's" affect models, and whether they contribute to model failures. |
Kaitlyn Zhou; Dan Jurafsky; Tatsunori Hashimoto; | emnlp | 2023-12-22 |
1037 | Towards A Unified Multimodal Reasoning Framework Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Our experiments aimed to fill the gap in current research by investigating the combined impact of CoT and VQA, contributing to the understanding of how these techniques can improve the reasoning capabilities of state-of-the-art models like GPT-4. Results from our experiments demonstrated the potential of these approaches in enhancing LM’s reasoning and question-answering capabilities, providing insights for further research and development in the field, and paving the way for more accurate and reliable AI systems that can handle complex reasoning tasks across multiple modalities. |
Abhinav Arun; Dipendra Singh Mal; Mehul Soni; Tomohiro Sawada; | arxiv-cs.CL | 2023-12-22 |
1038 | MarkQA: A Large Scale KBQA Dataset with Numerical Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we focus on the complex numerical reasoning in KBQA, and propose a new task, NR-KBQA, which necessitates the ability to perform both multi-hop reasoning and numerical reasoning. |
Xiang Huang; Sitao Cheng; Yuheng Bao; Shanshan Huang; Yuzhong Qu; | emnlp | 2023-12-22 |
1039 | Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we introduce InfoSeek, a visual question answering dataset tailored for information-seeking questions that cannot be answered with only common sense knowledge. |
YANG CHEN et. al. | emnlp | 2023-12-22 |
1040 | Empower Large Language Model to Perform Better on Industrial Domain-Specific Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we provide a benchmark Question Answering (QA) dataset named MSQA, centered around Microsoft products and IT technical problems encountered by customers. |
FANGKAI YANG et. al. | emnlp | 2023-12-22 |
1041 | Question Answering As Programming for Solving Time-Sensitive Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This can be attributed to the LLMs' inability to perform rigorous reasoning based on surface-level text semantics. To overcome this limitation, rather than requiring LLMs to directly answer the question, we propose a novel approach where we reframe the Question Answering task as Programming (QAaP). |
XINYU ZHU et. al. | emnlp | 2023-12-22 |
1042 | Dialogizer: Context-aware Conversational-QA Dataset Generation from Textual Sources Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the original dialog inpainting model is trained solely on the dialog reconstruction task, resulting in the generation of questions with low contextual relevance due to insufficient learning of question-answer alignment. To overcome this limitation, we propose a novel framework called Dialogizer, which has the capability to automatically generate ConvQA datasets with high contextual relevance from textual sources. |
YERIN HWANG et. al. | emnlp | 2023-12-22 |
1043 | Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To cope with the challenge, we propose a novel framework, Tree of Clarifications (ToC): It recursively constructs a tree of disambiguations for the AQ (via few-shot prompting leveraging external knowledge) and uses it to generate a long-form answer. |
Gangwoo Kim; Sungdong Kim; Byeongguk Jeon; Joonsuk Park; Jaewoo Kang; | emnlp | 2023-12-22 |
1044 | TempTabQA: Temporal Question Answering for Semi-Structured Tables IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Can current NLP systems reason about such information in semi-structured tables? To tackle this question, we introduce the task of temporal question answering on semi-structured tables. |
VIVEK GUPTA et. al. | emnlp | 2023-12-22 |
1045 | GazeVQA: A Video Question Answering Dataset for Multiview Eye-Gaze Task-Oriented Collaborations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we build a novel task-oriented VQA dataset, called GazeVQA, for collaborative tasks where gaze information is captured during the task process. |
MUHAMMET ILASLAN et. al. | emnlp | 2023-12-22 |
1046 | PRCA: Fitting Black-Box Large Language Models for Retrieval Question Answering Via Pluggable Reward-Driven Contextual Adapter IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Incorporating Large Language Models (LLMs) as generators is beneficial due to their advanced QA capabilities, but they are typically too large to be fine-tuned with budget constraints while some of them are only accessible via APIs. To tackle this issue and further improve ReQA performance, we propose a trainable Pluggable Reward-Driven Contextual Adapter (PRCA), keeping the generator as a black box. |
HAOYAN YANG et. al. | emnlp | 2023-12-22 |
1047 | API-Assisted Code Generation for Question Answering on Varied Table Structures Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In response, this paper introduces a unified TableQA framework that: (1) provides a unified representation for structured tables as multi-index Pandas data frames, (2) uses Python as a powerful querying language, and (3) uses few-shot prompting to translate NL questions into Python programs, which are executable on Pandas data frames. |
Yihan Cao; Shuyi Chen; Ryan Liu; Zhiruo Wang; Daniel Fried; | emnlp | 2023-12-22 |
1048 | Interview Evaluation: A Novel Approach for Automatic Evaluation of Conversational Question Answering Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel automatic evaluation approach, interview evaluation. |
XIBO LI et. al. | emnlp | 2023-12-22 |
1049 | PreWoMe: Exploiting Presuppositions As Working Memory for Long Form Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose PreWoMe, a unified approach capable of handling any type of information-seeking question. |
Wookje Han; Jinsol Park; Kyungjae Lee; | emnlp | 2023-12-22 |
1050 | A Question Answering Framework for Decontextualizing User-facing Snippets from Scientific Documents IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we use language models to rewrite snippets from scientific documents to be read on their own. |
Benjamin Newman; Luca Soldaini; Raymond Fok; Arman Cohan; Kyle Lo; | emnlp | 2023-12-22 |
1051 | A Simple Baseline for Knowledge-Based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our main contribution in this paper is to propose a much simpler and readily reproducible pipeline which, in a nutshell, is based on efficient in-context learning by prompting LLaMA (1 and 2) using question-informative captions as contextual information. |
Alexandros Xenos; Themos Stafylakis; Ioannis Patras; Georgios Tzimiropoulos; | emnlp | 2023-12-22 |
1052 | Language Models with Rationality Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This lack of interpretability is a growing impediment to widespread use of LLMs. To address this, our goals are to make model beliefs and their inferential relationships explicit, and to resolve inconsistencies that may exist, so that answers are supported by interpretable chains of reasoning drawn from a consistent network of beliefs. |
NORA KASSNER et. al. | emnlp | 2023-12-22 |
1053 | From Parse-Execute to Parse-Execute-Refine: Improving Semantic Parser for Complex Question Answering Over Knowledge Base Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Specifically, we propose three components: a parsing stage, an execution stage and a refinement stage, to enhance the ability of complex reasoning. |
Wangzhen Guo; Linyin Luo; Hanjiang Lai; Jian Yin; | emnlp | 2023-12-22 |
1054 | Large Language Models Are Temporal and Causal Reasoners for Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we develop LLaMA-VQA by applying Flipped-VQA to LLaMA, and it outperforms both LLMs-based and non-LLMs-based models on five challenging VideoQA benchmarks. |
Dohwan Ko; Ji Lee; Woo-Young Kang; Byungseok Roh; Hyunwoo Kim; | emnlp | 2023-12-22 |
1055 | IfQA: A Dataset for Open-domain Question Answering Under Counterfactual Presuppositions IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Although counterfactual reasoning is a fundamental aspect of intelligence, the lack of large-scale counterfactual open-domain question-answering (QA) benchmarks makes it difficult to evaluate and improve models on this ability. To address this void, we introduce the first such dataset, named IfQA, where each question is based on a counterfactual presupposition via an "if" clause. |
Wenhao Yu; Meng Jiang; Peter Clark; Ashish Sabharwal; | emnlp | 2023-12-22 |
1056 | Uncertainty Guided Global Memory Improves Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, attention-based token representations lack explicit global contextual information to connect reasoning steps. To address these issues, we propose GEMFormer, a two-stage method that first collects relevant information over the entire document to the memory and then combines it with local context to solve the task. |
Alsu Sagirova; Mikhail Burtsev; | emnlp | 2023-12-22 |
1057 | CarExpert: Leveraging Large Language Models for In-Car Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose CarExpert, an in-car retrieval-augmented conversational question-answering system leveraging LLMs for different tasks. |
MD RASHAD AL HASAN RONY et. al. | emnlp | 2023-12-22 |
1058 | ZEROTOP: Zero-Shot Task-Oriented Semantic Parsing Using Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems. |
Dheeraj Mekala; Jason Wolfe; Subhro Roy; | emnlp | 2023-12-22 |
1059 | Too Much of Product Information: Don't Worry, Let's Look for Evidence! Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a distantly supervised solution to answer customer questions by using product information. |
Aryan Jain; Jitenkumar Rana; Chetan Aggarwal; | emnlp | 2023-12-22 |
1060 | Hop, Union, Generate: Explainable Multi-hop Reasoning Without Rationale Supervision Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision. |
Wenting Zhao; Justin Chiu; Claire Cardie; Alexander Rush; | emnlp | 2023-12-22 |
1061 | Evaluating and Modeling Attribution for Cross-Lingual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We find that Natural Language Inference models and PaLM 2 fine-tuned on a very small amount of attribution data can accurately detect attribution. With these models, we improve the attribution level of a cross-lingual QA system. |
BENJAMIN MULLER et. al. | emnlp | 2023-12-22 |
1062 | Causal Reasoning Through Two Cognition Layers for Improving Generalization in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Besides, diverse interpretations of the input lead to various modes of answer generation, highlighting the role of causal reasoning between interpreting and answering steps in VQA. Through this lens, we propose Cognitive pathways VQA (CopVQA) improving the multimodal predictions by emphasizing causal reasoning factors. |
Trang Nguyen; Naoaki Okazaki; | emnlp | 2023-12-22 |
1063 | Does Named Entity Recognition Truly Not Scale Up to Real-world Product Attribute Extraction? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we argue the scalability of the NER-based approach compared to the QA-based approach, since researchers have compared BERT-based QA-based models to only a weak BiLSTM-based NER baseline trained from scratch in terms of only accuracy on datasets designed to evaluate the QA-based approach. |
Wei-Te Chen; Keiji Shinzato; Naoki Yoshinaga; Yandi Xia; | emnlp | 2023-12-22 |
1064 | Diversify Question Generation with Retrieval-Augmented Style Transfer Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: These methods, however, have not considered the potential of external knowledge for expression diversity. To bridge this gap, we propose RAST, a framework for Retrieval-Augmented Style Transfer, where the objective is to utilize the style of diverse templates for question generation. |
QI GOU et. al. | emnlp | 2023-12-22 |
1065 | Best of Both Worlds: Towards Improving Temporal Knowledge Base Question Answering Via Targeted Fact Extraction Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We model the extraction problem as an open-domain question answering task using off-the-shelf language models. |
NITHISH KANNEN et. al. | emnlp | 2023-12-22 |
1066 | Continual Dialogue State Tracking Via Example-Guided Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Motivated by the insight that dialogue state tracking (DST), a crucial component of dialogue systems that estimates the user's goal as a conversation proceeds, is a simple natural language understanding task, we propose reformulating it as a bundle of granular example-guided question answering tasks to minimize the task shift between services and thus benefit continual learning. |
HYUNDONG CHO et. al. | emnlp | 2023-12-22 |
1067 | LingoQA: Visual Question Answering for Autonomous Driving IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce LingoQA, a novel dataset and benchmark for visual question answering in autonomous driving. |
ANA-MARIA MARCU et. al. | arxiv-cs.RO | 2023-12-21 |
1068 | DriveLM: Driving with Graph Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We instantiate datasets (DriveLM-Data) built upon nuScenes and CARLA, and propose a VLM-based baseline approach (DriveLM-Agent) for jointly performing Graph VQA and end-to-end driving. |
CHONGHAO SIMA et. al. | arxiv-cs.CV | 2023-12-21 |
1069 | Perception Test 2023: A Summary of The First Challenge And Outcome Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We summarise in this report the task descriptions, metrics, baselines, and results. |
Joseph Heyward; João Carreira; Dima Damen; Andrew Zisserman; Viorica Pătrăucean; | arxiv-cs.CV | 2023-12-20 |
1070 | Relation-Aware Question Answering for Heterogeneous Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this way, the interaction between entity and relation is enhanced, and we derive better entity and relation representations. |
HAOWEI DU et. al. | arxiv-cs.CL | 2023-12-19 |
1071 | Cross-Modal Reasoning with Event Correlation for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the dense caption modality as a new auxiliary and distill event-correlated information from it to infer the correct answer. |
CHENGXIANG YIN et. al. | arxiv-cs.CV | 2023-12-19 |
1072 | Multi-Clue Reasoning with Memory Augmentation for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, most existing VQA methods are incapable of handling Knowledge-based Visual Question Answering (KB-VQA), which requires external knowledge beyond visible contents to answer questions about a given image. To address this issue, we propose a novel framework that endows the model with capabilities of answering more general questions, and achieves a better exploitation of external knowledge through generating Multiple Clues for Reasoning with Memory Neural Networks (MCR-MemNN). |
Chengxiang Yin; Zhengping Che; Kun Wu; Zhiyuan Xu; Jian Tang; | arxiv-cs.CV | 2023-12-19 |
1073 | VQA4CIR: Boosting Composed Image Retrieval with Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Although progress has been made in Composed Image Retrieval (CIR), we empirically find that a certain percentage of failure retrieval results are not consistent with their relative captions. |
CHUN-MEI FENG et. al. | arxiv-cs.CV | 2023-12-19 |
1074 | On Early Detection of Hallucinations in Factual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we explore if the artifacts associated with the model generations can provide hints that the generation will contain hallucinations. |
Ben Snyder; Marius Moisescu; Muhammad Bilal Zafar; | arxiv-cs.CL | 2023-12-19 |
1075 | UniGen: A Unified Generative Framework for Retrieval and Question Answering with Large Language Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Generative information retrieval, encompassing two major tasks of Generative Document Retrieval (GDR) and Grounded Answer Generation (GAR), has gained significant attention in … |
Xiaoxi Li; Yujia Zhou; Zhicheng Dou; | ArXiv | 2023-12-18 |
1076 | GenBoost: Generative Modeling and Boosted Learning for Multi-hop Question Answering Over Incomplete Knowledge Graphs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multi-hop question answering over incomplete knowledge graphs involves iteratively reasoning on the provided question and graph to find answers, while also tackling the inherent … |
Zhen Cheng; Jianwei Niu; Shasha Mo; Jia Chen; | 2023 IEEE 29th International Conference on Parallel and … | 2023-12-17 |
1077 | Towards Designing A Question-Answering Chatbot for Online News: Understanding Questions and Perspectives Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: By combining results from the studies, we present alignments and discrepancies between how journalists and readers want to use QA chatbots and propose a framework for designing effective QA chatbots in newsrooms. |
Md Naimul Hoque; Ayman Mahfuz; Mayukha Kindi; Naeemul Hassan; | arxiv-cs.HC | 2023-12-17 |
1078 | An Evaluation of GPT-4V and Gemini in Online VQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We conduct fine-grained analysis by generating seven types of metadata for nearly 2,000 visual questions, such as image type and the required image processing capabilities. |
Mengchen Liu; Chongyan Chen; Danna Gurari; | arxiv-cs.CV | 2023-12-17 |
1079 | Research on Intelligent Question-Answering Systems Based on Large Language Models and Knowledge Graphs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: With the continuous development of artificial intelligence and cloud computing technologies, the emergence of large language models (LLMs) has created new opportunities for … |
Qinglin Wu; Yan Wang; | 2023 16th International Symposium on Computational … | 2023-12-16 |
1080 | Privacy-Aware Document Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we explore privacy in the domain of DocVQA for the first time, highlighting privacy issues in state-of-the-art multi-modal LLMs used for DocVQA, and explore possible solutions. |
RUBÈN TITO et. al. | arxiv-cs.CV | 2023-12-15 |
1081 | GSQA: An End-to-End Model for Generative Spoken Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While this extractive approach is effective when answers are present directly within the input, it falls short on abstractive questions, where answers are not directly extracted but inferred from the given information. To bridge this gap, we introduce the first end-to-end Generative Spoken Question Answering (GSQA) model that empowers the system to engage in abstractive reasoning. |
MIN-HAN SHIH et. al. | arxiv-cs.CL | 2023-12-15 |
1082 | Weak Supervision for Question and Answering Sentiment Analysis Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Companies and government agencies are keen on comprehending their customers’ sentiments regarding their products and services. This has given rise to the concept of Social … |
Victor Akihito Kamada Tomita; Fábio Manoel França Lobato; R. Marcacini; | 2023 International Conference on Machine Learning and … | 2023-12-15 |
1083 | RJUA-QA: A Comprehensive QA Dataset for Urology Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce RJUA-QA, a novel medical dataset for question answering (QA) and reasoning with clinical evidence, contributing to bridge the gap between general large language models (LLMs) and medical-specific LLM applications. |
SHIWEI LYU et. al. | arxiv-cs.CL | 2023-12-15 |
1084 | ReST Meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: These systems, however, suffer from various failure cases, and we cannot directly train them end-to-end to fix such failures, as interaction with external knowledge is non-differentiable. To address these deficiencies, we define a ReAct-style LLM agent with the ability to reason and act upon external knowledge. |
RENAT AKSITOV et. al. | arxiv-cs.CL | 2023-12-15 |
1085 | Advancing Surgical VQA with Scene Graph Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present a novel surgical VQA dataset and model and show that results can be significantly improved by incorporating geometric scene features in the VQA model design. |
KUN YUAN et. al. | arxiv-cs.CV | 2023-12-15 |
1086 | Privacy-Aware Document Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Document Visual Question Answering (DocVQA) is a fast growing branch of document understanding. Despite the fact that documents contain sensitive or copyrighted information, none … |
RUBÈN PÉREZ TITO et. al. | ArXiv | 2023-12-15 |
1087 | Knowledge Enhancement and Scene Understanding for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Zhenqiang Su; Gang Gou; | Knowledge and Information Systems | 2023-12-14 |
1088 | ViLA: Efficient Video-Language Alignment for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose an efficient Video-Language Alignment (ViLA) network. |
XIJUN WANG et. al. | arxiv-cs.CV | 2023-12-13 |
1089 | BESTMVQA: A Benchmark Evaluation System for Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, they often suffer from (i) the data insufficiency problem, which makes it difficult to train state-of-the-art (SOTA) models for the domain-specific task, and (ii) the reproducibility problem, in that many existing models have not been thoroughly evaluated in a unified experimental setup. To address these issues, this paper develops a Benchmark Evaluation SysTem for Medical Visual Question Answering, denoted by BESTMVQA. |
Xiaojie Hong; Zixin Song; Liangzhi Li; Xiaoli Wang; Feiyan Liu; | arxiv-cs.AI | 2023-12-12 |
1090 | Evaluating ChatGPT As A Question Answering System: A Comprehensive Analysis and Comparison with Existing Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In the current era, a multitude of language models has emerged to cater to user inquiries. Notably, the GPT-3.5 Turbo language model has gained substantial attention as the … |
Hossein Bahak; Farzaneh Taheri; Zahra Zojaji; Arefeh Kazemi; | ArXiv | 2023-12-11 |
1091 | NuScenes-MQA: Integrated Evaluation of Captions and QA for Autonomous Driving Datasets Using Markup Annotations IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce Markup-QA, a novel dataset annotation technique in which QAs are enclosed within markups. |
Yuichi Inoue; Yuki Yada; Kotaro Tanahashi; Yu Yamaguchi; | arxiv-cs.CV | 2023-12-11 |
1092 | PaperQA: Retrieval-Augmented Generative Agent for Scientific Research IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Retrieval-Augmented Generation (RAG) models have been proposed to reduce hallucinations and provide provenance for how an answer was generated. |
JAKUB LÁLA et. al. | arxiv-cs.CL | 2023-12-08 |
1093 | DelucionQA: Detecting Hallucinations in Domain-specific Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Detecting hallucinations through automated methods is thus paramount. To facilitate research in this direction, we introduce a sophisticated dataset, DelucionQA, that captures hallucinations made by retrieval-augmented LLMs for a domain-specific QA task. |
MOBASHIR SADAT et. al. | arxiv-cs.CL | 2023-12-08 |
1094 | Retrieval-based Video Language Model for Efficient Long Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Moreover, the presence of abundant question-irrelevant tokens introduces noise to the video QA process. To address these issues, we introduce a simple yet effective retrieval-based video language model (R-VLM) for efficient and interpretable long video QA. |
Jiaqi Xu; Cuiling Lan; Wenxuan Xie; Xuejin Chen; Yan Lu; | arxiv-cs.CV | 2023-12-08 |
1095 | LifelongMemory: Leveraging LLMs for Answering Queries in Long-form Egocentric Videos Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper we introduce LifelongMemory, a new framework for accessing long-form egocentric videographic memory through natural language question answering and retrieval. |
Ying Wang; Yanlai Yang; Mengye Ren; | arxiv-cs.CV | 2023-12-07 |
1096 | Language Model Knowledge Distillation for Efficient Question Answering in Spanish Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Therefore, smaller distilled models for the Spanish language could prove highly scalable and facilitate broader adoption on a variety of tasks and scenarios. In this work, we take one step in this direction by developing SpanishTinyRoBERTa, a compressed language model based on RoBERTa for efficient question answering in Spanish. |
Adrián Bazaga; Pietro Liò; Gos Micklem; | arxiv-cs.CL | 2023-12-07 |
1097 | PCoQA: Persian Conversational Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In the pursuit of conversational question answering research, we introduce PCoQA, the first Persian Conversational Question Answering dataset, a resource comprising information-seeking dialogs encompassing a total of 9,026 contextually-driven questions. |
HAMED HEMATIAN HEMATI et. al. | arxiv-cs.CL | 2023-12-07 |
1098 | A Question-Answering System for Vietnamese Public Administrative Services Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In the realm of legal question-answering (QA) systems, information retrieval (IR) plays a pivotal role. Despite thorough research in numerous languages, the Vietnamese research … |
Anh Pham Duy; Huong Le Thanh; | Proceedings of the 12th International Symposium on … | 2023-12-07 |
1099 | MoVQA: A Benchmark of Versatile Question-Answering for Long-Form Movie Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Moreover, their QAs are unduly narrow and modality-biased, lacking a wider view of understanding long-term video content with rich dynamics and complex narratives. To remedy this, we introduce MoVQA, a long-form movie question-answering dataset and benchmark to assess the diverse cognitive capabilities of multimodal systems across multi-level temporal lengths, considering both video length and clue length. |
HONGJIE ZHANG et. al. | arxiv-cs.CV | 2023-12-07 |
1100 | XAIQA: Explainer-Based Data Augmentation for Extractive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a novel approach, XAIQA, for generating synthetic QA pairs at scale from data naturally available in electronic health records. |
JOEL STREMMEL et. al. | arxiv-cs.CL | 2023-12-06 |
1101 | PoQuAD – The Polish Question Answering Dataset – Description and Analysis Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper showcases PoQuAD — a SQuAD-like contribution to building Question Answering tools for Polish. It largely follows the usual Machine Reading Comprehension format, but a … |
Ryszard Tuora; Aleksandra Zwierzchowska; Natalia Zawadzka-Paluektau; Cezary Klamra; Łukasz Kobyliński; | Proceedings of the 12th Knowledge Capture Conference 2023 | 2023-12-05 |
1102 | Lingua Franca – Entity-Aware Machine Translation Approach for Question Answering Over Knowledge Graphs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This research paper proposes an approach called Lingua Franca that improves machine translation quality by utilizing information from a knowledge graph to translate named entities … |
NIKIT SRIVASTAVA et. al. | Proceedings of the 12th Knowledge Capture Conference 2023 | 2023-12-05 |
1103 | Low-Resource Efficient Multi-Stage Tuning Strategy for Biomedical Question Answering Task Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The automated question-answering system plays a crucial role in improving the accuracy and efficiency of clinical decision-making. While large-scale language models perform … |
Binrui Wang; Yongping Du; Xingnan Jin; Rui Yan; Qi Zhang; | 2023 IEEE International Conference on Bioinformatics and … | 2023-12-05 |
1104 | An IoT-based Approach to Expert Recommendation in Community Question Answering for Disaster Recovery Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In the dynamic field of IoT, where technologies like Bluetooth and WiFi are prevalent in home and office settings, proactively managing disasters is critical. This paper … |
David Macri; Antonio Francesco Gentile; Pietro Sabatino; | 2023 IEEE International Conference on Data Mining Workshops … | 2023-12-04 |
1105 | GNN2R: Weakly-Supervised Rationale-Providing Question Answering Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Second, it is difficult to maintain high efficiency when explicit KG triples need to be retrieved to generate explanations. In this paper, we propose a novel Graph Neural Network-based Two-Step Reasoning model (GNN2R) to solve this issue. |
Ruijie Wang; Luca Rossetto; Michael Cochez; Abraham Bernstein; | arxiv-cs.CL | 2023-12-04 |
1106 | Unleashing The Potential of Large Language Model: Zero-shot VQA for Flood Disaster Scenario Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a zero-shot VQA model named Zero-shot VQA for Flood Disaster Damage Assessment (ZFDDA). |
Yimin Sun; Chao Wang; Yan Peng; | arxiv-cs.CV | 2023-12-04 |
1107 | Towards Leveraging LLMs for Conditional QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Utilizing the Conditional Question Answering (CQA) dataset and focusing on generative models like T5 and UL2, we assess the performance of LLMs across diverse question types. |
Syed-Amad Hussain; Parag Pravin Dakle; SaiKrishna Rallabandi; Preethi Raghavan; | arxiv-cs.CL | 2023-12-02 |
1108 | Harnessing The Power of Prompt-based Techniques for Generating School-Level Questions Using Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose a novel approach that utilizes prompt-based techniques to generate descriptive and reasoning-based questions. |
Subhankar Maity; Aniket Deroy; Sudeshna Sarkar; | arxiv-cs.CL | 2023-12-02 |
1109 | Multi-Granularity Interaction and Integration Network for Video Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video question answering, aiming to answer a natural language question related to the given video, has gained popularity in the last few years. Although significant improvements … |
Yuanyuan Wang; Meng Liu; Jianlong Wu; Liqiang Nie; | IEEE Transactions on Circuits and Systems for Video … | 2023-12-01 |
1110 | BERT and Hierarchical Cross Attention-based Question Answering Over Bridge Inspection Knowledge Graph IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View |
JIANXI YANG et. al. | Expert Systems with Applications | 2023-12-01 |
1111 | Knowledge-based Visual Question Answering About Named Entities Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This thesis is positioned at the intersection of several research fields, Natural Language Processing, Information Retrieval (IR) and Computer Vision, which have unified around … |
Paul Lerner; | ACM SIGIR Forum | 2023-12-01 |
1112 | Semantic Parsing for Question Answering Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a novel method with graph-to-segment mapping for question answering over knowledge graphs, which helps understanding question utterances. |
Sijia Wei; Wenwen Zhang; Qisong Li; Jiang Zhao; | arxiv-cs.CL | 2023-12-01 |
1113 | Zero-Shot Video Question Answering with Procedural Programs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose to answer zero-shot questions about videos by generating short procedural programs that derive a final answer from solving a sequence of visual subtasks. |
Rohan Choudhury; Koichiro Niinuma; Kris M. Kitani; László A. Jeni; | arxiv-cs.CV | 2023-12-01 |
1114 | KI-MAG: A Knowledge-infused Abstractive Question Answering System in Medical Domain Related Papers Related Patents Related Grants Related Venues Related Experts View |
Aizan Zafar; Sovan Kumar Sahoo; Harsh Bhardawaj; Amitava Das; Asif Ekbal; | Neurocomputing | 2023-12-01 |
1115 | Enhancing Answer Selection in Community Question Answering with Pre-trained and Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Specifically, we apply the BERT model as the encoder layer to pre-train on question subjects, question bodies, and answers, respectively; a cross-attention mechanism then selects the most relevant answer for each question. |
Xinghang Hu; | arxiv-cs.CL | 2023-11-29 |
1116 | AviationGPT: A Large Language Model for The Aviation Domain Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The emergence of LLMs presents an opportunity to transform this situation, but there is a lack of LLMs specifically designed for the aviation domain. To address this gap, we propose AviationGPT, which is built on open-source LLaMA-2 and Mistral architectures and continuously trained on a wealth of carefully curated aviation datasets. |
Liya Wang; Jason Chou; Xin Zhou; Alex Tien; Diane M Baumgartner; | arxiv-cs.CL | 2023-11-29 |
1117 | Multi-modal Domain Adaptation for Text Visual Question Answering Tasks Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Domain adaptation aims to train a model on the labeled source data and unlabeled target data while improving the performance of the same model on the target domain. Recently, … |
Zhiyuan Li; Dongnan Liu; Weidong Cai; | 2023 International Conference on Digital Image Computing: … | 2023-11-28 |
1118 | Towards Top-Down Reasoning: An Explainable Multi-Agent Approach for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Thus, they cannot fully use the powerful VLM for the given VQA question to achieve optimal performance. To overcome this limitation, and inspired by the human top-down reasoning process, i.e., systematically exploring relevant issues to derive a comprehensive answer, this work introduces a novel, explainable multi-agent collaboration framework that leverages the expansive knowledge of Large Language Models (LLMs) to enhance the capabilities of VLMs themselves. |
ZEQING WANG et. al. | arxiv-cs.CV | 2023-11-28 |
1119 | A Survey of Consumer Health Question Answering Systems Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Consumers are increasingly using the web to find answers to their health‐related queries. Unfortunately, they often struggle with formulating the questions, further compounded by … |
A. Welivita; Pearl Pu; | AI Magazine | 2023-11-27 |
1120 | Fully Authentic Visual Question Answering Dataset from Online Communities Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the first VQA dataset in which all contents originate from an authentic use case. |
CHONGYAN CHEN et. al. | arxiv-cs.CV | 2023-11-27 |
1121 | Characterizing Video Question Answering with Sparsified Inputs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this way, we experiment over public VideoQA benchmarks and provide analysis on how sparsified inputs affect the performance. |
Shiyuan Huang; Robinson Piramuthu; Vicente Ordonez; Shih-Fu Chang; Gunnar A. Sigurdsson; | arxiv-cs.CV | 2023-11-27 |
1122 | Releasing The CRaQAn (Coreference Resolution in Question-Answering): An Open-source Dataset and Dataset Creation Methodology Using Instruction-following Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work we present our Coreference Resolution in Question-Answering (CRaQAn) dataset, an open-source dataset that caters to the nuanced information retrieval requirements of coreference resolution in question-answering tasks by providing over 250 question-answer pairs containing coreferences. |
ROB GRZYWINSKI et. al. | arxiv-cs.CL | 2023-11-27 |
1123 | See and Think: Embodied Agent in Virtual Environment IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes STEVE, a comprehensive and visionary embodied agent in the Minecraft virtual environment. |
ZHONGHAN ZHAO et. al. | arxiv-cs.AI | 2023-11-26 |
1124 | Uncertainty-aware Language Modeling for Selective Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present an automatic large language model (LLM) conversion approach that produces uncertainty-aware LLMs capable of estimating uncertainty with every prediction. |
QI YANG et. al. | arxiv-cs.CL | 2023-11-26 |
1125 | Optimizing and Fine-tuning Large Language Model for Urban Renewal Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study aims to innovatively explore adaptive applications of large language models (LLM) in urban renewal. |
XI WANG et. al. | arxiv-cs.CL | 2023-11-26 |
1126 | FlowMind: Automatic Workflow Generation with LLMs IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The rapidly evolving field of Robotic Process Automation (RPA) has made significant strides in automating repetitive processes, yet its effectiveness diminishes in scenarios … |
ZHEN ZENG et. al. | Proceedings of the Fourth ACM International Conference on … | 2023-11-25 |
1127 | AutoEval-Video: An Automatic Benchmark for Assessing Large Vision Language Models in Open-Ended Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a novel and challenging benchmark, AutoEval-Video, to comprehensively evaluate large vision-language models in open-ended video question answering. |
Xiuyuan Chen; Yuan Lin; Yuchen Zhang; Weiran Huang; | arxiv-cs.CV | 2023-11-24 |
1128 | Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel approach: Probabilistic Tree-of-thought Reasoning (ProbTree). |
SHULIN CAO et. al. | arxiv-cs.CL | 2023-11-23 |
1129 | Question Answering in Natural Language: The Special Case of Temporal Expressions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our work aims to leverage a popular approach used for general question answering, answer extraction, in order to find answers to temporal questions within a paragraph. |
Armand Stricker; | arxiv-cs.CL | 2023-11-23 |
1130 | Drilling Down Into The Discourse Structure with LLMs for Long Document Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We aim to assess the applicability of large language models (LLMs) in the task of zero-shot long document evidence retrieval, owing to their unprecedented performance across various NLP tasks. |
Inderjeet Nair; Shwetha Somasundaram; Apoorv Saxena; Koustava Goswami; | arxiv-cs.CL | 2023-11-22 |
1131 | Enhancing Large Language Models’ Utility for Medical Question-Answering: A Patient Health Question Summarization Approach Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large language models (LLMs) offer tremendous potential for answering diverse questions and providing valuable insights. However, to maximize their utility, it is essential to … |
Nour Eddine Zekaoui; Siham Yousfi; M. Mikram; Maryem Rhanoui; | 2023 14th International Conference on Intelligent Systems: … | 2023-11-22 |
1132 | FinanceBench: A New Benchmark for Financial Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We test 16 state of the art model configurations (including GPT-4-Turbo, Llama2 and Claude2, with vector stores and long context prompts) on a sample of 150 cases from FinanceBench, and manually review their answers (n=2,400). |
PRANAB ISLAM et. al. | arxiv-cs.CL | 2023-11-20 |
1133 | Taiyi: A Bilingual Fine-Tuned Large Language Model for Diverse Biomedical Tasks IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To investigate the effectiveness of fine-tuned LLMs on diverse biomedical NLP tasks in different languages, we present Taiyi, a bilingual fine-tuned LLM for diverse biomedical tasks. |
LING LUO et. al. | arxiv-cs.CL | 2023-11-20 |
1134 | PEFT-MedAware: Large Language Model for Medical Awareness Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Chat models are capable of answering a wide range of questions; however, the accuracy of their responses is highly uncertain. In this research, we propose a specialized PEFT-MedAware model in which we utilize parameter-efficient fine-tuning (PEFT) to enhance the Falcon-1b large language model on specialized MedQuAD data consisting of 16,407 medical QA pairs, leveraging only 0.44% of its trainable parameters to improve computational efficiency. |
Keivalya Pandya; | arxiv-cs.CL | 2023-11-17 |
1135 | Graph Elicitation for Guiding Multi-Step Reasoning in Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To deal with them, we propose a GE-Reasoning method, which directs LLMs to generate proper sub-questions and corresponding answers. |
Jinyoung Park; Ameen Patel; Omar Zia Khan; Hyunwoo J. Kim; Joo-Kyung Kim; | arxiv-cs.CL | 2023-11-16 |
1136 | Graph-Guided Reasoning for Multi-Hop Question Answering in Large Language Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Chain-of-Thought (CoT) prompting has boosted the multi-step reasoning capabilities of Large Language Models (LLMs) by generating a series of rationales before the final answer. We … |
Jinyoung Park; Ameen Patel; Omar Zia Khan; Hyunwoo J. Kim; Jooyeon Kim; | ArXiv | 2023-11-16 |
1137 | Downstream Trade-offs of A Family of Text Watermarks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we evaluate the performance of LLMs watermarked using three different strategies over a diverse suite of tasks including those cast as k-class classification (CLS), multiple choice question answering (MCQ), short-form generation (e.g., open-ended question answering) and long-form generation (e.g., translation) tasks. |
Anirudh Ajith; Sameer Singh; Danish Pruthi; | arxiv-cs.CL | 2023-11-16 |
1138 | Towards Robust Temporal Reasoning of Large Language Models Via A Multi-Hop QA Dataset and Pseudo-Instruction Tuning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a complex temporal question-answering dataset Complex-TR that focuses on multi-answer and multi-hop temporal reasoning. |
Qingyu Tan; Hwee Tou Ng; Lidong Bing; | arxiv-cs.CL | 2023-11-16 |
1139 | Leveraging LLMs in Scholarly Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a scholarly Knowledge Graph Question Answering (KGQA) that answers bibliographic natural language questions by leveraging a large language model (LLM) in a few-shot manner. |
Tilahun Abedissa Taffa; Ricardo Usbeck; | arxiv-cs.CL | 2023-11-16 |
1140 | Crafting In-context Examples According to LMs’ Parametric Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We perform analysis on three multi-answer question answering datasets, which allows us to further study answer set ordering strategies based on the LM’s knowledge of each answer. |
Yoonsang Lee; Pranav Atreya; Xi Ye; Eunsol Choi; | arxiv-cs.CL | 2023-11-16 |
1141 | Graph Neural Networks for Visual Question Answering: A Systematic Review Related Papers Related Patents Related Grants Related Venues Related Experts View |
ABDULGANIYU ABDU YUSUF et. al. | Multimedia Tools and Applications | 2023-11-16 |
1142 | On Evaluating The Integration of Reasoning and Action in LLM Agents with Database Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the challenge of accurately assessing answer quality, we introduce a multi-agent evaluation framework that simulates the academic peer-review process, enhancing the precision and reliability of our evaluations. |
LINYONG NAN et. al. | arxiv-cs.CL | 2023-11-16 |
1143 | Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the problem of few-shot transfer learning for KBQA, where the target domain offers only a few labeled examples, but a large labeled training dataset is available in a source domain. |
Mayur Patidar; Riya Sawhney; Avinash Singh; Biswajit Chatterjee; Indrajit Bhattacharya; | arxiv-cs.CL | 2023-11-15 |
1144 | Improving Zero-shot Visual Question Answering Via Large Language Models with Reasoning Question Prompts IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we present Reasoning Question Prompts for VQA tasks, which can further activate the potential of LLMs in zero-shot scenarios. |
YUNSHI LAN et. al. | arxiv-cs.CV | 2023-11-15 |
1145 | LLMRefine: Pinpointing and Refining Large Language Models Via Fine-Grained Actionable Feedback Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose LLMRefine, an inference time optimization method to refine LLM’s output. |
WENDA XU et. al. | arxiv-cs.CL | 2023-11-15 |
1146 | Never Lost in The Middle: Mastering Long-Context Question Answering with Position-Agnostic Decompositional Training Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The lost in the middle problem challenges most LLMs, referring to the dramatic decline in accuracy when correct information is located in the middle. To overcome this crucial issue, this paper proposes to enhance the information searching and reflection ability of LLMs in long contexts via specially designed tasks called Attention Strengthening Multi-doc QA (ASM QA). |
JUNQING HE et. al. | arxiv-cs.CL | 2023-11-15 |
1147 | Long-form Question Answering: An Iterative Planning-Retrieval-Generation Approach Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Additionally, generating detailed long-form answers often entails aggregating knowledge from diverse sources. To address these limitations, we propose an LFQA model with iterative Planning, Retrieval, and Generation. |
Pritom Saha Akash; Kashob Kumar Roy; Lucian Popa; Kevin Chen-Chuan Chang; | arxiv-cs.CL | 2023-11-15 |
1148 | Pregnant Questions: The Importance of Pragmatic Awareness in Maternal Health Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In a high-risk domain such as maternal and infant health, a question-answering system must recognize these pragmatic constraints and go beyond simply answering user questions, examining them in context to respond helpfully. To achieve this, we study assumptions and implications, or pragmatic inferences, made when mothers ask questions about pregnancy and infant care by collecting a dataset of 2,727 inferences from 500 questions across three diverse sources. |
NEHA SRIKANTH et. al. | arxiv-cs.CL | 2023-11-15 |
1149 | SQATIN: Supervised Instruction Tuning Meets Question Answering for Improved Dialogue NLU Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce SQATIN, a new framework for dialog NLU based on (i) instruction tuning and (ii) question-answering-based formulation of ID and VE tasks. |
Evgeniia Razumovskaia; Goran Glavaš; Anna Korhonen; Ivan Vulić; | arxiv-cs.CL | 2023-11-15 |
1150 | TempTabQA: Temporal Question Answering for Semi-Structured Tables IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Can current NLP systems reason about such information in semi-structured tables? To tackle this question, we introduce the task of temporal question answering on semi-structured tables. |
VIVEK GUPTA et. al. | arxiv-cs.CL | 2023-11-14 |
1151 | Learning to Filter Context for Retrieval-Augmented Generation IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This can cause over- or under-reliance on context, and result in problems in the generated output such as hallucinations. To alleviate these problems, we propose FILCO, a method that improves the quality of the context provided to the generator by (1) identifying useful context based on lexical and information-theoretic approaches, and (2) training context filtering models that can filter retrieved contexts at test time. |
Zhiruo Wang; Jun Araki; Zhengbao Jiang; Md Rizwan Parvez; Graham Neubig; | arxiv-cs.CL | 2023-11-14 |
1152 | Understanding Calibration for Multilingual Question Answering Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we study the calibration properties of several pre-trained multilingual large language models (LLMs) on a variety of question-answering tasks. |
Yahan Yang; Soham Dan; Dan Roth; Insup Lee; | arxiv-cs.CL | 2023-11-14 |
1153 | Insights Into Classifying and Mitigating LLMs’ Hallucinations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our research addresses this critical issue within the HeReFaNMi (Health-Related Fake News Mitigation) project, generously supported by NGI Search, dedicated to combating Health-Related Fake News dissemination on the Internet. This endeavour represents a concerted effort to safeguard the integrity of information dissemination in an age of evolving AI technologies. |
Alessandro Bruno; Pier Luigi Mazzeo; Aladine Chetouani; Marouane Tliba; Mohamed Amine Kerkouri; | arxiv-cs.CL | 2023-11-14 |
1154 | RECALL: A Benchmark for LLMs Robustness Against External Counterfactual Knowledge IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our benchmark consists of two tasks, Question Answering and Text Generation, and for each task, we provide models with a context containing counterfactual information. |
YI LIU et. al. | arxiv-cs.CL | 2023-11-14 |
1155 | A Step Closer to Comprehensive Answers: Constrained Multi-Stage Question Decomposition with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Challenges arise when these models grapple with understanding multi-hop relations in complex questions or lack the necessary knowledge for a comprehensive response. To address this issue, we introduce the Decompose-and-Query framework (D&Q). |
HEJING CAO et. al. | arxiv-cs.CL | 2023-11-13 |
1156 | Evaluating LLMs on Document-Based QA: Exact Answer Selection and Numerical Extraction Using Cogtale Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While some existing work focus on evaluating large language models performance on retrieving and answering questions from documents, assessing the LLMs performance on QA types that require exact answer selection from predefined options and numerical extraction is yet to be fully assessed. In this paper, we specifically focus on this underexplored context and conduct empirical analysis of LLMs (GPT-4 and GPT-3.5) on question types, including single-choice, yes-no, multiple-choice, and number extraction questions from documents in zero-shot setting. |
ZAFARYAB RASOOL et. al. | arxiv-cs.IR | 2023-11-13 |
1157 | A Benchmark to Understand The Role of Knowledge Graphs on Large Language Model’s Accuracy for Question Answering on Enterprise SQL Databases IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study aims to evaluate the accuracy of LLM-powered question answering systems in the context of enterprise questions and SQL databases, while also exploring the role of knowledge graphs in improving accuracy. To achieve this, we introduce a benchmark comprising an enterprise SQL schema in the insurance domain, a range of enterprise queries encompassing reporting to metrics, and a contextual layer incorporating an ontology and mappings that define a knowledge graph. |
Juan Sequeda; Dean Allemang; Bryon Jacob; | arxiv-cs.AI | 2023-11-13 |
1158 | A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Yet, the true challenge lies in the domain of knowledge-intensive VQA tasks, which necessitate not just recognition of visual elements, but also a deep comprehension of the visual information in conjunction with a vast repository of learned knowledge. To uncover such capabilities of MLMs, particularly the newly introduced GPT-4V and Gemini, we provide an in-depth evaluation from three perspectives: 1) Commonsense Knowledge, which assesses how well models can understand visual cues and connect to general knowledge; 2) Fine-grained World Knowledge, which tests the model’s skill in reasoning out specific knowledge from images, showcasing their proficiency across various specialized fields; 3) Comprehensive Knowledge with Decision-making Rationales, which examines model’s capability to provide logical explanations for its inference, facilitating a deeper analysis from the interpretability perspective. |
YUNXIN LI et. al. | arxiv-cs.CL | 2023-11-13 |
1159 | Hallucination Augmented Recitations for Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose Hallucination Augmented Recitations (HAR) for creating counterfactual datasets by utilizing hallucination in LLMs to improve attribution. |
Abdullatif Köksal; Renat Aksitov; Chung-Ching Chang; | arxiv-cs.CL | 2023-11-13 |
1160 | Bring Your Own KG: Self-Supervised Program Synthesis for Zero-Shot KGQA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present BYOKG, a universal question-answering (QA) system that can operate on any knowledge graph (KG), requires no human-annotated training data, and can be ready to use within a day — attributes that are out-of-scope for current KGQA systems. |
Dhruv Agarwal; Rajarshi Das; Sopan Khosla; Rashmi Gangadharaiah; | arxiv-cs.CL | 2023-11-13 |
1161 | Knowledgeable Preference Alignment for LLMs in Domain-specific Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Thus, we introduce Knowledgeable Preference AlignmenT (KnowPAT), which constructs two kinds of preference sets to tackle the two issues. |
YICHI ZHANG et. al. | arxiv-cs.CL | 2023-11-11 |
1162 | Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Large Multimodal Models (LMMs) have shown promise in vision-language tasks but struggle with high-resolution input and detailed scene understanding. Addressing these challenges, we introduce Monkey to enhance LMM capabilities. |
ZHANG LI et. al. | arxiv-cs.CV | 2023-11-11 |
1163 | BizBench: A Quantitative Reasoning Benchmark for Business and Finance Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce BizBench, a benchmark for evaluating models’ ability to reason about realistic financial problems. |
RIK KONCEL-KEDZIORSKI et. al. | arxiv-cs.CL | 2023-11-11 |
1164 | Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: We introduce Lumos, a novel framework for training language agents that employs a unified data format and a modular architecture based on open-source large language models (LLMs). … |
DA YIN et. al. | ArXiv | 2023-11-09 |
1165 | Hallucination-minimized Data-to-answer Framework for Financial Decision-makers Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models (LLMs) have been applied to build several automation and personalized question-answering prototypes so far. However, scaling such prototypes to robust … |
SOHINI ROYCHOWDHURY et. al. | 2023 IEEE International Conference on Big Data (BigData) | 2023-11-09 |
1166 | SEMQA: Semi-Extractive Multi-Source Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a new QA task for answering multi-answer questions by summarizing multiple diverse sources in a semi-extractive fashion. |
TAL SCHUSTER et. al. | arxiv-cs.CL | 2023-11-08 |
1167 | NLQxform: A Language Model-based Question to SPARQL Transformer Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: In recent years, scholarly data has grown dramatically in terms of both scale and complexity. It becomes increasingly challenging to retrieve information from scholarly knowledge … |
Ruijie Wang; Zhiruo Zhang; Luca Rossetto; Florian Ruosch; Abraham Bernstein; | ArXiv | 2023-11-08 |
1168 | Leveraging Structured Information for Explainable Multi-hop Question Answering and Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we investigate constructing and leveraging extracted semantic structures (graphs) for multi-hop question answering, especially the reasoning process. |
Ruosen Li; Xinya Du; | arxiv-cs.CL | 2023-11-07 |
1169 | In-Context Learning for Knowledge Base Question Answering for Unmanned Systems Based on Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we focus on the CCKS2023 Competition of Question Answering with Knowledge Graph Inference for Unmanned Systems. |
Yunlong Chen; Yaming Zhang; Jianfei Yu; Li Yang; Rui Xia; | arxiv-cs.CL | 2023-11-06 |
1170 | Adapting Pre-trained Generative Models for Extractive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce a novel approach that uses the power of pre-trained generative models to address extractive QA tasks by generating indexes corresponding to context tokens or sentences that form part of the answer. |
Prabir Mallick; Tapas Nayak; Indrajit Bhattacharya; | arxiv-cs.CL | 2023-11-06 |
1171 | Divide & Conquer for Entailment-aware Multi-hop Evidence Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we demonstrate that textual entailment relation is another important relevance dimension that should be considered. |
Fan Luo; Mihai Surdeanu; | arxiv-cs.CL | 2023-11-05 |
1172 | Tailoring Self-Rationalizers with Multi-Reward Distillation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we enable small-scale LMs (approx. 200x smaller than GPT-3) to generate rationales that not only improve downstream task performance, but are also more plausible, consistent, and diverse, assessed both by automatic and human evaluation. |
SAHANA RAMNATH et. al. | arxiv-cs.CL | 2023-11-05 |
1173 | Causal Question Answering with Reinforcement Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Hence, in this paper, we aim to answer causal questions with a causality graph, a large-scale dataset of causal relations between noun phrases along with the relations’ provenance data. |
Lukas Blübaum; Stefan Heindorf; | arxiv-cs.AI | 2023-11-05 |
1174 | AI-TA: Towards An Intelligent Question-Answer Teaching Assistant Using Open-Source LLMs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the challenges of scalable and intelligent question-answering (QA), we introduce an innovative solution that leverages open-source Large Language Models (LLMs) from the LLaMA-2 family to ensure data privacy. |
Yann Hicke; Anmol Agarwal; Qianou Ma; Paul Denny; | arxiv-cs.LG | 2023-11-05 |
1175 | Perturbation-based Active Learning for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose a perturbation-based active learning acquisition strategy and demonstrate it is more effective than existing commonly used strategies. |
Fan Luo; Mihai Surdeanu; | arxiv-cs.CL | 2023-11-04 |
1176 | SAC3: Reliable Hallucination Detection in Black-Box Language Models Via Semantic-aware Cross-check Consistency IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To achieve this goal, we re-examine existing detection approaches based on the self-consistency of LMs and uncover two types of hallucinations resulting from 1) question-level and 2) model-level, which cannot be effectively identified through self-consistency check alone. Building upon this discovery, we propose a novel sampling-based method, i.e., semantic-aware cross-check consistency (SAC3) that expands on the principle of self-consistency checking. |
Jiaxin Zhang; Zhuohang Li; Kamalika Das; Bradley A. Malin; Sricharan Kumar; | arxiv-cs.CL | 2023-11-03 |
1177 | Predicting Question-Answering Performance of Large Language Models Through Semantic Consistency IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We address the task of assessing question-answering (QA) semantic consistency of contemporary large language models (LLMs) by manually creating a benchmark dataset with high-quality paraphrases for factual questions, and release the dataset to the community. |
Ella Rabinovich; Samuel Ackerman; Orna Raz; Eitan Farchi; Ateret Anaby-Tavor; | arxiv-cs.CL | 2023-11-02 |
1178 | Long Story Short: A Summarize-then-Search Method for Long Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This capability has been particularly effective in settings such as narrative question answering, where the diversity of tasks is immense, but the available supervision data is small. In this work, we investigate if such language models can extend their zero-shot reasoning abilities to long multimodal narratives in multimedia content such as drama, movies, and animation, where the story plays an essential role. |
Jiwan Chung; Youngjae Yu; | arxiv-cs.CV | 2023-11-02 |
1179 | CLRN: A Reasoning Network for Multi-relation Question Answering Over Cross-lingual Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View |
YIMING TAN et. al. | Expert Syst. Appl. | 2023-11-01 |
1180 | VQA-GEN: A Visual Question Answering Benchmark for Domain Generalization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose VQA-GEN, the first ever multi-modal benchmark dataset for distribution shift generated through a shift induced pipeline. |
Suraj Jyothi Unni; Raha Moraffah; Huan Liu; | arxiv-cs.CV | 2023-11-01 |
1181 | Hierarchical Reasoning Based on Perception Action Cycle for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Safaa Abdullahi Moallim Mohamud; Amin Jalali; Minho Lee; | Expert Syst. Appl. | 2023-11-01 |
1182 | Confidence-based Interactable Neural-symbolic Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Yajie Bao; Tianwei Xing; Xun Chen; | Neurocomputing | 2023-11-01 |
1183 | From Image to Language: A Critical Analysis of Visual Question Answering (VQA) Approaches, Challenges, and Opportunities Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The work aims to guide both beginners and experts by shedding light on the potential avenues of research and expanding the boundaries of the field. |
Md Farhan Ishmam; Md Sakib Hossain Shovon; M. F. Mridha; Nilanjan Dey; | arxiv-cs.CV | 2023-11-01 |
1184 | Chinese Mineral Question and Answering System Based on Knowledge Graph IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View |
CHENGJIAN LIU et. al. | Expert Syst. Appl. | 2023-11-01 |
1185 | VQAPT: A New Visual Question Answering Model for Personality Traits in Social Media Images Related Papers Related Patents Related Grants Related Venues Related Experts View |
Kunal Biswas; P. Shivakumara; U. Pal; Cheng-Lin Liu; Yue Lu; | Pattern Recognit. Lett. | 2023-11-01 |
1186 | DIVKNOWQA: Assessing The Reasoning Ability of LLMs Via Open-Domain Question Answering Over Knowledge Base and Text Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when solely relying on their internal knowledge, especially … |
WENTING ZHAO et. al. | arxiv-cs.CL | 2023-10-31 |
1187 | Generating Context-Aware Natural Answers for Questions in 3D Scenes Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: 3D question answering is a young field in 3D vision-language that is yet to be explored. Previous methods are limited to a pre-defined answer space and cannot generate answers … |
Mohammed Munzer Dwedari; Matthias Nießner; Dave Zhenyu Chen; | ArXiv | 2023-10-30 |
1188 | Split-NER: Named Entity Recognition Via Two Question-Answering-based Classifications Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we address the NER problem by splitting it into two logical sub-tasks: (1) Span Detection which simply extracts entity mention spans irrespective of entity type; (2) Span Classification which classifies the spans into their entity types. |
Jatin Arora; Youngja Park; | arxiv-cs.CL | 2023-10-30 |
1189 | Fusing Temporal Graphs Into Transformers for Time-Sensitive Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Answering time-sensitive questions from long documents requires temporal reasoning over the times in questions and documents. An important open question is whether large language … |
Xin Su; Phillip Howard; Nagib Hakim; Steven Bethard; | Conference on Empirical Methods in Natural Language … | 2023-10-30 |
1190 | Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a multimodal framework that uses language guidance (LG) in the form of rationales, image captions, scene graphs, etc to answer questions more accurately. |
Deepanway Ghosal; Navonil Majumder; Roy Ka-Wei Lee; Rada Mihalcea; Soujanya Poria; | arxiv-cs.CV | 2023-10-30 |
1191 | Knowledge Compass: A Question Answering System Guiding Students with Follow-Up Question Recommendations Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Pedagogical question-answering (QA) systems have been utilized for providing individual support in online learning courses. However, existing systems often neglect the education … |
RUI SHENG et. al. | Adjunct Proceedings of the 36th Annual ACM Symposium on … | 2023-10-29 |
1192 | Multimodal ChatGPT for Medical Applications: An Experimental Study of GPT-4V IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we critically evaluate the capabilities of the state-of-the-art multimodal large language model, i.e., GPT-4 with Vision (GPT-4V), on Visual Question Answering (VQA) task. |
ZHILING YAN et. al. | arxiv-cs.CV | 2023-10-29 |
1193 | DCQA: Document-Level Chart Question Answering Towards Complex Reasoning and Common-Sense Understanding Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a novel task named document-level chart question answering (DCQA). |
ANRAN WU et. al. | arxiv-cs.AI | 2023-10-29 |
1194 | An Empirical Study of Multilingual Scene-Text Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In recent years, the focus on multilingual modeling has intensified, driven by the necessity to enable cross-lingual Text-based Visual Question Answering (TextVQA), which requires … |
Lin Li; Haohan Zhang; Zeqin Fang; | Proceedings of the 2nd Workshop on User-centric Narrative … | 2023-10-29 |
1195 | Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a curriculum learning strategy to enhance the performance of multimodal deep learning models. |
Huseyin Fuat Alsan; Taner Arsan; | arxiv-cs.CV | 2023-10-29 |
1196 | Prompt-Engineering and Transformer-based Question Generation and Evaluation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this research, we finetuned a pretrained distilBERT model on the SQuAD question answering dataset to generate questions. |
Rubaba Amyeen; | arxiv-cs.CL | 2023-10-28 |
1197 | EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce EHRXQA, a novel multi-modal question answering dataset combining structured EHRs and chest X-ray images. |
SEONGSU BAE et. al. | arxiv-cs.CL | 2023-10-28 |
1198 | ViCLEVR: A Visual Reasoning Dataset and Hybrid Multimodal Fusion Model for Visual Question Answering in Vietnamese Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Neural models for VQA have made remarkable progress on large-scale datasets, with a primary focus on resource-rich languages like English. To address this, we introduce the ViCLEVR dataset, a pioneering collection for evaluating various visual reasoning capabilities in Vietnamese while mitigating biases. |
Khiem Vinh Tran; Hao Phu Phan; Kiet Van Nguyen; Ngan Luu Thuy Nguyen; | arxiv-cs.CL | 2023-10-27 |
1199 | Detrimental Contexts in Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we analyze how passages can have a detrimental effect on retrieve-then-read architectures used in question answering. |
Philhoon Oh; James Thorne; | arxiv-cs.CL | 2023-10-27 |
1200 | Knowledge Corpus Error in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This study revisits the conventional formulation of QA and introduces the concept of knowledge corpus error. |
Yejoon Lee; Philhoon Oh; James Thorne; | arxiv-cs.CL | 2023-10-27 |
1201 | 3D-Aware Visual Question Answering About Parts, Poses and Occlusions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce the task of 3D-aware VQA, which focuses on challenging questions that require a compositional reasoning over the 3D structure of visual scenes. |
Xingrui Wang; Wufei Ma; Zhuowan Li; Adam Kortylewski; Alan Yuille; | arxiv-cs.CV | 2023-10-27 |
1202 | Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Evaluating text-to-image models is notoriously difficult. A strong recent approach for assessing text-image faithfulness is based on QG/A (question generation and answering), … |
JAEMIN CHO et. al. | ArXiv | 2023-10-27 |
1203 | VTQAGen: BART-based Generative Model For Visual Text Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Visual Text Question Answering (VTQA) is a challenging task that requires answering questions pertaining to visual content by combining image understanding and language … |
HAORU CHEN et. al. | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1204 | In-Context Ability Transfer for Question Decomposition in Complex QA Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Answering complex questions is a challenging task that requires question decomposition and multistep reasoning for arriving at the solution. While existing supervised and … |
V. Venktesh; Sourangshu Bhattacharya; Avishek Anand; | ArXiv | 2023-10-26 |
1205 | Improving Zero-shot Reader By Reducing Distractions from Irrelevant Documents in Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study examines the feasibility of a zero-shot reader that addresses the challenges of computational cost and the need for labeled data. |
Sukmin Cho; Jeongyeon Seo; Soyeong Jeong; Jong C. Park; | arxiv-cs.CL | 2023-10-26 |
1206 | Finetuning Language Models for Multimodal Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: To achieve multi-modal intelligence, AI must be able to process and respond to inputs from multimodal sources. However, many current question answering models are limited to … |
XIN ZHANG et. al. | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1207 | Intra- and Inter-Modal Curriculum for Multimodal Learning IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multimodal learning has been widely studied and applied due to its improvement over previous unimodal tasks and its effectiveness on emerging multimodal challenges. However, it … |
Yuwei Zhou; Xin Wang; Hong Chen; Xuguang Duan; Wenwu Zhu; | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1208 | Incorporating Probing Signals Into Multimodal Machine Translation Via Visual Question-Answering Pairs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents an in-depth study of multimodal machine translation (MMT), examining the prevailing understanding that MMT systems exhibit decreased sensitivity to visual information when text inputs are complete. |
YUXIN ZUO et. al. | arxiv-cs.CL | 2023-10-26 |
1209 | Depth-Aware Sparse Transformer for Video-Language Learning Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In Video-Language (VL) learning tasks, a massive amount of text annotations are describing geometrical relationships of instances (e.g. 19.6% to 45.0% in MSVD, MSR-VTT, MSVD-QA … |
Haonan Zhang; Lianli Gao; Pengpeng Zeng; A. Hanjalic; H. Shen; | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1210 | VTQA2023: ACM Multimedia 2023 Visual Text Question Answering Challenge Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The ideal form of Visual Question Answering requires understanding, grounding and reasoning in the joint space of vision and language and serves as a proxy for the AI task of … |
Kang Chen; Tianli Zhao; Xiangqian Wu; | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1211 | Multi-Domain Lifelong Visual Question Answering Via Self-Critical Distillation Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Visual Question Answering (VQA) has achieved significant success over the last few years, while most studies focus on training a VQA model on a stationary domain (e.g., a given … |
MINGRUI LAO et. al. | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1212 | Advancing Video Question Answering with A Multi-modal and Multi-layer Question Enhancement Network Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video question answering is an increasingly vital research field, spurred by the rapid proliferation of video content online and the urgent need for intelligent systems that can … |
MENG LIU et. al. | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1213 | Language-Guided Visual Aggregation Network for Video Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video Question Answering (VideoQA) aims to comprehend intricate relationships, actions, and events within video content, as well as the inherent links between objects and scenes, … |
XIAO LIANG et. al. | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1214 | Answer-Based Entity Extraction and Alignment for Visual Text Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: As a variant of visual question answering (VQA), visual text question answering (VTQA) provides a text-image pair for each question. Text utilizes named entities to describe … |
JUN YU et. al. | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1215 | QA-CLIMS: Question-Answer Cross Language Image Matching for Weakly Supervised Semantic Segmentation Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Class Activation Map (CAM) has emerged as a popular tool for weakly supervised semantic segmentation (WSSS), allowing the localization of object regions in an image using only … |
Songhe Deng; Wei Zhuo; Jinheng Xie; Linlin Shen; | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1216 | Exploring Question Decomposition for Zero-Shot VQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, we show that naive application of model-written decompositions can hurt performance. We introduce a model-driven selective decomposition approach for second-guessing predictions and correcting errors, and validate its effectiveness on eight VQA tasks across three domains, showing consistent improvements in accuracy, including improvements of >20% on medical VQA datasets and boosting the zero-shot performance of BLIP-2 above chance on a VQA reformulation of the challenging Winoground task. |
Zaid Khan; Vijay Kumar BG; Samuel Schulter; Manmohan Chandraker; Yun Fu; | arxiv-cs.CV | 2023-10-25 |
1217 | Binary State Recognition By Robots Using Visual Question Answering of Pre-Trained Vision-Language Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Until now, these states have been recognized by programmatically describing the state of a point cloud or raw image, by annotating and learning images, by using special sensors, etc. In contrast to these methods, we apply Visual Question Answering (VQA) from a Pre-Trained Vision-Language Model (PTVLM) trained on a large-scale dataset, to such binary state recognition. |
Kento Kawaharazuka; Yoshiki Obinata; Naoaki Kanazawa; Kei Okada; Masayuki Inaba; | arxiv-cs.RO | 2023-10-25 |
1218 | Hierarchical Synergy-Enhanced Multimodal Relational Network for Video Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video question answering (VideoQA) is challenging as it requires reasoning about natural language and multimodal interactive relations. Most existing methods apply attention … |
Min Peng; Xiaohu Shao; Yu Shi; Xiangdong Zhou; | ACM Transactions on Multimedia Computing, Communications … | 2023-10-25 |
1219 | Transformer-Based Question Answering Model for The Biomedical Domain Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Motivation: Question Answering (QA) is a highly focused topic in the field of Natural Language Processing (NLP). Recent progress in neural network models and the availability of … |
Ahcene Haddouche; Ikram Rabia; Aicha Aid; | 2023 5th International Conference on Pattern Analysis and … | 2023-10-25 |
1220 | Enhancing Document Information Analysis with Multi-Task Pre-training: A Robust Approach for Information Extraction in Visually-Rich Documents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces a deep learning model tailored for document information analysis, emphasizing document classification, entity relation extraction, and document visual question answering. |
Tofik Ali; Partha Pratim Roy; | arxiv-cs.CV | 2023-10-25 |
1221 | Quality > Quantity: Synthetic Corpora from Foundation Models for Closed-Domain Extractive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we study extractive question answering within closed domains and introduce the concept of targeted pre-training. |
Saptarshi Sengupta; Connor Heaton; Shreya Ghosh; Preslav Nakov; Prasenjit Mitra; | arxiv-cs.CL | 2023-10-25 |
1222 | EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce EHRXQA, a novel multi-modal question answering dataset for structured EHRs and chest X-ray images. |
SEONGSU BAE et. al. | nips | 2023-10-24 |
1223 | EgoSchema: A Diagnostic Benchmark for Very Long-form Video Language Understanding IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce EgoSchema, a very long-form video question-answering dataset, and benchmark to evaluate long video understanding capabilities of modern vision and language systems. |
Karttikeya Mangalam; Raiymbek Akshulakov; Jitendra Malik; | nips | 2023-10-24 |
1224 | Emergent Communication in Interactive Sketch Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Vision-based emergent communication (EC) aims to learn to communicate through sketches and demystify the evolution of human communication. |
Zixing Lei; Yiming Zhang; Yuxin Xiong; Siheng Chen; | arxiv-cs.AI | 2023-10-24 |
1225 | Benchmarking Large Language Models on CMExam – A Comprehensive Chinese Medical Exam Dataset IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, evaluating LLMs in the medical field is challenging due to the lack of standardized and comprehensive datasets. To address this gap, we introduce CMExam, sourced from the Chinese National Medical Licensing Examination. |
JUNLING LIU et. al. | nips | 2023-10-24 |
1226 | RealTime QA: What’s The Answer Right Now? IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce RealTime QA, a dynamic question answering (QA) platform that announces questions and evaluates systems on a regular basis (weekly in this version). |
JUNGO KASAI et. al. | nips | 2023-10-24 |
1227 | 3D-Aware Visual Question Answering About Parts, Poses and Occlusions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce the task of 3D-aware VQA, which focuses on challenging questions that require a compositional reasoning over the 3D structure of visual scenes. |
Xingrui Wang; Zhuowan Li; Wufei Ma; Adam Kortylewski; Alan Yuille; | nips | 2023-10-24 |
1228 | BeaverTails: A Human-Preference Dataset for LLM Harmlessness Alignment Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the BeaverTails dataset, aimed at fostering research on safety alignment in large language models (LLMs). |
JIAMING JI et. al. | nips | 2023-10-24 |
1229 | Foundation Model Is Efficient Multimodal Multitask Model Selector Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Although recent advanced approaches employ lightweight metrics to measure models’ transferability, they often depend heavily on the prior knowledge of a single task, making them inapplicable in a multi-modal multi-task scenario. To tackle this issue, we propose an efficient multitask model selector (EMMS), which employs large-scale foundation models to transform diverse label formats such as categories, texts, and bounding boxes of different downstream tasks into a unified noisy label embedding. |
FANQING MENG et. al. | nips | 2023-10-24 |
1230 | ECG-QA: A Comprehensive Question Answering Dataset Combined With Electrocardiogram IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This leaves the vast potential of combining electrocardiogram (ECG) data with these systems largely untapped. To address this gap, we present ECG-QA, the first QA dataset specifically designed for ECG analysis. |
Jungwoo Oh; Seongsu Bae; Gyubok Lee; Joon-myoung Kwon; Edward Choi; | nips | 2023-10-24 |
1231 | LoRA: A Logical Reasoning Augmented Dataset for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: VQA tasks and large vision-and-language models aim to tackle reasoning problems, but the accuracy, consistency and fabrication of the generated answers is hard to evaluate in the absence of a VQA dataset that can offer formal, comprehensive and systematic complex logical reasoning questions. To address this gap, we present LoRA, a novel Logical Reasoning Augmented VQA dataset that requires formal and complex description logic reasoning based on a food-and-kitchen knowledge base. |
Jingying Gao; Qi Wu; Alan Blair; Maurice Pagnucco; | nips | 2023-10-24 |
1232 | A Theoretically Grounded Question Answering Data Set for Evaluating Machine Common Sense Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Achieving machine common sense has been a longstanding problem within Artificial Intelligence. Thus far, benchmark data sets that are grounded in a theory of common sense and can … |
Henrique Santos; Ke Shen; Alice M. Mulvehill; M. Kejriwal; Deborah L. McGuinness; | Data Intelligence | 2023-10-24 |
1233 | Evaluating Open-QA Evaluation IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce a new task, QA Evaluation (QA-Eval) and the corresponding dataset EVOUNA, designed to assess the accuracy of AI-generated answers in relation to standard answers within Open-QA. |
CUNXIANG WANG et. al. | nips | 2023-10-24 |
1234 | ToolQA: A Dataset for LLM Question Answering with External Tools IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, current evaluation methods do not distinguish between questions that can be answered using LLMs’ internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs’ ability to use external tools for question answering. |
Yuchen Zhuang; Yue Yu; Kuan Wang; Haotian Sun; Chao Zhang; | nips | 2023-10-24 |
1235 | Exploring Question Decomposition for Zero-Shot VQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, we show that naive application of model-written decompositions can hurt performance. We introduce a model-driven _selective decomposition_ approach for second-guessing predictions and correcting errors, and validate its effectiveness on eight VQA tasks across three domains, showing consistent improvements in accuracy, including improvements of >20% on medical VQA datasets and boosting the zero-shot performance of BLIP-2 significantly above chance (+18%) on the challenging Winoground task. |
Zaid Khan; Vijay Kumar B G; Samuel Schulter; Manmohan Chandraker; Yun Fu; | nips | 2023-10-24 |
1236 | Large Language Models Are Temporal and Causal Reasoners for Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we develop LLaMA-VQA by applying Flipped-VQA to LLaMA, and it outperforms both LLMs-based and non-LLMs-based models on five challenging VideoQA benchmarks. |
Dohwan Ko; Ji Soo Lee; Wooyoung Kang; Byungseok Roh; Hyunwoo J. Kim; | arxiv-cs.CV | 2023-10-24 |
1237 | Towards Perceiving Small Visual Details in Zero-shot Visual Question Answering with Multimodal LLMs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we investigate whether MLLMs can perceive small details as well as large details in images. |
Jiarui Zhang; Mahyar Khayatkhoei; Prateek Chhikara; Filip Ilievski; | arxiv-cs.CV | 2023-10-24 |
1238 | TableQAKit: A Comprehensive and Practical Toolkit for Table-based Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces TableQAKit, the first comprehensive toolkit designed specifically for TableQA. |
FANGYU LEI et. al. | arxiv-cs.CL | 2023-10-23 |
1239 | Strong and Efficient Baselines for Open Domain Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we study the State-of-the-Art (SotA) Dense Passage Retrieval (DPR) retriever and Fusion-in-Decoder (FiD) reader pipeline, and show that it significantly underperforms when applied to ODConvQA tasks due to various limitations. |
Andrei C. Coman; Gianni Barlacchi; Adrià de Gispert; | arxiv-cs.CL | 2023-10-23 |
1240 | Generative Pre-trained Transformer for Vietnamese Community-based COVID-19 Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a novel approach by conducting a comparative analysis of different Transformers vs SOTA models in the community-based COVID-19 question answering dataset. |
Tam Minh Vo; Khiem Vinh Tran; | arxiv-cs.CL | 2023-10-23 |
1241 | An In-Context Schema Understanding Method for Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recently, Large Language Models (LLMs) have shown strong capabilities in language understanding and can be used to solve this task. In doing so, a major challenge for LLMs is to overcome the immensity and heterogeneity of knowledge base schemas. Existing methods bypass this challenge by initially employing LLMs to generate drafts of logic forms without schema-specific details. Then, an extra module is used to inject schema information into these drafts. In contrast, in this paper, we propose a simple In-Context Schema Understanding (ICSU) method that enables LLMs to directly understand schemas by leveraging in-context learning. |
YANTAO LIU et. al. | arxiv-cs.CL | 2023-10-22 |
1242 | Retrieval-Augmented Chain-of-Thought in Semi-structured Domains Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study explores leveraging the semi-structured nature of legal and financial data to efficiently retrieve relevant context, enabling the use of LLMs for domain-specialized QA. |
Vaibhav Mavi; Abulhair Saparov; Chen Zhao; | arxiv-cs.CL | 2023-10-22 |
1243 | Comparative Analysis of Open Source and Commercial Embedding Models for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this industry track presentation, we will provide a comprehensive tour of the best performing embedding models for question answering, as determined by the Massive Text Embedding Benchmark1. |
Georgios Balikas; | cikm | 2023-10-21 |
1244 | CORD: A Three-Stage Coarse-to-Fine Framework for Relation Detection in Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a simple and efficient three-stage framework to exploit the coarse-to-fine paradigm. |
Yanzeng Li; Sen Hu; Wenjuan Han; Lei Zou; | cikm | 2023-10-21 |
1245 | LittleMu: Deploying An Online Virtual Teaching Assistant Via Heterogeneous Sources Integration and Chain of Teach Prompts Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present a virtual MOOC teaching assistant, LittleMu with minimum labeled training data, to provide question answering and chit-chat services. |
SHANGQING TU et. al. | cikm | 2023-10-21 |
1246 | MoqaGPT : Zero-Shot Multi-modal Open-domain Question Answering with Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To enable LLMs to tackle the task in a zero-shot manner, we introduce MoqaGPT, a straightforward and flexible framework. |
Le Zhang; Yihong Wu; Fengran Mo; Jian-Yun Nie; Aishwarya Agrawal; | arxiv-cs.CL | 2023-10-20 |
1247 | Robust Training for Conversational Question Answering Models with Reinforced Reformulation Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Models for conversational question answering (ConvQA) over knowledge graphs (KGs) are usually trained and tested on benchmarks of gold QA pairs. |
Magdalena Kaiser; Rishiraj Saha Roy; Gerhard Weikum; | arxiv-cs.CL | 2023-10-20 |
1248 | Self-prompted Chain-of-Thought on Large Language Models for Open-domain Multi-hop Reasoning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose Self-prompted Chain-of-Thought (SP-CoT), an automated framework to mass-produce high quality CoTs of LLMs, by LLMs and for LLMs. |
Jinyuan Wang; Junlong Li; Hai Zhao; | arxiv-cs.CL | 2023-10-20 |
1249 | SALMONN: Towards Generic Hearing Abilities for Large Language Models IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose SALMONN, a speech audio language music open neural network, built by integrating a pre-trained text-based large language model (LLM) with speech and audio encoders into a single multimodal model. |
CHANGLI TANG et. al. | arxiv-cs.SD | 2023-10-20 |
1250 | Test-Time Self-Adaptive Small Language Models for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we show and investigate the capabilities of smaller self-adaptive LMs, only with unlabeled test data. |
Soyeong Jeong; Jinheon Baek; Sukmin Cho; Sung Ju Hwang; Jong C. Park; | arxiv-cs.CL | 2023-10-20 |
1251 | ReEval: Automatic Hallucination Evaluation for Retrieval-Augmented Large Language Models Via Transferable Adversarial Attacks IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Specifically, this paper presents ReEval, an LLM-based framework using prompt chaining to perturb the original evidence for generating new test cases for evaluating the LLMs’ reliability in using new evidence for answering. |
Xiaodong Yu; Hao Cheng; Xiaodong Liu; Dan Roth; Jianfeng Gao; | arxiv-cs.CL | 2023-10-19 |
1252 | Reliable Academic Conference Question Answering: A Study Based on Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, these methods fail to work due to the lack of the latest conference knowledge. To address this challenge, we develop the ConferenceQA dataset, consisting of seven diverse academic conferences. |
ZHIWEI HUANG et. al. | arxiv-cs.CL | 2023-10-19 |
1253 | CLIFT: Analysing Natural Distribution Shift on Question Answering Models in Clinical Domain Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces a new testbed CLIFT (Clinical Shift) for the clinical domain Question-answering task. |
Ankit Pal; | arxiv-cs.CL | 2023-10-19 |
1254 | RSAdapter: Adapting Multimodal Models for Remote Sensing Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: These approaches demand significant computational resources and time, and a considerable number of trainable parameters are introduced. To address these challenges, we introduce a novel method known as RSAdapter, which prioritizes runtime and parameter efficiency. |
Yuduo Wang; Pedram Ghamisi; | arxiv-cs.CV | 2023-10-19 |
1255 | PSYCHIC: A Neuro-Symbolic Framework for Knowledge Graph Question-Answering Grounding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We answer the KGQA over DBLP (DBLP-QUAD) task by proposing a neuro-symbolic (NS) framework based on PSYCHIC, an extractive QA model capable of identifying the query and entities related to a KG question. |
Hanna Abi Akl; | arxiv-cs.AI | 2023-10-19 |
1256 | Time-Aware Representation Learning for Time-Sensitive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, language models have difficulty understanding the relationships between time specifiers, such as ‘after’ and ‘before’, and numbers, since existing QA datasets do not include sufficient time expressions. To address this issue, we propose a Time-Context aware Question Answering (TCQA) framework. |
Jungbin Son; Alice Oh; | arxiv-cs.CL | 2023-10-19 |
1257 | Understanding Retrieval Augmentation for Long-Form Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a study of retrieval-augmented language models (LMs) on long-form question answering. |
Hung-Ting Chen; Fangyuan Xu; Shane A. Arora; Eunsol Choi; | arxiv-cs.CL | 2023-10-18 |
1258 | A Summary of The ALQAC 2023 Competition Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper presents an overview of the third edition of the Automated Legal Question Answering Competition (ALQAC 2023). The primary objective of ALQAC is to address challenges … |
CHAU NGUYEN et. al. | 2023 15th International Conference on Knowledge and Systems … | 2023-10-18 |
1259 | Open Information Extraction: A Review of Baseline Techniques, Approaches, and Applications Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: It briefly discusses the main approaches and the pros and cons of each method. |
Serafina Kamp; Morteza Fayazi; Zineb Benameur-El; Shuyan Yu; Ronald Dreslinski; | arxiv-cs.IR | 2023-10-17 |
1260 | Systematic Assessment of Factual Knowledge in Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes a framework to systematically assess the factual knowledge of LLMs by leveraging knowledge graphs (KGs). |
Linhao Luo; Thuy-Trang Vu; Dinh Phung; Gholamreza Haffari; | arxiv-cs.CL | 2023-10-17 |
1261 | QADYNAMICS: Training Dynamics-Driven Synthetic QA Diagnostic for Zero-Shot Commonsense Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, current QA synthesis protocols may introduce noise from the CSKBs and generate ungrammatical questions and false negative options, which impede the model’s ability to generalize. To address these issues, we propose QADYNAMICS, a training dynamics-driven framework for QA diagnostics and refinement. |
HAOCHEN SHI et. al. | arxiv-cs.CL | 2023-10-17 |
1262 | Will The Prince Get True Love’s Kiss? On The Model Sensitivity to Gender Perturbation Over Fairytale Texts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recent studies show that traditional fairytales are rife with harmful gender biases. To help mitigate these gender biases in fairytales, this work aims to assess learned biases of language models by evaluating their robustness against gender perturbations. |
Christina Chance; Da Yin; Dakuo Wang; Kai-Wei Chang; | arxiv-cs.CL | 2023-10-16 |
1263 | A Search for Prompts: Generating Structured Answers from Contracts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In many legal processes being able to action on the concrete implication of a legal question can be valuable to automating human review or signalling certain conditions (e.g., alerts around automatic renewal). To support such tasks, we present a form of legal question answering that seeks to return one (or more) fixed answers for a question about a contract clause. |
ADAM ROEGIEST et. al. | arxiv-cs.CV | 2023-10-16 |
1264 | UNK-VQA: A Dataset and A Probe Into The Abstention Ability of Multi-modal Large Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper aims to bridge the research gap by contributing a comprehensive dataset, called UNK-VQA. |
Yangyang Guo; Fangkai Jiao; Zhiqi Shen; Liqiang Nie; Mohan Kankanhalli; | arxiv-cs.CV | 2023-10-16 |
1265 | Emerging Challenges in Personalized Medicine: Assessing Demographic Effects on Biomedical Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We find that irrelevant demographic information changes up to 15% of the answers of a KG-grounded system and up to 23% of the answers of a text-based system, including changes that affect accuracy. |
Sagi Shaier; Kevin Bennett; Lawrence Hunter; Katharina von der Wense; | arxiv-cs.CL | 2023-10-16 |
1266 | CarExpert: Leveraging Large Language Models for In-Car Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose CarExpert, an in-car retrieval-augmented conversational question-answering system leveraging LLMs for different tasks. |
MD RASHAD AL HASAN RONY et. al. | arxiv-cs.CL | 2023-10-14 |
1267 | Progressive Evidence Refinement for Open-domain Multimodal Retrieval Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Secondly, a gap exists between the feature extraction of evidence and the question, which hinders the model from effectively extracting critical features from the evidence based on the given question. We propose a two-stage framework for evidence retrieval and question-answering to alleviate these issues. |
SHUWEN YANG et. al. | arxiv-cs.AI | 2023-10-14 |
1268 | MiniGPT-v2: Large Language Model As A Unified Interface for Vision-language Multi-task Learning IF:6 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Towards this objective, we introduce MiniGPT-v2, a model that can be treated as a unified interface for better handling various vision-language tasks. |
JUN CHEN et. al. | arxiv-cs.CV | 2023-10-13 |
1269 | ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, three core challenges remain: inefficient knowledge retrieval, mistakes of retrieval adversely impacting semantic parsing, and the complexity of previous KBQA methods. To tackle these challenges, we introduce ChatKBQA, a novel and simple generate-then-retrieve KBQA framework, which proposes first generating the logical form with fine-tuned LLMs, then retrieving and replacing entities and relations with an unsupervised retrieval method, to improve both generation and retrieval more directly. |
HAORAN LUO et. al. | arxiv-cs.CL | 2023-10-13 |
1270 | Enhancing BERT-Based Visual Question Answering Through Keyword-Driven Sentence Selection Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The goal is to identify the document elements that answer a specific question posed in natural language. This paper describes PoliTo’s approach to addressing this task; in particular, our best solution explores a text-only approach, leveraging an ad hoc sampling strategy. |
Davide Napolitano; Lorenzo Vaiani; Luca Cagliero; | arxiv-cs.CL | 2023-10-13 |
1271 | Mitigating Bias for Question Answering Models By Tracking Bias Influence Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose BMBI, an approach to mitigate the bias of multiple-choice QA models. |
MINGYU DEREK MA et. al. | arxiv-cs.CL | 2023-10-12 |
1272 | Training Generative Question-Answering on Synthetic Data Obtained from An Instruct-tuned Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents a simple and cost-effective method for synthesizing data to train question-answering systems. |
Kosuke Takahashi; Takahiro Omi; Kosuke Arima; Tatsuya Ishigaki; | arxiv-cs.CL | 2023-10-12 |
1273 | Open-Set Knowledge-Based Visual Question Answering with Inference Paths Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we confront the challenge of \emph{explainable open-set} KB-VQA, where the system is required to answer questions with entities in the wild and retain an explainable reasoning path. |
Jingru Gan; Xinzhe Han; Shuhui Wang; Qingming Huang; | arxiv-cs.LG | 2023-10-12 |
1274 | Low-Resource Clickbait Spoiling for Indonesian Via Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our contributions include the construction of a manually labeled clickbait spoiling corpus in Indonesian and an evaluation of using cross-lingual zero-shot question answering-based models to tackle clickbait spoiling for a low-resource language like Indonesian. |
Ni Putu Intan Maharani; Ayu Purwarianti; Alham Fikri Aji; | arxiv-cs.CL | 2023-10-12 |
1275 | Understanding How to Inform Blind and Low-Vision Users About Data Privacy Through Privacy Question Answering Assistants Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We conducted an in-depth qualitative study with 21 US BLV participants to understand their data privacy risk perception and mitigation, as well as their information behaviors related to data privacy. |
YUANYUAN FENG et. al. | arxiv-cs.HC | 2023-10-12 |
1276 | Question Answering for Electronic Health Records: A Scoping Review of Datasets and Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We searched for articles from January 1st, 2005 to September 30th, 2023 in four digital sources including Google Scholar, ACL Anthology, ACM Digital Library, and PubMed to collect relevant publications on EHR QA. |
Jayetri Bardhan; Kirk Roberts; Daisy Zhe Wang; | arxiv-cs.LG | 2023-10-12 |
1277 | QASiNa: Religious Domain Question Answering Using Sirah Nabawiyah Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose the Question Answering Sirah Nabawiyah (QASiNa) dataset, a novel dataset compiled from Sirah Nabawiyah literatures in Indonesian language. |
Muhammad Razif Rizqullah; Ayu Purwarianti; Alham Fikri Aji; | arxiv-cs.CL | 2023-10-12 |
1278 | Framework for Question-Answering in Sanskrit Through Automated Construction of Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we target the problem of building knowledge graphs for particular types of relationships from saṃskṛta texts. |
Hrishikesh Terdalkar; Arnab Bhattacharya; | arxiv-cs.CL | 2023-10-11 |
1279 | QACHECK: A Demonstration System for Question-Guided Multi-Hop Fact-Checking Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, existing fact-checking systems often lack transparency in their decision-making, making it challenging for users to comprehend their reasoning process. To address this, we propose the Question-guided Multi-hop Fact-Checking (QACHECK) system, which guides the model’s reasoning process by asking a series of questions critical for verifying a claim. |
Liangming Pan; Xinyuan Lu; Min-Yen Kan; Preslav Nakov; | arxiv-cs.CL | 2023-10-11 |
1280 | MemSum-DQA: Adapting An Efficient Long Document Extractive Summarizer for Document Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce MemSum-DQA, an efficient system for document question answering (DQA) that leverages MemSum, a long document extractive summarizer. |
Nianlong Gu; Yingqiang Gao; Richard H. R. Hahnloser; | arxiv-cs.CL | 2023-10-10 |
1281 | Question Classification for Intelligent Question Answering: A Comprehensive Survey Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In the era of GeoAI, Geospatial Intelligent Question Answering (GeoIQA) represents the ultimate pursuit for everyone. Even generative AI systems like ChatGPT-4 struggle to handle … |
Hao Sun; Shu Wang; Yunqiang Zhu; Wen Yuan; Zhiqiang Zou; | ISPRS Int. J. Geo Inf. | 2023-10-10 |
1282 | Jaeger: A Concatenation-Based Multi-Transformer VQA Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Although there has been encouraging progress in document-based question answering due to the utilization of large language and open-world prior models, several challenges persist, including prolonged response times, extended inference durations, and imprecision in matching. In order to overcome these challenges, we propose Jaegar, a concatenation-based multi-transformer VQA model. |
Jieting Long; Zewei Shi; Penghao Jiang; Yidong Gan; | arxiv-cs.CL | 2023-10-10 |
1283 | Answer Candidate Type Selection: Text-to-Text Language Model for Closed Book Question Answering Meets Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the capacity of the models is limited and the quality decreases for questions with less popular entities. In this paper, we present a novel approach which works on top of the pre-trained Text-to-Text QA system to address this issue. |
MIKHAIL SALNIKOV et. al. | arxiv-cs.CL | 2023-10-10 |
1284 | Towards Mitigating Hallucination in Large Language Models Via Self-Reflection IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our investigation centers on the identification and comprehension of common problematic answers, with a specific emphasis on hallucination. To tackle this challenge, we present an interactive self-reflection methodology that incorporates knowledge acquisition and answer generation. |
ZIWEI JI et. al. | arxiv-cs.CL | 2023-10-09 |
1285 | FireAct: Toward Language Agent Fine-tuning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigate and argue for the overlooked direction of fine-tuning LMs to obtain language agents. |
BAIAN CHEN et. al. | arxiv-cs.CL | 2023-10-09 |
1286 | Causal Reasoning Through Two Layers of Cognition for Improving Generalization in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Besides, diverse interpretations of the input lead to various modes of answer generation, highlighting the role of causal reasoning between interpreting and answering steps in VQA. Through this lens, we propose Cognitive pathways VQA (CopVQA) improving the multimodal predictions by emphasizing causal reasoning factors. |
Trang Nguyen; Naoaki Okazaki; | arxiv-cs.AI | 2023-10-09 |
1287 | Retrieval-Generation Synergy Augmented Large Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: One is to retrieve from an external knowledge base, and the other is to utilize large language models to generate documents. |
Zhangyin Feng; Xiaocheng Feng; Dezhi Zhao; Maojin Yang; Bing Qin; | arxiv-cs.CL | 2023-10-08 |
1288 | Multi-Semantic Alignment Co-Reasoning Network for Video Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video question answering challenges models on understanding textual questions with varying complexity and searching for clues from visual content with different hierarchical … |
Min Peng; Liangchen Liu; Zhenghao Li; Yu Shi; Xiangdong Zhou; | 2023 IEEE International Conference on Image Processing … | 2023-10-08 |
1289 | Analyzing Zero-Shot Abilities of Vision-Language Models on Video Understanding Tasks Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Therefore, the pertinent question to ask is: Can image-text models be adapted to video tasks and is there any benefit to using these models over pretraining directly on videos? In this work, we focus on this question by proposing a detailed study on the generalization abilities of image-text models when evaluated on video understanding tasks in a zero-shot setting. |
Avinash Madasu; Anahita Bhiwandiwalla; Vasudev Lal; | arxiv-cs.CV | 2023-10-07 |
1290 | Towards Faithful Knowledge Graph Explanation Through Deep Alignment in Commonsense Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We identify confounding effects and LM-KG misalignment as key factors causing spurious explanations. To address this, we introduce the LM-KG Fidelity metric to assess KG representation reliability and propose the LM-KG Distribution-aware Alignment (LKDA) algorithm to improve explanation faithfulness. |
Weihe Zhai; Arkaitz Zubiaga; | arxiv-cs.CL | 2023-10-07 |
1291 | Policy-Gradient Training of Language Models for Ranking Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This reliance on heuristics stems from the fact that the contrastive loss itself is heuristic and does not directly optimize the downstream metrics of decision quality at the end of the processing pipeline. To address this issue, we introduce Neural PG-RANK, a novel training algorithm that learns to rank by instantiating a LLM as a Plackett-Luce ranking policy. |
Ge Gao; Jonathan D. Chang; Claire Cardie; Kianté Brantley; Thorsten Joachims; | arxiv-cs.CL | 2023-10-06 |
1292 | Analysis of The Reasoning with Redundant Information Provided Ability of Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The study designed a modified version of the grade school math 8K (GSM-8K) dataset which has several variants focusing on different attributes of redundant information. |
Wenbei Xie; | arxiv-cs.CL | 2023-10-06 |
1293 | Retrieval-augmented Generation to Improve Math Question-Answering: Trade-offs Between Groundedness and Human Preference IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we designed prompts that retrieve and use content from a high-quality open-source math textbook to generate responses to real student questions. |
ZACHARY LEVONIAN et. al. | arxiv-cs.CL | 2023-10-04 |
1294 | Integrating UMLS Knowledge Into Large Language Models for Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In our research, we develop an augmented LLM framework based on the Unified Medical Language System (UMLS), aiming to better serve the healthcare community. |
RUI YANG et. al. | arxiv-cs.CL | 2023-10-04 |
1295 | Multimodal Question Answering for Unified Information Extraction Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Due to the diversity of tasks and settings, most current MIE models are task-specific and data-intensive, which limits their generalization to real-world scenarios with diverse task requirements and limited labeled data. To address these issues, we propose a novel multimodal question answering (MQA) framework to unify three MIE tasks by reformulating them into a unified span extraction and multi-choice QA pipeline. |
Yuxuan Sun; Kai Zhang; Yu Su; | arxiv-cs.CL | 2023-10-04 |
1296 | SelfGraphVQA: A Self-Supervised Graph Neural Network for Scene-based Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we demonstrate that despite the effectiveness of scene graphs in VQA tasks, current methods that utilize idealized annotated scene graphs struggle to generalize when using predicted scene graphs extracted from images. To address this issue, we introduce the SelfGraphVQA framework. |
Bruno Souza; Marius Aasan; Helio Pedrini; Adín Ramírez Rivera; | arxiv-cs.CV | 2023-10-03 |
1297 | Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce a unique object-level multimodal LLM architecture that merges vectorized numeric modalities with a pre-trained LLM to improve context understanding in driving situations. |
LONG CHEN et. al. | arxiv-cs.RO | 2023-10-03 |
1298 | On The Cognition of Visual Question Answering Models and Human Intelligence: A Comparative Study Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To inspect the association of VQA models to human cognition, we designed a survey to record human thinking process and analyzed VQA models by comparing the outputs and attention maps with those of humans. |
Liben Chen; Long Chen; Tian Ellison-Chen; Zhuoyuan Xu; | arxiv-cs.CV | 2023-10-03 |
1299 | Systematic Literature Review on Ontology-based Indonesian Question Answering System Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question-Answering (QA) systems at the intersection of natural language processing, information retrieval, and knowledge representation aim to provide efficient responses to … |
Fadhila Tangguh Admojo; Adidah Lajis; H. Nasir; | Knowl. Eng. Data Sci. | 2023-10-03 |
1300 | An Empirical Study of ChatGPT-3.5 on Question Answering and Code Maintenance Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Ever since the launch of ChatGPT in 2022, a rising concern is whether ChatGPT will replace programmers and kill jobs. Motivated by this widespread concern, we conducted an empirical study to systematically compare ChatGPT against programmers in question answering and software maintenance. |
MD MAHIR ASEF KABIR et. al. | arxiv-cs.SE | 2023-10-03 |
1301 | Generating Explanations in Medical Question-Answering By Expectation Maximization Inference Over Evidence Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To do so, we propose a novel approach for generating natural language explanations for answers predicted by medical QA systems. |
Wei Sun; Mingxiao Li; Damien Sileo; Jesse Davis; Marie-Francine Moens; | arxiv-cs.CL | 2023-10-02 |
1302 | External Commonsense Knowledge As A Modality for Social Intelligence Question-Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Artificial Social Intelligence (ASI) refers to the perception and understanding of social interactions. It involves the usage of contextual information about social cues to … |
Sanika Natu; Shounak Sural; Sulagna Sarkar; | 2023 IEEE/CVF International Conference on Computer Vision … | 2023-10-02 |
1303 | Human Mobility Question Answering (Vision Paper) Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Mining human mobility data is crucial for various applications such as smart city planning, pandemic management, and personalised recommendation systems. In this paper, we aim to tackle this gap and introduce a novel task, that is, human mobility question answering (MobQA). |
Hao Xue; Flora D. Salim; | arxiv-cs.CL | 2023-10-02 |
1304 | Investigating Better Context Representations for Generative Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Sumam Francis; Marie-Francine Moens; | Information Retrieval Journal | 2023-10-02 |
1305 | Multi-Modal Correlated Network with Emotional Reasoning Knowledge for Social Intelligence Question-Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The capacity for social reasoning is essential to the development of social intelligence in humans, which we easily acquire through study and experience. The acquisition of such … |
Baijun Xie; Chung Hyuk Park; | 2023 IEEE/CVF International Conference on Computer Vision … | 2023-10-02 |
1306 | MMTF: Multi-Modal Temporal Fusion for Commonsense Video Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video question answering is a challenging task that requires understanding the video and question in the same context. This becomes even harder when the questions involve … |
Mobeen Ahmad; Geonwoo Park; Dongchan Park; Sanguk Park; | 2023 IEEE/CVF International Conference on Computer Vision … | 2023-10-02 |
1307 | Cross-Modal Dense Passage Retrieval for Outside Knowledge Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In many language processing tasks including most notably Large Language Modeling (LLM), retrieval augmentation improves the performance of the models by adding information during … |
Benjamin Z. Reichman; Larry Heck; | 2023 IEEE/CVF International Conference on Computer Vision … | 2023-10-02 |
1308 | ReAcTable: Enhancing ReAct for Table Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Nonetheless, a conspicuous gap exists in the research landscape, where there is limited exploration of how innovative foundational research, which integrates incremental reasoning with external tools in the context of LLMs, as exemplified by the ReAct paradigm, could potentially bring advantages to the TQA task. In this paper, we aim to fill this gap, by introducing ReAcTable (ReAct for Table Question Answering tasks), a framework inspired by the ReAct paradigm that is carefully enhanced to address the challenges uniquely appearing in TQA tasks such as interpreting complex data semantics, dealing with errors generated by inconsistent data and generating intricate data transformations. |
YUNJIA ZHANG et. al. | arxiv-cs.DB | 2023-10-01 |
1309 | Understanding AI Cognition: A Neural Module for Inference Inspired By Human Memory Mechanisms Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: How humans and machines make sense of current inputs for relation reasoning and question-answering while putting the perceived information into context of our past memories, has … |
Xiangyu Zeng; Jie Lin; Piao Hu; Ruizheng Huang; Zhicheng Zhang; | ArXiv | 2023-10-01 |
1310 | Multi-modal Spatial Relational Attention Networks for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
HAIBO YAO et. al. | Image Vis. Comput. | 2023-10-01 |
1311 | Question Answering Models for Human-machine Interaction in The Manufacturing Industry Related Papers Related Patents Related Grants Related Venues Related Experts View |
Eneko Ruiz; M. Inés Torres; A. del Pozo; | Comput. Ind. | 2023-10-01 |
1312 | Event-Oriented Visual Question Answering: The E-VQA Dataset and Benchmark Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Visual question answering (VQA) is a challenging task that reasons over questions on images with knowledge. A prerequisite for VQA is the availability of annotated datasets, while … |
Zhenguo Yang; Jiale Xiang; Jiuxiang You; Qing Li; Wenyin Liu; | IEEE Transactions on Knowledge and Data Engineering | 2023-10-01 |
1313 | Multi-aspect Attentive Text Representations for Simple Question Answering Over Knowledge Base Related Papers Related Patents Related Grants Related Venues Related Experts View |
Zhixiang Zeng; Yuefeng Li; Jianming Yong; Xiaohui Tao; Vicky Liu; | Nat. Lang. Process. J. | 2023-10-01 |
1314 | Robust Visual Question Answering Via Semantic Cross Modal Augmentation Related Papers Related Patents Related Grants Related Venues Related Experts View |
Akib Mashrur; Wei Luo; Nayyar A. Zaidi; Antonio Robles-Kelly; | Comput. Vis. Image Underst. | 2023-10-01 |
1315 | A Framework for Inference Inspired By Human Memory Mechanisms Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Inspired by the human brain’s memory system and cognitive architectures, we propose a PMI framework that consists of perception, memory and inference components. |
Xiangyu Zeng; Jie Lin; Piao Hu; Ruizheng Huang; Zhicheng Zhang; | arxiv-cs.LG | 2023-10-01 |
1316 | Learning Neighbor-enhanced Region Representations and Question-guided Visual Representations for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Ling Gao; Hongda Zhang; Nan Sheng; Lida Shi; Hao Xu; | Expert Syst. Appl. | 2023-10-01 |
1317 | Testing The Limits of Unified Sequence to Sequence LLM Pretraining on Diverse Table Data Tasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To that end, we attempt at creating a shared modeling approach in the pretraining stage with encoder-decoder style LLMs that can cater to diverse tasks. We evaluate our approach that continually pretrains and finetunes different model families of T5 with data from tables and surrounding context, on these downstream tasks at different model scales. |
Soumajyoti Sarkar; Leonard Lausen; | arxiv-cs.CL | 2023-10-01 |
1318 | Question-Answering Model for Schizophrenia Symptoms and Their Impact on Daily Life Using Mental Health Forums Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The purpose of this paper is to present a new methodology for building a medical dataset and obtain a QA model for analysis of symptoms and impact on daily life for a specific disease domain. |
Christian Internò; Eloisa Ambrosini; | arxiv-cs.LG | 2023-09-30 |
1319 | Question Answering Over Knowledge Graphs Using BERT Based Relation Mapping Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: A knowledge graph (KG) is a structured form of knowledge describing real‐world entities, properties and relationships as a graph. Question answering over knowledge graphs (KGQA) … |
S. C. M.; JayaramanPrem Prakash; Pramod Kumar Singh; | Expert Systems | 2023-09-29 |
1320 | Promoting Generalized Cross-lingual Question Answering in Few-resource Scenarios Via Self-knowledge Distillation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Beyond performance improvements, we offer valuable insights through comprehensive analyses and an ablation study, further substantiating the benefits and constraints of our approach. |
Casimiro Pio Carrino; Carlos Escolano; José A. R. Fonollosa; | arxiv-cs.CL | 2023-09-29 |
1321 | Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper proposes Fine-grained Late-interaction Multi-modal Retrieval (FLMR) which significantly improves knowledge retrieval in RA-VQA. |
Weizhe Lin; Jinghong Chen; Jingbiao Mei; Alexandru Coca; Bill Byrne; | arxiv-cs.CL | 2023-09-29 |
1322 | VDC: Versatile Data Cleanser Based on Visual-Linguistic Inconsistency By Multimodal Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Existing detectors only focus on detecting poisoned samples or noisy labels, which are often prone to weak generalization when dealing with dirty samples from other domains. In this paper, we find a commonality of various dirty samples is visual-linguistic inconsistency between images and associated labels. To capture the semantic inconsistency between modalities, we propose versatile data cleanser (VDC) leveraging the surpassing capabilities of multimodal large language models (MLLM) in cross-modal alignment and reasoning. It consists of three consecutive modules: the visual question generation module to generate insightful questions about the image; the visual question answering module to acquire the semantics of the visual content by answering the questions with MLLM; followed by the visual answer evaluation module to evaluate the inconsistency. Extensive experiments demonstrate its superior performance and generalization to various categories and types of dirty samples. |
Zihao Zhu; Mingda Zhang; Shaokui Wei; Bingzhe Wu; Baoyuan Wu; | arxiv-cs.CV | 2023-09-28 |
1323 | Using Weak Supervision and Data Augmentation in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we explore the roles weak supervision and data augmentation play in training deep neural network QA models. |
Chumki Basu; Himanshu Garg; Allen McIntosh; Sezai Sablak; John R. Wullert II; | arxiv-cs.CL | 2023-09-28 |
1324 | Toloka Visual Question Answering Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present Toloka Visual Question Answering, a new crowdsourced dataset allowing comparing performance of machine learning systems against human level of expertise in the grounding visual question answering task. |
Dmitry Ustalov; Nikita Pavlichenko; Sergey Koshelev; Daniil Likhobaba; Alisa Smirnova; | arxiv-cs.CV | 2023-09-28 |
1325 | Spider4SPARQL: A Complex Benchmark for Evaluating Knowledge Graph Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce Spider4SPARQL – a new SPARQL benchmark dataset featuring 9,693 previously existing manually generated NL questions and 4,721 unique, novel, and complex SPARQL queries of varying complexity. |
Catherine Kosten; Philippe Cudré-Mauroux; Kurt Stockinger; | arxiv-cs.CL | 2023-09-28 |
1326 | MKRAG: Medical Knowledge Retrieval Augmented Generation for Medical Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the problem, our work employs a transparent process of retrieval augmented generation (RAG), aiming to improve LLM responses without the need for fine-tuning or retraining. Specifically, we propose a comprehensive retrieval strategy to extract medical facts from an external knowledge base, and then inject them into the LLM’s query prompt. |
YUCHENG SHI et. al. | arxiv-cs.CL | 2023-09-27 |
1327 | Knowledge Proxy Intervention for Deconfounded Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To tackle the challenge that the confounder in VideoQA is unobserved and non-enumerable in general, we propose a model-agnostic framework called Knowledge Proxy Intervention (KPI), which introduces an extra knowledge proxy variable in the causal graph to cut the backdoor path and remove the confounder. |
Jiangtong Li; Li Niu; Liqing Zhang; | iccv | 2023-09-27 |
1328 | Toward Multi-Granularity Decision-Making: Explicit Visual Reasoning with Hierarchical Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To fill the gap, this paper makes progress from two distinct perspectives: (1) It presents a Hierarchical Concept Graph (HCG) that discriminates and associates multi-granularity concepts with a multi-layered hierarchical structure, aligning visual observations with knowledge across different levels to alleviate data biases. |
Yifeng Zhang; Shi Chen; Qi Zhao; | iccv | 2023-09-27 |
1329 | PromptCap: Prompt-Guided Image Captioning for VQA with GPT-3 IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Generic image captions often miss visual details essential for the LM to answer visual questions correctly. To address this challenge, we propose PromptCap (Prompt-guided image Captioning), a captioning model designed to serve as a better connector between images and black-box LMs. |
YUSHI HU et. al. | iccv | 2023-09-27 |
1330 | VQA-GNN: Reasoning with Multimodal Knowledge Via Graph Neural Networks for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To perform more expressive reasoning, we propose VQA-GNN, a new VQA model that performs bidirectional fusion between unstructured and structured multimodal knowledge to obtain unified knowledge representations. |
Yanan Wang; Michihiro Yasunaga; Hongyu Ren; Shinya Wada; Jure Leskovec; | iccv | 2023-09-27 |
1331 | Open-vocabulary Video Question Answering: A New Benchmark for Evaluating The Generalizability of Video Question Answering Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We hence propose a new benchmark, Open-vocabulary Video Question Answering (OVQA), to measure the generalizability of VideoQA models by considering rare and unseen answers. |
DOHWAN KO et. al. | iccv | 2023-09-27 |
1332 | Variational Causal Inference Network for Explanatory Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Moreover, they neglect the complex relationships among question words, visual regions, and explanation tokens. To address these issues, we propose a Variational Causal Inference Network (VCIN) that establishes the causal correlation between predicted answers and explanations, and captures cross-modal relationships to generate rational explanations. |
Dizhan Xue; Shengsheng Qian; Changsheng Xu; | iccv | 2023-09-27 |
1333 | Decouple Before Interact: Multi-Modal Prompt Learning for Continual Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: On the other hand, neglecting the interactions between modalities will lead to poor performance. To tackle these challenging issues, we propose a comprehensive formulation for CL-VQA from the perspective of multi-modal vision-language fusion. |
ZI QIAN et. al. | iccv | 2023-09-27 |
1334 | Question Answering Using Deep Learning in Low Resource Indian Language Marathi Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper we investigate different transformer models for creating a reading comprehension-based Marathi question answering system. |
Dhiraj Amin; Sharvari Govilkar; Sagar Kulkarni; | arxiv-cs.CL | 2023-09-27 |
1335 | TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Based on this approach, we introduce TIFA v1.0, a benchmark consisting of 4K diverse text inputs and 25K questions across 12 categories (object, counting, etc.). |
YUSHI HU et. al. | iccv | 2023-09-27 |
1336 | Zero-Shot and Few-Shot Video Question Answering with Multi-Modal Prompts Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, adapting pretrained models on limited data presents challenges such as overfitting, catastrophic forgetting, and the cross-modal gap between vision and language. We introduce a parameter-efficient method to address these challenges, combining multimodal prompt learning and a transformer-based mapping network, while keeping the pretrained models frozen. |
Deniz Engin; Yannis Avrithis; | arxiv-cs.CV | 2023-09-27 |
1337 | VQA Therapy: Exploring Answer Differences By Visually Grounding Answers Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Given that different people can provide different answers to a visual question, we aim to better understand why with answer groundings. |
Chongyan Chen; Samreen Anjum; Danna Gurari; | iccv | 2023-09-27 |
1338 | Discovering Spatio-Temporal Rationales for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To tackle the challenge, we highlight the importance of identifying question-critical temporal moments and spatial objects from the vast amount of video content. Towards this, we propose a Spatio-Temporal Rationalizer (STR), a differentiable selection module that adaptively collects question-critical moments and objects using cross-modal interaction. |
Yicong Li; Junbin Xiao; Chun Feng; Xiang Wang; Tat-Seng Chua; | iccv | 2023-09-27 |
1339 | Encyclopedic VQA: Visual Questions About Detailed Properties of Fine-Grained Categories IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose Encyclopedic-VQA, a large scale visual question answering (VQA) dataset featuring visual questions about detailed properties of fine-grained categories and instances. |
THOMAS MENSINK et. al. | iccv | 2023-09-27 |
1340 | Simple Baselines for Interactive Video Retrieval with Questions and Answers Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Recently, there has been renewed interest in interactive systems to enhance retrieval, but existing approaches are complex and deliver limited gains in performance. In this work, we revisit this topic and propose several simple yet effective baselines for interactive video retrieval via question-answering. |
Kaiqu Liang; Samuel Albanie; | iccv | 2023-09-27 |
1341 | Fine-tuning and Aligning Question Answering Models for Complex Information Extraction Tasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work we propose an approach that uses and integrates extractive QA models for improved feature extraction of German business documents such as insurance reports or medical leaflets into a document analysis solution. |
Matthias Engelbach; Dennis Klau; Felix Scheerer; Jens Drawehn; Maximilien Kintz; | arxiv-cs.CL | 2023-09-26 |
1342 | Knowledgeable In-Context Tuning: Exploring and Exploiting Factual Knowledge for In-Context Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we demonstrate that factual knowledge is imperative for the performance of ICL in three core facets: the inherent knowledge learned in LLMs, the factual knowledge derived from the selected in-context examples, and the knowledge biases in LLMs for output generation. |
Jianing Wang; Chengyu Wang; Chuanqi Tan; Jun Huang; Ming Gao; | arxiv-cs.CL | 2023-09-26 |
1343 | Question-Answering Approach to Evaluating Legal Summaries Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel legal summarization evaluation framework that utilizes GPT-4 to generate a set of question-answer pairs that cover main points and information in the reference summary. |
Huihui Xu; Kevin Ashley; | arxiv-cs.CL | 2023-09-26 |
1344 | Legal Question-Answering in The Indian Context: Efficacy, Challenges, and Potential of Modern AI Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Legal QA platforms bear the promise to metamorphose the manner in which legal experts engage with jurisprudential documents. In this exposition, we embark on a comparative exploration of contemporary AI frameworks, gauging their adeptness in catering to the unique demands of the Indian legal milieu, with a keen emphasis on Indian Legal Question Answering (AILQA). |
Shubham Kumar Nigam; Shubham Kumar Mishra; Ayush Kumar Mishra; Noel Shallum; Arnab Bhattacharya; | arxiv-cs.CL | 2023-09-26 |
1345 | A Question-Answering Approach to Evaluating Legal Summaries Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Traditional evaluation metrics like ROUGE compare lexical overlap between the reference and generated summaries without taking argumentative structure into account, which is … |
Huihui Xu; Kevin D. Ashley; | International Conference on Legal Knowledge and Information … | 2023-09-26 |
1346 | Analyzing The Efficacy of An LLM-Only Approach for Image-based Document Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Recent document question answering models consist of two key components: the vision encoder, which captures layout and visual elements in images, and a Large Language Model (LLM) … |
Nidhi Hegde; S. Paul; Gagan Madan; Gaurav Aggarwal; | ArXiv | 2023-09-25 |
1347 | Does The most Sinfully Decadent Cake Ever Taste Good? Answering Yes/No Questions from Figurative Contexts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we investigate the robustness of Question Answering (QA) models on figurative text. |
Geetanjali Rakshit; Jeffrey Flanigan; | arxiv-cs.CL | 2023-09-24 |
1348 | Does The “Most Sinfully Decadent Cake Ever” Taste Good? Answering Yes/No Questions from Figurative Contexts Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Figurative language is commonplace in natural language, and while making communication memorable and creative, can be difficult to understand. In this work, we investigate the … |
Geetanjali Rakshit; Jeffrey Flanigan; | ArXiv | 2023-09-24 |
1349 | Unified Transformer with Cross-Modal Mixture Experts for Remote-Sensing Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Remote-sensing visual question answering (RSVQA) aims to provide accurate answers to remote sensing images and their associated questions by leveraging both visual and textual … |
GANG LIU et. al. | Remote. Sens. | 2023-09-24 |
1350 | Diversifying Question Generation Over Knowledge Base Via External Natural Questions Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Previous methods on knowledge base question generation (KBQG) primarily focus on enhancing the quality of a single generated question. Recognizing the remarkable paraphrasing … |
Shasha Guo; Jing Zhang; Xirui Ke; Cuiping Li; Hong Chen; | ArXiv | 2023-09-23 |
1351 | Furthest Reasoning with Plan Assessment: Stable Reasoning Path with Retrieval-Augmented Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: These inaccuracies, accumulated by the iterative interaction between IR and LLM, lead to a disaster in effectiveness at the end. To overcome above barriers, in this paper, we propose a novel pipeline for MHQA called Furthest-Reasoning-with-Plan-Assessment (FuRePA), including an improved framework (Furthest Reasoning) and an attached module (Plan Assessor). |
Yin Zhu; Zhiling Luo; Gong Cheng; | arxiv-cs.CL | 2023-09-22 |
1352 | HRoT: Hybrid Prompt Strategy and Retrieval of Thought for Table-Text Hybrid Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a new prompting strategy called Hybrid prompt strategy and Retrieval of Thought for TextTableQA. |
TONGXU LUO et. al. | arxiv-cs.CL | 2023-09-22 |
1353 | SQUARE: Automatic Question Answering Evaluation Using Multiple Positive and Negative References Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation), using multiple reference answers (combining multiple correct and incorrect references) for sentence-form QA. |
Matteo Gabburo; Siddhant Garg; Rik Koncel Kedziorski; Alessandro Moschitti; | arxiv-cs.CL | 2023-09-21 |
1354 | Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task that requires rich world knowledge. |
YIKE WU et. al. | arxiv-cs.CL | 2023-09-20 |
1355 | Knowledge Graph Question Answering for Materials Science (KGQA4MAT): Developing Natural Language Interface for Metal-Organic Frameworks Knowledge Graph (MOF-KG) Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: We present a comprehensive benchmark dataset for Knowledge Graph Question Answering in Materials Science (KGQA4MAT), with a focus on metal-organic frameworks (MOFs). A knowledge … |
YUAN AN et. al. | ArXiv | 2023-09-20 |
1356 | Knowledge Graph Question Answering for Materials Science (KGQA4MAT): Developing Natural Language Interface for Metal-Organic Frameworks Knowledge Graph (MOF-KG) Using LLM Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a comprehensive benchmark dataset for Knowledge Graph Question Answering in Materials Science (KGQA4MAT), with a focus on metal-organic frameworks (MOFs). |
YUAN AN et. al. | arxiv-cs.AI | 2023-09-20 |
1357 | Retrieving Supporting Evidence for Generative Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we report two simple experiments to automatically validate generated answers against a corpus. |
Siqing Huo; Negar Arabzadeh; Charles L. A. Clarke; | arxiv-cs.IR | 2023-09-20 |
1358 | Visual Question Answering in The Medical Domain Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present domain-specific pre-training strategies, including a novel contrastive learning pretraining method, to mitigate the problem of small datasets for the Med-VQA task. |
Louisa Canepa; Sonit Singh; Arcot Sowmya; | arxiv-cs.CV | 2023-09-20 |
1359 | Enhancing Open-Domain Table Question Answering Via Syntax- and Structure-aware Dense Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Existing studies of open-domain table QA either directly adopt text retrieval methods or consider the table structure only in the encoding layer for table retrieval, which may cause syntactical and structural information loss during table scoring. To address this issue, we propose a syntax- and structure-aware retrieval method for the open-domain table QA task. |
Nengzheng Jin; Dongfang Li; Junying Chen; Joanna Siebert; Qingcai Chen; | arxiv-cs.CL | 2023-09-19 |
1360 | Benchmarks for Pirá 2.0, A Reading Comprehension Dataset About The Ocean, The Brazilian Coast, and Climate Change Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we define six benchmarks over the Pir\’a dataset, covering closed generative question answering, machine reading comprehension, information retrieval, open question answering, answer triggering, and multiple choice question answering. |
PAULO PIROZELLI et. al. | arxiv-cs.CL | 2023-09-19 |
1361 | Localize, Retrieve and Fuse: A Generalized Framework for Free-Form Question Answering Over Tables Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, this paper proposes a generalized three-stage approach: Table-to- Graph conversion and cell localizing, external knowledge retrieval, and the fusion of table and text (called TAG-QA), to address the challenge of inferring long free-form answers in generative TableQA. |
WENTING ZHAO et. al. | arxiv-cs.CL | 2023-09-19 |
1362 | QASnowball: An Iterative Bootstrapping Framework for High-Quality Question-Answering Data Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, obtaining sufficient data to build an effective and stable QA system still remains an open problem. For this problem, we introduce an iterative bootstrapping framework for QA data augmentation (named QASnowball), which can iteratively generate large-scale high-quality QA data based on a seed set of supervised examples. |
KUNLUN ZHU et. al. | arxiv-cs.CL | 2023-09-19 |
1363 | Syntax Tree Constrained Graph Network for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To fill the gap, we suggested a novel Syntax Tree Constrained Graph Network (STCGN) for VQA based on entity message passing and syntax tree. |
Xiangrui Su; Qi Zhang; Chongyang Shi; Jiachang Liu; Liang Hu; | arxiv-cs.CV | 2023-09-17 |
1364 | NOWJ1@ALQAC 2023: Enhancing Legal Task Performance with Classic Statistical Models and Pre-trained Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper describes the NOWJ1 Team’s approach for the Automated Legal Question Answering Competition (ALQAC) 2023, which focuses on enhancing legal task performance by integrating classical statistical models and Pre-trained Language Models (PLMs). |
TAN-MINH NGUYEN et. al. | arxiv-cs.CL | 2023-09-16 |
1365 | Multimodal Multi-Hop Question Answering Through A Conversation Between Tools and Efficiently Finetuned Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We employ a tool-interacting divide-and-conquer strategy enabling large language models (LLMs) to answer complex multimodal multi-hop questions. |
Hossein Rajabzadeh; Suyuchen Wang; Hyock Ju Kwon; Bang Liu; | arxiv-cs.CL | 2023-09-16 |
1366 | PDFTriage: Question Answering Over Long, Structured Documents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: When a system has to query the document for context, this incongruity is brought to the fore, and seemingly trivial questions can trip up the QA system. To bridge this fundamental gap in handling structured documents, we propose an approach called PDFTriage that enables models to retrieve the context based on either structure or content. |
JON SAAD-FALCON et. al. | arxiv-cs.CL | 2023-09-16 |
1367 | SilverRetriever: Advancing Neural Passage Retrieval for Polish Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Modern open-domain question answering systems often rely on accurate and efficient retrieval components to find passages containing the facts necessary to answer the question. … |
Piotr Rybak; M. Ogrodniczuk; | ArXiv | 2023-09-15 |
1368 | Silver Retriever: Advancing Neural Passage Retrieval for Polish Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we present Silver Retriever, a neural retriever for Polish trained on a diverse collection of manually or weakly labeled datasets. |
Piotr Rybak; Maciej Ogrodniczuk; | arxiv-cs.CL | 2023-09-15 |
1369 | D3: Data Diversity Design for Systematic Generalization in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present new evidence in the problem of Visual Question Answering (VQA) that reveals that the diversity of simple tasks (i.e. tasks formed by a few subtasks and concepts) plays a key role in achieving systematic generalization. |
AMIR RAHIMI et. al. | arxiv-cs.AI | 2023-09-15 |
1370 | Investigating Answerability of LLMs for Long-Form Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a question-generation method from abstractive summaries and show that generating follow-up questions from summaries of long documents can create a challenging setting for LLMs to reason and infer from long contexts. |
Meghana Moorthy Bhat; Rui Meng; Ye Liu; Yingbo Zhou; Semih Yavuz; | arxiv-cs.CL | 2023-09-15 |
1371 | CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In recent years, large language models (LLMs) have shown remarkable capabilities at scale, particularly at generating text conditioned on a prompt. |
Rachneet Sachdeva; Martin Tutek; Iryna Gurevych; | arxiv-cs.CL | 2023-09-14 |
1372 | Enhancing Yes/no Question Answering with Weak Supervision Via Extractive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Dimitris Dimitriadis; Grigorios Tsoumakas; | Applied Intelligence | 2023-09-14 |
1373 | Feature Engineering in Learning-to-Rank for Community Question Answering Task Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: These data are leveraged in automated CQA ranking systems where similar questions (and answers) are presented in response to the query of the user. In this work, we empirically investigate a few aspects of this domain. |
Nafis Sajid; Md Rashidul Hasan; Muhammad Ibrahim; | arxiv-cs.LG | 2023-09-14 |
1374 | Multimodal Bi-direction Guided Attention Networks for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Linqin Cai; Nuoying Xu; Hang Tian; Kejia Chen; Haodu Fan; | Neural Processing Letters | 2023-09-13 |
1375 | Evaluating The Ebb and Flow: An In-depth Analysis of Question-Answering Trends Across Diverse Platforms Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Community Question Answering (CQA) platforms steadily gain popularity as they provide users with fast responses to their queries. The swiftness of these responses is contingent on … |
Rima Hazra; Agnik Saha; Somnath Banerjee; Animesh Mukherjee; | arxiv-cs.SI | 2023-09-12 |
1376 | Answering Subjective Induction Questions on Products By Summarizing Multi-sources Multi-viewpoints Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: That is quite different from the traditional QA task, in which the answer to a factoid question is unique and can be found from a single data source. To address this new task, we propose a three-steps method. |
Yufeng Zhang; Meng-xiang Wang; Jianxing Yu; | arxiv-cs.CL | 2023-09-11 |
1377 | NeCo@ALQAC 2023: Legal Domain Knowledge Acquisition for Low-Resource Languages Through Data Enrichment Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents NeCo Team’s solutions to the Vietnamese text processing tasks provided in the Automated Legal Question Answering Competition 2023 (ALQAC 2023), focusing on legal domain knowledge acquisition for low-resource languages through data enrichment. |
HAI-LONG NGUYEN et. al. | arxiv-cs.CL | 2023-09-11 |
1378 | Two Is Better Than One: Answering Complex Questions By Multiple Knowledge Sources with Generalized Links Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we formulate the novel Multi-KB-QA task that leverages the full and partial links among multiple KBs to derive correct answers, a benchmark with diversified link and query types is also constructed to efficiently evaluate Multi-KB-QA performance. |
MINHAO ZHANG et. al. | arxiv-cs.CL | 2023-09-10 |
1379 | AGent: A Novel Pipeline for Automatically Creating Unanswerable Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, manually annotating unanswerable questions is labor-intensive. To address this, we propose AGent, a novel pipeline that automatically creates new unanswerable questions by re-matching a question with a context that lacks the necessary information for a correct answer. |
Son Quoc Tran; Gia-Huy Do; Phong Nguyen-Thuan Do; Matt Kretchmar; Xinya Du; | arxiv-cs.CL | 2023-09-10 |
1380 | MMHQA-ICL: Multimodal In-context Learning for Hybrid Question Answering Over Text, Tables and Images Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recently, with the rise of large language models (LLM), in-context learning (ICL) has become the most popular way to solve QA problems. We propose MMHQA-ICL framework for addressing this problems, which includes stronger heterogeneous data retriever and an image caption module. |
WEIHAO LIU et. al. | arxiv-cs.CL | 2023-09-09 |
1381 | Can NLP Models ‘Identify’, ‘Distinguish’, and ‘Justify’ Questions That Don’t Have A Definitive Answer? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Can SOTA models accurately identify such questions and provide a reasonable response? To investigate the above question, we introduce QnotA, a dataset consisting of five different categories of questions that don’t have definitive answers. |
AYUSHI AGARWAL et. al. | arxiv-cs.CL | 2023-09-08 |
1382 | A Study on Influential Features for Predicting Best Answers in Community Question-Answering Forums Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The knowledge provided by user communities in question-answering (QA) forums is a highly valuable source of information for satisfying user information needs. However, finding the … |
Valeria Zoratto; Daniela Godoy; Gabriela N. Aranda; | Inf. | 2023-09-07 |
1383 | Interpretable Visual Question Answering Via Reasoning Supervision Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, such models are likely to disregard crucial visual cues and often rely on multimodal shortcuts and inherent biases of the language modality to predict the correct answer, a phenomenon commonly referred to as lack of visual grounding. In this work, we alleviate this shortcoming through a novel architecture for visual question answering that leverages common sense reasoning as a supervisory signal. |
Maria Parelli; Dimitrios Mallis; Markos Diomataris; Vassilis Pitsikalis; | arxiv-cs.CV | 2023-09-07 |
1384 | Introducing Forecast Utterance for Conversational Data Science Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: A significant challenge for the agent in this endeavor is to accurately comprehend the user’s prediction goals and, consequently, formulate precise ML tasks. In this paper, we take a pioneering step towards this ambitious goal by introducing a new concept called Forecast Utterance and then focus on the automatic and accurate interpretation of users’ prediction goals from these utterances. |
Md Mahadi Hassan; Alex Knipper; Shubhra Kanti Karmaker; | arxiv-cs.CL | 2023-09-07 |
1385 | ATM: Action Temporality Modeling for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce Action Temporality Modeling (ATM) for temporality reasoning via three-fold uniqueness: (1) rethinking the optical flow and realizing that optical flow is effective in capturing the long horizon temporality reasoning; (2) training the visual-text embedding by contrastive learning in an action-centric manner, leading to better action representations in both vision and text modalities; and (3) preventing the model from answering the question given the shuffled video in the fine-tuning stage, to avoid spurious correlation between appearance and motion and hence ensure faithful temporality reasoning. |
Junwen Chen; Jie Zhu; Yu Kong; | arxiv-cs.CV | 2023-09-05 |
1386 | Understanding Video Scenes Through Text: Insights from Text-based Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The NewsVideoQA dataset contains question-answer pairs related to the text in news videos, while M4-ViteVQA comprises question-answer pairs from diverse categories like vlogging, traveling, and shopping. We provide an analysis of the formulation of these datasets on various levels, exploring the degree of visual understanding and multi-frame comprehension required for answering the questions. |
Soumya Jahagirdar; Minesh Mathew; Dimosthenis Karatzas; C. V. Jawahar; | arxiv-cs.CV | 2023-09-04 |
1387 | Evaluating A Radius-based Pipeline for Question Answering Over Cultural (CIDOC-CRM Based) Knowledge Graphs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: CIDOC-CRM is an event-based international standard for cultural documentation that has been widely used for offering semantic interoperability in the Cultural Heritage (CH) … |
Nikos Gounakis; M. Mountantonakis; Yannis Tzitzikas; | Proceedings of the 34th ACM Conference on Hypertext and … | 2023-09-04 |
1388 | Enabling The Informed Patient Paradigm with Secure and Personalized Medical Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Quality patient care is a complex and multifaceted problem requiring the integration of data from multiple sources. We propose Medicient, a knowledge-graph-based question … |
Joel Oduro-Afriyie; Hasan M. Jamil; | Proceedings of the 14th ACM International Conference on … | 2023-09-03 |
1389 | Can I Trust Your Answer? Visually Grounded Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Experiments with different backbones demonstrate that this grounding mechanism improves both grounding and QA. With these efforts, we aim to push towards trustworthy VLMs in VQA systems. |
Junbin Xiao; Angela Yao; Yicong Li; Tat Seng Chua; | arxiv-cs.CV | 2023-09-03 |
1390 | MedChatZH: A Better Medical Adviser Learns from Better Instructions Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Generative large language models (LLMs) have shown great success in various applications, including question-answering (QA) and dialogue systems. However, in specialized domains … |
Yang Tan; Mingchen Li; Zijie Huang; Huiqun Yu; Guisheng Fan; | ArXiv | 2023-09-03 |
1391 | A Template-based Approach for Question Answering Over Knowledge Bases Related Papers Related Patents Related Grants Related Venues Related Experts View |
Anna Formica; Ida Mele; F. Taglino; | Knowledge and Information Systems | 2023-09-02 |
1392 | Generative Data Augmentation Using LLMs Improves Distributional Robustness in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We take a two-step generation approach, generating both contexts and QA pairs to augment existing datasets. |
Arijit Ghosh Chowdhury; Aman Chadha; | arxiv-cs.CL | 2023-09-02 |
1393 | Cross-modality Multiple Relations Learning for Knowledge-based Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Knowledge-based visual question answering not only needs to answer the questions based on images but also incorporates external knowledge to study reasoning in the joint space of … |
YAN WANG et. al. | ACM Transactions on Multimedia Computing, Communications … | 2023-09-02 |
1394 | LeanContext: Cost-Efficient Domain-Specific Question Answering Using LLMs IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question-answering (QA) is a significant application of Large Language Models (LLMs), shaping chatbot capabilities across healthcare, education, and customer service. However, … |
Md. Adnan Arefeen; Biplob K. Debnath; S. Chakradhar; | ArXiv | 2023-09-02 |
1395 | Generative Retrieval for Conversational Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View |
Yongqing Li; Nan Yang; Liang Wang; Furu Wei; Wenjie Li; | Inf. Process. Manag. | 2023-09-01 |
1396 | CLVIN: Complete Language-vision Interaction Network for Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View |
Chongqing Chen; Dezhi Han; Xiang Shen; | Knowl. Based Syst. | 2023-09-01 |
1397 | Multimodal Representative Answer Extraction in Community Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Ming Li; Yating Ma; Y. Li; Yixue Bai; | J. King Saud Univ. Comput. Inf. Sci. | 2023-09-01 |
1398 | A Contrastive Framework for Enhancing Knowledge Graph Question Answering: Alleviating Exposure Bias Related Papers Related Patents Related Grants Related Venues Related Experts View |
HUIFANG DU et. al. | Knowl. Based Syst. | 2023-09-01 |
1399 | Prompt-WNQA: A Prompt-based Complex Question Answering for Wireless Network Over Knowledge Graph Related Papers Related Patents Related Grants Related Venues Related Experts View |
Pei Liu; Bing Qian; Qi Sun; Longgang Zhao; | Comput. Networks | 2023-09-01 |
1400 | Empirical Study on Using Adapters for Debiased Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Jae-Won Cho; Dawit Mureja Argaw; Youngtaek Oh; Dong-Jin Kim; In-So Kweon; | Comput. Vis. Image Underst. | 2023-09-01 |
1401 | Query Path Generation Via Bidirectional Reasoning for Multihop Question Answering From Knowledge Bases Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multihop question answering from knowledge bases (KBQA) is a hot research topic in natural language processing. Recently, the graph neural network-based (GNN-based) methods have … |
GENG ZHANG et. al. | IEEE Transactions on Cognitive and Developmental Systems | 2023-09-01 |
1402 | Context-aware Multi-level Question Embedding Fusion for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
SHENGDONG LI et. al. | Inf. Fusion | 2023-09-01 |
1403 | DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper we describe the details of the training as well and the results on the different benchmarks. |
Shaltiel Shmidman; Avi Shmidman; Moshe Koppel; | arxiv-cs.CL | 2023-08-31 |
1404 | Separate and Locate: Rethink The Text in Text-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The 1-D position embedding can only represent the left-right sequence relationship between words in a sentence, but not the complex spatial position relationship. To tackle these problems, we propose a novel method named Separate and Locate (SaL) that explores text contextual cues and designs spatial position embedding to construct spatial relations between OCR texts. |
Chengyang Fang; Jiangnan Li; Liang Li; Can Ma; Dayong Hu; | arxiv-cs.CV | 2023-08-30 |
1405 | Hyperbolic Code Retrieval: A Novel Approach for Efficient Code Search Using Hyperbolic Space Embeddings Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, these methods often lead to computational and memory inefficiencies, posing a significant challenge to their real-world applicability. To tackle this challenge, we propose a novel approach, the Hyperbolic Code QA Matching (HyCoQA). |
XUNZHU TANG et. al. | arxiv-cs.SE | 2023-08-29 |
1406 | KGConv, A Conversational Corpus Grounded in Wikidata Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present KGConv, a large, conversational corpus of 71k conversations where each question-answer pair is grounded in a Wikidata fact. |
Quentin Brabant; Gwenole Lecorve; Lina M. Rojas-Barahona; Claire Gardent; | arxiv-cs.CL | 2023-08-29 |
1407 | Empowering Cross-lingual Abilities of Instruction-tuned Large Language Models By Translation-following Demonstrations IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This disparity is demanded in further fine-tuning and affecting the cross-lingual abilities of LLMs. In this paper, we propose to empower Instructiontuned LLMs (It-LLMs) in languages other than English by building semantic alignment between them. |
Leonardo Ranaldi; Giulia Pucci; Andre Freitas; | arxiv-cs.CL | 2023-08-27 |
1408 | Knowledge-Based Version Incompatibility Detection for Deep Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Therefore, these techniques cannot detect version issues due to undocumented version constraints or issues involving hardware drivers or OS. To address this challenge, we propose to leverage the abundant discussions of DL version issues from Stack Overflow to facilitate version incompatibility detection. |
Zhongkai Zhao; Bonan Kou; Mohamed Yilmaz Ibrahim; Muhao Chen; Tianyi Zhang; | arxiv-cs.SE | 2023-08-25 |
1409 | Knowledge-Driven CoT: Exploring Faithful Reasoning in LLMs for Knowledge-intensive Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Even so, suffering from hallucinations and the inability to access external knowledge, LLMs often come with incorrect or unfaithful intermediate reasoning steps, especially in the context of answering knowledge-intensive tasks such as KBQA. To alleviate this issue, we propose a framework called Knowledge-Driven Chain-of-Thought (KD-CoT) to verify and modify reasoning traces in CoT via interaction with external knowledge, and thus overcome the hallucinations and error propagation. |
KEHENG WANG et. al. | arxiv-cs.CL | 2023-08-25 |
1410 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond IF:6 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. |
JINZE BAI et. al. | arxiv-cs.CV | 2023-08-24 |
1411 | TG-VQA: Ternary Game of Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we innovatively resort to game theory, which can simulate complicated relationships among multiple players with specific interaction strategies, e.g., video, question, and answer as ternary players, to achieve fine-grained alignment for VideoQA task. |
HAO LI et. al. | ijcai | 2023-08-23 |
1412 | SQuAD-SRC: A Dataset for Multi-Accent Spoken Reading Comprehension Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we construct a large-scale multi-accent human spoken dataset SQuAD-SRC, in order to study the problem of multi-accent spoken reading comprehension. |
Yixuan Tang; Anthony K.H. Tung; | ijcai | 2023-08-23 |
1413 | Answer Mining from A Pool of Images: Towards Retrieval-Based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Towards solving the RETVQA task, we propose a unified Multi Image BART (MI-BART) that takes a question and retrieved images using our relevance encoder for free-form fluent answer generation. |
Abhirama Subramanyam Penamakuri; Manish Gupta; Mithun Das Gupta; Anand Mishra; | ijcai | 2023-08-23 |
1414 | Keep Skills in Mind: Understanding and Implementing Skills in Commonsense Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a new approach named Dynamic Skill-aware Commonsense Question Answering (DSCQA), which transcends the limitations of traditional methods by informing the model about the need for each skill in questions and utilizes skills as a critical driver in CQA process. |
MEIKAI BAO et. al. | ijcai | 2023-08-23 |
1415 | COOL, A Context Outlooker, and Its Application to Question Answering and Other Natural Language Processing Tasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present an outlook attention mechanism, COOL, for natural language processing. |
Fangyi Zhu; See-Kiong Ng; Stéphane Bressan; | ijcai | 2023-08-23 |
1416 | Local and Global: Temporal Question Answering Via Information Fusion IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the fruitful efforts of previous models in temporal KGQA, they still have several limitations. (I) They neither emphasize the graph structural information between entities in KGs nor explicitly utilize a multi-hop relation path through graph neural networks to enhance answer prediction. (II) They adopt pre-trained language models (LMs) to obtain question representations, focusing merely on the global information related to the question while not highlighting the local information of the entities in KGs. To address these limitations, we introduce a novel model that simultaneously explores both Local information and Global information for the task of temporal KGQA (LGQA). |
YONGHAO LIU et. al. | ijcai | 2023-08-23 |
1417 | A Logic-based Approach to Contrastive Explainability for Neurosymbolic Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a CE framework for VQA that uses a neurosymbolic VQA architecture which disentangles perception from reasoning. |
Thomas Eiter; Tobias Geibinger; Nelson Higuera; Johannes Oetsch; | ijcai | 2023-08-23 |
1418 | HopPG: Self-Iterative Program Generation for Multi-Hop Question Answering Over Heterogeneous Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: On the other hand, this way ignores the semantic information of the intermediate answers at each hop, which is beneficial for subsequent generation. To alleviate these challenges, we propose a self-iterative framework for multi-hop program generation (HopPG) over heterogeneous knowledge, which leverages the previous execution results to retrieve supporting facts and generate subsequent programs hop by hop. |
Yingyao Wang; Yongwei Zhou; Chaoqun Duan; Junwei Bao; Tiejun Zhao; | arxiv-cs.CL | 2023-08-22 |
1419 | Music Understanding LLaMA: Advancing Text-to-Music Generation with Question Answering and Captioning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Text-to-music generation (T2M-Gen) faces a major obstacle due to the scarcity of large-scale publicly available music datasets with natural language captions. To address this, we propose the Music Understanding LLaMA (MU-LLaMA), capable of answering music-related questions and generating captions for music files. |
Shansong Liu; Atin Sakkeer Hussain; Chenshuo Sun; Ying Shan; | arxiv-cs.SD | 2023-08-22 |
1420 | Bridging The Gap: Deciphering Tabular Data Using Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In the realm of natural language processing, the understanding of tabular data has perpetually stood as a focal point of scholarly inquiry. The emergence of expansive language models, exemplified by the likes of ChatGPT, has ushered in a wave of endeavors wherein researchers aim to harness these models for tasks related to table-based question answering. |
Hengyuan Zhang; Peng Chang; Zongcheng Ji; | arxiv-cs.CL | 2023-08-22 |
1421 | DocPrompt: Large-scale Continue Pretrain for Zero-shot and Few-shot Document Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose Docprompt for document question answering tasks with powerful zero-shot and few-shot performance. |
Sijin Wu; Dan Zhang; Teng Hu; Shikun Feng; | arxiv-cs.CL | 2023-08-21 |
1422 | LibriSQA: A Novel Dataset and Framework for Spoken Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Given the evident paucity of existing speech-text LLMs, we propose a lightweight, end-to-end framework to execute the SQA task on the LibriSQA, witnessing significant results. |
Zihan Zhao; Yiyang Jiang; Heyang Liu; Yanfeng Wang; Yu Wang; | arxiv-cs.CL | 2023-08-20 |
1423 | Generic Attention-model Explainability By Weighted Relevance Accumulation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a weighted relevancy strategy, which takes the importance of token values into consideration, to reduce distortion when equally accumulating relevance. |
Yiming Huang; Aozhe Jia; Xiaodan Zhang; Jiawei Zhang; | arxiv-cs.CV | 2023-08-20 |
1424 | Towards Multi-Lingual Audio Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Audio Question Answering (AQA) is a multi-modal translation task where a system analyzes an audio signal and a natural language question to generate a desirable natural language … |
Swarup Ranjan Behera; Pailla Balakrishna Reddy; A. Tripathi; Megavath Bharadwaj Rathod; Tejesh Karavadi; | Interspeech | 2023-08-20 |
1425 | Improving Visual Question Answering for Bridge Inspection By Pre‐training with External Data of Image–text Pairs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper explores the application of visual question answering (VQA) in bridge inspection using recent advancements in multimodal artificial intelligence (AI) systems. VQA … |
Thannarot Kunlamai; T. Yamane; M. Suganuma; Pang-jo Chun; Takayuki Okatani; | Computer‐Aided Civil and Infrastructure Engineering | 2023-08-18 |
1426 | Breaking Language Barriers: A Question Answering Dataset for Hindi and Marathi Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To tackle the challenge of data scarcity, we have developed a novel approach for translating the SQuAD 2.0 dataset into Hindi and Marathi. |
Maithili Sabane; Onkar Litake; Aman Chadha; | arxiv-cs.CL | 2023-08-18 |
1427 | Accelerated Materials Language Processing Enabled By GPT Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we develop generative pretrained transformer (GPT)-enabled pipelines where the complex architectures of prior MLP models are replaced with strategic designs of prompt engineering. |
Jaewoong Choi; Byungju Lee; | arxiv-cs.CL | 2023-08-18 |
1428 | End-to-End Beam Retrieval for Multi-Hop Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce Beam Retrieval, an end-to-end beam retrieval framework for multi-hop QA. |
Jiahao Zhang; Haiyang Zhang; Dongmei Zhang; Yong Liu; Shen Huang; | arxiv-cs.CL | 2023-08-17 |
1429 | Answering Ambiguous Questions with A Database of Questions, Answers, and Revisions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia. |
Haitian Sun; William W. Cohen; Ruslan Salakhutdinov; | arxiv-cs.CL | 2023-08-16 |
1430 | Learning The Meanings of Function Words from Grounded Language Using A Visual Question Answering Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Yet recent neural-network based visual question answering models apparently can learn to use function words as part of answering questions about complex visual scenes. In this paper, we study what these models learn about function words, in the hope of better understanding how the meanings of these words can be learnt by both models and children. |
Eva Portelance; Michael C. Frank; Dan Jurafsky; | arxiv-cs.CL | 2023-08-16 |
1431 | Research on Question Answering for Knowledge Graph of Aircraft PHM Fault Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: A question recognition method based on BERT-BiLSTM-ATT-CRF is proposed to solve the problem of entity recognition difficulties faced by question answering in the field of aircraft … |
XIANGZHEN MENG et. al. | 2023 IEEE 9th International Conference on Cloud Computing … | 2023-08-12 |
1432 | Meta-path Reasoning of Knowledge Graph for Commonsense Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Miao Zhang; Tingting He; M. Dong; | Frontiers of Computer Science | 2023-08-12 |
1433 | Multi-hop Question Answering Over Incomplete Knowledge Graph with Abstract Conceptual Evidence Related Papers Related Patents Related Grants Related Venues Related Experts View |
QIBO SUN et. al. | Applied Intelligence | 2023-08-11 |
1434 | Performance Prediction for Multi-hop Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The problem is challenging due to the multi-step nature of the retrieval process, potential dependency of the steps and the reasoning involved. To tackle this challenge, we propose multHP, a novel pre-retrieval method for predicting the performance of open-domain multi-hop questions. |
Mohammadreza Samadi; Davood Rafiei; | arxiv-cs.CL | 2023-08-11 |
1435 | Progressive Spatio-temporal Perception for Audio-Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a Progressive Spatio-Temporal Perception Network (PSTP-Net), which contains three modules that progressively identify key spatio-temporal regions w.r.t. questions. |
Guangyao Li; Wenxuan Hou; Di Hu; | arxiv-cs.CV | 2023-08-10 |
1436 | ADMUS: A Progressive Question Answering Framework Adaptable to Multiple Knowledge Sources Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Therefore, we present ADMUS, a progressive knowledge base question answering framework designed to accommodate a wide variety of datasets, including multiple languages, diverse backbone knowledge bases, and disparate question answering datasets. To accomplish the purpose, we decouple the architecture of conventional KBQA systems and propose this dataset-independent framework. |
Yirui Zhan; Yanzeng Li; Minhao Zhang; Lei Zou; | arxiv-cs.CL | 2023-08-09 |
1437 | Building Interpretable and Reliable Open Information Retriever for New Domains Overnight Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose an information retrieval pipeline that uses entity/event linking model and query decomposition model to focus more accurately on different information units of the query. |
Xiaodong Yu; Ben Zhou; Dan Roth; | arxiv-cs.CL | 2023-08-09 |
1438 | Top K Relevant Passage Retrieval for Biomedical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we work on the existing DPR framework for the biomedical domain and retrieve answers from the Pubmed articles which is a reliable source to answer medical questions. |
Shashank Gupta; | arxiv-cs.CL | 2023-08-08 |
1439 | Towards An AI to Win Ghana’s National Science and Maths Quiz Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: That is the question we seek to answer in the NSMQ AI project, an open-source project that is building AI to compete live in the NSMQ and win. |
GEORGE BOATENG et. al. | arxiv-cs.HC | 2023-08-08 |
1440 | On Monotonic Aggregation for Open-domain QA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We identify the cause, and based on that we propose Judge-Specialist framework. |
Sang-eun Han; Yeonseok Jeong; Seung-won Hwang; Kyungjae Lee; | arxiv-cs.CL | 2023-08-08 |
1441 | SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we present SciGraphQA, a synthetic multi-turn question-answer dataset related to academic graphs. |
Shengzhi Li; Nima Tajbakhsh; | arxiv-cs.CL | 2023-08-07 |
1442 | KITLM: Domain-Specific Knowledge InTegration Into Language Models for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To boost the domain-specific understanding, we propose, KITLM, a novel knowledge base integration approach into language model through relevant information infusion. |
Ankush Agarwal; Sakharam Gawade; Amar Prakash Azad; Pushpak Bhattacharyya; | arxiv-cs.CL | 2023-08-07 |
1443 | Prompt Guided Copy Mechanism for Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a pluggable approach for extractive methods that introduces a novel prompt-guided copy mechanism to improve the fluency and appropriateness of the extracted answers. |
YONG ZHANG et. al. | arxiv-cs.CL | 2023-08-07 |
1444 | Redundancy-aware Transformer for Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we propose a novel transformer-based architecture, that aims to model VideoQA in a redundancy-aware manner. |
YICONG LI et. al. | arxiv-cs.CV | 2023-08-06 |
1445 | PaniniQA: Enhancing Patient Education Through Interactive Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present PaniniQA, a patient-centric interactive question answering system designed to help patients understand their discharge instructions. |
PENGSHAN CAI et. al. | arxiv-cs.CL | 2023-08-06 |
1446 | Decision Knowledge Graphs: Construction of and Usage in Question Answering for Clinical Practice Guidelines Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present a Decision Knowledge Graph (DKG) representation to store CPGs and to perform question-answering on CPGs. |
Vasudhan Varma Kandula; Pushpak Bhattacharyya; | arxiv-cs.IR | 2023-08-05 |
1447 | Learning to Select The Relevant History Turns in Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Irrelevant context, on the other hand, brings noise to the system, thereby resulting in a decline in the model’s performance. In this paper, we propose a framework, DHS-ConvQA (Dynamic History Selection in Conversational Question Answering), that first generates the context and question entities for all the history turns, which are then pruned on the basis of similarity they share in common with the question at hand. |
MUNAZZA ZAIB et. al. | arxiv-cs.CL | 2023-08-04 |
1448 | WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). |
XIAO LIU et. al. | kdd | 2023-08-04 |
1449 | Dual-feature Collaborative Relation-attention Networks for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Lu Yao; You Yang; Juntao Hu; | International Journal of Multimedia Information Retrieval | 2023-08-04 |
1450 | RealCQA: Scientific Chart Question Answering As A Test-bed for First-Order Logic Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present a comprehensive study of chart visual question-answering(QA) task, to address the challenges faced in comprehending and extracting data from chart visualizations within documents. |
Saleem Ahmed; Bhavin Jawade; Shubham Pandey; Srirangaraj Setlur; Venu Govindaraju; | arxiv-cs.CV | 2023-08-03 |
1451 | BamnetTL: Bidirectional Attention Memory Network with Transfer Learning for Question Answering Matching Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In KBQA (knowledge base question answering), questions are processed using NLP (natural language processing), and knowledge base technology is used to generate the corresponding … |
Lei Su; Jiazhi Guo; Liping Wu; Han Deng; | Int. J. Intell. Syst. | 2023-08-03 |
1452 | Open-Domain Long-Form Question–Answering Using Transformer-Based Pipeline Related Papers Related Patents Related Grants Related Venues Related Experts View |
Aprameya Dash; Mohit Awachar; Anshul Patel; Bhawana Rudra; | SN Computer Science | 2023-08-03 |
1453 | Teaching Smaller Language Models To Generalise To Unseen Compositional Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To do so we propose a combination of multitask supervised pretraining on up to 93 tasks designed to instill diverse reasoning abilities, and a dense retrieval system that aims to retrieve a set of evidential paragraph fragments. |
Tim Hartill; Neset Tan; Michael Witbrock; Patricia J. Riddle; | arxiv-cs.CL | 2023-08-02 |
1454 | Improving Visual Question Answering for Remote Sensing Via Alternate-guided Attention and Combined Loss Related Papers Related Patents Related Grants Related Venues Related Experts View |
JIANGFAN FENG et. al. | Int. J. Appl. Earth Obs. Geoinformation | 2023-08-01 |
1455 | Counting-based Visual Question Answering with Serial Cascaded Attention Deep Learning Related Papers Related Patents Related Grants Related Venues Related Experts View |
Tesfayee Meshu Welde; L. Liao; | Pattern Recognit. | 2023-08-01 |
1456 | Improved Relation Span Detection in Question Answering Systems Over Extracted Knowledge Bases Related Papers Related Patents Related Grants Related Venues Related Experts View |
Somayyeh Behmanesh; Alireza Talebpour; M. Shamsfard; Mohammad Jafari; | Expert Syst. Appl. | 2023-08-01 |
1457 | DAQAS: Deep Arabic Question Answering System Based on Duplicate Question Detection and Machine Reading Comprehension Related Papers Related Patents Related Grants Related Venues Related Experts View |
H. ALAMI et. al. | J. King Saud Univ. Comput. Inf. Sci. | 2023-08-01 |
1458 | Spatio-Temporal Two-stage Fusion for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
FEIFEI XU et. al. | Comput. Vis. Image Underst. | 2023-08-01 |
1459 | Question-conditioned Debiasing with Focal Visual Context Fusion for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Jin Liu; Guoxiang Wang; Chongfeng Fan; F. Zhou; Huijuan Xu; | Knowl. Based Syst. | 2023-08-01 |
1460 | Neural Age Screening on Question Answering Communities Related Papers Related Patents Related Grants Related Venues Related Experts View |
Mohan Timilsina; A. Figueroa; | Eng. Appl. Artif. Intell. | 2023-08-01 |
1461 | Designing A Communication Bridge Between Communities: Participatory Design for A Question-Answering AI Agent Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: How do we design an AI system that is intended to act as a communication bridge between two user communities with different mental models and vocabularies? |
Jeonghyun Lee; Vrinda Nandan; Harshvardhan Sikka; Spencer Rugaber; Ashok Gole; | arxiv-cs.HC | 2023-08-01 |
1462 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. |
Vaibhav Adlakha; Parishad BehnamGhader; Xing Han Lu; Nicholas Meade; Siva Reddy; | arxiv-cs.CL | 2023-07-31 |
1463 | Olio: A Semantic Search Interface for Data Repositories Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: For example, searching for a flight status or a game score returns a dynamically generated response along with supporting, pre-authored documents contextually relevant to the query. In this paper, we extend this hybrid search paradigm to data repositories that contain curated data sources and visualization content. |
Vidya Setlur; Andriy Kanyuka; Arjun Srinivasan; | arxiv-cs.HC | 2023-07-31 |
1464 | KoBBQ: Korean Bias Benchmark for Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present KoBBQ, a Korean bias benchmark dataset, and we propose a general framework that addresses considerations for cultural adaptation of a dataset. |
JIHO JIN et. al. | arxiv-cs.CL | 2023-07-31 |
1465 | No That’s Not What I Meant: Handling Third Position Repair in Conversational Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The ability to handle miscommunication is crucial to robust and faithful conversational AI. People usually deal with miscommunication immediately as they detect it, using highly … |
Vevake Balaraman; Arash Eshghi; Ioannis Konstas; Ioannis V. Papaioannou; | ArXiv | 2023-07-31 |
1466 | No That’s Not What I Meant: Handling Third Position Repair in Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: For stand-alone TPR execution, we perform both automatic and human evaluations on a fine-tuned T5 model, as well as OpenAI’s GPT-3 LLMs. |
Vevake Balaraman; Arash Eshghi; Ioannis Konstas; Ioannis Papaioannou; | arxiv-cs.CL | 2023-07-31 |
1467 | Question Answering with Deep Neural Networks for Semi-Structured Heterogeneous Genealogical Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Moreover, these supervised DNN models require training datasets that are absent in the genealogical domain. This study proposes an end-to-end approach for question answering using genealogical family trees by: 1) representing genealogical data as knowledge graphs, 2) converting them to texts, 3) combining them with unstructured texts, and 4) training a transformer-based question answering model. |
Omri Suissa; Maayan Zhitomirsky-Geffet; Avshalom Elmalech; | arxiv-cs.CL | 2023-07-30 |
1468 | Around The GLOBE: Numerical Aggregation Question-Answering on Heterogeneous Genealogical Knowledge Graphs with Deep Neural Networks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Numerical aggregation QA is critical for distant reading and analysis for researchers (and the general public) interested in investigating cultural heritage domains. Therefore, in this study, we present a new end-to-end methodology for numerical aggregation QA for genealogical trees that includes: 1) an automatic method for training dataset generation; 2) a transformer-based table selection method, and 3) an optimized transformer-based numerical aggregation QA model. |
Omri Suissa; Maayan Zhitomirsky-Geffet; Avshalom Elmalech; | arxiv-cs.CL | 2023-07-30 |
1469 | QUARE: Towards A Question-answering Model for Requirements Elicitation Related Papers Related Patents Related Grants Related Venues Related Experts View |
Johnathan Mauricio Calle-Gallego; C. Jaramillo; | Automated Software Engineering | 2023-07-29 |
1470 | BARTPhoBEiT: Pre-trained Sequence-to-Sequence and Image Transformers Models for Vietnamese Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, there is a lack of models that target specific countries such as Vietnam. To address this limitation, we introduce a transformer-based Vietnamese model named BARTPhoBEiT. |
Khiem Vinh Tran; Kiet Van Nguyen; Ngan Luu Thuy Nguyen; | arxiv-cs.CL | 2023-07-28 |
1471 | Context-VQA: Towards Context-Aware and Purposeful Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To further motivate and analyze the distinction between different contexts, we introduce Context-VQA, a VQA dataset that pairs images with contexts, specifically types of websites (e.g., a shopping website). |
Nandita Naik; Christopher Potts; Elisa Kreiss; | arxiv-cs.CL | 2023-07-28 |
1472 | LOIS: Looking Out of Instance Semantics for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we propose a finer model framework without bounding boxes in this work, termed Looking Out of Instance Semantics (LOIS) to tackle this important issue. |
SIYU ZHANG et. al. | arxiv-cs.CV | 2023-07-26 |
1473 | One Stop Shop for Question-Answering Dataset Selection Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we offer a new visualization tool — Dataset Statistical View (DSV), to lower the barrier of research entry by providing easy access to the question-answering (QA) datasets that researchers can build their work upon. |
Chang Nian Chuy; Qinmin Vivian Hu; Chen Ding; | sigir | 2023-07-25 |
1474 | BeamQA: Multi-hop Knowledge Graph Question Answering with Sequence-to-Sequence Prediction and Beam Search IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing KGQA frameworks that use such techniques often depend on learning a transformation from the query representation to the graph embedding space, which requires access to a large training dataset. We present BeamQA, an approach that overcomes these limitations by combining a sequence-to-sequence prediction model with beam search execution in the embedding space. |
Farah Atif; Ola El Khatib; Djellel Difallah; | sigir | 2023-07-25 |
1475 | Limitations of Open-Domain Question Answering Benchmarks for Document-level Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, this approach ignores important document-level cues that can be crucial in answering questions. This paper reviews three open-domain QA benchmarks from a document-level perspective and finds that they are biased towards passage-level information. |
Ehsan Kamalloo; Charles L. A. Clarke; Davood Rafiei; | sigir | 2023-07-25 |
1476 | Explainable Conversational Question Answering Over Heterogeneous Sources Via Iterative Graph Neural Networks Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Our method EXPLAIGNN overcomes these limitations by integrating information from a mixture of sources with user-comprehensible explanations for answers. |
Philipp Christmann; Rishiraj Saha Roy; Gerhard Weikum; | sigir | 2023-07-25 |
1477 | Keyword-Aware Relative Spatio-Temporal Graph Networks for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a Keyword-aware Relative Spatio-Temporal (KRST) graph network for VideoQA. |
YI CHENG et. al. | arxiv-cs.CV | 2023-07-25 |
1478 | Cross-Market Product-Related Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We conduct a data analysis to understand the scope of the cross-market question-answering task. |
NEGIN GHASEMI et. al. | sigir | 2023-07-25 |
1479 | On Answer Position Bias in Transformers for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we analyze the self-attention and embedding generation components of five Transformer-based models with different architectures and position embedding strategies. |
Rafael Glater; Rodrygo L. T. Santos; | sigir | 2023-07-25 |
1480 | MAMO: Fine-Grained Vision-Language Representations Learning with Masked Multimodal Modeling Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a jointly masked multimodal modeling method to learn fine-grained multimodal representations. |
ZIJIA ZHAO et. al. | sigir | 2023-07-25 |
1481 | MythQA: Query-Based Large-Scale Check-Worthy Claim Detection Through Multi-Answer Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Many efforts have been put into how to identify check-worthy claims from a small scale of pre-collected claims, but how to efficiently detect check-worthy claims directly from a large-scale information source, such as Twitter, remains underexplored. To fill this gap, we introduce MythQA, a new multi-answer open-domain question answering(QA) task that involves contradictory stance mining for query-based large-scale check-worthy claim detection. |
Yang Bai; Anthony Colas; Daisy Zhe Wang; | sigir | 2023-07-25 |
1482 | A Symmetric Dual Encoding Dense Retrieval Framework for Knowledge-Intensive Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a new pipeline for KI-VQA tasks, consisting of a retriever and a reader. |
Alireza Salemi; Juan Altmayer Pizzorno; Hamed Zamani; | sigir | 2023-07-25 |
1483 | Learning to Ask Questions for Zero-shot Dialogue State Tracking Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a method for performing zero-shot Dialogue State Tracking (DST) by casting the task as a learning-to-ask-questions framework. |
Diogo Tavares; David Semedo; Alexander Rudnicky; Joao Magalhaes; | sigir | 2023-07-25 |
1484 | GPT-3 Models Are Few-Shot Financial Reasoners Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We run several experiments with GPT-3 and find that a separate retrieval model and logic engine continue to be essential components to achieving SOTA performance in this task, particularly due to the precise nature of financial questions and the complex information stored in financial documents. With this understanding, our refined prompt-engineering approach on GPT-3 achieves near SOTA accuracy without any fine-tuning. |
Raul Salles de Padua; Imran Qureshi; Mustafa U. Karakaplan; | arxiv-cs.CL | 2023-07-25 |
1485 | Leader-Generator Net: Dividing Skill and Implicitness for Conquering FairytaleQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, a simple but effective Leader-Generator Network is proposed to explicitly separate and extract fine-grained reading skills and the implicitness or explicitness of the question. |
Wei Peng; Wanshui Li; Yue Hu; | sigir | 2023-07-25 |
1486 | Contributions to The Improvement of Question Answering Systems in The Biomedical Domain Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: QA aims at providing inquirers with direct, short and precise answers to their natural language questions. In this Ph.D. thesis, we propose four contributions to improve the performance of QA in the biomedical domain. |
Mourad Sarrouti; | arxiv-cs.CL | 2023-07-25 |
1487 | MA-MRC: A Multi-answer Machine Reading Comprehension Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we aim to construct an MRC dataset with both data of single answer and multiple answers. |
ZHIANG YUE et. al. | sigir | 2023-07-25 |
1488 | A Zero-shot and Few-shot Study of Instruction-Finetuned Large Language Models Applied to Clinical and Biomedical Tasks IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We evaluate four state-of-the-art instruction-tuned large language models (LLMs) — ChatGPT, Flan-T5 UL2, Tk-Instruct, and Alpaca — on a set of 13 real-world clinical and biomedical natural language processing (NLP) tasks in English, such as named-entity recognition (NER), question-answering (QA), relation extraction (RE), etc. |
Yanis Labrak; Mickael Rouvier; Richard Dufour; | arxiv-cs.CL | 2023-07-22 |
1489 | Expert Knowledge-Aware Image Difference Graph Representation Learning for Difference-Aware Medical Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To contribute to automating the medical vision-language model, we propose a novel Chest-Xray Difference Visual Question Answering (VQA) task. |
XINYUE HU et. al. | arxiv-cs.CV | 2023-07-22 |
1490 | Robust Visual Question Answering: Datasets, Methods, and Future Challenges Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In recent years, various datasets and debiasing methods have been proposed to evaluate and enhance the VQA robustness, respectively. |
JIE MA et. al. | arxiv-cs.CV | 2023-07-21 |
1491 | Investigating The Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we present the first analysis on the factual knowledge boundaries of LLMs and how retrieval augmentation affects LLMs on open-domain question answering (QA), yielding several important findings. |
RUIYANG REN et. al. | arxiv-cs.CL | 2023-07-20 |
1492 | Generator-Retriever-Generator Approach for Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a novel approach called Generator-Retriever-Generator (GRG) that combines document retrieval techniques with a large language model (LLM), by first prompting the model to generate contextual documents based on a given question. |
Abdelrahman Abdallah; Adam Jatowt; | arxiv-cs.CL | 2023-07-20 |
1493 | Explaining Autonomous Driving Actions with Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To facilitate interpretability of decision-making in autonomous driving, we present a Visual Question Answering (VQA) framework, which explains driving actions with question-answering-based causal reasoning. |
Shahin Atakishiyev; Mohammad Salameh; Housam Babiker; Randy Goebel; | arxiv-cs.CV | 2023-07-19 |
1494 | Towards A Performance Analysis on Pre-trained Visual Question Answering Models for Autonomous Driving Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This short paper presents a preliminary analysis of three popular Visual Question Answering (VQA) models, namely ViLBERT, ViLT, and LXMERT, in the context of answering questions relating to driving scenarios. |
Kaavya Rekanar; Ciarán Eising; Ganesh Sistu; Martin Hayes; | arxiv-cs.CV | 2023-07-18 |
1495 | Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing analyses are done in small models far from the state of the art. To address this, we present a case study of circuit analysis in the 70B Chinchilla model, aiming to test the scalability of circuit analysis. |
TOM LIEBERUM et. al. | arxiv-cs.LG | 2023-07-18 |
1496 | Traffic-Domain Video Question Answering with Automatic Captioning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we present a novel approach termed Traffic-domain Video Question Answering with Automatic Captioning (TRIVIA), which serves as a weak-supervision technique for infusing traffic-domain knowledge into large video-language models. |
Ehsan Qasemi; Jonathan M. Francis; Alessandro Oltramari; | arxiv-cs.CV | 2023-07-18 |
1497 | Generative Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multi-modal tasks involving vision and language in deep learning continue to rise in popularity and are leading to the development of newer models that can generalize beyond the … |
Ethan Shen; Scotty Singh; B. Kumar; | ArXiv | 2023-07-18 |
1498 | EarthQA: A Question Answering Engine for Earth Observation Data Archives Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: EarthQA is a question answering engine that accepts questions in natural language (English) that ask for satellite images satisfying certain criteria and returns links to such … |
D. Punjani; Manolis Koubarakis; Eleni Tsalapati; | IGARSS 2023 – 2023 IEEE International Geoscience and Remote … | 2023-07-16 |
1499 | DecompEval: Evaluating Generated Texts As Unsupervised Decomposed Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Furthermore, existing metrics only provide an evaluation score for each dimension without revealing the evidence to interpret how this score is obtained. To deal with these challenges, we propose a simple yet effective metric called DecompEval. |
PEI KE et. al. | arxiv-cs.CL | 2023-07-13 |
1500 | Prompt Generate Train (PGT): Few-shot Domain Adaption of Retrieval Augmented Generation Models for Open Book Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a framework – Prompt, Generate, Train (PGT) – to efficiently develop a generative question-answering model for open-book question-answering over a proprietary collection of text documents. |
C. S. Krishna; | arxiv-cs.LG | 2023-07-12 |