Paper Digest: Recent Papers on Question Answering
Paper Digest Team extracted all recent Question Answering related papers on our radar, and generated highlight sentences for them. The results are then sorted by relevance & date. In addition to this ‘static’ page, we also provide a real-time version of this article, which has more coverage and is updated in real time to include the most recent work on this topic.
This list is created by the Paper Digest Team. Experience the cutting-edge capabilities of Paper Digest, an innovative AI-powered research platform that empowers you to read, write, get answers and review.
Try us today and unlock the full potential of our services for free!
TABLE 1: Paper Digest: Recent Papers on Question Answering
# | Paper | Author(s) | Source | Date |
---|---|---|---|---|
1 | Review-Then-Refine: A Dynamic Framework for Multi-Hop Question Answering with Temporal Adaptability Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the challenge, this paper proposes a novel framework called review-then-refine, which aims to enhance LLM performance in multi-hop QA scenarios with temporal information. |
Xiangsen Chen; Xuming Hu; Nan Tang; | arxiv-cs.CL | 2024-12-19 |
2 | Multimodal Hypothetical Summary for Retrieval-based Multi-image Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Conventional retrieve-then-answer pipelines often suffer from cascading errors because the training objective of QA fails to optimize the retrieval stage. To address this issue, we propose a novel method to effectively introduce and reference retrieved information into the QA. |
Peize Li; Qingyi Si; Peng Fu; Zheng Lin; Yan Wang; | arxiv-cs.CV | 2024-12-19 |
3 | CodeRepoQA: A Large-scale Benchmark for Software Engineering Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce CodeRepoQA, a large-scale benchmark specifically designed for evaluating repository-level question-answering capabilities in the field of software engineering. |
RUIDA HU et. al. | arxiv-cs.SE | 2024-12-19 |
4 | Multi-OphthaLingua: A Multilingual Benchmark for Assessing and Debiasing LLM Ophthalmological QA in LMICs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing debiasing methods such as Translation Chain-of-Thought or Retrieval-augmented generation (RAG) by themselves fall short of closing this performance gap, often failing to improve performance across all languages and lacking specificity for the medical domain. To address this issue, we propose CLARA (Cross-Lingual Reflective Agentic system), a novel inference-time debiasing method leveraging retrieval augmented generation and self-verification. |
DAVID RESTREPO et. al. | arxiv-cs.CL | 2024-12-18 |
5 | GraphEQA: Using 3D Semantic Scene Graphs for Real-time Embodied Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This remains a challenging problem in robotics, due to the difficulties in obtaining useful semantic representations, updating these representations online, and leveraging prior world knowledge for efficient exploration and planning. Aiming to address these limitations, we propose GraphEQA, a novel approach that utilizes real-time 3D metric-semantic scene graphs (3DSGs) and task relevant images as multi-modal memory for grounding Vision-Language Models (VLMs) to perform EQA tasks in unseen environments. |
SAUMYA SAXENA et. al. | arxiv-cs.RO | 2024-12-18 |
6 | Question: How Do Large Language Models Perform on The Question Answering Tasks? Answer: Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose a comprehensive performance comparison between smaller fine-tuned models and out-of-the-box instruction-following LLMs on the Stanford Question Answering Dataset 2.0 (SQuAD2), specifically when using a single-inference prompting technique. |
Kevin Fischer; Darren Fürst; Sebastian Steindl; Jakob Lindner; Ulrich Schäfer; | arxiv-cs.CL | 2024-12-17 |
7 | LLM-based Discriminative Reasoning for Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To deal with the issue, we propose a novel LLM-based Discriminative Reasoning (LDR) method to explicitly model the subgraph retrieval and answer inference process. |
MUFAN XU et. al. | arxiv-cs.CL | 2024-12-17 |
8 | EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce EXIT, an extractive context compression framework that enhances both the effectiveness and efficiency of retrieval-augmented generation (RAG) in question answering (QA). |
TAEHO HWANG et. al. | arxiv-cs.CL | 2024-12-17 |
9 | Interpretable LLM-based Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Although recent approaches using Large Language Models (LLMs) have significantly improved Table QA performance, their explanations for how the answers are generated are ambiguous. To fill this gap, we introduce Plan-of-SQLs ( or POS), an interpretable, effective, and efficient approach to Table QA that answers an input query solely with SQL executions. |
Ivan Brugere; Shubham Sharma; Sanjay Kariyappa; Anh Totti Nguyen; Freddy Lecue; | arxiv-cs.CL | 2024-12-16 |
10 | Context Filtering with Reward Modeling in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Yet, the mix of relevant and irrelevant information in these contexts can hinder performance enhancements in QA tasks. To address this, we introduce a context filtering approach that removes non-essential details, summarizing crucial content through Reward Modeling. |
Sangryul Kim; James Thorne; | arxiv-cs.CL | 2024-12-16 |
11 | SCITAT: A Question Answering Benchmark for Scientific Tables and Text Covering Diverse Reasoning Types Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, current SQA datasets have limited reasoning types and neglect the relevance between tables and text, creating a significant gap with real scenarios. To address these challenges, we propose a QA benchmark for scientific tables and text with diverse reasoning types (SciTaT). |
XUANLIANG ZHANG et. al. | arxiv-cs.CL | 2024-12-16 |
12 | CG-Bench: Clue-grounded Question Answering Benchmark for Long Video Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, because of the inherent limitation of MCQ-based evaluation and the increasing reasoning ability of MLLMs, models can give the correct answer purely by combining short video understanding with elimination, without genuinely understanding the video content. To address this gap, we introduce CG-Bench, a novel benchmark designed for clue-grounded question answering in long videos. |
GUO CHEN et. al. | arxiv-cs.CV | 2024-12-16 |
13 | Precise Length Control in Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose a method to adapt pre-trained decoder-only LLMs for precise control of response length. |
Bradley Butcher; Michael O’Keefe; James Titchener; | arxiv-cs.CL | 2024-12-16 |
14 | Advancements and Challenges in Bangla Question Answering Models: A Comprehensive Review Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The domain of Natural Language Processing (NLP) has experienced notable progress in the evolution of Bangla Question Answering (QA) systems. This paper presents a comprehensive review of seven research articles that contribute to the progress in this domain. |
Md Iftekhar Islam Tashik; Abdullah Khondoker; Enam Ahmed Taufik; Antara Firoz Parsa; S M Ishtiak Mahmud; | arxiv-cs.CL | 2024-12-16 |
15 | Overview of TREC 2024 Medical Video Question Answering (MedVidQA) Track Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: With increasing interest in AI to support clinical decision-making and improve patient engagement, there is a need to explore such challenges and develop efficient algorithms for medical language-video understanding and generation. Toward this, we introduced new tasks to foster research toward designing systems that can understand medical videos to provide visual answers to natural language questions, and are equipped with multimodal capability to generate instruction steps from the medical video. |
Deepak Gupta; Dina Demner-Fushman; | arxiv-cs.CV | 2024-12-15 |
16 | Patch-level Sounding Object Tracking for Audio-Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present a new Patch-level Sounding Object Tracking (PSOT) method. |
ZHANGBIN LI et. al. | arxiv-cs.MM | 2024-12-14 |
17 | VisDoM: Multi-Document QA with Visually Rich Elements Using Multimodal Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose VisDoMRAG, a novel multimodal Retrieval Augmented Generation (RAG) approach that simultaneously utilizes visual and textual RAG, combining robust visual retrieval capabilities with sophisticated linguistic reasoning. |
MANAN SURI et. al. | arxiv-cs.CL | 2024-12-14 |
18 | RETQA: A Large-Scale Open-Domain Tabular Question Answering Dataset for Real Estate Sector Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Compared with existing tabular question answering datasets, RETQA poses greater challenges due to three key factors: long-table structures, open-domain retrieval, and multi-domain queries. To tackle these challenges, we propose the SLUTQA framework, which integrates large language models with spoken language understanding tasks to enhance retrieval and answering accuracy. |
Zhensheng Wang; Wenmian Yang; Kun Zhou; Yiquan Zhang; Weijia Jia; | arxiv-cs.CL | 2024-12-13 |
19 | VLR-Bench: Multilingual Benchmark Dataset for Vision-Language Retrieval Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose the VLR-Bench, a visual question answering (VQA) benchmark for evaluating vision language models (VLMs) based on retrieval augmented generation (RAG). |
HYEONSEOK LIM et. al. | arxiv-cs.CV | 2024-12-13 |
20 | Evidence Contextualization and Counterfactual Attribution for Conversational QA Over Heterogeneous Data with RAG Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, several RAG systems today suffer from two shortcomings: (i) retrieved passages usually contain their raw text and lack appropriate document context, negatively impacting both retrieval and answering quality; and (ii) attribution strategies that explain answer generation usually rely only on similarity between the answer and the retrieved passages, thereby only generating plausible but not causal explanations. In this work, we demonstrate RAGONITE, a RAG system that remedies the above concerns by: (i) contextualizing evidence with source metadata and surrounding text; and (ii) computing counterfactual attribution, a causal explanation approach where the contribution of an evidence to an answer is determined by the similarity of the original response to the answer obtained by removing that evidence. |
RISHIRAJ SAHA ROY et. al. | arxiv-cs.CL | 2024-12-13 |
21 | Lost in The Middle, and In-Between: Enhancing Language Models’ Ability to Reason Over Long Contexts in Multi-Hop QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Here, we demonstrate the effects of the lost in the middle problem in the multi-hop question answering setting — in which multiple reasoning hops over disconnected documents are required — and show that performance degrades not only with respect to the distance of information from the edges of the context, but also between pieces of information. |
George Arthur Baker; Ankush Raut; Sagi Shaier; Lawrence E Hunter; Katharina von der Wense; | arxiv-cs.CL | 2024-12-13 |
22 | Foundation Models and Adaptive Feature Selection: A Synergistic Approach to Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce Local-Global Question Aware Video Embedding (LGQAVE), which incorporates three major innovations to integrate multi-modal knowledge better and emphasize semantic visual concepts relevant to specific questions. |
SAI BHARGAV RONGALI et. al. | arxiv-cs.CV | 2024-12-12 |
23 | Towards A Multimodal Large Language Model with Pixel-Level Insight for Biomedicine Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a novel end-to-end multimodal large language model for the biomedical domain, named MedPLIB, which possesses pixel-level understanding. |
XIAOSHUANG HUANG et. al. | arxiv-cs.CV | 2024-12-12 |
24 | Assessing The Robustness of Retrieval-Augmented Generation Systems in K-12 Educational Question Answering with Knowledge Discrepancies Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the discrepancy between textbooks and the parametric knowledge in Large Language Models (LLMs) could undermine the effectiveness of RAG systems. To systematically investigate the robustness of RAG systems under such knowledge discrepancies, we present EduKDQA, a question answering dataset that simulates knowledge discrepancies in real applications by applying hypothetical knowledge updates in answers and source documents. |
Tianshi Zheng; Weihan Li; Jiaxin Bai; Weiqi Wang; Yangqiu Song; | arxiv-cs.CL | 2024-12-12 |
25 | Discrete Subgraph Sampling for Interpretable Graph Based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we integrate different discrete subset sampling methods into a graph-based visual question answering system to compare their effectiveness in generating interpretable explanatory subgraphs intrinsically. |
Pascal Tilli; Ngoc Thang Vu; | arxiv-cs.CL | 2024-12-11 |
26 | Piece of Table: A Divide-and-Conquer Approach for Selecting Sub-Tables in Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Furthermore, when applying linearized tables to LMs, the maximum token lengths often imposed in self-attention calculations make it difficult to comprehensively understand the context spread across large tables. To address these challenges, we present PieTa (Piece of Table), a new framework for sub-table-based question answering (QA). |
Wonjin Lee; Kyumin Kim; Sungjae Lee; Jihun Lee; Kwang In Kim; | arxiv-cs.CL | 2024-12-10 |
27 | RAG-based Question Answering Over Heterogeneous Data and Text Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This article presents the QUASAR system for question answering over unstructured text, structured tables, and knowledge graphs, with unified treatment of all sources. |
Philipp Christmann; Gerhard Weikum; | arxiv-cs.CL | 2024-12-10 |
28 | Ranked from Within: Ranking Large Multimodal Models for Visual Question Answering Without Labels Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we explore unsupervised model ranking for LMMs by leveraging their uncertainty signals, such as softmax probabilities. |
WEIJIE TU et. al. | arxiv-cs.CV | 2024-12-09 |
29 | PediaBench: A Comprehensive Chinese Pediatric Dataset for Benchmarking Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Through an in-depth analysis of experimental results, we offer insights into the ability of LLMs to answer pediatric questions in the Chinese context, highlighting their limitations for further improvements. |
QIAN ZHANG et. al. | arxiv-cs.CL | 2024-12-09 |
30 | FM2DS: Few-Shot Multimodal Multihop Data Synthesis with Knowledge Distillation for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Current methods focus on single-hop question answering or a single modality, which makes them unsuitable for real-world scenarios such as analyzing multimodal educational materials, summarizing lengthy academic articles, or interpreting scientific studies that combine charts, images, and text. To address this gap, we propose a novel methodology, introducing the first framework for creating a high-quality dataset that enables training models for multimodal multihop question answering. |
Amirhossein Abaskohi; Spandana Gella; Giuseppe Carenini; Issam H. Laradji; | arxiv-cs.CL | 2024-12-09 |
31 | An Entailment Tree Generation Approach for Multimodal Multi-Hop Question Answering with Mixture-of-Experts and Iterative Feedback Mechanism Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: 2) Without interpretable reasoning steps, it is difficult for the model to discover logical errors when handling complex questions. To solve these problems, we propose a unified LLM-based approach that does not rely heavily on LLMs, given their potential errors, and innovatively treats multimodal multi-hop question answering as a joint entailment tree generation and question answering problem. |
QING ZHANG et. al. | arxiv-cs.CL | 2024-12-08 |
32 | Evaluating Hallucination in Text-to-Image Diffusion Models with Scene-Graph Based Question-Answering Agent Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We believe that an effective T2I evaluation metric should accomplish the following: detect instances where the generated images do not align with the textual prompts, a discrepancy we define as the ‘hallucination problem’ in T2I tasks; record the types and frequency of hallucination issues, aiding users in understanding the causes of errors; and provide a comprehensive and intuitive scoring that is close to the human standard. To achieve these objectives, we propose a method based on large language models (LLMs) for conducting question-answering with an extracted scene-graph and created a dataset with human-rated scores for generated images. |
ZIYUAN QIN et. al. | arxiv-cs.CV | 2024-12-07 |
33 | Knowledge Graphs Are All You Need: Leveraging KGs in Physics Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a pipeline aimed at enhancing model response quality for Question Answering tasks. |
KRISHNASAI ADDALA et. al. | arxiv-cs.CL | 2024-12-06 |
34 | SplaXBERT: Leveraging Mixed Precision Training and Context Splitting for Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: SplaXBERT, built on ALBERT-xlarge with context-splitting and mixed precision training, achieves high efficiency in question-answering tasks on lengthy texts. Tested on SQuAD v1.1, … |
Zhu Yufan; Hao Zeyu; Li Siqi; Niu Boqian; | arxiv-cs.CL | 2024-12-06 |
35 | GRAF: Graph Retrieval Augmented By Facts for Legal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We first introduce JuRO, the first openly available Romanian legal MCQA dataset, comprising three different examinations and 10,836 questions in total. Along with this dataset, we introduce CROL, an organized corpus of laws containing 93 distinct documents and their modifications across 763 time spans, which we leveraged in this work for Information Retrieval (IR) techniques. |
Cristian-George Crăciun; Răzvan-Alexandru Smădu; Dumitru-Clementin Cercel; Mihaela-Claudia Cercel; | arxiv-cs.CL | 2024-12-05 |
36 | Question Answering for Decisionmaking in Green Building Design: A Multimodal Data Reasoning Method Driven By Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Based on previous research, this study innovatively integrates large language models with DGBD, creating GreenQA, a question answering framework for multimodal data reasoning. |
Yihui Li; Xiaoyue Yan; Hao Zhou; Borong Lin; | arxiv-cs.AI | 2024-12-05 |
37 | Prompt Engineering Guidance for Conceptual Agent-based Model Extraction Using Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This document contains detailed information about the prompts used in the experimental process discussed in the paper Toward Automating Agent-based Model Generation: A Benchmark for Model Extraction using Question-Answering Techniques. |
Siamak Khatami; Christopher Frantz; | arxiv-cs.MA | 2024-12-05 |
38 | Give Me Some Hard Questions: Synthetic Data Generation for Clinical QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We find that naive prompting often results in easy questions that do not reflect the complexity of clinical scenarios. To address this, we propose two prompting strategies: 1) instructing the model to generate questions that do not overlap with the input context, and 2) summarizing the input record using a predefined schema to scaffold question generation. |
FAN BAI et. al. | arxiv-cs.CL | 2024-12-05 |
39 | Domain-specific Question Answering with Hybrid Search Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we show that a hybrid approach combining a fine-tuned dense retriever with keyword-based sparse search methods significantly enhances performance. |
DEWANG SULTANIA et. al. | arxiv-cs.CL | 2024-12-04 |
40 | Copy-Move Forgery Detection and Question Answering for Remote Sensing Image Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces the task of Remote Sensing Copy-Move Question Answering (RSCMQA). |
ZE ZHANG et. al. | arxiv-cs.CV | 2024-12-03 |
41 | QA-TOOLBOX: Conversational Question-Answering for Process Task Guidance in Manufacturing Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we explore utilizing LLMs for data augmentation for a manufacturing task guidance system. |
RAMESH MANUVINAKURIKE et. al. | arxiv-cs.CL | 2024-12-03 |
42 | An Evolutionary Large Language Model for Hallucination Mitigation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose EvoLLMs, an innovative framework inspired by Evolutionary Computation, which automates the generation of high-quality Question-answering (QA) datasets while minimizing hallucinations. |
Abdennour Boulesnane; Abdelhakim Souilah; | arxiv-cs.CL | 2024-12-03 |
43 | Hybrid-SQuAD: Hybrid Scholarly Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, scholarly information often spans heterogeneous sources, necessitating the development of QA systems that integrate information from multiple heterogeneous data sources. To address this challenge, we introduce Hybrid-SQuAD (Hybrid Scholarly Question Answering Dataset), a novel large-scale QA dataset designed to facilitate answering questions incorporating both text and KG facts. |
Tilahun Abedissa Taffa; Debayan Banerjee; Yaregal Assabie; Ricardo Usbeck; | arxiv-cs.CL | 2024-12-03 |
44 | Eyes on The Road: State-of-the-Art Video Question Answering Models Assessment for Traffic Monitoring Tasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The framework leverages GPT-4o to assess accuracy, relevance, and consistency across basic detection, temporal reasoning, and decomposition queries. |
Joseph Raj Vishal; Divesh Basina; Aarya Choudhary; Bharatesh Chakravarthi; | arxiv-cs.CV | 2024-12-02 |
45 | GraphOTTER: Evolving LLM-based Graph Reasoning for Complex Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we propose GraphOTTER that explicitly establishes the reasoning process to pinpoint the correct answers. |
QIANLONG LI et. al. | arxiv-cs.CL | 2024-12-02 |
46 | A Lightweight Transformer-based Visual Question Answering Network with Weight-Sharing Hybrid Attention Related Papers Related Patents Related Grants Related Venues Related Experts View |
Yue Zhu; Dongyue Chen; Tong Jia; Shizhuo Deng; | Neurocomputing | 2024-12-01 |
47 | Generative Language Models Potential for Requirement Engineering Applications: Insights Into Current Strengths and Limitations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Traditional language models have been extensively evaluated for the software engineering domain; however, the potential of ChatGPT and Gemini has not been fully explored. To fill this gap, this paper presents a comprehensive case study investigating the potential of both language models for developing diverse types of requirement engineering applications. |
Summra Saleem; Muhammad Nabeel Asim; Ludger Van Elst; Andreas Dengel; | arxiv-cs.SE | 2024-12-01 |
48 | DynRank: Improving Passage Retrieval with Dynamic Zero-Shot Prompting Based on Question Classification Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents DynRank, a novel framework for enhancing passage retrieval in open-domain question-answering systems through dynamic zero-shot question classification. |
Abdelrahman Abdallah; Jamshid Mozafari; Bhawna Piryani; Mohammed M. Abdelgwad; Adam Jatowt; | arxiv-cs.CL | 2024-11-30 |
49 | Perception Test 2024: Challenge Summary and A Novel Hour-Long VideoQA Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We summarise in this report the challenge tasks and results, and introduce in detail the novel hour-long video QA benchmark 1h-walk VQA. |
Joseph Heyward; João Carreira; Dima Damen; Andrew Zisserman; Viorica Pătrăucean; | arxiv-cs.CV | 2024-11-29 |
50 | TQA-Bench: Evaluating LLMs for Multi-Table Question Answering with Scalable Context and Symbolic Extension Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing benchmarks primarily focus on single-table QA, failing to capture the intricacies of reasoning across multiple relational tables, as required in real-world domains such as finance, healthcare, and e-commerce. To address this gap, we present TQA-Bench, a new multi-table QA benchmark designed to evaluate the capabilities of LLMs in tackling complex QA tasks over relational data. |
Zipeng Qiu; You Peng; Guangxin He; Binhang Yuan; Chen Wang; | arxiv-cs.AI | 2024-11-29 |
51 | Actions and Objects Pathways for Domain Adaptation in Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the Actions and Objects Pathways (AOPath) for out-of-domain generalization in video question answering tasks. |
Safaa Abdullahi Moallim Mohamud; Ho-Young Jung; | arxiv-cs.CV | 2024-11-28 |
52 | Overview of TREC 2024 Biomedical Generative Retrieval (BioGen) Track Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Methods for grounding generated statements in reliable sources along with practical evaluation approaches are needed to overcome this barrier. Towards this, in our pilot task organized at TREC 2024, we introduced the task of reference attribution as a means to mitigate the generation of false statements by LLMs answering biomedical questions. |
Deepak Gupta; Dina Demner-Fushman; William Hersh; Steven Bedrick; Kirk Roberts; | arxiv-cs.IR | 2024-11-27 |
53 | Natural Language Understanding and Inference with MLLM in Visual Question Answering: A Survey Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The goal of our survey is to provide an overview of the development of VQA and a detailed description of the latest models with high timeliness. |
JIAYI KUANG et. al. | arxiv-cs.CL | 2024-11-26 |
54 | Task Progressive Curriculum Learning for Robust Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we show for the first time that robust Visual Question Answering is attainable by simply enhancing the training strategy. |
AHMED AKL et. al. | arxiv-cs.CV | 2024-11-26 |
55 | Text-Guided Coarse-to-Fine Fusion Network for Robust Remote Sensing Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose a Text-guided Coarse-to-Fine Fusion Network (TGFNet), which leverages the semantic relationships between question text and multi-source images to guide the network toward complementary fusion at the feature level. |
ZHICHENG ZHAO et. al. | arxiv-cs.CV | 2024-11-24 |
56 | AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering Benchmark Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce AfriMed-QA, the first large-scale Pan-African English multi-specialty medical Question-Answering (QA) dataset, with 15,000 questions (open and closed-ended) sourced from over 60 medical schools across 16 countries, covering 32 medical specialties. |
TOBI OLATUNJI et. al. | arxiv-cs.CL | 2024-11-23 |
57 | VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning Via Core Frame Selection Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To exploit the potential of high-quality VideoQA pairs, we propose a Hybrid LVLMs Collaboration framework, featuring a Frame Selector and a two-stage instruction fine-tuned reasoning LVLM. |
SONGHAO HAN et. al. | arxiv-cs.CV | 2024-11-22 |
58 | Retrieval-Augmented Generation for Domain-Specific Question Answering: A Case Study on Pittsburgh and CMU Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We designed a Retrieval-Augmented Generation (RAG) system to provide large language models with relevant documents for answering domain-specific questions about Pittsburgh and Carnegie Mellon University (CMU). |
Haojia Sun; Yaqi Wang; Shuting Zhang; | arxiv-cs.LG | 2024-11-20 |
59 | Do LLMs Understand Ambiguity in Text? A Case Study in Open-world Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We demonstrate how simple, training-free, token-level disambiguation methods may be effectively used to improve LLM performance for ambiguous question answering tasks. |
Aryan Keluskar; Amrita Bhattacharjee; Huan Liu; | arxiv-cs.CL | 2024-11-19 |
60 | Neon: News Entity-Interaction Extraction for Enhanced Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the information modeled by the parametric memory of LLMs is often outdated, and Web results from prototypical retrieval systems may fail to capture the latest relevant information and struggle to handle conflicting reports in evolving news. To address this challenge, we present the NEON framework, designed to extract emerging entity interactions — such as events or activities — as described in news articles. |
Sneha Singhania; Silviu Cucerzan; Allen Herring; Sujay Kumar Jauhar; | arxiv-cs.CL | 2024-11-19 |
61 | Mitigating Knowledge Conflicts in Language Model-Driven Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we argue that hallucination could be mitigated via explicit correlation between input source and generated content. |
HAN CAO et. al. | arxiv-cs.CL | 2024-11-18 |
62 | A Comprehensive Survey on Visual Question Answering Datasets and Algorithms Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Since the inception of this field, a plethora of VQA datasets and models have been published. In this article, we meticulously analyze the current state of VQA datasets and models, while cleanly dividing them into distinct categories and then summarizing the methodologies and characteristics of each category. |
Raihan Kabir; Naznin Haque; Md Saiful Islam; | arxiv-cs.CV | 2024-11-17 |
63 | Memory-Augmented Multimodal LLMs for Surgical VQA Via Self-Contained Inquiry Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, these methods often struggle with limited scene understanding and question comprehension, and some rely on external resources (e.g., pre-extracted object features), which can introduce errors and generalize poorly across diverse surgical environments. To address these challenges, we propose SCAN, a simple yet effective memory-augmented framework that leverages Multimodal LLMs to improve surgical context comprehension via Self-Contained Inquiry. |
WENJUN HOU et. al. | arxiv-cs.CV | 2024-11-16 |
64 | Understanding Multimodal LLMs: The Mechanistic Interpretability of Llava in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we apply mechanistic interpretability methods to analyze the visual question answering (VQA) mechanisms in the first MLLM, Llava. |
Zeping Yu; Sophia Ananiadou; | arxiv-cs.CL | 2024-11-16 |
65 | LLaVA-o1: Let Vision Language Models Reason Step-by-Step Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce LLaVA-o1, a novel VLM designed to conduct autonomous multistage reasoning. |
GUOWEI XU et. al. | arxiv-cs.CV | 2024-11-15 |
66 | Visual Question Answering Based Evaluation Metrics for Text-to-image Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes new evaluation metrics that assess the alignment between input text and generated images for every individual object. |
Mizuki Miyamoto; Ryugo Morita; Jinjia Zhou; | arxiv-cs.CV | 2024-11-15 |
67 | A Benchmark for Long-Form Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce a new publicly available benchmark featuring real-world consumer medical questions with long-form answer evaluations annotated by medical doctors. |
PEDRAM HOSSEINI et. al. | arxiv-cs.CL | 2024-11-14 |
68 | Comprehensive and Practical Evaluation of Retrieval-Augmented Generation Systems for Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper addresses this gap by providing a comprehensive evaluation framework for medical question-answering (QA) systems in a RAG setting for these situations, including sufficiency, integration, and robustness. We introduce Medical Retrieval-Augmented Generation Benchmark (MedRGB) that provides various supplementary elements to four medical QA datasets for testing LLMs’ ability to handle these specific scenarios. |
Nghia Trung Ngo; Chien Van Nguyen; Franck Dernoncourt; Thien Huu Nguyen; | arxiv-cs.CL | 2024-11-14 |
69 | The Limited Impact of Medical Adaptation of Large Language and Vision-Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we compare ten public medical LLMs and two VLMs against their corresponding base models, arriving at a different conclusion: all medical VLMs and nearly all medical LLMs fail to consistently improve over their base models in the zero-/few-shot prompting and supervised fine-tuning regimes for medical question-answering (QA). |
Daniel P. Jeong; Pranav Mani; Saurabh Garg; Zachary C. Lipton; Michael Oberst; | arxiv-cs.CL | 2024-11-13 |
70 | Deceiving Question-Answering Models: A Hybrid Word-Level Adversarial Approach Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces QA-Attack (Question Answering Attack), a novel word-level adversarial strategy that fools QA models. |
Jiyao Li; Mingze Ni; Yongshun Gong; Wei Liu; | arxiv-cs.CL | 2024-11-12 |
71 | Large Language Models Are Poor Clinical Decision-Makers: A Comprehensive Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To better understand LLMs in the clinic, we construct a benchmark ClinicBench. |
FENGLIN LIU et. al. | emnlp | 2024-11-11 |
72 | DVD: Dynamic Contrastive Decoding for Knowledge Amplification in Multi-Document Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Retrieval-augmented generation (RAG) offers a potential remedy, yet the uneven retrieval quality and irrelevant contents may distract LLMs. In this work, we address these issues at the generation phase by treating RAG as a multi-document QA task. |
Jing Jin; Houfeng Wang; Hao Zhang; Xiaoguang Li; Zhijiang Guo; | emnlp | 2024-11-11 |
73 | Training-free Deep Concept Injection Enables Language Models for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we make the first attempt to demonstrate that the PLM is able to perform zero-shot crossmodal tasks without any crossmodal pretraining, when the observed visual concepts are injected as both additional input text tokens and augmentation in the intermediate features within each feed-forward network for the PLM. |
Xudong Lin; Manling Li; Richard Zemel; Heng Ji; Shih-Fu Chang; | emnlp | 2024-11-11 |
74 | Toward Optimal Search and Retrieval for RAG Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Here, we work towards the goal of understanding how retrievers can be optimized for RAG pipelines for common tasks such as Question Answering (QA). |
ALEXANDRIA LETO et. al. | arxiv-cs.CL | 2024-11-11 |
75 | MILD Bot: Multidisciplinary Childhood Cancer Survivor Question-Answering Bot Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study introduces a Multidisciplinary chILDhood cancer survivor question-answering (MILD) bot designed to support childhood cancer survivors facing diverse challenges in their survivorship journey. |
MIRAE KIM et. al. | emnlp | 2024-11-11 |
76 | CompAct: Compressing Retrieved Documents Actively for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Context compression tackles this issue by filtering out irrelevant information, but current methods still struggle in realistic scenarios where crucial information cannot be captured with a single-step approach. To overcome this limitation, we introduce CompAct, a novel framework that employs an active strategy to condense extensive documents without losing key information. |
Chanwoong Yoon; Taewhoo Lee; Hyeon Hwang; Minbyul Jeong; Jaewoo Kang; | emnlp | 2024-11-11 |
77 | You Make Me Feel Like A Natural Question: Training QA Systems on Transformed Trivia Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Training question-answering (QA) and information retrieval systems for web queries requires large, expensive datasets that are difficult to annotate and time-consuming to gather. …
TASNIM KABIR et. al. | emnlp | 2024-11-11 |
78 | ERVQA: A Dataset to Benchmark The Readiness of Large Vision Language Models in Hospital Environments Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the Emergency Room Visual Question Answering (ERVQA) dataset, consisting of |
SOURJYADIP RAY et. al. | emnlp | 2024-11-11 |
79 | EfficientRAG: Efficient Retriever for Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce EfficientRAG, an efficient retriever for multi-hop question answering. |
ZIYUAN ZHUANG et. al. | emnlp | 2024-11-11 |
80 | SciDQA: A Deep Reading Comprehension Dataset Over Scientific Papers Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce SciDQA, a new dataset for reading comprehension that challenges language models to deeply understand scientific articles, consisting of 2,937 QA pairs. |
Shruti Singh; Nandan Sarkar; Arman Cohan; | emnlp | 2024-11-11 |
81 | Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Thus, the retrieved knowledge is not truly conducive to helping answer the question, affecting the performance of the overall system. To address this issue, we propose a novel framework that leverages the visual-language model to select the key knowledge retrieved by DPR and answer questions. |
Dongze Hao; Qunbo Wang; Longteng Guo; Jie Jiang; Jing Liu; | emnlp | 2024-11-11 |
82 | CasiMedicos-Arg: A Medical Question Answering Dataset Annotated with Explanatory Argumentative Structures Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Developing new tools to aid residents to train their explanation skills is therefore a central objective of AI in education. In this paper, we follow this direction, and we present, to the best of our knowledge, the first multilingual dataset for Medical Question Answering where correct and incorrect diagnoses for a clinical case are enriched with a natural language explanation written by doctors. |
EKATERINA SVIRIDOVA et. al. | emnlp | 2024-11-11 |
83 | RAG4ITOps: A Supervised Fine-Tunable and Comprehensive RAG Framework for IT Operations and Maintenance Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a general and comprehensive framework based on Retrieval Augmented Generation (RAG) and facilitate the whole business process of establishing QA systems for IT operations and maintenance. |
TIANYANG ZHANG et. al. | emnlp | 2024-11-11 |
84 | Encoding and Controlling Global Semantics for Long-form Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To further enhance the controllability, we introduce a cross-modal compositional congruence objective to encourage global semantics aligned with the question. |
THONG THANH NGUYEN et. al. | emnlp | 2024-11-11 |
85 | Self-Training Large Language and Vision Assistant for Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the advancement of medical image understanding and reasoning critically depends on building high-quality visual instruction data, which is costly and labor-intensive to obtain, particularly in the medical domain. To mitigate this data-starving issue, we introduce Self-Training Large Language and Vision Assistant for Medical (STLLaVA-Med). |
Guohao Sun; Can Qin; Huazhu Fu; Linwei Wang; Zhiqiang Tao; | emnlp | 2024-11-11 |
86 | A Simple LLM Framework for Long-Range Video Question-Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present LLoVi, a simple yet effective **L**anguage-based **Lo**ng-range **Vi**deo question-answering (LVQA) framework. |
CE ZHANG et. al. | emnlp | 2024-11-11 |
87 | Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we present MIRAGE – Model Internals-based RAG Explanations – a plug-and-play approach using model internals for faithful answer attribution in RAG applications. |
Jirui Qi; Gabriele Sarti; Raquel Fernández; Arianna Bisazza; | emnlp | 2024-11-11 |
88 | Efficient Answer Retrieval System (EARS): Combining Local DB Search and Web Search for Generative QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose an efficient answer retrieval system **EARS**: a production-ready, factual question answering (QA) system that combines local knowledge base search with generative, context-based QA. |
Nikita Krayko; Ivan Sidorov; Fedor Laputin; Daria Galimzianova; Vasily Konovalov; | emnlp | 2024-11-11 |
89 | OMG-QA: Building Open-Domain Multi-Modal Generative Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce OMG-QA, a new resource for question answering that is designed to evaluate the effectiveness of question answering systems that perform retrieval augmented generation (RAG) in scenarios that demand reasoning on multi-modal, multi-document contexts. |
LINYONG NAN et. al. | emnlp | 2024-11-11 |
90 | Empowering Large Language Model for Continual Video Question Answering with Collaborative Prompting Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we explore the novel challenge of VideoQA within a continual learning framework, and empirically identify a critical issue: fine-tuning a large language model (LLM) for a sequence of tasks often results in catastrophic forgetting. |
CHEN CAI et. al. | emnlp | 2024-11-11 |
91 | Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, existing benchmarks employ irrelevant noise texts to artificially extend the length of test cases, diverging from the real-world scenarios of long-context applications. To bridge this gap, we propose a novel long-context benchmark, Loong, aligning with realistic scenarios through extended multi-document question answering (QA). |
MINZHENG WANG et. al. | emnlp | 2024-11-11 |
92 | LLoCO: Learning Long Contexts Offline Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Processing long contexts remains a challenge for large language models (LLMs) due to the quadratic computational and memory overhead of the self-attention mechanism and the substantial KV cache sizes during generation. We propose LLoCO, a novel approach to address this problem by learning contexts offline through context compression and in-domain parameter-efficient finetuning with LoRA. |
SIJUN TAN et. al. | emnlp | 2024-11-11 |
93 | Multi-Level Information Retrieval Augmented Generation for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose a multi-level information RAG approach that enhances answer generation through entity retrieval and query expansion. |
Adjali Omar; Olivier Ferret; Sahar Ghannay; Hervé Le Borgne; | emnlp | 2024-11-11 |
94 | Adaptive Question Answering: Enhancing Language Model Proficiency for Addressing Knowledge Conflicts with Source Citations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the importance of both aspects, no prior research has combined them, leaving a significant gap in the development of QA systems. In this work, we bridge this gap by proposing the novel task of QA with source citation in ambiguous settings, where multiple valid answers exist. |
Sagi Shaier; Ari Kobren; Philip V. Ogren; | emnlp | 2024-11-11 |
95 | StorySparkQA: Expert-Annotated QA Pairs with Real-World Knowledge for Children’s Story-Based Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This limitation can be attributed to the existing question-answering (QA) datasets used for children’s education, upon which the systems are built, failing to capture the nuances of how education experts think when conducting interactive story reading activities. To bridge this gap, we design an annotation framework, empowered by an existing knowledge graph, to capture experts’ annotations and thinking process, and leverage this framework to construct the StorySparkQA dataset, which comprises 5,868 expert-annotated QA pairs with real-world knowledge. |
JIAJU CHEN et. al. | emnlp | 2024-11-11 |
96 | Subgraph Retrieval Enhanced By Graph-Text Alignment for Commonsense Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To deal with the problems above, we propose a novel framework: **S**ubgraph R**E**trieval Enhanced by Gra**P**h-**T**ext **A**lignment, named **SEPTA**. |
BOCI PENG et. al. | arxiv-cs.LG | 2024-11-11 |
97 | REAR: A Relevance-Aware Retrieval-Augmented Framework for Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Despite the extensive efforts on RAG research, in existing methods, LLMs cannot precisely assess the relevance of retrieved documents, thus likely leading to misleading or even incorrect utilization of external knowledge (i.e., retrieved documents). To address this issue, in this paper, we propose REAR, a RElevance-Aware Retrieval-augmented approach for open-domain question answering (QA). |
YUHAO WANG et. al. | emnlp | 2024-11-11 |
98 | RAG-QA Arena: Evaluating Domain Robustness for Long-form Retrieval Augmented Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, most existing datasets for this task are either constructed using a single source corpus or consist of short extractive answers, which fall short of evaluating large language model (LLM) based RAG-QA systems on cross-domain generalization. To address these limitations, we create Long-form RobustQA (LFRQA), a new dataset comprising human-written long-form answers that integrate short extractive answers from multiple documents into a single, coherent narrative, covering 26K queries and large corpora across seven different domains. |
RUJUN HAN et. al. | emnlp | 2024-11-11 |
99 | Towards Faithful Knowledge Graph Explanation Through Deep Alignment in Commonsense Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We identify confounding effects and LM-KG misalignment as key factors causing spurious explanations. To address this, we introduce the LM-KG Fidelity metric to assess KG representation reliability and propose the LM-KG Distribution-aware Alignment (LKDA) algorithm to improve explanation faithfulness. |
Weihe Zhai; Arkaitz Zubiaga; Bingquan Liu; Chengjie Sun; Yalong Zhao; | emnlp | 2024-11-11 |
100 | Visual Text Matters: Improving Text-KVQA with Visual Text Entity Knowledge-aware Large Multimodal Assistant Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We revisit knowledge-aware text-based visual question answering, also known as Text-KVQA in the light of modern advancements in large multimodal models (LMMs), and make the following contributions: (i) We propose VisTEL – a principled approach to perform visual text entity linking. |
Abhirama Subramanyam Penamakuri; Anand Mishra; | emnlp | 2024-11-11 |
101 | Right for Right Reasons: Large Language Models for Verifiable Commonsense Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In response, we propose Right for Right Reasons (R3), a commonsense KGQA methodology that allows for a verifiable reasoning procedure by axiomatically surfacing intrinsic commonsense knowledge of LLMs and grounding every factual reasoning step on KG triples. |
Armin Toroghi; Willis Guo; Mohammad Mahdi Abdollah Pour; Scott Sanner; | emnlp | 2024-11-11 |
102 | Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, in contrast, we offer the first systematic analysis of the effect of fine-grained object grounding on LVLM hallucination under an evaluation protocol that more realistically captures LVLM hallucination in open generation. |
Gregor Geigle; Radu Timofte; Goran Glavaš; | emnlp | 2024-11-11 |
103 | TimeR4 : Time-aware Retrieval-Augmented Large Language Models for Temporal Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To further enhance LLMs’ temporal reasoning ability, this paper aims to integrate relevant temporal knowledge from TKGs into LLMs through a Time-aware Retrieve-Rewrite-Retrieve-Rerank framework, which we named TimeR4. |
XINYING QIAN et. al. | emnlp | 2024-11-11 |
104 | PCQPR: Proactive Conversational Question Planning with Reflection Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we redefine the CQG task as Conclusion-driven Conversational Question Generation (CCQG) by focusing on proactivity, not merely reacting to the unfolding conversation but actively steering it towards a conclusion-oriented question-answer pair. To address this, we propose a novel approach, called Proactive Conversational Question Planning with self-Refining (PCQPR). |
Shasha Guo; Lizi Liao; Jing Zhang; Cuiping Li; Hong Chen; | emnlp | 2024-11-11 |
105 | Generate-on-Graph: Treat LLM As Both Agent and KG for Incomplete Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To handle IKGQA, we propose a training-free method called Generate-on-Graph (GoG), which can generate new factual triples while exploring KGs. |
YAO XU et. al. | emnlp | 2024-11-11 |
106 | LongRAG: A Dual-Perspective Retrieval-Augmented Generation Paradigm for Long-Context Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we propose LongRAG, a general, dual-perspective, and robust LLM-based RAG system paradigm for LCQA to enhance RAG’s understanding of complex long-context knowledge (i.e., global information and factual details). |
QINGFEI ZHAO et. al. | emnlp | 2024-11-11 |
107 | FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Food is a rich and varied dimension of cultural heritage, crucial to both individuals and social groups. To bridge the gap in the literature on the often-overlooked regional diversity in this domain, we introduce FoodieQA, a manually curated, fine-grained image-text dataset capturing the intricate features of food cultures across various regions in China. |
WENYAN LI et. al. | emnlp | 2024-11-11 |
108 | Where Am I? Large Language Models Wandering Between Semantics and Structures in Long Contexts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To verify LLMs’ task alignment, we introduce a verification framework and resources considering both semantic relevancy and structural diversity of the given long context knowledge. |
Seonmin Koo; Jinsung Kim; YoungJoon Jang; Chanjun Park; Heuiseok Lim; | emnlp | 2024-11-11 |
109 | Do Great Minds Think Alike? Investigating Human-AI Complementarity in Question Answering with CAIMIRA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recent advancements of large language models (LLMs) have led to claims of AI surpassing humans in natural language processing (NLP) tasks such as textual understanding and reasoning. This work investigates these assertions by introducing CAIMIRA, a novel framework rooted in item response theory (IRT) that enables quantitative assessment and comparison of problem-solving abilities in question-answering (QA) agents. |
Maharshi Gor; Hal Daumé III; Tianyi Zhou; Jordan Lee Boyd-Graber; | emnlp | 2024-11-11 |
110 | Medical Adaptation of Large Language and Vision-Language Models: Are We Making Progress? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we compare seven public medical LLMs and two VLMs against their corresponding base models, arriving at a different conclusion: all medical VLMs and nearly all medical LLMs fail to consistently improve over their base models in the zero-/few-shot prompting regime for medical question-answering (QA) tasks. |
Daniel P Jeong; Saurabh Garg; Zachary Chase Lipton; Michael Oberst; | emnlp | 2024-11-11 |
111 | Contextualized Sequence Likelihood: Enhanced Confidence Scores for Natural Language Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose enhancing the predicted sequence probability by assigning different weights to various tokens using attention values elicited from the base LLM. |
Zhen Lin; Shubhendu Trivedi; Jimeng Sun; | emnlp | 2024-11-11 |
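The core idea in entry 111, weighting per-token log-probabilities by attention-derived importance before aggregating them into a confidence score, can be illustrated with a minimal sketch. This is a simplification under stated assumptions (the normalization scheme and the names `token_logprobs` and `attention_weights` are illustrative), not the paper's exact formulation.

```python
# Illustrative sketch of an attention-weighted sequence confidence score
# (a simplification; not the paper's exact method).
import numpy as np

def weighted_sequence_score(token_logprobs, attention_weights):
    """Combine per-token log-probabilities using importance weights.

    token_logprobs: log p(token_t | prefix) for each generated token.
    attention_weights: non-negative per-token importance scores (e.g., elicited
    from the base LLM's attention); normalized here to sum to 1.
    """
    w = np.asarray(attention_weights, dtype=float)
    w = w / w.sum()                      # normalize importance weights
    lp = np.asarray(token_logprobs, dtype=float)
    return float(np.dot(w, lp))          # weighted average log-likelihood

# Example: tokens deemed important by attention dominate the confidence score.
print(weighted_sequence_score([-0.1, -2.3, -0.2], [0.7, 0.2, 0.1]))
```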
112 | Triad: A Framework Leveraging A Multi-Role LLM-based Agent to Solve Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present Triad, a unified framework that utilizes an LLM-based agent with multiple roles for KBQA tasks. |
CHANG ZONG et. al. | emnlp | 2024-11-11 |
113 | Evidence-Focused Fact Summarization for Knowledge-Augmented Zero-Shot Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Existing methods, like concatenation or free-form textual conversion of triples, have limitations, including duplicated entities or relations, reduced evidence density, and failure to highlight crucial evidence. To address these issues, we propose EFSum, an Evidence-focused Fact Summarization framework for enhanced QA with knowledge-augmented LLMs. |
Sungho Ko; Hyunjin Cho; Hyungjoo Chae; Jinyoung Yeo; Dongha Lee; | emnlp | 2024-11-11 |
114 | Pre-training Cross-lingual Open Domain Question Answering with Large-scale Synthetic Supervision Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View |
Fan Jiang; Tom Drummond; Trevor Cohn; | emnlp | 2024-11-11 |
115 | RAC: Retrieval-augmented Conversation Dataset for Open-domain Question Answering in Conversational Settings Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we present a novel retrieval-augmented conversation (RAC) dataset and develop a baseline system comprising query rewriting, retrieval, reranking, and response generation stages. |
Bonggeun Choi; JeongJae Park; Yoonsung Kim; Jaehyun Park; Youngjoong Ko; | emnlp | 2024-11-11 |
116 | EVQAScore: Efficient Video Question Answering Data Evaluation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Although various methods have been proposed for assessing video caption quality, there remains a lack of dedicated evaluation methods for Video QA. To address this gap, we introduce EVQAScore, a reference-free method that leverages keyword extraction to assess both video caption and video QA data quality. |
Hao Liang; Zirong Chen; Wentao Zhang; | arxiv-cs.CV | 2024-11-11 |
117 | CoTKR: Chain-of-Thought Enhanced Knowledge Rewriting for Complex Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To address them, we propose a novel rewriting method CoTKR, Chain- of-Thought Enhanced Knowledge Rewriting, for generating reasoning traces and corresponding knowledge in an interleaved manner, thereby mitigating the limitations of single-step knowledge rewriting. |
YIKE WU et. al. | emnlp | 2024-11-11 |
118 | LONGAGENT: Achieving Question Answering for 128k-Token-Long Documents Through Multi-Agent Collaboration Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce _LongAgent_, a multi-agent collaboration method that enables efficient and effective QA over 128k-token-long documents. |
JUN ZHAO et. al. | emnlp | 2024-11-11 |
119 | Can LLM Generate Culturally Relevant Commonsense QA Data? Case Study in Indonesian and Sundanese Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we investigate the effectiveness of using LLMs in generating culturally relevant commonsense QA datasets for Indonesian and Sundanese languages. |
Rifki Afina Putri; Faiz Ghifari Haznitrama; Dea Adhista; Alice Oh; | emnlp | 2024-11-11 |
120 | RE-RAG: Improving Open-Domain QA Performance and Interpretability with Relevance Estimator in Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a weakly supervised method for training the RE simply utilizing question-answer data without any labels for correct contexts. |
Kiseung Kim; Jay-Yoon Lee; | emnlp | 2024-11-11 |
121 | Cross-lingual Transfer for Automatic Question Generation By Learning Interrogative Structures in Target Languages Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a simple and efficient XLT-QG method that operates without the need for monolingual, parallel, or labeled data in the target language, utilizing a small language model. |
Seonjeong Hwang; Yunsu Kim; Gary Lee; | emnlp | 2024-11-11 |
122 | ZEBRA: Zero-Shot Example-Based Retrieval Augmentation for Commonsense Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, these methods require additional training, hand-crafted templates or human-written explanations. To address these issues, we introduce ZEBRA, a zero-shot question answering framework that combines retrieval, case-based reasoning and introspection and dispenses with the need for additional training of the LLM. |
Francesco Maria Molfese; Simone Conia; Riccardo Orlando; Roberto Navigli; | emnlp | 2024-11-11 |
123 | GOVERN: Gradient Orientation Vote Ensemble for Multi-Teacher Reinforced Distillation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, for practical deployment, it is crucial to perform knowledge distillation to maintain high performance while operating under computational constraints. In this paper, we address a key question: given the importance of unsupervised distillation for student model performance, how can knowledge from multiple teacher models be effectively ensembled during this stage without the guidance of labels? |
WENJIE ZHOU et. al. | emnlp | 2024-11-11 |
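Entry 123's title suggests aggregating supervision from multiple teachers by voting on gradient orientations. A toy sketch of that general idea follows: keep only per-parameter gradient contributions whose sign agrees with the majority across teachers, then average them. This is a hypothetical simplification for illustration; GOVERN's actual procedure may differ in detail.

```python
# Toy sign-vote aggregation of per-teacher gradients (illustrative only).
import numpy as np

def vote_aggregate_gradients(teacher_grads):
    """teacher_grads: list of same-shaped gradient arrays, one per teacher."""
    grads = np.stack(teacher_grads)                      # shape (T, ...)
    majority_sign = np.sign(np.sign(grads).sum(axis=0))  # elementwise majority vote
    agree = (np.sign(grads) == majority_sign) & (majority_sign != 0)
    summed = np.where(agree, grads, 0.0).sum(axis=0)
    counts = np.maximum(agree.sum(axis=0), 1)
    return summed / counts                               # mean of agreeing gradients

g1, g2, g3 = np.array([0.5, -0.2]), np.array([0.4, 0.3]), np.array([-0.1, 0.2])
print(vote_aggregate_gradients([g1, g2, g3]))  # -> [0.45 0.25]
```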
124 | PDFTriage: Question Answering Over Long, Structured Documents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: When a system has to query the document for context, this incongruity is brought to the fore, and seemingly trivial questions can trip up the QA system. To bridge this fundamental gap in handling structured documents, we propose an approach called PDFTriage that enables models to retrieve the context based on either structure or content. |
JON SAAD-FALCON et. al. | emnlp | 2024-11-11 |
125 | SparrowVQE: Visual Question Explanation for Course Content Understanding Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper aims to advance the field by introducing Visual Question Explanation (VQE), which enhances the ability of VQA to provide detailed explanations rather than brief responses and address the need for more complex interaction with visual content. |
Jialu Li; Manish Kumar Thota; Ruslan Gokhman; Radek Holik; Youshan Zhang; | arxiv-cs.CV | 2024-11-11 |
126 | Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce a large-scale dataset comprising over 7 million questions from 17 marketplaces across 11 languages. |
Yifei Yuan; Yang Deng; Anders Søgaard; Mohammad Aliannejadi; | emnlp | 2024-11-11 |
127 | TraveLER: A Modular Multi-LMM Agent Framework for Video Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Currently, existing methods perform all of these steps in a single pass without being able to adapt if insufficient or incorrect information is collected. To overcome this, we introduce a modular multi-LMM agent framework based on several agents with different roles, instructed by a Planner agent that updates its instructions using shared feedback from the other agents. |
Chuyi Shang; Amos You; Sanjay Subramanian; Trevor Darrell; Roei Herzig; | emnlp | 2024-11-11 |
128 | Revisiting Automated Evaluation for Long-form Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce LFTQA-Eval, a meta-evaluation dataset comprising 2,988 human-annotated examples, to rigorously assess the efficacy of current automated metrics in assessing LLM-based LFTQA systems, with a focus on faithfulness and comprehensiveness. |
Yuqi Wang; Lyuhao Chen; Songcheng Cai; Zhijian Xu; Yilun Zhao; | emnlp | 2024-11-11 |
129 | CommVQA: Situating Visual Question Answering in Communicative Contexts Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To evaluate how situating images within naturalistic contexts shapes visual questions, we introduce CommVQA, a VQA dataset consisting of images, image descriptions, real-world communicative scenarios where the image might appear (e.g., a travel website), and follow-up questions and answers conditioned on the scenario and description. |
Nandita Shankar Naik; Christopher Potts; Elisa Kreiss; | emnlp | 2024-11-11 |
130 | GUIDEQ: Framework for Guided Questioning for Progressive Informational Collection and Classification Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Our work, GUIDEQ, presents a novel framework for asking guided questions that progressively complete partially available information. |
Priya Mishra; Suraj Racha; Kaustubh Ponkshe; Adit Akarsh; Ganesh Ramakrishnan; | arxiv-cs.CL | 2024-11-08 |
131 | SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the Source-aware Semantic Representation Network (SaSR-Net), a novel model designed for AVQA. |
TIANYU YANG et. al. | arxiv-cs.CV | 2024-11-07 |
132 | MEG: Medical Knowledge-Augmented Large Language Models for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we present MEG, a parameter-efficient approach for medical knowledge-augmented LLMs. |
Laura Cabello; Carmen Martin-Turrero; Uchenna Akujuobi; Anders Søgaard; Carlos Bobed; | arxiv-cs.CL | 2024-11-06 |
133 | Lexicalization Is All You Need: Examining The Impact of Lexical Knowledge in A Compositional QALD System Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we examine the impact of lexicalization on Question Answering over Linked Data (QALD). |
David Maria Schmidt; Mohammad Fazleh Elahi; Philipp Cimiano; | arxiv-cs.AI | 2024-11-06 |
134 | Medical Adaptation of Large Language and Vision-Language Models: Are We Making Progress? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we compare seven public medical LLMs and two VLMs against their corresponding base models, arriving at a different conclusion: all medical VLMs and nearly all medical LLMs fail to consistently improve over their base models in the zero-/few-shot prompting regime for medical question-answering (QA) tasks. |
Daniel P. Jeong; Saurabh Garg; Zachary C. Lipton; Michael Oberst; | arxiv-cs.CL | 2024-11-06 |
135 | VQA²: Visual Question Answering for Video Quality Assessment Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Nevertheless, related work has not been explored in the video domain, leaving substantial room for improvement. To address this gap, we introduce the VQA² Instruction Dataset – the first visual question answering instruction dataset that focuses on video quality assessment. |
ZIHENG JIA et. al. | arxiv-cs.CV | 2024-11-06 |
136 | Leveraging Large Language Models in Code Question Answering: Baselines and Issues Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a work devoted to using large language models for question answering over source code in Python. |
Georgy Andryushchenko; Vladimir Ivanov; Vladimir Makharev; Elizaveta Tukhtina; Aidar Valeev; | arxiv-cs.CL | 2024-11-05 |
137 | FactTest: Factuality Testing in Large Language Models with Finite-Sample and Distribution-Free Guarantees Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce FactTest, a novel framework that statistically assesses whether a LLM can confidently provide correct answers to given questions with high-probability correctness guarantees. |
FAN NIE et. al. | arxiv-cs.CL | 2024-11-04 |
138 | Multimodal Commonsense Knowledge Distillation for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose a novel graph-based multimodal commonsense knowledge distillation framework that constructs a unified relational graph over commonsense knowledge, visual objects and questions through a Graph Convolutional Network (GCN) following a teacher-student environment. |
Shuo Yang; Siwen Luo; Soyeon Caren Han; | arxiv-cs.CL | 2024-11-04 |
139 | One VLM to Keep It Learning: Generation and Balancing for Data-free Continual Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose the first data-free method that leverages the language generation capability of a VLM, instead of relying on external models, to produce pseudo-rehearsal data for addressing continual VQA. |
Deepayan Das; Davide Talon; Massimiliano Mancini; Yiming Wang; Elisa Ricci; | arxiv-cs.CV | 2024-11-04 |
140 | A Visual Question Answering Method for SAR Ship: Breaking The Requirement for Multimodal Dataset Construction and Model Fine-Tuning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This has greatly hindered the application of VQA to downstream tasks, such as ship information analysis based on Synthetic Aperture Radar (SAR) imagery. To address this challenge, this letter proposes a novel VQA approach that integrates object detection networks with visual language models, specifically designed for analyzing ships in SAR images. |
Fei Wang; Chengcheng Chen; Hongyu Chen; Yugang Chang; Weiming Zeng; | arxiv-cs.CV | 2024-11-03 |
141 | Diagnosing Medical Datasets with Training Dynamics Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This study explores the potential of using training dynamics as an automated alternative to human annotation for evaluating the quality of training data. |
Laura Wenderoth; | arxiv-cs.LG | 2024-11-03 |
142 | Goal-Oriented Semantic Communication for Wireless Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Meanwhile, this brings new communication challenges between the local and edge, including limited bandwidth, channel noise, and multipath effects, which degrade VQA performance and user quality of experience (QoE), particularly during the transmission of large high-resolution images. To overcome these bottlenecks, we propose a goal-oriented semantic communication (GSC) framework that focuses on effectively extracting and transmitting semantic information most relevant to the VQA goals, improving the answering accuracy and enhancing the effectiveness and efficiency. |
Sige Liu; Nan Li; Yansha Deng; Tony Q. S. Quek; | arxiv-cs.CV | 2024-11-03 |
143 | Right This Way: Can VLMs Guide Us to See More to Answer Questions? Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This capability is especially valuable for assisting visually impaired individuals who often need guidance to capture images correctly. To evaluate this capability of current VLMs, we introduce a human-labeled dataset as a benchmark for this task. |
LI LIU et. al. | arxiv-cs.CV | 2024-11-01 |
144 | Enhancing Question Answering Precision with Optimized Vector Retrieval and Instructions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose an innovative approach to improve QA task performances by integrating optimized vector retrievals and instruction methodologies. |
Lixiao Yang; Mengyang Xu; Weimao Ke; | arxiv-cs.IR | 2024-11-01 |
145 | Birdie: Advancing State Space Models with Reward-Driven Objectives and Curricula Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel training procedure, Birdie, that significantly enhances the in-context retrieval capabilities of SSMs without altering their architecture. |
Sam Blouir; Jimmy T. H. Smith; Antonios Anastasopoulos; Amarda Shehu; | arxiv-cs.CL | 2024-11-01 |
146 | GRS-QA — Graph Reasoning-Structured Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the impact of the inherent reasoning structures on LLM M-QA performance remains unclear, largely due to the absence of QA datasets that provide fine-grained reasoning structures. To address this gap, we introduce the Graph Reasoning-Structured Question Answering Dataset (GRS-QA), which includes both semantic contexts and reasoning structures for QA pairs. |
ANISH PAHILAJANI et. al. | arxiv-cs.CL | 2024-11-01 |
147 | Multi-Modal Validation and Domain Interaction Learning for Knowledge-Based Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Knowledge-based Visual Question Answering (KB-VQA) aims to answer the image-aware question via the external knowledge, which requires an agent to not only understand images but … |
Ning Xu; Yifei Gao; An-An Liu; Hongshuo Tian; Yongdong Zhang; | IEEE Transactions on Knowledge and Data Engineering | 2024-11-01 |
148 | Rationale-Guided Retrieval Augmented Generation for Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we present RAG² (RAtionale-Guided RAG), a new framework for enhancing the reliability of RAG in biomedical contexts. |
JIWOONG SOHN et. al. | arxiv-cs.CL | 2024-10-31 |
149 | Show Me What and Where Has Changed? Question Answering and Grounding for Remote Sensing Change Detection Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce a new task named Change Detection Question Answering and Grounding (CDQAG), which extends the traditional change detection task by providing interpretable textual answers and intuitive visual evidence. |
KE LI et. al. | arxiv-cs.CV | 2024-10-31 |
150 | Dynamic Strategy Planning for Efficient Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In our work, we propose a novel technique DyPlan, to induce a dynamic strategy selection process in LLMs, to improve performance and reduce costs in question-answering. |
Tanmay Parekh; Pradyot Prakash; Alexander Radovic; Akshay Shekher; Denis Savenkov; | arxiv-cs.CL | 2024-10-30 |
151 | Synthetic Data Generation with Large Language Models for Personalized Community Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we investigate the potential of Large Language Models (LLMs) for generating synthetic documents to train an IR system for a Personalized Community Question Answering task. |
Marco Braga; Pranav Kasela; Alessandro Raganato; Gabriella Pasi; | arxiv-cs.IR | 2024-10-29 |
152 | Are VLMs Really Blind Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, these models fail to perform well on low-level basic visual tasks which are especially easy for humans. Our goal in this work was to determine if these models are truly blind to geometric reasoning or if there are ways to enhance their capabilities in this area. |
Ayush Singh; Mansi Gupta; Shivank Garg; | arxiv-cs.CL | 2024-10-29 |
153 | ProMQA: Question Answering Dataset for Multimodal Procedural Activity Understanding Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present a novel evaluation dataset, ProMQA, to measure system advancements in application-oriented scenarios. |
KIMIHIRO HASEGAWA et. al. | arxiv-cs.CL | 2024-10-29 |
154 | Enhancing Financial Question Answering with A Multi-Agent Reflection Framework Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose a multi-agent framework incorporating a critic agent that reflects on the reasoning steps and final answers for each question. |
Sorouralsadat Fatemi; Yuheng Hu; | arxiv-cs.CL | 2024-10-29 |
155 | RealCQA-V2 : Visual Premise Proving A Manual COT Dataset for Charts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce Visual Premise Proving (VPP), a novel task tailored to refine the process of chart question answering by deconstructing it into a series of logical premises. |
Saleem Ahmed; Ranga Setlur; Venu Govindaraju; | arxiv-cs.AI | 2024-10-29 |
156 | SimpsonsVQA: Enhancing Inquiry-Based Learning with A Tailored Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Hence, in this paper, we present SimpsonsVQA, a novel dataset for VQA derived from The Simpsons TV show, designed to promote inquiry-based learning. |
Ngoc Dung Huynh; Mohamed Reda Bouadjenek; Sunil Aryal; Imran Razzak; Hakim Hacid; | arxiv-cs.CV | 2024-10-29 |
157 | CT2C-QA: Multimodal Question Answering Over Chinese Text, Table and Chart Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present CT²C-QA, a pioneering Chinese reasoning-based QA dataset that includes an extensive collection of text, tables, and charts, meticulously compiled from 200 selectively sourced webpages. |
BOWEN ZHAO et. al. | arxiv-cs.CL | 2024-10-28 |
158 | SandboxAQ’s Submission to MRL 2024 Shared Task on Multi-lingual Multi-task Information Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper explores the problems of Question Answering (QA) and Named Entity Recognition (NER) in five diverse languages. |
Isidora Chara Tourni; Sayontan Ghosh; Brenda Miao; Constantijn van der Poel; | arxiv-cs.CL | 2024-10-28 |
159 | Few-Shot Multimodal Explanation for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Dizhan Xue; Shengsheng Qian; Changsheng Xu; | ACM Multimedia | 2024-10-28 |
160 | Get Large Language Models Ready to Speak: A Late-fusion Approach for Speech Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce a text-to-speech (TTS) system powered by a fine-tuned Llama model, named TTS-Llama, that achieves state-of-the-art speech synthesis performance. |
MAOHAO SHEN et. al. | arxiv-cs.CL | 2024-10-27 |
161 | EfficientEQA: An Efficient Approach for Open Vocabulary Embodied Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In real-world scenarios, a robotic agent must efficiently explore and accurately answer questions in open-vocabulary settings. To address these challenges, we propose a novel framework called EfficientEQA for open-vocabulary EQA, which enables efficient exploration and accurate answering. |
KAI CHENG et. al. | arxiv-cs.RO | 2024-10-26 |
162 | Sensor2Text: Enabling Natural Language Interactions for Daily Activity Tracking Using Wearable Sensors Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents Sensor2Text, a model proficient in tracking daily activities and engaging in conversations using wearable sensors. |
Wenqiang Chen; Jiaxuan Cheng; Leyao Wang; Wei Zhao; Wojciech Matusik; | arxiv-cs.LG | 2024-10-25 |
163 | Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs Through Generation of Well-Formed Chains Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs. |
KUN LI et. al. | arxiv-cs.CL | 2024-10-24 |
164 | An Adaptive Framework for Generating Systematic Explanatory Answer in Online Q&A Platforms Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The pioneering task is defined as explanatory answer generation, which entails handling identified challenges such as the requirement for comprehensive information and logical coherence within the generated context. To address these issues, we refer to systematic thinking theory and propose SynthRAG, an innovative framework designed to enhance QA performance. |
ZIYANG CHEN et. al. | arxiv-cs.CL | 2024-10-23 |
165 | SimRAG: Self-Improving Retrieval-Augmented Generation for Adapting Large Language Models to Specialized Domains Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, adapting general-purpose RAG systems to specialized fields such as science and medicine poses unique challenges due to distribution shifts and limited access to domain-specific data. To tackle this, we propose SimRAG, a self-training approach that equips the LLM with joint capabilities of question answering and question generation for domain adaptation. |
RAN XU et. al. | arxiv-cs.CL | 2024-10-23 |
166 | Aggregated Knowledge Model: Enhancing Domain-Specific QA with Fine-Tuned and Retrieval-Augmented Generation Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces a novel approach to enhancing closed-domain Question Answering (QA) systems, focusing on the specific needs of the Lawrence Berkeley National Laboratory (LBL) Science Information Technology (ScienceIT) domain. |
Fengchen Liu; Jordan Jung; Wei Feinstein; Jeff DAmbrogia; Gary Jung; | arxiv-cs.CL | 2024-10-23 |
167 | Leveraging The Domain Adaptation of Retrieval Augmented Generation Models for Question Answering and Reducing Hallucination Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigated the performance of diverse RAG and RAG-like architectures through domain adaptation and evaluated their ability to generate accurate and relevant responses grounded in the contextual knowledge base. |
Salman Rakin; Md. A. R. Shibly; Zahin M. Hossain; Zeeshan Khan; Md. Mostofa Akbar; | arxiv-cs.CL | 2024-10-23 |
168 | Graphusion: A RAG Framework for Knowledge Graph Construction with A Global Perspective Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work introduces Graphusion, a zero-shot KGC framework from free text. |
RUI YANG et. al. | arxiv-cs.CL | 2024-10-23 |
169 | Correct After Answer: Enhancing Multi-Span Question Answering with Post-Processing Method Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose the Answering-Classifying-Correcting (ACC) framework, which employs a post-processing strategy to handle incorrect predictions. |
JIAYI LIN et. al. | arxiv-cs.CL | 2024-10-22 |
170 | SG-FSM: A Self-Guiding Zero-Shot Prompting Paradigm for Multi-Hop Question Answering Based on Finite State Machine Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, Multi-hop Question Answering (MHQA) remains challenging for many existing models due to issues like hallucination, error propagation, and limited context length. To address these challenges and enhance LLMs’ performance on MHQA, we propose the Self-Guiding prompting Finite State Machine (SG-FSM), designed to strengthen multi-hop reasoning abilities. |
XIAOCHEN WANG et. al. | arxiv-cs.CL | 2024-10-22 |
171 | VoiceTextBlender: Augmenting Large Language Models with Speech Capabilities Via Single-Stage Joint Speech-Text Supervised Fine-Tuning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Another critical challenge with SpeechLMs is catastrophic forgetting-where models optimized for speech tasks suffer significant degradation in text-only performance. To mitigate these issues, we propose a novel single-stage joint speech-text SFT approach on the low-rank adaptation (LoRA) of the LLM backbone. |
YIFAN PENG et. al. | arxiv-cs.CL | 2024-10-22 |
172 | Which Client Is Reliable?: A Reliable and Personalized Prompt-based Federated Learning for Medical Image Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a novel personalized federated learning (pFL) method for medical visual question answering (VQA) models, addressing privacy reliability challenges in the medical domain. |
He Zhu; Ren Togo; Takahiro Ogawa; Miki Haseyama; | arxiv-cs.CV | 2024-10-22 |
173 | Reasoning Before Responding: Towards Legal Long-form Question Answering with Interpretability Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The endeavor to generate detailed answers to contextually rich legal questions has faced challenges, primarily due to the limited availability of specialized datasets, which require intensive manual effort to build, and the inability of existing LFQA models to produce informative responses. Addressing this, our research introduces a semi-synthetic dataset, Legal-LFQA (L2FQA), created by exploiting a large language model (LLM) and utilizing contexts derived from existing legal datasets. |
Utkarsh Ujwal; Sai Sri Harsha Surampudi; Sayantan Mitra; Tulika Saha; | cikm | 2024-10-21 |
174 | Learning-to-Defer for Extractive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Furthermore, their size poses deployment challenges on resource-constrained devices. Addressing these limitations, we introduce an adapted two-stage Learning-to-Defer mechanism that enhances decision-making by enabling selective deference to human experts or larger models without retraining language models in the context of question-answering. |
Yannis Montreuil; Axel Carlier; Lai Xing Ng; Wei Tsang Ooi; | arxiv-cs.CL | 2024-10-21 |
175 | RD-P: A Trustworthy Retrieval-Augmented Prompter with Knowledge Graphs for LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel method called Retrieve-and-Discriminate Prompter (RD-P), which leverages knowledge graphs (KGs) for trustworthy RAG by synchronizing knowledge retrieval and discrimination in a unified model. |
Yubo Huang; Guosun Zeng; | cikm | 2024-10-21 |
176 | Enhancing The Completeness of Rationales for Multi-Step Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, drawing inspiration from human-like reasoning processes in answering multi-step questions, we explicitly plan the rationales to ensure their completeness. |
SHANGZI XUE et. al. | cikm | 2024-10-21 |
177 | Fine-Tuning LLMs for Reliable Medical Question-Answering Services Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present an advanced approach to medical question-answering (QA) services, using fine-tuned Large Language Models (LLMs) to improve the accuracy and reliability of healthcare information. |
Ali Anaissi; Ali Braytee; Junaid Akram; | arxiv-cs.CL | 2024-10-21 |
178 | Distill-SynthKG: Distilling Knowledge Graph Synthesis Workflow for Improved Coverage and Efficiency Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Additionally, there is a gap in evaluation datasets and methodologies for ontology-free KG construction. To overcome these limitations, we propose SynthKG, a multi-step, document-level ontology-free KG synthesis workflow based on LLMs. |
PRAFULLA KUMAR CHOUBEY et. al. | arxiv-cs.CL | 2024-10-21 |
179 | LeDQA: A Chinese Legal Case Document-based Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present LeDQA, the first Chinese legal case document-based question answering dataset to our best knowledge. |
Bulou Liu; Zhenhao Zhu; Qingyao Ai; Yiqun Liu; Yueyue Wu; | cikm | 2024-10-21 |
180 | In Situ Answer Sentence Selection at Web-scale Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present Passage-based Extracting Answer Sentence In-place (PEASI), a novel answer selection model optimized for the Web-scale setting. |
Zeyu Zhang; Thuy Vu; Alessandro Moschitti; | cikm | 2024-10-21 |
181 | DiaKoP: Dialogue-based Knowledge-oriented Programming for Neural-symbolic Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present Dialogue-based Knowledge-oriented Programming system (DiaKoP), a system with a chat interface designed for multi-turn knowledge base question answering (KBQA). |
ZHICHENG LEE et. al. | cikm | 2024-10-21 |
182 | Retrieval-enhanced Knowledge Editing in Language Models for Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To tackle the problem, we propose the Retrieval-Augmented model Editing (RAE) framework for multi-hop question answering. |
YUCHENG SHI et. al. | cikm | 2024-10-21 |
183 | Reverse Question Answering: Can An LLM Write A Question So Hard (or Bad) That It Can’t Answer? Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: By finding question and answer types yielding RQA errors, we suggest improvements for LLM RQA reasoning. |
NISHANT BALEPUR et. al. | arxiv-cs.CL | 2024-10-20 |
184 | MedLogic-AQA: Enhancing Medical Question Answering with Abstractive Models Focusing on Logical Structures Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, existing approaches often struggle to grasp the intricate logical structures and relationships inherent in medical contexts, thus limiting their capacity to furnish precise and nuanced answers. In this work, we address this gap by proposing a novel Abstractive QA system MedLogic-AQA that harnesses First Order Logic (FOL) based rules extracted from both context and questions to generate well-grounded answers. |
Aizan Zafar; Kshitij Mishra; Asif Ekbal; | arxiv-cs.CL | 2024-10-20 |
185 | BRIEF: Bridging Retrieval and Inference for Multi-hop Reasoning Via Compression Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To accelerate inference, reduce costs, and minimize distractions, this paper presents BRIEF (Bridging Retrieval and Inference through Evidence Fusion), a lightweight approach that performs query-aware multi-hop reasoning by compressing retrieved documents into highly dense textual summaries to integrate into in-context learning. |
Yuankai Li; Jia-Chen Gu; Di Wu; Kai-Wei Chang; Nanyun Peng; | arxiv-cs.CL | 2024-10-20 |
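Entry 185 describes compressing retrieved documents into dense, query-aware summaries before multi-hop answering. A generic compress-then-answer pipeline is sketched below; `call_llm` is a hypothetical placeholder for any LLM client, and the prompts are illustrative rather than BRIEF's trained compressor.

```python
# Generic compress-then-answer sketch (illustrative; not BRIEF's exact pipeline).
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; returns a placeholder here."""
    return "[LLM output for: " + prompt.splitlines()[0] + "]"

def compress_then_answer(question: str, retrieved_docs: list) -> str:
    # 1) Query-aware compression: distill each retrieved document into a short,
    #    question-focused summary so more evidence fits in the context window.
    summaries = [
        call_llm(f"Summarize only the facts relevant to the question.\n"
                 f"Question: {question}\nDocument: {doc}\nSummary:")
        for doc in retrieved_docs
    ]
    # 2) Answer over the fused, dense evidence in a single in-context prompt.
    evidence = "\n".join(f"- {s}" for s in summaries)
    return call_llm(f"Answer using the evidence.\nEvidence:\n{evidence}\n"
                    f"Question: {question}\nAnswer:")

print(compress_then_answer("Who founded the organization?", ["doc one ...", "doc two ..."]))
```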
186 | ChitroJera: A Regionally Relevant Visual Question Answering Dataset for Bangla Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Furthermore, existing Bangla VQA datasets offer little cultural relevance and are largely adapted from their foreign counterparts. To address these challenges, we introduce a large-scale Bangla VQA dataset titled ChitroJera, totaling over 15k samples where diverse and locally relevant data sources are used. |
DEEPARGHYA DUTTA BARUA et. al. | arxiv-cs.CV | 2024-10-19 |
187 | Optimizing Retrieval-Augmented Generation with Elasticsearch for Enhanced Question-Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study aims to improve the accuracy and quality of large-scale language models (LLMs) in answering questions by integrating Elasticsearch into the Retrieval Augmented Generation (RAG) framework. |
JIAJING CHEN et. al. | arxiv-cs.IR | 2024-10-18 |
188 | MultiChartQA: Benchmarking Vision-Language Models on Multi-Chart Problems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Current benchmarks primarily focus on single-chart tasks, neglecting the multi-hop reasoning required to extract and integrate information from multiple charts, which is essential in practical applications. To fill this gap, we introduce MultiChartQA, a benchmark that evaluates MLLMs’ capabilities in four key areas: direct question answering, parallel question answering, comparative reasoning, and sequential reasoning. |
Zifeng Zhu; Mengzhao Jia; Zhihan Zhang; Lang Li; Meng Jiang; | arxiv-cs.CL | 2024-10-18 |
189 | Electrocardiogram-Language Model for Few-Shot Question Answering with Meta Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work introduces a novel multimodal meta-learning method for few-shot ECG question answering, addressing the challenge of limited labeled data while leveraging the rich knowledge encoded within large language models (LLMs). |
Jialu Tang; Tong Xia; Yuan Lu; Cecilia Mascolo; Aaqib Saeed; | arxiv-cs.LG | 2024-10-18 |
190 | SwaQuAD-24: QA Benchmark Dataset in Swahili Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes the creation of a Swahili Question Answering (QA) benchmark dataset, aimed at addressing the underrepresentation of Swahili in natural language processing (NLP). |
Alfred Malengo Kondoro; | arxiv-cs.CL | 2024-10-18 |
191 | Bridging The Training-Inference Gap in LLMs By Leveraging Self-Generated Tokens Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Marginal differences in predictions at each step can cascade over successive steps, resulting in different distributions from what the models were trained for and potentially leading to unpredictable behavior. This paper proposes two simple approaches based on the model's own generation to address this discrepancy between training and inference time. |
ZHEPENG CEN et. al. | arxiv-cs.LG | 2024-10-18 |
192 | Addressing Blind Guessing: Calibration of Selection Bias in Multiple-Choice Question Answering By Video Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we conduct a comprehensive empirical analysis of several VLM architectures across major datasets designed to assess complex video-focused reasoning. |
Olga Loginova; Oleksandr Bezrukov; Alexey Kravets; | arxiv-cs.CL | 2024-10-18 |
193 | BQA: Body Language Question Answering Dataset for Video Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Enabling current Video Large Language Models (VideoLLMs) to accurately interpret body language is a crucial challenge, as human unconscious actions can easily cause the model to misinterpret their intent. To address this, we propose BQA, a body language question answering dataset of short body-language video clips annotated with 26 emotion labels, to validate whether models can correctly interpret the emotions conveyed. |
SHINTARO OZAKI et. al. | arxiv-cs.CL | 2024-10-17 |
194 | FinQAPT: Empowering Financial Decisions with End-to-End LLM-driven Question Answering Pipeline Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduced a novel clustering-based negative sampling technique to enhance context extraction and a novel prompting method called Dynamic N-shot Prompting to boost the numerical question-answering capabilities of LLMs. |
Kuldeep Singh; Simerjot Kaur; Charese Smiley; | arxiv-cs.IR | 2024-10-17 |
195 | LEGAL-UQA: A Low-Resource Urdu-English Dataset for Legal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present LEGAL-UQA, the first Urdu legal question-answering dataset derived from Pakistan’s constitution. |
Faizan Faisal; Umair Yousaf; | arxiv-cs.CL | 2024-10-16 |
196 | Open Domain Question Answering with Conflicting Contexts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To explore how humans reason through conflicting contexts, we request our annotators to provide explanations for their selections of correct answers. We demonstrate that by finetuning LLMs to explain their answers, we can introduce richer information into their training that guide them through the process of reasoning with conflicting contexts. |
SIYI LIU et. al. | arxiv-cs.CL | 2024-10-16 |
197 | AGENTiGraph: An Interactive Knowledge Graph Platform for LLM-based Chatbots Utilizing Private Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce AGENTiGraph (Adaptive Generative ENgine for Task-based Interaction and Graphical Representation), a platform for knowledge management through natural language interaction. |
XINJIE ZHAO et. al. | arxiv-cs.AI | 2024-10-15 |
198 | Eliminating The Language Bias for Visual Question Answering with Fine-grained Causal Intervention Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel causal intervention training scheme named CIBi to eliminate language bias from a finer-grained perspective. |
YING LIU et. al. | arxiv-cs.CV | 2024-10-14 |
199 | BanglaQuAD: A Bengali Open-domain Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces BanglaQuAD, a Bengali question answering dataset, containing 30,808 question-answer pairs constructed from Bengali Wikipedia articles by native speakers. |
MD RASHAD AL HASAN RONY et. al. | arxiv-cs.CL | 2024-10-14 |
200 | TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce TemporalBench, a new benchmark dedicated to evaluating fine-grained temporal understanding in videos. |
MU CAI et. al. | arxiv-cs.CV | 2024-10-14 |
201 | Unleashing The Power of LLMs As Multi-Modal Encoders for Text and Graph-Structured Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing methods for integrating graph and text embeddings, often based on Multi-layer Perceptrons (MLPs) or shallow transformers, are limited in their ability to fully exploit the heterogeneous nature of these modalities. To overcome this, we propose Janus, a simple yet effective framework that leverages Large Language Models (LLMs) to jointly encode text and graph data. |
JIACHENG LIN et. al. | arxiv-cs.CL | 2024-10-14 |
202 | A Step Towards Mixture of Grader: Statistical Analysis of Existing Automatic Evaluation Metrics Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: As a potential solution, we discuss how a Mixture Of Grader could potentially improve the auto QA evaluator quality. |
Yun Joon Soh; Jishen Zhao; | arxiv-cs.CL | 2024-10-13 |
203 | LoRE: Logit-Ranked Retriever Ensemble for Enhancing Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose LoRE (Logit-Ranked Retriever Ensemble), a novel approach that improves answer accuracy and relevance by mitigating positional bias. |
Saikrishna Sanniboina; Shiv Trivedi; Sreenidhi Vijayaraghavan; | arxiv-cs.CL | 2024-10-13 |
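Entry 203 ensembles multiple retrievers to reduce positional bias. As a rough point of reference, the sketch below shows reciprocal rank fusion, a standard and much simpler way to combine several rankings so that no single retriever's ordering dominates; LoRE itself ranks passages by answer logits, which is not reproduced here, and the passage ids are invented.

```python
# Reciprocal rank fusion across retrievers (a generic baseline, not LoRE's method).
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """rankings: list of ranked lists of passage ids (best first)."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, pid in enumerate(ranking, start=1):
            scores[pid] += 1.0 / (k + rank)   # passages ranked high anywhere get boosted
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranking = ["p3", "p1", "p7"]
dense_ranking = ["p1", "p5", "p3"]
print(reciprocal_rank_fusion([bm25_ranking, dense_ranking]))   # fused ordering
```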
204 | Quebec Automobile Insurance Question-Answering With Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces two corpora: the Quebec Automobile Insurance Expertise Reference Corpus and a set of 82 Expert Answers to Layperson Automobile Insurance Questions. |
David Beauchemin; Zachary Gagnon; Richard Khoury; | arxiv-cs.CL | 2024-10-12 |
205 | Enhanced Electronic Health Records Text Summarization Using Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The proposed system leverages the Google Flan-T5 model to generate tailored EHR summaries based on clinician-specified topics. |
Ruvarashe Madzime; Clement Nyirenda; | arxiv-cs.CL | 2024-10-12 |
206 | Declarative Knowledge Distillation from Large Language Models for Visual Question Answering Datasets Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The downside is that crafting the rules for such a component can be an additional burden on the developer. We address this challenge by presenting an approach for declarative knowledge distillation from Large Language Models (LLMs). |
Thomas Eiter; Jan Hadl; Nelson Higuera; Johannes Oetsch; | arxiv-cs.AI | 2024-10-12 |
207 | Prompting Video-Language Foundation Models with Domain-specific Fine-grained Heuristics for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we introduce HeurVidQA, a framework that leverages domain-specific entity-action heuristics to refine pre-trained video-language foundation models. |
Ting Yu; Kunhao Fu; Shuhui Wang; Qingming Huang; Jun Yu; | arxiv-cs.CV | 2024-10-12 |
208 | Multi-granularity Contrastive Cross-modal Collaborative Generation for End-to-End Long-term Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In contrast, recent emerging successful video-language pre-training models enable cost-effective end-to-end modeling but fall short in domain-specific ratiocination and exhibit disparities in task formulation. Toward this end, we present an entirely end-to-end solution for long-term VideoQA: Multi-granularity Contrastive cross-modal collaborative Generation (MCG) model. |
Ting Yu; Kunhao Fu; Jian Zhang; Qingming Huang; Jun Yu; | arxiv-cs.CV | 2024-10-12 |
209 | Retriever-and-Memory: Towards Adaptive Note-Enhanced Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To address these, we propose a generic RAG approach called Adaptive Note-Enhanced RAG (Adaptive-Note) for complex QA tasks, which includes the iterative information collector, adaptive memory reviewer, and task-oriented generator, while following a new Retriever-and-Memory paradigm. |
RUOBING WANG et. al. | arxiv-cs.CL | 2024-10-11 |
210 | Measuring The Groundedness of Legal Question-Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work presents a comprehensive benchmark of various methods to assess the groundedness of AI-generated responses, aiming to significantly enhance their reliability. |
DIETRICH TRAUTMANN et. al. | arxiv-cs.CL | 2024-10-11 |
211 | Retrieving Contextual Information for Long-Form Question Answering Using Weak Supervision Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we propose and compare different weak supervision techniques to optimize retrieval for contextual information. |
Philipp Christmann; Svitlana Vakulenko; Ionut Teodor Sorodoc; Bill Byrne; Adrià de Gispert; | arxiv-cs.CL | 2024-10-11 |
212 | Increasing The Difficulty of Automatically Generated Questions Via Reinforcement Learning with Synthetic Preference Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents a cost-effective approach for generating domain-specific MRC datasets with increased difficulty using Reinforcement Learning from Human Feedback (RLHF) from synthetic preference data. |
William Thorne; Ambrose Robinson; Bohua Peng; Chenghua Lin; Diana Maynard; | arxiv-cs.CL | 2024-10-10 |
213 | TVBench: Redesigning Video-Language Evaluation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: As a solution, we propose TVBench, a novel open-source video multiple-choice question-answering benchmark, and demonstrate through extensive evaluations that it requires a high level of temporal understanding. |
Daniel Cores; Michael Dorkenwald; Manuel Mucientes; Cees G. M. Snoek; Yuki M. Asano; | arxiv-cs.CV | 2024-10-10 |
214 | ACCEPT: Adaptive Codebook for Composite and Efficient Prompt Tuning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Traditionally, each prompt has been considered indivisible and updated independently, causing the number of parameters to increase proportionally as prompt length grows. To address this issue, we propose Adaptive Codebook for Composite and Efficient Prompt Tuning (ACCEPT). |
Yu-Chen Lin; Wei-Hua Li; Jun-Cheng Chen; Chu-Song Chen; | arxiv-cs.CL | 2024-10-10 |
215 | FltLM: An Integrated Long-Context Large Language Model for Effective Context Filtering and Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, Long-Context LLMs still face two critical challenges: the ‘lost in the middle’ phenomenon, where crucial middle-context information is likely to be missed, and the distraction issue, where models lose focus due to overly extended contexts. To address these challenges, we propose the Context Filtering Language Model (FltLM), a novel integrated Long-Context LLM which enhances the ability of the model on multi-document question-answering (QA) tasks. |
JINGYANG DENG et. al. | arxiv-cs.CL | 2024-10-09 |
216 | β-calibration of Language Model Confidence Scores for Generative QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We argue, however, that this standard (average-case) notion of calibration is difficult to interpret for decision-making in generative QA. To address this, we generalize the standard notion of average calibration and introduce β-calibration, which ensures calibration holds across different question-and-answer groups. |
Putra Manggala; Atalanti Mastakouri; Elke Kirschbaum; Shiva Prasad Kasiviswanathan; Aaditya Ramdas; | arxiv-cs.CL | 2024-10-09 |
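Entry 216 generalizes average-case calibration so that it must hold within question-and-answer groups. The sketch below computes a standard binned calibration error separately per group, which conveys the group-wise idea but is only a simplification of the paper's β-calibration definition; the variable names, the ten-bin choice, and the toy data are assumptions.

```python
# Group-wise calibration check (a simplification of the group-wise calibration idea).
import numpy as np

def binned_calibration_error(conf, correct, n_bins=10):
    """Standard binned (ECE-style) gap between stated confidence and accuracy."""
    conf, correct = np.asarray(conf, float), np.asarray(correct, float)
    bins = np.clip((conf * n_bins).astype(int), 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return err

def groupwise_calibration(conf, correct, groups):
    """Report calibration error within each group rather than only on average."""
    conf, correct, groups = map(np.asarray, (conf, correct, groups))
    return {g: binned_calibration_error(conf[groups == g], correct[groups == g])
            for g in np.unique(groups)}

conf = [0.9, 0.8, 0.6, 0.7, 0.95, 0.4]
correct = [1, 1, 0, 1, 0, 0]
groups = ["sports", "sports", "sports", "science", "science", "science"]
print(groupwise_calibration(conf, correct, groups))
```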
217 | Do Great Minds Think Alike? Investigating Human-AI Complementarity in Question Answering with CAIMIRA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recent advancements of large language models (LLMs) have led to claims of AI surpassing humans in natural language processing (NLP) tasks such as textual understanding and reasoning. This work investigates these assertions by introducing CAIMIRA, a novel framework rooted in item response theory (IRT) that enables quantitative assessment and comparison of problem-solving abilities of question-answering (QA) agents: humans and AI systems. |
Maharshi Gor; Hal Daumé III; Tianyi Zhou; Jordan Boyd-Graber; | arxiv-cs.CL | 2024-10-08 |
218 | ActionAtlas: A VideoQA Benchmark for Domain-specialized Action Recognition Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Within any single domain, actions can often appear quite similar, making it challenging for deep models to distinguish them accurately. To evaluate the effectiveness of multimodal foundation models in helping us recognize such actions, we present ActionAtlas v1.0, a multiple-choice video question answering benchmark featuring short videos across various sports. |
MOHAMMADREZA SALEHI et. al. | arxiv-cs.CV | 2024-10-08 |
219 | PDF-WuKong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce PDF-WuKong, a multimodal large language model (MLLM) which is designed to enhance multimodal question-answering (QA) for long PDF documents. |
XUDONG XIE et. al. | arxiv-cs.CV | 2024-10-08 |
220 | ERVQA: A Dataset to Benchmark The Readiness of Large Vision Language Models in Hospital Environments Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the Emergency Room Visual Question Answering (ERVQA) dataset. |
SOURJYADIP RAY et. al. | arxiv-cs.CL | 2024-10-08 |
221 | Right This Way: Can VLMs Guide Us to See More to Answer Questions? Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This capability is especially valuable for assisting visually impaired individuals. To evaluate this capability of current VLMs, we introduce a human-labeled dataset as a benchmark for this task. |
LI LIU et. al. | nips | 2024-10-07 |
222 | MEQA: A Benchmark for Multi-hop Event-centric Question Answering with Explanations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a novel semi-automatic question generation strategy by composing event structures from information extraction (IE) datasets and present the first Multi-hop Event-centric Question Answering (MEQA) benchmark. |
Ruosen Li; Zimu Wang; Son Tran; Lei Xia; Xinya Du; | nips | 2024-10-07 |
223 | Document-level Causal Relation Extraction with Knowledge-guided Binary Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a Knowledge-guided binary Question Answering (KnowQA) method with event structures for ECRE, consisting of two stages: Event Structure Construction and Binary Question Answering. |
Zimu Wang; Lei Xia; Wei Wang; Xinya Du; | arxiv-cs.CL | 2024-10-07 |
224 | Cost-efficient Knowledge-based Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we propose Coke, a novel cost-efficient strategy for KBQA with LLMs, modeled as a tailored multi-armed bandit problem to minimize calls to LLMs within limited budgets. |
JUNNAN DONG et. al. | nips | 2024-10-07 |
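Entry 224 frames cost-efficient KBQA as a multi-armed bandit over LLM calls under a budget. The following toy simulation shows a generic UCB-style bandit trading off accuracy against per-call cost; the arm definitions, cost penalty, and success probabilities are invented for illustration and are not Coke's actual formulation.

```python
# Generic UCB-style bandit over answering strategies with different costs (toy sketch).
import math
import random

arms = {"small_llm": {"cost": 1.0, "p_correct": 0.6},
        "large_llm": {"cost": 5.0, "p_correct": 0.85}}   # hypothetical arms
counts = {a: 0 for a in arms}
rewards = {a: 0.0 for a in arms}
budget, spent, t = 200.0, 0.0, 0

while spent < budget:
    t += 1

    def ucb(arm_name):
        # Upper confidence bound on the arm's mean reward; unexplored arms go first.
        if counts[arm_name] == 0:
            return float("inf")
        mean = rewards[arm_name] / counts[arm_name]
        return mean + math.sqrt(2 * math.log(t) / counts[arm_name])

    arm = max(arms, key=ucb)
    correct = random.random() < arms[arm]["p_correct"]    # simulate an LLM call
    reward = (1.0 if correct else 0.0) - 0.05 * arms[arm]["cost"]  # penalize cost
    counts[arm] += 1
    rewards[arm] += reward
    spent += arms[arm]["cost"]

print(counts)   # how often each strategy was chosen within the budget
```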
225 | SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, existing question-answering (QA) datasets based on scientific papers are limited in scale and focus solely on textual content. To address this limitation, we introduce SPIQA (Scientific Paper Image Question Answering), the first large-scale QA dataset specifically designed to interpret complex figures and tables within the context of scientific research articles across various domains of computer science. |
Shraman Pramanick; Rama Chellappa; Subhashini Venugopalan; | nips | 2024-10-07 |
226 | CRAG – Comprehensive RAG Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing RAG datasets, however, do not adequately represent the diverse and dynamic nature of real-world Question Answering (QA) tasks. To bridge this gap, we introduce the Comprehensive RAG Benchmark (CRAG), a factual question answering benchmark of 4,409 question-answer pairs and mock APIs to simulate web and Knowledge Graph (KG) search. |
XIAO YANG et. al. | nips | 2024-10-07 |
227 | Wings: Learning Multimodal LLMs Without Text-only Forgetting Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present Wings, a novel MLLM that excels in both text-only dialogues and multimodal comprehension. |
YI-KAI ZHANG et. al. | nips | 2024-10-07 |
228 | HAWK: Learning to Understand Open-World Video Anomalies Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce HAWK, a novel framework that leverages interactive large Visual Language Models (VLM) to interpret video anomalies precisely. |
JIAQI TANG et. al. | nips | 2024-10-07 |
229 | FinBen: An Holistic Financial Benchmark for Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce FinBen, the first extensive open-source evaluation benchmark, including 36 datasets spanning 24 financial tasks, covering seven critical aspects: information extraction (IE), textual analysis, question answering (QA), text generation, risk management, forecasting, and decision-making. |
QIANQIAN XIE et. al. | nips | 2024-10-07 |
230 | RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To foster sound evaluation of language models, we introduce a new test dataset named RepLiQA, suited for question-answering and topic retrieval tasks. |
JOAO MONTEIRO et. al. | nips | 2024-10-07 |
231 | Learnable In-Context Vector for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose Learnable ICV (L-ICV) to distill essential task information from demonstrations, improving ICL performance in LMMs. |
YINGZHE PENG et. al. | nips | 2024-10-07 |
232 | CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We benchmark several Multimodal Large Language Models (MLLMs) on CVQA, and we show that the dataset is challenging for the current state-of-the-art models. |
DAVID ROMERO et. al. | nips | 2024-10-07 |
233 | CausalChaos! Dataset for Comprehensive Causal Action Question Answering Over Longer Causal Chains Grounded in Dynamic Visual Scenes Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We identify more advanced/explicit causal relationship modeling & joint modeling of vision and language as the immediate areas for future efforts to focus upon. |
PARITOSH PARMAR et. al. | nips | 2024-10-07 |
234 | Crafting Interpretable Embeddings By Asking LLMs Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce question-answering embeddings (QA-Emb), embeddings where each feature represents an answer to a yes/no question asked to an LLM. |
VINAMRA BENARA et. al. | nips | 2024-10-07 |
235 | LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Despite this progress, few public benchmarks are available to measure such development. To mitigate this gap, we introduce LongVideoBench, a question-answering benchmark that features video-language interleaved inputs up to an hour long. |
Haoning Wu; DONGXU LI; Bei Chen; Junnan Li; | nips | 2024-10-07 |
236 | LOVA3: Learning to Visual Question Answering, Asking and Assessment Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, current Multimodal Large Language Models (MLLMs) primarily focus on question answering, often neglecting the full potential of questioning and assessment skills. In this study, we introduce LOVA3, an innovative framework named “Learning tO Visual Question Answering, Asking and Assessment,” designed to equip MLLMs with these additional capabilities. |
Hengyuan Zhao; Pan Zhou; Difei Gao; Mike Zheng Shou; | nips | 2024-10-07 |
237 | G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In contrast, we develop a flexible question-answering framework targeting real-world textual graphs, applicable to multiple applications including scene graph understanding, common sense reasoning, and knowledge graph reasoning. |
XIAOXIN HE et. al. | nips | 2024-10-07 |
238 | Co-occurrence Is Not Factual Association in Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Pretrained language models can encode a large amount of knowledge and utilize it for various reasoning tasks, yet they can still struggle to learn novel factual knowledge effectively from finetuning on limited textual demonstrations. In this work, we show that the reason for this deficiency is that language models are biased to learn word co-occurrence statistics instead of true factual associations. |
Xiao Zhang; Miao Li; Ji Wu; | nips | 2024-10-07 |
239 | FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce FAMMA, an open-source benchmark for financial multilingual multimodal question answering (QA). |
SIQIAO XUE et. al. | arxiv-cs.CL | 2024-10-06 |
240 | Optimizing AI Reasoning: A Hamiltonian Dynamics Approach to Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces an innovative approach to analyzing and improving multi-hop reasoning in AI systems by drawing inspiration from Hamiltonian mechanics. |
Javier Marin; | arxiv-cs.AI | 2024-10-06 |
241 | Overview of Factify5WQA: Fact Verification Through 5W Question-Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Researchers have found that fake news spreads many times faster than real news. This is a major problem, especially in today’s world where social media is the key source of news … |
SURYAVARDAN SURESH et. al. | arxiv-cs.CL | 2024-10-05 |
242 | Adaptive Question Answering: Enhancing Language Model Proficiency for Addressing Knowledge Conflicts with Source Citations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the importance of both aspects, no prior research has combined them, leaving a significant gap in the development of QA systems. In this work, we bridge this gap by proposing the novel task of QA with source citation in ambiguous settings, where multiple valid answers exist. |
Sagi Shaier; Ari Kobren; Philip Ogren; | arxiv-cs.CL | 2024-10-05 |
243 | Beyond Forecasting: Compositional Time Series Reasoning for End-to-End Task Execution Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce Compositional Time Series Reasoning, a new task of handling intricate multistep reasoning tasks from time series data. |
WEN YE et. al. | arxiv-cs.LG | 2024-10-05 |
244 | Cross-lingual Transfer for Automatic Question Generation By Learning Interrogative Structures in Target Languages Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a simple and efficient XLT-QG method that operates without the need for monolingual, parallel, or labeled data in the target language, utilizing a small language model. |
Seonjeong Hwang; Yunsu Kim; Gary Geunbae Lee; | arxiv-cs.CL | 2024-10-04 |
245 | Question-Answering System for Bangla: Fine-tuning BERT-Bangla for A Closed Domain Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Question-answering systems for Bengali have seen limited development, particularly in domain-specific applications. Leveraging advancements in natural language processing, this paper explores a fine-tuned BERT-Bangla model to address this gap. |
Subal Chandra Roy; Md Motaleb Hossen Manik; | arxiv-cs.CL | 2024-10-04 |
246 | Structured List-Grounded Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Motivated by the observation that even advanced language models like GPT-3.5 often miss semantic cues from lists, this paper aims to enhance question answering (QA) systems for better interpretation and use of structured lists. |
MUJEEN SUNG et. al. | arxiv-cs.CL | 2024-10-04 |
247 | ALR$^2$: A Retrieve-then-Reason Framework for Long-context Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We find that modern LLMs struggle to accurately retrieve relevant facts and instead, often hallucinate retrieved facts, resulting in flawed reasoning and the production of incorrect answers. To address these issues, we introduce ALR$^2$, a method that augments the long-context reasoning capability of LLMs via an explicit two-stage procedure, i.e., aligning LLMs with the objectives of both retrieval and reasoning. |
HUAYANG LI et. al. | arxiv-cs.CL | 2024-10-04 |
248 | Video Instruction Tuning With Synthetic Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The development of video large multimodal models (LMMs) has been hindered by the difficulty of curating large amounts of high-quality raw data from the web. To address this, we propose an alternative approach by creating a high-quality synthetic dataset specifically for video instruction-following, namely LLaVA-Video-178K. |
YUANHAN ZHANG et. al. | arxiv-cs.CV | 2024-10-03 |
249 | Domain-Specific Retrieval-Augmented Generation Using Vector Stores, Knowledge Graphs, and Tensor Factorization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce SMART-SLIC, a highly domain-specific LLM framework that integrates RAG with a KG and a vector store (VS) that stores factual, domain-specific information. |
RYAN C. BARRON et. al. | arxiv-cs.CL | 2024-10-03 |
250 | MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose MA-RLHF, a simple yet effective RLHF framework that incorporates macro actions — sequences of tokens or higher-level language constructs — into the learning process. |
YEKUN CHAI et. al. | arxiv-cs.CL | 2024-10-03 |
251 | Listening to The Wise Few: Select-and-Copy Attention Heads for Multiple-Choice QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, such a format for evaluating LLMs has limitations, since even if the model knows the correct answer, it may struggle to select the corresponding letter simply due to difficulties in following this rigid format. To address this, we introduce new scores that better capture and reveal the model’s underlying knowledge: the Query-Key Score (QK-score), derived from the interaction between query and key representations in attention heads, and the Attention Score, based on attention weights. |
EDUARD TULCHINSKII et. al. | arxiv-cs.CL | 2024-10-03 |
252 | Coal Mining Question Answering with LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present a novel approach to coal mining question answering (QA) using large language models (LLMs) combined with tailored prompt engineering techniques. |
Antonio Carlos Rivera; Anthony Moore; Steven Robinson; | arxiv-cs.CL | 2024-10-03 |
253 | Question-guided Knowledge Graph Re-scoring and Injection for Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, the retrieved subgraph inevitably brings distraction information for knowledge utilization, impeding the model’s ability to perform accurate reasoning. To address this issue, we propose a Question-guided Knowledge Graph Re-scoring method (Q-KGR) to eliminate noisy pathways for the input question, thereby focusing specifically on pertinent factual knowledge. |
YU ZHANG et. al. | arxiv-cs.CL | 2024-10-02 |
254 | AHP-Powered LLM Reasoning for Multi-Criteria Evaluation of Open-Ended Responses Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose a method that leverages LLMs and the analytic hierarchy process (AHP) to assess answers to open-ended questions. |
Xiaotian Lu; Jiyi Li; Koh Takeuchi; Hisashi Kashima; | arxiv-cs.CL | 2024-10-02 |
255 | Bridging Context Gaps: Leveraging Coreference Resolution for Long Contextual Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: These challenges often arise due to the complexity and ambiguity present in longer texts. To enhance the performance of LLMs in such scenarios, we introduce the Long Question Coreference Adaptation (LQCA) method. |
YANMING LIU et. al. | arxiv-cs.CL | 2024-10-02 |
256 | Benchmarking Large Language Models for Conversational Question Answering in Multi-instructional Documents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing benchmarks have primarily focused on basic factual question-answering from single narrative documents, making them inadequate for assessing a model’s ability to comprehend complex real-world instructional documents and provide accurate step-by-step guidance in daily life. To bridge this gap, we present InsCoQA, a novel benchmark tailored for evaluating large language models (LLMs) in the context of CQA with instructional documents. |
SHIWEI WU et. al. | arxiv-cs.CL | 2024-10-01 |
257 | Semantic Parsing with Candidate Expressions for Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose a grammar augmented with candidate expressions for semantic parsing on a large KB with a seq2seq PLM. |
Daehwan Nam; Gary Geunbae Lee; | arxiv-cs.CL | 2024-10-01 |
258 | Quantifying Reliance on External Information Over Parametric Knowledge During Retrieval Augmented Generation (RAG) Using Mechanistic Analysis Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose (a) Causal Mediation Analysis to show that parametric memory is minimally utilized when answering a question, and (b) Attention Contributions and Knockouts to show that the last-token residual stream is enriched not by the subject token in the question but by tokens from the RAG context. |
RESHMI GHOSH et. al. | arxiv-cs.CL | 2024-10-01 |
259 | Vamos: Versatile Action Models for Video Understanding IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we propose versatile action models (Vamos), a learning framework powered by a large language model as the “reasoner”, and can flexibly leverage visual embedding and free-form text descriptions as its input. |
SHIJIE WANG et. al. | eccv | 2024-09-30 |
260 | Compositional Substitutivity of Visual Reasoning for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we explore the compositional substitutivity of visual reasoning in the context of visual question answering (VQA). Specifically, for each question-image pair, we construct a support question set and a support image set, and both sets contain questions/images that share synonymous primitives with the original question/image. To quantitatively evaluate the substitutivity of VQA models, we introduce two datasets: GQA-SPS and VQA-SPS v2, by performing three types of substitutions using synonymous primitives including words, visual entities, and referents. |
CHUANHAO LI et. al. | eccv | 2024-09-30 |
261 | FunQA: Towards Surprising Video Comprehension IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce FunQA, a challenging video question answering (QA) dataset specifically designed to evaluate and enhance the depth of video reasoning based on counter-intuitive and fun videos. |
BINZHU XIE et. al. | eccv | 2024-09-30 |
262 | ViLA: Efficient Video-Language Alignment for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose an efficient Video-Language Alignment (ViLA) network. |
XIJUN WANG et. al. | eccv | 2024-09-30 |
263 | VideoINSTA: Zero-shot Long Video Understanding Via Informative Spatial-Temporal Reasoning with LLMs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a framework VideoINSTA, i.e. INformative Spatial-TemporAl Reasoning for zero-shot long-form video understanding. |
RUOTONG LIAO et. al. | arxiv-cs.CV | 2024-09-30 |
264 | TimeCraft: Navigate Weakly-Supervised Temporal Grounded Video Question Answering Via Bi-directional Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we focus on the grounded VQA task, which necessitates models to provide answers along with explicit visual evidence, i.e., certain video segments. |
Huabin Liu; Xiao Ma; Cheng Zhong; Yang Zhang; Weiyao Lin; | eccv | 2024-09-30 |
265 | LingoQA: Video Question Answering for Autonomous Driving Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce LingoQA, a novel dataset and benchmark for visual question answering in autonomous driving. We release our dataset and benchmark as an evaluation platform for vision-language models in autonomous driving. |
ANA-MARIA MARCU et. al. | eccv | 2024-09-30 |
266 | QAEncoder: Towards Aligned Representation Learning in Question Answering System Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, the inherent gap between user queries and relevant documents hinders precise matching. Motivated by our conical distribution hypothesis, which posits that potential queries and documents form a cone-like structure in the embedding space, we introduce QAEncoder, a training-free approach to bridge this gap. |
ZHENGREN WANG et. al. | arxiv-cs.CL | 2024-09-30 |
267 | GRACE: Graph-Based Contextual Debiasing for Fair Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Innovative methods are required to ensure that LLMs can deliver unbiased yet contextually relevant responses. To tackle this challenge, we present GRAph-based Contextual DEbiasing (GRACE), a novel graph-based method for debiasing knowledge-based VQA models. |
Yifeng Zhang; Ming Jiang; Qi Zhao; | eccv | 2024-09-30 |
268 | CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenarios Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To overcome this limitation, we introduce the CAT, which enhances MLLM in three ways: 1) besides straightforwardly bridging audio and video, we design a clue aggregator that aggregates question-related clues in dynamic audio-visual scenarios to enrich the detailed knowledge required for large language models. Notably, we collect an audio-visual joint instruction dataset named AVinstruct, to further enhance the capacity of CAT to model cross-semantic correlations. |
QILANG YE et. al. | eccv | 2024-09-30 |
269 | An Explainable Vision Question Answer Model Via Diffusion Chain-of-Thought Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This means that generating explanations solely for the answer can lead to a semantic discrepancy between the content of the explanation and the question-answering content. To address this, we propose a step-by-step reasoning approach to reduce such semantic discrepancies. |
Chunhao LU; Qiang Lu; Jake Luo; | eccv | 2024-09-30 |
270 | WSI-VQA: Interpreting Whole Slide Images By Generative Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel framework (WSI-VQA) to interpret WSIs by generative visual question answering. |
Pingyi Chen; Chenglu Zhu; Sunyi Zheng; Honglin Li; Lin Yang; | eccv | 2024-09-30 |
271 | Learning Trimodal Relation for Audio-Visual Question Answering with Missing Modality Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a framework that ensures robust AVQA performance even when a modality is missing. |
Kyu Ri Park; Hong Joo Lee; Jung Uk Kim; | eccv | 2024-09-30 |
272 | DriveLM: Driving with Graph Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We instantiate datasets (DriveLM-Data) built upon nuScenes and CARLA, and propose a VLM-based baseline approach (DriveLM-Agent) for jointly performing Graph VQA and end-to-end driving. |
CHONGHAO SIMA et. al. | eccv | 2024-09-30 |
273 | Q&A Prompts: Discovering Rich Visual Clues Through Mining Question-Answer Prompts for VQA Requiring Diverse World Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we believe that if we can collect rich visual clues, we will recognize the image more accurately, understand the question better, recall relevant knowledge more easily, and finally reason out the answer. |
Haibo Wang; Weifeng Ge; | eccv | 2024-09-30 |
274 | Fully Authentic Visual Question Answering Dataset from Online Communities Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the first VQA dataset in which all contents originate from an authentic use case. |
CHONGYAN CHEN et. al. | eccv | 2024-09-30 |
275 | Video Question Answering with Procedural Programs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose to answer questions about videos by generating short procedural programs that solve visual subtasks to obtain a final answer. |
Rohan Choudhury; Koichiro Niinuma; Kris Kitani; Laszlo A Jeni; | eccv | 2024-09-30 |
276 | AutoEval-Video: An Automatic Benchmark for Assessing Large Vision Language Models in Open-Ended Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a novel and challenging benchmark, AutoEval-Video, to comprehensively evaluate large vision-language models in open-ended video question answering. |
Weiran Huang; Xiuyuan Chen; Yuan Lin; Yuchen Zhang; | eccv | 2024-09-30 |
277 | See and Think: Embodied Agent in Virtual Environment IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes STEVE, a comprehensive and visionary embodied agent in the Minecraft virtual environment. We also collect the STEVE-21K dataset, which includes 600+ vision-environment pairs, 20K knowledge question-answering pairs, and 200+ skill-code pairs. |
ZHONGHAN ZHAO et. al. | eccv | 2024-09-30 |
278 | Towards Robust Extractive Question Answering Models: Rethinking The Training Methodology Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper proposes a novel training method to improve the robustness of Extractive Question Answering (EQA) models. |
Son Quoc Tran; Matt Kretchmar; | arxiv-cs.CL | 2024-09-29 |
279 | See Then Tell: Enhancing Key Information Extraction with Vision Grounding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce STNet (See then Tell Net), a novel end-to-end model designed to deliver precise answers with relevant vision grounding. |
SHUHANG LIU et. al. | arxiv-cs.CV | 2024-09-29 |
280 | Zero-Shot Multi-Hop Question Answering Via Monte-Carlo Tree Search with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Unlike previous works, we propose a zero-shot prompting method, which relies solely on instructions without the support of hand-crafted few-shot examples that typically require domain expertise. |
SEONGMIN LEE et. al. | arxiv-cs.CL | 2024-09-28 |
281 | Exploring Language Model Generalization in Low-Resource Extractive QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigate Extractive Question Answering (EQA) with Large Language Models (LLMs) under domain drift, i.e., can LLMs generalize to domains that require specific knowledge such as medicine and law in a zero-shot fashion without additional in-domain training? |
Saptarshi Sengupta; Wenpeng Yin; Preslav Nakov; Shreya Ghosh; Suhang Wang; | arxiv-cs.CL | 2024-09-27 |
282 | Charting The Future: Using Chart Question-Answering for Scalable Evaluation of LLM-Driven Data Visualizations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a novel framework that leverages Visual Question Answering (VQA) models to automate the evaluation of LLM-generated data visualizations. |
James Ford; Xingmeng Zhao; Dan Schumacher; Anthony Rios; | arxiv-cs.CV | 2024-09-27 |
283 | Rehearsing Answers to Probable Questions with Perspective-Taking Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, scenarios involving the preparation of answers to probable questions during professional oral presentations remain underexplored. In this paper, we pioneer the examination of this crucial yet overlooked topic by utilizing real-world QA conversation transcripts between company managers and professional analysts. |
Yung-Yu Shih; Ziwei Xu; Hiroya Takamura; Yun-Nung Chen; Chung-Chi Chen; | arxiv-cs.CL | 2024-09-27 |
284 | Efficient In-Domain Question Answering for Resource-Constrained Environments Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we combine RAFT with LoRA to reduce fine tuning and storage requirements and gain faster inference times while maintaining comparable RAG performance. |
Isaac Chung; Phat Vo; Arman C. Kizilkale; Aaron Reite; | arxiv-cs.CL | 2024-09-26 |
285 | SynTQA: Synergistic Table-based Question Answering Via Mixture of Text-to-SQL and E2E TQA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To combine both strengths, we propose a Synergistic Table-based Question Answering approach that integrates different models via answer selection, which is agnostic to model types. |
Siyue Zhang; Anh Tuan Luu; Chen Zhao; | arxiv-cs.CL | 2024-09-25 |
286 | Detecting Temporal Ambiguity in Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a novel approach by using diverse search strategies based on disambiguated versions of the questions. |
Bhawna Piryani; Abdelrahman Abdallah; Jamshid Mozafari; Adam Jatowt; | arxiv-cs.CL | 2024-09-25 |
287 | Enhancing Temporal Sensitivity and Reasoning for Time-Sensitive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel framework that enhances temporal awareness and reasoning through Temporal Information-Aware Embedding and Granular Contrastive Reinforcement Learning. |
Wanqi Yang; Yanda Li; Meng Fang; Ling Chen; | arxiv-cs.CL | 2024-09-25 |
288 | Empirical Insights on Fine-Tuning Large Language Models for Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, effective strategies for fine-tuning LLMs for the QA task remain largely unexplored. To address this gap, we categorize supervised fine-tuning (SFT) data based on the extent of knowledge memorized by the pretrained LLMs and conduct a series of empirical analyses. |
JUNJIE YE et. al. | arxiv-cs.CL | 2024-09-24 |
289 | Exploring Hint Generation Approaches in Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce a novel context preparation approach called HINTQA, which employs Automatic Hint Generation (HG) techniques. |
Jamshid Mozafari; Abdelrahman Abdallah; Bhawna Piryani; Adam Jatowt; | arxiv-cs.CL | 2024-09-24 |
290 | Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce a large-scale dataset comprising over 7 million questions from 17 marketplaces across 11 languages. |
Yifei Yuan; Yang Deng; Anders Søgaard; Mohammad Aliannejadi; | arxiv-cs.CL | 2024-09-24 |
291 | Using Similarity to Evaluate Factual Consistency in Summaries Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Therefore, many techniques for detecting factual inconsistencies build pipelines around natural language inference (NLI) or question-answering (QA) models with additional supervised learning steps. In this paper, we revisit similarity-based metrics, showing that this failure stems from the comparison text selection and its granularity. |
Yuxuan Ye; Edwin Simpson; Raul Santos Rodriguez; | arxiv-cs.CL | 2024-09-23 |
292 | LINKAGE: Listwise Ranking Among Varied-Quality References for Non-Factoid QA Evaluation Via LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Inspired by the evolution from pointwise to pairwise to listwise in learning-to-rank methods, we propose a novel listwise NFQA evaluation approach, that utilizes LLMs to rank candidate answers in a list of reference answers sorted by descending quality. |
Sihui Yang; Keping Bi; Wanqing Cui; Jiafeng Guo; Xueqi Cheng; | arxiv-cs.CL | 2024-09-23 |
293 | Learning When to Retrieve, What to Rewrite, and How to Respond in Conversational QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose a method for enabling LLMs to decide when to retrieve in RAG settings given a conversational context. |
Nirmal Roy; Leonardo F. R. Ribeiro; Rexhina Blloshmi; Kevin Small; | arxiv-cs.CL | 2024-09-23 |
294 | Scene-Text Grounding for Text-Based Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose to study Grounded TextVideoQA by forcing models to answer questions and spatio-temporally localize the relevant scene-text regions, thus decoupling QA from scene-text recognition and promoting research towards interpretable QA. |
SHENG ZHOU et. al. | arxiv-cs.CV | 2024-09-22 |
295 | QMOS: Enhancing LLMs for Telecommunication with Question Masked Loss and Option Shuffling Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces QMOS, an innovative approach which uses a Question-Masked loss and Option Shuffling trick to enhance the performance of LLMs in answering Multiple-Choice Questions in the telecommunications domain. |
Blessed Guda; Gabrial Zencha A.; Lawrence Francis; Carlee Joe-Wong; | arxiv-cs.CL | 2024-09-21 |
296 | First Place Solution to The Multiple-choice Video QA Track of The Second Perception Test Challenge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this report, we present our first-place solution to the Multiple-choice Video Question Answering (QA) track of The Second Perception Test Challenge. |
YINGZHE PENG et. al. | arxiv-cs.CV | 2024-09-20 |
297 | AQA: Adaptive Question Answering in A Society of LLMs Via Contextual Multi-Armed Bandit Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we build on recent advances in the orchestration of multiple large language models (LLMs) and formulate adaptive QA as a dynamic orchestration challenge. We define this as a contextual multi-armed bandit problem, where the context is defined by the characteristics of the incoming question and the action space consists of potential communication graph configurations among the LLM agents. |
Mohanna Hoveyda; Arjen P. de Vries; Maarten de Rijke; Harrie Oosterhuis; Faegheh Hasibi; | arxiv-cs.CL | 2024-09-20 |
298 | SMART-RAG: Selection Using Determinantal Matrices for Augmented Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This issue is particularly evident in unsupervised retrieval settings, where there are no mechanisms to effectively mitigate these problems, leading to suboptimal context selection. To address this, we propose Selection using Matrices for Augmented Retrieval (SMART) in question answering tasks, a fully unsupervised and training-free framework designed to optimize context selection in RAG. |
Jiatao Li; Xinyu Hu; Xiaojun Wan; | arxiv-cs.CL | 2024-09-20 |
299 | A Multimodal Dense Retrieval Approach for Speech-Based Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Furthermore, the ASR model propagates its errors to the retriever. In this work, we try to alleviate these limitations by proposing an ASR-free, end-to-end trained multimodal dense retriever that can work directly on spoken questions. |
Georgios Sidiropoulos; Evangelos Kanoulas; | arxiv-cs.CL | 2024-09-20 |
300 | Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we focus on the problem of image hallucination, where images created by generation models fail to faithfully depict factual content. |
Youngsun Lim; Hojun Choi; Hyunjung Shim; | arxiv-cs.CV | 2024-09-19 |
301 | MQA-KEAL: Multi-hop Question Answering Under Knowledge Editing for Arabic Language Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Although there have been numerous attempts at LLM Knowledge Editing (KE), i.e., editing the LLMs’ prior knowledge and in turn testing it via Multi-hop Question Answering (MQA), these studies have so far focused primarily on the English language. To bridge this gap, in this paper we propose Multi-hop Question Answering under Knowledge Editing for Arabic Language (MQA-KEAL). |
Muhammad Asif Ali; Nawal Daftardar; Mutayyaba Waheed; Jianbin Qin; Di Wang; | arxiv-cs.CL | 2024-09-18 |
302 | ProSLM : A Prolog Synergized Language Model for Explainable Domain Specific Knowledge Based Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose ProSLM, a novel neurosymbolic framework, to improve the robustness and reliability of LLMs in question-answering tasks. |
Priyesh Vakharia; Abigail Kufeldt; Max Meyers; Ian Lane; Leilani Gilpin; | arxiv-cs.CL | 2024-09-17 |
303 | Contextual Breach: Assessing The Robustness of Transformer-based QA Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a unique dataset that incorporates seven distinct types of adversarial noise into the context, each applied at five different intensity levels on the SQuAD dataset. |
Asir Saadat; Nahian Ibn Asad; Md Farhan Ishmam; | arxiv-cs.CL | 2024-09-17 |
304 | OneEncoder: A Lightweight Framework for Progressive Alignment of Modalities Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This approach has limitations: (i) it is very expensive due to the need for training large encoders on extensive datasets, (ii) acquiring aligned large paired datasets is challenging, and (iii) adding new modalities requires retraining the entire framework to incorporate these modalities. To address these issues, we propose OneEncoder, a lightweight framework that progressively represents and aligns four modalities (image, text, audio, video). |
Bilal Faye; Hanane Azzag; Mustapha Lebbah; | arxiv-cs.CV | 2024-09-17 |
305 | StruEdit: Structured Outputs Enable The Fast and Accurate Knowledge Editing for Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We argue that these challenges stem from the unstructured nature of natural language outputs. To address the above challenges, we propose Structural Editing (StruEdit), an improved baseline for knowledge editing. |
BAOLONG BI et. al. | arxiv-cs.CL | 2024-09-16 |
306 | HALO: Hallucination Analysis and Learning Optimization to Empower LLMs with Retrieval-Augmented Context for Guided Clinical Decision Making Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces HALO, a novel framework designed to enhance the accuracy and reliability of medical question-answering (QA) systems by focusing on the detection and mitigation of hallucinations. |
SUMERA ANJUM et. al. | arxiv-cs.CL | 2024-09-16 |
307 | A Benchmark Dataset with Larger Context for Non-Factoid Question Answering Over Islamic Text Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Yet, the scarcity of QA systems tailored specifically to the detailed nature of inquiries about the Quranic Tafsir (explanation, interpretation, context of Quran for clarity) and Ahadith poses significant challenges. To address this gap, we introduce a comprehensive dataset meticulously crafted for QA purposes within the domain of Quranic Tafsir and Ahadith. |
Faiza Qamar; Seemab Latif; Rabia Latif; | arxiv-cs.CL | 2024-09-15 |
308 | QTG-VQA: Question-Type-Guided Architectural for VideoQA Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In particular, the dependency on temporal information varies significantly across question types, and representing such information is a principal challenge for VideoQA as opposed to ImageQA. To address these challenges, we propose QTG-VQA, a novel architecture that incorporates question-type-guided attention and an adaptive learning mechanism. |
Zhixian He; Pengcheng Zhao; Fuwei Zhang; Shujin Lin; | arxiv-cs.CV | 2024-09-14 |
309 | Contri(e)ve: Context + Retrieve for Scholarly Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present a two-step solution using the open-source Large Language Model (LLM) Llama 3.1 for the Scholarly-QALD dataset. |
Kanchan Shivashankar; Nadine Steinmetz; | arxiv-cs.IR | 2024-09-13 |
310 | Electrocardiogram Report Generation and Question Answering Via Retrieval-Augmented Self-Supervised Modeling Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Interpreting electrocardiograms (ECGs) and generating comprehensive reports remain challenging tasks in cardiology, often requiring specialized expertise and significant time investment. To address these critical issues, we propose ECG-ReGen, a retrieval-based approach for ECG-to-text report generation and question answering. |
Jialu Tang; Tong Xia; Yuan Lu; Cecilia Mascolo; Aaqib Saeed; | arxiv-cs.LG | 2024-09-13 |
311 | L3Cube-IndicQuest: A Benchmark Question Answering Dataset for Evaluating Knowledge of LLMs in Indic Context Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present the L3Cube-IndicQuest, a gold-standard factual question-answering benchmark dataset designed to evaluate how well multilingual LLMs capture regional knowledge across various Indic languages. |
Pritika Rohera; Chaitrali Ginimav; Akanksha Salunke; Gayatri Sawant; Raviraj Joshi; | arxiv-cs.CL | 2024-09-13 |
312 | QueryCAD: Grounded Question Answering for CAD Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, these models are rarely considered in novel AI-based approaches, such as the automatic synthesis of robot programs, as there are no readily available methods that would allow CAD models to be incorporated for the analysis, interpretation, or extraction of information. To address these limitations, we propose QueryCAD, the first system designed for CAD question answering, enabling the extraction of precise information from CAD models using natural language queries. |
Claudius Kienle; Benjamin Alt; Darko Katic; Rainer Jäkel; | arxiv-cs.RO | 2024-09-13 |
313 | Top-down Activity Representation Learning for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, to leverage the spatial visual context representation capability of the CLIP model for obtaining non-continuous visual representations in terms of contextual events in videos, we convert long-term video sequences into a spatial image domain and finetune the multimodal model LLaVA for the VideoQA task. |
Yanan Wang; Shuichiro Haruta; Donghuo Zeng; Julio Vizcarra; Mori Kurokawa; | arxiv-cs.CV | 2024-09-12 |
314 | Multi-object Event Graph Representation Learning for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While prior works have focused on modeling individual object movements using transformer-based methods, they falter when capturing complex scenarios involving multiple objects (e.g., a boy is throwing a ball in a hoop). We propose a contrastive language event graph representation learning method called CLanG to address this limitation. |
Yanan Wang; Shuichiro Haruta; Donghuo Zeng; Julio Vizcarra; Mori Kurokawa; | arxiv-cs.CV | 2024-09-12 |
315 | Source2Synth: Synthetic Data Generation and Curation Grounded in Real Data Sources Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose Source2Synth: a new method that can be used for teaching LLMs new skills without relying on costly human annotations. |
ALISIA LUPIDI et. al. | arxiv-cs.CL | 2024-09-12 |
316 | Experimenting with Legal AI Solutions: The Case of Question-Answering for Access to Justice Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we propose a human-centric legal NLP pipeline, covering data sourcing, inference, and evaluation. |
Jonathan Li; Rohan Bhambhoria; Samuel Dahan; Xiaodan Zhu; | arxiv-cs.CL | 2024-09-11 |
317 | Integrating SPARQL and LLMs for Question Answering Over Scholarly Data Sources Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper describes a methodology that combines SPARQL queries, divide and conquer algorithms, and a pre-trained extractive question answering model. |
Fomubad Borista Fondi; Azanzi Jiomekong Fidel; Gaoussou Camara; | arxiv-cs.IR | 2024-09-11 |
318 | AdaCAD: Adaptively Decoding to Balance Conflicts Between Contextual and Parametric Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a fine-grained, instance-level approach called AdaCAD, which dynamically infers the weight of adjustment based on the degree of conflict, as measured by the Jensen-Shannon divergence between distributions representing contextual and parametric knowledge. |
Han Wang; Archiki Prasad; Elias Stengel-Eskin; Mohit Bansal; | arxiv-cs.CL | 2024-09-11 |
319 | Learning to Compress Contexts for Efficient Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Previous works like Retrieval-Augmented VQA-v2 (RAVQA-v2) focus on utilizing as much input information as possible, such as image-based textual descriptions and retrieved knowledge, to improve performance, but they all overlook the issue that inference efficiency decreases significantly as the number of input tokens increases, which contradicts the demands of practical applications. To address this issue, we propose Retrieval-Augmented MLLM with Compressed Contexts (RACC). |
WEIXI WENG et. al. | arxiv-cs.CV | 2024-09-11 |
320 | Towards Building A Robust Knowledge Intensive Question Answering Model with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the issue of model accuracy decline caused by noisy external information, we propose a data augmentation-based fine-tuning method to enhance the LLM’s robustness against noise. |
Xingyun Hong; Yan Shao; Zhilin Wang; Manni Duan; Jin Xiongnan; | arxiv-cs.CL | 2024-09-09 |
321 | Seek and Solve Reasoning for Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Inspired by how humans solve TQA tasks, we propose a Seek-and-Solve pipeline that instructs the LLM to first seek relevant information and then answer questions. |
Ruya Jiang; Chun Wang; Weihong Deng; | arxiv-cs.CL | 2024-09-08 |
322 | Question-Answering Dense Video Events Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present question-answering dense video events, a novel task that requires answering and grounding the dense-event questions in long videos, thus challenging MLLMs to faithfully comprehend and reason about multiple events occurring over extended time periods. |
Hangyu Qin; Junbin Xiao; Angela Yao; | arxiv-cs.CV | 2024-09-06 |
323 | WebQuest: A Benchmark for Multimodal QA on Web Page Sequences Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we present WebQuest, a multi-page question-answering dataset that requires reasoning across multiple related web pages. |
MARIA WANG et. al. | arxiv-cs.IR | 2024-09-06 |
324 | Combining LLMs and Knowledge Graphs to Reduce Hallucinations in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: A key issue is the hallucination problem, where models generate information unsupported by the underlying data, potentially leading to dangerous misinformation. This paper presents a novel approach designed to bridge this gap by combining Large Language Models (LLM) and Knowledge Graphs (KG) to improve the accuracy and reliability of question-answering systems, on the example of a biomedical KG. |
Larissa Pusch; Tim O. F. Conrad; | arxiv-cs.CL | 2024-09-06 |
325 | COLUMBUS: Evaluating COgnitive Lateral Understanding Through Multiple-choice ReBUSes Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Effective problem-solving also necessitates lateral thinking, which remains understudied in AI and has not been used to test visual perception systems. To bridge this gap, we formulate visual lateral thinking as a multiple-choice question-answering task and describe a three-step taxonomy-driven methodology for instantiating task examples. |
Koen Kraaijveld; Yifan Jiang; Kaixin Ma; Filip Ilievski; | arxiv-cs.CV | 2024-09-06 |
326 | RAG Based Question-Answering for Contextual Response Prediction System Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce an end-to-end framework that employs LLMs with RAG capabilities for industry use cases. |
Sriram Veturi; Saurabh Vaichal; Reshma Lal Jagadheesh; Nafis Irtiza Tripto; Nian Yan; | arxiv-cs.CL | 2024-09-05 |
327 | LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we aim to enable long-context LLMs to generate responses with fine-grained sentence-level citations, improving their faithfulness and verifiability. |
JIAJIE ZHANG et. al. | arxiv-cs.CL | 2024-09-04 |
328 | MARAGS: A Multi-Adapter System for Multi-Task Retrieval Augmented Generation Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper we present a multi-adapter retrieval augmented generation system (MARAGS) for Meta’s Comprehensive RAG (CRAG) competition for KDD CUP 2024. |
Mitchell DeHaven; | arxiv-cs.CL | 2024-09-04 |
329 | GoT-CQA: Graph-of-Thought Guided Compositional Reasoning for Chart Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The former refers to answering the question strictly based on analysis of the visual content or internal data of the given chart, while the latter emphasizes the various logical and numerical reasoning steps involved in the answer prediction process. In this paper, we pay more attention to the complex reasoning in the CQA task, and propose a novel Graph-of-Thought (GoT) guided compositional reasoning model called GoT-CQA to overcome this problem. |
LINGLING ZHANG et. al. | arxiv-cs.CV | 2024-09-04 |
330 | CRAFT Your Dataset: Task-Specific Synthetic Dataset Generation Through Corpus Retrieval and Augmentation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose Corpus Retrieval and Augmentation for Fine-Tuning (CRAFT), a method for generating synthetic datasets, given a small number of user-written few-shots that demonstrate the task to be performed. |
Ingo Ziegler; Abdullatif Köksal; Desmond Elliott; Hinrich Schütze; | arxiv-cs.CL | 2024-09-03 |
331 | Diversify-verify-adapt: Efficient and Robust Retrieval-Augmented Ambiguous Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Although the iterative RAG approach has been proposed to address this problem, it comes at the cost of significantly reduced efficiency. To address these issues, we propose the diversify-verify-adapt (DIVA) framework. |
YEONJUN IN et. al. | arxiv-cs.CL | 2024-09-03 |
332 | VProChart: Answering Chart Question Through Visual Perception Alignment Agent and Programmatic Solution Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, chart images are inherently difficult to interpret, and chart-related questions often involve complex logical and numerical reasoning, which hinders the performance of existing models. This paper introduces VProChart, a novel framework designed to address these challenges in CQA by integrating a lightweight Visual Perception Alignment Agent (VPAgent) and a Programmatic Solution Reasoning approach. |
MUYE HUANG et. al. | arxiv-cs.CV | 2024-09-03 |
333 | Multi-modal Situated Reasoning in 3D Scenes Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing datasets and benchmarks for situated understanding are limited in data modality, diversity, scale, and task scope. To address these limitations, we propose Multi-modal Situated Question Answering (MSQA), a large-scale multi-modal situated reasoning dataset, scalably collected leveraging 3D scene graphs and vision-language models (VLMs) across a diverse range of real-world 3D scenes. |
XIONGKUN LINGHU et. al. | arxiv-cs.CV | 2024-09-03 |
334 | How Privacy-Savvy Are Large Language Models? A Case Study on Compliance and Privacy Technical Review Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper seeks to address this gap by providing a comprehensive case study evaluating LLMs’ performance in privacy-related tasks such as privacy information extraction (PIE), legal and regulatory key point detection (KPD), and question answering (QA) with respect to privacy policies and data protection regulations. We introduce a Privacy Technical Review (PTR) framework, highlighting its role in mitigating privacy risks during the software development life-cycle. |
XICHOU ZHU et. al. | arxiv-cs.CL | 2024-09-03 |
335 | Kvasir-VQA: A Text-Image Pair GI Tract Dataset Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce Kvasir-VQA, an extended dataset derived from the HyperKvasir and Kvasir-Instrument datasets, augmented with question-and-answer annotations to facilitate advanced machine learning tasks in Gastrointestinal (GI) diagnostics. |
SUSHANT GAUTAM et. al. | arxiv-cs.CV | 2024-09-02 |
336 | Language Models Benefit from Preparation with Elicited Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a simple prompting technique, called PREP, that involves using two instances of LMs: the first (LM1) generates relevant information, and the second (LM2) receives the information from the user and answers the question. |
Jiacan Yu; Hannah An; Lenhart K. Schubert; | arxiv-cs.CL | 2024-09-02 |
337 | Retrieval-Augmented Natural Language Reasoning for Explainable Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce a new VQA-NLE model, ReRe (Retrieval-augmented natural language Reasoning), which leverages retrieved information from memory to aid in generating accurate answers and persuasive explanations without relying on complex networks and extra datasets. |
Su Hyeon Lim; Minkuk Kim; Hyeon Bae Kim; Seong Tae Kim; | arxiv-cs.CV | 2024-08-30 |
338 | MAPWise: Evaluating Vision-Language Models for Advanced Map Queries Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study investigates the efficacy of VLMs in answering questions based on choropleth maps, which are widely used for data analysis and representation. To facilitate and encourage research in this area, we introduce a novel map-based question-answering benchmark, consisting of maps from three geographical regions (United States, India, China), each containing 1000 questions. |
Srija Mukhopadhyay; Abhishek Rajgaria; Prerana Khatiwada; Vivek Gupta; Dan Roth; | arxiv-cs.CV | 2024-08-30 |
339 | LLM-Based Multi-Hop Question Answering with Knowledge Graph Integration in Evolving Environments Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing methods for such knowledge editing still face difficulties with multi-hop questions that require accurate fact identification and sequential logical reasoning, particularly among numerous fact updates. To tackle these challenges, this paper introduces Graph Memory-based Editing for Large Language Models (GMeLLo), a straightforward and effective method that merges the explicit knowledge representation of Knowledge Graphs (KGs) with the linguistic flexibility of LLMs. |
RUIRUI CHEN et. al. | arxiv-cs.CL | 2024-08-28 |
340 | Can Visual Language Models Replace OCR-Based Visual Question Answering Pipelines in Production? A Case Study in Retail Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our study includes two commercial models, GPT-4V [16] and GPT-4o [17], as well as four open-source models: InternVL [5], LLaVA 1.5 [12], LLaVA-NeXT [13], and CogAgent [9]. |
Bianca Lamm; Janis Keuper; | arxiv-cs.CV | 2024-08-28 |
341 | Evidence-Enhanced Triplet Generation Framework for Hallucination Alleviation in Generative Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the hallucination in generative question answering (GQA) where the answer can not be derived from the document, we propose a novel evidence-enhanced triplet generation framework, EATQA, encouraging the model to predict all the combinations of (Question, Evidence, Answer) triplet by flipping the source pair and the target label to understand their logical relationships, i.e., predict Answer(A), Question(Q), and Evidence(E) given a QE, EA, and QA pairs, respectively. |
Haowei Du; Huishuai Zhang; Dongyan Zhao; | arxiv-cs.CL | 2024-08-27 |
342 | Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We develop an automated pipeline to create multi-hop question-answering pairs with associated temporal evidence, enabling the construction of a large-scale dataset for instruction tuning. |
Qirui Chen; Shangzhe Di; Weidi Xie; | arxiv-cs.CV | 2024-08-26 |
343 | Question Answering System of Bridge Design Specification Based on Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Using a self-built question-and-answer dataset and the TensorFlow and Keras deep learning frameworks, the model is constructed and trained to predict the start and end positions of the answer in the bridge design specification for the question given by the user. |
Leye Zhang; Xiangxiang Tian; Hongjun Zhang; | arxiv-cs.CL | 2024-08-25 |
344 | IQA-EVAL: Automatic Evaluation of Human-Model Interactive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce an automatic evaluation framework, IQA-EVAL, to achieve Interactive Question Answering Evaluations; more specifically, we introduce an LLM-based Evaluation Agent (LEA) that can: (1) simulate human behaviors to generate interactions with IQA models; and (2) automatically evaluate the generated interactions. |
Ruosen Li; Ruochen Li; Barry Wang; Xinya Du; | arxiv-cs.CL | 2024-08-24 |
345 | Internal and External Knowledge Interactive Refinement Framework for Knowledge-Intensive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a new internal and external knowledge interactive refinement paradigm dubbed IEKR to utilize internal knowledge in LLM to help retrieve relevant knowledge from the external knowledge base, as well as exploit the external knowledge to refine the hallucination of generated internal knowledge. |
Haowei Du; Dongyan Zhao; | arxiv-cs.CL | 2024-08-23 |
346 | Vintern-1B: An Efficient Multimodal Large Language Model for Vietnamese Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this report, we introduce Vintern-1B, a reliable 1-billion-parameters multimodal large language model (MLLM) for Vietnamese language tasks. |
KHANG T. DOAN et. al. | arxiv-cs.LG | 2024-08-22 |
347 | Enhanced Fine-Tuning of Lightweight Domain-Specific Q&A Model Based on Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Commercial companies face the dual challenges of privacy protection and resource constraints when involving LLMs for fine-tuning. This paper proposes a novel framework, Self-Evolution, designed to address these issues by leveraging lightweight open-source LLMs through multiple iterative fine-tuning rounds. |
SHENGLIN ZHANG et. al. | arxiv-cs.AI | 2024-08-22 |
348 | Assessing Modality Bias in Video Question Answering Benchmarks with Multimodal Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing video question-answering (VidQA) benchmarks and datasets often exhibit a bias toward a single modality, despite the goal of requiring advanced reasoning skills that integrate diverse modalities to answer the queries. In this work, we introduce the modality importance score (MIS) to identify such bias. |
JEAN PARK et. al. | arxiv-cs.LG | 2024-08-22 |
349 | RConE: Rough Cone Embedding for Multi-Hop Logical Query Answering on Multi-Modal Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose RConE, an embedding method to capture the multi-modal information needed to answer a query. |
Mayank Kharbanda; Rajiv Ratn Shah; Raghava Mutharaju; | arxiv-cs.AI | 2024-08-21 |
350 | Mathematical Information Retrieval: Search and Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The framework is used to organize and relate the other core topics of the book, including interactions between people and systems, representing math formulas in sources, and evaluation. |
Richard Zanibbi; Behrooz Mansouri; Anurag Agarwal; | arxiv-cs.IR | 2024-08-21 |
351 | Multimodal Datasets and Benchmarks for Reasoning About Dynamic Spatio-Temporality in Everyday Environments Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We used a 3D simulator to create artificial video data with standardized annotations, aiming to aid in the development of Embodied AI. |
Takanori Ugai; Kensho Hara; Shusaku Egami; Ken Fukuda; | arxiv-cs.AI | 2024-08-21 |
352 | What Are The Limits of Cross-lingual Dense Passage Retrieval for Low-resource Languages? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we analyze the capabilities of the multi-lingual Dense Passage Retriever (mDPR) for extremely low-resource languages. |
Jie Wu; Zhaochun Ren; Suzan Verberne; | arxiv-cs.IR | 2024-08-21 |
353 | DocTabQA: Answering Questions from Long Documents Using Tables Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the QTabA dataset, encompassing 300 financial documents, accompanied by manually annotated 1.5k question-table pairs. |
Haochen Wang; Kai Hu; Haoyu Dong; Liangcai Gao; | arxiv-cs.CL | 2024-08-21 |
354 | FoRAG: Factuality-optimized Retrieval Augmented Generation for Web-enhanced Long-form Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the emergence of various open source methods and web-enhanced commercial systems such as Bing Chat, two critical problems remain unsolved, i.e., the lack of factuality and clear logic in the generated long-form answers. In this paper, we remedy these issues via a systematic study on answer generation in web-enhanced LFQA. |
TIANCHI CAI et. al. | kdd | 2024-08-21 |
355 | DyGKT: Dynamic Graph Learning for Knowledge Tracing Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The three dynamical characteristics above contain the great potential to revolutionize the existing knowledge tracing methods. Along this line, we propose a Dynamic Graph-based Knowledge Tracing model, namely DyGKT. |
KE CHENG et. al. | kdd | 2024-08-21 |
356 | Differentiating Choices Via Commonality for Multiple-Choice Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel MCQA model by differentiating choices through identifying and eliminating their commonality, called DCQA. |
WENQING DENG et. al. | arxiv-cs.CL | 2024-08-21 |
357 | Answer Is All You Need: Instruction-following Text Embedding Via Answering The Question Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This work aims to build a text embedder that can capture characteristics of texts specified by user instructions clarifying the similarity criterion. |
LETIAN PENG et. al. | acl | 2024-08-20 |
358 | Putting People in LLMs’ Shoes: Generating Better Answers Via Question Rewriter Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, their effectiveness in QA is often undermined by the vagueness of user questions. To address this issue, we introduce single-round instance-level prompt optimization, referred to as question rewriter. |
Junhao Chen; Bowen Wang; Zhouqiang Jiang; Yuta Nakashima; | arxiv-cs.CL | 2024-08-20 |
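As a rough illustration of this rewrite-then-answer pattern, the sketch below assumes a hypothetical `llm` helper; it shows only the general single-round flow of reformulating a vague question before answering it, not the authors' implementation.

```python
# Minimal sketch of a rewrite-then-answer flow in the spirit of a question rewriter.
# `llm` is an assumed placeholder for a single LLM completion call.

def llm(prompt: str) -> str:
    """Placeholder for an LLM completion call."""
    raise NotImplementedError

def answer_with_rewriter(user_question: str) -> str:
    # Single-round rewrite: make the question specific and self-contained.
    rewritten = llm(
        "Rewrite the following question so it is specific, unambiguous, and "
        f"self-contained, without changing its intent:\n{user_question}"
    )
    # Answer the rewritten question instead of the original vague one.
    return llm(f"Answer the question:\n{rewritten}")
```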
359 | SOTOPIA-π: Interactive Learning of Socially Intelligent Language Agents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This social learning process is largely understudied by existing research on building language agents. Motivated by this gap, we propose an interactive learning method, SOTOPIA-π, that improves the social intelligence of language agents. |
RUIYI WANG et. al. | acl | 2024-08-20 |
360 | MinPrompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose to select the most informative data for fine-tuning, thereby improving the efficiency of the fine-tuning process with comparative or even better accuracy on the open-domain QA task. |
XIUSI CHEN et. al. | acl | 2024-08-20 |
361 | TaPERA: Enhancing Faithfulness and Interpretability in Long-Form Table QA By Content Planning and Execution-based Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While large language model-based systems have made significant progress, they often hallucinate, especially when the task involves complex reasoning over tables. To tackle this issue, we propose a new LLM-based framework, TaPERA, for LFTQA tasks. |
Yilun Zhao; Lyuhao Chen; Arman Cohan; Chen Zhao; | acl | 2024-08-20 |
362 | EWEK-QA : Enhanced Web and Efficient Knowledge Graph Retrieval for Citation-based Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Second, web-retrieved contents are usually obtained by some simple heuristics such as fixed length or breakpoints which might lead to splitting information into pieces. To mitigate these issues, we propose our enhanced web and efficient knowledge graph (KG) retrieval solution (EWEK-QA) to enrich the content of the extracted knowledge fed to the system. |
MOHAMMAD DEHGHAN et. al. | acl | 2024-08-20 |
363 | Interactive-KBQA: Multi-Turn Interactions for Knowledge Base Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Yet, fully leveraging LLMs to parse questions into logical forms in low-resource scenarios poses a substantial challenge. To tackle these hurdles, we introduce Interactive-KBQA, a framework designed to generate logical forms through direct interaction with knowledge bases (KBs). |
Guanming Xiong; Junwei Bao; Wen Zhao; | acl | 2024-08-20 |
364 | Learning Relational Decomposition of Queries for Question Answering from Tables Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: By learning to imitate a restricted subset of SQL-like algebraic operations, we demonstrate that their execution flow provides intermediate supervision steps that allow for increased generalization and structural reasoning compared to classical approaches. |
Raphaël Mouravieff; Benjamin Piwowarski; Sylvain Lamprier; | acl | 2024-08-20 |
365 | FinTextQA: A Dataset for Long-form Financial Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work introduces FinTextQA, a novel dataset for long-form question answering (LFQA) in finance. |
JIAN CHEN et. al. | acl | 2024-08-20 |
366 | Temporal Knowledge Question Answering Via Abstract Reasoning Induction Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we address the challenge of enhancing temporal knowledge reasoning in Large Language Models (LLMs). |
Ziyang Chen; Dongfang Li; Xiang Zhao; Baotian Hu; Min Zhang; | acl | 2024-08-20 |
367 | MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative LLMs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose Meaning-Aware Response Scoring (MARS) as an alternative to length-normalized scoring for UE methods. |
YAVUZ FARUK BAKMAN et. al. | acl | 2024-08-20 |
368 | Modality-Aware Integration with Large Language Models for Knowledge-Based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To tackle these, we present a novel modality-aware integration with LLMs for KVQA (MAIL). |
JUNNAN DONG et. al. | acl | 2024-08-20 |
369 | Generate-then-Ground in Retrieval-Augmented Generation for Multi-hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the performance of this retrieve-then-read paradigm is constrained by the retriever and the inevitable noise in the retrieved documents. To mitigate these challenges, we introduce a novel generate-then-ground (GenGround) framework, synergizing the parametric knowledge of LLMs and external documents to solve a multi-hop question. |
ZHENGLIANG SHI et. al. | acl | 2024-08-20 |
370 | Exploring Hybrid Question Answering Via Program-based Prompting Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose HProPro, a novel program-based prompting framework for the hybrid question answering task. |
QI SHI et. al. | acl | 2024-08-20 |
371 | Multilingual Non-Factoid Question Answering with Silver Answers Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the scope of such datasets for low-resource languages remains limited, with only a few works centered on factoid-based QuADs and none on non-factoid QuADs. Therefore, this work presents MuNfQuAD, a multilingual QuAD with non-factoid questions. |
Ritwik Mishra; Sreeram Vennam; Rajiv Ratn Shah; Ponnurangam Kumaraguru; | arxiv-cs.CL | 2024-08-20 |
372 | HOLMES: Hyper-Relational Knowledge Graphs for Multi-hop Question Answering Using LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, this simplistic approach is query-agnostic and the extracted facts are ambiguous as they lack context. To address these drawbacks and to enable LLMs to answer complex (multi-hop) questions with ease, we propose to use a knowledge graph (KG) that is context-aware and is distilled to contain query-relevant information. |
Pranoy Panda; Ankush Agarwal; Chaitanya Devaguptapu; Manohar Kaul; Prathosh Ap; | acl | 2024-08-20 |
373 | PokeMQA: Programmable Knowledge Editing for Multi-hop Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We thus propose a framework, Programmable knowledge editing for Multi-hop Question Answering (PokeMQA), to decouple the jobs. |
HENGRUI GU et. al. | acl | 2024-08-20 |
374 | Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the problem of few-shot transfer learning for KBQA, where the target domain offers only a few labeled examples, but a large labeled training dataset is available in a source domain. |
MAYUR PATIDAR et. al. | acl | 2024-08-20 |
375 | ColBERT Retrieval and Ensemble Response Scoring for Language Model Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The Specializing Large Language Models for Telecom Networks challenge aimed to enhance the performance of two small language models, Phi-2 and Falcon-7B in telecommunication question answering. In this paper, we present our question answering systems for this challenge. |
Alex Gichamba; Tewodros Kederalah Idris; Brian Ebiyau; Eric Nyberg; Teruko Mitamura; | arxiv-cs.CL | 2024-08-20 |
376 | Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce Tree-of-Traversals, a novel zero-shot reasoning algorithm that enables augmentation of black-box LLMs with one or more KGs. |
ELAN MARKOWITZ et. al. | acl | 2024-08-20 |
377 | To Generate or to Retrieve? On The Effectiveness of Artificial Contexts for Medical Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents MedGENIE, the first generate-then-read framework for multiple-choice question answering in medicine. |
Giacomo Frisoni; Alessio Cocchieri; Alex Presepi; Gianluca Moro; Zaiqiao Meng; | acl | 2024-08-20 |
378 | MMToM-QA: Multimodal Theory of Mind Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: People can flexibly reason about another person's mind based on conceptual representations (e.g., goals, beliefs, plans) extracted from any available data. To address this, we introduce a multimodal Theory of Mind question answering (MMToM-QA) benchmark. |
CHUANYANG JIN et. al. | acl | 2024-08-20 |
379 | SymKGQA: Few-Shot Knowledge Graph Question Answering Via Symbolic Program Generation and Execution Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recently, a new LF called KoPL has been introduced that explicitly models the complex reasoning process step-by-step in a symbolic manner and has shown SOTA on KQA Pro in the fully-supervised setting. Inspired by this, we propose the SymKGQA framework, which generates a step-by-step symbolic LF (i.e., KoPL) in a few-shot in-context learning setting using an LLM. |
Prerna Agarwal; Nishant Kumar; Srikanta Bedathur; | acl | 2024-08-20 |
380 | Domain Adaptation for Subjective Induction Questions Answering on Products By Adversarial Disentangled Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: It is hard for traditional methods to work well without considering the shift of domain patterns. To address this problem, we propose a novel domain-adaptive model. |
YUFENG ZHANG et. al. | acl | 2024-08-20 |
381 | Is Table Retrieval A Solved Problem? Exploring Join-Aware Multi-Table Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: If the join plan is not considered in the retrieval stage, the subsequent steps of reasoning and answering based on those retrieved tables are likely to be incorrect. To address this problem, we introduce a method that uncovers useful join relations for any query and database during table retrieval. |
Peter Baile Chen; Yi Zhang; Dan Roth; | acl | 2024-08-20 |
382 | RetinaQA: A Robust Knowledge Base Question Answering Model for Both Answerable and Unanswerable Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Recent research has found that such models, when superficially adapted to detect answerability, struggle to satisfactorily identify the different categories of unanswerable questions, and simultaneously preserve good performance for answerable questions. Towards addressing this issue, we propose RetinaQA, a new KBQA model that unifies two key ideas in a single KBQA architecture: (a) discrimination over candidate logical forms, rather than generating these, for handling schema-related unanswerability, and (b) sketch-filling-based construction of candidate logical forms for handling data-related unanswerability. |
Prayushi Faldu; Indrajit Bhattacharya; Mausam .; | acl | 2024-08-20 |
383 | CoDi: Conversational Distillation for Grounded Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Secondly, high-quality conversational datasets are often scarce, small, and domain-specific. Addressing these challenges, we introduce a novel data distillation framework named CoDi (short for Conversational Distillation, pronounced Cody), allowing us to synthesize large-scale, assistant-style datasets in a steerable and diverse manner. |
PATRICK HUBER et. al. | arxiv-cs.CL | 2024-08-20 |
384 | Spiral of Silence: How Is Large Language Model Killing Information Retrieval? A Case Study on Open Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we construct and iteratively run a simulation pipeline to deeply investigate the short-term and long-term effects of LLM text on RAG systems. |
XIAOYANG CHEN et. al. | acl | 2024-08-20 |
385 | FastFiD: Improve Inference Efficiency of Open Domain Question Answering Via Sentence Selection Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Nevertheless, this framework can be relatively time-consuming, particularly due to the extensive length of the gathered passages. To address this, we introduce FastFiD in this paper, a novel approach that executes sentence selection on the encoded passages. |
Yufei Huang; Xu Han; Maosong Sun; | acl | 2024-08-20 |
386 | BizBench: A Quantitative Reasoning Benchmark for Business and Finance Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce BizBench, a benchmark for evaluating models' ability to reason about realistic financial problems. |
MICHAEL KRUMDICK et. al. | acl | 2024-08-20 |
387 | Paraphrasing in Affirmative Terms Improves Negation Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we experiment with seamless strategies that incorporate affirmative interpretations (i. e. , paraphrases without negation) to make models more robust against negation. |
MohammadHossein Rezaei; Eduardo Blanco; | acl | 2024-08-20 |
388 | AutoAct: Automatic Agent Learning from Scratch for QA Via Self-Planning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we introduce AutoAct, an automatic agent learning framework for QA that does not rely on large-scale annotated data and synthetic planning trajectories from closed-source models (e.g., GPT-4). |
SHUOFEI QIAO et. al. | acl | 2024-08-20 |
389 | Beyond Memorization: The Challenge of Random Memory Access in Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, the mechanisms underlying knowledge storage and memory access within their parameters remain elusive. In this paper, we investigate whether a generative LM (e.g., GPT-2) is able to access its memory sequentially or randomly. |
TONGYAO ZHU et. al. | acl | 2024-08-20 |
390 | ProtT3: Protein-to-Text Generation for Text-based Protein Understanding Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To address their limitations, we introduce ProtT3, a framework for Protein-to-Text Generation for Text-based Protein Understanding. |
ZHIYUAN LIU et. al. | acl | 2024-08-20 |
391 | FanOutQA: A Multi-Hop, Multi-Document Question Answering Benchmark for Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To evaluate complex reasoning in LLMs more fully, we present FanOutQA, a high-quality dataset of fan-out question-answer pairs and human-annotated decompositions with English Wikipedia as the knowledge base. |
Andrew Zhu; Alyssa Hwang; Liam Dugan; Chris Callison-Burch; | acl | 2024-08-20 |
392 | Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we systematically investigate how to robustly fine-tune LLMs for better source quality and answer attributability. |
Tobias Schimanski; Jingwei Ni; Mathias Kraus; Elliott Ash; Markus Leippold; | acl | 2024-08-20 |
393 | Never Lost in The Middle: Mastering Long-Context Question Answering with Position-Agnostic Decompositional Training Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The 'lost in the middle' problem challenges most LLMs, referring to the dramatic decline in accuracy when the correct information is located in the middle of a long context. To overcome this crucial issue, this paper proposes to enhance the information searching and reflection ability of LLMs in long contexts via specially designed tasks called Position-Agnostic Multi-step QA (PAM QA). |
JUNQING HE et. al. | acl | 2024-08-20 |
394 | Narrowing The Knowledge Evaluation Gap: Open-Domain Question Answering with Multi-Granularity Answers Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose GRANOLA QA, a novel evaluation setting where a predicted answer is evaluated in terms of accuracy and informativeness against a set of multi-granularity answers. |
Gal Yona; Roee Aharoni; Mor Geva; | acl | 2024-08-20 |
395 | SceMQA: A Scientific College Entrance Level Multimodal Question Answering Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The paper introduces SceMQA, a novel benchmark for scientific multimodal question answering at the college entrance level. |
ZHENWEN LIANG et. al. | acl | 2024-08-20 |
396 | SyllabusQA: A Course Logistics Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce SyllabusQA, an open-source dataset with 63 real course syllabi covering 36 majors, containing 5,078 open-ended course logistics-related question-answer pairs that are diverse in both question types and answer formats. |
Nigel Fernandez; Alexander Scarlatos; Andrew Lan; | acl | 2024-08-20 |
397 | Safety Alignment in NLP Tasks: Weakly Aligned Summarization As An In-Context Attack Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Our study, focusing on safety-sensitive documents obtained through adversarial attacks, reveals significant disparities in the safety alignment of various NLP tasks. |
Yu Fu; Yufei Li; Wen Xiao; Cong Liu; Yue Dong; | acl | 2024-08-20 |
398 | BeamAggR: Beam Aggregation Reasoning Over Multi-source Knowledge for Multi-hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, significant challenges still persist, including inaccurate and insufficient retrieval for complex questions, as well as difficulty in integrating multi-source knowledge. To address this, we propose Beam Aggregation Reasoning (BeamAggR), a reasoning framework for knowledge-intensive multi-hop QA. |
ZHENG CHU et. al. | acl | 2024-08-20 |
399 | Consistency Training By Synthetic Question Generation for Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: By citing a common modeling error prevalent in previous research, we introduce a new baseline and compare our model's performance against it, demonstrating an improvement in results, particularly in later turns of the conversation, when dealing with questions that include a large historical context. |
Hamed Hemati; Hamid Beigy; | acl | 2024-08-20 |
400 | Ranking Generated Answers: On The Agreement of Retrieval Models with Humans on Consumer Health Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a method for evaluating LLM answers that uses ranking signals as a substitute for explicit relevance judgements. |
Sebastian Heineking; Jonas Probst; Daniel Steinbach; Martin Potthast; Harrisen Scells; | arxiv-cs.IR | 2024-08-19 |
401 | TableBench: A Comprehensive and Complex Benchmark for Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite these achievements, LLMs still encounter significant challenges when applied in industrial scenarios, particularly due to the increased complexity of reasoning required with real-world tabular data, underscoring a notable disparity between academic benchmarks and practical applications. To address this discrepancy, we conduct a detailed investigation into the application of tabular data in industrial scenarios and propose a comprehensive and complex benchmark TableBench, including 18 fields within four major categories of table question answering (TableQA) capabilities. |
XIANJIE WU et. al. | arxiv-cs.CL | 2024-08-17 |
402 | Developing A Llama-Based Chatbot for CI/CD Question Answering: A Case Study at Ericsson Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents our experience developing a Llama-based chatbot for question answering about continuous integration and continuous delivery (CI/CD) at Ericsson, a multinational telecommunications company. |
Daksh Chaudhary; Sri Lakshmi Vadlamani; Dimple Thomas; Shiva Nejati; Mehrdad Sabetzadeh; | arxiv-cs.SE | 2024-08-17 |
403 | MuRAR: A Simple and Effective Multimodal Retrieval and Answer Refinement Framework for Multimodal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a simple and effective framework named MuRAR (Multimodal Retrieval and Answer Refinement). |
ZHENGYUAN ZHU et. al. | arxiv-cs.IR | 2024-08-16 |
404 | Beyond The Hype: A Dispassionate Look at Vision-language Models in Medical Scenario Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we introduce RadVUQA, a novel Radiological Visual Understanding and Question Answering benchmark, to comprehensively evaluate existing LVLMs. |
Yang Nan; Huichi Zhou; Xiaodan Xing; Guang Yang; | arxiv-cs.CV | 2024-08-16 |
405 | RealMedQA: A Pilot Biomedical Question Answering Dataset Containing Realistic Clinical Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we present RealMedQA, a dataset of realistic clinical questions generated by humans and an LLM. |
GREGORY KELL et. al. | arxiv-cs.CL | 2024-08-16 |
406 | IIU: Independent Inference Units for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose Independent Inference Units (IIU) for fine-grained multi-modal reasoning to decompose intra-modal information by the functionally independent units. |
Yili Li; Jing Yu; Keke Gai; Gang Xiong; | arxiv-cs.CV | 2024-08-15 |
407 | LLaVA-Surg: Towards Multimodal Surgical Assistant Via Structured Surgical Video Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: One major contributing factor is the absence of datasets in the surgical field. In this paper, we create a new dataset, Surg-QA, consisting of 102,000 surgical video-instruction pairs, the largest of its kind so far. |
JIAJIE LI et. al. | arxiv-cs.CV | 2024-08-15 |
408 | W-RAG: Weakly Supervised Dense Retrieval in RAG for Open-domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose W-RAG by utilizing the ranking capabilities of LLMs to create weakly labeled data for training dense retrievers. |
Jinming Nian; Zhiyuan Peng; Qifan Wang; Yi Fang; | arxiv-cs.CL | 2024-08-15 |
409 | Assessing and Enhancing Large Language Models in Rare Disease Question-answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce a rare disease question-answering (ReDis-QA) dataset to evaluate the performance of LLMs in diagnosing rare diseases. |
GUANCHU WANG et. al. | arxiv-cs.CE | 2024-08-15 |
410 | Evaluating Fine-Tuning Efficiency of Human-Inspired Learning Strategies in Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This study evaluates the fine-tuning efficiency of five human-inspired strategies across four language models, three datasets, and both human- and LLM-labelled data in the context of medical question answering. |
Yushi Yang; Andrew M. Bean; Robert McCraith; Adam Mahdi; | arxiv-cs.CL | 2024-08-14 |
411 | QirK: Question Answering Via Intermediate Representation on Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We demonstrate QirK, a system for answering natural language questions on Knowledge Graphs (KG). |
JAN LUCA SCHEERER et. al. | arxiv-cs.DB | 2024-08-14 |
412 | Enhancing Visual Question Answering Through Ranking-Based Hybrid Training and Multimodal Fusion Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Current VQA models struggle with complex questions due to limitations in capturing and integrating multimodal information effectively. To address these challenges, we propose the Rank VQA model, which leverages a ranking-inspired hybrid training strategy to enhance VQA performance. |
Peiyuan Chen; Zecheng Zhang; Yiping Dong; Li Zhou; Han Wang; | arxiv-cs.CV | 2024-08-14 |
413 | A RAG-Based Question-Answering Solution for Cyber-Attack Investigation and Attribution Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In the constantly evolving field of cybersecurity, it is imperative for analysts to stay abreast of the latest attack trends and pertinent information that aids in the investigation and attribution of cyber-attacks. In this work, we introduce the first question-answering (QA) model and its application that provides cybersecurity experts with information about cyber-attack investigation and attribution. |
Sampath Rajapaksha; Ruby Rani; Erisa Karafili; | arxiv-cs.CR | 2024-08-12 |
414 | Chain of Condition: Construct, Verify and Solve Conditions for Conditional Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing approaches struggle with CQA due to two challenges: (1) precisely identifying necessary conditions and the logical relationship, and (2) verifying conditions to detect any that are missing. In this paper, we propose a novel prompting approach, Chain of condition, by first identifying all conditions and constructing their logical relationships explicitly according to the document, then verifying whether these conditions are satisfied, finally solving the logical expression to indicate any missing conditions and generating the answer accordingly. |
Jiuheng Lin; Yuxuan Lai; Yansong Feng; | arxiv-cs.CL | 2024-08-10 |
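A condition-centric prompting pipeline of this kind can be sketched roughly as follows; the prompts and the `llm` helper are illustrative assumptions rather than the paper's exact prompts: extract the conditions and their logical relations from the document, verify which hold, then answer or report what is missing.

```python
# Rough illustration of a condition-centric prompting pipeline, loosely following
# the "chain of condition" idea above. All prompts and the `llm` helper are
# assumptions for illustration only.

def llm(prompt: str) -> str:
    """Placeholder for an LLM completion call."""
    raise NotImplementedError

def chain_of_condition(document: str, question: str) -> str:
    # Step 1: identify conditions and construct their logical relationships.
    conditions = llm(
        "List every condition in the document that must hold for the question "
        "to be answerable, and how the conditions relate logically.\n"
        f"Document:\n{document}\nQuestion: {question}"
    )
    # Step 2: verify whether each condition is satisfied by the document.
    verification = llm(
        f"For each condition below, state whether the document satisfies it.\n{conditions}"
    )
    # Step 3: solve the logical expression and answer, or name missing conditions.
    return llm(
        "Using the verified conditions, either answer the question or name the "
        f"missing conditions.\nConditions:\n{verification}\nQuestion: {question}"
    )
```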
415 | Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This complexity leads to a need for additional information and explanation, which can distract fans from the game. To tackle these challenges, we present Sportify, a Visual Question Answering system that integrates narratives and embedded visualization for demystifying basketball tactical questions, aiding fans in understanding various game aspects. |
Chunggi Lee; Tica Lin; Hanspeter Pfister; Chen Zhu-Tian; | arxiv-cs.HC | 2024-08-09 |
416 | Surgical-VQLA++: Adversarial Contrastive Learning for Calibrated Robust Visual Question-Localized Answering in Robotic Surgery Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, the inability of VQA models to visually indicate the regions of interest corresponding to the given questions results in incomplete comprehension of the surgical scene. To tackle this, we propose the surgical visual question localized-answering (VQLA) for precise and context-aware responses to specific queries regarding surgical images. |
LONG BAI et. al. | arxiv-cs.CV | 2024-08-09 |
417 | Towards A Generative Approach for Emotion Detection and Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: But can they perform emotional reasoning by concatenating `Let’s think step-by-step’ to the input prompt? In this paper we investigate this question along with introducing a novel approach to zero-shot emotion detection and emotional reasoning using LLMs. |
Ankita Bhaumik; Tomek Strzalkowski; | arxiv-cs.CL | 2024-08-09 |
418 | VideoQA in The Era of LLMs: An Empirical Study Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This work conducts a timely and comprehensive study of Video-LLMs’ behavior in VideoQA, aiming to elucidate their success and failure modes, and provide insights towards more human-like video understanding and question answering. |
JUNBIN XIAO et. al. | arxiv-cs.CV | 2024-08-08 |
419 | Enhancing Robustness of Retrieval-Augmented Language Models with In-Context Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, RALMs still struggle with unanswerable queries, where the retrieved contexts do not contain the correct answer, and with conflicting information, where different sources provide contradictory answers due to imperfect retrieval. This study introduces an in-context learning-based approach to enhance the reasoning capabilities of RALMs, making them more robust in imperfect retrieval scenarios. |
Seong-Il Park; Seung-Woo Choi; Na-Hyun Kim; Jay-Yoon Lee; | arxiv-cs.CL | 2024-08-08 |
420 | Enhancing Healthcare Through Large Language Models: A Study on Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents a detailed study of various LLMs trained on the MedQuAD medical question-answering dataset, with a focus on identifying the most effective model for providing accurate medical information. |
Haoran Yu; Chang Yu; Zihan Wang; Dongxian Zou; Hao Qin; | arxiv-cs.CL | 2024-08-07 |
421 | Targeted Visual Prompting for Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To address this, region-based questions have been proposed as a means to assess and enhance actual visual understanding through compositional evaluation. To combine these two perspectives, this paper introduces targeted visual prompting to equip MLLMs with region-based questioning capabilities. |
Sergio Tascon-Morales; Pablo Márquez-Neila; Raphael Sznitman; | arxiv-cs.CV | 2024-08-06 |
422 | Entity Retrieval for Answering Entity-Centric Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose Entity Retrieval, a novel retrieval method which rather than relying on question-document similarity, depends on the salient entities within the question to identify the retrieval documents. |
Hassan S. Shavarani; Anoop Sarkar; | arxiv-cs.IR | 2024-08-05 |
423 | XMainframe: A Large Language Model for Mainframe Modernization Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we introduce XMainframe, a state-of-the-art large language model (LLM) specifically designed with knowledge of mainframe legacy systems and COBOL codebases. |
ANH T. V. DAU et. al. | arxiv-cs.CL | 2024-08-05 |
424 | Leveraging Inter-Chunk Interactions for Enhanced Retrieval in Large Language Model-Based Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Previous research typically handles paragraphs from external documents in isolation, resulting in a lack of context and ambiguous references, particularly in multi-document and complex tasks. To overcome these challenges, we propose a new retrieval framework IIER, that leverages Inter-chunk Interactions to Enhance Retrieval. |
TIEZHENG GUO et. al. | arxiv-cs.CL | 2024-08-05 |
425 | Developing PUGG for Polish: A Modern Approach to KBQA, MRC, and IR Dataset Construction Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We executed this pipeline and introduced the PUGG dataset, the first Polish KBQA dataset, and novel datasets for MRC and IR. |
ALBERT SAWCZYN et. al. | arxiv-cs.AI | 2024-08-05 |
426 | KG-CoT: Chain-of-Thought Prompting of Large Language Models Over Knowledge Graphs for Knowledge-Aware Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Moreover, fragmented knowledge facts extracted by knowledge retrievers fail to provide explicit and coherent reasoning paths for improving LLM reasoning. To address these challenges, we propose KG-CoT, a novel knowledge-augmented paradigm that leverages a small-scale step-by-step graph reasoning model to reason over knowledge graphs (KGs) and utilizes a reasoning path generation method to generate chains of reasoning with high confidence for large-scale LLMs. |
Ruilin Zhao; Feng Zhao; Long Wang; Xianzhi Wang; Guandong Xu; | ijcai | 2024-08-03 |
427 | MMVQA: A Comprehensive Dataset for Investigating Multipage Multimodal Information Retrieval in PDF-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The paper introduces PDF-MVQA, tailored for research journal articles, encompassing multiple pages and multimodal retrieval. |
Yihao Ding; Kaixuan Ren; Jiabin Huang; Siwen Luo; Soyeon Caren Han; | ijcai | 2024-08-03 |
428 | KnowledgeHub: An End-to-End Tool for Assisted Scientific Discovery Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper describes the KnowledgeHub tool, a scientific literature Information Extraction (IE) and Question Answering (QA) pipeline. |
SHINNOSUKE TANAKA et. al. | ijcai | 2024-08-03 |
429 | GigaPevt: Multimodal Medical Assistant Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This demo paper presents GigaPevt, the first multimodal medical assistant that combines the dialog capabilities of large language models with specialized medical models. |
PAVEL BLINOV et. al. | ijcai | 2024-08-03 |
430 | ScreenAI: A Vision-Language Model for UI and Infographics Understanding IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce ScreenAI, a vision-language model that specializes in UI and infographics understanding. |
GILLES BAECHLER et. al. | ijcai | 2024-08-03 |
431 | Graph Collaborative Expert Finding with Contrastive Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we try to address the limitation of current models that typically neglect the intrinsic high-order connectivity within expert-question interactions, which is pivotal for collaborative effects. |
Qiyao Peng; Wenjun Wang; Hongtao Liu; Cuiying Huo; Minglai Shao; | ijcai | 2024-08-03 |
432 | Adaptive Contrastive Decoding in Retrieval-Augmented Generation for Handling Noisy Contexts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: When using large language models (LLMs) in knowledge-intensive tasks, such as open-domain question answering, external context can bridge the gap between external knowledge and the LLMs’ parametric knowledge. |
YOUNA KIM et. al. | arxiv-cs.CL | 2024-08-02 |
433 | DebateQA: Evaluating Question Answering on Debatable Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, traditional QA benchmarks assume fixed answers are inadequate for this purpose. To address this, we introduce DebateQA, a dataset of 2,941 debatable questions, each accompanied by multiple human-annotated partial answers that capture a variety of perspectives. |
Rongwu Xu; Xuan Qi; Zehan Qi; Wei Xu; Zhijiang Guo; | arxiv-cs.CL | 2024-08-02 |
434 | BioRAG: A RAG-LLM Framework for Biological Question Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The question-answering system for Life science research, which is characterized by the rapid pace of discovery, evolving insights, and complex interactions among knowledge entities, presents unique challenges in maintaining a comprehensive knowledge warehouse and accurate information retrieval. To address these issues, we introduce BioRAG, a novel Retrieval-Augmented Generation (RAG) with the Large Language Models (LLMs) framework. |
CHENGRUI WANG et. al. | arxiv-cs.CL | 2024-08-02 |
435 | Towards Flexible Evaluation for Generative Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Although Visual Question Answering (VQA) could serve as a developed test field, limitations of VQA evaluation, like the inflexible pattern of Exact Match, have hindered MLLMs from demonstrating their real capability and discourage rich responses. Therefore, this paper proposes the use of semantics-based evaluators for assessing unconstrained open-ended responses on VQA datasets. |
Huishan Ji; Qingyi Si; Zheng Lin; Weiping Wang; | arxiv-cs.CV | 2024-08-01 |
436 | MKEAH: Multimodal Knowledge Extraction and Accumulation Based on Hyperplane Embedding for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
HENG ZHANG et. al. | Virtual Real. Intell. Hardw. | 2024-08-01 |
437 | Transformer-based Vision-language Alignment for Robot Navigation and Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Haonan Luo; Ziyu Guo; Zhenyu Wu; Fei Teng; Tian-Jie Li; | Inf. Fusion | 2024-08-01 |
438 | Prompting Medical Large Vision-Language Models to Diagnose Pathologies By Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose two prompting strategies for MLVLMs that reduce hallucination and improve VQA performance. |
Danfeng Guo; Demetri Terzopoulos; | arxiv-cs.CV | 2024-07-31 |
439 | Decomposed Prompting to Answer Questions on A Course Discussion Board Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose and evaluate a question-answering system that uses decomposed prompting to classify and answer student questions on a course discussion board. |
BRANDON JAIPERSAUD et. al. | arxiv-cs.CL | 2024-07-30 |
440 | Boosting Audio Visual Question Answering Via Key Semantic-Aware Cues Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a Temporal-Spatial Perception Model (TSPM), which aims to empower the model to perceive key visual and auditory cues related to the questions. |
Guangyao Li; Henghui Du; Di Hu; | arxiv-cs.CV | 2024-07-30 |
441 | Advancing Vietnamese Visual Question Answering with Transformer and Convolutional Integration Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Despite the prevalence of approaches in English, there is a notable lack of systems specifically developed for certain languages, particularly Vietnamese. This study aims to bridge this gap by conducting comprehensive experiments on the Vietnamese Visual Question Answering (ViVQA) dataset, demonstrating the effectiveness of our proposed model. |
Ngoc Son Nguyen; Van Son Nguyen; Tung Le; | arxiv-cs.CV | 2024-07-30 |
442 | SimpleLLM4AD: An End-to-End Vision-Language Model with Graph Visual Question Answering for Autonomous Driving Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Here, by utilizing a vision-language model (VLM), we propose an end-to-end autonomous driving (e2eAD) method called SimpleLLM4AD. |
Peiru Zheng; Yun Zhao; Zhan Gong; Hong Zhu; Shaohua Wu; | arxiv-cs.CV | 2024-07-30 |
443 | Pyramid Coder: Hierarchical Code Generator for Compositional Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, there are challenges in enabling LLMs to comprehend the usage of image processing modules and generate relevant code. To overcome these challenges, this paper introduces PyramidCoder, a novel prompting framework for PVQA models. |
Ruoyue Shen; Nakamasa Inoue; Koichi Shinoda; | arxiv-cs.CV | 2024-07-30 |
444 | Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To fill the gap, we propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development. |
Xingchen Zeng; Haichuan Lin; Yilin Ye; Wei Zeng; | arxiv-cs.CV | 2024-07-29 |
445 | AdaCoder: Adaptive Prompt Compression for Programmatic Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, they often require long input prompts to provide the LLM with sufficient API usage details to generate relevant code. To address this limitation, we propose AdaCoder, an adaptive prompt compression framework for VPMs. |
Mahiro Ukai; Shuhei Kurita; Atsushi Hashimoto; Yoshitaka Ushiku; Nakamasa Inoue; | arxiv-cs.AI | 2024-07-28 |
446 | Answerability Fields: Answerable Location Estimation Via Diffusion Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose Answerability Fields, a novel approach to predicting answerability within complex indoor environments. |
Daichi Azuma; Taiki Miyanishi; Shuhei Kurita; Koya Sakamoto; Motoaki Kawanabe; | arxiv-cs.CV | 2024-07-26 |
447 | A Role-specific Guided Large Language Model for Ophthalmic Consultation Based on Stylistic Differentiation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose EyeDoctor, an ophthalmic medical questioning large language model that enhances accuracy through doctor-patient role perception guided and an augmented knowledge base with external disease information. |
LAIYI FU et. al. | arxiv-cs.CL | 2024-07-25 |
448 | Constructing The CORD-19 Vaccine Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce new dataset ‘CORD-19-Vaccination’ to cater to scientists specifically looking into COVID-19 vaccine-related research. |
Manisha Singh; Divy Sharma; Alonso Ma; Bridget Tyree; Margaret Mitchell; | arxiv-cs.CL | 2024-07-25 |
449 | Audio Entailment: Assessing Deductive Reasoning for Audio Understanding Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the novel task of Audio Entailment to evaluate an ALM’s deductive reasoning ability. |
SOHAM DESHMUKH et. al. | arxiv-cs.SD | 2024-07-25 |
450 | The Geometry of Queries: Query-Based Innovations in Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce Query-Based Retrieval Augmented Generation (QB-RAG), a novel approach that pre-computes a database of potential queries from a content base using LLMs. |
Eric Yang; Jonathan Amar; Jong Ha Lee; Bhawesh Kumar; Yugang Jia; | arxiv-cs.LG | 2024-07-25 |
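The query-based retrieval idea in this entry (matching an incoming question against pre-generated questions rather than against raw content) can be sketched as below; `llm`, `embed`, and the data layout are assumptions for illustration, not the QB-RAG implementation.

```python
# Hedged sketch of query-based retrieval: candidate questions are pre-generated
# per content chunk, and a user question is matched against those questions.
# `llm` and `embed` are assumed placeholder helpers.

import numpy as np

def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder LLM call

def embed(text: str) -> np.ndarray:
    raise NotImplementedError  # placeholder embedding call

def build_query_index(chunks: list[str]) -> list[tuple[np.ndarray, str]]:
    index = []
    for chunk in chunks:
        # Pre-compute a few questions each chunk can answer, store (embedding, chunk).
        for q in llm(f"Write three questions this passage answers:\n{chunk}").splitlines():
            if q.strip():
                index.append((embed(q), chunk))
    return index

def retrieve(question: str, index: list[tuple[np.ndarray, str]]) -> str:
    q_vec = embed(question)
    # Cosine similarity against the pre-generated queries; return the best chunk.
    best = max(
        index,
        key=lambda item: float(q_vec @ item[0])
        / (np.linalg.norm(q_vec) * np.linalg.norm(item[0]) + 1e-9),
    )
    return best[1]
```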
451 | 3D Question Answering for City Scene Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: From the method perspective, we propose a Scene graph enhanced City-level Understanding method (Sg-CityU), which utilizes the scene graph to introduce spatial semantics. |
PENGLEI SUN et. al. | arxiv-cs.CV | 2024-07-24 |
452 | ScholarChemQA: Unveiling The Power of Language Models in Chemical Research Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Correspondingly, we introduce a QAMatch model, specifically designed to effectively answer chemical questions by fully leveraging our collected data. |
XIUYING CHEN et. al. | arxiv-cs.CL | 2024-07-23 |
453 | Exploring The Effectiveness of Object-Centric Representations in Visual Question Answering: Comparative Insights with Foundation Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we conduct an extensive empirical study on representation learning for downstream Visual Question Answering (VQA), which requires an accurate compositional understanding of the scene. |
Amir Mohammad Karimi Mamaghan; Samuele Papa; Karl Henrik Johansson; Stefan Bauer; Andrea Dittadi; | arxiv-cs.CV | 2024-07-22 |
454 | KaPQA: Knowledge-Augmented Product Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, accurately assessing the performance of these applications remains a challenge, mainly due to the lack of suitable benchmarks that effectively simulate real-world scenarios. To address this challenge, we introduce two product question-answering (QA) datasets focused on Adobe Acrobat and Photoshop products to help evaluate the performance of existing models on domain-specific product QA tasks. |
SWETHA EPPALAPALLY et. al. | arxiv-cs.CL | 2024-07-22 |
455 | MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To construct MMInstruct, we propose an instruction generation data engine that leverages GPT-4V, GPT-3.5, and manual correction. |
YANGZHOU LIU et. al. | arxiv-cs.CV | 2024-07-22 |
456 | OMoS-QA: A Dataset for Cross-Lingual Extractive Question Answering in A German Migration Context Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we present OMoS-QA, a dataset of German and English questions paired with relevant trustworthy documents and manually annotated answers, specifically tailored to this scenario. |
Steffen Kleinle; Jakob Prange; Annemarie Friedrich; | arxiv-cs.CL | 2024-07-22 |
457 | RadioRAG: Factual Large Language Models for Enhanced Diagnostics in Radiology Using Dynamic Retrieval Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Large language models (LLMs) have advanced the field of artificial intelligence (AI) in medicine. |
SOROOSH TAYEBI ARASTEH et. al. | arxiv-cs.CL | 2024-07-22 |
458 | Customized Retrieval Augmented Generation and Benchmarking for EDA Tool Documentation QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Off-the-shelf RAG flows are well pretrained on general-purpose documents, yet they encounter significant challenges when being applied to knowledge-intensive vertical domains, such as electronic design automation (EDA). This paper addresses such issue by proposing a customized RAG framework along with three domain-specific techniques for EDA tool documentation QA, including a contrastive learning scheme for text embedding model fine-tuning, a reranker distilled from proprietary LLM, and a generative LLM fine-tuned with high-quality domain corpus. |
Yuan Pu; Zhuolun He; Tairu Qiu; Haoyuan Wu; Bei Yu; | arxiv-cs.CL | 2024-07-21 |
459 | Knowledge Acquisition Disentanglement for Knowledge-based Visual Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Furthermore, the “forward-only” answering process fails to explicitly capture the knowledge needs of LLMs, which can further hurt answering quality. To cope with the above limitations, we propose DKA: Disentangled Knowledge Acquisition from LLM feedback, a training-free framework that disentangles knowledge acquisition to avoid confusion and uses LLM’s feedback to specify the required knowledge. |
WENBIN AN et. al. | arxiv-cs.CV | 2024-07-21 |
460 | End-to-End Video Question Answering with Frame Scoring Mechanisms and Adaptive Sampling Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Simply uniformly sampling frames or indiscriminately aggregating frame-level visual features often falls short in capturing the nuanced and relevant contexts of videos needed to perform VideoQA well. To mitigate these issues, we propose VidF4, a novel VideoQA framework equipped with a tailored frame selection strategy for effective and efficient VideoQA. |
JIANXIN LIANG et. al. | arxiv-cs.CV | 2024-07-21 |
461 | Generalization Vs. Memorization: Tracing Language Models’ Capabilities Back to Pretraining Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To effectively capture task-specific pretraining data frequency, we propose a novel task-gram language model, which is built by counting the co-occurrence of semantically related n-gram pairs from task inputs and outputs in the pretraining corpus. |
XINYI WANG et. al. | arxiv-cs.CL | 2024-07-20 |
462 | Evaluating Language Models As Risk Scores Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we focus on the use of LLMs as risk scores for unrealizable prediction tasks. |
André F. Cruz; Moritz Hardt; Celestine Mendler-Dünner; | arxiv-cs.LG | 2024-07-19 |
463 | INDIC QA BENCHMARK: A Multilingual Benchmark to Evaluate Question Answering Capability of LLMs for Indic Languages Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the evaluation of LLMs’ capabilities in non-English languages for context-based QA is limited by the scarcity of benchmarks in non-English languages. To address this gap, we introduce Indic-QA, the largest publicly available context-grounded question-answering dataset for 11 major Indian languages from two language families. |
Abhishek Kumar Singh; Rudra Murthy; Vishwajeet kumar; Jaydeep Sen; Ganesh Ramakrishnan; | arxiv-cs.LG | 2024-07-18 |
464 | Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Towards a solution, we introduce MIRAGE (Multi-Image Retrieval Augmented Generation), an open-source, lightweight visual-RAG framework that processes up to 10k images on a single 40G A100 GPU — far surpassing the 1k-image limit of contemporary models. |
TSUNG-HAN WU et. al. | arxiv-cs.CV | 2024-07-18 |
465 | Clinical Reading Comprehension with Encoder-Decoder Models Enhanced By Direct Preference Optimization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we combine encoder-decoder models with the direct preference optimization (DPO) method to improve over prior state of the art for the RadQA radiology question answering task by 12-15 F1 points. |
Md Sultan Al Nahian; Ramakanth Kavuluru; | arxiv-cs.IR | 2024-07-18 |
466 | Retrieve, Summarize, Plan: Advancing Multi-hop Question Answering with An Iterative Approach Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel iterative RAG method called ReSP, equipped with a dual-function summarizer. |
Zhouyu Jiang; Mengshu Sun; Lei Liang; Zhiqiang Zhang; | arxiv-cs.CL | 2024-07-17 |
467 | EchoSight: Advancing Visual-Language Models with Wiki Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce EchoSight, a novel multimodal Retrieval-Augmented Generation (RAG) framework that enables large language models (LLMs) to answer visual questions requiring fine-grained encyclopedic knowledge. |
Yibin Yan; Weidi Xie; | arxiv-cs.CV | 2024-07-17 |
468 | Continual Learning for Temporal-Sensitive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we explore an emerging research area of Continual Learning for Temporal Sensitive Question Answering (CLTSQA). |
WANQI YANG et. al. | arxiv-cs.CL | 2024-07-17 |
469 | Search Engines, LLMs or Both? Evaluating Information Seeking Strategies for Answering Health Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we focus on their merits in answering health questions. |
Marcos Fernández-Pichel; Juan C. Pichel; David E. Losada; | arxiv-cs.IR | 2024-07-17 |
470 | TurkishMMLU: Measuring Massive Multitask Language Understanding in Turkish Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the first multitask, multiple-choice Turkish QA benchmark, TurkishMMLU, to evaluate LLMs’ understanding of the Turkish language. |
Arda Yüksel; Abdullatif Köksal; Lütfi Kerem Şenel; Anna Korhonen; Hinrich Schütze; | arxiv-cs.CL | 2024-07-17 |
471 | Localizing and Mitigating Errors in Long-form Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This work introduces HaluQuestQA, the first hallucination dataset with localized error annotations for human-written and model-generated LFQA answers. |
Rachneet Sachdeva; Yixiao Song; Mohit Iyyer; Iryna Gurevych; | arxiv-cs.CL | 2024-07-16 |
472 | Reasoning with Large Language Models, A Survey Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We provide an in-depth coverage of core approaches and open problems, and we propose a research agenda for the near future. |
ASKE PLAAT et. al. | arxiv-cs.AI | 2024-07-16 |
473 | TM-PATHVQA: 90000+ Textless Multilingual Questions for Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, this work implements a speech-based VQA system by introducing a Textless Multilingual Pathological VQA (TMPathVQA) dataset, an expansion of the PathVQA dataset, containing spoken questions in English, German & French. |
Tonmoy Rajkhowa; Amartya Roy Chowdhury; Sankalp Nagaonkar; Achyut Mani Tripathi; | arxiv-cs.CV | 2024-07-16 |
474 | Multimodal Reranking for Knowledge-Intensive Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce an additional module, a multi-modal reranker, to improve the ranking quality of knowledge candidates for answer generation. |
Haoyang Wen; Honglei Zhuang; Hamed Zamani; Alexander Hauptmann; Michael Bendersky; | arxiv-cs.CL | 2024-07-16 |
475 | Video-Language Alignment Via Spatio-Temporal Graph Transformer Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel Spatio-Temporal Graph Transformer module to uniformly learn spatial and temporal contexts for video-language alignment pre-training (dubbed STGT). |
SHI-XUE ZHANG et. al. | arxiv-cs.CV | 2024-07-16 |
476 | Unraveling The Truth: Do VLMs Really Understand Charts? A Deep Dive Into Consistency and Robustness Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We investigate two key aspects: 1) the models’ ability to handle varying levels of chart and question complexity, and 2) their robustness across different visual representations of the same underlying data. |
SRIJA MUKHOPADHYAY et. al. | arxiv-cs.CL | 2024-07-15 |
477 | Evaluation of RAG Metrics for Question Answering in The Telecom Domain Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Retrieval Augmented Generation (RAG) is widely used to enable Large Language Models (LLMs) to perform Question Answering (QA) tasks in various domains. However, RAG based on … |
SUJOY ROYCHOWDHURY et. al. | ArXiv | 2024-07-15 |
478 | Graphusion: Leveraging Large Language Models for Scientific Knowledge Graph Fusion and Construction in NLP Education Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce Graphusion, a zero-shot KGC framework from free text. |
RUI YANG et. al. | arxiv-cs.CL | 2024-07-15 |
479 | RAG-Ex: A Generic Framework for Explaining Retrieval Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce RAG-Ex, a model- and language-agnostic explanation framework that presents approximate explanations to the users revealing why the LLMs possibly generated a piece of text as a response, given the user input. |
Viju Sudhi; Sinchana Ramakanth Bhat; Max Rudat; Roman Teucher; | sigir | 2024-07-14 |
480 | A Question-Answering Assistant Over Personal Knowledge Graph Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Based on a fine-grained schema customized for PKG, the PKGQA system in this paper comprises Symbolic Semantic Parsing, Frequently Asked Question (FAQ) Semantic Matching, and Neural Semantic Parsing modules, which are designed to take into account both accuracy and efficiency. |
LINGYUAN LIU et. al. | sigir | 2024-07-14 |
481 | Towards Robust QA Evaluation Via Open LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite their remarkable capabilities, proprietary LLMs are costly and subject to internal changes that can affect their output, which inhibits the reproducibility of their results and limits the widespread adoption of LLM-based evaluation. In this demo, we aim to use publicly available LLMs for standardizing LLM-based QA evaluation. |
Ehsan Kamalloo; Shivani Upadhyay; Jimmy Lin; | sigir | 2024-07-14 |
482 | CIQA: A Coding Inspired Question Answering Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a novel domain-agnostic model to address the problem by leveraging domain-specific and open-source code libraries. |
Mousa Arraf; Kira Radinsky; | sigir | 2024-07-14 |
483 | GenSco: Can Question Decomposition Based Passage Alignment Improve Question Answering? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigate whether providing aligned context via a carefully selected passage sequence leads to better answer generation by the LLM for multi-hop QA. |
Barah Fazili; Koustava Goswami; Natwar Modani; Inderjeet Nair; | arxiv-cs.CL | 2024-07-14 |
484 | Retrieval-Augmented Generation with Knowledge Graphs for Customer Service Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a novel customer service question-answering method that amalgamates RAG with a knowledge graph (KG). |
ZHENTAO XU et. al. | sigir | 2024-07-14 |
485 | ArabicaQA: A Comprehensive Dataset for Arabic Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we address the significant gap in Arabic natural language processing (NLP) resources by introducing ArabicaQA, the first large-scale dataset for machine reading comprehension and open-domain question answering in Arabic. |
ABDELRAHMAN ABDALLAH et. al. | sigir | 2024-07-14 |
486 | Are Large Language Models Good at Utility Judgments? Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: (iv) We propose a k-sampling, listwise approach to reduce the dependency of LLMs on the sequence of input passages, thereby facilitating subsequent answer generation. |
HENGRAN ZHANG et. al. | sigir | 2024-07-14 |
487 | ChroniclingAmericaQA: A Large-scale Question Answering Dataset Based on Historical American Newspaper Pages Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To further contribute to advancing QA and MRC tasks and to overcome the limitation of previous datasets, we introduce ChroniclingAmericaQA, a large-scale temporal QA dataset with 487K question-answer pairs created based on the historical newspaper collection Chronicling America. |
Bhawna Piryani; Jamshid Mozafari; Adam Jatowt; | sigir | 2024-07-14 |
488 | Let Me Show You Step By Step: An Interpretable Graph Routing Network for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel interpretable graph routing network (GRN) which explicitly conducts entity routing over a constructed scene knowledge graph step by step for KB-VQA. |
DUOKANG WANG et. al. | sigir | 2024-07-14 |
489 | Boosting Conversational Question Answering with Fine-Grained Retrieval-Augmentation and Self-Check Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a conversation-level RAG (ConvRAG) approach, which incorporates fine-grained retrieval augmentation and self-check for conversational question answering (CQA). |
LINHAO YE et. al. | sigir | 2024-07-14 |
490 | Can LLMs Master Math? Investigating Large Language Models on Math Stack Exchange Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we follow a two-step approach to investigating the proficiency of LLMs in answering mathematical questions. |
ANKIT SATPUTE et. al. | sigir | 2024-07-14 |
491 | NativQA: Multilingual Culturally-Aligned Natural Query for LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose a scalable, language-independent framework, NativQA, to seamlessly construct culturally and regionally aligned QA datasets in native languages, for LLM evaluation and tuning. |
MD. ARID HASAN et. al. | arxiv-cs.CL | 2024-07-13 |
492 | One Stone, Four Birds: A Comprehensive Solution for QA System Using Supervised Contrastive Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a novel and comprehensive solution to enhance both the robustness and efficiency of question answering (QA) systems through supervised contrastive learning (SCL). |
Bo Wang; Tsunenori Mine; | arxiv-cs.CL | 2024-07-12 |
493 | Bridging The Gap Between Information Seeking and Product Search Systems: Q&A Recommendation for E-commerce Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The recent success of Large Language Models (LLMs) has opened up an opportunity to bridge the gap between the two tasks to help customers achieve their goals quickly and effectively by integrating conversational QA within product search. In this paper, we propose to recommend users Question-Answer (Q&A) pairs that are relevant to their product search and can help them make a purchase decision. |
Saar Kuzi; Shervin Malmasi; | arxiv-cs.CL | 2024-07-12 |
494 | Segmentation-guided Attention for Visual Question Answering from Remote Sensing Images Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose to embed an attention mechanism guided by segmentation into a RSVQA pipeline. |
LUCREZIA TOSATO et. al. | arxiv-cs.CV | 2024-07-11 |
495 | Uncertainty Estimation of Large Language Models in Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we benchmark popular UE methods with different model sizes on medical question-answering datasets. |
Jiaxin Wu; Yizhou Yu; Hong-Yu Zhou; | arxiv-cs.CL | 2024-07-11 |
496 | AutoBencher: Creating Salient, Novel, Difficult Datasets for Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present three desiderata for a good benchmark for language models: (i) salience (e.g., knowledge about World War II is more salient than a random day in history), (ii) novelty (i.e., the benchmark reveals new trends in model rankings not shown by previous benchmarks), and (iii) difficulty (i.e., the benchmark should be difficult for existing models, leaving headroom for future improvement). |
Xiang Lisa Li; Evan Zheran Liu; Percy Liang; Tatsunori Hashimoto; | arxiv-cs.CL | 2024-07-11 |
497 | Examining Long-Context Large Language Models for Environmental Review Document Comprehension Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Long context and retrieval-augmented generation (RAG) are two such methods that have recently gained popularity. In this work, we examine the benefits of both of these techniques by utilizing a question answering (QA) task in a niche domain. |
HUNG PHAN et. al. | arxiv-cs.CL | 2024-07-09 |
498 | MST5 — Multilingual Question Answering Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this research, we propose a simplified approach to enhance multilingual KGQA systems by incorporating linguistic context and entity information directly into the processing pipeline of a language model. |
NIKIT SRIVASTAVA et. al. | arxiv-cs.CL | 2024-07-08 |
499 | Sponsored Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present the first formal analysis of a sponsored QA platform. |
Tommy Mordo; Moshe Tennenholtz; Oren Kurland; | arxiv-cs.GT | 2024-07-05 |
500 | On Scalable Oversight with Weak LLMs Judging Strong LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we study debate, where two AIs compete to convince a judge; consultancy, where a single AI tries to convince a judge that asks questions; and we compare both to a baseline of direct question-answering, where the judge just answers outright without the AI. |
ZACHARY KENTON et. al. | arxiv-cs.LG | 2024-07-05 |
501 | Second Place Solution of WSDM2023 Toloka Visual Question Answering Challenge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present our solution for the WSDM2023 Toloka Visual Question Answering Challenge. |
Xiangyu Wu; Zhouyang Chi; Yang Yang; Jianfeng Lu; | arxiv-cs.CV | 2024-07-05 |
502 | Question Answering with Texts and Tables Through Deep Reinforcement Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes a novel architecture to generate multi-hop answers to open domain questions that require information from texts and tables, using the Open Table-and-Text Question Answering dataset for validation and training. |
MARCOS M. JOSÉ et. al. | arxiv-cs.CL | 2024-07-05 |
503 | Black-box Model Ensembling for Textual and Visual Question Answering Via Information Fusion Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, fine-tuning these models is either difficult, as it requires access via APIs, rendering them as black-boxes, or costly due to the need of tuning a large number of parameters. To address this, we introduce InfoSel, a data-efficient ensemble method that learns to dynamically pick the winner from existing black-box models for predictions on both textual and multimodal visual question answering tasks. |
Yuxi Xia; Klim Zaporojets; Benjamin Roth; | arxiv-cs.CL | 2024-07-04 |
504 | Leveraging Topic Specificity and Social Relationships for Expert Finding in Community Question Answering Platforms Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present TUEF, a Topic-oriented User-Interaction model for Expert Finding, which aims to fully and transparently leverage the heterogeneous information available within online question-answering communities. |
Maddalena Amendola; Andrea Passarella; Raffaele Perego; | arxiv-cs.IR | 2024-07-04 |
505 | STOC-TOT: Stochastic Tree-of-Thought with Constrained Decoding for Complex Reasoning in Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose STOC-TOT, a stochastic tree-of-thought reasoning prompting method with constrained decoding for MHQA and conduct a detailed comparison with other reasoning prompts on different question types and reasoning types. |
Zhenyu Bi; Daniel Hajialigol; Zhongkai Sun; Jie Hao; Xuan Wang; | arxiv-cs.CL | 2024-07-04 |
506 | Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a robust discriminator named RelD to effectively detect hallucination in LLMs’ generated answers. |
YUYAN CHEN et. al. | arxiv-cs.CL | 2024-07-04 |
507 | FSM: A Finite State Machine Based Zero-Shot Prompting Paradigm for Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a prompting method, Finite State Machine (FSM), to enhance the reasoning capabilities of LLMs on complex tasks, in addition to improving effectiveness and trustworthiness. |
XIAOCHEN WANG et. al. | arxiv-cs.CL | 2024-07-03 |
508 | VDMA: Video Question Answering with Dynamically Generated Multi-Agents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose Video Question Answering with Dynamically Generated Multi-Agents (VDMA). |
Noriyuki Kugo; Tatsuya Ishibashi; Kosuke Ono; Yuji Sato; | arxiv-cs.CV | 2024-07-03 |
509 | Visual Robustness Benchmark for Visual Question Answering (VQA) Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose the first large-scale benchmark comprising 213,000 augmented images, challenging the visual robustness of multiple VQA models and assessing the strength of realistic visual corruptions. |
MD FARHAN ISHMAM et. al. | arxiv-cs.CV | 2024-07-03 |
510 | Align and Aggregate: Compositional Reasoning with Video Alignment and Answer Aggregation for Video Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the recent progress made in Video Question-Answering (VideoQA), these methods typically function as black-boxes, making it difficult to understand their reasoning processes and perform consistent compositional reasoning. To address these challenges, we propose a model-agnostic Video Alignment and Answer Aggregation (VA³) framework, which is capable of enhancing both compositional consistency and accuracy of existing VidQA methods by integrating video aligner and answer aggregator modules. |
Zhaohe Liao; Jiangtong Li; Li Niu; Liqing Zhang; | arxiv-cs.CV | 2024-07-03 |
511 | UnSeenTimeQA: Time-Sensitive Question-Answering Beyond LLMs’ Memorization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces UnSeenTimeQA, a novel data contamination-free time-sensitive question-answering (TSQA) benchmark. |
MD NAYEM UDDIN et. al. | arxiv-cs.CL | 2024-07-03 |
512 | Calibrated Large Language Models for Binary Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a novel approach that utilizes the inductive Venn–Abers predictor (IVAP) to calibrate the probabilities associated with the output tokens corresponding to the binary labels. |
Patrizio Giovannotti; Alexander Gammerman; | arxiv-cs.CL | 2024-07-01 |
513 | M2QA: Multi-domain Multilingual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This prevents the transfer of NLP systems from well-resourced languages and domains to non-dominant language-domain combinations. To address this gap, we introduce M2QA, a multi-domain multilingual question answering benchmark. |
LEON ENGLÄNDER et. al. | arxiv-cs.CL | 2024-07-01 |
514 | DSAMR: Dual-Stream Attention Multi-hop Reasoning for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
YANHAN SUN et. al. | Expert Syst. Appl. | 2024-07-01 |
515 | Incorporating Multi-perspective Information Into Reinforcement Learning to Address Multi-hop Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
CHUANYANG GONG et. al. | Expert Syst. Appl. | 2024-07-01 |
516 | Explainable Knowledge Reasoning Via Thought Chains for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Chen Qiu; Zhiqiang Xie; Maofu Liu; Huijun Hu; | Inf. Process. Manag. | 2024-07-01 |
517 | The Solution for The ICCV 2023 Perception Test Challenge 2023 — Task 6 — Grounded VideoQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a grounded video question-answering solution. |
Hailiang Zhang; Dian Chao; Zhihao Guan; Yang Yang; | arxiv-cs.CV | 2024-07-01 |
518 | Event-centric Hierarchical Hyperbolic Graph for Multi-hop Question Answering Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View |
Xun Zhu; Wang Gao; Tianyu Li; Wenguang Yao; Hongtao Deng; | Eng. Appl. Artif. Intell. | 2024-07-01 |
519 | Dynamic Few-Shot Learning for Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we introduce a novel approach called Dynamic Few-Shot Learning (DFSL). |
Jacopo D’Abramo; Andrea Zugarini; Paolo Torroni; | arxiv-cs.CL | 2024-07-01 |
520 | Hierarchical Memory for Long Video QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper describes our champion solution to the LOVEU Challenge @ CVPR’24, Track 1 (Long Video VQA). |
YIQIN WANG et. al. | arxiv-cs.CV | 2024-06-30 |
521 | BioKGBench: A Knowledge Graph Checking Benchmark of AI Agent for Biomedical Science Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: On the widely used popular knowledge graph, we discover over 90 factual errors which provide scenarios for agents to make discoveries and demonstrate the effectiveness of our approach. |
XINNA LIN et. al. | arxiv-cs.CL | 2024-06-29 |
522 | H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Existing methods employ either textual reasoning, which excels in semantic interpretation but struggles with mathematical operations, or symbolic reasoning, which handles computations well but lacks semantic understanding. This paper introduces a novel algorithm H-STAR that integrates both symbolic and semantic (textual) approaches in a two-stage process to address these limitations. |
Nikhil Abhyankar; Vivek Gupta; Dan Roth; Chandan K. Reddy; | arxiv-cs.DB | 2024-06-29 |
523 | STLLaVA-Med: Self-Training Large Language and Vision Assistant for Medical Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, the advancement of medical image understanding and reasoning critically depends on building high-quality visual instruction data, which is costly and labor-intensive to obtain, particularly in the medical domain. To mitigate this data-starving issue, we introduce Self-Training Large Language and Vision Assistant for Medicine (STLLaVA-Med). |
Guohao Sun; Can Qin; Huazhu Fu; Linwei Wang; Zhiqiang Tao; | arxiv-cs.CV | 2024-06-28 |
524 | Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Existing approaches at the intersection of Continual Learning and Visual Question Answering (VQA) do not study how the multimodal nature of the input affects the learning dynamics of a model. In this paper, we demonstrate that each modality evolves at different rates across a continuum of tasks and that this behavior occurs in established encoder-only models as well as modern recipes for developing Vision & Language (VL) models. |
Malvina Nikandrou; Georgios Pantazopoulos; Ioannis Konstas; Alessandro Suglia; | arxiv-cs.CV | 2024-06-27 |
525 | Follow-Up Questions Improve Documents Generated By Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study investigates the impact of Large Language Models (LLMs) generating follow-up questions in response to user requests for short (1-page) text documents. |
Bernadette J Tix; | arxiv-cs.CL | 2024-06-27 |
526 | TrustUQA: A Trustful Framework for Unified Structured Data Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose TrustUQA, a trustful QA framework that can simultaneously support multiple types of structured data in a unified way. |
WEN ZHANG et. al. | arxiv-cs.CL | 2024-06-27 |
527 | FlowVQA: Mapping Multimodal Logic in Visual Question Answering with Flowcharts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce FlowVQA, a novel benchmark aimed at assessing the capabilities of visual question-answering multimodal language models in reasoning with flowcharts as visual contexts. |
SHUBHANKAR SINGH et. al. | arxiv-cs.CL | 2024-06-27 |
528 | Context Matters: An Empirical Study of The Impact of Contextual Information in Temporal Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce two new context-rich TQA datasets, ContextAQA and ContextTQE, and provide comprehensive evaluations and guidelines for training robust TQA models. |
DAN SCHUMACHER et. al. | arxiv-cs.CL | 2024-06-27 |
529 | Explicit Diversity Conditions for Effective Question Answer Generation with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present explicit diversity conditions for QAG, focusing on spatial aspects, question types, and entities, substantially increasing diversity in QA generation. |
Vikas Yadav; Hyuk Joon Kwon; Vijay Srinivasan; Hongxia Jin; | arxiv-cs.CL | 2024-06-25 |
530 | Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, existing benchmarks employ irrelevant noise texts to artificially extend the length of test cases, diverging from the real-world scenarios of long-context applications. To bridge this gap, we propose a novel long-context benchmark, Loong, aligning with realistic scenarios through extended multi-document question answering (QA). |
MINZHENG WANG et. al. | arxiv-cs.CL | 2024-06-25 |
531 | Advancing Question Answering on Handwritten Documents: A State-of-the-Art Recognition-Based Model for HW-SQuAD Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes a novel recognition-based approach that improves upon the previous state-of-the-art on the HW-SQuAD and BenthamQA datasets. |
Aniket Pal; Ajoy Mondal; C. V. Jawahar; | arxiv-cs.CV | 2024-06-25 |
532 | CaLMQA: Exploring Culturally Specific Long-form Question Answering Across 23 Languages Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: While LFQA has been well-studied in English, this research has not been extended to other languages. To bridge this gap, we introduce CaLMQA, a collection of 1.5K complex culturally specific questions spanning 23 languages and 51 culturally agnostic questions translated from English into 22 other languages. |
SHANE ARORA et. al. | arxiv-cs.CL | 2024-06-25 |
533 | Is Your Benchmark Truly Adversarial? AdvScore: Evaluating Human-Grounded Adversarialness Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Given the lack of a standardized metric for measuring adversarialness, we propose AdvScore, a human-grounded evaluation metric. |
Yoo Yeon Sung; Maharshi Gor; Eve Fleisig; Ishani Mondal; Jordan Lee Boyd-Graber; | arxiv-cs.CL | 2024-06-24 |
534 | Context-augmented Retrieval: A Novel Framework for Fast Information Retrieval Based Response Generation Using Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, this work proposes a new approach, Context Augmented Retrieval (CAR), which partitions the vector database by classifying information in real time as it flows into the corpus. |
Sai Ganesh; Anupam Purwar; Gautam B; | arxiv-cs.IR | 2024-06-24 |
535 | CogMG: Collaborative Augmentation Between Large Language Model and Knowledge Graph Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a collaborative augmentation framework, CogMG, leveraging knowledge graphs to address the limitations of LLMs in QA scenarios, explicitly targeting the problems of incomplete knowledge coverage and knowledge update misalignment. |
Tong Zhou; Yubo Chen; Kang Liu; Jun Zhao; | arxiv-cs.CL | 2024-06-24 |
536 | DEXTER: A Benchmark for Open-domain Complex Question Answering Using LLMs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: While retrieval performance for classical QA tasks is well explored, their capabilities for heterogeneous complex retrieval tasks, especially in an open-domain setting, and the impact on downstream QA performance, are relatively unexplored. To address this, in this work, we propose a benchmark comprising diverse complex QA tasks and provide a toolkit to evaluate state-of-the-art pre-trained dense and sparse retrieval models in an open-domain setting. |
Venktesh V.; Deepali Prabhu; Avishek Anand; | arxiv-cs.CL | 2024-06-24 |
537 | HCQA @ Ego4D EgoSchema Challenge 2024 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this report, we present our champion solution for Ego4D EgoSchema Challenge in CVPR 2024. |
HAOYU ZHANG et. al. | arxiv-cs.CV | 2024-06-22 |
538 | Tri-VQA: Triangular Reasoning Medical Visual Question Answering for Multi-Attribute Analysis Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigate the construction of a more cohesive and stable Med-VQA structure. |
Lin Fan; Xun Gong; Cenyang Zheng; Yafei Ou; | arxiv-cs.LG | 2024-06-21 |
539 | 70B-parameter Large Language Models in Japanese Medical Question-answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Here we utilize multiple 70B-parameter LLMs for the first time and show that instruction tuning using a Japanese medical question-answering dataset significantly improves the ability of Japanese LLMs to solve Japanese medical license exams, surpassing 50% accuracy. |
Issey Sukeda; Risa Kishikawa; Satoshi Kodera; | arxiv-cs.CL | 2024-06-21 |
540 | Generate-then-Ground in Retrieval-Augmented Generation for Multi-hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the performance of this retrieve-then-read paradigm is constrained by the retriever and the inevitable noise in the retrieved documents. To mitigate these challenges, we introduce a novel generate-then-ground (GenGround) framework, synergizing the parametric knowledge of LLMs and external documents to solve a multi-hop question. |
ZHENGLIANG SHI et. al. | arxiv-cs.CL | 2024-06-21 |
541 | Pregnant Questions: The Importance of Pragmatic Awareness in Maternal Health Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In a high-risk domain such as maternal and infant health, a question-answering system must recognize these pragmatic constraints and go beyond simply answering user questions, examining them in context to respond helpfully. To achieve this, we study assumptions and implications, or pragmatic inferences, made when mothers ask questions about pregnancy and infant care by collecting a dataset of 2,727 inferences from 500 questions across three diverse sources. |
NEHA SRIKANTH et. al. | naacl | 2024-06-20 |
542 | Mitigating Bias for Question Answering Models By Tracking Bias Influence Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose BMBI, an approach to mitigate the bias of multiple-choice QA models. |
MINGYU MA et. al. | naacl | 2024-06-20 |
543 | TRAQ: Trustworthy Retrieval Augmented Question Answering Via Conformal Prediction Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Retrieval augmented generation (RAG) is a promising strategy to avoid hallucinations, but it does not provide guarantees on its correctness. To address this challenge, we propose the Trustworthy Retrieval Augmented Question Answering, or *TRAQ*, which provides the first end-to-end statistical correctness guarantee for RAG. |
Shuo Li; Sangdon Park; Insup Lee; Osbert Bastani; | naacl | 2024-06-20 |
544 | AudioChatLlama: Towards General-Purpose Speech Abilities for LLMs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we extend the instruction-tuned Llama-2 model with end-to-end general-purpose speech processing and reasoning abilities while maintaining the wide range of original LLM capabilities, without using any carefully curated paired data. |
YASSIR FATHULLAH et. al. | naacl | 2024-06-20 |
545 | Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models Through Question Complexity IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose a novel adaptive QA framework that can dynamically select the most suitable strategy for (retrieval-augmented) LLMs from the simplest to the most sophisticated ones based on the query complexity. |
Soyeong Jeong; Jinheon Baek; Sukmin Cho; Sung Ju Hwang; Jong Park; | naacl | 2024-06-20 |
546 | CPopQA: Ranking Cultural Concept Popularity By LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the extent to which an LLM effectively captures corpus-level statistical trends of concepts for reasoning, especially long-tail ones, is largely underexplored. In this study, we introduce a novel few-shot question-answering task (CPopQA) that examines LLMs’ statistical ranking abilities for long-tail cultural concepts (e.g., holidays), particularly focusing on these concepts’ popularity in the United States and the United Kingdom, respectively. |
Ming Jiang; Mansi Joshi; | naacl | 2024-06-20 |
547 | On Narrative Question Answering Skills Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing task-level skill views oversimplify the multidimensional nature of tasks, while question-level taxonomies face issues in evaluation and methodology. To address these challenges, we introduce a more inclusive skill taxonomy that synthesizes and redefines narrative understanding skills from previous taxonomies and includes a generation skill dimension from the answering perspective. |
Emil Kalbaliyev; Kairit Sirts; | naacl | 2024-06-20 |
548 | Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, in contrast, we offer the first systematic analysis of the effect of fine-grained object grounding on LVLM hallucination under an evaluation protocol that more realistically captures LVLM hallucination in open generation. |
Gregor Geigle; Radu Timofte; Goran Glavaš; | arxiv-cs.CV | 2024-06-20 |
549 | LLaSA: A Multimodal LLM for Human Activity Analysis Through Wearable and Smartphone Sensors Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce LLaSA (Large Language and Sensor Assistant), a multimodal large language model built on LIMU-BERT and Llama, designed to interpret and answer queries related to human activities and motion analysis, leveraging sensor data and contextual reasoning. |
Sheikh Asif Imran; Mohammad Nur Hossain Khan; Subrata Biswas; Bashima Islam; | arxiv-cs.CL | 2024-06-20 |
550 | Learning to Plan for Retrieval-Augmented Large Language Models from Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce a novel framework for enhancing LLMs’ planning capabilities by using planning data derived from knowledge graphs (KGs). |
JUNJIE WANG et. al. | arxiv-cs.CL | 2024-06-20 |
551 | Is Prompt Transfer Always Effective? An Empirical Study of Prompt Transfer for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we characterize the question answering task based on features such as answer format and empirically investigate the transferability of soft prompts for the first time. |
Minji Jung; Soyeon Park; Jeewoo Sul; Yong Suk Choi; | naacl | 2024-06-20 |
552 | QPaug: Question and Passage Augmentation for Open-Domain Question Answering of LLMs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a simple yet efficient method called question and passage augmentation (QPaug) via LLMs for open-domain QA. |
Minsang Kim; Cheoneum Park; Seungjun Baek; | arxiv-cs.CL | 2024-06-20 |
553 | TTQA-RS: A Break-down Prompting Approach for Multi-hop Table-Text Question Answering with Reasoning and Summarization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we have proposed a Retrieval Augmented Generation (RAG) based model – TTQA-RS: A break-down prompting approach for Multi-hop Table-Text Question Answering with Reasoning and Summarization. |
Jayetri Bardhan; Bushi Xiao; Daisy Zhe Wang; | arxiv-cs.CL | 2024-06-20 |
554 | Towards Improved Multi-Source Attribution for Long-Form Answer Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite gaining increasing popularity for usage in QA systems and search engines, current LLMs struggle with attribution for long-form responses which require reasoning over multiple evidence sources. To address this, in this paper we aim to improve the attribution capability of LLMs for long-form answer generation to multiple sources, with multiple citations per sentence. |
Nilay Patel; Shivashankar Subramanian; Siddhant Garg; Pratyay Banerjee; Amita Misra; | naacl | 2024-06-20 |
555 | A Learn-Then-Reason Model Towards Generalization in Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: At the core of KBLLaMA, we study (1) how to organize new knowledge about KBQA and (2) how to facilitate the learning of the organized knowledge. |
Lingxi Zhang; Jing Zhang; Yanling Wang; Cuiping Li; Hong Chen; | arxiv-cs.CL | 2024-06-20 |
556 | Self-Prompting Large Language Models for Zero-Shot Open-Domain QA IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a Self-Prompting framework to explicitly utilize the massive knowledge encoded in the parameters of LLMs and their strong instruction understanding abilities. |
Junlong Li; Jinyuan Wang; Zhuosheng Zhang; Hai Zhao; | naacl | 2024-06-20 |
557 | SEMQA: Semi-Extractive Multi-Source Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a new QA task for answering multi-answer questions by summarizing multiple diverse sources in a semi-extractive fashion. |
TAL SCHUSTER et. al. | naacl | 2024-06-20 |
558 | SQATIN: Supervised Instruction Tuning Meets Question Answering for Improved Dialogue NLU Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce SQATIN, a new framework for dialog NLU based on (i) instruction tuning and (ii) question-answering-based formulation of ID and VE tasks. |
Evgeniia Razumovskaia; Goran Glavaš; Anna Korhonen; Ivan Vulić; | naacl | 2024-06-20 |
559 | Unveiling Divergent Inductive Biases of LLMs on Temporal Data Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Despite the adeptness of Large Language Models (LLMs) in discerning patterns and relationships from data, their inherent comprehension of temporal dynamics remains a formidable challenge. This research meticulously explores these intrinsic challenges within LLMs, with a specific emphasis on evaluating the performance of GPT-3. |
Sindhu Kishore; Hangfeng He; | naacl | 2024-06-20 |
560 | End-to-End Beam Retrieval for Multi-Hop Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce Beam Retrieval, an end-to-end beam retrieval framework for multi-hop QA. |
Jiahao Zhang; Haiyang Zhang; Dongmei Zhang; Liu Yong; Shen Huang; | naacl | 2024-06-20 |
561 | PlanRAG: A Plan-then-Retrieval Augmented Generation for Generative Large Language Models As Decision Makers Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we conduct a study to utilize LLMs as a solution for decision making that requires complex data analysis. |
Myeonghwa Lee; Seonho An; Min-Soo Kim; | naacl | 2024-06-20 |
562 | SynDARin: Synthesising Datasets for Automated Reasoning in Low-Resource Languages Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This means that producing novel models and measuring the performance of multilingual LLMs in low-resource languages is challenging. To mitigate this, we propose SynDARin, a method for generating and validating QA datasets for low-resource languages. |
Gayane Ghazaryan; Erik Arakelyan; Pasquale Minervini; Isabelle Augenstein; | arxiv-cs.CL | 2024-06-20 |
563 | Retrieval Helps or Hurts? A Deeper Dive Into The Efficacy of Retrieval Augmentation to Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, our goal is to offer a more detailed, fact-centric analysis by exploring the effects of combinations of entities and relations. |
Seiji Maekawa; Hayate Iso; Sairam Gurajada; Nikita Bhutani; | naacl | 2024-06-20 |
564 | FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we formalize three major desiderata for a fine-grained evaluation of robustness of TQA systems. |
Wei Zhou; Mohsen Mesgar; Heike Adel; Annemarie Friedrich; | naacl | 2024-06-20 |
565 | Evaluating RAG-Fusion with RAGElo: An Automated Elo-based Framework Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This results in difficulties in evaluating RAG variations, like RAG-Fusion (RAGF), in the context of a product QA task at Infineon Technologies. To solve these problems, we propose a comprehensive evaluation framework, which leverages Large Language Models (LLMs) to generate large datasets of synthetic queries based on real user queries and in-domain documents, uses LLM-as-a-judge to rate retrieved documents and answers, evaluates the quality of answers, and ranks different variants of Retrieval-Augmented Generation (RAG) agents with RAGElo’s automated Elo-based competition. |
Zackary Rackauckas; Arthur Câmara; Jakub Zavrel; | arxiv-cs.IR | 2024-06-20 |
566 | Temporal Knowledge Graph Question Answering: A Survey Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work aims to serve as a comprehensive reference for TKGQA and to stimulate further research. |
MIAO SU et. al. | arxiv-cs.CL | 2024-06-20 |
567 | Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we present MIRAGE (Model Internals-based RAG Explanations), a plug-and-play approach using model internals for faithful answer attribution in RAG applications. |
Jirui Qi; Gabriele Sarti; Raquel Fernández; Arianna Bisazza; | arxiv-cs.CL | 2024-06-19 |
568 | Thread: A Logic-Based Data Organization Paradigm for How-To Question Answering with Retrieval Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Specifically, we introduce a new knowledge granularity, termed ‘logic unit’, where documents are transformed into more structured and loosely interconnected logic units with large language models. |
KAIKAI AN et. al. | arxiv-cs.AI | 2024-06-19 |
569 | AlanaVLM: A Multimodal Embodied AI Foundation Model for Egocentric Video Understanding Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, current Vision-Language Models (VLMs) primarily focus on third-person view videos, neglecting the richness of egocentric perceptual experience. To address this gap, we propose three key contributions. First, we introduce the Egocentric Video Understanding Dataset (EVUD) for training VLMs on video captioning and question answering tasks specific to egocentric videos. |
ALESSANDRO SUGLIA et. al. | arxiv-cs.CV | 2024-06-19 |
570 | Towards Robust Evaluation: A Comprehensive Taxonomy of Datasets and Metrics for Open Domain Question Answering in The Era of Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a novel taxonomy for ODQA datasets that incorporates both the modality and difficulty of the question types. |
Akchay Srivastava; Atif Memon; | arxiv-cs.CL | 2024-06-19 |
571 | QRMeM: Unleash The Length Limitation Through Question Then Reflection Memory Mechanism Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, existing techniques face challenges with static knowledge integration, leading to insufficient adaptation to task-specific needs and missing multi-segmentation relationships, which hinders the dynamic reorganization and logical combination of relevant segments during the response process. To address these issues, we introduce a novel strategy, Question then Reflection Memory Mechanism (QRMeM), incorporating a dual-structured memory pool. |
BO WANG et. al. | arxiv-cs.CL | 2024-06-18 |
572 | LIVE: Learnable In-Context Vector for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we propose Learnable In-Context VEctor (LIVE) to distill essential task information from demonstrations, improving ICL performance in LMMs. |
YINGZHE PENG et. al. | arxiv-cs.CL | 2024-06-18 |
573 | Problem-Solving in Language Model Networks Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To improve the reasoning and question-answering capabilities of Large Language Models (LLMs), several multi-agent approaches have been introduced. |
Ciaran Regan; Alexandre Gournail; Mizuki Oka; | arxiv-cs.AI | 2024-06-18 |
574 | On The Robustness of Language Models for Tabular Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We highlight the need for improved methodologies, including structure-aware self-attention mechanisms and better handling of domain-specific tabular data, to develop more reliable LLMs for table comprehension. |
Kushal Raj Bhandari; Sixue Xing; Soham Dan; Jianxi Gao; | arxiv-cs.CL | 2024-06-18 |
575 | From RAGs to Rich Parameters: Probing How Language Models Utilize External Knowledge Over Parametric Information for Factual Queries Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we mechanistically examine the RAG pipeline to highlight that language models take a shortcut and have a strong bias towards utilizing only the context information to answer the question, while relying minimally on their parametric memory. |
HITESH WADHWA et. al. | arxiv-cs.CL | 2024-06-18 |
576 | Diversify, Rationalize, and Combine: Ensembling Multiple QA Strategies for Zero-shot Knowledge-based VQA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we propose Diversification, Evidence Truncation, and Combination for Knowledge-based Elucidation (DietCoke), which utilizes a bundle of complementary question-answering tactics and aggregates their answers using textual rationales. |
Miaoyu Li; Haoxin Li; Zilin Du; Boyang Li; | arxiv-cs.CL | 2024-06-18 |
577 | AvaTaR: Optimizing LLM Agents for Tool Usage Via Contrastive Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Here, we introduce AvaTaR, a novel and automated framework that optimizes an LLM agent to effectively leverage provided tools, improving performance on a given task. |
SHIRLEY WU et. al. | arxiv-cs.LG | 2024-06-17 |
578 | RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To foster sound evaluation of language models, we introduce a new test dataset named RepLiQA, suited for question-answering and topic retrieval tasks. |
JOAO MONTEIRO et. al. | arxiv-cs.CL | 2024-06-17 |
579 | TRACE The Evidence: Constructing Knowledge-Grounded Reasoning Chains for Retrieval-Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To enhance the multi-hop reasoning ability of RAG models, we propose TRACE. |
Jinyuan Fang; Zaiqiao Meng; Craig Macdonald; | arxiv-cs.CL | 2024-06-17 |
580 | FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Food is a rich and varied dimension of cultural heritage, crucial to both individuals and social groups. To bridge the gap in the literature on the often-overlooked regional diversity in this domain, we introduce FoodieQA, a manually curated, fine-grained image-text dataset capturing the intricate features of food cultures across various regions in China. |
WENYAN LI et. al. | arxiv-cs.CL | 2024-06-16 |
581 | Multi-LLM QA with Embodied Exploration Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: There is a lack of insight into whether a Multi-LLM system can handle question-answering based on observations from embodied exploration. In this work, we address this gap by investigating the use of Multi-Embodied LLM Explorers (MELE) for QA in an unknown environment. |
Bhrij Patel; Vishnu Sashank Dorbala; Amrit Singh Bedi; Dinesh Manocha; | arxiv-cs.LG | 2024-06-16 |
582 | SHMamba: Structured Hyperbolic State Space Model for Audio-Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the self-attention mechanism’s limitations in window modeling and quadratic computational complexity reduce its effectiveness in modeling long sequences. To address these limitations, we propose SHMamba: Structured Hyperbolic State Space Model to integrate the advantages of hyperbolic geometry and state space models. |
Zhe Yang; Wenrui Li; Guanghui Cheng; | arxiv-cs.AI | 2024-06-14 |
583 | Datasets for Multilingual Answer Sentence Selection Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce new high-quality datasets for AS2 in five European languages (French, German, Italian, Portuguese, and Spanish), obtained through supervised Automatic Machine Translation (AMT) of existing English AS2 datasets such as ASNQ, WikiQA, and TREC-QA using a Large Language Model (LLM). |
Matteo Gabburo; Stefano Campese; Federico Agostini; Alessandro Moschitti; | arxiv-cs.CL | 2024-06-14 |
584 | Enhancing Question Answering on Charts Through Effective Pre-training Tasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While the current state-of-the-art approaches for document understanding (both OCR-based and OCR-free) work well, a thorough analysis of their capabilities and limitations has not yet been performed. Therefore, in this work, we address the limitation of current VisualQA models when applied to charts and plots. |
ASHIM GUPTA et. al. | arxiv-cs.CL | 2024-06-14 |
585 | Beyond Raw Videos: Understanding Edited Videos with Large Multimodal Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we leverage the edited videos on a popular short video platform, i.e., TikTok, and build a video VQA benchmark (named EditVid-QA) covering four typical editing categories, i.e., effect, funny, meme, and game. |
LU XU et. al. | arxiv-cs.CV | 2024-06-14 |
586 | EWEK-QA: Enhanced Web and Efficient Knowledge Graph Retrieval for Citation-based Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Second, web-retrieved contents are usually obtained by some simple heuristics such as fixed length or breakpoints which might lead to splitting information into pieces. To mitigate these issues, we propose our enhanced web and efficient knowledge graph (KG) retrieval solution (EWEK-QA) to enrich the content of the extracted knowledge fed to the system. |
MOHAMMAD DEHGHAN et. al. | arxiv-cs.CL | 2024-06-14 |
587 | Precision Empowers, Excess Distracts: Visual Question Answering With Dynamically Infused Knowledge In Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce an approach for KBVQA, augmenting the existing vision-language transformer encoder-decoder (OFA) model. |
Manas Jhalani; Annervaz K M; Pushpak Bhattacharyya; | arxiv-cs.CL | 2024-06-14 |
588 | CoG-DQA: Chain-of-Guiding Learning with Large Language Models for Diagram Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the Chain-of-Guiding Learning Model for Diagram Question Answering (CoG-DQA), a novel framework that effectively addresses DQA challenges. |
SHAOWEI WANG et. al. | cvpr | 2024-06-13 |
589 | Language-aware Visual Semantic Distillation for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, inspired by the human recognition and learning pattern, we propose VideoDistill, a framework with language-aware (i.e., goal-driven) behavior in both the vision perception and answer generation processes. |
Bo Zou; Chao Yang; Yu Qiao; Chengbin Quan; Youjian Zhao; | cvpr | 2024-06-13 |
590 | VTQA: Visual Text Question Answering Via Entity Alignment and Cross-Media Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Motivated by the need for a more comprehensive evaluation, we introduce a novel dataset comprising 23,781 questions derived from 10,124 image-text pairs. |
Kang Chen; Xiangqian Wu; | cvpr | 2024-06-13 |
591 | Optimizing Visual Question Answering Models for Driving: Bridging The Gap Between Human and Machine Attention Patterns Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose an approach integrating filters to optimize the model’s attention mechanisms, prioritizing relevant objects and improving accuracy. |
Kaavya Rekanar; Martin Hayes; Ganesh Sistu; Ciaran Eising; | arxiv-cs.CV | 2024-06-13 |
592 | DIEM: Decomposition-Integration Enhancing Multimodal Insights Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose Decomposition-Integration Enhancing Multimodal Insight (DIEM), which initially decomposes the given question and image into multiple sub-questions and several sub-images, aiming to isolate specific elements for more focused analysis. |
XINYI JIANG et. al. | cvpr | 2024-06-13 |
593 | How to Configure Good In-Context Sequence for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To enhance ICL performance, in this study we use Visual Question Answering (VQA) as a case study to explore diverse in-context configurations and find the most powerful ones. |
Li Li; Jiawei Peng; Huiyi Chen; Chongyang Gao; Xu Yang; | cvpr | 2024-06-13 |
594 | Can Language Beat Numerical Regression? Language-Based Multimodal Trajectory Prediction Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Here we propose a beam-search-based most-likely prediction and a temperature-based multimodal prediction to implement both deterministic and stochastic inferences. |
Inhwan Bae; Junoh Lee; Hae-Gon Jeon; | cvpr | 2024-06-13 |
595 | Causal-CoG: A Causal-Effect Look at Context Generation for Boosting Multi-modal Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While Multi-modal Language Models (MLMs) demonstrate impressive multimodal ability, they still struggle to provide factual and precise responses for tasks like visual question answering (VQA). In this paper, we address this challenge from the perspective of contextual information. |
Shitian Zhao; Zhuowan Li; Yadong Lu; Alan Yuille; Yan Wang; | cvpr | 2024-06-13 |
596 | Ranking Distillation for Open-Ended Video Question Answering with Insufficient Labels Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: As a result, existing works tend to directly treat all the unlabeled answers as negative labels, leading to limited ability for generalization. In this work, we introduce a simple yet effective ranking distillation framework (RADI) to mitigate this problem without additional manual annotation. |
Tianming Liang; Chaolei Tan; Beihao Xia; Wei-Shi Zheng; Jian-Fang Hu; | cvpr | 2024-06-13 |
597 | Consistency and Uncertainty: Identifying Unreliable Responses From Black-Box Vision-Language Models for Selective Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose using the principle of neighborhood consistency to identify unreliable responses from a black-box vision-language model in question answering tasks. |
Zaid Khan; Yun Fu; | cvpr | 2024-06-13 |
598 | Too Many Frames, Not All Useful: Efficient Strategies for Long-Form Video QA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Such VLMs often independently caption a large number of frames uniformly sampled from long videos, which is inefficient and mostly redundant. Questioning these design choices, we explore optimal strategies for key-frame selection that can significantly reduce these redundancies, namely the Hierarchical Keyframe Selector. |
JONGWOO PARK et. al. | arxiv-cs.CV | 2024-06-13 |
599 | Synthesize Step-by-Step: Tools, Templates and LLMs As Data Generators for Reasoning-Based Chart VQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we address the lack of reasoning ability by data augmentation. |
Zhuowan Li; Bhavan Jasani; Peng Tang; Shabnam Ghadar; | cvpr | 2024-06-13 |
600 | On Scaling Up A Multilingual Vision and Language Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We explore the boundaries of scaling up a multilingual vision and language model both in terms of size of the components and the breadth of its training task mixture. |
XI CHEN et. al. | cvpr | 2024-06-13 |
601 | Can I Trust Your Answer? Visually Grounded Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Experiments with different backbones demonstrate that this grounding mechanism improves both grounding and QA. With these efforts we aim to push towards trustworthy VLMs in VQA systems. |
Junbin Xiao; Angela Yao; Yicong Li; Tat-Seng Chua; | cvpr | 2024-06-13 |
602 | OpenEQA: Embodied Question Answering in The Era of Foundation Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a modern formulation of Embodied Question Answering (EQA) as the task of understanding an environment well enough to answer questions about it in natural language. |
ARJUN MAJUMDAR et. al. | cvpr | 2024-06-13 |
603 | DiscreteSLU: A Large Language Model with Self-Supervised Discrete Speech Units for Spoken Language Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose the use of discrete speech units (DSU), rather than continuous-valued speech encoder outputs, that are converted to the LLM token embedding space using the speech adapter. |
SUWON SHON et. al. | arxiv-cs.CL | 2024-06-13 |
604 | Towards Multilingual Audio-Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we work towards extending Audio-Visual Question Answering (AVQA) to multilingual settings. |
ORCHID CHETIA PHUKAN et. al. | arxiv-cs.LG | 2024-06-13 |
605 | MoReVQA: Exploring Modular Reasoning Models for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Thus, unlike traditional single-stage planning methods, we propose a multi-stage system consisting of an event parser, a grounding stage, and a final reasoning stage, in conjunction with an external memory. |
Juhong Min; Shyamal Buch; Arsha Nagrani; Minsu Cho; Cordelia Schmid; | cvpr | 2024-06-13 |
606 | Grounded Question-Answering in Long Egocentric Videos Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we delve into open-ended question-answering (QA) in long egocentric videos, which allows individuals or robots to inquire about their own past visual experiences. |
Shangzhe Di; Weidi Xie; | cvpr | 2024-06-13 |
607 | Multi-Factor Adaptive Vision Selection for Egocentric Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The challenge of interpreting the world from a human perspective in Artificial Intelligence (AI) is particularly evident in egocentric video question answering, which grapples with issues like small object recognition, noise suppression, and spatial-temporal reasoning. To address these challenges, we introduce the Multi-Factor Adaptive vision Selection (MFAS) framework. |
HAOYU ZHANG et. al. | icml | 2024-06-12 |
608 | TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present TROVE, a training-free method of inducing a verifiable and efficient toolbox of functions, by generating functions via using, growing, and periodically trimming the toolbox. |
Zhiruo Wang; Graham Neubig; Daniel Fried; | icml | 2024-06-12 |
609 | Switchable Decision: Dynamic Neural Generation Networks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a switchable decision to accelerate inference by dynamically assigning computation resources for each data instance. |
Shujian Zhang; Korawat Tanwisuth; Chengyue Gong; Pengcheng He; Mingyuan Zhou; | icml | 2024-06-12 |
610 | Unifying Image Processing As Visual Prompting Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, these advances have predominantly concentrated on high-level vision tasks, with less attention paid to low-level vision tasks. To address this issue, we propose a universal model for general image processing that covers image restoration, image enhancement, image feature extraction tasks, etc. |
YIHAO LIU et. al. | icml | 2024-06-12 |
611 | In-Context Principle Learning from Mistakes IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Nonetheless, all ICL-based approaches only learn from correct input-output pairs. In this paper, we revisit this paradigm, by learning more from the few given input-output examples. |
TIANJUN ZHANG et. al. | icml | 2024-06-12 |
612 | Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we suggest investigating internal activations and quantifying LLM’s truthfulness using the local intrinsic dimension (LID) of model activations. |
Fan Yin; Jayanth Srinivasa; Kai-Wei Chang; | icml | 2024-06-12 |
613 | MBBQ: A Dataset for Cross-Lingual Comparison of Stereotypes in Generative LLMs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we present MBBQ (Multilingual Bias Benchmark for Question-answering), a carefully curated version of the English BBQ dataset extended to Dutch, Spanish, and Turkish, which measures stereotypes commonly held across these languages. |
Vera Neplenbroek; Arianna Bisazza; Raquel Fernández; | arxiv-cs.CL | 2024-06-11 |
614 | Question-Answering (QA) Model for A Personalized Learning Assistant for Arabic Language Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper describes the creation, optimization, and assessment of a question-answering (QA) model for a personalized learning assistant that uses BERT transformers customized for … |
Mohammad Sammoudi; Ahmad Habaybeh; Huthaifa I. Ashqar; Mohammed Elhenawy; | ArXiv | 2024-06-11 |
615 | Scholarly Question Answering Using Large Language Models in The NFDI4DataScience Gateway Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces a scholarly Question Answering (QA) system on top of the NFDI4DataScience Gateway, employing a Retrieval Augmented Generation-based (RAG) approach. |
HAMED BABAEI GIGLOU et. al. | arxiv-cs.CL | 2024-06-11 |
616 | Situational Awareness Matters in 3D Vision Language Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Being able to carry out complicated vision language reasoning tasks in 3D space represents a significant milestone in developing household robots and human-centered embodied AI. In this work, we demonstrate that a critical and distinct challenge in 3D vision language reasoning is situational awareness, which incorporates two key components: (1) The autonomous agent grounds its self-location based on a language prompt. |
Yunze Man; Liang-Yan Gui; Yu-Xiong Wang; | arxiv-cs.CV | 2024-06-11 |
617 | DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To improve the neural-symbolic reasoning capabilities of language agents powered by Large Language Models (LLMs) in KGQA, we propose the DecompositionAlignment-Reasoning Agent (DARA) framework. |
Haishuo Fang; Xiaodan Zhu; Iryna Gurevych; | arxiv-cs.CL | 2024-06-11 |
618 | Benchmarking Vision-Language Contrastive Methods for Medical Representation Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Through this study, we aim to answer the following research questions: (i) How transferable are general-domain representations to the medical domain? |
SHUVENDU ROY et. al. | arxiv-cs.CV | 2024-06-11 |
619 | VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present the VideoLLaMA 2, a set of Video Large Language Models (Video-LLMs) designed to enhance spatial-temporal modeling and audio understanding in video and audio-oriented tasks. |
ZESEN CHENG et. al. | arxiv-cs.CV | 2024-06-11 |
620 | DR-RAG: Applying Dynamic Document Relevance to Retrieval-Augmented Generation for Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To mine the relevance, a two-stage retrieval framework called Dynamic-Relevant Retrieval-Augmented Generation (DR-RAG) is proposed to improve document retrieval recall and the accuracy of answers while maintaining efficiency. |
ZIJIAN HEI et. al. | arxiv-cs.LG | 2024-06-11 |
621 | MedExQA: Medical Question Answering Benchmark with Multiple Explanations Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces MedExQA, a novel benchmark in medical question-answering, to evaluate large language models’ (LLMs) understanding of medical knowledge through explanations. |
Yunsoo Kim; Jinge Wu; Yusuf Abdulle; Honghan Wu; | arxiv-cs.CL | 2024-06-10 |
622 | HOLMES: Hyper-Relational Knowledge Graphs for Multi-hop Question Answering Using LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, this simplistic approach is query-agnostic and the extracted facts are ambiguous as they lack context. To address these drawbacks and to enable LLMs to answer complex (multi-hop) questions with ease, we propose to use a knowledge graph (KG) that is context-aware and is distilled to contain query-relevant information. |
Pranoy Panda; Ankush Agarwal; Chaitanya Devaguptapu; Manohar Kaul; Prathosh A P; | arxiv-cs.CL | 2024-06-10 |
623 | MemoriQA: A Question-Answering Lifelog Dataset Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Lifelogging can be referred to as the process of passively collecting data on an individual’s daily life. Lifelog data provides a large amount of information which can be used to … |
Quang-Linh Tran; Binh T. Nguyen; Gareth J. F. Jones; C. Gurrin; | Proceedings of the 1st ACM Workshop on AI-Powered Q&A … | 2024-06-10 |
624 | Chart Question Answering Based on Modality Conversion and Large Language Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: A two-stage chart question answering system is proposed in this paper. Chart/plot images are first converted into structured text-based data by a transformer-based conversion … |
Yi-Cheng Liu; Wei-Ta Chu; | Proceedings of the 1st ACM Workshop on AI-Powered Q&A … | 2024-06-10 |
625 | MyEachtraX: Lifelog Question Answering on Mobile Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Your whole life in your pocket. That is the premise of lifelogging, a technology that captures and stores every moment of your life in digital form. Built on top of MyEachtra and … |
Ly-Duyen Tran; Thanh-Binh Nguyen; C. Gurrin; Liting Zhou; | Proceedings of the 7th Annual ACM Workshop on the Lifelog … | 2024-06-10 |
626 | Evaluating The Retrieval Component in LLM-Based Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study proposes a straightforward baseline for evaluating retrievers in Retrieval-Augmented Generation (RAG)-based chatbots. |
Ashkan Alinejad; Krtin Kumar; Ali Vahdat; | arxiv-cs.CL | 2024-06-10 |
627 | MedREQAL: Examining Medical Knowledge Recall of Large Language Models Via Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we examine the capability of LLMs to exhibit medical knowledge recall by constructing a novel dataset derived from systematic reviews — studies synthesizing evidence-based answers for specific medical questions. |
Juraj Vladika; Phillip Schneider; Florian Matthes; | arxiv-cs.CL | 2024-06-09 |
628 | Zero-Shot End-To-End Spoken Question Answering In Medical Domain Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our study introduces a novel zero-shot SQA approach, compared to traditional cascade systems. |
Yanis Labrak; Adel Moumen; Richard Dufour; Mickael Rouvier; | arxiv-cs.CL | 2024-06-09 |
629 | CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: More importantly, although these datasets often extend their linguistic range via translation or some other approaches, they usually keep images the same, resulting in narrow cultural representation. To address these limitations, we construct CVQA, a new Culturally-diverse multilingual Visual Question Answering benchmark, designed to cover a rich set of languages and cultures, where we engage native speakers and cultural experts in the data collection process. |
DAVID ROMERO et. al. | arxiv-cs.CV | 2024-06-09 |
630 | MrRank: Improving Question Answering Retrieval System Through Multi-Result Ranking Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose an approach that leverages learning-to-rank techniques to combine heterogeneous IR systems. |
Danupat Khamnuansin; Tawunrat Chalothorn; Ekapol Chuangsuwanich; | arxiv-cs.CL | 2024-06-09 |
631 | Investigating and Addressing Hallucinations of LLMs in Tasks Involving Negation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Negation is important because it adds depth and nuance to the understanding of language and is also crucial for logical reasoning and inference. In this work, we address the above limitation and particularly focus on studying the impact of negation in LLM hallucinations. |
NEERAJ VARSHNEY et. al. | arxiv-cs.CL | 2024-06-08 |
632 | Venn Diagram Prompting : Accelerating Comprehension with Scaffolding Effect Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce Venn Diagram (VD) Prompting, an innovative prompting technique which allows Large Language Models (LLMs) to combine and synthesize information across complex, diverse and long-context documents in knowledge-intensive question-answering tasks. |
Sakshi Mahendru; Tejul Pandit; | arxiv-cs.CL | 2024-06-08 |
633 | CRAG — Comprehensive RAG Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Existing RAG datasets, however, do not adequately represent the diverse and dynamic nature of real-world Question Answering (QA) tasks. To bridge this gap, we introduce the Comprehensive RAG Benchmark (CRAG), a factual question answering benchmark of 4,409 question-answer pairs and mock APIs to simulate web and Knowledge Graph (KG) search. |
XIAO YANG et. al. | arxiv-cs.CL | 2024-06-07 |
634 | ComplexTempQA: A Large-Scale Dataset for Complex Temporal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce ComplexTempQA, a large-scale dataset consisting of over 100 million question-answer pairs designed to tackle the challenges in temporal question answering. |
Raphael Gruber; Abdelrahman Abdallah; Michael Färber; Adam Jatowt; | arxiv-cs.CL | 2024-06-07 |
635 | TCMD: A Traditional Chinese Medicine QA Dataset for Evaluating Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a new medical question-answering (QA) dataset that contains massive manual instruction for solving Traditional Chinese Medicine examination tasks, called TCMD. |
Ping Yu; Kaitao Song; Fengchen He; Ming Chen; Jianfeng Lu; | arxiv-cs.CL | 2024-06-07 |
636 | MATTER: Memory-Augmented Transformer Using Heterogeneous Knowledge Sources Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce an efficient memory-augmented transformer called MATTER, designed to retrieve relevant knowledge from multiple heterogeneous knowledge sources. |
Dongkyu Lee; Chandana Satya Prakash; Jack FitzGerald; Jens Lehmann; | arxiv-cs.CL | 2024-06-07 |
637 | CRAG – Comprehensive RAG Benchmark Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Retrieval-Augmented Generation (RAG) has recently emerged as a promising solution to alleviate Large Language Model (LLM)’s deficiency in lack of knowledge. Existing RAG datasets, … |
XIAO YANG et. al. | ArXiv | 2024-06-07 |
638 | FairytaleQA Translated: Enabling Educational Question and Answer Generation in Less-Resourced Languages Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: While numerous datasets have been developed in English for this purpose, a noticeable void exists in less-resourced languages. To alleviate this gap, our paper introduces machine-translated versions of FairytaleQA, a renowned QA dataset designed to assess and enhance narrative comprehension skills in young children. |
Bernardo Leite; Tomás Freitas Osório; Henrique Lopes Cardoso; | arxiv-cs.CL | 2024-06-06 |
639 | Wings: Learning Multimodal LLMs Without Text-only Forgetting Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present Wings, a novel MLLM that excels in both text-only dialogues and multimodal comprehension. |
YI-KAI ZHANG et. al. | arxiv-cs.CL | 2024-06-05 |
640 | M-QALM: A Benchmark to Assess Clinical Reading Comprehension and Knowledge Recall in Large Language Models Via Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: There is active research on adapting Large Language Models (LLMs) to perform a variety of tasks in high-stakes domains such as healthcare. |
ANAND SUBRAMANIAN et. al. | arxiv-cs.CL | 2024-06-05 |
641 | Measuring Retrieval Complexity in Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigate which questions are challenging for retrieval-based Question Answering (QA). |
Matteo Gabburo; Nicolaas Paul Jedema; Siddhant Garg; Leonardo F. R. Ribeiro; Alessandro Moschitti; | arxiv-cs.CL | 2024-06-05 |
642 | I’ve Got The Answer! Interpretation of LLMs Hidden States in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We also identify the layers that have a negative effect on the model’s behavior. As a practical application of this hypothesis, we propose additionally training such weak layers to improve the quality of the task solution. |
Valeriya Goloviznina; Evgeny Kotelnikov; | arxiv-cs.CL | 2024-06-04 |
643 | UniOQA: A Unified Framework for Knowledge Graph Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce UniOQA, a unified framework that integrates two complementary parallel workflows. |
Zhuoyang Li; Liran Deng; Hui Liu; Qiaoqiao Liu; Junzhao Du; | arxiv-cs.CL | 2024-06-04 |
644 | Translation Deserves Better: Analyzing Translation Artifacts in Cross-lingual Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We find that these artifacts can significantly affect the models, confirmed by extensive experiments across diverse models, languages, and translation processes. In light of this, we present a simple data augmentation strategy that can alleviate the adverse impacts of translation artifacts. |
CHAEHUN PARK et. al. | arxiv-cs.CL | 2024-06-04 |
645 | EffiQA: Efficient Question-Answering with Strategic Multi-Model Collaboration on Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing approaches that integrate LLMs and KGs either underutilize the reasoning abilities of LLMs or suffer from prohibitive computational costs due to tight coupling. To address these limitations, we propose a novel collaborative framework named EffiQA that can strike a balance between performance and efficiency via an iterative paradigm. |
ZIXUAN DONG et. al. | arxiv-cs.CL | 2024-06-03 |
646 | Graph Neural Network Enhanced Retrieval for Question Answering of LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel retrieval method, called GNN-Ret, which leverages graph neural networks (GNNs) to enhance retrieval by exploiting the relatedness between passages. |
ZIJIAN LI et. al. | arxiv-cs.CL | 2024-06-03 |
647 | MedFuzz: Exploring The Robustness of Large Language Models in Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). |
ROBERT OSAZUWA NESS et. al. | arxiv-cs.CL | 2024-06-03 |
648 | Seeing Beyond Borders: Evaluating LLMs in Multilingual Ophthalmological Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models (LLMs), such as GPT-3.5 [1] and GPT-4 [2], have significant potential for transforming several aspects of patient care from clinical note summarization to … |
DAVID RESTREPO et. al. | 2024 IEEE 12th International Conference on Healthcare … | 2024-06-03 |
649 | Selectively Answering Visual Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose Avg BLEU, a calibration score combining the benefits of both sampling and likelihood methods across modalities. |
Julian Martin Eisenschlos; Hernán Maina; Guido Ivetta; Luciana Benotti; | arxiv-cs.CL | 2024-06-03 |
650 | Compositional 4D Dynamic Scenes Understanding with Physics Priors for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a video question answering dataset SuperCLEVR-Physics that focuses on the dynamics properties of objects. |
XINGRUI WANG et. al. | arxiv-cs.CV | 2024-06-02 |
651 | Beyond Boundaries: A Human-like Approach for Question Answering Over Structured and Unstructured Information Sources Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Answering factual questions from heterogeneous sources, such as graphs and text, is a key capacity of intelligent systems. Current approaches either (i) perform question answering … |
Jens Lehmann; Dhananjay Bhandiwad; Preetam Gattogi; S. Vahdati; | Transactions of the Association for Computational … | 2024-06-01 |
652 | Mix-tower: Light Visual Question Answering Framework Based on Exclusive Self-attention Mechanism Related Papers Related Patents Related Grants Related Venues Related Experts View |
Deguang Chen; Jianrui Chen; Luheng Yang; Fanhua Shang; | Neurocomputing | 2024-06-01 |
653 | SPAGHETTI: Open-Domain Question Answering from Heterogeneous Data Sources with Retrieval and Semantic Parsing Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce SPAGHETTI: Semantic Parsing Augmented Generation for Hybrid English information from Text Tables and Infoboxes, a hybrid question-answering (QA) pipeline that utilizes information from heterogeneous knowledge sources, including knowledge base, text, tables, and infoboxes. |
HEIDI C. ZHANG et. al. | arxiv-cs.CL | 2024-06-01 |
654 | The Effect of Clustering Algorithms on Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Rana Husni AlMahmoud; Marwah Alian; | Expert Syst. Appl. | 2024-06-01 |
655 | Passage-specific Prompt Tuning for Passage Reranking in Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose passage-specific prompt tuning for reranking in open-domain question answering (PSPT): a parameter-efficient method that fine-tunes learnable passage-specific soft prompts, incorporating passage-specific knowledge from a limited set of question-passage relevance pairs. |
Xuyang Wu; Zhiyuan Peng; Krishna Sravanthi Rajanala Sai; Hsin-Tai Wu; Yi Fang; | arxiv-cs.CL | 2024-05-31 |
656 | Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking Via Side-by-Side Evaluation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a holistic pipeline for automatic data generation including question generation, answering, and model scoring using an “Evaluator”. |
BERND BOHNET et. al. | arxiv-cs.CL | 2024-05-31 |
657 | GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce GNN-RAG, a novel method for combining language understanding abilities of LLMs with the reasoning abilities of GNNs in a retrieval-augmented generation (RAG) style. |
Costas Mavromatis; George Karypis; | arxiv-cs.CL | 2024-05-30 |
658 | Video Question Answering for People with Visual Impairments Using An Egocentric 360-Degree Camera Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper addresses the daily challenges encountered by visually impaired individuals, such as limited access to information, navigation difficulties, and barriers to social interaction. To alleviate these challenges, we introduce a novel visual question answering dataset. |
Inpyo Song; Minjun Joo; Joonhyung Kwon; Jangwon Lee; | arxiv-cs.CV | 2024-05-30 |
659 | VQA Training Sets Are Self-play Environments for Generating Few-shot Pools Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a technique in which existing training sets can be directly used for constructing computational environments with task metrics as rewards. |
Tautvydas Misiunas; Hassan Mansoor; Jasper Uijlings; Oriana Riva; Victor Carbune; | arxiv-cs.CV | 2024-05-30 |
660 | The First ACM Workshop on AI-Powered Question Answering Systems for Multimedia Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The advent of large language models (LLMs) has energised research in Question-Answering (QA) tasks, enabling responses across varied domains like economics and mathematics. … |
TAI TAN MAI et. al. | Proceedings of the 2024 International Conference on … | 2024-05-30 |
661 | Evaluating Zero-Shot GPT-4V Performance on 3D Visual Question Answering Benchmarks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: As interest in reformulating the 3D Visual Question Answering (VQA) problem in the context of foundation models grows, it is imperative to assess how these new paradigms influence existing closed-vocabulary datasets. In this case study, we evaluate the zero-shot performance of foundational models (GPT-4 Vision and GPT-4) on well-established 3D VQA benchmarks, namely 3D-VQA and ScanQA. |
Simranjit Singh; Georgios Pavlakos; Dimitrios Stamoulis; | arxiv-cs.CV | 2024-05-29 |
662 | A Multi-Source Retrieval Question Answering Framework Based on RAG Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing RAG paradigms are inevitably influenced by erroneous retrieval information, thereby reducing the reliability and correctness of generated results. Therefore, to improve the relevance of retrieval information, this study proposes a method that replaces traditional retrievers with GPT-3.5, leveraging its vast corpus knowledge to generate retrieval information. |
RIDONG WU et. al. | arxiv-cs.IR | 2024-05-29 |
663 | MathChat: Benchmarking Mathematical Reasoning and Instruction Following in Multi-Turn Interactions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces MathChat, a comprehensive benchmark specifically designed to evaluate LLMs across a broader spectrum of mathematical tasks. |
ZHENWEN LIANG et. al. | arxiv-cs.AI | 2024-05-29 |
664 | Peering Into The Mind of Language Models: An Approach for Attribution in Contextual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce a novel method for attribution in contextual question answering, leveraging the hidden state representations of LLMs. |
Anirudh Phukan; Shwetha Somasundaram; Apoorv Saxena; Koustava Goswami; Balaji Vasan Srinivasan; | arxiv-cs.CL | 2024-05-28 |
665 | Conv-CoA: Improving Open-domain Question Answering in Large Language Models Via Conversational Chain-of-Action Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a Conversational Chain-of-Action (Conv-CoA) framework for Open-domain Conversational Question Answering (OCQA). |
Zhenyu Pan; Haozheng Luo; Manling Li; Han Liu; | arxiv-cs.CL | 2024-05-28 |
666 | THREAD: Thinking Deeper with Recursive Spawning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Large language models (LLMs) have shown impressive capabilities across diverse settings, but still struggle as the length and complexity of the context increases. To address this challenge, we propose Thinking Recursively and Dynamically (ThReaD). |
Philip Schroeder; Nathaniel Morgan; Hongyin Luo; James Glass; | arxiv-cs.CL | 2024-05-27 |
667 | Aligning LLMs Through Multi-perspective User Preference Ranking-based Feedback for Programming Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Code Community Question Answering (CCQA) seeks to tackle programming-related issues, thereby boosting productivity in both software engineering and academic research. Recent … |
HONGYU YANG et. al. | ArXiv | 2024-05-27 |
668 | Hawk: Learning to Understand Open-World Video Anomalies Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce Hawk, a novel framework that leverages interactive large Visual Language Models (VLM) to interpret video anomalies precisely. |
JIAQI TANG et. al. | arxiv-cs.CV | 2024-05-27 |
669 | Reason3D: Searching and Reasoning 3D Segmentation Via Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces Reason3D, a novel LLM designed for comprehensive 3D understanding. |
Kuan-Chih Huang; Xiangtai Li; Lu Qi; Shuicheng Yan; Ming-Hsuan Yang; | arxiv-cs.CV | 2024-05-27 |
670 | Accurate and Nuanced Open-QA Evaluation Through Textual Entailment Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose to study the entailment relations of answers to identify more informative and more general system answers, offering a much closer evaluation to human judgment on both NaturalQuestions and TriviaQA while being learning-free. |
Peiran Yao; Denilson Barbosa; | arxiv-cs.CL | 2024-05-26 |
671 | Map-based Modular Approach for Zero-shot Embodied Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a map-based modular approach to EQA, enabling real-world robots to explore and map unknown environments. |
Koya Sakamoto; Daichi Azuma; Taiki Miyanishi; Shuhei Kurita; Motoaki Kawanabe; | arxiv-cs.RO | 2024-05-26 |
672 | Crafting Interpretable Embeddings By Asking LLMs Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce question-answering embeddings (QA-Emb), embeddings where each feature represents an answer to a yes/no question asked to an LLM. |
VINAMRA BENARA et. al. | arxiv-cs.CL | 2024-05-26 |
673 | Text Generation: A Systematic Literature Review of Tasks, Evaluation, and Challenges Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: For each task, we review their relevant characteristics, sub-tasks, and specific challenges (e.g., missing datasets for multi-document summarization, coherence in story generation, and complex reasoning for question answering). |
Jonas Becker; Jan Philip Wahle; Bela Gipp; Terry Ruas; | arxiv-cs.CL | 2024-05-24 |
674 | Efficient Medical Question Answering with Knowledge-Augmented Question Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a method to improve the proficiency of a small language model in the medical domain by employing a two-fold approach. |
JULIEN KHLAUT et. al. | arxiv-cs.CL | 2024-05-23 |
675 | Experimental Design of Extractive Question-Answering Systems: Influence of Error Scores and Answer Length Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question-answering (QA) systems are becoming more and more important because they enable human-computer communication in a natural language. In recent years, significant progress … |
Amer Farea; Frank Emmert-Streib; | J. Artif. Intell. Res. | 2024-05-23 |
676 | LOVA3: Learning to Visual Question Answering, Asking and Assessment Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Current Multimodal Large Language Models (MLLMs) primarily focus on question answering, often neglecting the full potential of questioning and assessment skills. Inspired by the human learning mechanism, we introduce LOVA3, an innovative framework named Learning tO Visual question Answering, Asking and Assessment, designed to equip MLLMs with these additional capabilities. |
Henry Hengyuan Zhao; Pan Zhou; Difei Gao; Zechen Bai; Mike Zheng Shou; | arxiv-cs.CV | 2024-05-23 |
677 | FiDeLiS: Faithful Reasoning in Large Language Model for Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Large language models are often challenged by generating erroneous or ‘hallucinated’ responses, especially in complex reasoning tasks. To mitigate this, we propose a retrieval augmented reasoning method, FiDeLiS, which enhances knowledge graph question answering by anchoring responses to structured, verifiable reasoning paths. |
YUAN SUI et. al. | arxiv-cs.AI | 2024-05-22 |
678 | OLAPH: Improving Factuality in Biomedical Long-form Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Thus, we introduce MedLFQA, a benchmark dataset reconstructed using long-form question-answering datasets related to the biomedical domain. |
Minbyul Jeong; Hyeon Hwang; Chanwoong Yoon; Taewhoo Lee; Jaewoo Kang; | arxiv-cs.CL | 2024-05-21 |
679 | Efficient and Interpretable Information Retrieval for Product Question Answering with Heterogeneous Data Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we explore the potential of jointly learning dense semantic representation and combining it with the lexical one for ranking candidate information. |
Biplob Biswas; Rajiv Ramnath; | arxiv-cs.LG | 2024-05-21 |
680 | Dataset and Benchmark for Urdu Natural Scenes Text Detection, Recognition and Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a new multi-task Urdu scene text dataset comprising over 1000 natural scene images, which can be used for text detection, recognition, and VQA tasks. |
HIBA MARYAM et. al. | arxiv-cs.CV | 2024-05-21 |
681 | MentalQA: An Annotated Arabic Corpus for Questions and Answers of Mental Healthcare Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce MentalQA, a novel Arabic dataset featuring conversational-style question-and-answer (QA) interactions. |
Hassan Alhuzali; Ashwag Alasmari; Hamad Alsaleh; | arxiv-cs.CL | 2024-05-21 |
682 | Causal Event Graph-Guided Language-based Spatiotemporal Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models have excelled at encoding and leveraging language patterns in large text-based corpora for various tasks, including spatiotemporal event-based question … |
KAUSHIK ROY et. al. | AAAI Spring Symposia | 2024-05-20 |
683 | MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we tackle multilingual TEC-VQA by introducing MTVQA, the first benchmark featuring high-quality human expert annotations across 9 diverse languages, consisting of 6,778 question-answer pairs across 2,116 images. |
JINGQUN TANG et. al. | arxiv-cs.CV | 2024-05-20 |
684 | Increasing The LLM Accuracy for Question Answering: Ontologies to The Rescue! Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Building on the observations of our previous research, where the inaccurate LLM-generated SPARQL queries followed incorrect paths, we present an approach that consists of 1) Ontology-based Query Check (OBQC): detects errors by leveraging the ontology of the knowledge graph to check whether the LLM-generated SPARQL query matches the semantics of the ontology, and 2) LLM Repair: uses the error explanations with an LLM to repair the SPARQL query. |
Dean Allemang; Juan Sequeda; | arxiv-cs.AI | 2024-05-19 |
685 | MemeMQA: Multimodal Question Answering for Memes Via Rationale-Based Inferencing Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To extend this research, we introduce MemeMQA, a multimodal question-answering framework aiming to solicit accurate responses to structured questions while providing coherent explanations. |
Siddhant Agarwal; Shivam Sharma; Preslav Nakov; Tanmoy Chakraborty; | arxiv-cs.CL | 2024-05-18 |
686 | StackOverflowVQA: Stack Overflow Visual Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we focus on the questions which need the understanding of images in addition to the question itself. |
Motahhare Mirzaei; Mohammad Javad Pirhadi; Sauleh Eetemadi; | arxiv-cs.CV | 2024-05-17 |
687 | FinTextQA: A Dataset for Long-form Financial Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work introduces FinTextQA, a novel dataset for long-form question answering (LFQA) in finance. |
JIAN CHEN et. al. | arxiv-cs.CL | 2024-05-16 |
688 | SciQAG: A Framework for Auto-Generated Science Question Answering Dataset with Fine-grained Evaluation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce SciQAG, a novel framework for automatically generating high-quality science question-answer pairs from a large corpus of scientific literature based on large language models (LLMs). |
YUWEI WAN et. al. | arxiv-cs.CL | 2024-05-16 |
689 | Towards Better Question Generation in QA-based Event Extraction Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, in QA-based EE, the quality of the questions dramatically affects the extraction accuracy, and how to generate high-quality questions for QA-based EE remains a challenge. In this work, to tackle this challenge, we suggest four criteria to evaluate the quality of a question and propose a reinforcement learning method, RLQG, for QA-based EE that can generate generalizable, high-quality, and context-dependent questions and provides clear guidance to QA models. |
Zijin Hong; Jian Liu; | arxiv-cs.CL | 2024-05-16 |
690 | Exploring The Impact of ChatGPT on Wikipedia Engagement Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we explore Wikipedia user metrics across four areas: page views, unique visitor numbers, edit counts and editor numbers within twelve language instances of Wikipedia. |
Neal Reeves; Wenjie Yin; Elena Simperl; | arxiv-cs.HC | 2024-05-16 |
691 | Question Answering System with Text Mining and Deep Networks Related Papers Related Patents Related Grants Related Venues Related Experts View |
Hüseyin Avni Ardaç; P. Erdoğmuş; | Evol. Syst. | 2024-05-16 |
692 | STAR: A Benchmark for Situated Reasoning in Real-World Videos IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces a new benchmark that evaluates the situated reasoning ability via situation abstraction and logic-grounded question answering for real-world videos, called Situated Reasoning in Real-World Videos (STAR Benchmark). |
Bo Wu; Shoubin Yu; Zhenfang Chen; Joshua B Tenenbaum; Chuang Gan; | arxiv-cs.AI | 2024-05-15 |
693 | Prompting-based Synthetic Data Generation for Few-Shot Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: With this motivation, we show that using large language models can improve Question Answering performance on various datasets in the few-shot setting compared to state-of-the-art approaches. For this, we perform data generation leveraging the Prompting framework, suggesting that language models contain valuable task-agnostic knowledge that can be used beyond the common pre-training/fine-tuning scheme. |
Maximilian Schmidt; Andrea Bartezzaghi; Ngoc Thang Vu; | arxiv-cs.CL | 2024-05-15 |
694 | A Knowledge-Injected Curriculum Pretraining Framework for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, in this paper, we propose a general Knowledge-Injected Curriculum Pretraining framework (KICP) to achieve comprehensive KG learning and exploitation for KBQA tasks, which is composed of knowledge injection (KI), knowledge adaptation (KA) and curriculum reasoning (CR). |
XIN LIN et. al. | www | 2024-05-13 |
695 | Demonstration of FeVisQA: Free-Form Question Answering Over Data Visualization Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question Answering (QA) systems play a vital role in knowledge acquisition. CodeQA refers to question answering (QA) over source code for code comprehension purposes. However, … |
Yuanfeng Song; Jinwei Lu; Xuefang Zhao; Raymond Chi-Wing Wong; Haodi Zhang; | 2024 IEEE 40th International Conference on Data Engineering … | 2024-05-13 |
696 | TIQ: A Benchmark for Temporal Question Answering with Implicit Time Constraints Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Temporal question answering (QA) involves explicit (e.g., …before 2024) or implicit (e.g., …during the Cold War period) time constraints. Implicit constraints are more … |
Zhen Jia; Philipp Christmann; G. Weikum; | Companion Proceedings of the ACM on Web Conference 2024 | 2024-05-13 |
697 | Harnessing Multi-Role Capabilities of Large Language Models for Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we propose LLMQA, a generalized framework that formulates the ODQA process into three basic steps: query expansion, document selection, and answer generation, combining the superiority of both retrieval-based and generation-based evidence. |
HONGDA SUN et. al. | www | 2024-05-13 |
698 | Causal Question Answering with Reinforcement Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Hence, in this paper, we aim to answer causal questions with a causality graph, a large-scale dataset of causal relations between noun phrases along with the relations’ provenance data. |
Lukas Blübaum; Stefan Heindorf; | www | 2024-05-13 |
699 | KET-QA: A Dataset for Knowledge Enhanced Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose to use a knowledge base (KB) as the external knowledge source for TableQA and construct a dataset KET-QA with fine-grained gold evidence annotation. |
Mengkang Hu; Haoyu Dong; Ping Luo; Shi Han; Dongmei Zhang; | arxiv-cs.CL | 2024-05-13 |
700 | Faithful Temporal Question Answering Over Heterogeneous Sources Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: As implicit questions are sparse in prior benchmarks, we introduce a principled method for generating diverse questions. |
Zhen Jia; Philipp Christmann; Gerhard Weikum; | www | 2024-05-13 |
701 | MedConceptsQA: Open Source Medical Concepts QA Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present MedConceptsQA, a dedicated open source benchmark for medical concepts question answering. |
Ofir Ben Shoham; Nadav Rappoport; | arxiv-cs.CL | 2024-05-12 |
702 | ChartInsights: Evaluating Multimodal Large Language Models for Low-Level Chart Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While recent advancements in multimodal large language models (MLLMs) like GPT-4o have shown promise in high-level ChartQA tasks, such as chart captioning, their effectiveness in low-level ChartQA tasks (e.g., identifying correlations) remains underexplored. In this paper, we address this gap by evaluating MLLMs on low-level ChartQA using a newly curated dataset, ChartInsights, which consists of 22,347 (chart, task, query, answer) tuples covering 10 data analysis tasks across 7 chart types. |
YIFAN WU et. al. | arxiv-cs.CL | 2024-05-11 |
703 | Prompting Large Language Models with Knowledge Graphs for Question Answering Involving Long-tail Facts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Since LLMs have probably seen the majority of factual question-answering datasets already, to facilitate our analysis, we propose a fully automatic pipeline for creating a benchmark that requires knowledge of long-tail facts for answering the involved questions. |
WENYU HUANG et. al. | arxiv-cs.CL | 2024-05-10 |
704 | CourseGPT-zh: An Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, restricted access to closed-source LLMs via APIs and the difficulty in collecting massive high-quality datasets pose obstacles to the development of large language models in education fields of various courses. Given these challenges, we propose CourseGPT-zh, a course-oriented education LLM that supports customization and low-cost deployment. |
Zheyan Qu; Lu Yin; Zitong Yu; Wenbo Wang; Xing Zhang; | arxiv-cs.CL | 2024-05-07 |
705 | Mitigating Clickbait: An Approach to Spoiler Generation Using Multitask Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study introduces ‘clickbait spoiling’, a novel technique designed to detect, categorize, and generate spoilers as succinct text responses, countering the curiosity induced by clickbait content. |
Sayantan Pal; Souvik Das; Rohini K. Srihari; | arxiv-cs.CL | 2024-05-07 |
706 | S-EQA: Tackling Situational Queries in Embodied Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present and tackle the problem of Embodied Question Answering (EQA) with Situational Queries (S-EQA) in a household environment. |
VISHNU SASHANK DORBALA et. al. | arxiv-cs.RO | 2024-05-07 |
707 | VSA4VQA: Scaling A Vector Symbolic Architecture to Visual Question Answering on Natural Images Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose VSA4VQA – a novel 4D implementation of VSAs that implements a mental representation of natural images for the challenging task of Visual Question Answering (VQA). |
Anna Penzkofer; Lei Shi; Andreas Bulling; | arxiv-cs.CV | 2024-05-06 |
708 | Overview of The EHRSQL 2024 Shared Task on Reliable Text-to-SQL Modeling on Electronic Health Records Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we describe the task of reliable text-to-SQL modeling, the dataset, and the methods and results of the participants. |
Gyubok Lee; Sunjun Kweon; Seongsu Bae; Edward Choi; | arxiv-cs.CL | 2024-05-04 |
709 | SUKHSANDESH: An Avatar Therapeutic Question Answering Platform for Sexual Education in Rural India Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This approach aims to foster empathy and connection, which is particularly beneficial for individuals with limited literacy skills. |
Salam Michael Singh; Shubhmoy Kumar Garg; Amitesh Misra; Aaditeshwar Seth; Tanmoy Chakraborty; | arxiv-cs.CL | 2024-05-03 |
710 | UQA: Corpus for Urdu Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces UQA, a novel dataset for question answering and text comprehension in Urdu, a low-resource language with over 70 million native speakers. |
Samee Arif; Sualeha Farid; Awais Athar; Agha Ali Raza; | arxiv-cs.CL | 2024-05-02 |
711 | OmniDrive: A Holistic LLM-Agent Framework for Autonomous Driving with 3D Perception, Reasoning and Planning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, capitalizing on MLLMs’ strong reasoning capabilities for improved planning behavior is challenging since planning requires full 3D situational awareness beyond 2D reasoning. To address this challenge, our work proposes a holistic framework for strong alignment between agent models and 3D driving tasks. |
SHIHAO WANG et. al. | arxiv-cs.CV | 2024-05-02 |
712 | Enhanced Textual Feature Extraction for Visual Question Answering: A Simple Convolutional Approach Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we conduct a comprehensive comparison between complex textual models that leverage long-range dependencies and simpler models focusing on local textual features within a well-established VQA framework. |
Zhilin Zhang; | arxiv-cs.CV | 2024-05-01 |
713 | ConfigILM: A General Purpose Configurable Library for Combining Image and Language Models for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
L. Hackel; Kai Norman Clasen; Begum Demir; | SoftwareX | 2024-05-01 |
714 | Question-Aware Global-Local Video Understanding Network for Audio-Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: As a newly emerging task, audio-visual question answering (AVQA) has attracted research attention. Compared with traditional single-modality (e.g., audio or visual) QA tasks, it … |
Zailong Chen; Lei Wang; Peng Wang; Peng Gao; | IEEE Transactions on Circuits and Systems for Video … | 2024-05-01 |
715 | Video Question Answering With Semantic Disentanglement and Reasoning Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video question answering aims to provide correct answers given complex videos and related questions, posting high requirements of the comprehension ability in both video and … |
Jin Liu; Guoxiang Wang; Jialong Xie; F. Zhou; Huijuan Xu; | IEEE Transactions on Circuits and Systems for Video … | 2024-05-01 |
716 | ZVQAF: Zero-shot Visual Question Answering with Feedback from Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View |
Cheng Liu; Chao Wang; Yan Peng; Zhixu Li; | Neurocomputing | 2024-05-01 |
717 | Suvach — Generated Hindi QA Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes a new benchmark specifically designed for evaluating Hindi EQA models and discusses the methodology to do the same for any task. |
Vaishak Narayanan; Prabin Raj KP; Saifudheen Nouphal; | arxiv-cs.CL | 2024-04-30 |
718 | When to Retrieve: Teaching LLMs to Utilize Information Retrieval Effectively Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we demonstrate how Large Language Models (LLMs) can effectively learn to use an off-the-shelf information retrieval (IR) system specifically when additional context is required to answer a given question. |
Tiziano Labruna; Jon Ander Campos; Gorka Azkune; | arxiv-cs.CL | 2024-04-30 |
719 | QLSC: A Query Latent Semantic Calibrator for Robust Extractive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our work introduces a novel approach, called the “Query Latent Semantic Calibrator (QLSC)”, designed as an auxiliary module for existing MRC models. |
SHENG OUYANG et. al. | arxiv-cs.CL | 2024-04-30 |
720 | TableVQA-Bench: A Visual Question Answering Benchmark on Multiple Table Domains Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we establish a benchmark for table visual question answering, referred to as the TableVQA-Bench, derived from pre-existing table question-answering (QA) and table structure recognition datasets. |
Yoonsik Kim; Moonbin Yim; Ka Yeon Song; | arxiv-cs.CV | 2024-04-29 |
721 | Multi-Page Document Visual Question Answering Using Self-Attention Scoring Mechanism Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose a novel method and efficient training strategy for multi-page Document VQA tasks. |
Lei Kang; Rubèn Tito; Ernest Valveny; Dimosthenis Karatzas; | arxiv-cs.CV | 2024-04-29 |
722 | ViOCRVQA: Novel Benchmark Dataset and Vision Reader for Visual Question Answering By Understanding Vietnamese Text in Images Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Optical Character Recognition – Visual Question Answering (OCR-VQA) is the task of answering questions about the text information contained in images, an area that has only recently seen significant development in …
HUY QUANG PHAM et. al. | ArXiv | 2024-04-29 |
723 | Multi-hop Question Answering Over Knowledge Graphs Using Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we evaluate the capability of (LLMs) to answer questions over KG that involve multiple hops. |
Abir Chakraborty; | arxiv-cs.AI | 2024-04-29 |
724 | QANA: LLM-based Question Generation and Network Analysis for Zero-shot Key Point Analysis and Beyond Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose Question-Answering Network Analysis (QANA), a novel opinion mining framework that utilizes Large Language Models (LLMs) to generate questions from users’ comments, constructs a bipartite graph based on the comments’ answerability to the questions, and applies centrality measures to examine the importance of opinions. |
TOMOKI FUKUMA et. al. | arxiv-cs.CL | 2024-04-28 |
725 | MediFact at MEDIQA-M3G 2024: Medical Question Answering in Dermatology with Multimodal Learning Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: The MEDIQA-M3G 2024 challenge necessitates novel solutions for Multilingual & Multimodal Medical Answer Generation in dermatology (wai Yim et al., 2024a). This paper addresses the … |
Nadia Saeed; | Clinical Natural Language Processing Workshop | 2024-04-27 |
726 | Can A Multichoice Dataset Be Repurposed for Extractive Question Answering? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our aim is to enable others to adapt our approach for the 120+ other language variants in Belebele, many of which are deemed under-resourced. |
TERESA LYNN et. al. | arxiv-cs.CL | 2024-04-26 |
727 | Türkçe Dil Modellerinin Performans Karşılaştırması (Performance Comparison of Turkish Language Models) Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Yet, despite the increasing number of these models, there is no comprehensive comparison of their performance for Turkish. This study aims to fill this gap in the literature. |
EREN DOGAN et. al. | arxiv-cs.CL | 2024-04-25 |
728 | Large Language Models in The Clinic: A Comprehensive Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To better understand LLMs in the clinic, we construct a benchmark ClinicBench. |
FENGLIN LIU et. al. | arxiv-cs.CL | 2024-04-25 |
729 | Fusion of Domain-Adapted Vision and Language Models for Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a medical vision-language model that integrates large vision and language models adapted for the medical domain. |
CUONG NHAT HA et. al. | arxiv-cs.CL | 2024-04-24 |
730 | KS-LLM: Knowledge Selection of Large Language Models with Evidence Document for Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large language models (LLMs) suffer from the hallucination problem and face significant challenges when applied to knowledge-intensive tasks. A promising approach is to leverage … |
XINXIN ZHENG et. al. | ArXiv | 2024-04-24 |
731 | Assessing The Potential Of Mid-Sized Language Models For Clinical QA Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large language models, such as GPT-4 and Med-PaLM, have shown impressive performance on clinical tasks; however, they require access to compute, are closed-source, and cannot be … |
ELLIOT BOLTON et. al. | ArXiv | 2024-04-24 |
732 | Evaluating Tool-Augmented Agents in Remote Sensing Platforms Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Tool-augmented Large Language Models (LLMs) have shown impressive capabilities in remote sensing (RS) applications. However, existing benchmarks assume question-answering input … |
Simranjit Singh; Michael Fore; Dimitrios Stamoulis; | ArXiv | 2024-04-23 |
733 | Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Multimodal LLMs are the natural evolution of LLMs, and enlarge their capabilities so as to work beyond the pure textual modality. As research is being carried out to design novel architectures and vision-and-language adapters, in this paper we concentrate on endowing such models with the capability of answering questions that require external knowledge. |
DAVIDE CAFFAGNI et. al. | arxiv-cs.CV | 2024-04-23 |
734 | Retrieval Augmented Generation for Domain-specific Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a novel framework to compile a large question-answer database and develop the approach for retrieval-aware finetuning of a Large Language model. |
SANAT SHARMA et. al. | arxiv-cs.CL | 2024-04-23 |
735 | Generate-on-Graph: Treat LLM As Both Agent and KG in Incomplete Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To handle IKGQA, we propose a training-free method called Generate-on-Graph (GoG), which can generate new factual triples while exploring KGs. |
YAO XU et. al. | arxiv-cs.CL | 2024-04-23 |
736 | RS-LLaVA: A Large Vision-Language Model for Joint Captioning and Question Answering in Remote Sensing Imagery IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In this paper, we delve into the innovative application of large language models (LLMs) and their extension, large vision-language models (LVLMs), in the field of remote sensing … |
Y. Bazi; Laila Bashmal; Mohamad Mahmoud Al Rahhal; Riccardo Ricci; F. Melgani; | Remote. Sens. | 2024-04-23 |
737 | Tree of Reviews: A Tree-based Dynamic Iterative Retrieval Framework for Multi-hop Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multi-hop question answering is a knowledge-intensive complex problem. Large Language Models (LLMs) use their Chain-of-Thought (CoT) capability to reason about complex problems step by …
JIAPENG LI et. al. | ArXiv | 2024-04-22 |
738 | Listen Then See: Video Alignment with Speaker Attention Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a cross-modal alignment and subsequent representation fusion approach that achieves state-of-the-art results (82.06\% accuracy) on the Social IQ 2.0 dataset for SIQA. |
Aviral Agrawal; Carlos Mateo Samudio Lezcano; Iqui Balam Heredia-Marin; Prabhdeep Singh Sethi; | arxiv-cs.CV | 2024-04-21 |
739 | Exploring Diverse Methods in Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study explores innovative methods for improving Visual Question Answering (VQA) using Generative Adversarial Networks (GANs), autoencoders, and attention mechanisms. |
PANFENG LI et. al. | arxiv-cs.CV | 2024-04-21 |
740 | MahaSQuAD: Bridging Linguistic Divides in Marathi Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce MahaSQuAD, the first-ever full SQuAD dataset for the Indic language Marathi, consisting of 118,516 training, 11,873 validation, and 11,803 test samples. |
Ruturaj Ghatage; Aditya Kulkarni; Rajlaxmi Patil; Sharvi Endait; Raviraj Joshi; | arxiv-cs.CL | 2024-04-20 |
741 | PDF-MVQA: A Dataset for Multimodal Information Retrieval in PDF-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Through this work, we aim to enhance the capabilities of existing vision-and-language models in handling challenges posed by text-dominant documents in VRD-QA. |
Yihao Ding; Kaixuan Ren; Jiabin Huang; Siwen Luo; Soyeon Caren Han; | arxiv-cs.CV | 2024-04-19 |
742 | LaPA: Latent Prompt Assist Model For Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose the Latent Prompt Assist model (LaPA) for medical visual question answering. |
Tiancheng Gu; Kaicheng Yang; Dongnan Liu; Weidong Cai; | arxiv-cs.CV | 2024-04-19 |
743 | MedThink: Explaining Medical Visual Question Answering Via Multimodal Decision-Making Rationale Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the model interpretability and transparency of existing MedVQA solutions are often limited, posing challenges in understanding their decision-making processes. To address this issue, we devise a semi-automated annotation process to streamline data preparation and build new benchmark MedVQA datasets R-RAD, R-SLAKE and R-Path. |
XIAOTANG GAI et. al. | arxiv-cs.CV | 2024-04-18 |
744 | Evaluating AI for Law: Bridging The Gap with Open-Source Solutions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study evaluates the performance of general-purpose AI, like ChatGPT, in legal question-answering tasks, highlighting significant risks to legal professionals and clients. |
Rohan Bhambhoria; Samuel Dahan; Jonathan Li; Xiaodan Zhu; | arxiv-cs.AI | 2024-04-18 |
745 | Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Furthermore, current datasets may not provide a precise diagnostic for these methods. To tackle these challenges, firstly, we propose a novel dataset, MUSIC-AVQA-R, crafted in two steps: rephrasing questions within the test split of a public dataset (MUSIC-AVQA) and subsequently introducing distribution shifts to split questions. |
JIE MA et. al. | arxiv-cs.CV | 2024-04-18 |
746 | Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce Reka Core, Flash, and Edge, a series of powerful multimodal language models trained from scratch by Reka. |
AITOR ORMAZABAL et. al. | arxiv-cs.CL | 2024-04-18 |
747 | Characterizing LLM Abstention Behavior in Science QA with Context Perturbations Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we study the ability of LLMs to abstain from answering context-dependent science questions when provided insufficient or incorrect context. |
Bingbing Wen; Bill Howe; Lucy Lu Wang; | arxiv-cs.CL | 2024-04-18 |
748 | EuSQuAD: Automatically Translated and Aligned SQuAD2.0 for Basque Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This work presents EuSQuAD, the first initiative dedicated to automatically translating and aligning SQuAD2.0 into Basque, resulting in more than 142k QA examples. |
Aitor García-Pablos; Naiara Perez; Montse Cuadros; Jaione Bengoetxea; | arxiv-cs.CL | 2024-04-18 |
749 | Consistency Training By Synthetic Question Generation for Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: By citing a common modeling error prevalent in previous research, we introduce a new baseline model and compare our model’s performance against it, demonstrating an improvement in results, particularly when dealing with questions that include a substantial amount of historical context. |
Hamed Hematian Hemati; Hamid Beigy; | arxiv-cs.CL | 2024-04-17 |
750 | Language Models Still Struggle to Zero-shot Reason About Time Series Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address this gap, we generate a first-of-its-kind evaluation framework for time series reasoning, including formal tasks and a corresponding dataset of multi-scale time series paired with text captions across ten domains. Using these data, we probe whether language models achieve three forms of reasoning: (1) Etiological Reasoning – given an input time series, can the language model identify the scenario that most likely created it? |
Mike A. Merrill; Mingtian Tan; Vinayak Gupta; Tom Hartvigsen; Tim Althoff; | arxiv-cs.CL | 2024-04-17 |
751 | Knowledge-Enriched Prompt for Low-Resource Named Entity Recognition Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Named Entity Recognition (NER) in low-resource settings aims to identify and categorize entities in a sentence with limited labeled data. Although prompt-based methods have … |
Wenlong Hou; Weidong Zhao; Xianhui Liu; Wenyan Guo; | ACM Transactions on Asian and Low-Resource Language … | 2024-04-17 |
752 | Spiral of Silence: How Is Large Language Model Killing Information Retrieval? – A Case Study on Open Domain Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The practice of Retrieval-Augmented Generation (RAG), which integrates Large Language Models (LLMs) with retrieval systems, has become increasingly prevalent. However, the … |
XIAOYANG CHEN et. al. | Annual Meeting of the Association for Computational … | 2024-04-16 |
753 | CoTAR: Chain-of-Thought Attribution Reasoning with Multi-level Granularity Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce an attribution-oriented Chain-of-Thought reasoning method to enhance the accuracy of attributions. |
Moshe Berchansky; Daniel Fleischer; Moshe Wasserblat; Peter Izsak; | arxiv-cs.CL | 2024-04-16 |
754 | Spiral of Silence: How Is Large Language Model Killing Information Retrieval? — A Case Study on Open Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we construct and iteratively run a simulation pipeline to deeply investigate the short-term and long-term effects of LLM text on RAG systems. |
XIAOYANG CHEN et. al. | arxiv-cs.IR | 2024-04-16 |
755 | ViTextVQA: A Large-Scale Visual Question Answering Dataset for Evaluating Vietnamese Text Comprehension in Images Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In Vietnam, a developing country, resources remain limited and this task is still open. Therefore, we introduce the first large-scale Vietnamese dataset specializing in understanding text appearing in images, which we call ViTextVQA (Vietnamese Text-based Visual Question Answering dataset); it contains over 16,000 images and over 50,000 questions with answers. |
QUAN VAN NGUYEN et. al. | arxiv-cs.CL | 2024-04-16 |
756 | IMCN: Improved Modular Co-attention Networks for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Cheng Liu; Chao Wang; Yan Peng; | Appl. Intell. | 2024-04-16 |
757 | HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Vision Language Models (VLMs) are now the de facto state-of-the-art for a number of tasks including visual question answering, recognising objects, and spatial referral. In … |
Siddhant Bansal; Michael Wray; D. Damen; | ArXiv | 2024-04-15 |
758 | TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: The advent of Large Multimodal Models (LMMs) has sparked a surge in research aimed at harnessing their remarkable reasoning abilities. However, for understanding text-rich images, … |
BOZHI LUAN et. al. | ArXiv | 2024-04-15 |
759 | Context-aware Chatbot Using MLLMs for Cultural Heritage Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multi-modal Large Language Models (MLLMs) are currently an extremely active research topic for the multimedia and computer vision communities, and show a significant impact in … |
Pavan Kartheek Rachabatuni; F. Principi; Paolo Mazzanti; Marco Bertini; | Proceedings of the 15th ACM Multimedia Systems Conference | 2024-04-15 |
760 | M3TQA: Multi-View, Multi-Hop and Multi-Stage Reasoning for Temporal Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Knowledge Graphs (KGs) have attained notable success on Question Answering (QA) tasks. However, the presence of temporal constraints on numerous facts within the real world has …
Zhiyuan Zha; Pengnian Qi; Xigang Bao; Mengyuan Tian; Biao Qin; | ICASSP 2024 – 2024 IEEE International Conference on … | 2024-04-14 |
761 | Prompting Large Language Models with Fine-Grained Visual Relations from Scene Graph for Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Visual Question Answering (VQA) is a task that requires models to comprehend both questions and images. An increasing number of works are leveraging the strong reasoning … |
JIAPENG LIU et. al. | ICASSP 2024 – 2024 IEEE International Conference on … | 2024-04-14 |
762 | CORAAL QA: A Dataset and Framework for Open Domain Spontaneous Speech Question Answering from Long Audio Files Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper presents a novel dataset (CORAAL QA) and framework for audio question-answering from long audio recordings containing spontaneous speech. The dataset introduced here … |
Natarajan Balaji Shankar; Alexander Johnson; Christina Chance; Hariram Veeramani; Abeer Alwan; | ICASSP 2024 – 2024 IEEE International Conference on … | 2024-04-14 |
763 | GeMQuAD: Generating Multilingual Question Answering Datasets from Large Language Models Using Few Shot Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose GeMQuAD – a semi-supervised learning approach, extending the WeakDAP framework, applied to a dataset generated through ICL with just one example in the target language using AlexaTM 20B Seq2Seq LLM. |
Amani Namboori; Shivam Mangale; Andy Rosenbaum; Saleh Soltan; | arxiv-cs.CL | 2024-04-14 |
764 | Cross-Data Knowledge Graph Construction for LLM-enabled Educational Question-Answering System: A Case Study at HCMUT Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This article proposes a method for automatically constructing a Knowledge Graph from multiple data sources and discusses some initial applications (experimental trials) of KG in conjunction with LLMs for question-answering tasks. |
TUAN BUI et. al. | arxiv-cs.CL | 2024-04-14 |
765 | CuriousLLM: Elevating Multi-Document QA with Reasoning-Infused Knowledge Graph Prompting Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Nevertheless, the original KGP framework necessitates costly fine-tuning with large datasets yet still suffers from LLM hallucination. Therefore, we propose a reasoning-infused LLM agent to enhance this framework. |
Zukang Yang; Zixuan Zhu; | arxiv-cs.CL | 2024-04-13 |
766 | Relational Reasoning and Adaptive Fusion for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Xiang Shen; Dezhi Han; Liang Zong; Zihan Guo; Jie Hua; | Appl. Intell. | 2024-04-13 |
767 | Improving Health Question Answering with Reliable and Time-Aware Evidence Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We discuss the results, highlight interesting examples, and outline challenges for future research, like managing evidence disagreement and crafting user-friendly explanations. |
Juraj Vladika; Florian Matthes; | arxiv-cs.CL | 2024-04-12 |
768 | Enhancing Visual Question Answering Through Question-Driven Image Captions As Prompts Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a straightforward and efficient question-driven image captioning approach within this pipeline to transfer contextual information into the question-answering (QA) model. |
Övgü Özdemir; Erdem Akagündüz; | arxiv-cs.CV | 2024-04-12 |
769 | Small Models Are (Still) Effective Cross-Domain Argument Extractors Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, detailed explorations of these techniques’ ability to actually enable this transfer are lacking. In this work, we provide such a study, exploring zero-shot transfer using both techniques on six major EAE datasets at both the sentence and document levels. |
William Gantt; Aaron Steven White; | arxiv-cs.CL | 2024-04-12 |
770 | Synthetic Dataset Creation and Fine-Tuning of Transformer Models for Question Answering in Serbian Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we focus on generating a synthetic question answering (QA) dataset using an adapted Translate-Align-Retrieve method. |
Aleksa Cvetanović; Predrag Tadić; | arxiv-cs.CL | 2024-04-12 |
771 | LLoCO: Learning Long Contexts Offline Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Processing long contexts remains a challenge for large language models (LLMs) due to the quadratic computational and memory overhead of the self-attention mechanism and the substantial KV cache sizes during generation. We propose LLoCO, a novel approach to address this problem by learning contexts offline through context compression and in-domain parameter-efficient finetuning with LoRA. |
SIJUN TAN et. al. | arxiv-cs.CL | 2024-04-11 |
772 | MM-PhyQA: Multimodal Physics Question-Answering with Multi-image CoT Prompting Related Papers Related Patents Related Grants Related Venues Related Experts View |
AVINASH ANAND et. al. | Pacific-Asia Conference on Knowledge Discovery and Data … | 2024-04-11 |
773 | Early Prediction of Promising Expert Users on Community Question Answering Sites Related Papers Related Patents Related Grants Related Venues Related Experts View |
P. Roy; Jyoti Prakash Singh; | Int. J. Syst. Assur. Eng. Manag. | 2024-04-09 |
774 | SurveyAgent: A Conversational System for Personalized and Efficient Research Survey Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces SurveyAgent, a novel conversational system designed to provide personalized and efficient research survey assistance to researchers. |
XINTAO WANG et. al. | arxiv-cs.CL | 2024-04-09 |
775 | MedExpQA: Multilingual Benchmarking of Large Language Models for Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Finally, the situation is particularly grim if we consider benchmarking LLMs for languages other than English which remains, as far as we know, a totally neglected topic. In order to address these shortcomings, in this paper we present MedExpQA, the first multilingual benchmark based on medical exams to evaluate LLMs in Medical Question Answering. |
Iñigo Alonso; Maite Oronoz; Rodrigo Agerri; | arxiv-cs.CL | 2024-04-08 |
776 | Enhancing Software-Related Information Extraction Via Single-Choice Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper describes our participation in the Shared Task on Software Mentions Disambiguation (SOMD), with a focus on improving relation extraction in scholarly texts through generative Large Language Models (LLMs) using single-choice question-answering. |
Wolfgang Otto; Sharmila Upadhyaya; Stefan Dietze; | arxiv-cs.CL | 2024-04-08 |
777 | PerkwE_COQA: Enhanced Persian Conversational Question Answering By Combining Contextual Keyword Extraction with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents a novel method to elevate the performance of Persian Conversational question-answering (CQA) systems. |
Pardis Moradbeiki; Nasser Ghadiri; | arxiv-cs.CL | 2024-04-08 |
778 | Your Finetuned Large Language Model Is Already A Powerful Out-of-distribution Detector Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: We revisit the likelihood ratio between a pretrained large language model (LLM) and its finetuned variant as a criterion for out-of-distribution (OOD) detection. The intuition … |
Andi Zhang; Tim Z. Xiao; Weiyang Liu; Robert Bamler; Damon Wischik; | ArXiv | 2024-04-07 |
779 | Neural-Symbolic VideoQA: Learning Compositional Spatio-Temporal Reasoning for Real-world Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Existing approaches struggle to establish effective symbolic reasoning structures, which are crucial for answering compositional spatio-temporal questions. To address this challenge, we propose a neural-symbolic framework called Neural-Symbolic VideoQA (NS-VideoQA), specifically designed for real-world VideoQA tasks. |
Lili Liang; Guanglu Sun; Jin Qiu; Lizhong Zhang; | arxiv-cs.CV | 2024-04-05 |
780 | Which Experimental Design Is Better Suited for VQA Tasks? Eye Tracking Study on Cognitive Load, Performance, and Gaze Allocations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We conducted an eye-tracking user study with 13 participants to investigate the influence of stimulus-question ordering and question modality on participants using visual question-answering (VQA) tasks. |
Sita A. Vriend; Sandeep Vidyapu; Amer Rama; Kun-Ting Chen; Daniel Weiskopf; | arxiv-cs.HC | 2024-04-05 |
781 | KazQAD: Kazakh Open-Domain Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce KazQAD — a Kazakh open-domain question answering (ODQA) dataset — that can be used in both reading comprehension and full ODQA settings, as well as for information retrieval experiments. |
Rustem Yeshpanov; Pavel Efimov; Leonid Boytsov; Ardak Shalkarbayuli; Pavel Braslavski; | arxiv-cs.CL | 2024-04-05 |
782 | CBR-RAG: Case-Based Reasoning for Retrieval Augmented Generation in LLMs for Legal Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Retrieval-Augmented Generation (RAG) enhances Large Language Model (LLM) output by providing prior knowledge as context to input. This is beneficial for knowledge-intensive and … |
N. WIRATUNGA et. al. | International Conference on Case-Based Reasoning | 2024-04-04 |
783 | TinyVQA: Compact Multimodal Deep Neural Network for Visual Question Answering on Resource-Constrained Devices Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes TinyVQA, a novel multimodal deep neural network for visual question answering tasks that can be deployed on resource-constrained tinyML hardware. |
Hasib-Al Rashid; Argho Sarkar; Aryya Gangopadhyay; Maryam Rahnemoonfar; Tinoosh Mohsenin; | arxiv-cs.CV | 2024-04-04 |
784 | Can Small Language Models Help Large Language Models Reason Better?: LM-Guided Chain-of-Thought Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a novel framework, LM-Guided CoT, that leverages a lightweight (i.e., <1B) language model (LM) for guiding a black-box large (i.e., >10B) LM in reasoning tasks. |
JOOYOUNG LEE et. al. | arxiv-cs.CL | 2024-04-04 |
785 | Towards Better Generalization in Open-Domain Question Answering By Mitigating Context Memorization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigate the generalization performance of a retrieval-augmented QA model in two specific scenarios: 1) adapting to updated versions of the same knowledge corpus; 2) switching to completely different knowledge domains. |
Zixuan Zhang; Revanth Gangi Reddy; Kevin Small; Tong Zhang; Heng Ji; | arxiv-cs.CL | 2024-04-02 |
786 | Self-Improvement Programming for Temporal Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Motivated by semantic-parsing-based approaches that explicitly model constraints in questions by generating logical forms with symbolic operators, we design fundamental temporal operators for time constraints and introduce a novel self-improvement Programming method for TKGQA (Prog-TQA). |
ZHUO CHEN et. al. | arxiv-cs.CL | 2024-04-02 |
787 | Enhancing Human-Computer Interaction in Chest X-ray Analysis Using Vision and Language Model with Eye Gaze Patterns Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work proposes a novel approach to enhance human-computer interaction in chest X-ray analysis using Vision-Language Models (VLMs) enhanced with radiologists’ attention by incorporating eye gaze data alongside textual prompts. |
Yunsoo Kim; Jinge Wu; Yusuf Abdulle; Yue Gao; Honghan Wu; | arxiv-cs.CV | 2024-04-02 |
788 | Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper proposes a general and convenient method for covering longer contexts in Open-Domain Question-Answering tasks. |
ZHUO CHEN et. al. | arxiv-cs.CL | 2024-04-02 |
789 | MChartQA: A Universal Benchmark for Multimodal Chart Question Answer Based on Vision-Language Alignment and Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Traditional methods, which typically involve either direct multimodal processing or a table-to-text conversion followed by language model analysis, have limitations in effectively handling these complex scenarios. This paper introduces a novel multimodal chart question-answering model, specifically designed to address these intricate tasks. |
JINGXUAN WEI et. al. | arxiv-cs.CV | 2024-04-01 |
790 | Simple Contrastive Learning in A Self-supervised Manner for Robust Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
SHUWEN YANG et. al. | Comput. Vis. Image Underst. | 2024-04-01 |
791 | Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Previous studies have explored using large multimodal models (LMMs) as reward models to guide preference modeling, but their ability to accurately assess the factuality of generated responses compared to corresponding videos has not been conclusively established. This paper introduces a novel framework that utilizes detailed video captions as a proxy of video content, enabling language models to incorporate this information as supporting evidence for scoring video Question Answering (QA) predictions. |
RUOHONG ZHANG et. al. | arxiv-cs.CV | 2024-04-01 |
792 | Retrieve What You Need: A Mutual Learning Framework for Open-domain Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: An open-domain question answering (QA) system usually follows a retrieve-then-read paradigm, in which a retriever is used to retrieve relevant passages from a large corpus, and … |
Dingmin Wang; Qiuyuan Huang; Matthew Jackson; Jianfeng Gao; | Transactions of the Association for Computational … | 2024-04-01 |
793 | VideoDistill: Language-aware Vision Distillation for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we are inspired by the human recognition and learning pattern and propose VideoDistill, a framework with language-aware (i.e., goal-driven) behavior in both vision perception and answer generation process. |
Bo Zou; Chao Yang; Yu Qiao; Chengbin Quan; Youjian Zhao; | arxiv-cs.CV | 2024-04-01 |
794 | Explainable Multi-hop Question Generation: An End-to-End Approach Without Intermediate Question Labeling Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce an end-to-end question rewriting model that increases question complexity through sequential rewriting. |
Seonjeong Hwang; Yunsu Kim; Gary Geunbae Lee; | arxiv-cs.CL | 2024-03-31 |
795 | DOCMASTER: A Unified Platform for Annotation, Training, & Inference in Document Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces DOCMASTER, a unified platform designed for annotating PDF documents, model training, and inference, tailored to document question-answering. |
Alex Nguyen; Zilong Wang; Jingbo Shang; Dheeraj Mekala; | arxiv-cs.CL | 2024-03-30 |
796 | How Robust Are The Tabular QA Models for Scientific Tables? A Study Using Customized Dataset Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To investigate the robustness of the existing state-of-the-art QA models on scientific hybrid tabular data, we propose a new dataset, SciTabQA, consisting of 822 question-answer pairs from scientific tables and their descriptions. |
Akash Ghosh; B Venkata Sahith; Niloy Ganguly; Pawan Goyal; Mayank Singh; | arxiv-cs.CL | 2024-03-30 |
797 | Multi-hop Question Answering Under Temporal Knowledge Editing IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, existing models for MQA under KE exhibit poor performance when dealing with questions containing explicit temporal contexts. To address this limitation, we propose a novel framework, namely TEMPoral knowLEdge augmented Multi-hop Question Answering (TEMPLE-MQA). |
KEYUAN CHENG et. al. | arxiv-cs.CL | 2024-03-30 |
798 | How Robust Are The QA Models for Hybrid Scientific Tabular Data? A Study Using Customized Dataset Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question-answering (QA) on hybrid scientific tabular and textual data deals with scientific information, and relies on complex numerical reasoning. In recent years, while tabular … |
Akash Ghosh; Venkata Sahith Bathini; Niloy Ganguly; Pawan Goyal; Mayank Singh; | ArXiv | 2024-03-30 |
799 | Design As Desired: Utilizing Visual Question Answering for Multimodal Pre-training Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we utilize Visual Question Answering (VQA) for multimodal pre-training to guide the framework focusing on targeted pathological features. |
TONGKUN SU et. al. | arxiv-cs.CV | 2024-03-29 |
800 | Multi-Frame, Lightweight & Efficient Vision-Language Models for Question Answering in Autonomous Driving Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, current approaches to these systems use expensive large language model (LLM) backbones and image encoders, making such systems unsuitable for real-time autonomous driving systems where tight memory constraints exist and fast inference time is necessary. To address these previous issues, we develop EM-VLM4AD, an efficient, lightweight, multi-frame vision language model which performs Visual Question Answering for autonomous driving. |
Akshay Gopalkrishnan; Ross Greer; Mohan Trivedi; | arxiv-cs.CV | 2024-03-28 |
801 | JDocQA: Japanese Document Question Answering Dataset for Generative Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce Japanese Document Question Answering (JDocQA), a large-scale document-based QA dataset, essentially requiring both visual and textual information to answer questions, which comprises 5,504 documents in PDF format and annotated 11,600 question-and-answer instances in Japanese. |
Eri Onami; Shuhei Kurita; Taiki Miyanishi; Taro Watanabe; | arxiv-cs.CL | 2024-03-28 |
802 | An Image Grid Can Be Worth A Video: Zero-shot Video Question Answering Using A VLM IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we introduce a simple yet novel strategy where only a single Vision Language Model (VLM) is utilized. |
Wonkyun Kim; Changin Choi; Wonseok Lee; Wonjong Rhee; | arxiv-cs.CV | 2024-03-27 |
803 | MFORT-QA: Multi-hop Few-shot Open Rich Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the Multi-hop Few-shot Open Rich Table QA (MFORT-QA) approach, which consists of two major steps. |
Che Guan; Mengyu Huang; Peng Zhang; | arxiv-cs.CL | 2024-03-27 |
804 | A Gaze-grounded Visual Question Answering Dataset for Clarifying Ambiguous Japanese Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we propose the Gaze-grounded VQA dataset (GazeVQA) that clarifies ambiguous questions using gaze information by focusing on a clarification process complemented by gaze information. |
Shun Inadumi; Seiya Kawano; Akishige Yuguchi; Yasutomo Kawanishi; Koichiro Yoshino; | arxiv-cs.CL | 2024-03-26 |
805 | Denoising Table-Text Retrieval for Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Previous studies in table-text open-domain question answering have two common challenges: firstly, their retrievers can be affected by false-positive labels in training datasets; secondly, they may struggle to provide appropriate evidence for questions that require reasoning across the table. To address these issues, we propose Denoised Table-Text Retriever (DoTTeR). |
Deokhyung Kang; Baikjin Jung; Yunsu Kim; Gary Geunbae Lee; | arxiv-cs.CL | 2024-03-26 |
806 | Intrinsic Subgraph Generation for Interpretable Graph Based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce an interpretable approach for graph-based VQA and demonstrate competitive performance on the GQA dataset. |
Pascal Tilli; Ngoc Thang Vu; | arxiv-cs.CL | 2024-03-26 |
807 | Can Multiple-choice Questions Really Be Useful in Detecting The Abilities of LLMs? IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The misalignment between the task and the evaluation method demands a thoughtful analysis of MCQ’s efficacy, which we undertake in this paper by evaluating nine LLMs on four question-answering (QA) datasets in two languages: Chinese and English. |
WANGYUE LI et. al. | arxiv-cs.CL | 2024-03-26 |
808 | GPTs and Language Barrier: A Cross-Lingual Legal QA Examination Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we explore the application of Generative Pre-trained Transformers (GPTs) in cross-lingual legal Question-Answering (QA) systems using the COLIEE Task 4 dataset. |
Ha-Thanh Nguyen; Hiroaki Yamada; Ken Satoh; | arxiv-cs.CL | 2024-03-26 |
809 | Chain-of-Action: Faithful and Multimodal Question Answering Through Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a Chain-of-Action (CoA) framework for multimodal and retrieval-augmented Question-Answering (QA). |
Zhenyu Pan; Haozheng Luo; Manling Li; Han Liu; | arxiv-cs.CL | 2024-03-25 |
810 | ProCQA: A Large-scale Community-based Programming Question Answering Dataset for Code Search Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce ProCQA, a large-scale programming question answering dataset extracted from the StackOverflow community, offering naturally structured mixed-modal QA pairs. |
Zehan Li; Jianfei Zhang; Chuantao Yin; Yuanxin Ouyang; Wenge Rong; | arxiv-cs.CL | 2024-03-25 |
811 | CyberQ: Generating Questions and Answers for Cybersecurity Education Using Knowledge Graph-Augmented LLMs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Building a skilled cybersecurity workforce is paramount to building a safer digital world. However, the diverse skill set, constantly emerging vulnerabilities, and deployment of … |
Garima Agrawal; Kuntal Pal; Yuli Deng; Huanmin Liu; Yingying Chen; | AAAI Conference on Artificial Intelligence | 2024-03-24 |
812 | Synthesize Step-by-Step: Tools, Templates and LLMs As Data Generators for Reasoning-Based Chart VQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we address the lack of reasoning ability by data augmentation. |
Zhuowan Li; Bhavan Jasani; Peng Tang; Shabnam Ghadar; | arxiv-cs.CV | 2024-03-24 |
813 | RetLLM-E: Retrieval-Prompt Strategy for Question-Answering on Student Discussion Forums Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper focuses on using Large Language Models to support teaching assistants in answering questions on large student forums such as Piazza and EdSTEM. Since student questions … |
CHANCHARIK MITRA et. al. | AAAI Conference on Artificial Intelligence | 2024-03-24 |
814 | Graph Reasoning Transformers for Knowledge-Aware Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Augmenting Language Models (LMs) with structured knowledge graphs (KGs) aims to leverage structured world knowledge to enhance the capability of LMs to complete … |
Ruilin Zhao; Feng Zhao; Liang Hu; Guandong Xu; | AAAI Conference on Artificial Intelligence | 2024-03-24 |
815 | SciSpace Copilot: Empowering Researchers Through Intelligent Reading Assistance Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: We introduce SciSpace Copilot, an AI research assistant that helps in understanding and reading research papers faster by providing a plethora of features. Answering questions … |
TRINITA ROY et. al. | AAAI Conference on Artificial Intelligence | 2024-03-24 |
816 | Explore Until Confident: Efficient Exploration for Embodied Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We consider the problem of Embodied Question Answering (EQA), which refers to settings where an embodied agent such as a robot needs to actively explore an environment to gather information until it is confident about the answer to a question. In this work, we leverage the strong semantic reasoning capabilities of large vision-language models (VLMs) to efficiently explore and answer such questions. |
ALLEN Z. REN et. al. | arxiv-cs.RO | 2024-03-23 |
817 | Awakening Augmented Generation: Learning to Awaken Internal Knowledge of Large Language Models for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Recent works indicate that LLMs model rich knowledge, but it is often not effectively activated and awakened. Inspired by this, we propose a novel knowledge-augmented framework, Awakening-Augmented-Generation (AAG), which mimics the human ability to answer questions using only thinking and recalling to compensate for knowledge gaps, thereby awakening relevant knowledge in LLMs without relying on external resources. |
HUANXUAN LIAO et. al. | arxiv-cs.CL | 2024-03-22 |
818 | Surgical-LVLM: Learning to Adapt Large Vision-Language Model for Grounded Visual Question Answering in Robotic Surgery Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Recent advancements in Surgical Visual Question Answering (Surgical-VQA) and related region grounding have shown great promise for robotic and medical applications, addressing the … |
GUAN-FENG WANG et. al. | ArXiv | 2024-03-22 |
819 | Context Quality Matters in Training Fusion-in-Decoder for Extractive Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Specifically, as context quality during training increases, FiD models tend to attend more uniformly to each passage in context. |
Kosuke Akimoto; Kunihiro Takeoka; Masafumi Oyamada; | arxiv-cs.CL | 2024-03-21 |
820 | Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models Through Question Complexity IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose a novel adaptive QA framework, that can dynamically select the most suitable strategy for (retrieval-augmented) LLMs from the simplest to the most sophisticated ones based on the query complexity. |
Soyeong Jeong; Jinheon Baek; Sukmin Cho; Sung Ju Hwang; Jong C. Park; | arxiv-cs.CL | 2024-03-21 |
821 | Multi-Agent VQA: Exploring Multi-Agent Foundation Models in Zero-Shot Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose an adaptive multi-agent system, named Multi-Agent VQA, to overcome the limitations of foundation models in object detection and counting by using specialized agents as tools. |
Bowen Jiang; Zhijun Zhuang; Shreyas S. Shivakumar; Dan Roth; Camillo J. Taylor; | arxiv-cs.CV | 2024-03-21 |
822 | Large Language Models for Multi-Choice Question Classification of Medical Subjects Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The aim of this paper is to evaluate whether large language models trained on multi-choice question data can be used to discriminate between medical subjects. |
Víctor Ponce-López; | arxiv-cs.CL | 2024-03-21 |
823 | Language Repository for Long Video Understanding IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce a Language Repository (LangRepo) for LLMs, that maintains concise and structured information as an interpretable (i.e., all-textual) representation. |
Kumara Kahatapitiya; Kanchana Ranasinghe; Jongwoo Park; Michael S. Ryoo; | arxiv-cs.CV | 2024-03-21 |
824 | Improved Baselines for Data-efficient Perceptual Augmentation of LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While different approaches have been explored to interface LLMs with “perceptual backbones” that process, e.g., visual or audio data, they are often explored for different tasks, different datasets, and using different perceptual backbones and language models, hindering direct comparison of the interfacing mechanisms. To remedy this lack of comparability between methods, we present an extensive experimental evaluation of different interfacing mechanisms, across multiple tasks (including image, video, and audio captioning as well as visual question answering), datasets and backbones, paying special attention to low-data settings. |
Théophane Vallaeys; Mustafa Shukor; Matthieu Cord; Jakob Verbeek; | arxiv-cs.CV | 2024-03-20 |
825 | Syn-QA2: Evaluating False Assumptions in Long-tail Questions with Synthetic QA Datasets Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we introduce Syn-(QA)², a set of two synthetically generated QA datasets: one generated using perturbed relations from Wikidata, and the other by perturbing HotpotQA (Yang et al. 2018). |
Ashwin Daswani; Rohan Sawant; Najoung Kim; | arxiv-cs.CL | 2024-03-18 |
826 | Dr3: Ask Large Language Models Not to Give Off-Topic Answers in Open Domain Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This issue of off-topic answers accounts for approximately one-third of incorrect answers, yet remains underexplored despite its significance. To alleviate this issue, we propose the Discriminate->Re-Compose->Re-Solve->Re-Decompose (Dr3) mechanism. |
YUAN GAO et. al. | arxiv-cs.CL | 2024-03-18 |
827 | Enhancing Event Causality Identification with Rationale and Structure-Aware Causal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a multi-task learning framework to enhance event causality identification with rationale and structure-aware causal question answering. |
Baiyan Zhang; Qin Chen; Jie Zhou; Jian Jin; Liang He; | arxiv-cs.CL | 2024-03-17 |
828 | RetinaQA: A Robust Knowledge Base Question Answering Model for Both Answerable and Unanswerable Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Recent research has found that such models, when superficially adapted to detect answerability, struggle to satisfactorily identify the different categories of unanswerable questions, and simultaneously preserve good performance for answerable questions. Towards addressing this issue, we propose RetinaQA, a new KBQA model that unifies two key ideas in a single KBQA architecture: (a) discrimination over candidate logical forms, rather than generating these, for handling schema-related unanswerability, and (b) sketch-filling-based construction of candidate logical forms for handling data-related unanswerability. |
Prayushi Faldu; Indrajit Bhattacharya; | arxiv-cs.CL | 2024-03-16 |
829 | Knowledge Condensation and Reasoning for Knowledge-based VQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the challenge, we propose two synergistic models: a Knowledge Condensation model and a Knowledge Reasoning model. |
DONGZE HAO et. al. | arxiv-cs.CV | 2024-03-15 |
830 | Few-Shot Image Classification and Segmentation As Visual Question Answering Using Vision-Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce the Vision-Instructed Segmentation and Evaluation (VISE) method that transforms the FS-CS problem into the Visual Question Answering (VQA) problem, utilising Vision-Language Models (VLMs), and addresses it in a training-free manner. |
Tian Meng; Yang Tao; Ruilin Lyu; Wuliang Yin; | arxiv-cs.CV | 2024-03-15 |
831 | Adversarial Training with OCR Modality Perturbation for Scene-Text Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a multimodal adversarial training architecture with spatial awareness capabilities. |
Zhixuan Shen; Haonan Luo; Sijia Li; Tianrui Li; | arxiv-cs.CV | 2024-03-14 |
832 | ProSwitch: Knowledge-Guided Instruction Tuning to Switch Between Professional and Non-Professional Responses Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study introduces a novel approach, named ProSwitch, which enables a language model to switch between professional and non-professional answers, by tuning and evaluating through the guidance of domain and style knowledge. |
CHANG ZONG et. al. | arxiv-cs.CL | 2024-03-14 |
833 | DAM: Dynamic Adapter Merging for Continual Video QA Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present a parameter-efficient method for continual video question-answering (VidQA) learning. |
FENG CHENG et. al. | arxiv-cs.CV | 2024-03-13 |
834 | RAGGED: Towards Informed Design of Retrieval Augmented Generation Systems Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To systematically find the optimal configuration, we introduce RAGGED, a framework for analyzing RAG configurations across various DBQA tasks. |
Jennifer Hsia; Afreen Shaikh; Zhiruo Wang; Graham Neubig; | arxiv-cs.CL | 2024-03-13 |
835 | MoleculeQA: A Dataset to Evaluate Factual Accuracy in Molecular Comprehension Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To rectify the absence of factual evaluation, we present MoleculeQA, a novel question answering (QA) dataset which possesses 62K QA pairs over 23K molecules. |
XINGYU LU et. al. | arxiv-cs.CL | 2024-03-12 |
836 | Answering Diverse Questions Via Text Attached with Key Audio-Visual Clues Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Indeed, the natural heterogeneous relationship between audio-visual data and text makes perfect fusion challenging. To prevent high-level audio-visual semantics from weakening the network’s adaptability to diverse question types, we propose a framework that performs mutual correlation distillation (MCD) to aid question inference. |
Qilang Ye; Zitong Yu; Xin Liu; | arxiv-cs.CV | 2024-03-11 |
837 | InfiBench: Evaluating The Question-Answering Capabilities of Code Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, they are insufficient to cover the full range of expected capabilities of code LLMs, which span beyond code generation to answering diverse coding-related questions. To fill this gap, we propose InfiBench, the first large-scale freeform question-answering (QA) benchmark for code to our knowledge, comprising 234 carefully selected high-quality Stack Overflow questions that span across 15 programming languages. |
LINYI LI et. al. | arxiv-cs.SE | 2024-03-10 |
838 | KG-Rank: Enhancing Large Language Models for Medical QA with Knowledge Graphs and Ranking Techniques IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we develop an augmented LLM framework, KG-Rank, which leverages a medical knowledge graph (KG) along with ranking and re-ranking techniques, to improve the factuality of long-form question answering (QA) in the medical domain. |
RUI YANG et. al. | arxiv-cs.CL | 2024-03-09 |
839 | SnapNTell: Enhancing Entity-Centric Visual Question Answering with Retrieval Augmented Multimodal LLM Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce a novel evaluative benchmark named \textbf{SnapNTell}, specifically tailored for entity-centric VQA. |
JIELIN QIU et. al. | arxiv-cs.CV | 2024-03-07 |
840 | Can’t Remember Details in Long Documents? You Need Some R&R Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Long-context large language models (LLMs) hold promise for tasks such as question-answering (QA) over long documents, but they tend to miss important information in the middle of context documents (arXiv:2307.03172v3). Here, we introduce $\textit{R&R}$ — a combination of two novel prompt-based methods called $\textit{reprompting}$ and $\textit{in-context retrieval}$ (ICR) — to alleviate this effect in document-based QA. |
Devanshu Agrawal; Shang Gao; Martin Gajek; | arxiv-cs.CL | 2024-03-07 |
841 | Are Language Models Puzzle Prodigies? Algorithmic Puzzles Unveil Serious Challenges in Multimodal Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces the novel task of multimodal puzzle solving, framed within the context of visual question-answering. |
Deepanway Ghosal; Vernon Toh Yan Han; Chia Yew Ken; Soujanya Poria; | arxiv-cs.CV | 2024-03-06 |
842 | Benchmarking Hallucination in Large Language Models Based on Unanswerable Math Word Problem Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a new method for evaluating LLM hallucination in Question Answering (QA) based on the unanswerable math word problem (MWP). |
YUHONG SUN et. al. | arxiv-cs.CL | 2024-03-06 |
843 | Evaluating The Elementary Multilingual Capabilities of Large Language Models with MultiQ IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Recent research shows that, despite limits in their intended use, people prompt LLMs in many different languages. Therefore, in this paper, we investigate the basic multilingual capabilities of state-of-the-art open LLMs beyond their intended use. |
Carolin Holtermann; Paul Röttger; Timm Dill; Anne Lauscher; | arxiv-cs.CL | 2024-03-06 |
844 | Enhancing Generalization in Medical Visual Question Answering Tasks Via Gradient-Guided Model Perturbation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a method that incorporates gradient-guided parameter perturbations to the visual encoder of the multimodality model during both pre-training and fine-tuning phases, to improve model generalization for downstream medical VQA tasks. |
Gang Liu; Hongyang Li; Zerui He; Shenjun Zhong; | arxiv-cs.CV | 2024-03-05 |
845 | Vision-Language Models for Medical Report Generation and Visual Question Answering: A Review IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Medical vision-language models (VLMs) combine computer vision and natural language processing to analyze visual and textual medical data. |
Iryna Hartsock; Ghulam Rasool; | arxiv-cs.CV | 2024-03-04 |
846 | KorMedMCQA: Multi-Choice Question Answering Benchmark for Korean Healthcare Professional Licensing Examinations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce KorMedMCQA, the first Korean multiple-choice question answering (MCQA) benchmark derived from Korean healthcare professional licensing examinations, covering the years 2012 to 2023. |
Sunjun Kweon; Byungjin Choi; Minkyu Kim; Rae Woong Park; Edward Choi; | arxiv-cs.CL | 2024-03-03 |
847 | Automatic Question-Answer Generation for Long-Tail Knowledge Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Pretrained Large Language Models (LLMs) have gained significant attention for addressing open-domain Question Answering (QA). While they exhibit high accuracy in answering … |
ROHAN KUMAR et. al. | ArXiv | 2024-03-03 |
848 | Right for Right Reasons: Large Language Models for Verifiable Commonsense Knowledge Graph Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Knowledge Graph Question Answering (KGQA) methods seek to answer Natural Language questions using the relational information stored in Knowledge Graphs (KGs). With the recent … |
Armin Toroghi; Willis Guo; Mohammad Mahdi Torabi pour; Scott Sanner; | ArXiv | 2024-03-03 |
849 | CR-LT-KGQA: A Knowledge Graph Question Answering Dataset Requiring Commonsense Reasoning and Long-Tail Knowledge Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Knowledge graph question answering (KGQA) is a well-established field that seeks to provide factual answers to natural language (NL) questions by leveraging knowledge graphs … |
Willis Guo; Armin Toroghi; Scott Sanner; | ArXiv | 2024-03-03 |
850 | LocalRQA: From Generating Data to Locally Training, Testing, and Deploying Retrieval-Augmented QA Systems Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose LocalRQA, an open-source toolkit that features a wide selection of model training algorithms, evaluation methods, and deployment tools curated from the latest research. |
Xiao Yu; Yunan Lu; Zhou Yu; | arxiv-cs.CL | 2024-03-01 |
851 | XMQAs: Constructing Complex-Modified Question-Answering Dataset for Robust Question Understanding IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question understanding is an important issue to the success of a Knowledge-based Question Answering (KBQA) system. However, the existing study does not pay enough attention to this …
Yuyan Chen; Yanghua Xiao; Zhixu Li; Bang Liu; | IEEE Transactions on Knowledge and Data Engineering | 2024-03-01 |
852 | Let LLMs Take on The Latest Challenges! A Chinese Dynamic Question Answering Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To promote the improvement of Chinese LLMs’ ability to answer dynamic questions, in this paper, we introduce CDQA, a Chinese Dynamic QA benchmark containing question-answer pairs related to the latest news on the Chinese Internet. |
ZHIKUN XU et. al. | arxiv-cs.CL | 2024-02-29 |
853 | Prompting Explicit and Implicit Knowledge for Multi-hop Question Answering Based on Human Reading Process Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we introduce a Prompting Explicit and Implicit knowledge (PEI) framework, which uses prompts to connect explicit and implicit knowledge, aligning with human reading process for multi-hop QA. |
Guangming Huang; Yunfei Long; Cunjin Luo; Jiaxing Shen; Xia Sun; | arxiv-cs.CL | 2024-02-29 |
854 | Can GPT Improve The State of Prior Authorization Via Guideline Based Automated Question Answering? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we evaluate whether GPT can validate numerous key factors, in turn helping health plans reach a decision drastically faster. |
Shubham Vatsal; Ayush Singh; Shabnam Tafreshi; | arxiv-cs.CL | 2024-02-28 |
855 | The First Place Solution of WSDM Cup 2024: Leveraging Large Language Models for Conversational Multi-Doc QA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce our winning approach for the Conversational Multi-Doc QA challenge in WSDM Cup 2024, which exploits the superior natural language understanding and generation capability of Large Language Models (LLMs). |
Yiming Li; Zhao Zhang; | arxiv-cs.CL | 2024-02-28 |
856 | Benchmarking Large Language Models on Answering and Explaining Challenging Medical Questions IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Moreover, the lack of reference explanations means we cannot easily evaluate the reasoning of model decisions, a crucial component of supporting doctors in making complex medical decisions. To address these challenges, we construct two new datasets: JAMA Clinical Challenge and Medbullets. |
Hanjie Chen; Zhouxiang Fang; Yash Singla; Mark Dredze; | arxiv-cs.CL | 2024-02-28 |
857 | Researchy Questions: A Dataset of Multi-Perspective, Decompositional Questions for LLM Web Agents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present Researchy Questions, a dataset of search engine queries tediously filtered to be non-factoid, “decompositional” and multi-perspective. |
CORBY ROSSET et. al. | arxiv-cs.CL | 2024-02-27 |
858 | JMLR: Joint Medical LLM and Retrieval Training for Enhancing Reasoning and Professional Question Answering Capability IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Unlike previous methods in RAG where the retrieval model was trained separately from the LLM, we introduce JMLR (which jointly trains the LLM and information retrieval) during the fine-tuning phase. |
Junda Wang; Zhichao Yang; Zonghai Yao; Hong Yu; | arxiv-cs.CL | 2024-02-27 |
859 | BlendSQL: A Scalable Dialect for Unifying Hybrid Question Answering in Relational Algebra Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce BlendSQL, a superset of SQLite to act as a unified dialect for orchestrating reasoning across both unstructured and structured data. |
Parker Glenn; Parag Pravin Dakle; Liang Wang; Preethi Raghavan; | arxiv-cs.CL | 2024-02-27 |
860 | Unsupervised Multiple Choices Question Answering Via Universal Corpus Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel framework designed to generate synthetic MCQA data based solely on contexts from the universal domain, without relying on any form of manual annotation. |
Qin Zhang; Hao Ge; Xiaojun Chen; Meng Fang; | arxiv-cs.CL | 2024-02-27 |
861 | REAR: A Relevance-Aware Retrieval-Augmented Framework for Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Despite the extensive efforts on RAG research, in existing methods, LLMs cannot precisely assess the relevance of retrieved documents, thus likely leading to misleading or even incorrect utilization of external knowledge (e.g., retrieved documents). To address this issue, in this paper, we propose REAR, a RElevance-Aware Retrieval-augmented approach for open-domain question answering (QA). |
YUHAO WANG et. al. | arxiv-cs.CL | 2024-02-27 |
862 | GigaPevt: Multimodal Medical Assistant Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This demo paper presents the GigaPevt, the first multimodal medical assistant that combines the dialog capabilities of large language models with specialized medical models. |
PAVEL BLINOV et. al. | arxiv-cs.AI | 2024-02-26 |
863 | Two-stage Generative Question Answering on Temporal Knowledge Graph Using Large Language Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Temporal knowledge graph question answering (TKGQA) poses a significant challenge task, due to the temporal constraints hidden in questions and the answers sought from dynamic … |
YIFU GAO et. al. | arxiv-cs.CL | 2024-02-26 |
864 | SuRe: Summarizing Retrievals Using Answer Candidates for Open-domain QA of LLMs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we design a simple yet effective framework to enhance open-domain QA (ODQA) with LLMs, based on the summarized retrieval (SuRe). |
JAEHYUNG KIM et. al. | iclr | 2024-02-26 |
865 | The All-Seeing Project: Towards Panoptic Visual Recognition and Understanding of The Open World IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present the All-Seeing (AS) project: a large-scale dataset and model for recognizing and understanding everything in the open world. Using a scalable data engine that incorporates human feedback and efficient models in the loop, we create a new dataset (AS-1B) with over 1.2 billion regions annotated with semantic tags, question-answering pairs, and detailed captions. |
WEIYUN WANG et. al. | iclr | 2024-02-26 |
866 | Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: With the augmentation of a retrieval module, open-source Large Language Models (LLMs) can produce coherent answers, often with different focuses, but are still sub-optimal in terms of reliable evidence selection and in-depth question analysis. In this paper, we propose a novel Chain-of-Discussion framework to leverage the synergy among multiple open-source LLMs, aiming to provide more correct and more comprehensive answers for open-ended QA, although they are not strong enough individually. |
Mingxu Tao; Dongyan Zhao; Yansong Feng; | arxiv-cs.CL | 2024-02-26 |
867 | RAPPER: Reinforced Rationale-Prompted Paradigm for Natural Language Explanation in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In practice, one might encounter explanations which lack informativeness or contradict visual-grounded facts, known as implausibility and hallucination problems, respectively. To tackle these challenging issues, we consider the task of visual question answering (VQA) and introduce Rapper, a two-stage Reinforced Rationale-Prompted Paradigm. |
KAI-PO CHANG et. al. | iclr | 2024-02-26 |
868 | CABINET: Content Relevance-based Noise Reduction for Table Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To mitigate this, we propose CABINET (Content RelevAnce-Based NoIse ReductioN for TablE QuesTion-Answering) – a framework to enable LLMs to focus on relevant tabular data by suppressing extraneous information. We release our code and datasets here. |
SOHAN PATNAIK et. al. | iclr | 2024-02-26 |
869 | Bootstrapping Variational Information Pursuit with Large Language and Vision Models for Interpretable Image Classification Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This limits V-IP’s application to small-scale tasks where manual data annotation is feasible. In this work, we focus on image classification tasks and propose to relieve this bottleneck by leveraging pretrained language and vision models. |
Aditya Chattopadhyay; Kwan Ho Ryan Chan; Rene Vidal; | iclr | 2024-02-26 |
870 | EQA-MX: Embodied Question Answering Using Multimodal Expression Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we have introduced 8 novel embodied question answering (EQA) tasks to develop learning models to comprehend embodied questions with multimodal expressions. We have developed a novel large-scale dataset, EQA-MX, with over 8 million diverse embodied QA data samples involving multimodal expressions from multiple visual and verbal perspectives. |
Md Mofijul Islam; Alexi Gladstone; Riashat Islam; Tariq Iqbal; | iclr | 2024-02-26 |
871 | Deep Learning Approaches for Improving Question Answering Systems in Hepatocellular Carcinoma Research IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Models such as BERT and GPT-3, trained on vast amounts of data, have revolutionized language understanding and generation. These pre-trained models serve as robust bases for various tasks including semantic understanding, intelligent writing, and reasoning, paving the way for a more generalized form of artificial intelligence. |
Shuning Huo; Yafei Xiang; Hanyi Yu; Mengran Zhu; Yulu Gong; | arxiv-cs.CL | 2024-02-25 |
872 | PerLTQA: A Personal Long-Term Memory Dataset for Memory Classification, Retrieval, and Synthesis in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Based on PerLTQA, we propose a novel framework for memory integration and generation, consisting of three main components: Memory Classification, Memory Retrieval, and Memory Synthesis. |
YIMING DU et. al. | arxiv-cs.CL | 2024-02-25 |
873 | Bridging The Gap Between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Integrating the proposed mechanisms above, we present BridgeQA, which offers a fresh perspective on multi-modal transformer-based architectures for 3D-VQA. |
Wentao Mo; Yang Liu; | arxiv-cs.CV | 2024-02-24 |
874 | Predicting Semantic Category of Answers for Question Answering Systems Using Transformers: A Transfer Learning Approach Related Papers Related Patents Related Grants Related Venues Related Experts View |
S. C. M.; Jayaraman Prem Prakash; Varun Sai Alaparthi; | Multim. Tools Appl. | 2024-02-24 |
875 | Biomedical Entity Linking As Multiple Choice Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Although biomedical entity linking (BioEL) has made significant progress with pre-trained language models, challenges still exist for fine-grained and long-tailed entities. To address these challenges, we present BioELQA, a novel model that treats Biomedical Entity Linking as Multiple Choice Question Answering. |
Zhenxi Lin; Ziheng Zhang; Xian Wu; Yefeng Zheng; | arxiv-cs.CL | 2024-02-23 |
876 | VISREAS: Complex Visual Reasoning with Unanswerable Questions Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Verifying a question’s validity before answering is crucial in real-world applications, where users may provide imperfect instructions. In this scenario, an ideal model should … |
Syeda Nahida Akter; Sangwu Lee; Yingshan Chang; Yonatan Bisk; Eric Nyberg; | Annual Meeting of the Association for Computational … | 2024-02-23 |
877 | Triad: A Framework Leveraging A Multi-Role LLM-based Agent to Solve Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present Triad, a unified framework that utilizes an LLM-based agent with three roles for KBQA tasks. |
CHANG ZONG et. al. | arxiv-cs.CL | 2024-02-22 |
878 | Leveraging Large Language Models for Concept Graph Recovery and Question Answering in NLP Education Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To tackle TutorQA queries, we present CGLLM, a pipeline integrating concept graphs with LLMs for answering diverse questions. |
RUI YANG et. al. | arxiv-cs.CL | 2024-02-22 |
879 | Word-Sequence Entropy: Towards Uncertainty Estimation in Free-Form Medical Question Answering Applications and Beyond Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces Word-Sequence Entropy (WSE), a method that calibrates uncertainty at both the word and sequence levels, considering semantic relevance. |
ZHIYUAN WANG et. al. | arxiv-cs.CL | 2024-02-21 |
880 | LLMs Meet Long Video: Advancing Long Video Question Answering with An Interactive Visual Adapter in LLMs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, this approach incurs high computational costs due to the extensive array of video tokens, experiences reduced visual clarity as a consequence of token aggregation, and confronts challenges arising from irrelevant visual tokens while answering video-related questions. To alleviate these issues, we present an Interactive Visual Adapter (IVA) within LLMs, designed to enhance interaction with fine-grained visual elements. |
Yunxin Li; Xinyu Chen; Baotain Hu; Min Zhang; | arxiv-cs.CL | 2024-02-21 |
881 | ActiveRAG: Autonomously Knowledge Assimilation and Accommodation Through Retrieval-Augmented Agents Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce ActiveRAG, a multi-agent framework that mimics human learning behavior to help LLMs actively engage with and learn from retrieved evidence. |
ZHIPENG XU et. al. | arxiv-cs.CL | 2024-02-21 |
882 | Object Attribute Matters in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel VQA approach from the perspective of utilizing object attribute, aiming to achieve better object-level visual-language alignment and multimodal scene understanding. |
Peize Li; Qingyi Si; Peng Fu; Zheng Lin; Yan Wang; | aaai | 2024-02-20 |
883 | Self-DC: When to Retrieve and When to Generate? Self Divide-and-Conquer for Compositional Unknown Questions IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we propose the first Compositional unknown Question-Answering dataset (CuQA), and introduce a Self Divide-and-Conquer (Self-DC) framework to empower LLMs to adaptively call different methods on-demand, resulting in better performance and efficiency. |
HONGRU WANG et. al. | arxiv-cs.CL | 2024-02-20 |
884 | STAIR: Spatial-Temporal Reasoning with Auditable Intermediate Results for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Though neural module networks are already widely studied on image-text tasks, applying them to videos is a non-trivial task, as reasoning on videos requires different abilities. In this paper, we define a set of basic video-text sub-tasks for video question answering and design a set of lightweight modules to complete them. |
Yueqian Wang; Yuxuan Wang; Kai Chen; Dongyan Zhao; | aaai | 2024-02-20 |
885 | Object-Aware Adaptive-Positivity Learning for Audio-Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose to explicitly consider fine-grained visual objects in video frames (object-level clues) and explore the multi-modal relations (i.e., the object, audio, and question) in terms of feature interaction and model optimization. |
Zhangbin Li; Dan Guo; Jinxing Zhou; Jing Zhang; Meng Wang; | aaai | 2024-02-20 |
886 | Cross-Modal Feature Distribution Calibration for Few-Shot Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Currently, most of the few-shot VQA methods are confined to simply extending few-shot classification methods to cross-modal tasks while ignoring the spatial distribution properties of multimodal features and cross-modal information interaction. To address this problem, we propose a novel Cross-modal feature Distribution Calibration Inference Network (CDCIN) in this paper, where a new concept named visual information entropy is proposed to realize multimodal features distribution calibration by cross-modal information interaction for more effective few-shot VQA. |
Jing Zhang; Xiaoqiang Liu; Mingzhe Chen; Zhe Wang; | aaai | 2024-02-20 |
887 | Question Calibration and Multi-Hop Modeling for Temporal Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: (II) They neither emphasize the graph structure between entities nor explicitly model the multi-hop relationship in the graph, which makes it difficult to solve complex multi-hop question answering. To alleviate this problem, we propose a novel Question Calibration and Multi-Hop Modeling (QC-MHM) approach. |
Chao Xue; Di Liang; Pengfei Wang; Jing Zhang; | arxiv-cs.CL | 2024-02-20 |
888 | Code-Style In-Context Learning for Knowledge-Based Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, current powerful LLMs have little exposure to logic forms during pre-training, resulting in a high format error rate. To solve this problem, we propose a code-style in-context learning method for KBQA, which converts the generation process of unfamiliar logical form into the more familiar code generation process for LLMs. |
Zhijie Nie; Richong Zhang; Zhongyuan Wang; Xudong Liu; | aaai | 2024-02-20 |
889 | Making Natural Language Reasoning Explainable and Faithful Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this talk, we will focus on (1) our design of leveraging structured information (that is grounded to the context), for the explainable complex question answering and reasoning; (2) our multi-module interpretable framework for inductive reasoning, which conducts step-wise faithful reasoning with iterative feedback. |
Xinya Du; | aaai | 2024-02-20 |
890 | Knowledge Graph Prompting for Multi-Document Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, few works explore this paradigm in multi-document question answering (MD-QA), a task demanding a thorough understanding of the logical associations among the contents and structures of documents. To fill this crucial gap, we propose a Knowledge Graph Prompting (KGP) method to formulate the right context in prompting LLMs for MD-QA, which consists of a graph construction module and a graph traversal module. |
YU WANG et. al. | aaai | 2024-02-20 |
891 | YTCommentQA: Video Question Answerability in Instructional Videos Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Discerning whether a question can be answered by video content is challenging due to the multi-modal nature of videos, where visual and verbal information are intertwined. To bridge this gap, we present the YTCommentQA dataset, which contains naturally-generated questions from YouTube, categorized by their answerability and required modality to answer — visual, script, or both. |
Saelyne Yang; Sunghyun Park; Yunseok Jang; Moontae Lee; | aaai | 2024-02-20 |
892 | Benchmarking Retrieval-Augmented Generation for Medicine IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Using MIRAGE, we conducted large-scale experiments with over 1.8 trillion prompt tokens on 41 combinations of different corpora, retrievers, and backbone LLMs through the MedRAG toolkit introduced in this work. |
Guangzhi Xiong; Qiao Jin; Zhiyong Lu; Aidong Zhang; | arxiv-cs.CL | 2024-02-20 |
893 | T-SciQ: Teaching Multimodal Chain-of-Thought Reasoning Via Large Language Model Signals for Science Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Besides, the annotated rationales are often inaccurate because essential external information is missing. To address these issues, we propose a novel method termed T-SciQ that aims at teaching science question answering with LLM signals. |
LEI WANG et. al. | aaai | 2024-02-20 |
894 | Slot-VLM: SlowFast Slots for Video-Language Modeling Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce Slot-VLM, a novel framework designed to generate semantically decomposed video tokens, in terms of object-wise and event-wise visual representations, to facilitate LLM inference. |
Jiaqi Xu; Cuiling Lan; Wenxuan Xie; Xuejin Chen; Yan Lu; | arxiv-cs.CV | 2024-02-20 |
895 | Video-Context Aligned Transformer for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Extremely imbalanced alignment of information from both sides leads to significant instability in reasoning. To address this concern, we propose the Video-Context Aligned Transformer (V-CAT), which leverages the context to achieve semantic and content alignment between video and question. |
LINLIN ZONG et. al. | aaai | 2024-02-20 |
896 | Tree-of-Reasoning Question Decomposition for Complex Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Additionally, these methods suffer from inefficient retrieval, as complex questions often contain abundant information, leading to the retrieval of irrelevant information inconsistent with the query’s intent. In this work, we propose a novel question decomposition framework called TRQA for multi-hop question answering, which addresses these limitations. |
KUN ZHANG et. al. | aaai | 2024-02-20 |
897 | Exploring The Impact of Table-to-Text Methods on Augmenting LLM-based Question Answering with Domain Hybrid Data IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we address this research gap in two steps. |
DEHAI MIN et. al. | arxiv-cs.CL | 2024-02-20 |
898 | Detection-Based Intermediate Supervision for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: For instance, (1) a prior assumption that each instance-module refers to only one grounded object yet overlooks other potentially associated grounded objects, impeding full cross-modal alignment learning; (2) IoU-based intermediate supervisions may introduce noise signals as the bounding box overlap issue might guide the model’s focus towards irrelevant objects. To address these issues, a novel method, Detection-based Intermediate Supervision (DIS), is proposed, which adopts a generative detection framework to facilitate multiple grounding supervisions via sequence generation. |
YUHANG LIU et. al. | aaai | 2024-02-20 |
899 | Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose an end-to-end methodology designed to generate long-form answers to any statutory law questions, utilizing a "retrieve-then-read" pipeline. |
Antoine Louis; Gijs van Dijck; Gerasimos Spanakis; | aaai | 2024-02-20 |
900 | BIDER: Bridging Knowledge Inconsistency for Efficient Retrieval-Augmented LLMs Via Key Supporting Evidence IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces BIDER, an approach that refines retrieval documents into Key Supporting Evidence (KSE) through knowledge synthesis, supervised fine-tuning (SFT), and preference alignment. |
Jiajie Jin; Yutao Zhu; Yujia Zhou; Zhicheng Dou; | arxiv-cs.CL | 2024-02-19 |
901 | FinBen: A Holistic Financial Benchmark for Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce FinBen, the first extensive open-source evaluation benchmark, including 36 datasets spanning 24 financial tasks, covering seven critical aspects: information extraction (IE), textual analysis, question answering (QA), text generation, risk management, forecasting, and decision-making. |
QIANQIAN XIE et. al. | arxiv-cs.CL | 2024-02-19 |
902 | Cofca: A Step-Wise Counterfactual Multi-hop QA Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Moreover, current factual Multi-hop QA (MHQA) benchmarks are annotated on open-source corpora such as Wikipedia; although useful for multi-step reasoning evaluation, they show limitations due to potential data contamination in LLMs’ pre-training stage. To address these issues, we introduce a Step-wise Counterfactual benchmark (CofCA), a novel evaluation benchmark consisting of factual data and counterfactual data that reveals LLMs’ real reasoning abilities on multi-step reasoning and reasoning chain evaluation. |
Jian Wu; Linyi Yang; Zhen Wang; Manabu Okumura; Yue Zhang; | arxiv-cs.CL | 2024-02-19 |
903 | Question Answering Over Spatio-Temporal Knowledge Graph Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In response, we propose STCQA, a new spatio-temporal KGQA approach that utilizes a novel STKG embedding method named STComplEx. |
Xinbang Dai; Huiying Li; Guilin Qi; | arxiv-cs.CL | 2024-02-18 |
904 | Learning From Failure: Integrating Negative Examples When Fine-tuning Large Language Models As Agents IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Discarding failed trajectories also leads to significant wastage of data and resources and limits the possible optimization paths during fine-tuning. In this paper, we argue that unsuccessful trajectories offer valuable insights, and LLMs can learn from these trajectories through appropriate quality control and fine-tuning strategies. |
Renxi Wang; Haonan Li; Xudong Han; Yixuan Zhang; Timothy Baldwin; | arxiv-cs.CL | 2024-02-18 |
905 | Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we delve deeper into the CoT reasoning capabilities of LLMs in multi-hop question answering by utilizing knowledge graphs (KGs). |
MINH-VUONG NGUYEN et. al. | arxiv-cs.CL | 2024-02-17 |
906 | Evaluating LLMs’ Mathematical Reasoning in Financial Document Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The results provide insights into LLMs’ capabilities and limitations in handling complex mathematical scenarios for semi-structured tables. Ultimately, we introduce a novel prompting technique tailored to semi-structured documents, matching or outperforming other baselines in performance while providing a nuanced understanding of LLMs’ abilities for such a task. |
Pragya Srivastava; Manuj Malik; Vivek Gupta; Tanuja Ganu; Dan Roth; | arxiv-cs.CL | 2024-02-17 |
907 | PANDA (Pedantic ANswer-correctness Determination and Adjudication): Improving Automatic Evaluation for Question Answering and Text Generation Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question answering (QA) can only make progress if we know if an answer is correct, but for many of the most challenging and interesting QA examples, current answer correctness … |
Zongxia Li; Ishani Mondal; Yijun Liang; Huy Nghiem; Jordan L. Boyd-Graber; | ArXiv | 2024-02-17 |
908 | Question-Instructed Visual Descriptions for Zero-Shot Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present Q-ViD, a simple approach for video question answering (video QA). Unlike prior methods, which are based on complex architectures, computationally expensive pipelines, or closed models like GPTs, Q-ViD relies on a single instruction-aware open vision-language model (InstructBLIP) to tackle video QA using frame descriptions. |
David Romero; Thamar Solorio; | arxiv-cs.CV | 2024-02-16 |
909 | PEDANTS: Cheap But Effective and Interpretable Answer Equivalence Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Question answering (QA) can only make progress if we know if an answer is correct, but current answer correctness (AC) metrics struggle with verbose, free-form answers from large … |
Zongxia Li; Ishani Mondal; Yijun Liang; Huy Nghiem; Jordan Lee Boyd-Graber; | arxiv-cs.CL | 2024-02-16 |
910 | PAT-Questions: A Self-Updating Benchmark for Present-Anchored Temporal Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: PATQA poses unique challenges: (1) large language models (LLMs) may have outdated knowledge, (2) complex temporal relationships (e.g. ‘before’, ‘previous’) are hard to reason about, (3) multi-hop reasoning may be required, and (4) the gold answers of benchmarks must be continuously updated. To address these challenges, we introduce the PAT-Questions benchmark, which includes single and multi-hop temporal questions. |
Jannat Ara Meem; Muhammad Shihab Rashid; Yue Dong; Vagelis Hristidis; | arxiv-cs.CL | 2024-02-16 |
911 | VQAttack: Transferable Adversarial Attacks on Visual Question Answering Via Pre-trained Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Correspondingly, we propose a novel VQAttack model, which can iteratively generate both image and text perturbations with the designed modules: the large language model (LLM)-enhanced image attack and the cross-modal joint attack module. |
ZIYI YIN et. al. | arxiv-cs.CV | 2024-02-16 |
912 | MURRE: Multi-Hop Table Retrieval with Removal for Open-Domain Text-to-SQL Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The questions in text-to-SQL usually contain all required information, whereas previous multi-hop retrieval supplements the questions with retrieved documents. Therefore, we propose multi-hop table retrieval with removal (MURRE), which removes previously retrieved information from the question to guide the retriever towards unretrieved relevant tables. |
Xuanliang Zhang; Dingzirui Wang; Longxu Dou; Qingfu Zhu; Wanxiang Che; | arxiv-cs.CL | 2024-02-16 |
913 | A Question Answering Based Pipeline for Comprehensive Chinese EHR Information Extraction Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel approach that automatically generates training data for transfer learning of QA models. |
Huaiyuan Ying; Sheng Yu; | arxiv-cs.CL | 2024-02-16 |
914 | II-MMR: Identifying and Improving Multi-modal Multi-hop Reasoning in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose II-MMR, a novel idea to identify and improve multi-modal multi-hop reasoning in VQA. |
Jihyung Kil; Farideh Tavazoee; Dongyeop Kang; Joo-Kyung Kim; | arxiv-cs.CV | 2024-02-16 |
915 | GenDec: A Robust Generative Question-decomposition Method for Multi-hop Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a generative question decomposition method (GenDec) from the perspective of explainable QA by generating independent and complete sub-questions based on incorporating additional extracted evidence for enhancing LLMs’ reasoning ability in RAG. |
JIAN WU et. al. | arxiv-cs.CL | 2024-02-16 |
916 | A Dataset of Open-Domain Question Answering with Multiple-Span Answers Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Previous efforts for constructing MSQA datasets predominantly emphasized entity-centric contextualization, resulting in a bias towards collecting factoid questions and potentially overlooking questions requiring more detailed descriptive responses. To overcome these limitations, we present CLEAN, a comprehensive Chinese multi-span question answering dataset that involves a wide range of open-domain subjects with a substantial number of instances requiring descriptive answers. |
Zhiyi Luo; Yingying Zhang; Shuyun Luo; Ying Zhao; Wentao Lyu; | arxiv-cs.CL | 2024-02-15 |
917 | Enhancing Large Language Models with Pseudo- and Multisource- Knowledge Graphs for Open-ended Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: For precise questions, we observe a minimum accuracy improvement of 7.5. |
Jiaxiang Liu; Tong Zhou; Yubo Chen; Kang Liu; Jun Zhao; | arxiv-cs.CL | 2024-02-15 |
918 | Pretraining Vision-Language Model for Difference Visual Question Answering in Longitudinal Chest X-rays Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Here, we introduce a novel VLM called PLURAL, which is pretrained on natural and longitudinal chest X-ray data for the diff-VQA task. |
Yeongjae Cho; Taehee Kim; Heejun Shin; Sungzoon Cho; Dongmyung Shin; | arxiv-cs.CV | 2024-02-14 |
919 | Prompt-based Personalized Federated Learning for Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a novel prompt-based personalized federated learning (pFL) method to address data heterogeneity and privacy concerns in traditional medical visual question answering (VQA) methods. |
He Zhu; Ren Togo; Takahiro Ogawa; Miki Haseyama; | arxiv-cs.CV | 2024-02-14 |
920 | Visual Question Answering Instruction: Unlocking Multimodal Large Language Model To Domain-Specific Visual Multitasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We developed a method to transform domain-specific visual and vision-language datasets into a unified question answering format called Visual Question Answering Instruction (VQA-IN), thereby extending MLLM to domain-specific tasks. |
Jusung Lee; Sungguk Cha; Younghyun Lee; Cheoljong Yang; | arxiv-cs.CV | 2024-02-13 |
921 | Visually Dehallucinative Instruction Generation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a novel and scalable method for generating visually dehallucinative instructions, dubbed CAP2QA, that constrains the scope to only image contents. |
Sungguk Cha; Jusung Lee; Younghyun Lee; Cheoljong Yang; | arxiv-cs.CV | 2024-02-13 |
922 | T-RAG: Lessons from The LLM Trenches Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Large Language Models (LLMs) have shown remarkable language capabilities, fueling attempts to integrate them into applications across a wide range of domains. |
Masoomali Fatehkia; Ji Kim Lucas; Sanjay Chawla; | arxiv-cs.AI | 2024-02-12 |
923 | G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In contrast, we develop a flexible question-answering framework targeting real-world textual graphs, applicable to multiple applications including scene graph understanding, common sense reasoning, and knowledge graph reasoning. |
XIAOXIN HE et. al. | arxiv-cs.LG | 2024-02-12 |
924 | FaBERT: Pre-training BERT on Persian Blogs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce FaBERT, a Persian BERT-base model pre-trained on the HmBlogs corpus, encompassing both informal and formal Persian texts. |
Mostafa Masumi; Seyed Soroush Majd; Mehrnoush Shamsfard; Hamid Beigy; | arxiv-cs.CL | 2024-02-09 |
925 | The Generative AI Paradox in Evaluation: “What It Can Solve, It May Not Evaluate” Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper explores the assumption that Large Language Models (LLMs) skilled in generation tasks are equally adept as evaluators. We assess the performance of three LLMs and one … |
Juhyun Oh; Eunsu Kim; Inha Cha; Alice Oh; | ArXiv | 2024-02-09 |
926 | The Generative AI Paradox on Evaluation: What It Can Solve, It May Not Evaluate Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper explores the assumption that Large Language Models (LLMs) skilled in generation tasks are equally adept as evaluators. |
Juhyun Oh; Eunsu Kim; Inha Cha; Alice Oh; | arxiv-cs.CL | 2024-02-09 |
927 | SPARQL Generation: An Analysis on Fine-tuning OpenLLaMA for Question Answering Over A Life Science Knowledge Graph Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To overcome this challenge, in this study, we evaluate several strategies for fine-tuning the OpenLlama LLM for question answering over life science knowledge graphs. In particular, we propose an end-to-end data augmentation approach for extending a set of existing queries over a given knowledge graph towards a larger dataset of semantically enriched question-to-SPARQL query pairs, enabling fine-tuning even for datasets where these pairs are scarce. |
Julio C. Rangel; Tarcisio Mendes de Farias; Ana Claudia Sima; Norio Kobayashi; | arxiv-cs.AI | 2024-02-07 |
928 | ScreenAI: A Vision-Language Model for UI and Infographics Understanding IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce ScreenAI, a vision-language model that specializes in UI and infographics understanding. |
GILLES BAECHLER et. al. | arxiv-cs.CV | 2024-02-07 |
929 | NORMY: Non-Uniform History Modeling for Open Retrieval Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose NORMY, the first unsupervised non-uniform history modeling pipeline which generates the best conversational history for each module. |
Muhammad Shihab Rashid; Jannat Ara Meem; Vagelis Hristidis; | arxiv-cs.IR | 2024-02-06 |
930 | Training Language Models to Generate Text with Citations Via Fine-grained Rewards IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose an effective training framework using fine-grained rewards to teach LLMs to generate highly supportive and relevant citations, while ensuring the correctness of their responses. |
Chengyu Huang; Zeqiu Wu; Yushi Hu; Wenya Wang; | arxiv-cs.CL | 2024-02-06 |
931 | Convincing Rationales for Visual Question Answering Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To generate both visual and textual rationales alongside the predicted answer for a given image/question pair, we propose Convincing Rationales for VQA (CRVQA). |
Kun Li; George Vosselman; Michael Ying Yang; | arxiv-cs.CV | 2024-02-06 |
932 | LB-KBQA: Large-language-model and BERT Based Knowledge-Based Question and Answering System Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, both of the methods suffer from limited resources in intent recognition. To address this issue, we propose a novel KBQA system based on a Large Language Model (LLM) and BERT (LB-KBQA). |
Yan Zhao; Zhongyun Li; Yushan Pan; Jiaxing Wang; Yihong Wang; | arxiv-cs.CL | 2024-02-05 |
933 | Enhancing Textbook Question Answering Task with Large Language Models and Retrieval Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes a methodology that handles the out-of-domain scenario in TQA, where concepts are spread across different lessons, by incorporating the retrieval augmented generation (RAG) technique and utilizing transfer learning to handle the long context and enhance reasoning abilities. |
Hessa Abdulrahman Alawwad; Areej Alhothali; Usman Naseem; Ali Alkhathlan; Amani Jamal; | arxiv-cs.CL | 2024-02-05 |
934 | Large Language Model for Table Processing: A Survey Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Tables, typically two-dimensional and structured to store large amounts of data, are essential in daily activities like database queries, spreadsheet calculations, and generating … |
Weizheng Lu; Jiaming Zhang; Jing Zhang; Yueguo Chen; | ArXiv | 2024-02-04 |
935 | GeReA: Question-Aware Prompt Captions for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Despite this, how to activate the capacity of an MLLM as an implicit knowledge engine has not been explored yet. Therefore, we propose GeReA, a generate-reason framework that prompts an MLLM such as InstructBLIP with question-relevant vision and language information to generate knowledge-relevant descriptions and reasons over those descriptions for knowledge-based VQA. |
ZIYU MA et. al. | arxiv-cs.CV | 2024-02-04 |
936 | Knowledge Generation for Zero-shot Knowledge-based VQA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Inspired by recent work on knowledge generation from LLMs for text-based QA, in this work we propose and test a similar knowledge-generation-based K-VQA method, which first generates knowledge from an LLM and then incorporates the generated knowledge for K-VQA in a zero-shot manner. |
Rui Cao; Jing Jiang; | arxiv-cs.CL | 2024-02-04 |
937 | SemPool: Simple, Robust, and Interpretable KG Pooling for Enhancing Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, GNN-based methods for QA rely on the graph information of the candidate answer nodes, which limits their effectiveness in more challenging settings where critical answer information is not included in the KG. We propose a simple graph pooling approach that learns useful semantics of the KG to aid the LM’s reasoning, and show that its effectiveness is robust under graph perturbations. |
Costas Mavromatis; Petros Karypis; George Karypis; | arxiv-cs.CL | 2024-02-03 |
938 | Large Language Model for Table Processing: A Survey IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We summarize the training techniques for LLMs and VLMs tailored for table processing. |
WEIZHENG LU et. al. | arxiv-cs.AI | 2024-02-03 |
939 | CABINET: Content Relevance Based Noise Reduction for Table Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The irrelevant parts act as noise and are distracting information, resulting in sub-optimal performance due to the vulnerability of LLMs to noise. To mitigate this, we propose CABINET (Content RelevAnce-Based NoIse ReductioN for TablE QuesTion-Answering) – a framework to enable LLMs to focus on relevant tabular data by suppressing extraneous information. |
SOHAN PATNAIK et. al. | arxiv-cs.CL | 2024-02-02 |
940 | Enhancing Scene‐text Visual Question Answering with Relational Reasoning, Attention and Dynamic Vocabulary Integration Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Visual question answering (VQA) is a challenging task in computer vision. Recently, there has been a growing interest in text‐based VQA tasks, emphasizing the important role of … |
Mayank Agrawal; Anand Singh Jalal; Himanshu Sharma; | Computational Intelligence | 2024-02-01 |
941 | So Many Heads, So Many Wits: Multimodal Graph Reasoning for Text-Based Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: While texts related to images convey fundamental messages for scene understanding and reasoning, text-based visual question answering tasks concentrate on visual questions that … |
Wenbo Zheng; Lan Yan; Fei-Yue Wang; | IEEE Transactions on Systems, Man, and Cybernetics: Systems | 2024-02-01 |
942 | SPARQL Generation with Entity Pre-trained GPT for KG Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We isolated which property of the task is the most difficult to solve in few- or zero-shot settings, and we propose pre-training on all entities (under the CWA) to improve performance. |
Diego Bustamante; Hideaki Takeda; | arxiv-cs.CL | 2024-02-01 |
943 | A Multi-scale Contextual Attention Network for Remote Sensing Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Jiangfan Feng; Hui Wang; | Int. J. Appl. Earth Obs. Geoinformation | 2024-02-01 |
944 | HiQA: A Hierarchical Contextual Augmentation RAG for Massive Documents QA Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: As language model agents leveraging external tools rapidly evolve, significant progress has been made in question-answering(QA) methodologies utilizing supplementary documents and … |
Xinyue Chen; Pengyu Gao; Jiangjiang Song; Xiaoyang Tan; | ArXiv | 2024-02-01 |
945 | Knowledge Graph-Based Reinforcement Federated Learning for Chinese Question and Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Knowledge question and answering (Q&A) is widely used. However, most existing semantic parsing methods in Q&A usually use cascading, which can incur error accumulation. In … |
LIANG XU et. al. | IEEE Transactions on Computational Social Systems | 2024-02-01 |
946 | Proximity QA: Unleashing The Power of Multi-Modal Large Language Models for Spatial Proximity Analysis Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, while existing MLLMs adeptly recognize what objects are in an image, they still face challenges in effectively discerning where these objects are, particularly along the distance (scene depth) axis. To overcome this limitation in MLLMs, we introduce Proximity Question Answering (Proximity QA), a novel framework designed to enable MLLMs to infer the proximity relationship between objects in images. |
Jianing Li; Xi Nan; Ming Lu; Li Du; Shanghang Zhang; | arxiv-cs.CV | 2024-01-31 |
947 | Desiderata for The Context Use of Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, most prior work focuses on one or two of those problems in isolation, which makes it difficult to see trends across them. We aim to close this gap by first outlining a set of previously discussed as well as novel desiderata for QA models. |
Sagi Shaier; Lawrence E Hunter; Katharina von der Wense; | arxiv-cs.CL | 2024-01-31 |
948 | HiQA: A Hierarchical Contextual Augmentation RAG for Multi-Documents QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, these methods exhibit limited retrieval accuracy when faced with numerous indistinguishable documents, presenting notable challenges in their practical application. In response to these emerging challenges, we present HiQA, an advanced multi-document question-answering (MDQA) framework that integrates cascading metadata into content and a multi-route retrieval mechanism. |
Xinyue Chen; Pengyu Gao; Jiangjiang Song; Xiaoyang Tan; | arxiv-cs.CL | 2024-01-31 |
949 | Are My Answers Medically Accurate? Exploiting Medical Knowledge Graphs for Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Aizan Zafar; Deeksha Varshney; Sovan Kumar Sahoo; Amitava Das; Asif Ekbal; | Appl. Intell. | 2024-01-31 |
950 | An Exam-based Evaluation Approach Beyond Traditional Relevance Judgments Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose two evaluation measures: the recall-oriented EXAM Cover metric and the precision-oriented EXAM Qrels metric, the latter of which can be implemented with trec_eval. |
Naghmeh Farzi; Laura Dietz; | arxiv-cs.IR | 2024-01-31 |
951 | Fine-tuning Transformer-based Encoder for Turkish Language Understanding Tasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we provide a Transformer-based model and a baseline benchmark for the Turkish Language. |
Savas Yildirim; | arxiv-cs.CL | 2024-01-30 |
952 | PipeNet: Question Answering with Semantic Pruning Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we target finding semantically related entity nodes in the subgraph to improve the efficiency of graph reasoning with the KG. |
Ying Su; Jipeng Zhang; Yangqiu Song; Tong Zhang; | arxiv-cs.CL | 2024-01-30 |
953 | LCVO: An Efficient Pretraining-Free Framework for Visual Question Answering Grounding Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In this paper, the LCV2 modular method is proposed for the Grounded Visual Question Answering task in the vision-language multimodal domain. This approach relies on a frozen large … |
Yuhan Chen; Lumei Su; Lihua Chen; Zhiwei Lin; | ArXiv | 2024-01-29 |
954 | LCV2: An Efficient Pretraining-Free Framework for Grounded Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, the LCV2 modular method is proposed for the Grounded Visual Question Answering task in the vision-language multimodal domain. |
Yuhan Chen; Lumei Su; Lihua Chen; Zhiwei Lin; | arxiv-cs.CV | 2024-01-28 |
955 | Improving Data Augmentation for Robust Visual Question Answering with Effective Curriculum Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Being widely used in learning unbiased visual question answering (VQA) models, Data Augmentation (DA) helps mitigate language biases by generating extra training samples beyond the original samples. |
Yuhang Zheng; Zhen Wang; Long Chen; | arxiv-cs.CV | 2024-01-28 |
956 | A RAG-based Question Answering System Proposal for Understanding Islam: MufassirQAS LLM Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study uses a vector database-based Retrieval Augmented Generation (RAG) approach to enhance the accuracy and transparency of LLMs. |
Ahmet Yusuf Alan; Enis Karaarslan; Ömer Aydin; | arxiv-cs.CL | 2024-01-27 |
957 | Augment Before You Try: Knowledge-Enhanced Table Question Answering Via Table Expansion Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a simple yet effective method to integrate external information into a given table. |
YUJIAN LIU et. al. | arxiv-cs.CL | 2024-01-27 |
958 | DataFrame QA: A Universal LLM Framework on DataFrame Question Answering Without Data Exposure Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose DataFrame QA as a comprehensive framework that includes safe Pandas query generation and code execution. |
Junyi Ye; Mengnan Du; Guiling Wang; | arxiv-cs.CL | 2024-01-27 |
959 | Benchmarking Large Language Models in Complex Question Answering Attribution Using Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The current methods for automatically evaluating the attribution, which are often based on Large Language Models (LLMs), are still inadequate, particularly in recognizing subtle differences between attributions, and complex relationships between citations and statements. To compare these attribution evaluation methods and develop new ones, we introduce a set of fine-grained categories (i.e., supportive, insufficient, contradictory and irrelevant) for measuring the attribution, and develop a Complex Attributed Question Answering (CAQA) benchmark by leveraging knowledge graphs (KGs) for automatically generating attributions of different categories to question-answer pairs. |
NAN HU et. al. | arxiv-cs.CL | 2024-01-25 |
960 | Towards Consistent Natural-Language Explanations Via Explanation-Consistency Finetuning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose explanation-consistency finetuning (EC-finetuning), a method that adapts LLMs to generate more consistent natural-language explanations on related examples. |
YANDA CHEN et. al. | arxiv-cs.CL | 2024-01-25 |
961 | Graph Guided Question Answer Generation for Procedural Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we focus on task-specific question answering (QA). |
HAI X. PHAM et. al. | arxiv-cs.CL | 2024-01-24 |
962 | SpeechDPR: End-to-End Spoken Passage Retrieval for Open-Domain Spoken Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes the first known end-to-end framework, Speech Dense Passage Retriever (SpeechDPR), for the retrieval component of the openSQA problem. |
CHYI-JIUNN LIN et. al. | arxiv-cs.CL | 2024-01-24 |
963 | Question Answering Systems for Health Professionals at The Point of Care—a Systematic Review Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Objectives Question answering (QA) systems have the potential to improve the quality of clinical care by providing health professionals with the latest and most relevant evidence. … |
GREGORY KELL et. al. | Journal of the American Medical Informatics Association : … | 2024-01-24 |
964 | SEER: Facilitating Structured Reasoning and Explanation Via Reinforcement Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose SEER, a novel method that maximizes a structure-based return to facilitate structured reasoning and explanation. |
GUOXIN CHEN et. al. | arxiv-cs.CL | 2024-01-24 |
965 | Can AI Assistants Know What They Don’t Know? IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We believe that an AI assistant’s refusal to answer questions it does not know is a crucial method for reducing hallucinations and making the assistant truthful. Therefore, in this paper, we ask the question: Can AI assistants know what they don’t know, and express it through natural language? |
QINYUAN CHENG et. al. | arxiv-cs.CL | 2024-01-24 |
966 | TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present TROVE, a training-free method of inducing a verifiable and efficient toolbox of functions by generating, using, growing, and periodically trimming the toolbox. |
Zhiruo Wang; Daniel Fried; Graham Neubig; | arxiv-cs.AI | 2024-01-23 |
967 | Revolutionizing Retrieval-Augmented Generation with Enhanced PDF Structure Recognition Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Presently, major foundation model companies have opened up Embedding and Chat API interfaces, and frameworks like LangChain have already integrated the RAG process. |
Demiao Lin; | arxiv-cs.AI | 2024-01-23 |
968 | TAT-LLM: A Specialized Language Model for Discrete Reasoning Over Tabular and Textual Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we address question answering (QA) over a hybrid of tabular and textual data that are very common content on the Web (e.g. SEC filings), where discrete reasoning capabilities are often required. |
FENGBIN ZHU et. al. | arxiv-cs.CL | 2024-01-23 |
969 | CFMatch: Aligning Automated Answer Equivalence Evaluation with Expert Judgments For Open-Domain Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question answering (QA) can only make progress if we know if an answer is correct, but for many of the most challenging and interesting QA examples, current evaluation metrics to … |
Zongxia Li; Ishani Mondal; Yijun Liang; Huy Nghiem; Jordan Boyd-Graber; | arxiv-cs.CL | 2024-01-23 |
970 | Free Form Medical Visual Question Answering in Radiology Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We innovatively augment the SLAKE dataset, enabling our model to respond to a more diverse array of questions, not limited to the immediate content of radiology or pathology images. |
ABHISHEK NARAYANAN et. al. | arxiv-cs.CV | 2024-01-23 |
971 | FinLLMs: A Framework for Financial Reasoning Dataset Generation with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the limited data resources and reduce the annotation cost, we introduce FinLLMs, a method for generating financial question-answering data based on common financial formulas using Large Language Models. |
ZIQIANG YUAN et. al. | arxiv-cs.AI | 2024-01-19 |
972 | Reinforcement Learning for Question Answering in Programming Domain Using Public Community Scoring As A Human Feedback Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we investigate the enhancement of the GPT Neo 125M performance in Community Question Answering (CQA) with a focus on programming, through the integration of Reinforcement Learning from Human Feedback (RLHF) and the utilization of scores from Stack Overflow. |
Alexey Gorbatovski; Sergey Kovalchuk; | arxiv-cs.CL | 2024-01-19 |
973 | Weakly Supervised Gaussian Contrastive Grounding with Large Multimodal Models for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Moreover, there are no human annotations for question-critical timestamps in existing VideoQA datasets. In light of this, we propose a novel weakly supervised framework to enforce the LMMs to reason out the answers with question-critical moments as visual inputs. |
Haibo Wang; Chenghang Lai; Yixuan Sun; Weifeng Ge; | arxiv-cs.CV | 2024-01-19 |
974 | Q&A Prompts: Discovering Rich Visual Clues Through Mining Question-Answer Prompts for VQA Requiring Diverse World Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we believe that if we can collect as many visual clues in the given image as possible, we will recognize the image more accurately, understand the question better, recall relevant knowledge more easily, and finally reason out the answer. |
Haibo Wang; Weifeng Ge; | arxiv-cs.CV | 2024-01-19 |
975 | Veagle: Advancements in Multimodal Representation Learning Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Lately, researchers in artificial intelligence have been really interested in how language and vision come together, giving rise to the development of multimodal models that aim … |
RAJAT CHAWLA et. al. | ArXiv | 2024-01-18 |
976 | Instant Answering in E-Commerce Buyer-Seller Messaging Using Message-to-Question Reformulation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We seek to automate buyer inquiries to sellers in a leading e-commerce store using a domain-specific federated Question Answering (QA) system. |
BESNIK FETAHU et. al. | arxiv-cs.CL | 2024-01-18 |
977 | Veagle: Advancements in Multimodal Representation Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces a novel approach to enhance the multimodal capabilities of existing models. |
RAJAT CHAWLA et. al. | arxiv-cs.CV | 2024-01-18 |
978 | Question-Answer Cross Language Image Matching for Weakly Supervised Semantic Segmentation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel Question-Answer Cross-Language-Image Matching framework for WSSS (QA-CLIMS), leveraging the vision-language foundation model to maximize the text-based understanding of images and guide the generation of activation maps. |
Songhe Deng; Wei Zhuo; Jinheng Xie; Linlin Shen; | arxiv-cs.CV | 2024-01-18 |
979 | ChatQA: Surpassing GPT-4 on Conversational QA and RAG IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce ChatQA, a suite of models that outperform GPT-4 on retrieval-augmented generation (RAG) and conversational question answering (QA). |
ZIHAN LIU et. al. | arxiv-cs.CL | 2024-01-18 |
980 | BERTologyNavigator: Advanced Question Answering with BERT-based Semantics Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we introduce the BERTologyNavigator — a two-phased system that combines relation extraction techniques and BERT embeddings to navigate the relationships within the DBLP Knowledge Graph (KG). |
Shreya Rajpal; Ricardo Usbeck; | arxiv-cs.CL | 2024-01-17 |
981 | Fine-tuning Strategies for Domain Specific Question Answering Under Low Annotation Budget Constraints Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The unsupervised training of a language model combined with further target task fine-tuning has become the standard QA fine-tuning procedure. In this work, we demonstrate that this strategy is sub-optimal for fine-tuning QA models, especially under a low QA annotation budget, which is a usual setting in practice due to the extractive QA labeling cost. |
Kunpeng Guo; Dennis Diefenbach; Antoine Gourru; Christophe Gravier; | arxiv-cs.CL | 2024-01-17 |
982 | QAnswer: Towards Question Answering Search Over Websites Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To illustrate the potential of QA technologies for the website search practitioner, we demonstrate web searches that combine QA over knowledge graphs and QA over free text, each of which is usually tackled separately. |
Kunpeng Guo; Clement Defretiere; Dennis Diefenbach; Christophe Gravier; Antoine Gourru; | arxiv-cs.CL | 2024-01-17 |
983 | MMToM-QA: Multimodal Theory of Mind Question Answering IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Theory of Mind (ToM), the ability to understand people’s minds, is an essential ingredient for developing machines with human-level social intelligence. Recent machine learning … |
CHUANYANG JIN et. al. | ArXiv | 2024-01-16 |
984 | BERT-CNN Based Evidence Retrieval and Aggregation for Chinese Legal Multi-choice Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Yanling Li; Jiaye Wu; Xudong Luo; | Neural Comput. Appl. | 2024-01-16 |
985 | MMToM-QA: Multimodal Theory of Mind Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: People can flexibly reason about another person’s mind based on conceptual representations (e.g., goals, beliefs, plans) extracted from any available data. To address this, we introduce a multimodal Theory of Mind question answering (MMToM-QA) benchmark. |
CHUANYANG JIN et. al. | arxiv-cs.AI | 2024-01-16 |
986 | Towards Efficient Methods in Medical Question Answering Using Knowledge Graph Embeddings Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, in-domain pre-training is expensive in terms of time and resources. In this paper, we propose a resource-efficient approach for injecting domain knowledge into a model without relying on such domain-specific pre-training. |
Saptarshi Sengupta; Connor Heaton; Suhan Cui; Soumalya Sarkar; Prasenjit Mitra; | arxiv-cs.CL | 2024-01-15 |
987 | Developing ChatGPT for Biology and Medicine: A Complete Review of Biomedical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper highlights the structures and advancements of medical domain explorations against general domain methods, emphasizing their applications across different tasks and datasets. |
Qing Li; Lei Li; Yu Li; | arxiv-cs.CL | 2024-01-15 |
988 | A Study on Large Language Models’ Limitations in Multiple-Choice Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we tackle one of the most widely used tasks – answering Multiple Choice Questions (MCQs). |
Aisha Khatun; Daniel G. Brown; | arxiv-cs.CL | 2024-01-15 |
989 | Generalizing Visual Question Answering from Synthetic to Human-Written Questions Via A Chain of QA with A Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, VQA models trained on those data do not perform well on complex, human-written questions. To address this issue, we propose a new method called chain of QA for human-written questions (CoQAH). |
Taehee Kim; Yeongjae Cho; Heejun Shin; Yohan Jo; Dongmyung Shin; | arxiv-cs.CL | 2024-01-12 |
990 | BOK-VQA: Bilingual Outside Knowledge-Based Visual Question Answering Via Graph Representation Pretraining Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Accordingly, we propose a bilingual outside-knowledge VQA (BOK-VQA) dataset in this study that can be extended to multilingualism. |
Minjun Kim; Seungwoo Song; Youhan Lee; Haneol Jang; Kyungtae Lim; | arxiv-cs.CL | 2024-01-12 |
991 | How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose to evaluate the understanding and generation ability of LLMs to deal with differently structured logical forms by examining the inter-conversion of natural and formal language through in-context learning of LLMs. |
JINXIN LIU et. al. | arxiv-cs.CL | 2024-01-11 |
992 | Hallucination Benchmark in Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The study provides an in-depth analysis of current models’ limitations and reveals the effectiveness of various prompting strategies. |
Jinge Wu; Yunsoo Kim; Honghan Wu; | arxiv-cs.CL | 2024-01-11 |
993 | Cross-modal Retrieval for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Named entities have diverse visual representations and are therefore difficult to recognize. We argue that cross-modal retrieval may help bridge the semantic gap between an entity and its depictions, and is, above all, complementary to mono-modal retrieval. |
Paul Lerner; Olivier Ferret; Camille Guinaudeau; | arxiv-cs.CL | 2024-01-11 |
994 | TRANS-VQA: Fully Transformer-Based Image Question-Answering Model Using Question-guided Vision Attention Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Understanding multiple modalities and relating them is an easy task for humans. But for machines, this is a stimulating task. One such multi-modal reasoning task is Visual … |
Dipali Koshti; Ashutosh Gupta; M. Kalla; Arvind Sharma; | Inteligencia Artif. | 2024-01-10 |
995 | AutoAct: Automatic Agent Learning from Scratch for QA Via Self-Planning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we introduce AutoAct, an automatic agent learning framework for QA that does not rely on large-scale annotated data and synthetic planning trajectories from closed-source models (e.g., GPT-4). |
SHUOFEI QIAO et. al. | arxiv-cs.CL | 2024-01-10 |
996 | Answer Retrieval in Legal Community Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Two main challenges hinder applying existing answer retrieval approaches in other domains to the legal domain: (1) a huge knowledge gap between lawyers and non-professionals; and (2) a mix of informal and formal content on legal QA websites. To tackle these challenges, we propose CE_FS, a novel cross-encoder (CE) re-ranker based on the fine-grained structured inputs. |
Arian Askari; Zihui Yang; Zhaochun Ren; Suzan Verberne; | arxiv-cs.IR | 2024-01-09 |
997 | Building Efficient and Effective OpenQA Systems for Low-Resource Languages Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we show that effective, low-cost OpenQA systems can be developed for low-resource contexts. |
EMRAH BUDUR et. al. | arxiv-cs.CL | 2024-01-07 |
998 | A Joint-Reasoning Based Disease Q&A System Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Extant QA systems also have limitations in terms of automation and performance. We address these challenges by designing a novel, automated disease QA system which effectively utilizes both LM and KG techniques through a joint-reasoning approach to answer disease-related questions appropriate for lay users. |
Prakash Chandra Sukhwal; Vaibhav Rajan; Atreyi Kankanhalli; | arxiv-cs.CL | 2024-01-06 |
999 | Improving The Representation of Sentences with Reinforcement Learning and AMR Graph Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Sentence Embedding is a technique that represents the meaning of sentences in vector form, playing a crucial role in various natural language processing tasks such as … |
Jinwoo Park; Hosoo Shin; Dahee Jeong; Junyeong Kim; | 2024 IEEE International Conference on Consumer Electronics … | 2024-01-06 |
1000 | DocGraphLM: Documental Graph Language Model for Information Extraction Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce DocGraphLM, a novel framework that combines pre-trained language models with graph semantics. |
Dongsheng Wang; Zhiqiang Ma; Armineh Nourbakhsh; Kang Gu; Sameena Shah; | arxiv-cs.CL | 2024-01-05 |
1001 | Location Aware Modular Biencoder for Tourism Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The traditional method of encoding each pair of question and POI becomes inefficient when the number of candidates increases, making it infeasible for real-world applications. To overcome this, we propose treating the QA task as a dense vector retrieval problem, where we encode questions and POIs separately and retrieve the most relevant POIs for a question by utilizing embedding space similarity. |
Haonan Li; Martin Tomko; Timothy Baldwin; | arxiv-cs.CL | 2024-01-04 |
1002 | Navigator: A Gen-AI System for Discovery of Factual and Predictive Insights on Domain-Specific Tabular Datasets Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: We demonstrate a gen-AI-based question-answering system called Navigator, which allows business users to ask natural language questions and get answers based on domain-specific … |
ARNAB CHAKRABORTY et. al. | Proceedings of the 7th Joint International Conference on … | 2024-01-04 |
1003 | Joint Multi-Facts Reasoning Network For Complex Temporal Question Answering Over Knowledge Graph IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose the Joint Multi-Facts Reasoning Network (JMFRN) to jointly reason over multiple temporal facts for accurately answering complex temporal questions. |
RIKUI HUANG et. al. | arxiv-cs.CL | 2024-01-04 |
1004 | Navigating Uncertainty: Optimizing API Dependency for Hallucination Reduction in Closed-Book Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a new LLM that can self-estimate whether it is able to answer directly or needs to request an external tool. |
Pierre Erbacher; Louis Falissar; Vincent Guigue; Laure Soulier; | arxiv-cs.CL | 2024-01-03 |
1005 | Evaluating Large Language Models in Semantic Parsing for Conversational Question Answering Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Through a series of experiments on an extensive benchmark dataset, we compare models of varying sizes with different prompting techniques and identify common issue types in the generated output. |
Phillip Schneider; Manuel Klettner; Kristiina Jokinen; Elena Simperl; Florian Matthes; | arxiv-cs.CL | 2024-01-03 |
1006 | Benchmarking Out-of-Distribution Detection in Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: When faced with an out-of-distribution (OOD) question or image, visual question answering (VQA) systems may provide unreliable answers. If relied on by real users or secondary … |
Xiangxi Shi; Stefan Lee; | 2024 IEEE/CVF Winter Conference on Applications of Computer … | 2024-01-03 |
1007 | Scene Text Visual Question Answering By Using YOLO and STN Related Papers Related Patents Related Grants Related Venues Related Experts View |
Kimiya Nourali; Elham Dolkhani; | International Journal of Speech Technology | 2024-01-03 |
1008 | Unlocking Telecom Domain Knowledge Using LLMs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Conversational assistants have become increasingly popular as they use Large Language Models (LLMs) and Retrieval Augmented Generation (RAG) for domain context. In this work, we … |
Sujoy Roychowdhury; Nishkarsh Jain; Sumit Soman; | 2024 16th International Conference on COMmunication Systems … | 2024-01-03 |
1009 | Question-Answering Based Summarization of Electronic Health Records Using Retrieval Augmented Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Moreover, the requirement to consider the entire content of an EHR in summarization has resulted in poor performance, because the attention mechanisms in modern large language models (LLMs) add quadratic complexity with respect to input size. We propose a method that mitigates these shortcomings by combining semantic search, retrieval augmented generation (RAG), and question answering using the latest LLMs. |
Walid Saba; Suzanne Wendelken; James. Shanahan; | arxiv-cs.CL | 2024-01-02 |
1010 | Sports-QA: A Large-Scale Video Question Answering Benchmark for Complex and Professional Sports Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce the first dataset, named Sports-QA, specifically designed for the sports VideoQA task. |
HAOPENG LI et. al. | arxiv-cs.CV | 2024-01-02 |
1011 | Answering from Sure to Uncertain: Uncertainty-Aware Curriculum Learning for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recognizing that conventional self-paced CL methods rely on training loss for difficulty measurement, which might not accurately reflect the intricacies of video-question pairs, we introduce the concept of uncertainty-aware CL. |
Haopeng Li; Qiuhong Ke; Mingming Gong; Tom Drummond; | arxiv-cs.CV | 2024-01-02 |
1012 | Glance and Focus: Memory Prompting for Multi-Event Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In contrast, humans can easily tackle it by using a series of episode memories as anchors to quickly locate question-related key moments for reasoning. To mimic this effective reasoning strategy, we propose the Glance-Focus model. |
Ziyi Bai; Ruiping Wang; Xilin Chen; | arxiv-cs.CV | 2024-01-02 |
1013 | ChatQA: Building GPT-4 Level Conversational QA Models IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In this work, we introduce ChatQA, a family of conversational question answering (QA) models that obtain GPT-4 level accuracies. Specifically, we propose a two-stage instruction … |
ZIHAN LIU et. al. | ArXiv | 2024-01-01 |
1014 | IKIM at MEDIQA-M3G 2024: Multilingual Visual Question-Answering for Dermatology Through VLM Fine-tuning and LLM Translations Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper presents our solution to the MEDIQA-M3G Challenge at NAACL-ClinicalNLP 2024. We participated in all three languages, ranking first in Chinese and Spanish and third in … |
Marie Bauer; Constantin Seibold; J. Kleesiek; Amin Dada; | Clinical Natural Language Processing Workshop | 2024-01-01 |
1015 | DermaVQA: A Multilingual Visual Question Answering Dataset for Dermatology Related Papers Related Patents Related Grants Related Venues Related Experts View |
WEN-WAI YIM et. al. | International Conference on Medical Image Computing and … | 2024-01-01 |
1016 | Enhancing Remote Sensing Visual Question Answering: A Mask-Based Dual-Stream Feature Mutual Attention Network Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The visual question answering (VQA) method applied to remote sensing images (RSIs) can complete the interaction of image information and text information, which avoids … |
YANGYANG LI et. al. | IEEE Geoscience and Remote Sensing Letters | 2024-01-01 |
1017 | Resolving Zero-Shot and Fact-Based Visual Question Answering Via Enhanced Fact Retrieval Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Practical applications with visual question answering (VQA) systems are challenging, and recent research has aimed at investigating this important field. Many issues related to … |
Sen Wu; Guoshuai Zhao; Xueming Qian; | IEEE Transactions on Multimedia | 2024-01-01 |
1018 | CircuitVQA: A Visual Question Answering Dataset for Electrical Circuit Images Related Papers Related Patents Related Grants Related Venues Related Experts View |
Rahul Mehta; Bhavyajeet Singh; Vasudeva Varma; Manish Gupta; | ECML/PKDD | 2024-01-01 |
1019 | Uncertainty Estimation in Large Language Models to Support Biodiversity Conservation Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models (LLM) provide significant value in question answering (QA) scenarios and have practical application in complex decision-making contexts, such as biodiversity … |
Maria Mora-Cross; Saúl Calderón Ramírez; | North American Chapter of the Association for Computational … | 2024-01-01 |
1020 | Interactive Question Answering for Multimodal Lifelog Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts View |
Ly-Duyen Tran; Liting Zhou; Binh T. Nguyen; C. Gurrin; | Conference on Multimedia Modeling | 2024-01-01 |
1021 | Analyze, Generate and Refine: Query Expansion with LLMs for Zero-Shot Open-Domain QA Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Query expansion (QE) is a critical component in the open-domain question answering (OpenQA) pipeline, enhancing the retrieval performance by broadening the scope of queries with … |
Xinran Chen; Xuanang Chen; Ben He; Tengfei Wen; Le Sun; | Annual Meeting of the Association for Computational … | 2024-01-01 |
1022 | Overview of BioASQ 2024: The Twelfth BioASQ Challenge on Large-Scale Biomedical Semantic Indexing and Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View |
A. NENTIDIS et. al. | Conference and Labs of the Evaluation Forum | 2024-01-01 |
1023 | EHRNoteQA: A Patient-Specific Question Answering Benchmark for Evaluating Large Language Models in Clinical Settings Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This study introduces EHRNoteQA, a novel patient-specific question answering benchmark tailored for evaluating Large Language Models (LLMs) in clinical environments. Based on … |
SUNJUN KWEON et. al. | ArXiv | 2024-01-01 |
1024 | Generative AI for Systems Thinking: Can A GPT Question-Answering System Turn Text Into The Causal Maps Produced By Human Readers? Related Papers Related Patents Related Grants Related Venues Related Experts View |
P. Giabbanelli; Nathan Witkowicz; | Hawaii International Conference on System Sciences | 2024-01-01 |
1025 | UTSA-NLP at ChemoTimelines 2024: Evaluating Instruction-Tuned Language Models for Temporal Relation Extraction Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper presents our approach for the 2024 ChemoTimelines shared task. Specifically, we explored using Large Language Models (LLMs) for temporal relation extraction. We … |
Xingmeng Zhao; A. Rios; | Clinical Natural Language Processing Workshop | 2024-01-01 |
1026 | BEnQA: A Question Answering Benchmark for Bengali and English Related Papers Related Patents Related Grants Related Venues Related Experts View |
SHEIKH SHAFAYAT et. al. | Annual Meeting of the Association for Computational … | 2024-01-01 |
1027 | CroMIC-QA: The Cross-Modal Information Complementation Based Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper proposes a new multi-modal question-answering task, named Cross-Modal Information Complementation based Question Answering (CroMIC-QA), to promote the exploration … |
SHUN QIAN et. al. | IEEE Transactions on Multimedia | 2024-01-01 |
1028 | Leveraging Knowledge Graph Reasoning in A Multihop Question Answering System for Hot Rolling Line Fault Diagnosis Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multihop question answering (QA) over knowledge graph (KG) poses significant challenges in the context of industrial processes, due to the intricate semantics of natural language … |
Huihui Han; Jian Wang; Xiaowen Wang; | IEEE Transactions on Instrumentation and Measurement | 2024-01-01 |
1029 | Arabic Narrative Question Answering (QA) Using Transformer Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The Narrative question answering (QA) problem involves generating accurate, relevant, and human-like answers to questions based on the comprehension of a story consisting of … |
Mohammad A. Ateeq; Sabrina Tiun; Hamed Abdelhaq; Nawras Rahhal; | IEEE Access | 2024-01-01 |
1030 | Conversational Question Answering with Language Models Generated Reformulations Over Knowledge Graph Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Conversational question answering (ConvQA) over knowledge graphs (KGs) involves answering multi-turn natural language questions about information contained in a KG. … |
Lihui Liu; Blaine Hill; Boxin Du; Fei Wang; Hanghang Tong; | Annual Meeting of the Association for Computational … | 2024-01-01 |
1031 | See, Perceive, and Answer: A Unified Benchmark for High-Resolution Postdisaster Evaluation in Remote Sensing Images Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Visual-language generation for remote sensing image (RSI) is an emerging and challenging research area that requires multitask learning to achieve a comprehensive understanding. … |
Danpei Zhao; Jiankai Lu; Bo Yuan; | IEEE Transactions on Geoscience and Remote Sensing | 2024-01-01 |
1032 | Towards Robust Expert Finding in Community Question Answering Platforms Related Papers Related Patents Related Grants Related Venues Related Experts View |
Maddalena Amendola; Andrea Passarella; Raffaele Perego; | European Conference on Information Retrieval | 2024-01-01 |
1033 | Multi-hop Community Question Answering Based on Multi-aspect Heterogeneous Graph Related Papers Related Patents Related Grants Related Venues Related Experts View |
YONGLIANG WU et. al. | Inf. Process. Manag. | 2024-01-01 |
1034 | Operation-Augmented Numerical Reasoning for Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question answering requiring numerical reasoning, which generally involves symbolic operations such as sorting, counting, and addition, is a challenging task. To address such a … |
Yongwei Zhou; Junwei Bao; Youzheng Wu; Xiaodong He; Tiejun Zhao; | IEEE/ACM Transactions on Audio, Speech, and Language … | 2024-01-01 |
1035 | Analysis of QA System Behavior Against Context and Question Changes Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Data quality has gained increasing attention across various research domains, including pattern recognition, image processing, and Natural Language Processing (NLP). The goal of … |
R. Karra; A. Lasfar; | Int. Arab J. Inf. Technol. | 2024-01-01 |
1036 | QPAVE: A Multi-task Question Answering Approach for Fine-Grained Product Attribute Value Extraction Related Papers Related Patents Related Grants Related Venues Related Experts View |
Kassem Sabeh; Mouna Kacimi; J. Gamper; | International Conference on Data Warehousing and Knowledge … | 2024-01-01 |
1037 | Debiased Visual Question Answering Via The Perspective of Question Types Related Papers Related Patents Related Grants Related Venues Related Experts View |
Tianyu Huai; Shuwen Yang; Junhang Zhang; Jiabao Zhao; Liang He; | Pattern Recognit. Lett. | 2024-01-01 |
1038 | Intelligent Retrieval and Comprehension of Entrepreneurship Education Resources Based on Semantic Summarization of Knowledge Graphs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The latest technologies in natural language processing provide creative, knowledge retrieval, and question-answering technologies in the design of intelligent education, which can … |
Haiyang Yu; Entai Wang; Qi Lang; Jianan Wang; | IEEE Transactions on Learning Technologies | 2024-01-01 |
1039 | MLeVLM: Improve Multi-level Progressive Capabilities Based on Multimodal Large Language Model for Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
DEXUAN XU et. al. | Annual Meeting of the Association for Computational … | 2024-01-01 |
1040 | BioASQ at CLEF2024: The Twelfth Edition of The Large-Scale Biomedical Semantic Indexing and Question Answering Challenge Related Papers Related Patents Related Grants Related Venues Related Experts View |
A. NENTIDIS et. al. | European Conference on Information Retrieval | 2024-01-01 |
1041 | Efficient Agricultural Question Classification With A BERT-Enhanced DPCNN Model Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The application of big data technology in agricultural production has led to explosive growth in agricultural data. The accurate classification of agricultural questions from vast … |
XIAOJUAN GUO et. al. | IEEE Access | 2024-01-01 |
1042 | Teaching Small Language Models to Reason for Knowledge-Intensive Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
XIANG LI et. al. | Annual Meeting of the Association for Computational … | 2024-01-01 |
1043 | FakeBench: Uncover The Achilles’ Heels of Fake Images with Large Multimodal Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Recently, fake images generated by artificial intelligence (AI) models have become indistinguishable from the real, exerting new challenges for fake image detection models. To … |
Yixuan Li; Xuelin Liu; Xiaoyang Wang; Shiqi Wang; Weisi Lin; | ArXiv | 2024-01-01 |
1044 | Reflection-Reinforced Self-Training for Language Agents Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Self-training can potentially improve the performance of language agents without relying on demonstrations from humans or stronger models. The general process involves generating … |
Zi-Yi Dou; Cheng-Fu Yang; Xueqing Wu; Kai-Wei Chang; Nanyun Peng; | ArXiv | 2024-01-01 |
1045 | Question-Directed Reasoning With Relation-Aware Graph Attention Network for Complex Question Answering Over Knowledge Graph Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Complex knowledge graph question answering (KGQA) aims at answering natural language questions by entities retrieving from a knowledge graph (KG). Recently, the relation … |
GENG ZHANG et. al. | IEEE/ACM Transactions on Audio, Speech, and Language … | 2024-01-01 |
1046 | MRHF: Multi-stage Retrieval and Hierarchical Fusion for Textbook Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Peide Zhu; Zhen Wang; Manabu Okumura; Jie Yang; | Conference on Multimedia Modeling | 2024-01-01 |
1047 | InfiCoder-Eval: Systematically Evaluating The Question-Answering Capabilities of Code Large Language Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models for understanding and generating code (code LLMs) have witnessed tremendous progress in recent years. With the rapid development of code LLMs, many popular … |
LINYI LI et. al. | ArXiv | 2024-01-01 |
1048 | EquinorQA: Large Language Models for Question Answering Over Proprietary Data Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models (LLMs) have become the state-of-the-art technology in a variety of language understanding tasks. Accordingly, many commercial organizations have been … |
Darío Garigliotti; Bjarte Johansen; Jakob Vigerust Kallestad; Seong-Eun Cho; Cèsar Ferri; | European Conference on Artificial Intelligence | 2024-01-01 |
1049 | UIC NLP GRADS at SemEval-2024 Task 3: Two-Step Disjoint Modeling for Emotion-Cause Pair Extraction Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Disentangling underlying factors contributing to the expression of emotion in multimodal data is challenging but may accelerate progress toward many real-world applications. In … |
Sharad Chandakacherla; Vaibhav Bhargava; Natalie Parde; | International Workshop on Semantic Evaluation | 2024-01-01 |
1050 | HIJLI_JU at SemEval-2024 Task 7: Enhancing Quantitative Question Answering Using Fine-tuned BERT Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In data and numerical analysis, Quantitative Question Answering (QQA) becomes a crucial instrument that provides deep insights for analyzing large datasets and helps make … |
Partha Sengupta; Sandip Sarkar; Dipankar Das; | International Workshop on Semantic Evaluation | 2024-01-01 |
1051 | Large Language Models for Binary Health-Related Question Answering: A Zero- and Few-Shot Evaluation Related Papers Related Patents Related Grants Related Venues Related Experts View |
Marcos Fernández-Pichel; David E. Losada; J. C. Pichel; | International Conference on Conceptual Structures | 2024-01-01 |
1052 | LaFFi: Leveraging Hybrid Natural Language Feedback for Fine-tuning Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces an alternative to SFT called Natural Language Feedback for Finetuning LLMs (LaFFi). |
QIANXI LI et. al. | arxiv-cs.LG | 2023-12-31 |
1053 | Keqing: Knowledge-based Question Answering Is A Nature Chain-of-thought Mentor of LLM IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present a novel framework to assist LLMs, such as ChatGPT, to retrieve question-related structured information on the knowledge graph, and demonstrate that Knowledge-based question answering (Keqing) could be a natural Chain-of-Thought (CoT) mentor to guide the LLM to sequentially find the answer entities of a complex question through interpretable logical chains. |
CHAOJIE WANG et. al. | arxiv-cs.CL | 2023-12-31 |
1054 | ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering Over Knowledge Graph IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the effectiveness, due to the divergence in model architecture, the PLM and GNN are not closely integrated, limiting the knowledge sharing and fine-grained feature interactions. To solve it, we aim to simplify the above two-module approach, and develop a more capable PLM that can directly support subgraph reasoning for KGQA, namely ReasoningLM. |
Jinhao Jiang; Kun Zhou; Wayne Xin Zhao; Yaliang Li; Ji-Rong Wen; | arxiv-cs.CL | 2023-12-30 |
1055 | FusionMind — Improving Question and Answering with External Context Fusion Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Answering questions using pre-trained language models (LMs) and knowledge graphs (KGs) presents challenges in identifying relevant knowledge and performing joint reasoning. We compared LMs (fine-tuned for the task) with the previously published QAGNN method for the Question-answering (QA) objective and further measured the impact of additional factual context on the QAGNN performance. |
Shreyas Verma; Manoj Parmar; Palash Choudhary; Sanchita Porwal; | arxiv-cs.CL | 2023-12-30 |
1056 | Integrating Multimodal Features By A Two-way Co-attention Mechanism for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Himanshu Sharma; Swati Srivastava; | Multim. Tools Appl. | 2023-12-29 |
1057 | AQUALLM: Audio Question Answering Data Generation Using Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a scalable AQA data generation pipeline, denoted as the AQUALLM framework, which relies on Large Language Models (LLMs). |
Swarup Ranjan Behera; Krishna Mohan Injeti; Jaya Sai Kiran Patibandla; Praveen Kumar Pokala; Balakrishna Reddy Pailla; | arxiv-cs.CL | 2023-12-28 |
1058 | S2M: Converting Single-Turn to Multi-Turn Datasets for Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: On the other hand, while numerous single-turn datasets are available, we have not utilized them effectively. To solve this problem, we propose a novel method to convert single-turn datasets to multi-turn datasets. |
BAOKUI LI et. al. | arxiv-cs.CL | 2023-12-27 |
1059 | Geographic Knowledge Base Question Answering Over OpenStreetMap Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In recent years, question answering on knowledge bases (KBQA) has emerged as a promising approach for providing unified, user-friendly access to knowledge bases. Nevertheless, … |
Jonghyeon Yang; Hanme Jang; Kiyun Yu; | ISPRS Int. J. Geo Inf. | 2023-12-26 |
1060 | From Text to Multimodal: A Survey of Adversarial Example Generation in Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This article aims to comprehensively review adversarial example-generation techniques in the QA field, including textual and multimodal contexts. |
Gulsum Yigit; Mehmet Fatih Amasyali; | arxiv-cs.CL | 2023-12-26 |
1061 | Conversational Question Answering with Reformulations Over Knowledge Graph Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: These inputs are easy for human beings to understand given a conversation history, but hard for a machine to interpret, which can degrade ConvQA performance. To address this problem, we propose a reinforcement learning (RL) based model, CornNet, which utilizes question reformulations generated by large language models (LLMs) to improve ConvQA performance. |
Lihui Liu; Blaine Hill; Boxin Du; Fei Wang; Hanghang Tong; | arxiv-cs.CL | 2023-12-26 |
1062 | KnowledgeNavigator: Leveraging Large Language Models for Enhanced Reasoning Over Knowledge Graph IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Especially in scenarios that require long logical chains or complex reasoning, the hallucination and knowledge limitation of LLM limit its performance in question answering (QA). In this paper, we propose a novel framework KnowledgeNavigator to address these challenges by efficiently and accurately retrieving external knowledge from knowledge graph and using it as a key factor to enhance LLM reasoning. |
TIEZHENG GUO et. al. | arxiv-cs.CL | 2023-12-25 |
1063 | On The Promises and Challenges of Multimodal Foundation Models for Geographical, Environmental, Agricultural, and Urban Planning Applications Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The advent of large language models (LLMs) has heightened interest in their potential for multimodal applications that integrate language and vision. This paper explores the … |
CHENJIAO TAN et. al. | ArXiv | 2023-12-23 |
1064 | Selectively Answering Ambiguous Questions IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We investigate question answering from this perspective, focusing on answering a subset of questions with a high degree of accuracy, from a set of questions in which many are inherently ambiguous. |
JEREMY COLE et. al. | emnlp | 2023-12-22 |
1065 | ViGPTQA – State-of-the-Art LLMs for Vietnamese Question Answering: System Overview, Core Models Training, and Evaluations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces a practical real-world implementation of a question answering system for Vietnamese, called ViGPTQA, leveraging the power of LLMs. |
Minh Thuan Nguyen; Khanh Tung Tran; Nhu Van Nguyen; Xuan-Son Vu; | emnlp | 2023-12-22 |
1066 | Beware of Model Collapse! Fast and Stable Test-time Adaptation for Robust Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we delve into why TTA causes model collapse and find that the imbalanced label distribution inherent in QA is the reason for it. |
Yi Su; Yixin Ji; Juntao Li; Hai Ye; Min Zhang; | emnlp | 2023-12-22 |
1067 | Continually Improving Extractive QA Via Human Feedback Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We study continually improving an extractive question answering (QA) system via human user feedback. |
Ge Gao; Hung-Ting Chen; Yoav Artzi; Eunsol Choi; | emnlp | 2023-12-22 |
1068 | Merging Generated and Retrieved Knowledge for Open-Domain QA IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Based on the intuition that answers supported by both sources are more likely to be correct, we propose COMBO, a Compatibility-Oriented knowledge Merging for Better Open-domain QA framework, to effectively leverage the two sources of information. |
YUNXIANG ZHANG et. al. | emnlp | 2023-12-22 |
1069 | Diversity Enhanced Narrative Question Generation for Storybooks Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce a multi-question generation model (mQG), which is capable of generating multiple, diverse, and answerable questions by focusing on context and questions. |
Hokeun Yoon; JinYeong Bak; | emnlp | 2023-12-22 |
1070 | Techniques, Datasets, Evaluation Metrics and Future Directions of A Question Answering System Related Papers Related Patents Related Grants Related Venues Related Experts View |
Faiza Qamar; Seemab Latif; Asad Shah; | Knowledge and Information Systems | 2023-12-22 |
1071 | CRT-QA: A Dataset of Complex Reasoning Question Answering Over Tabular Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we first establish a comprehensive taxonomy of reasoning and operation types for tabular data analysis. Then, we construct a complex reasoning QA dataset over tabular data, named CRT-QA dataset (Complex Reasoning QA over Tabular data), with the following unique features: (1) it is the first Table QA dataset with multi-step operation and informal reasoning; (2) it contains fine-grained annotations on questions' directness, composition types of sub-questions, and human reasoning paths which can be used to conduct a thorough investigation on LLMs' reasoning ability; (3) it contains a collection of unanswerable and indeterminate questions that commonly arise in real-world situations. |
Zhehao Zhang; Xitao Li; Yan Gao; Jian-Guang Lou; | emnlp | 2023-12-22 |
1072 | ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering Over Knowledge Graph IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the effectiveness, due to the divergence in model architecture, the PLM and GNN are not closely integrated, limiting the knowledge sharing and fine-grained feature interactions. To solve it, we aim to simplify the above two-module approach, and develop a more capable PLM that can directly support subgraph reasoning for KGQA, namely ReasoningLM. |
Jinhao Jiang; Kun Zhou; Xin Zhao; Yaliang Li; Ji-Rong Wen; | emnlp | 2023-12-22 |
1073 | TheoremQA: A Theorem-driven Question Answering Dataset IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce TheoremQA, the first theorem-driven question-answering dataset designed to evaluate AI models' capabilities to apply theorems to solve challenging science problems. |
WENHU CHEN et. al. | emnlp | 2023-12-22 |
1074 | QA-NatVer: Question Answering for Natural Logic-based Fact Verification Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we propose to use question answering to predict natural logic operators, taking advantage of the generalization capabilities of instruction-tuned language models. |
Rami Aly; Marek Strong; Andreas Vlachos; | emnlp | 2023-12-22 |
1075 | Mitigating Temporal Misalignment By Discarding Outdated Facts IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To mitigate the effects of temporal misalignment, we propose fact duration prediction: the task of predicting how long a given fact will remain true. |
Michael Zhang; Eunsol Choi; | emnlp | 2023-12-22 |
1076 | Large Language Models Are Complex Table Parsers IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose to incorporate GPT-3.5 to address such challenges, in which complex tables are reconstructed into tuples and specific prompt designs are employed for dialogues. |
BOWEN ZHAO et. al. | emnlp | 2023-12-22 |
1077 | FACTIFY3M: A Benchmark for Multimodal Fact Verification with Explainability Through 5W Question-Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite progress in automatic text-based fact verification (e.g., FEVER, LIAR), the research community lacks substantial effort in multimodal fact verification. To address this gap, we introduce FACTIFY 3M, a dataset of 3 million samples that pushes the boundaries of the domain of fact verification via a multimodal fake news dataset, in addition to offering explainability through the concept of 5W question-answering. |
MEGHA CHAKRABORTY et. al. | emnlp | 2023-12-22 |
1078 | Towards A Unified Multimodal Reasoning Framework Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Our experiments aimed to fill the gap in current research by investigating the combined impact of CoT and VQA, contributing to the understanding of how these techniques can improve the reasoning capabilities of state-of-the-art models like GPT-4. Results from our experiments demonstrated the potential of these approaches in enhancing LM’s reasoning and question-answering capabilities, providing insights for further research and development in the field, and paving the way for more accurate and reliable AI systems that can handle complex reasoning tasks across multiple modalities. |
Abhinav Arun; Dipendra Singh Mal; Mehul Soni; Tomohiro Sawada; | arxiv-cs.CL | 2023-12-22 |
1079 | Question Answering As Programming for Solving Time-Sensitive Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This can be attributed to the LLMs' inability to perform rigorous reasoning based on surface-level text semantics. To overcome this limitation, rather than requiring LLMs to directly answer the question, we propose a novel approach where we reframe the Question Answering task as Programming (QAaP). |
XINYU ZHU et. al. | emnlp | 2023-12-22 |
1080 | Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this study, we introduce InfoSeek, a visual question answering dataset tailored for information-seeking questions that cannot be answered with only common sense knowledge. |
YANG CHEN et. al. | emnlp | 2023-12-22 |
1081 | Dialogizer: Context-aware Conversational-QA Dataset Generation from Textual Sources Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the original dialog inpainting model is trained solely on the dialog reconstruction task, resulting in the generation of questions with low contextual relevance due to insufficient learning of question-answer alignment. To overcome this limitation, we propose a novel framework called Dialogizer, which has the capability to automatically generate ConvQA datasets with high contextual relevance from textual sources. |
YERIN HWANG et. al. | emnlp | 2023-12-22 |
1082 | Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To cope with the challenge, we propose a novel framework, Tree of Clarifications (ToC): It recursively constructs a tree of disambiguations for the AQ (via few-shot prompting leveraging external knowledge) and uses it to generate a long-form answer. |
Gangwoo Kim; Sungdong Kim; Byeongguk Jeon; Joonsuk Park; Jaewoo Kang; | emnlp | 2023-12-22 |
1083 | TempTabQA: Temporal Question Answering for Semi-Structured Tables IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Can current NLP systems reason about such information in semi-structured tables? To tackle this question, we introduce the task of temporal question answering on semi-structured tables. |
VIVEK GUPTA et. al. | emnlp | 2023-12-22 |
1084 | PreWoMe: Exploiting Presuppositions As Working Memory for Long Form Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose PreWoMe, a unified approach capable of handling any type of information-seeking question. |
Wookje Han; Jinsol Park; Kyungjae Lee; | emnlp | 2023-12-22 |
1085 | GazeVQA: A Video Question Answering Dataset for Multiview Eye-Gaze Task-Oriented Collaborations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we build a novel task-oriented VQA dataset, called GazeVQA, for collaborative tasks where gaze information is captured during the task process. |
MUHAMMET ILASLAN et. al. | emnlp | 2023-12-22 |
1086 | Interview Evaluation: A Novel Approach for Automatic Evaluation of Conversational Question Answering Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a novel automatic evaluation approach, interview evaluation. |
XIBO LI et. al. | emnlp | 2023-12-22 |
1087 | MarkQA: A Large Scale KBQA Dataset with Numerical Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we focus on the complex numerical reasoning in KBQA, and propose a new task, NR-KBQA, which necessitates the ability to perform both multi-hop reasoning and numerical reasoning. |
Xiang Huang; Sitao Cheng; Yuheng Bao; Shanshan Huang; Yuzhong Qu; | emnlp | 2023-12-22 |
1088 | PRCA: Fitting Black-Box Large Language Models for Retrieval Question Answering Via Pluggable Reward-Driven Contextual Adapter IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Incorporating Large Language Models (LLMs) as generators is beneficial due to their advanced QA capabilities, but they are typically too large to be fine-tuned with budget constraints while some of them are only accessible via APIs. To tackle this issue and further improve ReQA performance, we propose a trainable Pluggable Reward-Driven Contextual Adapter (PRCA), keeping the generator as a black box. |
HAOYAN YANG et. al. | emnlp | 2023-12-22 |
1089 | API-Assisted Code Generation for Question Answering on Varied Table Structures Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In response, this paper introduces a unified TableQA framework that: (1) provides a unified representation for structured tables as multi-index Pandas data frames, (2) uses Python as a powerful querying language, and (3) uses few-shot prompting to translate NL questions into Python programs, which are executable on Pandas data frames. |
Yihan Cao; Shuyi Chen; Ryan Liu; Zhiruo Wang; Daniel Fried; | emnlp | 2023-12-22 |
1090 | A Simple Baseline for Knowledge-Based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our main contribution in this paper is to propose a much simpler and readily reproducible pipeline which, in a nutshell, is based on efficient in-context learning by prompting LLaMA (1 and 2) using question-informative captions as contextual information. |
Alexandros Xenos; Themos Stafylakis; Ioannis Patras; Georgios Tzimiropoulos; | emnlp | 2023-12-22 |
1091 | A Question Answering Framework for Decontextualizing User-facing Snippets from Scientific Documents IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we use language models to rewrite snippets from scientific documents to be read on their own. |
Benjamin Newman; Luca Soldaini; Raymond Fok; Arman Cohan; Kyle Lo; | emnlp | 2023-12-22 |
1092 | Language Models with Rationality Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This lack of interpretability is a growing impediment to widespread use of LLMs. To address this, our goals are to make model beliefs and their inferential relationships explicit, and to resolve inconsistencies that may exist, so that answers are supported by interpretable chains of reasoning drawn from a consistent network of beliefs. |
NORA KASSNER et. al. | emnlp | 2023-12-22 |
1093 | Empower Large Language Model to Perform Better on Industrial Domain-Specific Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we provide a benchmark Question Answering (QA) dataset named MSQA, centered around Microsoft products and IT technical problems encountered by customers. |
FANGKAI YANG et. al. | emnlp | 2023-12-22 |
1094 | Large Language Models Are Temporal and Causal Reasoners for Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we develop LLaMA-VQA by applying Flipped-VQA to LLaMA, and it outperforms both LLMs-based and non-LLMs-based models on five challenging VideoQA benchmarks. |
Dohwan Ko; Ji Lee; Woo-Young Kang; Byungseok Roh; Hyunwoo Kim; | emnlp | 2023-12-22 |
1095 | From Parse-Execute to Parse-Execute-Refine: Improving Semantic Parser for Complex Question Answering Over Knowledge Base Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Specifically, we propose three components: a parsing stage, an execution stage and a refinement stage, to enhance the ability of complex reasoning. |
Wangzhen Guo; Linyin Luo; Hanjiang Lai; Jian Yin; | emnlp | 2023-12-22 |
1096 | Navigating The Grey Area: How Expressions of Uncertainty and Overconfidence Affect Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The increased deployment of LMs for real-world tasks involving knowledge and facts makes it important to understand model epistemology: what LMs think they know, and how their attitudes toward that knowledge are affected by language use in their inputs. Here, we study an aspect of model epistemology: how epistemic markers of certainty, uncertainty, or evidentiality like "I'm sure it's", "I think it's", or "Wikipedia says it's" affect models, and whether they contribute to model failures. |
Kaitlyn Zhou; Dan Jurafsky; Tatsunori Hashimoto; | emnlp | 2023-12-22 |
1097 | IfQA: A Dataset for Open-domain Question Answering Under Counterfactual Presuppositions IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Although counterfactual reasoning is a fundamental aspect of intelligence, the lack of large-scale counterfactual open-domain question-answering (QA) benchmarks makes it difficult to evaluate and improve models on this ability. To address this void, we introduce the first such dataset, named IfQA, where each question is based on a counterfactual presupposition via an "if" clause. |
Wenhao Yu; Meng Jiang; Peter Clark; Ashish Sabharwal; | emnlp | 2023-12-22 |
1098 | Uncertainty Guided Global Memory Improves Multi-Hop Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, attention-based token representations lack explicit global contextual information to connect reasoning steps. To address these issues, we propose GEMFormer, a two-stage method that first collects relevant information over the entire document to the memory and then combines it with local context to solve the task. |
Alsu Sagirova; Mikhail Burtsev; | emnlp | 2023-12-22 |
1099 | CarExpert: Leveraging Large Language Models for In-Car Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose CarExpert, an in-car retrieval-augmented conversational question-answering system leveraging LLMs for different tasks. |
MD RASHAD AL HASAN RONY et. al. | emnlp | 2023-12-22 |
1100 | ZEROTOP: Zero-Shot Task-Oriented Semantic Parsing Using Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems. |
Dheeraj Mekala; Jason Wolfe; Subhro Roy; | emnlp | 2023-12-22 |
1101 | Too Much of Product Information: Don't Worry, Let's Look for Evidence! Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a distantly supervised solution to answer customer questions by using product information. |
Aryan Jain; Jitenkumar Rana; Chetan Aggarwal; | emnlp | 2023-12-22 |
1102 | Evaluating and Modeling Attribution for Cross-Lingual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We find that Natural Language Inference models and PaLM 2 fine-tuned on a very small amount of attribution data can accurately detect attribution. With these models, we improve the attribution level of a cross-lingual QA system. |
BENJAMIN MULLER et. al. | emnlp | 2023-12-22 |
1103 | Causal Reasoning Through Two Cognition Layers for Improving Generalization in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Besides, diverse interpretations of the input lead to various modes of answer generation, highlighting the role of causal reasoning between interpreting and answering steps in VQA. Through this lens, we propose Cognitive pathways VQA (CopVQA) improving the multimodal predictions by emphasizing causal reasoning factors. |
Trang Nguyen; Naoaki Okazaki; | emnlp | 2023-12-22 |
1104 | Hop, Union, Generate: Explainable Multi-hop Reasoning Without Rationale Supervision Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision. |
Wenting Zhao; Justin Chiu; Claire Cardie; Alexander Rush; | emnlp | 2023-12-22 |
1105 | Does Named Entity Recognition Truly Not Scale Up to Real-world Product Attribute Extraction? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we argue for the scalability of the NER-based approach compared to the QA-based approach, since prior work has compared BERT-based QA models against only a weak BiLSTM-based NER baseline trained from scratch, and only in terms of accuracy on datasets designed to evaluate the QA-based approach. |
Wei-Te Chen; Keiji Shinzato; Naoki Yoshinaga; Yandi Xia; | emnlp | 2023-12-22 |
1106 | Best of Both Worlds: Towards Improving Temporal Knowledge Base Question Answering Via Targeted Fact Extraction Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We model the extraction problem as an open-domain question answering task using off-the-shelf language models. |
NITHISH KANNEN et. al. | emnlp | 2023-12-22 |
1107 | Continual Dialogue State Tracking Via Example-Guided Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Motivated by the insight that dialogue state tracking (DST), a crucial component of dialogue systems that estimates the user's goal as a conversation proceeds, is a simple natural language understanding task, we propose reformulating it as a bundle of granular example-guided question answering tasks to minimize the task shift between services and thus benefit continual learning. |
HYUNDONG CHO et. al. | emnlp | 2023-12-22 |
1108 | Diversify Question Generation with Retrieval-Augmented Style Transfer Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: These methods, however, have not considered the potential of external knowledge for expression diversity. To bridge this gap, we propose RAST, a framework for Retrieval-Augmented Style Transfer, where the objective is to utilize the style of diverse templates for question generation. |
QI GOU et. al. | emnlp | 2023-12-22 |
1109 | LingoQA: Visual Question Answering for Autonomous Driving IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce LingoQA, a novel dataset and benchmark for visual question answering in autonomous driving. |
ANA-MARIA MARCU et. al. | arxiv-cs.RO | 2023-12-21 |
1110 | DriveLM: Driving with Graph Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We instantiate datasets (DriveLM-Data) built upon nuScenes and CARLA, and propose a VLM-based baseline approach (DriveLM-Agent) for jointly performing Graph VQA and end-to-end driving. |
CHONGHAO SIMA et. al. | arxiv-cs.CV | 2023-12-21 |
1111 | Perception Test 2023: A Summary of The First Challenge And Outcome Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We summarise in this report the task descriptions, metrics, baselines, and results. |
Joseph Heyward; João Carreira; Dima Damen; Andrew Zisserman; Viorica Pătrăucean; | arxiv-cs.CV | 2023-12-20 |
1112 | Relation-Aware Question Answering for Heterogeneous Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this way, the interaction between entity and relation is enhanced, and we derive better entity and relation representations. |
HAOWEI DU et. al. | arxiv-cs.CL | 2023-12-19 |
1113 | Cross-Modal Reasoning with Event Correlation for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the dense caption modality as a new auxiliary and distill event-correlated information from it to infer the correct answer. |
CHENGXIANG YIN et. al. | arxiv-cs.CV | 2023-12-19 |
1114 | Multi-Clue Reasoning with Memory Augmentation for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, most existing VQA methods are incapable of handling Knowledge-based Visual Question Answering (KB-VQA), which requires external knowledge beyond visible contents to answer questions about a given image. To address this issue, we propose a novel framework that endows the model with capabilities of answering more general questions, and achieves a better exploitation of external knowledge through generating Multiple Clues for Reasoning with Memory Neural Networks (MCR-MemNN). |
Chengxiang Yin; Zhengping Che; Kun Wu; Zhiyuan Xu; Jian Tang; | arxiv-cs.CV | 2023-12-19 |
1115 | VQA4CIR: Boosting Composed Image Retrieval with Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Although progress has been made in Composed Image Retrieval (CIR), we empirically find that a certain percentage of failed retrieval results are not consistent with their relative captions. |
CHUN-MEI FENG et. al. | arxiv-cs.CV | 2023-12-19 |
1116 | On Early Detection of Hallucinations in Factual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we explore if the artifacts associated with the model generations can provide hints that the generation will contain hallucinations. |
Ben Snyder; Marius Moisescu; Muhammad Bilal Zafar; | arxiv-cs.CL | 2023-12-19 |
1117 | UniGen: A Unified Generative Framework for Retrieval and Question Answering with Large Language Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Generative information retrieval, encompassing two major tasks of Generative Document Retrieval (GDR) and Grounded Answer Generation (GAR), has gained significant attention in … |
Xiaoxi Li; Yujia Zhou; Zhicheng Dou; | ArXiv | 2023-12-18 |
1118 | GenBoost: Generative Modeling and Boosted Learning for Multi-hop Question Answering Over Incomplete Knowledge Graphs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multi-hop question answering over incomplete knowledge graphs involves iteratively reasoning on the provided question and graph to find answers, while also tackling the inherent … |
Zhen Cheng; Jianwei Niu; Shasha Mo; Jia Chen; | 2023 IEEE 29th International Conference on Parallel and … | 2023-12-17 |
1119 | Towards Designing A Question-Answering Chatbot for Online News: Understanding Questions and Perspectives Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: By combining results from the studies, we present alignments and discrepancies between how journalists and readers want to use QA chatbots and propose a framework for designing effective QA chatbots in newsrooms. |
Md Naimul Hoque; Ayman Mahfuz; Mayukha Kindi; Naeemul Hassan; | arxiv-cs.HC | 2023-12-17 |
1120 | An Evaluation of GPT-4V and Gemini in Online VQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We conduct fine-grained analysis by generating seven types of metadata for nearly 2,000 visual questions, such as image type and the required image processing capabilities. |
Mengchen Liu; Chongyan Chen; Danna Gurari; | arxiv-cs.CV | 2023-12-17 |
1121 | Research on Intelligent Question-Answering Systems Based on Large Language Models and Knowledge Graphs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: With the continuous development of artificial intelligence and cloud computing technologies, the emergence of large language models (LLMs) has created new opportunities for … |
Qinglin Wu; Yan Wang; | 2023 16th International Symposium on Computational … | 2023-12-16 |
1122 | Privacy-Aware Document Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Document Visual Question Answering (DocVQA) is a fast growing branch of document understanding. Despite the fact that documents contain sensitive or copyrighted information, none … |
RUBÈN PÉREZ TITO et. al. | ArXiv | 2023-12-15 |
1123 | Privacy-Aware Document Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we explore privacy in the domain of DocVQA for the first time, highlighting privacy issues in state of the art multi-modal LLM models used for DocVQA, and explore possible solutions. |
RUBÈN TITO et. al. | arxiv-cs.CV | 2023-12-15 |
1124 | RJUA-QA: A Comprehensive QA Dataset for Urology Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce RJUA-QA, a novel medical dataset for question answering (QA) and reasoning with clinical evidence, contributing to bridge the gap between general large language models (LLMs) and medical-specific LLM applications. |
SHIWEI LYU et. al. | arxiv-cs.CL | 2023-12-15 |
1125 | GSQA: An End-to-End Model for Generative Spoken Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While this extractive-based approach is effective when answers are present directly within the input, it falls short in addressing abstractive questions, where answers are not directly extracted but inferred from the given information. To bridge this gap, we introduce the first end-to-end Generative Spoken Question Answering (GSQA) model that empowers the system to engage in abstractive reasoning. |
MIN-HAN SHIH et. al. | arxiv-cs.CL | 2023-12-15 |
1126 | Weak Supervision for Question and Answering Sentiment Analysis Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Companies and government agencies are keen on comprehending their customers’ sentiments regarding their products and services. This has given rise to the concept of Social … |
Victor Akihito Kamada Tomita; Fábio Manoel França Lobato; R. Marcacini; | 2023 International Conference on Machine Learning and … | 2023-12-15 |
1127 | ReST Meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: These systems, however, suffer from various failure cases, and we cannot directly train them end-to-end to fix such failures, as interaction with external knowledge is non-differentiable. To address these deficiencies, we define a ReAct-style LLM agent with the ability to reason and act upon external knowledge. |
RENAT AKSITOV et. al. | arxiv-cs.CL | 2023-12-15 |
1128 | Advancing Surgical VQA with Scene Graph Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present a novel surgical VQA dataset and model and show that results can be significantly improved by incorporating geometric scene features in the VQA model design. |
KUN YUAN et. al. | arxiv-cs.CV | 2023-12-15 |
1129 | Knowledge Enhancement and Scene Understanding for Knowledge-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Zhenqiang Su; Gang Gou; | Knowledge and Information Systems | 2023-12-14 |
1130 | ViLA: Efficient Video-Language Alignment for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose an efficient Video-Language Alignment (ViLA) network. |
XIJUN WANG et. al. | arxiv-cs.CV | 2023-12-13 |
1131 | BESTMVQA: A Benchmark Evaluation System for Medical Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, they often suffer from (i) the data insufficiency problem, which makes it difficult to train state-of-the-art models (SOTAs) for the domain-specific task, and (ii) the reproducibility problem, in that many existing models have not been thoroughly evaluated in a unified experimental setup. To address these issues, this paper develops a Benchmark Evaluation SysTem for Medical Visual Question Answering, denoted by BESTMVQA. |
Xiaojie Hong; Zixin Song; Liangzhi Li; Xiaoli Wang; Feiyan Liu; | arxiv-cs.AI | 2023-12-12 |
1132 | Evaluating ChatGPT As A Question Answering System: A Comprehensive Analysis and Comparison with Existing Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In the current era, a multitude of language models has emerged to cater to user inquiries. Notably, the GPT-3.5 Turbo language model has gained substantial attention as the … |
Hossein Bahak; Farzaneh Taheri; Zahra Zojaji; Arefeh Kazemi; | ArXiv | 2023-12-11 |
1133 | NuScenes-MQA: Integrated Evaluation of Captions and QA for Autonomous Driving Datasets Using Markup Annotations IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce Markup-QA, a novel dataset annotation technique in which QAs are enclosed within markups. |
Yuichi Inoue; Yuki Yada; Kotaro Tanahashi; Yu Yamaguchi; | arxiv-cs.CV | 2023-12-11 |
1134 | PaperQA: Retrieval-Augmented Generative Agent for Scientific Research IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Retrieval-Augmented Generation (RAG) models have been proposed to reduce hallucinations and provide provenance for how an answer was generated. |
JAKUB LÁLA et. al. | arxiv-cs.CL | 2023-12-08 |
1135 | DelucionQA: Detecting Hallucinations in Domain-specific Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Detecting hallucinations through automated methods is thus paramount. To facilitate research in this direction, we introduce a sophisticated dataset, DelucionQA, that captures hallucinations made by retrieval-augmented LLMs for a domain-specific QA task. |
MOBASHIR SADAT et. al. | arxiv-cs.CL | 2023-12-08 |
1136 | Retrieval-based Video Language Model for Efficient Long Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Moreover, the presence of abundant question-irrelevant tokens introduces noise to the video QA process. To address these issues, we introduce a simple yet effective retrieval-based video language model (R-VLM) for efficient and interpretable long video QA. |
Jiaqi Xu; Cuiling Lan; Wenxuan Xie; Xuejin Chen; Yan Lu; | arxiv-cs.CV | 2023-12-08 |
1137 | LifelongMemory: Leveraging LLMs for Answering Queries in Long-form Egocentric Videos Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper we introduce LifelongMemory, a new framework for accessing long-form egocentric videographic memory through natural language question answering and retrieval. |
Ying Wang; Yanlai Yang; Mengye Ren; | arxiv-cs.CV | 2023-12-07 |
1138 | Language Model Knowledge Distillation for Efficient Question Answering in Spanish Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Therefore, smaller distilled models for the Spanish language could be proven to be highly scalable and facilitate their further adoption on a variety of tasks and scenarios. In this work, we take one step in this direction by developing SpanishTinyRoBERTa, a compressed language model based on RoBERTa for efficient question answering in Spanish. |
Adrián Bazaga; Pietro Liò; Gos Micklem; | arxiv-cs.CL | 2023-12-07 |
1139 | PCoQA: Persian Conversational Question Answering Dataset Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In the pursuit of conversational question answering research, we introduce PCoQA, the first Persian Conversational Question Answering dataset, a resource comprising information-seeking dialogs encompassing a total of 9,026 contextually-driven questions. |
HAMED HEMATIAN HEMATI et. al. | arxiv-cs.CL | 2023-12-07 |
1140 | A Question-Answering System for Vietnamese Public Administrative Services Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In the realm of legal question-answering (QA) systems, information retrieval (IR) plays a pivotal role. Despite thorough research in numerous languages, the Vietnamese research … |
Anh Pham Duy; Huong Le Thanh; | Proceedings of the 12th International Symposium on … | 2023-12-07 |
1141 | MoVQA: A Benchmark of Versatile Question-Answering for Long-Form Movie Understanding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Moreover, their QAs are unduly narrow and modality-biased, lacking a wider view of understanding long-term video content with rich dynamics and complex narratives. To remedy this, we introduce MoVQA, a long-form movie question-answering dataset and benchmark to assess the diverse cognitive capabilities of multimodal systems across multi-level temporal lengths, considering both video length and clue length. |
HONGJIE ZHANG et. al. | arxiv-cs.CV | 2023-12-07 |
1142 | XAIQA: Explainer-Based Data Augmentation for Extractive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a novel approach, XAIQA, for generating synthetic QA pairs at scale from data naturally available in electronic health records. |
JOEL STREMMEL et. al. | arxiv-cs.CL | 2023-12-06 |
1143 | Low-Resource Efficient Multi-Stage Tuning Strategy for Biomedical Question Answering Task Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The automated question-answering system plays a crucial role in improving the accuracy and efficiency of clinical decision-making. While large-scale language models perform … |
Binrui Wang; Yongping Du; Xingnan Jin; Rui Yan; Qi Zhang; | 2023 IEEE International Conference on Bioinformatics and … | 2023-12-05 |
1144 | PoQuAD – The Polish Question Answering Dataset – Description and Analysis Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper showcases PoQuAD — a SQuAD-like contribution to building Question Answering tools for Polish. It largely follows the usual Machine Reading Comprehension format, but a … |
Ryszard Tuora; Aleksandra Zwierzchowska; Natalia Zawadzka-Paluektau; Cezary Klamra; Łukasz Kobyliński; | Proceedings of the 12th Knowledge Capture Conference 2023 | 2023-12-05 |
1145 | Lingua Franca – Entity-Aware Machine Translation Approach for Question Answering Over Knowledge Graphs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This research paper proposes an approach called Lingua Franca that improves machine translation quality by utilizing information from a knowledge graph to translate named entities … |
NIKIT SRIVASTAVA et. al. | Proceedings of the 12th Knowledge Capture Conference 2023 | 2023-12-05 |
1146 | Unleashing The Potential of Large Language Model: Zero-shot VQA for Flood Disaster Scenario Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a zero-shot VQA model named Zero-shot VQA for Flood Disaster Damage Assessment (ZFDDA). |
Yimin Sun; Chao Wang; Yan Peng; | arxiv-cs.CV | 2023-12-04 |
1147 | An IoT-based Approach to Expert Recommendation in Community Question Answering for Disaster Recovery Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In the dynamic field of IoT, where technologies like Bluetooth and WiFi are prevalent in home and office settings, proactively managing disasters is critical. This paper … |
David Macri; Antonio Francesco Gentile; Pietro Sabatino; | 2023 IEEE International Conference on Data Mining Workshops … | 2023-12-04 |
1148 | GNN2R: Weakly-Supervised Rationale-Providing Question Answering Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Second, it is difficult to maintain high efficiency when explicit KG triples need to be retrieved to generate explanations. In this paper, we propose a novel Graph Neural Network-based Two-Step Reasoning model (GNN2R) to solve this issue. |
Ruijie Wang; Luca Rossetto; Michael Cochez; Abraham Bernstein; | arxiv-cs.CL | 2023-12-04 |
1149 | Harnessing The Power of Prompt-based Techniques for Generating School-Level Questions Using Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we propose a novel approach that utilizes prompt-based techniques to generate descriptive and reasoning-based questions. |
Subhankar Maity; Aniket Deroy; Sudeshna Sarkar; | arxiv-cs.CL | 2023-12-02 |
1150 | Towards Leveraging LLMs for Conditional QA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Utilizing the Conditional Question Answering (CQA) dataset and focusing on generative models like T5 and UL2, we assess the performance of LLMs across diverse question types. |
Syed-Amad Hussain; Parag Pravin Dakle; SaiKrishna Rallabandi; Preethi Raghavan; | arxiv-cs.CL | 2023-12-02 |
1151 | BERT and Hierarchical Cross Attention-based Question Answering Over Bridge Inspection Knowledge Graph IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View |
JIANXI YANG et. al. | Expert Syst. Appl. | 2023-12-01 |
1152 | Knowledge-based Visual Question Answering About Named Entities Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This thesis is positioned at the intersection of several research fields, Natural Language Processing, Information Retrieval (IR) and Computer Vision, which have unified around … |
Paul Lerner; | ACM SIGIR Forum | 2023-12-01 |
1153 | Multi-Granularity Interaction and Integration Network for Video Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video question answering, aiming to answer a natural language question related to the given video, has gained popularity in the last few years. Although significant improvements … |
Yuanyuan Wang; Meng Liu; Jianlong Wu; Liqiang Nie; | IEEE Transactions on Circuits and Systems for Video … | 2023-12-01 |
1154 | Zero-Shot Video Question Answering with Procedural Programs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose to answer zero-shot questions about videos by generating short procedural programs that derive a final answer from solving a sequence of visual subtasks. |
Rohan Choudhury; Koichiro Niinuma; Kris M. Kitani; László A. Jeni; | arxiv-cs.CV | 2023-12-01 |
1155 | Semantic Parsing for Question Answering Over Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a novel method with graph-to-segment mapping for question answering over knowledge graphs, which helps understanding question utterances. |
Sijia Wei; Wenwen Zhang; Qisong Li; Jiang Zhao; | arxiv-cs.CL | 2023-12-01 |
1156 | KI-MAG: A Knowledge-infused Abstractive Question Answering System in Medical Domain Related Papers Related Patents Related Grants Related Venues Related Experts View |
Aizan Zafar; Sovan Kumar Sahoo; Harsh Bhardawaj; Amitava Das; Asif Ekbal; | Neurocomputing | 2023-12-01 |
1157 | Enhancing Answer Selection in Community Question Answering with Pre-trained and Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Specifically, we apply the BERT model as the encoder layer to do pre-training for question subjects, question bodies and answers, respectively, then the cross attention mechanism selects the most relevant answer for different questions. |
Xinghang Hu; | arxiv-cs.CL | 2023-11-29 |
1158 | AviationGPT: A Large Language Model for The Aviation Domain Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The emergence of LLMs presents an opportunity to transform this situation, but there is a lack of LLMs specifically designed for the aviation domain. To address this gap, we propose AviationGPT, which is built on open-source LLaMA-2 and Mistral architectures and continuously trained on a wealth of carefully curated aviation datasets. |
Liya Wang; Jason Chou; Xin Zhou; Alex Tien; Diane M Baumgartner; | arxiv-cs.CL | 2023-11-29 |
1159 | Multi-modal Domain Adaptation for Text Visual Question Answering Tasks Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Domain adaptation aims to train a model on the labeled source data and unlabeled target data while improving the performance of the same model on the target domain. Recently, … |
Zhiyuan Li; Dongnan Liu; Weidong Cai; | 2023 International Conference on Digital Image Computing: … | 2023-11-28 |
1160 | Towards Top-Down Reasoning: An Explainable Multi-Agent Approach for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Thus, they cannot fully use the powerful VLM for the given VQA question to achieve optimal performance. Attempting to overcome this limitation, and inspired by the human top-down reasoning process, i.e., systematically exploring relevant issues to derive a comprehensive answer, this work introduces a novel, explainable multi-agent collaboration framework that leverages the expansive knowledge of Large Language Models (LLMs) to enhance the capabilities of VLMs themselves. |
ZEQING WANG et. al. | arxiv-cs.CV | 2023-11-28 |
1161 | A Survey of Consumer Health Question Answering Systems Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Consumers are increasingly using the web to find answers to their health‐related queries. Unfortunately, they often struggle with formulating the questions, further compounded by … |
A. Welivita; Pearl Pu; | AI Mag. | 2023-11-27 |
1162 | Fully Authentic Visual Question Answering Dataset from Online Communities Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the first VQA dataset in which all contents originate from an authentic use case. |
CHONGYAN CHEN et. al. | arxiv-cs.CV | 2023-11-27 |
1163 | Characterizing Video Question Answering with Sparsified Inputs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this way, we experiment over public VideoQA benchmarks and provide analysis on how sparsified inputs affect the performance. |
Shiyuan Huang; Robinson Piramuthu; Vicente Ordonez; Shih-Fu Chang; Gunnar A. Sigurdsson; | arxiv-cs.CV | 2023-11-27 |
1164 | Releasing The CRaQAn (Coreference Resolution in Question-Answering): An Open-source Dataset and Dataset Creation Methodology Using Instruction-following Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work we present our Coreference Resolution in Question-Answering (CRaQAn) dataset, an open-source dataset that caters to the nuanced information retrieval requirements of coreference resolution in question-answering tasks by providing over 250 question-answer pairs containing coreferences. |
ROB GRZYWINSKI et. al. | arxiv-cs.CL | 2023-11-27 |
1165 | See and Think: Embodied Agent in Virtual Environment IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes STEVE, a comprehensive and visionary embodied agent in the Minecraft virtual environment. |
ZHONGHAN ZHAO et. al. | arxiv-cs.AI | 2023-11-26 |
1166 | Uncertainty-aware Language Modeling for Selective Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present an automatic large language model (LLM) conversion approach that produces uncertainty-aware LLMs capable of estimating uncertainty with every prediction. |
QI YANG et. al. | arxiv-cs.CL | 2023-11-26 |
1167 | Optimizing and Fine-tuning Large Language Model for Urban Renewal Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study aims to innovatively explore adaptive applications of large language models (LLM) in urban renewal. |
XI WANG et. al. | arxiv-cs.CL | 2023-11-26 |
1168 | FlowMind: Automatic Workflow Generation with LLMs IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The rapidly evolving field of Robotic Process Automation (RPA) has made significant strides in automating repetitive processes, yet its effectiveness diminishes in scenarios … |
ZHEN ZENG et. al. | Proceedings of the Fourth ACM International Conference on … | 2023-11-25 |
1169 | AutoEval-Video: An Automatic Benchmark for Assessing Large Vision Language Models in Open-Ended Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a novel and challenging benchmark, AutoEval-Video, to comprehensively evaluate large vision-language models in open-ended video question answering. |
Xiuyuan Chen; Yuan Lin; Yuchen Zhang; Weiran Huang; | arxiv-cs.CV | 2023-11-24 |
1170 | Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel approach: Probabilistic Tree-of-thought Reasoning (ProbTree). |
SHULIN CAO et. al. | arxiv-cs.CL | 2023-11-23 |
1171 | Question Answering in Natural Language: The Special Case of Temporal Expressions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our work aims to leverage a popular approach used for general question answering, answer extraction, in order to find answers to temporal questions within a paragraph. |
Armand Stricker; | arxiv-cs.CL | 2023-11-23 |
1172 | Drilling Down Into The Discourse Structure with LLMs for Long Document Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We aim to assess the applicability of large language models (LLMs) in the task of zero-shot long document evidence retrieval, owing to their unprecedented performance across various NLP tasks. |
Inderjeet Nair; Shwetha Somasundaram; Apoorv Saxena; Koustava Goswami; | arxiv-cs.CL | 2023-11-22 |
1173 | Enhancing Large Language Models’ Utility for Medical Question-Answering: A Patient Health Question Summarization Approach Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large language models (LLMs) offer tremendous potential for answering diverse questions and providing valuable insights. However, to maximize their utility, it is essential to … |
Nour Eddine Zekaoui; Siham Yousfi; M. Mikram; Maryem Rhanoui; | 2023 14th International Conference on Intelligent Systems: … | 2023-11-22 |
1174 | FinanceBench: A New Benchmark for Financial Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We test 16 state of the art model configurations (including GPT-4-Turbo, Llama2 and Claude2, with vector stores and long context prompts) on a sample of 150 cases from FinanceBench, and manually review their answers (n=2,400). |
PRANAB ISLAM et. al. | arxiv-cs.CL | 2023-11-20 |
1175 | Taiyi: A Bilingual Fine-Tuned Large Language Model for Diverse Biomedical Tasks IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To investigate the effectiveness of the fine-tuned LLMs on diverse biomedical NLP tasks in different languages, We present Taiyi, a bilingual fine-tuned LLM for diverse biomedical tasks. |
LING LUO et. al. | arxiv-cs.CL | 2023-11-20 |
1176 | PEFT-MedAware: Large Language Model for Medical Awareness Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Chat models are capable of answering a wide range of questions, however, the accuracy of their responses is highly uncertain. In this research, we propose a specialized PEFT-MedAware model where we utilize parameter-efficient fine-tuning (PEFT) to enhance the Falcon-1b large language model on specialized MedQuAD data consisting of 16,407 medical QA pairs, leveraging only 0.44% of its trainable parameters to enhance computational efficiency. |
Keivalya Pandya; | arxiv-cs.CL | 2023-11-17 |
1177 | Graph Elicitation for Guiding Multi-Step Reasoning in Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To deal with them, we propose a GE-Reasoning method, which directs LLMs to generate proper sub-questions and corresponding answers. |
Jinyoung Park; Ameen Patel; Omar Zia Khan; Hyunwoo J. Kim; Joo-Kyung Kim; | arxiv-cs.CL | 2023-11-16 |
1178 | Graph-Guided Reasoning for Multi-Hop Question Answering in Large Language Models Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Chain-of-Thought (CoT) prompting has boosted the multi-step reasoning capabilities of Large Language Models (LLMs) by generating a series of rationales before the final answer. We … |
Jinyoung Park; Ameen Patel; Omar Zia Khan; Hyunwoo J. Kim; Jooyeon Kim; | ArXiv | 2023-11-16 |
1179 | Downstream Trade-offs of A Family of Text Watermarks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we evaluate the performance of LLMs watermarked using three different strategies over a diverse suite of tasks including those cast as k-class classification (CLS), multiple choice question answering (MCQ), short-form generation (e.g., open-ended question answering) and long-form generation (e.g., translation) tasks. |
Anirudh Ajith; Sameer Singh; Danish Pruthi; | arxiv-cs.CL | 2023-11-16 |
1180 | Towards Robust Temporal Reasoning of Large Language Models Via A Multi-Hop QA Dataset and Pseudo-Instruction Tuning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a complex temporal question-answering dataset Complex-TR that focuses on multi-answer and multi-hop temporal reasoning. |
Qingyu Tan; Hwee Tou Ng; Lidong Bing; | arxiv-cs.CL | 2023-11-16 |
1181 | Graph Neural Networks for Visual Question Answering: A Systematic Review Related Papers Related Patents Related Grants Related Venues Related Experts View |
ABDULGANIYU ABDU YUSUF et. al. | Multim. Tools Appl. | 2023-11-16 |
1182 | Leveraging LLMs in Scholarly Knowledge Graph Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents a scholarly Knowledge Graph Question Answering (KGQA) that answers bibliographic natural language questions by leveraging a large language model (LLM) in a few-shot manner. |
Tilahun Abedissa Taffa; Ricardo Usbeck; | arxiv-cs.CL | 2023-11-16 |
1183 | Crafting In-context Examples According to LMs’ Parametric Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We perform analysis on three multi-answer question answering datasets, which allows us to further study answer set ordering strategies based on the LM’s knowledge of each answer. |
Yoonsang Lee; Pranav Atreya; Xi Ye; Eunsol Choi; | arxiv-cs.CL | 2023-11-16 |
1184 | On Evaluating The Integration of Reasoning and Action in LLM Agents with Database Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the challenge of accurately assessing answer quality, we introduce a multi-agent evaluation framework that simulates the academic peer-review process, enhancing the precision and reliability of our evaluations. |
LINYONG NAN et. al. | arxiv-cs.CL | 2023-11-16 |
1185 | SQATIN: Supervised Instruction Tuning Meets Question Answering for Improved Dialogue NLU Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce SQATIN, a new framework for dialog NLU based on (i) instruction tuning and (ii) question-answering-based formulation of ID and VE tasks. |
Evgeniia Razumovskaia; Goran Glavaš; Anna Korhonen; Ivan Vulić; | arxiv-cs.CL | 2023-11-15 |
1186 | Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce the problem of few-shot transfer learning for KBQA, where the target domain offers only a few labeled examples, but a large labeled training dataset is available in a source domain. |
Mayur Patidar; Riya Sawhney; Avinash Singh; Biswajit Chatterjee; Indrajit Bhattacharya; | arxiv-cs.CL | 2023-11-15 |
1187 | Improving Zero-shot Visual Question Answering Via Large Language Models with Reasoning Question Prompts IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To this end, we present Reasoning Question Prompts for VQA tasks, which can further activate the potential of LLMs in zero-shot scenarios. |
YUNSHI LAN et. al. | arxiv-cs.CV | 2023-11-15 |
1188 | LLMRefine: Pinpointing and Refining Large Language Models Via Fine-Grained Actionable Feedback Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose LLMRefine, an inference time optimization method to refine LLM’s output. |
WENDA XU et. al. | arxiv-cs.CL | 2023-11-15 |
1189 | Never Lost in The Middle: Mastering Long-Context Question Answering with Position-Agnostic Decompositional Training Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The lost-in-the-middle problem challenges most LLMs, referring to the dramatic decline in accuracy when the correct information is located in the middle of a long input context. To overcome this crucial issue, this paper proposes to enhance the information-searching and reflection ability of LLMs in long contexts via specially designed tasks called Attention Strengthening Multi-doc QA (ASM QA). |
JUNQING HE et. al. | arxiv-cs.CL | 2023-11-15 |
1190 | Long-form Question Answering: An Iterative Planning-Retrieval-Generation Approach Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Additionally, generating detailed long-form answers often entails aggregating knowledge from diverse sources. To address these limitations, we propose an LFQA model with iterative Planning, Retrieval, and Generation. |
Pritom Saha Akash; Kashob Kumar Roy; Lucian Popa; Kevin Chen-Chuan Chang; | arxiv-cs.CL | 2023-11-15 |
1191 | Pregnant Questions: The Importance of Pragmatic Awareness in Maternal Health Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In a high-risk domain such as maternal and infant health, a question-answering system must recognize these pragmatic constraints and go beyond simply answering user questions, examining them in context to respond helpfully. To achieve this, we study assumptions and implications, or pragmatic inferences, made when mothers ask questions about pregnancy and infant care by collecting a dataset of 2,727 inferences from 500 questions across three diverse sources. |
NEHA SRIKANTH et. al. | arxiv-cs.CL | 2023-11-15 |
1192 | TempTabQA: Temporal Question Answering for Semi-Structured Tables IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Can current NLP systems reason about such information in semi-structured tables? To tackle this question, we introduce the task of temporal question answering on semi-structured tables. |
VIVEK GUPTA et. al. | arxiv-cs.CL | 2023-11-14 |
1193 | Learning to Filter Context for Retrieval-Augmented Generation IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This can cause over- or under-reliance on context, and result in problems in the generated output such as hallucinations. To alleviate these problems, we propose FILCO, a method that improves the quality of the context provided to the generator by (1) identifying useful context based on lexical and information-theoretic approaches, and (2) training context filtering models that can filter retrieved contexts at test time. |
Zhiruo Wang; Jun Araki; Zhengbao Jiang; Md Rizwan Parvez; Graham Neubig; | arxiv-cs.CL | 2023-11-14 |
1194 | Understanding Calibration for Multilingual Question Answering Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we study the calibration properties of several pre-trained multilingual large language models (LLMs) on a variety of question-answering tasks. |
Yahan Yang; Soham Dan; Dan Roth; Insup Lee; | arxiv-cs.CL | 2023-11-14 |
1195 | Insights Into Classifying and Mitigating LLMs’ Hallucinations Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our research addresses this critical issue within the HeReFaNMi (Health-Related Fake News Mitigation) project, generously supported by NGI Search, dedicated to combating Health-Related Fake News dissemination on the Internet. This endeavour represents a concerted effort to safeguard the integrity of information dissemination in an age of evolving AI technologies. |
Alessandro Bruno; Pier Luigi Mazzeo; Aladine Chetouani; Marouane Tliba; Mohamed Amine Kerkouri; | arxiv-cs.CL | 2023-11-14 |
1196 | RECALL: A Benchmark for LLMs Robustness Against External Counterfactual Knowledge IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our benchmark consists of two tasks, Question Answering and Text Generation, and for each task, we provide models with a context containing counterfactual information. |
YI LIU et. al. | arxiv-cs.CL | 2023-11-14 |
1197 | A Step Closer to Comprehensive Answers: Constrained Multi-Stage Question Decomposition with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Challenges arise when these models grapple with understanding multi-hop relations in complex questions or lack the necessary knowledge for a comprehensive response. To address this issue, we introduce the Decompose-and-Query framework (D&Q). |
HEJING CAO et. al. | arxiv-cs.CL | 2023-11-13 |
1198 | Evaluating LLMs on Document-Based QA: Exact Answer Selection and Numerical Extraction Using Cogtale Dataset Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: While some existing work focuses on evaluating large language models’ performance on retrieving and answering questions from documents, their performance on QA types that require exact answer selection from predefined options and numerical extraction has yet to be fully assessed. In this paper, we specifically focus on this underexplored context and conduct an empirical analysis of LLMs (GPT-4 and GPT-3.5) on question types including single-choice, yes-no, multiple-choice, and number-extraction questions from documents in a zero-shot setting. |
ZAFARYAB RASOOL et. al. | arxiv-cs.IR | 2023-11-13 |
1199 | A Benchmark to Understand The Role of Knowledge Graphs on Large Language Model’s Accuracy for Question Answering on Enterprise SQL Databases IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study aims to evaluate the accuracy of LLM-powered question answering systems in the context of enterprise questions and SQL databases, while also exploring the role of knowledge graphs in improving accuracy. To achieve this, we introduce a benchmark comprising an enterprise SQL schema in the insurance domain, a range of enterprise queries encompassing reporting to metrics, and a contextual layer incorporating an ontology and mappings that define a knowledge graph. |
Juan Sequeda; Dean Allemang; Bryon Jacob; | arxiv-cs.AI | 2023-11-13 |
1200 | A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Yet, the true challenge lies in the domain of knowledge-intensive VQA tasks, which necessitate not just recognition of visual elements, but also a deep comprehension of the visual information in conjunction with a vast repository of learned knowledge. To uncover such capabilities of MLMs, particularly the newly introduced GPT-4V and Gemini, we provide an in-depth evaluation from three perspectives: 1) Commonsense Knowledge, which assesses how well models can understand visual cues and connect to general knowledge; 2) Fine-grained World Knowledge, which tests the model’s skill in reasoning out specific knowledge from images, showcasing its proficiency across various specialized fields; 3) Comprehensive Knowledge with Decision-making Rationales, which examines the model’s capability to provide logical explanations for its inference, facilitating a deeper analysis from the interpretability perspective. |
YUNXIN LI et. al. | arxiv-cs.CL | 2023-11-13 |
1201 | Hallucination Augmented Recitations for Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose Hallucination Augmented Recitations (HAR) for creating counterfactual datasets by utilizing hallucination in LLMs to improve attribution. |
Abdullatif Köksal; Renat Aksitov; Chung-Ching Chang; | arxiv-cs.CL | 2023-11-13 |
1202 | Bring Your Own KG: Self-Supervised Program Synthesis for Zero-Shot KGQA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present BYOKG, a universal question-answering (QA) system that can operate on any knowledge graph (KG), requires no human-annotated training data, and can be ready to use within a day — attributes that are out-of-scope for current KGQA systems. |
Dhruv Agarwal; Rajarshi Das; Sopan Khosla; Rashmi Gangadharaiah; | arxiv-cs.CL | 2023-11-13 |
1203 | Knowledgeable Preference Alignment for LLMs in Domain-specific Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Thus, we introduce Knowledgeable Preference AlignmenT (KnowPAT), which constructs two kinds of preference sets to tackle the two issues. |
YICHI ZHANG et. al. | arxiv-cs.CL | 2023-11-11 |
1204 | Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Large Multimodal Models (LMMs) have shown promise in vision-language tasks but struggle with high-resolution input and detailed scene understanding. Addressing these challenges, we introduce Monkey to enhance LMM capabilities. |
ZHANG LI et. al. | arxiv-cs.CV | 2023-11-11 |
1205 | BizBench: A Quantitative Reasoning Benchmark for Business and Finance Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce BizBench, a benchmark for evaluating models’ ability to reason about realistic financial problems. |
RIK KONCEL-KEDZIORSKI et. al. | arxiv-cs.CL | 2023-11-11 |
1206 | Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: We introduce Lumos, a novel framework for training language agents that employs a unified data format and a modular architecture based on open-source large language models (LLMs). … |
DA YIN et. al. | ArXiv | 2023-11-09 |
1207 | Hallucination-minimized Data-to-answer Framework for Financial Decision-makers Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Large Language Models (LLMs) have been applied to build several automation and personalized question-answering prototypes so far. However, scaling such prototypes to robust … |
SOHINI ROYCHOWDHURY et. al. | 2023 IEEE International Conference on Big Data (BigData) | 2023-11-09 |
1208 | SEMQA: Semi-Extractive Multi-Source Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a new QA task for answering multi-answer questions by summarizing multiple diverse sources in a semi-extractive fashion. |
TAL SCHUSTER et. al. | arxiv-cs.CL | 2023-11-08 |
1209 | NLQxform: A Language Model-based Question to SPARQL Transformer Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: In recent years, scholarly data has grown dramatically in terms of both scale and complexity. It becomes increasingly challenging to retrieve information from scholarly knowledge … |
Ruijie Wang; Zhiruo Zhang; Luca Rossetto; Florian Ruosch; Abraham Bernstein; | ArXiv | 2023-11-08 |
1210 | Leveraging Structured Information for Explainable Multi-hop Question Answering and Reasoning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we investigate constructing and leveraging extracted semantic structures (graphs) for multi-hop question answering, especially the reasoning process. |
Ruosen Li; Xinya Du; | arxiv-cs.CL | 2023-11-07 |
1211 | In-Context Learning for Knowledge Base Question Answering for Unmanned Systems Based on Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we focus on the CCKS2023 Competition of Question Answering with Knowledge Graph Inference for Unmanned Systems. |
Yunlong Chen; Yaming Zhang; Jianfei Yu; Li Yang; Rui Xia; | arxiv-cs.CL | 2023-11-06 |
1212 | Adapting Pre-trained Generative Models for Extractive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we introduce a novel approach that uses the power of pre-trained generative models to address extractive QA tasks by generating indexes corresponding to context tokens or sentences that form part of the answer. |
Prabir Mallick; Tapas Nayak; Indrajit Bhattacharya; | arxiv-cs.CL | 2023-11-06 |
1213 | Divide & Conquer for Entailment-aware Multi-hop Evidence Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we demonstrate that textual entailment relation is another important relevance dimension that should be considered. |
Fan Luo; Mihai Surdeanu; | arxiv-cs.CL | 2023-11-05 |
1214 | Tailoring Self-Rationalizers with Multi-Reward Distillation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we enable small-scale LMs (approx. 200x smaller than GPT-3) to generate rationales that not only improve downstream task performance, but are also more plausible, consistent, and diverse, assessed both by automatic and human evaluation. |
SAHANA RAMNATH et. al. | arxiv-cs.CL | 2023-11-05 |
1215 | Causal Question Answering with Reinforcement Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Hence, in this paper, we aim to answer causal questions with a causality graph, a large-scale dataset of causal relations between noun phrases along with the relations’ provenance data. |
Lukas Blübaum; Stefan Heindorf; | arxiv-cs.AI | 2023-11-05 |
1216 | AI-TA: Towards An Intelligent Question-Answer Teaching Assistant Using Open-Source LLMs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the challenges of scalable and intelligent question-answering (QA), we introduce an innovative solution that leverages open-source Large Language Models (LLMs) from the LLaMA-2 family to ensure data privacy. |
Yann Hicke; Anmol Agarwal; Qianou Ma; Paul Denny; | arxiv-cs.LG | 2023-11-05 |
1217 | Perturbation-based Active Learning for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose a perturbation-based active learning acquisition strategy and demonstrate it is more effective than existing commonly used strategies. |
Fan Luo; Mihai Surdeanu; | arxiv-cs.CL | 2023-11-04 |
1218 | SAC3: Reliable Hallucination Detection in Black-Box Language Models Via Semantic-aware Cross-check Consistency IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To achieve this goal, we re-examine existing detection approaches based on the self-consistency of LMs and uncover two types of hallucinations arising at 1) the question level and 2) the model level, which cannot be effectively identified through self-consistency checks alone. Building upon this discovery, we propose a novel sampling-based method, i.e., semantic-aware cross-check consistency (SAC3), that expands on the principle of self-consistency checking. |
Jiaxin Zhang; Zhuohang Li; Kamalika Das; Bradley A. Malin; Sricharan Kumar; | arxiv-cs.CL | 2023-11-03 |
1219 | Predicting Question-Answering Performance of Large Language Models Through Semantic Consistency IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We address the task of assessing question-answering (QA) semantic consistency of contemporary large language models (LLMs) by manually creating a benchmark dataset with high-quality paraphrases for factual questions, and release the dataset to the community. |
Ella Rabinovich; Samuel Ackerman; Orna Raz; Eitan Farchi; Ateret Anaby-Tavor; | arxiv-cs.CL | 2023-11-02 |
1220 | Long Story Short: A Summarize-then-Search Method for Long Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This capability has been particularly effective in settings such as narrative question answering, where the diversity of tasks is immense, but the available supervision data is small. In this work, we investigate if such language models can extend their zero-shot reasoning abilities to long multimodal narratives in multimedia content such as drama, movies, and animation, where the story plays an essential role. |
Jiwan Chung; Youngjae Yu; | arxiv-cs.CV | 2023-11-02 |
1221 | CLRN: A Reasoning Network for Multi-relation Question Answering Over Cross-lingual Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View |
YIMING TAN et. al. | Expert Syst. Appl. | 2023-11-01 |
1222 | VQA-GEN: A Visual Question Answering Benchmark for Domain Generalization Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose VQA-GEN, the first ever multi-modal benchmark dataset for distribution shift generated through a shift induced pipeline. |
Suraj Jyothi Unni; Raha Moraffah; Huan Liu; | arxiv-cs.CV | 2023-11-01 |
1223 | Hierarchical Reasoning Based on Perception Action Cycle for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Safaa Abdullahi Moallim Mohamud; Amin Jalali; Minho Lee; | Expert Syst. Appl. | 2023-11-01 |
1224 | Confidence-based Interactable Neural-symbolic Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Yajie Bao; Tianwei Xing; Xun Chen; | Neurocomputing | 2023-11-01 |
1225 | From Image to Language: A Critical Analysis of Visual Question Answering (VQA) Approaches, Challenges, and Opportunities Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The work aims to guide both beginners and experts by shedding light on potential avenues of research and expanding the boundaries of the field. |
Md Farhan Ishmam; Md Sakib Hossain Shovon; M. F. Mridha; Nilanjan Dey; | arxiv-cs.CV | 2023-11-01 |
1226 | Chinese Mineral Question and Answering System Based on Knowledge Graph IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View |
CHENGJIAN LIU et. al. | Expert Syst. Appl. | 2023-11-01 |
1227 | VQAPT: A New Visual Question Answering Model for Personality Traits in Social Media Images Related Papers Related Patents Related Grants Related Venues Related Experts View |
Kunal Biswas; P. Shivakumara; U. Pal; Cheng-Lin Liu; Yue Lu; | Pattern Recognit. Lett. | 2023-11-01 |
1228 | Generating Context-Aware Natural Answers for Questions in 3D Scenes Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: 3D question answering is a young field in 3D vision-language that is yet to be explored. Previous methods are limited to a pre-defined answer space and cannot generate answers … |
Mohammed Munzer Dwedari; Matthias Nießner; Dave Zhenyu Chen; | ArXiv | 2023-10-30 |
1229 | Split-NER: Named Entity Recognition Via Two Question-Answering-based Classifications Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we address the NER problem by splitting it into two logical sub-tasks: (1) Span Detection which simply extracts entity mention spans irrespective of entity type; (2) Span Classification which classifies the spans into their entity types. |
Jatin Arora; Youngja Park; | arxiv-cs.CL | 2023-10-30 |
1230 | Fusing Temporal Graphs Into Transformers for Time-Sensitive Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Answering time-sensitive questions from long documents requires temporal reasoning over the times in questions and documents. An important open question is whether large language … |
Xin Su; Phillip Howard; Nagib Hakim; Steven Bethard; | Conference on Empirical Methods in Natural Language … | 2023-10-30 |
1231 | Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a multimodal framework that uses language guidance (LG) in the form of rationales, image captions, scene graphs, etc to answer questions more accurately. |
Deepanway Ghosal; Navonil Majumder; Roy Ka-Wei Lee; Rada Mihalcea; Soujanya Poria; | arxiv-cs.CV | 2023-10-30 |
1232 | Knowledge Compass: A Question Answering System Guiding Students with Follow-Up Question Recommendations Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Pedagogical question-answering (QA) systems have been utilized for providing individual support in online learning courses. However, existing systems often neglect the education … |
RUI SHENG et. al. | Adjunct Proceedings of the 36th Annual ACM Symposium on … | 2023-10-29 |
1233 | Multimodal ChatGPT for Medical Applications: An Experimental Study of GPT-4V IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we critically evaluate the capabilities of the state-of-the-art multimodal large language model, i.e., GPT-4 with Vision (GPT-4V), on Visual Question Answering (VQA) task. |
ZHILING YAN et. al. | arxiv-cs.CV | 2023-10-29 |
1234 | DCQA: Document-Level Chart Question Answering Towards Complex Reasoning and Common-Sense Understanding Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce a novel task named document-level chart question answering (DCQA). |
ANRAN WU et. al. | arxiv-cs.AI | 2023-10-29 |
1235 | An Empirical Study of Multilingual Scene-Text Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In recent years, the focus on multilingual modeling has intensified, driven by the necessity to enable cross-lingual Text-based Visual Question Answering (TextVQA), which requires … |
Lin Li; Haohan Zhang; Zeqin Fang; | Proceedings of the 2nd Workshop on User-centric Narrative … | 2023-10-29 |
1236 | Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose a curriculum learning strategy to enhance the performance of multimodal deep learning models. |
Huseyin Fuat Alsan; Taner Arsan; | arxiv-cs.CV | 2023-10-29 |
1237 | Prompt-Engineering and Transformer-based Question Generation and Evaluation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this research, we finetuned a pretrained distilBERT model on the SQuAD question answering dataset to generate questions. |
Rubaba Amyeen; | arxiv-cs.CL | 2023-10-28 |
1238 | EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce EHRXQA, a novel multi-modal question answering dataset combining structured EHRs and chest X-ray images. |
SEONGSU BAE et. al. | arxiv-cs.CL | 2023-10-28 |
1239 | ViCLEVR: A Visual Reasoning Dataset and Hybrid Multimodal Fusion Model for Visual Question Answering in Vietnamese Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Neural models for VQA have made remarkable progress on large-scale datasets, with a primary focus on resource-rich languages like English. To address this, we introduce the ViCLEVR dataset, a pioneering collection for evaluating various visual reasoning capabilities in Vietnamese while mitigating biases. |
Khiem Vinh Tran; Hao Phu Phan; Kiet Van Nguyen; Ngan Luu Thuy Nguyen; | arxiv-cs.CL | 2023-10-27 |
1240 | Knowledge Corpus Error in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This study revisits the conventional formulation of QA and introduces the concept of knowledge corpus error. |
Yejoon Lee; Philhoon Oh; James Thorne; | arxiv-cs.CL | 2023-10-27 |
1241 | Detrimental Contexts in Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we analyze how passages can have a detrimental effect on retrieve-then-read architectures used in question answering. |
Philhoon Oh; James Thorne; | arxiv-cs.CL | 2023-10-27 |
1242 | 3D-Aware Visual Question Answering About Parts, Poses and Occlusions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce the task of 3D-aware VQA, which focuses on challenging questions that require compositional reasoning over the 3D structure of visual scenes. |
Xingrui Wang; Wufei Ma; Zhuowan Li; Adam Kortylewski; Alan Yuille; | arxiv-cs.CV | 2023-10-27 |
1243 | Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Evaluating text-to-image models is notoriously difficult. A strong recent approach for assessing text-image faithfulness is based on QG/A (question generation and answering), … |
JAEMIN CHO et. al. | ArXiv | 2023-10-27 |
1244 | Answer-Based Entity Extraction and Alignment for Visual Text Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: As a variant of visual question answering (VQA), visual text question answering (VTQA) provides a text-image pair for each question. Text utilizes named entities to describe … |
JUN YU et. al. | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1245 | VTQAGen: BART-based Generative Model For Visual Text Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Visual Text Question Answering (VTQA) is a challenging task that requires answering questions pertaining to visual content by combining image understanding and language … |
HAORU CHEN et. al. | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1246 | In-Context Ability Transfer for Question Decomposition in Complex QA Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Answering complex questions is a challenging task that requires question decomposition and multistep reasoning for arriving at the solution. While existing supervised and … |
V. Venktesh; Sourangshu Bhattacharya; Avishek Anand; | ArXiv | 2023-10-26 |
1247 | Improving Zero-shot Reader By Reducing Distractions from Irrelevant Documents in Open-Domain Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study investigates the feasibility of a zero-shot reader that addresses the challenges of computational cost and the need for labeled data. |
Sukmin Cho; Jeongyeon Seo; Soyeong Jeong; Jong C. Park; | arxiv-cs.CL | 2023-10-26 |
1248 | Finetuning Language Models for Multimodal Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: To achieve multi-modal intelligence, AI must be able to process and respond to inputs from multimodal sources. However, many current question answering models are limited to … |
XIN ZHANG et. al. | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1249 | Intra- and Inter-Modal Curriculum for Multimodal Learning IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multimodal learning has been widely studied and applied due to its improvement over previous unimodal tasks and its effectiveness on emerging multimodal challenges. However, it … |
Yuwei Zhou; Xin Wang; Hong Chen; Xuguang Duan; Wenwu Zhu; | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1250 | Incorporating Probing Signals Into Multimodal Machine Translation Via Visual Question-Answering Pairs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper presents an in-depth study of multimodal machine translation (MMT), examining the prevailing understanding that MMT systems exhibit decreased sensitivity to visual information when text inputs are complete. |
YUXIN ZUO et. al. | arxiv-cs.CL | 2023-10-26 |
1251 | Depth-Aware Sparse Transformer for Video-Language Learning Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In Video-Language (VL) learning tasks, a massive amount of text annotations are describing geometrical relationships of instances (e.g. 19.6% to 45.0% in MSVD, MSR-VTT, MSVD-QA … |
Haonan Zhang; Lianli Gao; Pengpeng Zeng; A. Hanjalic; H. Shen; | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1252 | VTQA2023: ACM Multimedia 2023 Visual Text Question Answering Challenge Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The ideal form of Visual Question Answering requires understanding, grounding and reasoning in the joint space of vision and language and serves as a proxy for the AI task of … |
Kang Chen; Tianli Zhao; Xiangqian Wu; | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1253 | Multi-Domain Lifelong Visual Question Answering Via Self-Critical Distillation Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Visual Question Answering (VQA) has achieved significant success over the last few years, while most studies focus on training a VQA model on a stationary domain (e.g., a given … |
MINGRUI LAO et. al. | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1254 | Advancing Video Question Answering with A Multi-modal and Multi-layer Question Enhancement Network Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video question answering is an increasingly vital research field, spurred by the rapid proliferation of video content online and the urgent need for intelligent systems that can … |
MENG LIU et. al. | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1255 | Language-Guided Visual Aggregation Network for Video Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video Question Answering (VideoQA) aims to comprehend intricate relationships, actions, and events within video content, as well as the inherent links between objects and scenes, … |
XIAO LIANG et. al. | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1256 | QA-CLIMS: Question-Answer Cross Language Image Matching for Weakly Supervised Semantic Segmentation Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Class Activation Map (CAM) has emerged as a popular tool for weakly supervised semantic segmentation (WSSS), allowing the localization of object regions in an image using only … |
Songhe Deng; Wei Zhuo; Jinheng Xie; Linlin Shen; | Proceedings of the 31st ACM International Conference on … | 2023-10-26 |
1257 | TOP-Training: Target-Oriented Pretraining for Medical Extractive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To handle those challenges, we propose TOP-Training, a target-oriented pre-training paradigm that stands out among all domain adaptation techniques with two desirable features: (i) TOP-Training moves one step further than popular domain-oriented fine-tuning since it not only moves closer to the target domain, but also familiarizes itself with the target dataset, and (ii) it does not assume the existence of a large set of unlabeled instances from the target domain. |
SAPTARSHI SENGUPTA et. al. | arxiv-cs.CL | 2023-10-25 |
1258 | Exploring Question Decomposition for Zero-Shot VQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, we show that naive application of model-written decompositions can hurt performance. We introduce a model-driven selective decomposition approach for second-guessing predictions and correcting errors, and validate its effectiveness on eight VQA tasks across three domains, showing consistent improvements in accuracy, including improvements of >20% on medical VQA datasets and boosting the zero-shot performance of BLIP-2 above chance on a VQA reformulation of the challenging Winoground task. |
Zaid Khan; Vijay Kumar BG; Samuel Schulter; Manmohan Chandraker; Yun Fu; | arxiv-cs.CV | 2023-10-25 |
1259 | Binary State Recognition By Robots Using Visual Question Answering of Pre-Trained Vision-Language Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Until now, these states have been recognized by programmatically describing the state of a point cloud or raw image, by annotating and learning images, by using special sensors, etc. In contrast to these methods, we apply Visual Question Answering (VQA) from a Pre-Trained Vision-Language Model (PTVLM) trained on a large-scale dataset, to such binary state recognition. |
Kento Kawaharazuka; Yoshiki Obinata; Naoaki Kanazawa; Kei Okada; Masayuki Inaba; | arxiv-cs.RO | 2023-10-25 |
1260 | Hierarchical Synergy-Enhanced Multimodal Relational Network for Video Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video question answering (VideoQA) is challenging as it requires reasoning about natural language and multimodal interactive relations. Most existing methods apply attention … |
Min Peng; Xiaohu Shao; Yu Shi; Xiangdong Zhou; | ACM Transactions on Multimedia Computing, Communications … | 2023-10-25 |
1261 | Transformer-Based Question Answering Model for The Biomedical Domain Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Motivation: Question Answering (QA) is a highly focused topic in the field of Natural Language Processing (NLP). Recent progress in neural network models and the availability of … |
Ahcene Haddouche; Ikram Rabia; Aicha Aid; | 2023 5th International Conference on Pattern Analysis and … | 2023-10-25 |
1262 | Enhancing Document Information Analysis with Multi-Task Pre-training: A Robust Approach for Information Extraction in Visually-Rich Documents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces a deep learning model tailored for document information analysis, emphasizing document classification, entity relation extraction, and document visual question answering. |
Tofik Ali; Partha Pratim Roy; | arxiv-cs.CV | 2023-10-25 |
1263 | EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce EHRXQA, a novel multi-modal question answering dataset for structured EHRs and chest X-ray images. |
SEONGSU BAE et. al. | nips | 2023-10-24 |
1264 | EgoSchema: A Diagnostic Benchmark for Very Long-form Video Language Understanding IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce EgoSchema, a very long-form video question-answering dataset, and benchmark to evaluate long video understanding capabilities of modern vision and language systems. |
Karttikeya Mangalam; Raiymbek Akshulakov; Jitendra Malik; | nips | 2023-10-24 |
1265 | Emergent Communication in Interactive Sketch Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Vision-based emergent communication (EC) aims to learn to communicate through sketches and demystify the evolution of human communication. |
Zixing Lei; Yiming Zhang; Yuxin Xiong; Siheng Chen; | arxiv-cs.AI | 2023-10-24 |
1266 | RealTime QA: What’s The Answer Right Now? IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce RealTime QA, a dynamic question answering (QA) platform that announces questions and evaluates systems on a regular basis (weekly in this version). |
JUNGO KASAI et. al. | nips | 2023-10-24 |
1267 | 3D-Aware Visual Question Answering About Parts, Poses and Occlusions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce the task of 3D-aware VQA, which focuses on challenging questions that require compositional reasoning over the 3D structure of visual scenes. |
Xingrui Wang; Zhuowan Li; Wufei Ma; Adam Kortylewski; Alan Yuille; | nips | 2023-10-24 |
1268 | Benchmarking Large Language Models on CMExam – A Comprehensive Chinese Medical Exam Dataset IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, evaluating LLMs in the medical field is challenging due to the lack of standardized and comprehensive datasets. To address this gap, we introduce CMExam, sourced from the Chinese National Medical Licensing Examination. |
JUNLING LIU et. al. | nips | 2023-10-24 |
1269 | BeaverTails: A Human-Preference Dataset for LLM Harmlessness Alignment Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce the BeaverTails dataset, aimed at fostering research on safety alignment in large language models (LLMs). |
JIAMING JI et. al. | nips | 2023-10-24 |
1270 | Foundation Model Is Efficient Multimodal Multitask Model Selector Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Although recent advanced approaches employ lightweight metrics to measure models’ transferability, they often depend heavily on the prior knowledge of a single task, making them inapplicable in a multi-modal multi-task scenario. To tackle this issue, we propose an efficient multitask model selector (EMMS), which employs large-scale foundation models to transform diverse label formats such as categories, texts, and bounding boxes of different downstream tasks into a unified noisy label embedding. |
FANQING MENG et. al. | nips | 2023-10-24 |
1271 | ECG-QA: A Comprehensive Question Answering Dataset Combined With Electrocardiogram IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This leaves the vast potential of combining electrocardiogram (ECG) data with these systems largely untapped. To address this gap, we present ECG-QA, the first QA dataset specifically designed for ECG analysis. |
Jungwoo Oh; Seongsu Bae; Gyubok Lee; Joon-myoung Kwon; Edward Choi; | nips | 2023-10-24 |
1272 | LoRA: A Logical Reasoning Augmented Dataset for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: VQA tasks and large vision-and-language models aim to tackle reasoning problems, but the accuracy, consistency and fabrication of the generated answers are hard to evaluate in the absence of a VQA dataset that can offer formal, comprehensive and systematic complex logical reasoning questions. To address this gap, we present LoRA, a novel Logical Reasoning Augmented VQA dataset that requires formal and complex description logic reasoning based on a food-and-kitchen knowledge base. |
Jingying Gao; Qi Wu; Alan Blair; Maurice Pagnucco; | nips | 2023-10-24 |
1273 | A Theoretically Grounded Question Answering Data Set for Evaluating Machine Common Sense Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Achieving machine common sense has been a longstanding problem within Artificial Intelligence. Thus far, benchmark data sets that are grounded in a theory of common sense and can … |
Henrique Santos; Ke Shen; Alice M. Mulvehill; M. Kejriwal; Deborah L. McGuinness; | Data Intelligence | 2023-10-24 |
1274 | ToolQA: A Dataset for LLM Question Answering with External Tools IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, current evaluation methods do not distinguish between questions that can be answered using LLMs’ internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs’ ability to use external tools for question answering. |
Yuchen Zhuang; Yue Yu; Kuan Wang; Haotian Sun; Chao Zhang; | nips | 2023-10-24 |
1275 | Evaluating Open-QA Evaluation IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce a new task, QA Evaluation (QA-Eval) and the corresponding dataset EVOUNA, designed to assess the accuracy of AI-generated answers in relation to standard answers within Open-QA. |
CUNXIANG WANG et. al. | nips | 2023-10-24 |
1276 | Exploring Question Decomposition for Zero-Shot VQA Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, we show that naive application of model-written decompositions can hurt performance. We introduce a model-driven selective decomposition approach for second-guessing predictions and correcting errors, and validate its effectiveness on eight VQA tasks across three domains, showing consistent improvements in accuracy, including improvements of >20% on medical VQA datasets and boosting the zero-shot performance of BLIP-2 significantly above chance (+18%) on the challenging Winoground task. |
Zaid Khan; Vijay Kumar B G; Samuel Schulter; Manmohan Chandraker; Yun Fu; | nips | 2023-10-24 |
1277 | Large Language Models Are Temporal and Causal Reasoners for Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we develop LLaMA-VQA by applying Flipped-VQA to LLaMA, and it outperforms both LLMs-based and non-LLMs-based models on five challenging VideoQA benchmarks. |
Dohwan Ko; Ji Soo Lee; Wooyoung Kang; Byungseok Roh; Hyunwoo J. Kim; | arxiv-cs.CV | 2023-10-24 |
1278 | Towards Perceiving Small Visual Details in Zero-shot Visual Question Answering with Multimodal LLMs Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we investigate whether MLLMs can perceive small details as well as large details in images. |
Jiarui Zhang; Mahyar Khayatkhoei; Prateek Chhikara; Filip Ilievski; | arxiv-cs.CV | 2023-10-24 |
1279 | Generative Pre-trained Transformer for Vietnamese Community-based COVID-19 Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce a novel approach and conduct a comparative analysis of different Transformer models against SOTA models on the community-based COVID-19 question answering dataset. |
Tam Minh Vo; Khiem Vinh Tran; | arxiv-cs.CL | 2023-10-23 |
1280 | TableQAKit: A Comprehensive and Practical Toolkit for Table-based Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper introduces TableQAKit, the first comprehensive toolkit designed specifically for TableQA. |
FANGYU LEI et. al. | arxiv-cs.CL | 2023-10-23 |
1281 | Strong and Efficient Baselines for Open Domain Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we study the State-of-the-Art (SotA) Dense Passage Retrieval (DPR) retriever and Fusion-in-Decoder (FiD) reader pipeline, and show that it significantly underperforms when applied to ODConvQA tasks due to various limitations. |
Andrei C. Coman; Gianni Barlacchi; Adrià de Gispert; | arxiv-cs.CL | 2023-10-23 |
1282 | An In-Context Schema Understanding Method for Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recently, Large Language Models (LLMs) have shown strong capabilities in language understanding and can be used to solve this task. In doing so, a major challenge for LLMs is to overcome the immensity and heterogeneity of knowledge base schemas. Existing methods bypass this challenge by initially employing LLMs to generate drafts of logic forms without schema-specific details. Then, an extra module is used to inject schema information into these drafts. In contrast, in this paper, we propose a simple In-Context Schema Understanding (ICSU) method that enables LLMs to directly understand schemas by leveraging in-context learning. |
YANTAO LIU et. al. | arxiv-cs.CL | 2023-10-22 |
1283 | Retrieval-Augmented Chain-of-Thought in Semi-structured Domains Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This study explores leveraging the semi-structured nature of legal and financial data to efficiently retrieve relevant context, enabling the use of LLMs for domain-specialized QA. |
Vaibhav Mavi; Abulhair Saparov; Chen Zhao; | arxiv-cs.CL | 2023-10-22 |
1284 | Comparative Analysis of Open Source and Commercial Embedding Models for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this industry track presentation, we will provide a comprehensive tour of the best-performing embedding models for question answering, as determined by the Massive Text Embedding Benchmark. |
Georgios Balikas; | cikm | 2023-10-21 |
1285 | CORD: A Three-Stage Coarse-to-Fine Framework for Relation Detection in Knowledge Base Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a simple and efficient three-stage framework to exploit the coarse-to-fine paradigm. |
Yanzeng Li; Sen Hu; Wenjuan Han; Lei Zou; | cikm | 2023-10-21 |
1286 | LittleMu: Deploying An Online Virtual Teaching Assistant Via Heterogeneous Sources Integration and Chain of Teach Prompts Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present a virtual MOOC teaching assistant, LittleMu with minimum labeled training data, to provide question answering and chit-chat services. |
SHANGQING TU et. al. | cikm | 2023-10-21 |
1287 | MoqaGPT: Zero-Shot Multi-modal Open-domain Question Answering with Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To enable LLMs to tackle the task in a zero-shot manner, we introduce MoqaGPT, a straightforward and flexible framework. |
Le Zhang; Yihong Wu; Fengran Mo; Jian-Yun Nie; Aishwarya Agrawal; | arxiv-cs.CL | 2023-10-20 |
1288 | Test-Time Self-Adaptive Small Language Models for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we show and investigate the capabilities of smaller self-adaptive LMs, only with unlabeled test data. |
Soyeong Jeong; Jinheon Baek; Sukmin Cho; Sung Ju Hwang; Jong C. Park; | arxiv-cs.CL | 2023-10-20 |
1289 | Robust Training for Conversational Question Answering Models with Reinforced Reformulation Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Models for conversational question answering (ConvQA) over knowledge graphs (KGs) are usually trained and tested on benchmarks of gold QA pairs. |
Magdalena Kaiser; Rishiraj Saha Roy; Gerhard Weikum; | arxiv-cs.CL | 2023-10-20 |
1290 | Self-prompted Chain-of-Thought on Large Language Models for Open-domain Multi-hop Reasoning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose Self-prompted Chain-of-Thought (SP-CoT), an automated framework to mass-produce high quality CoTs of LLMs, by LLMs and for LLMs. |
Jinyuan Wang; Junlong Li; Hai Zhao; | arxiv-cs.CL | 2023-10-20 |
1291 | SALMONN: Towards Generic Hearing Abilities for Large Language Models IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose SALMONN, a speech audio language music open neural network, built by integrating a pre-trained text-based large language model (LLM) with speech and audio encoders into a single multimodal model. |
CHANGLI TANG et. al. | arxiv-cs.SD | 2023-10-20 |
1292 | ReEval: Automatic Hallucination Evaluation for Retrieval-Augmented Large Language Models Via Transferable Adversarial Attacks IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Specifically, this paper presents ReEval, an LLM-based framework using prompt chaining to perturb the original evidence for generating new test cases for evaluating the LLMs’ reliability in using new evidence for answering. |
Xiaodong Yu; Hao Cheng; Xiaodong Liu; Dan Roth; Jianfeng Gao; | arxiv-cs.CL | 2023-10-19 |
1293 | Reliable Academic Conference Question Answering: A Study Based on Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, these methods fail to work due to the lack of the latest conference knowledge. To address this challenge, we develop the ConferenceQA dataset, consisting of seven diverse academic conferences. |
ZHIWEI HUANG et. al. | arxiv-cs.CL | 2023-10-19 |
1294 | CLIFT: Analysing Natural Distribution Shift on Question Answering Models in Clinical Domain Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper introduces a new testbed CLIFT (Clinical Shift) for the clinical domain Question-answering task. |
Ankit Pal; | arxiv-cs.CL | 2023-10-19 |
1295 | RSAdapter: Adapting Multimodal Models for Remote Sensing Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: These approaches demand significant computational resources and time, and a considerable number of trainable parameters are introduced. To address these challenges, we introduce a novel method known as RSAdapter, which prioritizes runtime and parameter efficiency. |
Yuduo Wang; Pedram Ghamisi; | arxiv-cs.CV | 2023-10-19 |
1296 | PSYCHIC: A Neuro-Symbolic Framework for Knowledge Graph Question-Answering Grounding Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We answer the KGQA over DBLP (DBLP-QUAD) task by proposing a neuro-symbolic (NS) framework based on PSYCHIC, an extractive QA model capable of identifying the query and entities related to a KG question. |
Hanna Abi Akl; | arxiv-cs.AI | 2023-10-19 |
1297 | Time-Aware Representation Learning for Time-Sensitive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, language models have difficulty understanding the relationships between time specifiers, such as ‘after’ and ‘before’, and numbers, since existing QA datasets do not include sufficient time expressions. To address this issue, we propose a Time-Context aware Question Answering (TCQA) framework. |
Jungbin Son; Alice Oh; | arxiv-cs.CL | 2023-10-19 |
1298 | Understanding Retrieval Augmentation for Long-Form Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a study of retrieval-augmented language models (LMs) on long-form question answering. |
Hung-Ting Chen; Fangyuan Xu; Shane A. Arora; Eunsol Choi; | arxiv-cs.CL | 2023-10-18 |
1299 | A Summary of The ALQAC 2023 Competition Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper presents an overview of the third edition of the Automated Legal Question Answering Competition (ALQAC 2023). The primary objective of ALQAC is to address challenges … |
CHAU NGUYEN et. al. | 2023 15th International Conference on Knowledge and Systems … | 2023-10-18 |
1300 | Open Information Extraction: A Review of Baseline Techniques, Approaches, and Applications Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: It briefly discusses the main approaches and the pros and cons of each method. |
Serafina Kamp; Morteza Fayazi; Zineb Benameur-El; Shuyan Yu; Ronald Dreslinski; | arxiv-cs.IR | 2023-10-17 |
1301 | Systematic Assessment of Factual Knowledge in Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper proposes a framework to systematically assess the factual knowledge of LLMs by leveraging knowledge graphs (KGs). |
Linhao Luo; Thuy-Trang Vu; Dinh Phung; Gholamreza Haffari; | arxiv-cs.CL | 2023-10-17 |
1302 | QADYNAMICS: Training Dynamics-Driven Synthetic QA Diagnostic for Zero-Shot Commonsense Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, current QA synthesis protocols may introduce noise from the CSKBs and generate ungrammatical questions and false negative options, which impede the model’s ability to generalize. To address these issues, we propose QADYNAMICS, a training dynamics-driven framework for QA diagnostics and refinement. |
HAOCHEN SHI et. al. | arxiv-cs.CL | 2023-10-17 |
1303 | Will The Prince Get True Love’s Kiss? On The Model Sensitivity to Gender Perturbation Over Fairytale Texts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recent studies show that traditional fairytales are rife with harmful gender biases. To help mitigate these gender biases in fairytales, this work aims to assess learned biases of language models by evaluating their robustness against gender perturbations. |
Christina Chance; Da Yin; Dakuo Wang; Kai-Wei Chang; | arxiv-cs.CL | 2023-10-16 |
1304 | A Search for Prompts: Generating Structured Answers from Contracts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In many legal processes, being able to act on the concrete implication of a legal question can be valuable for automating human review or signalling certain conditions (e.g., alerts around automatic renewal). To support such tasks, we present a form of legal question answering that seeks to return one (or more) fixed answers for a question about a contract clause. |
ADAM ROEGIEST et. al. | arxiv-cs.CV | 2023-10-16 |
1305 | UNK-VQA: A Dataset and A Probe Into The Abstention Ability of Multi-modal Large Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper aims to bridge the research gap by contributing a comprehensive dataset, called UNK-VQA. |
Yangyang Guo; Fangkai Jiao; Zhiqi Shen; Liqiang Nie; Mohan Kankanhalli; | arxiv-cs.CV | 2023-10-16 |
1306 | Emerging Challenges in Personalized Medicine: Assessing Demographic Effects on Biomedical Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We find that irrelevant demographic information changes up to 15% of the answers of a KG-grounded system and up to 23% of the answers of a text-based system, including changes that affect accuracy. |
Sagi Shaier; Kevin Bennett; Lawrence Hunter; Katharina von der Wense; | arxiv-cs.CL | 2023-10-16 |
1307 | CarExpert: Leveraging Large Language Models for In-Car Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose CarExpert, an in-car retrieval-augmented conversational question-answering system leveraging LLMs for different tasks. |
MD RASHAD AL HASAN RONY et. al. | arxiv-cs.CL | 2023-10-14 |
1308 | Progressive Evidence Refinement for Open-domain Multimodal Retrieval Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Secondly, a gap exists between the feature extraction of evidence and the question, which hinders the model from effectively extracting critical features from the evidence based on the given question. We propose a two-stage framework for evidence retrieval and question-answering to alleviate these issues. |
SHUWEN YANG et. al. | arxiv-cs.AI | 2023-10-14 |
1309 | MiniGPT-v2: Large Language Model As A Unified Interface for Vision-language Multi-task Learning IF:6 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Towards this objective, we introduce MiniGPT-v2, a model that can be treated as a unified interface for better handling various vision-language tasks. |
JUN CHEN et. al. | arxiv-cs.CV | 2023-10-13 |
1310 | ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, three core challenges remain: inefficient knowledge retrieval, mistakes of retrieval adversely impacting semantic parsing, and the complexity of previous KBQA methods. To tackle these challenges, we introduce ChatKBQA, a novel and simple generate-then-retrieve KBQA framework, which proposes first generating the logical form with fine-tuned LLMs, then retrieving and replacing entities and relations with an unsupervised retrieval method, to improve both generation and retrieval more directly. |
HAORAN LUO et. al. | arxiv-cs.CL | 2023-10-13 |
1311 | Enhancing BERT-Based Visual Question Answering Through Keyword-Driven Sentence Selection Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The goal is to identify the document elements that answer a specific question posed in natural language. This paper describes PoliTo’s approach to addressing this task; in particular, our best solution explores a text-only approach, leveraging an ad hoc sampling strategy. |
Davide Napolitano; Lorenzo Vaiani; Luca Cagliero; | arxiv-cs.CL | 2023-10-13 |
1312 | Question Answering for Electronic Health Records: A Scoping Review of Datasets and Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We searched for articles from January 1st, 2005 to September 30th, 2023 in four digital sources including Google Scholar, ACL Anthology, ACM Digital Library, and PubMed to collect relevant publications on EHR QA. |
Jayetri Bardhan; Kirk Roberts; Daisy Zhe Wang; | arxiv-cs.LG | 2023-10-12 |
1313 | Mitigating Bias for Question Answering Models By Tracking Bias Influence Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose BMBI, an approach to mitigate the bias of multiple-choice QA models. |
MINGYU DEREK MA et. al. | arxiv-cs.CL | 2023-10-12 |
1314 | Open-Set Knowledge-Based Visual Question Answering with Inference Paths Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we confront the challenge of explainable open-set KB-VQA, where the system is required to answer questions with entities in the wild and retain an explainable reasoning path. |
Jingru Gan; Xinzhe Han; Shuhui Wang; Qingming Huang; | arxiv-cs.LG | 2023-10-12 |
1315 | Training Generative Question-Answering on Synthetic Data Obtained from An Instruct-tuned Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents a simple and cost-effective method for synthesizing data to train question-answering systems. |
Kosuke Takahashi; Takahiro Omi; Kosuke Arima; Tatsuya Ishigaki; | arxiv-cs.CL | 2023-10-12 |
1316 | Low-Resource Clickbait Spoiling for Indonesian Via Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our contributions include the construction of a manually labeled clickbait spoiling corpus in Indonesian and an evaluation of using cross-lingual zero-shot question answering-based models to tackle clickbait spoiling for a low-resource language like Indonesian. |
Ni Putu Intan Maharani; Ayu Purwarianti; Alham Fikri Aji; | arxiv-cs.CL | 2023-10-12 |
1317 | Understanding How to Inform Blind and Low-Vision Users About Data Privacy Through Privacy Question Answering Assistants Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We conducted an in-depth qualitative study with 21 US BLV participants to understand their data privacy risk perception and mitigation, as well as their information behaviors related to data privacy. |
YUANYUAN FENG et. al. | arxiv-cs.HC | 2023-10-12 |
1318 | QASiNa: Religious Domain Question Answering Using Sirah Nabawiyah Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose the Question Answering Sirah Nabawiyah (QASiNa) dataset, a novel dataset compiled from Sirah Nabawiyah literature in the Indonesian language. |
Muhammad Razif Rizqullah; Ayu Purwarianti; Alham Fikri Aji; | arxiv-cs.CL | 2023-10-12 |
1319 | Framework for Question-Answering in Sanskrit Through Automated Construction of Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we target the problem of building knowledge graphs for particular types of relationships from saṃskṛta texts. |
Hrishikesh Terdalkar; Arnab Bhattacharya; | arxiv-cs.CL | 2023-10-11 |
1320 | QACHECK: A Demonstration System for Question-Guided Multi-Hop Fact-Checking Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, existing fact-checking systems often lack transparency in their decision-making, making it challenging for users to comprehend their reasoning process. To address this, we propose the Question-guided Multi-hop Fact-Checking (QACHECK) system, which guides the model’s reasoning process by asking a series of questions critical for verifying a claim. |
Liangming Pan; Xinyuan Lu; Min-Yen Kan; Preslav Nakov; | arxiv-cs.CL | 2023-10-11 |
1321 | MemSum-DQA: Adapting An Efficient Long Document Extractive Summarizer for Document Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce MemSum-DQA, an efficient system for document question answering (DQA) that leverages MemSum, a long document extractive summarizer. |
Nianlong Gu; Yingqiang Gao; Richard H. R. Hahnloser; | arxiv-cs.CL | 2023-10-10 |
1322 | Question Classification for Intelligent Question Answering: A Comprehensive Survey Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In the era of GeoAI, Geospatial Intelligent Question Answering (GeoIQA) represents the ultimate pursuit for everyone. Even generative AI systems like ChatGPT-4 struggle to handle … |
Hao Sun; Shu Wang; Yunqiang Zhu; Wen Yuan; Zhiqiang Zou; | ISPRS Int. J. Geo Inf. | 2023-10-10 |
1323 | Jaeger: A Concatenation-Based Multi-Transformer VQA Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Although there has been encouraging progress in document-based question answering due to the utilization of large language and open-world prior models, several challenges persist, including prolonged response times, extended inference durations, and imprecision in matching. In order to overcome these challenges, we propose Jaeger, a concatenation-based multi-transformer VQA model. |
Jieting Long; Zewei Shi; Penghao Jiang; Yidong Gan; | arxiv-cs.CL | 2023-10-10 |
1324 | Answer Candidate Type Selection: Text-to-Text Language Model for Closed Book Question Answering Meets Knowledge Graphs Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, the capacity of the models is limited and the quality decreases for questions with less popular entities. In this paper, we present a novel approach which works on top of the pre-trained Text-to-Text QA system to address this issue. |
MIKHAIL SALNIKOV et. al. | arxiv-cs.CL | 2023-10-10 |
1325 | Towards Mitigating Hallucination in Large Language Models Via Self-Reflection IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Our investigation centers on the identification and comprehension of common problematic answers, with a specific emphasis on hallucination. To tackle this challenge, we present an interactive self-reflection methodology that incorporates knowledge acquisition and answer generation. |
ZIWEI JI et. al. | arxiv-cs.CL | 2023-10-09 |
1326 | FireAct: Toward Language Agent Fine-tuning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we investigate and argue for the overlooked direction of fine-tuning LMs to obtain language agents. |
BAIAN CHEN et. al. | arxiv-cs.CL | 2023-10-09 |
1327 | Causal Reasoning Through Two Layers of Cognition for Improving Generalization in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Besides, diverse interpretations of the input lead to various modes of answer generation, highlighting the role of causal reasoning between interpreting and answering steps in VQA. Through this lens, we propose Cognitive pathways VQA (CopVQA) improving the multimodal predictions by emphasizing causal reasoning factors. |
Trang Nguyen; Naoaki Okazaki; | arxiv-cs.AI | 2023-10-09 |
1328 | Retrieval-Generation Synergy Augmented Large Language Models IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: One is to retrieve from an external knowledge base, and the other is to utilize large language models to generate documents. |
Zhangyin Feng; Xiaocheng Feng; Dezhi Zhao; Maojin Yang; Bing Qin; | arxiv-cs.CL | 2023-10-08 |
1329 | Multi-Semantic Alignment Co-Reasoning Network for Video Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video question answering challenges models on understanding textual questions with varying complexity and searching for clues from visual content with different hierarchical … |
Min Peng; Liangchen Liu; Zhenghao Li; Yu Shi; Xiangdong Zhou; | 2023 IEEE International Conference on Image Processing … | 2023-10-08 |
1330 | Analyzing Zero-Shot Abilities of Vision-Language Models on Video Understanding Tasks Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Therefore, the pertinent question to ask is: Can image-text models be adapted to video tasks and is there any benefit to using these models over pretraining directly on videos? In this work, we focus on this question by proposing a detailed study on the generalization abilities of image-text models when evaluated on video understanding tasks in a zero-shot setting. |
Avinash Madasu; Anahita Bhiwandiwalla; Vasudev Lal; | arxiv-cs.CV | 2023-10-07 |
1331 | Towards Faithful Knowledge Graph Explanation Through Deep Alignment in Commonsense Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We identify confounding effects and LM-KG misalignment as key factors causing spurious explanations. To address this, we introduce the LM-KG Fidelity metric to assess KG representation reliability and propose the LM-KG Distribution-aware Alignment (LKDA) algorithm to improve explanation faithfulness. |
Weihe Zhai; Arkaitz Zubiaga; | arxiv-cs.CL | 2023-10-07 |
1332 | Analysis of The Reasoning with Redundant Information Provided Ability of Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The study designed a modified version of the grade school math 8K (GSM-8K) dataset which has several variants focusing on different attributes of redundant information. |
Wenbei Xie; | arxiv-cs.CL | 2023-10-06 |
1333 | Retrieval-augmented Generation to Improve Math Question-Answering: Trade-offs Between Groundedness and Human Preference IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we designed prompts that retrieve and use content from a high-quality open-source math textbook to generate responses to real student questions. |
ZACHARY LEVONIAN et. al. | arxiv-cs.CL | 2023-10-04 |
1334 | Integrating UMLS Knowledge Into Large Language Models for Medical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In our research, we develop an augmented LLM framework based on the Unified Medical Language System (UMLS), aiming to better serve the healthcare community. |
RUI YANG et. al. | arxiv-cs.CL | 2023-10-04 |
1335 | Multimodal Question Answering for Unified Information Extraction Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Due to the diversity of tasks and settings, most current MIE models are task-specific and data-intensive, which limits their generalization to real-world scenarios with diverse task requirements and limited labeled data. To address these issues, we propose a novel multimodal question answering (MQA) framework to unify three MIE tasks by reformulating them into a unified span extraction and multi-choice QA pipeline. |
Yuxuan Sun; Kai Zhang; Yu Su; | arxiv-cs.CL | 2023-10-04 |
1336 | An Empirical Study of ChatGPT-3.5 on Question Answering and Code Maintenance Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Ever since the launch of ChatGPT in 2022, a rising concern is whether ChatGPT will replace programmers and kill jobs. Motivated by this widespread concern, we conducted an empirical study to systematically compare ChatGPT against programmers in question answering and software maintenance. |
MD MAHIR ASEF KABIR et. al. | arxiv-cs.SE | 2023-10-03 |
1337 | SelfGraphVQA: A Self-Supervised Graph Neural Network for Scene-based Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we demonstrate that despite the effectiveness of scene graphs in VQA tasks, current methods that utilize idealized annotated scene graphs struggle to generalize when using predicted scene graphs extracted from images. To address this issue, we introduce the SelfGraphVQA framework. |
Bruno Souza; Marius Aasan; Helio Pedrini; Adín Ramírez Rivera; | arxiv-cs.CV | 2023-10-03 |
1338 | Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We introduce a unique object-level multimodal LLM architecture that merges vectorized numeric modalities with a pre-trained LLM to improve context understanding in driving situations. |
LONG CHEN et. al. | arxiv-cs.RO | 2023-10-03 |
1339 | On The Cognition of Visual Question Answering Models and Human Intelligence: A Comparative Study Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To inspect the association of VQA models with human cognition, we designed a survey to record the human thinking process and analyzed VQA models by comparing their outputs and attention maps with those of humans. |
Liben Chen; Long Chen; Tian Ellison-Chen; Zhuoyuan Xu; | arxiv-cs.CV | 2023-10-03 |
1340 | Systematic Literature Review on Ontology-based Indonesian Question Answering System Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question-Answering (QA) systems at the intersection of natural language processing, information retrieval, and knowledge representation aim to provide efficient responses to … |
Fadhila Tangguh Admojo; Adidah Lajis; H. Nasir; | Knowl. Eng. Data Sci. | 2023-10-03 |
1341 | Generating Explanations in Medical Question-Answering By Expectation Maximization Inference Over Evidence Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To do so, we propose a novel approach for generating natural language explanations for answers predicted by medical QA systems. |
Wei Sun; Mingxiao Li; Damien Sileo; Jesse Davis; Marie-Francine Moens; | arxiv-cs.CL | 2023-10-02 |
1342 | External Commonsense Knowledge As A Modality for Social Intelligence Question-Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Artificial Social Intelligence (ASI) refers to the perception and understanding of social interactions. It involves the usage of contextual information about social cues to … |
Sanika Natu; Shounak Sural; Sulagna Sarkar; | 2023 IEEE/CVF International Conference on Computer Vision … | 2023-10-02 |
1343 | Human Mobility Question Answering (Vision Paper) Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Mining human mobility data is crucial for various applications such as smart city planning, pandemic management, and personalised recommendation systems. In this paper, we aim to tackle this gap and introduce a novel task, that is, human mobility question answering (MobQA). |
Hao Xue; Flora D. Salim; | arxiv-cs.CL | 2023-10-02 |
1344 | Investigating Better Context Representations for Generative Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Sumam Francis; Marie-Francine Moens; | Information Retrieval Journal | 2023-10-02 |
1345 | Multi-Modal Correlated Network with Emotional Reasoning Knowledge for Social Intelligence Question-Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The capacity for social reasoning is essential to the development of social intelligence in humans, which we easily acquire through study and experience. The acquisition of such … |
Baijun Xie; Chung Hyuk Park; | 2023 IEEE/CVF International Conference on Computer Vision … | 2023-10-02 |
1346 | MMTF: Multi-Modal Temporal Fusion for Commonsense Video Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Video question answering is a challenging task that requires understanding the video and question in the same context. This becomes even harder when the questions involve … |
Mobeen Ahmad; Geonwoo Park; Dongchan Park; Sanguk Park; | 2023 IEEE/CVF International Conference on Computer Vision … | 2023-10-02 |
1347 | ReAcTable: Enhancing ReAct for Table Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Nonetheless, a conspicuous gap exists in the research landscape, where there is limited exploration of how innovative foundational research, which integrates incremental reasoning with external tools in the context of LLMs, as exemplified by the ReAct paradigm, could potentially bring advantages to the TQA task. In this paper, we aim to fill this gap, by introducing ReAcTable (ReAct for Table Question Answering tasks), a framework inspired by the ReAct paradigm that is carefully enhanced to address the challenges uniquely appearing in TQA tasks such as interpreting complex data semantics, dealing with errors generated by inconsistent data and generating intricate data transformations. |
YUNJIA ZHANG et. al. | arxiv-cs.DB | 2023-10-01 |
1348 | Understanding AI Cognition: A Neural Module for Inference Inspired By Human Memory Mechanisms Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: How humans and machines make sense of current inputs for relation reasoning and question-answering while putting the perceived information into context of our past memories, has … |
Xiangyu Zeng; Jie Lin; Piao Hu; Ruizheng Huang; Zhicheng Zhang; | ArXiv | 2023-10-01 |
1349 | Question Answering Models for Human-machine Interaction in The Manufacturing Industry Related Papers Related Patents Related Grants Related Venues Related Experts View |
Eneko Ruiz; M. Inés Torres; A. del Pozo; | Comput. Ind. | 2023-10-01 |
1350 | Event-Oriented Visual Question Answering: The E-VQA Dataset and Benchmark Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Visual question answering (VQA) is a challenging task that reasons over questions on images with knowledge. A prerequisite for VQA is the availability of annotated datasets, while … |
Zhenguo Yang; Jiale Xiang; Jiuxiang You; Qing Li; Wenyin Liu; | IEEE Transactions on Knowledge and Data Engineering | 2023-10-01 |
1351 | Multi-modal Spatial Relational Attention Networks for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
HAIBO YAO et. al. | Image Vis. Comput. | 2023-10-01 |
1352 | Multi-aspect Attentive Text Representations for Simple Question Answering Over Knowledge Base Related Papers Related Patents Related Grants Related Venues Related Experts View |
Zhixiang Zeng; Yuefeng Li; Jianming Yong; Xiaohui Tao; Vicky Liu; | Nat. Lang. Process. J. | 2023-10-01 |
1353 | Robust Visual Question Answering Via Semantic Cross Modal Augmentation Related Papers Related Patents Related Grants Related Venues Related Experts View |
Akib Mashrur; Wei Luo; Nayyar A. Zaidi; Antonio Robles-Kelly; | Comput. Vis. Image Underst. | 2023-10-01 |
1354 | A Framework for Inference Inspired By Human Memory Mechanisms Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Inspired by human brain’s memory system and cognitive architectures, we propose a PMI framework that consists of perception, memory and inference components. |
Xiangyu Zeng; Jie Lin; Piao Hu; Ruizheng Huang; Zhicheng Zhang; | arxiv-cs.LG | 2023-10-01 |
1355 | Learning Neighbor-enhanced Region Representations and Question-guided Visual Representations for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Ling Gao; Hongda Zhang; Nan Sheng; Lida Shi; Hao Xu; | Expert Syst. Appl. | 2023-10-01 |
1356 | Testing The Limits of Unified Sequence to Sequence LLM Pretraining on Diverse Table Data Tasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To that end, we attempt at creating a shared modeling approach in the pretraining stage with encoder-decoder style LLMs that can cater to diverse tasks. We evaluate our approach that continually pretrains and finetunes different model families of T5 with data from tables and surrounding context, on these downstream tasks at different model scales. |
Soumajyoti Sarkar; Leonard Lausen; | arxiv-cs.CL | 2023-10-01 |
1357 | Question-Answering Model for Schizophrenia Symptoms and Their Impact on Daily Life Using Mental Health Forums Data Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The purpose of this paper is to present a new methodology for building a medical dataset and obtain a QA model for analysis of symptoms and impact on daily life for a specific disease domain. |
Christian Internò; Eloisa Ambrosini; | arxiv-cs.LG | 2023-09-30 |
1358 | Question Answering Over Knowledge Graphs Using BERT Based Relation Mapping Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: A knowledge graph (KG) is a structured form of knowledge describing real-world entities, properties and relationships as a graph. Question answering over knowledge graphs (KGQA) … |
S. C. M.; JayaramanPrem Prakash; Pramod Kumar Singh; | Expert Systems | 2023-09-29 |
1359 | Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This paper proposes Fine-grained Late-interaction Multi-modal Retrieval (FLMR) which significantly improves knowledge retrieval in RA-VQA. |
Weizhe Lin; Jinghong Chen; Jingbiao Mei; Alexandru Coca; Bill Byrne; | arxiv-cs.CL | 2023-09-29 |
1360 | Promoting Generalized Cross-lingual Question Answering in Few-resource Scenarios Via Self-knowledge Distillation Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Beyond performance improvements, we offer valuable insights through comprehensive analyses and an ablation study, further substantiating the benefits and constraints of our approach. |
Casimiro Pio Carrino; Carlos Escolano; José A. R. Fonollosa; | arxiv-cs.CL | 2023-09-29 |
1361 | Spider4SPARQL: A Complex Benchmark for Evaluating Knowledge Graph Question Answering Systems Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we introduce Spider4SPARQL – a new SPARQL benchmark dataset featuring 9,693 previously existing manually generated NL questions and 4,721 unique, novel, and complex SPARQL queries of varying complexity. |
Catherine Kosten; Philippe Cudré-Mauroux; Kurt Stockinger; | arxiv-cs.CL | 2023-09-28 |
1362 | VDC: Versatile Data Cleanser Based on Visual-Linguistic Inconsistency By Multimodal Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Existing detectors only focus on detecting poisoned samples or noisy labels and are often prone to weak generalization when dealing with dirty samples from other domains. In this paper, we find that a commonality of various dirty samples is visual-linguistic inconsistency between images and associated labels. To capture the semantic inconsistency between modalities, we propose the versatile data cleanser (VDC), leveraging the surpassing capabilities of multimodal large language models (MLLM) in cross-modal alignment and reasoning. It consists of three consecutive modules: the visual question generation module to generate insightful questions about the image; the visual question answering module to acquire the semantics of the visual content by answering the questions with the MLLM; followed by the visual answer evaluation module to evaluate the inconsistency. Extensive experiments demonstrate its superior performance and generalization to various categories and types of dirty samples. |
Zihao Zhu; Mingda Zhang; Shaokui Wei; Bingzhe Wu; Baoyuan Wu; | arxiv-cs.CV | 2023-09-28 |
1363 | Using Weak Supervision and Data Augmentation in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we explore the roles weak supervision and data augmentation play in training deep neural network QA models. |
Chumki Basu; Himanshu Garg; Allen McIntosh; Sezai Sablak; John R. Wullert II; | arxiv-cs.CL | 2023-09-28 |
1364 | Toloka Visual Question Answering Benchmark Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present Toloka Visual Question Answering, a new crowdsourced dataset that allows comparing the performance of machine learning systems against human-level expertise on the grounding visual question answering task. |
Dmitry Ustalov; Nikita Pavlichenko; Sergey Koshelev; Daniil Likhobaba; Alisa Smirnova; | arxiv-cs.CV | 2023-09-28 |
1365 | MKRAG: Medical Knowledge Retrieval Augmented Generation for Medical Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To address the problem, our work employs a transparent process of retrieval augmented generation (RAG), aiming to improve LLM responses without the need for fine-tuning or retraining. Specifically, we propose a comprehensive retrieval strategy to extract medical facts from an external knowledge base, and then inject them into the LLM’s query prompt. |
YUCHENG SHI et. al. | arxiv-cs.CL | 2023-09-27 |
1366 | PromptCap: Prompt-Guided Image Captioning for VQA with GPT-3 IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Generic image captions often miss visual details essential for the LM to answer visual questions correctly. To address this challenge, we propose PromptCap (Prompt-guided image Captioning), a captioning model designed to serve as a better connector between images and black-box LMs. |
YUSHI HU et. al. | iccv | 2023-09-27 |
1367 | Open-vocabulary Video Question Answering: A New Benchmark for Evaluating The Generalizability of Video Question Answering Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We hence propose a new benchmark, Open-vocabulary Video Question Answering (OVQA), to measure the generalizability of VideoQA models by considering rare and unseen answers. |
DOHWAN KO et. al. | iccv | 2023-09-27 |
1368 | VQA-GNN: Reasoning with Multimodal Knowledge Via Graph Neural Networks for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To perform more expressive reasoning, we propose VQA-GNN, a new VQA model that performs bidirectional fusion between unstructured and structured multimodal knowledge to obtain unified knowledge representations. |
Yanan Wang; Michihiro Yasunaga; Hongyu Ren; Shinya Wada; Jure Leskovec; | iccv | 2023-09-27 |
1369 | Knowledge Proxy Intervention for Deconfounded Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To tackle the challenge that the confounder in VideoQA is unobserved and non-enumerable in general, we propose a model-agnostic framework called Knowledge Proxy Intervention (KPI), which introduces an extra knowledge proxy variable in the causal graph to cut the backdoor path and remove the confounder. |
Jiangtong Li; Li Niu; Liqing Zhang; | iccv | 2023-09-27 |
1370 | Variational Causal Inference Network for Explanatory Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Moreover, they neglect the complex relationships among question words, visual regions, and explanation tokens. To address these issues, we propose a Variational Causal Inference Network (VCIN) that establishes the causal correlation between predicted answers and explanations, and captures cross-modal relationships to generate rational explanations. |
Dizhan Xue; Shengsheng Qian; Changsheng Xu; | iccv | 2023-09-27 |
1371 | Zero-Shot and Few-Shot Video Question Answering with Multi-Modal Prompts Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, adapting pretrained models on limited data presents challenges such as overfitting, catastrophic forgetting, and the cross-modal gap between vision and language. We introduce a parameter-efficient method to address these challenges, combining multimodal prompt learning and a transformer-based mapping network, while keeping the pretrained models frozen. |
Deniz Engin; Yannis Avrithis; | arxiv-cs.CV | 2023-09-27 |
1372 | Decouple Before Interact: Multi-Modal Prompt Learning for Continual Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: On the other hand, neglecting the interactions between modalities will lead to poor performance. To tackle these challenging issues, we propose a comprehensive formulation for CL-VQA from the perspective of multi-modal vision-language fusion. |
ZI QIAN et. al. | iccv | 2023-09-27 |
1373 | Question Answering Using Deep Learning in Low Resource Indian Language Marathi Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper we investigate different transformer models for creating a reading comprehension-based Marathi question answering system. |
Dhiraj Amin; Sharvari Govilkar; Sagar Kulkarni; | arxiv-cs.CL | 2023-09-27 |
1374 | TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Based on this approach, we introduce TIFA v1.0, a benchmark consisting of 4K diverse text inputs and 25K questions across 12 categories (object, counting, etc.). |
YUSHI HU et. al. | iccv | 2023-09-27 |
1375 | VQA Therapy: Exploring Answer Differences By Visually Grounding Answers Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Given that different people can provide different answers to a visual question, we aim to better understand why with answer groundings. |
Chongyan Chen; Samreen Anjum; Danna Gurari; | iccv | 2023-09-27 |
1376 | Discovering Spatio-Temporal Rationales for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To tackle the challenge, we highlight the importance of identifying question-critical temporal moments and spatial objects from the vast amount of video content. Towards this, we propose a Spatio-Temporal Rationalizer (STR), a differentiable selection module that adaptively collects question-critical moments and objects using cross-modal interaction. |
Yicong Li; Junbin Xiao; Chun Feng; Xiang Wang; Tat-Seng Chua; | iccv | 2023-09-27 |
1377 | Encyclopedic VQA: Visual Questions About Detailed Properties of Fine-Grained Categories IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We propose Encyclopedic-VQA, a large scale visual question answering (VQA) dataset featuring visual questions about detailed properties of fine-grained categories and instances. |
THOMAS MENSINK et. al. | iccv | 2023-09-27 |
1378 | Simple Baselines for Interactive Video Retrieval with Questions and Answers Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Recently, there has been renewed interest in interactive systems to enhance retrieval, but existing approaches are complex and deliver limited gains in performance. In this work, we revisit this topic and propose several simple yet effective baselines for interactive video retrieval via question-answering. |
Kaiqu Liang; Samuel Albanie; | iccv | 2023-09-27 |
1379 | A Question-Answering Approach to Evaluating Legal Summaries Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Traditional evaluation metrics like ROUGE compare lexical overlap between the reference and generated summaries without taking argumentative structure into account, which is … |
Huihui Xu; Kevin D. Ashley; | International Conference on Legal Knowledge and Information … | 2023-09-26 |
1380 | Fine-tuning and Aligning Question Answering Models for Complex Information Extraction Tasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work we propose an approach that uses and integrates extractive QA models for improved feature extraction of German business documents such as insurance reports or medical leaflets into a document analysis solution. |
Matthias Engelbach; Dennis Klau; Felix Scheerer; Jens Drawehn; Maximilien Kintz; | arxiv-cs.CL | 2023-09-26 |
1381 | Knowledgeable In-Context Tuning: Exploring and Exploiting Factual Knowledge for In-Context Learning Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we demonstrate that factual knowledge is imperative for the performance of ICL in three core facets: the inherent knowledge learned in LLMs, the factual knowledge derived from the selected in-context examples, and the knowledge biases in LLMs for output generation. |
Jianing Wang; Chengyu Wang; Chuanqi Tan; Jun Huang; Ming Gao; | arxiv-cs.CL | 2023-09-26 |
1382 | Question-Answering Approach to Evaluating Legal Summaries Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a novel legal summarization evaluation framework that utilizes GPT-4 to generate a set of question-answer pairs that cover main points and information in the reference summary. |
Huihui Xu; Kevin Ashley; | arxiv-cs.CL | 2023-09-26 |
1383 | Legal Question-Answering in The Indian Context: Efficacy, Challenges, and Potential of Modern AI Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Legal QA platforms bear the promise to metamorphose the manner in which legal experts engage with jurisprudential documents. In this exposition, we embark on a comparative exploration of contemporary AI frameworks, gauging their adeptness in catering to the unique demands of the Indian legal milieu, with a keen emphasis on Indian Legal Question Answering (AILQA). |
Shubham Kumar Nigam; Shubham Kumar Mishra; Ayush Kumar Mishra; Noel Shallum; Arnab Bhattacharya; | arxiv-cs.CL | 2023-09-26 |
1384 | Analyzing The Efficacy of An LLM-Only Approach for Image-based Document Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Recent document question answering models consist of two key components: the vision encoder, which captures layout and visual elements in images, and a Large Language Model (LLM) … |
Nidhi Hegde; S. Paul; Gagan Madan; Gaurav Aggarwal; | ArXiv | 2023-09-25 |
1385 | Does The most Sinfully Decadent Cake Ever Taste Good? Answering Yes/No Questions from Figurative Contexts Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we investigate the robustness of Question Answering (QA) models on figurative text. |
Geetanjali Rakshit; Jeffrey Flanigan; | arxiv-cs.CL | 2023-09-24 |
1386 | Does The “Most Sinfully Decadent Cake Ever” Taste Good? Answering Yes/No Questions from Figurative Contexts Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Figurative language is commonplace in natural language, and while making communication memorable and creative, can be difficult to understand. In this work, we investigate the … |
Geetanjali Rakshit; Jeffrey Flanigan; | ArXiv | 2023-09-24 |
1387 | Unified Transformer with Cross-Modal Mixture Experts for Remote-Sensing Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Remote-sensing visual question answering (RSVQA) aims to provide accurate answers to remote sensing images and their associated questions by leveraging both visual and textual … |
GANG LIU et. al. | Remote. Sens. | 2023-09-24 |
1388 | Diversifying Question Generation Over Knowledge Base Via External Natural Questions Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Previous methods on knowledge base question generation (KBQG) primarily focus on enhancing the quality of a single generated question. Recognizing the remarkable paraphrasing … |
Shasha Guo; Jing Zhang; Xirui Ke; Cuiping Li; Hong Chen; | ArXiv | 2023-09-23 |
1389 | Furthest Reasoning with Plan Assessment: Stable Reasoning Path with Retrieval-Augmented Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: These inaccuracies, accumulated through the iterative interaction between the IR module and the LLM, ultimately degrade end-to-end effectiveness. To overcome the above barriers, in this paper we propose a novel pipeline for MHQA called Furthest-Reasoning-with-Plan-Assessment (FuRePA), including an improved framework (Furthest Reasoning) and an attached module (Plan Assessor). |
Yin Zhu; Zhiling Luo; Gong Cheng; | arxiv-cs.CL | 2023-09-22 |
1390 | HRoT: Hybrid Prompt Strategy and Retrieval of Thought for Table-Text Hybrid Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a new prompting strategy called Hybrid prompt strategy and Retrieval of Thought for TextTableQA. |
TONGXU LUO et. al. | arxiv-cs.CL | 2023-09-22 |
1391 | SQUARE: Automatic Question Answering Evaluation Using Multiple Positive and Negative References Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation), using multiple reference answers (combining multiple correct and incorrect references) for sentence-form QA. |
Matteo Gabburo; Siddhant Garg; Rik Koncel Kedziorski; Alessandro Moschitti; | arxiv-cs.CL | 2023-09-21 |
1392 | Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task that requires rich world knowledge. |
YIKE WU et. al. | arxiv-cs.CL | 2023-09-20 |
1393 | Knowledge Graph Question Answering for Materials Science (KGQA4MAT): Developing Natural Language Interface for Metal-Organic Frameworks Knowledge Graph (MOF-KG) Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: We present a comprehensive benchmark dataset for Knowledge Graph Question Answering in Materials Science (KGQA4MAT), with a focus on metal-organic frameworks (MOFs). A knowledge … |
YUAN AN et. al. | ArXiv | 2023-09-20 |
1394 | Knowledge Graph Question Answering for Materials Science (KGQA4MAT): Developing Natural Language Interface for Metal-Organic Frameworks Knowledge Graph (MOF-KG) Using LLM Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a comprehensive benchmark dataset for Knowledge Graph Question Answering in Materials Science (KGQA4MAT), with a focus on metal-organic frameworks (MOFs). |
YUAN AN et. al. | arxiv-cs.AI | 2023-09-20 |
1395 | Retrieving Supporting Evidence for Generative Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we report two simple experiments to automatically validate generated answers against a corpus. |
Siqing Huo; Negar Arabzadeh; Charles L. A. Clarke; | arxiv-cs.IR | 2023-09-20 |
1396 | Visual Question Answering in The Medical Domain Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present domain-specific pre-training strategies, including a novel contrastive learning pretraining method, to mitigate the problem of small datasets for the Med-VQA task. |
Louisa Canepa; Sonit Singh; Arcot Sowmya; | arxiv-cs.CV | 2023-09-20 |
1397 | Enhancing Open-Domain Table Question Answering Via Syntax- and Structure-aware Dense Retrieval Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Existing studies of open-domain table QA either directly adopt text retrieval methods or consider the table structure only in the encoding layer for table retrieval, which may cause syntactical and structural information loss during table scoring. To address this issue, we propose a syntax- and structure-aware retrieval method for the open-domain table QA task. |
Nengzheng Jin; Dongfang Li; Junying Chen; Joanna Siebert; Qingcai Chen; | arxiv-cs.CL | 2023-09-19 |
1398 | Benchmarks for Pirá 2.0, A Reading Comprehension Dataset About The Ocean, The Brazilian Coast, and Climate Change Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we define six benchmarks over the Pirá dataset, covering closed generative question answering, machine reading comprehension, information retrieval, open question answering, answer triggering, and multiple choice question answering. |
PAULO PIROZELLI et. al. | arxiv-cs.CL | 2023-09-19 |
1399 | Localize, Retrieve and Fuse: A Generalized Framework for Free-Form Question Answering Over Tables Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, this paper proposes a generalized three-stage approach: Table-to-Graph conversion and cell localization, external knowledge retrieval, and the fusion of table and text (called TAG-QA), to address the challenge of inferring long free-form answers in generative TableQA. |
WENTING ZHAO et. al. | arxiv-cs.CL | 2023-09-19 |
1400 | QASnowball: An Iterative Bootstrapping Framework for High-Quality Question-Answering Data Generation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, obtaining sufficient data to build an effective and stable QA system still remains an open problem. For this problem, we introduce an iterative bootstrapping framework for QA data augmentation (named QASnowball), which can iteratively generate large-scale high-quality QA data based on a seed set of supervised examples. |
KUNLUN ZHU et. al. | arxiv-cs.CL | 2023-09-19 |
1401 | Syntax Tree Constrained Graph Network for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To fill the gap, we suggested a novel Syntax Tree Constrained Graph Network (STCGN) for VQA based on entity message passing and syntax tree. |
Xiangrui Su; Qi Zhang; Chongyang Shi; Jiachang Liu; Liang Hu; | arxiv-cs.CV | 2023-09-17 |
1402 | NOWJ1@ALQAC 2023: Enhancing Legal Task Performance with Classic Statistical Models and Pre-trained Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper describes the NOWJ1 Team’s approach for the Automated Legal Question Answering Competition (ALQAC) 2023, which focuses on enhancing legal task performance by integrating classical statistical models and Pre-trained Language Models (PLMs). |
TAN-MINH NGUYEN et. al. | arxiv-cs.CL | 2023-09-16 |
1403 | Multimodal Multi-Hop Question Answering Through A Conversation Between Tools and Efficiently Finetuned Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We employ a tool-interacting divide-and-conquer strategy enabling large language models (LLMs) to answer complex multimodal multi-hop questions. |
Hossein Rajabzadeh; Suyuchen Wang; Hyock Ju Kwon; Bang Liu; | arxiv-cs.CL | 2023-09-16 |
1404 | PDFTriage: Question Answering Over Long, Structured Documents Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: When a system has to query the document for context, this incongruity is brought to the fore, and seemingly trivial questions can trip up the QA system. To bridge this fundamental gap in handling structured documents, we propose an approach called PDFTriage that enables models to retrieve the context based on either structure or content. |
JON SAAD-FALCON et. al. | arxiv-cs.CL | 2023-09-16 |
1405 | SilverRetriever: Advancing Neural Passage Retrieval for Polish Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Modern open-domain question answering systems often rely on accurate and efficient retrieval components to find passages containing the facts necessary to answer the question. … |
Piotr Rybak; M. Ogrodniczuk; | ArXiv | 2023-09-15 |
1406 | Silver Retriever: Advancing Neural Passage Retrieval for Polish Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we present Silver Retriever, a neural retriever for Polish trained on a diverse collection of manually or weakly labeled datasets. |
Piotr Rybak; Maciej Ogrodniczuk; | arxiv-cs.CL | 2023-09-15 |
1407 | Investigating Answerability of LLMs for Long-Form Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We propose a question-generation method from abstractive summaries and show that generating follow-up questions from summaries of long documents can create a challenging setting for LLMs to reason and infer from long contexts. |
Meghana Moorthy Bhat; Rui Meng; Ye Liu; Yingbo Zhou; Semih Yavuz; | arxiv-cs.CL | 2023-09-15 |
1408 | D3: Data Diversity Design for Systematic Generalization in Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present new evidence in the problem of Visual Question Answering (VQA) that reveals that the diversity of simple tasks (i.e. tasks formed by a few subtasks and concepts) plays a key role in achieving systematic generalization. |
AMIR RAHIMI et. al. | arxiv-cs.AI | 2023-09-15 |
1409 | CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In recent years, large language models (LLMs) have shown remarkable capabilities at scale, particularly at generating text conditioned on a prompt. |
Rachneet Sachdeva; Martin Tutek; Iryna Gurevych; | arxiv-cs.CL | 2023-09-14 |
1410 | Enhancing Yes/no Question Answering with Weak Supervision Via Extractive Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Dimitris Dimitriadis; Grigorios Tsoumakas; | Applied Intelligence | 2023-09-14 |
1411 | Feature Engineering in Learning-to-Rank for Community Question Answering Task Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: These data are leveraged in automated CQA ranking systems where similar questions (and answers) are presented in response to the query of the user. In this work, we empirically investigate a few aspects of this domain. |
Nafis Sajid; Md Rashidul Hasan; Muhammad Ibrahim; | arxiv-cs.LG | 2023-09-14 |
1412 | Multimodal Bi-direction Guided Attention Networks for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Linqin Cai; Nuoying Xu; Hang Tian; Kejia Chen; Haodu Fan; | Neural Processing Letters | 2023-09-13 |
1413 | Evaluating The Ebb and Flow: An In-depth Analysis of Question-Answering Trends Across Diverse Platforms Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Community Question Answering (CQA) platforms steadily gain popularity as they provide users with fast responses to their queries. The swiftness of these responses is contingent on … |
Rima Hazra; Agnik Saha; Somnath Banerjee; Animesh Mukherjee; | arxiv-cs.SI | 2023-09-12 |
1414 | Answering Subjective Induction Questions on Products By Summarizing Multi-sources Multi-viewpoints Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: That is quite different from the traditional QA task, in which the answer to a factoid question is unique and can be found in a single data source. To address this new task, we propose a three-step method. |
Yufeng Zhang; Meng-xiang Wang; Jianxing Yu; | arxiv-cs.CL | 2023-09-11 |
1415 | NeCo@ALQAC 2023: Legal Domain Knowledge Acquisition for Low-Resource Languages Through Data Enrichment Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: This paper presents NeCo Team’s solutions to the Vietnamese text processing tasks provided in the Automated Legal Question Answering Competition 2023 (ALQAC 2023), focusing on legal domain knowledge acquisition for low-resource languages through data enrichment. |
HAI-LONG NGUYEN et. al. | arxiv-cs.CL | 2023-09-11 |
1416 | Two Is Better Than One: Answering Complex Questions By Multiple Knowledge Sources with Generalized Links Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we formulate the novel Multi-KB-QA task that leverages the full and partial links among multiple KBs to derive correct answers; a benchmark with diversified link and query types is also constructed to efficiently evaluate Multi-KB-QA performance. |
MINHAO ZHANG et. al. | arxiv-cs.CL | 2023-09-10 |
1417 | AGent: A Novel Pipeline for Automatically Creating Unanswerable Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: However, manually annotating unanswerable questions is labor-intensive. To address this, we propose AGent, a novel pipeline that automatically creates new unanswerable questions by re-matching a question with a context that lacks the necessary information for a correct answer. |
Son Quoc Tran; Gia-Huy Do; Phong Nguyen-Thuan Do; Matt Kretchmar; Xinya Du; | arxiv-cs.CL | 2023-09-10 |
1418 | MMHQA-ICL: Multimodal In-context Learning for Hybrid Question Answering Over Text, Tables and Images Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Recently, with the rise of large language models (LLMs), in-context learning (ICL) has become the most popular way to solve QA problems. We propose the MMHQA-ICL framework for addressing this problem, which includes a stronger heterogeneous data retriever and an image caption module. |
WEIHAO LIU et. al. | arxiv-cs.CL | 2023-09-09 |
1419 | Can NLP Models ‘Identify’, ‘Distinguish’, and ‘Justify’ Questions That Don’t Have A Definitive Answer? Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Can SOTA models accurately identify such questions and provide a reasonable response? To investigate the above question, we introduce QnotA, a dataset consisting of five different categories of questions that don’t have definitive answers. |
AYUSHI AGARWAL et. al. | arxiv-cs.CL | 2023-09-08 |
1420 | A Study on Influential Features for Predicting Best Answers in Community Question-Answering Forums Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: The knowledge provided by user communities in question-answering (QA) forums is a highly valuable source of information for satisfying user information needs. However, finding the … |
Valeria Zoratto; Daniela Godoy; Gabriela N. Aranda; | Inf. | 2023-09-07 |
1421 | Interpretable Visual Question Answering Via Reasoning Supervision Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, such models are likely to disregard crucial visual cues and often rely on multimodal shortcuts and inherent biases of the language modality to predict the correct answer, a phenomenon commonly referred to as lack of visual grounding. In this work, we alleviate this shortcoming through a novel architecture for visual question answering that leverages common sense reasoning as a supervisory signal. |
Maria Parelli; Dimitrios Mallis; Markos Diomataris; Vassilis Pitsikalis; | arxiv-cs.CV | 2023-09-07 |
1422 | Introducing Forecast Utterance for Conversational Data Science Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: A significant challenge for the agent in this endeavor is to accurately comprehend the user’s prediction goals and, consequently, formulate precise ML tasks. In this paper, we take a pioneering step towards this ambitious goal by introducing a new concept called Forecast Utterance and then focus on the automatic and accurate interpretation of users’ prediction goals from these utterances. |
Md Mahadi Hassan; Alex Knipper; Shubhra Kanti Karmaker; | arxiv-cs.CL | 2023-09-07 |
1423 | ATM: Action Temporality Modeling for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We introduce Action Temporality Modeling (ATM) for temporality reasoning via three-fold uniqueness: (1) rethinking the optical flow and realizing that optical flow is effective in capturing the long horizon temporality reasoning; (2) training the visual-text embedding by contrastive learning in an action-centric manner, leading to better action representations in both vision and text modalities; and (3) preventing the model from answering the question given the shuffled video in the fine-tuning stage, to avoid spurious correlation between appearance and motion and hence ensure faithful temporality reasoning. |
Junwen Chen; Jie Zhu; Yu Kong; | arxiv-cs.CV | 2023-09-05 |
1424 | Understanding Video Scenes Through Text: Insights from Text-based Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The NewsVideoQA dataset contains question-answer pairs related to the text in news videos, while M4-ViteVQA comprises question-answer pairs from diverse categories like vlogging, traveling, and shopping. We provide an analysis of the formulation of these datasets on various levels, exploring the degree of visual understanding and multi-frame comprehension required for answering the questions. |
Soumya Jahagirdar; Minesh Mathew; Dimosthenis Karatzas; C. V. Jawahar; | arxiv-cs.CV | 2023-09-04 |
1425 | Evaluating A Radius-based Pipeline for Question Answering Over Cultural (CIDOC-CRM Based) Knowledge Graphs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: CIDOC-CRM is an event-based international standard for cultural documentation that has been widely used for offering semantic interoperability in the Cultural Heritage (CH) … |
Nikos Gounakis; M. Mountantonakis; Yannis Tzitzikas; | Proceedings of the 34th ACM Conference on Hypertext and … | 2023-09-04 |
1426 | Enabling The Informed Patient Paradigm with Secure and Personalized Medical Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Quality patient care is a complex and multifaceted problem requiring the integration of data from multiple sources. We propose Medicient, a knowledge-graph-based question … |
Joel Oduro-Afriyie; Hasan M. Jamil; | Proceedings of the 14th ACM International Conference on … | 2023-09-03 |
1427 | Can I Trust Your Answer? Visually Grounded Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Experiments with different backbones demonstrate that this grounding mechanism improves both grounding and QA. With these efforts, we aim to push towards trustworthy VLMs in VQA systems. |
Junbin Xiao; Angela Yao; Yicong Li; Tat Seng Chua; | arxiv-cs.CV | 2023-09-03 |
1428 | MedChatZH: A Better Medical Adviser Learns from Better Instructions Summary Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Abstract: Generative large language models (LLMs) have shown great success in various applications, including question-answering (QA) and dialogue systems. However, in specialized domains … |
Yang Tan; Mingchen Li; Zijie Huang; Huiqun Yu; Guisheng Fan; | ArXiv | 2023-09-03 |
1429 | Generative Data Augmentation Using LLMs Improves Distributional Robustness in Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We take a two-step generation approach, generating both contexts and QA pairs to augment existing datasets. |
Arijit Ghosh Chowdhury; Aman Chadha; | arxiv-cs.CL | 2023-09-02 |
1430 | Cross-modality Multiple Relations Learning for Knowledge-based Visual Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Knowledge-based visual question answering not only needs to answer the questions based on images but also incorporates external knowledge to study reasoning in the joint space of … |
YAN WANG et. al. | ACM Transactions on Multimedia Computing, Communications … | 2023-09-02 |
1431 | A Template-based Approach for Question Answering Over Knowledge Bases Related Papers Related Patents Related Grants Related Venues Related Experts View |
Anna Formica; Ida Mele; F. Taglino; | Knowledge and Information Systems | 2023-09-02 |
1432 | LeanContext: Cost-Efficient Domain-Specific Question Answering Using LLMs IF:3 Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Question-answering (QA) is a significant application of Large Language Models (LLMs), shaping chatbot capabilities across healthcare, education, and customer service. However, … |
Md. Adnan Arefeen; Biplob K. Debnath; S. Chakradhar; | ArXiv | 2023-09-02 |
1433 | Context-aware Multi-level Question Embedding Fusion for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
SHENGDONG LI et. al. | Inf. Fusion | 2023-09-01 |
1434 | CLVIN: Complete Language-vision Interaction Network for Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View |
Chongqing Chen; Dezhi Han; Xiang Shen; | Knowl. Based Syst. | 2023-09-01 |
1435 | A Contrastive Framework for Enhancing Knowledge Graph Question Answering: Alleviating Exposure Bias Related Papers Related Patents Related Grants Related Venues Related Experts View |
HUIFANG DU et. al. | Knowl. Based Syst. | 2023-09-01 |
1436 | Multimodal Representative Answer Extraction in Community Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Ming Li; Yating Ma; Y. Li; Yixue Bai; | J. King Saud Univ. Comput. Inf. Sci. | 2023-09-01 |
1437 | Prompt-WNQA: A Prompt-based Complex Question Answering for Wireless Network Over Knowledge Graph Related Papers Related Patents Related Grants Related Venues Related Experts View |
Pei Liu; Bing Qian; Qi Sun; Longgang Zhao; | Comput. Networks | 2023-09-01 |
1438 | Empirical Study on Using Adapters for Debiased Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Jae-Won Cho; Dawit Mureja Argaw; Youngtaek Oh; Dong-Jin Kim; In-So Kweon; | Comput. Vis. Image Underst. | 2023-09-01 |
1439 | Query Path Generation Via Bidirectional Reasoning for Multihop Question Answering From Knowledge Bases Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Multihop question answering from knowledge bases (KBQA) is a hot research topic in natural language processing. Recently, the graph neural network-based (GNN-based) methods have … |
GENG ZHANG et. al. | IEEE Transactions on Cognitive and Developmental Systems | 2023-09-01 |
1440 | Generative Retrieval for Conversational Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View |
Yongqing Li; Nan Yang; Liang Wang; Furu Wei; Wenjie Li; | Inf. Process. Manag. | 2023-09-01 |
1441 | DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper we describe the details of the training as well as the results on the different benchmarks. |
Shaltiel Shmidman; Avi Shmidman; Moshe Koppel; | arxiv-cs.CL | 2023-08-31 |
1442 | Separate and Locate: Rethink The Text in Text-based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: The 1-D position embedding can only represent the left-right sequence relationship between words in a sentence, but not the complex spatial position relationship. To tackle these problems, we propose a novel method named Separate and Locate (SaL) that explores text contextual cues and designs spatial position embedding to construct spatial relations between OCR texts. |
Chengyang Fang; Jiangnan Li; Liang Li; Can Ma; Dayong Hu; | arxiv-cs.CV | 2023-08-30 |
1443 | Hyperbolic Code Retrieval: A Novel Approach for Efficient Code Search Using Hyperbolic Space Embeddings Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: However, these methods often lead to computational and memory inefficiencies, posing a significant challenge to their real-world applicability. To tackle this challenge, we propose a novel approach, the Hyperbolic Code QA Matching (HyCoQA). |
XUNZHU TANG et. al. | arxiv-cs.SE | 2023-08-29 |
1444 | KGConv, A Conversational Corpus Grounded in Wikidata Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present KGConv, a large, conversational corpus of 71k conversations where each question-answer pair is grounded in a Wikidata fact. |
Quentin Brabant; Gwenole Lecorve; Lina M. Rojas-Barahona; Claire Gardent; | arxiv-cs.CL | 2023-08-29 |
1445 | Empowering Cross-lingual Abilities of Instruction-tuned Large Language Models By Translation-following Demonstrations IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: This disparity persists in further fine-tuning and affects the cross-lingual abilities of LLMs. In this paper, we propose to empower Instruction-tuned LLMs (It-LLMs) in languages other than English by building semantic alignment between them. |
Leonardo Ranaldi; Giulia Pucci; Andre Freitas; | arxiv-cs.CL | 2023-08-27 |
1446 | Knowledge-Based Version Incompatibility Detection for Deep Learning Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Therefore, these techniques cannot detect version issues due to undocumented version constraints or issues involving hardware drivers or OS. To address this challenge, we propose to leverage the abundant discussions of DL version issues from Stack Overflow to facilitate version incompatibility detection. |
Zhongkai Zhao; Bonan Kou; Mohamed Yilmaz Ibrahim; Muhao Chen; Tianyi Zhang; | arxiv-cs.SE | 2023-08-25 |
1447 | Knowledge-Driven CoT: Exploring Faithful Reasoning in LLMs for Knowledge-intensive Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Even so, suffering from hallucinations and the inability to access external knowledge, LLMs often come with incorrect or unfaithful intermediate reasoning steps, especially in the context of answering knowledge-intensive tasks such as KBQA. To alleviate this issue, we propose a framework called Knowledge-Driven Chain-of-Thought (KD-CoT) to verify and modify reasoning traces in CoT via interaction with external knowledge, and thus overcome the hallucinations and error propagation. |
KEHENG WANG et. al. | arxiv-cs.CL | 2023-08-25 |
1448 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond IF:6 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. |
JINZE BAI et. al. | arxiv-cs.CV | 2023-08-24 |
1449 | TG-VQA: Ternary Game of Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we innovatively resort to game theory, which can simulate complicated relationships among multiple players with specific interaction strategies, e.g., video, question, and answer as ternary players, to achieve fine-grained alignment for VideoQA task. |
HAO LI et. al. | ijcai | 2023-08-23 |
1450 | SQuAD-SRC: A Dataset for Multi-Accent Spoken Reading Comprehension Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we construct a large-scale multi-accent human spoken dataset SQuAD-SRC, in order to study the problem of multi-accent spoken reading comprehension. |
Yixuan Tang; Anthony K.H. Tung; | ijcai | 2023-08-23 |
1451 | Answer Mining from A Pool of Images: Towards Retrieval-Based Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Towards solving the RETVQA task, we propose a unified Multi Image BART (MI-BART) that takes a question and retrieved images using our relevance encoder for free-form fluent answer generation. |
Abhirama Subramanyam Penamakuri; Manish Gupta; Mithun Das Gupta; Anand Mishra; | ijcai | 2023-08-23 |
1452 | Keep Skills in Mind: Understanding and Implementing Skills in Commonsense Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we introduce a new approach named Dynamic Skill-aware Commonsense Question Answering (DSCQA), which transcends the limitations of traditional methods by informing the model about the need for each skill in questions and utilizing skills as a critical driver in the CQA process. |
MEIKAI BAO et. al. | ijcai | 2023-08-23 |
1453 | COOL, A Context Outlooker, and Its Application to Question Answering and Other Natural Language Processing Tasks Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present an outlook attention mechanism, COOL, for natural language processing. |
Fangyi Zhu; See-Kiong Ng; Stéphane Bressan; | ijcai | 2023-08-23 |
1454 | Local and Global: Temporal Question Answering Via Information Fusion IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Despite the fruitful efforts of previous models in temporal KGQA, they still have several limitations. (I) They neither emphasize the graph structural information between entities in KGs nor explicitly utilize a multi-hop relation path through graph neural networks to enhance answer prediction. (II) They adopt pre-trained language models (LMs) to obtain question representations, focusing merely on the global information related to the question while not highlighting the local information of the entities in KGs. To address these limitations, we introduce a novel model that simultaneously explores both Local information and Global information for the task of temporal KGQA (LGQA). |
YONGHAO LIU et. al. | ijcai | 2023-08-23 |
1455 | A Logic-based Approach to Contrastive Explainability for Neurosymbolic Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a CE framework for VQA that uses a neurosymbolic VQA architecture which disentangles perception from reasoning. |
Thomas Eiter; Tobias Geibinger; Nelson Higuera; Johannes Oetsch; | ijcai | 2023-08-23 |
1456 | HopPG: Self-Iterative Program Generation for Multi-Hop Question Answering Over Heterogeneous Knowledge Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: On the other hand, this way ignores the semantic information of the intermediate answers at each hop, which is beneficial for subsequent generation. To alleviate these challenges, we propose a self-iterative framework for multi-hop program generation (HopPG) over heterogeneous knowledge, which leverages the previous execution results to retrieve supporting facts and generate subsequent programs hop by hop. |
Yingyao Wang; Yongwei Zhou; Chaoqun Duan; Junwei Bao; Tiejun Zhao; | arxiv-cs.CL | 2023-08-22 |
1457 | Music Understanding LLaMA: Advancing Text-to-Music Generation with Question Answering and Captioning IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Text-to-music generation (T2M-Gen) faces a major obstacle due to the scarcity of large-scale publicly available music datasets with natural language captions. To address this, we propose the Music Understanding LLaMA (MU-LLaMA), capable of answering music-related questions and generating captions for music files. |
Shansong Liu; Atin Sakkeer Hussain; Chenshuo Sun; Ying Shan; | arxiv-cs.SD | 2023-08-22 |
1458 | Bridging The Gap: Deciphering Tabular Data Using Large Language Model Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In the realm of natural language processing, the understanding of tabular data has perpetually stood as a focal point of scholarly inquiry. The emergence of expansive language models, exemplified by the likes of ChatGPT, has ushered in a wave of endeavors wherein researchers aim to harness these models for tasks related to table-based question answering. |
Hengyuan Zhang; Peng Chang; Zongcheng Ji; | arxiv-cs.CL | 2023-08-22 |
1459 | DocPrompt: Large-scale Continue Pretrain for Zero-shot and Few-shot Document Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose Docprompt for document question answering tasks with powerful zero-shot and few-shot performance. |
Sijin Wu; Dan Zhang; Teng Hu; Shikun Feng; | arxiv-cs.CL | 2023-08-21 |
1460 | LibriSQA: A Novel Dataset and Framework for Spoken Question Answering with Large Language Models Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Given the evident paucity of existing speech-text LLMs, we propose a lightweight, end-to-end framework to execute the SQA task on the LibriSQA, witnessing significant results. |
Zihan Zhao; Yiyang Jiang; Heyang Liu; Yanfeng Wang; Yu Wang; | arxiv-cs.CL | 2023-08-20 |
1461 | Generic Attention-model Explainability By Weighted Relevance Accumulation Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a weighted relevancy strategy, which takes the importance of token values into consideration, to reduce distortion when equally accumulating relevance. |
Yiming Huang; Aozhe Jia; Xiaodan Zhang; Jiawei Zhang; | arxiv-cs.CV | 2023-08-20 |
1462 | Towards Multi-Lingual Audio Question Answering Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: Audio Question Answering (AQA) is a multi-modal translation task where a system analyzes an audio signal and a natural language question to generate a desirable natural language … |
Swarup Ranjan Behera; Pailla Balakrishna Reddy; A. Tripathi; Megavath Bharadwaj Rathod; Tejesh Karavadi; | Interspeech | 2023-08-20 |
1463 | Improving Visual Question Answering for Bridge Inspection By Pre‐training with External Data of Image–text Pairs Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: This paper explores the application of visual question answering (VQA) in bridge inspection using recent advancements in multimodal artificial intelligence (AI) systems. VQA … |
Thannarot Kunlamai; T. Yamane; M. Suganuma; Pang-jo Chun; Takayuki Okatani; | Computer‐Aided Civil and Infrastructure Engineering | 2023-08-18 |
1464 | Breaking Language Barriers: A Question Answering Dataset for Hindi and Marathi Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To tackle the challenge of data scarcity, we have developed a novel approach for translating the SQuAD 2.0 dataset into Hindi and Marathi. |
Maithili Sabane; Onkar Litake; Aman Chadha; | arxiv-cs.CL | 2023-08-18 |
1465 | Accelerated Materials Language Processing Enabled By GPT Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this study, we develop generative pretrained transformer (GPT)-enabled pipelines where the complex architectures of prior MLP models are replaced with strategic designs of prompt engineering. |
Jaewoong Choi; Byungju Lee; | arxiv-cs.CL | 2023-08-18 |
1466 | End-to-End Beam Retrieval for Multi-Hop Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we introduce Beam Retrieval, an end-to-end beam retrieval framework for multi-hop QA. |
Jiahao Zhang; Haiyang Zhang; Dongmei Zhang; Yong Liu; Shen Huang; | arxiv-cs.CL | 2023-08-17 |
1467 | Answering Ambiguous Questions with A Database of Questions, Answers, and Revisions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia. |
Haitian Sun; William W. Cohen; Ruslan Salakhutdinov; | arxiv-cs.CL | 2023-08-16 |
1468 | Learning The Meanings of Function Words from Grounded Language Using A Visual Question Answering Model Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: Yet recent neural-network based visual question answering models apparently can learn to use function words as part of answering questions about complex visual scenes. In this paper, we study what these models learn about function words, in the hope of better understanding how the meanings of these words can be learnt by both models and children. |
Eva Portelance; Michael C. Frank; Dan Jurafsky; | arxiv-cs.CL | 2023-08-16 |
1469 | Research on Question Answering for Knowledge Graph of Aircraft PHM Fault Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: A question recognition method based on BERT-BiLSTM-ATT-CRF is proposed to solve the problem of entity recognition difficulties faced by question answering in the field of aircraft …
XIANGZHEN MENG et. al. | 2023 IEEE 9th International Conference on Cloud Computing … | 2023-08-12 |
1470 | Meta-path Reasoning of Knowledge Graph for Commonsense Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Miao Zhang; Tingting He; M. Dong; | Frontiers of Computer Science | 2023-08-12 |
1471 | Multi-hop Question Answering Over Incomplete Knowledge Graph with Abstract Conceptual Evidence Related Papers Related Patents Related Grants Related Venues Related Experts View |
QIBO SUN et. al. | Applied Intelligence | 2023-08-11 |
1472 | Performance Prediction for Multi-hop Questions Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: The problem is challenging due to the multi-step nature of the retrieval process, potential dependency of the steps and the reasoning involved. To tackle this challenge, we propose multHP, a novel pre-retrieval method for predicting the performance of open-domain multi-hop questions. |
Mohammadreza Samadi; Davood Rafiei; | arxiv-cs.CL | 2023-08-11 |
1473 | Progressive Spatio-temporal Perception for Audio-Visual Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we propose a Progressive Spatio-Temporal Perception Network (PSTP-Net), which contains three modules that progressively identify key spatio-temporal regions w.r.t. questions. |
Guangyao Li; Wenxuan Hou; Di Hu; | arxiv-cs.CV | 2023-08-10 |
1474 | ADMUS: A Progressive Question Answering Framework Adaptable to Multiple Knowledge Sources Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Therefore, we present ADMUS, a progressive knowledge base question answering framework designed to accommodate a wide variety of datasets, including multiple languages, diverse backbone knowledge bases, and disparate question answering datasets. To accomplish this purpose, we decouple the architecture of conventional KBQA systems and propose this dataset-independent framework. |
Yirui Zhan; Yanzeng Li; Minhao Zhang; Lei Zou; | arxiv-cs.CL | 2023-08-09 |
1475 | Building Interpretable and Reliable Open Information Retriever for New Domains Overnight Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this work, we propose an information retrieval pipeline that uses entity/event linking model and query decomposition model to focus more accurately on different information units of the query. |
Xiaodong Yu; Ben Zhou; Dan Roth; | arxiv-cs.CL | 2023-08-09 |
1476 | Top K Relevant Passage Retrieval for Biomedical Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we build on the existing DPR framework for the biomedical domain and retrieve answers from PubMed articles, which are a reliable source for answering medical questions. |
Shashank Gupta; | arxiv-cs.CL | 2023-08-08 |
1477 | Towards An AI to Win Ghana’s National Science and Maths Quiz Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: That is the question we seek to answer in the NSMQ AI project, an open-source project that is building AI to compete live in the NSMQ and win. |
GEORGE BOATENG et. al. | arxiv-cs.HC | 2023-08-08 |
1478 | On Monotonic Aggregation for Open-domain QA Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We identify the cause, and based on that we propose Judge-Specialist framework. |
Sang-eun Han; Yeonseok Jeong; Seung-won Hwang; Kyungjae Lee; | arxiv-cs.CL | 2023-08-08 |
1479 | SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we present SciGraphQA, a synthetic multi-turn question-answer dataset related to academic graphs. |
Shengzhi Li; Nima Tajbakhsh; | arxiv-cs.CL | 2023-08-07 |
1480 | KITLM: Domain-Specific Knowledge InTegration Into Language Models for Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To boost domain-specific understanding, we propose KITLM, a novel approach for integrating a knowledge base into a language model through relevant information infusion. |
Ankush Agarwal; Sakharam Gawade; Amar Prakash Azad; Pushpak Bhattacharyya; | arxiv-cs.CL | 2023-08-07 |
1481 | Prompt Guided Copy Mechanism for Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we propose a pluggable approach for extractive methods that introduces a novel prompt-guided copy mechanism to improve the fluency and appropriateness of the extracted answers. |
YONG ZHANG et. al. | arxiv-cs.CL | 2023-08-07 |
1482 | Redundancy-aware Transformer for Video Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: To this end, we propose a novel transformer-based architecture, that aims to model VideoQA in a redundancy-aware manner. |
YICONG LI et. al. | arxiv-cs.CV | 2023-08-06 |
1483 | PaniniQA: Enhancing Patient Education Through Interactive Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this paper, we present PaniniQA, a patient-centric interactive question answering system designed to help patients understand their discharge instructions. |
PENGSHAN CAI et. al. | arxiv-cs.CL | 2023-08-06 |
1484 | Decision Knowledge Graphs: Construction of and Usage in Question Answering for Clinical Practice Guidelines Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: In this paper, we present a Decision Knowledge Graph (DKG) representation to store CPGs and to perform question-answering on CPGs. |
Vasudhan Varma Kandula; Pushpak Bhattacharyya; | arxiv-cs.IR | 2023-08-05 |
1485 | Learning to Select The Relevant History Turns in Conversational Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: Irrelevant context, on the other hand, brings noise to the system, thereby resulting in a decline in the model’s performance. In this paper, we propose a framework, DHS-ConvQA (Dynamic History Selection in Conversational Question Answering), that first generates the context and question entities for all the history turns, which are then pruned on the basis of the similarity they share with the question at hand. |
MUNAZZA ZAIB et. al. | arxiv-cs.CL | 2023-08-04 |
1486 | WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). |
XIAO LIU et. al. | kdd | 2023-08-04 |
1487 | Dual-feature Collaborative Relation-attention Networks for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Lu Yao; You Yang; Juntao Hu; | International Journal of Multimedia Information Retrieval | 2023-08-04 |
1488 | RealCQA: Scientific Chart Question Answering As A Test-bed for First-Order Logic Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: We present a comprehensive study of chart visual question-answering(QA) task, to address the challenges faced in comprehending and extracting data from chart visualizations within documents. |
Saleem Ahmed; Bhavin Jawade; Shubham Pandey; Srirangaraj Setlur; Venu Govindaraju; | arxiv-cs.CV | 2023-08-03 |
1489 | BamnetTL: Bidirectional Attention Memory Network with Transfer Learning for Question Answering Matching Summary Related Papers Related Patents Related Grants Related Venues Related Experts View Abstract: In KBQA (knowledge base question answering), questions are processed using NLP (natural language processing), and knowledge base technology is used to generate the corresponding … |
Lei Su; Jiazhi Guo; Liping Wu; Han Deng; | Int. J. Intell. Syst. | 2023-08-03 |
1490 | Open-Domain Long-Form Question–Answering Using Transformer-Based Pipeline Related Papers Related Patents Related Grants Related Venues Related Experts View |
Aprameya Dash; Mohit Awachar; Anshul Patel; Bhawana Rudra; | SN Computer Science | 2023-08-03 |
1491 | Teaching Smaller Language Models To Generalise To Unseen Compositional Questions Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: To do so we propose a combination of multitask supervised pretraining on up to 93 tasks designed to instill diverse reasoning abilities, and a dense retrieval system that aims to retrieve a set of evidential paragraph fragments. |
Tim Hartill; Neset Tan; Michael Witbrock; Patricia J. Riddle; | arxiv-cs.CL | 2023-08-02 |
1492 | Improving Visual Question Answering for Remote Sensing Via Alternate-guided Attention and Combined Loss Related Papers Related Patents Related Grants Related Venues Related Experts View |
JIANGFAN FENG et. al. | Int. J. Appl. Earth Obs. Geoinformation | 2023-08-01 |
1493 | Improved Relation Span Detection in Question Answering Systems Over Extracted Knowledge Bases Related Papers Related Patents Related Grants Related Venues Related Experts View |
Somayyeh Behmanesh; Alireza Talebpour; M. Shamsfard; Mohammad Jafari; | Expert Syst. Appl. | 2023-08-01 |
1494 | DAQAS: Deep Arabic Question Answering System Based on Duplicate Question Detection and Machine Reading Comprehension Related Papers Related Patents Related Grants Related Venues Related Experts View |
H. ALAMI et. al. | J. King Saud Univ. Comput. Inf. Sci. | 2023-08-01 |
1495 | Spatio-Temporal Two-stage Fusion for Video Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
FEIFEI XU et. al. | Comput. Vis. Image Underst. | 2023-08-01 |
1496 | Question-conditioned Debiasing with Focal Visual Context Fusion for Visual Question Answering Related Papers Related Patents Related Grants Related Venues Related Experts View |
Jin Liu; Guoxiang Wang; Chongfeng Fan; F. Zhou; Huijuan Xu; | Knowl. Based Syst. | 2023-08-01 |
1497 | Neural Age Screening on Question Answering Communities Related Papers Related Patents Related Grants Related Venues Related Experts View |
Mohan Timilsina; A. Figueroa; | Eng. Appl. Artif. Intell. | 2023-08-01 |
1498 | Designing A Communication Bridge Between Communities: Participatory Design for A Question-Answering AI Agent Related Papers Related Patents Related Grants Related Venues Related Experts View Highlight: How do we design an AI system that is intended to act as a communication bridge between two user communities with different mental models and vocabularies? |
Jeonghyun Lee; Vrinda Nandan; Harshvardhan Sikka; Spencer Rugaber; Ashok Gole; | arxiv-cs.HC | 2023-08-01 |
1499 | Counting-based Visual Question Answering with Serial Cascaded Attention Deep Learning Related Papers Related Patents Related Grants Related Venues Related Experts View |
Tesfayee Meshu Welde; L. Liao; | Pattern Recognit. | 2023-08-01 |
1500 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering IF:3 Related Papers Related Patents Related Grants Related Venues Related Experts Related Code View Highlight: In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. |
Vaibhav Adlakha; Parishad BehnamGhader; Xing Han Lu; Nicholas Meade; Siva Reddy; | arxiv-cs.CL | 2023-07-31 |