
Visual Agents as Fast and Slow Thinkers

About

Achieving human-level intelligence requires refining the cognitive distinction between System 1 and System 2 thinking. While contemporary AI, driven by large language models, demonstrates human-like traits, it falls short of genuine cognition. Transitioning from structured benchmarks to real-world scenarios presents challenges for visual agents, often leading to inaccurate and overly confident responses. To address this challenge, we introduce FaST, which incorporates the Fast and Slow Thinking mechanism into visual agents. FaST employs a switch adapter to dynamically select between System 1 and System 2 modes, tailoring its problem-solving approach to tasks of different complexity. It handles uncertain and unseen objects by adjusting model confidence and integrating new contextual data. This design yields a flexible system with hierarchical reasoning capabilities and a transparent decision-making pipeline, all of which contribute to its ability to emulate human-like cognitive processes in visual intelligence. Empirically, FaST outperforms various well-known baselines, achieving 80.8% accuracy on VQA v2 for visual question answering and a 48.7% GIoU score on ReasonSeg for reasoning segmentation. Extensive testing validates the efficacy and robustness of FaST's core components, showcasing its potential to advance the development of cognitive visual agents in AI systems. The code is available at https://github.com/GuangyanS/Sys2-LLaVA.
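The switch-adapter idea described above can be illustrated with a minimal sketch. Everything here (function names, the confidence heuristic, the 0.5 threshold) is an assumption for illustration, not the paper's actual implementation: a query is answered by a fast single-pass path when confidence is high, and escalated to a slower, deliberate path otherwise.

```python
# Hypothetical sketch of a System 1/2 switch (names, heuristics, and the
# threshold are illustrative assumptions, not FaST's real implementation).
from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    confidence: float


def fast_system1(query: str) -> Answer:
    # Stand-in for a direct, single-pass model response.
    conf = 0.9 if "cat" in query else 0.3  # toy confidence heuristic
    return Answer(text=f"fast answer to: {query}", confidence=conf)


def slow_system2(query: str) -> Answer:
    # Stand-in for deliberate multi-step reasoning with extra context.
    return Answer(text=f"deliberate answer to: {query}", confidence=0.8)


def switch_adapter(query: str, threshold: float = 0.5) -> Answer:
    # Try the fast path first; escalate to the slow path when the fast
    # path's confidence falls below the threshold.
    first = fast_system1(query)
    if first.confidence >= threshold:
        return first
    return slow_system2(query)
```

The design point is that the switch is itself cheap: it only inspects the fast path's confidence, so easy queries never pay the cost of deliberate reasoning.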

Guangyan Sun, Mingyu Jin, Zhenting Wang, Cheng-Long Wang, Siqi Ma, Qifan Wang, Tong Geng, Ying Nian Wu, Yongfeng Zhang, Dongfang Liu • 2024

Related benchmarks

Task                               Dataset           Metric     Result   Rank
Visual Question Answering          VQA v2            Accuracy   80.8     1165
Visual Question Answering          TextVQA           Accuracy   60.7     1117
Visual Question Answering          GQA               Accuracy   63.8     963
Object Hallucination Evaluation    POPE              --         --       935
Multimodal Evaluation              MME               Score      1520     557
Multimodal Capability Evaluation   MM-Vet            Score      31       282
Multimodal Evaluation              SEED-Bench        Accuracy   60.1     80
Referring Segmentation             RefCOCO (val)     cIoU       73.3     51
Referring Segmentation             RefCOCO+ (val)    cIoU       64.4     44
Visual Question Answering          ScienceQA image   Accuracy   68.9     33

(Showing 10 of 12 rows.)

Other info

Code: https://github.com/GuangyanS/Sys2-LLaVA
