LVLM-COUNT: Enhancing the Counting Ability of Large Vision-Language Models
About
Counting is a fundamental operation for many real-world visual tasks, requiring both object recognition and robust counting capabilities. Despite their advanced visual perception, large vision-language models (LVLMs) are known to struggle with counting. In this work, we evaluate several LVLMs on visual counting tasks across multiple counting and vision datasets. We observe that while they make relatively few errors when only a small number of objects is present, their accuracy degrades significantly as the object count increases. To alleviate this issue, we propose a simple yet effective baseline that enhances LVLMs' ability to count large numbers of objects using a divide-and-conquer approach. Our method decomposes a counting problem into sub-tasks over image regions. Moreover, it incorporates a mechanism that prevents objects from being split across region boundaries, which would otherwise lead to repetitive counting -- a common issue in naive divide-and-conquer implementations. We demonstrate the effectiveness of this approach across various datasets and benchmarks, establishing it as a valuable reference point for evaluating future solutions.
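The core idea above can be sketched in code. This is a minimal, hypothetical illustration (not the repository's actual implementation): the image is represented only by object bounding boxes, `count_region` stands in for an LVLM query on a cropped sub-image, and `find_safe_cut` models the anti-split mechanism by choosing a cut line near the midpoint that passes through no object, so no object is counted twice.

```python
def find_safe_cut(boxes, lo, hi):
    """Pick an x-coordinate in (lo, hi), as close to the midpoint as
    possible, that does not cross the interior of any box (x0, y0, x1, y1)."""
    mid = (lo + hi) / 2
    candidates = {b[0] for b in boxes} | {b[2] for b in boxes} | {mid}
    best = None
    for c in candidates:
        crosses = any(b[0] < c < b[2] for b in boxes)
        if lo < c < hi and not crosses:
            if best is None or abs(c - mid) < abs(best - mid):
                best = c
    return best  # None if every candidate cut would split an object


def count_objects(boxes, region, max_per_region=4):
    """Divide-and-conquer counting: recurse until a region holds few
    enough objects for a single (simulated) LVLM query."""
    x0, y0, x1, y1 = region
    # Objects fully inside this region; a safe cut never splits a box,
    # so each box belongs to exactly one sub-region.
    inside = [b for b in boxes
              if b[0] >= x0 and b[1] >= y0 and b[2] <= x1 and b[3] <= y1]
    if len(inside) <= max_per_region:
        return len(inside)  # stand-in for querying the LVLM on this crop
    cut = find_safe_cut(inside, x0, x1)
    if cut is None:
        return len(inside)  # no safe cut exists; fall back to one query
    left = count_objects(inside, (x0, y0, cut, y1), max_per_region)
    right = count_objects(inside, (cut, y0, x1, y1), max_per_region)
    return left + right
```

Because each recursive call only ever assigns a box to one side of a safe cut, the sub-counts sum to the true total, which is exactly the property a naive tiling scheme lacks.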
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Counting | FSC-147 (test) | MAE | 17.86 | 297 |
| Object Counting | Pascal VOC (test) | RMSE | 6.16 | 27 |
| Visual Counting | Penguin benchmark | MAE | 26.76 | 19 |
| Visual Counting | Emoji-Count (test) | MAE | 16.16 | 12 |
| Counting | Emoji-Count | MAE | 16.16 | 10 |
| Counting | FSC-147 (test) | MAE | 17.86 | 10 |