Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models
About
Multi-modality foundation models, as represented by GPT-4V, have brought a new paradigm for low-level visual perception and understanding tasks, responding to a broad range of natural human instructions within a single model. While existing foundation models have shown exciting potential on low-level visual tasks, their related abilities are still preliminary and need to be improved. To enhance these models, we conduct a large-scale subjective experiment collecting a vast amount of real human feedback on low-level vision. Each feedback item follows a pathway that starts with a detailed description of the low-level visual appearance (*e.g. clarity, color, brightness*) of an image and ends with an overall conclusion, with an average length of 45 words. The constructed **Q-Pathway** dataset includes 58K detailed human feedback items on 18,973 images with diverse low-level appearance. Moreover, to enable foundation models to robustly respond to diverse types of questions, we design a GPT-participated conversion to process this feedback into 200K instruction-response pairs in diverse formats. Experimental results indicate that **Q-Instruct** consistently elevates low-level perception and understanding abilities across several foundation models. We anticipate that our datasets can pave the way for a future in which general intelligence can perceive and understand low-level visual appearance and evaluate visual quality like a human. Our dataset, model zoo, and demo are published at: https://q-future.github.io/Q-Instruct.
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Quality Assessment | SPAQ | SRCC | 0.3229 | 191 |
| Image Quality Assessment | CSIQ | SRCC | 0.3641 | 138 |
| Image Quality Assessment | LIVE | SRCC | 0.4938 | 96 |
| Image Quality Assessment | KonIQ | SRCC | 0.1191 | 82 |
| Vision Question Answering | Q-Bench LLVisionQA 1.0 (dev) | Yes-or-No Score | 76.18 | 20 |
| Image Quality Comparison | PIPAL | Accuracy | 60.6 | 16 |
| Image Quality Comparison | LIVE-C | Accuracy | 56.68 | 16 |
| Image Quality Comparison | AGIQA | Accuracy | 62.91 | 16 |
| Video Quality Comparison | KoNViD-1k | Accuracy | 56.5 | 15 |
| Video Quality Comparison | VDPVE | Accuracy | 54.18 | 15 |
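The image quality assessment rows above report SRCC (Spearman rank correlation coefficient), which measures how well the rank order of predicted quality scores matches the rank order of human mean opinion scores. As a minimal sketch of how this metric is computed (a plain-Python equivalent of library routines such as `scipy.stats.spearmanr`; the example score lists are made up for illustration):

```python
def ranks(values):
    """Return 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def srcc(pred, mos):
    """Spearman rank correlation = Pearson correlation of the two rank lists."""
    rx, ry = ranks(pred), ranks(mos)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Perfect monotonic agreement between predictions and human scores gives 1.0
print(srcc([0.1, 0.4, 0.2, 0.9], [1.0, 3.0, 2.0, 5.0]))  # → 1.0
```

Because SRCC depends only on rank order, a model's raw scores need not live on the same scale as the human opinion scores, which is why it is the standard metric for these benchmarks.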