HYVE: Hybrid Views for LLM Context Engineering over Machine Data

About

Machine data is central to observability and diagnosis in modern computing systems, appearing in logs, metrics, telemetry traces, and configuration snapshots. When provided to large language models (LLMs), this data typically arrives as a mixture of natural language and structured payloads such as JSON or Python/AST literals. Yet LLMs remain brittle on such inputs, particularly when they are long, deeply nested, and dominated by repetitive structure. We present HYVE (HYbrid ViEw), a framework for LLM context engineering for inputs containing large machine-data payloads, inspired by database management principles. HYVE surrounds model invocation with coordinated preprocessing and postprocessing, centered on a request-scoped datastore augmented with schema information. During preprocessing, HYVE detects repetitive structure in raw inputs, materializes it in the datastore, transforms it into hybrid columnar and row-oriented views, and selectively exposes only the most relevant representation to the LLM. During postprocessing, HYVE either returns the model output directly, queries the datastore to recover omitted information, or performs a bounded additional LLM call for SQL-augmented semantic synthesis. We evaluate HYVE on diverse real-world workloads spanning knowledge QA, chart generation, anomaly detection, and multi-step network troubleshooting. Across these benchmarks, HYVE reduces token usage by 50-90% while maintaining or improving output quality. On structured generation tasks, it improves chart-generation accuracy by up to 132% and reduces latency by up to 83%. Overall, HYVE offers a practical approximation to an effectively unbounded context window for prompts dominated by large machine-data payloads.
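The core token-saving idea in the abstract — collapsing repetitive structure into a columnar view so repeated keys are emitted only once — can be illustrated with a minimal sketch. This is an illustration of the general technique, not HYVE's implementation; the function name `to_columnar_view` and the delimiter format are assumptions for the example.

```python
import json

def to_columnar_view(records):
    """Collapse a schema-homogeneous list of JSON objects into a
    compact columnar view: one header row of keys, then one value
    row per record, so each key appears only once overall."""
    if not records:
        return ""
    columns = list(records[0].keys())
    # Only collapse records that share the exact same key order/schema;
    # heterogeneous payloads would need per-schema grouping first.
    if any(list(r.keys()) != columns for r in records):
        raise ValueError("records are not schema-homogeneous")
    header = " | ".join(columns)
    rows = [" | ".join(json.dumps(r[c]) for c in columns) for r in records]
    return "\n".join([header] + rows)

# A small log payload: the keys "ts", "level", "msg" repeat per record
# in raw JSON but appear exactly once in the columnar view.
logs = [
    {"ts": 1700000000, "level": "INFO", "msg": "start"},
    {"ts": 1700000005, "level": "WARN", "msg": "slow disk"},
    {"ts": 1700000009, "level": "INFO", "msg": "done"},
]
view = to_columnar_view(logs)
```

For long, repetitive payloads the savings grow with the number of records, since the per-record cost drops to values only; HYVE additionally keeps the full data in a request-scoped datastore so omitted detail can be recovered by query during postprocessing.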

Jian Tan, Fan Bu, Yuqing Gao, Dev Khanolkar, Jason Mackay, Boris Sobolev, Lei Jin, Li Zhang • 2026

Related benchmarks

Task                                     Dataset   Metric                Result   Rank
Question Answering                       Runbook   GenericJudge Score    4.58     4
Question Answering                       SUM       GenericJudge Score    4.46     4
Question Answering                       RB-Text   GenericJudge Score    4.89     4
Question Answering                       TOON-QA   Exact Match (EM)      98       4
Reasoning over Large Structured Context  Anom      ReasoningJudge Score  4.03     4
Reasoning over Large Structured Context  RB-JSON   ReasoningJudge Score  4.92     4
Reasoning over Large Structured Context  HARD      ReasoningJudge Score  5        4
Structured Chart Generation              Line      Similarity Score      0.97     4
Structured Chart Generation              Bar       Similarity Score      100      4
Question Answering                       Cert-QA   GenericJudge Score    4.49     4
(10 of 11 benchmark rows shown.)
