OmniThink: Expanding Knowledge Boundaries in Machine Writing through Thinking

Zekun Xi, Wenbiao Yin, Jizhan Fang, Jialong Wu, Runnan Fang,
Ningyu Zhang, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen
xizekun2023@zju.edu.cn
Zhejiang University
Tongyi Lab, Alibaba Group.

A demo of OmniThink. You can explore it here.

Abstract

Machine writing with large language models often relies on retrieval-augmented generation. However, these approaches remain confined within the boundaries of the model's predefined scope, limiting the generation of content rich in information. Specifically, vanilla retrieved information tends to lack depth and utility and suffers from redundancy, which degrades the quality of generated articles, leading to shallow, repetitive, and unoriginal outputs. To address these issues, we propose OmniThink, a machine writing framework that emulates the human-like process of iterative expansion and reflection. The core idea behind OmniThink is to simulate the cognitive behavior of learners as they progressively deepen their knowledge of a topic. Experimental results demonstrate that OmniThink improves the knowledge density of generated articles without compromising metrics such as coherence and depth. Human evaluations and expert feedback further highlight its potential to address real-world challenges in the generation of long-form articles.

🌟Overview

📚 We propose the Knowledge Density metric, defined as the ratio of meaningful, unique content to the overall text length. A high knowledge density ensures efficient knowledge transfer, minimizing reader fatigue caused by redundancy.

🤖 We propose OmniThink, a new machine writing framework that emulates the human-like cognitive process of iterative expansion and reflection. The core idea behind OmniThink is to simulate the cognitive behavior of learners as they gradually deepen their understanding of complex topics to expand knowledge boundaries.

📊 Experimental results demonstrate that OmniThink enhances the knowledge density of generated articles without compromising key metrics such as coherence and depth.

📚Knowledge Density Metric

Motivation. Previous studies of generated articles focus on relevance and correctness but overlook depth, often resulting in redundancy. To address this, we propose the Knowledge Density metric, defined as the ratio of meaningful, unique content to the overall text length. A high knowledge density ensures efficient knowledge transfer, minimizing reader fatigue caused by redundancy. However, existing methods struggle to optimize knowledge density because retrieved information in open-domain generation is often repetitive. By integrating reasoning and planning to extract diverse, non-overlapping knowledge, we aim to improve knowledge density in long-form generation.

Interpretation. The metric can be written as

\[ \text{KD} = \frac{\sum_{i=1}^{N} \mathcal{U}(k_i)}{L}, \]

where \(N\) is the total number of atomic knowledge units identified within the document, the function \(\mathcal{U}(k_i)\) indicates whether the \(i\)-th atomic knowledge unit \(k_i\) is unique, and \(L\) represents the total length of the text. The numerator sums the unique atomic knowledge units extracted from the article; the denominator is the article's length.
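To make the metric concrete, below is a minimal Python sketch of the computation, assuming the atomic knowledge units \(k_1, \ldots, k_N\) have already been extracted from the article (in practice this extraction would itself be LLM-based). The normalized-string uniqueness check is a stand-in for the semantic deduplication a real implementation would need.

```python
def knowledge_density(units: list[str], text: str) -> float:
    """KD = (number of unique atomic knowledge units) / total text length.

    `units` holds the already-extracted atomic knowledge units k_1..k_N;
    `text` is the full article, so len(text) plays the role of L.
    """
    seen: set[str] = set()
    unique = 0
    for unit in units:
        key = " ".join(unit.lower().split())  # crude normalization; a real
        if key not in seen:                   # system would deduplicate semantically
            seen.add(key)
            unique += 1                       # U(k_i) = 1 only on first occurrence
    return unique / max(len(text), 1)         # guard against an empty article
```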

🤖OmniThink

Information Acquisition. To acquire diverse and comprehensive information, OmniThink emulates the human learning process, progressively deepening its understanding of the topic through iterative Expansion and Reflection. This iterative process culminates in the construction of an information tree \(\mathcal{T}\), which organizes the retrieved information in a structured, hierarchical manner, and a conceptual pool \(\mathcal{P}\), which represents the LLM's current understanding of the topic at time step \(m\). Together, these components form the foundation of article generation.
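One possible way to represent these two structures is sketched below; this is illustrative, not the authors' implementation, and the field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node of the information tree T: a subtopic plus its retrieved documents."""
    topic: str
    documents: list[str] = field(default_factory=list)
    children: list["Node"] = field(default_factory=list)

    def leaves(self) -> list["Node"]:
        """Current leaf nodes, i.e. the candidates for the next expansion step."""
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

@dataclass
class ConceptualPool:
    """The pool P: the model's distilled understanding of the topic so far."""
    insights: list[str] = field(default_factory=list)

    def merge(self, new_insights: list[str]) -> None:
        """P_{m+1} = Merge(I_{m+1}, P_m), here a simple de-duplicating append."""
        for insight in new_insights:
            if insight not in self.insights:
                self.insights.append(insight)
```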

Outline Structuring. Throughout information acquisition, OmniThink maintains a conceptual pool closely related to the topic, which effectively represents the boundaries and depth of the LLM's understanding of that topic. To generate the content outline, we first create a draft outline \(O_D\) and then ask the LLM to refine it and link in content from the conceptual pool \(\mathcal{P}\), yielding the final outline \(O = \text{Polish}(O_D, \mathcal{P})\).
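A minimal sketch of this polishing step, reusing the ConceptualPool structure above and assuming a generic llm(prompt) completion helper; the prompt wording is illustrative, not the actual template.

```python
def polish_outline(draft_outline: str, pool: ConceptualPool, llm) -> str:
    """O = Polish(O_D, P): refine the draft outline against the conceptual pool."""
    prompt = (
        "You are refining an article outline.\n"
        f"Draft outline:\n{draft_outline}\n\n"
        "Concepts the article should cover:\n"
        + "\n".join(f"- {c}" for c in pool.insights)
        + "\n\nRewrite the outline so that every relevant concept is linked to a "
        "section, merging redundant sections and deepening shallow ones."
    )
    return llm(prompt)
```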

Article Composition. At this stage, the LLM writes the content of each section in parallel. When writing a section, we use its title and the titles of its hierarchical subsections to retrieve the \(K\) most relevant documents from the information tree by computing semantic similarity.
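The sketch below illustrates this stage under the same assumptions (the Node structure above, plus hypothetical embed and llm helpers), using cosine similarity for retrieval and a thread pool for the parallel section drafting.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / (norm + 1e-9)

def all_documents(node: Node) -> list[str]:
    """Flatten every document stored anywhere in the information tree."""
    docs = list(node.documents)
    for child in node.children:
        docs.extend(all_documents(child))
    return docs

def write_section(title: str, tree: Node, embed, llm, k: int = 5) -> str:
    """Draft one section, grounded in the K most similar documents."""
    query = embed(title)  # in the full method, subsection titles are included too
    ranked = sorted(all_documents(tree),
                    key=lambda d: cosine(embed(d), query), reverse=True)
    context = "\n\n".join(ranked[:k])
    return llm(f"Write the section '{title}' using only this material:\n{context}")

def compose_article(section_titles: list[str], tree: Node, embed, llm) -> list[str]:
    """Sections are independent given the outline, so draft them in parallel."""
    with ThreadPoolExecutor() as ex:
        return list(ex.map(lambda t: write_section(t, tree, embed, llm),
                           section_titles))
```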

🤖Expansion & Reflection

Expansion. At time step \(m\), OmniThink evaluates all leaf nodes \(L_m = \{ N_0, N_1, \ldots, N_n \}\) of the information tree \(\mathcal{T}_m\), storing them in the conceptual buffer \(\mathcal{P}_b\). Nodes requiring expansion are processed using the conceptual pool \(\mathcal{P}_m\) to identify suitable directions. For each node \(N_i\), \(k_{N_i}\) sub-nodes \(\text{SUB}(N_i) = \{ S_0, S_1, \ldots, S_{k_{N_i}} \}\) are generated, representing specific subtopics. Relevant information is retrieved and incorporated into the updated tree \(\mathcal{T}_{m+1}\) as:

\[ \mathcal{T}_{m+1} = \text{Combine}(\mathcal{T}_m, \text{SUB}(N_0), \ldots, \text{SUB}(N_n)). \]

This ensures comprehensive and in-depth content enrichment of the information tree.
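A sketch of one expansion step under the assumptions above; retrieve stands for any search function mapping a query to documents, and the subtopic prompt is illustrative.

```python
def expand(tree: Node, pool: ConceptualPool, llm, retrieve) -> None:
    """One expansion step: T_{m+1} = Combine(T_m, SUB(N_0), ..., SUB(N_n))."""
    for leaf in tree.leaves():  # snapshot of the current leaves L_m
        prompt = (
            f"Current subtopic: {leaf.topic}\n"
            f"Known concepts: {'; '.join(pool.insights)}\n"
            "List promising new subtopics to research next, one per line."
        )
        subtopics = [s.strip() for s in llm(prompt).splitlines() if s.strip()]
        for sub in subtopics:
            # each sub-node gets its own retrieved documents, forming SUB(N_i)
            leaf.children.append(Node(topic=sub, documents=retrieve(sub)))
```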

Reflection. OmniThink processes leaf nodes \(L_{m+1} = \{ N_0, \ldots, N_n \}\) by analyzing, filtering, and synthesizing the retrieved information into core insights \(I_{m+1} = \{ \text{INS}_0, \ldots, \text{INS}_n \}\). These insights update the conceptual pool \(\mathcal{P}_m\) as:

\[ \mathcal{P}_{m+1} = \text{Merge}(I_{m+1}, \mathcal{P}_m). \]

The updated conceptual pool \(\mathcal{P}_{m+1}\) supports further iterative expansion of the information tree.
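And a matching sketch of one reflection step, again with a hypothetical llm helper; alternating expand and reflect until a stopping criterion is met yields the final \(\mathcal{T}\) and \(\mathcal{P}\).

```python
def reflect(tree: Node, pool: ConceptualPool, llm) -> None:
    """One reflection step: distill insights I_{m+1} from the new leaves,
    then update the pool via P_{m+1} = Merge(I_{m+1}, P_m)."""
    for leaf in tree.leaves():
        if not leaf.documents:
            continue  # nothing new was retrieved under this leaf
        prompt = (
            "Extract the core, non-redundant insights from these documents, "
            "one per line:\n\n" + "\n\n".join(leaf.documents)
        )
        insights = [s.strip() for s in llm(prompt).splitlines() if s.strip()]
        pool.merge(insights)  # the updated pool steers the next expansion round
```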

📊Experiments

Main Results. The table presents the evaluation results on the WildSeek dataset with GPT-4o and Qwen-Plus as backbones. Across the four key grading criteria (Relevance, Breadth, Depth, and Novelty), OmniThink delivers strong performance with GPT-4o as its backbone, distinguishing itself particularly on the Novelty metric. This can be credited to OmniThink's reflective capabilities, which enable it to extract and thoroughly explore novel insights from existing knowledge. With Qwen-Plus as the backbone, OmniThink's performance sees a decline; however, it remains highly competitive. OmniThink's strength lies in its multifaceted and deep consideration of retrieved information, which gives it access to deeper layers of external knowledge. This multi-perspective approach enriches the diversity of citation sources, raising citation diversity above that of other methods. In terms of knowledge density, OmniThink employs a continuous, dynamic retrieval strategy to gather a wide array of information, allowing it to draw on a broader range of resources during content generation; this gives OmniThink an advantage on the knowledge density metric over existing baseline methods.

Expansion & Reflection Analysis. We further analyze how the expansion and reflection processes shape different aspects of the final articles and contribute to their overall quality. Given the interdependent nature of expansion and reflection in OmniThink, it is impractical to assess their individual impacts in isolation. We therefore adopt an indirect yet systematic approach to evaluate their influence on the final articles' quality. During the information acquisition phase, we substitute the model used for expansion with a lower-performing model and measure the resulting decline in the generated articles' metrics, using it as an indicator of how much the expansion process contributes to those metrics; the same procedure is applied to the reflection process. Specifically, we replace the model used for expansion or reflection, switching from Qwen-Plus to Qwen2.5-7B-Instruct, and observe the decline across the evaluation results. This lets us document the resulting changes in each evaluation metric, providing insight into how the expansion and reflection processes influence the articles' overall assessment.

Human Evaluation Results. To better understand the strengths and weaknesses of OmniThink, we engaged 15 well-educated volunteers to conduct a human evaluation; the figure presents the resulting scores. The findings indicate that OmniThink's average performance surpasses the strongest current baseline across various dimensions, with a notable 11% improvement on the Breadth metric compared to Co-STORM. On the Novelty metric, however, although automated evaluation shows an 11% improvement, human assessment reveals only a marginal advantage. This discrepancy suggests that current automated evaluation is not yet fully aligned with human judgment, highlighting a direction for future work on the evaluation of long texts. It should also be noted that, despite OmniThink's overall superior performance across dimensions, human evaluators judged approximately 30% of the articles to be equally as good as the baseline's. This may be because, as the foundational writing capabilities of large models improve, it becomes increasingly difficult for humans to discern subtle differences. Consequently, more rigorous and fine-grained evaluation methods are urgently needed to assess model performance accurately.