Hyperlinks:
Paper | Code | Data
LumberChunker lets an LLM decide where a long story should be split, creating more natural chunks that help Retrieval-Augmented Generation (RAG) systems retrieve the right information.
Introduction
Long-form narrative documents usually have an explicit structure, such as chapters or sections, but these units are often too broad for retrieval tasks. At a lower level, significant semantic shifts happen within these larger segments without any visible structural break. When we split text solely by formatting cues, like paragraphs or fixed token windows, passages that belong to the same narrative unit may be separated, while unrelated content can be grouped together. This misalignment between structure and meaning produces chunks that contain incomplete or mixed context, which reduces retrieval quality and hurts downstream RAG performance. As a result, segmentation should aim to create chunks that are semantically independent, rather than relying solely on document structure.
So how do we preserve the story's flow and still keep chunking practical?
In many cases, a reader can easily recognize where the narrative begins to shift, for example when the text moves to a different scene, introduces a new entity, or changes its purpose. The problem is that most automated chunking methods do not take this semantic signal into account and instead rely solely on surface structure. As a result, they may produce segmentations that look reasonable from a formatting perspective but break the underlying narrative coherence.
To make this concrete, consider the short passage below and identify the optimal chunking boundary!
1. Read the passage
The LumberChunker Method
In the example above, Option C provides the most coherent segmentation. The boundary aligns with the point where the narrative becomes semantically independent from the preceding context.
Our goal is to make this kind of segmentation decision practical at scale. The challenge is that human-quality boundary detection requires understanding narrative context, which is costly to apply across thousands of paragraphs in long-form documents.
LumberChunker approaches this by treating segmentation as a boundary-finding problem: given a short sequence of consecutive paragraphs, we ask a language model to identify the earliest point where the content clearly shifts. This formulation allows segments to vary in length while remaining aligned with the underlying narrative structure. In practice, LumberChunker consists of these steps:
1) Document Paragraph Extraction
Cleanly split the book into paragraphs and assign stable IDs (ID:1, ID:2, …). This preserves the document's natural discourse units and gives us stable candidate boundaries.
Example: From a novel, we extract:
- ID:1 "The morning sun filtered through the dusty windows…"
- ID:2 "She walked slowly to the door, hesitating…"
- ID:3 "Meanwhile, across town, Detective Morrison reviewed the case files…"
- ID:4 "The previous night's events had left him puzzled…"

Each paragraph gets a unique ID for tracking boundaries.
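The extraction step can be sketched as follows. This is a minimal sketch, not the paper's exact pipeline: the blank-line paragraph rule and the function name are our assumptions.

```python
import re

def extract_paragraphs(text):
    """Split raw text into paragraphs (assumed blank-line separated) with stable IDs."""
    blocks = [b.strip() for b in re.split(r"\n\s*\n", text)]
    return [{"id": i + 1, "text": b} for i, b in enumerate(b for b in blocks if b)]

book = ("The morning sun filtered through the dusty windows...\n\n"
        "She walked slowly to the door, hesitating...")
paragraphs = extract_paragraphs(book)
# paragraphs[0] -> {'id': 1, 'text': 'The morning sun filtered through the dusty windows...'}
```

Real books need more cleaning (headers, page numbers, hyphenation), but any splitter works as long as the IDs stay stable.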
2) IDs Grouping for LLM
Build a group G_i by appending paragraphs until the group's length reaches a token budget θ. This provides enough context for the model to assess when a topic or scene actually shifts.
Example: With θ = 550 tokens, we might build:
G_1 = [ID:1, ID:2, ID:3, ID:4, ID:5, ID:6]
This window, by spanning multiple paragraphs, increases the chance that at least one meaningful narrative shift is present within the context.
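Grouping under the token budget can be sketched like this; whitespace word counts stand in for a real tokenizer, and the function name is ours:

```python
def build_group(paragraphs, start, theta=550):
    """Build G_i: append paragraph IDs from `start` until the token budget theta
    is reached. Tokens are approximated by whitespace splitting; a real pipeline
    would use the LLM's own tokenizer."""
    group, tokens = [], 0
    for para in paragraphs[start:]:
        group.append(para["id"])
        tokens += len(para["text"].split())
        if tokens >= theta:
            break
    return group

# Eight paragraphs of ~90 words each: the budget is hit at the 7th (7 * 90 = 630 >= 550).
paragraphs = [{"id": i + 1, "text": "word " * 90} for i in range(8)]
build_group(paragraphs, 0)  # -> [1, 2, 3, 4, 5, 6, 7]
```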
3) LLM Query
Prompt the model with the paragraphs in G_i and ask it to return the first paragraph where the content clearly changes relative to what came before. Use that returned ID as the chunk boundary; start the next group at that paragraph and repeat until the end of the book.
Example: Given G_1 = [p1, p2, p3, p4, p5, p6], the LLM responds: p3
Answer Extraction:
We extract p3 as the boundary. This creates:
- Chunk 1: [p1, p2]
- Next group (G_2) starts at p3
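Putting the steps together, the iterative loop can be sketched as below. `query_llm` is a stand-in for the actual prompted model call, and the whitespace token count is an approximation:

```python
def chunk_book(paragraphs, query_llm, theta=550):
    """LumberChunker loop sketch: build group G_i up to ~theta tokens, ask the LLM
    for the first paragraph where content clearly shifts, cut the chunk there,
    and restart the next group at that paragraph."""
    chunks, start = [], 0
    while start < len(paragraphs):
        # Build G_i by appending paragraphs until the token budget is reached.
        group, tokens = [], 0
        for para in paragraphs[start:]:
            group.append(para)
            tokens += len(para["text"].split())  # crude token estimate
            if tokens >= theta:
                break
        boundary_id = query_llm(group)  # ID of the first clearly-shifted paragraph
        cut = next((i for i, p in enumerate(group) if p["id"] == boundary_id), 0)
        if cut == 0:  # boundary at the very start or not found: keep the whole group
            cut = len(group)
        chunks.append([p["id"] for p in group[:cut]])
        start += cut
    return chunks

# Toy run: six 100-word paragraphs; a fake LLM always flags the third paragraph shown.
paragraphs = [{"id": i + 1, "text": "word " * 100} for i in range(6)]
fake_llm = lambda g: g[2]["id"] if len(g) > 2 else g[0]["id"]
chunk_book(paragraphs, fake_llm)  # -> [[1, 2], [3, 4], [5, 6]]
```

The fallback when the model returns the first paragraph (or an unknown ID) is our own guard; the paper's prompt and parsing details may differ.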
GutenQA: A Benchmark for Long-Form Narrative Retrieval
To evaluate our chunking approach, we introduce GutenQA, a benchmark of 100 carefully cleaned public-domain books paired with 3,000 needle-in-a-haystack style questions. This allows us to measure retrieval quality directly and then observe how better retrieval leads to more accurate answers in a RAG system.
Key Findings
Retrieval: LumberChunker leads ⭐
LumberChunker leads across both DCG@k and Recall@k. By k=20, it reaches DCG ≈ 62.1% and Recall ≈ 77.9%, showing that better segmentation improves not only which passages appear first, but also how reliably the correct context is retrieved.
Retrieval Performance Comparison (DCG@k)

| Method | k=1 | k=2 | k=5 | k=10 | k=20 |
|---|---|---|---|---|---|
| Semantic Chunking | 29.50 | 35.31 | 40.67 | 43.14 | 44.74 |
| Paragraph-Level | 36.54 | 42.11 | 45.87 | 47.72 | 49.00 |
| Recursive Chunking | 39.04 | 45.37 | 50.66 | 53.25 | 54.72 |
| HyDE† | 33.47 | 39.74 | 45.06 | 48.14 | 49.92 |
| Proposition-Level | 36.91 | 42.42 | 44.88 | 45.65 | 46.19 |
| LumberChunker | 48.28 | 54.86 | 59.37 | 60.99 | 62.09 |
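For reference, DCG@k with binary relevance (one gold passage per question) reduces to a discounted hit score; a sketch, with a function name of our choosing:

```python
import math

def dcg_at_k(relevances, k):
    """Binary-relevance DCG@k: sum of rel_i / log2(i + 1) over ranks i = 1..k."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

# One question whose gold passage is retrieved at rank 3:
dcg_at_k([0, 0, 1, 0, 0], k=5)  # -> 1 / log2(4) = 0.5
```

The table's figures would be such per-question scores averaged over the benchmark and expressed as percentages, presumably; see the paper for the exact evaluation protocol.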
Downstream QA: Targeted Retrieval Outperforms Large Context Windows
We find that even with very large context windows, a non-retrieval setup still performs worse than RAG, showing that selecting focused, relevant passages is more effective than simply increasing the amount of raw context. Under this setting, when integrated into a standard RAG pipeline on a GutenQA subset, our RAG-LumberChunker is second only to RAG-Book, which uses hand-segmented ground-truth chunks.
Downstream QA Accuracy (%)
A Sweet Spot Around θ ≈ 550 Tokens
We sweep θ ∈ [450, 1000] tokens and find that θ ≈ 550 consistently maximizes retrieval quality: large enough for context, small enough to keep the model focused on the current turn in the story.
DCG@k vs. Token Budget (θ)
This does not mean the resulting chunks are large. In practice, as the table shows, the average chunk size is about 334 tokens, meaning that LumberChunker often detects a semantic shift well before the window is exhausted.
| Method | Avg. #Tokens / Chunk | Total #Chunks |
|---|---|---|
| Semantic Chunking | 185 tokens | 191,059 |
| Paragraph-Level | 79 tokens | 248,307 |
| Recursive Chunking | 399 tokens | 31,787 |
| Proposition-Level | 12 tokens | 914,493 |
| LumberChunker | 334 tokens | 36,917 |
Conclusion
LumberChunker reframes document chunking as a semantic boundary detection problem. Instead of relying on fixed token limits or surface structure, it uses a rolling context window to identify the earliest point where the meaning of the text becomes independent from what came before, producing segments that better align with the underlying narrative structure.
On the GutenQA benchmark, LumberChunker consistently improves retrieval and downstream QA over traditional fixed-size and recursive methods, approaching the quality of manual, human-curated segmentations.
These results suggest that segmentation is not just a preprocessing step, but a core design choice for retrieval systems. By creating semantically independent chunks, LumberChunker provides a practical way to improve how long-form documents are retrieved and used in RAG pipelines.
Citation
If you find LumberChunker useful in your research, please consider citing:
@inproceedings{duarte-etal-2024-lumberchunker,
    title = "{L}umber{C}hunker: Long-Form Narrative Document Segmentation",
    author = "Duarte, Andr{\'e} V. and Marques, Jo{\~a}o DS and Gra{\c{c}}a, Miguel and Freire, Miguel and Li, Lei and Oliveira, Arlindo L.",
    editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.377/",
    doi = "10.18653/v1/2024.findings-emnlp.377",
    pages = "6473--6486",
    abstract = "LumberChunker reframes document chunking as a semantic boundary detection problem..."
}
Blog created by Raymond Jiang and André Duarte






