Sometimes It’s All About The Chunking

Michael Ruminer
4 min read · Oct 8, 2024


An AI-generated picture of gold-colored chunks of rock or gold in a small pile on a table

As I continue my study and experimentation with coding AI solutions, especially, at the moment, Retrieval-Augmented Generation (RAG), I decided to work through a post from the Metadocs blog titled “Simple Agentic RAG for Multi Vector stores with LangChain and LangGraph”. It seemed it would cover two areas of interest: agentic operations and RAG. Little did I expect to learn a valuable lesson in chunking. In this post I’ll pass along the obvious but well-demonstrated lesson I picked up in the process.

It started with the aforementioned post, which referred to an earlier post it built upon. Following a link to that prerequisite post, “Create a Langchain app with multiple vector store the easy way”, I found it in turn referenced an even earlier post as a prerequisite. So down that rabbit hole I went. The earliest of the three posts was “Deploy a RAG app with Langchain in minutes”. I read it and found a very simple RAG application. I coded it up, making sure I understood each line along the way. Most notable was that it split the text into chunks on “\n\n”. I hadn’t looked at the source document they provided as a sample. It turned out to be a text file of a US State of the Union address with a twist: each sentence was followed by “\n\n” (two newline characters), an interesting if unrealistic formatting. I have my own example PDF that I have been using to test various RAG implementations against a specific prompt, so I copied the two paragraphs from that document containing the context I was after and formatted them with “\n\n” after each sentence. Normally I extract the text from the PDF in code and then chunk it, usually with recursive character text splitting, but I didn’t want to change this program since I was going to build on it. When done, the results actually returned what I was after: a list of the 10 principles of SSI.
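For reference, here is a minimal sketch of that style of split, assuming LangChain’s CharacterTextSplitter; the file name principles.txt is a placeholder for my two-paragraph sample, and the Metadocs post’s exact code may differ:

```python
from langchain_text_splitters import CharacterTextSplitter

# Hypothetical sample file: sentences separated by "\n\n", as in the post's data
with open("principles.txt", encoding="utf-8") as f:
    text = f.read()

# CharacterTextSplitter splits on the separator, then packs the pieces
# back into chunks of up to chunk_size characters
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=1000, chunk_overlap=0)
chunks = splitter.split_text(text)
print(f"{len(chunks)} chunks; first starts: {chunks[0][:60]!r}")
```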

For no apparent reason, I decided to edit the text file and format it with a single “\n” after each paragraph. This should have returned the same results once I edited the text split to match. It didn’t. I was, and still am, perplexed by this. It makes no sense that splitting on a double newline should return different results than splitting on a single newline under these circumstances. I plan to revisit this, as I believe I must be wrong, despite trying multiple times. What mattered most in the process was that with the right chunking, as simple as it was, I got the desired results when prompted, whereas in all my past attempts it had failed. There were differences: I was reading a text file rather than using a PDF text extractor, and I was using only two paragraphs focused on the context I wanted rather than 70 pages of an academic paper, which is probably very hard to extract cleanly even if the relevant context sits in two clean paragraphs within that PDF. The real lesson for me is how important chunking is. I suspect the major differential in the success was the chunk divisions, though I won’t rule out the simpler source document as a contributor.
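When I revisit this, a side-by-side check of the two separators is easy to script. This is a hypothetical harness, not the post’s code, with toy sentences standing in for my two SSI paragraphs:

```python
from langchain_text_splitters import CharacterTextSplitter

# Toy stand-ins for the real text, in both formattings
text_double = "First sentence.\n\nSecond sentence.\n\nThird sentence."
text_single = text_double.replace("\n\n", "\n")

# A small chunk_size forces one sentence per chunk in both cases,
# so the two separators should yield identical chunk contents
for sep, text in (("\n\n", text_double), ("\n", text_single)):
    splitter = CharacterTextSplitter(separator=sep, chunk_size=30, chunk_overlap=0)
    print(repr(sep), splitter.split_text(text))
```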

Next, I plan to try a few things in this naive RAG implementation before I move on to the multi-vector store: a PDF of just the two paragraphs that contain the needed context, split on paragraphs (“\n”), to see how that comes out. I’ll also try the two paragraphs in a PDF with the text extracted and chunked using RecursiveCharacterTextSplitter with separators=["\n\n", "\n", ". ", " ", ""], a chunk size of 1000, and two different overlap settings (0 and 200), as well as with SentenceTransformerEmbeddingFunction and then the default OpenAI embedding function. Let’s see how all those combinations work.
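A sketch of that planned splitter matrix might look like the following, assuming LangChain’s RecursiveCharacterTextSplitter and pypdf for extraction; the file name two_paragraphs.pdf is a placeholder:

```python
from pypdf import PdfReader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Hypothetical file: a PDF containing just the two relevant paragraphs
reader = PdfReader("two_paragraphs.pdf")
text = "\n".join(page.extract_text() for page in reader.pages)

# Try both overlap settings with the separator cascade from the plan above
for overlap in (0, 200):
    splitter = RecursiveCharacterTextSplitter(
        separators=["\n\n", "\n", ". ", " ", ""],
        chunk_size=1000,
        chunk_overlap=overlap,
    )
    chunks = splitter.split_text(text)
    print(f"overlap={overlap}: {len(chunks)} chunks")
```

The embedding comparison happens at indexing time, independent of the splitter settings, so it multiplies rather than replaces these runs.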

To recap: though I can’t explain why I got wildly different results depending on the parsing character once the text file format changed, I suspect the simple chunking by sentence made a lot of difference. The other likely impactful factor was clean, simple text versus PDF-extracted text. I plan to experiment more and will report back the results. A takeaway for me, even if chunking was not entirely or even primarily the impactful element, was how important it is for good results.


Written by Michael Ruminer

My most recent posts are on AI, especially from the perspective of a (currently) non-AI tech worker. did:web:manicprogrammer.github.io
