Overall ambition:
We should be able to parse a PDF such that we can extract the following structure from it.
It includes the following key capabilities:
- Ability to process PDFs in multiple languages: English, Odia, and Hindi.
- Ability to create chunks with headings based on how the PDF is structured. We should be able to recognize which pieces of text are headings and which are content, and then convert that into the structure above.
- Ability to process images and tables and convert them into chunks that can be passed to an LLM to answer questions based on them.
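One plausible shape for such a chunk record is sketched below. This is purely illustrative; the actual target structure referenced in this issue is not reproduced here, and every field name is an assumption:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    heading: str          # nearest heading governing this text (assumed field)
    content: str          # body text, table text, or image caption (assumed field)
    kind: str = "text"    # one of "text" | "table" | "image" (assumed values)
    language: str = "en"  # e.g. "en", "or" (Odia), "hi" (Hindi)
```

A record like this keeps heading context attached to each piece of content, which is what lets an LLM answer questions about tables and images alongside plain text.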
Where are we on this now:
Chunking
Free text chunking:
We are able to chunk free text (unstructured text) here
Structured PDF chunking
We have looked at 2 approaches for chunking text:
- Using DeepDoc detection to extract the text, headings, and structure of each page and convert it into a JSON format: here
- Using PyMuPDF to get the bounding boxes of the text from the PDF and then using those to identify the headings and the content pieces: here
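The PyMuPDF approach can be sketched roughly as follows. This is a minimal illustration, not code from this repo: the function names and the 1.2× heading threshold are assumptions, and it assumes `pip install pymupdf`:

```python
def extract_spans(pdf_path):
    """Collect every text span with its font size, using PyMuPDF."""
    import fitz  # PyMuPDF: pip install pymupdf
    spans = []
    with fitz.open(pdf_path) as doc:
        for page_no, page in enumerate(doc):
            for block in page.get_text("dict")["blocks"]:
                if block["type"] != 0:  # 0 = text block, 1 = image block
                    continue
                for line in block["lines"]:
                    for span in line["spans"]:
                        spans.append({"text": span["text"],
                                      "size": span["size"],
                                      "page": page_no})
    return spans

def classify_spans(spans):
    """Label spans noticeably larger than the body font as headings."""
    if not spans:
        return []
    sizes = sorted(s["size"] for s in spans)
    body_size = sizes[len(sizes) // 2]  # median approximates the body text size
    return [{**s, "role": "heading" if s["size"] > 1.2 * body_size else "content"}
            for s in spans]
```

The idea is that in most PDFs the body font dominates, so the median span size is a cheap proxy for "content", and anything meaningfully larger is a heading candidate; a real implementation would also use bounding-box position and font weight.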
What is a good chunk
- It should be around 100 to 200 words.
- The text in a chunk should stay on a single topic that makes semantic sense.
- The text/topic of a chunk should be distinct from that of other chunks.
- Ideally it should cover a small topic in its entirety. It may cover multiple small topics, but those topics should not also be part of some other chunk.
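One way to enforce the size criterion above is to greedily merge undersized chunks with their neighbours until each reaches the lower bound. This is a minimal sketch under that assumption; the function name and the greedy strategy are illustrative, not the project's actual code:

```python
def merge_small_chunks(chunks, lo=100):
    """Greedily merge consecutive chunks until each has at least `lo` words."""
    merged, buf = [], ""
    for text in chunks:
        buf = (buf + " " + text).strip() if buf else text
        if len(buf.split()) >= lo:
            merged.append(buf)
            buf = ""
    if buf:  # leftover undersized tail: fold it into the last chunk
        if merged:
            merged[-1] = merged[-1] + " " + buf
        else:
            merged.append(buf)
    return merged
```

Merging only adjacent chunks preserves reading order, which matters because a heading and the content under it should end up in the same chunk rather than scattered across several.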
For example:
Bad chunk:
Here is a list of links :
Cab booking : http:/sdjnsdkgj/
Hotel form : http:/sfjgkjnfsgn/
This is a bad chunk because:
- The chunk is small in size.
- The links cover multiple topics at once. The cab booking link should be part of the chunk that describes how to book a cab; similarly, the hotel form should be part of the chunk about booking a hotel.