Knowledge Ply Chat

M Krishna Satya Varma; Koteswara Rao; Sai Ganesh; Venkat Sai Koushik; Rama Krishnam Raju

Publication Date: 2024/04/12

Abstract: Although large language models store vast amounts of information and excel at many NLP tasks after fine-tuning, they struggle to accurately access and update that knowledge, leaving a performance gap on knowledge-intensive tasks relative to domain-specific architectures. They also lack transparent decision-making processes and cannot easily revise their world knowledge. To mitigate these limitations, we propose a Retrieval-Augmented Generation (RAG) system built by adapting the Mistral 7B model specifically for RAG tasks. The training approach uses Parameter-Efficient Fine-Tuning (PEFT), which enables efficient adaptation of large pre-trained models to task-specific requirements while reducing computational cost. In addition, the system combines pre-trained embedding models for retrieval with pre-trained cross-encoders for reranking the retrieved information. By leveraging these state-of-the-art methodologies, the RAG system achieves strong performance on a range of NLP tasks such as question answering and summarization.
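
As a concrete illustration of the PEFT step described above, the sketch below attaches LoRA adapters to Mistral 7B using the Hugging Face transformers and peft libraries. The model ID and the hyperparameters (rank, alpha, target modules, dropout) are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal PEFT sketch: LoRA adapters on a frozen Mistral 7B base.
# Hyperparameter values here are assumptions for illustration only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# LoRA injects small trainable low-rank matrices into selected
# attention projections; the 7B base weights stay frozen.
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank updates
    lora_alpha=32,                         # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% trainable
```

Because only the adapter weights are trained, the same frozen base model can be adapted to different RAG tasks cheaply, which is the efficiency argument the abstract makes for PEFT.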

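The retrieve-then-rerank pipeline the abstract describes can likewise be sketched with the sentence-transformers library. The bi-encoder and cross-encoder checkpoints and the toy corpus below are placeholders, not the models or data used in the paper.

```python
# Sketch of two-stage retrieval: a bi-encoder fetches candidates cheaply,
# then a cross-encoder reranks them by scoring (query, passage) jointly.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

corpus = [
    "Mistral 7B is a 7-billion-parameter language model.",
    "PEFT adapts large models by training a small set of extra weights.",
    "Cross-encoders score a query-document pair jointly.",
]
query = "How can large models be adapted cheaply?"

# Stage 1: embed corpus and query, retrieve top candidates by similarity.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = bi_encoder.encode(corpus, convert_to_tensor=True)
query_emb = bi_encoder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=3)[0]

# Stage 2: the cross-encoder attends over query and passage together,
# which is slower but more accurate, so it only sees the short shortlist.
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [(query, corpus[hit["corpus_id"]]) for hit in hits]
scores = cross_encoder.predict(pairs)

for (q, passage), score in sorted(zip(pairs, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {passage}")
```

The division of labor is the point: the bi-encoder gives fast recall over the whole corpus, while the cross-encoder spends its joint-attention accuracy only on the few retrieved candidates.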
Keywords: RAG, PEFT, Cross-Encoders.

DOI: https://doi.org/10.38124/ijisrt/IJISRT24APR285

PDF: https://ijirst.demo4.arinfotech.co/assets/upload/files/IJISRT24APR285.pdf
