In this talk, we will present our work on VerifAI, an open-source biomedical question-answering system with a unique mechanism for verifying answers and detecting hallucinations in generated output. The sciences, and the life sciences in particular, have a low tolerance for non-factual information, which has made many practitioners skeptical of available tools such as ChatGPT. While providing references to information sources is a step in the right direction, it may not be enough: even a generated answer with references may contain hallucinations. We have therefore developed a set of methods that, on top of an advanced retrieval-augmented generation (RAG) pipeline combining lexical and semantic search with an LLM fine-tuned using a parameter-efficient method such as LoRA, verify the answer and detect any remaining hallucinations. To improve efficiency, we applied quantization to reduce latency and lower the hardware requirements for hosting the system. We will discuss these techniques in detail. The code, models, and datasets produced during the project have been released in an open-source and open-science manner.
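
The abstract describes the verification step only at a high level. As a rough illustration of how claim-level answer verification against retrieved sources can work in general, the sketch below checks each claim of a generated answer against the cited passages with an off-the-shelf natural language inference (NLI) model from Hugging Face. The model choice (microsoft/deberta-large-mnli), the function names, and the assumption that the answer has already been split into claims are all ours for illustration; this is not the VerifAI implementation.

```python
# Minimal sketch of NLI-based hallucination flagging (illustrative, not VerifAI's code).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: any public premise-hypothesis entailment model works here.
MODEL = "microsoft/deberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def is_entailed(premise: str, hypothesis: str) -> bool:
    """Return True if the source passage (premise) entails the claim (hypothesis)."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[logits.argmax(dim=-1).item()]
    return label.upper() == "ENTAILMENT"

def flag_hallucinations(claims: list[str], sources: list[str]) -> list[str]:
    """Flag claims that no retrieved source passage entails as potential hallucinations."""
    return [c for c in claims if not any(is_entailed(s, c) for s in sources)]
```

In a full pipeline, the flagged claims could be highlighted to the user or trigger regeneration; how VerifAI handles them is part of what the talk covers.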