# InterviewSystem

**Repository Path**: zeron2022/InterviewSystem

## Basic Information

- **Project Name**: InterviewSystem
- **Description**: An LLM-based medical question-answering system
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 1
- **Created**: 2025-08-08
- **Last Updated**: 2025-08-08

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Interview Process and Q&A System

Edited by CHENYAN WU && Ying-XW

## Setup

- Clone the repository:
```
git clone https://github.com/asnbby/InterviewSystem.git
```
- Set up the conda environment:
```
conda create -n struct_llm python=3.9
conda activate struct_llm
```
- Install packages with the setup script:
```
bash setup.sh
```
- Install the PyTorch module. If you are on Linux or Windows:
```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```
If you are on macOS:
```
pip3 install torch torchvision torchaudio
```
- Download the embedding model from Hugging Face (we use stella_en_400M_v5 as our embedding and retrieval model):
```
git lfs clone https://hf-mirror.com/dunzhang/stella_en_400M_v5
```

## Before Use

- Make sure you have your API key, saved in a file named `api_key_${YourModelName}.txt` under `./api_key/`.
- Likewise, make sure your base URL is saved in a file named `url_${YourModelName}.txt` under `./openai_url/`.

## Run System

- Start the Chromadb service:
```
chroma run --path /YourPath/chromadb/SentenceBERT/interview --port 8000
```
- Start the system:
```
bash run.sh
```

## Acknowledgement

This project is intended for non-commercial use. If you want to use it commercially, please contact wuchenyan0823@gmail.com.

## Technical Report (The Approach section of the report)

### Work Introduction

In this section, I will briefly introduce the technical aspects of the project.
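The key/URL file convention described in the Before Use section can be sketched as a small loader. This is a minimal illustration, not the project's actual code; the function name `load_credentials` and the default root path are assumptions.

```python
from pathlib import Path


def load_credentials(model_name: str, root: str = ".") -> tuple[str, str]:
    """Read the API key and base URL for a model, following the
    `api_key_${YourModelName}.txt` / `url_${YourModelName}.txt` convention.

    Hypothetical helper for illustration; paths mirror the README layout.
    """
    api_key = Path(root, "api_key", f"api_key_{model_name}.txt").read_text().strip()
    base_url = Path(root, "openai_url", f"url_{model_name}.txt").read_text().strip()
    return api_key, base_url
```

For example, `load_credentials("gpt4")` would read `./api_key/api_key_gpt4.txt` and `./openai_url/url_gpt4.txt`.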
Our main contributions are summarized as follows:

1. We use Self-prompt Extraction to capture latent semantics from the interview corpus, transforming the source text into contexts, question-answer pairs, and summaries for retrieval.
2. We use an LLM-based Reranker module, leveraging the semantic understanding ability of the LLM to rank contexts, question-answer pairs, and summaries hierarchically, and finally fuse these rankings to obtain the final retrieval results.
3. Finally, we propose a Chatbot with a CoT module that fully utilizes the Reranker's ranking results and multi-level semantic guidance for the LLM to answer questions.

### Self-prompt Extraction

Self-prompt Extraction
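The fusion step in contribution 2 (combining the per-level rankings of contexts, QA pairs, and summaries into one retrieval result) can be sketched with reciprocal rank fusion. The report does not specify the exact fusion rule, so RRF here is an illustrative stand-in, and the document ids are made up.

```python
from collections import defaultdict


def fuse_rankings(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of document ids into one ranking using
    reciprocal rank fusion: each list contributes 1 / (k + rank) per doc."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# Hypothetical per-level rankings for the same three source documents.
context_rank = ["d2", "d1", "d3"]
qa_pair_rank = ["d1", "d2", "d3"]
summary_rank = ["d1", "d3", "d2"]
print(fuse_rankings([context_rank, qa_pair_rank, summary_rank]))
# → ['d1', 'd2', 'd3']
```

Documents ranked highly at several semantic levels accumulate the largest fused score, which matches the hierarchical intent described above.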