feat: remove sample data and support cicd

Author: Ivan087
Date: 2025-09-18 10:55:27 +08:00
parent ced10e4447
commit 913fc07bc8
37 changed files with 105 additions and 50575 deletions

35
.github/workflows/ci-cd.yaml vendored Normal file

@@ -0,0 +1,35 @@
name: CI/CD Pipeline

on:
  push:
    branches: [ docker ]
  # pull_request:
  #   branches: [ main ]

env:
  VERSION: 0.0.1
  REGISTRY: https://harbor.bwgdi.com
  REGISTRY_NAME: harbor.bwgdi.com
  IMAGE_NAME: jarvis-models

jobs:
  build-docker:
    runs-on: self-hosted
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Login to registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ secrets.BWGDI_NAME }}
          password: ${{ secrets.BWGDI_TOKEN }}

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ env.REGISTRY_NAME }}/library/${{ env.IMAGE_NAME }}:${{ env.VERSION }}
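The image reference assembled by the Build and push step is plain string interpolation over the env block; a quick Python sketch of the substitution (illustration only, not Actions syntax):

```python
# Values copied from the workflow's env block.
env = {
    "VERSION": "0.0.1",
    "REGISTRY_NAME": "harbor.bwgdi.com",
    "IMAGE_NAME": "jarvis-models",
}

# tags: ${{ env.REGISTRY_NAME }}/library/${{ env.IMAGE_NAME }}:${{ env.VERSION }}
tag = f"{env['REGISTRY_NAME']}/library/{env['IMAGE_NAME']}:{env['VERSION']}"
print(tag)  # harbor.bwgdi.com/library/jarvis-models:0.0.1
```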

0
= Normal file

40
Dockerfile Normal file

@@ -0,0 +1,40 @@
FROM python:3.10-slim
# Set working directory
WORKDIR /jarvis-models
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Install system dependencies
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
build-essential \
curl \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements first to leverage Docker cache
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir --upgrade pip \
&& pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Create non-root user
# RUN adduser --disabled-password --gecos '' appuser \
#     && chown -R appuser:appuser /jarvis-models
# USER appuser
# Expose port
EXPOSE 8000
# Health check
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8000/health || exit 1
# Run the application
CMD ["python", "main.py"]
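The HEALTHCHECK assumes the application answers GET /health on port 8000. The real main.py is not shown in this diff, so purely as an illustration of the contract, a stdlib-only sketch of such an endpoint:

```python
import http.server
import json
import threading
import urllib.request

class HealthHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # Respond 200 with a small JSON body, which satisfies `curl -f`.
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        # Silence per-request logging for the demo.
        pass

# Bind to an ephemeral port for the demo; the container would listen on 8000.
server = http.server.HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    status = resp.status
    payload = json.loads(resp.read())
server.shutdown()
print(status, payload)  # 200 {'status': 'ok'}
```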


@@ -11,3 +11,4 @@ langchain-community==0.0.36
sentence-transformers==2.7.0
openai
python-logging-loki
sse-starlette

3 file diffs suppressed because one or more lines are too long


@ -1,79 +0,0 @@
Brain-computer interface (BCI) technology establishes a direct connection between the brain and external devices, enabling information exchange and control between the brain and computers or other electronics. It lets users control external devices through mental activity such as EEG signals and neuronal activity, while external devices can in turn send signals to the brain to improve brain function or provide feedback. BCIs fall into two main types, non-invasive and invasive, and have broad application prospects across medicine, entertainment, education, the military, industry, and daily life.
An invasive brain-computer interface implants electrodes directly into the cerebral cortex to record neural signals. Electrodes or microchips are placed inside the brain to capture neuronal activity more directly; because the sensors sit closer to the source, invasive interfaces deliver higher-quality, more detailed neural signals. However, they carry surgical risks, and long-term implantation can trigger immune reactions or other complications.
A non-invasive brain-computer interface interacts with the brain without surgical implantation. It typically records EEG signals through electrodes placed on the scalp to communicate with and control external devices, without penetrating the skin or skull. Non-invasive interfaces are comparatively safe and easy to use, with no risk of infection or tissue damage, but the signals they capture are of lower quality and are easily disturbed by external interference.
Dry electrodes measure bioelectric signals without requiring conductive gel. They make direct contact with the skin through a conductive surface, are usually made of conductive polymers, and are flexible enough to conform closely to the skin. Dry electrodes have broad application prospects in EEG and ECG acquisition and in non-invasive brain-computer interfaces.
BCI applications span many scenarios: in medicine, BCIs can help patients with high-level paraplegia regain motor function; in education, they can monitor attention and cognitive state to help improve classroom efficiency; in entertainment and gaming, they enable new interaction modes such as controlling game characters by thought for a more immersive experience; in the military, they may enhance soldiers' cognition and reaction speed or control military equipment; and in manufacturing, scientific research, and other fields they also play an important role in improving efficiency.
In medicine, BCI applications include: first, motor rehabilitation, helping patients with movement disorders caused by stroke, spinal cord injury, or amyotrophic lateral sclerosis recover or improve motor function; second, communication assistance, giving patients who have lost the ability to speak due to illness or injury a way to communicate through thought; third, sensory restoration, with the potential to restore or substitute for vision, hearing, and other senses by stimulating the brain directly; fourth, treatment of neurological diseases such as Parkinson's disease and epilepsy, where BCIs can provide more precise deep-brain stimulation control.
In education, BCI applications include: first, personalized learning, monitoring students' brain activity to analyze their learning state and comprehension, enabling tailored teaching plans and better learning efficiency; second, cognitive training, using neurofeedback to help students improve attention, memory, and other cognitive skills; third, special education needs, where BCIs can provide customized interventions for students with autism or ADHD to improve focus, behavior, and learning ability.
In entertainment and gaming, BCI applications include thought-controlled games: through a BCI, players can control in-game characters or objects by intention for a more intuitive, immersive experience. For example, Noland, Neuralink's first human volunteer, used an implanted BCI to play chess and Civilization VI by thought. BCI technology can also collect players' emotional data during play and automatically adjust music, enemies, and levels accordingly, delivering well-balanced, highly engaging game content.
In the military, BCIs are a frontier still being explored. Related research includes: first, controlling unmanned systems, helping operators command drones, unmanned vehicles, and combat robots more accurately and efficiently, even for missions in dangerous areas or high-risk situations; second, military communication, where BCIs could enable more efficient and more secure communication and, if successfully applied, would upend current communication systems, force structures, and employment methods; third, enhancing military personnel, potentially helping soldiers acquire and process battlefield information faster and improving their reaction speed.
In industry, BCI applications are still at an exploratory stage but show potential: first, human-machine collaboration, where workers could direct robotic arms through EEG signals for precise operation during manufacturing, improving productivity and safety; second, mental-state monitoring, tracking front-line workers' fatigue and similar conditions to safeguard production. Practical deployment in manufacturing still faces challenges in technical maturity and cost-effectiveness, but as the technology develops it is expected to play an important role.
In science, BCI applications are many-sided: first, basic neuroscience research, where BCIs offer a unique means of monitoring and decoding brain activity, helping scientists understand more deeply how the brain processes information, controls movement, and produces language; second, interdisciplinary research, since BCI development requires deep integration of neuroscience, engineering, artificial intelligence, and other disciplines, fostering exchange and collaboration across fields; third, standardization, which is key to advancing the technology; relevant bodies are drafting standardized operating procedures and efficacy-evaluation methods for BCI applications in healthcare.
BCI history traces back to the 1920s, when German psychiatrist Hans Berger first discovered the electroencephalogram. In the 1970s, scientists first attempted to use brain waves to control external devices. In the 1990s, BCI technology entered medicine, helping people with disabilities recover some function. Since the early 21st century, advances in computing and sensor technology have driven rapid progress; BCIs can now achieve more precise and complex control and show enormous potential across many fields.
BCI safety concerns center on surgical risk and privacy leakage. Surgical risk refers to invasive BCIs, which require surgically implanted electrodes or chips that can cause tissue damage, infection, and rejection by the body. Privacy risk arises because BCIs read brain signals involving personal data, which must be protected against attack and leakage to ensure data security and user privacy. Addressing these issues requires interdisciplinary collaboration and stronger laws and regulations alongside technical progress.
Are BCIs expensive? Current prices are relatively high and mainly target research and medical markets, but as the technology develops and the supply chain matures, prices are expected to fall gradually and the market to keep growing. The non-invasive BCI headband developed by 小舟科技 is a consumer product positioned for more affordable future pricing.
Mind uploading, as a BCI technology, currently exists only in science fiction. In The Matrix, for example, BCIs are depicted as enabling extremely advanced brain-machine interaction, including downloading skills, knowledge, and memories directly into the brain. Due to technical limitations, real-world BCI technology is still far from this level.
小舟科技 (Co., Ltd.) is a BCI technology R&D company invested in and controlled by 博维智慧 (HK:01204). Since its founding, the company has completed an integrated hardware-software brain-computer interaction solution with independent intellectual property, combining BCIs with AI technology to empower smart control, in-vehicle interaction, entertainment and gaming, healthcare, and other fields. It aims to become a world-leading BCI technology company; its headquarters sit in the strategically positioned Hengqin Guangdong-Macao In-Depth Cooperation Zone, with R&D centers established in Zhuhai High-tech Zone and Beijing's Haidian District.
博维智慧 is a leading enterprise in Macao's IT solutions market. For more than a decade it has provided high-quality, reliable, end-to-end enterprise IT solutions, serving leading companies in telecom, media, technology, and hospitality as well as government and public-sector clients, with business across Macao, Hong Kong, and mainland China. In recent years it has actively developed technology-innovation businesses covering brain-computer interfaces, artificial intelligence, and digital technology. Driven by frontier technology and business-model innovation, 博维智慧 has become one of Macao's most innovative technology enterprises.
The core team of 小舟科技: first, CEO and founder Mr. 周家俊, a member of the Macao SAR's economic development and science & technology committees, responsible for overall company strategy, especially product and market; second, Academician 张旭, scientific advisor and director of the Hengqin brain-computer joint laboratory, an academician of the Chinese Academy of Sciences and a renowned neuroscientist at home and abroad, who guides research in neuroscience-related fields and AI models; third, Chief Scientist Professor 高小榕 of Tsinghua University's School of Medicine, one of the earliest BCI researchers in China and a long-time world leader in high-speed BCIs; fourth, co-founder Ms. 许月婵, a director of Macao's first listed technology company with rich IT business and management experience. The team also includes core members of BCI R&D groups from Tsinghua, Beihang, and other universities.
小舟科技's core technology is based on non-invasive BCIs fused with large AI models, breaking through the traditional limits of BCIs in research and medicine and bringing the technology into the devices, systems, and scenarios of everyday life and work. Technical breakthroughs have been achieved through in-house R&D in hardware, algorithms, and systems.
Hardware: self-developed active flexible dry electrodes and a multimodal EEG collector capture research-grade, high-precision EEG signals, reproducing the underlying brain activity as faithfully as possible.
EEG algorithms: multi-paradigm EEG information-mining methods combined with large AI models fuse EEG, scene, speech, and behavioral information so the model can decode EEG signals; the system is self-learning, continuously calibrating and optimizing from user behavior, physiological data, and more.
Interaction system: a standardized brain-computer operating system adapts to mainstream terminals and uses environmental data and decision habits to synchronize the system with the user's intent, intelligently assisting intention-based interaction.
小舟科技's technical advantages center on an integrated brain-computer interaction solution that breaks traditional technical bottlenecks and mines richer, more complex EEG data through neuroscience. Existing BCI products on the market mostly recognize a single brain state, such as sleep or focus, cannot deliver precise command control, and largely target healthcare, leaving a gap in precise-control consumer BCI products. 小舟科技 keeps pushing on multi-dimensional EEG recognition algorithms, highly integrated hardware circuits, structure, and materials to build more convenient and more accurate EEG wearables.
小舟科技's products fall into two core directions. First, BarcoOS, a brain-computer interaction system built on low-level brain-computer interaction drivers and an EEG AI large-model algorithm, creating a new generation of human-computer interaction that learns the user's habits and intent through continued use. Second, BarcoMind, EEG-acquisition wearables built by optimizing the company's self-developed research-grade 64-channel EEG amplifier into a highly integrated consumer design: a non-invasive dry-electrode BCI headband with 6-channel visual EEG acquisition that recognizes and analyzes visually evoked EEG signals and turns them into commands. The series currently includes three products, an EEG headband, EEG earphones, and EEG AR glasses, for different application scenarios.
The BarcoMind headband is 小舟科技's first consumer brain-computer interaction headband, built on years of technical accumulation and in-house R&D. It combines multimodal EEG recognition algorithms such as visually evoked potentials, a high-precision, highly integrated EEG processor, and non-invasive dry-electrode signal acquisition to achieve multi-command, low-latency EEG interaction control.
BarcoUI is 小舟科技's first brain-computer interaction application platform, deeply customized on Android. It offers a user-friendly brain-computer interaction interface, rich built-in multimedia features and applications, a third-party developer platform, and a standardized brain-computer interaction interface protocol, giving users intuitive, convenient interaction and diverse types of content.
The brain-controlled smart wheelchair is a smart wheelchair equipped with 小舟科技's brain-computer interaction system. It reads EEG signals through the headband and converts them into commands that move the wheelchair, enabling autonomous mobility without using one's hands.

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@ -1,200 +0,0 @@
import os
import time
import chromadb
from chromadb.config import Settings
from langchain_community.document_loaders.csv_loader import CSVLoader
from langchain_community.document_loaders import PyPDFLoader, DirectoryLoader, TextLoader, UnstructuredHTMLLoader, JSONLoader, Docx2txtLoader, UnstructuredExcelLoader
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter
from langchain_community.embeddings.sentence_transformer import SentenceTransformerEmbeddings, HuggingFaceEmbeddings
def get_all_files(folder_path):
    # List the names of all files and folders under folder_path
    files = os.listdir(folder_path)
    # Collect the absolute paths of the regular files
    absolute_paths = []
    for file in files:
        # Build the file's absolute path
        absolute_path = os.path.join(folder_path, file)
        # Keep only regular files (skip directories)
        if os.path.isfile(absolute_path):
            absolute_paths.append(absolute_path)
    return absolute_paths
# start_time = time.time()
# # Load documents
# folder_path = "./text"
# txt_files = get_all_files(folder_path)
# docs = []
# ids = []
# for txt_file in txt_files:
# loader = PyPDFLoader(txt_file)
# documents = loader.load()
# text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=0)
# docs_txt = text_splitter.split_documents(documents)
# docs.extend(docs_txt)
# ids.extend([os.path.basename(txt_file) + str(i) for i in range(len(docs_txt))])
# start_time1 = time.time()
# print(start_time1 - start_time)
# loader = PyPDFLoader("/code/memory/text/大语言模型应用.pdf")
# loader = TextLoader("/code/memory/text/test.txt")
# loader = CSVLoader("/code/memory/text/test1.csv")
# loader = UnstructuredHTMLLoader("example_data/fake-content.html")
# pip install docx2txt
# loader = Docx2txtLoader("/code/memory/text/tesou.docx")
# pip install openpyxl
# loader = UnstructuredExcelLoader("/code/memory/text/AI Team Planning 2023.xlsx")
# pip install jq
# loader = JSONLoader("/code/memory/text/config.json", jq_schema='.', text_content=False)
# documents = loader.load()
# text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=0)
# docs = text_splitter.split_documents(documents)
# print(len(docs))
# ids = ["大语言模型应用"+str(i) for i in range(len(docs))]
# Load and split the documents
loader = TextLoader("/Workspace/jarvis-models/sample/RAG_zh.txt")
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=10, chunk_overlap=0, length_function=len, is_separator_regex=True,separators=['\n', '\n\n'])
docs = text_splitter.split_documents(documents)
print("len(docs)", len(docs))
ids = ["20240521_store"+str(i) for i in range(len(docs))]
# Load the embedding model and connect to the Chroma server
embedding_model = SentenceTransformerEmbeddings(model_name='/Workspace/Models/BAAI/bge-large-zh-v1.5', model_kwargs={"device": "cuda"})
client = chromadb.HttpClient(host='10.6.44.141', port=7000)
id = "g2e"
#client.delete_collection(id)
collection_number = client.get_or_create_collection(id).count()
print("collection_number",collection_number)
start_time2 = time.time()
# Insert vectors (existing ids are updated)
db = Chroma.from_documents(documents=docs, embedding=embedding_model, ids=ids, collection_name=id, client=client)
# db = Chroma.from_texts(texts=['test by tom'], embedding=embedding_model, ids=["大语言模型应用0"], persist_directory="./data/test1", collection_name="123", metadatas=[{"source": "string"}])
start_time3 = time.time()
print("insert time ", start_time3 - start_time2)
collection_number = client.get_or_create_collection(id).count()
print("collection_number",collection_number)
# # Chroma retrieval
# from chromadb.utils import embedding_functions
# embedding_model = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="/Workspace/Models/BAAI/bge-large-zh-v1.5", device = "cuda")
# client = chromadb.HttpClient(host='10.6.44.141', port=7000)
# collection = client.get_collection("g2e", embedding_function=embedding_model)
# print(collection.count())
# import time
# start_time = time.time()
# query = "如何前往威尼斯人"
# # query it
# results = collection.query(
# query_texts=[query],
# n_results=3,
# )
# response = results["documents"]
# print("response: ", response)
# print("time: ", time.time() - start_time)
# # Summarize with the LLM
# import requests
# model_name = "Qwen1.5-14B-Chat"
# chat_inputs={
# "model": model_name,
# "messages": [
# {
# "role": "user",
# "content": f"问题: {query}。- 根据知识库内的检索结果,以清晰简洁的表达方式回答问题。- 只从检索内容中选取与问题密切相关的信息。- 不要编造答案,如果答案不在经核实的资料中或无法从经核实的资料中得出,请回答“我无法回答您的问题。”检索内容:{response}"
# }
# ],
# # "temperature": 0,
# # "top_p": user_top_p,
# # "n": user_n,
# # "max_tokens": user_max_tokens,
# # "frequency_penalty": user_frequency_penalty,
# # "presence_penalty": user_presence_penalty,
# # "stop": 100
# }
# key ="YOUR_API_KEY"
# header = {
# 'Content-Type': 'application/json',
# 'Authorization': "Bearer " + key
# }
# url = "http://10.6.44.141:23333/v1/chat/completions"
# fastchat_response = requests.post(url, json=chat_inputs, headers=header)
# # print(fastchat_response.json())
# print("\n question: ", query)
# print("\n ",model_name, fastchat_response.json()["choices"][0]["message"]["content"])
# start_time4 = time.time()
# db = Chroma(
# client=client,
# collection_name=id,
# embedding_function=embedding_model,
# )
# Update documents
# db = db.update_documents(ids, documents)
# Delete documents
# db.delete([ids])
# Delete the collection
# db.delete_collection()
# query = "智能体核心思想"
# docs = db.similarity_search(query, k=2)
# print("result: ",docs)
# for doc in docs:
# print(doc, "\n")
# start_time5 = time.time()
# print("search time ", start_time5 - start_time4)
# docs = db._collection.get(ids=['大语言模型应用0'])
# print(docs)
# docs = db.get(where={"source": "text/大语言模型应用.pdf"})
# docs = db.get()
# print(docs)
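The `get_all_files` helper and the script's id scheme (`"20240521_store" + index`) can be exercised without a Chroma server or embedding model; a self-contained sketch using a temporary directory:

```python
import os
import tempfile

def get_all_files(folder_path):
    # Collect absolute paths of the regular files directly under folder_path,
    # skipping subdirectories (same behaviour as the helper in the script).
    absolute_paths = []
    for name in os.listdir(folder_path):
        absolute_path = os.path.join(folder_path, name)
        if os.path.isfile(absolute_path):
            absolute_paths.append(absolute_path)
    return absolute_paths

def make_ids(prefix, chunks):
    # Mirrors the script's id scheme: prefix + running chunk index.
    return [prefix + str(i) for i in range(len(chunks))]

with tempfile.TemporaryDirectory() as tmp:
    for name in ("a.txt", "b.txt"):
        with open(os.path.join(tmp, name), "w") as f:
            f.write("sample")
    os.makedirs(os.path.join(tmp, "subdir"))  # directories are skipped
    found = sorted(os.path.basename(p) for p in get_all_files(tmp))
    ids = make_ids("20240521_store", ["chunk"] * 3)

print(found)  # ['a.txt', 'b.txt']
print(ids)    # ['20240521_store0', '20240521_store1', '20240521_store2']
```

Because inserts with an already-existing id overwrite the stored vector, deterministic ids like these make re-ingestion idempotent.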


@ -1,200 +0,0 @@
import os
import time
import chromadb
from chromadb.config import Settings
from langchain_community.document_loaders.csv_loader import CSVLoader
from langchain_community.document_loaders import PyPDFLoader, DirectoryLoader, TextLoader, UnstructuredHTMLLoader, JSONLoader, Docx2txtLoader, UnstructuredExcelLoader
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter
from langchain_community.embeddings.sentence_transformer import SentenceTransformerEmbeddings, HuggingFaceEmbeddings
def get_all_files(folder_path):
    # List the names of all files and folders under folder_path
    files = os.listdir(folder_path)
    # Collect the absolute paths of the regular files
    absolute_paths = []
    for file in files:
        # Build the file's absolute path
        absolute_path = os.path.join(folder_path, file)
        # Keep only regular files (skip directories)
        if os.path.isfile(absolute_path):
            absolute_paths.append(absolute_path)
    return absolute_paths
# start_time = time.time()
# # Load documents
# folder_path = "./text"
# txt_files = get_all_files(folder_path)
# docs = []
# ids = []
# for txt_file in txt_files:
# loader = PyPDFLoader(txt_file)
# documents = loader.load()
# text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=0)
# docs_txt = text_splitter.split_documents(documents)
# docs.extend(docs_txt)
# ids.extend([os.path.basename(txt_file) + str(i) for i in range(len(docs_txt))])
# start_time1 = time.time()
# print(start_time1 - start_time)
# loader = PyPDFLoader("/code/memory/text/大语言模型应用.pdf")
# loader = TextLoader("/code/memory/text/test.txt")
# loader = CSVLoader("/code/memory/text/test1.csv")
# loader = UnstructuredHTMLLoader("example_data/fake-content.html")
# pip install docx2txt
# loader = Docx2txtLoader("/code/memory/text/tesou.docx")
# pip install openpyxl
# loader = UnstructuredExcelLoader("/code/memory/text/AI Team Planning 2023.xlsx")
# pip install jq
# loader = JSONLoader("/code/memory/text/config.json", jq_schema='.', text_content=False)
# documents = loader.load()
# text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=0)
# docs = text_splitter.split_documents(documents)
# print(len(docs))
# ids = ["大语言模型应用"+str(i) for i in range(len(docs))]
# Load and split the documents
loader = TextLoader("/Workspace/jarvis-models/sample/RAG_en.txt")
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=10, chunk_overlap=0, length_function=len, is_separator_regex=True,separators=['\n', '\n\n'])
docs = text_splitter.split_documents(documents)
print("len(docs)", len(docs))
ids = ["20240521_store"+str(i) for i in range(len(docs))]
# Load the embedding model and connect to the Chroma server
embedding_model = SentenceTransformerEmbeddings(model_name='/Workspace/Models/BAAI/bge-small-en-v1.5', model_kwargs={"device": "cuda"})
client = chromadb.HttpClient(host='10.6.44.141', port=7000)
id = "g2e_english"
client.delete_collection(id)
collection_number = client.get_or_create_collection(id).count()
print("collection_number",collection_number)
start_time2 = time.time()
# Insert vectors (existing ids are updated)
db = Chroma.from_documents(documents=docs, embedding=embedding_model, ids=ids, collection_name=id, client=client)
# db = Chroma.from_texts(texts=['test by tom'], embedding=embedding_model, ids=["大语言模型应用0"], persist_directory="./data/test1", collection_name="123", metadatas=[{"source": "string"}])
start_time3 = time.time()
print("insert time ", start_time3 - start_time2)
collection_number = client.get_or_create_collection(id).count()
print("collection_number",collection_number)
# # Chroma retrieval
# from chromadb.utils import embedding_functions
# embedding_model = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="/Workspace/Models/BAAI/bge-large-zh-v1.5", device = "cuda")
# client = chromadb.HttpClient(host='10.6.44.141', port=7000)
# collection = client.get_collection("g2e", embedding_function=embedding_model)
# print(collection.count())
# import time
# start_time = time.time()
# query = "如何前往威尼斯人"
# # query it
# results = collection.query(
# query_texts=[query],
# n_results=3,
# )
# response = results["documents"]
# print("response: ", response)
# print("time: ", time.time() - start_time)
# # Summarize with the LLM
# import requests
# model_name = "Qwen1.5-14B-Chat"
# chat_inputs={
# "model": model_name,
# "messages": [
# {
# "role": "user",
# "content": f"问题: {query}。- 根据知识库内的检索结果,以清晰简洁的表达方式回答问题。- 只从检索内容中选取与问题密切相关的信息。- 不要编造答案,如果答案不在经核实的资料中或无法从经核实的资料中得出,请回答“我无法回答您的问题。”检索内容:{response}"
# }
# ],
# # "temperature": 0,
# # "top_p": user_top_p,
# # "n": user_n,
# # "max_tokens": user_max_tokens,
# # "frequency_penalty": user_frequency_penalty,
# # "presence_penalty": user_presence_penalty,
# # "stop": 100
# }
# key ="YOUR_API_KEY"
# header = {
# 'Content-Type': 'application/json',
# 'Authorization': "Bearer " + key
# }
# url = "http://10.6.44.141:23333/v1/chat/completions"
# fastchat_response = requests.post(url, json=chat_inputs, headers=header)
# # print(fastchat_response.json())
# print("\n question: ", query)
# print("\n ",model_name, fastchat_response.json()["choices"][0]["message"]["content"])
# start_time4 = time.time()
# db = Chroma(
# client=client,
# collection_name=id,
# embedding_function=embedding_model,
# )
# Update documents
# db = db.update_documents(ids, documents)
# Delete documents
# db.delete([ids])
# Delete the collection
# db.delete_collection()
# query = "智能体核心思想"
# docs = db.similarity_search(query, k=2)
# print("result: ",docs)
# for doc in docs:
# print(doc, "\n")
# start_time5 = time.time()
# print("search time ", start_time5 - start_time4)
# docs = db._collection.get(ids=['大语言模型应用0'])
# print(docs)
# docs = db.get(where={"source": "text/大语言模型应用.pdf"})
# docs = db.get()
# print(docs)


@ -1,196 +0,0 @@
import os
import time
import chromadb
from chromadb.config import Settings
from langchain_community.document_loaders.csv_loader import CSVLoader
from langchain_community.document_loaders import PyPDFLoader, DirectoryLoader, TextLoader, UnstructuredHTMLLoader, JSONLoader, Docx2txtLoader, UnstructuredExcelLoader
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter
from langchain_community.embeddings.sentence_transformer import SentenceTransformerEmbeddings, HuggingFaceEmbeddings
def get_all_files(folder_path):
    # List the names of all files and folders under folder_path
    files = os.listdir(folder_path)
    # Collect the absolute paths of the regular files
    absolute_paths = []
    for file in files:
        # Build the file's absolute path
        absolute_path = os.path.join(folder_path, file)
        # Keep only regular files (skip directories)
        if os.path.isfile(absolute_path):
            absolute_paths.append(absolute_path)
    return absolute_paths
# start_time = time.time()
# # Load documents
# folder_path = "./text"
# txt_files = get_all_files(folder_path)
# docs = []
# ids = []
# for txt_file in txt_files:
# loader = PyPDFLoader(txt_file)
# documents = loader.load()
# text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=0)
# docs_txt = text_splitter.split_documents(documents)
# docs.extend(docs_txt)
# ids.extend([os.path.basename(txt_file) + str(i) for i in range(len(docs_txt))])
# start_time1 = time.time()
# print(start_time1 - start_time)
# loader = PyPDFLoader("/code/memory/text/大语言模型应用.pdf")
# loader = TextLoader("/code/memory/text/test.txt")
# loader = CSVLoader("/code/memory/text/test1.csv")
# loader = UnstructuredHTMLLoader("example_data/fake-content.html")
# pip install docx2txt
# loader = Docx2txtLoader("/code/memory/text/tesou.docx")
# pip install openpyxl
# loader = UnstructuredExcelLoader("/code/memory/text/AI Team Planning 2023.xlsx")
# inject_prompt = '(用活泼的语气说话回答,回答严格限制50字以内)'
# inject_prompt = '(回答简练,不要输出重复内容,只讲中文)'
# text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=0)
# docs = text_splitter.split_documents(documents)
# print(len(docs))
# ids = ["大语言模型应用"+str(i) for i in range(len(docs))]
# Load and split the documents
# loader = TextLoader("/Workspace/jarvis-models/sample/RAG_zh.txt")
# documents = loader.load()
# text_splitter = RecursiveCharacterTextSplitter(chunk_size=1024, chunk_overlap=50)
# docs = text_splitter.split_documents(documents)
# print("len(docs)", len(docs))
# ids = ["20240521_store"+str(i) for i in range(len(docs))]
# # Load the embedding model and connect to the Chroma server
# embedding_model = SentenceTransformerEmbeddings(model_name='/Workspace/Models/BAAI/bge-large-zh-v1.5', model_kwargs={"device": "cuda"})
# client = chromadb.HttpClient(host='10.6.44.141', port=7000)
# id = "g2e"
# client.delete_collection(id)
# collection_number = client.get_or_create_collection(id).count()
# print("collection_number",collection_number)
# start_time2 = time.time()
# # Insert vectors (existing ids are updated)
# db = Chroma.from_documents(documents=docs, embedding=embedding_model, ids=ids, collection_name=id, client=client)
# # db = Chroma.from_texts(texts=['test by tom'], embedding=embedding_model, ids=["大语言模型应用0"], persist_directory="./data/test1", collection_name="123", metadatas=[{"source": "string"}])
# start_time3 = time.time()
# print("insert time ", start_time3 - start_time2)
# collection_number = client.get_or_create_collection(id).count()
# print("collection_number",collection_number)
# Chroma retrieval
from chromadb.utils import embedding_functions
embedding_model = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="/Workspace/Models/BAAI/bge-large-zh-v1.5", device = "cuda")
client = chromadb.HttpClient(host='10.6.44.141', port=7000)
collection = client.get_collection("g2e", embedding_function=embedding_model)
print(collection.count())
start_time = time.time()
query = "你知道澳门银河吗"
# query it
results = collection.query(
    query_texts=[query],
    n_results=5,
)
response = results["documents"]
print("response: ", response)
print("time: ", time.time() - start_time)
# # Summarize with the LLM
# import requests
# model_name = "Qwen1.5-14B-Chat"
# chat_inputs={
# "model": model_name,
# "messages": [
# {
# "role": "user",
# "content": f"问题: {query}。- 根据知识库内的检索结果,以清晰简洁的表达方式回答问题。- 只从检索内容中选取与问题密切相关的信息。- 不要编造答案,如果答案不在经核实的资料中或无法从经核实的资料中得出,请回答“我无法回答您的问题。”检索内容:{response}"
# }
# ],
# # "temperature": 0,
# # "top_p": user_top_p,
# # "n": user_n,
# # "max_tokens": user_max_tokens,
# # "frequency_penalty": user_frequency_penalty,
# # "presence_penalty": user_presence_penalty,
# # "stop": 100
# }
# key ="YOUR_API_KEY"
# header = {
# 'Content-Type': 'application/json',
# 'Authorization': "Bearer " + key
# }
# url = "http://10.6.44.141:23333/v1/chat/completions"
# fastchat_response = requests.post(url, json=chat_inputs, headers=header)
# # print(fastchat_response.json())
# print("\n question: ", query)
# print("\n ",model_name, fastchat_response.json()["choices"][0]["message"]["content"])
# start_time4 = time.time()
# db = Chroma(
# client=client,
# collection_name=id,
# embedding_function=embedding_model,
# )
# Update documents
# db = db.update_documents(ids, documents)
# Delete documents
# db.delete([ids])
# Delete the collection
# db.delete_collection()
# query = "智能体核心思想"
# docs = db.similarity_search(query, k=2)
# print("result: ",docs)
# for doc in docs:
# print(doc, "\n")
# start_time5 = time.time()
# print("search time ", start_time5 - start_time4)
# docs = db._collection.get(ids=['大语言模型应用0'])
# print(docs)
# docs = db.get(where={"source": "text/大语言模型应用.pdf"})
# docs = db.get()
# print(docs)


@ -1,76 +0,0 @@
from sentence_transformers import CrossEncoder
import chromadb
from chromadb.utils import embedding_functions
import numpy as np
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.embeddings.sentence_transformer import SentenceTransformerEmbeddings
import time
from pathlib import Path
path = Path("/media/verachen/e0f7a88c-ad43-4736-8829-4d06e5ed8f4f/model/BAAI")
# chroma run --path chroma_db/ --port 8000 --host 0.0.0.0
# loader = TextLoader("/Workspace/chroma_data/粤语语料.txt",encoding="utf-8")
loader = TextLoader("./RAG_boss.txt")
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=10, chunk_overlap=0, length_function=len, is_separator_regex=True,separators=['\n', '\n\n'])
docs = text_splitter.split_documents(documents)
print("len(docs)", len(docs))
ids = ["粤语语料"+str(i) for i in range(len(docs))]
embedding_model = SentenceTransformerEmbeddings(model_name= str(path / "bge-m3"), model_kwargs={"device": "cuda:0"})
client = chromadb.HttpClient(host="localhost", port=7000)
id = "boss2"
# client.delete_collection(id)
# Insert vectors (existing ids are updated)
db = Chroma.from_documents(documents=docs, embedding=embedding_model, ids=ids, collection_name=id, client=client)
embedding_model = embedding_functions.SentenceTransformerEmbeddingFunction(model_name= str(path / "bge-m3"), device = "cuda:0")
client = chromadb.HttpClient(host='localhost', port=7000)
collection = client.get_collection(id, embedding_function=embedding_model)
reranker_model = CrossEncoder(str(path / "bge-reranker-v2-m3"), max_length=512, device = "cuda:0")
# while True:
# usr_question = input("\n 请输入问题: ")
# # query it
# time1 = time.time()
# results = collection.query(
# query_texts=[usr_question],
# n_results=10,
# )
# time2 = time.time()
# print("query time: ", time2 - time1)
# # print("query: ",usr_question)
# # print("results: ",print(results["documents"][0]))
# pairs = [[usr_question, doc] for doc in results["documents"][0]]
# # print('\n',pairs)
# scores = reranker_model.predict(pairs)
# # Reorder the documents:
# print("New Ordering:")
# i = 0
# final_result = ''
# for o in np.argsort(scores)[::-1]:
# if i == 3 or scores[o] < 0.5:
# break
# i += 1
# print(o+1)
# print("Scores:", scores[o])
# print(results["documents"][0][o],'\n')
# final_result += results["documents"][0][o] + '\n'
# print("\n final_result: ", final_result)
# time3 = time.time()
# print("rerank time: ", time3 - time2)
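The commented-out rerank loop above keeps at most three documents whose cross-encoder score is at least 0.5, highest score first. That selection logic in isolation, as plain Python with no model or numpy:

```python
def select_reranked(docs, scores, top_k=3, threshold=0.5):
    # Sort candidate indices by score, highest first
    # (mirrors np.argsort(scores)[::-1] in the original loop).
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    selected = []
    for i in order:
        # Stop once top_k documents are kept or scores fall below the threshold.
        if len(selected) == top_k or scores[i] < threshold:
            break
        selected.append(docs[i])
    return "\n".join(selected)

docs = ["doc A", "doc B", "doc C", "doc D"]
scores = [0.9, 0.2, 0.7, 0.95]
print(select_reranked(docs, scores))  # doc D, then doc A, then doc C
```

The threshold matters as much as the top-k cut: a retrieval hit the cross-encoder scores below 0.5 is dropped even when fewer than three documents have been kept.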


@ -1,5 +0,0 @@
import torch
print("Torch version:",torch.__version__)
print("Is CUDA enabled?",torch.cuda.is_available())


@ -1,82 +0,0 @@
aiohttp==3.8.4
aiosignal==1.3.1
anyio==3.6.2
appdirs==1.4.4
async-timeout==4.0.2
async-tio==1.3.2
attrs==23.1.0
audioread==3.0.0
certifi==2022.12.7
cffi==1.15.1
charset-normalizer==2.1.1
cn2an==0.5.19
colorama==0.4.6
coloredlogs==15.0.1
Cython==0.29.34
decorator==5.1.1
filelock==3.9.0
flatbuffers==23.3.3
frozenlist==1.3.3
fsspec==2023.4.0
h11==0.14.0
httpcore==0.17.0
httpx==0.24.0
huggingface-hub==0.14.1
humanfriendly==10.0
idna==3.4
jieba==0.42.1
Jinja2==3.1.2
joblib==1.2.0
lazy_loader==0.2
librosa==0.10.0.post2
llvmlite==0.40.0
MarkupSafe==2.1.2
mpmath==1.2.1
msgpack==1.0.5
multidict==6.0.4
networkx==3.0
numba==0.57.0
numpy==1.24.1
onnxruntime==1.14.1
openai==0.27.6
OpenAIAuth==0.3.6
packaging==23.1
Pillow==9.3.0
pooch==1.6.0
proces==0.1.4
prompt-toolkit==3.0.38
protobuf==4.22.4
PyAudio==0.2.13
pycparser==2.21
pypinyin==0.48.0
pyreadline3==3.4.1
PySocks==1.7.1
pywin32==306
PyYAML==6.0
regex==2023.5.5
requests==2.28.1
revChatGPT==5.0.0
scikit-learn==1.2.2
scipy==1.10.1
sniffio==1.3.0
socksio==1.0.0
soundfile==0.12.1
soxr==0.3.5
sympy==1.11.1
threadpoolctl==3.1.0
tiktoken==0.3.3
tokenizers==0.13.3
tqdm==4.65.0
transformers==4.28.1
typeguard==2.13.3
typing_extensions==4.4.0
urllib3==1.26.13
wcwidth==0.2.6
WMI==1.5.1
yarl==1.9.2
filetype
fastapi
python-multipart
uvicorn[standard]
SpeechRecognition
gtts


@ -1,52 +0,0 @@
from src.blackbox.audio_to_text import AudioToText
from src.blackbox.text_to_audio import TextToAudio
from runtime.ast.parser import Parser
from runtime.ast.runtime import Runtime
script = """
let text = audio_to_text(audio)
return tts(text)
"""
def version():
    return "0.0.1"

def add(*args):
    return sum(args)

def div(a, b):
    return a / b

def minus(a, b):
    return a - b

def mul(a, b):
    return a * b

if __name__ == "__main__":
    f = open("../test_data/testone.wav", "rb")
    audio_data = f.read()
    f.close()
    tts = TextToAudio()
    audio_to_text = AudioToText()
    # Inject functions
    runtime = Runtime(records={
        "add": add,
        "div": div,
        "minus": minus,
        "mul": mul,
        "audio_to_text": audio_to_text,
        "tts": tts,
        "version": version,
        "print": print
    })
    ast = Parser().parse(script)
    # Inject data
    script_output = runtime.run(ast, {
        "audio": audio_data,
    })
    f = open("../test_data/tmp.wav", "wb")
    print("script:", type(script_output))
    f.write(script_output.read())
    f.close()


@@ -22,7 +22,7 @@ from time import time
import io
from PIL import Image
from lmdeploy.serve.openai.api_client import APIClient
# from lmdeploy.serve.openai.api_client import APIClient
from openai import OpenAI
@@ -184,40 +184,34 @@ class VLMS(Blackbox):
total_token_usage = 0 # which can be used to count the cost of a query
model_url = self._get_model_url(config['vlm_model_name'])
if config['lmdeploy_infer']:
api_client = APIClient(model_url)
model_name = api_client.available_models[0]
for i,item in enumerate(api_client.chat_completions_v1(model=model_name,
messages=messages,stream = True,
**settings,
# session_id=,
)):
# Stream output
yield item["choices"][0]["delta"]['content']
responses += item["choices"][0]["delta"]['content']
# if config['lmdeploy_infer']:
# # api_client = APIClient(model_url)
# # model_name = api_client.available_models[0]
# for i,item in enumerate(api_client.chat_completions_v1(model=model_name,
# messages=messages,stream = True,
# **settings,
# # session_id=,
# )):
# # Stream output
# yield item["choices"][0]["delta"]['content']
# responses += item["choices"][0]["delta"]['content']
# print(item["choices"][0]["message"]['content'])
# responses += item["choices"][0]["message"]['content']
# total_token_usage += item['usage']['total_tokens'] # 'usage': {'prompt_tokens': *, 'total_tokens': *, 'completion_tokens': *}
else:
api_key = "EMPTY_API_KEY"
api_client = OpenAI(api_key=api_key, base_url=model_url+'/v1')
model_name = api_client.models.list().data[0].id
for item in api_client.chat.completions.create(
model=model_name,
messages=messages,
temperature=0.8,
top_p=0.8,
stream=True):
yield(item.choices[0].delta.content)
responses += item.choices[0].delta.content
# response = api_client.chat.completions.create(
# model=model_name,
# messages=messages,
# temperature=0.8,
# top_p=0.8)
# print(response.choices[0].message.content)
# return response.choices[0].message.content
# # print(item["choices"][0]["message"]['content'])
# # responses += item["choices"][0]["message"]['content']
# # total_token_usage += item['usage']['total_tokens'] # 'usage': {'prompt_tokens': *, 'total_tokens': *, 'completion_tokens': *}
# else:
api_key = "EMPTY_API_KEY"
api_client = OpenAI(api_key=api_key, base_url=model_url+'/v1')
model_name = api_client.models.list().data[0].id
for item in api_client.chat.completions.create(
model=model_name,
messages=messages,
**settings,
stream=True):
yield(item.choices[0].delta.content)
responses += item.choices[0].delta.content
# print(response.choices[0].message.content)
# return response.choices[0].message.content
user_context = messages + [{'role': 'assistant', 'content': responses}]

File diff suppressed because one or more lines are too long

18 binary files not shown.