Mirror of https://github.com/BoardWare-Genius/jarvis-models.git
Synced 2025-12-13 16:53:24 +00:00

Commit: update chroma and chat
@@ -424,6 +424,10 @@

The Lai Chi Vun shipyard area took shape in the 1950s and ceased operation in the 1990s. The yards line the Lai Chi Vun road, with some structures extending into the water, forming a harmonious natural landscape. As Macao's only well-preserved shipbuilding site, it bears witness to the city's historical changes and the heyday of its shipbuilding industry. The first phase opens plots X11-X15, covering 3,000 square metres and containing exhibition and performance venues and workshops. The exhibition, themed "Marks of Time: The Story of Lai Chi Vun Village", is divided into three parts, "Stories of the Boats", "Craftsmen's Shipbuilding Art" and "Passing Down the Memory", presenting village life and the area's historical evolution. Opening hours: 10 a.m. to 6 p.m. Guided tours: Saturdays 3 to 4 p.m., with Cantonese/Mandarin tours from 4 to 5 p.m. Busking performances: Friday to Sunday and public holidays, 10 a.m. to 6 p.m.; details at https://www.icm.gov.mo/cn/busking#busking_guidelines. Dusk concerts: Lai Chi Vun road, Coloane, free admission all day. Special notice: under typhoon signal No. 3 or above, red rainstorm warning or above, or yellow storm-surge warning or above, plots X11-X15 close, including all exhibitions, tours, performances and activities. Enquiries: by phone during office hours; see the official website for details.

Welcome to Macao, a city with a distinctive cyberpunk feel. Guia Street is a must-visit photo spot, where bustle and everyday street life intertwine to give you a city view unlike any other. Photographers take note: a 70-200mm or 100-400mm telephoto lens will help you capture the best moments here. Macao's public transport is well developed; you can take a bus or the LRT to Guia Street, and remember to check timetables in advance to plan your trip. Macao offers hotels and guesthouses of every type, from luxury to budget, so book early to make sure your stay is worry-free. Although not mentioned in the video, you may also enjoy Macao's night views and local cultural events, such as an evening stroll through the streets or one of the city's festival celebrations. May your trip to Macao be rewarding and leave you with unforgettable memories.

Welcome to Macao, where the retro and the modern meet. Your itinerary includes a popular photo spot: the "Big Ben" vantage point below The Parisian Macao. It is an ideal place to capture the night view and city scenery; after dark, the illuminated clock tower takes on a charm of its own. Macao's public transport is convenient, and buses, the LRT or hotel shuttle buses reach the main sights; The Parisian is about 15 minutes away by bus or taxi. The Parisian Macao is itself a resort worth exploring, with a range of room types and plenty of entertainment facilities, so book ahead to secure a comfortable stay. Ladies visiting Macao, don't forget to show your style: pick out a favourite outfit and capture your best moments in front of Big Ben or around the hotel. Have a trip full of fun and good memories! If you need more travel tips or a detailed itinerary, just let me know.
@@ -984,7 +988,7 @@

Hello! Macao is a charming city where history and modernity meet. First, you can visit the MGM Macau casino on the peninsula to experience the luxurious atmosphere of old Macao. MGM Cotai in Taipa is also worth a trip, offering rich entertainment and shopping. Getting around is easy: from the Grand Lisboa to MGM you can go by bus or taxi. For accommodation, Macao has everything from luxury hotels to budget inns, and the MGM hotel is a good choice. Shoppers can find international brands in the MGM shopping mall. Don't forget to watch for MGM's special promotions. Enjoy your trip!

Welcome to Macao! The MGM hotel is a must-visit; its luxurious facilities attract celebrities such as Ma Li and He Jiong, and you may even run into a star, especially during the Macau International Comedy Festival. Public transport is well developed: the LRT, buses, taxis and free shuttle buses all work. MGM offers accommodation, and nearby hotels such as The Venetian and the Lisboa give you plenty of choice. Enjoy your trip!

Welcome to Macao! The MGM hotel is a must-visit; its luxurious facilities attract celebrities such as Ma Li and He Jiong, and you may even run into a star, especially during the Macau International Comedy Festival. Public transport is well developed: the LRT, buses, taxis and free shuttle buses all work.

Hello! Here is a summary of your Macao travel feedback: 1. **Sights**: you visited the MGM Cotai casino, but the rooms were poor value for money; the Lisboa is suggested instead. 2. **Transport**: as a premium-card member you expected extra shuttle service, but the experience was no different from an ordinary member's. 3. **Accommodation**: the high-floor view room at MGM fell short of an ordinary Lisboa room despite the large price gap. The unpleasant experience at MGM included being registered as an influencer-invited guest, which cost you loyalty points. 4. **Events**: on your first visit to Macao you did not receive the expected transfers and concierge service, and you were unhappy with the PR arrangements; being placed at MGM cost you points and made for an unpleasant experience. Consider other well-known hotels such as the Lisboa, Wynn or The Londoner, which may offer better service and facilities. Best wishes for a smoother next trip!
@@ -1096,7 +1100,7 @@

Dear friend, welcome to a one-day tour of Macao! Be sure to bring your mainland travel permit for Hong Kong and Macao, your ID card, and a small amount of cash in patacas. Buy a Macao data package on Alipay in advance and download the customs "traveller fingertip service" app to apply for the black customs QR code. From the Gongbei border crossing, take a free casino shuttle bus into Macao. Ponte 16, the Grand Lisboa and the Ruins of St. Paul's are must-sees; sample pork jerky and egg tarts along the way. Don't miss the Macao Museum (ticketed) or the Tree of Prosperity show at the Wynn. To see the giant pandas and Hac Sa Beach, take a bus or taxi; there is no free shuttle service there. After the country park and Hac Sa Beach, take the cable car and tour The Venetian, The Parisian and The Londoner malls. Besides the Ruins of St. Paul's, the hotels have their own attractions, though the cable car may involve queueing. The Venetian, The Londoner and The Parisian are the best photo spots. For food, try curry fish balls and almond cakes. Hail buses with a wave, use a payment app with real-time exchange rates when shopping, and stay polite; Macao locals generally speak Mandarin. Don't forget the free shows marked in picture one, they are highlights not to be missed. Enjoy your Macao trip! If you have other questions, just ask.

Welcome to Macao! Explore Taipa, home to luxury hotels and malls such as The Venetian, The Londoner and The Parisian, plus the food haven Rua do Cunha. The Macao peninsula is the place for culture, with landmarks such as the Ruins of St. Paul's, Guia Lighthouse and the Grand Lisboa waiting to be discovered. Access is easy from Zhuhai's three main border crossings (Gongbei, Hengqin and Jinwan Airport), and clearing immigration is straightforward; bus 25B near the Hengqin crossing runs directly to The Venetian and Rua do Cunha. NFC payment is convenient: the Shanghai "purple card" works directly, but the "red card" does not. Most major venues offer luggage storage, though the Hengqin crossing currently does not. For connectivity, buy a Macao data package or enable roaming in advance; if your phone has no signal, Watsons and ParknShop stores sell local SIM cards. To return to Zhuhai, simply take a bus or follow the signs back to the border crossing. Enjoy your trip!

Welcome to Macao! Explore Taipa, home to luxury hotels and malls such as The Venetian, The Londoner and The Parisian, plus the food haven Rua do Cunha. The Macao peninsula is the place for culture, with landmarks such as the Ruins of St. Paul's, Guia Lighthouse and the Grand Lisboa waiting to be discovered. Access is easy from Zhuhai's three main border crossings (Gongbei, Hengqin and Jinwan Airport), and clearing immigration is straightforward; bus 25B near the Hengqin crossing runs directly to The Venetian and Rua do Cunha. Most major venues offer luggage storage, though the Hengqin crossing currently does not. For connectivity, buy a Macao data package or enable roaming in advance; if your phone has no signal, Watsons and ParknShop stores sell local SIM cards. To return to Zhuhai, simply take a bus or follow the signs back to the border crossing. Enjoy your trip!

Dear visitor, welcome to Macao! From Shenzhen it is only a two-hour drive, and crossing at the Hengqin border takes about three minutes with light crowds. There is no need to exchange currency in advance: the Hengqin crossing exchanges at about 101 Hong Kong dollars or 104 patacas, with a 5-yuan fee for amounts under 700 yuan. Exchange points inside Macao are better, with no fee. Gaming enthusiasts can exchange for Hong Kong dollars at the Bank of China counter; Macao's casinos accept renminbi exchange, though the rate may be worse than at local exchange points. Most merchants accept Alipay and WeChat Pay. For buses, prepare 6 dollars per person in coins; Hong Kong dollars or renminbi can both be used in the fare box, and the free casino shuttles cover the main sights. Enjoy your trip! If you have questions or need a detailed itinerary, let me know.
8547 sample/RAG_en.txt Normal file
File diff suppressed because it is too large

6109 sample/RAG_zh.txt Normal file
File diff suppressed because one or more lines are too long
@@ -70,11 +70,11 @@ def get_all_files(folder_path):

# Load and split the document
loader = TextLoader("/home/administrator/Workspace/jarvis-models/sample/20240529_store.txt")
loader = TextLoader("/home/administrator/Workspace/jarvis-models/sample/RAG_zh.txt")

documents = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1024, chunk_overlap=50)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=10, chunk_overlap=0, length_function=len, is_separator_regex=True, separators=['\n', '\n\n'])

docs = text_splitter.split_documents(documents)
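The replacement splitter configuration above (chunk_size=10 with `'\n'` / `'\n\n'` separators) effectively turns each line of RAG_zh.txt into its own chunk, since no merged chunk could stay under 10 characters while any single entry is far longer. A minimal pure-Python sketch of that effect (`split_like_recursive` is a hypothetical helper, not the LangChain implementation):

```python
import re

def split_like_recursive(text):
    """Sketch of what the splitter configuration above effectively does:
    with chunk_size=10, far smaller than any knowledge-base line, chunks
    are never merged, so each newline-delimited entry becomes its own
    chunk. (Hypothetical helper, not the LangChain implementation.)"""
    return [part for part in re.split(r"\n+", text) if part.strip()]

doc = "entry about The Venetian\nentry about The Londoner\n\nentry about Rua do Cunha"
print(split_like_recursive(doc))  # → three chunks, one per non-empty line
```

With per-line chunks, each retrieval hit returns exactly one knowledge-base entry, which is why the `ids` list later has one id per line of the source file.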
@@ -99,65 +99,66 @@ db = Chroma.from_documents(documents=docs, embedding=embedding_model, ids=ids, c

start_time3 = time.time()
print("insert time ", start_time3 - start_time2)
collection_number = client.get_or_create_collection(id).count()
print("collection_number", collection_number)


# # Chroma retrieval
# from chromadb.utils import embedding_functions
# embedding_model = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="/home/administrator/Workspace/Models/BAAI/bge-large-zh-v1.5", device="cuda")
# client = chromadb.HttpClient(host='172.16.5.8', port=7000)
# collection = client.get_collection("g2e", embedding_function=embedding_model)

# Chroma retrieval
from chromadb.utils import embedding_functions
embedding_model = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="/home/administrator/Workspace/Models/BAAI/bge-large-zh-v1.5", device="cuda")
client = chromadb.HttpClient(host='172.16.5.8', port=7000)
collection = client.get_collection("g2e", embedding_function=embedding_model)

# print(collection.count())
# import time
# start_time = time.time()
# query = "如何前往威尼斯人"
# # query it
# results = collection.query(
#     query_texts=[query],
#     n_results=3,
# )

print(collection.count())
import time
start_time = time.time()
query = "如何前往威尼斯人"
# query it
results = collection.query(
    query_texts=[query],
    n_results=3,
)

response = results["documents"]
print("response: ", response)
print("time: ", time.time() - start_time)
# response = results["documents"]
# print("response: ", response)
# print("time: ", time.time() - start_time)


# Summarize with the large language model
import requests
# # Summarize with the large language model
# import requests

model_name = "Qwen1.5-14B-Chat"
chat_inputs = {
    "model": model_name,
    "messages": [
        {
            "role": "user",
            "content": f"问题: {query}。- 根据知识库内的检索结果,以清晰简洁的表达方式回答问题。- 只从检索内容中选取与问题密切相关的信息。- 不要编造答案,如果答案不在经核实的资料中或无法从经核实的资料中得出,请回答“我无法回答您的问题。”检索内容:{response}"
        }
    ],
    # "temperature": 0,
    # "top_p": user_top_p,
    # "n": user_n,
    # "max_tokens": user_max_tokens,
    # "frequency_penalty": user_frequency_penalty,
    # "presence_penalty": user_presence_penalty,
    # "stop": 100
}
# model_name = "Qwen1.5-14B-Chat"
# chat_inputs = {
#     "model": model_name,
#     "messages": [
#         {
#             "role": "user",
#             "content": f"问题: {query}。- 根据知识库内的检索结果,以清晰简洁的表达方式回答问题。- 只从检索内容中选取与问题密切相关的信息。- 不要编造答案,如果答案不在经核实的资料中或无法从经核实的资料中得出,请回答“我无法回答您的问题。”检索内容:{response}"
#         }
#     ],
#     # "temperature": 0,
#     # "top_p": user_top_p,
#     # "n": user_n,
#     # "max_tokens": user_max_tokens,
#     # "frequency_penalty": user_frequency_penalty,
#     # "presence_penalty": user_presence_penalty,
#     # "stop": 100
# }

key = "YOUR_API_KEY"
# key = "YOUR_API_KEY"

header = {
    'Content-Type': 'application/json',
    'Authorization': "Bearer " + key
}
url = "http://172.16.5.8:23333/v1/chat/completions"
# header = {
#     'Content-Type': 'application/json',
#     'Authorization': "Bearer " + key
# }
# url = "http://172.16.5.8:23333/v1/chat/completions"

fastchat_response = requests.post(url, json=chat_inputs, headers=header)
# print(fastchat_response.json())
# fastchat_response = requests.post(url, json=chat_inputs, headers=header)
# # print(fastchat_response.json())

print("\n question: ", query)
print("\n ", model_name, fastchat_response.json()["choices"][0]["message"]["content"])
# print("\n question: ", query)
# print("\n ", model_name, fastchat_response.json()["choices"][0]["message"]["content"])
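The hunk above retrieves the top chunks for a query and then grounds a Qwen chat completion in them through an OpenAI-compatible endpoint. A small sketch of how that request payload is assembled from a query and retrieved chunks (hypothetical helper `build_chat_request` with an English paraphrase of the Chinese grounding prompt; the script builds the dict inline):

```python
def build_chat_request(model_name, query, retrieved_docs):
    """Assemble an OpenAI-compatible /v1/chat/completions payload that
    grounds the answer in retrieved chunks. (Hypothetical helper; the
    prompt is an English paraphrase of the one used in the script.)"""
    context = "\n".join(retrieved_docs)
    prompt = (
        f"Question: {query}. Answer clearly and concisely using only "
        f"information from the retrieved context; if the answer cannot be "
        f"derived from it, reply that you cannot answer. Context: {context}"
    )
    return {
        "model": model_name,
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_chat_request(
    "Qwen1.5-14B-Chat",
    "How do I get to the Venetian?",
    ["Bus 25B runs from the Hengqin border crossing straight to The Venetian."],
)
print(req["messages"][0]["role"])  # → user
```

Stuffing the retrieved chunks into the user message, as the script does, keeps the server side stateless; the "do not make up an answer" instruction is what makes the pipeline refuse when retrieval misses.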
200 sample/chroma_client_en.py Normal file
@@ -0,0 +1,200 @@
import os
import time
import chromadb
from chromadb.config import Settings

from langchain_community.document_loaders.csv_loader import CSVLoader
from langchain_community.document_loaders import PyPDFLoader, DirectoryLoader, TextLoader, UnstructuredHTMLLoader, JSONLoader, Docx2txtLoader, UnstructuredExcelLoader
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter
from langchain_community.embeddings.sentence_transformer import SentenceTransformerEmbeddings, HuggingFaceEmbeddings


def get_all_files(folder_path):
    # List every file and directory name under the folder
    files = os.listdir(folder_path)

    # Collect the absolute paths of all files
    absolute_paths = []

    # Walk the name list
    for file in files:
        # Build the absolute path
        absolute_path = os.path.join(folder_path, file)
        # If it is a regular file, record its absolute path
        if os.path.isfile(absolute_path):
            absolute_paths.append(absolute_path)

    return absolute_paths

# start_time = time.time()
# # Load documents
# folder_path = "./text"
# txt_files = get_all_files(folder_path)
# docs = []
# ids = []
# for txt_file in txt_files:
#     loader = PyPDFLoader(txt_file)
#     documents = loader.load()
#     text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=0)
#     docs_txt = text_splitter.split_documents(documents)
#     docs.extend(docs_txt)
#     ids.extend([os.path.basename(txt_file) + str(i) for i in range(len(docs_txt))])
# start_time1 = time.time()
# print(start_time1 - start_time)


# loader = PyPDFLoader("/code/memory/text/大语言模型应用.pdf")
# loader = TextLoader("/code/memory/text/test.txt")
# loader = CSVLoader("/code/memory/text/test1.csv")
# loader = UnstructuredHTMLLoader("example_data/fake-content.html")
# pip install docx2txt
# loader = Docx2txtLoader("/code/memory/text/tesou.docx")
# pip install openpyxl
# loader = UnstructuredExcelLoader("/code/memory/text/AI Team Planning 2023.xlsx")
# pip install jq
# loader = JSONLoader("/code/memory/text/config.json", jq_schema='.', text_content=False)

# documents = loader.load()

# text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=0)
# docs = text_splitter.split_documents(documents)
# print(len(docs))
# ids = ["大语言模型应用" + str(i) for i in range(len(docs))]


# Load and split the document
loader = TextLoader("/home/administrator/Workspace/jarvis-models/sample/RAG_en.txt")

documents = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=10, chunk_overlap=0, length_function=len, is_separator_regex=True, separators=['\n', '\n\n'])

docs = text_splitter.split_documents(documents)

print("len(docs)", len(docs))

ids = ["20240521_store" + str(i) for i in range(len(docs))]


# Load the embedding model and connect to the Chroma server
embedding_model = SentenceTransformerEmbeddings(model_name='/home/administrator/Workspace/Models/BAAI/bge-large-en-v1.5', model_kwargs={"device": "cuda"})
client = chromadb.HttpClient(host='172.16.5.8', port=7000)

id = "g2e_english"
client.delete_collection(id)
collection_number = client.get_or_create_collection(id).count()
print("collection_number", collection_number)
start_time2 = time.time()
# Insert vectors (existing ids are updated in place)
db = Chroma.from_documents(documents=docs, embedding=embedding_model, ids=ids, collection_name=id, client=client)

# db = Chroma.from_texts(texts=['test by tom'], embedding=embedding_model, ids=["大语言模型应用0"], persist_directory="./data/test1", collection_name="123", metadatas=[{"source": "string"}])

start_time3 = time.time()
print("insert time ", start_time3 - start_time2)
collection_number = client.get_or_create_collection(id).count()
print("collection_number", collection_number)


# # Chroma retrieval
# from chromadb.utils import embedding_functions
# embedding_model = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="/home/administrator/Workspace/Models/BAAI/bge-large-zh-v1.5", device="cuda")
# client = chromadb.HttpClient(host='172.16.5.8', port=7000)
# collection = client.get_collection("g2e", embedding_function=embedding_model)

# print(collection.count())
# import time
# start_time = time.time()
# query = "如何前往威尼斯人"
# # query it
# results = collection.query(
#     query_texts=[query],
#     n_results=3,
# )

# response = results["documents"]
# print("response: ", response)
# print("time: ", time.time() - start_time)


# # Summarize with the large language model
# import requests

# model_name = "Qwen1.5-14B-Chat"
# chat_inputs = {
#     "model": model_name,
#     "messages": [
#         {
#             "role": "user",
#             "content": f"问题: {query}。- 根据知识库内的检索结果,以清晰简洁的表达方式回答问题。- 只从检索内容中选取与问题密切相关的信息。- 不要编造答案,如果答案不在经核实的资料中或无法从经核实的资料中得出,请回答“我无法回答您的问题。”检索内容:{response}"
#         }
#     ],
#     # "temperature": 0,
#     # "top_p": user_top_p,
#     # "n": user_n,
#     # "max_tokens": user_max_tokens,
#     # "frequency_penalty": user_frequency_penalty,
#     # "presence_penalty": user_presence_penalty,
#     # "stop": 100
# }

# key = "YOUR_API_KEY"

# header = {
#     'Content-Type': 'application/json',
#     'Authorization': "Bearer " + key
# }
# url = "http://172.16.5.8:23333/v1/chat/completions"

# fastchat_response = requests.post(url, json=chat_inputs, headers=header)
# # print(fastchat_response.json())

# print("\n question: ", query)
# print("\n ", model_name, fastchat_response.json()["choices"][0]["message"]["content"])


# start_time4 = time.time()
# db = Chroma(
#     client=client,
#     collection_name=id,
#     embedding_function=embedding_model,
# )

# Update documents
# db = db.update_documents(ids, documents)
# Delete documents
# db.delete([ids])
# Delete the collection
# db.delete_collection()

# query = "智能体核心思想"
# docs = db.similarity_search(query, k=2)

# print("result: ", docs)
# for doc in docs:
#     print(doc, "\n")

# start_time5 = time.time()
# print("search time ", start_time5 - start_time4)

# docs = db._collection.get(ids=['大语言模型应用0'])

# print(docs)

# docs = db.get(where={"source": "text/大语言模型应用.pdf"})
# docs = db.get()
# print(docs)
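Under the hood, `collection.query()` in the script above ranks the stored chunks by embedding similarity to the query text and returns the closest ones. A toy illustration of that nearest-neighbour step with hypothetical 2-d vectors (real bge-large embeddings are 1024-dimensional; `top_k` is not part of the repo):

```python
import math

def top_k(query_vec, doc_vecs, k=3):
    """Toy sketch of the nearest-neighbour lookup behind collection.query():
    rank documents by cosine similarity to the query embedding and return
    the indices of the k best matches. (Hypothetical 2-d vectors; real
    bge-large embeddings are 1024-dimensional.)"""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)
    scores = [(cos(query_vec, d), i) for i, d in enumerate(doc_vecs)]
    return [i for _, i in sorted(scores, reverse=True)[:k]]

doc_vecs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(top_k([1.0, 0.05], doc_vecs, k=2))  # → [0, 1]
```

Chroma uses an approximate index rather than this brute-force scan, but the ranking idea is the same, which is why `n_results` simply truncates the similarity-ordered list.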
196 sample/chroma_client_query.py Normal file
@@ -0,0 +1,196 @@
import os
import time
import chromadb
from chromadb.config import Settings

from langchain_community.document_loaders.csv_loader import CSVLoader
from langchain_community.document_loaders import PyPDFLoader, DirectoryLoader, TextLoader, UnstructuredHTMLLoader, JSONLoader, Docx2txtLoader, UnstructuredExcelLoader
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter
from langchain_community.embeddings.sentence_transformer import SentenceTransformerEmbeddings, HuggingFaceEmbeddings


def get_all_files(folder_path):
    # List every file and directory name under the folder
    files = os.listdir(folder_path)

    # Collect the absolute paths of all files
    absolute_paths = []

    # Walk the name list
    for file in files:
        # Build the absolute path
        absolute_path = os.path.join(folder_path, file)
        # If it is a regular file, record its absolute path
        if os.path.isfile(absolute_path):
            absolute_paths.append(absolute_path)

    return absolute_paths

# start_time = time.time()
# # Load documents
# folder_path = "./text"
# txt_files = get_all_files(folder_path)
# docs = []
# ids = []
# for txt_file in txt_files:
#     loader = PyPDFLoader(txt_file)
#     documents = loader.load()
#     text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=0)
#     docs_txt = text_splitter.split_documents(documents)
#     docs.extend(docs_txt)
#     ids.extend([os.path.basename(txt_file) + str(i) for i in range(len(docs_txt))])
# start_time1 = time.time()
# print(start_time1 - start_time)


# loader = PyPDFLoader("/code/memory/text/大语言模型应用.pdf")
# loader = TextLoader("/code/memory/text/test.txt")
# loader = CSVLoader("/code/memory/text/test1.csv")
# loader = UnstructuredHTMLLoader("example_data/fake-content.html")
# pip install docx2txt
# loader = Docx2txtLoader("/code/memory/text/tesou.docx")
# pip install openpyxl
# loader = UnstructuredExcelLoader("/code/memory/text/AI Team Planning 2023.xlsx")
# inject_prompt = '(用活泼的语气说话回答,回答严格限制50字以内)'
# inject_prompt = '(回答简练,不要输出重复内容,只讲中文)'

# text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=0)
# docs = text_splitter.split_documents(documents)
# print(len(docs))
# ids = ["大语言模型应用" + str(i) for i in range(len(docs))]


# Load and split the document
# loader = TextLoader("/home/administrator/Workspace/jarvis-models/sample/RAG_zh.txt")

# documents = loader.load()

# text_splitter = RecursiveCharacterTextSplitter(chunk_size=1024, chunk_overlap=50)

# docs = text_splitter.split_documents(documents)

# print("len(docs)", len(docs))

# ids = ["20240521_store" + str(i) for i in range(len(docs))]


# # Load the embedding model and connect to the Chroma server
# embedding_model = SentenceTransformerEmbeddings(model_name='/home/administrator/Workspace/Models/BAAI/bge-large-zh-v1.5', model_kwargs={"device": "cuda"})
# client = chromadb.HttpClient(host='172.16.5.8', port=7000)

# id = "g2e"
# client.delete_collection(id)
# collection_number = client.get_or_create_collection(id).count()
# print("collection_number", collection_number)
# start_time2 = time.time()
# # Insert vectors (existing ids are updated in place)
# db = Chroma.from_documents(documents=docs, embedding=embedding_model, ids=ids, collection_name=id, client=client)

# # db = Chroma.from_texts(texts=['test by tom'], embedding=embedding_model, ids=["大语言模型应用0"], persist_directory="./data/test1", collection_name="123", metadatas=[{"source": "string"}])

# start_time3 = time.time()
# print("insert time ", start_time3 - start_time2)
# collection_number = client.get_or_create_collection(id).count()
# print("collection_number", collection_number)


# Chroma retrieval
from chromadb.utils import embedding_functions
embedding_model = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="/home/administrator/Workspace/Models/BAAI/bge-large-zh-v1.5", device="cuda")
client = chromadb.HttpClient(host='172.16.5.8', port=7000)
collection = client.get_collection("g2e", embedding_function=embedding_model)

print(collection.count())
import time
start_time = time.time()
query = "你知道澳门银河吗"
# query it
results = collection.query(
    query_texts=[query],
    n_results=5,
)

response = results["documents"]
print("response: ", response)
print("time: ", time.time() - start_time)


# # Summarize with the large language model
# import requests

# model_name = "Qwen1.5-14B-Chat"
# chat_inputs = {
#     "model": model_name,
#     "messages": [
#         {
#             "role": "user",
#             "content": f"问题: {query}。- 根据知识库内的检索结果,以清晰简洁的表达方式回答问题。- 只从检索内容中选取与问题密切相关的信息。- 不要编造答案,如果答案不在经核实的资料中或无法从经核实的资料中得出,请回答“我无法回答您的问题。”检索内容:{response}"
#         }
#     ],
#     # "temperature": 0,
#     # "top_p": user_top_p,
#     # "n": user_n,
#     # "max_tokens": user_max_tokens,
#     # "frequency_penalty": user_frequency_penalty,
#     # "presence_penalty": user_presence_penalty,
#     # "stop": 100
# }

# key = "YOUR_API_KEY"

# header = {
#     'Content-Type': 'application/json',
#     'Authorization': "Bearer " + key
# }
# url = "http://172.16.5.8:23333/v1/chat/completions"

# fastchat_response = requests.post(url, json=chat_inputs, headers=header)
# # print(fastchat_response.json())

# print("\n question: ", query)
# print("\n ", model_name, fastchat_response.json()["choices"][0]["message"]["content"])


# start_time4 = time.time()
# db = Chroma(
#     client=client,
#     collection_name=id,
#     embedding_function=embedding_model,
# )

# Update documents
# db = db.update_documents(ids, documents)
# Delete documents
# db.delete([ids])
# Delete the collection
# db.delete_collection()

# query = "智能体核心思想"
# docs = db.similarity_search(query, k=2)

# print("result: ", docs)
# for doc in docs:
#     print(doc, "\n")

# start_time5 = time.time()
# print("search time ", start_time5 - start_time4)

# docs = db._collection.get(ids=['大语言模型应用0'])

# print(docs)

# docs = db.get(where={"source": "text/大语言模型应用.pdf"})
# docs = db.get()
# print(docs)
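The commented-out tail of the query script reads `fastchat_response.json()["choices"][0]["message"]["content"]` directly, which raises a KeyError or IndexError whenever the server returns an error body instead of a completion. A defensive variant (hypothetical helper, not part of the repo):

```python
def extract_answer(payload):
    """Defensive parse of a chat-completions response dict: return the
    assistant text, or None if the payload does not have the expected
    shape (e.g. an error body). Hypothetical helper, not in the repo."""
    try:
        return payload["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        return None

sample = {"choices": [{"message": {"role": "assistant", "content": "我无法回答您的问题。"}}]}
print(extract_answer(sample))
print(extract_answer({"error": "model overloaded"}))  # → None
```

Wrapping the parse this way lets the script log the raw payload on failure instead of crashing mid-benchmark.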