👋 Welcome to My Blog

Recording little moments of growth.

Deploying an LLM with vLLM for Function Calling

Environment Dependencies

Run pip install poetry==1.8.0 to install the Poetry package manager. The project's dependencies, as declared in Poetry, are:

openai = "^1.30.3", fastapi = "^0.111.0", transformers = "^4.41.1", tiktoken = "^0.6.0", torch = "^2.3.0", sse-starlette = "^2.1.0", sentence-transformers = "^2.7.0", sentencepiece = "^0.2.0", accelerate = "^0.30.1", pydantic = "^2.7.1", timm = "^1.0.3", pandas = "^2.2.2", vllm = "^0.4.2"

vLLM Does Not Yet Support Tool Calls

vLLM currently does not accept tools passed in through the OpenAI client; see vLLM PR #3237.

from openai import OpenAI
client = OpenAI()
messages = [{"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}]
tools = [...]
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    ...

Date: June 5, 2024 | Estimated Reading Time: 2 min | Author: Simon Wei
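The excerpt above elides the tools list passed to the OpenAI client. As a hedged sketch of what that list and the matching local dispatch typically look like (the get_current_weather function, its schema, and the fake weather data are all illustrative assumptions, not from the post):

```python
import json

# Hypothetical local implementation of the tool the model may request.
def get_current_weather(location: str, unit: str = "celsius") -> str:
    # Stub: a real implementation would query a weather API.
    fake_db = {"San Francisco": 18, "Tokyo": 24, "Paris": 16}
    temp = fake_db.get(location.split(",")[0], 20)
    return json.dumps({"location": location, "temperature": temp, "unit": unit})

# Tool schema in the OpenAI chat-completions `tools` format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g. San Francisco",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

# Dispatch table used when the model returns a tool call:
# look up the function by name and invoke it with the JSON arguments.
available_tools = {"get_current_weather": get_current_weather}
call = {"name": "get_current_weather", "arguments": '{"location": "Tokyo"}'}
result = available_tools[call["name"]](**json.loads(call["arguments"]))
print(result)
```

The server-side piece is exactly what the PR referenced above adds: vLLM has to parse the model's tool-call output into this name/arguments structure before a client can dispatch it like this.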

Fine-Tuning a BERT Model for Text Classification

Related Papers

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding; How to Fine-Tune BERT for Text Classification?

Steps for Fine-Tuning BERT

Prepare the dataset. Load the pretrained BERT model. Load the BERT tokenizer. Define the hyperparameters and optimizer.

The Fine-Tuning Loop

Forward pass: feed in a batch of data and compute the model's output. Zero the gradients: clear gradient information from the previous step in preparation for the next iteration. Backward pass: compute the loss with respect to the model's...

Date: April 10, 2024 | Estimated Reading Time: 5 min | Author: Simon Wei
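The loop described in the excerpt (forward pass, zero gradients, backward pass, parameter update) can be sketched as a single PyTorch training step. To keep the sketch runnable without downloading weights, it uses a tiny linear classifier as a stand-in; for actual BERT fine-tuning you would swap in a pretrained model and tokenized batches from the transformers library (e.g. BertForSequenceClassification.from_pretrained). The model sizes and data here are illustrative assumptions:

```python
import torch
from torch import nn

def train_step(model, optimizer, loss_fn, inputs, labels):
    """One fine-tuning iteration, in the order the post describes."""
    logits = model(inputs)           # forward pass: compute the model's output
    loss = loss_fn(logits, labels)   # classification loss on this batch
    optimizer.zero_grad()            # zero gradients from the previous iteration
    loss.backward()                  # backward pass: gradients w.r.t. parameters
    optimizer.step()                 # apply the parameter update
    return loss.item()

# Stand-in classifier so the loop runs without downloading BERT.
model = nn.Linear(8, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 8)                # a batch of 4 feature vectors
y = torch.tensor([0, 1, 0, 1])       # binary class labels
for epoch in range(3):
    loss = train_step(model, optimizer, loss_fn, x, y)
```

AdamW with a small learning rate (around 2e-5) follows the convention popularized for BERT fine-tuning, though the best value depends on the task and dataset.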