supported llm: aliyun qianwen

harry 2024-03-28 11:01:34 +08:00
parent d445ad4cbb
commit 121ef12a74
4 changed files with 11 additions and 14 deletions


@@ -38,7 +38,8 @@ https://reccloud.com
supports `subtitle outlining`
- [x] Supports **background music**, either random or specified music files, with adjustable `background music volume`
- [x] Video material sources are **high-definition** and **royalty-free**
- [x] Supports integration with various models such as **OpenAI**, **moonshot**, **Azure**, **gpt4free**, **one-api**,
- [x] Supports integration with various models such as **OpenAI**, **moonshot**, **Azure**, **gpt4free**, **one-api**, **qianwen**
  and more
### Future Plans 📅
@@ -258,7 +259,8 @@ Thanks to [@wangwenqiao666](https://github.com/wangwenqiao666) for their research
## Feedback & Suggestions 📢
- You can submit an [issue](https://github.com/harry0703/MoneyPrinterTurbo/issues) or a [pull request](https://github.com/harry0703/MoneyPrinterTurbo/pulls).
- You can submit an [issue](https://github.com/harry0703/MoneyPrinterTurbo/issues) or
a [pull request](https://github.com/harry0703/MoneyPrinterTurbo/pulls).
## Reference Projects 📚


@@ -31,7 +31,7 @@
- [x] Supports **subtitle generation**, with adjustable `font`, `position`, `color`, and `size`, plus `subtitle outline` settings
- [x] Supports **background music**, random or from specified music files, with adjustable `background music volume`
- [x] Video materials are **high-definition** and **royalty-free**
- [x] Supports integration with multiple models such as **OpenAI**, **moonshot**, **Azure**, **gpt4free**, and **one-api**
- [x] Supports integration with multiple models such as **OpenAI**, **moonshot**, **Azure**, **gpt4free**, **one-api**, and **通义千问** (Qianwen)
### Future Plans 📅


@@ -2,14 +2,12 @@ import logging
import re
import json
from typing import List
import g4f
from loguru import logger
from openai import OpenAI
from openai import AzureOpenAI
from app.config import config
def _generate_response(prompt: str) -> str:
    content = ""
    llm_provider = config.app.get("llm_provider", "openai")
@@ -18,7 +16,7 @@ def _generate_response(prompt: str) -> str:
        model_name = config.app.get("g4f_model_name", "")
        if not model_name:
            model_name = "gpt-3.5-turbo-16k-0613"
        import g4f
        content = g4f.ChatCompletion.create(
            model=model_name,
            messages=[{"role": "user", "content": prompt}],
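The g4f branch's model-name fallback above can be exercised on its own. A minimal sketch; `resolve_g4f_model` is a hypothetical helper, not part of the codebase, but the key name and default mirror the diff:

```python
# Hypothetical helper isolating the fallback above: an empty or missing
# "g4f_model_name" falls back to the default the diff hardcodes.
def resolve_g4f_model(app_config: dict) -> str:
    model_name = app_config.get("g4f_model_name", "")
    if not model_name:
        model_name = "gpt-3.5-turbo-16k-0613"
    return model_name

print(resolve_g4f_model({}))                           # gpt-3.5-turbo-16k-0613
print(resolve_g4f_model({"g4f_model_name": "gpt-4"}))  # gpt-4
```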
@@ -57,21 +55,17 @@ def _generate_response(prompt: str) -> str:
        raise ValueError(f"{llm_provider}: model_name is not set, please set it in the config.toml file.")
    if not base_url:
        raise ValueError(f"{llm_provider}: base_url is not set, please set it in the config.toml file.")
    import dashscope
    if llm_provider == "qwen":
        import dashscope
        dashscope.api_key = api_key
        response = dashscope.Generation.call(
            model=model_name,
            messages=[{"role": "user", "content": prompt}]
            messages=[{"role": "user", "content": prompt}]
        )
        content=response["output"]["text"]
        print(content)
        content = response["output"]["text"]
        return content.replace("\n", "")
    if llm_provider == "azure":
        client = AzureOpenAI(
            api_key=api_key,
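The qwen branch's response handling can be tested without an API key. A minimal sketch, assuming only the dict-like response shape the diff reads (`output` → `text`); `extract_qwen_text` is a hypothetical helper, not part of the codebase:

```python
# Hypothetical helper mirroring the qwen branch above. Assumes
# dashscope.Generation.call returns a dict-like response whose
# generated text sits under output -> text, as the diff reads it.
def extract_qwen_text(response: dict) -> str:
    content = response["output"]["text"]
    # The diff strips newlines before returning the content.
    return content.replace("\n", "")

# Stubbed response, so no API key or network call is needed.
sample = {"output": {"text": "a one-minute video script\nabout space travel"}}
print(extract_qwen_text(sample))
```

Stubbing the response this way also keeps the newline-stripping behavior covered if the dashscope call is ever swapped out.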


@ -12,4 +12,5 @@ aiohttp~=3.9.3
urllib3~=2.2.1
pillow~=9.5.0
pydantic~=2.6.3
g4f~=0.2.5.4
dashscope~=1.15.0
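Taken together, enabling the new provider presumably needs only the `dashscope` dependency above plus a `config.toml` switch. A minimal sketch: only `llm_provider` (and its `"qwen"` value) appears verbatim in the diff, and the `[app]` section name is inferred from `config.app` in llm.py:

```toml
# Hypothetical config.toml fragment -- only "llm_provider" appears
# verbatim in the diff; the section name is inferred from config.app.
[app]
llm_provider = "qwen"
```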