
Update README.

@@ -23,7 +23,9 @@
 
 ## ⚡ Project Overview
 
-**"WeiYu"** is an innovative multi-agent public opinion analysis system built from scratch, not limited to Weibo, and simple and universal across all platforms.
+**"BettaFish"** is an innovative multi-agent public opinion analysis system built from scratch. It helps break information cocoons, restore the original picture of public sentiment, predict future trends, and assist decision-making. Users simply state their analysis needs, as in a chat, and the agents automatically analyze 30+ mainstream social platforms at home and abroad, along with millions of public comments.
+
+> The betta is a small yet combative and beautiful fish, symbolizing "small but powerful, fearless of challenges".
 
 See the system-generated research report on "Wuhan University Public Opinion": [In-depth Analysis Report on Wuhan University's Brand Reputation](./final_reports/final_report__20250827_131630.html)
 
@@ -103,10 +105,7 @@ Weibo_PublicOpinion_AnalysisSystem/
 ├── InsightEngine/                # Private database mining Agent
 │   ├── agent.py                  # Agent main logic
 │   ├── llms/                     # LLM interface wrapper
-│   │   ├── deepseek.py           # DeepSeek API
-│   │   ├── kimi.py               # Kimi API
-│   │   ├── openai_llm.py         # OpenAI-format API
-│   │   └── base.py               # LLM base class
+│   │   └── base.py               # Unified OpenAI-compatible client
 │   ├── nodes/                    # Processing nodes
 │   │   ├── base_node.py          # Base node class
 │   │   ├── formatting_node.py    # Formatting node
@@ -130,7 +129,6 @@ Weibo_PublicOpinion_AnalysisSystem/
 ├── ReportEngine/                 # Multi-round report generation Agent
 │   ├── agent.py                  # Agent main logic
 │   ├── llms/                     # LLM interfaces
-│   │   └── gemini.py             # Gemini API dedicated
 │   ├── nodes/                    # Report generation nodes
 │   │   ├── template_selection.py # Template selection node
 │   │   └── html_generation.py    # HTML generation node
@@ -230,7 +228,7 @@ playwright install chromium
 
 #### 4.1 Configure API Keys
 
-Edit the `config.py` file and fill in your API keys (you can also choose your own models and search proxies; please see the config file for details):
+Edit the `config.py` file and fill in your API keys (you can also choose your own models and search proxies; see the config file for details):
 
 ```python
 # MySQL Database Configuration
@@ -238,26 +236,18 @@ DB_HOST = "localhost"
 DB_PORT = 3306
 DB_USER = "your_username"
 DB_PASSWORD = "your_password"
-DB_NAME = "weibo_analysis"
+DB_NAME = "your_db_name"
 DB_CHARSET = "utf8mb4"
 
-# DeepSeek API (Apply at: https://www.deepseek.com/)
-DEEPSEEK_API_KEY = "your_deepseek_api_key"
-
-# Tavily Search API (Apply at: https://www.tavily.com/)
-TAVILY_API_KEY = "your_tavily_api_key"
-
-# Kimi API (Apply at: https://www.kimi.com/)
-KIMI_API_KEY = "your_kimi_api_key"
-
-# Gemini API (Apply at: https://api.chataiapi.com/)
-GEMINI_API_KEY = "your_gemini_api_key"
+# LLM configuration
+# You can switch each Engine's LLM provider, as long as it follows the OpenAI-compatible request format
 
-# Bocha Search API (Apply at: https://open.bochaai.com/)
-BOCHA_Web_Search_API_KEY = "your_bocha_api_key"
-
-# Silicon Flow API (Apply at: https://siliconflow.cn/)
-GUIJI_QWEN3_API_KEY = "your_guiji_api_key"
+# Insight Agent
+INSIGHT_ENGINE_API_KEY = "your_api_key"
+INSIGHT_ENGINE_BASE_URL = "https://api.moonshot.cn/v1"
+INSIGHT_ENGINE_MODEL_NAME = "kimi-k2-0711-preview"
+# Media Agent
+...
 ```
 
 #### 4.2 Database Initialization
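
For context on the config change above: each engine now reads its own `<ENGINE>_API_KEY` / `<ENGINE>_BASE_URL` / `<ENGINE>_MODEL_NAME` trio of module-level constants. A minimal sketch of how such a trio can be grouped into one object, assuming the naming convention shown in the diff; the `CONFIG` dict, `EngineLLMConfig`, and `load_engine_config` are illustrative helpers, not code from the repository:

```python
from dataclasses import dataclass

# Stand-in for the module-level constants defined in config.py.
CONFIG = {
    "INSIGHT_ENGINE_API_KEY": "your_api_key",
    "INSIGHT_ENGINE_BASE_URL": "https://api.moonshot.cn/v1",
    "INSIGHT_ENGINE_MODEL_NAME": "kimi-k2-0711-preview",
}

@dataclass
class EngineLLMConfig:
    api_key: str
    base_url: str
    model_name: str

def load_engine_config(prefix: str, source: dict = CONFIG) -> EngineLLMConfig:
    """Collect <PREFIX>_API_KEY / _BASE_URL / _MODEL_NAME into one object."""
    return EngineLLMConfig(
        api_key=source[f"{prefix}_API_KEY"],
        base_url=source[f"{prefix}_BASE_URL"],
        model_name=source[f"{prefix}_MODEL_NAME"],
    )

cfg = load_engine_config("INSIGHT_ENGINE")
print(cfg.base_url)  # https://api.moonshot.cn/v1
```

Switching an engine to another provider then only means changing its three constants, with no per-provider wrapper code.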
@@ -373,30 +363,28 @@ SENTIMENT_CONFIG = {
 
 ### Integrate Different LLM Models
 
-The system supports multiple LLM providers, switchable in each agent's configuration:
-
-```python
-# Configure in each Engine's utils/config.py
-class Config:
-    default_llm_provider = "deepseek"  # Options: "deepseek", "openai", "kimi", "gemini", "qwen"
-
-    # DeepSeek configuration
-    deepseek_api_key = "your_api_key"
-    deepseek_model = "deepseek-chat"
-
-    # OpenAI-compatible configuration
-    openai_api_key = "your_api_key"
-    openai_model = "gpt-3.5-turbo"
-    openai_base_url = "https://api.openai.com/v1"
-
-    # Kimi configuration
-    kimi_api_key = "your_api_key"
-    kimi_model = "moonshot-v1-8k"
-
-    # Gemini configuration
-    gemini_api_key = "your_api_key"
-    gemini_model = "gemini-pro"
-```
+The system supports any LLM provider that follows the OpenAI request format. You only need to fill in the KEY, BASE_URL, and MODEL_NAME in `config.py`.
+
+> What is the OpenAI request format? Here's a simple example:
+>```python
+>from openai import OpenAI
+>
+>client = OpenAI(api_key="your_api_key",
+>                base_url="https://api.siliconflow.cn/v1")
+>
+>response = client.chat.completions.create(
+>    model="Qwen/Qwen2.5-72B-Instruct",
+>    messages=[
+>        {
+>            "role": "user",
+>            "content": "What new opportunities will reasoning models bring to the market?"
+>        }
+>    ],
+>)
+>
+>complete_response = response.choices[0].message.content
+>print(complete_response)
+>```
 
 ### Change Sentiment Analysis Models
 
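
The "unified OpenAI-compatible client" that replaces the per-provider wrappers (`deepseek.py`, `kimi.py`, `gemini.py`) in this change can be pictured as one thin class keyed by api_key, base_url, and model_name. A hedged sketch of that idea, not the repository's actual `llms/base.py`; the class name and the injected `transport` hook are invented here so the example runs without a network call (in real use it would delegate to `openai.OpenAI(api_key=..., base_url=...).chat.completions.create`):

```python
from typing import Callable, Dict, Optional

class OpenAICompatClient:
    """One client for any provider that speaks the OpenAI chat format."""

    def __init__(self, api_key: str, base_url: str, model_name: str,
                 transport: Optional[Callable[[Dict], str]] = None):
        self.api_key = api_key
        self.base_url = base_url
        self.model_name = model_name
        # `transport` sends the request payload; injected here so the
        # sketch stays runnable offline.
        self._transport = transport or (lambda payload: "")

    def chat(self, user_content: str) -> str:
        # Build a payload in the OpenAI chat-completions shape.
        payload = {
            "model": self.model_name,
            "messages": [{"role": "user", "content": user_content}],
        }
        return self._transport(payload)

# Switching providers is just a different base_url / model_name:
echo = lambda p: f"[{p['model']}] {p['messages'][0]['content']}"
client = OpenAICompatClient("your_api_key", "https://api.moonshot.cn/v1",
                            "kimi-k2-0711-preview", transport=echo)
print(client.chat("hello"))  # [kimi-k2-0711-preview] hello
```

Because every provider shares the same request shape, nothing provider-specific leaks out of this one class, which is what lets the diff delete the dedicated wrapper files.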
@@ -23,7 +23,9 @@
 
 ## ⚡ Project Overview
 
-**"WeiYu" (微舆)** is an innovative multi-agent public opinion analysis system built from scratch, not limited to Weibo, and simple and universal across all platforms.
+**"WeiYu" (微舆)** is an innovative multi-agent public opinion analysis system built from scratch. It helps break information cocoons, restore the original picture of public sentiment, predict future trends, and assist decision-making. Users simply state their analysis needs, as in a chat, and the agents automatically analyze 30+ mainstream social platforms at home and abroad, along with millions of public comments.
+
+> "WeiYu" (微舆) is a homophone of "tiny fish" (微鱼); the betta (BettaFish) is a small yet combative and beautiful fish, symbolizing "small but powerful, fearless of challenges".
 
 See the research report generated by the system, using "Wuhan University public opinion" as an example: [In-depth Analysis Report on Wuhan University's Brand Reputation](./final_reports/final_report__20250827_131630.html)
 
@@ -103,10 +105,7 @@ Weibo_PublicOpinion_AnalysisSystem/
 ├── InsightEngine/                # Private database mining Agent
 │   ├── agent.py                  # Agent main logic
 │   ├── llms/                     # LLM interface wrapper
-│   │   ├── deepseek.py           # DeepSeek API
-│   │   ├── kimi.py               # Kimi API
-│   │   ├── openai_llm.py         # OpenAI-format API
-│   │   └── base.py               # LLM base class
+│   │   └── base.py               # Unified OpenAI-compatible client
 │   ├── nodes/                    # Processing nodes
 │   │   ├── base_node.py          # Base node class
 │   │   ├── formatting_node.py    # Formatting node
@@ -130,7 +129,6 @@ Weibo_PublicOpinion_AnalysisSystem/
 ├── ReportEngine/                 # Multi-round report generation Agent
 │   ├── agent.py                  # Agent main logic
 │   ├── llms/                     # LLM interfaces
-│   │   └── gemini.py             # Gemini API dedicated
 │   ├── nodes/                    # Report generation nodes
 │   │   ├── template_selection.py # Template selection node
 │   │   └── html_generation.py    # HTML generation node
@@ -238,31 +236,26 @@ DB_HOST = "localhost"
 DB_PORT = 3306
 DB_USER = "your_username"
 DB_PASSWORD = "your_password"
-DB_NAME = "weibo_analysis"
+DB_NAME = "your_db_name"
 DB_CHARSET = "utf8mb4"
 
-# DeepSeek API (apply at: https://www.deepseek.com/)
-DEEPSEEK_API_KEY = "your_deepseek_api_key"
-
-# Tavily Search API (apply at: https://www.tavily.com/)
-TAVILY_API_KEY = "your_tavily_api_key"
-
-# Kimi API (apply at: https://www.kimi.com/)
-KIMI_API_KEY = "your_kimi_api_key"
-
-# Gemini API (apply at: https://api.chataiapi.com/)
-GEMINI_API_KEY = "your_gemini_api_key"
+# LLM configuration
+# You can change the API each part's LLM uses, as long as it is compatible with the OpenAI request format
 
-# Bocha Search API (apply at: https://open.bochaai.com/)
-BOCHA_Web_Search_API_KEY = "your_bocha_api_key"
-
-# Silicon Flow API (apply at: https://siliconflow.cn/)
-GUIJI_QWEN3_API_KEY = "your_guiji_api_key"
+# Insight Agent
+INSIGHT_ENGINE_API_KEY = "your_api_key"
+INSIGHT_ENGINE_BASE_URL = "https://api.moonshot.cn/v1"
+INSIGHT_ENGINE_MODEL_NAME = "kimi-k2-0711-preview"
+# Media Agent
+...
 ```
 
 #### 4.2 Database Initialization
 
 **Option 1: Use a local database**
+
+> The MindSpider crawler system and the public opinion system are independent of each other, so you also need to configure `MindSpider\config.py` separately
+
 ```bash
 # Local MySQL database initialization
 cd MindSpider
@@ -373,30 +366,26 @@ SENTIMENT_CONFIG = {
 
 ### Integrate Different LLM Models
 
-The system supports multiple LLM providers, switchable in each Agent's configuration:
-
-```python
-# Configure in each Engine's utils/config.py
-class Config:
-    default_llm_provider = "deepseek"  # Options: "deepseek", "openai", "kimi", "gemini", "qwen", etc.
-
-    # DeepSeek configuration
-    deepseek_api_key = "your_api_key"
-    deepseek_model = "deepseek-chat"
-
-    # OpenAI-compatible configuration
-    openai_api_key = "your_api_key"
-    openai_model = "gpt-3.5-turbo"
-    openai_base_url = "https://api.openai.com/v1"
-
-    # Kimi configuration
-    kimi_api_key = "your_api_key"
-    kimi_model = "moonshot-v1-8k"
-
-    # Gemini configuration
-    gemini_api_key = "your_api_key"
-    gemini_model = "gemini-pro"
-```
+Any LLM provider that follows the OpenAI calling format is supported; you only need to fill in the corresponding KEY, BASE_URL, and MODEL_NAME in `/config.py`.
+
+> What is the OpenAI calling format? Here's a simple example:
+>```python
+>from openai import OpenAI
+>
+>client = OpenAI(api_key="your_api_key",
+>                base_url="https://api.siliconflow.cn/v1")
+>
+>response = client.chat.completions.create(
+>    model="Qwen/Qwen2.5-72B-Instruct",
+>    messages=[
+>        {"role": "user",
+>         "content": "What new opportunities will reasoning models bring to the market?"}
+>    ],
+>)
+>
+>complete_response = response.choices[0].message.content
+>print(complete_response)
+>```
 
 ### Change Sentiment Analysis Models
 