[!IMPORTANT]
Check out our newly released prediction engine: MiroFish - A Simple and Universal Swarm Intelligence Engine for Predicting Everything. The "Data Analysis Three-Step Approach" is now fully connected: we are excited to announce the official release of MiroFish! With the final piece of the puzzle in place, we have built a complete pipeline from BettaFish (data collection and analysis) to MiroFish (panoramic prediction). The closed loop from raw data to intelligent decision-making is now complete, making it possible to foresee the future!
⚡ Project Overview
"BettaFish" is an innovative multi-agent public opinion analysis system built from scratch. It helps break information cocoons, restore the original public sentiment, predict future trends, and assist decision-making. Users only need to raise analysis needs like chatting; the agents automatically analyze 30+ mainstream social platforms at home and abroad and millions of public comments.
Betta is a small yet combative and beautiful fish, symbolizing "small but powerful, fearless of challenges".
See the system-generated research report on "Wuhan University Public Opinion": In-depth Analysis Report on Wuhan University's Brand Reputation
See a complete system run example on "Wuhan University Public Opinion": Video - In-depth Analysis Report on Wuhan University's Brand Reputation
Beyond just report quality, compared to similar products, we have 🚀 six major advantages:
AI-Driven Comprehensive Monitoring: AI crawler clusters operate 24/7 non-stop, comprehensively covering 10+ key domestic and international social media platforms including Weibo, Xiaohongshu, TikTok, Kuaishou, etc. They not only capture trending content in real time, but also drill down into massive volumes of user comments, letting you hear the most authentic and widespread public voice.
Composite Analysis Engine Beyond LLM: We not only rely on 5 types of professionally designed Agents, but also integrate middleware such as fine-tuned models and statistical models. Through multi-model collaborative work, we ensure the depth, accuracy, and multi-dimensional perspective of analysis results.
Powerful Multimodal Capabilities: Breaking through text and image limitations, capable of deep analysis of short video content from TikTok, Kuaishou, etc., and precisely extracting structured multimodal information cards such as weather, calendar, stocks from modern search engines, giving you comprehensive control over public opinion dynamics.
Agent "Forum" Collaboration Mechanism: Endowing different Agents with unique toolsets and thinking patterns, introducing a debate moderator model, conducting chain-of-thought collision and debate through the "forum" mechanism. This not only avoids the thinking limitations of single models and homogenization caused by communication, but also catalyzes higher-quality collective intelligence and decision support.
Seamless Integration of Public and Private Domain Data: The platform not only analyzes public opinion, but also provides high-security interfaces supporting seamless integration of your internal business databases with public opinion data. Breaking through data barriers, providing powerful analysis capabilities of "external trends + internal insights" for vertical businesses.
Lightweight and Highly Extensible Framework: Based on pure Python modular design, achieving lightweight, one-click deployment. Clear code structure allows developers to easily integrate custom models and business logic, enabling rapid platform expansion and deep customization.
Starting with public opinion, but not limited to public opinion. The goal of "WeiYu" is to become a simple and universal data analysis engine that drives all business scenarios.
For example, you only need to simply modify the API parameters and prompts of the Agent toolset to transform it into a financial market analysis system.
Here's a relatively active Linux.do project discussion thread: https://linux.do/t/topic/1009280
Check out the comparison by a Linux.do fellow: Open Source Project (BettaFish) vs manus|minimax|ChatGPT Comparison
Say goodbye to traditional data dashboards. In "WeiYu", everything starts with a simple question: just state your analysis needs as you would in a conversation.
🪄 Sponsors
Provider of core agent capabilities including AI web search, file parsing, and web content scraping:
Anspire Open is a leading infrastructure provider for the agent era. We offer developers the core capability stack needed to build powerful agents. Currently available services include AI web search (multiple versions, highly competitive pricing), file parsing (limited-time free), web content scraping (limited-time free), cloud browser automation (Anspire Browser Agent, in beta), multi-turn rewriting, and more. We continue to provide a solid foundation for agents to connect and operate in complex digital worlds. Seamlessly integrates with mainstream agent platforms such as Dify, Coze, and Yuanqi. Through a transparent credit-based billing system and modular design, we provide enterprises with efficient, low-cost customized support to accelerate intelligent transformation.
🏗️ System Architecture
Overall Architecture Diagram
Insight Agent Private Database Mining: AI agent for in-depth analysis of private public opinion databases
Media Agent Multimodal Content Analysis: AI agent with powerful multimodal capabilities
Query Agent Precise Information Search: AI agent with domestic and international web search capabilities
Report Agent Intelligent Report Generation: Multi-round report generation AI agent with built-in templates
A Complete Analysis Workflow
| Step | Phase Name | Main Operations | Participating Components | Cycle Nature |
|---|---|---|---|---|
| 1 | User Query | Flask main application receives the query | Flask Main Application | - |
| 2 | Parallel Launch | Three Agents start working simultaneously | Query Agent, Media Agent, Insight Agent | - |
| 3 | Preliminary Analysis | Each Agent uses dedicated tools for overview search | Each Agent + Dedicated Toolsets | - |
| 4 | Strategy Formulation | Develop segmented research strategies based on preliminary results | Internal Decision Modules of Each Agent | - |
| 5-N | Iterative Phase | Forum Collaboration + In-depth Research | ForumEngine + All Agents | Multi-round cycles |
| 5.1 | In-depth Research | Each Agent conducts specialized search guided by forum host | Each Agent + Reflection Mechanisms + Forum Guidance | Each cycle |
| 5.2 | Forum Collaboration | ForumEngine monitors Agent communications and generates host guidance | ForumEngine + LLM Host | Each cycle |
| 5.3 | Communication Integration | Each Agent adjusts research directions based on discussions | Each Agent + forum_reader tool | Each cycle |
| N+1 | Result Integration | Report Agent collects all analysis results and forum content | Report Agent | - |
| N+2 | IR Assembly | Dynamically select templates and styles, generate metadata through multiple rounds, and assemble everything into an intermediate representation (IR) | Report Agent + Template Engine | - |
| N+3 | Report Generation | Perform quality checks on chunks, render into interactive HTML report based on IR | Report Agent + Stitching Engine | - |
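To make the iterative phase (steps 5.1-5.3) concrete, here is a minimal sketch of how such a forum loop could be wired. All class and method names below are illustrative assumptions, not BettaFish's actual API:

# Hypothetical sketch of the forum collaboration loop; names are illustrative.
class Agent:
    def __init__(self, name):
        self.name = name

    def research_step(self, guidance):
        # One round of specialized search, optionally steered by the host.
        return f"{self.name} findings (guided by: {guidance!r})"

class ForumEngine:
    def __init__(self, agents):
        self.agents = agents
        self.log = []

    def host_guidance(self):
        # A real implementation would call an LLM host over self.log.
        return "focus on the most contested claims"

    def run(self, rounds=3):
        guidance = None
        for _ in range(rounds):
            # 5.1 In-depth research by every agent
            self.log.extend(a.research_step(guidance) for a in self.agents)
            # 5.2 The host reads the forum and produces guidance
            guidance = self.host_guidance()
            # 5.3 Agents pick up the guidance in the next round
        return self.log

forum = ForumEngine([Agent("Query"), Agent("Media"), Agent("Insight")])
print(len(forum.run()))  # 3 agents x 3 rounds = 9 forum entries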
Current Code Structure (as of 2026-04-09)
The repository now follows a cleaner top-level split:
AI-Agent-Platform/
├─ apps/ # User-facing application entrypoints
├─ services/ # Core business implementation
├─ backend/ # Flask blueprints and API composition
├─ scripts/ # Local bootstrap / startup / repair scripts
├─ tools/ # Reporting and CI helper tools
├─ infra/ # Docker / Compose / local runtime manifests
├─ research/ # Research models and experiment assets
├─ vendor/ # Third-party source dependencies
├─ tests/ # Unit / integration / e2e tests
├─ static/ templates/ # Web static assets and Flask templates
├─ utils/ # Runtime helpers
├─ var/ # Logs, reports, database data, browser data, caches
└─ start_local.bat
- Use `apps/` and `services/` as the only canonical development roots.
- The old root-level compatibility package directories and compatibility CLI directories were removed in T-517.
- After T-525, the old root-level `app.py` compatibility entrypoint is gone. The only remaining root-level thin entrypoint is `start_local.bat`, while Docker entry files now live only under `infra/docker/`.
Historical Legacy Tree (outdated, kept only for migration context)
After T-518, the main README no longer embeds the full legacy tree: keeping deleted compatibility directories in the primary guide made them too easy to misread as still supported.
If you need migration background, use:
🚀 Quick Start (Pure Local, Recommended)
1. Prepare the Environment File
Copy .env.local.example to .env.local, then fill in your model API settings:
# PowerShell
Copy-Item .env.local.example .env.local
If you still keep a Docker-oriented `.env`, source startup will prefer `.env.local`, so both setups can coexist.
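This precedence can be reproduced with plain python-dotenv; the following is an illustrative sketch of the load order, not the project's actual startup code:

# Illustrative load order: .env is read first, then .env.local overrides it.
from pathlib import Path
from dotenv import load_dotenv  # pip install python-dotenv

if Path(".env").exists():
    load_dotenv(".env")
if Path(".env.local").exists():
    load_dotenv(".env.local", override=True)  # local values win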
2. Start a Local Database
Pure local mode is designed for a local PostgreSQL 15+ instance by default, and you can still switch to MySQL if needed:
| Configuration Item | Default Value | Description |
|---|---|---|
| `DB_HOST` | `127.0.0.1` | Local database host |
| `DB_PORT` | `5432` | Default local PostgreSQL port |
| `DB_USER` | `bettafish` | Database username |
| `DB_PASSWORD` | `bettafish` | Database password |
| `DB_NAME` | `bettafish` | Database name |
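Independently of the helper scripts below, you can sanity-check connectivity with a few lines of psycopg2, assuming the default values in the table above:

# Quick connectivity check (pip install psycopg2-binary); values match the table above.
import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1", port=5432,
    user="bettafish", password="bettafish", dbname="bettafish",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()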
3. Install Dependencies
python -m scripts.dev.bootstrap_local
Before starting the site, it is recommended to verify that local PostgreSQL is actually ready:
python -m scripts.dev.prepare_local_postgres --check-only
If the PostgreSQL service is already running but the database or schema is still missing, run:
python -m scripts.dev.prepare_local_postgres --ensure-db --apply-schema
This helper only handles diagnostics, database creation, and schema initialization. It does not install PostgreSQL as a system service for you.
If PostgreSQL is already installed locally but the existing 5432 instance uses different credentials, you can repair the BettaFish local workflow with:
python -m scripts.dev.repair_local_postgres
The repair helper reuses the current .env.local values for DB_USER / DB_PASSWORD / DB_NAME, provisions a project-managed PostgreSQL instance under var/db/postgres-local/, and automatically switches .env.local to 127.0.0.1:55432.
If you prefer to run the steps manually, the equivalent commands are:
pip install -r infra/python/requirements-local.txt
playwright install chromium
npm --prefix apps/web_ui install
4. Start the Project
python -m scripts.dev.start_local
Windows one-click entry:
start_local.bat
You can also call the canonical PowerShell entry directly:
powershell -ExecutionPolicy Bypass -NoProfile -File .\scripts\dev\start_local_stack.ps1
The script checks .env.local, runs bootstrap_local when dependencies are missing, verifies frontend packages, reuses existing 5000/9527 listeners when they already belong to BettaFish, and automatically repairs local PostgreSQL when the system 5432 instance is unavailable or has mismatched credentials. In the default mode, the backend runs as an API-only Flask service and the frontend runs separately on Vite with HMR. If you prefer manual diagnosis, continue with python -m scripts.dev.prepare_local_postgres --check-only, python -m scripts.dev.prepare_local_postgres --ensure-db --apply-schema, or python -m scripts.dev.repair_local_postgres.
- Frontend dev entry: http://127.0.0.1:9527
- Backend API: http://127.0.0.1:5000
- To generate static frontend assets for Flask or Docker single-service deployment, run `python -m scripts.dev.start_local --frontend-mode build`.

If `9527` is already used by another local project, run `python -m scripts.dev.start_local --frontend-port 9528`, or on Windows use `start_local.bat -FrontendPort 9528`.
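If you want to check port availability yourself before launching, a generic probe such as the following works; this is a standalone utility sketch, not the logic `start_local` actually uses:

# Generic check: is anything already listening on the default ports?
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

for port in (5000, 9527):
    print(port, "in use" if port_in_use(port) else "free")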
🐳 Docker Startup (Optional Compatibility Mode)
1. Prepare .env
Copy .env.example to .env, then edit the environment variables as needed.
2. Start the Project
Run the following command to start all services in the background:
docker compose -f infra/docker/docker-compose.yml -f infra/docker/docker-compose.override.yml up -d --build
Note: If image pulls are slow, the canonical file `infra/docker/docker-compose.yml` provides alternative mirror registry addresses as comments that you can swap in. The root-level Docker / Compose compatibility copies have been removed, so always use the files under `infra/docker/`.
3. Database Configuration for Docker Mode (PostgreSQL)
Configure the database connection information with the following parameters. The system also supports MySQL, so you can adjust the settings as needed:
| Configuration Item | Value to Use | Description |
|---|---|---|
| `DB_HOST` | `db` | Database service name (as defined in `infra/docker/docker-compose.yml`) |
| `DB_PORT` | `5432` | Default PostgreSQL port |
| `DB_USER` | `bettafish` | Database username |
| `DB_PASSWORD` | `bettafish` | Database password |
| `DB_NAME` | `bettafish` | Database name |
| Others | Keep Default | Please keep other parameters, such as database connection pool settings, at their default values. |
Large Language Model (LLM) Configuration
All LLM calls use the OpenAI API interface standard. After you finish the database configuration, continue to configure all LLM-related parameters so the system can connect to your selected LLM service.
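Concretely, each engine needs exactly three values: an API key, a base URL, and a model name. Below is a minimal sketch that builds an OpenAI-compatible client from the `INSIGHT_ENGINE_*` variable names used in the environment example later in this guide; it illustrates the idea and is not the project's internal wiring:

# Build an OpenAI-compatible client from the three per-engine variables.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["INSIGHT_ENGINE_API_KEY"],
    base_url=os.environ["INSIGHT_ENGINE_BASE_URL"],
)
model = os.environ["INSIGHT_ENGINE_MODEL_NAME"]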
Once you complete and save the configurations above, the system will be ready to run normally.
🔧 Source Code Startup Guide
If you are new to building Agent systems, you can start with a very simple demo: Deep Search Agent Demo
System Requirements
- Operating System: Windows, Linux, MacOS
- Python Version: 3.9+
- Conda: Anaconda or Miniconda
- Database: PostgreSQL (recommended) or MySQL
- Memory: 2GB+ recommended
1. Create Environment
If Using Conda
# Create conda environment
conda create -n your_conda_name python=3.11
conda activate your_conda_name
If Using uv
# Create uv environment
uv venv --python 3.11 # Create Python 3.11 environment
2. Install System Dependencies for PDF Export (Optional)
This section contains detailed configuration instructions: Configure the dependencies
3. Install Dependencies
If Step 2 is skipped, the WeasyPrint library may not install correctly, and the PDF functionality may be unavailable.
# Basic dependency installation
pip install -r requirements.txt
# uv version command (faster installation)
uv pip install -r requirements.txt
# If you do not want to use the local sentiment analysis model (which has low computational requirements and defaults to the CPU version), you can comment out the 'Machine Learning' section in this file before executing the command.
4. Install Playwright Browser Drivers
# Install browser drivers (for crawler functionality)
playwright install chromium
5. Configure Local Environment and Database
Copy .env.local.example in the project root directory to .env.local.
If you still want to keep Docker startup available, keep .env for compose and place your pure-local overrides in .env.local.
Edit .env.local and fill in your API keys (you can also choose your own models and search proxies; see .env.example, .env.local.example, or services/shared/config/app_settings.py for details):
# ====================== BettaFish Local Startup ======================
HOST=127.0.0.1
PORT=5000
# ====================== Database Configuration ======================
DB_HOST=127.0.0.1
DB_PORT=5432
DB_USER=bettafish
DB_PASSWORD=bettafish
DB_NAME=bettafish
# Database character set, utf8mb4 is recommended for emoji compatibility
DB_CHARSET=utf8mb4
# Database type: postgresql or mysql
DB_DIALECT=postgresql
# Database initialization is checked automatically during startup
# ====================== LLM Configuration ======================
# You can switch each Engine's LLM provider as long as it follows the OpenAI-compatible request format
# The configuration file provides recommended LLMs for each Agent. For initial deployment, please refer to the recommended settings first
# Insight Agent
INSIGHT_ENGINE_API_KEY=
INSIGHT_ENGINE_BASE_URL=
INSIGHT_ENGINE_MODEL_NAME=
# Media Agent
...
6. Launch System
6.1 Complete System Launch (Recommended)
# In project root directory, activate conda environment
conda activate your_conda_name
# Default local development mode: Vite HMR + Flask API
python -m scripts.dev.start_local
uv version startup command:
# In project root directory, activate uv environment
.venv\Scripts\activate
# Default local development mode: Vite HMR + Flask API
python -m scripts.dev.start_local
If you only need the backend API, use `python -m apps.web_api`. The recommended entrypoint for local development remains `python -m scripts.dev.start_local`.

Note 1: If you need static frontend assets for Flask or Docker single-service deployment, run `python -m scripts.dev.start_local --frontend-mode build`. The compatibility alias `--build-frontend` still works, but it is no longer the primary local development entrypoint.

Note 2: Data scraping needs to be performed as a separate operation. Please refer to the instructions in section 6.3.

Note 3: On a fresh machine, run `python -m scripts.dev.bootstrap_local` first. It prepares the Python dependencies, Playwright browser, and frontend dependencies in one place.

Note 4: On Windows, you can run `start_local.bat` from the repo root. It forwards to `scripts/dev/start_local_stack.ps1`, starts the frontend HMR server on `9527` and the backend API on `5000`, and reuses the pure-local startup flow for dependency checks, PostgreSQL diagnostics, and automatic repair.

Note 4.1: If `9527` is occupied, use `start_local.bat -FrontendPort 9528`, or directly run `python -m scripts.dev.start_local --frontend-port 9528`.

Note 5: If PostgreSQL is installed locally but you are not sure whether the database and schema are ready, run `python -m scripts.dev.prepare_local_postgres --check-only`. To create the database and initialize the schema, run `python -m scripts.dev.prepare_local_postgres --ensure-db --apply-schema`.
Visit http://127.0.0.1:9527 for the frontend; the backend API is available at http://127.0.0.1:5000.
6.2 Launch Individual Agents
# Start QueryEngine
streamlit run apps/engine_console/query_engine_streamlit_app.py --server.port 8503
# Start MediaEngine
streamlit run apps/engine_console/media_engine_streamlit_app.py --server.port 8502
# Start InsightEngine
streamlit run apps/engine_console/insight_engine_streamlit_app.py --server.port 8501
6.3 Crawler System Standalone Use
This section has detailed configuration documentation: MindSpider Usage Guide
The recommended repo-root entry is python -m services.crawler.mindspider.main --status.
# Enter crawler directory
cd services/crawler/mindspider
# Project initialization
python main.py --setup
# Run topic extraction (get hot news and keywords)
python main.py --broad-topic
# Run complete crawler workflow
python main.py --complete --date 2024-01-20
# Run topic extraction only
python main.py --broad-topic --date 2024-01-20
# Run deep crawling only
python main.py --deep-sentiment --platforms xhs dy wb
6.4 Command-line Report Generation Tool
This tool bypasses the execution phase of all three analysis engines, directly loads their most recent log files, and generates a consolidated report without requiring the Web interface (while also skipping incremental file-validation steps). It will also generate a Markdown copy after the PDF by default (toggle via CLI flag). It is typically used when rapid retries are needed due to unsatisfactory report outputs, or when debugging the Report Engine.
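The "most recent log files" step is equivalent to picking the newest `.md` file in each engine directory. A standalone sketch of that selection logic (not the tool's internal code):

# Pick the newest .md report from each engine directory.
from pathlib import Path
from typing import Optional

def latest_report(engine_dir: str) -> Optional[Path]:
    files = sorted(Path(engine_dir).glob("*.md"), key=lambda p: p.stat().st_mtime)
    return files[-1] if files else None

for d in ("var/reports/engines/insight", "var/reports/engines/media", "var/reports/engines/query"):
    print(d, "->", latest_report(d))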
# Basic usage (automatically extract topic from filename)
python -m tools.reports.report_engine_only
# Specify report topic
python -m tools.reports.report_engine_only --query "Civil Engineering Industry Analysis"
# Skip PDF generation (even if system supports it)
python -m tools.reports.report_engine_only --skip-pdf
# Skip Markdown generation
python -m tools.reports.report_engine_only --skip-markdown
# Show verbose logging
python -m tools.reports.report_engine_only --verbose
# Show help information
python -m tools.reports.report_engine_only --help
Features:
- Automatic Dependency Check: The program automatically checks the system dependencies required for PDF generation and provides installation instructions if any are missing
- Get Latest Files: Automatically retrieves the latest analysis reports from the three engine directories (`var/reports/engines/insight/`, `var/reports/engines/media/`, `var/reports/engines/query/`)
- File Confirmation: Displays all selected file names, paths, and modification times, then waits for user confirmation (enter `y` to continue, `n` to exit)
- Direct Report Generation: Skips file-addition verification and directly calls the Report Engine to generate comprehensive reports
- Automatic File Saving:
  - HTML reports saved to the `var/reports/final/` directory
  - PDF reports (if dependencies are available) saved to the `var/reports/final/pdf/` directory
  - Markdown reports (disable with `--skip-markdown`) saved to the `var/reports/final/md/` directory
  - File naming format: `final_report_{topic}_{timestamp}.html/pdf/md`
Notes:
- Ensure at least one of the three engine directories contains `.md` report files
- The command-line tool is independent of the Web interface; the two do not interfere with each other
- PDF generation requires system dependencies; see the "Install System Dependencies for PDF Export" section above
Quickly re-render the latest outputs:
- `python -m tools.reports.regenerate_latest_html` / `python -m tools.reports.regenerate_latest_md`: Re-stitch the latest chapter JSON from `CHAPTER_OUTPUT_DIR` into a Document IR and render it to HTML or Markdown directly.
- `python -m tools.reports.regenerate_latest_pdf`: Read the newest IR under `var/reports/final/ir` and re-export a PDF with SVG vector charts.
⚙️ Advanced Configuration (Deprecated: configuration is now unified in the .env file in the project root, and sub-agents automatically inherit it)
Modify Key Parameters
Agent Configuration Parameters
Each agent has dedicated configuration files that can be adjusted according to needs:
# services/engines/query/utils/config.py
class Config:
max_reflections = 2 # Reflection rounds
max_search_results = 15 # Maximum search results
max_content_length = 8000 # Maximum content length
# services/engines/media/utils/config.py
class Config:
comprehensive_search_limit = 10 # Comprehensive search limit
web_search_limit = 15 # Web search limit
# services/engines/insight/utils/config.py
class Config:
default_search_topic_globally_limit = 200 # Global search limit
default_get_comments_limit = 500 # Comment retrieval limit
max_search_results_for_llm = 50 # Max results for LLM
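Because these are plain class attributes, they can also be overridden at import time in your own entrypoint. A sketch, assuming the module paths shown in the comments above:

# Override defaults before the engine is instantiated.
from services.engines.query.utils.config import Config

Config.max_reflections = 1      # fewer reflection rounds for faster runs
Config.max_search_results = 10  # trim the result volume per search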
Sentiment Analysis Model Configuration
# services/engines/insight/tools/sentiment_analyzer.py
SENTIMENT_CONFIG = {
'model_type': 'multilingual', # Options: 'bert', 'multilingual', 'qwen'
'confidence_threshold': 0.8, # Confidence threshold
'batch_size': 32, # Batch size
'max_sequence_length': 512, # Max sequence length
}
Integrate Different LLM Models
The system supports any LLM provider that follows the OpenAI request format. You only need to fill in KEY, BASE_URL, and MODEL_NAME in .env.local; field definitions live in services/shared/config/app_settings.py.
What is the OpenAI request format? Here's a simple example:
from openai import OpenAI

client = OpenAI(api_key="your_api_key", base_url="https://aihubmix.com/v1")
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "What new opportunities will reasoning models bring to the market?"}
    ],
)
complete_response = response.choices[0].message.content
print(complete_response)
Change Sentiment Analysis Models
The system integrates multiple sentiment analysis methods, selectable based on needs:
1. Multilingual Sentiment Analysis
The multilingual sentiment research demo now lives in research/sentiment_models/WeiboMultilingualSentiment/.
The runtime workspace for InsightEngine multilingual sentiment analysis is controlled by INSIGHT_SENTIMENT_MODEL_DIR, and its default value now points to var/models/insight_weibo_multilingual.
cd research/sentiment_models/WeiboMultilingualSentiment
python predict.py --text "This product is amazing!" --lang "en"
2. Small Parameter Qwen3 Fine-tuning
cd research/sentiment_models/WeiboSentiment_SmallQwen
python predict_universal.py --text "This event was very successful"
3. BERT-based Fine-tuned Model
# Use BERT Chinese model
cd research/sentiment_models/WeiboSentiment_Finetuned/BertChinese-Lora
python predict.py --text "This product is really great"
4. GPT-2 LoRA Fine-tuned Model
cd research/sentiment_models/WeiboSentiment_Finetuned/GPT2-Lora
python predict.py --text "I'm not feeling great today"
5. Traditional Machine Learning Methods
cd research/sentiment_models/WeiboSentiment_MachineLearning
python predict.py --model_type "svm" --text "Service attitude needs improvement"
Integrate Custom Business Database
1. Modify Database Connection Configuration
If you plan to extend the project with a custom business database, first add the corresponding fields to services/shared/config/app_settings.py, then provide the actual values in .env.local:
# Provide your business database configuration in `.env.local`
BUSINESS_DB_HOST=your_business_db_host
BUSINESS_DB_PORT=3306
BUSINESS_DB_USER=your_business_user
BUSINESS_DB_PASSWORD=your_business_password
BUSINESS_DB_NAME=your_business_database
2. Create Custom Data Access Tools
# services/engines/insight/tools/custom_db_tool.py
from services.shared.config import settings
class CustomBusinessDBTool:
"""Custom business database query tool"""
def __init__(self):
self.connection_config = {
'host': settings.BUSINESS_DB_HOST,
'port': settings.BUSINESS_DB_PORT,
'user': settings.BUSINESS_DB_USER,
'password': settings.BUSINESS_DB_PASSWORD,
'database': settings.BUSINESS_DB_NAME,
}
def search_business_data(self, query: str, table: str):
"""Query business data"""
# Implement your business logic
pass
def get_customer_feedback(self, product_id: str):
"""Get customer feedback data"""
# Implement customer feedback query logic
pass
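As a starting point, here is one possible sketch of `search_business_data` using PyMySQL. It assumes the business database is MySQL and that the target table has a text column named `content`; both are illustrative assumptions, not project guarantees:

# Sketch implementation (pip install pymysql); swap in psycopg2 for PostgreSQL.
import pymysql

def search_business_data(self, query: str, table: str):
    """Query business data (illustrative sketch)."""
    conn = pymysql.connect(charset="utf8mb4", **self.connection_config)
    try:
        with conn.cursor(pymysql.cursors.DictCursor) as cur:
            # Identifiers cannot be bound as parameters, so validate `table`
            # against an allowlist before interpolating it.
            cur.execute(
                f"SELECT * FROM `{table}` WHERE content LIKE %s LIMIT 100",
                (f"%{query}%",),
            )
            return cur.fetchall()
    finally:
        conn.close()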
3. Integrate into InsightEngine
# Integrate custom tools in services/engines/insight/agent.py
from .tools.custom_db_tool import CustomBusinessDBTool
class DeepSearchAgent:
def __init__(self, config=None):
# ... other initialization code
self.custom_db_tool = CustomBusinessDBTool()
def execute_custom_search(self, query: str):
"""Execute custom business data search"""
return self.custom_db_tool.search_business_data(query, "your_table")
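Once the stub methods are implemented, usage could look like this (the query string is just an example):

agent = DeepSearchAgent()
print(agent.execute_custom_search("customer complaints about shipping"))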
Custom Report Templates
1. Upload in Web Interface
The system supports uploading custom template files (.md or .txt format), selectable when generating reports.
2. Create Template Files
Create new templates in the services/engines/report/report_template/ directory, and our Agent will automatically select the most appropriate template.
🤝 Contributing Guide
We welcome all forms of contributions!
Please read the following contribution guidelines:
🦖 Next Development Plan
The system has now completed the final prediction step! Visit 【MiroFish - Predict Everything】: https://github.com/666ghj/MiroFish
⚠️ Disclaimer
Important Notice: This project is for educational, academic research, and learning purposes only
- Compliance Statement:
  - All code, tools, and functionalities in this project are intended solely for educational, academic research, and learning purposes
  - Commercial use or profit-making activities are strictly prohibited
  - Any illegal, non-compliant, or rights-infringing activities are strictly prohibited
- Web Scraping Disclaimer:
  - The web scraping functionality in this project is intended only for technical learning and research purposes
  - Users must comply with the target websites' robots.txt protocols and terms of use
  - Users must comply with relevant laws and regulations and must not engage in malicious scraping or data abuse
  - Users are solely responsible for any legal consequences arising from the use of web scraping functionality
- Data Usage Disclaimer:
  - The data analysis functionality in this project is intended only for academic research purposes
  - Using analysis results for commercial decision-making or profit-making purposes is strictly prohibited
  - Users should ensure the legality and compliance of the data being analyzed
- Technical Disclaimer:
  - This project is provided "as is" without any express or implied warranties
  - The authors are not responsible for any direct or indirect losses caused by the use of this project
  - Users should evaluate the applicability and risks of this project independently
- Liability Limitation:
  - Users should fully understand relevant laws and regulations before using this project
  - Users should ensure their usage complies with local legal and regulatory requirements
  - Users are solely responsible for any consequences arising from the illegal use of this project
Please carefully read and understand the above disclaimer before using this project. Using this project indicates that you have agreed to and accepted all the above terms.
📄 License
This project is licensed under the GPL-2.0 License. Please see the LICENSE file for details.
🎉 Support & Contact
Get Help
FAQ: https://github.com/666ghj/BettaFish/issues/185
- Project Homepage: GitHub Repository
- Issue Reporting: Issues Page
- Feature Requests: Discussions Page
Contact Information
- 📧 Email: hangjiang@bupt.edu.cn
Business Cooperation
- Enterprise Custom Development
- Big Data Services
- Academic Collaboration
- Technical Training
👥 Contributors
Thanks to these excellent contributors:




