Reorganized docs for docusaurus publish (#860)

### What problem does this PR solve?

Reorganizes the `docs/` directory for publishing with Docusaurus: pages move into `guides/` and `references/` subdirectories, each page gains front matter (sidebar position and slug), and cross-references in the READMEs, docs, and web UI are updated to the new paths.

### Type of change

- [x] Documentation Update
This commit is contained in:
writinwaters 2024-05-21 20:53:55 +08:00 committed by GitHub
parent 1797f5ce31
commit 3cae87a902
19 changed files with 713 additions and 660 deletions

View File

@ -34,11 +34,11 @@
- 2024-05-15 Integrates OpenAI GPT-4o.
- 2024-05-08 Integrates LLM DeepSeek-V2.
- 2024-04-26 Adds file management.
- 2024-04-19 Supports conversation API ([detail](./docs/conversation_api.md)).
- 2024-04-19 Supports conversation API ([detail](./docs/references/api.md)).
- 2024-04-16 Integrates an embedding model 'bce-embedding-base_v1' from [BCEmbedding](https://github.com/netease-youdao/BCEmbedding), and [FastEmbed](https://github.com/qdrant/fastembed), which is designed specifically for light and speedy embedding.
- 2024-04-11 Supports [Xinference](./docs/xinference.md) for local LLM deployment.
- 2024-04-11 Supports [Xinference](./docs/guides/deploy_local_llm.md) for local LLM deployment.
- 2024-04-10 Adds a new layout recognition model for analyzing legal documents.
- 2024-04-08 Supports [Ollama](./docs/ollama.md) for local LLM deployment.
- 2024-04-08 Supports [Ollama](./docs/guides/deploy_local_llm.md) for local LLM deployment.
- 2024-04-07 Supports Chinese UI.
## 🌟 Key Features
@ -87,7 +87,7 @@
### 🚀 Start up the server
1. Ensure `vm.max_map_count` >= 262144 ([more](./docs/max_map_count.md)):
1. Ensure `vm.max_map_count` >= 262144 ([more](./docs/guides/max_map_count.md)):
> To check the value of `vm.max_map_count`:
>
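For reference, checking and raising `vm.max_map_count` on a Linux host typically looks like the sketch below; persisting the value in `/etc/sysctl.conf` is an assumption that holds for most distributions:

```bash
# Check the current value
sysctl vm.max_map_count

# Raise it for the running system (takes effect immediately, not persistent)
sudo sysctl -w vm.max_map_count=262144

# Persist the setting across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```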
@ -154,7 +154,7 @@
> With default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number), as the default HTTP serving port `80` can be omitted.
6. In [service_conf.yaml](./docker/service_conf.yaml), select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.
> See [./docs/llm_api_key_setup.md](./docs/llm_api_key_setup.md) for more information.
> See [./docs/guides/llm_api_key_setup.md](./docs/guides/llm_api_key_setup.md) for more information.
_The show is now on!_
@ -288,7 +288,7 @@ To launch the service from source:
## 📚 Documentation
- [Quickstart](./docs/quickstart.md)
- [FAQ](./docs/faq.md)
- [FAQ](./docs/references/faq.md)
## 📜 Roadmap
@ -301,4 +301,4 @@ See the [RAGFlow Roadmap 2024](https://github.com/infiniflow/ragflow/issues/162)
## 🙌 Contributing
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](https://github.com/infiniflow/ragflow/blob/main/docs/CONTRIBUTING.md) first.
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.

View File

@ -34,12 +34,12 @@
- 2024-05-15 Integrates OpenAI GPT-4o.
- 2024-05-08 Integrates LLM DeepSeek-V2.
- 2024-04-26 Adds the file management feature.
- 2024-04-19 Supports conversation API ([detail](./docs/conversation_api.md)).
- 2024-04-19 Supports conversation API ([detail](./docs/references/api.md)).
- 2024-04-16 Adds the embedding model 'bce-embedding-base_v1' from [BCEmbedding](https://github.com/netease-youdao/BCEmbedding).
- 2024-04-16 [FastEmbed](https://github.com/qdrant/fastembed) is designed for light and speedy embedding.
- 2024-04-11 Supports [Xinference](./docs/xinference.md) for local LLM deployment.
- 2024-04-11 Supports [Xinference](./docs/guides/deploy_local_llm.md) for local LLM deployment.
- 2024-04-10 Adds a new layout recognition model to the 'Laws' method.
- 2024-04-08 Supports localized LLM deployment with [Ollama](./docs/ollama.md).
- 2024-04-08 Supports localized LLM deployment with [Ollama](./docs/guides/deploy_local_llm.md).
- 2024-04-07 Supports Chinese UI.
@ -89,7 +89,7 @@
### 🚀 Start up the server
1. Ensure `vm.max_map_count` >= 262144 ([more](./docs/max_map_count.md)):
1. Ensure `vm.max_map_count` >= 262144 ([more](./docs/guides/max_map_count.md)):
> To check the value of `vm.max_map_count`:
>
@ -155,7 +155,7 @@
> When using the default configurations, the default HTTP serving port `80` can be omitted, so in the given scenario you only need to enter `http://IP_OF_YOUR_MACHINE` (sans port number).
6. In [service_conf.yaml](./docker/service_conf.yaml), select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.
> See [./docs/llm_api_key_setup.md](./docs/llm_api_key_setup.md) for more information.
> See [./docs/guides/llm_api_key_setup.md](./docs/guides/llm_api_key_setup.md) for more information.
_The initial setup is complete; the show is now on!_
@ -255,7 +255,7 @@ $ bash ./entrypoint.sh
## 📚 Documentation
- [Quickstart](./docs/quickstart.md)
- [FAQ](./docs/faq.md)
- [FAQ](./docs/references/faq.md)
## 📜 Roadmap
@ -268,4 +268,4 @@ $ bash ./entrypoint.sh
## 🙌 Contributing
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](https://github.com/infiniflow/ragflow/blob/main/docs/CONTRIBUTING.md) first.
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.

View File

@ -34,11 +34,11 @@
- 2024-05-15 Integrates OpenAI GPT-4o.
- 2024-05-08 Integrates LLM DeepSeek.
- 2024-04-26 Adds the file management feature.
- 2024-04-19 Supports conversation API ([more](./docs/conversation_api.md)).
- 2024-04-19 Supports conversation API ([more](./docs/references/api.md)).
- 2024-04-16 Integrates the embedding model [BCEmbedding](https://github.com/netease-youdao/BCEmbedding) and [FastEmbed](https://github.com/qdrant/fastembed), which is designed for light and speedy embedding.
- 2024-04-11 Supports local LLM deployment with [Xinference](./docs/xinference.md).
- 2024-04-11 Supports local LLM deployment with [Xinference](./docs/guides/deploy_local_llm.md).
- 2024-04-10 Adds an underlying model for layout analysis of legal documents ('Laws').
- 2024-04-08 Supports local LLM deployment with [Ollama](./docs/ollama.md).
- 2024-04-08 Supports local LLM deployment with [Ollama](./docs/guides/deploy_local_llm.md).
- 2024-04-07 Supports Chinese UI.
## 🌟 Key Features
@ -87,7 +87,7 @@
### 🚀 Start up the server
1. Ensure `vm.max_map_count` is no less than 262144 ([more](./docs/max_map_count.md)):
1. Ensure `vm.max_map_count` is no less than 262144 ([more](./docs/guides/max_map_count.md)):
> To check the value of `vm.max_map_count`:
>
@ -153,7 +153,7 @@
> In the example above, you only need to enter http://IP_OF_YOUR_MACHINE: if you have not changed the default configurations, the port can be omitted (the default HTTP serving port is 80).
6. In the `user_default_llm` section of [service_conf.yaml](./docker/service_conf.yaml), configure the LLM factory and fill in the `API_KEY` field with the API key corresponding to your chosen LLM.
> See [./docs/llm_api_key_setup.md](./docs/llm_api_key_setup.md) for details.
> See [./docs/guides/llm_api_key_setup.md](./docs/guides/llm_api_key_setup.md) for details.
_The show begins; on with the music and the dance!_
@ -274,7 +274,7 @@ $ systemctl start nginx
## 📚 Documentation
- [Quickstart](./docs/quickstart.md)
- [FAQ](./docs/faq.md)
- [FAQ](./docs/references/faq.md)
## 📜 Roadmap
@ -287,7 +287,7 @@ $ systemctl start nginx
## 🙌 Contributing
RAGFlow flourishes only through open-source collaboration. In this spirit, we welcome diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](https://github.com/infiniflow/ragflow/blob/main/docs/CONTRIBUTING.md) first.
RAGFlow flourishes only through open-source collaboration. In this spirit, we welcome diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.
## 👥 Join the community

docs/_category_.json Normal file
View File

@ -0,0 +1,8 @@
{
"label": "Get Started",
"position": 1,
"link": {
"type": "generated-index",
"description": "RAGFlow Quick Start"
}
}

View File

@ -0,0 +1,8 @@
{
"label": "User Guides",
"position": 2,
"link": {
"type": "generated-index",
"description": "RAGFlow User Guides"
}
}

View File

@ -1,3 +1,8 @@
---
sidebar_position: 1
slug: /configure_knowledge_base
---
# Configure a knowledge base
Knowledge base, hallucination-free chat, and file management are three pillars of RAGFlow. RAGFlow's AI chats are based on knowledge bases. Each of RAGFlow's knowledge bases serves as a knowledge source, *parsing* files uploaded from your local machine and file references generated in **File Management** into the real 'knowledge' for future AI chats. This guide demonstrates some basic usages of the knowledge base feature, covering the following topics:
@ -118,7 +123,7 @@ RAGFlow uses multiple recall of both full-text search and vector search in its c
## Search for knowledge base
As of RAGFlow v0.5.0, the search feature is still in a rudimentary form, supporting only knowledge base search by name.
As of RAGFlow v0.6.0, the search feature is still in a rudimentary form, supporting only knowledge base search by name.
![search knowledge base](https://github.com/infiniflow/ragflow/assets/93570324/836ae94c-2438-42be-879e-c7ad2a59693e)

View File

@ -0,0 +1,75 @@
---
sidebar_position: 5
slug: /deploy_local_llm
---
# Deploy a local LLM
RAGFlow supports deploying LLMs locally using Ollama or Xinference.
## Ollama
[Ollama](https://github.com/ollama/ollama) offers one-click deployment of local LLMs.
### Install
- [Ollama on Linux](https://github.com/ollama/ollama/blob/main/docs/linux.md)
- [Ollama Windows Preview](https://github.com/ollama/ollama/blob/main/docs/windows.md)
- [Docker](https://hub.docker.com/r/ollama/ollama)
### Launch Ollama
Decide which LLM you want to deploy ([here's a list of supported LLMs](https://ollama.com/library)), say, **mistral**:
```bash
$ ollama run mistral
```
Or,
```bash
$ docker exec -it ollama ollama run mistral
```
### Use Ollama in RAGFlow
- Go to 'Settings > Model Providers > Models to be added > Ollama'.
![](https://github.com/infiniflow/ragflow/assets/12318111/a9df198a-226d-4f30-b8d7-829f00256d46)
> Base URL: Enter the base URL where the Ollama service is accessible, for example, `http://<your-ollama-endpoint-domain>:11434`.
- Use Ollama Models.
![](https://github.com/infiniflow/ragflow/assets/12318111/60ff384e-5013-41ff-a573-9a543d237fd3)
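Before adding Ollama as a provider, it may help to confirm that the base URL is reachable; a minimal probe, assuming a default local Ollama install listening on port 11434:

```bash
# List the models served by the local Ollama instance;
# a JSON response confirms the endpoint is reachable
curl http://localhost:11434/api/tags
```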
## Xinference
Xorbits Inference ([Xinference](https://github.com/xorbitsai/inference)) empowers you to unleash the full potential of cutting-edge AI models.
### Install
- [pip install "xinference[all]"](https://inference.readthedocs.io/en/latest/getting_started/installation.html)
- [Docker](https://inference.readthedocs.io/en/latest/getting_started/using_docker_image.html)
To start a local instance of Xinference, run the following command:
```bash
$ xinference-local --host 0.0.0.0 --port 9997
```
### Launch Xinference
Decide which LLM you want to deploy ([here's a list of supported LLMs](https://inference.readthedocs.io/en/latest/models/builtin/)), say, **mistral**.
Execute the following command to launch the model, remembering to replace `${quantization}` with your chosen quantization method:
```bash
$ xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --model-format pytorch --quantization ${quantization}
```
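For instance, a concrete invocation for the 7B PyTorch build might look like this; `4-bit` is one quantization option Xinference accepts for PyTorch-format models and is only an illustrative choice:

```bash
$ xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --model-format pytorch --quantization 4-bit
```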
### Use Xinference in RAGFlow
- Go to 'Settings > Model Providers > Models to be added > Xinference'.
![](https://github.com/infiniflow/ragflow/assets/12318111/bcbf4d7a-ade6-44c7-ad5f-0a92c8a73789)
> Base URL: Enter the base URL where the Xinference service is accessible, for example, `http://<your-xinference-endpoint-domain>:9997/v1`.
- Use Xinference Models.
![](https://github.com/infiniflow/ragflow/assets/12318111/b01fcb6f-47c9-4777-82e0-f1e947ed615a)
![](https://github.com/infiniflow/ragflow/assets/12318111/1763dcd1-044f-438d-badd-9729f5b3a144)
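As with Ollama, you can sanity-check the base URL before adding it; Xinference serves an OpenAI-compatible API, so a quick probe (assuming the default local port 9997) might be:

```bash
# List the models served by Xinference via its OpenAI-compatible endpoint
curl http://localhost:9997/v1/models
```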

View File

@ -1,5 +1,13 @@
---
sidebar_position: 4
slug: /llm_api_key_setup
---
## Set Before Starting The System
# Set your LLM API key
You have two ways to set your LLM API key.
## Before starting the system
In the **user_default_llm** section of [service_conf.yaml](./docker/service_conf.yaml), specify the LLM factory and your own _API_KEY_.
RAGFlow supports the following LLM factories, with more in the pipeline:
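To see what is currently configured, you can dump the relevant section from the shell; a minimal sketch, assuming you run it from the ragflow repository root:

```bash
# Show the user_default_llm section of the service configuration
grep -A 3 'user_default_llm' docker/service_conf.yaml
```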
@ -13,7 +21,5 @@ After signing in to these LLM suppliers, create your own API key; they all have a cert
You can also set your API key in **User Setting** as follows:
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/e4e4066c-e964-45ff-bd56-c3fc7fb18bd3" width="1000"/>
</div>
![](https://github.com/infiniflow/ragflow/assets/12318111/e4e4066c-e964-45ff-bd56-c3fc7fb18bd3)

View File

@ -1,3 +1,8 @@
---
sidebar_position: 3
slug: /manage_files
---
# Manage files
Knowledge base, hallucination-free chat, and file management are three pillars of RAGFlow. RAGFlow's file management allows you to upload files individually or in bulk. You can then link an uploaded file to multiple target knowledge bases. This guide showcases some basic usages of the file management feature.
@ -40,11 +45,11 @@ You can link your file to one knowledge base or multiple knowledge bases at one
## Move file to specified folder
As of RAGFlow v0.5.0, this feature is *not* available.
As of RAGFlow v0.6.0, this feature is *not* available.
## Search files or folders
As of RAGFlow v0.5.0, the search feature is still in a rudimentary form, supporting only file and folder search in the current directory by name (files or folders in the child directory will not be retrieved).
As of RAGFlow v0.6.0, the search feature is still in a rudimentary form, supporting only file and folder search in the current directory by name (files or folders in the child directory will not be retrieved).
![search file](https://github.com/infiniflow/ragflow/assets/93570324/77ffc2e5-bd80-4ed1-841f-068e664efffe)
@ -76,4 +81,4 @@ RAGFlow's file management allows you to download an uploaded file:
![download_file](https://github.com/infiniflow/ragflow/assets/93570324/cf3b297f-7d9b-4522-bf5f-4f45743e4ed5)
> As of RAGFlow v0.5.0, bulk download is not supported, nor can you download an entire folder.
> As of RAGFlow v0.6.0, bulk download is not supported, nor can you download an entire folder.

View File

@ -1,4 +1,9 @@
# Set vm.max_map_count to at least 262144
---
sidebar_position: 7
slug: /max_map_count
---
# Update vm.max_map_count
## Linux

View File

@ -1,3 +1,8 @@
---
sidebar_position: 2
slug: /start_chat
---
# Start an AI chat
Knowledge base, hallucination-free chat, and file management are three pillars of RAGFlow. Chats in RAGFlow are based on a particular knowledge base or multiple knowledge bases. Once you have created your knowledge base and finished file parsing, you can go ahead and start an AI conversation.

View File

@ -1,40 +0,0 @@
# Ollama
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/2019e7ee-1e8a-412e-9349-11bbf702e549" width="130"/>
</div>
One-click deployment of local LLMs, that is [Ollama](https://github.com/ollama/ollama).
## Install
- [Ollama on Linux](https://github.com/ollama/ollama/blob/main/docs/linux.md)
- [Ollama Windows Preview](https://github.com/ollama/ollama/blob/main/docs/windows.md)
- [Docker](https://hub.docker.com/r/ollama/ollama)
## Launch Ollama
Decide which LLM you want to deploy ([here's a list for supported LLM](https://ollama.com/library)), say, **mistral**:
```bash
$ ollama run mistral
```
Or,
```bash
$ docker exec -it ollama ollama run mistral
```
## Use Ollama in RAGFlow
- Go to 'Settings > Model Providers > Models to be added > Ollama'.
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/a9df198a-226d-4f30-b8d7-829f00256d46" width="1300"/>
</div>
> Base URL: Enter the base URL where the Ollama service is accessible, like, `http://<your-ollama-endpoint-domain>:11434`.
- Use Ollama Models.
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/60ff384e-5013-41ff-a573-9a543d237fd3" width="530"/>
</div>

View File

@ -1,4 +1,9 @@
# Quickstart
---
sidebar_position: 1
slug: /
---
# Quick start
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. When integrated with LLMs, it is capable of providing truthful question-answering capabilities, backed by well-founded citations from various complex formatted data.
@ -20,7 +25,7 @@ This quick start guide describes a general process from:
## Start up the server
1. Ensure `vm.max_map_count` >= 262144 ([more](./docs/max_map_count.md)):
1. Ensure `vm.max_map_count` >= 262144:
> To check the value of `vm.max_map_count`:
>

View File

@ -3,7 +3,7 @@ sidebar_position: 0
slug: /contribution_guidelines
---
# Contribution Guidelines
# Contribution guidelines
Thanks for wanting to contribute to RAGFlow. This document offers guidelines and major considerations for submitting your contributions.

View File

@ -0,0 +1,8 @@
{
"label": "References",
"position": 1,
"link": {
"type": "generated-index",
"description": "RAGFlow References"
}
}

View File

@ -1,11 +1,14 @@
# Conversation API Instruction
---
sidebar_position: 1
slug: /api
---
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/df0dcc3d-789a-44f7-89f1-7a5f044ab729" width="830"/>
</div>
# API reference
![](https://github.com/infiniflow/ragflow/assets/12318111/df0dcc3d-789a-44f7-89f1-7a5f044ab729)
## Base URL
```buildoutcfg
```
https://demo.ragflow.io/v1/
```

View File

@ -1,4 +1,9 @@
# Frequently Asked Questions
---
sidebar_position: 3
slug: /faq
---
# Frequently asked questions
## General
@ -31,7 +36,7 @@ Currently, we only support x86 CPU and Nvidia GPU.
### 2. Do you offer an API for integration with third-party applications?
The corresponding APIs are now available. See the [Conversation API](./conversation_api.md) for more information.
The corresponding APIs are now available. See the [RAGFlow API Reference](./api.md) for more information.
### 3. Do you support stream output?
@ -186,14 +191,12 @@ Parsing requests have to wait in queue due to limited server resources. We are c
If your RAGFlow is deployed *locally*, try the following:
1. Click the red cross icon next to **Parsing Status** and refresh the file parsing process.
2. If the issue still persists, try the following:
- check the log of your RAGFlow server to see if it is running properly:
```bash
docker logs -f ragflow-server
```
- Check if the **task_executor.py** process exists.
- Check if your RAGFlow server can access hf-mirror.com or huggingface.com.
1. Check the log of your RAGFlow server to see if it is running properly:
```bash
docker logs -f ragflow-server
```
2. Check if the **task_executor.py** process exists.
3. Check if your RAGFlow server can access hf-mirror.com or huggingface.com.
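The last two checks lend themselves to the shell; a minimal sketch, assuming a standard Docker deployment where the server container is named `ragflow-server` and `ps` is available inside it (run the connectivity check from the host, or inside the container if `curl` is available there):

```bash
# Check whether the task_executor.py process is running in the container
docker exec ragflow-server ps aux | grep task_executor

# Check whether the server can reach hf-mirror.com
# (an HTTP status line in the response indicates connectivity)
curl -sI https://hf-mirror.com | head -n 1
```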
#### 4.5 Why does my pdf parsing stall near completion, while the log does not show any error?
@ -356,7 +359,7 @@ You limit what the system responds to what you specify in **Empty response** if
### 4. How to run RAGFlow with a locally deployed LLM?
You can use Ollama to deploy local LLM. See [here](https://github.com/infiniflow/ragflow/blob/main/docs/ollama.md) for more information.
You can use Ollama to deploy a local LLM. See [here](https://github.com/infiniflow/ragflow/blob/main/docs/guides/deploy_local_llm.md) for more information.
### 5. How to link up RAGFlow and Ollama servers?

View File

@ -1,43 +0,0 @@
# Xinference
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/2c5e86a7-807b-4d29-bd2b-f73fb1018866" width="130"/>
</div>
Xorbits Inference([Xinference](https://github.com/xorbitsai/inference)) empowers you to unleash the full potential of cutting-edge AI models.
## Install
- [pip install "xinference[all]"](https://inference.readthedocs.io/en/latest/getting_started/installation.html)
- [Docker](https://inference.readthedocs.io/en/latest/getting_started/using_docker_image.html)
To start a local instance of Xinference, run the following command:
```bash
$ xinference-local --host 0.0.0.0 --port 9997
```
## Launch Xinference
Decide which LLM you want to deploy ([here's a list for supported LLM](https://inference.readthedocs.io/en/latest/models/builtin/)), say, **mistral**.
Execute the following command to launch the model, remember to replace ${quantization} with your chosen quantization method from the options listed above:
```bash
$ xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --model-format pytorch --quantization ${quantization}
```
## Use Xinference in RAGFlow
- Go to 'Settings > Model Providers > Models to be added > Xinference'.
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/bcbf4d7a-ade6-44c7-ad5f-0a92c8a73789" width="1300"/>
</div>
> Base URL: Enter the base URL where the Xinference service is accessible, like, `http://<your-xinference-endpoint-domain>:9997/v1`.
- Use Xinference Models.
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/b01fcb6f-47c9-4777-82e0-f1e947ed615a" width="530"/>
</div>
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/1763dcd1-044f-438d-badd-9729f5b3a144" width="530"/>
</div>

View File

@ -88,7 +88,7 @@ const ChatOverviewModal = ({
<Button onClick={showApiKeyModal}>{t('apiKey')}</Button>
<a
href={
'https://github.com/infiniflow/ragflow/blob/main/docs/conversation_api.md'
'https://github.com/infiniflow/ragflow/blob/main/docs/references/api.md'
}
target="_blank"
rel="noreferrer"