Ollama offers a broad range of models covering code generation, mathematics, multilingual processing, and conversational interaction, supporting both enterprise-grade and local deployment needs.


Using Ollama in LobeChat

Ollama is a powerful framework for running large language models (LLMs) locally, supporting models such as Llama 2, Mistral, and many more. LobeChat now integrates with Ollama, so you can easily use the language models served by Ollama directly in LobeChat.
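
To illustrate what Ollama exposes locally, here is a minimal sketch of a request against its REST API. It assumes Ollama is already running on its default port 11434 and that the llama3 model has been pulled:

bash
# Request a one-off completion from a locally hosted model.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'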

This document will guide you on how to use Ollama in LobeChat:

Using Ollama on macOS

Local Installation of Ollama

Download Ollama for macOS and unzip/install it.

Configure Ollama for Cross-Origin Access

Ollama's default configuration restricts access to localhost only, so you need to set the OLLAMA_ORIGINS environment variable to allow cross-origin access. Use launchctl to set it:

bash
launchctl setenv OLLAMA_ORIGINS "*"

After setting up, restart the Ollama application.
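
To confirm the new setting is active, a quick check (our suggestion, not an official step) is to send a request with an Origin header and inspect the response headers:

bash
# Simulate a cross-origin request; with OLLAMA_ORIGINS="*" the response
# should include an Access-Control-Allow-Origin header.
curl -i -H "Origin: http://localhost:3000" http://localhost:11434/api/tags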

Conversing with Local Large Models in LobeChat

Now, you can start conversing with the local LLM in LobeChat.

Chat with llama3 in LobeChat

Using Ollama on Windows

Local Installation of Ollama

Download Ollama for Windows and install it.

Configure Ollama for Cross-Origin Access

Since Ollama's default configuration allows local access only, you need to set the OLLAMA_ORIGINS environment variable to enable cross-origin access.

On Windows, Ollama inherits your user and system environment variables.

  1. First, quit the Ollama program by clicking its icon in the Windows taskbar.
  2. Open Edit system environment variables from the Control Panel.
  3. Edit or create the OLLAMA_ORIGINS environment variable for your user account, setting its value to *.
  4. Click OK/Apply to save, then restart the system.
  5. Run Ollama again. (A command-line alternative to steps 2-4 is sketched after this list.)
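
If you prefer the command line, a hedged alternative to steps 2-4 is the built-in setx command, run from PowerShell (restart Ollama afterwards for the change to apply):

powershell
# Persist OLLAMA_ORIGINS for the current user account.
setx OLLAMA_ORIGINS "*"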

Conversing with Local Large Models in LobeChat

Now, you can start conversing with the local LLM in LobeChat.

Using Ollama on Linux

Local Installation of Ollama

Install using the following command:

bash
curl -fsSL https://ollama.com/install.sh | sh

Alternatively, you can refer to the Linux manual installation guide.

Configure Ollama for Cross-Origin Access

Ollama allows local access only by default, so cross-origin access and listening on other network interfaces require additional environment variables (OLLAMA_ORIGINS and OLLAMA_HOST). If Ollama runs as a systemd service, set them with systemctl:

  1. Edit the systemd service by running:
bash
sudo systemctl edit ollama.service
  2. Add an Environment line under [Service] for each environment variable:
bash
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
  3. Save and exit.
  4. Reload systemd and restart Ollama:
bash
sudo systemctl daemon-reload
sudo systemctl restart ollama
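
To verify the override took effect, a suggested check (not part of the original instructions) is to inspect the service's environment and probe the API:

bash
# Show the environment variables systemd passes to the Ollama service.
sudo systemctl show ollama --property=Environment
# The API should answer; with OLLAMA_HOST=0.0.0.0 it also listens on LAN addresses.
curl http://localhost:11434/api/version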

Conversing with Local Large Models in LobeChat

Now, you can start conversing with the local LLM in LobeChat.

Deploying Ollama using Docker

Pulling Ollama Image

If you prefer using Docker, Ollama provides an official Docker image that you can pull using the following command:

bash
docker pull ollama/ollama

Configure Ollama for Cross-Origin Access

Since Ollama's default configuration allows local access only, you need to set the OLLAMA_ORIGINS environment variable to enable cross-origin access.

If Ollama runs as a Docker container, you can add the environment variable to the docker run command.

bash
docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
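
Note that --gpus=all requires the NVIDIA Container Toolkit. On a CPU-only host, a minimal variant (same image, volume, and port) simply drops the GPU flag:

bash
# CPU-only: keep the volume, CORS setting, and port mapping, without GPU passthrough.
docker run -d -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama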

Conversing with Local Large Models in LobeChat

Now, you can start conversing with the local LLM in LobeChat.

Installing Ollama Models

Ollama supports a wide range of models; browse the Ollama Library and choose one that fits your needs.

Installation in LobeChat

In LobeChat, some common large language models, such as llama3, Gemma, and Mistral, are enabled by default. When you select one of these models for conversation, LobeChat will prompt you to download it.

LobeChat guides you through installing an Ollama model

Once downloaded, you can start conversing.

Pulling Models to Local with Ollama

Alternatively, you can install models by executing the following command in the terminal, using llama3 as an example:

bash
ollama pull llama3
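
After the pull completes, you can confirm the model is available and try it directly in the terminal using the standard Ollama CLI:

bash
# List locally available models.
ollama list
# Chat with the model interactively in the terminal.
ollama run llama3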

Custom Configuration

You can find Ollama's configuration options in Settings -> Language Models, where you can configure Ollama's proxy address, model names, and other options.

Ollama Provider Settings

Visit Integrating with Ollama to learn how to deploy LobeChat for integration with Ollama.
