Gitee AI
@Gitee AI
14 models
Gitee AI's Serverless API provides AI developers with an out-of-the-box large model inference API service.

Supported Models

| Maximum Context Length | Maximum Output Length | Input Price | Output Price |
| --- | --- | --- | --- |
| -- | -- | -- | -- |
| -- | -- | -- | -- |
| 4K | -- | -- | -- |
| 8K | -- | -- | -- |
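
For readers who want to see what the Serverless API looks like outside of LobeChat, below is a minimal sketch that sends a single chat completion request with `fetch`. The base URL `https://ai.gitee.com/v1`, the OpenAI-compatible request shape, and the model id `Qwen2-7B-Instruct` are assumptions for illustration only; check Gitee AI's API documentation for the exact endpoint and model ids available to your account.

```ts
// Minimal sketch of one chat completion call against the Gitee AI Serverless API.
// Assumptions: the base URL, the OpenAI-compatible request/response shape, and
// the model id are illustrative -- verify them against Gitee AI's API docs.
const BASE_URL = "https://ai.gitee.com/v1"; // assumed base URL
const apiKey = process.env.GITEE_AI_API_KEY ?? ""; // access token from Step 1 below

async function chatOnce(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "Qwen2-7B-Instruct", // hypothetical model id
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Request failed: HTTP ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content; // OpenAI-style response shape (assumed)
}

chatOnce("Say hello in one sentence.").then(console.log).catch(console.error);
```

If the API is indeed OpenAI-compatible, an existing OpenAI-style client library can usually be pointed at the same base URL instead of hand-rolling `fetch` calls.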

Using Gitee AI in LobeChat


Gitee AI is an open-source platform built on Git code hosting technology and designed specifically for AI application scenarios. It aims to give developers and businesses a one-stop AI application development service, covering model experience, inference, fine-tuning, and deployment.

This article will guide you on how to use Gitee AI in LobeChat.

Step 1: Obtain the Gitee AI API Key

  • Visit the Gitee Serverless API page
  • In Settings, click on the Access Tokens section
  • Create a new access token
  • Save the access token in the pop-up window

Please keep the access token safe as it will only appear once. If you accidentally lose it, you will need to create a new one.
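
Before pasting a freshly created token into LobeChat, you can optionally confirm that it works. The sketch below queries a model-listing endpoint with the token; the `https://ai.gitee.com/v1/models` URL and the OpenAI-style response shape are assumptions, so adjust them to match Gitee AI's actual API documentation.

```ts
// Quick sanity check for a new Gitee AI access token (endpoint is an assumption).
const token = process.env.GITEE_AI_API_KEY ?? "<your-access-token>";

async function verifyToken(): Promise<void> {
  const res = await fetch("https://ai.gitee.com/v1/models", {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) {
    throw new Error(`Token check failed: HTTP ${res.status}`);
  }
  const body = await res.json();
  // An OpenAI-compatible API typically returns { data: [{ id: "..." }, ...] }.
  console.log("Models visible to this token:", body.data?.map((m: { id: string }) => m.id));
}

verifyToken().catch(console.error);
```

A 401 response usually means the token was copied incorrectly or has been revoked.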

Step 2: Configure Gitee AI in LobeChat

  • Access the Settings page in LobeChat
  • Under Language Models, find the settings for Gitee AI
  • Enter the obtained API key
  • Select a Gitee AI model for your AI assistant to begin chatting

During usage, you may incur charges from the API service provider; please refer to Gitee AI's pricing policy.

Now you can start having conversations using the models provided by Gitee AI in LobeChat!
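
For reference, a conversation with a Gitee AI model ultimately comes down to streamed chat completion requests like the sketch below; this is an illustration rather than LobeChat's internal code. The endpoint, the model id `Qwen2.5-72B-Instruct`, and the server-sent-events response format are assumptions, and for brevity the parser assumes each event arrives in a single chunk.

```ts
// Illustrative streaming chat request (endpoint, model id, and SSE format are assumed).
async function streamChat(prompt: string): Promise<void> {
  const res = await fetch("https://ai.gitee.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GITEE_AI_API_KEY ?? ""}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "Qwen2.5-72B-Instruct", // hypothetical model id
      messages: [{ role: "user", content: prompt }],
      stream: true,
    }),
  });
  if (!res.ok || !res.body) throw new Error(`Request failed: HTTP ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each SSE line looks like `data: {...}` or `data: [DONE]`; for simplicity we
    // assume events are not split across chunks.
    for (const line of decoder.decode(value, { stream: true }).split("\n")) {
      const payload = line.replace(/^data: /, "").trim();
      if (!payload || payload === "[DONE]") continue;
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) process.stdout.write(delta);
    }
  }
}

streamChat("Introduce Gitee AI in one sentence.").catch(console.error);
```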

Related Providers

OpenAI
@OpenAI
21 models
OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Its products deliver strong performance and cost-effectiveness and are widely used in research, business, and innovative applications.
Ollama
@Ollama
46 models
Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs.
Anthropic Claude
@Anthropic
8 models
Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio.
AWS Bedrock
@Bedrock
14 models
Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.
Google Gemini
@Google
16 models
Google's Gemini series, developed by Google DeepMind, represents its most advanced and versatile AI models. Designed for multimodality, they support seamless understanding and processing of text, code, images, audio, and video, and run in environments ranging from data centers to mobile devices, significantly enhancing the efficiency and applicability of AI models.