Qwen
@Qwen
21 models
Tongyi Qianwen is a large-scale language model developed independently by Alibaba Cloud, with strong natural language understanding and generation capabilities. It can answer questions, create written content, express opinions, and write code, and is applicable across many fields.

Supported Models

Qwen

Maximum Context Length | Maximum Output Length | Input Price | Output Price
128K | -- | $0.04 | $0.08
128K | -- | $0.11 | $0.28
32K | -- | $2.80 | $8.40
1M | -- | $0.07 | $0.28
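For a rough sense of how these rates translate into spend, the sketch below assumes the listed prices are USD per million tokens (a common convention for hosted LLM APIs, but verify the actual billing unit on DashScope's pricing page); the tier rates and token counts are illustrative only.

```ts
// Hypothetical cost estimator. Assumption: the table's prices are USD per 1,000,000 tokens;
// confirm the actual billing unit on DashScope's pricing page before relying on this.
interface Rate {
  inputPerMTok: number;  // USD per 1M input tokens
  outputPerMTok: number; // USD per 1M output tokens
}

function estimateCostUSD(rate: Rate, inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1_000_000) * rate.inputPerMTok +
         (outputTokens / 1_000_000) * rate.outputPerMTok;
}

// Example: 12,000 input tokens and 800 output tokens at the $0.04 / $0.08 tier.
const tier: Rate = { inputPerMTok: 0.04, outputPerMTok: 0.08 };
console.log(estimateCostUSD(tier, 12_000, 800).toFixed(6)); // ≈ 0.000544
```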

Using Tongyi Qianwen in LobeChat

Tongyi Qianwen is a large-scale language model developed independently by Alibaba Cloud, with powerful natural language understanding and generation capabilities. It can answer questions, create text content, express opinions, and write code, and is applicable across many fields.

This document will guide you through using Tongyi Qianwen in LobeChat:

Step 1: Activate DashScope Model Service

  • Visit and log in to Alibaba Cloud's DashScope platform.
  • If it is your first time, you need to activate the DashScope service.
  • If you have already activated it, you can skip this step.
Activate DashScope service

Step 2: Obtain DashScope API Key

  • Go to the API-KEY interface and create an API key.
Create Tongyi Qianwen API key
  • Copy the API key from the pop-up dialog box and save it securely.
Copy Tongyi Qianwen API key

Please store the key securely as it will only appear once. If you accidentally lose it, you will need to create a new key.
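Before wiring the key into LobeChat, you can optionally confirm that it works. The sketch below sends a minimal request to DashScope's OpenAI-compatible endpoint; the base URL, the `qwen-turbo` model name, and the `DASHSCOPE_API_KEY` environment variable are assumptions to verify against the DashScope documentation.

```ts
// Minimal key check (sketch): base URL, model name, and env var are assumptions.
const DASHSCOPE_BASE_URL = "https://dashscope.aliyuncs.com/compatible-mode/v1";
const apiKey = process.env.DASHSCOPE_API_KEY; // the key created in Step 2

async function checkKey(): Promise<void> {
  const res = await fetch(`${DASHSCOPE_BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "qwen-turbo", // assumed model name; pick one enabled in your DashScope console
      messages: [{ role: "user", content: "ping" }],
      max_tokens: 8,
    }),
  });
  if (!res.ok) {
    throw new Error(`Key check failed: ${res.status} ${await res.text()}`);
  }
  const data = await res.json();
  console.log(data.choices?.[0]?.message?.content); // any reply means the key is accepted
}

checkKey().catch(console.error);
```

A 401 response usually means the key was copied incorrectly or has been revoked.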

Step 3: Configure Tongyi Qianwen in LobeChat

  • Visit the Settings interface in LobeChat.
  • Find the setting for Tongyi Qianwen under Language Model.
Enter API key
  • Enable Tongyi Qianwen and enter the API key you obtained.
  • Choose a Qwen model for your AI assistant to start the conversation.
Select Qwen model and start conversation

During use, you may need to pay the API service provider; please refer to Tongyi Qianwen's pricing policies.

You can now engage in conversations using the models provided by Tongyi Qianwen in LobeChat.
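Beyond the LobeChat UI, the same key and models can generally be reached from your own code through the OpenAI-compatible endpoint. The sketch below uses the `openai` npm client; the base URL and the `qwen-plus` model name are assumptions to confirm in the DashScope documentation.

```ts
import OpenAI from "openai";

// Sketch: call a Qwen model directly with the same DashScope key used in LobeChat.
// The base URL and model name are assumptions; verify them in the DashScope docs.
const client = new OpenAI({
  apiKey: process.env.DASHSCOPE_API_KEY,
  baseURL: "https://dashscope.aliyuncs.com/compatible-mode/v1",
});

async function main(): Promise<void> {
  const completion = await client.chat.completions.create({
    model: "qwen-plus", // assumed model name; choose one enabled for your account
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Summarize what Tongyi Qianwen is in one sentence." },
    ],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```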

Related Providers

OpenAI
@OpenAI
21 models
OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Their products demonstrate significant performance and cost-effectiveness, widely used in research, business, and innovative applications.
Ollama
@Ollama
46 models
Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs.
Anthropic
Claude
@Anthropic
8 models
Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio.
AWS
Bedrock
@Bedrock
14 models
Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.
Google
Gemini
@Google
16 models
Google's Gemini series, developed by Google DeepMind, comprises its most advanced, general-purpose AI models. Built for multimodality, they seamlessly understand and process text, code, images, audio, and video, and run in environments ranging from data centers to mobile devices, significantly improving the efficiency and applicability of AI models.