How to Build a No‑Code AI Chatbot App with Kimi AI and Bubble

Integrating a powerful AI model into a no-code app is now easier than ever. Kimi AI – notably the Kimi K2 large language model – offers a cutting-edge API (via Moonshot AI) with a 1 trillion parameter Mixture-of-Experts architecture and a stunning 128,000-token context window. In other words, Kimi can handle very long conversations and complex tasks, making it a formidable alternative to OpenAI’s GPT models. On the other side, Bubble.io is a popular no-code platform that lets you build web applications without coding. In this guide, we’ll walk through creating a chatbot-style knowledge assistant on Bubble that sends user prompts to the Kimi AI API and returns structured, conversational responses. The app will support:

  • Text input → API call: Users type a prompt and get an AI-generated answer.
  • JSON-formatted outputs: The AI’s response can include structured JSON (or any text format) as needed.
  • Context-aware conversation: The chatbot will remember context from earlier in the conversation (multi-turn dialogue).
  • Optional data storage: We’ll discuss how to store chat history or other data in Bubble’s database for persistence.

This step-by-step tutorial is industry-agnostic – the approach works for educational tutors, financial advisors, e-commerce support bots, internal company assistants, SaaS app helpers, and beyond. By the end, you’ll have a solid template for integrating Kimi’s AI into Bubble, without writing code, and best practices to ensure it’s secure, scalable, and user-friendly.

Getting a Kimi AI API Key (Setup and Authentication)

Before building anything in Bubble, you need access to the Kimi API:

  1. Sign Up on Moonshot AI: Kimi’s API is provided through the Moonshot AI Open Platform. Start by registering for a developer account on the Moonshot AI Console. There’s a free trial tier (with a limited number of queries) and affordable pricing for higher usage. Signing up can be as simple as using a Google account. Once logged in, you can add a payment method or credit for expanded access beyond the free quota.
  2. Generate an API Key: In the Moonshot console, find the API Key Management section and create a new API key. Give it a descriptive name (e.g. “BubbleChatbotKey”) for your own reference. Upon creation, you’ll be shown the key string (often starting with sk-...) only once – copy it and store it securely. This API key is essentially a password to Kimi’s API, so never share it or expose it publicly.
  3. Secure Your Key: Treat the API key like a secret. Do not hard-code it directly into any client-side page. In coding environments you’d use environment variables, but in Bubble we will configure it in the API Connector with the “private” setting so it’s not visible to users. If you ever suspect the key is compromised, revoke or rotate it in the console (you can manage multiple keys there).
  4. Kimi API Endpoint: Kimi’s API is OpenAI-compatible, meaning it uses a similar endpoint structure as OpenAI’s. The base URL is https://api.moonshot.ai/v1, and for chat completions the endpoint path is /chat/completions. Essentially, it mirrors OpenAI’s ChatGPT API schema, which makes integration straightforward if you’re familiar with OpenAI. We’ll be using the chat completion endpoint for our chatbot example.
  5. Authentication Method: All requests to Kimi must include the API key in an HTTP Authorization header. Specifically, you add a header Authorization: Bearer YOUR_API_KEY for each call. Also, ensure you include Content-Type: application/json in headers when sending JSON data. We’ll configure these in Bubble next.
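To make the authentication scheme concrete outside of Bubble, here is a short Python sketch that builds the same headers the API Connector will send. The `sk-your-key-here` value is a placeholder, not a real key:

```python
# Build the HTTP headers Kimi's API expects. The Authorization value is
# the literal word "Bearer", a space, then your secret key string.
def build_kimi_headers(api_key: str) -> dict:
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = build_kimi_headers("sk-your-key-here")
```

Bubble assembles exactly these two headers for you once the private key and shared headers are configured in the next section.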

By completing the above, you have your Kimi API credentials ready. Now let’s move to the Bubble side to set up the integration.

Setting Up Bubble and the API Connector Plugin

Bubble.io will be our no-code platform to build the app’s front-end and workflows. We’ll use Bubble’s built-in API Connector plugin to connect to the Kimi AI API. Follow these steps:

  1. Create a Bubble App: If you haven’t already, create a new app on Bubble (or use an existing one). You can start with a blank template.
  2. Install the API Connector Plugin: In the Bubble editor, go to the Plugins tab, click “Add Plugin” and search for “API Connector” (this is an official Bubble plugin). Install it into your app.
  3. Open API Connector: After installation, open the API Connector plugin settings. Here we will set up a new API integration for Kimi. Click “Add another API” and give it a name, for example “Kimi AI API”, so you can identify it later in workflows.
  4. Authentication Settings: For our Kimi API, the authentication method is a private API key via header. In the API Connector, set Authentication to “Private key in header” (Bubble might have a dropdown for common auth methods). In the fields that appear, enter:
    • Key name: Authorization
    • Key value: Bearer your_Kimi_API_key
    Make sure the value starts with the word Bearer followed by a space and then your actual API key string (paste the key you obtained). Mark this field as Private (Bubble typically checks “Private” by default for this auth type) so that the key is stored securely and only sent from the server side. Using the private header method ensures the key will not be exposed in the app’s client-side code, preventing users from seeing or tampering with it.
    Bubble Note: If Bubble doesn’t have a dedicated auth field for this, you can alternatively choose “None/self-handled” and manually add an HTTP Header called Authorization with value Bearer your_key. Just be absolutely sure to check the “Private” box next to that header value. As the Bubble team notes, never put API keys in option sets or page text – only in the API Connector with privacy on.
  5. Shared Headers: While still in the API setup, add a header for Content-Type as application/json (if Bubble hasn’t added it by default). This tells Kimi’s API we are sending JSON data. You can add this under Shared Headers so it applies to all calls under this API connection.

With these steps, we’ve configured the Bubble plugin to include the Kimi API key securely on all calls. Next, we will define the specific API call to get AI responses.

Configuring the Kimi API Call in Bubble

Now we’ll set up the actual API request (the chat completion call) in the API Connector:

Add a New API Call: In the API Connector section for “Kimi AI API”, click “Add another call”. Name this call something like “Chat Completion” or “Send Prompt”. This name will appear in Bubble’s workflows later.

Use as Action: In the call setup, set the Use as dropdown to Action. This means we can trigger this API call in a workflow (e.g. when a button is clicked). We choose Action because the user will initiate the call (as opposed to loading data automatically in an element).

HTTP Method and URL: Select POST as the method (we are sending data to get a result). In the URL field, enter:

https://api.moonshot.ai/v1/chat/completions

This is the Kimi chat completions endpoint (as noted earlier, same path as OpenAI’s). Double-check there’s no extra whitespace and that it’s https.

Request Body (JSON): We need to send a JSON body with the prompt and parameters for the AI. Bubble’s API Connector allows you to construct the JSON. Ensure the Body Type is set to JSON. In the body editor, you can either write raw JSON or use the UI fields. Writing raw JSON with Bubble’s dynamic placeholders is straightforward. For example, you can input:

{
  "model": "kimi-k2-0711-preview",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "<user_prompt>" }
  ],
  "max_tokens": 200,
  "temperature": 0.7
}

Let’s break this down:

model: This specifies which Kimi model to use. We used "kimi-k2-0711-preview" in this example, which was a recent K2 model ID. Moonshot updates model IDs periodically (e.g. a newer one might be "kimi-k2-128k-v1"). Check the Kimi docs or console for available model IDs and use the one you prefer (the preview model or a 128k context version).

messages: This is an array of message objects, following the ChatGPT format. We include a system message to prime the assistant’s behavior (here telling it to be helpful) and the user message which is the prompt from our app user. Notice the "<user_prompt>" in the JSON – Bubble will treat text inside < > as a dynamic variable. After writing this JSON, the API connector will automatically create a field for user_prompt. We will later hook this up to our input box value in the workflow.

max_tokens: This limits the length of the AI’s response. We set 200 as an example, meaning the reply won’t exceed ~200 tokens (roughly ~150 words). You can adjust this as needed.

temperature: This controls randomness. 0.7 is a moderately creative setting; lower (e.g. 0.2) would give more deterministic answers, while higher (e.g. 1.0) yields more variety. Choose based on how creative or predictable you want the assistant to be.

Tip: You can add other parameters supported by the API if needed, such as n (number of responses) or stop sequences, but for a basic chatbot these aren’t required. Kimi’s API accepts the same fields as OpenAI’s (like temperature, top_p, etc.).
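To see the whole request at a glance, the parameters above can be assembled in a short Python sketch. The model ID, max_tokens, and temperature mirror the example body; the `requests.post` call is left commented out so the sketch runs without network access or a real key:

```python
def build_chat_request(user_prompt: str,
                       model: str = "kimi-k2-0711-preview",
                       max_tokens: int = 200,
                       temperature: float = 0.7) -> dict:
    """Assemble the same chat-completion body Bubble's API Connector sends."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

body = build_chat_request("Hello, what is Kimi AI?")
# To actually send it (requires the requests library and a real key):
# import requests
# r = requests.post("https://api.moonshot.ai/v1/chat/completions",
#                   headers={"Authorization": "Bearer sk-...",
#                            "Content-Type": "application/json"},
#                   json=body)
```

In Bubble, the `<user_prompt>` placeholder plays the role of the `user_prompt` argument here.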

Initialize the Call: This is an important step. Click the “Initialize call” button in the API Connector. Bubble will prompt you to enter sample values for any dynamic fields. Enter a test prompt in the user_prompt field (e.g. “Hello, what is Kimi AI?”). When you run the initialization, Bubble will send the request to Kimi. If everything is set up correctly, you should receive a response JSON. Bubble will display the response structure, which typically includes an id, object, created timestamp, model echo, a choices array with the AI’s answer, and a usage object with token counts.

For example, the response may look like:

{
  "id": "chatcmpl-abc123...",
  "object": "chat.completion",
  "created": 1697500000,
  "model": "kimi-k2-0711-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello there! I'm Kimi, a large language model. How can I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 16,
    "total_tokens": 41
  }
}

In this JSON:

  • The assistant’s reply is found under choices[0].message.content. That’s the text we want to display to the user.
  • The usage object shows token counts – useful for monitoring API usage but not needed for basic functionality.
  • finish_reason indicates why the output ended (e.g. “stop” means it finished naturally).

Bubble will automatically parse this structure. After initialization, you’ll be able to select fields like choices X message content in the Bubble editor. Click Save to save the call configuration.
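In code terms, pulling the reply out of that response is a simple dictionary lookup. This Python sketch walks the sample JSON above (the values come from the sample, not a live call):

```python
# A trimmed copy of the sample chat-completion response shown above.
sample_response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "choices": [
        {"index": 0,
         "message": {"role": "assistant",
                     "content": "Hello there! I'm Kimi, a large language model."},
         "finish_reason": "stop"}
    ],
    "usage": {"prompt_tokens": 25, "completion_tokens": 16, "total_tokens": 41},
}

# The same path Bubble exposes as "choices (item #1)'s message's content":
reply = sample_response["choices"][0]["message"]["content"]
tokens_used = sample_response["usage"]["total_tokens"]
```

This is exactly the expression you will pick in Bubble's workflow builder later.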

At this point, the Bubble API Connector knows how to communicate with Kimi. We have an action call that takes a user_prompt and will return the assistant’s response. Next, we’ll design the UI and workflows to use this call.

Building the Chatbot UI in Bubble

Now, let’s create a simple interface for our chatbot within the Bubble app. We’ll need an input for the user’s question, a button to submit, and a display of the conversation (so the user can see both their queries and the AI’s answers). Here’s one way to set it up:

  • Input Field: Drag a Multi-Line Input element onto the page (so users can type multi-sentence questions if needed). This will be for the user’s prompt. Give it a clear placeholder like “Type your question…” for guidance.
  • Send Button: Place a Button element next to or below the input, labeled “Send” (or an icon like a paper plane). This button will trigger the workflow to call the API.
  • Conversation Display: To show the ongoing conversation, use a Repeating Group element. Set its data type to a text or a custom type that represents a message. For simplicity, we can use text and store combined “role: content” strings, or we can create a custom data structure. A quick approach without setting up the database is:
    • Set the Type of content of the repeating group to text.
    • For Data source, we will use a custom state (or later, a list from the database) that holds a list of messages. We’ll populate this list as the conversation progresses.
    • In the repeating group cell, you can have a Text element that displays the current cell’s text. We will format messages like “User: Hello” or “Assistant: Hi there” for clarity.
    Alternatively: For a more robust setup, you might create a new Thing (data type) in Bubble called “Message” with fields like role (text) and content (text). You could also have a “Conversation” type to group messages if you want multiple separate chat sessions. However, to keep this guide straightforward, we’ll handle messages without a complex database – using either a custom state or a simple list.
  • Layout Considerations: Design the repeating group to look like a chat log. For instance, you might set the cell layout with conditional formatting: if the message text starts with “User:”, align it to the right and maybe use a different bubble color; if “Assistant:”, align left with another color. This step is purely for a nicer UI feel. Each cell’s text could strip out the “User:” or “Assistant:” label when displaying, or you could split the role and content into two elements for styling.

Now the page has the elements needed. Let’s wire up the functionality with workflows.

Workflow: Sending the Prompt and Displaying the Response

We will create a workflow for when the user clicks the Send button (or presses enter, if you set that up) to: (a) send the API request with the user’s prompt, (b) receive the AI’s answer, and (c) update the UI to show the new messages.

Step 1: Trigger API Call
In the Bubble Editor, go to the Workflow tab and add a new event: “When Button Send is clicked”. For the first action, choose Plugins → Kimi AI API – Chat Completion (the exact menu name will include the API name and call name you set). Bubble will present fields for the call, including the user_prompt. Set the user_prompt to the value of the Multi-Line Input (e.g. Input A’s value).

When this action runs, it will send the JSON we set up: the system message and user message, and get the assistant’s reply. Because we initialized the call, Bubble knows the structure of the response and will let us use it in subsequent workflow steps.

Step 2: Add User Message to UI (optional, for immediate feedback)
It can be nice to immediately show the user’s question in the chat log as soon as they send it. You can do this by updating the repeating group’s data source:

  • If using a custom state (let’s say a page-level state called conversation of type list of texts), add an action Element Actions → Set State. Target the page (or wherever you defined the state) and choose the conversation state. Set its value to conversation (current value) :append "User: " + Input's value. This appends the new user message to the list.
  • If using the database (Message things), create a new thing in the Message data type with role = “User” and content = Input’s value, and attach it to a Conversation if applicable. Then you’d set the repeating group’s data source to search for all messages in this conversation.

Either way, after this step, the user’s message appears in the chat history display. You might also want to clear the input field (e.g. reset the input) so it’s empty for the next question – use the Reset relevant inputs action for that.

Step 3: Handle the AI Response
Now, we wait for the API call (Step 1) to return with Kimi’s answer. Bubble’s workflow will pause until the response comes back (this happens very quickly, usually a second or two for short answers). We can then take the result and add it to our chat log:

  • Add another Set State (or create new thing) action after the API call step. This time, we append the assistant’s reply. The response from the API action can be accessed as “Result of step 1 (Chat Completion)’s choices first item’s message’s content”. In Bubble’s expression builder, it will likely be something like Result of Step 1 (Chat Completion) → choices (item #1) → message content. This is the text of the assistant’s answer. Append this to the conversation state list, prefixing with “Assistant: ” for clarity. (If using the database, create a new Message thing with role = “Assistant” and content = that result text.)

After this step, the repeating group’s data source (our conversation list) now has the assistant’s reply added. The UI will automatically update to show the new message. Congratulations – you’ve just processed one full question-answer round with Kimi!

Step 4: Error Handling
What if something goes wrong – say the API key was wrong, or the request hit a rate limit? By default, if the API call returns an error (HTTP status 4xx or 5xx), Bubble will throw an error dialog and halt the workflow. To handle this gracefully:

  • You can enable the option “Ignore response codes” in the API call configuration if available, which treats errors as data. But a simpler method is to add an “Only when” condition on the subsequent steps that require the response. For instance, on the step that appends the assistant message, you could set “Only when Result of Step 1’s error is empty” (Bubble may have an error object or you might infer error if the content is empty). If there’s an error, you might instead show a message to the user.
  • A common approach is to use a Toast/Alert element to display a friendly error. For example, if the API returns an error, trigger an alert that says something like “Sorry, I’m having trouble reaching the AI right now. Please try again.” The Kimi guide suggests messaging like “The AI is busy, please try again in a moment” if rate limits or other issues are hit.

At minimum, consider cases where the input is empty (disable the send button if no text) and catch API errors to avoid a confusing experience.

Step 5: Maintain Context for Conversations
Right now, our setup includes a system prompt and the latest user prompt in each API call. Kimi will respond based only on that info. But what if we want the AI to remember previous messages in the chat (context-awareness)? Kimi K2’s huge 128k context window is perfect for carrying a long dialogue – it can remember a large conversation history or reference documents provided in the messages. To utilize this, we need to send more than just the last user message. We should include prior messages in the messages array on each API call.

How to implement context: One straightforward way is to include all previous user and assistant messages in the API request. For example, if a user asks follow-up questions, you’d send an array like:

[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "First question" },
  { "role": "assistant", "content": "First answer" },
  { "role": "user", "content": "Follow-up question" }
]

The challenge is constructing this array dynamically in Bubble:

If you kept your chat history in a list (state or database), you can format it to a JSON string before the API call. Bubble’s “:formatted as text” operator is useful here. You could create a workflow action before the API call to build a JSON array of past messages. For instance, take the list of Message things and format each as {"role":"User","content":"this Message's content"} (and similarly for assistant). Join them with commas and surround with brackets to form a JSON array text. Then send that as part of the API JSON (maybe as a single <history> placeholder).

A simpler approach is to use Bubble’s ability to send lists in the API call if supported. In the API Connector, you might set the messages field to type Array and pass the list directly. However, Bubble might need the structure defined in initialization for that to work.

For this tutorial, detailing every step of multi-turn formatting might be too advanced, but the key idea is: maintain a list of messages and include them each time. If you only include the latest user input (and a static system message), the AI won’t have prior context. By including prior Q&A pairs, Kimi will understand follow-up questions in context. Thanks to Kimi’s large context, you can include quite a lot of history if needed.
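The history-building logic described above can be sketched in a few lines of Python. The trimming to the last N turns is an assumption for very long chats (useful for efficiency), not something Bubble or the API does for you:

```python
def build_messages(history, new_user_prompt,
                   system_prompt="You are a helpful assistant.",
                   max_history=20):
    """history is a list of (role, content) pairs, oldest first,
    where role is "user" or "assistant"."""
    messages = [{"role": "system", "content": system_prompt}]
    # Keep only the most recent turns; Kimi's 128k window is large,
    # but shorter prompts are faster and cheaper.
    for role, content in history[-max_history:]:
        messages.append({"role": role, "content": content})
    messages.append({"role": "user", "content": new_user_prompt})
    return messages

history = [("user", "First question"), ("assistant", "First answer")]
msgs = build_messages(history, "Follow-up question")
```

In Bubble, the equivalent is formatting your stored message list into a JSON array text (e.g. with “:formatted as text”) and injecting it into the request body.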

Optional Persistent Storage: If you want the conversation to persist (say a user comes back later or you want to analyze chat logs), use Bubble’s database:

  • Create a data type “Conversation” (fields: maybe user, or conversation ID) and a data type “Message” (fields: conversation (linked to Conversation), role, content, timestamp).
  • When a new chat session starts, create a Conversation entry. Save each Message with the Conversation link. The repeating group can load messages by searching for Message where conversation = current conversation.
  • This way, context is not only kept in memory but also saved. You could even retrieve and send only the last N messages to the API for efficiency (if the history grows very large, though 128k tokens is huge).

For general applications, storing chat history can be useful (e.g., an internal chatbot might log interactions for quality review). If you do store data, mind any privacy concerns and use Bubble’s Privacy Rules to restrict who can read those messages (especially if the app involves multiple users).

Testing and Usage

With everything set up, it’s time to test the chatbot:

  • Run in Preview: Open the app in Bubble’s preview mode. Try typing a question and hit Send. You should see your question appear in the chat log and, after a moment, the AI’s answer appear. For example, ask: “What is the capital of France?” The assistant should respond with something like “The capital of France is Paris.”
  • Follow-up question: To test context, you might ask a follow-up like “What is its population?” If you haven’t enabled context passing, the AI might not know you mean Paris – it might ask for clarification. But if you implemented context (including the previous Q&A in the messages array), the AI can reference that Paris is the subject and give the population. This demonstrates the importance of passing conversation history for a truly context-aware assistant.
  • Structured output: If your use case requires JSON or structured answers, you can prompt Kimi accordingly. For instance, you might ask the AI to output a response as JSON (e.g., “Give me a JSON list of three tourist attractions in Paris with their descriptions.”). The system message can enforce this format. Kimi (like GPT-4) will then return JSON text which you can parse. Since the API returns the output as a string in the JSON, you might need to parse it within Bubble if you want to use it as data (Bubble doesn’t automatically parse arbitrary JSON in the answer). Plugins or regex can help parse, but that’s beyond the basic scope. The key is you can obtain structured data by instructing the AI.
  • Error scenarios: Test what happens if the API fails. For example, you could temporarily use a wrong API key to see how your app behaves (you should see your error handling message instead of Bubble’s default error). Also test very long prompts or rapid-fire prompts to see how your app handles them.
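On the structured-output point above: the API returns the assistant's JSON as a plain string, so it still needs parsing on your side. Here is a defensive Python sketch; the fence-stripping and None fallback are assumptions, since models sometimes wrap JSON in extra prose or markdown code fences:

```python
import json

def parse_json_reply(content: str):
    """Try to parse the assistant's reply as JSON; return None on failure."""
    stripped = content.strip()
    # Strip markdown code fences the model may have added around the JSON.
    if stripped.startswith("```"):
        stripped = stripped.strip("`")
        if stripped.startswith("json"):
            stripped = stripped[4:]
    try:
        return json.loads(stripped)
    except json.JSONDecodeError:
        return None

attractions = parse_json_reply(
    '[{"name": "Louvre", "description": "Art museum"}]')
```

In Bubble you would reach for a JSON-parsing plugin or regex to do the same, but the failure mode is identical: always handle the case where the reply is not valid JSON.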

Rate Limit Considerations and Performance

When using any AI API, it’s important to consider rate limits and performance optimizations:

Rate Limits: Moonshot’s Kimi API will have rate limits (e.g., number of requests per minute) depending on your tier. If your app is successful and many users are chatting simultaneously, you could hit these limits. Kimi’s free tier is limited, but adding a payment method increases the allowance. Always check the latest rate limit info on Moonshot. To avoid hitting limits, you can:

Prevent spamming: disable the Send button while a request is in progress, or implement a short cooldown between calls for a single user.

Batch or queue requests: If you need to handle a sudden surge, consider using Bubble’s backend workflows (or a third-party service) to queue calls. In a more code-oriented solution, one might queue tasks and process a few at a time; with Bubble, you can schedule API calls spaced out by a few seconds if needed.

Handle the 429 error: If the API returns a “Too Many Requests” (HTTP 429) due to rate limiting, catch it and show a friendly message (e.g., “The assistant is getting a lot of traffic, please try again in a moment.”). As noted, our workflow can detect an error and display an alert instead of failing silently.
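The 429 handling described above amounts to retry-with-backoff. A minimal Python sketch, using a stand-in call_api function in place of a real HTTP request (the delays and retry count are assumptions to tune for your tier):

```python
import time

def call_with_backoff(call_api, max_retries=3, base_delay=1.0):
    """Retry call_api() when it reports HTTP 429, doubling the delay each time.
    call_api must return a (status_code, body) tuple."""
    delay = base_delay
    for attempt in range(max_retries + 1):
        status, body = call_api()
        if status != 429:
            return status, body
        if attempt < max_retries:
            time.sleep(delay)
            delay *= 2  # exponential backoff
    return status, body

# Stand-in API: fails with 429 once, then succeeds.
calls = iter([(429, "Too Many Requests"), (200, "ok")])
status, body = call_with_backoff(lambda: next(calls), base_delay=0.01)
```

In Bubble you approximate this with a scheduled backend workflow that retries the call after a delay, plus the user-facing alert described above.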

Performance: Kimi is a very powerful model, but response time can depend on input size and complexity. Short prompts usually return in a second or two. If you send very large context (hundreds of KB of text), expect slower responses. Use the max_tokens wisely to cap response length – no need to ask for a 1000-token answer if a 100-token summary suffices. Shorter outputs are faster and save your token quota. Also, monitor the usage in responses to gauge how many tokens each call uses; this can help optimize prompt sizes and thereby performance.

Costs: While not the focus of this guide, keep in mind token usage might incur cost if you exceed free limits. Moonshot’s pricing (as of 2025) was around $0.15 per million input tokens and $2.50 per million output tokens – significantly cheaper than some competitors. Still, optimizing tokens is good practice to control costs.

Security: Ensure you never expose the API key in the front-end (we covered using the private key setting). All calls should go through the API Connector (server-side). Additionally, if your chatbot could be used to query sensitive internal data, be mindful of what you send to the AI. Kimi’s service will receive whatever you send in prompts – avoid sending personally identifiable information or secrets unless you trust the service and have user consent. Use HTTPS (the endpoint is HTTPS by default) so data in transit is encrypted.

Adapting the App to Various Use Cases

One great aspect of this no-code integration is that it’s not tied to a specific domain – you can repurpose it for countless scenarios:

Education: Build a study helper that explains concepts or quizzes the student. With Kimi’s long context, you could feed entire textbook chapters or lecture notes for the AI to answer questions on.

Finance: Create a personal finance Q&A bot. For example, feed it some financial data or rules and let users ask “What’s my budget looking like for this month?” (Just be careful with sensitive data).

E-commerce: Use the chatbot as a customer support assistant on an online store. It can answer product questions, give recommendations, or track orders if integrated with your database (via Bubble workflows).

Corporate Operations: An internal company bot could assist employees with HR questions, IT support, or training. You might load company policy documents or FAQs into the context so Kimi can refer to them when answering.

SaaS Applications: Improve your software app by adding an AI helper that can guide users, generate content (like a marketing copy generator), or analyze data they input.

Developer Tools: Although our focus is no-code, even developers can use this approach on Bubble to prototype tools – e.g., an AI that reviews code snippets or generates regex patterns, etc., using Kimi’s coding capabilities.

Because the integration is done via API, you can tailor the prompts and post-process the results for your specific needs. The core steps (sending input to Kimi and getting output) remain the same across industries, so if you keep the implementation generic, you can adapt the same template to any of these domains with minimal changes.

Conclusion

In this comprehensive walkthrough, we covered how to build a no-code chatbot app on Bubble using the Kimi AI API. We started from scratch – obtaining a Kimi API key and securing it – then set up Bubble’s API Connector to communicate with Kimi’s OpenAI-compatible endpoints. We configured a chat completion request with the proper headers and dynamic fields, and built a Bubble UI to send user prompts and display AI responses. Along the way, we discussed enabling context for multi-turn conversations (leveraging Kimi’s 128k token memory for more coherent, long dialogues) and optionally storing data in Bubble’s database for persistence.

We also emphasized best practices: secure your API key (Bubble’s private key setting), handle errors gracefully (never expose raw errors to end-users), and be mindful of rate limits and performance as your usage grows. By focusing on an industry-neutral implementation, this guide can serve as a template for many AI-powered apps – from education to e-commerce.

Finally, remember that building an AI feature is an iterative process. Test your chatbot thoroughly, fine-tune the system prompt to shape the AI’s tone and output format, and adjust parameters like temperature to get the desired balance of creativity and accuracy. Bubble’s no-code environment combined with Kimi’s advanced AI capabilities gives you a powerful toolkit to innovate without worrying about backend servers or machine learning infrastructure.

Now it’s your turn to bring your idea to life. With these steps, you can quickly launch a chatbot or AI assistant that adds real value to your users – all with no code, just smart configuration. Happy building, and enjoy your new Kimi AI-powered Bubble app!
