file: ./content/docs/community.en.mdx
meta: {
"title": "社区",
"description": "欢迎加入 Sealos 开源社区"
}
file: ./content/docs/index.en.mdx
meta: {
"title": "Sealos Docs",
"description": "Find quickstart guides, tutorials, best practices, deployment templates, system design docs, self-hosting solutions and more cloud native resources."
}
import { Text, Code, Play, Database } from 'lucide-react';
import { templateDomain } from '@/config/site';
} href="/docs/overview/intro" title="How Sealos DevBox works">
Understand what Sealos DevBox is and what it can do, including its key
features and advantages.
} href="/docs/guides/fundamentals" title="Guides">
Learn how to create and manage your Projects, develop your application,
create releases, and deploy your application.
} href="/docs/quick-start" title="Quickstarts">
Develop & deploy in minutes. Jump start your development with Sealos DevBox.
} href="/docs/guides/databases/postgresql" title="Databases">
Step-by-step guides on how to deploy and connect to databases in Sealos
DevBox.
***
Comprehensive development stack support
Launch specialized development environments for any framework or language.
import {
NodejsIcon,
GoIcon,
JavaIcon,
PHPIcon,
PythonIcon,
RustIcon,
} from '@/components/ui/icons';
} href="/docs/guides/databases/postgresql/nodejs" title="Node.js">
Node.js is a runtime environment that allows you to run JavaScript code
outside of a browser.
} href="/docs/guides/databases/postgresql/go" title="Go">
Go is a statically typed, compiled programming language designed at Google.
} href="/docs/guides/databases/postgresql/java" title="Java">
Java is a high-level, class-based, object-oriented programming language that
is designed to have as few implementation dependencies as possible.
} href="/docs/guides/databases/postgresql/php" title="PHP">
PHP is a popular general-purpose scripting language that is especially
suited to web development.
} href="/docs/guides/databases/postgresql/python" title="Python">
Python is an interpreted, high-level, general-purpose programming language.
} href="/docs/guides/databases/postgresql/rust" title="Rust">
Rust is a systems programming language that runs blazingly fast, prevents
segfaults, and guarantees thread safety.
file: ./content/docs/quick-start.en.mdx
meta: {
"title": "Quick Start Tutorial",
"icon": "Album",
"keywords": [
"Sealos DevBox",
"Next.js",
"cloud development",
"Kubernetes",
"OCI image",
"Cursor IDE",
"remote development",
"cloud deployment",
"containerization",
"DevOps"
],
"description": "Learn how to create, develop, and deploy a Next.js app using Sealos DevBox. This guide covers project setup, remote development with Cursor IDE, and cloud deployment."
}
import { AppDashboardLink } from '@/components/docs/Links';
Sealos DevBox is an all-in-one platform for integrated online development, testing, and production. It creates environments and database dependencies with a single click, lets developers work locally in their preferred IDEs, streamlines setup, and enables automatic application deployment.
**In this guide, we'll demonstrate how to create a minimal Next.js demo project with Sealos DevBox.**
## Create a DevBox Project
Click on the "DevBox" icon from your , then click on the
"Create New Project" button to create a new project.
In the "Runtime" section, choose "Next.js" as the development framework. Use
the sliders to set the CPU cores and memory for the project.

After setting up the basic environment, you'll need to configure the network
settings for your project:

* Scroll down to the "Network" section of the configuration page.
* Container Port:
* Enter "3000" in the Container Port field. This is the default port that Next.js uses for development.
* If you need additional ports, click the "Add Port" button and specify them.
* Enable Internet Access:
* Toggle the switch to enable internet access for your DevBox. This allows external users to access your Next.js application through the public internet using the provided domain.
* Domain:
* By default, Sealos provides a subdomain for your application.
* If you want to use a custom domain, click on "Custom Domain" and follow the instructions to set it up.
Remember that the container port (3000) should match the port your Next.js application is configured to run on. If you change the port in your Next.js configuration, make sure to update it here as well.
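For reference, the port a Next.js app runs on is usually set in the `dev` script of `package.json` (a sketch; `-p` is Next.js's standard port flag):

```json
{
  "scripts": {
    "dev": "next dev -p 3000"
  }
}
```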
Click on the "Create" button to create your project.
## Connect with Cursor IDE
After creating your project, you'll see it listed in the DevBox List. Each project has an "Operation" column with various options.

To connect to your project's DevBox runtime using Cursor IDE:
* Locate your project in the DevBox List.
* In the "Operation" column, click on the dropdown arrow next to the VSCode icon.
* From the dropdown menu, select the "Cursor" option.
When you click on "Cursor", it will launch the Cursor IDE application on your
local machine. Within Cursor, a popup window will appear, prompting you to
install the DevBox plugin for Cursor. This plugin enables SSH remote
connection to the DevBox runtime.
* Follow the instructions in the Cursor popup to install the DevBox plugin.
* Once installed, Cursor will establish a remote connection to your DevBox runtime.
You can switch between different IDE options (VSCode, Cursor, or VSCode
Insiders) at any time by using the dropdown menu in the "Operation" column.
## Develop
After the connection is established, you'll be able to access and edit your
project files directly within the Cursor IDE environment.

This remote connection allows you to develop your Next.js application using Cursor IDE, with all the benefits of a cloud-based development environment:

* Your code runs in the DevBox runtime, ensuring consistency across development and production environments.
* You can access your project from anywhere, on any device with Cursor installed.
* Collaboration becomes easier as team members can connect to the same DevBox runtime.
You can start debugging your Next.js application:
* Open the terminal within Cursor IDE.
* Navigate to your project directory if you're not already there.
* Run the following command to start the Next.js development server:
```bash
npm run dev
```
* This command will start your Next.js application in development mode.
To access your running application:
* Return to the Sealos DevBox List in your browser.
* Find the project you just created.
* Click on the "Detail" button on the right side of your project's row.
In the project details page:
* Look for the "Network" section.
* You'll see an "External Address" field.
* Click on this external address.

This will open your Next.js application in a new browser tab, allowing you to
view and interact with your running service.

## Release
After you've developed and tested your Next.js application, you can release it as an OCI (Open Container Initiative) image. This allows you to version your application and prepare it for deployment.
1. In the Cursor IDE terminal, navigate to your project directory and run the build command:
```bash
npm run build
```
This command creates a production-ready build of your Next.js application in the `.next` directory.
2. Navigate to your project's details page:
* Go to the Sealos DevBox List in your browser.
* Find your project and click on the "Detail" button on the right side of your project's row.
3. On the project details page, look for the "Version" section.
4. Click on the "Release" button located in the top right corner of the "Version" section.
5. A "Release" dialog box will appear. Here, you need to provide the following information:
* Image Name: This field is pre-filled with your project's image name.
* Tag: Enter a version tag for your release (e.g., v1.0).
* Description: Provide a brief description of this release (e.g., "Initial release" or "Bug fixes for login feature").

6. After filling in the required information, click the "Release" button at the bottom of the dialog box.
7. The system will process your release. Once completed, you'll see a new entry in the "Version" section of your project details page, showing the tag, status, creation time, and description of your release.

By following these steps, you've successfully created an OCI image of your Next.js application. This image can now be used for deployment or shared with other team members. Each release creates a snapshot of your current code, allowing you to maintain different versions of your application and easily roll back if needed.
Remember to create new releases whenever you make significant changes or reach
important milestones in your project. This practice helps in maintaining a
clear history of your application's development and facilitates easier
deployment and collaboration.
## Deploy
After releasing your Next.js application as an OCI image, you can deploy it to Sealos Cloud for production use. Here's how to do it:
1. In your project's details page, locate the "Version" section.
2. Find the release you want to deploy and click the "Deploy" button in the "Operation" column.
3. This will redirect you to the App Launchpad application within Sealos.
4. In the App Launchpad application, follow the deployment wizard to configure your application settings. This may include:
* Selecting the appropriate environment
* Setting resource limits (CPU, memory)
* Configuring environment variables if needed
* Setting up any required volumes or persistent storage

5. Once you've configured all necessary settings, click the "Deploy Application" button in the top right corner to start the deployment process.
6. You'll be taken to the application details view within App Launchpad.
7. Once the status is "Running", Click on the address provided under "Public Address". This will open your deployed Next.js application in a new browser tab.

By following these steps, you've successfully deployed your Next.js application to Sealos Cloud using the App Launchpad application. Your application is now accessible via the public address, allowing users to interact with it from anywhere on the internet.
You can always update your application by creating a new release in DevBox and
repeating this deployment process with the new version using App Launchpad.
This workflow allows you to develop and debug your Next.js application in a cloud environment while still using your preferred local IDE. The external address makes it easy to share your work with team members or clients, as they can access your running application from anywhere with an internet connection.
file: ./content/docs/advanced/architecture.en.mdx
meta: {
"title": "Architecture"
}
Sealos DevBox is an advanced development environment solution that leverages cloud-native container and Kubernetes technologies to offer a unified and flexible development runtime. Its key feature is simulating a traditional virtual machine experience while retaining the benefits of containerization.
## Architecture
Sealos DevBox is built on a layered architecture, comprising these key components:

## Control Flow
DevBox implements a decoupled front-end and back-end design:
1. Users trigger actions via the web interface or plugins
2. DevBox Controller receives and processes these requests
3. Controller translates the processed instructions into Kubernetes API calls
4. Kubernetes executes the corresponding Container operations

## State Persistence Mechanism
DevBox employs an intelligent state preservation system to maintain user environment consistency:
### Automatic Saving
* System auto-saves user environment changes under specific conditions
* Changes are packaged as image layers
* New layers are appended to the base image as commits
* Updated images are securely stored in an isolated internal registry
### Environment Recovery
* On subsequent startups, the system boots from the most recent image
* Ensures full preservation of historical changes
### Optimization
The Container-shim layer provides automated maintenance:
* Regularly merges image layers to optimize storage
* Automatically cleans up redundant data
* Keeps image size and layer count in check to sustain system performance

file: ./content/docs/AI/mcp.en.mdx
meta: {
"title": "MCP (Model Context Protocol)",
"description": "Learn how to use Model Context Protocol (MCP) services on the Sealos platform to connect AI models with external tools and data sources through standardized interfaces.",
"keywords": [
"MCP",
"Model Context Protocol",
"Sealos",
"AI integration",
"large language models",
"tool calling"
]
}
## What is Model Context Protocol (MCP)?
The [Model Context Protocol](https://modelcontextprotocol.io/) (MCP) is a game-changing standard that lets AI models talk to external tools and data sources seamlessly. Instead of building custom integrations for every service, developers can integrate once and connect to any MCP-compatible system.
### The Problem MCP Solves
Here's the challenge: AI models are smart, but they're isolated. They can't access your databases, check your billing, or run code in your development environment. To make AI truly useful, you need to connect it to real-world systems.
The traditional approach? Build custom integrations for every single service. Each one has different APIs, authentication methods, and data formats. It's a maintenance nightmare that gets worse as you add more tools.
### MCP Changes Everything
MCP is like having a universal translator for AI integrations. Here's how it works:
**For Developers**: Write one MCP integration and connect to any MCP-compatible service. No more custom connectors for every tool.
**For Service Providers**: Build one MCP interface and instantly work with any MCP-enabled AI application.
**Think USB-C for AI**: Just like USB-C replaced dozens of different charging cables, MCP replaces dozens of different API integrations with one standard protocol.
### How MCP Works: The Three-Part System
MCP uses a simple client-server model with three key components:
* **MCP Host**: Your AI application (like Cursor, VS Code, or ChatGPT)
* **MCP Client**: The connection bridge that your AI app creates
* **MCP Server**: The external service that provides tools and data (like Sealos)
**Simple Example**: Your Cursor editor (Host) creates a connection (Client) to talk to Sealos services (Server). Want to connect to multiple services? Your editor just opens multiple connections.
## Sealos MCP: Your Cloud Platform, AI-Ready
Sealos has built MCP servers for all its major platform capabilities. Using StreamableHttp communication, these servers work seamlessly with any MCP-compatible IDE or AI application.
**Bottom line**: You can now control your entire Sealos infrastructure through natural language conversations with AI.
### What You Can Do with Sealos MCP
Here's what's available right now:
* **🛠️ DevBox**: Spin up development environments and run code
* **🗄️ Database**: Query and manage your databases
* **💰 Cost Center**: Check billing and manage expenses
* **📊 Observability**: Monitor performance and view logs
**Note**: Service availability varies by region. Check your Sealos console to see what's available in your zone.
### Quick Authentication Setup
Sealos uses your KubeConfig for authentication. Here's the simple 3-step process:
**Step 1: Get Your KubeConfig**
1. Go to [Sealos Console](https://os.sealos.io)
2. Click your profile picture (top right)
3. Select "KubeConfig" and copy the content
**Step 2: URL Encode It**
Choose any of these methods to URL-encode your KubeConfig:
**Method 1: Online Tool (Easiest)**
1. Open [URL Encoder](https://www.urlencoder.org/) or search "URL encoder"
2. Paste your KubeConfig content
3. Click "Encode"
4. Copy the encoded result
**Method 2: Browser Console**
1. Press `F12` on any webpage to open developer tools
2. Go to "Console" tab
3. Type this and press Enter:
```javascript
encodeURIComponent(`paste your KubeConfig here`)
```
4. Copy the encoded output
**Method 3: Python (if installed)**
1. Open terminal
2. Type `python3` to enter Python
3. Run this code:
```python
import urllib.parse
kubeconfig = """paste your KubeConfig here"""
encoded = urllib.parse.quote(kubeconfig)
print(encoded)
```
4. Copy the encoded result
**Step 3: Use It**
Add this header to your MCP configuration:
```
Authorization:
```
**Example**: `apiVersion: v1` becomes `apiVersion%3A%20v1`
**Pro Tip**: Save your encoded KubeConfig somewhere safe—you'll use it for all MCP configurations.
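As a sketch, steps 2 and 3 amount to a single built-in call in Node.js (the KubeConfig value here is a short stand-in for your real file contents):

```javascript
// Stand-in KubeConfig content; paste your real KubeConfig here.
const kubeconfig = 'apiVersion: v1\nkind: Config';

// encodeURIComponent percent-encodes ':' as %3A, ' ' as %20, and '\n' as %0A.
const encoded = encodeURIComponent(kubeconfig);

console.log(encoded); // apiVersion%3A%20v1%0Akind%3A%20Config
```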
## Ready to Get Started?
### Before You Begin: Two Things You Need
**1. Your MCP Service URL**
* Log into [Sealos Console](https://os.sealos.io)
* Find the MCP services section for your region
* Copy the URL for the service you want to use
**2. Your Authentication Token**
* Follow the authentication steps above to get your URL-encoded KubeConfig
* Keep it handy—you'll paste it into your IDE configuration
**That's it!** Now let's connect your favorite AI tool to Sealos.
## Connect Your AI Tool to Sealos
### Cherry Studio Setup
[Cherry Studio](https://cherry-ai.com/) makes MCP configuration visual and straightforward.
**Quick Setup:**
1. **Settings** → **MCP Servers** → **Add Server**
2. **Fill in the details:**
* **Name**: `Sealos` (or whatever you prefer)
* **Type**: `Streamable HTTP (StreamableHttp)`
* **URL**: Your Sealos MCP service URL
* **Headers**: `Authorization=`
3. **Hit the green start button** and wait for "Connected" status
**Start Using It:**
Once connected, just chat normally. When you need Sealos functionality, select the MCP tools and ask.
### VS Code Setup
VS Code gets MCP powers through extensions. Here's the streamlined setup:
**Quick Setup:**
1. **Open AI Chat** (right sidebar) → **Agent Mode** → **Tools icon**
2. **Find MCP settings** → **Gear icon** → **Add new server**
3. **Configure:**
* **Name**: `Sealos Platform`
* **URL**: Your Sealos MCP service URL
* **Headers**:
```json
{
  "Authorization": ""
}
```
4. **Save** → VS Code auto-connects
**Code While You Chat:**
Now you can ask your AI assistant to interact with Sealos while you code.
### Cursor Setup
[Cursor](https://www.cursor.com/) has the best native MCP support. Setup is super clean:
**Quick Setup:**
1. **Settings gear** (top right) → **MCP Servers** → **New MCP Server**
2. **Paste this config** (replace the URL and auth token):
```json
{
  "name": "Sealos MCP",
  "url": "https://your-sealos-mcp-endpoint.com",
  "headers": {
    "Authorization": ""
  }
}
```
3. **Save** → Cursor auto-verifies → Look for "Connected" status
**AI-Powered Development:**
Now your AI coding assistant can manage your entire Sealos infrastructure.
### Trae Setup
Trae keeps it simple with manual JSON configuration.
**Quick Setup:**
1. **Gear icon** (top right) → **MCP** → **Add** → **Manual Configuration**
2. **Paste this config** (update URL and auth):
```json
{
  "name": "Sealos MCP",
  "url": "https://your-sealos-mcp-endpoint.com",
  "headers": {
    "Authorization": ""
  }
}
```
3. **Confirm** → You're connected!
**Explore Your Tools:**
Click your server name to see all available Sealos tools and what they do.
### Cline Setup
[Cline](https://github.com/cline/cline) is a powerful AI coding assistant that runs directly in VS Code with native MCP support. It can write code, execute commands, browse the web, and more.
**Setup Steps:**
1. **Install the Cline extension** in VS Code and restart
2. **Add server via interface**:
* Click **"MCP Servers" icon** (purple button) → **"Remote Servers" tab**
* **Server Name**: `sealos`
* **Server URL**: `https://your-sealos-mcp-endpoint.com`
* Click **"Add Server"**
3. **Complete configuration**:
* Click **"Edit Configuration"** to open `cline_mcp_settings.json`
* Add the complete configuration for the `sealos` server:
```json
{
  "mcpServers": {
    "sealos": {
      "type": "streamableHttp",
      "url": "https://your-sealos-mcp-endpoint.com",
      "headers": {
        "Authorization": ""
      },
      "disabled": false,
      "autoApprove": [],
      "timeout": 60
    }
  }
}
```
**Important**: Replace `` with your actual URL-encoded KubeConfig from the steps above.
4. **Save** → Cline auto-verifies → Look for "Connected" status
**What You Can Do:**
Now you can ask Cline to manage your entire Sealos infrastructure.
**Pro Tips:**
* Add trusted tools to `autoApprove` for faster workflows
* Use environment variables for sensitive auth tokens
* Set appropriate `timeout` values for long-running operations
## When Things Don't Work
### "Connection Failed" - Fix It Fast
**Most Common Cause**: Wrong URL or bad auth token
**Quick Fixes:**
1. **Double-check your MCP service URL** - Copy it fresh from Sealos console
2. **Re-encode your KubeConfig** - Authentication tokens can get corrupted
3. **Test your network** - Can you reach sealos.run in your browser?
4. **Check the obvious** - Typos in configuration happen to everyone
### "Authentication Error" - Get Back In
**What Happened**: Your KubeConfig is invalid or expired
**Fix It:**
1. **Get a fresh KubeConfig** from Sealos console
2. **Make sure you copied the whole thing** - Missing characters break everything
3. **Re-encode it properly** - Use a reliable URL encoder
4. **Check your header format** - Should be `Authorization: `
### "Tool Calls Failing" - Restore Functionality
**Likely Issues**: Permissions or service availability
**Solutions:**
1. **Verify your account permissions** - Do you have access to the service you're calling?
2. **Check service status** - Is the service running in your region?
3. **Look at error messages** - They usually tell you exactly what's wrong
4. **Try a simple test** - Start with basic operations before complex ones
### Pro Debugging Tips
* **Check connection status first** - Your IDE shows if MCP is connected
* **Enable verbose logging** - More info = faster fixes
* **Test one thing at a time** - Don't change multiple settings simultaneously
* **When in doubt, restart** - Sometimes IDEs need a fresh start after configuration changes
## Resources & Next Steps
### Essential Links
**Learn More:**
* [Official MCP Documentation](https://modelcontextprotocol.io/) - Deep dive into the protocol
* [Sealos Platform Docs](/docs) - Everything about Sealos
* [AI Proxy Service](/docs/guides/ai-proxy) - Sealos AI integration options
**Compatible Tools:**
* [Cherry Studio](https://cherry-ai.com/) - Clean AI chat interface
* [Cursor](https://www.cursor.com/) - Best-in-class AI code editor
* [VS Code](https://code.visualstudio.com/) - Works with MCP extensions
* [Cline](https://github.com/cline/cline) - AI coding assistant for VS Code
* [Claude Desktop](https://claude.ai/) - Anthropic's desktop app
* Any other MCP-compatible application
### Need Help?
**Stuck? Here's where to go:**
1. **Start here** - Re-read the troubleshooting section above
2. **Community help** - [Discord](https://discord.gg/wdUn538zVP) for peer support
3. **Bug reports** - [GitHub Issues](https://github.com/labring/sealos/issues) for technical problems
4. **Direct support** - Contact Sealos support for urgent issues
file: ./content/docs/examples/build-your-own-discord-bot.en.mdx
meta: {
"title": "Build Your Own Discord Bot",
"description": "Build an AI-powered Discord bot using DevBox. You'll create a bot that can respond to messages using AI capabilities powered by FastGPT."
}
## Overview
This example demonstrates how to build an AI-powered Discord bot using DevBox. You'll create a bot that can respond to messages using AI capabilities powered by [FastGPT](https://tryfastgpt.ai), showcasing how to integrate multiple services and APIs in a DevBox environment.
**Technologies Used:**
* Node.js
* Discord.js
* FastGPT API
* DevBox development environment
**Expected Outcome:**
* A functioning Discord bot that:
* Responds to basic commands
* Integrates with FastGPT for AI-powered responses
* Handles real-time message events
* Processes natural language queries
## Prerequisites
* A Discord account with administrator privileges
* Basic knowledge of JavaScript/Node.js
* Access to [FastGPT platform](https://cloud.tryfastgpt.ai)
* The following credentials:
* Discord Bot Token
* FastGPT API Key
* FastGPT Base URL
## Step-by-Step Guide
### 1. Setting Up Discord Application
#### Create Discord Application
1. Visit the [Discord Developer Portal](https://discord.com/developers/applications)
2. Click "New Application" and choose a name for your bot. This name will be displayed in your Discord server

#### Configure Bot Permissions
1. Navigate to the "Bot" section
2. Enable required intents:
* Server Members Intent
* Message Content Intent

These permissions allow the bot to:
* Access member-related events
* Read and process message content
3. Copy your bot token by clicking "Reset Token" and then "Copy"
* Keep this token secure and never share it
* You'll need this token to authenticate your bot
#### Set Up OAuth2
1. Go to OAuth2 section
2. Select "bot" scope

3. Choose "Administrator" permissions

4. Copy the generated OAuth2 URL
5. Use the URL to add the bot to your server
### 2. Creating DevBox Project
#### Initialize Node.js Project
1. [Create a new DevBox project](/docs/guides/fundamentals/create-a-project)
2. Select Node.js as the runtime
3. Configure project resources:
* Set appropriate CPU cores
* Allocate required memory
#### Set Up Development Environment
1. [Connect to Your Development Environment](/docs/guides/fundamentals/develop#connect-to-your-development-environment)
2. Install required dependencies:
```bash
npm init -y
npm install discord.js axios dotenv
```
3. Update `package.json` to enable ES modules:
```json
{
  "type": "module",
  "scripts": {
    "start": "node src/index.js"
  }
}
```
#### Project Structure
Create the following file structure:
```
project/
├── src/
│   ├── index.js
│   └── services/
│       └── aiService.js
├── package.json
└── README.md
```
### 3. Implementing the Bot
#### Basic Bot Setup
Create `src/index.js`:
```javascript title="src/index.js"
import { Client, GatewayIntentBits } from 'discord.js';
import 'dotenv/config';
import aiService from './services/aiService.js';

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent
  ]
});

client.on('ready', () => {
  console.log(`${client.user.tag} is ready!`);
});

client.on('messageCreate', async message => {
  if (message.author.bot) return;

  if (message.content === '!ping') {
    message.reply('Pong!');
  }

  if (message.content.startsWith('!ask ')) {
    const question = message.content.slice(5).trim();
    if (!question) {
      message.reply('Please enter your question after !ask. For example: !ask what is Sealos?');
      return;
    }
    try {
      message.channel.sendTyping();
      const response = await aiService.getChatGPTResponse(question);
      message.reply(response);
    } catch (error) {
      console.error('Error getting AI response:', error);
      message.reply('Sorry, I cannot answer right now. Please try again later.');
    }
  }
});

client.login(process.env.DISCORD_TOKEN);
```
#### AI Service Integration
Create `src/services/aiService.js`:
```javascript title="src/services/aiService.js"
import axios from 'axios';

class AiService {
  constructor() {
    this.openaiAxios = axios.create({
      baseURL: process.env.FASTGPT_BASE_URL,
      headers: {
        'Authorization': `Bearer ${process.env.FASTGPT_API_KEY}`,
        'Content-Type': 'application/json'
      }
    });
  }

  async getChatGPTResponse(message) {
    try {
      const response = await this.openaiAxios.post('/v1/chat/completions', {
        chatId: "session_" + Date.now(),
        stream: false,
        detail: false,
        messages: [{
          role: "user",
          content: message
        }]
      });
      return response.data.choices[0].message.content.trim();
    } catch (error) {
      console.error('AI API call failed:', error);
      return 'Sorry, I cannot answer right now. Please try again later.';
    }
  }
}

export default new AiService();
```
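`getChatGPTResponse` assumes the OpenAI-compatible response shape that FastGPT returns. Here is the extraction step in isolation, with a mock response for illustration (the content value is a placeholder):

```javascript
// Mock of the OpenAI-compatible response wrapper (axios puts the JSON body
// under `data`); the content string is a placeholder.
const response = {
  data: {
    choices: [{ message: { content: '  Sealos is a cloud operating system.  ' } }]
  }
};

// Same extraction as aiService: first choice, message content, whitespace trimmed.
const answer = response.data.choices[0].message.content.trim();
console.log(answer); // Sealos is a cloud operating system.
```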
#### Configuration Setup
Create `.env` file in the project root with your credentials:
```ini title=".env"
DISCORD_TOKEN=your_discord_bot_token
FASTGPT_API_KEY=your_fastgpt_api_key
FASTGPT_BASE_URL=your_fastgpt_base_url
```
Make sure `src/index.js` loads dotenv before the rest of the application (the file created above already does this):
```javascript
import 'dotenv/config';
import { Client, GatewayIntentBits } from 'discord.js';
// ... rest of the existing code ...
```
Start the bot with:
```bash
npm start
```
### 4. FastGPT Integration
FastGPT is an open-source LLM application development platform that allows you to build AI applications with knowledge base capabilities. In this section, we'll walk through the process of setting up FastGPT to power our Discord bot with custom knowledge.
#### Dataset Creation
1. Log into your FastGPT account at [FastGPT](https://tryfastgpt.ai)
2. Navigate to the "Datasets" section and click "Create Dataset"
3. Choose "General Dataset" as the dataset type

4. Select "Text Dataset" for document processing
5. Upload your documentation files (supports PDF, TXT, Word, etc.)

6. Configure chunking settings if needed (default settings work well for most cases)
7. Wait for the training process to complete - this may take several minutes depending on the size of your documentation
#### Application Setup
1. Go to the "Applications" section and create a new FastGPT application
2. In the Flow Editor, add a "Dataset Search" module
3. Connect your trained dataset to the search module
4. Add an "AI Chat" module and connect it to the search module
5. Configure the AI Chat module settings:
* Set temperature (0.7 recommended for balanced responses)
* Adjust max tokens as needed
* Customize the system prompt

#### API Publication
1. Once satisfied with the configuration, click "Publish Channel" to make your application live
2. Go to the "API Request" section of your application

3. Generate new API credentials if you haven't already
4. Save these important details for the next steps:
* API Base URL: The endpoint for your FastGPT API
* API Key: Your authentication token
These credentials will be used in the `.env` file we created earlier to connect our Discord bot to the FastGPT backend.
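Before starting the bot, you can sanity-check the credentials with a small script. This is a hedged sketch that mirrors the request shape in `aiService.js` but uses Node 18+'s built-in `fetch` so it needs no dependencies; the URL and key are placeholders:

```javascript
// Hypothetical credential check; replace the two placeholders with the values
// from your .env file before running.
const FASTGPT_BASE_URL = 'https://your-fastgpt-base-url';
const FASTGPT_API_KEY = 'your_fastgpt_api_key';

// Same payload shape as aiService.getChatGPTResponse.
function buildChatRequest(message) {
  return {
    chatId: 'session_' + Date.now(),
    stream: false,
    detail: false,
    messages: [{ role: 'user', content: message }]
  };
}

async function smokeTest() {
  const res = await fetch(`${FASTGPT_BASE_URL}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${FASTGPT_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(buildChatRequest('ping'))
  });
  const data = await res.json();
  console.log(data.choices[0].message.content);
}

smokeTest().catch((err) => console.error('Credential check failed:', err.message));
```

If the credentials are valid, the script prints a model reply; otherwise it logs the error from the API.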
file: ./content/docs/guides/ai-proxy.en.mdx
meta: {
"title": "AI Proxy",
"description": "AI Proxy 是 Sealos 平台提供的统一 AI 模型调用服务,支持多平台 API Key 管理、统一计费和监控,让开发者轻松接入各类 AI 模型。",
"keywords": [
"AI Proxy",
"Sealos",
"AI模型调用",
"API管理",
"统一计费",
"开发者工具"
]
}
file: ./content/docs/guides/cronjob.en.mdx
meta: {
"title": "定时任务",
"keywords": [
"Sealos",
"定时任务",
"云平台",
"任务调度",
"云服务",
"应用管理",
"在线修改",
"快速部署"
],
"description": "了解如何在 Sealos 云平台上轻松创建和管理定时任务。"
}
file: ./content/docs/guides/object-storage.en.mdx
meta: {
"title": "对象存储",
"keywords": [
"对象存储",
"Sealos云存储",
"S3兼容存储",
"存储桶管理",
"文件上传下载",
"SDK集成",
"静态网站托管",
"访问权限控制",
"Kubernetes存储",
"云原生存储",
"数据安全",
"存储管理"
],
"description": "Sealos 对象存储提供企业级云存储解决方案,支持S3兼容接口、多语言SDK集成和精细权限管理。"
}
file: ./content/docs/k8s/QA.en.mdx
meta: {
"title": "常见问题",
"keywords": [
"Sealos",
"镜像构建",
"Kubernetes运行时",
"版本兼容性",
"文件目录位置"
],
"description": "了解Sealos常见问题及解决方案,包括镜像构建、Kubernetes运行时选择、版本兼容性和文件目录位置调整。"
}
file: ./content/docs/k8s/lifecycle-management.en.mdx
meta: {
"title": "K8s 集群生命周期管理",
"keywords": [
"Kubernetes集群",
"Sealos工具",
"集群生命周期管理",
"分布式应用",
"Kubernetes安装"
],
"description": "使用Sealos工具轻松管理Kubernetes集群生命周期,支持分布式应用和自定义集群镜像,提供高可用性和离线安装功能。"
}
file: ./content/docs/msa/privacy-policy.en.mdx
meta: {
"title": "Privacy Policy"
}
## 1. Introduction
Sealos Cloud ("we," "our," or "us") is committed to protecting your privacy. This Privacy Policy explains how we collect, use, disclose, and safeguard your information when you use our cloud computing services and related websites (collectively, "Services").
## 2. Information We Collect
### 2.1 Personal Information
* Name and contact information (email address, phone number)
* Account credentials
* Billing and payment information
* Professional information (company name, job title)
* Usage data and preferences
### 2.2 Technical Information
* IP addresses and device identifiers
* Browser type and settings
* Operating system information
* Log data and usage statistics
* Performance metrics
### 2.3 Cookies and Similar Technologies
We use cookies and similar tracking technologies to enhance your experience:
* Essential cookies for service functionality
* Analytics cookies for performance monitoring
* Preference cookies for customization
You can manage cookie preferences through your browser settings.
## 3. How We Use Your Information
We process your information for the following purposes:
1. Providing and maintaining our Services
2. Authentication and security
3. Service optimization and personalization
4. Technical support and communication
5. Legal compliance and enforcement
6. Analytics and service improvement
## 4. Data Sharing and Disclosure
We may share your information with:
1. Service providers and partners under strict confidentiality agreements
2. Legal authorities when required by law
3. Third parties with your explicit consent
4. Affiliated companies within our corporate group
We do not sell your personal information to third parties.
## 5. Data Security
We implement industry-standard security measures:
1. Encryption in transit and at rest
2. Access controls and authentication
3. Regular security assessments
4. Incident response procedures
5. Employee training and compliance
## 6. Data Retention
We retain your information for:
1. The duration of your account activity
2. Legal compliance requirements
3. Backup and disaster recovery purposes
4. Dispute resolution if necessary
## 7. Your Privacy Rights
You have the right to:
1. Access your personal information
2. Correct inaccurate data
3. Request deletion of your data
4. Opt-out of certain data processing
5. Data portability
6. Withdraw consent
## 8. International Data Transfers
We may transfer your data internationally. We ensure:
1. Adequate data protection measures
2. Compliance with international privacy laws
3. Standard contractual clauses where required
4. Privacy Shield compliance where applicable
## 9. Children's Privacy
Our Services are not intended for users under 16. We do not knowingly collect information from children. If you believe we have inadvertently collected children's data, please contact us.
## 10. GDPR and CCPA Compliance
We comply with:
* EU General Data Protection Regulation (GDPR)
* California Consumer Privacy Act (CCPA)
* Other applicable privacy laws
## 11. Changes to This Policy
We may update this Privacy Policy periodically. We will notify you of material changes through:
1. Email notifications
2. Service announcements
3. Website notices
## 12. Contact Information
For privacy-related inquiries:
* Email: [privacy@sealos.io](mailto:privacy@sealos.io)
* Data Protection Officer: [dpo@sealos.io](mailto:dpo@sealos.io)
For immediate assistance: [fanux@sealos.io](mailto:fanux@sealos.io)
file: ./content/docs/msa/terms-of-service.en.mdx
meta: {
"title": "Terms of Service"
}
These Sealos Cloud Terms of Service (this "Agreement") are entered into by and between you and Labring Computing Co., LTD. (referred to as "we/us" or "the Company") regarding the use of our cloud services (hereafter referred to as "the Service"). Please read this Agreement carefully, particularly provisions regarding limitations of liability, restrictions on your rights, and dispute resolution procedures. If you do not accept any terms of this Agreement, please refrain from using the Service.
**Article 1: Scope of Services**
1. We provide cloud computing services including but not limited to:
* Compute resources and virtual machines
* Storage solutions
* Network services
* Database services
* Container orchestration
* Application deployment platforms
2. Technical support is provided according to your service tier level.
3. We maintain the right to modify, enhance, or discontinue any aspect of the Service with appropriate notice.
**Article 2: Service Level Agreement**
1. We commit to maintaining the following service levels:
* Monthly uptime percentage: 99.9%
* Response time for critical issues: Within 1 hour
* Resolution time for critical issues: Within 4 hours
2. Service credits will be provided for failure to meet these commitments.
3. Scheduled maintenance windows are excluded from uptime calculations.
**Article 3: Account Management**
1. You must register an account to use the Service, providing accurate and current information.
2. You are responsible for:
* Maintaining account security
* Protecting access credentials
* All activities under your account
3. Notify us immediately of any unauthorized account access.
4. We reserve the right to suspend accounts for security violations.
**Article 4: Acceptable Use Policy**
1. Prohibited activities include:
* Illegal content distribution
* Unauthorized access attempts
* Network abuse or disruption
* Malware distribution
* Cryptocurrency mining without authorization
2. Resource usage must comply with our fair use policies.
3. API usage is subject to rate limiting and quotas.
**Article 5: Data Protection and Privacy**
1. We process personal data in accordance with our Privacy Policy and applicable laws.
2. Data protection measures include:
* Encryption in transit and at rest
* Access controls and authentication
* Regular security assessments
3. You retain ownership of your data.
4. We maintain appropriate technical safeguards for data security.
**Article 6: Fees and Payment**
1. Service fees are based on:
* Resource usage
* Selected service tier
* Additional features or support
2. Payment terms:
* Invoices are generated monthly
* Payment is due within 30 days
* Late payments may result in service suspension
3. Price changes will be announced 30 days in advance.
**Article 7: Intellectual Property**
1. We retain all rights to:
* Service infrastructure
* APIs and documentation
* Platform features and improvements
2. You retain rights to:
* Your applications and content
* Custom configurations
* Your data and metadata
**Article 8: Limitation of Liability**
1. Our liability is limited to:
* Direct damages up to fees paid
* Excludes indirect or consequential damages
2. Force majeure events exclude liability, including:
* Natural disasters
* Network outages beyond our control
* Government actions
* Security incidents despite reasonable precautions
**Article 9: Term and Termination**
1. This Agreement remains valid until terminated.
2. Termination conditions:
* By you: With 30 days notice
* By us: For violations or non-payment
3. Post-termination:
* Data retrieval period: 30 days
* Final billing resolution
* Return of confidential information
**Article 10: Export Compliance**
1. Services are subject to export laws and regulations.
2. You must comply with all applicable export restrictions.
3. Prohibited users or regions are excluded from service.
**Article 11: General Provisions**
1. Governed by the laws of the United States and the laws of the State of Delaware, without regard to its conflict of laws provisions.
2. Dispute resolution:
* Initial good faith negotiation
* Mediation if necessary
* Litigation as last resort
3. Severability: Invalid terms do not affect other provisions
4. Entire agreement: Supersedes prior agreements
5. Modifications: We may update these terms with notice
For questions about these terms, contact us at [legal@sealos.io](mailto:legal@sealos.io).
file: ./content/docs/overview/about-sealos.en.mdx
meta: {
"title": "About Sealos",
"keywords": [
"cloud operating system",
"Kubernetes",
"Sealos",
"cloud-native",
"distributed applications",
"private cloud",
"public cloud",
"high-availability",
"cluster imaging",
"app store"
],
"description": "Sealos is a Kubernetes-based cloud operating system that deploys, manages, and scales your applications in seconds, not minutes, not hours."
}
Sealos is the cloud OS for deploying, managing and scaling your applications in seconds, not minutes, not hours. By embracing a cloud-native approach, it abandons traditional cloud architectures in favor of a new Kubernetes-based model, allowing businesses to use the cloud **as easily as they use their own PCs**.
With Sealos, users can deploy any highly available distributed application on Kubernetes with just one click, much like installing software on a PC, and with minimal operational overhead. Through its powerful and flexible app store functionality, Sealos caters to the diverse needs of a wide range of users.

## Use Cases & Advantages
Sealos is a robust application runtime platform that excels in supporting a wide range of applications - including Java, Go, Python, PHP, and more - without any programming language limitations. This platform provides a stable runtime environment for applications while seamlessly resolving backend dependency issues such as databases, object storage, and message queues. In addition to these core features, it efficiently handles various operational aspects including application configuration management, service discovery, public network exposure, and auto-scaling.
#### Capabilities of Sealos
* 🛠️ **All-in-one Development Environment**: An all-in-one platform for integrated online development, testing, and production. Create environments and database dependencies with a single click. Develop locally with your IDE while setup is streamlined and applications are deployed automatically.
* 🚀 **Application Management**: Easy management and quick release of publicly accessible distributed applications in the
app store.
* 🗄️ **Database Management**: Create high-availability databases in seconds, offering support for MySQL, PostgreSQL,
MongoDB, and Redis.
* 💾 **Object Storage**: Secure cloud data migration with built-in redundancy and disaster recovery. Seamlessly integrates with multi-language SDKs.
#### Advantages of Sealos
* 💰 **Efficient & Economical**: Pay solely for the containers you utilize; automatic scaling prevents resource
squandering and substantially reduces costs.
* 🌐 **High Universality & Ease of Use**: Concentrate on your core business activities without worrying about system
complexities; negligible learning costs involved.
* 🛡️ **Agility & Security**: The distinctive multi-tenancy sharing model ensures both effective resource segmentation
and collaboration, all under a secure framework.
file: ./content/docs/overview/intro.en.mdx
meta: {
"title": "About Sealos DevBox",
"keywords": [
"cloud development environment",
"Kubernetes-based platform",
"instant collaborative environments",
"continuous delivery",
"environment isolation",
"cloud-native infrastructure",
"DevBox",
"Sealos",
"integrated development platform",
"seamless deployment"
],
"description": "A platform for instant collaborative development, seamless deployment, and strict environment isolation. Streamline your workflow with our all-in-one solution."
}
Sealos DevBox is an all-in-one platform designed for integrated online development, testing, and production. It offers a seamless solution for creating environments and database dependencies with just a single click. This innovative platform allows developers to work locally using their preferred IDEs while streamlining setup processes and enabling automatic application deployment.

## Key Features and Advantages
### Instant collaborative environments
Sealos DevBox provides quick and easy setup of development environments for a wide range of programming languages and frameworks, including less common ones. This feature enables teams to start collaborating instantly, regardless of the technology stack they're using.
### Cloud development environment
One of the primary advantages of Sealos DevBox is its ability to eliminate environment inconsistencies. By offering a unified cloud platform, it allows teams to share code, configurations, and test data effortlessly. This streamlined approach accelerates development processes, enhances efficiency, and promotes seamless collaboration within a single, harmonious environment.
### Headless development experience
Sealos DevBox simplifies the development process by unifying development, testing, and production environments. It automates environment creation and integrates smoothly with local IDEs, providing a hassle-free setup experience for developers.
### Effortless continuous delivery
With Sealos DevBox, teams can deliver applications smoothly without requiring expertise in Docker or Kubernetes. Developers simply need to specify the version, and DevBox handles all the complex tasks, including building containers.
### Strict environment isolation
Sealos DevBox offers isolated development environments, helping teams avoid dependency conflicts. Each project can have its own consistent and reproducible workspace, allowing developers to focus on relevant tasks without worrying about environmental inconsistencies.
### Access from any network
Sealos DevBox provides access to applications from both internal networks and the Internet, with automatic TLS configuration. This feature ensures secure and flexible development capabilities, allowing teams to work from any network, anywhere in the world.
file: ./content/docs/self-hosting/faq.en.mdx
meta: {
"title": "常见问题",
"keywords": [
"Sealos Cloud",
"常见问题",
"部署问题",
"证书更新",
"域名更换",
"用户注册",
"集群管理",
"Kubernetes"
],
"description": "探索Sealos Cloud部署和使用中的常见问题解决方案,包括系统配置、证书更新、域名更换等关键操作指南,助您轻松管理Kubernetes集群。"
}
file: ./content/docs/self-hosting/install.en.mdx
meta: {
"title": "Sealos 集群部署",
"description": "了解如何使用 Sealos 一键部署 Kubernetes 集群,支持多种安装方式,确保通信安全,适用于大规模集群和企业生产环境。"
}
file: ./content/docs/system-design/billing-system.en.mdx
meta: {
"title": "计费系统",
"keywords": [
"Sealos 计费系统",
"云成本管理方案",
"Kubernetes 资源计费",
"容器计费架构",
"云原生计费设计",
"实时资源计量",
"自动费用计算",
"精准账单系统",
"跨集群成本优化",
"FinOps 实践",
"云资源计价模型",
"分布式计费方案"
],
"description": "深度剖析 Sealos 云操作系统的计费系统架构设计,详解基于 CRD 的实时资源计量模型、多维度费用计算算法及分布式账单生成机制,涵盖CPU/内存/存储/网络等全资源类型的精准计价策略,提供 Kubernetes 集群成本优化最佳实践与 FinOps 落地指南。"
}
file: ./content/docs/system-design/devbox-architecture.en.mdx
meta: {
"title": "Architecture"
}
Sealos Devbox is an advanced development environment solution that leverages cloud-native container and Kubernetes technologies to offer a unified and flexible development runtime. Its key feature is providing a traditional virtual-machine-like experience while retaining the benefits of containerization.
## Architecture
Sealos Devbox is built on a layered architecture, comprising these key components:

## Control Flow
Devbox implements a decoupled front-end and back-end design:
1. Users trigger actions via the web interface or plugins
2. Devbox Controller receives and processes these requests
3. Controller translates the processed instructions into Kubernetes API calls
4. Kubernetes executes the corresponding Container operations

## State Persistence Mechanism
Devbox employs an intelligent state preservation system to maintain user environment consistency:
### Automatic Saving
* System auto-saves user environment changes under specific conditions
* Changes are packaged as image layers
* New layers are appended to the base image as commits
* Updated images are securely stored in an isolated internal registry
### Environment Recovery
* On subsequent startups, system boots from the most recent image
* Ensures full preservation of historical changes
### Optimization
The Container-shim layer provides automated maintenance:
* Regularly merges image layers to optimize storage
* Automatically cleans up redundant data
* Keeps image size and layer count optimized for best system performance

file: ./content/docs/system-design/monitor-system.en.mdx
meta: {
"title": "监控告警系统",
"keywords": [
"Sealos 监控系统",
"云原生告警体系",
"Kubernetes 监控方案",
"容器集群监控",
"VictoriaMetrics 实战",
"Loki 日志管理",
"Grafana 可视化",
"PrometheusAlert 集成",
"实时指标监控",
"日志聚合系统",
"多平台告警推送",
"运维自动化",
"分布式系统监控",
"可观测性最佳实践"
],
"description": "深度解析 Sealos 云原生监控告警系统架构设计,详解基于 VictoriaMetrics 的实时指标采集方案与 Loki 日志管理系统的集成实践,涵盖多可用区高可用部署、Grafana 可视化配置、飞书/微信多平台告警推送等核心功能,提供云原生环境下的全栈监控与日志分析最佳实践。"
}
file: ./content/docs/system-design/system-application.en.mdx
meta: {
"title": "系统应用",
"keywords": [
"Sealos系统应用架构",
"云原生桌面系统设计",
"Kubernetes CRD 应用管理",
"分布式费用中心实现",
"定时任务调度系统",
"KubeBlocks 数据库集成",
"多租户对象存储方案",
"云原生应用开发平台",
"微服务模块化设计",
"系统组件交互协议",
"容器化应用生命周期管理",
"可观测性系统集成"
],
"description": "主要介绍 Sealos 当前各个系统应用的基本原理与实现。"
}
file: ./content/docs/system-design/system-architecture.en.mdx
meta: {
"title": "系统架构",
"description": "深度解析 Sealos 云操作系统的分层架构设计与核心组件实现。"
}
file: ./content/docs/system-design/user-system.en.mdx
meta: {
"title": "用户系统",
"description": "深度解析 Sealos 云操作系统的分层架构设计与核心组件实现。"
}
file: ./content/docs/guides/app-launchpad/add-a-domain.en.mdx
meta: {
"title": "Add a Domain",
"keywords": [
"custom domain",
"Sealos",
"app deployment",
"domain provider",
"public access"
],
"description": "Learn how to assign a custom domain to your project using Sealos for a tailored brand experience. Follow our step-by-step guide for seamless app deployment."
}
import { AppDashboardLink } from '@/components/docs/Links';
Assigning a custom domain to your project guarantees that visitors to your application will have a tailored experience
that aligns with your brand.
## When Deploying
Just enable "Public Access" when you're deploying, and Sealos will sort you out with a domain.

Now, on your domain provider's end, add a CNAME record pointing to the domain Sealos provided.
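On the provider's side this is an ordinary CNAME entry; in zone-file notation it looks roughly like the following sketch (the domain and target below are placeholders, not real Sealos addresses — use the exact target shown in the Sealos console):

```text
; Point the custom domain at the address Sealos provided (values are illustrative).
app.example.com.    3600    IN    CNAME    your-app.sealos.example.
```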
Once it's active, jump back to Sealos, click on "Custom Domain" to the side:

Enter your custom domain in the pop-up box and click confirm.

To wrap up, click the "Deploy" button. Once your app's live, click on the external address to access the app via the
custom domain.
## Post Deployment
For the apps you've deployed, just click "Update" top-right on the app details page. Then, follow the earlier steps to
integrate your custom domain.
file: ./content/docs/guides/app-launchpad/autoscaling.en.mdx
meta: {
"title": "Autoscaling",
"keywords": [
"Sealos",
"Autoscaling",
"Horizontal Pod Autoscaler",
"HPA",
"application scaling",
"CPU usage",
"memory usage",
"cloud scaling",
"Sealos controller",
"performance monitoring"
],
"description": "Learn how Sealos' Autoscaling feature dynamically adjusts application instances based on CPU and memory usage, ensuring optimal performance and resource utilization."
}
import { AppDashboardLink } from '@/components/docs/Links';
In <AppDashboardLink />, the "[App Launchpad](/guides/applaunchpad/applaunchpad.md)" feature enables the
automatic adjustment of application instance numbers to effectively respond to varying load conditions. This
functionality is known as "**Autoscaling**," or more technically, the Horizontal Pod Autoscaler (HPA).
Autoscaling operates by dynamically altering the count of application instances based on specific metrics like CPU and
memory usage. This ensures that the applications run efficiently and resources are optimally utilized.
> Key Point: In Autoscaling, "usage" typically refers to the average use across all instances of an application. For
> example, if an app runs on two instances, its average CPU usage is calculated as the mean of the usage of these two
> instances.
The workings of Autoscaling are as follows:
1. **Monitoring**: It continuously monitors crucial performance indicators like CPU and memory usage.
2. **Decision Making**: Based on predefined thresholds (e.g., maintaining CPU usage below 50%), it calculates the
required adjustments in the instance count.
3. **Adjustment**: Following this, Autoscaling automatically instructs the Sealos controller to modify the number of
instances, ensuring the usage stays within the desired range.
For instance, if we set up an application with specific Autoscaling rules such as a maximum CPU usage of 50% and the
ability for instance numbers to vary between 1 and 5, Autoscaling will:
* Increase the number of instances when the average CPU usage exceeds 50%, up to a maximum of 5.
* Decrease the number of instances when the average CPU usage drops below 50%, but always maintain at least one instance
in operation.
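The decision step above follows the standard Kubernetes HPA rule: the desired instance count is the current count scaled by the ratio of observed to target usage, rounded up and clamped to the configured bounds. A minimal Python sketch (function and variable names are illustrative):

```python
import math

def desired_replicas(current_replicas, current_avg_usage, target_usage,
                     min_replicas=1, max_replicas=5):
    """Kubernetes HPA rule: ceil(current * observed / target), clamped to bounds."""
    desired = math.ceil(current_replicas * current_avg_usage / target_usage)
    return max(min_replicas, min(max_replicas, desired))

# Average CPU at 80% with a 50% target: scale 2 instances up to 4.
print(desired_replicas(2, 80, 50))  # 4
# Average CPU at 20%: scale 4 instances down, but never below the minimum.
print(desired_replicas(4, 20, 50))  # 2
```

Note that because the metric is the average across all instances, adding instances lowers the average, which is what eventually stops further scale-up.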

file: ./content/docs/guides/app-launchpad/configmap.en.mdx
meta: {
"title": "ConfigMap",
"keywords": [
"Sealos",
"configuration files",
"Nginx container",
"environment variables",
"application settings"
],
"description": "Learn how to effectively manage application settings in Sealos using configuration files, including Nginx container setup and environment variables."
}
import { AppDashboardLink } from '@/components/docs/Links';
In <AppDashboardLink />, the implementation of configuration files plays a vital role, particularly when
the application deals with numerous or complex configurations. Differing from environment variables, configuration files
are a more versatile and dependable means for managing settings. Environment variables are more apt for simple,
small-scale configuration tasks.
The primary strength of configuration files is their capability to hold and control elaborate configuration data,
including aspects like configuration files, command-line arguments, and environment variables. These pieces of data can
be incorporated into the container upon the launch of the application container, facilitating adjustments to the
application's functionalities without the necessity of recompiling the image.
Take, for example, the Nginx container. The utilization of configuration files in this context can be described as
follows:
* **Filename**: This pertains to a file within the Nginx container, for which references can be drawn from the
instructions provided by the image supplier.
* **File Value**: This is the content corresponding to the file. In cases where the content is elaborate, it's
recommended to complete editing it offline and then paste it into the specified location.
* **Key Points**: The approach involves mounting an individual file, not an entire directory. It is imperative to
precisely identify the file to be mounted, rather than just a directory path.
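In plain Kubernetes terms, mounting an individual file rather than a directory is typically done with a ConfigMap and a `subPath` volume mount. The fragment below is an illustrative sketch, not the exact manifest Sealos generates:

```yaml
# ConfigMap holding the file content (names and content are illustrative).
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    worker_processes 1;
---
# Pod spec fragment: subPath mounts the single file, not a directory.
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
  volumes:
    - name: conf
      configMap:
        name: nginx-conf
```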

file: ./content/docs/guides/app-launchpad/create-app.en.mdx
meta: {
"title": "安装应用",
"keywords": [
"Sealos",
"应用管理",
"云操作系统",
"Nginx安装",
"快速部署",
"外网访问",
"持久化存储",
"应用详情",
"系统自带App"
],
"description": "了解如何在 Sealos 云操作系统中使用【应用管理】功能快速安装应用。本指南详细介绍了部署过程、外网访问设置和数据持久化配置。"
}
file: ./content/docs/guides/app-launchpad/custom-certificates.en.mdx
meta: {
"title": "Custom Certificates",
"keywords": [
"custom domain certificate",
"cert-manager",
"Kubernetes secret",
"ingress patch",
"App Launchpad"
],
"description": "Learn how to manually set up a custom domain certificate in App Launchpad using Kubernetes secrets and ingress patching."
}
If you have successfully set up a custom domain in "App Launchpad" but cannot access the domain, and the browser warns
that the certificate is not secure, this is because cert-manager did not successfully issue the certificate. To resolve
this issue, we can manually set up the certificate.
First, make sure you have successfully set up CNAME in your cloud provider and have downloaded the certificate
corresponding to your domain.
Open "App Launchpad", set a custom domain.
Open the "Terminal", and execute the following commands in sequence.
```bash
# Create tls.crt using the certificate file information (replace xxxx with the actual certificate file information).
cat > tls.crt <<EOF
xxxx
EOF

# Create tls.key using the private key file information (replace xxxx with the actual private key information).
cat > tls.key <<EOF
xxxx
EOF
```
file: ./content/docs/guides/app-launchpad/environment.en.mdx
meta: {
"title": "Environment"
}
import { AppDashboardLink } from '@/components/docs/Links';
In <AppDashboardLink />, environment variables are pivotal in managing the configuration data for container
applications. These variables enable the provision of essential configuration information to applications without
necessitating changes to the application's code or image, thus bolstering their maintainability and scalability.
The process of defining environment variables in the "[App Launchpad](/guides/applaunchpad/applaunchpad.md)" interface
is streamlined through a bulk input method. Users can define multiple variables by entering them line by line. Each
variable comprises a key and a value, separated by either an equal sign (=) or a colon (:). The interface is designed to
automatically eliminate any invalid characters from the key, ensuring the accuracy and validity of the environment
variables.

**Environment Variable Formats That Are Correctly Interpreted:**
```shell
host=127.0.0.1
port:3000
name: sealos
- username=123
- password:123
# Comments like this line are ignored, as they don't include an equal sign (=) or a colon (:), which are the key markers.
```
**Environment Variable Formats That Cannot Be Interpreted:**
```shell
host=127.0.0.1 # Because this line contains an equal sign (=), it is parsed as a variable, and the trailing comment is absorbed into the value, producing an incorrect result.
```
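The parsing rules illustrated above can be sketched as a small routine that splits each line on the first equal sign or colon, tolerates a leading list dash, and strips invalid characters from the key. This is a simplified illustration, not the platform's actual implementation:

```python
import re

def parse_env_lines(text):
    """Parse bulk environment-variable input, one variable per line."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        # Leading list dashes ("- ") are tolerated.
        if line.startswith("- "):
            line = line[2:]
        # Lines without '=' or ':' (e.g. plain comment lines) are ignored.
        m = re.match(r"^([^=:]+)[=:](.*)$", line)
        if not m:
            continue
        # Drop characters that are invalid in a variable name from the key.
        key = re.sub(r"[^A-Za-z0-9_.-]", "", m.group(1).strip())
        if key:
            env[key] = m.group(2).strip()
    return env
```

Feeding it the "correctly interpreted" examples above yields `host`, `port`, `name`, `username`, and `password` as expected, while a plain comment line is skipped.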
file: ./content/docs/guides/app-launchpad/expose-multiple-ports.en.mdx
meta: {
"title": "Expose Multiple Ports",
"keywords": [
"multi-port application",
"expose multiple ports",
"Sealos platform",
"app launchpad",
"network configuration"
],
"description": "Learn how to expose multiple ports for complex applications using Sealos App Launchpad. Simplify network configuration and enhance accessibility."
}
import { AppDashboardLink } from '@/components/docs/Links';
In complex application environments, it's common for services to expose multiple ports simultaneously to cater to
diverse needs. These requirements can emerge from various scenarios:
* **Multi-protocol support**: For example, an application might support both HTTP and HTTPS, necessitating the exposure
of both ports 80 and 443.
* **Multi-functional application**: An application might have a web service and an admin service, both of which listen
on different ports.
* **Compatibility considerations**: To remain compatible with older versions or other services, you might need to expose
ports for both the new and old interfaces.
* **Combined Database + App**: For instance, if you have an application and a database within the same Pod, you might
need to expose ports for both the application and the database.
* **Prometheus Monitoring and App Service**: If your application has a business port and another port for Prometheus
monitoring via `/metrics`, you might need to expose both.
* **Coexistence of GRPC and RESTful services**: If your application offers both GRPC and RESTful services, you might
need to expose separate ports for each type of service.
When deploying applications using "[App Launchpad](/guides/applaunchpad/applaunchpad.md)"
on <AppDashboardLink />, you can easily choose to expose multiple ports. During the deployment process,
users simply click on the "Network" option and then select "Add Port" to configure multiple ports.

Furthermore, the Sealos platform offers external access to these ports. Once exposed to the public network, each port
will be assigned a unique sub-domain, facilitating easier remote access and management.
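Under the hood, such a setup corresponds to a Kubernetes Service listing several port entries. The sketch below is illustrative (names and port numbers are placeholders, not what Sealos generates verbatim):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - name: http        # web traffic
      port: 80
      targetPort: 8080
    - name: metrics     # Prometheus scrape endpoint
      port: 9090
      targetPort: 9090
    - name: grpc        # gRPC API
      port: 50051
      targetPort: 50051
```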
file: ./content/docs/guides/app-launchpad/index.en.mdx
meta: {
"title": "Deployments",
"keywords": [
"App Launchpad",
"Sealos",
"application deployment",
"private images",
"horizontal pod autoscaling"
],
"description": "App Launchpad in Sealos simplifies application deployment with features like private images, HPA, custom domains, and real-time monitoring."
}
**App Launchpad** is a feature within Sealos that serves as a single-image deployment tool. Its main goal is to
streamline and expedite the process of deploying applications, allowing you to launch your application in as little as 5
minutes.
The tool currently boasts a range of functionalities:
* Capability to deploy applications using private images.
* Flexibility to tailor CPU and memory resources according to the specific needs of the application.
* Support for deploying multiple replicas.
* Horizontal Pod Autoscaling (HPA) for dynamic scaling.
* Provision of external URLs for easy access from the public network.
* Option to assign a custom domain to applications, enhancing both brand visibility and the user experience.
* Utilization of ConfigMap for configuration file management.
* Persistent storage solutions for application data, ensuring both its security and continuity.
* Real-time monitoring features for applications and Pods to facilitate prompt issue detection and resolution.
* Comprehensive logging of application activities, aiding in troubleshooting and performance optimization.
* Analysis of system events (Events) to extract critical insights for enhancing application performance.
* A convenient one-click feature to access the container terminal, simplifying management and debugging tasks.
* Ability to expose several ports of an application to the external network.
## [Quick Start](./use-app-launchpad.md)
For quick and easy installation of commonly utilized applications.
## [Update Application](./update-app.md)
Guidance on modifying application configurations after initial deployment.
## [Add a domain](./add-domain.md)
Instructions for integrating a custom domain with your application.
## [Exposing Multiple Ports](./expose-multi-ports.md)
Details on how to make multiple ports of an application accessible externally.
## [Environment](./environment.md)
Directions for configuring applications through the use of environment variables.
## [ConfigMap](./configmap.md)
Guidelines for setting up application configurations via configuration files.
## [Autoscaling](./autoscale.md)
Strategy for autoscaling the number of application instances in response to varying workloads.
## [Persistent Volume](./persistent-volume.md)
Utilizing persistent storage for the long-term preservation of data.
file: ./content/docs/guides/app-launchpad/persistent-volume.en.mdx
meta: {
"title": "Persistent Volume",
"keywords": [
"persistent storage",
"data persistence",
"container storage",
"Sealos",
"external storage",
"container deployment",
"data retention",
"Nextcloud",
"application container"
],
"description": "Ensure data persistence in Sealos with external storage solutions, maintaining data continuity even through container restarts or redeployments."
}
import { AppDashboardLink } from '@/components/docs/Links';
<AppDashboardLink /> offers a flexible environment where containers can be
effortlessly created and destroyed. This flexibility is advantageous for
application deployment and management, but it also raises the issue of
maintaining data persistence. In scenarios where data is stored within an
application container, its destruction leads to the loss of all stored data.
To counter this problem, the use of persistent storage is essential. Persistent storage ensures that data is stored
externally, thereby preserving it even through container restarts or redeployments. This is particularly vital for
applications requiring data retention, like databases, file storage systems, or any services involving user data.
For instance, in deploying Nextcloud, all data associated with its container is located in the `/var/www/html`
directory. To maintain data continuity, it's necessary to use external storage solutions for persisting data in this
directory.
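In plain Kubernetes terms, persisting that directory means mounting a volume at the path. The fragment below is an illustrative sketch, not the exact manifest Sealos generates:

```yaml
spec:
  containers:
    - name: nextcloud
      image: nextcloud
      volumeMounts:
        - name: data
          mountPath: /var/www/html   # all Nextcloud data lives here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nextcloud-data    # backed by external persistent storage
```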

file: ./content/docs/guides/app-launchpad/update-apps.en.mdx
meta: {
"title": "Update Apps",
"keywords": [
"Sealos",
"update application",
"app settings",
"cloud platform",
"deploy app"
],
"description": "Update your application settings on Sealos easily. Click the \"Update\" button on the app details page to make changes anytime."
}
import { AppDashboardLink } from '@/components/docs/Links';
After deploying an application on <AppDashboardLink />, you can update the application settings at any
time. Click the "Update" button at the top-right of the app details page, make your changes, and then click
"Update" again at the top-right to apply them.
file: ./content/docs/guides/app-store/index.en.mdx
meta: {
"title": "应用商店",
"description": "应用商店是 Sealos 云操作系统中的一个功能,用于管理应用的安装、更新、删除等操作。"
}
file: ./content/docs/guides/databases/database-migration-guide.en.mdx
meta: {
"title": "Database Migration Guide",
"description": "Your first document"
}
file: ./content/docs/guides/fundamentals/create-a-project.en.mdx
meta: {
"title": "Create a Project",
"description": "Learn how to create a new project using Sealos DevBox"
}
import { AppDashboardLink } from '@/components/docs/Links';
This guide will walk you through the steps to create a new project using Sealos DevBox.
## Access Sealos DevBox
1. Navigate to <AppDashboardLink /> in your web browser.
2. Locate and click on the "DevBox" icon.
## Create a New Project
Click on the "Create New Project" button
This will open the project creation interface where you can configure your new project.
Configure the Runtime
In the "Runtime" section:
* Choose the development framework or language for your project.
* Use the sliders to set the CPU cores and memory for your project. Adjust these based on your project's requirements.

Configure Network Settings
Scroll down to the "Network" section to set up your project's network configuration:

* **Container Port**:
* Enter the main port your application will use.
* If you need additional ports, click the "Add Port" button and specify them.
* **Enable Internet Access**:
* Toggle the switch to enable internet access for your DevBox. This allows external users to access your application through the public internet using the provided domain.
* **Domain**:
* By default, Sealos provides a subdomain for your application.
* If you want to use a custom domain, click on "Custom Domain" and follow the instructions to set it up.
Create Your Project
After configuring all the settings, click on the "Create" button to create your project.
## What Happens Next?
After creating your project, Sealos DevBox will set up the development environment based on your configurations. This process includes:
1. Provisioning the necessary resources (CPU, memory) for your project.
2. Setting up the chosen framework or language environment.
3. Configuring the network settings and domain.
Once the setup is complete, your project will appear in the DevBox List, where you can manage and access it.
## Next Steps
After creating your project, you're ready to start developing. The next step is to connect to your project using an IDE and begin building your application. Refer to the "Develop" guide for detailed instructions on how to connect and start coding.
file: ./content/docs/guides/fundamentals/deploy.en.mdx
meta: {
"title": "Deploy",
"description": "Learn how to deploy your application using Sealos DevBox and App Launchpad"
}
import { AppDashboardLink } from '@/components/docs/Links';
After releasing your application as an OCI image, the next step is to deploy it to <AppDashboardLink /> for production use. This guide will walk you through the deployment process using App Launchpad.
## Initiate Deployment
Access Project Details
* Go to the Sealos DevBox List in your <AppDashboardLink />.
* Find your project and click on the "Detail" button on the right side of your project's row.
Select Release Version
In your project's details page:
* Locate the "Version" section.
* Find the release you want to deploy.
* Click the "Deploy" button in the "Operation" column next to your chosen release.
Navigate to App Launchpad
Clicking "Deploy" will redirect you to the App Launchpad application within Sealos.

## Configure Deployment Settings
In the App Launchpad application, you'll need to configure your application settings:
Set Resource Limits
Configure the resources for your application:
* CPU allocation
* Memory allocation
* Storage requirements
Configure Environment Variables
If your application requires specific environment variables:
* Add each variable with its corresponding value.
* Ensure sensitive information is properly secured.
Set Up Volumes (if needed)
If your application requires persistent storage:
* Configure any necessary volumes.
* Specify mount paths for these volumes.
Network Configuration
Set up your application's network configuration:
* Specify the container port your application listens on.
* Configure any additional ports if required.
Review Settings
Carefully review all the settings you've configured to ensure they match your application's requirements.

## Deploy Your Application
Initiate Deployment
Once you've configured all necessary settings, click the "Deploy Application" button in the top right corner to start the deployment process.
Monitor Deployment Progress
You'll be taken to the application details view within App Launchpad. Here, you can monitor the deployment progress.
Verify Deployment
Once the status changes to "Running", your application is successfully deployed.
Access Your Deployed Application
* Look for the "Public Address" in the application details.
* Click on this address to open your deployed application in a new browser tab.

Remember that you can always update your application by creating a new release
in DevBox and repeating this deployment process with the new version using App
Launchpad.
## Conclusion
Congratulations! You've now completed the fundamental workflow of developing and deploying an application using Sealos DevBox. Let's recap the key steps we've covered:
1. **Create a Project**: We started by setting up a new project in Sealos DevBox, configuring the runtime and network settings.
2. **Develop**: We connected to our development environment using Cursor IDE and wrote our application code.
3. **Release**: We prepared our application for release, created an OCI image, and versioned our project.
4. **Deploy**: Finally, we deployed our application to Sealos Cloud using App Launchpad, making it accessible to users.
This workflow demonstrates the power and flexibility of Sealos DevBox in streamlining the entire development process, from initial setup to final deployment. By leveraging cloud-based development environments and containerization, Sealos DevBox enables developers to create, test, and deploy applications more efficiently than ever before.
As you continue to use Sealos DevBox, you'll discover more advanced features and optimizations that can further enhance your development workflow.
Happy coding, and may your deployments always be smooth and successful!
file: ./content/docs/guides/fundamentals/develop.en.mdx
meta: {
"title": "Develop",
"description": "Learn how to develop your project using Sealos DevBox and Cursor IDE"
}
import { AppDashboardLink } from '@/components/docs/Links';
After creating your project in Sealos DevBox, you're ready to start development. This guide will walk you through the process of connecting to your development environment using Cursor IDE and running your application.
## Connect to Your Development Environment
Access the DevBox List
Navigate to the Sealos DevBox List in your <AppDashboardLink />.
Connect with Cursor IDE
* Find your project in the DevBox List.
* In the "Operation" column, click on the dropdown arrow next to the VSCode icon.
* From the dropdown menu, select the "Cursor" option.
Install the DevBox Plugin
* When you click on "Cursor", it will launch the Cursor IDE application on your local machine.
* A popup window will appear in Cursor, prompting you to install the DevBox plugin.
* Follow the instructions in the Cursor popup to install the DevBox plugin.
* Once installed, Cursor will establish a remote connection to your DevBox runtime.
You can switch between different IDE options (VSCode, Cursor, or VSCode
Insiders) at any time by using the dropdown menu in the "Operation" column.
## Develop
Once connected, you'll be able to access and edit your project files directly within the Cursor IDE environment.

This remote connection offers several benefits:
* Your code runs in the DevBox runtime, ensuring consistency across development and production environments.
* You can access your project from anywhere, on any device with Cursor installed.
* Collaboration becomes easier as team members can connect to the same DevBox runtime.
## Run Your Application
Open the Terminal
Open the terminal within Cursor IDE.
Navigate to Your Project Directory
If you're not already there, navigate to your project directory.
Start Your Development Server
Run the appropriate command to start your development server. For example, if you're using Next.js:
```bash
npm run dev
```
This command will start your application in development mode.
## Access Your Running Application
Return to the Sealos DevBox List
Go back to the Sealos DevBox List in your browser.
Access Project Details
Find the project you just created and click on the "Detail" button on the right side of your project's row.
Find the External Address
In the project details page:
* Look for the "Network" section.
* You'll see an "External Address" field.
* Click on this external address.

View Your Application
This will open your application in a new browser tab, allowing you to view and interact with your running service.

## Next Steps
As you develop your project, you'll eventually want to release and deploy it. Check out the "Release" and "Deploy" guides for information on these next steps in your project's lifecycle.
file: ./content/docs/guides/fundamentals/entrypoint-sh.en.mdx
meta: {
"title": "Entry Point",
"description": "Learn how to configure startup commands for your DevBox project"
}
In Sealos DevBox, `entrypoint.sh` is a special script file that defines how your application starts after deployment. This guide will help you understand how to properly configure this file to ensure your application starts and runs correctly in the deployment environment.
## What is entrypoint.sh
In a DevBox project, `entrypoint.sh` is your application's entry point that:
* Defines the startup command for your application
* Executes automatically after application deployment
* Ensures your application starts correctly
* Serves as a key component when publishing your project as an OCI image
`entrypoint.sh` should only be responsible for starting your application, not building it. All build steps should be completed in the development environment.
## Building in the Development Environment
Before configuring `entrypoint.sh`, complete your application build in the development environment. This approach:
* Reduces application startup time
* Avoids installing build dependencies in production
* Ensures you're deploying verified build artifacts
Build Your Application
Execute the build in your development environment:
```bash
# Next.js app
npm run build # generates .next directory
# TypeScript app
npm run build # generates dist directory
# Go app
go build -o main # generates executable
```
Verify Build Results
Confirm your build artifacts were generated correctly:
```bash
# Check build directory
ls -l dist/ # or .next/, build/, etc.
# Test build artifacts
node dist/main.js # or other startup command
```
Configure Startup Command
Configure `entrypoint.sh` based on your build artifacts:
```bash title="entrypoint.sh"
#!/bin/bash
# Next.js
NODE_ENV=production node .next/standalone/server.js
# TypeScript Node.js
node dist/main.js
# Go
./main
```
* Complete all build steps in the development environment
* Verify the integrity of build artifacts
* Test that the built application starts correctly
Use the Correct Port
Ensure your application listens on the correct port:
* Use the port from environment variables (if available)
* If manually specified, use standard ports (like 3000, 8080, etc.)
* Make sure to listen on 0.0.0.0 rather than localhost
Handle Environment Variables
If your application depends on environment variables:
* Configure environment variables during deployment via the Application Management interface
* Don't hardcode sensitive information in `entrypoint.sh`
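The port and environment-variable guidance above can be sketched in a single `entrypoint.sh`. This is a minimal example under stated assumptions: `DATABASE_URL` is a placeholder name for a variable you'd configure in App Launchpad, and `node dist/main.js` stands in for your own start command.

```shell
#!/bin/bash
# entrypoint.sh sketch: only start the app here, never build it.

# Fail fast if a required variable was not configured at deployment time
# (DATABASE_URL is a placeholder name for illustration)
if [ -z "${DATABASE_URL:-}" ]; then
  echo "warning: DATABASE_URL is not set in the deployment environment" >&2
  # exit 1   # in a real entrypoint, exit non-zero here
fi

# Prefer a platform-provided port, fall back to a standard default,
# and bind to 0.0.0.0 so the app is reachable from outside the container
export HOST=0.0.0.0
export PORT="${PORT:-3000}"
echo "starting on ${HOST}:${PORT}"
# exec node dist/main.js   # replace with your application's start command
```

The `exec` keeps your application as the container's main process, so it receives shutdown signals directly.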
Don't add background processes or daemons in entrypoint.sh. If you need to run multiple services, create and deploy separate DevBox projects for each.
## Common Issues
### Build-related Problems
If you encounter build-related issues:
1. Ensure all build steps are completed in the development environment
2. Check that build artifacts are complete
3. Verify runtime dependencies for your build artifacts
### Startup Failures
If your application fails to start:
1. Check if the startup command points to the correct build artifacts
2. Confirm all required environment variables are configured
3. Review application logs for detailed error information
### Permission Issues
If you encounter permission errors:
* Ensure `entrypoint.sh` has execution permissions:
```bash
chmod +x entrypoint.sh
```
* Check if the permissions for build artifacts are correct
## Testing and Validation
Before publishing your application:
1. Complete the build in the development environment
2. Test the build artifacts
3. Verify that `entrypoint.sh` starts your application correctly:
```bash
./entrypoint.sh
```
## Next Steps
After completing your application build and configuring `entrypoint.sh`, you're ready to [publish](./release) your application. During the publishing process:
1. Ensure all build artifacts are correctly generated
2. The system will use your configured `entrypoint.sh` as the application's startup entry point
3. After publishing is complete, during the [deployment](./deploy) phase, your application will start according to the method defined in `entrypoint.sh`
file: ./content/docs/guides/fundamentals/index.en.mdx
meta: {
"title": "Fundamentals",
"index": true,
"description": "Get started with the core concepts and workflow of Sealos DevBox"
}
Welcome! Let's get started with Sealos DevBox!
To begin, you should familiarize yourself with the core components and features of the platform. The goal of this section is to guide you through the fundamental steps of creating, developing, releasing, and deploying your application using Sealos DevBox.
## Core Components
| Component | Description |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Project | A Project in Sealos DevBox is like an application stack, encapsulating everything needed for your application, including runtime environments and configurations. |
| Development Environment | The cloud-based environment where you write and test your code, accessible through IDEs like Cursor. |
| Release | A versioned snapshot of your application, packaged as an OCI image for deployment. |
The following pages will guide you through how to create and manage your Projects, develop your application, create releases, and deploy your application. They will also explain how to use the various features of Sealos DevBox to streamline your development workflow.
1. [Create a Project](./fundamentals/create-a-project): Learn how to set up a new project in Sealos DevBox.
2. [Develop](./fundamentals/develop): Understand how to connect to your development environment and write code.
3. [Release](./fundamentals/release): Learn how to package your application as an OCI image for deployment.
4. [Deploy](./fundamentals/deploy): Discover how to deploy your application to Sealos Cloud.
By following these guides, you'll gain a solid understanding of the Sealos DevBox workflow, from initial project creation to final deployment.
If you prefer a quicker overview, check out our [Quick Start Tutorial](/docs/quick-start)!
file: ./content/docs/guides/fundamentals/release.en.mdx
meta: {
"title": "Release",
"description": "Learn how to release your project using Sealos DevBox"
}
After you've developed and tested your application, the next step is to release it as an OCI (Open Container Initiative) image. This process allows you to version your application and prepare it for deployment.
## Prepare Your Application for Release
Open the Terminal in Cursor IDE
In the Cursor IDE terminal, navigate to your project directory if you're not already there.
Prepare Your Application (if necessary)
Depending on your project's language or framework, you may need to prepare your application for release. This step varies widely between different technologies:
* For compiled languages (e.g., Java, Go):
Run your build command (e.g., `mvn package`, `go build`)
* For interpreted languages with build steps (e.g., TypeScript, some JavaScript frameworks):
Run your build or transpilation command (e.g., `npm run build`, `tsc`)
* For interpreted languages without build steps (e.g., Python, Ruby):
Ensure all dependencies are listed in your requirements file (e.g., `requirements.txt`, `Gemfile`)
If your project doesn't require any preparation, you can skip this step.
Review and Update entrypoint.sh
Each DevBox project has an `entrypoint.sh` file that contains the startup command for your OCI image. It's crucial to review and, if necessary, update this file:
1. Open the `entrypoint.sh` file in your project directory.
2. Review the startup command. It should correctly start your application.
3. If needed, modify the command to match your application's requirements.
For example, a Java application might have:
```bash
#!/bin/bash
java -jar your-application.jar
```
While a Python application might have:
```bash
#!/bin/bash
python your_main_script.py
```
Ensure this file is executable by running:
```bash
chmod +x entrypoint.sh
```
The `entrypoint.sh` file is crucial for your application's startup in the OCI image. Make sure it correctly launches your application before proceeding with the release.
## Release as OCI Image
Access Project Details
* Go to the Sealos DevBox List in your browser.
* Find your project and click on the "Detail" button on the right side of your project's row.
Initiate Release Process
On the project details page:
* Look for the "Version" section.
* Click on the "Release" button located in the top right corner of the "Version" section.
Configure Release Details
In the "Release" dialog box that appears, provide the following information:
* **Image Name**: This field is pre-filled with your project's image name.
* **Tag**: Enter a version tag for your release (e.g., v1.0).
* **Description**: Provide a brief description of this release (e.g., "Initial release" or "Feature update: user authentication").

Complete the Release
After filling in the required information, click the "Release" button at the bottom of the dialog box.
Verify the Release
Once the release process is complete, you'll see a new entry in the "Version" section of your project details page. This entry will show:
* The tag you assigned
* The status of the release
* The creation time
* The description you provided

## Best Practices for Releasing
1. **Semantic Versioning**: Consider using semantic versioning (e.g., v1.0.0) for your tags. This helps in tracking major, minor, and patch releases.
2. **Descriptive Releases**: Provide clear and concise descriptions for each release. This helps team members understand what changes or features are included in each version.
3. **Regular Releases**: Create new releases whenever you make significant changes or reach important milestones in your project. This practice helps in maintaining a clear history of your application's development.
4. **Pre-release Testing**: Always thoroughly test your application before creating a release. This ensures that the released version is stable and ready for deployment.
5. **Consistent Build Process**: Ensure your build process is consistent and reproducible. Consider using build scripts or Makefiles to standardize the build process across your team.
## Next Steps
After successfully releasing your application as an OCI image, you're ready to move on to the deployment phase. The OCI image you've created can be used for deployment or shared with other team members.
Check out the "Deploy" guide for information on how to deploy your released application to a production environment.
file: ./content/docs/guides/fundamentals/template-market.en.mdx
meta: {
"title": "DevBox 模板市场",
"description": "了解如何使用 Sealos DevBox 模板市场创建和管理开发环境模板"
}
file: ./content/docs/k8s/advanced-guide/build-image-using-registry-sync.en.mdx
meta: {
"title": "镜像构建改进指南",
"keywords": [
"Sealos镜像构建",
"镜像缓存",
"registry-proxy",
"skopeo copy",
"镜像仓库同步"
],
"description": "了解Sealos镜像构建的改进指南,提升构建效率,使用registry-proxy和skopeo copy实现镜像仓库同步,优化镜像管理。"
}
file: ./content/docs/k8s/advanced-guide/dual-stack-cluster.en.mdx
meta: {
"title": "使用calico安装双栈集群",
"keywords": [
"双栈集群",
"calico安装",
"sealos",
"Kubernetes",
"IPv6配置",
"Clusterfile",
"k8s双栈",
"网络配置",
"Kubernetes集群",
"Calico双栈"
],
"description": "使用calico安装双栈集群,详细步骤包括生成和编辑Clusterfile,添加IPv6的pod和svc的CIDR范围,确保Kubernetes集群的双栈网络配置。"
}
file: ./content/docs/k8s/advanced-guide/image-build-standardized.en.mdx
meta: {
"title": "镜像构建与标准化目录配置",
"keywords": [
"Sealos镜像构建",
"Kubernetes部署",
"目录结构标准化",
"Kubefile参数",
"容器镜像管理",
"集群初始化",
"应用配置",
"Helm chart",
"环境变量设置",
"镜像仓库配置"
],
"description": "学习Sealos镜像构建的标准化目录配置,掌握Kubefile参数使用,优化Kubernetes部署流程。本指南助您规范化构建过程,提高效率,降低错误率。"
}
file: ./content/docs/k8s/advanced-guide/sealos-run.en.mdx
meta: {
"title": "Sealos Run 的工作原理",
"keywords": [
"Sealos Run",
"Kubernetes集群部署",
"容器编排",
"节点管理",
"镜像分发",
"集群初始化",
"证书管理",
"IPVS规则",
"Kubeadm配置",
"Clusterfile"
],
"description": "深入解析Sealos Run命令的工作原理,包括节点检查、镜像分发、集群初始化等九大步骤,助您轻松部署和管理Kubernetes集群。"
}
file: ./content/docs/k8s/advanced-guide/template-function.en.mdx
meta: {
"title": "模板引入与函数增强",
"keywords": [
"Sealos模板渲染",
"集群镜像构建",
"模板函数增强",
"Kubernetes配置管理",
"动态生成资源清单",
"semverCompare",
"多版本Kubernetes支持",
"镜像构建灵活性"
],
"description": "探索Sealos的模板渲染功能,了解如何在集群镜像构建中动态生成配置、脚本和Kubernetes资源清单,以及如何使用增强的模板函数实现多版本Kubernetes支持。"
}
file: ./content/docs/k8s/quick-start/build-ingress-cluster-image.en.mdx
meta: {
"title": "构建一个 Ingress 集群镜像",
"keywords": [
"Ingress集群镜像",
"sealos",
"helm",
"nginx-ingress",
"镜像缓存代理",
"Dockerfile",
"集群镜像构建",
"镜像列表",
"镜像registry"
],
"description": "学习如何使用sealos和helm构建Ingress集群镜像,包括下载chart、添加镜像列表、编写Dockerfile、构建和推送镜像,以及运行集群镜像的完整流程。"
}
file: ./content/docs/k8s/quick-start/deploy-kubernetes.en.mdx
meta: {
"title": "安装 K8s 集群",
"keywords": [
"Kubernetes安装",
"Sealos",
"K8s集群部署",
"离线安装K8s",
"Containerd",
"集群镜像",
"高可用K8s",
"节点管理"
],
"description": "使用Sealos快速部署Kubernetes集群,支持在线和离线安装,适用于amd64和arm64架构。轻松管理节点,安装分布式应用,支持Containerd和Docker运行时。"
}
file: ./content/docs/k8s/quick-start/install-cli.en.mdx
meta: {
"title": "下载 Sealos 命令行工具",
"keywords": [
"Sealos命令行工具",
"Sealos安装",
"Kubernetes集群部署",
"二进制下载",
"包管理工具安装",
"源码安装",
"版本选择",
"Linux系统"
],
"description": "本文详细介绍了如何下载和安装Sealos命令行工具,包括版本选择、二进制下载、包管理工具安装和源码安装等多种方法,助您快速部署Kubernetes集群。"
}
file: ./content/docs/k8s/reference/image-cri-shim.en.mdx
meta: {
"title": "image-cri-shim 使用指南",
"keywords": [
"image-cri-shim",
"Kubernetes",
"容器运行时",
"CRI",
"kubelet",
"镜像自动识别",
"容器部署",
"中间件",
"镜像仓库"
],
"description": "image-cri-shim 使用指南,简化Kubernetes容器部署,自动识别镜像名称,提高操作便利性,支持CRI API v1alpha2和v1。"
}
file: ./content/docs/k8s/reference/lvscare.en.mdx
meta: {
"title": "LVScare 使用指南",
"keywords": [
"LVScare",
"Sealos",
"Kubernetes高可用性",
"IPVS负载均衡",
"健康检查工具"
],
"description": "LVScare是一款基于IPVS的轻量级负载均衡和健康检查工具,与Sealos集成,提升Kubernetes集群的高可用性和稳定性。"
}
file: ./content/docs/guides/databases/kafka/go.en.mdx
meta: {
"title": "Go",
"description": "Learn how to connect to Kafka in Sealos DevBox using Go"
}
This guide will walk you through the process of connecting to Kafka using Go within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Go environment
* [A Kafka cluster created using the Database app in Sealos](./)
## Install Required Packages
In your Cursor terminal, install the necessary packages:
```bash
go get github.com/joho/godotenv
go get github.com/confluentinc/confluent-kafka-go/v2/kafka
```
This command installs:
* `github.com/confluentinc/confluent-kafka-go/v2/kafka`: The Confluent Kafka client for Go
## System Dependencies
The `confluent-kafka-go` package requires `librdkafka` as a system dependency. In Sealos DevBox, you might need to install it manually. Run the following commands in your Cursor terminal:
```bash
sudo apt-get update
sudo apt-get install -y gcc libc6-dev librdkafka-dev
```
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our Kafka connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
KAFKA_BROKER=your_kafka_host:9092
KAFKA_GROUP_ID=group-id
KAFKA_TOPIC=topic-name
```
Replace the placeholders with your actual Kafka broker address, group ID, and topic name from the Database app in Sealos.
#### Create the main.go file
Create a new file named `main.go` with the following content:
```go title="main.go"
package main
import (
"fmt"
"log"
"os"
"github.com/confluentinc/confluent-kafka-go/v2/kafka"
"github.com/joho/godotenv"
)
var (
broker string
groupId string
topic string
)
func loadEnv() error {
// Load environment variables from .env file
err := godotenv.Load()
if err != nil {
log.Fatal("Error loading .env file")
}
broker = os.Getenv("KAFKA_BROKER")
groupId = os.Getenv("KAFKA_GROUP_ID")
topic = os.Getenv("KAFKA_TOPIC")
return nil
}
func startProducer() {
p, err := kafka.NewProducer(&kafka.ConfigMap{
"bootstrap.servers": broker,
"allow.auto.create.topics": true,
})
if err != nil {
panic(err)
}
go func() {
for e := range p.Events() {
switch ev := e.(type) {
case *kafka.Message:
if ev.TopicPartition.Error != nil {
fmt.Printf("Delivery failed: %v\n", ev.TopicPartition)
} else {
fmt.Printf("Delivered message to %v\n", ev.TopicPartition)
}
}
}
}()
for _, word := range []string{"message 1", "message 2", "message 3"} {
p.Produce(&kafka.Message{
TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
Value: []byte(word),
}, nil)
}
}
func startConsumer() {
c, err := kafka.NewConsumer(&kafka.ConfigMap{
"bootstrap.servers": broker,
"group.id": groupId,
"auto.offset.reset": "earliest",
})
if err != nil {
panic(err)
}
	if err := c.Subscribe(topic, nil); err != nil {
		panic(err)
	}
for {
msg, err := c.ReadMessage(-1)
if err == nil {
fmt.Printf("Message on %s: %s\n", msg.TopicPartition, string(msg.Value))
} else {
fmt.Printf("Consumer error: %v (%v)\n", err, msg)
break
}
}
c.Close()
}
func main() {
if err := loadEnv(); err != nil {
fmt.Println(err)
return
}
startProducer()
startConsumer()
}
```
Let's break down the main components of this code:
1. **Imports and Variables**: We import the necessary packages and define global variables for the broker address, group ID, and topic name.
2. **startProducer function**:
* Creates a new Kafka producer
* Uses a goroutine to handle delivery reports
* Produces sample messages to the specified topic
3. **startConsumer function**:
* Creates a new Kafka consumer
* Subscribes to the specified topic
* Continuously reads messages from the topic and prints them
4. **Main function**: Calls both `startProducer()` and `startConsumer()` to demonstrate producing and consuming messages.
## Usage
To run the application, use the following command in your Cursor terminal:
```bash
go run main.go
```
This will execute the `main` function, demonstrating both producing and consuming messages with Kafka.
## Best Practices
1. In a real-world scenario, separate the producer and consumer into different applications or services.
2. Use environment variables for Kafka configuration instead of hardcoding values.
3. Implement proper error handling and logging.
4. Implement graceful shutdown to properly close Kafka connections.
## Troubleshooting
If you encounter connection issues:
1. Verify your Kafka broker address in the `broker` variable.
2. Ensure your Kafka cluster is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the required packages and system dependencies are correctly installed.
5. If you encounter `cgo` related errors, make sure you have the necessary build tools installed (`sudo apt-get install build-essential`).
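For steps 1-3, you can check raw TCP reachability from the DevBox terminal before debugging the Go code itself. This sketch uses bash's built-in `/dev/tcp` so no extra tools are needed; `your_kafka_host` is a placeholder for the broker address shown in the Database app.

```shell
KAFKA_HOST="${KAFKA_HOST:-your_kafka_host}"
KAFKA_PORT="${KAFKA_PORT:-9092}"

# Attempt a TCP connection in a subshell; the fd is closed when it exits
if (exec 3<>"/dev/tcp/${KAFKA_HOST}/${KAFKA_PORT}") 2>/dev/null; then
  echo "broker reachable at ${KAFKA_HOST}:${KAFKA_PORT}"
else
  echo "cannot reach ${KAFKA_HOST}:${KAFKA_PORT} - check the address and network settings"
fi
```

If the connection fails here, the problem is the address or network path, not your Kafka client code.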
For more detailed information on using Kafka with Go, refer to the [confluent-kafka-go documentation](https://github.com/confluentinc/confluent-kafka-go).
file: ./content/docs/guides/databases/kafka/index.en.mdx
meta: {
"title": "Kafka",
"description": "Deploy and connect to Kafka clusters in Sealos DevBox"
}
Apache Kafka is a distributed event streaming platform that allows you to build real-time data pipelines and streaming applications. In Sealos DevBox, you can easily set up and connect to Kafka clusters for your development projects.
## Deploy Kafka in Sealos
Sealos makes it easy to deploy a Kafka cluster with just a few clicks. Follow these steps:
From the Sealos desktop, click on the "Database" icon to open the Database app.

Click on the "Create New Database" button. In the deployment form:
* Select "Kafka" as the database type.
* Choose the desired Kafka version (e.g., kafka-3.3.2).
* Enter a name for your Kafka cluster (use lowercase letters and numbers only).
* Adjust the CPU and Memory sliders to set the resources for your Kafka brokers.
* Set the number of brokers (1 for single-node development and testing, 3 or more for production).
* Specify the storage size for each broker (e.g., 3 Gi).

Review the projected cost on the left sidebar. Click the "Deploy" button in the top right corner to create your Kafka cluster.
Once deployed, Sealos will provide you with the necessary connection details.

## Connect to Kafka in DevBox
Here are examples of how to connect to your Kafka cluster using different programming languages and frameworks within your DevBox environment:
file: ./content/docs/guides/databases/kafka/java.en.mdx
meta: {
"title": "Java",
"description": "Learn how to connect to Kafka in Sealos DevBox using Java"
}
This guide will walk you through the process of connecting to Kafka using Java within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Java environment
* [A Kafka cluster created using the Database app in Sealos](./)
## Project Setup
#### Create a new Maven project
In your Sealos DevBox terminal, initialize a new Maven project:
```bash
mvn archetype:generate -DgroupId=com.example -DartifactId=kafka-java-example -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
mv kafka-java-example/* .
rm -rf kafka-java-example
rm -rf src/test
```
#### Update pom.xml
Replace the content of your `pom.xml` file with the following:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>kafka-java-example</artifactId>
  <version>1.0-SNAPSHOT</version>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>3.4.0</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-simple</artifactId>
      <version>2.0.5</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.1</version>
        <configuration>
          <source>11</source>
          <target>11</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.2.4</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <mainClass>com.example.App</mainClass>
                </transformer>
              </transformers>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```
This `pom.xml` file includes the necessary dependencies (Kafka client and SLF4J for logging) and configures the Maven Shade plugin to create an executable JAR.
#### Create a configuration file
Create a file named `kafka.properties` in the `src/main/resources` directory:
```ini
bootstrap.servers=your_kafka_bootstrap_servers:9092
topic=your_topic_name
group.id=your_consumer_group_id
```
Replace the placeholders with your actual Kafka credentials from the Database app in Sealos.
#### Create Java classes
Create the following Java classes in the `src/main/java/com/example` directory:
1. `KafkaProducerExample.java`:
```java
package com.example;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringSerializer;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
public class KafkaProducerExample {
public static void main(String[] args) {
Properties props = loadConfig();
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
Producer<String, String> producer = new KafkaProducer<>(props);
String topic = props.getProperty("topic");
String message = "Hello from Sealos DevBox!";
ProducerRecord<String, String> record = new ProducerRecord<>(topic, message);
producer.send(record, (metadata, exception) -> {
if (exception == null) {
System.out.println("Message sent successfully. Topic: " + metadata.topic() +
", Partition: " + metadata.partition() +
", Offset: " + metadata.offset());
} else {
System.err.println("Error sending message: " + exception.getMessage());
}
});
producer.flush();
producer.close();
}
private static Properties loadConfig() {
Properties props = new Properties();
try (FileInputStream fis = new FileInputStream("src/main/resources/kafka.properties")) {
props.load(fis);
} catch (IOException e) {
throw new RuntimeException("Error loading Kafka configuration", e);
}
return props;
}
}
```
This class demonstrates how to create a Kafka producer, send a message, and handle the result asynchronously.
2. `KafkaConsumerExample.java`:
```java
package com.example;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.io.FileInputStream;
import java.io.IOException;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
public class KafkaConsumerExample {
public static void main(String[] args) {
Properties props = loadConfig();
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
Consumer consumer = new KafkaConsumer<>(props);
String topic = props.getProperty("topic");
consumer.subscribe(Collections.singletonList(topic));
try {
while (true) {
ConsumerRecords records = consumer.poll(Duration.ofMillis(100));
for (ConsumerRecord record : records) {
System.out.println("Received message: " + record.value() +
" from topic: " + record.topic() +
", partition: " + record.partition() +
", offset: " + record.offset());
}
}
} finally {
consumer.close();
}
}
private static Properties loadConfig() {
Properties props = new Properties();
try (FileInputStream fis = new FileInputStream("src/main/resources/kafka.properties")) {
props.load(fis);
} catch (IOException e) {
throw new RuntimeException("Error loading Kafka configuration", e);
}
return props;
}
}
```
This class shows how to create a Kafka consumer, subscribe to a topic, and continuously poll for new messages.
Both classes use a `loadConfig()` method to read the Kafka properties from the `kafka.properties` file, allowing for easy configuration changes without modifying the code.
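The properties-file pattern is language-agnostic. As a rough illustration, parsing the same `key=value` format in Python takes only a few lines (a minimal sketch that ignores escape sequences and multi-line values; `load_properties` is an illustrative name):

```python
def load_properties(path):
    """Parse a simple Java-style .properties file into a dict.

    Handles key=value lines, skips blank lines and # comments.
    Minimal sketch: no escape sequences or multi-line values.
    """
    props = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props
```

With the `kafka.properties` file above, `load_properties("kafka.properties")["bootstrap.servers"]` would return the broker address.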
## Build and Run
To build and run the project, use the following commands in your terminal:
```bash
mvn clean package
java -cp target/kafka-java-example-1.0-SNAPSHOT.jar com.example.KafkaProducerExample
java -cp target/kafka-java-example-1.0-SNAPSHOT.jar com.example.KafkaConsumerExample
```
Run the producer and consumer in separate terminal windows to see the message being sent and received.
## Best Practices
1. Use a properties file to store Kafka configuration details.
2. Implement proper error handling and logging.
3. Use the try-with-resources statement to ensure that Kafka producers and consumers are properly closed.
4. Consider using Kafka's AdminClient for managing topics and other Kafka resources.
5. Implement proper serialization and deserialization for your message keys and values.
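For point 5, a common approach is a pair of small helper functions that JSON-encode values on the way in and decode them on the way out. The pattern is client-library independent; here is a minimal sketch in Python (`serialize`/`deserialize` are illustrative names):

```python
import json

def serialize(value):
    """Encode a Python object as UTF-8 JSON bytes for a message value."""
    return json.dumps(value).encode("utf-8")

def deserialize(raw):
    """Decode UTF-8 JSON bytes from a message back into a Python object."""
    return json.loads(raw.decode("utf-8"))
```

The same idea applies in Java by implementing `Serializer`/`Deserializer` for your payload type instead of relying on plain strings.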
## Troubleshooting
If you encounter connection issues:
1. Verify your Kafka credentials in the `kafka.properties` file.
2. Ensure your Kafka cluster is running and accessible from your DevBox environment.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that all required dependencies are correctly specified in your `pom.xml` file.
For more detailed information on using Kafka with Java, refer to the [Apache Kafka documentation](https://kafka.apache.org/documentation/).
file: ./content/docs/guides/databases/kafka/nodejs.en.mdx
meta: {
"title": "Node.js",
"description": "Learn how to connect to Kafka in Sealos DevBox using Node.js"
}
This guide will walk you through the process of connecting to Kafka using Node.js within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Node.js environment
* [A Kafka cluster created using the Database app in Sealos](./)
## Install Required Packages
In your Cursor terminal, install the necessary packages:
```bash
npm install kafkajs dotenv
```
This command installs:
* `kafkajs`: A modern Apache Kafka client for Node.js
* `dotenv`: A zero-dependency module that loads environment variables from a `.env` file
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our Kafka connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
KAFKA_BROKERS=your_kafka_host:9092
KAFKA_CLIENT_ID=my-app
KAFKA_TOPIC=my-topic
```
Replace the placeholders with your actual Kafka broker address, client ID, and topic name from the Database app in Sealos.
#### Create a Kafka client
Create a new file named `kafkaClient.js` with the following content:
```javascript
const { Kafka } = require('kafkajs');
require('dotenv').config();

const kafka = new Kafka({
  clientId: process.env.KAFKA_CLIENT_ID,
  brokers: process.env.KAFKA_BROKERS.split(','),
});

const producer = kafka.producer();
const consumer = kafka.consumer({ groupId: 'test-group' });

module.exports = { kafka, producer, consumer };
```
This file creates a Kafka client and exports it along with a producer and consumer.
#### Create a producer script
Now, let's create a file named `producer.js` to demonstrate how to produce messages:
```javascript
const { kafka, producer } = require('./kafkaClient');
require('dotenv').config();

async function produceMessage() {
  try {
    await producer.connect();

    // Check if the topic exists
    const admin = kafka.admin();
    await admin.connect();
    const topics = await admin.listTopics();
    if (!topics.includes(process.env.KAFKA_TOPIC)) {
      console.log(`Topic ${process.env.KAFKA_TOPIC} does not exist. Creating it...`);
      await admin.createTopics({
        topics: [{ topic: process.env.KAFKA_TOPIC, numPartitions: 1, replicationFactor: 1 }]
      });
      console.log(`Topic ${process.env.KAFKA_TOPIC} created successfully.`);
    }
    await admin.disconnect();

    // Send the message
    const result = await producer.send({
      topic: process.env.KAFKA_TOPIC,
      messages: [
        { value: 'Hello from Sealos DevBox!' },
      ],
    });
    console.log('Message sent successfully', result);
  } catch (error) {
    console.error('Error producing message:', error);
    if (error.name === 'KafkaJSNumberOfRetriesExceeded') {
      console.error('Connection details:', {
        clientId: process.env.KAFKA_CLIENT_ID,
        brokers: process.env.KAFKA_BROKERS,
      });
    }
  } finally {
    await producer.disconnect();
  }
}

produceMessage();
```
This script does the following:
1. Connects to Kafka using the producer.
2. Checks if the specified topic exists, creating it if necessary.
3. Sends a message to the topic.
4. Handles errors, including connection issues.
5. Disconnects from Kafka after the operation.
This approach ensures the topic exists before sending messages and provides detailed error information if the connection fails.
#### Create a consumer script
Create another file named `consumer.js` to demonstrate how to consume messages:
```javascript
const { consumer } = require('./kafkaClient');
require('dotenv').config();

async function consumeMessages() {
  try {
    await consumer.connect();
    await consumer.subscribe({ topic: process.env.KAFKA_TOPIC, fromBeginning: true });

    await consumer.run({
      eachMessage: async ({ topic, partition, message }) => {
        console.log({
          topic,
          partition,
          offset: message.offset,
          value: message.value.toString(),
        });
      },
    });
  } catch (error) {
    console.error('Error consuming messages:', error);
  }
}

consumeMessages();
```
This consumer script:
1. Connects to Kafka using the consumer instance.
2. Subscribes to the specified topic, starting from the beginning of the log.
3. Continuously runs and processes each incoming message.
4. Logs the topic, partition, offset, and message value for each received message.
5. Handles any errors that occur during the consumption process.
This setup allows for real-time processing of messages as they arrive in the Kafka topic.
## Usage
To run the producer script, use the following command in your Cursor terminal:
```bash
node producer.js
```
To run the consumer script, open another terminal and use:
```bash
node consumer.js
```
The consumer will start listening for messages. When you run the producer script, you should see the message being received by the consumer.
## Best Practices
1. Use environment variables for Kafka configuration details.
2. Implement proper error handling and logging.
3. Use the `kafkajs` built-in retry mechanism for better reliability.
4. Consider implementing a graceful shutdown mechanism for your consumer.
5. Use compression for better performance when dealing with large messages or high throughput.
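Point 4 (graceful shutdown) usually boils down to a stop flag flipped by a signal handler, checked between messages, with cleanup in a `finally` block. The pattern is language-agnostic; here is a minimal sketch in Python, where `close` stands in for the real client's disconnect call:

```python
import signal

class GracefulRunner:
    """Run a message loop until a termination signal flips a stop flag."""

    def __init__(self):
        self.stopped = False

    def install(self):
        # Flip the flag on SIGINT/SIGTERM instead of dying mid-message.
        signal.signal(signal.SIGINT, self._handle)
        signal.signal(signal.SIGTERM, self._handle)

    def _handle(self, signum, frame):
        self.stopped = True

    def run(self, messages, handle, close):
        """Process messages until exhausted or stopped, then always clean up."""
        try:
            for msg in messages:
                if self.stopped:
                    break
                handle(msg)
        finally:
            close()  # e.g. the client's disconnect/close call
```

In `kafkajs` the equivalent is listening for `SIGINT`/`SIGTERM` with `process.on(...)` and calling `consumer.disconnect()` before exiting.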
## Troubleshooting
If you encounter connection issues:
1. Verify your Kafka broker address in the `.env` file.
2. Ensure your Kafka cluster is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the required packages are correctly installed.
For more detailed information on using Kafka with Node.js, refer to the [KafkaJS documentation](https://kafka.js.org/).
file: ./content/docs/guides/databases/kafka/php.en.mdx
meta: {
"title": "PHP",
"description": "Learn how to connect to Kafka in Sealos DevBox using PHP"
}
This guide will walk you through the process of connecting to Kafka using PHP within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with PHP environment
* [A Kafka cluster created using the Database app in Sealos](./)
## Install Required Extensions
In your Cursor terminal, first install the necessary system dependencies:
```bash
sudo apt-get update
sudo apt-get install -y librdkafka-dev
```
Then, install the Kafka extension for PHP:
```bash
sudo pecl install rdkafka
echo "extension=rdkafka.so" | sudo tee /etc/php/*/mods-available/rdkafka.ini
sudo phpenmod rdkafka
```
## Connection Setup
#### Create a Configuration File
First, let's create a configuration file to store our Kafka connection parameters. Create a file named `config.php` in your project directory with the following content:
```php
<?php

return [
    'brokers' => 'your_kafka_broker:9092',
    'topic' => 'your_topic_name',
    'group_id' => 'your_consumer_group_id'
];
```
Replace the placeholders with your actual Kafka credentials from the Database app in Sealos.
#### Create a Kafka Producer
Create a file named `kafka_producer.php` with the following content:
```php
<?php

$config = require 'config.php';

$conf = new RdKafka\Conf();
$conf->set('metadata.broker.list', $config['brokers']);

$producer = new RdKafka\Producer($conf);
$topic = $producer->newTopic($config['topic']);

$message = "Hello from Sealos DevBox!";
$topic->produce(RD_KAFKA_PARTITION_UA, 0, $message);
$producer->flush(10000);

echo "Message sent: $message\n";
```
This script creates a Kafka producer and sends a message to the specified topic.
#### Create a Kafka Consumer
Create another file named `kafka_consumer.php` with the following content:
```php
<?php

$config = require 'config.php';

$conf = new RdKafka\Conf();
$conf->set('group.id', $config['group_id']);
$conf->set('metadata.broker.list', $config['brokers']);
$conf->set('auto.offset.reset', 'earliest');

$consumer = new RdKafka\KafkaConsumer($conf);
$consumer->subscribe([$config['topic']]);

echo "Waiting for messages...\n";

while (true) {
    $message = $consumer->consume(120 * 1000);
    switch ($message->err) {
        case RD_KAFKA_RESP_ERR_NO_ERROR:
            echo "Received message: " . $message->payload . "\n";
            break;
        case RD_KAFKA_RESP_ERR__PARTITION_EOF:
            echo "No more messages; will wait for more\n";
            break;
        case RD_KAFKA_RESP_ERR__TIMED_OUT:
            echo "Timed out\n";
            break;
        default:
            throw new \Exception($message->errstr(), $message->err);
    }
}
```
This script creates a Kafka consumer that listens for messages on the specified topic.
## Usage
To run the producer script, use the following command in your Cursor terminal:
```bash
php kafka_producer.php
```
To run the consumer script, open another terminal and use:
```bash
php kafka_consumer.php
```
The consumer will start listening for messages. When you run the producer script, you should see the message being received by the consumer.
## Best Practices
1. Use environment variables for Kafka configuration details.
2. Implement proper error handling and logging.
3. Consider using a library like `monolog` for better logging capabilities.
4. Implement a graceful shutdown mechanism for your consumer.
5. Use compression for better performance when dealing with large messages or high throughput.
## Troubleshooting
If you encounter connection issues:
1. Verify your Kafka broker address in the `config.php` file.
2. Ensure your Kafka cluster is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the `rdkafka` extension is correctly installed and enabled.
For more detailed information on using Kafka with PHP, refer to the [php-rdkafka documentation](https://github.com/arnaud-lb/php-rdkafka).
file: ./content/docs/guides/databases/kafka/python.en.mdx
meta: {
"title": "Python",
"description": "Learn how to connect to Kafka in Sealos DevBox using Python"
}
This guide will walk you through the process of connecting to Kafka using Python within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Python environment
* [A Kafka cluster created using the Database app in Sealos](./)
## Activating the Python Environment
Before you start, you need to activate the Python virtual environment in your DevBox. Open the terminal within Cursor IDE and run:
```bash
source ./bin/activate
```
You should see your prompt change, indicating that the virtual environment is now active.
## Installing Required Packages
In your Cursor terminal, install the necessary packages:
```bash
pip install kafka-python python-dotenv
```
This command installs:
* `kafka-python`: The Apache Kafka client for Python
* `python-dotenv`: A Python package that allows you to load environment variables from a .env file
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our Kafka connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
KAFKA_BOOTSTRAP_SERVERS=your_kafka_bootstrap_servers:9092
KAFKA_TOPIC=your_topic_name
KAFKA_CONSUMER_GROUP=your_consumer_group_id
```
Replace the placeholders with your actual Kafka credentials from the Database app in Sealos.
#### Create a Kafka client module
Create a new file named `kafka_client.py` with the following content:
```python title="kafka_client.py"
import os
from dotenv import load_dotenv
from kafka import KafkaProducer, KafkaConsumer

# Load environment variables
load_dotenv()

def get_kafka_producer():
    try:
        producer = KafkaProducer(bootstrap_servers=os.getenv('KAFKA_BOOTSTRAP_SERVERS'))
        print("Successfully connected to Kafka producer")
        return producer
    except Exception as e:
        print(f"Error connecting to Kafka producer: {e}")
        return None

def get_kafka_consumer(topic, group_id=None):
    try:
        consumer = KafkaConsumer(
            topic,
            bootstrap_servers=os.getenv('KAFKA_BOOTSTRAP_SERVERS'),
            auto_offset_reset='earliest',
            enable_auto_commit=True,
            group_id=group_id or 'my-default-group'
        )
        print(f"Successfully connected to Kafka consumer for topic: {topic}")
        return consumer
    except Exception as e:
        print(f"Error connecting to Kafka consumer: {e}")
        return None
```
This module provides two main functions:
1. `get_kafka_producer()`: This function creates a Kafka producer using the bootstrap servers specified in the environment variables.
2. `get_kafka_consumer(topic)`: This function creates a Kafka consumer for a specified topic.
#### Create a test script
Now, let's create a test script to verify our connection and perform some basic Kafka operations. Create a file named `test_kafka.py` with the following content:
```python title="test_kafka.py"
import os
from dotenv import load_dotenv
from kafka_client import get_kafka_producer, get_kafka_consumer

# Load environment variables
load_dotenv()

def test_kafka_producer():
    producer = get_kafka_producer()
    if producer:
        topic = os.getenv('KAFKA_TOPIC')
        message = "Hello from Sealos DevBox!"
        producer.send(topic, message.encode('utf-8'))
        producer.flush()
        print(f"Message sent to topic {topic}: {message}")
        producer.close()

def test_kafka_consumer():
    topic = os.getenv('KAFKA_TOPIC')
    group_id = os.getenv('KAFKA_CONSUMER_GROUP')
    consumer = get_kafka_consumer(topic, group_id)
    if consumer:
        print(f"Waiting for messages on topic {topic}...")
        for message in consumer:
            print(f"Received message: {message.value.decode('utf-8')}")
            break  # Exit after receiving one message
        consumer.close()

if __name__ == "__main__":
    test_kafka_producer()
    test_kafka_consumer()
```
This script demonstrates how to:
1. Create a Kafka producer and send a message to a topic.
2. Create a Kafka consumer and read a message from a topic.
## Running the Test Script
To run the test script, make sure your virtual environment is activated, then execute:
```bash
python test_kafka.py
```
If everything is set up correctly, you should see output indicating successful connection to Kafka, message sending, and message receiving.
## Best Practices
1. Always activate the virtual environment before running your Python scripts or installing packages.
2. Use environment variables to store sensitive information like Kafka bootstrap servers.
3. Handle exceptions appropriately to manage potential errors.
4. Consider using asynchronous Kafka clients for better performance in production environments.
5. Implement proper logging instead of print statements in production code.
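For point 3, transient broker failures are typically handled with bounded retries and exponential backoff rather than a bare try/except. A minimal sketch (the retriable exception type and delays are illustrative, not specific to `kafka-python`):

```python
import time

def with_retries(operation, max_attempts=3, base_delay=0.5,
                 retriable=(ConnectionError,)):
    """Call operation(), retrying retriable errors with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except retriable:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            # Back off: 0.5s, 1s, 2s, ... before the next attempt.
            time.sleep(base_delay * 2 ** (attempt - 1))
```

You could wrap a producer send as `with_retries(lambda: producer.send(topic, payload))`, tuning the retriable exception tuple to the errors your client actually raises.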
## Troubleshooting
If you encounter connection issues:
1. Ensure you've activated the virtual environment with `source ./bin/activate`.
2. Verify that your Kafka cluster is running and accessible.
3. Double-check your Kafka credentials in the `.env` file.
4. Check the Kafka logs in the Database app for any error messages.
5. Make sure your DevBox environment has network access to the Kafka bootstrap servers.
For more detailed information on using Kafka with Python, refer to the [kafka-python documentation](https://kafka-python.readthedocs.io/).
file: ./content/docs/guides/databases/kafka/rust.en.mdx
meta: {
"title": "Rust",
"description": "Learn how to connect to Kafka in Sealos DevBox using Rust"
}
This guide will walk you through the process of connecting to Kafka using Rust within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Rust environment
* [A Kafka cluster created using the Database app in Sealos](./)
## Install Required Dependencies
In your Cursor terminal, add the necessary dependencies to your `Cargo.toml` file:
```toml
[dependencies]
rdkafka = "0.28"
tokio = { version = "1.28", features = ["full"] }
dotenv = "0.15"
```
These dependencies include:
* `rdkafka`: A high-level Apache Kafka client library for Rust
* `tokio`: An asynchronous runtime for Rust
* `dotenv`: A library for loading environment variables from a file
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our Kafka connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
KAFKA_BROKERS=your_kafka_bootstrap_servers:9092
KAFKA_TOPIC=your_topic_name
KAFKA_GROUP_ID=rust-consumer-group
```
Replace the placeholders with your actual Kafka credentials from the Database app in Sealos.
#### Create the main.rs file
Create a new file named `src/main.rs` with the following content:
```rust
use rdkafka::config::ClientConfig;
use rdkafka::producer::{FutureProducer, FutureRecord};
use rdkafka::consumer::{StreamConsumer, Consumer};
use rdkafka::message::Message;
use std::time::Duration;
use dotenv::dotenv;
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv().ok();

    let brokers = env::var("KAFKA_BROKERS").expect("KAFKA_BROKERS must be set");
    let topic = env::var("KAFKA_TOPIC").expect("KAFKA_TOPIC must be set");
    let group_id = env::var("KAFKA_GROUP_ID").expect("KAFKA_GROUP_ID must be set");

    // Producer setup
    let producer: FutureProducer = ClientConfig::new()
        .set("bootstrap.servers", &brokers)
        .set("message.timeout.ms", "5000")
        .create()?;

    // Produce a message
    let delivery_status = producer
        .send(
            FutureRecord::to(&topic)
                .payload("Hello from Sealos DevBox!")
                .key("key"),
            Duration::from_secs(0),
        )
        .await;
    println!("Delivery status: {:?}", delivery_status);

    // Add a delay to ensure the message is processed
    tokio::time::sleep(Duration::from_secs(1)).await;

    // Consumer setup
    let consumer: StreamConsumer = ClientConfig::new()
        .set("group.id", &group_id)
        .set("bootstrap.servers", &brokers)
        .set("enable.partition.eof", "false")
        .set("session.timeout.ms", "6000")
        .set("enable.auto.commit", "true")
        .set("auto.offset.reset", "earliest")
        .create()?;
    consumer.subscribe(&[&topic])?;

    // Consume messages
    println!("Waiting for messages...");
    let mut message_count = 0;
    let max_messages = 5; // Maximum number of messages to receive

    while message_count < max_messages {
        match tokio::time::timeout(Duration::from_secs(5), consumer.recv()).await {
            Ok(Ok(msg)) => {
                println!("Received message: {:?}", msg.payload_view::<str>());
                message_count += 1;
            }
            Ok(Err(e)) => println!("Error while receiving message: {:?}", e),
            Err(_) => {
                println!("No more messages received after 5 seconds. Exiting.");
                break;
            }
        }
    }

    println!("Received {} messages in total.", message_count);
    Ok(())
}
```
This script demonstrates how to:
1. Set up a Kafka producer and send a message to a topic.
2. Set up a Kafka consumer and read messages from a topic.
## Usage
To run the application, use the following command in your Cursor terminal:
```bash
cargo run
```
This will compile and execute the `main` function, demonstrating the connection to Kafka, message production, and consumption.
## Best Practices
1. Use environment variables for Kafka configuration details.
2. Implement proper error handling using Rust's `Result` type.
3. Use asynchronous programming with Tokio for better performance.
4. Consider implementing more robust consumer logic for production use, including proper error handling and graceful shutdown.
## Troubleshooting
If you encounter connection issues:
1. Verify your Kafka broker addresses in the `.env` file.
2. Ensure your Kafka cluster is running and accessible from your DevBox environment.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that all required dependencies are correctly specified in your `Cargo.toml` file.
For more detailed information on using Kafka with Rust, refer to the [rdkafka documentation](https://docs.rs/rdkafka).
file: ./content/docs/guides/databases/milvus/go.en.mdx
meta: {
"title": "Go",
"description": "Learn how to connect to Milvus databases in Sealos DevBox using Go"
}
This guide will walk you through the process of connecting to a Milvus database using Go within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Go environment
* [A Milvus database created using the Database app in Sealos](./)
## Install Required Packages
In your Cursor terminal, install the necessary packages:
```bash
go get github.com/milvus-io/milvus-sdk-go/v2
go get github.com/joho/godotenv
```
These commands install the Milvus Go SDK and the godotenv package for loading environment variables.
## Connection Setup
#### Set up the environment variables
Create a `.env` file in your project root with the following content:
```ini title=".env"
MILVUS_ADDR=your_milvus_host:19530
COLLECTION_NAME=your_collection_name
DIMENSION=128
ID_COLUMN=id
EMBEDDING_COLUMN=embedding
```
Replace the placeholders with your actual Milvus credentials and desired configuration.
#### Create the main.go file
Create a new file named `main.go` with the following content:
```go title="main.go"
package main

import (
    "context"
    "log"
    "os"
    "strconv"
    "time"

    "github.com/joho/godotenv"
    "github.com/milvus-io/milvus-sdk-go/v2/client"
    "github.com/milvus-io/milvus-sdk-go/v2/entity"
)

func main() {
    // Load environment variables from .env file
    err := godotenv.Load()
    if err != nil {
        log.Fatal("Error loading .env file")
    }

    // Get configuration from environment variables
    milvusAddr := os.Getenv("MILVUS_ADDR")
    collectionName := os.Getenv("COLLECTION_NAME")
    dimStr := os.Getenv("DIMENSION")
    idCol := os.Getenv("ID_COLUMN")
    embeddingCol := os.Getenv("EMBEDDING_COLUMN")

    // Convert dimension to int64
    dim, err := strconv.ParseInt(dimStr, 10, 64)
    if err != nil {
        log.Fatalf("Failed to parse DIMENSION: %v", err)
    }

    // Set up a context with a 10-second timeout for client creation
    ctx := context.Background()
    ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
    defer cancel()

    // Create a new Milvus client
    c, err := client.NewClient(ctx, client.Config{
        Address: milvusAddr,
    })
    if err != nil {
        log.Fatal("failed to connect to milvus:", err.Error())
    }

    // Check if the collection exists
    collExists, err := c.HasCollection(ctx, collectionName)
    if err != nil {
        log.Fatal("failed to check collection exists:", err.Error())
    }
    if collExists {
        // Drop the old collection if it exists
        _ = c.DropCollection(ctx, collectionName)
    }

    // Define collection schema
    schema := entity.NewSchema().WithName(collectionName).WithDescription("this is the basic example collection").
        WithField(entity.NewField().WithName(idCol).WithDataType(entity.FieldTypeInt64).WithIsPrimaryKey(true).WithIsAutoID(false)).
        WithField(entity.NewField().WithName(embeddingCol).WithDataType(entity.FieldTypeFloatVector).WithDim(dim))

    // Create the collection
    err = c.CreateCollection(ctx, schema, entity.DefaultShardNumber)
    if err != nil {
        log.Fatal("failed to create collection:", err.Error())
    }

    // List all collections
    collections, err := c.ListCollections(ctx)
    if err != nil {
        log.Fatal("failed to list collections:", err.Error())
    }
    for _, collection := range collections {
        log.Printf("Collection id: %d, name: %s\n", collection.ID, collection.Name)
    }

    // Show collection partitions
    partitions, err := c.ShowPartitions(ctx, collectionName)
    if err != nil {
        log.Fatal("failed to show partitions:", err.Error())
    }
    for _, partition := range partitions {
        log.Printf("partition id: %d, name: %s\n", partition.ID, partition.Name)
    }

    // Create a new partition
    partitionName := "new_partition"
    err = c.CreatePartition(ctx, collectionName, partitionName)
    if err != nil {
        log.Fatal("failed to create partition:", err.Error())
    }
    log.Println("After create partition")

    // Show collection partitions again to check creation
    partitions, err = c.ShowPartitions(ctx, collectionName)
    if err != nil {
        log.Fatal("failed to show partitions:", err.Error())
    }
    for _, partition := range partitions {
        log.Printf("partition id: %d, name: %s\n", partition.ID, partition.Name)
    }

    // Clean up by dropping the collection
    _ = c.DropCollection(ctx, collectionName)
    c.Close()
}
```
This code demonstrates how to connect to Milvus, create a collection, list collections, show and create partitions, and clean up by dropping the collection.
## Usage
To run the application, use the following command in your Cursor terminal:
```bash
go run main.go
```
This will execute the `main` function, demonstrating the connection to Milvus and basic operations with collections and partitions.
## Best Practices
1. Use environment variables for Milvus connection details and configuration.
2. Always handle potential errors using proper error checking.
3. Use contexts with timeouts for operations to prevent hanging in case of network issues.
4. Close the Milvus client connection after operations are complete.
5. Clean up resources (like dropping test collections) after you're done with them.
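The timeout idea in point 3 is not Go-specific. In languages without contexts, the same effect can be approximated by running the call on a worker and enforcing a deadline on the result. A rough Python sketch (illustrative pattern, not part of any Milvus SDK):

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def call_with_timeout(fn, seconds, *args, **kwargs):
    """Run fn(*args, **kwargs) on a worker thread with a hard deadline.

    Raises FutureTimeout if the call does not finish within `seconds`,
    instead of letting a slow network call hang the program.
    """
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn, *args, **kwargs)
        return future.result(timeout=seconds)
```

This mirrors `context.WithTimeout` in spirit only: the Go version actually cancels the in-flight RPC, while this sketch merely stops waiting for it.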
## Troubleshooting
If you encounter connection issues:
1. Verify your Milvus credentials in the `.env` file.
2. Ensure your Milvus database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the required packages are correctly installed.
For more detailed information on using Milvus with Go, refer to the [Milvus Go SDK documentation](https://github.com/milvus-io/milvus-sdk-go).
file: ./content/docs/guides/databases/milvus/index.en.mdx
meta: {
"title": "Milvus",
"description": "Deploy and connect to Milvus clusters in Sealos DevBox"
}
file: ./content/docs/guides/databases/milvus/nodejs.en.mdx
meta: {
"title": "Node.js",
"description": "Learn how to connect to Milvus databases in Sealos DevBox using Node.js"
}
This guide will walk you through the process of connecting to a Milvus database using Node.js within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Node.js environment
* [A Milvus database created using the Database app in Sealos](./)
## Install Required Packages
In your Cursor terminal, install the necessary packages:
```bash
npm install @zilliz/milvus2-sdk-node dotenv
```
This command installs:
* `@zilliz/milvus2-sdk-node`: The official Milvus Node.js SDK
* `dotenv`: A zero-dependency module that loads environment variables from a `.env` file
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our Milvus connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
MILVUS_HOST=your_milvus_host
MILVUS_PORT=19530
COLLECTION_NAME=test_collection
DIMENSION=128
```
Replace the placeholders with your actual Milvus credentials from the Database app in Sealos.
#### Create a connection file
Create a new file named `milvusClient.js` with the following content:
```javascript title="milvusClient.js"
const { MilvusClient } = require('@zilliz/milvus2-sdk-node');
require('dotenv').config();

const client = new MilvusClient({
  address: `${process.env.MILVUS_HOST}:${process.env.MILVUS_PORT}`,
});

module.exports = client;
```
#### Create database operations
Now, let's create a file named `milvusOperations.js` to handle our database operations:
```javascript title="milvusOperations.js"
const client = require('./milvusClient');
require('dotenv').config();

async function createCollection() {
  try {
    const collectionName = process.env.COLLECTION_NAME;
    const dimension = parseInt(process.env.DIMENSION);

    // Define collection schema
    const collectionSchema = {
      collection_name: collectionName,
      fields: [
        {
          name: 'id',
          data_type: 'Int64',
          is_primary_key: true,
          auto_id: true
        },
        {
          name: 'vector',
          data_type: 'FloatVector',
          dim: dimension
        },
        {
          name: 'metadata',
          data_type: 'VarChar',
          max_length: 255
        }
      ]
    };

    // Create collection
    await client.createCollection(collectionSchema);
    console.log(`Collection ${collectionName} created successfully`);

    // Create index
    const indexParams = {
      collection_name: collectionName,
      field_name: 'vector',
      extra_params: {
        index_type: 'IVF_FLAT',
        metric_type: 'L2',
        params: JSON.stringify({ nlist: 1024 })
      }
    };
    await client.createIndex(indexParams);
    console.log('Index created successfully');
  } catch (error) {
    console.error('Error creating collection:', error);
    throw error;
  }
}

async function insertData(vectors, metadata) {
  try {
    const collectionName = process.env.COLLECTION_NAME;

    // The primary key is auto-generated (auto_id), so it is omitted here
    const data = {
      collection_name: collectionName,
      fields_data: vectors.map((vector, index) => ({
        vector,
        metadata: metadata[index]
      }))
    };

    const result = await client.insert(data);
    console.log('Data inserted successfully:', result);
    return result;
  } catch (error) {
    console.error('Error inserting data:', error);
    throw error;
  }
}

async function search(queryVector, topK = 5) {
  try {
    const collectionName = process.env.COLLECTION_NAME;

    // Load the collection into memory before searching
    await client.loadCollection({
      collection_name: collectionName
    });

    const searchParams = {
      collection_name: collectionName,
      vector: queryVector,
      output_fields: ['metadata'],
      limit: topK,
      params: { nprobe: 10 }
    };

    const result = await client.search(searchParams);
    console.log('Search results:', result);
    return result;
  } catch (error) {
    console.error('Error searching:', error);
    throw error;
  }
}

module.exports = {
  createCollection,
  insertData,
  search
};
```
#### Create a main script
Finally, let's create a `main.js` file to demonstrate all the operations:
```javascript title="main.js"
const { createCollection, insertData, search } = require('./milvusOperations');

async function main() {
  try {
    // Create collection
    await createCollection();

    // Generate sample vectors and metadata
    const sampleVectors = [
      new Array(128).fill(0).map(() => Math.random()),
      new Array(128).fill(0).map(() => Math.random())
    ];
    const sampleMetadata = ['Sample 1', 'Sample 2'];

    // Insert data
    await insertData(sampleVectors, sampleMetadata);

    // Perform search
    const queryVector = new Array(128).fill(0).map(() => Math.random());
    await search(queryVector, 2);
  } catch (error) {
    console.error('An error occurred:', error);
  }
}

main();
```
## Usage
To run the script, use the following command in your Cursor terminal:
```bash
node main.js
```
This will execute all the operations defined in the `main` function, demonstrating the connection to Milvus, collection creation, data insertion, and vector search.
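A dimension mismatch between your vectors and the collection schema is a common cause of failed inserts. As a safeguard, you can validate dimensions client-side before calling `insertData`; this is a minimal sketch (the helper name is illustrative, and 128 matches the sample vectors above):

```javascript
// Minimal client-side guard: verify every vector has the dimension the
// collection schema expects (128 in the sample above) before inserting.
function assertDimensions(vectors, expectedDim) {
  vectors.forEach((vector, i) => {
    if (vector.length !== expectedDim) {
      throw new Error(`Vector ${i} has dimension ${vector.length}, expected ${expectedDim}`);
    }
  });
}

// Passes silently for well-formed vectors, throws otherwise.
assertDimensions([new Array(128).fill(0).map(() => Math.random())], 128);
```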
## Best Practices
1. Use environment variables for Milvus connection details.
2. Create indexes for better search performance.
3. Load collections before performing search operations.
4. Implement proper error handling.
5. Use batch operations for inserting multiple vectors.
6. Release collections from memory when they're no longer needed to free resources.
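Best practice 5 (batch inserts) can be sketched with a small helper; the batch size of 1,000 is an illustrative choice, not a Milvus requirement:

```javascript
// Split a large set of vectors into fixed-size batches so each insert call
// stays well below Milvus' per-request size limits.
function chunk(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// Hypothetical usage with the insertData() helper defined earlier:
// for (const [i, batch] of chunk(vectors, 1000).entries()) {
//   await insertData(batch, chunk(metadata, 1000)[i]);
// }
```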
## Troubleshooting
If you encounter connection issues:
1. Verify your Milvus credentials in the `.env` file.
2. Ensure your Milvus database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the required packages are correctly installed.
5. Verify that the vector dimensions match your collection schema.
For more detailed information on using Milvus with Node.js, refer to the [Milvus Node.js SDK documentation](https://github.com/milvus-io/milvus-sdk-node).
file: ./content/docs/guides/databases/mongodb/go.en.mdx
meta: {
"title": "Go",
"description": "Learn how to connect to MongoDB databases in Sealos DevBox using Go"
}
This guide will walk you through the process of connecting to a MongoDB database using Go within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Go environment
* [A MongoDB database created using the Database app in Sealos](./)
## Install Required Packages
In your Cursor terminal, install the necessary packages:
```bash
go get go.mongodb.org/mongo-driver/mongo
go get github.com/joho/godotenv
```
These commands install:
* `go.mongodb.org/mongo-driver/mongo`: The official MongoDB driver for Go
* `github.com/joho/godotenv`: A Go port of the Ruby dotenv library
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our database connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
MONGO_URI=mongodb://your_username:your_password@your_database_host:27017
DB_NAME=your_database_name
```
Replace the placeholders with your actual MongoDB credentials from the Database app in Sealos. Note that we're not including the database name in the URI, as we'll create it programmatically if it doesn't exist.
#### Create the main.go file
Create a new file named `main.go` with the following content:
```go title="main.go"
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/joho/godotenv"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// Employee represents the structure of our data
type Employee struct {
	Name     string
	Position string
}

func main() {
	// Load environment variables from the .env file
	if err := godotenv.Load(); err != nil {
		log.Fatal("Error loading .env file")
	}

	// Get the MongoDB connection URI and database name from environment variables
	mongoURI := os.Getenv("MONGO_URI")
	dbName := os.Getenv("DB_NAME")

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Connect to the server; mongo.Connect supersedes the deprecated
	// mongo.NewClient + client.Connect combination
	client, err := mongo.Connect(ctx, options.Client().ApplyURI(mongoURI))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	// Create the database if it doesn't exist yet
	if err := createDatabaseIfNotExists(ctx, client, dbName); err != nil {
		log.Fatal(err)
	}

	// Get a handle for your collection
	collection := client.Database(dbName).Collection("employees")

	// Insert a document
	employee := Employee{"John Doe", "Developer"}
	insertResult, err := collection.InsertOne(ctx, employee)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Inserted a single document:", insertResult.InsertedID)

	// Find a document
	var result Employee
	err = collection.FindOne(ctx, bson.M{"name": "John Doe"}).Decode(&result)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Found a single document: %+v\n", result)
}

func createDatabaseIfNotExists(ctx context.Context, client *mongo.Client, dbName string) error {
	// List all database names
	databases, err := client.ListDatabaseNames(ctx, bson.M{})
	if err != nil {
		return err
	}

	// Check whether our database already exists
	for _, db := range databases {
		if db == dbName {
			fmt.Printf("Database '%s' already exists\n", dbName)
			return nil
		}
	}

	// If it doesn't, create it by creating a collection in it
	fmt.Printf("Creating database '%s'\n", dbName)
	err = client.Database(dbName).RunCommand(ctx, bson.D{{Key: "create", Value: "employees"}}).Err()
	if err != nil {
		return err
	}
	fmt.Printf("Database '%s' created successfully\n", dbName)
	return nil
}
```
This code demonstrates how to connect to MongoDB, create a database if it doesn't exist, insert a document, and find a document. It uses environment variables for the MongoDB URI and database name.
## Usage
To run the application, use the following command in your Cursor terminal:
```bash
go run main.go
```
This will execute the `main` function, demonstrating the connection to MongoDB, database creation (if necessary), and performing basic operations.
## Best Practices
1. Use environment variables for database credentials and configuration.
2. Always handle potential errors using proper error checking.
3. Use contexts for operations that might need to be cancelled or timed out.
4. Close the database connection after operations are complete.
5. Use indexes for frequently queried fields to improve performance.
## Troubleshooting
If you encounter connection issues:
1. Verify your MongoDB credentials in the `.env` file.
2. Ensure your MongoDB database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the required packages are correctly installed.
For more detailed information on using MongoDB with Go, refer to the [official MongoDB Go driver documentation](https://pkg.go.dev/go.mongodb.org/mongo-driver/mongo).
file: ./content/docs/guides/databases/mongodb/index.en.mdx
meta: {
"title": "MongoDB",
"description": "Deploy and connect to MongoDB databases in Sealos DevBox"
}
MongoDB is a popular, open-source document-oriented database system. In Sealos DevBox, you can easily set up and connect to MongoDB databases for your development projects.
## Deploy MongoDB in Sealos
Sealos makes it easy to deploy a MongoDB database with just a few clicks. Follow these steps:
From the Sealos desktop, click on the "Database" icon to open the Database app.

Click on the "Create New Database" button. In the deployment form:
* Select "MongoDB" as the database type.
* Choose the desired MongoDB version (e.g., mongodb-6.0).
* Enter a name for your database (use lowercase letters and numbers only).
* Adjust the CPU and Memory sliders to set the resources for your database.
* Set the number of replicas (1 for single-node development and testing).
* Specify the storage size (e.g., 3 Gi).

Review the projected cost on the left sidebar. Click the "Deploy" button in the top right corner to create your MongoDB database.
Once deployed, Sealos will provide you with the necessary connection details.

## Connect to MongoDB in DevBox
Here are examples of how to connect to your MongoDB database using different programming languages and frameworks within your DevBox environment:
file: ./content/docs/guides/databases/mongodb/java.en.mdx
meta: {
"title": "Java",
"description": "Learn how to connect to MongoDB databases in Sealos DevBox using Java"
}
This guide will walk you through the process of connecting to a MongoDB database using Java within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Java environment
* [A MongoDB database created using the Database app in Sealos](./)
## Project Setup
#### Create a new Maven project
In your Sealos DevBox terminal, initialize a new Maven project:
```bash
mvn archetype:generate -DgroupId=com.example -DartifactId=mongodb-java-example -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
mv mongodb-java-example/* .
rm -rf mongodb-java-example/
```
#### Project Structure
After setting up, your project structure should look like this:
```
/
├── pom.xml
├── src
│   ├── main
│   │   ├── java
│   │   │   └── com
│   │   │       └── example
│   │   │           ├── App.java
│   │   │           ├── MongoConfig.java
│   │   │           └── Employee.java
│   │   └── resources
│   │       └── mongodb.properties
│   └── test
│       └── java
│           └── com
│               └── example
│                   └── AppTest.java
```
#### Update pom.xml
Replace the content of your `pom.xml` file with the following:
```xml title="pom.xml"
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>mongodb-java-example</artifactId>
  <version>1.0-SNAPSHOT</version>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.mongodb</groupId>
      <artifactId>mongodb-driver-sync</artifactId>
      <version>5.2.0</version>
    </dependency>
    <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <version>1.5.3</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.1</version>
        <configuration>
          <source>11</source>
          <target>11</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.2.4</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <mainClass>com.example.App</mainClass>
                </transformer>
              </transformers>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```
This `pom.xml` file includes the necessary dependencies (MongoDB Java driver and Logback for logging) and configures the Maven Shade plugin to create an executable JAR.
#### Create a configuration file
Create a file named `mongodb.properties` in the `src/main/resources` directory:
```ini title="mongodb.properties"
mongodb.uri=mongodb://your_mongodb_host:27017
mongodb.database=your_database_name
```
Replace the placeholders with your actual MongoDB credentials from the Database app in Sealos.
#### Create Java classes
Create the following Java classes in the `src/main/java/com/example` directory:
1. `MongoConfig.java`:
```java title="MongoConfig.java"
package com.example;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class MongoConfig {
    private static final Properties properties = new Properties();

    static {
        try (InputStream input = MongoConfig.class.getClassLoader().getResourceAsStream("mongodb.properties")) {
            if (input == null) {
                System.out.println("Sorry, unable to find mongodb.properties");
                System.exit(1);
            }
            properties.load(input);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static String getMongoUri() {
        return properties.getProperty("mongodb.uri");
    }

    public static String getDatabase() {
        return properties.getProperty("mongodb.database");
    }
}
```
This class loads the MongoDB connection details from the `mongodb.properties` file.
2. `Employee.java`:
```java title="Employee.java"
package com.example;

import org.bson.Document;

public class Employee {
    private String id;
    private String name;
    private String position;

    public Employee(String name, String position) {
        this.name = name;
        this.position = position;
    }

    public Employee(Document doc) {
        this.id = doc.getObjectId("_id").toString();
        this.name = doc.getString("name");
        this.position = doc.getString("position");
    }

    public Document toDocument() {
        return new Document("name", name)
                .append("position", position);
    }

    @Override
    public String toString() {
        return "Employee{" +
                "id='" + id + '\'' +
                ", name='" + name + '\'' +
                ", position='" + position + '\'' +
                '}';
    }
}
```
This class represents an Employee document in MongoDB.
3. `App.java`:
```java title="App.java"
package com.example;

import com.mongodb.client.*;
import org.bson.Document;

public class App {
    public static void main(String[] args) {
        try (MongoClient mongoClient = MongoClients.create(MongoConfig.getMongoUri())) {
            MongoDatabase database = mongoClient.getDatabase(MongoConfig.getDatabase());
            MongoCollection<Document> collection = database.getCollection("employees");
            System.out.println("Connected to MongoDB");

            // Insert a document
            Employee newEmployee = new Employee("John Doe", "Developer");
            collection.insertOne(newEmployee.toDocument());
            System.out.println("Inserted a new employee");

            // Find all documents
            System.out.println("All employees:");
            try (MongoCursor<Document> cursor = collection.find().iterator()) {
                while (cursor.hasNext()) {
                    Employee employee = new Employee(cursor.next());
                    System.out.println(employee);
                }
            }

            // Update a document
            Document query = new Document("name", "John Doe");
            Document update = new Document("$set", new Document("position", "Senior Developer"));
            collection.updateOne(query, update);
            System.out.println("Updated John Doe's position");

            // Delete a document
            Document deleteQuery = new Document("name", "John Doe");
            collection.deleteOne(deleteQuery);
            System.out.println("Deleted John Doe from the database");
        } catch (Exception e) {
            System.err.println("Error connecting to MongoDB: " + e.getMessage());
        }
    }
}
```
This is the main class that demonstrates basic MongoDB operations using the Java driver:
* It connects to the MongoDB database.
* It inserts a new employee document.
* It finds and prints all employee documents.
* It updates an employee's position.
* It deletes an employee document.
## Build and Run
To build and run the project, use the following commands in your terminal:
```bash
mvn clean package
java -jar target/mongodb-java-example-1.0-SNAPSHOT.jar
```
If everything is set up correctly, you should see output demonstrating the MongoDB operations.
## Best Practices
1. Use a properties file to store MongoDB connection details.
2. Implement a configuration class to load and provide access to MongoDB properties.
3. Use the try-with-resources statement to ensure that the MongoClient is properly closed.
4. Handle exceptions appropriately and provide meaningful error messages.
5. Use Maven for dependency management and build automation.
## Troubleshooting
If you encounter connection issues:
1. Verify your MongoDB credentials in the `mongodb.properties` file.
2. Ensure your MongoDB database is running and accessible from your DevBox environment.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the MongoDB Java driver dependency is correctly specified in your `pom.xml` file.
5. Make sure you're using the correct version of Java (11 in this example).
For more detailed information on using MongoDB with Java, refer to the [MongoDB Java Driver documentation](https://mongodb.github.io/mongo-java-driver/).
file: ./content/docs/guides/databases/mongodb/nodejs.en.mdx
meta: {
"title": "Node.js",
"description": "Learn how to connect to MongoDB databases in Sealos DevBox using Node.js"
}
This guide will walk you through the process of connecting to a MongoDB database using Node.js within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Node.js environment
* [A MongoDB database created using the Database app in Sealos](./)
## Install Required Packages
In your Cursor terminal, install the necessary packages:
```bash
npm install mongodb dotenv
```
This command installs:
* `mongodb`: The official MongoDB driver for Node.js
* `dotenv`: A zero-dependency module that loads environment variables from a `.env` file
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our database connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
MONGO_URI=mongodb://your_username:your_password@your_database_host:27017/your_database_name?authSource=admin
```
Replace the placeholders with your actual MongoDB credentials from the Database app in Sealos.
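If your password contains reserved characters such as `@` or `:`, percent-encode the credentials before placing them in the URI, or connection-string parsing will fail. A minimal sketch (the helper name and all values below are placeholders):

```javascript
// Percent-encode the username and password so reserved characters like '@'
// or ':' don't break connection-string parsing. All values are placeholders.
function buildMongoUri({ user, password, host, port = 27017, db }) {
  const u = encodeURIComponent(user);
  const p = encodeURIComponent(password);
  return `mongodb://${u}:${p}@${host}:${port}/${db}?authSource=admin`;
}

console.log(buildMongoUri({
  user: 'app_user',
  password: 'p@ss:word',
  host: 'your_database_host',
  db: 'your_database_name'
}));
// → mongodb://app_user:p%40ss%3Aword@your_database_host:27017/your_database_name?authSource=admin
```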
#### Create a connection file
Next, create a file named `db.js` with the following content:
```javascript title="db.js"
const { MongoClient } = require('mongodb');
require('dotenv').config();

const uri = process.env.MONGO_URI;
const client = new MongoClient(uri);

async function connectToDatabase() {
  try {
    await client.connect();
    console.log('Connected to MongoDB');
    return client.db();
  } catch (error) {
    console.error('Error connecting to MongoDB:', error);
    process.exit(1);
  }
}

module.exports = { connectToDatabase, client };
```
This file creates a MongoDB client and exports a function to connect to the database.
#### Create database operations
Now, let's create a file named `dbOperations.js` to handle our database operations:
```javascript title="dbOperations.js"
const { connectToDatabase, client } = require('./db');

async function createDocument(collection, document) {
  const db = await connectToDatabase();
  const result = await db.collection(collection).insertOne(document);
  console.log(`Document inserted with _id: ${result.insertedId}`);
  return result.insertedId;
}

async function readDocuments(collection, query = {}) {
  const db = await connectToDatabase();
  const documents = await db.collection(collection).find(query).toArray();
  console.log('Documents found:', documents);
  return documents;
}

async function updateDocument(collection, filter, update) {
  const db = await connectToDatabase();
  const result = await db.collection(collection).updateOne(filter, { $set: update });
  console.log(`${result.modifiedCount} document(s) updated`);
  return result.modifiedCount;
}

async function deleteDocument(collection, filter) {
  const db = await connectToDatabase();
  const result = await db.collection(collection).deleteOne(filter);
  console.log(`${result.deletedCount} document(s) deleted`);
  return result.deletedCount;
}

module.exports = {
  createDocument,
  readDocuments,
  updateDocument,
  deleteDocument
};
```
#### Create a main script
Finally, let's create a `main.js` file to demonstrate all the operations:
```javascript title="main.js"
const {
  createDocument,
  readDocuments,
  updateDocument,
  deleteDocument
} = require('./dbOperations');
const { client } = require('./db');

async function main() {
  try {
    // Create a document
    const newEmployeeId = await createDocument('employees', { name: 'John Doe', position: 'Developer' });

    // Read all documents
    await readDocuments('employees');

    // Update a document
    await updateDocument('employees', { _id: newEmployeeId }, { position: 'Senior Developer' });

    // Read the updated document
    await readDocuments('employees', { _id: newEmployeeId });

    // Delete the document
    await deleteDocument('employees', { _id: newEmployeeId });

    // Confirm deletion
    await readDocuments('employees');
  } catch (error) {
    console.error('An error occurred:', error);
  } finally {
    await client.close();
  }
}

main();
```
## Usage
To run the script, use the following command in your Cursor terminal:
```bash
node main.js
```
This will execute all the operations defined in the `main` function, demonstrating the connection to the database, document creation, reading, updating, and deletion.
## Best Practices
1. Use environment variables for database credentials.
2. Use connection pooling for better performance (MongoDB driver handles this automatically).
3. Always handle potential errors using try-catch blocks.
4. Close the database connection after operations are complete.
5. Use indexes for frequently queried fields to improve performance.
## Troubleshooting
If you encounter connection issues:
1. Verify your MongoDB credentials in the `.env` file.
2. Ensure your MongoDB database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the `mongodb` package is correctly installed.
For more detailed information on using MongoDB with Node.js, refer to the [official MongoDB Node.js driver documentation](https://docs.mongodb.com/drivers/node/).
file: ./content/docs/guides/databases/mongodb/php.en.mdx
meta: {
"title": "PHP",
"description": "Learn how to connect to MongoDB databases in Sealos DevBox using PHP"
}
This guide will walk you through the process of connecting to a MongoDB database using PHP within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with PHP environment
* [A MongoDB database created using the Database app in Sealos](./)
## Install Required Extensions
In your Cursor terminal, ensure that the MongoDB extension for PHP is installed:
```bash
sudo pecl install mongodb
```
## Install the MongoDB PHP Library
To install the MongoDB PHP Library, run the following command in your project directory:
```bash
composer require mongodb/mongodb
```
## Connection Setup
#### Create a Configuration File
First, let's create a configuration file to store our database connection parameters. Create a file named `config.php` in your project directory with the following content:
```php title="config.php"
<?php
return [
    'host' => 'your_mongodb_host',
    'port' => '27017',
    'database' => 'your_database_name',
    'username' => 'your_username',
    'password' => 'your_password'
];
```
Replace the placeholders with your actual MongoDB credentials from the Database app in Sealos.
#### Create a Database Connection Function
Next, let's create a PHP file that will handle the database connection. Create a file named `db_connect.php` with the following content:
```php title="db_connect.php"
<?php
require_once __DIR__ . '/vendor/autoload.php';

function connectToDatabase() {
    $config = require 'config.php';
    try {
        $client = new MongoDB\Client(
            "mongodb://{$config['username']}:{$config['password']}@{$config['host']}:{$config['port']}"
        );
        return $client->selectDatabase($config['database']);
    } catch (Exception $e) {
        die("Connection failed: " . $e->getMessage());
    }
}
```
This function reads the configuration from `config.php` and establishes a connection to the MongoDB database.
#### Create a Test Script
Now, let's create a test script to verify our connection and perform some basic database operations. Create a file named `test_mongodb.php` with the following content:
```php title="test_mongodb.php"
<?php
require_once 'db_connect.php';

$db = connectToDatabase();

// Insert a document
$collection = $db->employees;
$insertResult = $collection->insertOne([
    'name' => 'John Doe',
    'position' => 'Developer'
]);
echo "Inserted document with ID: " . $insertResult->getInsertedId() . "\n";

// Find documents
$cursor = $collection->find();
echo "Employees:\n";
foreach ($cursor as $document) {
    echo "ID: " . $document['_id'] . ", Name: " . $document['name'] . ", Position: " . $document['position'] . "\n";
}

// Update a document
$updateResult = $collection->updateOne(
    ['name' => 'John Doe'],
    ['$set' => ['position' => 'Senior Developer']]
);
echo "Modified " . $updateResult->getModifiedCount() . " document(s)\n";

// Delete a document
$deleteResult = $collection->deleteOne(['name' => 'John Doe']);
echo "Deleted " . $deleteResult->getDeletedCount() . " document(s)\n";
```
## Usage
To run the test script, use the following command in your Cursor terminal:
```bash
php test_mongodb.php
```
This will execute the script, demonstrating the connection to the database, document insertion, querying, updating, and deletion.
## Best Practices
1. Use environment variables or a separate configuration file for database credentials.
2. Always handle potential errors using try-catch blocks.
3. Use the MongoDB PHP library for better performance and features.
4. Close the database connection after operations are complete (in this case, it's handled automatically).
5. Use appropriate indexing for frequently queried fields to improve performance.
## Troubleshooting
If you encounter connection issues:
1. Verify your database credentials in the `config.php` file.
2. Ensure your MongoDB database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the MongoDB PHP extension is correctly installed.
For more detailed information on using MongoDB with PHP, refer to the [official MongoDB PHP library documentation](https://docs.mongodb.com/drivers/php/).
file: ./content/docs/guides/databases/mongodb/python.en.mdx
meta: {
"title": "Python",
"description": "Learn how to connect to MongoDB databases in Sealos DevBox using Python"
}
This guide will walk you through the process of connecting to a MongoDB database using Python within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Python environment
* [A MongoDB database created using the Database app in Sealos](./)
## Activating the Python Environment
Before you start, you need to activate the Python virtual environment in your DevBox. Open the terminal within Cursor IDE and run:
```bash
source ./bin/activate
```
You should see your prompt change, indicating that the virtual environment is now active.
## Installing Required Packages
In your Cursor terminal, install the necessary packages:
```bash
pip install pymongo python-dotenv
```
This command installs:
* `pymongo`: The official MongoDB driver for Python
* `python-dotenv`: A Python package that allows you to load environment variables from a .env file
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our database connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
MONGO_URI=mongodb://your_username:your_password@your_database_host:27017/your_database_name?authSource=admin
```
Replace the placeholders with your actual MongoDB credentials from the Database app in Sealos.
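If your username or password contains reserved characters such as `@`, `:`, or `/`, PyMongo requires them to be percent-escaped before they appear in the URI; `urllib.parse.quote_plus` handles this. A short sketch (the helper name and all values are placeholders):

```python
from urllib.parse import quote_plus

# Percent-escape credentials before building the URI, as PyMongo requires
# when they contain reserved characters. All values here are placeholders.
def build_mongo_uri(user, password, host, db, port=27017):
    return (
        f"mongodb://{quote_plus(user)}:{quote_plus(password)}"
        f"@{host}:{port}/{db}?authSource=admin"
    )

print(build_mongo_uri("app_user", "p@ss:word", "your_database_host", "your_database_name"))
# → mongodb://app_user:p%40ss%3Aword@your_database_host:27017/your_database_name?authSource=admin
```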
#### Create a database connection module
Create a new file named `db_connection.py` with the following content:
```python title="db_connection.py"
import os
from dotenv import load_dotenv
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

# Load environment variables
load_dotenv()

def get_db_connection():
    try:
        client = MongoClient(os.getenv('MONGO_URI'))
        # The ping command is cheap and does not require auth.
        client.admin.command('ping')
        print("Successfully connected to MongoDB")
        return client
    except ConnectionFailure:
        print("Server not available")
        return None

def close_connection(client):
    if client:
        client.close()
        print("MongoDB connection closed")
```
This module provides two main functions:
1. `get_db_connection()`: This function establishes a connection to the MongoDB database using the credentials stored in the environment variables. It returns the client object if successful, or None if an error occurs.
2. `close_connection(client)`: This function closes the database connection when it's no longer needed.
#### Create a test script
Now, let's create a test script to verify our connection and perform some basic database operations. Create a file named `test_mongodb.py` with the following content:
```python title="test_mongodb.py"
from db_connection import get_db_connection, close_connection

def insert_document(collection, document):
    result = collection.insert_one(document)
    print(f"Inserted document with ID: {result.inserted_id}")

def find_documents(collection, query={}):
    documents = collection.find(query)
    for doc in documents:
        print(doc)

def update_document(collection, query, update):
    result = collection.update_one(query, {'$set': update})
    print(f"Modified {result.modified_count} document(s)")

def delete_document(collection, query):
    result = collection.delete_one(query)
    print(f"Deleted {result.deleted_count} document(s)")

def main():
    client = get_db_connection()
    if client:
        try:
            db = client.get_database()
            collection = db['test_collection']

            # Insert a document
            insert_document(collection, {'name': 'John Doe', 'age': 30})

            # Find all documents
            print("\nAll documents:")
            find_documents(collection)

            # Update a document
            update_document(collection, {'name': 'John Doe'}, {'age': 31})

            # Find the updated document
            print("\nUpdated document:")
            find_documents(collection, {'name': 'John Doe'})

            # Delete the document
            delete_document(collection, {'name': 'John Doe'})

            # Verify deletion
            print("\nAfter deletion:")
            find_documents(collection)
        finally:
            close_connection(client)

if __name__ == "__main__":
    main()
```
This script demonstrates basic CRUD operations:
1. Inserting a document
2. Finding documents
3. Updating a document
4. Deleting a document
## Running the Test Script
To run the test script, make sure your virtual environment is activated, then execute:
```bash
python test_mongodb.py
```
If everything is set up correctly, you should see output indicating successful connection, document insertion, retrieval, update, and deletion.
## Best Practices
1. Always activate the virtual environment before running your Python scripts or installing packages.
2. Use environment variables to store sensitive information like database credentials.
3. Close database connections after use to free up resources.
4. Use try-except blocks to handle potential errors gracefully.
5. Use PyMongo's built-in methods for database operations to ensure proper handling of MongoDB-specific data types.
## Troubleshooting
If you encounter connection issues:
1. Ensure you've activated the virtual environment with `source ./bin/activate`.
2. Verify that your MongoDB database is running and accessible.
3. Double-check your database credentials in the `.env` file.
4. Check the MongoDB logs in the Database app for any error messages.
For more detailed information on using MongoDB with Python, refer to the [official PyMongo documentation](https://pymongo.readthedocs.io/).
file: ./content/docs/guides/databases/mongodb/rust.en.mdx
meta: {
"title": "Rust",
"description": "Learn how to connect to MongoDB databases in Sealos DevBox using Rust"
}
This guide will walk you through the process of connecting to a MongoDB database using Rust within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Rust environment
* [A MongoDB database created using the Database app in Sealos](./)
## Install Required Dependencies
In your Cursor terminal, add the necessary dependencies to your `Cargo.toml` file:
```toml title="Cargo.toml"
[dependencies]
mongodb = "3.1.0"
tokio = { version = "1.28", features = ["full"] }
dotenv = "0.15"
serde = { version = "1.0", features = ["derive"] }
futures-util = "0.3"
```
These dependencies include:
* `mongodb`: The official MongoDB driver for Rust
* `tokio`: An asynchronous runtime for Rust
* `dotenv`: A library for loading environment variables from a file
* `serde`: A framework for serializing and deserializing Rust data structures
* `futures-util`: Provides utility types for working with futures, including `StreamExt` which we'll use for cursor iteration
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our database connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
MONGODB_URI=mongodb://your_username:your_password@your_database_host:27017/your_database_name?authSource=admin
```
Replace the placeholders with your actual MongoDB credentials from the Database app in Sealos.
#### Create the main.rs file
Create a new file named `src/main.rs` with the following content:
```rust title="src/main.rs"
use mongodb::{Client, options::ClientOptions};
use mongodb::bson::doc;
use dotenv::dotenv;
use std::env;
use serde::{Serialize, Deserialize};
use futures_util::stream::TryStreamExt;

#[derive(Debug, Serialize, Deserialize)]
struct Employee {
    name: String,
    position: String,
}

#[tokio::main]
async fn main() -> mongodb::error::Result<()> {
    // Load environment variables from the .env file
    dotenv().ok();

    // Get the MongoDB URI from the environment
    let mongodb_uri = env::var("MONGODB_URI").expect("MONGODB_URI must be set");

    // Parse the connection string into an options struct
    let mut client_options = ClientOptions::parse(mongodb_uri).await?;

    // Manually set an option
    client_options.app_name = Some("Sealos DevBox Rust App".to_string());

    // Get a handle to the deployment
    let client = Client::with_options(client_options)?;

    // Get a handle to the database specified in the connection string
    let db = client.default_database()
        .expect("No default database found in the connection string");

    // Get a handle to a typed collection in the database
    let collection = db.collection::<Employee>("employees");

    // Insert a document
    let new_employee = Employee {
        name: "John Doe".to_string(),
        position: "Developer".to_string(),
    };
    let insert_result = collection.insert_one(new_employee).await?;
    println!("Inserted document with ID: {:?}", insert_result.inserted_id);

    // Query the documents in the collection
    let mut cursor = collection.find(doc! {}).await?;

    // Iterate over the results of the cursor
    while let Some(employee) = cursor.try_next().await? {
        println!("Found employee: {:?}", employee);
    }

    Ok(())
}
```
Let's break down the main components of this code:
1. **Imports**: We import necessary modules from `mongodb`, `dotenv`, `std::env`, and `serde`.
2. **Employee struct**: We define a struct to represent our data, using Serde for serialization and deserialization.
3. **Main function**: The `main` function is marked with `#[tokio::main]` to use Tokio's async runtime.
4. **Environment setup**: We load environment variables from the `.env` file and retrieve the MongoDB URI.
5. **Connection**: We create a MongoDB client using the URI and connect to the database.
6. **Data insertion**: We insert a sample employee into the database.
7. **Data querying**: We query and display all employees in the database.
## Usage
To run the application, use the following command in your Cursor terminal:
```bash
cargo run
```
This will compile and execute the `main` function, demonstrating the connection to the database, document insertion, and querying.
## Best Practices
1. Use environment variables for database credentials.
2. Use the `dotenv` crate to manage environment variables in development.
3. Implement proper error handling using Rust's `Result` type.
4. Use Serde for serializing and deserializing data structures.
5. Use async/await for efficient database operations.
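Practice 3 can be sketched without the MongoDB driver at all. The snippet below (a minimal sketch, not part of the guide's code) replaces the `expect()` call from `main.rs` with a function that returns a `Result`, so the caller decides how to react to a missing `MONGODB_URI` instead of panicking:

```rust
use std::env;

// Hedged sketch: surface a missing configuration value as a Result
// instead of panicking with expect(). A plain String error type is
// used here for brevity.
fn mongodb_uri() -> Result<String, String> {
    env::var("MONGODB_URI").map_err(|_| String::from("MONGODB_URI must be set"))
}

fn main() {
    // Handle the error at the call site rather than crashing.
    match mongodb_uri() {
        Ok(uri) => println!("Connecting with URI from environment: {uri}"),
        Err(e) => eprintln!("Configuration error: {e}"),
    }
}
```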
## Troubleshooting
If you encounter connection issues:
1. Verify your MongoDB credentials in the `.env` file.
2. Ensure your MongoDB database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that all required dependencies are correctly specified in your `Cargo.toml` file.
For more detailed information on using MongoDB with Rust, refer to the [MongoDB Rust driver documentation](https://docs.rs/mongodb/latest/mongodb/).
file: ./content/docs/guides/databases/mysql/go.en.mdx
meta: {
"title": "Go",
"description": "Learn how to connect to MySQL databases in Sealos DevBox using Go"
}
This guide will walk you through the process of connecting to a MySQL database using Go within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Go environment
* [A MySQL database created using the Database app in Sealos](./)
## Install Required Packages
In your Cursor terminal, install the necessary packages:
```bash
go get github.com/go-sql-driver/mysql
go get github.com/joho/godotenv
```
These commands install:
* `github.com/go-sql-driver/mysql`: A MySQL driver for Go's database/sql package
* `github.com/joho/godotenv`: A Go port of the Ruby dotenv library
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our database connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
DB_HOST=your_database_host
DB_PORT=3306
DB_USER=your_username
DB_PASSWORD=your_password
DB_NAME=your_database_name
```
Replace the placeholders with your actual MySQL credentials from the Database app in Sealos.
#### Create the main.go file
Create a new file named `main.go` with the following content:
```go title="main.go"
package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"

	_ "github.com/go-sql-driver/mysql"
	"github.com/joho/godotenv"
)

// Employee struct represents the structure of our data
type Employee struct {
	ID       int
	Name     string
	Position string
}

// connectDB establishes a connection to the MySQL database
func connectDB() (*sql.DB, error) {
	// Load environment variables from .env file
	err := godotenv.Load()
	if err != nil {
		log.Fatal("Error loading .env file")
	}

	// Retrieve database connection details from environment variables
	dbHost := os.Getenv("DB_HOST")
	dbPort := os.Getenv("DB_PORT")
	dbUser := os.Getenv("DB_USER")
	dbPassword := os.Getenv("DB_PASSWORD")
	dbName := os.Getenv("DB_NAME")

	// First, connect without specifying the database
	dsnWithoutDB := fmt.Sprintf("%s:%s@tcp(%s:%s)/", dbUser, dbPassword, dbHost, dbPort)
	db, err := sql.Open("mysql", dsnWithoutDB)
	if err != nil {
		return nil, err
	}

	// Create the database if it doesn't exist
	_, err = db.Exec("CREATE DATABASE IF NOT EXISTS " + dbName)
	if err != nil {
		return nil, err
	}

	// Close the connection and reconnect with the database specified
	db.Close()
	dsn := fmt.Sprintf("%s:%s@tcp(%s:%s)/%s", dbUser, dbPassword, dbHost, dbPort, dbName)
	db, err = sql.Open("mysql", dsn)
	if err != nil {
		return nil, err
	}

	// Verify the connection
	err = db.Ping()
	if err != nil {
		return nil, err
	}

	fmt.Println("Successfully connected to the database")
	return db, nil
}

// createTable creates the employees table if it doesn't exist
func createTable(db *sql.DB) error {
	_, err := db.Exec(`
		CREATE TABLE IF NOT EXISTS employees (
			id INT AUTO_INCREMENT PRIMARY KEY,
			name VARCHAR(100) NOT NULL,
			position VARCHAR(100) NOT NULL
		)
	`)
	return err
}

// insertEmployee inserts a new employee into the database
func insertEmployee(db *sql.DB, name, position string) error {
	_, err := db.Exec("INSERT INTO employees (name, position) VALUES (?, ?)", name, position)
	return err
}

// getEmployees retrieves all employees from the database
func getEmployees(db *sql.DB) ([]Employee, error) {
	rows, err := db.Query("SELECT id, name, position FROM employees")
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var employees []Employee
	for rows.Next() {
		var emp Employee
		err := rows.Scan(&emp.ID, &emp.Name, &emp.Position)
		if err != nil {
			return nil, err
		}
		employees = append(employees, emp)
	}
	return employees, nil
}

func main() {
	// Connect to the database
	db, err := connectDB()
	if err != nil {
		log.Fatal(err)
	}

	// Ensure the database connection is closed when the function exits
	defer func() {
		if err := db.Close(); err != nil {
			log.Printf("Error closing database connection: %v", err)
		} else {
			fmt.Println("Database connection closed successfully")
		}
	}()

	// Create the employees table
	err = createTable(db)
	if err != nil {
		log.Fatal(err)
	}

	// Insert sample employees
	err = insertEmployee(db, "John Doe", "Developer")
	if err != nil {
		log.Fatal(err)
	}
	err = insertEmployee(db, "Jane Smith", "Designer")
	if err != nil {
		log.Fatal(err)
	}

	// Retrieve and display all employees
	employees, err := getEmployees(db)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Employees:")
	for _, emp := range employees {
		fmt.Printf("ID: %d, Name: %s, Position: %s\n", emp.ID, emp.Name, emp.Position)
	}

	// The database connection will be closed automatically when main() exits
	// due to the defer statement at the beginning of the function
}
```
Let's break down the main components of this code:
1. **Imports**: We import necessary packages, including `database/sql` for database operations and `github.com/go-sql-driver/mysql` as the MySQL driver.
2. **Employee struct**: Defines the structure for our employee data.
3. **connectDB function**: Loads environment variables, constructs the connection string, and establishes a connection to the database.
4. **createTable function**: Creates the `employees` table if it doesn't exist.
5. **insertEmployee function**: Inserts a new employee into the database.
6. **getEmployees function**: Retrieves all employees from the database.
7. **main function**: Orchestrates the program flow, demonstrating database connection, table creation, data insertion, and retrieval.
## Usage
To run the application, use the following command in your Cursor terminal:
```bash
go run main.go
```
This will execute the `main` function, demonstrating the connection to the database, table creation, data insertion, and querying.
## Best Practices
1. Use environment variables for database credentials.
2. Always handle potential errors using proper error checking.
3. Close the database connection after operations are complete.
4. Use prepared statements for queries to prevent SQL injection.
5. Consider using a connection pool for better performance in production environments.
## Troubleshooting
If you encounter connection issues:
1. Verify your database credentials in the `.env` file.
2. Ensure your MySQL database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the required packages are correctly installed.
For more detailed information on using MySQL with Go, refer to the [go-sql-driver/mysql documentation](https://github.com/go-sql-driver/mysql).
file: ./content/docs/guides/databases/mysql/index.en.mdx
meta: {
"title": "MySQL",
"description": "Deploy and connect to MySQL databases in Sealos DevBox"
}
MySQL is a popular, open-source relational database management system. In Sealos DevBox, you can easily set up and connect to MySQL databases for your development projects.
## Deploy MySQL in Sealos
Sealos makes it easy to deploy a MySQL database with just a few clicks. Follow these steps:
From the Sealos desktop, click on the "Database" icon to open the Database app.

Click on the "Create New Database" button. In the deployment form:
* Select "MySQL" as the database type.
* Choose the desired MySQL version (e.g., ac-mysql-8.0.30).
* Enter a name for your database (use lowercase letters and numbers only).
* Adjust the CPU and Memory sliders to set the resources for your database.
* Set the number of replicas (1 for single-node development and testing).
* Specify the storage size (e.g., 3 Gi).

Review the projected cost on the left sidebar. Click the "Deploy" button in the top right corner to create your MySQL database.
Once deployed, Sealos will provide you with the necessary connection details.

## Connect to MySQL in DevBox
Here are examples of how to connect to your MySQL database using different programming languages and frameworks within your DevBox environment:
file: ./content/docs/guides/databases/mysql/java.en.mdx
meta: {
"title": "Java",
"description": "Learn how to connect to MySQL databases in Sealos DevBox using Java"
}
This guide will walk you through the process of connecting to a MySQL database using Java within your Sealos DevBox project, including basic CRUD (Create, Read, Update, Delete) operations.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Java environment
* [A MySQL database created using the Database app in Sealos](./)
## Setup
#### Download MySQL JDBC Driver
To connect to the MySQL server from a Java program, you need a MySQL JDBC driver.
You can download the latest version of the driver from the [MySQL Connector/J download page](https://dev.mysql.com/downloads/connector/j/). The downloaded file is a JAR file, e.g., mysql-connector-j-9.0.0.jar.
#### Create a database configuration file
Create a file named `db.properties` in your project directory with the following content:
```ini title="db.properties"
db.url=jdbc:mysql://your_database_host:3306/your_database_name
db.username=your_username
db.password=your_password
```
Replace the placeholders with your actual MySQL credentials from the Database app in Sealos.
#### Create a DatabaseConfig class
Create a new file named `DatabaseConfig.java` with the following content:
```java title="DatabaseConfig.java"
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class DatabaseConfig {
    private static final Properties properties = new Properties();

    static {
        try (InputStream input = DatabaseConfig.class.getClassLoader().getResourceAsStream("db.properties")) {
            if (input == null) {
                System.out.println("Sorry, unable to find db.properties");
                System.exit(1);
            }
            properties.load(input);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static String getDbUrl() {
        return properties.getProperty("db.url");
    }

    public static String getDbUsername() {
        return properties.getProperty("db.username");
    }

    public static String getDbPassword() {
        return properties.getProperty("db.password");
    }
}
```
The `DatabaseConfig` class is responsible for loading database configuration from the `db.properties` file. It has three static methods that expose the database configuration:
* `getDbUrl()` – Returns the database URL.
* `getDbUsername()` – Returns the username.
* `getDbPassword()` – Returns the password.
This class ensures that sensitive database credentials are not hardcoded in the application.
#### Create an Employee class
Create a new file named `Employee.java` with the following content:
```java title="Employee.java"
public class Employee {
    private int id;
    private String name;
    private String position;

    public Employee(int id, String name, String position) {
        this.id = id;
        this.name = name;
        this.position = position;
    }

    // Getters and setters
    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getPosition() { return position; }
    public void setPosition(String position) { this.position = position; }

    @Override
    public String toString() {
        return "Employee{" +
                "id=" + id +
                ", name='" + name + '\'' +
                ", position='" + position + '\'' +
                '}';
    }
}
```
The `Employee` class represents the data model for an employee. It includes fields for id, name, and position, along with a constructor, getters, setters, and a `toString` method for easy printing of employee information.
#### Create a DB class
Create a new file named `DB.java` with the following content:
```java title="DB.java"
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class DB {
    public static Connection getConnection() throws SQLException {
        String jdbcUrl = DatabaseConfig.getDbUrl();
        String user = DatabaseConfig.getDbUsername();
        String password = DatabaseConfig.getDbPassword();
        return DriverManager.getConnection(jdbcUrl, user, password);
    }

    public static void createTable() throws SQLException {
        String sql = "CREATE TABLE IF NOT EXISTS employees (" +
                "id INT AUTO_INCREMENT PRIMARY KEY," +
                "name VARCHAR(100) NOT NULL," +
                "position VARCHAR(100) NOT NULL)";
        try (Connection conn = getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute(sql);
        }
    }

    public static void insertEmployee(String name, String position) throws SQLException {
        String sql = "INSERT INTO employees (name, position) VALUES (?, ?)";
        try (Connection conn = getConnection();
             PreparedStatement pstmt = conn.prepareStatement(sql)) {
            pstmt.setString(1, name);
            pstmt.setString(2, position);
            pstmt.executeUpdate();
        }
    }

    public static List<Employee> getEmployees() throws SQLException {
        List<Employee> employees = new ArrayList<>();
        String sql = "SELECT id, name, position FROM employees";
        try (Connection conn = getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                employees.add(new Employee(
                        rs.getInt("id"),
                        rs.getString("name"),
                        rs.getString("position")
                ));
            }
        }
        return employees;
    }

    public static void updateEmployee(int id, String name, String position) throws SQLException {
        String sql = "UPDATE employees SET name = ?, position = ? WHERE id = ?";
        try (Connection conn = getConnection();
             PreparedStatement pstmt = conn.prepareStatement(sql)) {
            pstmt.setString(1, name);
            pstmt.setString(2, position);
            pstmt.setInt(3, id);
            pstmt.executeUpdate();
        }
    }

    public static void deleteEmployee(int id) throws SQLException {
        String sql = "DELETE FROM employees WHERE id = ?";
        try (Connection conn = getConnection();
             PreparedStatement pstmt = conn.prepareStatement(sql)) {
            pstmt.setInt(1, id);
            pstmt.executeUpdate();
        }
    }
}
```
The `DB` class is responsible for database operations:
* The `getConnection()` method connects to the MySQL database using the connection parameters from `DatabaseConfig`.
* It returns a `Connection` object if successful, or throws a `SQLException` if there's an error.
* Other methods (`createTable`, `insertEmployee`, etc.) use this connection to perform CRUD operations.
* Each method opens a new connection, performs its operation, and then closes the connection using try-with-resources, ensuring proper resource management.
#### Create the main Java program
Create a new file named `Main.java` with the following content:
```java title="Main.java"
import java.sql.SQLException;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        try {
            System.out.println("Connecting to the MySQL database...");

            // Create the employees table
            DB.createTable();
            System.out.println("Employees table created (if not exists).");

            // Insert sample employees
            DB.insertEmployee("John Doe", "Developer");
            DB.insertEmployee("Jane Smith", "Designer");
            System.out.println("Sample employees inserted.");

            // Retrieve and display all employees
            List<Employee> employees = DB.getEmployees();
            System.out.println("Employees:");
            for (Employee emp : employees) {
                System.out.println(emp);
            }

            // Update an employee
            DB.updateEmployee(1, "John Doe", "Senior Developer");
            System.out.println("Employee updated.");

            // Delete an employee
            DB.deleteEmployee(2);
            System.out.println("Employee deleted.");

            // Display updated employee list
            employees = DB.getEmployees();
            System.out.println("\nUpdated Employees:");
            for (Employee emp : employees) {
                System.out.println(emp);
            }
        } catch (SQLException e) {
            System.err.println("Database operation error: " + e.getMessage());
        }
    }
}
```
The `Main` class demonstrates the usage of the `DB` class to perform various database operations:
* It creates a table, inserts sample data, retrieves and displays employees, updates an employee, deletes an employee, and displays the updated list.
* Each operation is wrapped in a try-catch block to handle potential `SQLException`s.
* The program uses the methods from the `DB` class, which manage their own connections, ensuring that connections are properly opened and closed for each operation.
## Compile and Run
To compile and run the example, use the following commands in your terminal:
```bash
javac -cp .:mysql-connector-j-9.0.0.jar *.java
java -cp .:mysql-connector-j-9.0.0.jar Main
```
Make sure to replace `mysql-connector-j-9.0.0.jar` with the actual name of your MySQL JDBC driver JAR file.
If everything is set up correctly, you should see output demonstrating the CRUD operations on the employees table.
## Best Practices
1. Use a properties file to store database connection details.
2. Implement a configuration class to load and provide access to database properties.
3. Create a separate class for database connection management and operations.
4. Use try-with-resources to ensure proper closure of database connections.
5. Use prepared statements to prevent SQL injection.
6. Handle exceptions appropriately and provide meaningful error messages.
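Practice 4 can be illustrated without a database at all: any `AutoCloseable` declared in a try-with-resources header is closed automatically, even if the body throws. JDBC's `Connection`, `Statement`, and `ResultSet` all implement `AutoCloseable`; the `FakeConnection` class below is a stand-in for illustration, not part of JDBC:

```java
public class TryWithResourcesDemo {
    // Stand-in resource; JDBC's Connection implements AutoCloseable the same way.
    static class FakeConnection implements AutoCloseable {
        void query() { System.out.println("running query"); }
        @Override
        public void close() { System.out.println("connection closed"); }
    }

    public static void main(String[] args) {
        // close() runs when the block exits, with or without an exception
        try (FakeConnection conn = new FakeConnection()) {
            conn.query();
        }
    }
}
```

Compile and run it on its own with `javac TryWithResourcesDemo.java && java TryWithResourcesDemo` to see `close()` fire after the query.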
## Troubleshooting
If you encounter connection issues:
1. Verify your database credentials in the `db.properties` file.
2. Ensure your MySQL database is running and accessible from your DevBox environment.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the MySQL JDBC driver JAR file is in the same directory as your Java files.
For more detailed information on using MySQL with Java, refer to the [official MySQL Connector/J documentation](https://dev.mysql.com/doc/connector-j/en/).
file: ./content/docs/guides/databases/mysql/nodejs.en.mdx
meta: {
"title": "Node.js",
"description": "Learn how to connect to MySQL databases in Sealos DevBox using Node.js"
}
This guide will walk you through the process of connecting to a MySQL database using Node.js within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Node.js environment
* [A MySQL database created using the Database app in Sealos](./)
## Install Required Packages
In your Cursor terminal, install the necessary packages:
```bash
npm install mysql2 dotenv
```
This command installs:
* `mysql2`: A MySQL client for Node.js with a focus on performance
* `dotenv`: A zero-dependency module that loads environment variables from a `.env` file
## Connection Setup
#### Set up the environment and create a client
First, we'll create a `.env` file to store our database credentials and a configuration file to load them:
```ini title=".env"
DB_HOST=your_database_host
DB_USER=your_username
DB_PASSWORD=your_password
DB_NAME=your_database_name
DB_PORT=3306
```
Replace the placeholders with your actual MySQL credentials from the Database app in Sealos.
Next, create a file named `db.js` with the following content:
```javascript title="db.js"
const mysql = require('mysql2/promise');
require('dotenv').config();

const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  port: process.env.DB_PORT,
  waitForConnections: true,
  connectionLimit: 10,
  queueLimit: 0
});

module.exports = pool;
```
This creates a connection pool, which is more efficient for handling multiple database operations.
#### Create database operations
Now, let's create a file named `dbOperations.js` to handle our database operations:
```javascript title="dbOperations.js"
const pool = require('./db');

async function createTable() {
  const createTableQuery = `
    CREATE TABLE IF NOT EXISTS employees (
      id INT AUTO_INCREMENT PRIMARY KEY,
      name VARCHAR(100) NOT NULL,
      position VARCHAR(100) NOT NULL
    )
  `;
  await pool.query(createTableQuery);
  console.log('Table created successfully');
}

async function insertEmployee(name, position) {
  const insertQuery = 'INSERT INTO employees (name, position) VALUES (?, ?)';
  const [result] = await pool.query(insertQuery, [name, position]);
  console.log('Employee inserted:', result.insertId);
  return result.insertId;
}

async function updateEmployee(id, name, position) {
  const updateQuery = 'UPDATE employees SET name = ?, position = ? WHERE id = ?';
  const [result] = await pool.query(updateQuery, [name, position, id]);
  console.log('Employee updated:', result.affectedRows > 0);
  return result.affectedRows > 0;
}

async function getAllEmployees() {
  const selectQuery = 'SELECT * FROM employees';
  const [rows] = await pool.query(selectQuery);
  console.log('All employees:', rows);
  return rows;
}

async function deleteEmployee(id) {
  const deleteQuery = 'DELETE FROM employees WHERE id = ?';
  const [result] = await pool.query(deleteQuery, [id]);
  console.log('Employee deleted:', result.affectedRows > 0);
  return result.affectedRows > 0;
}

module.exports = {
  createTable,
  insertEmployee,
  updateEmployee,
  getAllEmployees,
  deleteEmployee
};
```
#### Create a main script
Finally, let's create a `main.js` file to demonstrate all the operations:
```javascript title="main.js"
const {
  createTable,
  insertEmployee,
  updateEmployee,
  getAllEmployees,
  deleteEmployee
} = require('./dbOperations');

async function main() {
  try {
    await createTable();

    const johnId = await insertEmployee('John Doe', 'Developer');
    await insertEmployee('Jane Smith', 'Designer');

    await updateEmployee(johnId, 'John Updated', 'Senior Developer');

    const employees = await getAllEmployees();
    console.log('Current employees:', employees);

    await deleteEmployee(johnId);

    const remainingEmployees = await getAllEmployees();
    console.log('Remaining employees:', remainingEmployees);
  } catch (error) {
    console.error('An error occurred:', error);
  } finally {
    process.exit();
  }
}

main();
```
## Usage
To run the script, use the following command in your Cursor terminal:
```bash
node main.js
```
This will execute all the operations defined in the `main` function, demonstrating the connection to the database, table creation, data insertion, updating, querying, and deletion.
## Best Practices
1. Use environment variables for database credentials.
2. Use connection pooling for better performance.
3. Use prepared statements to prevent SQL injection.
4. Always handle potential errors using try-catch blocks.
5. Close the database connection after operations are complete (in this case, the pool handles this automatically).
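Practices 4 and 5 combine into the shape below. The `pool` object here is a stub so the sketch runs standalone; with the real `mysql2` pool exported from `db.js`, the same `query()` and `end()` calls apply, and `pool.end()` drains and closes every pooled connection:

```javascript
// Stub standing in for require('./db'); mysql2's promise pool
// exposes the same query() and end() methods.
const pool = {
  query: async () => [[{ id: 1, name: 'John Doe', position: 'Developer' }]],
  end: async () => console.log('Pool closed'),
};

async function run() {
  try {
    const [rows] = await pool.query('SELECT * FROM employees');
    console.log(`Fetched ${rows.length} employee(s)`);
  } catch (err) {
    console.error('Query failed:', err.message);
  } finally {
    await pool.end(); // always release the pool, on success or failure
  }
}

run();
```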
## Troubleshooting
If you encounter connection issues:
1. Verify your database credentials in the `.env` file.
2. Ensure your MySQL database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the `mysql2` package is correctly installed.
For more detailed information on using MySQL with Node.js, refer to the [mysql2 documentation](https://github.com/sidorares/node-mysql2#readme).
file: ./content/docs/guides/databases/mysql/php.en.mdx
meta: {
"title": "PHP",
"description": "Learn how to connect to MySQL databases in Sealos DevBox using PHP"
}
This guide will walk you through the process of connecting to a MySQL database using PHP within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with PHP environment
* [A MySQL database created using the Database app in Sealos](./)
## Install Required Extensions
In your Cursor terminal, ensure that the MySQL extension for PHP is installed:
```bash
sudo apt-get update
sudo apt-get install php-mysql -y
```
## Connection Setup
#### Create a Configuration File
First, let's create a configuration file to store our database connection parameters. Create a file named `config.php` in your project directory with the following content:
```php title="config.php"
<?php
return [
    'host' => 'your_database_host',
    'port' => '3306',
    'dbname' => 'your_database_name',
    'user' => 'your_username',
    'password' => 'your_password'
];
```
Replace the placeholders with your actual MySQL credentials from the Database app in Sealos.
#### Create a Database Connection Function
Next, let's create a PHP file that will handle the database connection. Create a file named `db_connect.php` with the following content:
```php title="db_connect.php"
<?php
function get_db_connection() {
    $config = require 'config.php';

    $dsn = "mysql:host={$config['host']};port={$config['port']};dbname={$config['dbname']};charset=utf8mb4";

    try {
        $pdo = new PDO($dsn, $config['user'], $config['password'],
            [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
        echo "Connected successfully to the database.\n";
        return $pdo;
    } catch (PDOException $e) {
        die("Connection failed: " . $e->getMessage());
    }
}
```
This function reads the configuration from `config.php` and establishes a connection to the MySQL database using PDO (PHP Data Objects).
#### Create a Test Script
Now, let's create a test script to verify our connection and perform some basic database operations. Create a file named `test_db.php` with the following content:
```php title="test_db.php"
<?php
require_once 'db_connect.php';

$pdo = get_db_connection();

// Create a table
$pdo->exec("CREATE TABLE IF NOT EXISTS employees (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    position VARCHAR(100) NOT NULL
)");
echo "Table created successfully.\n";

// Insert a record
$stmt = $pdo->prepare("INSERT INTO employees (name, position) VALUES (?, ?)");
$stmt->execute(['John Doe', 'Developer']);
echo "Record inserted successfully.\n";

// Query the table
$stmt = $pdo->query("SELECT * FROM employees");
echo "Employees:\n";
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    echo "ID: {$row['id']}, Name: {$row['name']}, Position: {$row['position']}\n";
}

// Close the connection
$pdo = null;
```
This script demonstrates creating a table, inserting a record, and querying the table.
## Usage
To run the test script, use the following command in your Cursor terminal:
```bash
php test_db.php
```
This will execute the script, demonstrating the connection to the database, table creation, data insertion, and querying.
## Best Practices
1. Use environment variables or a separate configuration file for database credentials.
2. Always use prepared statements to prevent SQL injection.
3. Handle potential errors using try-catch blocks.
4. Close the database connection after operations are complete.
5. Use connection pooling for better performance in production environments.
## Troubleshooting
If you encounter connection issues:
1. Verify your database credentials in the `config.php` file.
2. Ensure your MySQL database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the `php-mysql` extension is correctly installed.
For more detailed information on using MySQL with PHP, refer to the [official PHP PDO documentation](https://www.php.net/manual/en/book.pdo.php).
file: ./content/docs/guides/databases/mysql/python.en.mdx
meta: {
"title": "Python",
"description": "Learn how to connect to MySQL databases in Sealos DevBox using Python"
}
This guide will walk you through the process of connecting to a MySQL database using Python within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Python environment
* [A MySQL database created using the Database app in Sealos](./)
## Activating the Python Environment
Before you start, you need to activate the Python virtual environment in your DevBox. Open the terminal within Cursor IDE and run:
```bash
source ./bin/activate
```
You should see your prompt change, indicating that the virtual environment is now active.
## Installing Required Packages
In your Cursor terminal, install the necessary packages:
```bash
pip install mysql-connector-python python-dotenv
```
This command installs:
* `mysql-connector-python`: The official MySQL driver for Python
* `python-dotenv`: A Python package that allows you to load environment variables from a .env file
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our database connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
DB_HOST=your_database_host
DB_USER=your_username
DB_PASSWORD=your_password
DB_NAME=your_database_name
DB_PORT=3306
```
Replace the placeholders with your actual MySQL credentials from the Database app in Sealos.
#### Create a database connection module
Create a new file named `db_connection.py` with the following content:
```python title="db_connection.py"
import os
from dotenv import load_dotenv
import mysql.connector
from mysql.connector import Error

# Load environment variables
load_dotenv()

def get_db_connection():
    try:
        # First, connect without specifying a database
        connection = mysql.connector.connect(
            host=os.getenv('DB_HOST'),
            user=os.getenv('DB_USER'),
            password=os.getenv('DB_PASSWORD'),
            port=os.getenv('DB_PORT')
        )
        if connection.is_connected():
            cursor = connection.cursor()

            # Create the database if it doesn't exist
            db_name = os.getenv('DB_NAME')
            cursor.execute(f"CREATE DATABASE IF NOT EXISTS {db_name}")

            # Close the initial connection
            cursor.close()
            connection.close()

            # Reconnect with the database specified
            connection = mysql.connector.connect(
                host=os.getenv('DB_HOST'),
                user=os.getenv('DB_USER'),
                password=os.getenv('DB_PASSWORD'),
                database=db_name,
                port=os.getenv('DB_PORT')
            )
            if connection.is_connected():
                print(f"Successfully connected to MySQL database '{db_name}'")
                return connection
    except Error as e:
        print(f"Error connecting to MySQL database: {e}")
        return None

def close_connection(connection):
    if connection:
        connection.close()
        print("MySQL connection closed")
```
This module provides two main functions:
1. `get_db_connection()`: This function establishes a connection to the MySQL database using the credentials stored in the environment variables. It returns the connection object if successful, or None if an error occurs.
2. `close_connection(connection)`: This function closes the database connection when it's no longer needed.
#### Create a test script
Now, let's create a test script to verify our connection and perform some basic database operations. Create a file named `test_mysql.py` with the following content:
```python title="test_mysql.py"
from mysql.connector import Error
from db_connection import get_db_connection, close_connection

def create_table(cursor):
    create_table_query = """
    CREATE TABLE IF NOT EXISTS employees (
        id INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(100),
        email VARCHAR(100)
    )
    """
    cursor.execute(create_table_query)
    print("Table 'employees' created successfully")

def insert_employee(cursor, name, email):
    insert_query = "INSERT INTO employees (name, email) VALUES (%s, %s)"
    cursor.execute(insert_query, (name, email))
    print(f"Employee {name} inserted successfully")

def get_all_employees(cursor):
    select_query = "SELECT * FROM employees"
    cursor.execute(select_query)
    employees = cursor.fetchall()
    for employee in employees:
        print(f"ID: {employee[0]}, Name: {employee[1]}, Email: {employee[2]}")

def main():
    connection = get_db_connection()
    if connection:
        cursor = None
        try:
            cursor = connection.cursor()
            create_table(cursor)
            insert_employee(cursor, "John Doe", "john@example.com")
            insert_employee(cursor, "Jane Smith", "jane@example.com")
            print("\nAll Employees:")
            get_all_employees(cursor)
            connection.commit()
        except Error as e:
            print(f"Error: {e}")
        finally:
            if cursor:
                cursor.close()
            close_connection(connection)

if __name__ == "__main__":
    main()
```
Let's break down the main components of this script:
1. `create_table(cursor)`: This function creates a table named 'employees' if it doesn't already exist. It demonstrates how to execute a CREATE TABLE SQL statement.
2. `insert_employee(cursor, name, email)`: This function inserts a new employee record into the 'employees' table. It shows how to use parameterized queries to safely insert data.
3. `get_all_employees(cursor)`: This function retrieves all records from the 'employees' table and prints them. It demonstrates how to execute a SELECT query and fetch results.
4. `main()`: This is the main function that ties everything together. It:
* Establishes a database connection
* Creates the 'employees' table
* Inserts two sample employees
* Retrieves and prints all employees
* Handles any exceptions that might occur
* Ensures that the cursor and connection are properly closed
## Running the Test Script
To run the test script, make sure your virtual environment is activated, then execute:
```bash
python test_mysql.py
```
If everything is set up correctly, you should see output indicating successful connection, table creation, data insertion, and retrieval.
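Based on the print statements in the scripts above, a successful first run should produce output roughly like this (with your actual database name in place of `your_database_name`; repeated runs insert the sample rows again, so the IDs grow):

```
Successfully connected to MySQL database 'your_database_name'
Table 'employees' created successfully
Employee John Doe inserted successfully
Employee Jane Smith inserted successfully

All Employees:
ID: 1, Name: John Doe, Email: john@example.com
ID: 2, Name: Jane Smith, Email: jane@example.com
MySQL connection closed
```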
## Best Practices
1. Always activate the virtual environment before running your Python scripts or installing packages.
2. Use environment variables to store sensitive information like database credentials.
3. Close database connections and cursors after use to free up resources.
4. Use parameterized queries to prevent SQL injection.
5. Handle exceptions appropriately to manage potential errors.
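As a quick illustration of point 4, here is a minimal, self-contained sketch using Python's standard `sqlite3` module (chosen so it runs without a database server; MySQL Connector/Python uses `%s` placeholders instead of `?`, but the principle is identical):

```python
import sqlite3

# In-memory database for a self-contained demo of parameterized queries.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")

# Input that tries to smuggle SQL into the statement.
malicious = "x'); DROP TABLE employees; --"

# Parameterized: the driver treats the value purely as data, never as SQL.
cur.execute("INSERT INTO employees (name) VALUES (?)", (malicious,))
conn.commit()

cur.execute("SELECT name FROM employees")
print(cur.fetchall())  # the malicious string is stored verbatim; table intact
conn.close()
```

Had the value been spliced into the SQL string with f-string formatting instead, the embedded `DROP TABLE` could have executed.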
## Troubleshooting
If you encounter connection issues:
1. Ensure you've activated the virtual environment with `source ./bin/activate`.
2. Verify that your MySQL database is running and accessible.
3. Double-check your database credentials in the `.env` file.
4. Check the MySQL logs in the Database app for any error messages.
For more detailed information on using MySQL with Python, refer to the [official MySQL Connector/Python documentation](https://dev.mysql.com/doc/connector-python/en/).
file: ./content/docs/guides/databases/mysql/rust.en.mdx
meta: {
"title": "Rust",
"description": "Learn how to connect to MySQL databases in Sealos DevBox using Rust"
}
This guide will walk you through the process of connecting to a MySQL database using Rust within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Rust environment
* [A MySQL database created using the Database app in Sealos](./)
## Install Required Dependencies
In your Cursor terminal, add the necessary dependencies to your `Cargo.toml` file:
```toml
[dependencies]
tokio = { version = "1.28", features = ["full"] }
sqlx = { version = "0.6", features = ["runtime-tokio-rustls", "mysql"] }
dotenv = "0.15"
```
These dependencies include:
* `tokio`: An asynchronous runtime for Rust
* `sqlx`: A database toolkit for Rust with async support
* `dotenv`: A library for loading environment variables from a file
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our database connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
DATABASE_URL=mysql://your_username:your_password@your_database_host:3306/your_database_name
```
Replace the placeholders with your actual MySQL credentials from the Database app in Sealos.
#### Create the main.rs file
Create a new file named `src/main.rs` with the following content:
```rust title="src/main.rs"
use sqlx::mysql::MySqlPoolOptions;
use sqlx::Row;
use dotenv::dotenv;
use std::env;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    // Load environment variables from .env file
    dotenv().ok();

    // Get the database URL from the environment
    let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");

    // Create a connection pool
    let pool = MySqlPoolOptions::new()
        .max_connections(5)
        .connect(&database_url)
        .await?;

    // Create the employees table if it doesn't exist
    sqlx::query(
        r#"
        CREATE TABLE IF NOT EXISTS employees (
            id INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100) NOT NULL,
            position VARCHAR(100) NOT NULL
        )
        "#,
    )
    .execute(&pool)
    .await?;

    println!("Table created successfully");

    // Insert a sample employee
    let new_employee = sqlx::query(
        r#"
        INSERT INTO employees (name, position)
        VALUES (?, ?)
        "#,
    )
    .bind("John Doe")
    .bind("Developer")
    .execute(&pool)
    .await?;

    println!(
        "Inserted employee: ID: {}",
        new_employee.last_insert_id()
    );

    // Query all employees
    let employees = sqlx::query("SELECT id, name, position FROM employees")
        .fetch_all(&pool)
        .await?;

    println!("All employees:");
    for employee in employees {
        println!(
            "ID: {}, Name: {}, Position: {}",
            employee.get::<i32, _>("id"),
            employee.get::<String, _>("name"),
            employee.get::<String, _>("position")
        );
    }

    Ok(())
}
```
Let's break down the main components of this code:
1. **Imports**: We import necessary modules from `sqlx`, `dotenv`, and `std::env`.
2. **Main function**: The `main` function is marked with `#[tokio::main]` to use Tokio's async runtime.
3. **Environment setup**: We load environment variables from the `.env` file and retrieve the database URL.
4. **Connection pool**: We create a connection pool using `MySqlPoolOptions`.
5. **Table creation**: We create the `employees` table if it doesn't exist.
6. **Data insertion**: We insert a sample employee into the database.
7. **Data querying**: We query and display all employees in the database.
## Usage
To run the application, use the following command in your Cursor terminal:
```bash
cargo run
```
This will compile and execute the `main` function, demonstrating the connection to the database, table creation, data insertion, and querying.
## Best Practices
1. Use environment variables for database credentials.
2. Use connection pooling for better performance and resource management.
3. Use prepared statements (as demonstrated with `sqlx::query`) to prevent SQL injection.
4. Handle errors appropriately using Rust's `Result` type.
5. Use async/await for efficient database operations.
## Troubleshooting
If you encounter connection issues:
1. Verify your database credentials in the `.env` file.
2. Ensure your MySQL database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that all required dependencies are correctly specified in your `Cargo.toml` file.
For more detailed information on using MySQL with Rust, refer to the [sqlx documentation](https://github.com/launchbadge/sqlx).
file: ./content/docs/guides/databases/postgresql/go.en.mdx
meta: {
"title": "Go",
"description": "Learn how to connect to PostgreSQL databases in Sealos DevBox using Go"
}
This guide will walk you through the process of connecting to a PostgreSQL database using Go within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Go environment
* [A PostgreSQL database created using the Database app in Sealos](./)
## Install Required Packages
In your Cursor terminal, install the necessary packages:
```bash
go get github.com/lib/pq
go get github.com/joho/godotenv
```
These commands install:
* `github.com/lib/pq`: A pure Go PostgreSQL driver for the database/sql package
* `github.com/joho/godotenv`: A Go port of the Ruby dotenv library
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our database connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
DB_HOST=your_database_host
DB_PORT=5432
DB_USER=your_username
DB_PASSWORD=your_password
DB_NAME=your_database_name
```
Replace the placeholders with your actual PostgreSQL credentials from the Database app in Sealos.
#### Create the main.go file
Create a new file named `main.go` with the following content:
```go title="main.go"
package main

import (
    "database/sql"
    "fmt"
    "log"
    "os"

    "github.com/joho/godotenv"
    _ "github.com/lib/pq"
)

// Employee struct represents the structure of our data
type Employee struct {
    ID       int
    Name     string
    Position string
}

// connectDB establishes a connection to the PostgreSQL database
func connectDB() (*sql.DB, error) {
    // Load environment variables from .env file
    err := godotenv.Load()
    if err != nil {
        log.Fatal("Error loading .env file")
    }

    // Retrieve database connection details from environment variables
    dbHost := os.Getenv("DB_HOST")
    dbPort := os.Getenv("DB_PORT")
    dbUser := os.Getenv("DB_USER")
    dbPassword := os.Getenv("DB_PASSWORD")
    dbName := os.Getenv("DB_NAME")

    // Construct the connection string
    connStr := fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s sslmode=disable",
        dbHost, dbPort, dbUser, dbPassword, dbName)

    // Open a connection to the database
    db, err := sql.Open("postgres", connStr)
    if err != nil {
        return nil, err
    }

    // Verify the connection
    err = db.Ping()
    if err != nil {
        return nil, err
    }

    fmt.Println("Successfully connected to the database")
    return db, nil
}

// createTable creates the employees table if it doesn't exist
func createTable(db *sql.DB) error {
    _, err := db.Exec(`
        CREATE TABLE IF NOT EXISTS employees (
            id SERIAL PRIMARY KEY,
            name VARCHAR(100) NOT NULL,
            position VARCHAR(100) NOT NULL
        )
    `)
    return err
}

// insertEmployee inserts a new employee into the database
func insertEmployee(db *sql.DB, name, position string) error {
    _, err := db.Exec("INSERT INTO employees (name, position) VALUES ($1, $2)", name, position)
    return err
}

// getEmployees retrieves all employees from the database
func getEmployees(db *sql.DB) ([]Employee, error) {
    rows, err := db.Query("SELECT id, name, position FROM employees")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var employees []Employee
    for rows.Next() {
        var emp Employee
        err := rows.Scan(&emp.ID, &emp.Name, &emp.Position)
        if err != nil {
            return nil, err
        }
        employees = append(employees, emp)
    }
    return employees, nil
}

func main() {
    // Connect to the database
    db, err := connectDB()
    if err != nil {
        log.Fatal(err)
    }

    // Ensure the database connection is closed when the function exits
    defer func() {
        if err := db.Close(); err != nil {
            log.Printf("Error closing database connection: %v", err)
        } else {
            fmt.Println("Database connection closed successfully")
        }
    }()

    // Create the employees table
    err = createTable(db)
    if err != nil {
        log.Fatal(err)
    }

    // Insert sample employees
    err = insertEmployee(db, "John Doe", "Developer")
    if err != nil {
        log.Fatal(err)
    }
    err = insertEmployee(db, "Jane Smith", "Designer")
    if err != nil {
        log.Fatal(err)
    }

    // Retrieve and display all employees
    employees, err := getEmployees(db)
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println("Employees:")
    for _, emp := range employees {
        fmt.Printf("ID: %d, Name: %s, Position: %s\n", emp.ID, emp.Name, emp.Position)
    }

    // The database connection will be closed automatically when main() exits
    // due to the defer statement at the beginning of the function
}
```
Let's break down the main components of this code:
1. **Imports**: We import necessary packages, including `database/sql` for database operations and `github.com/lib/pq` as the PostgreSQL driver.
2. **Employee struct**: Defines the structure for our employee data.
3. **connectDB function**: Loads environment variables, constructs the connection string, and establishes a connection to the database.
4. **createTable function**: Creates the `employees` table if it doesn't exist.
5. **insertEmployee function**: Inserts a new employee into the database.
6. **getEmployees function**: Retrieves all employees from the database.
7. **main function**: Orchestrates the program flow, demonstrating database connection, table creation, data insertion, and retrieval.
## Usage
To run the application, use the following command in your Cursor terminal:
```bash
go run main.go
```
This will execute the `main` function, demonstrating the connection to the database, table creation, data insertion, and querying.
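Based on the print statements above, the first run should produce output roughly like this (subsequent runs append duplicate rows, since the inserts are unconditional):

```
Successfully connected to the database
Employees:
ID: 1, Name: John Doe, Position: Developer
ID: 2, Name: Jane Smith, Position: Designer
Database connection closed successfully
```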
## Best Practices
1. Use environment variables for database credentials.
2. Always handle potential errors using proper error checking.
3. Close the database connection after operations are complete.
4. Use prepared statements for queries to prevent SQL injection.
5. Consider using a connection pool for better performance in production environments.
## Troubleshooting
If you encounter connection issues:
1. Verify your database credentials in the `.env` file.
2. Ensure your PostgreSQL database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the required packages are correctly installed.
For more detailed information on using PostgreSQL with Go, refer to the [lib/pq documentation](https://pkg.go.dev/github.com/lib/pq).
file: ./content/docs/guides/databases/postgresql/index.en.mdx
meta: {
"title": "PostgreSQL",
"description": "Deploy and connect to PostgreSQL databases in Sealos DevBox"
}
PostgreSQL is a powerful, open-source object-relational database system. In Sealos DevBox, you can easily set up and connect to PostgreSQL databases for your development projects.
## Deploy PostgreSQL in Sealos
Sealos makes it easy to deploy a PostgreSQL database with just a few clicks. Follow these steps:
From the Sealos desktop, click on the "Database" icon to open the Database app.

Click on the "Create New Database" button. In the deployment form:
* Select "Postgres" as the database type.
* Choose the desired PostgreSQL version (e.g., postgresql-14.8.0).
* Enter a name for your database (use lowercase letters and numbers only).
* Adjust the CPU and Memory sliders to set the resources for your database.
* Set the number of replicas (1 for single-node development and testing).
* Specify the storage size (e.g., 3 Gi).

Review the projected cost on the left sidebar. Click the "Deploy" button in the top right corner to create your PostgreSQL database.
Once deployed, Sealos will provide you with the necessary connection details.

## Connect to PostgreSQL in DevBox
Here are examples of how to connect to your PostgreSQL database using different programming languages and frameworks within your DevBox environment:
file: ./content/docs/guides/databases/postgresql/java.en.mdx
meta: {
"title": "Java",
"description": "Learn how to connect to PostgreSQL databases in Sealos DevBox using Java"
}
This guide will walk you through the process of connecting to a PostgreSQL database using Java within your Sealos DevBox project, including basic CRUD (Create, Read, Update, Delete) operations.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Java environment
* [A PostgreSQL database created using the Database app in Sealos](./)
## Setup
#### Download PostgreSQL JDBC Driver
To connect to the PostgreSQL server from a Java program, you need a PostgreSQL JDBC driver.
You can download the latest version of the driver from the [jdbc.postgresql.org download page](https://jdbc.postgresql.org/download/). The downloaded file is a JAR file, e.g. `postgresql-42.7.1.jar`.
#### Create a database configuration file
Create a file named `db.properties` in your project directory with the following content:
```ini title="db.properties"
db.url=jdbc:postgresql://your_database_host:5432/your_database_name
db.username=your_username
db.password=your_password
```
Replace the placeholders with your actual PostgreSQL credentials from the Database app in Sealos.
#### Create a DatabaseConfig class
Create a new file named `DatabaseConfig.java` with the following content:
```java title="DatabaseConfig.java"
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class DatabaseConfig {
    private static final Properties properties = new Properties();

    static {
        try (InputStream input = DatabaseConfig.class.getClassLoader().getResourceAsStream("db.properties")) {
            if (input == null) {
                System.out.println("Sorry, unable to find db.properties");
                System.exit(1);
            }
            properties.load(input);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static String getDbUrl() {
        return properties.getProperty("db.url");
    }

    public static String getDbUsername() {
        return properties.getProperty("db.username");
    }

    public static String getDbPassword() {
        return properties.getProperty("db.password");
    }
}
```
The `DatabaseConfig` class is responsible for loading database configuration from the `db.properties` file. It has three static methods that expose the database configuration:
* `getDbUrl()` – Returns the database URL.
* `getDbUsername()` – Returns the username.
* `getDbPassword()` – Returns the password.
This class ensures that sensitive database credentials are not hardcoded in the application.
#### Create an Employee class
Create a new file named `Employee.java` with the following content:
```java title="Employee.java"
public class Employee {
    private int id;
    private String name;
    private String position;

    public Employee(int id, String name, String position) {
        this.id = id;
        this.name = name;
        this.position = position;
    }

    // Getters and setters
    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getPosition() { return position; }
    public void setPosition(String position) { this.position = position; }

    @Override
    public String toString() {
        return "Employee{" +
                "id=" + id +
                ", name='" + name + '\'' +
                ", position='" + position + '\'' +
                '}';
    }
}
```
The `Employee` class represents the data model for an employee. It includes fields for id, name, and position, along with a constructor, getters, setters, and a `toString` method for easy printing of employee information.
#### Create a DB class
Create a new file named `DB.java` with the following content:
```java title="DB.java"
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class DB {
    public static Connection getConnection() throws SQLException {
        String jdbcUrl = DatabaseConfig.getDbUrl();
        String user = DatabaseConfig.getDbUsername();
        String password = DatabaseConfig.getDbPassword();
        return DriverManager.getConnection(jdbcUrl, user, password);
    }

    public static void createTable() throws SQLException {
        String sql = "CREATE TABLE IF NOT EXISTS employees (" +
                "id SERIAL PRIMARY KEY," +
                "name VARCHAR(100) NOT NULL," +
                "position VARCHAR(100) NOT NULL)";
        try (Connection conn = getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute(sql);
        }
    }

    public static void insertEmployee(String name, String position) throws SQLException {
        String sql = "INSERT INTO employees (name, position) VALUES (?, ?)";
        try (Connection conn = getConnection();
             PreparedStatement pstmt = conn.prepareStatement(sql)) {
            pstmt.setString(1, name);
            pstmt.setString(2, position);
            pstmt.executeUpdate();
        }
    }

    public static List<Employee> getEmployees() throws SQLException {
        List<Employee> employees = new ArrayList<>();
        String sql = "SELECT id, name, position FROM employees";
        try (Connection conn = getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                employees.add(new Employee(
                        rs.getInt("id"),
                        rs.getString("name"),
                        rs.getString("position")
                ));
            }
        }
        return employees;
    }

    public static void updateEmployee(int id, String name, String position) throws SQLException {
        String sql = "UPDATE employees SET name = ?, position = ? WHERE id = ?";
        try (Connection conn = getConnection();
             PreparedStatement pstmt = conn.prepareStatement(sql)) {
            pstmt.setString(1, name);
            pstmt.setString(2, position);
            pstmt.setInt(3, id);
            pstmt.executeUpdate();
        }
    }

    public static void deleteEmployee(int id) throws SQLException {
        String sql = "DELETE FROM employees WHERE id = ?";
        try (Connection conn = getConnection();
             PreparedStatement pstmt = conn.prepareStatement(sql)) {
            pstmt.setInt(1, id);
            pstmt.executeUpdate();
        }
    }
}
```
The `DB` class is responsible for database operations:
* The `getConnection()` method connects to the PostgreSQL database using the connection parameters from `DatabaseConfig`.
* It returns a `Connection` object if successful, or throws a `SQLException` if there's an error.
* Other methods (`createTable`, `insertEmployee`, etc.) use this connection to perform CRUD operations.
* Each method opens a new connection, performs its operation, and then closes the connection using try-with-resources, ensuring proper resource management.
#### Create the main Java program
Create a new file named `Main.java` with the following content:
```java title="Main.java"
import java.sql.SQLException;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        try {
            System.out.println("Connecting to the PostgreSQL database...");

            // Create the employees table
            DB.createTable();
            System.out.println("Employees table created (if not exists).");

            // Insert sample employees
            DB.insertEmployee("John Doe", "Developer");
            DB.insertEmployee("Jane Smith", "Designer");
            System.out.println("Sample employees inserted.");

            // Retrieve and display all employees
            List<Employee> employees = DB.getEmployees();
            System.out.println("Employees:");
            for (Employee emp : employees) {
                System.out.println(emp);
            }

            // Update an employee
            DB.updateEmployee(1, "John Doe", "Senior Developer");
            System.out.println("Employee updated.");

            // Delete an employee
            DB.deleteEmployee(2);
            System.out.println("Employee deleted.");

            // Display updated employee list
            employees = DB.getEmployees();
            System.out.println("\nUpdated Employees:");
            for (Employee emp : employees) {
                System.out.println(emp);
            }
        } catch (SQLException e) {
            System.err.println("Database operation error: " + e.getMessage());
        }
    }
}
```
The `Main` class demonstrates the usage of the `DB` class to perform various database operations:
* It creates a table, inserts sample data, retrieves and displays employees, updates an employee, deletes an employee, and displays the updated list.
* Each operation is wrapped in a try-catch block to handle potential `SQLException`s.
* The program uses the methods from the `DB` class, which manage their own connections, ensuring that connections are properly opened and closed for each operation.
## Compile and Run
To compile and run the example, use the following commands in your terminal, replacing the JAR file name with the driver version you downloaded:
```bash
javac -cp .:postgresql-42.7.1.jar *.java
java -cp .:postgresql-42.7.1.jar Main
```
If everything is set up correctly, you should see output demonstrating the CRUD operations on the employees table.
## Best Practices
1. Use a properties file to store database connection details.
2. Implement a configuration class to load and provide access to database properties.
3. Create a separate class for database connection management and operations.
4. Use try-with-resources to ensure proper closure of database connections.
5. Use prepared statements to prevent SQL injection.
6. Handle exceptions appropriately and provide meaningful error messages.
## Troubleshooting
If you encounter connection issues:
1. Verify your database credentials in the `db.properties` file.
2. Ensure your PostgreSQL database is running and accessible from your DevBox environment.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the PostgreSQL JDBC driver JAR file is in the same directory as your Java files.
For more detailed information on using PostgreSQL with Java, refer to the [official PostgreSQL JDBC driver documentation](https://jdbc.postgresql.org/documentation/head/index.html).
file: ./content/docs/guides/databases/postgresql/nodejs.en.mdx
meta: {
"title": "Node.js",
"description": "Learn how to connect to PostgreSQL databases in Sealos DevBox using Node.js"
}
This guide will walk you through the process of connecting to a PostgreSQL database using Node.js within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Node.js environment
* [A PostgreSQL database created using the Database app in Sealos](./)
## Install Required Packages
In your Cursor terminal, install the necessary packages:
```bash
npm install pg dotenv
```
This command installs:
* `pg`: The PostgreSQL client for Node.js
* `dotenv`: A zero-dependency module that loads environment variables from a `.env` file
## Connection Setup
#### Set up the environment and create a client
First, we'll import the required modules and set up the database configuration:
```javascript
const { Client } = require('pg');
require('dotenv').config();

const dbConfig = {
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  host: process.env.DB_HOST,
  port: process.env.DB_PORT,
  database: process.env.DB_NAME,
};

const client = new Client(dbConfig);
```
#### Create connection and query functions
Next, we'll create functions to handle database connection and query execution:
```javascript
async function connectToDatabase() {
  try {
    await client.connect();
    console.log('Connected to PostgreSQL database');
  } catch (err) {
    console.error('Error connecting to PostgreSQL database', err);
    throw err;
  }
}

async function executeQuery(query, values = []) {
  try {
    const result = await client.query(query, values);
    return result.rows;
  } catch (err) {
    console.error('Error executing query', err);
    throw err;
  }
}

async function closeDatabaseConnection() {
  try {
    await client.end();
    console.log('Connection to PostgreSQL closed');
  } catch (err) {
    console.error('Error closing connection', err);
  }
}
```
#### Implement database operations
Now, let's implement functions for various database operations:
```javascript
async function createTable() {
  const createTableQuery = `
    CREATE TABLE IF NOT EXISTS employees (
      id SERIAL PRIMARY KEY,
      name VARCHAR(100) NOT NULL,
      position VARCHAR(100) NOT NULL
    );
  `;
  await executeQuery(createTableQuery);
  console.log('Table created successfully');
}

async function insertEmployee(name, position) {
  const insertQuery = 'INSERT INTO employees(name, position) VALUES ($1, $2) RETURNING *';
  const values = [name, position];
  const result = await executeQuery(insertQuery, values);
  console.log('Employee inserted:', result[0]);
}

async function updateEmployee(id, name, position) {
  const updateQuery = 'UPDATE employees SET name = $1, position = $2 WHERE id = $3 RETURNING *';
  const values = [name, position, id];
  const result = await executeQuery(updateQuery, values);
  console.log('Employee updated:', result[0]);
}

async function getAllEmployees() {
  const selectQuery = 'SELECT * FROM employees';
  const employees = await executeQuery(selectQuery);
  console.log('All employees:', employees);
}
```
#### Create a main function to run operations
Finally, let's create a main function to demonstrate all the operations:
```javascript
async function main() {
  try {
    await connectToDatabase();
    await createTable();
    await insertEmployee('John Doe', 'Developer');
    await insertEmployee('Jane Smith', 'Designer');
    await updateEmployee(1, 'John Updated', 'Senior Developer');
    await getAllEmployees();
  } catch (err) {
    console.error('An error occurred:', err);
  } finally {
    await closeDatabaseConnection();
  }
}

main();
```
Here’s the complete code to connect to the Postgres database with Node.js:
```javascript title="test-connection.js"
const { Client } = require('pg');
require('dotenv').config();

const dbConfig = {
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  host: process.env.DB_HOST,
  port: process.env.DB_PORT,
  database: process.env.DB_NAME,
};

const client = new Client(dbConfig);

async function connectToDatabase() {
  try {
    await client.connect();
    console.log('Connected to PostgreSQL database');
  } catch (err) {
    console.error('Error connecting to PostgreSQL database', err);
    throw err;
  }
}

async function executeQuery(query, values = []) {
  try {
    const result = await client.query(query, values);
    return result.rows;
  } catch (err) {
    console.error('Error executing query', err);
    throw err;
  }
}

async function closeDatabaseConnection() {
  try {
    await client.end();
    console.log('Connection to PostgreSQL closed');
  } catch (err) {
    console.error('Error closing connection', err);
  }
}

async function createTable() {
  const createTableQuery = `
    CREATE TABLE IF NOT EXISTS employees (
      id SERIAL PRIMARY KEY,
      name VARCHAR(100) NOT NULL,
      position VARCHAR(100) NOT NULL
    );
  `;
  await executeQuery(createTableQuery);
  console.log('Table created successfully');
}

async function insertEmployee(name, position) {
  const insertQuery = 'INSERT INTO employees(name, position) VALUES ($1, $2) RETURNING *';
  const values = [name, position];
  const result = await executeQuery(insertQuery, values);
  console.log('Employee inserted:', result[0]);
}

async function updateEmployee(id, name, position) {
  const updateQuery = 'UPDATE employees SET name = $1, position = $2 WHERE id = $3 RETURNING *';
  const values = [name, position, id];
  const result = await executeQuery(updateQuery, values);
  console.log('Employee updated:', result[0]);
}

async function getAllEmployees() {
  const selectQuery = 'SELECT * FROM employees';
  const employees = await executeQuery(selectQuery);
  console.log('All employees:', employees);
}

async function main() {
  try {
    await connectToDatabase();
    await createTable();
    await insertEmployee('John Doe', 'Developer');
    await insertEmployee('Jane Smith', 'Designer');
    await updateEmployee(1, 'John Updated', 'Senior Developer');
    await getAllEmployees();
  } catch (err) {
    console.error('An error occurred:', err);
  } finally {
    await closeDatabaseConnection();
  }
}

main();
```
## Usage
Before running the script, create a `.env` file in the same directory with the following content:
```ini title=".env"
DB_USER=your_username
DB_PASSWORD=your_password
DB_HOST=your_database_host
DB_PORT=5432
DB_NAME=your_database_name
```
Replace the placeholders with your actual PostgreSQL credentials from the Database app in Sealos.
To test the connection, run the test script:
```bash
node test-connection.js
```
This will execute all the operations defined in the `main` function, demonstrating the connection to the database, table creation, data insertion, updating, and querying.
## Best Practices
1. Use environment variables for database credentials.
2. Always handle potential errors using try-catch blocks.
3. Close the database connection after operations are complete.
4. Use parameterized queries to prevent SQL injection.
5. Consider using connection pooling for better performance in production environments.
## Troubleshooting
If you encounter connection issues:
1. Verify your database credentials in the `.env` file.
2. Ensure your PostgreSQL database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the `pg` package is correctly installed.
For more detailed information on using PostgreSQL with Node.js, refer to the [node-postgres documentation](https://node-postgres.com/).
file: ./content/docs/guides/databases/postgresql/php.en.mdx
meta: {
"title": "PHP",
"description": "Learn how to connect to PostgreSQL databases in Sealos DevBox using PHP"
}
This guide will walk you through the process of connecting to a PostgreSQL database using PHP within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with PHP environment
* [A PostgreSQL database created using the Database app in Sealos](./)
## Install Required Extensions
In your Cursor terminal, ensure that the PostgreSQL extension for PHP is installed:
```bash
sudo apt-get update
sudo apt-get install php-pgsql
```
## Connection Setup
#### Create a Configuration File
First, let's create a configuration file to store our database connection parameters. Create a file named `config.php` in your project directory with the following content:
```php title="config.php"
<?php
return [
    'host' => 'your_database_host',
    'port' => '5432',
    'dbname' => 'your_database_name',
    'user' => 'your_username',
    'password' => 'your_password'
];
```
Replace the placeholders with your actual PostgreSQL credentials from the Database app in Sealos.
#### Create a Database Connection Function
Next, let's create a PHP file that will handle the database connection. Create a file named `db_connect.php` with the following content:
```php title="db_connect.php"
<?php
function getDbConnection() {
    $config = require 'config.php';
    $dsn = "pgsql:host={$config['host']};port={$config['port']};dbname={$config['dbname']}";
    try {
        $pdo = new PDO($dsn, $config['user'], $config['password'],
            [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
        echo "Connected successfully to the database.\n";
        return $pdo;
    } catch (PDOException $e) {
        die("Connection failed: " . $e->getMessage());
    }
}
```
This function reads the configuration from `config.php` and establishes a connection to the PostgreSQL database using PDO (PHP Data Objects).
#### Create a Test Script
Now, let's create a test script to verify our connection and perform some basic database operations. Create a file named `test_db.php` with the following content:
```php title="test_db.php"
<?php
require 'db_connect.php';

$pdo = getDbConnection();

// Create a table
$pdo->exec("CREATE TABLE IF NOT EXISTS employees (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    position VARCHAR(100) NOT NULL
)");
echo "Table created successfully.\n";

// Insert a record
$stmt = $pdo->prepare("INSERT INTO employees (name, position) VALUES (?, ?)");
$stmt->execute(['John Doe', 'Developer']);
echo "Record inserted successfully.\n";

// Query the table
$stmt = $pdo->query("SELECT * FROM employees");
echo "Employees:\n";
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    echo "ID: {$row['id']}, Name: {$row['name']}, Position: {$row['position']}\n";
}

// Close the connection
$pdo = null;
```
This script demonstrates creating a table, inserting a record, and querying the table.
## Usage
To run the test script, use the following command in your Cursor terminal:
```bash
php test_db.php
```
This will execute the script, demonstrating the connection to the database, table creation, data insertion, and querying.
## Best Practices
1. Use environment variables or a separate configuration file for database credentials.
2. Always use prepared statements to prevent SQL injection.
3. Handle potential errors using try-catch blocks.
4. Close the database connection after operations are complete.
5. Use connection pooling for better performance in production environments.
## Troubleshooting
If you encounter connection issues:
1. Verify your database credentials in the `config.php` file.
2. Ensure your PostgreSQL database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the `php-pgsql` extension is correctly installed.
For more detailed information on using PostgreSQL with PHP, refer to the [official PHP PostgreSQL documentation](https://www.php.net/manual/en/book.pgsql.php).
file: ./content/docs/guides/databases/postgresql/python.en.mdx
meta: {
"title": "Python",
"description": "Learn how to connect to PostgreSQL databases in Sealos DevBox using Python"
}
This guide will walk you through the process of connecting to a PostgreSQL database using Python within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Python environment
* [A PostgreSQL database created using the Database app in Sealos](./)
## Activating the Python Environment
Before you start, you need to activate the Python virtual environment in your DevBox. Open the terminal within Cursor IDE and run:
```bash
source ./bin/activate
```
You should see your prompt change, indicating that the virtual environment is now active.
## Installing psycopg2-binary
In your Cursor terminal, install `psycopg2-binary` using pip:
```bash
pip install psycopg2-binary
```
## Connection Setup
#### Create a Configuration File
First, we'll create a configuration file to store our database connection parameters. This approach allows us to easily change settings without modifying our code.
Create a file named `database.ini` in your project directory with the following content:
```ini title="database.ini"
[postgresql]
host=your_database_host
database=your_database_name
user=your_username
password=your_password
port=5432
```
Replace `your_database_host`, `your_database_name`, `your_username`, and `your_password` with your actual PostgreSQL credentials from the Database app in Sealos.
#### Create a Configuration Loader
Next, we'll create a Python module to load the configuration from our `database.ini` file. This module will use the built-in `configparser` to read the configuration data.
Create a file named `config.py` with the following content:
```python title="config.py"
from configparser import ConfigParser

def load_config(filename='database.ini', section='postgresql'):
    parser = ConfigParser()
    parser.read(filename)

    # get section, default to postgresql
    config = {}
    if parser.has_section(section):
        params = parser.items(section)
        for param in params:
            config[param[0]] = param[1]
    else:
        raise Exception('Section {0} not found in the {1} file'.format(section, filename))

    return config

if __name__ == '__main__':
    config = load_config()
    print(config)
```
This `load_config()` function reads the `database.ini` file and returns a dictionary with the connection parameters. If you run this script directly, it will print out the configuration, which can be useful for debugging.
#### Create a Connection Function
Now, we'll create a module that uses our configuration loader to connect to the PostgreSQL database.
Create a file named `connect.py` with the following content:
```python title="connect.py"
import psycopg2
from config import load_config

def connect(config):
    """ Connect to the PostgreSQL database server """
    try:
        # connecting to the PostgreSQL server
        conn = psycopg2.connect(**config)
        print('Connected to the PostgreSQL server.')
        return conn
    except (psycopg2.DatabaseError, Exception) as error:
        print(f"Error: {error}")
        return None

def execute_query(conn, query):
    """ Execute a SQL query and return the results """
    try:
        with conn.cursor() as cur:
            cur.execute(query)
            return cur.fetchall()
    except (psycopg2.DatabaseError, Exception) as error:
        print(f"Error executing query: {error}")
        return None

if __name__ == '__main__':
    config = load_config()
    conn = connect(config)

    if conn:
        # Execute SELECT version() query
        version_query = "SELECT version();"
        result = execute_query(conn, version_query)
        if result:
            print(f"PostgreSQL version: {result[0][0]}")

        # Don't forget to close the connection
        conn.close()
        print("Connection closed.")
```
This module does several things:
1. The `connect()` function uses `psycopg2.connect()` to establish a connection to the database using the configuration we loaded.
2. The `execute_query()` function demonstrates how to execute a SQL query and fetch the results.
3. In the `if __name__ == '__main__':` block, we test the connection by connecting to the database and querying its version.
## Usage
To test the connection, make sure your virtual environment is activated, then run the `connect.py` script:
```bash
python connect.py
```
If successful, you should see output similar to:
```bash
Connected to the PostgreSQL server.
PostgreSQL version: PostgreSQL 14.10 (Ubuntu 14.10-1.pgdg22.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
Connection closed.
```
## Best Practices
1. Always activate the virtual environment before running your Python scripts or installing packages.
2. Use a configuration file (`database.ini`) to store database credentials. This makes it easier to change settings without modifying your code.
3. If using version control (e.g., git), add `database.ini` to your `.gitignore` file to avoid committing sensitive information.
4. Use connection pooling for better performance in production environments.
5. Always close database connections properly to avoid resource leaks.
6. Use try-except blocks to handle potential database errors gracefully.
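Points 5 and 6 can be combined in one small sketch: a context manager (an illustrative pattern, not part of psycopg2) that guarantees the connection is closed even when a query raises:

```python
from contextlib import contextmanager

@contextmanager
def db_connection(connect_fn):
    """Open a connection via connect_fn and always close it, even on errors."""
    conn = connect_fn()
    try:
        yield conn
    finally:
        conn.close()

# Demo with a stand-in connection object; in real code you would pass
# lambda: psycopg2.connect(**load_config()) instead.
class FakeConn:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

fake = FakeConn()
with db_connection(lambda: fake) as conn:
    pass  # run queries here; close() runs no matter what
print(fake.closed)  # True
```

The `finally` block runs whether the body completes or raises, so the connection can never leak.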
## Troubleshooting
If you encounter connection issues:
1. Ensure you've activated the virtual environment with `source ./bin/activate`.
2. Verify that your PostgreSQL database is running.
3. Double-check your database credentials in the `database.ini` file.
4. Check the PostgreSQL logs in the Database app for any error messages.
For more detailed information on using PostgreSQL with Python, refer to the [official psycopg2 documentation](https://www.psycopg.org/docs/).
file: ./content/docs/guides/databases/postgresql/rust.en.mdx
meta: {
"title": "Rust",
"description": "Learn how to connect to PostgreSQL databases in Sealos DevBox using Rust"
}
This guide will walk you through the process of connecting to a PostgreSQL database using Rust within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Rust environment
* [A PostgreSQL database created using the Database app in Sealos](./)
## Install Required Dependencies
In your Cursor terminal, add the necessary dependencies to your `Cargo.toml` file:
```toml
[dependencies]
tokio = { version = "1.28", features = ["full"] }
sqlx = { version = "0.6", features = ["runtime-tokio-rustls", "postgres"] }
dotenv = "0.15"
```
These dependencies include:
* `tokio`: An asynchronous runtime for Rust
* `sqlx`: A database toolkit for Rust with async support
* `dotenv`: A library for loading environment variables from a file
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our database connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
DATABASE_URL=postgres://your_username:your_password@your_database_host:5432/your_database_name
```
Replace the placeholders with your actual PostgreSQL credentials from the Database app in Sealos.
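If you prefer to keep the parts separate, the URL can be assembled in code. A minimal sketch (`build_database_url` is an assumed helper, not part of sqlx); note that passwords containing characters such as `@` or `:` must be percent-encoded before being placed in the URL:

```rust
// Illustrative helper (not part of sqlx): assemble DATABASE_URL from parts.
// Caution: percent-encode the password if it contains '@', ':', or '/'.
fn build_database_url(user: &str, password: &str, host: &str, port: u16, db: &str) -> String {
    format!("postgres://{user}:{password}@{host}:{port}/{db}")
}

fn main() {
    let url = build_database_url("app", "s3cret", "db.example.internal", 5432, "appdb");
    println!("{url}"); // postgres://app:s3cret@db.example.internal:5432/appdb
}
```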
#### Create the main.rs file
Create a new file named `src/main.rs` with the following content:
```rust title="src/main.rs"
use sqlx::postgres::PgPoolOptions;
use sqlx::Row;
use dotenv::dotenv;
use std::env;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    // Load environment variables from .env file
    dotenv().ok();

    // Get the database URL from the environment
    let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");

    // Create a connection pool
    let pool = PgPoolOptions::new()
        .max_connections(5)
        .connect(&database_url)
        .await?;

    // Create the employees table if it doesn't exist
    sqlx::query(
        r#"
        CREATE TABLE IF NOT EXISTS employees (
            id SERIAL PRIMARY KEY,
            name VARCHAR(100) NOT NULL,
            position VARCHAR(100) NOT NULL
        )
        "#,
    )
    .execute(&pool)
    .await?;

    println!("Table created successfully");

    // Insert a sample employee
    let new_employee = sqlx::query(
        r#"
        INSERT INTO employees (name, position)
        VALUES ($1, $2)
        RETURNING id, name, position
        "#,
    )
    .bind("John Doe")
    .bind("Developer")
    .fetch_one(&pool)
    .await?;

    println!(
        "Inserted employee: ID: {}, Name: {}, Position: {}",
        new_employee.get::<i32, _>("id"),
        new_employee.get::<String, _>("name"),
        new_employee.get::<String, _>("position")
    );

    // Query all employees
    let employees = sqlx::query("SELECT id, name, position FROM employees")
        .fetch_all(&pool)
        .await?;

    println!("All employees:");
    for employee in employees {
        println!(
            "ID: {}, Name: {}, Position: {}",
            employee.get::<i32, _>("id"),
            employee.get::<String, _>("name"),
            employee.get::<String, _>("position")
        );
    }

    Ok(())
}
```
Let's break down the main components of this code:
1. **Imports**: We import necessary modules from `sqlx`, `dotenv`, and `std::env`.
2. **Main function**: The `main` function is marked with `#[tokio::main]` to use Tokio's async runtime.
3. **Environment setup**: We load environment variables from the `.env` file and retrieve the database URL.
4. **Connection pool**: We create a connection pool using `PgPoolOptions`.
5. **Table creation**: We create the `employees` table if it doesn't exist.
6. **Data insertion**: We insert a sample employee into the database.
7. **Data querying**: We query and display all employees in the database.
## Usage
To run the application, use the following command in your Cursor terminal:
```bash
cargo run
```
This will compile and execute the `main` function, demonstrating the connection to the database, table creation, data insertion, and querying.
## Best Practices
1. Use environment variables for database credentials.
2. Use connection pooling for better performance and resource management.
3. Use prepared statements (as demonstrated with `sqlx::query`) to prevent SQL injection.
4. Handle errors appropriately using Rust's `Result` type.
5. Use async/await for efficient database operations.
## Troubleshooting
If you encounter connection issues:
1. Verify your database credentials in the `.env` file.
2. Ensure your PostgreSQL database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that all required dependencies are correctly specified in your `Cargo.toml` file.
For more detailed information on using PostgreSQL with Rust, refer to the [sqlx documentation](https://github.com/launchbadge/sqlx).
file: ./content/docs/guides/databases/redis/go.en.mdx
meta: {
"title": "Go",
"description": "Learn how to connect to Redis databases in Sealos DevBox using Go"
}
This guide will walk you through the process of connecting to a Redis database using Go within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Go environment
* [A Redis database created using the Database app in Sealos](./)
## Install Required Packages
In your Cursor terminal, install the necessary packages:
```bash
go get github.com/go-redis/redis
go get github.com/joho/godotenv
```
These commands install:
* `github.com/go-redis/redis`: A Redis client for Go
* `github.com/joho/godotenv`: A Go port of the Ruby dotenv library
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our database connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
REDIS_HOST=your_redis_host
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password
```
Replace the placeholders with your actual Redis credentials from the Database app in Sealos.
#### Create the main.go file
Create a new file named `main.go` with the following content:
```go title="main.go"
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/go-redis/redis"
	"github.com/joho/godotenv"
)

func main() {
	// Load environment variables from .env file
	err := godotenv.Load()
	if err != nil {
		log.Fatal("Error loading .env file")
	}

	// Create a new Redis client
	client := redis.NewClient(&redis.Options{
		Addr:     fmt.Sprintf("%s:%s", os.Getenv("REDIS_HOST"), os.Getenv("REDIS_PORT")),
		Password: os.Getenv("REDIS_PASSWORD"),
		DB:       0, // use default DB
	})

	// Test the connection
	pong, err := client.Ping().Result()
	if err != nil {
		log.Fatal("Could not connect to Redis: ", err)
	}
	fmt.Println("Connected to Redis: ", pong)

	// Set a key
	err = client.Set("mykey", "Hello from Sealos DevBox!", 0).Err()
	if err != nil {
		log.Fatal("Could not set key: ", err)
	}

	// Get a key
	val, err := client.Get("mykey").Result()
	if err != nil {
		log.Fatal("Could not get key: ", err)
	}
	fmt.Println("mykey:", val)

	// Close the connection
	err = client.Close()
	if err != nil {
		log.Fatal("Error closing Redis connection: ", err)
	}
	fmt.Println("Redis connection closed successfully")
}
```
This code demonstrates how to connect to Redis, set a key, get a key, and close the connection.
## Usage
To run the application, use the following command in your Cursor terminal:
```bash
go run main.go
```
This will execute the `main` function, demonstrating the connection to Redis, setting and getting a key, and closing the connection.
## Best Practices
1. Use environment variables for Redis credentials.
2. Always handle potential errors using proper error checking.
3. Use a context for operations that might need to be cancelled or timed out.
4. Close the Redis connection after operations are complete.
5. Consider using connection pooling for better performance in production environments.
## Troubleshooting
If you encounter connection issues:
1. Verify your Redis credentials in the `.env` file.
2. Ensure your Redis database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the required packages are correctly installed.
For more detailed information on using Redis with Go, refer to the [go-redis documentation](https://github.com/go-redis/redis).
file: ./content/docs/guides/databases/redis/index.en.mdx
meta: {
"title": "Redis",
"description": "Deploy and connect to Redis databases in Sealos DevBox"
}
Redis is a popular, open-source in-memory data structure store that can be used as a database, cache, message broker, and queue. In Sealos DevBox, you can easily set up and connect to Redis databases for your development projects.
## Deploy Redis in Sealos
Sealos makes it easy to deploy a Redis database with just a few clicks. Follow these steps:
From the Sealos desktop, click on the "Database" icon to open the Database app.

Click on the "Create New Database" button. In the deployment form:
* Select "Redis" as the database type.
* Choose the desired Redis version (e.g., redis-7.0.6).
* Enter a name for your database (use lowercase letters and numbers only).
* Adjust the CPU and Memory sliders to set the resources for your database.
* Set the number of replicas (1 for single-node development and testing).
* Specify the storage size (e.g., 1 Gi).

Review the projected cost on the left sidebar. Click the "Deploy" button in the top right corner to create your Redis database.
Once deployed, Sealos will provide you with the necessary connection details.

## Connect to Redis in DevBox
Here are examples of how to connect to your Redis database using different programming languages and frameworks within your DevBox environment:
file: ./content/docs/guides/databases/redis/java.en.mdx
meta: {
"title": "Java",
"description": "Learn how to connect to Redis databases in Sealos DevBox using Java"
}
This guide will walk you through the process of connecting to a Redis database using Java within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Java environment
* [A Redis database created using the Database app in Sealos](./)
## Project Setup
#### Create a new Maven project
In your Sealos DevBox terminal, initialize a new Maven project:
```bash
mvn archetype:generate -DgroupId=com.example -DartifactId=redis-java-example -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
mv redis-java-example/* .
rm -rf redis-java-example/
```
#### Project Structure
After setting up, your project structure should look like this:
```
/
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── com
    │   │       └── example
    │   │           ├── App.java
    │   │           ├── RedisConfig.java
    │   │           └── RedisConnection.java
    │   └── resources
    │       └── redis.properties
    └── test
        └── java
            └── com
                └── example
                    └── AppTest.java
```
#### Update pom.xml
Replace the content of your `pom.xml` file with the following:
```xml title="pom.xml"
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>redis-java-example</artifactId>
  <version>1.0-SNAPSHOT</version>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
  </properties>

  <dependencies>
    <dependency>
      <groupId>redis.clients</groupId>
      <artifactId>jedis</artifactId>
      <version>4.3.1</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>2.0.5</version>
    </dependency>
    <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <version>1.4.12</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.1</version>
        <configuration>
          <source>11</source>
          <target>11</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.2.4</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <mainClass>com.example.App</mainClass>
                </transformer>
              </transformers>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```
This `pom.xml` file includes the necessary dependencies (Jedis for Redis connectivity, SLF4J for logging) and configures the Maven Shade plugin to create an executable JAR.
#### Create a configuration file
Create a file named `redis.properties` in the `src/main/resources` directory:
```ini title="redis.properties"
redis.host=your_redis_host
redis.port=6379
redis.password=your_redis_password
```
Replace the placeholders with your actual Redis credentials from the Database app in Sealos.
#### Create Java classes
Create the following Java classes in the `src/main/java/com/example` directory:
1. `App.java`:
```java title="App.java"
package com.example;

import redis.clients.jedis.Jedis;

public class App {
    public static void main(String[] args) {
        try (Jedis jedis = RedisConnection.getConnection()) {
            System.out.println("Connected to Redis");

            // String operations
            jedis.set("mykey", "Hello from Sealos DevBox!");
            String value = jedis.get("mykey");
            System.out.println("Retrieved value: " + value);

            // List operations
            jedis.lpush("mylist", "element1", "element2", "element3");
            String listElement = jedis.lpop("mylist");
            System.out.println("Popped element from list: " + listElement);

            // Hash operations
            jedis.hset("myhash", "field1", "value1");
            jedis.hset("myhash", "field2", "value2");
            String hashValue = jedis.hget("myhash", "field1");
            System.out.println("Retrieved hash value: " + hashValue);
        } catch (Exception e) {
            System.err.println("Error connecting to Redis: " + e.getMessage());
        } finally {
            RedisConnection.closePool();
        }
    }
}
```
This is the main class that demonstrates basic Redis operations using Jedis:
* It sets and gets a string value.
* It pushes elements to a list and pops an element.
* It sets and gets hash values.
2. `RedisConfig.java`:
```java title="RedisConfig.java"
package com.example;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class RedisConfig {
    private static final Properties properties = new Properties();

    static {
        try (InputStream input = RedisConfig.class.getClassLoader().getResourceAsStream("redis.properties")) {
            if (input == null) {
                System.out.println("Sorry, unable to find redis.properties");
                System.exit(1);
            }
            properties.load(input);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static String getHost() {
        return properties.getProperty("redis.host");
    }

    public static int getPort() {
        return Integer.parseInt(properties.getProperty("redis.port"));
    }

    public static String getPassword() {
        return properties.getProperty("redis.password");
    }
}
```
This class loads the Redis connection details from the `redis.properties` file.
3. `RedisConnection.java`:
```java title="RedisConnection.java"
package com.example;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class RedisConnection {
    private static final JedisPool pool = new JedisPool(new JedisPoolConfig(),
            RedisConfig.getHost(),
            RedisConfig.getPort(),
            2000,
            RedisConfig.getPassword());

    public static Jedis getConnection() {
        return pool.getResource();
    }

    public static void closePool() {
        pool.close();
    }
}
```
This class manages the Redis connection pool using Jedis.
4. `AppTest.java` (in `src/test/java/com/example`):
```java title="AppTest.java"
package com.example;

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AppTest {
    @Test
    public void shouldAnswerWithTrue() {
        assertTrue(true);
    }
}
```
## Build and Run
To build and run the project, use the following commands in your terminal:
```bash
mvn clean package
java -jar target/redis-java-example-1.0-SNAPSHOT.jar
```
If everything is set up correctly, you should see output demonstrating the Redis operations.
## Best Practices
1. Use a properties file to store Redis connection details.
2. Implement a configuration class to load and provide access to Redis properties.
3. Use a connection pool for better performance and resource management.
4. Always close Redis connections after use (or use try-with-resources as shown in the example).
5. Handle exceptions appropriately and provide meaningful error messages.
6. Use Maven for dependency management and build automation.
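Point 4 hinges on try-with-resources, which works in `App.java` because `Jedis` implements `AutoCloseable`. A sketch with a stand-in resource shows why the connection can never leak:

```java
// Stand-in resource; redis.clients.jedis.Jedis implements Closeable the same way.
class FakeConnection implements AutoCloseable {
    boolean closed = false;
    @Override public void close() { closed = true; }
}

public class Main {
    public static void main(String[] args) {
        FakeConnection conn = new FakeConnection();
        try (FakeConnection c = conn) {
            // use the connection; close() runs even if this block throws
        }
        System.out.println(conn.closed); // true
    }
}
```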
## Troubleshooting
If you encounter connection issues:
1. Verify your Redis credentials in the `redis.properties` file.
2. Ensure your Redis database is running and accessible from your DevBox environment.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the Jedis dependency is correctly specified in your `pom.xml` file.
5. Make sure you're using the correct version of Java (11 in this example).
## Conclusion
This guide provided a complete example of how to set up a Java project with Maven to connect to a Redis database in the Sealos DevBox environment. It includes all the necessary steps, from project creation to running the application, along with best practices and troubleshooting tips.
For more detailed information on using Redis with Java, refer to the [Jedis GitHub repository](https://github.com/redis/jedis).
file: ./content/docs/guides/databases/redis/nodejs.en.mdx
meta: {
"title": "Node.js",
"description": "Comprehensive guide for using Redis with Node.js in Sealos - from basics to production deployment"
}
import { File, Folder, Files } from 'fumadocs-ui/components/files';
This comprehensive guide covers everything you need to know about using Redis with Node.js in Sealos DevBox, from basic operations to production-ready implementations.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Node.js environment
* [A Redis database created using the Database app in Sealos](./)
* Basic understanding of JavaScript and Node.js
## Project Setup & Structure
Let's start by setting up a complete project structure that we'll build upon throughout this guide.
### Initialize Your Project
In your Cursor terminal, install the necessary packages:
```bash
npm install redis dotenv @redis/json @redis/search @redis/time-series msgpack5
```
### Project Structure
Create the following directory structure:
```bash
# Create directories
mkdir src examples tests
# Create main files
touch .env package.json
touch src/redisClient.js
touch examples/basic-operations.js
touch tests/connection-test.js
```
Your project now has `src` for the shared Redis client, `examples` for demo scripts, and `tests` for connection checks.
### Environment Configuration
#### Set up environment variables
Create your `.env` file with your Sealos Redis credentials:
```ini title=".env"
# Basic Redis connection
REDIS_HOST=your_redis_host
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password
REDIS_DB=0
# Advanced settings (we'll use these later)
REDIS_TLS=false
REDIS_USERNAME=default
REDIS_CONNECT_TIMEOUT=10000
REDIS_COMMAND_TIMEOUT=5000
```
#### Create the main Redis client
Create `src/redisClient.js` - this will be our single source of truth for Redis connections:
```javascript title="src/redisClient.js"
const { createClient } = require('redis');
require('dotenv').config();

class RedisManager {
  constructor() {
    this.client = null;
    this.isConnected = false;
  }

  createClient() {
    const config = {
      url: `redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}`,
      password: process.env.REDIS_PASSWORD,
      database: parseInt(process.env.REDIS_DB) || 0,
      socket: {
        connectTimeout: parseInt(process.env.REDIS_CONNECT_TIMEOUT) || 60000,
        commandTimeout: parseInt(process.env.REDIS_COMMAND_TIMEOUT) || 5000,
        reconnectStrategy: (retries) => {
          if (retries > 20) {
            console.error('Too many reconnection attempts, giving up');
            return false;
          }
          // Exponential backoff with jitter (latest best practice)
          const jitter = Math.floor(Math.random() * 200);
          const delay = Math.min(Math.pow(2, retries) * 50, 2000);
          console.log(`Reconnecting in ${delay + jitter}ms...`);
          return delay + jitter;
        },
        keepAlive: 30000,
        noDelay: true
      }
    };

    // Add TLS if enabled
    if (process.env.REDIS_TLS === 'true') {
      config.socket.tls = true;
      config.socket.rejectUnauthorized = process.env.NODE_ENV === 'production';
    }

    this.client = createClient(config);
    this.setupEventHandlers();
    return this.client;
  }

  setupEventHandlers() {
    this.client.on('connect', () => {
      console.log('✅ Redis client connected');
    });

    this.client.on('ready', () => {
      console.log('✅ Redis client ready');
      this.isConnected = true;
    });

    this.client.on('error', (err) => {
      console.error('❌ Redis client error:', err.message);
      this.isConnected = false;
    });

    this.client.on('end', () => {
      console.log('🔌 Redis client disconnected');
      this.isConnected = false;
    });

    this.client.on('reconnecting', () => {
      console.log('🔄 Redis client reconnecting...');
      this.isConnected = false;
    });
  }

  async connect() {
    if (!this.client) {
      this.createClient();
    }
    if (!this.isConnected) {
      await this.client.connect();
    }
    return this.client;
  }

  async disconnect() {
    if (this.client && this.isConnected) {
      // Use destroy() instead of quit() for latest node-redis
      this.client.destroy();
    }
  }

  getClient() {
    if (!this.isConnected) {
      throw new Error('Redis client not connected. Call connect() first.');
    }
    return this.client;
  }

  async ping() {
    try {
      const result = await this.client.ping();
      return result === 'PONG';
    } catch (error) {
      console.error('Redis ping failed:', error);
      return false;
    }
  }
}

// Export singleton instance
const redisManager = new RedisManager();
module.exports = redisManager;
```
#### Test your connection
Create `tests/connection-test.js`:
```javascript title="tests/connection-test.js"
const redisManager = require('../src/redisClient');

async function testConnection() {
  try {
    console.log('🔍 Testing Redis connection...');

    // Connect to Redis
    await redisManager.connect();

    // Test ping
    const pingResult = await redisManager.ping();
    console.log('📡 Ping result:', pingResult ? 'SUCCESS' : 'FAILED');

    // Test basic operations
    const client = redisManager.getClient();
    await client.set('test:connection', 'Hello from Sealos!');
    const value = await client.get('test:connection');
    console.log('💾 Test value:', value);

    // Clean up test key
    await client.del('test:connection');

    console.log('✅ Connection test completed successfully!');
  } catch (error) {
    console.error('❌ Connection test failed:', error.message);
  } finally {
    await redisManager.disconnect();
  }
}

// Run the test
testConnection();
```
#### Run your first test
```bash
node tests/connection-test.js
```
You should see output like:
```
🔍 Testing Redis connection...
✅ Redis client connected
✅ Redis client ready
📡 Ping result: SUCCESS
💾 Test value: Hello from Sealos!
✅ Connection test completed successfully!
🔌 Redis client disconnected
```
## Basic Operations & Data Types
Now let's explore Redis data types with practical examples. Create `examples/basic-operations.js`:
```javascript title="examples/basic-operations.js"
const redisManager = require('../src/redisClient');

async function demonstrateBasicOperations() {
  try {
    await redisManager.connect();
    const client = redisManager.getClient();

    console.log('🚀 Starting basic operations demo...\n');

    // 1. Strings
    console.log('📝 STRING OPERATIONS:');
    await client.set('user:name', 'John Doe');
    await client.setEx('session:abc123', 3600, 'active'); // With TTL
    const name = await client.get('user:name');
    const session = await client.get('session:abc123');
    console.log(`Name: ${name}`);
    console.log(`Session: ${session}`);

    // Increment counter
    await client.incr('page:views');
    await client.incrBy('page:views', 5);
    const views = await client.get('page:views');
    console.log(`Page views: ${views}\n`);

    // 2. Hashes
    console.log('🗂️ HASH OPERATIONS:');
    await client.hSet('user:1001', {
      name: 'Alice Smith',
      email: 'alice@example.com',
      age: '28',
      city: 'New York'
    });
    const user = await client.hGetAll('user:1001');
    const userAge = await client.hGet('user:1001', 'age');
    console.log('User data:', user);
    console.log(`User age: ${userAge}\n`);

    // 3. Lists
    console.log('📋 LIST OPERATIONS:');
    await client.lPush('tasks', 'Task 1', 'Task 2', 'Task 3');
    await client.rPush('tasks', 'Task 4'); // Add to end
    const tasks = await client.lRange('tasks', 0, -1);
    const firstTask = await client.lPop('tasks');
    console.log('All tasks:', tasks);
    console.log(`Completed task: ${firstTask}\n`);

    // 4. Sets
    console.log('🎯 SET OPERATIONS:');
    await client.sAdd('tags:user:1001', 'developer', 'nodejs', 'redis');
    await client.sAdd('tags:user:1002', 'designer', 'css', 'nodejs');
    const userTags = await client.sMembers('tags:user:1001');
    const commonTags = await client.sInter('tags:user:1001', 'tags:user:1002');
    console.log('User 1001 tags:', userTags);
    console.log('Common tags:', commonTags);

    console.log('\n✅ Basic operations demo completed!');
  } catch (error) {
    console.error('❌ Error in basic operations:', error.message);
  } finally {
    await redisManager.disconnect();
  }
}

// Run the demo
demonstrateBasicOperations();
```
### Test Basic Operations
```bash
node examples/basic-operations.js
```
Expected output:
```
🚀 Starting basic operations demo...
📝 STRING OPERATIONS:
Name: John Doe
Session: active
Page views: 6
🗂️ HASH OPERATIONS:
User data: { name: 'Alice Smith', email: 'alice@example.com', age: '28', city: 'New York' }
User age: 28
📋 LIST OPERATIONS:
All tasks: [ 'Task 3', 'Task 2', 'Task 1', 'Task 4' ]
Completed task: Task 3
🎯 SET OPERATIONS:
User 1001 tags: [ 'nodejs', 'redis', 'developer' ]
Common tags: [ 'nodejs' ]
✅ Basic operations demo completed!
```
## Advanced Data Structures & Real-World Use Cases
Now let's build practical applications using Redis's advanced data structures. We'll create several examples that demonstrate real-world scenarios.
### Working with Sorted Sets - Leaderboard System
Create `examples/leaderboard.js`:
```javascript title="examples/leaderboard.js"
const redisManager = require('../src/redisClient');
class Leaderboard {
constructor(name) {
this.leaderboardKey = `leaderboard:${name}`;
}
async updateScore(userId, score, userData = {}) {
const client = redisManager.getClient();
// Update score in sorted set
await client.zAdd(this.leaderboardKey, { score, value: userId });
// Store additional user data if provided
if (Object.keys(userData).length > 0) {
await client.hSet(`user:${userId}`, userData);
}
console.log(`✅ Updated score for user ${userId}: ${score}`);
}
async getTopPlayers(count = 10) {
const client = redisManager.getClient();
// Highest scores first; node-redis exposes ZRANGE ... REV via zRangeWithScores
const players = await client.zRangeWithScores(this.leaderboardKey, 0, count - 1, {
REV: true
});
// Enrich with user data
const enrichedPlayers = [];
for (let i = 0; i < players.length; i++) {
const { value: userId, score } = players[i];
const userData = await client.hGetAll(`user:${userId}`);
enrichedPlayers.push({
rank: i + 1,
userId,
score,
name: userData.name || `User ${userId}`,
...userData
});
}
return enrichedPlayers;
}
async getUserRank(userId) {
const client = redisManager.getClient();
const rank = await client.zRevRank(this.leaderboardKey, userId);
const score = await client.zScore(this.leaderboardKey, userId);
return rank !== null ? { rank: rank + 1, score } : null;
}
}
async function demonstrateLeaderboard() {
try {
await redisManager.connect();
console.log('🏆 Starting leaderboard demo...\n');
const gameLeaderboard = new Leaderboard('game_scores');
// Add some players with scores
await gameLeaderboard.updateScore('player1', 1500, { name: 'Alice', level: 25 });
await gameLeaderboard.updateScore('player2', 2300, { name: 'Bob', level: 32 });
await gameLeaderboard.updateScore('player3', 1800, { name: 'Charlie', level: 28 });
await gameLeaderboard.updateScore('player4', 2100, { name: 'Diana', level: 30 });
await gameLeaderboard.updateScore('player5', 1200, { name: 'Eve', level: 22 });
console.log('\n🥇 Top 3 Players:');
const topPlayers = await gameLeaderboard.getTopPlayers(3);
topPlayers.forEach(player => {
console.log(`${player.rank}. ${player.name} - Score: ${player.score} (Level ${player.level})`);
});
// Check specific player rank
const aliceRank = await gameLeaderboard.getUserRank('player1');
console.log(`\n📊 Alice's rank: #${aliceRank.rank} with score ${aliceRank.score}`);
console.log('\n✅ Leaderboard demo completed!');
} catch (error) {
console.error('❌ Error in leaderboard demo:', error.message);
} finally {
await redisManager.disconnect();
}
}
// Export for use in other examples
module.exports = Leaderboard;
// Run demo if called directly
if (require.main === module) {
demonstrateLeaderboard();
}
```
### Working with Lists - Task Queue System
Create `examples/task-queue.js`:
```javascript title="examples/task-queue.js"
const redisManager = require('../src/redisClient');
class TaskQueue {
constructor(queueName = 'default_queue') {
this.queueName = queueName;
this.processingQueue = `${queueName}:processing`;
}
async addTask(taskData, priority = 'normal') {
const client = redisManager.getClient();
const task = {
id: `task_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`,
data: taskData,
createdAt: new Date().toISOString(),
priority,
attempts: 0
};
const queueKey = priority === 'high' ? `${this.queueName}:high` : this.queueName;
await client.lPush(queueKey, JSON.stringify(task));
console.log(`📝 Added ${priority} priority task: ${task.id}`);
return task.id;
}
async processTask(timeout = 5) {
const client = redisManager.getClient();
// Check high priority queue first
let result = await client.brPop(`${this.queueName}:high`, 0.1);
if (!result) {
// Then check normal priority queue
result = await client.brPop(this.queueName, timeout);
}
if (result) {
const task = JSON.parse(result.element);
// Move to processing queue for reliability
await client.lPush(this.processingQueue, JSON.stringify(task));
console.log(`⚡ Processing task: ${task.id}`);
return task;
}
return null;
}
async completeTask(taskId) {
const client = redisManager.getClient();
// Remove from processing queue
const processingTasks = await client.lRange(this.processingQueue, 0, -1);
for (let i = 0; i < processingTasks.length; i++) {
const task = JSON.parse(processingTasks[i]);
if (task.id === taskId) {
await client.lRem(this.processingQueue, 1, processingTasks[i]);
console.log(`✅ Completed task: ${taskId}`);
break;
}
}
}
async getQueueStats() {
const client = redisManager.getClient();
return {
pending: await client.lLen(this.queueName),
highPriority: await client.lLen(`${this.queueName}:high`),
processing: await client.lLen(this.processingQueue)
};
}
}
async function demonstrateTaskQueue() {
try {
await redisManager.connect();
console.log('📋 Starting task queue demo...\n');
const emailQueue = new TaskQueue('email_queue');
// Add various tasks
await emailQueue.addTask({
type: 'welcome_email',
recipient: 'user@example.com',
template: 'welcome'
});
await emailQueue.addTask({
type: 'password_reset',
recipient: 'admin@example.com',
token: 'abc123'
}, 'high');
await emailQueue.addTask({
type: 'newsletter',
recipients: ['user1@example.com', 'user2@example.com']
});
// Check queue stats
let stats = await emailQueue.getQueueStats();
console.log('📊 Queue stats:', stats);
// Process some tasks
console.log('\n🔄 Processing tasks...');
for (let i = 0; i < 3; i++) {
const task = await emailQueue.processTask();
if (task) {
// Simulate task processing
await new Promise(resolve => setTimeout(resolve, 1000));
await emailQueue.completeTask(task.id);
}
}
// Final stats
stats = await emailQueue.getQueueStats();
console.log('\n📊 Final queue stats:', stats);
console.log('\n✅ Task queue demo completed!');
} catch (error) {
console.error('❌ Error in task queue demo:', error.message);
} finally {
await redisManager.disconnect();
}
}
// Export for use in other examples
module.exports = TaskQueue;
// Run demo if called directly
if (require.main === module) {
demonstrateTaskQueue();
}
```
### Test Advanced Data Structures
Run the leaderboard example:
```bash
node examples/leaderboard.js
```
Run the task queue example:
```bash
node examples/task-queue.js
```
Expected output for leaderboard:
```
🏆 Starting leaderboard demo...
✅ Updated score for user player1: 1500
✅ Updated score for user player2: 2300
✅ Updated score for user player3: 1800
✅ Updated score for user player4: 2100
✅ Updated score for user player5: 1200
🥇 Top 3 Players:
1. Bob - Score: 2300 (Level 32)
2. Diana - Score: 2100 (Level 30)
3. Charlie - Score: 1800 (Level 28)
📊 Alice's rank: #4 with score 1500
✅ Leaderboard demo completed!
```
## Advanced Caching Strategies
Let's implement different caching patterns that you can use in production applications.
### Cache-Aside Pattern (Lazy Loading)
Create `examples/cache-patterns.js`:
```javascript title="examples/cache-patterns.js"
const redisManager = require('../src/redisClient');
class CacheAsideService {
constructor(ttl = 3600) {
this.ttl = ttl; // Time to live in seconds
}
async get(key, fetchFunction) {
try {
const client = redisManager.getClient();
// Try to get from cache first
const cached = await client.get(key);
if (cached) {
console.log(`🎯 Cache HIT for key: ${key}`);
return JSON.parse(cached);
}
console.log(`❌ Cache MISS for key: ${key}`);
// Cache miss - fetch from source
const data = await fetchFunction();
// Store in cache with TTL
await client.setEx(key, this.ttl, JSON.stringify(data));
console.log(`💾 Cached data for key: ${key}`);
return data;
} catch (error) {
console.error('Cache-aside error:', error);
// Fallback to direct fetch if cache fails
return await fetchFunction();
}
}
async invalidate(key) {
const client = redisManager.getClient();
await client.del(key);
console.log(`🗑️ Invalidated cache for key: ${key}`);
}
async invalidatePattern(pattern) {
const client = redisManager.getClient();
const keys = await client.keys(pattern);
if (keys.length > 0) {
await client.del(keys);
console.log(`🗑️ Invalidated ${keys.length} keys matching pattern: ${pattern}`);
}
}
}
// Simulate database operations
const mockDatabase = {
async getUserById(userId) {
console.log(`🔍 Fetching user ${userId} from database...`);
// Simulate database delay
await new Promise(resolve => setTimeout(resolve, 500));
return {
id: userId,
name: `User ${userId}`,
email: `user${userId}@example.com`,
createdAt: new Date().toISOString()
};
},
async getProductById(productId) {
console.log(`🔍 Fetching product ${productId} from database...`);
await new Promise(resolve => setTimeout(resolve, 300));
return {
id: productId,
name: `Product ${productId}`,
price: Math.floor(Math.random() * 1000) + 10,
category: 'Electronics'
};
}
};
async function demonstrateCacheAside() {
try {
await redisManager.connect();
console.log('🚀 Starting Cache-Aside pattern demo...\n');
const cache = new CacheAsideService(1800); // 30 minutes TTL
// Function to get user with caching
async function getUserProfile(userId) {
return await cache.get(`user:${userId}`, async () => {
return await mockDatabase.getUserById(userId);
});
}
// First call - cache miss
console.log('📞 First call to getUserProfile(1001):');
let user = await getUserProfile('1001');
console.log('👤 User data:', user);
console.log('\n📞 Second call to getUserProfile(1001):');
// Second call - cache hit
user = await getUserProfile('1001');
console.log('👤 User data:', user);
// Invalidate and try again
console.log('\n🗑️ Invalidating user cache...');
await cache.invalidate('user:1001');
console.log('\n📞 Third call after invalidation:');
user = await getUserProfile('1001');
console.log('👤 User data:', user);
console.log('\n✅ Cache-Aside demo completed!');
} catch (error) {
console.error('❌ Error in cache-aside demo:', error.message);
} finally {
await redisManager.disconnect();
}
}
// Export for use in other examples
module.exports = { CacheAsideService, mockDatabase };
// Run demo if called directly
if (require.main === module) {
demonstrateCacheAside();
}
```
### Write-Through Pattern
Add this to `examples/cache-patterns.js` (append to the file):
```javascript title="examples/cache-patterns.js (continued)"
class WriteThroughCache {
constructor(ttl = 3600) {
this.ttl = ttl;
}
async set(key, data, saveFunction) {
try {
const client = redisManager.getClient();
// Write to database first
await saveFunction(data);
console.log(`💾 Saved data to database for key: ${key}`);
// Then update cache
await client.setEx(key, this.ttl, JSON.stringify(data));
console.log(`🎯 Updated cache for key: ${key}`);
return data;
} catch (error) {
console.error('Write-through error:', error);
throw error; // Re-throw to maintain transaction integrity
}
}
async get(key) {
const client = redisManager.getClient();
const cached = await client.get(key);
return cached ? JSON.parse(cached) : null;
}
async update(key, updates, updateFunction) {
try {
const client = redisManager.getClient();
// Update database first
const updatedData = await updateFunction(updates);
console.log(`🔄 Updated database for key: ${key}`);
// Update cache
await client.setEx(key, this.ttl, JSON.stringify(updatedData));
console.log(`🎯 Updated cache for key: ${key}`);
return updatedData;
} catch (error) {
console.error('Write-through update error:', error);
throw error;
}
}
}
async function demonstrateWriteThrough() {
try {
await redisManager.connect();
console.log('🚀 Starting Write-Through pattern demo...\n');
const writeCache = new WriteThroughCache(3600);
// Create new user
const newUser = {
id: '2001',
name: 'John Smith',
email: 'john.smith@example.com',
role: 'admin'
};
console.log('📝 Creating new user with write-through:');
await writeCache.set('user:2001', newUser, async (userData) => {
// Simulate database save
console.log('💾 Saving to database:', userData.name);
await new Promise(resolve => setTimeout(resolve, 200));
});
// Read from cache
console.log('\n📖 Reading user from cache:');
const cachedUser = await writeCache.get('user:2001');
console.log('👤 Cached user:', cachedUser);
// Update user
console.log('\n🔄 Updating user with write-through:');
await writeCache.update('user:2001', { role: 'super_admin' }, async (updates) => {
// Simulate database update
console.log('🔄 Updating database with:', updates);
await new Promise(resolve => setTimeout(resolve, 200));
return { ...cachedUser, ...updates };
});
// Read updated data
const updatedUser = await writeCache.get('user:2001');
console.log('👤 Updated user:', updatedUser);
console.log('\n✅ Write-Through demo completed!');
} catch (error) {
console.error('❌ Error in write-through demo:', error.message);
} finally {
await redisManager.disconnect();
}
}
// Add to exports
module.exports = { CacheAsideService, WriteThroughCache, mockDatabase };
```
### Test Caching Patterns
```bash
node examples/cache-patterns.js
```
Expected output:
```
🚀 Starting Cache-Aside pattern demo...
📞 First call to getUserProfile(1001):
❌ Cache MISS for key: user:1001
🔍 Fetching user 1001 from database...
💾 Cached data for key: user:1001
👤 User data: { id: '1001', name: 'User 1001', email: 'user1001@example.com', createdAt: '...' }
📞 Second call to getUserProfile(1001):
🎯 Cache HIT for key: user:1001
👤 User data: { id: '1001', name: 'User 1001', email: 'user1001@example.com', createdAt: '...' }
🗑️ Invalidating user cache...
🗑️ Invalidated cache for key: user:1001
📞 Third call after invalidation:
❌ Cache MISS for key: user:1001
🔍 Fetching user 1001 from database...
💾 Cached data for key: user:1001
✅ Cache-Aside demo completed!
```
## Redis Stack Integration
Redis Stack provides powerful modules for JSON, Search, and TimeSeries. Let's explore how to use them with our Node.js application.
### Working with RedisJSON
Create `examples/redis-json.js`:
```javascript title="examples/redis-json.js"
const redisManager = require('../src/redisClient');
class RedisJSONService {
async setDocument(key, document) {
const client = redisManager.getClient();
// Use the native JSON.SET command (latest node-redis supports JSON module)
await client.json.set(key, '$', document);
console.log(`📄 Stored JSON document: ${key}`);
}
async getDocument(key) {
const client = redisManager.getClient();
return await client.json.get(key);
}
async updateField(key, path, value) {
const client = redisManager.getClient();
await client.json.set(key, path, value);
console.log(`🔄 Updated field ${path} in document: ${key}`);
}
async getField(key, path) {
const client = redisManager.getClient();
return await client.json.get(key, { path });
}
async appendToArray(key, path, ...values) {
const client = redisManager.getClient();
return await client.json.arrAppend(key, path, ...values);
}
async incrementNumber(key, path, increment = 1) {
const client = redisManager.getClient();
return await client.json.numIncrBy(key, path, increment);
}
}
async function demonstrateRedisJSON() {
try {
await redisManager.connect();
console.log('📄 Starting RedisJSON demo...\n');
const jsonService = new RedisJSONService();
// Store user profile as JSON document
const userProfile = {
id: 1001,
name: 'John Doe',
email: 'john@example.com',
preferences: {
theme: 'dark',
notifications: true,
language: 'en'
},
tags: ['premium', 'early-adopter'],
level: 25,
metadata: {
createdAt: new Date().toISOString(),
lastLogin: new Date().toISOString()
}
};
await jsonService.setDocument('user:1001', userProfile);
// Get entire document
console.log('📖 Retrieved user profile:');
const retrievedProfile = await jsonService.getDocument('user:1001');
console.log(JSON.stringify(retrievedProfile, null, 2));
// Update nested field
console.log('\n🔄 Updating theme preference...');
await jsonService.updateField('user:1001', '$.preferences.theme', 'light');
// Get specific field
const theme = await jsonService.getField('user:1001', '$.preferences.theme');
console.log(`🎨 Current theme: ${theme}`);
// Add new tag to array
console.log('\n🏷️ Adding new tag...');
await jsonService.appendToArray('user:1001', '$.tags', 'beta-tester');
// Get updated tags
const tags = await jsonService.getField('user:1001', '$.tags');
console.log('🏷️ Updated tags:', tags);
// Increment a numeric field
console.log('\n🔢 Incrementing user level...');
await jsonService.incrementNumber('user:1001', '$.level', 1);
const level = await jsonService.getField('user:1001', '$.level');
console.log('📈 New level:', level);
console.log('\n✅ RedisJSON demo completed!');
} catch (error) {
console.error('❌ Error in RedisJSON demo:', error.message);
} finally {
await redisManager.disconnect();
}
}
// Export for use in other examples
module.exports = RedisJSONService;
// Run demo if called directly
if (require.main === module) {
demonstrateRedisJSON();
}
```
### Working with RediSearch
Create `examples/redis-search.js`:
```javascript title="examples/redis-search.js"
const redisManager = require('../src/redisClient');
const { SchemaFieldTypes } = require('redis');
class RedisSearchService {
async createIndex(indexName, schema, options = {}) {
const client = redisManager.getClient();
try {
// Use the native ft.create method (latest node-redis supports Search module)
await client.ft.create(indexName, schema, {
ON: 'JSON',
PREFIX: 'product:',
...options
});
console.log(`🔍 Created search index: ${indexName}`);
} catch (error) {
if (!error.message.includes('Index already exists')) {
throw error;
}
console.log(`🔍 Index ${indexName} already exists`);
}
}
async indexDocument(key, document) {
const client = redisManager.getClient();
await client.json.set(key, '$', document);
console.log(`📄 Indexed document: ${key}`);
}
async search(indexName, query, options = {}) {
const client = redisManager.getClient();
return await client.ft.search(indexName, query, options);
}
async aggregate(indexName, query, options = {}) {
const client = redisManager.getClient();
return await client.ft.aggregate(indexName, query, options);
}
}
async function demonstrateRediSearch() {
try {
await redisManager.connect();
console.log('🔍 Starting RediSearch demo...\n');
const searchService = new RedisSearchService();
// Create product search index using latest schema format
await searchService.createIndex('idx:products', {
'$.name': {
type: SchemaFieldTypes.TEXT,
AS: 'name',
SORTABLE: true
},
'$.description': {
type: SchemaFieldTypes.TEXT,
AS: 'description'
},
'$.price': {
type: SchemaFieldTypes.NUMERIC,
AS: 'price'
},
'$.category': {
type: SchemaFieldTypes.TAG,
AS: 'category'
},
'$.rating': {
type: SchemaFieldTypes.NUMERIC,
AS: 'rating'
}
});
// Index some products
const products = [
{
id: 1,
name: 'Wireless Headphones',
description: 'High-quality wireless headphones with noise cancellation',
price: 199.99,
category: 'electronics',
rating: 4.5
},
{
id: 2,
name: 'Smart Watch',
description: 'Feature-rich smartwatch with health monitoring',
price: 299.99,
category: 'electronics',
rating: 4.2
},
{
id: 3,
name: 'Coffee Maker',
description: 'Automatic coffee maker with programmable timer',
price: 89.99,
category: 'appliances',
rating: 4.0
}
];
for (const product of products) {
await searchService.indexDocument(`product:${product.id}`, product);
}
console.log('\n🔍 Searching for "wireless":');
let results = await searchService.search('idx:products', 'wireless');
console.log(`Found ${results.total} results:`);
results.documents.forEach((doc) => {
console.log(`- ${doc.value.name}: $${doc.value.price}`);
});
console.log('\n🔍 Searching electronics under $250:');
results = await searchService.search('idx:products', '@category:{electronics} @price:[0 250]');
console.log(`Found ${results.total} results:`);
results.documents.forEach((doc) => {
console.log(`- ${doc.value.name}: $${doc.value.price} (${doc.value.category})`);
});
console.log('\n✅ RediSearch demo completed!');
} catch (error) {
console.error('❌ Error in RediSearch demo:', error.message);
} finally {
await redisManager.disconnect();
}
}
// Export for use in other examples
module.exports = RedisSearchService;
// Run demo if called directly
if (require.main === module) {
demonstrateRediSearch();
}
```
### Test Redis Stack Features
Run the RedisJSON example:
```bash
node examples/redis-json.js
```
Run the RediSearch example:
```bash
node examples/redis-search.js
```
**Note**: Redis Stack features require a Redis Stack installation. If you're using standard Redis, these examples will show how the commands work, but you'll need Redis Stack for full functionality.
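If you want your application to degrade gracefully instead of failing on a plain Redis server, you can probe for a module at startup. This is a sketch under two assumptions (neither is part of this guide's project files): you are using the bundled `redis` package, which exposes `client.json` on the client side regardless of what the server supports, and a server without the module replies with an "unknown command" error:

```javascript
// Illustrative helper: returns true if the server can execute RedisJSON commands
async function hasRedisJSON(client) {
  try {
    // JSON.GET on a missing key returns null when the module is loaded
    await client.json.get('probe:nonexistent');
    return true;
  } catch (error) {
    if (/unknown command/i.test(error.message)) return false;
    throw error; // some other failure (network, auth, ...)
  }
}
```

Call this once after connecting and branch to a plain-string fallback (e.g. `JSON.stringify` into a regular key) when it returns `false`.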
### Working with Connection Pooling (Latest v5+ Features)
For high-performance applications, use the latest connection pooling features:
```javascript title="examples/connection-pool.js"
const { createClientPool } = require('redis');
require('dotenv').config();
async function demonstrateConnectionPool() {
try {
console.log('🏊 Starting connection pool demo...\n');
// Create a connection pool (v5+ feature)
const pool = await createClientPool({
url: `redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}`,
password: process.env.REDIS_PASSWORD,
database: parseInt(process.env.REDIS_DB, 10) || 0
})
.on('error', err => console.error('Redis Client Pool Error', err))
.connect();
console.log('✅ Connection pool created and connected');
// Execute commands directly on the pool
await pool.ping();
console.log('📡 Pool ping successful');
// Use pool for multiple operations
const operations = [];
for (let i = 0; i < 10; i++) {
operations.push(pool.set(`pool:test:${i}`, `value-${i}`));
}
await Promise.all(operations);
console.log('💾 Stored 10 keys using pool');
// Read back the values
const values = [];
for (let i = 0; i < 10; i++) {
values.push(await pool.get(`pool:test:${i}`));
}
console.log('📖 Retrieved values:', values.slice(0, 3), '...');
// Clean up
for (let i = 0; i < 10; i++) {
await pool.del(`pool:test:${i}`);
}
await pool.destroy();
console.log('🧹 Pool destroyed');
console.log('\n✅ Connection pool demo completed!');
} catch (error) {
console.error('❌ Error in connection pool demo:', error.message);
}
}
// Export for use in other examples
module.exports = { demonstrateConnectionPool };
// Run demo if called directly
if (require.main === module) {
demonstrateConnectionPool();
}
```
### Test Connection Pool
```bash
node examples/connection-pool.js
```
## Comprehensive Troubleshooting Guide
### Common Connection Issues
#### Issue: "ECONNREFUSED" Error
**Symptoms:**
```
Error: connect ECONNREFUSED 127.0.0.1:6379
```
**Solutions:**
1. **Check Redis server status:**
```bash
# In Sealos terminal
redis-cli ping
```
2. **Verify connection parameters:**
```javascript
// Debug connection
const client = redis.createClient({
url: `redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}`,
password: process.env.REDIS_PASSWORD,
socket: {
connectTimeout: 60000
}
});
client.on('error', (err) => {
console.error('Detailed error:', err);
});
```
3. **Check environment variables:**
```javascript
console.log('Redis config:', {
host: process.env.REDIS_HOST,
port: process.env.REDIS_PORT,
hasPassword: !!process.env.REDIS_PASSWORD
});
```
#### Issue: "WRONGPASS" Authentication Error
**Symptoms:**
```
ReplyError: WRONGPASS invalid username-password pair
```
**Solutions:**
1. **Verify password in environment:**
```bash
echo $REDIS_PASSWORD
```
2. **Test authentication manually:**
```bash
redis-cli -h your_host -p 6379 -a your_password ping
```
3. **Handle authentication in code:**
```javascript
const client = redis.createClient({
url: `redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}`,
password: process.env.REDIS_PASSWORD,
// For Redis with ACL users
username: process.env.REDIS_USERNAME || 'default'
});
```
#### Issue: "Socket closed unexpectedly"
**Symptoms:**
```
Error: Socket closed unexpectedly
```
**Solutions:**
1. **Implement robust reconnection:**
```javascript
const client = redis.createClient({
socket: {
reconnectStrategy: (retries) => {
if (retries > 20) return false;
return Math.min(retries * 50, 500);
},
connectTimeout: 60000
}
});
```
2. **Handle connection events:**
```javascript
client.on('end', () => {
console.log('Connection ended, attempting to reconnect...');
});
client.on('reconnecting', () => {
console.log('Reconnecting to Redis...');
});
```
### Performance Issues
#### Issue: Slow Response Times
**Diagnosis:**
```javascript
// Monitor command execution time
const originalSend = client.sendCommand;
client.sendCommand = function(...args) {
const start = Date.now();
const result = originalSend.apply(this, args);
result.then(() => {
const duration = Date.now() - start;
if (duration > 100) { // Log slow commands
console.warn(`Slow Redis command: ${args[0]} took ${duration}ms`);
}
});
return result;
};
```
**Solutions:**
1. **Use pipelining for bulk operations:**
```javascript
// Instead of multiple individual commands
const pipeline = client.multi();
for (let i = 0; i < 1000; i++) {
pipeline.set(`key:${i}`, `value:${i}`);
}
await pipeline.exec();
```
2. **Optimize data structures:**
```javascript
// Use appropriate data types
// For counters: use INCR instead of GET/SET
await client.incr('page_views');
// For bulk data: use a single HSET with an object instead of multiple SET calls
await client.hSet('user:1001', {
name: 'John',
email: 'john@example.com',
age: '30'
});
```
#### Issue: Memory Usage Problems
**Diagnosis:**
```javascript
async function diagnoseMemory() {
const info = await client.info('memory');
const stats = await client.info('stats');
console.log('Memory info:', info);
console.log('Stats:', stats);
// Check for memory leaks
const keyCount = await client.dbSize();
console.log('Total keys:', keyCount);
}
```
**Solutions:**
1. **Set appropriate TTL:**
```javascript
// Always set expiration for cache data
await client.setEx('cache:user:1001', 3600, userData);
// Use EXPIRE for existing keys
await client.expire('session:abc123', 1800);
```
2. **Clean up unused keys:**
```javascript
async function cleanupOldKeys() {
let cursor = '0';
do {
const result = await client.scan(cursor, {
MATCH: 'temp:*',
COUNT: 100
});
if (result.keys.length > 0) {
await client.del(result.keys);
}
// Cursor type varies between node-redis versions; compare as a string
cursor = result.cursor;
} while (String(cursor) !== '0');
}
```
### Data Consistency Issues
#### Issue: Race Conditions
**Problem:** Multiple clients modifying the same data simultaneously.
**Solution - Optimistic Locking:**
```javascript
const { WatchError } = require('redis');
async function updateUserBalance(userId, amount) {
const key = `user:${userId}:balance`;
while (true) {
// Watch the key for changes
await client.watch(key);
const currentBalance = parseFloat(await client.get(key) || '0');
const newBalance = currentBalance + amount;
try {
// EXEC aborts if the watched key was modified; node-redis reports
// this by throwing a WatchError rather than returning null
await client.multi().set(key, newBalance.toString()).exec();
return newBalance;
} catch (error) {
if (error instanceof WatchError) {
continue; // another client changed the key, retry
}
throw error;
}
}
}
```
**Solution - Lua Scripts for Atomicity:**
```javascript
const updateBalanceScript = `
local key = KEYS[1]
local amount = tonumber(ARGV[1])
local current = tonumber(redis.call('GET', key) or 0)
local new_balance = current + amount
if new_balance >= 0 then
redis.call('SET', key, new_balance)
return new_balance
else
return -1
end
`;
async function updateBalanceAtomic(userId, amount) {
const result = await client.eval(updateBalanceScript, {
keys: [`user:${userId}:balance`],
arguments: [amount.toString()]
});
if (result === -1) {
throw new Error('Insufficient balance');
}
return result;
}
```
## Best Practices Summary
### Development Best Practices
1. **Always use environment variables** for configuration
2. **Implement proper error handling** with specific error types
3. **Use connection pooling** for high-traffic applications
4. **Set appropriate TTL** for all cached data
5. **Use pipelining** for bulk operations
6. **Implement health checks** for production deployments
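Health checks (item 6) are the one practice in this list the guide has not yet shown. A check can be as small as a timed `PING`; this sketch is illustrative rather than part of the project files, and it accepts any object exposing `ping()` so it works with both a single client and a pool:

```javascript
// Illustrative health check: reports round-trip latency, or the failure reason
async function checkRedisHealth(client) {
  const start = Date.now();
  try {
    await client.ping();
    return { status: 'healthy', latencyMs: Date.now() - start };
  } catch (error) {
    return { status: 'unhealthy', error: error.message };
  }
}
```

Expose the result from an HTTP endpoint (e.g. a `/healthz` route) so your orchestrator can restart the app when Redis becomes unreachable.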
### Security Best Practices
1. **Enable TLS/SSL** in production
2. **Use strong passwords** and consider ACL users
3. **Restrict network access** to Redis instances
4. **Regularly update** Redis and client libraries
5. **Monitor for suspicious activity**
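Enabling TLS (item 1) is a client-configuration change. The sketch below shows the general node-redis shape; the certificate path and environment variable names are placeholders, and how TLS is terminated depends on your Redis deployment:

```
const { createClient } = require('redis');
const fs = require('fs');

// Sketch: TLS-enabled connection (equivalently, use a rediss:// URL)
const client = createClient({
  socket: {
    host: process.env.REDIS_HOST,
    port: parseInt(process.env.REDIS_PORT, 10),
    tls: true,
    // ca: fs.readFileSync('/path/to/ca.crt') // only needed for self-signed CAs
  },
  username: process.env.REDIS_USERNAME || 'default',
  password: process.env.REDIS_PASSWORD
});
```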
### Performance Best Practices
1. **Choose appropriate data structures** for your use case
2. **Use Lua scripts** for complex atomic operations
3. **Monitor memory usage** and implement cleanup strategies
4. **Optimize serialization** for large objects
5. **Use Redis Stack modules** when appropriate
### Production Best Practices
1. **Implement comprehensive monitoring**
2. **Set up proper logging** and alerting
3. **Use container orchestration** for scalability
4. **Plan for disaster recovery**
5. **Regular performance testing** and optimization
## Complete Application Example
Let's create a comprehensive example that combines multiple Redis features. Create `examples/complete-app.js`:
```javascript title="examples/complete-app.js"
const redisManager = require('../src/redisClient');
const TaskQueue = require('./task-queue');
const Leaderboard = require('./leaderboard');
const { CacheAsideService } = require('./cache-patterns');
class RedisApplication {
constructor() {
this.taskQueue = new TaskQueue('app_tasks');
this.leaderboard = new Leaderboard('user_scores');
this.cache = new CacheAsideService(3600);
}
async initialize() {
await redisManager.connect();
console.log('🚀 Redis Application initialized');
}
async simulateUserActivity() {
console.log('\n👥 Simulating user activity...');
// Add some users to leaderboard
const users = [
{ id: 'user1', name: 'Alice', score: 1500 },
{ id: 'user2', name: 'Bob', score: 2300 },
{ id: 'user3', name: 'Charlie', score: 1800 },
{ id: 'user4', name: 'Diana', score: 2100 }
];
for (const user of users) {
await this.leaderboard.updateScore(user.id, user.score, { name: user.name });
// Add task to send welcome email
await this.taskQueue.addTask({
        type: 'welcome_email',
        userId: user.id,
        email: `${user.name.toLowerCase()}@example.com`
      });
    }
  }

  async processBackgroundTasks() {
    console.log('\n⚡ Processing background tasks...');
    for (let i = 0; i < 4; i++) {
      const task = await this.taskQueue.processTask(1);
      if (task) {
        console.log(`📧 Sending ${task.data.type} to ${task.data.email}`);
        // Simulate email sending
        await new Promise(resolve => setTimeout(resolve, 500));
        await this.taskQueue.completeTask(task.id);
      }
    }
  }

  async showLeaderboard() {
    console.log('\n🏆 Current Leaderboard:');
    const topPlayers = await this.leaderboard.getTopPlayers(5);
    topPlayers.forEach(player => {
      console.log(`${player.rank}. ${player.name} - ${player.score} points`);
    });
  }

  async demonstrateCache() {
    console.log('\n💾 Cache demonstration:');

    // Simulate expensive operation
    const expensiveOperation = async (id) => {
      console.log(`🔄 Performing expensive calculation for ${id}...`);
      await new Promise(resolve => setTimeout(resolve, 1000));
      return { id, result: Math.random() * 1000, timestamp: new Date().toISOString() };
    };

    // First call - cache miss
    console.log('First call (cache miss):');
    const result1 = await this.cache.get('calculation:1', () => expensiveOperation('calc1'));
    console.log('Result:', result1);

    // Second call - cache hit
    console.log('\nSecond call (cache hit):');
    const result2 = await this.cache.get('calculation:1', () => expensiveOperation('calc1'));
    console.log('Result:', result2);
  }

  async getApplicationStats() {
    const client = redisManager.getClient();
    const stats = {
      queueStats: await this.taskQueue.getQueueStats(),
      totalKeys: await client.dbSize(),
      memoryUsage: await client.info('memory'),
      topPlayer: await this.leaderboard.getTopPlayers(1)
    };
    return stats;
  }

  async cleanup() {
    await redisManager.disconnect();
    console.log('🧹 Application cleanup completed');
  }
}

async function runCompleteExample() {
  const app = new RedisApplication();
  try {
    await app.initialize();
    await app.simulateUserActivity();
    await app.processBackgroundTasks();
    await app.showLeaderboard();
    await app.demonstrateCache();

    console.log('\n📊 Application Statistics:');
    const stats = await app.getApplicationStats();
    console.log('Queue stats:', stats.queueStats);
    console.log('Total Redis keys:', stats.totalKeys);
    console.log('Top player:', stats.topPlayer[0]);

    console.log('\n✅ Complete application example finished!');
  } catch (error) {
    console.error('❌ Application error:', error.message);
  } finally {
    await app.cleanup();
  }
}

// Run the complete example
if (require.main === module) {
  runCompleteExample();
}

module.exports = RedisApplication;
```
### Run All Examples
Now you can test all the examples we've built:
```bash
# Test connection
node tests/connection-test.js

# Basic operations
node examples/basic-operations.js

# Advanced data structures
node examples/leaderboard.js
node examples/task-queue.js

# Caching patterns
node examples/cache-patterns.js

# Redis Stack features (if available)
node examples/redis-json.js
node examples/redis-search.js

# Connection pooling (v5+ features)
node examples/connection-pool.js

# Complete application
node examples/complete-app.js
```
### Project Structure Summary
Your final project structure should look like this:
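Based on the files created in this guide, the layout should look roughly like the sketch below (a `package.json` and any shared config modules are assumed to sit at the project root):

```
your-project/
├── package.json
├── tests/
│   └── connection-test.js
└── examples/
    ├── basic-operations.js
    ├── leaderboard.js
    ├── task-queue.js
    ├── cache-patterns.js
    ├── redis-json.js
    ├── redis-search.js
    ├── connection-pool.js
    └── complete-app.js
```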
## Next Steps
Now that you have a solid foundation with Redis and Node.js, consider exploring:
1. **Production Deployment**: Implement monitoring, logging, and error handling
2. **Performance Optimization**: Use pipelining, connection pooling, and memory optimization
3. **Security**: Enable TLS, implement proper authentication, and secure your Redis instance
4. **Scaling**: Explore Redis Cluster, Sentinel, and horizontal scaling strategies
5. **Integration**: Connect with your favorite Node.js frameworks (Express, NestJS, Fastify)
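As a starting point for the pipelining item above, here is a minimal sketch using node-redis's `multi()` to batch several writes into a single round trip. The `client` and `entries` names are illustrative; `client` is assumed to be an already-connected instance from `createClient()`:

```javascript
// Sketch: batch several SET commands with MULTI/EXEC (node-redis).
// Commands are queued locally and sent together on exec().
async function batchWrite(client, entries) {
  const multi = client.multi();
  for (const [key, value] of Object.entries(entries)) {
    multi.set(key, value); // queued, not sent yet
  }
  return multi.exec(); // one round trip for all queued commands
}

module.exports = { batchWrite };
```

Batching like this avoids paying one network round trip per command, which is usually the dominant cost for small writes.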
For more detailed information and updates, refer to the [official node-redis documentation](https://github.com/redis/node-redis) and [Redis Stack documentation](https://redis.io/docs/stack/).
file: ./content/docs/guides/databases/redis/php.en.mdx
meta: {
"title": "PHP",
"description": "Learn how to connect to Redis databases in Sealos DevBox using PHP"
}
This guide will walk you through the process of connecting to a Redis database using PHP within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with PHP environment
* [A Redis database created using the Database app in Sealos](./)
## Install Required Extensions
In your Cursor terminal, ensure that the Redis extension for PHP is installed:
```bash
sudo apt-get update
sudo apt-get install php-redis -y
```
## Connection Setup
#### Create a Configuration File
First, let's create a configuration file to store our Redis connection parameters. Create a file named `config.php` in your project directory with the following content:
```php
<?php
return [
    'host' => 'your_redis_host',
    'port' => 6379,
    'password' => 'your_redis_password'
];
```
Replace the placeholders with your actual Redis credentials from the Database app in Sealos.
#### Create a Redis Connection Function
Next, let's create a PHP file that will handle the Redis connection. Create a file named `redis_connect.php` with the following content:
```php
<?php
function getRedisConnection() {
    $config = require 'config.php';

    try {
        $redis = new Redis();
        $redis->connect($config['host'], $config['port']);
        if (isset($config['password']) && !empty($config['password'])) {
            $redis->auth($config['password']);
        }
        echo "Connected successfully to Redis.\n";
        return $redis;
    } catch (Exception $e) {
        die("Connection failed: " . $e->getMessage());
    }
}
```
This function reads the configuration from `config.php` and establishes a connection to the Redis database.
#### Create a Test Script
Now, let's create a test script to verify our connection and perform some basic Redis operations. Create a file named `test_redis.php` with the following content:
```php
<?php
require_once 'redis_connect.php';

$redis = getRedisConnection();

// Set a key
$redis->set('mykey', 'Hello from Sealos DevBox!');
echo "Key set successfully.\n";

// Get a key
$value = $redis->get('mykey');
echo "Retrieved value: " . $value . "\n";

// Set a hash
$redis->hSet('myhash', 'field1', 'value1');
$redis->hSet('myhash', 'field2', 'value2');
echo "Hash set successfully.\n";

// Get hash fields
$hashValue = $redis->hGetAll('myhash');
echo "Retrieved hash: " . print_r($hashValue, true) . "\n";

// Close the connection
$redis->close();
echo "Redis connection closed.\n";
```
This script demonstrates setting and getting a key, as well as working with Redis hashes.
## Usage
To run the test script, use the following command in your Cursor terminal:
```bash
php test_redis.php
```
This will execute the script, demonstrating the connection to Redis and basic operations.
## Best Practices
1. Use environment variables or a separate configuration file for Redis credentials.
2. Handle potential errors using try-catch blocks.
3. Close the Redis connection after operations are complete.
4. Use Redis transactions for operations that need to be atomic.
5. Consider using a connection pool for better performance in production environments.
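For the transactions recommendation above, a minimal sketch with phpredis's `multi()`/`exec()` might look like this (the key name and the surrounding `getRedisConnection()` helper are illustrative, taken from the connection function earlier in this guide):

```php
<?php
// Sketch: an atomic read-modify-write with MULTI/EXEC (phpredis).
require_once 'redis_connect.php';

$redis = getRedisConnection();

$redis->multi();                    // start queuing commands
$redis->incr('page:views');         // queued, not executed yet
$redis->expire('page:views', 3600); // queued
$results = $redis->exec();          // both commands apply together

print_r($results);
```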
## Troubleshooting
If you encounter connection issues:
1. Verify your Redis credentials in the `config.php` file.
2. Ensure your Redis database is running and accessible.
3. Check for any network restrictions in your DevBox environment.
4. Confirm that the `php-redis` extension is correctly installed.
For more detailed information on using Redis with PHP, refer to the [official PHP Redis documentation](https://github.com/phpredis/phpredis).
file: ./content/docs/guides/databases/redis/python.en.mdx
meta: {
"title": "Python",
"description": "Learn how to connect to Redis databases in Sealos DevBox using Python"
}
This guide will walk you through the process of connecting to a Redis database using Python within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Python environment
* [A Redis database created using the Database app in Sealos](./)
## Activating the Python Environment
Before you start, you need to activate the Python virtual environment in your DevBox. Open the terminal within Cursor IDE and run:
```bash
source ./bin/activate
```
You should see your prompt change, indicating that the virtual environment is now active.
## Installing Required Packages
In your Cursor terminal, install the necessary packages:
```bash
pip install redis python-dotenv
```
This command installs:
* `redis`: The Redis client for Python
* `python-dotenv`: A Python package that allows you to load environment variables from a .env file
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our database connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
REDIS_HOST=your_redis_host
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password
```
Replace the placeholders with your actual Redis credentials from the Database app in Sealos.
#### Create a Redis connection module
Create a new file named `redis_connection.py` with the following content:
```python title="redis_connection.py"
import os
from dotenv import load_dotenv
import redis

# Load environment variables
load_dotenv()

def get_redis_connection():
    try:
        r = redis.Redis(
            host=os.getenv('REDIS_HOST'),
            port=int(os.getenv('REDIS_PORT', 6379)),  # port must be an integer
            password=os.getenv('REDIS_PASSWORD'),
            decode_responses=True
        )
        r.ping()  # Test the connection
        print("Successfully connected to Redis")
        return r
    except redis.ConnectionError as e:
        print(f"Error connecting to Redis: {e}")
        return None

def close_connection(connection):
    if connection:
        connection.close()
        print("Redis connection closed")
```
This module provides two main functions:
1. `get_redis_connection()`: This function establishes a connection to the Redis database using the credentials stored in the environment variables. It returns the connection object if successful, or None if an error occurs.
2. `close_connection(connection)`: This function closes the Redis connection when it's no longer needed.
#### Create a test script
Now, let's create a test script to verify our connection and perform some basic Redis operations. Create a file named `test_redis.py` with the following content:
```python title="test_redis.py"
from redis_connection import get_redis_connection, close_connection

def set_value(r, key, value):
    r.set(key, value)
    print(f"Set {key}: {value}")

def get_value(r, key):
    value = r.get(key)
    print(f"Get {key}: {value}")
    return value

def main():
    redis_conn = get_redis_connection()
    if redis_conn:
        try:
            # String operations
            set_value(redis_conn, "mykey", "Hello from Sealos DevBox!")
            get_value(redis_conn, "mykey")

            # List operations
            redis_conn.lpush("mylist", "element1", "element2", "element3")
            print("List after push:", redis_conn.lrange("mylist", 0, -1))
            print("Popped element:", redis_conn.lpop("mylist"))
            print("List after pop:", redis_conn.lrange("mylist", 0, -1))

            # Hash operations
            redis_conn.hset("myhash", "field1", "value1")
            redis_conn.hset("myhash", "field2", "value2")
            print("Hash value for field1:", redis_conn.hget("myhash", "field1"))
            print("All hash fields:", redis_conn.hgetall("myhash"))
        except Exception as e:
            print(f"An error occurred: {e}")
        finally:
            close_connection(redis_conn)

if __name__ == "__main__":
    main()
```
This script demonstrates various Redis operations:
* Setting and getting string values
* Working with lists (push, pop, and range)
* Using hash structures (set, get, and get all)
## Running the Test Script
To run the test script, make sure your virtual environment is activated, then execute:
```bash
python test_redis.py
```
If everything is set up correctly, you should see output indicating successful connection and the results of various Redis operations.
## Best Practices
1. Always activate the virtual environment before running your Python scripts or installing packages.
2. Use environment variables to store sensitive information like database credentials.
3. Close Redis connections after use to free up resources.
4. Handle exceptions appropriately to manage potential errors.
5. Consider using connection pooling for better performance in production environments.
## Troubleshooting
If you encounter connection issues:
1. Ensure you've activated the virtual environment with `source ./bin/activate`.
2. Verify that your Redis database is running and accessible.
3. Double-check your Redis credentials in the `.env` file.
4. Check the Redis logs in the Database app for any error messages.
For more detailed information on using Redis with Python, refer to the [official Redis-py documentation](https://redis-py.readthedocs.io/en/stable/).
file: ./content/docs/guides/databases/redis/rust.en.mdx
meta: {
"title": "Rust",
"description": "Learn how to connect to Redis databases in Sealos DevBox using Rust"
}
This guide will walk you through the process of connecting to a Redis database using Rust within your Sealos DevBox project.
## Prerequisites
* [A Sealos DevBox project](/docs/guides/fundamentals/create-a-project) with Rust environment
* [A Redis database created using the Database app in Sealos](./)
## Install Required Dependencies
In your Cursor terminal, add the necessary dependencies to your `Cargo.toml` file:
```toml
[dependencies]
redis = "0.22.0"
dotenv = "0.15.0"
```
These dependencies include:
* `redis`: The Redis client for Rust
* `dotenv`: A library for loading environment variables from a file
## Connection Setup
#### Set up the environment variables
First, let's set up the environment variables for our database connection. Create a `.env` file in your project root with the following content:
```ini title=".env"
REDIS_HOST=your_redis_host
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password
```
Replace the placeholders with your actual Redis credentials from the Database app in Sealos.
#### Create the main.rs file
Create a new file named `src/main.rs` with the following content:
```rust title="src/main.rs"
use redis::Commands;
use dotenv::dotenv;
use std::env;

fn main() -> redis::RedisResult<()> {
    // Load environment variables from .env file
    dotenv().ok();

    // Get Redis connection details from environment variables
    let redis_host = env::var("REDIS_HOST").expect("REDIS_HOST must be set");
    let redis_port = env::var("REDIS_PORT").expect("REDIS_PORT must be set");
    let redis_password = env::var("REDIS_PASSWORD").expect("REDIS_PASSWORD must be set");

    // Create the Redis connection URL
    let redis_url = format!("redis://:{}@{}:{}", redis_password, redis_host, redis_port);

    // Create a client
    let client = redis::Client::open(redis_url)?;

    // Connect to Redis
    let mut con = client.get_connection()?;

    // Set a key
    let _: () = con.set("my_key", "Hello from Sealos DevBox!")?;

    // Get a key
    let value: String = con.get("my_key")?;
    println!("Retrieved value: {}", value);

    // Set a hash
    let _: () = redis::cmd("HSET")
        .arg("my_hash")
        .arg("field1")
        .arg("value1")
        .arg("field2")
        .arg("value2")
        .query(&mut con)?;

    // Get hash fields
    let hash_value: std::collections::HashMap<String, String> = con.hgetall("my_hash")?;
    println!("Retrieved hash: {:?}", hash_value);

    Ok(())
}
```
Let's break down the main components of this code:
1. **Imports**: We import necessary modules from `redis` and `dotenv` crates.
2. **Main function**: The `main` function is where we perform our Redis operations.
3. **Environment setup**: We load environment variables from the `.env` file and retrieve the Redis connection details.
4. **Connection**: We create a Redis client and establish a connection.
5. **Basic operations**: We demonstrate setting and getting a key, as well as working with Redis hashes.