A Cloudflare-based online text/file sharing platform that supports Markdown rendering with multiple syntax extensions, self-destructing messages, aggregated S3/WebDAV/OneDrive/local storage, password protection, and more. It can be mounted via WebDAV and supports Docker deployment.


CloudPaste - Online Clipboard πŸ“‹

δΈ­ζ–‡ | English | EspaΓ±ol | franΓ§ais | ζ—₯本θͺž


Cloudflare-based online clipboard and file sharing service with Markdown editing and file upload support

Ask DeepWiki License GitHub Stars Powered by Cloudflare Docker Pulls

πŸ“Έ Showcase β€’ ✨ Features β€’ πŸš€ Deployment Guide β€’ πŸ”§ Tech Stack β€’ πŸ’» Development β€’ πŸ“„ License

πŸ“Έ Showcase

✨ Features

πŸ“ Markdown Editing and Sharing

  • Powerful Editor: Integrated with Vditor, supporting GitHub-flavored Markdown, math formulas, flowcharts, mind maps, and more
  • Secure Sharing: Content can be protected with access passwords
  • Flexible Expiration: Support for setting content expiration times
  • Access Control: Ability to limit maximum view count
  • Customization: Personalized share links and notes
  • Raw Text Direct Links: Similar to GitHub's raw direct links; useful for services that load their YAML configuration files directly from a URL
  • Multi-format export: Supports export to PDF, Markdown, HTML, PNG images, and Word documents
  • Easy Sharing: One-click link copying and QR code generation
  • Auto-save: Support for automatic draft saving

πŸ“€ File Upload and Management

  • Multi-Storage Support: Aggregates multiple storage backends, including S3-compatible services (R2, B2, etc.), WebDAV, and local storage (Docker)
  • Storage Configuration: Visual interface for configuring multiple storage spaces, flexible switching of default storage sources
  • Efficient Upload: Uploads to S3 storage via pre-signed URLs with resumable chunked uploads; other storage types use streaming uploads
  • Real-time Feedback: Real-time upload progress display
  • Custom Limits: Single upload limits and maximum capacity restrictions
  • Metadata Management: File notes, passwords, expiration times, access restrictions
  • Data Analysis: File access statistics and trend analysis
  • Direct Server Transfer: Supports calling APIs for file upload, download, and other operations.

πŸ›  Convenient File/Text Operations

  • Unified Management: Support for file/text creation, deletion, and property modification
  • Online Preview: Online preview and direct link generation for common documents, images, and media files
  • Sharing Tools: Generation of short links and QR codes for cross-platform sharing
  • Batch Management: Batch operations and display for files/text

πŸ”„ WebDAV and Mount Point Management

  • WebDAV Protocol Support: Access and manage the file system via standard WebDAV protocol
  • Network Drive Mounting: Support for mounting by some third-party clients
  • Flexible Mount Points: Support for creating multiple mount points connected to different storage services
  • Permission Control: Fine-grained mount point access permission management
  • API Key Integration: WebDAV access authorization through API keys
  • Large File Support: Automatic use of multipart upload mechanism for large files
  • Directory Operations: Full support for directory creation, upload, deletion, renaming, and other operations

πŸ” Lightweight Permission Management

Administrator Permission Control

  • System Management: Global system settings configuration
  • Content Moderation: Management of all user content
  • Storage Management: Addition, editing, and deletion of S3 storage services
  • Permission Assignment: Creation and permission management of API keys
  • Data Analysis: Complete access to statistical data

API Key Permission Control

  • Text Permissions: Create/edit/delete text content
  • File Permissions: Upload/manage/delete files
  • Storage Permissions: Ability to select specific storage configurations
  • Read/Write Separation: Can set read-only or read-write permissions
  • Time Control: Custom validity period (from hours to months)
  • Security Mechanism: Automatic expiration and manual revocation functions

πŸ’« System Features

  • High Adaptability: Responsive design, adapting to mobile devices and desktops
  • Multilingual: Chinese/English bilingual interface support
  • Visual Modes: Bright/dark theme switching
  • Secure Authentication: JWT-based administrator authentication system
  • Offline Experience: PWA support, allowing offline use and desktop installation

πŸš€ Deployment Guide

Prerequisites

Before starting deployment, please ensure you have prepared the following:

  • Cloudflare account (required)
  • If using R2: Activate Cloudflare R2 service and create a bucket (requires payment method)
  • If using Vercel: Register for a Vercel account
  • Configuration information for other S3 storage services:
    • S3_ACCESS_KEY_ID
    • S3_SECRET_ACCESS_KEY
    • S3_BUCKET_NAME
    • S3_ENDPOINT

The following tutorial may be outdated. For up-to-date details, refer to the CloudPaste Online Deployment Documentation.

πŸ‘‰ View Complete Deployment Guide


GitHub Actions Automated Deployment

Using GitHub Actions enables automatic deployment of your application after code is pushed. CloudPaste offers two deployment architectures for you to choose from.

Deployment Architecture Selection

πŸ”„ Unified Deployment (Recommended)

Frontend and backend deployed on the same Cloudflare Worker

✨ Advantages:

  • Same Origin - No CORS issues, simpler configuration
  • Lower Cost - Navigation requests are free, saving 60%+ costs compared to separated deployment
  • Simpler Deployment - Deploy frontend and backend in one go, no need to manage multiple services
  • Better Performance - Frontend and backend on the same Worker, faster response time

πŸ”€ Separated Deployment

Backend deployed to Cloudflare Workers, frontend deployed to Cloudflare Pages

✨ Advantages:

  • Flexible Management - Independent deployment, no mutual interference
  • Team Collaboration - Frontend and backend can be maintained by different teams
  • Scalability - Frontend can easily switch to other platforms (e.g., Vercel)

Configure GitHub Repository

1️⃣ Fork or Clone Repository

Visit and Fork the repository: https://github.com/ling-drag0n/CloudPaste

2️⃣ Configure GitHub Secrets

Go to your GitHub repository settings: Settings β†’ Secrets and variables β†’ Actions β†’ New repository secret

Add the following Secrets:

| Secret Name | Required | Purpose |
| --- | --- | --- |
| CLOUDFLARE_API_TOKEN | βœ… | Cloudflare API token (requires Workers, D1, and Pages permissions) |
| CLOUDFLARE_ACCOUNT_ID | βœ… | Cloudflare account ID |
| ENCRYPTION_SECRET | ❌ | Key for encrypting sensitive data (auto-generated if not provided) |
| ACTIONS_VAR_TOKEN | βœ… | GitHub Token for the deployment control panel (required only when using the control panel, otherwise skip) |

3️⃣ Obtain Cloudflare API Token

Get API Token:

  1. Visit Cloudflare API Tokens
  2. Click Create Token
  3. Select Edit Cloudflare Workers template
  4. Add additional permissions:
    • Account β†’ D1 β†’ Edit
    • Account β†’ Cloudflare Pages β†’ Edit (if using separated deployment)
  5. Click Continue to summary β†’ Create Token
  6. Copy the Token and save it to GitHub Secrets
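Optionally, you can verify the token before saving it by calling Cloudflare's token verification endpoint (YOUR_TOKEN is a placeholder for the value you just created):

    curl -s "https://api.cloudflare.com/client/v4/user/tokens/verify" \
      -H "Authorization: Bearer YOUR_TOKEN"
    # A valid token returns "success": true with "status": "active"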

D1 Permission

Get Account ID:

  1. Visit Cloudflare Dashboard
  2. Find Account ID in the right sidebar
  3. Click to copy and save to GitHub Secrets

4️⃣ (Optional) Configure Deployment Control Panel

If you want to use the visual control panel to manage auto-deployment switches, you need additional configuration:

Create GitHub Personal Access Token:

  1. Visit GitHub Token Settings
  2. Click Generate new token β†’ Generate new token (classic)
  3. Set Token name (e.g., CloudPaste Deployment Control)
  4. Select permissions:
    • βœ… repo (Full repository access)
    • βœ… workflow (Workflow permissions)
  5. Click Generate token
  6. Copy the Token and save as Secret ACTIONS_VAR_TOKEN

Using the Control Panel:

  1. Go to repository Actions tab
  2. In the left workflow list, click πŸŽ›οΈ Deployment Control Panel
  3. Click Run workflow β†’ Run workflow on the right
  4. In the popup, select the deployment method to enable/disable
  5. Click Run workflow to apply configuration
  6. After updating the switch state, the control panel will automatically trigger the corresponding deployment workflow once (whether it actually deploys is decided by the current switch state)

πŸ”„ Unified Deployment Tutorial (Recommended)

Deployment Steps

1️⃣ Configure GitHub Secrets (refer to the configuration section above)

2️⃣ Trigger Deployment Workflow

Method 1: Manual Trigger (recommended for first deployment)

  • Go to repository Actions tab
  • Click Deploy SPA CF Workers[δΈ€δ½“εŒ–ιƒ¨η½²] on the left
  • Click Run workflow on the right β†’ select main branch β†’ Run workflow

Method 2: Auto Trigger

  • Use the deployment control panel to enable SPA Unified Auto Deploy
  • After that, deployment is triggered automatically when code in the frontend/ or backend/ directory is pushed to the main branch

Note: When you manually run Deploy SPA CF Workers[δΈ€δ½“εŒ–ιƒ¨η½²] from the Actions page, it will always deploy once regardless of the auto-deploy switch. Automatic behavior (push or control panel triggered) is still controlled by the SPA_DEPLOY switch.

3️⃣ Wait for Deployment to Complete

The deployment process takes about 3-5 minutes. The workflow will automatically complete the following steps:

  • βœ… Build frontend static assets
  • βœ… Install backend dependencies
  • βœ… Create/verify D1 database
  • βœ… Initialize database schema
  • βœ… Set encryption secret
  • βœ… Deploy to Cloudflare Workers

4️⃣ Get Deployment URL

After successful deployment, you will see output similar to this in the Actions log:

Published cloudpaste-spa (X.XX sec)
  https://cloudpaste-spa.your-account.workers.dev

Your CloudPaste has been successfully deployed! Visit the URL above to use it.

Deployment Complete

Visit your application: https://cloudpaste-spa.your-account.workers.dev

Post-deployment Configuration:

  1. The database will be automatically initialized on first visit
  2. Log in with the default admin account:
    • Username: admin
    • Password: admin123
  3. ⚠️ Important: Change the default admin password immediately!
  4. Configure your S3-compatible storage service in the admin panel
  5. (Optional) Bind a custom domain in Cloudflare Dashboard

Advantages Recap:

  • βœ… Same origin for frontend and backend, no CORS issues
  • βœ… Navigation requests are free, reducing costs by 60%+
  • βœ… Deploy in one go, simple management

πŸ”€ Separated Deployment Tutorial

If you choose separated deployment, follow these steps:

Backend Deployment

1️⃣ Configure GitHub Secrets (refer to the configuration section above)

2️⃣ Trigger Backend Deployment

Method 1: Manual Trigger

  • Go to repository Actions tab
  • Click Deploy Backend CF Workers[WorkerεŽη«―εˆ†η¦»ιƒ¨η½²] on the left
  • Click Run workflow β†’ Run workflow

Method 2: Auto Trigger

  • Use the deployment control panel to enable Backend Separated Auto Deploy
  • Deployment will be triggered automatically when pushing backend/ directory code

3️⃣ Wait for Deployment to Complete

The workflow will automatically complete:

  • βœ… Create/verify D1 database
  • βœ… Initialize database schema
  • βœ… Set encryption secret
  • βœ… Deploy Worker to Cloudflare

4️⃣ Record Backend URL

After successful deployment, note down your backend Worker URL: https://cloudpaste-backend.your-account.workers.dev

⚠️ Important: Remember your backend domain, you'll need it for frontend deployment!

Frontend Deployment

Cloudflare Pages

1️⃣ Trigger Frontend Deployment

Method 1: Manual Trigger

  • Go to repository Actions tab
  • Click Deploy Frontend CF Pages[Pagesε‰η«―εˆ†η¦»ιƒ¨η½²] on the left
  • Click Run workflow β†’ Run workflow

Method 2: Auto Trigger

  • Use the deployment control panel to enable Frontend Separated Auto Deploy
  • Deployment will be triggered automatically when pushing frontend/ directory code

Note: When you manually run the Backend or Frontend deployment workflows from the Actions page, they will always deploy once regardless of the auto-deploy switch. Automatic behavior is controlled by the BACKEND_DEPLOY / FRONTEND_DEPLOY switches.

2️⃣ Configure Environment Variables

Required step: After frontend deployment, you must manually configure the backend address!

  1. Log in to Cloudflare Dashboard
  2. Navigate to Pages β†’ cloudpaste-frontend
  3. Click Settings β†’ Environment variables
  4. Add environment variable:
    • Name: VITE_BACKEND_URL
    • Value: Your backend Worker URL (e.g., https://cloudpaste-backend.your-account.workers.dev)
    • Note: No trailing /, custom domain recommended

⚠️ Must fill in the complete backend domain, format: https://xxxx.com

3️⃣ Redeploy Frontend

Important: After configuring environment variables, you must run the frontend workflow again!

  • Return to GitHub Actions
  • Manually trigger Deploy Frontend CF Pages workflow again
  • This is necessary to load the backend domain configuration

Frontend Redeploy

4️⃣ Access Application

Frontend deployment URL: https://cloudpaste-frontend.pages.dev

⚠️ Please strictly follow the steps, otherwise backend domain loading will fail!

Vercel (Alternative)

Vercel deployment steps:

  1. Import the GitHub project in Vercel after forking
  2. Configure deployment parameters:
    Framework Preset: Vite
    Build Command: npm run build
    Output Directory: dist
    Install Command: npm install
  3. Configure environment variables:
    • Name: VITE_BACKEND_URL
    • Value: Your backend Worker URL
  4. Click the Deploy button to deploy

☝️ Choose either Cloudflare Pages or Vercel

⚠️ Security Notice: Please change the default admin password immediately after system initialization (username: admin, password: admin123).


Manual Deployment

CloudPaste supports two manual deployment methods: unified deployment (recommended) and separated deployment.

πŸ”„ Unified Manual Deployment (Recommended)

Unified deployment deploys both frontend and backend to the same Cloudflare Worker, offering simpler configuration and lower costs.

Step 1: Clone Repository

git clone https://github.com/ling-drag0n/CloudPaste.git
cd CloudPaste

Step 2: Build Frontend

cd frontend
npm install
npm run build
cd ..

Verify build output: Ensure frontend/dist directory exists and contains index.html
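A quick way to check this from the project root:

    test -f frontend/dist/index.html && echo "frontend build OK"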

Step 3: Configure Backend

cd backend
npm install
npx wrangler login

Step 4: Create D1 Database

npx wrangler d1 create cloudpaste-db

Note the database_id from the output (e.g., xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)
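If you lose the ID, you can list the D1 databases in your account (together with their IDs) with wrangler:

    npx wrangler d1 list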

Step 5: Initialize Database

npx wrangler d1 execute cloudpaste-db --file=./schema.sql

Step 6: Configure wrangler.spa.toml

Edit backend/wrangler.spa.toml file and modify the database ID:

[[d1_databases]]
binding = "DB"
database_name = "cloudpaste-db"
database_id = "YOUR_DATABASE_ID"  # Replace with ID from Step 4

Step 7: Deploy to Cloudflare Workers

npx wrangler deploy --config wrangler.spa.toml

After successful deployment, you'll see your application URL:

Published cloudpaste-spa (X.XX sec)
  https://cloudpaste-spa.your-account.workers.dev

Deployment Complete!

Visit your application: Open the URL above to use CloudPaste

Post-deployment Configuration:

  1. The database will be automatically initialized on first visit
  2. Log in with the default admin account (username: admin, password: admin123)
  3. ⚠️ Change the default admin password immediately!
  4. Configure S3-compatible storage service in the admin panel
  5. (Optional) Bind a custom domain in Cloudflare Dashboard

⚠️ Security Notice: Please change the default admin password immediately after system initialization.


πŸ”€ Separated Manual Deployment

If you need to deploy and manage frontend and backend independently, you can choose the separated deployment method.

Backend Manual Deployment

  1. Clone the repository
git clone https://github.com/ling-drag0n/CloudPaste.git
cd CloudPaste/backend
  2. Install dependencies

    npm install
  3. Log in to Cloudflare

    npx wrangler login
  4. Create D1 database

    npx wrangler d1 create cloudpaste-db

    Note the database ID from the output.

  5. Modify wrangler.toml configuration

    [[d1_databases]]
    binding = "DB"
    database_name = "cloudpaste-db"
    database_id = "YOUR_DATABASE_ID"
  6. Deploy Worker

    npx wrangler deploy

    Note the URL from the output; this is your backend API address.

  7. Initialize database (automatic): visit your Worker URL to trigger initialization:

    https://cloudpaste-backend.your-username.workers.dev
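    If you prefer the command line, a simple request to the Worker URL (the example URL above) triggers this first visit as well:

    curl -I https://cloudpaste-backend.your-username.workers.dev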
    

⚠️ Important: Remember your backend domain, you'll need it for frontend deployment!

Frontend Manual Deployment

Cloudflare Pages

  1. Prepare frontend code

    cd CloudPaste/frontend
    npm install
  2. Configure environment variables: create or modify the .env.production file:

    VITE_BACKEND_URL=https://cloudpaste-backend.your-username.workers.dev
    VITE_APP_ENV=production
    VITE_ENABLE_DEVTOOLS=false
    
  3. Build frontend project

    npm run build

     Note: the VITE_* variables are baked in at build time, so make sure .env.production is configured before running the build.

  4. Deploy to Cloudflare Pages

    Method 1: Via Wrangler CLI

    npx wrangler pages deploy dist --project-name=cloudpaste-frontend

    Method 2: Via Cloudflare Dashboard

    1. Log in to Cloudflare Dashboard
    2. Select "Pages"
    3. Click "Create a project" β†’ "Direct Upload"
    4. Upload files from the dist directory
    5. Set project name (e.g., "cloudpaste-frontend")
    6. Click "Save and Deploy"

Vercel

  1. Prepare frontend code

    cd CloudPaste/frontend
    npm install
  2. Install and log in to Vercel CLI

    npm install -g vercel
    vercel login
  3. Configure environment variables, same as for Cloudflare Pages

  4. Build and deploy

    vercel --prod

    Follow the prompts to configure the project.


ClawCloud CloudPaste Deployment Tutorial

10GB free traffic per month, suitable for light usage only

Step 1:

Registration link: Claw Cloud (not an affiliate link). No credit card is required; as long as your GitHub account is more than 180 days old, you get $5 of credit every month.

Step 2:

After registration, click APP Launchpad on the homepage, then click create app in the upper right corner


Step 3:

First deploy the backend, as shown in the figure (for reference only).

Backend data storage is configured as shown in the screenshot.

Step 4:

Then deploy the frontend, as shown in the figure (for reference only).

Deployment is now complete and ready to use; custom domain names can be configured as needed.

πŸ‘‰ Docker Deployment Guide



Docker Command Line Deployment

Backend Docker Deployment

CloudPaste backend can be quickly deployed using the official Docker image.

  1. Create data storage directory

    mkdir -p sql_data
  2. Run the backend container

    docker run -d --name cloudpaste-backend \
      -p 8787:8787 \
      -v $(pwd)/sql_data:/data \
      -e ENCRYPTION_SECRET=your-encryption-key \
      -e NODE_ENV=production \
      dragon730/cloudpaste-backend:latest

    Note the deployment URL (e.g., http://your-server-ip:8787), which will be needed for the frontend deployment.

⚠️ Security tip: Be sure to customize ENCRYPTION_SECRET and keep it safe, as this key is used to encrypt sensitive data.
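One way to generate a strong random value for ENCRYPTION_SECRET, assuming openssl is available on the host:

    openssl rand -base64 32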

Frontend Docker Deployment

The frontend uses Nginx to serve and configures the backend API address at startup.

docker run -d --name cloudpaste-frontend \
  -p 80:80 \
  -e BACKEND_URL=http://your-server-ip:8787 \
  dragon730/cloudpaste-frontend:latest

⚠️ Note: BACKEND_URL must be the complete URL, including the protocol (http:// or https://).

⚠️ Security reminder: Please change the default administrator password immediately after system initialization (Username: admin, Password: admin123).

Docker Image Update

When a new version of the project is released, you can update your Docker deployment following these steps:

  1. Pull the latest images

    docker pull dragon730/cloudpaste-backend:latest
    docker pull dragon730/cloudpaste-frontend:latest
  2. Stop and remove old containers

    docker stop cloudpaste-backend cloudpaste-frontend
    docker rm cloudpaste-backend cloudpaste-frontend
  3. Start new containers using the same run commands as above (preserving data directory and configuration)

Docker Compose One-Click Deployment

Using Docker Compose allows you to deploy both frontend and backend services with one click, which is the simplest recommended method.

  1. Create a docker-compose.yml file
version: "3.8"

services:
   frontend:
      image: dragon730/cloudpaste-frontend:latest
      environment:
         - BACKEND_URL=https://xxx.com # Fill in the backend service address
      ports:
         - "8080:80" #"127.0.0.1:8080:80"
      depends_on:
         - backend # Depends on backend service
      networks:
         - cloudpaste-network
      restart: unless-stopped

   backend:
      image: dragon730/cloudpaste-backend:latest
      environment:
         - NODE_ENV=production
         - PORT=8787
         - ENCRYPTION_SECRET=custom-key # Please modify this to your own security key
         - TASK_WORKER_POOL_SIZE=2
      volumes:
         - ./sql_data:/data # Data persistence
      ports:
         - "8787:8787" #"127.0.0.1:8787:8787"
      networks:
         - cloudpaste-network
      restart: unless-stopped

networks:
   cloudpaste-network:
      driver: bridge
  2. Start the services
docker-compose up -d

⚠️ Security reminder: Please change the default administrator password immediately after system initialization (Username: admin, Password: admin123).

  3. Access the services

Frontend: http://your-server-ip:8080 (as mapped in the compose file above)
Backend: http://your-server-ip:8787

Docker Compose Update

When you need to update to a new version:

  1. Pull the latest images

    docker-compose pull
  2. Recreate containers using new images (preserving data volumes)

    docker-compose up -d --force-recreate

πŸ’‘ Tip: If there are configuration changes, you may need to backup data and modify the docker-compose.yml file
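A minimal backup sketch before recreating the containers, assuming the ./sql_data directory from the compose file above:

    docker-compose stop backend
    cp -a sql_data "sql_data.bak.$(date +%F)"
    docker-compose up -d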

Nginx Reverse Proxy Example

server {
    listen 443 ssl;
    server_name paste.yourdomain.com;  # Replace with your domain name

    # SSL certificate configuration
    ssl_certificate     /path/to/cert.pem;  # Replace with certificate path
    ssl_certificate_key /path/to/key.pem;   # Replace with key path

    # Frontend proxy configuration
    location / {
        proxy_pass http://localhost:80;  # Docker frontend service address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Backend API proxy configuration
    location /api {
        proxy_pass http://localhost:8787;  # Docker backend service address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        client_max_body_size 0;

        # WebSocket support (if needed)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # WebDAV Configuration
    location /dav {
        proxy_pass http://localhost:8787/dav;  # Points to your backend service

        # WebDAV necessary headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebDAV method support
        proxy_pass_request_headers on;

        # Support all WebDAV methods
        proxy_method $request_method;

        # Necessary header processing
        proxy_set_header Destination $http_destination;
        proxy_set_header Overwrite $http_overwrite;

        # Handle large files
        client_max_body_size 0;

        # Timeout settings
        proxy_connect_timeout 3600s;
        proxy_send_timeout 3600s;
        proxy_read_timeout 3600s;
    }
}
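After editing the configuration, test it and reload Nginx (the reload command assumes a systemd-based host):

    sudo nginx -t && sudo systemctl reload nginx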

⚠️ Security tip: It is recommended to configure HTTPS and a reverse proxy (such as Nginx) to enhance security.

πŸ‘‰ S3 Cross-Origin Configuration Guide

R2 API Retrieval and Cross-Origin Configuration

  1. Log in to Cloudflare Dashboard

  2. Click R2 Storage and create a bucket.

  3. Create an API token with object read and write permissions for the bucket

  4. Save all data after creation; you'll need it later

  5. Configure cross-origin rules: click the corresponding bucket, click Settings, edit CORS policy as shown below:

[
   {
      "AllowedOrigins": ["http://localhost:3000", "https://replace-with-your-frontend-domain"],
      "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
      "AllowedHeaders": ["*"],
      "ExposeHeaders": ["ETag"],
      "MaxAgeSeconds": 3600
   }
]
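Once the policy is saved, you can sanity-check it with a CORS preflight request against the bucket's S3 endpoint; this is only a sketch, so replace the account ID, bucket name, and origin with your own values:

    curl -i -X OPTIONS "https://<account-id>.r2.cloudflarestorage.com/<bucket-name>/test.txt" \
      -H "Origin: https://replace-with-your-frontend-domain" \
      -H "Access-Control-Request-Method: PUT"
    # A correctly configured bucket responds with Access-Control-Allow-Origin headers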

B2 API Retrieval and Cross-Origin Configuration

  1. If you don't have a B2 account, register one first, then create a bucket (B2 account registration).

  2. Click Application Key in the sidebar, click Create Key, and follow the illustration.

  3. Configure B2 cross-origin rules; B2's cross-origin configuration is more involved, so take note.

  4. You can try option 1 or 2 first: go to the upload page and check whether uploads succeed. If the F12 console shows cross-origin errors, switch to option 3. For a permanent solution, use option 3 directly.


Regarding option 3: it cannot be configured from the web panel, so you need to configure it manually with the B2 CLI tool. For more details, refer to: https://docs.cloudreve.org/zh/usage/storage/b2.

After downloading, open a command prompt (CMD) in the download directory and enter the following commands:

b2-windows.exe account authorize   // Log in to your account, following the prompts to enter your keyID and applicationKey
b2-windows.exe bucket get <bucketName>   // Optionally show bucket information; replace <bucketName> with your bucket name

On Windows, run the commands as .\b2-windows.exe ...; usage with the Python b2 CLI is similar:

b2-windows.exe bucket update <bucketName> allPrivate --cors-rules "[{\"corsRuleName\":\"CloudPaste\",\"allowedOrigins\":[\"*\"],\"allowedHeaders\":[\"*\"],\"allowedOperations\":[\"b2_upload_file\",\"b2_download_file_by_name\",\"b2_download_file_by_id\",\"s3_head\",\"s3_get\",\"s3_put\",\"s3_post\",\"s3_delete\"],\"exposeHeaders\":[\"Etag\",\"content-length\",\"content-type\",\"x-bz-content-sha1\"],\"maxAgeSeconds\":3600}]"

Replace <bucketName> with your bucket name. The allowedOrigins list can be restricted to suit your needs; the example above allows all origins.
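On Linux or macOS with the Python b2 CLI, the same rule can be applied using single quotes instead of escaped double quotes; this is a sketch mirroring the Windows command above:

    b2 account authorize
    b2 bucket update <bucketName> allPrivate --cors-rules '[{"corsRuleName":"CloudPaste","allowedOrigins":["*"],"allowedHeaders":["*"],"allowedOperations":["b2_upload_file","b2_download_file_by_name","b2_download_file_by_id","s3_head","s3_get","s3_put","s3_post","s3_delete"],"exposeHeaders":["Etag","content-length","content-type","x-bz-content-sha1"],"maxAgeSeconds":3600}]'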

  5. Cross-origin configuration is complete

MinIO API Access and Cross-Origin Configuration

  1. Deploy MinIO Server

    Use the following Docker Compose configuration (reference) to quickly deploy MinIO:

    version: "3"
    
    services:
      minio:
        image: minio/minio:RELEASE.2025-02-18T16-25-55Z
        container_name: minio-server
        command: server /data --console-address :9001 --address :9000
        environment:
          - MINIO_ROOT_USER=minioadmin # Admin username
          - MINIO_ROOT_PASSWORD=minioadmin # Admin password
          - MINIO_BROWSER=on
          - MINIO_SERVER_URL=https://minio.example.com # S3 API access URL
          - MINIO_BROWSER_REDIRECT_URL=https://console.example.com # Console access URL
        ports:
          - "9000:9000" # S3 API port
          - "9001:9001" # Console port
        volumes:
          - ./data:/data
          - ./certs:/root/.minio/certs # SSL certificates (if needed)
        restart: always

    Run docker-compose up -d to start the service.

  2. Configure Reverse Proxy (Reference)

    To ensure MinIO functions correctly, especially file previews, configure reverse proxy properly. Recommended OpenResty/Nginx settings:

    MinIO S3 API Reverse Proxy (minio.example.com):

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    
        # HTTP optimization
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # Enable HTTP/1.1 keepalive
    
        # Critical: Resolve 403 errors & preview issues
        proxy_cache off;
        proxy_buffering off;
        proxy_request_buffering off;
    
        # No file size limit
        client_max_body_size 0;
    }

    MinIO Console Reverse Proxy (console.example.com):

    location / {
        proxy_pass http://127.0.0.1:9001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    
        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    
        # Critical settings
        proxy_cache off;
        proxy_buffering off;
    
        # No file size limit
        client_max_body_size 0;
    }
  3. Access Console to Create Buckets & Access Keys

    For detailed configuration, refer to official docs:
    https://min.io/docs/minio/container/index.html
    CN: https://min-io.cn/docs/minio/container/index.html


  4. Additional Configuration (Optional)

    Allowed origins must include your frontend domain.

  5. Configure MinIO in CloudPaste

    • Log in to CloudPaste admin panel
    • Go to "S3 Storage Settings" β†’ "Add Storage Configuration"
    • Select "Other S3-compatible service" as provider
    • Enter details:
      • Name: Custom name
      • Endpoint URL: MinIO service URL (e.g., https://minio.example.com)
      • Bucket Name: Pre-created bucket
      • Access Key ID: Your Access Key
      • Secret Key: Your Secret Key
      • Region: Leave empty
      • Path-Style Access: MUST ENABLE!
    • Click "Test Connection" to verify
    • Save settings
  6. Troubleshooting

    • Note: If using Cloudflare's CDN, you may need to add proxy_set_header Accept-Encoding "identity", and there are caching issues to consider. It is recommended to use only DNS resolution.
    • 403 Error: Ensure reverse proxy includes proxy_cache off & proxy_buffering off
    • Preview Issues: Verify MINIO_SERVER_URL & MINIO_BROWSER_REDIRECT_URL are correctly set
    • Upload Failures: Check CORS settings; allowed origins must include frontend domain
    • Console Unreachable: Verify WebSocket config, especially Connection "upgrade"
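As an alternative to creating the bucket through the web console in step 3 above, the MinIO client (mc) can also be used; a minimal sketch assuming the URL and credentials from the compose example (the bucket name is just an example):

    mc alias set cloudpaste https://minio.example.com minioadmin minioadmin
    mc mb cloudpaste/cloudpaste-files   # create a bucket named cloudpaste-files
    mc ls cloudpaste                    # verify the bucket exists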

More S3-related configurations to come...

πŸ‘‰ WebDAV Configuration Guide

WebDAV Configuration and Usage Guide

CloudPaste provides simple WebDAV protocol support, allowing you to mount storage spaces as network drives for convenient access and management of files directly through file managers.

WebDAV Service Basic Information

  • WebDAV Base URL: https://your-backend-domain/dav
  • Supported Authentication Methods:
    • Basic Authentication (username+password)
  • Supported Permission Types:
    • Administrator accounts - Full operation permissions
    • API keys - Requires enabled mount permission (mount_permission)

Permission Configuration

1. Administrator Account Access

Use administrator account and password to directly access the WebDAV service:

  • Username: Administrator username
  • Password: Administrator password

2. API Key Access (Recommended)

For a more secure access method, it is recommended to create a dedicated API key:

  1. Log in to the management interface
  2. Navigate to "API Key Management"
  3. Create a new API key, ensure "Mount Permission" is enabled
  4. Usage method:
    • Username: API key value
    • Password: The same API key value as the username
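A quick way to check credentials and mount permission from the command line is a WebDAV PROPFIND request against the base URL; replace the domain and API key with your own:

    curl -i -X PROPFIND "https://your-backend-domain/dav/" \
      -u "YOUR_API_KEY:YOUR_API_KEY" \
      -H "Depth: 1"
    # A 207 Multi-Status response with a directory listing means access is working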

NGINX Reverse Proxy Configuration

If using NGINX as a reverse proxy, specific WebDAV configuration needs to be added to ensure all WebDAV methods work properly:

# WebDAV Configuration
location /dav {
    proxy_pass http://localhost:8787;  # Points to your backend service

    # WebDAV necessary headers
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # WebDAV method support
    proxy_pass_request_headers on;

    # Support all WebDAV methods
    proxy_method $request_method;

    # Necessary header processing
    proxy_set_header Destination $http_destination;
    proxy_set_header Overwrite $http_overwrite;

    # Handle large files
    client_max_body_size 0;

    # Timeout settings
    proxy_connect_timeout 3600s;
    proxy_send_timeout 3600s;
    proxy_read_timeout 3600s;
}

Common Issues and Solutions

  1. Connection Problems:

    • Confirm the WebDAV URL format is correct
    • Verify that authentication credentials are valid
    • Check if the API key has mount permission
  2. Permission Errors:

    • Confirm the account has the required permissions
    • Administrator accounts should have full permissions
    • API keys need to have mount permission specifically enabled
  3. ⚠️⚠️ WebDAV Upload Issues:

    • For WebDAV served from Cloudflare Workers, the upload size may be limited to around 100MB by Cloudflare's CDN, resulting in a 413 error.
    • For Docker deployments, any upload mode works as long as the Nginx proxy is configured correctly (e.g., client_max_body_size 0)

πŸ”§ Tech Stack

Frontend

  • Framework: Vue.js 3 + Vite
  • Styling: TailwindCSS
  • Editor: Vditor
  • Internationalization: Vue-i18n
  • Charts: Chart.js + Vue-chartjs

Backend

  • Runtime: Cloudflare Workers
  • Framework: Hono
  • Database: Cloudflare D1 (SQLite)
  • Storage: Multiple S3-compatible services (supports R2, B2, AWS S3)
  • Authentication: JWT tokens + API keys

πŸ’» Development

API Documentation

  • API Documentation - Overview of the CloudPaste API
  • Server Direct File Upload API Documentation - Detailed description of the server direct file upload interface

Local Development Setup

  1. Clone project repository

    git clone https://github.com/ling-drag0n/cloudpaste.git
    cd cloudpaste
  2. Backend setup

    cd backend
    npm install
    # Initialize D1 database
    wrangler d1 create cloudpaste-db
    wrangler d1 execute cloudpaste-db --file=./schema.sql
  3. Frontend setup

    cd frontend
    npm install
  4. Configure environment variables

    • In the backend directory, create a wrangler.toml file to set development environment variables
    • In the frontend directory, configure the .env.development file to set frontend environment variables (a sample sketch is shown after this list)
  5. Start development servers

    # Backend
    cd backend
    npm run dev
    
    # Frontend (in another terminal)
    cd frontend
    npm run dev
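For step 4, a minimal .env.development sketch (assumed values: the backend dev server defaults to port 8787, and the variable names mirror the .env.production example shown earlier):

    VITE_BACKEND_URL=http://localhost:8787
    VITE_APP_ENV=development
    VITE_ENABLE_DEVTOOLS=true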

Project Structure

CloudPaste/
β”œβ”€β”€ frontend/                         # Frontend Vite + Vue 3 SPA
β”‚   β”œβ”€β”€ src/
β”‚   β”‚   β”œβ”€β”€ api/                      # HTTP client & API services (no domain semantics)
β”‚   β”‚   β”œβ”€β”€ modules/                  # Domain modules layer (by business area)
β”‚   β”‚   β”‚   β”œβ”€β”€ paste/                # Text sharing (editor / public view / admin)
β”‚   β”‚   β”‚   β”œβ”€β”€ fileshare/            # File sharing (public page / admin)
β”‚   β”‚   β”‚   β”œβ”€β”€ fs/                   # Mounted file system explorer (MountExplorer)
β”‚   β”‚   β”‚   β”œβ”€β”€ upload/               # Upload controller & upload views
β”‚   β”‚   β”‚   β”œβ”€β”€ storage-core/         # Storage drivers & Uppy wiring (low-level abstraction)
β”‚   β”‚   β”‚   β”œβ”€β”€ security/             # Frontend auth bridge & Authorization header helpers
β”‚   β”‚   β”‚   β”œβ”€β”€ pwa-offline/          # PWA offline queue & state
β”‚   β”‚   β”‚   └── admin/                # Admin panel (dashboard / settings / key management, etc.)
β”‚   β”‚   β”œβ”€β”€ components/               # Reusable, cross-module UI components (no module imports)
β”‚   β”‚   β”œβ”€β”€ composables/              # Shared composition APIs (file-system / preview / upload, etc.)
β”‚   β”‚   β”œβ”€β”€ stores/                   # Pinia stores (auth / fileSystem / siteConfig, etc.)
β”‚   β”‚   β”œβ”€β”€ router/                   # Vue Router configuration (single entry for all views)
β”‚   β”‚   β”œβ”€β”€ pwa/                      # PWA state & installation prompts
β”‚   β”‚   β”œβ”€β”€ utils/                    # Utilities (clipboard / time / file icons, etc.)
β”‚   β”‚   β”œβ”€β”€ styles/                   # Global styles & Tailwind config entry
β”‚   β”‚   └── assets/                   # Static assets
β”‚   β”œβ”€β”€ eslint.config.cjs             # Frontend ESLint config (including import boundaries)
β”‚   β”œβ”€β”€ vite.config.js                # Vite build configuration
β”‚   └── package.json
β”œβ”€β”€ backend/                          # Backend (Cloudflare Workers / Docker runtime)
β”‚   β”œβ”€β”€ src/
β”‚   β”‚   β”œβ”€β”€ routes/                   # HTTP routing layer (fs / files / pastes / admin / system, etc.)
β”‚   β”‚   β”‚   β”œβ”€β”€ fs/                   # Mount FS APIs (list / read / write / search / share)
β”‚   β”‚   β”‚   β”œβ”€β”€ files/                # File sharing APIs (public / protected)
β”‚   β”‚   β”‚   β”œβ”€β”€ pastes/               # Text sharing APIs (public / protected)
β”‚   β”‚   β”‚   β”œβ”€β”€ adminRoutes.js        # Generic admin routes
β”‚   β”‚   β”‚   β”œβ”€β”€ apiKeyRoutes.js       # API key management routes
β”‚   β”‚   β”‚   β”œβ”€β”€ mountRoutes.js        # Mount configuration routes
β”‚   β”‚   β”‚   β”œβ”€β”€ systemRoutes.js       # System settings & dashboard stats
β”‚   β”‚   β”‚   └── fsRoutes.js           # Unified FS entry aggregation
β”‚   β”‚   β”œβ”€β”€ services/                 # Domain services (pastes / files / system / apiKey, etc.)
β”‚   β”‚   β”œβ”€β”€ security/                 # Auth + authorization (AuthService / securityContext / authorize / policies)
β”‚   β”‚   β”œβ”€β”€ webdav/                   # WebDAV implementation & path handling
β”‚   β”‚   β”œβ”€β”€ storage/                  # Storage abstraction (S3 drivers, mount manager, file system ops)
β”‚   β”‚   β”œβ”€β”€ repositories/             # Data access layer (D1 + SQLite repositories)
β”‚   β”‚   β”œβ”€β”€ cache/                    # Cache & invalidation (mainly FS)
β”‚   β”‚   β”œβ”€β”€ constants/                # Constants (ApiStatus / Permission / DbTables / UserType, etc.)
β”‚   β”‚   β”œβ”€β”€ http/                     # Unified error types & response helpers
β”‚   β”‚   └── utils/                    # Utilities (common / crypto / environment, etc.)
β”‚   β”œβ”€β”€ schema.sql                    # D1 / SQLite schema bootstrap
β”‚   β”œβ”€β”€ wrangler.toml                 # Cloudflare Workers / D1 configuration
β”‚   └── package.json
β”œβ”€β”€ docs/                             # Architecture & design docs
β”‚   β”œβ”€β”€ frontend-architecture-implementation.md    # Frontend layering & modules/* design
β”‚   β”œβ”€β”€ frontend-architecture-optimization-plan.md # Frontend optimization plan (Phase 2/3)
β”‚   β”œβ”€β”€ auth-permissions-design.md                # Auth & permissions system design
β”‚   └── backend-error-handling-refactor.md        # Backend error handling refactor design
β”œβ”€β”€ docker/                           # Docker & Compose deployment configs
β”œβ”€β”€ images/                           # Screenshots used in README
β”œβ”€β”€ Api-doc.md                        # API overview
β”œβ”€β”€ Api-s3_direct.md                  # S3 direct upload API docs
└── README.md                         # Main project README

Custom Docker Build

If you want to customize Docker images or debug during development, you can follow these steps to build manually:

  1. Build backend image

    # Execute in the project root directory
    docker build -t cloudpaste-backend:custom -f docker/backend/Dockerfile .
    
    # Run the custom built image
    docker run -d --name cloudpaste-backend \
      -p 8787:8787 \
      -v $(pwd)/sql_data:/data \
      -e ENCRYPTION_SECRET=development-test-key \
      cloudpaste-backend:custom
  2. Build frontend image

    # Execute in the project root directory
    docker build -t cloudpaste-frontend:custom -f docker/frontend/Dockerfile .
    
    # Run the custom built image
    docker run -d --name cloudpaste-frontend \
      -p 80:80 \
      -e BACKEND_URL=http://localhost:8787 \
      cloudpaste-frontend:custom
  3. Development environment Docker Compose

    Create a docker-compose.dev.yml file:

    version: "3.8"
    
    services:
      frontend:
        build:
          context: .
          dockerfile: docker/frontend/Dockerfile
        environment:
          - BACKEND_URL=http://backend:8787
        ports:
          - "80:80"
        depends_on:
          - backend
    
      backend:
        build:
          context: .
          dockerfile: docker/backend/Dockerfile
        environment:
          - NODE_ENV=development
          - RUNTIME_ENV=docker
          - PORT=8787
          - ENCRYPTION_SECRET=dev_secret_key
        volumes:
          - ./sql_data:/data
        ports:
          - "8787:8787"

    Start the development environment:

    docker-compose -f docker-compose.dev.yml up --build

πŸ“„ License

Apache License 2.0

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

❀️ Contribution

  • Sponsorship: Maintaining the project is not easy. If you like this project, you can give the author a little encouragement. Every bit of your support is the motivation for me to move forward~


    • Sponsors: A huge thank you to the following sponsors for their support of this project!!

      Sponsors

  • Contributors: Thanks to the following contributors for their selfless contributions to this project!

    Contributors

If you find this project useful, please consider giving it a free star ✨. Thank you very much!
