# Storage Commands

CLI commands for managing S3-compatible storage.
Manage your S3-compatible object storage directly from the command line. These commands provide low-level access to your storage bucket for troubleshooting, backup, and advanced operations.
## colibri storage connect
Connect to an S3-compatible storage provider and save credentials.
```bash
colibri storage connect [options]
```

### Options
| Option | Short | Description | Required |
|---|---|---|---|
| --endpoint | -e | Storage service endpoint URL | Yes |
| --access-key-id | -a | Access key ID | Yes |
| --secret-access-key | -s | Secret access key | Yes |
| --region | -r | Storage region | No |
| --force-path-style | -F | Force path-style URLs (for MinIO) | No |
| --force | -f | Overwrite existing connection | No |
### Examples
AWS S3:

```bash
colibri storage connect \
  --endpoint https://s3.amazonaws.com \
  --access-key-id AKIAIOSFODNN7EXAMPLE \
  --secret-access-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
  --region us-east-1
```

MinIO (local):
```bash
colibri storage connect \
  --endpoint http://localhost:9000 \
  --access-key-id minioadmin \
  --secret-access-key minioadmin \
  --force-path-style
```

Cloudflare R2:
```bash
colibri storage connect \
  --endpoint https://abc123.r2.cloudflarestorage.com \
  --access-key-id your-access-key \
  --secret-access-key your-secret-key \
  --region auto
```

Backblaze B2:
```bash
colibri storage connect \
  --endpoint https://s3.us-west-004.backblazeb2.com \
  --access-key-id your-key-id \
  --secret-access-key your-application-key \
  --region us-west-004
```

DigitalOcean Spaces:
```bash
colibri storage connect \
  --endpoint https://nyc3.digitaloceanspaces.com \
  --access-key-id your-access-key \
  --secret-access-key your-secret-key \
  --region nyc3
```

### Reading Credentials from stdin
For security, you can pipe the secret key:

```bash
echo "your-secret-key" | colibri storage connect \
  --endpoint https://s3.amazonaws.com \
  --access-key-id YOUR_KEY \
  --secret-access-key -
```

Or from an environment variable:
```bash
export S3_SECRET_KEY="your-secret-key"
echo "$S3_SECRET_KEY" | colibri storage connect \
  --endpoint https://s3.amazonaws.com \
  --access-key-id YOUR_KEY \
  --secret-access-key -
```

### Connection Storage
Credentials are stored in the database, not in config files, ensuring they’re shared across all instances connecting to the same database.
### Testing the Connection
The connect command verifies connectivity before saving:
```bash
colibri storage connect \
  --endpoint https://s3.amazonaws.com \
  --access-key-id YOUR_KEY \
  --secret-access-key YOUR_SECRET \
  --verbose
```

Output:

```
Testing connection to https://s3.amazonaws.com...
✓ Connection successful
✓ Credentials saved
Connected to storage provider "https://s3.amazonaws.com".
```

## colibri storage list-objects
List all objects (files) in a storage bucket.
```bash
colibri storage list-objects [bucket] [options]
```

### Arguments

- `[bucket]` - Bucket name (uses default if omitted)
### Aliases

- colibri storage list
- colibri storage ls

### Examples
List all objects in the default bucket:

```bash
colibri storage list-objects
```

Output:

```
Key                            Size     Last Modified         Storage Class
──────────────────────────────────────────────────────────────────────────────────────
assets/abc123/original.epub    1.2 MB   2024-01-15 10:30:00   STANDARD
assets/def456/original.mobi    850 KB   2024-01-16 14:22:00   STANDARD
covers/work123/original.jpg    450 KB   2024-01-15 10:31:00   STANDARD
covers/work123/thumbnail.jpg   12 KB    2024-01-15 10:31:00   STANDARD
```

List a specific bucket:
```bash
colibri storage list-objects my-bucket
```

Filter by prefix:

```bash
# List only covers
colibri storage list-objects --prefix covers/

# List objects for a specific work
colibri storage list-objects --prefix covers/work123/
```

JSON output:
```bash
colibri storage list-objects --json
```

Output:

```json
[
  {
    "Key": "assets/abc123/original.epub",
    "Size": 1258291,
    "LastModified": "2024-01-15T10:30:00Z",
    "StorageClass": "STANDARD",
    "ETag": "d8e8fca2dc0f896fd7cb4cb0031ba249"
  }
]
```

Count objects:

```bash
colibri storage list-objects --json | jq 'length'
```

Calculate total size:

```bash
colibri storage list-objects --json |
  jq '[.[].Size] | add' |
  numfmt --to=iec
```

## colibri storage copy
Copy an object within storage.
```bash
colibri storage copy <source> <destination>
```

### Arguments

- `<source>` - Source object path
- `<destination>` - Destination object path

### Aliases

- colibri storage cp

### Examples
Copy within the same bucket:

```bash
colibri storage copy assets/old.epub assets/new.epub
```

Backup a cover:

```bash
colibri storage copy \
  covers/work123/original.jpg \
  backups/covers/work123-backup.jpg
```

Duplicate for testing:

```bash
colibri storage copy \
  assets/production.epub \
  assets/test-copy.epub
```

## colibri storage move
Move (rename) an object in storage.
```bash
colibri storage move <bucket> <source> <destination>
```

### Arguments

- `<bucket>` - Bucket name
- `<source>` - Source object path
- `<destination>` - Destination object path

### Aliases

- colibri storage mv

### Examples
Rename a file:

```bash
colibri storage move colibri \
  assets/old-name.epub \
  assets/new-name.epub
```

Reorganize structure:

```bash
colibri storage move colibri \
  old-folder/file.epub \
  new-folder/file.epub
```

Note: Move performs a copy followed by a delete, so large files may take time.
## colibri storage remove
Delete one or more objects from storage.
```bash
colibri storage remove <keys...> [options]
```

### Arguments

- `<keys...>` - One or more object keys to delete

### Options

| Option | Short | Description |
|---|---|---|
| --force | -f | Skip confirmation prompt |

### Aliases

- colibri storage rm

### Examples
Delete a single object:

```bash
colibri storage remove assets/old-book.epub
```

Output:

```
Warning: This will permanently delete 1 object(s).
Are you sure? (y/N): y
✓ Deleted assets/old-book.epub
```

Force delete without confirmation:

```bash
colibri storage remove assets/old-book.epub --force
```

Delete multiple objects:

```bash
colibri storage remove \
  assets/book1.epub \
  assets/book2.epub \
  covers/work1/old.jpg \
  --force
```

Delete by pattern (using shell):
```bash
# Delete all temporary imports
colibri storage list-objects --json |
  jq -r '.[] | select(.Key | startswith("imports/")) | .Key' |
  xargs -r colibri storage rm --force
```

Clean up old backups:

```bash
# Delete backups older than 30 days
colibri storage list-objects --prefix backups/ --json |
  jq -r --arg cutoff "$(date -d '30 days ago' -Iseconds)" \
    '.[] | select(.LastModified < $cutoff) | .Key' |
  xargs -r colibri storage rm --force
```

Warning: Deletion is permanent and cannot be undone unless you have bucket versioning enabled.
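Before piping keys into a forced delete, it is worth previewing exactly which keys the filter selects. A minimal sketch of the jq selection step alone, run against a hand-made sample listing (the keys and file path here are invented for illustration):

```shell
# Sample listing standing in for `colibri storage list-objects --json`
cat <<'EOF' > /tmp/listing.json
[
  { "Key": "imports/tmp1.epub", "Size": 100 },
  { "Key": "assets/book.epub",  "Size": 200 }
]
EOF

# Preview the keys the prefix filter would hand to rm
jq -r '.[] | select(.Key | startswith("imports/")) | .Key' /tmp/listing.json
```

Once the printed keys look right, append the `| xargs -r colibri storage rm --force` stage.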
## colibri storage list-buckets
List all available buckets in your storage account.
```bash
colibri storage list-buckets
```

### Aliases

- colibri storage lb

### Examples

List buckets:

```bash
colibri storage list-buckets
```

Output:

```
Name             Created                Region
────────────────────────────────────────────────────
colibri          2024-01-01T00:00:00Z   us-east-1
colibri-backup   2024-01-15T00:00:00Z   us-east-1
```

JSON output:

```bash
colibri storage list-buckets --json
```

```json
[
  { "Name": "colibri", "CreationDate": "2024-01-01T00:00:00Z" },
  { "Name": "colibri-backup", "CreationDate": "2024-01-15T00:00:00Z" }
]
```

## colibri storage make-bucket
Create a new bucket.
```bash
colibri storage make-bucket <bucket>
```

### Arguments

- `<bucket>` - Name of the bucket to create

### Aliases

- colibri storage mb

### Examples

Create a bucket:

```bash
colibri storage make-bucket colibri-test
```

Output:

```
✓ Created bucket: colibri-test
```

Create with error handling:

```bash
if colibri storage make-bucket new-bucket 2>/dev/null; then
  echo "Bucket created successfully"
else
  echo "Failed to create bucket (may already exist)"
fi
```

## colibri storage remove-bucket
Delete an empty bucket.
```bash
colibri storage remove-bucket <bucket>
```

### Arguments

- `<bucket>` - Name of the bucket to delete

### Aliases

- colibri storage rb

### Examples

Delete a bucket:

```bash
colibri storage remove-bucket old-bucket
```

Note: The bucket must be empty before deletion. Use `colibri storage rm` to delete its contents first.
## Configuration and Credentials

### Viewing Current Configuration

```bash
# Check configured endpoint
colibri settings get --json | jq -r '.storage'
```

### Environment Variable Override
You can override storage configuration per command:

```bash
export S3_ENDPOINT=https://s3.amazonaws.com
export S3_ACCESS_KEY=YOUR_KEY
export S3_SECRET_KEY=YOUR_SECRET
colibri storage list-objects
```

### Multiple Storage Backends

Switch between different storage providers:

```bash
# Configure primary storage
colibri storage connect \
  --endpoint https://s3.amazonaws.com \
  --access-key-id KEY1 \
  --secret-access-key SECRET1

# Use different storage temporarily
S3_ENDPOINT=https://backup-storage.com \
S3_ACCESS_KEY=KEY2 \
S3_SECRET_KEY=SECRET2 \
colibri storage list-objects
```

## Common Workflows

### Backup Entire Library
Backup Entire Library
#!/bin/bash
# backup-storage.sh
BACKUP_DIR="./colibri-backup-$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
# Download all assets
colibri storage list-objects --json |
jq -r '.[].Key' |
while read key; do
mkdir -p "$BACKUP_DIR/$(dirname "$key")"
# Use AWS CLI or similar to download
aws s3 cp "s3://colibri/$key" "$BACKUP_DIR/$key"
done
echo "Backup complete: $BACKUP_DIR"Migrate to New Storage Provider
```bash
#!/bin/bash
# migrate-storage.sh
OLD_ENDPOINT="https://old-storage.com"
NEW_ENDPOINT="https://new-storage.com"

# Step 1: Configure rclone for both providers
rclone config create old-storage s3 \
  provider AWS \
  endpoint "$OLD_ENDPOINT" \
  access_key_id "$OLD_ACCESS_KEY" \
  secret_access_key "$OLD_SECRET_KEY"

rclone config create new-storage s3 \
  provider AWS \
  endpoint "$NEW_ENDPOINT" \
  access_key_id "$NEW_ACCESS_KEY" \
  secret_access_key "$NEW_SECRET_KEY"

# Step 2: Sync data
rclone sync old-storage:colibri new-storage:colibri --progress

# Step 3: Update Colibri configuration
colibri storage connect \
  --endpoint "$NEW_ENDPOINT" \
  --access-key-id "$NEW_ACCESS_KEY" \
  --secret-access-key "$NEW_SECRET_KEY" \
  --force

echo "Migration complete"
```

### Clean Up Old Imports
```bash
#!/bin/bash
# cleanup-imports.sh
# Delete imports older than 7 days
CUTOFF=$(date -d '7 days ago' -Iseconds)

colibri storage list-objects --json |
  jq -r --arg cutoff "$CUTOFF" \
    '.[] | select(.Key | startswith("imports/")) |
     select(.LastModified < $cutoff) | .Key' |
  xargs -r colibri storage rm --force

echo "Cleanup complete"
```

### Storage Usage Report
```bash
#!/bin/bash
# storage-report.sh
echo "Storage Usage Report - $(date)"
echo "================================"

# Fetch the listing once and reuse it
LISTING=$(colibri storage ls --json)

# Count objects by type
echo "Assets:  $(echo "$LISTING" | jq '[.[] | select(.Key | startswith("assets/"))] | length')"
echo "Covers:  $(echo "$LISTING" | jq '[.[] | select(.Key | startswith("covers/"))] | length')"
echo "Imports: $(echo "$LISTING" | jq '[.[] | select(.Key | startswith("imports/"))] | length')"

# Total size
TOTAL_BYTES=$(echo "$LISTING" | jq '[.[].Size] | add')
TOTAL_GB=$(echo "scale=2; $TOTAL_BYTES / 1024 / 1024 / 1024" | bc)
echo "Total Size: ${TOTAL_GB} GB"
```

### Verify Storage Integrity
```bash
#!/bin/bash
# verify-storage.sh
# Check that every database asset exists in storage

# Fetch the storage listing once instead of once per key
LISTING=$(colibri storage ls --json)

colibri works list --json |
  jq -r '.[].assets[].storageKey' |
  while read -r key; do
    if ! echo "$LISTING" | jq -e --arg key "$key" '.[] | select(.Key == $key)' > /dev/null; then
      echo "Missing: $key"
    fi
  done
```

## Troubleshooting
### Connection Errors

Problem: “Unable to connect to storage endpoint”

Solutions:

```bash
# Verify endpoint is accessible
curl -I https://s3.amazonaws.com

# Test with verbose output
colibri storage connect \
  --endpoint https://s3.amazonaws.com \
  --access-key-id YOUR_KEY \
  --secret-access-key YOUR_SECRET \
  --verbose

# Check firewall/network
telnet s3.amazonaws.com 443
```

### Authentication Errors
Problem: “Access Denied” or “Invalid credentials”

Solutions:

```bash
# Verify credentials
aws s3 ls s3://colibri \
  --endpoint-url https://s3.amazonaws.com \
  --profile test

# Check IAM permissions (AWS)
aws iam get-user-policy \
  --user-name colibri-storage \
  --policy-name S3Access

# Test with a different region
colibri storage connect \
  --endpoint https://s3.amazonaws.com \
  --access-key-id YOUR_KEY \
  --secret-access-key YOUR_SECRET \
  --region us-west-2
```

### Path Style vs Virtual Hosted
Problem: “Bucket not found” with MinIO or some providers

Solution: Enable path-style URLs:

```bash
colibri storage connect \
  --endpoint http://localhost:9000 \
  --access-key-id minioadmin \
  --secret-access-key minioadmin \
  --force-path-style   # Important for MinIO
```

Explanation:

- Virtual-hosted style: https://bucket.s3.amazonaws.com/object
- Path style: https://s3.amazonaws.com/bucket/object

MinIO and some S3-compatible providers require path style.
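To make the difference concrete, here is a small illustrative Python sketch (the helper function and example object key are ours, not part of the CLI) that builds both URL forms from the same endpoint, bucket, and key:

```python
def object_url(endpoint: str, bucket: str, key: str, path_style: bool = False) -> str:
    """Build an object request URL in either S3 addressing style."""
    scheme, host = endpoint.split("://", 1)
    if path_style:
        # Path style: the bucket is the first path segment
        return f"{scheme}://{host}/{bucket}/{key}"
    # Virtual-hosted style: the bucket becomes a subdomain
    return f"{scheme}://{bucket}.{host}/{key}"

print(object_url("https://s3.amazonaws.com", "colibri", "assets/book.epub"))
print(object_url("http://localhost:9000", "colibri", "assets/book.epub", path_style=True))
```

The second form is why path style matters locally: `colibri.localhost` would not resolve, so MinIO expects the bucket in the path instead of the hostname.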
### SSL Certificate Errors

Problem: “SSL certificate verify failed”

Solutions:

```bash
# For development/testing only - skip SSL verification
# (Not recommended for production)
export AWS_CA_BUNDLE=""
export NODE_TLS_REJECT_UNAUTHORIZED="0"

# Or install proper certificates
# For MinIO, use a proper domain with Let's Encrypt
```

### Slow Operations
Problem: Storage commands are slow

Solutions:

```bash
# Use a closer region
colibri storage connect \
  --endpoint https://s3.eu-west-1.amazonaws.com \
  --region eu-west-1 \
  --access-key-id YOUR_KEY \
  --secret-access-key YOUR_SECRET

# Connection pooling is automatic in the SDK

# Check network bandwidth
speedtest

# For large files, consider multipart upload settings
```

### Permission Denied on Delete
Problem: Cannot delete objects

Solutions:

Check that the IAM policy includes DeleteObject. For AWS, ensure the policy has:

```json
{
  "Effect": "Allow",
  "Action": ["s3:DeleteObject"],
  "Resource": "arn:aws:s3:::colibri/*"
}
```

Verify object ownership:

```bash
colibri storage ls --json | jq '.[] | select(.Key == "problem-file.epub")'
```

Try with the force flag:

```bash
colibri storage rm problem-file.epub --force
```

## Security Best Practices
1. Rotate credentials regularly

   ```bash
   # Generate new keys every 90 days
   # Update with the connect command
   colibri storage connect --force --access-key-id NEW_KEY ...
   ```

2. Use IAM policies with least privilege

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
         "Resource": "arn:aws:s3:::colibri/*"
       }
     ]
   }
   ```

3. Never commit credentials to version control

   ```bash
   # Use environment variables
   export S3_ACCESS_KEY="..."
   export S3_SECRET_KEY="..."
   ```

4. Enable MFA delete for production (AWS S3)

   ```bash
   aws s3api put-bucket-versioning \
     --bucket colibri \
     --versioning-configuration Status=Enabled,MFADelete=Enabled
   ```

5. Monitor access logs

   ```bash
   # Enable S3 access logging
   aws s3api put-bucket-logging \
     --bucket colibri \
     --bucket-logging-status file://logging.json
   ```
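The `logging.json` file passed to `put-bucket-logging` is a BucketLoggingStatus document. A sketch of its shape, where the target bucket name and prefix are assumptions (the target bucket must already exist and allow the S3 logging service to write to it):

```json
{
  "LoggingEnabled": {
    "TargetBucket": "colibri-logs",
    "TargetPrefix": "access-logs/"
  }
}
```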
## Performance Tips
- Use regions close to your server: Reduces latency
- Enable CDN for frequently accessed content: CloudFront, Cloudflare
- Consider Cloudflare R2 for high traffic: Zero egress fees
- Use lifecycle rules: Auto-delete old imports
- Enable compression: For text-based files (epub, xml)
- Batch operations: Group multiple uploads/deletes
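As a sketch of the lifecycle-rule tip above: on an AWS-compatible backend, a lifecycle configuration like the following (the rule ID and 7-day window are example values) auto-expires objects under `imports/`. Save it as `lifecycle.json` and apply it with `aws s3api put-bucket-lifecycle-configuration --bucket colibri --lifecycle-configuration file://lifecycle.json`:

```json
{
  "Rules": [
    {
      "ID": "expire-old-imports",
      "Filter": { "Prefix": "imports/" },
      "Status": "Enabled",
      "Expiration": { "Days": 7 }
    }
  ]
}
```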
## See Also
- Storage Configuration - Setting up S3 storage
- CLI Overview - General CLI usage
- Works Commands - Manage books (which use storage)
## External Tools
Useful tools for working with S3 storage:
- AWS CLI: Official AWS command-line tool
- s3cmd: Command-line S3 client
- rclone: Sync files between storage providers
- MinIO Client (mc): Official MinIO CLI
- Cyberduck: GUI client for S3
```bash
# Install AWS CLI
pip install awscli

# Install rclone
curl https://rclone.org/install.sh | sudo bash

# Install MinIO client
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
```