Authenticated File Uploads
Overview
This guide shows how to implement authenticated file uploads with Visulima Storage. We'll use Better Auth as the authentication solution, but the patterns shown here can be adapted to other authentication libraries. Instead of creating a Better Auth plugin, we'll use Better Auth for authentication middleware and call the storage handlers directly from your web framework (Express, Hono, Fastify, etc.).
This approach provides:
- Better separation of concerns: Auth handles authentication, storage handles file operations
- More flexibility: Use any web framework with existing, well-tested handlers
- Easier maintenance: Leverage existing storage handler APIs
- Framework agnostic: Works with Express, Hono, Fastify, and any framework supporting middleware
Installation
First, install Better Auth, Visulima Storage, and your chosen framework:
npm install better-auth @visulima/storage express
# or
npm install better-auth @visulima/storage hono
# or
npm install better-auth @visulima/storage fastify
For cloud storage backends, install the corresponding peer dependencies:
# For AWS S3
npm install @aws-sdk/client-s3 @aws-sdk/credential-providers @aws-sdk/s3-request-presigner @aws-sdk/signature-v4-crt aws-crt @aws-sdk/types
# For Google Cloud Storage
npm install @google-cloud/storage node-fetch gaxios
# For Azure Blob Storage
npm install @azure/storage-blob
Basic Setup
1. Configure Better Auth
Create an auth.ts file to configure Better Auth:
import { betterAuth } from "better-auth";
import { drizzleAdapter } from "better-auth/adapters/drizzle";
import { db } from "@/db";
export const auth = betterAuth({
database: drizzleAdapter(db, {
provider: "pg", // or "mysql", "sqlite"
}),
emailAndPassword: {
enabled: true,
},
});
2. Create Type Definitions (Recommended)
Create proper TypeScript types for better type safety:
// types/express.d.ts
import type { User } from "better-auth/types";
declare global {
namespace Express {
interface Request {
user?: User;
}
}
}
3. Create Authentication Middleware
Create middleware to protect your file upload routes:
Note: The Better Auth API may vary by version. The following example uses
auth.api.getSession(). Please refer to the Better Auth documentation for the exact API in your version.
// middleware/auth.ts
import type { Request, Response, NextFunction } from "express";
import { fromNodeHeaders } from "better-auth/node";
import { auth } from "@/lib/auth";
export const requireAuth = async (req: Request, res: Response, next: NextFunction) => {
try {
// Get session from Better Auth
// Note: API may vary - check Better Auth docs for your version
const session = await auth.api.getSession({
headers: fromNodeHeaders(req.headers), // converts Node headers to a Web API Headers object
});
if (!session) {
return res.status(401).json({ error: "Unauthorized" });
}
// Attach user to request (now properly typed with the type definition above)
req.user = session.user;
next();
} catch (error) {
return res.status(401).json({ error: "Unauthorized" });
}
};
Express Integration
Basic Example with Multipart Handler
import express from "express";
import { DiskStorage, Multipart } from "@visulima/storage";
import { auth } from "@/lib/auth";
import { requireAuth } from "@/middleware/auth";
const app = express();
// Mount Better Auth routes first (before file routes)
app.all("/api/auth/*", (req, res) => {
return auth.handler(req, res);
});
// Initialize storage with OWASP-compliant validation
// See: https://cheatsheetseries.owasp.org/cheatsheets/File_Upload_Cheat_Sheet.html
const storage = new DiskStorage({
directory: "./uploads", // Store outside webroot in production
maxUploadSize: "50MB", // Prevent DoS attacks
// Use allowlist approach - only allow business-critical extensions
allowMIME: [
"image/jpeg", // Explicitly list allowed types
"image/png",
"image/webp",
"application/pdf", // Avoid wildcards when possible
],
// Storage automatically generates UUID-based filenames (OWASP recommended)
filename: (file) => file.id, // UUID prevents path traversal and overwrite attacks
});
const multipart = new Multipart({ storage });
// Protected upload endpoint
app.post("/api/files", requireAuth, multipart.handle, async (req, res) => {
try {
const file = req.body; // File object from storage handler
const user = req.user; // User from Better Auth (properly typed)
// Store file-user association in database (recommended)
await db.file.create({
data: {
storageId: file.id,
userId: user.id,
name: file.originalName,
size: file.size,
contentType: file.contentType,
},
});
res.json({
id: file.id,
url: `/api/files/${file.id}`,
size: file.size,
contentType: file.contentType,
});
} catch (error) {
console.error("Upload error:", error);
// If database save fails, clean up storage file
if (req.body?.id) {
await storage.delete({ id: req.body.id }).catch(() => {});
}
res.status(500).json({ error: "Failed to save file" });
}
});
// Protected file retrieval
app.get(
"/api/files/:id",
requireAuth,
multipart.handle, // Handles GET requests to stream files
(req, res) => {
// File is automatically streamed by the handler
},
);
// Protected file deletion
app.delete("/api/files/:id", requireAuth, async (req, res) => {
try {
const { id } = req.params;
const user = req.user; // Properly typed with Express type definition
// Always verify ownership before deletion
const fileRecord = await db.file.findFirst({
where: { storageId: id, userId: user.id },
});
if (!fileRecord) {
return res.status(403).json({ error: "Forbidden" });
}
// Use transaction to ensure both operations succeed or fail together
await db.$transaction(async (tx) => {
await storage.delete({ id });
await tx.file.delete({ where: { id: fileRecord.id } });
});
res.status(204).send();
} catch (error) {
console.error("Delete error:", error);
res.status(500).json({ error: "Failed to delete file" });
}
});
Advanced Example with User-Scoped Storage
import express from "express";
import { DiskStorage, Multipart } from "@visulima/storage";
import { requireAuth } from "@/middleware/auth";
const app = express();
// Create storage with user-specific file naming
// Note: filename function runs during file creation, before user metadata is available
// For user-scoped paths, store user association in database and use it for organization
const storage = new DiskStorage({
directory: "./uploads",
filename: (file) => {
// Files are organized by ID. User association stored in database
return file.id;
},
});
const multipart = new Multipart({ storage });
// Optional: wrap the storage onCreate hook to run global logic on every upload
// Note: this hook runs for all uploads and has no access to the request, so it
// cannot see the authenticated user. Store the per-request user association in
// the database after the upload completes instead.
const originalOnCreate = storage.onCreate;
storage.onCreate = async (file) => {
if (originalOnCreate) {
await originalOnCreate(file);
}
// Global per-file logic (e.g. logging) can go here
};
app.post("/api/files", requireAuth, multipart.handle, async (req, res) => {
const file = req.body;
const user = req.user; // Properly typed
// Store file-user association in database
// await db.file.create({
// data: {
// storageId: file.id,
// userId: user.id,
// name: file.originalName,
// },
// });
res.json({
id: file.id,
url: `/api/files/${file.id}`,
size: file.size,
});
});
// List user's files
app.get("/api/files", requireAuth, async (req, res) => {
const user = req.user; // Typed via the Express type definition
// Get user's files from database (recommended approach)
// const files = await db.file.findMany({
// where: { userId: user.id },
// });
// res.json({ files });
// Alternative: Filter from storage metadata (if you set it)
const allFiles = await storage.list(1000);
const userFiles = allFiles.filter((file) => file.metadata?.userId === user.id);
res.json({ files: userFiles });
});
Hono Integration
Hono works seamlessly with Better Auth and the storage handlers:
import { Hono } from "hono";
import { DiskStorage, Multipart } from "@visulima/storage";
import { auth } from "@/lib/auth";
const app = new Hono();
const storage = new DiskStorage({ directory: "./uploads" });
const multipart = new Multipart({ storage });
// Authentication middleware for Hono
const requireAuth = async (c: any, next: () => Promise<void>) => {
try {
const session = await auth.api.getSession({
headers: c.req.raw.headers, // already a Web API Headers object
});
if (!session) {
return c.json({ error: "Unauthorized" }, 401);
}
// Store user in Hono context
c.set("user", session.user);
await next();
} catch {
return c.json({ error: "Unauthorized" }, 401);
}
};
// Protected upload endpoint
app.post("/api/files", requireAuth, async (c) => {
const user = c.get("user");
// Use multipart.fetch for Web API Request/Response
const response = await multipart.fetch(c.req.raw);
if (response.ok) {
const file = await response.json();
// Optionally store file-user association
// await db.file.create({
// data: {
// storageId: file.id,
// userId: user.id,
// },
// });
return c.json({
id: file.id,
url: `/api/files/${file.id}`,
size: file.size,
});
}
return response;
});
// Protected file retrieval
app.get("/api/files/:id", requireAuth, async (c) => {
return await multipart.fetch(c.req.raw);
});
// Protected file deletion
app.delete("/api/files/:id", requireAuth, async (c) => {
const id = c.req.param("id"); // param() returns a string, not an object
await storage.delete({ id });
return c.body(null, 204);
});
Fastify Integration
import Fastify from "fastify";
import { DiskStorage, Multipart } from "@visulima/storage";
import { auth } from "@/lib/auth";
const fastify = Fastify();
const storage = new DiskStorage({ directory: "./uploads" });
const multipart = new Multipart({ storage });
// Authentication decorator
fastify.decorate("requireAuth", async (request: any, reply: any) => {
try {
const session = await auth.api.getSession({
headers: request.headers as any,
});
if (!session) {
return reply.code(401).send({ error: "Unauthorized" });
}
request.user = session.user;
} catch {
return reply.code(401).send({ error: "Unauthorized" });
}
});
// Protected upload endpoint
fastify.post("/api/files", { preHandler: fastify.requireAuth }, async (request, reply) => {
// Convert Fastify request to Web API Request
const webRequest = new Request(`http://localhost${request.url}`, {
method: request.method,
headers: request.headers as any,
body: request.raw,
// Node's fetch API requires duplex: "half" when the body is a stream
duplex: "half",
} as any);
const response = await multipart.fetch(webRequest);
const file = await response.json();
return reply.send({
id: file.id,
url: `/api/files/${file.id}`,
size: file.size,
});
});
REST Handler Example
For direct binary uploads or API-first applications:
import express from "express";
import { DiskStorage, Rest } from "@visulima/storage";
import { requireAuth } from "@/middleware/auth";
const app = express();
const storage = new DiskStorage({ directory: "./uploads" });
const rest = new Rest({ storage });
// Upload binary data
app.post("/api/files", requireAuth, rest.handle, (req, res) => {
const file = req.body;
res.json({
id: file.id,
url: `/api/files/${file.id}`,
});
});
// Update file (PUT)
app.put("/api/files/:id", requireAuth, rest.handle, (req, res) => {
const file = req.body;
res.json({ id: file.id });
});
// Batch delete
app.delete("/api/files", requireAuth, rest.handle, (req, res) => {
// Handler processes ?ids=id1,id2,id3 or JSON body
res.status(204).send();
});
TUS Handler for Resumable Uploads
For large files or unreliable networks:
import express from "express";
import { DiskStorage, Tus } from "@visulima/storage";
import { requireAuth } from "@/middleware/auth";
const app = express();
const storage = new DiskStorage({ directory: "./uploads" });
const tus = new Tus({ storage });
// TUS endpoints (all protected)
app.post("/api/files/tus", requireAuth, tus.handle);
app.patch("/api/files/tus/:id", requireAuth, tus.handle);
app.head("/api/files/tus/:id", requireAuth, tus.handle);
app.delete("/api/files/tus/:id", requireAuth, tus.handle);
Cloud Storage Examples
AWS S3 with Better Auth
import express from "express";
import { S3Storage, Multipart } from "@visulima/storage/provider/aws";
import { requireAuth } from "@/middleware/auth";
const app = express();
const storage = new S3Storage({
bucket: process.env.S3_BUCKET,
region: process.env.S3_REGION,
credentials: {
accessKeyId: process.env.S3_ACCESS_KEY_ID,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
},
filename: (file) => {
// Files organized by ID. User association stored separately in database
return file.id;
},
});
const multipart = new Multipart({ storage });
app.post("/api/files", requireAuth, multipart.handle, (req, res) => {
res.json({ id: req.body.id, url: req.body.url });
});
Google Cloud Storage
import { GCStorage, Multipart } from "@visulima/storage/provider/gcs";
import { requireAuth } from "@/middleware/auth";
const storage = new GCStorage({
bucket: process.env.GCS_BUCKET,
projectId: process.env.GCS_PROJECT_ID,
});
const multipart = new Multipart({ storage });
app.post("/api/files", requireAuth, multipart.handle, (req, res) => {
res.json({ id: req.body.id });
});
Client-Side Usage
Upload with Authentication
import { createAuthClient } from "better-auth/react";
const authClient = createAuthClient({
baseURL: "http://localhost:3000",
});
// Upload file
const uploadFile = async (file: File) => {
// Note: the client API returns { data, error } - check Better Auth docs for your version
const { data: session } = await authClient.getSession();
if (!session) {
throw new Error("Not authenticated");
}
const formData = new FormData();
formData.append("file", file);
const response = await fetch("/api/files", {
method: "POST",
// Browsers attach session cookies automatically; the Cookie header cannot be set manually in fetch
credentials: "include",
body: formData,
});
if (!response.ok) {
throw new Error("Upload failed");
}
return response.json();
};
Best Practices
Authentication & Authorization
- Always validate authentication before file operations - Use middleware on all file endpoints
- Verify file ownership before allowing access, update, or deletion
- Store file-user associations in your database for reliable ownership tracking
- Use database transactions when creating file records to ensure consistency
- Implement role-based access control if you need different permissions for different user roles
File Management (OWASP Compliant)
- Use separate upload endpoints for different file types - Create dedicated endpoints for avatars, documents, images, etc. This allows different validation rules, storage locations, and access controls per file type
- Use allowlist for file extensions - Only allow business-critical extensions. Never rely on blocklist alone (OWASP)
- Validate file signatures (magic bytes) - Don't trust Content-Type header. Validate actual file content against expected signatures
- Validate file types and sizes using storage validation options (allowMIME, maxUploadSize)
- Filename safety - Storage automatically generates UUID-based filenames, preventing path traversal and overwrite attacks
- Implement per-user storage quotas to prevent abuse
- Use database for file metadata - Don't rely solely on storage metadata for user associations
- Implement cleanup jobs for orphaned files when users are deleted
- Use signed URLs for private file access (S3, GCS, Azure) instead of public URLs
- File content validation - For images, use image rewriting/transformation. For documents, consider CDR (Content Disarm & Reconstruct)
- Antivirus/Sandbox scanning - Run files through antivirus or sandbox in production environments
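The per-user quota recommendation above can be sketched as a small helper. The quota value and the commented-out database aggregation are hypothetical; adapt them to your schema.

```typescript
// Hypothetical per-user quota check (sketch). How you obtain usedBytes
// (e.g. a SUM over your file table) depends on your schema.
const DEFAULT_QUOTA_BYTES = 500 * 1024 * 1024; // example: 500 MB per user

const canUpload = (
  usedBytes: number,
  incomingFileSize: number,
  quotaBytes: number = DEFAULT_QUOTA_BYTES,
): boolean => usedBytes + incomingFileSize <= quotaBytes;

// In the upload route, before accepting the file (hypothetical db call):
// const used = await db.file.aggregate({ _sum: { size: true }, where: { userId: user.id } });
// if (!canUpload(used._sum.size ?? 0, contentLength)) {
//   return res.status(413).json({ error: "Storage quota exceeded" });
// }
```

Checking the quota before the upload starts (using the Content-Length header) avoids writing a file you will immediately delete.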
Security (OWASP Compliant)
- Implement rate limiting for upload endpoints to prevent abuse and DoS attacks
- Sanitize file names - Storage handlers automatically generate UUID-based filenames, preventing path traversal
- Set appropriate permissions on storage backends (private buckets/containers) - Use least privilege principle
- Use HTTPS for all file transfers
- Validate file content - Always validate actual file content, not just MIME types
- Don't expose internal errors - Return generic error messages to clients
- Protect from CSRF attacks - Ensure CSRF protection is enabled (Better Auth handles this)
- Keep libraries updated - Regularly update Better Auth, storage package, and all dependencies
- File storage location - Prefer storing files on different host or outside webroot (OWASP)
- Use handler mapping - Access files via application handler instead of direct paths to prevent file enumeration
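To illustrate the rate-limiting point above, here is a minimal fixed-window limiter. It is a sketch only: in production, prefer a maintained package (for example express-rate-limit) backed by a shared store such as Redis, since an in-process Map does not work across multiple server instances.

```typescript
// Minimal in-memory fixed-window rate limiter (illustration only).
type RateWindow = { count: number; resetAt: number };
const windows = new Map<string, RateWindow>();

const isAllowed = (
  key: string,
  limit: number,
  windowMs: number,
  now: number = Date.now(),
): boolean => {
  const w = windows.get(key);
  if (!w || now >= w.resetAt) {
    // Start a new window for this key
    windows.set(key, { count: 1, resetAt: now + windowMs });
    return true;
  }
  w.count += 1;
  return w.count <= limit;
};

// Express usage sketch (10 uploads per minute per client IP):
// app.use("/api/files", (req, res, next) =>
//   isAllowed(req.ip ?? "unknown", 10, 60_000)
//     ? next()
//     : res.status(429).json({ error: "Too many requests" }));
```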
Performance & Reliability
- Use database indexes on userId and storageId columns for fast lookups
- Implement proper error handling with try-catch blocks
- Add logging for file operations (upload, delete, access) for auditing
- Use connection pooling for database operations
- Monitor storage usage per user to detect anomalies
Code Quality
- Use TypeScript types - Avoid any types, create proper interfaces for request extensions
- Validate environment variables at startup
- Mount Better Auth routes separately from file routes for clarity
- Handle session expiration gracefully with proper error messages
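The environment-variable validation mentioned above can be done with a small startup helper. The variable names in the usage comment match the S3 examples in this guide; adjust them for your deployment.

```typescript
// Validate required environment variables at startup (sketch).
// Failing fast here is better than a cryptic storage error at upload time.
const requireEnv = (
  env: Record<string, string | undefined>,
  keys: string[],
): Record<string, string> => {
  const missing = keys.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return Object.fromEntries(keys.map((key) => [key, env[key] as string]));
};

// At startup:
// const config = requireEnv(process.env, [
//   "S3_BUCKET", "S3_REGION", "S3_ACCESS_KEY_ID", "S3_SECRET_ACCESS_KEY",
// ]);
```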
Security Considerations
Authentication & Session Management
- Always use requireAuth middleware on file endpoints - Never skip authentication
- Handle session expiration - Return clear error messages when sessions expire
- Validate session freshness - Consider checking session age for sensitive operations
- Mount Better Auth routes - Ensure Better Auth handles its own routes (/api/auth/*)
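The session-freshness check above can be sketched as a pure helper. The Better Auth session shape varies by version; the usage comment assumes a createdAt timestamp is available on the session record, which you should verify against your version's docs.

```typescript
// Session freshness check (sketch): require a recent login for sensitive operations.
const isSessionFresh = (
  createdAt: Date,
  maxAgeMs: number,
  now: number = Date.now(),
): boolean => now - createdAt.getTime() <= maxAgeMs;

// Hypothetical usage for a sensitive operation such as deletion:
// if (!isSessionFresh(session.session.createdAt, 15 * 60_000)) {
//   return res.status(401).json({ error: "Please re-authenticate for this operation" });
// }
```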
Authorization & Access Control
- Verify file ownership before allowing access, update, or deletion
- Use database lookups for ownership verification - Don't trust client-provided data
- Implement role-based access if needed - Check user roles before allowing operations
- Return 403 Forbidden (not 404) when user lacks permission - Prevents information leakage
File Validation (OWASP Compliant)
Following OWASP File Upload Cheat Sheet recommendations:
- Use allowlist for extensions - Only allow business-critical extensions (e.g., .jpg, .png, .pdf). Never use a blocklist approach alone
- Configure allowMIME in storage options to restrict file types - This is a quick check but not sufficient alone
- Validate file signatures (magic bytes) - Don't trust the Content-Type header, as it can be spoofed. Validate actual file content against expected file signatures
- Set maxUploadSize to prevent oversized uploads and DoS attacks
- Validate file content - MIME type can be spoofed, always validate actual file content
- Filename safety - Storage automatically generates UUID-based filenames, which prevents path traversal and overwrite attacks
- Consider virus scanning - Run files through antivirus or sandbox in production
- Content Disarm & Reconstruct (CDR) - For documents (PDF, DOCX, etc.), consider CDR to remove potentially malicious content
- Sanitize file names - Storage handlers automatically sanitize file paths and generate safe filenames
Infrastructure Security (OWASP Compliant)
Following OWASP File Upload Cheat Sheet recommendations:
- File Storage Location (in priority order):
- Store files on a different host - Complete segregation between application and file storage
- Store files outside webroot - Only administrative access allowed
- Store inside webroot with write-only permissions - If read access needed, use proper controls (internal IP, authorized users)
- Path Traversal: Storage handlers automatically sanitize file paths and generate UUID-based filenames
- Use handler mapping - Access files via an application handler (/api/files/:id) instead of direct file paths. Storage uses ID-based access, which prevents direct file enumeration
- CORS: Configure CORS appropriately - Only allow trusted origins
- Rate Limiting: Implement rate limiting to prevent abuse and DoS attacks
- Use private storage buckets - Don't make storage buckets/containers publicly accessible
- Use signed URLs for temporary access instead of public URLs (S3, GCS, Azure)
- Filesystem permissions - Set files with least privilege principle (read/write for required users only)
- Encrypt sensitive files at rest if required by compliance
- Protect from CSRF - Ensure Better Auth and your framework handle CSRF protection
Error Handling
- Don't expose internal errors - Return generic messages to clients
- Log errors server-side - Include user ID, file ID, and operation type
- Handle storage failures gracefully - Implement retry logic for transient failures
- Validate all inputs - Check file IDs, user IDs, and other parameters
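As a concrete instance of "validate all inputs", file IDs can be checked before they reach storage or the database. This sketch assumes UUID-format IDs, matching the UUID-based filenames described in this guide; if your storage produces a different ID format, adjust the pattern.

```typescript
// Reject malformed file IDs early (sketch, assuming UUID-format IDs).
const UUID_PATTERN = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

const isValidFileId = (id: string): boolean => UUID_PATTERN.test(id);

// Express usage sketch:
// app.get("/api/files/:id", requireAuth, (req, res, next) =>
//   isValidFileId(req.params.id) ? next() : res.status(400).json({ error: "Invalid file id" }));
```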
Separate Upload Endpoints for Different File Types
Using separate upload endpoints for different file types (avatars, documents, images, etc.) is a security best practice that provides:
- Different validation rules per file type (size limits, allowed MIME types)
- Different storage locations for better organization
- Different access controls and permissions
- Easier monitoring and auditing of file operations
- Better error handling with type-specific error messages
Example: Separate Endpoints for Avatars and Documents
import express from "express";
import { DiskStorage, Multipart } from "@visulima/storage";
import { requireAuth } from "@/middleware/auth";
import { db } from "@/db";
const app = express();
// Avatar storage - optimized for small images
const avatarStorage = new DiskStorage({
directory: "./uploads/avatars",
maxUploadSize: "5MB", // Smaller limit for avatars
allowMIME: ["image/jpeg", "image/png", "image/webp"], // Only image types
filename: (file) => file.id, // UUID-based filename
});
const avatarHandler = new Multipart({ storage: avatarStorage });
// Document storage - for PDFs, DOCX, etc.
const documentStorage = new DiskStorage({
directory: "./uploads/documents",
maxUploadSize: "50MB", // Larger limit for documents
allowMIME: ["application/pdf", "application/msword", "application/vnd.openxmlformats-officedocument.wordprocessingml.document"],
filename: (file) => file.id,
});
const documentHandler = new Multipart({ storage: documentStorage });
// Avatar upload endpoint
app.post("/api/avatars", requireAuth, avatarHandler.handle, async (req, res) => {
try {
const file = req.body;
const user = req.user;
// Additional validation for avatars (e.g., image dimensions)
// You can use ImageTransformer to validate/transform images
await db.file.create({
data: {
storageId: file.id,
userId: user.id,
name: file.originalName,
size: file.size,
contentType: file.contentType,
category: "avatar", // Track file category
},
});
res.json({
id: file.id,
url: `/api/avatars/${file.id}`,
size: file.size,
});
} catch (error) {
console.error("Avatar upload error:", error);
if (req.body?.id) {
await avatarStorage.delete({ id: req.body.id }).catch(() => {});
}
res.status(500).json({ error: "Failed to upload avatar" });
}
});
// Document upload endpoint
app.post("/api/documents", requireAuth, documentHandler.handle, async (req, res) => {
try {
const file = req.body;
const user = req.user;
// Additional validation for documents (e.g., file signature validation)
// Consider CDR (Content Disarm & Reconstruct) for documents
await db.file.create({
data: {
storageId: file.id,
userId: user.id,
name: file.originalName,
size: file.size,
contentType: file.contentType,
category: "document", // Track file category
},
});
res.json({
id: file.id,
url: `/api/documents/${file.id}`,
size: file.size,
});
} catch (error) {
console.error("Document upload error:", error);
if (req.body?.id) {
await documentStorage.delete({ id: req.body.id }).catch(() => {});
}
res.status(500).json({ error: "Failed to upload document" });
}
});
// Separate retrieval endpoints
app.get("/api/avatars/:id", requireAuth, avatarHandler.handle);
app.get("/api/documents/:id", requireAuth, documentHandler.handle);
// Separate deletion endpoints with category-specific logic
app.delete("/api/avatars/:id", requireAuth, async (req, res) => {
const { id } = req.params;
const user = req.user;
const fileRecord = await db.file.findFirst({
where: { storageId: id, userId: user.id, category: "avatar" },
});
if (!fileRecord) {
return res.status(403).json({ error: "Forbidden" });
}
await db.$transaction(async (tx) => {
await avatarStorage.delete({ id });
await tx.file.delete({ where: { id: fileRecord.id } });
});
res.status(204).send();
});
Benefits of Separate Endpoints
- Type-specific validation - Different file size limits and MIME types per endpoint
- Better organization - Files stored in separate directories/buckets
- Easier monitoring - Track uploads by category (avatar, document, etc.)
- Flexible access control - Different permissions per file type
- Simpler error handling - Type-specific error messages
- Better performance - Optimize storage configuration per file type
Complete Example: Express + Better Auth + S3
import express from "express";
import { S3Storage, Multipart } from "@visulima/storage/provider/aws";
import { auth } from "@/lib/auth";
import { requireAuth } from "@/middleware/auth";
import { db } from "@/db";
const app = express();
app.use(express.json());
const storage = new S3Storage({
bucket: process.env.S3_BUCKET,
region: process.env.S3_REGION,
credentials: {
accessKeyId: process.env.S3_ACCESS_KEY_ID,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
},
maxUploadSize: "50MB",
allowMIME: ["image/*", "application/pdf"],
filename: (file) => {
// Files organized by ID. User association stored separately in database
return file.id;
},
});
const multipart = new Multipart({ storage });
// Upload endpoint
app.post("/api/files", requireAuth, multipart.handle, async (req, res) => {
const file = req.body;
const user = req.user; // Properly typed
// Store file record in database with user association
await db.file.create({
data: {
storageId: file.id,
userId: user.id,
name: file.originalName,
size: file.size,
contentType: file.contentType,
},
});
res.json({
id: file.id,
url: `/api/files/${file.id}`,
size: file.size,
});
});
// List user's files
app.get("/api/files", requireAuth, async (req, res) => {
const user = req.user; // Typed via the Express type definition
const files = await db.file.findMany({
where: { userId: user.id },
});
res.json({ files });
});
// Get file
app.get(
"/api/files/:id",
requireAuth,
async (req, res, next) => {
const { id } = req.params;
const user = req.user; // Typed via the Express type definition
// Verify ownership
const fileRecord = await db.file.findFirst({
where: { storageId: id, userId: user.id },
});
if (!fileRecord) {
return res.status(403).json({ error: "Forbidden" });
}
next();
},
multipart.handle,
);
// Delete file
app.delete("/api/files/:id", requireAuth, async (req, res) => {
const { id } = req.params;
const user = req.user; // Typed via the Express type definition
// Verify ownership
const fileRecord = await db.file.findFirst({
where: { storageId: id, userId: user.id },
});
if (!fileRecord) {
return res.status(403).json({ error: "Forbidden" });
}
await storage.delete({ id });
await db.file.delete({ where: { id: fileRecord.id } });
res.status(204).send();
});
app.listen(3000, () => {
console.log("Server running on port 3000");
});
OWASP File Upload Security
This guide follows security best practices from the OWASP File Upload Cheat Sheet. Key security measures implemented:
- ✅ Allowlist-based extension validation - Only business-critical extensions allowed
- ✅ UUID-based filename generation - Prevents path traversal and file overwrite attacks
- ✅ File size limits - Prevents DoS attacks via large files
- ✅ MIME type validation - Quick check (not sufficient alone)
- ✅ Handler-based file access - Files accessed via application handler (/api/files/:id) instead of direct paths
- ✅ Authentication & Authorization - Only authorized users can upload/access files
- ✅ Private storage - Files stored in private buckets/containers with signed URLs
- ✅ CSRF protection - Handled by Better Auth and framework
Additional Security Recommendations
For production environments, consider implementing:
- File signature validation (Magic Bytes) - Validate magic bytes to ensure file type matches extension. This is critical as Content-Type headers can be spoofed. Example:
// File signature validation (OWASP recommended)
// Validates magic bytes to ensure file type matches declared Content-Type
// See: https://cheatsheetseries.owasp.org/cheatsheets/File_Upload_Cheat_Sheet.html
const validateFileSignature = (fileBuffer: Buffer, expectedType: string): boolean => {
const signatures: Record<string, number[][]> = {
"image/jpeg": [[0xff, 0xd8, 0xff]],
"image/png": [[0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]],
// WebP: "RIFF" at offset 0 (a stricter check would also verify "WEBP" at offset 8;
// this loop only compares signatures at offset 0)
"image/webp": [[0x52, 0x49, 0x46, 0x46]],
"application/pdf": [[0x25, 0x50, 0x44, 0x46]], // %PDF
};
const expectedSignatures = signatures[expectedType];
if (!expectedSignatures) return false;
// Check if file buffer starts with any of the expected signatures
return expectedSignatures.some((sig) => sig.every((byte, index) => fileBuffer[index] === byte));
};
// Use in upload handler with file signature validation
// Note: This example reads the file after upload for validation. For better performance,
// consider validating the file signature from the upload stream before storage.
app.post("/api/files", requireAuth, multipart.handle, async (req, res) => {
try {
const file = req.body;
// Validate file signature matches declared Content-Type (OWASP best practice)
// Note: storage.get() loads entire file into memory. For large files, consider
// using storage.getStream() and reading only the first few bytes for signature validation.
const fileData = await storage.get({ id: file.id });
if (!validateFileSignature(fileData.content, file.contentType)) {
// Delete the file if signature doesn't match
await storage.delete({ id: file.id });
return res.status(400).json({ error: "Invalid file type" });
}
// File signature validated, continue with processing
const user = req.user;
await db.file.create({
data: {
storageId: file.id,
userId: user.id,
name: file.originalName,
size: file.size,
contentType: file.contentType,
},
});
res.json({
id: file.id,
url: `/api/files/${file.id}`,
size: file.size,
});
} catch (error) {
console.error("Upload error:", error);
if (req.body?.id) {
await storage.delete({ id: req.body.id }).catch(() => {});
}
res.status(500).json({ error: "Failed to save file" });
}
});
- Antivirus scanning - Integrate with services like VirusTotal API or ClamAV
- Content Disarm & Reconstruct (CDR) - For documents (PDF, DOCX), use CDR to remove potentially malicious content
- Sandboxing - Process files in isolated environments before making them available
- Image rewriting - For images, use image transformation to remove embedded malicious content (already supported via ImageTransformer)
- Separate storage host - Store files on a different server/host for complete segregation
Next Steps
- Explore Better Auth documentation for authentication features
- Check out Storage documentation for advanced storage features
- Review Framework integrations for framework-specific examples
- See Image Transformers for media processing
- Learn about Error Handling for robust error management
- Read OWASP File Upload Cheat Sheet for comprehensive security guidance