
Performance Optimization

Maximize app performance for smooth operation at scale.

Target metrics:

Page Load Time:
• Admin dashboard: <2 seconds
• File library: <3 seconds (1000+ files)
• Order history: <2 seconds
• Settings pages: <1.5 seconds
API Response Time:
• File operations: <500ms
• Product mapping: <300ms
• Download link generation: <200ms
• Webhook processing: <1 second
Download Performance:
• Link generation: <500ms
• First byte: <200ms (TTFB)
• Download speed: 10+ MB/s
• Success rate: >98%
System Reliability:
• Uptime: 99.9%
• Error rate: <0.5%
• Database queries: <100ms average
• Background jobs: <5 min processing

Customer experience:

Fast performance (< 2s loads):
✓ 95% customer satisfaction
✓ 15% higher conversion rate
✓ 8% lower cart abandonment
✓ Fewer support tickets
✓ Better reviews and ratings
Slow performance (> 5s loads):
❌ 70% customer satisfaction
❌ 25% lower conversion
❌ 40% cart abandonment increase
❌ 3x more support tickets
❌ Negative reviews

Regular maintenance schedule:

Weekly tasks:

Settings → Advanced → Database → Maintenance
Tasks to run:
☐ Analyze database statistics
☐ Update query planner statistics
☐ Identify slow queries
☐ Review connection pool usage
Command (if direct access):
ANALYZE VERBOSE;
Result: Updated statistics for query optimizer
Time: 2-5 minutes
Benefit: 10-30% faster queries

Monthly tasks:

☐ Rebuild database indexes
☐ Vacuum full database
☐ Review table bloat
☐ Archive old data
Command:
REINDEX DATABASE alva_production;
VACUUM FULL ANALYZE;
Result: Reclaimed space, rebuilt indexes
Time: 15-30 minutes (maintenance window)
Benefit: 20-50% faster queries, reduced storage

Essential indexes:

Files table:

-- Optimize file searches
CREATE INDEX idx_files_shop_id ON files(shop_id);
CREATE INDEX idx_files_created_at ON files(created_at DESC);
CREATE INDEX idx_files_file_type ON files(file_type);
CREATE INDEX idx_files_active ON files(is_active) WHERE is_active = true;
-- Composite index for common queries
CREATE INDEX idx_files_shop_active ON files(shop_id, is_active)
WHERE is_active = true;
Query improvement:
Before: 2,400ms (full table scan)
After: 45ms (index scan)
Speedup: 53x faster

Orders/Purchases table:

-- Optimize order lookups
CREATE INDEX idx_purchases_shop_id ON purchases(shop_id);
CREATE INDEX idx_purchases_customer_email ON purchases(customer_email);
CREATE INDEX idx_purchases_order_date ON purchases(order_date DESC);
CREATE INDEX idx_purchases_fraud_status ON purchases(fraud_status);
-- Composite for fraud checking
CREATE INDEX idx_purchases_shop_fraud ON purchases(shop_id, fraud_status, order_date DESC);
Query improvement:
Before: 1,800ms
After: 35ms
Speedup: 51x faster

Downloads table:

-- Optimize download tracking
CREATE INDEX idx_downloads_token ON downloads(token) WHERE expires_at > NOW();
CREATE INDEX idx_downloads_customer ON downloads(customer_email, created_at DESC);
CREATE INDEX idx_downloads_product ON downloads(product_id, created_at DESC);
-- IP tracking
CREATE INDEX idx_downloads_ip_count ON downloads(customer_ip, token);
Query improvement:
Before: 950ms
After: 12ms
Speedup: 79x faster

Efficient query patterns:

Bad query (N+1 problem):

// ❌ BAD: Makes 1000+ database queries
const files = await prisma.file.findMany({
  where: { shopId: shop.id }
});

for (const file of files) {
  // Separate query for each file!
  const productMappings = await prisma.productMapping.findMany({
    where: { fileId: file.id }
  });
  file.products = productMappings;
}
Result: 1,000 files = 1,001 queries
Time: 15-25 seconds

Good query (eager loading):

// ✅ GOOD: Makes 1 database query
const files = await prisma.file.findMany({
  where: { shopId: shop.id },
  include: {
    productMappings: {
      include: {
        product: true
      }
    }
  }
});
Result: 1,000 files = 1 query
Time: 450ms
Speedup: 33-55x faster

Pagination for large datasets:

Bad approach:

// ❌ BAD: Loads all 10,000 orders at once
const orders = await prisma.purchase.findMany({
  where: { shopId: shop.id },
  orderBy: { orderDate: 'desc' }
});
Result:
Memory: 250 MB
Time: 8-12 seconds
Browser: Freezes, may crash

Good approach:

// ✅ GOOD: Offset pagination with take/skip
const ITEMS_PER_PAGE = 50;

const orders = await prisma.purchase.findMany({
  where: { shopId: shop.id },
  orderBy: { orderDate: 'desc' },
  take: ITEMS_PER_PAGE,
  skip: page * ITEMS_PER_PAGE
});

const total = await prisma.purchase.count({
  where: { shopId: shop.id }
});
Result:
Memory: 2.5 MB (100x less)
Time: 180ms (44-66x faster)
Browser: Smooth, no freezing

Archive old data:

Archival strategy:

Archive criteria:
• Orders > 2 years old
• Cancelled/refunded orders > 1 year
• Test data (any age)
• Failed fraud checks > 6 months
• Expired download tokens > 90 days
Archive process:
1. Export to CSV
2. Store in separate archive database/table
3. Delete from main tables
4. Keep 2-year rolling window
Frequency: Monthly

Archive procedure:

-- Export old orders to archive table
INSERT INTO purchases_archive
SELECT * FROM purchases
WHERE order_date < NOW() - INTERVAL '2 years';
-- Verify export
SELECT COUNT(*) FROM purchases_archive
WHERE order_date < NOW() - INTERVAL '2 years';
-- Delete archived records
DELETE FROM purchases
WHERE order_date < NOW() - INTERVAL '2 years';
-- Vacuum to reclaim space
VACUUM ANALYZE purchases;
Result:
Before: 850,000 orders, 2.4 GB
After: 125,000 orders, 340 MB
Speedup: 5-8x faster queries

Multi-layer caching:

Layer 1: Browser cache (client-side)

Static assets:
• Images: 7 days
• CSS/JS bundles: 365 days (versioned URLs)
• Fonts: 365 days
Cache-Control headers:
Cache-Control: public, max-age=31536000, immutable
Result: 80% fewer asset requests

Layer 2: CDN cache (edge)

Cloudflare R2 CDN:
• File downloads: 24 hours
• Download page HTML: 5 minutes
• API responses: No cache (dynamic)
Cache-Control: public, max-age=86400
Result: 95% cache hit rate
Global: <100ms latency

Layer 3: Application cache (server)

Redis/memory cache:
• Product data: 30 minutes
• Shop settings: 60 minutes
• File metadata: 60 minutes
• Database query results: 5 minutes
TTL (time-to-live): Auto-expire
Result: 70% fewer database queries
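The `cache.get`/`cache.set` interface used in the caching examples later in this guide can be sketched as a minimal in-memory TTL store. This is illustrative only — a production deployment would typically back it with Redis, and the `cache` object name simply mirrors the examples below:

```javascript
// Minimal in-memory TTL cache (sketch). Matches the
// cache.get / cache.set(key, value, ttlSeconds) / cache.delete
// API used in the file-metadata examples in this guide.
const store = new Map();

const cache = {
  async get(key) {
    const entry = store.get(key);
    if (!entry) return null;
    if (Date.now() > entry.expiresAt) {
      store.delete(key); // Expired: evict lazily on read
      return null;
    }
    return entry.value;
  },

  async set(key, value, ttlSeconds) {
    store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  },

  async delete(key) {
    store.delete(key);
  },
};
```

A Redis-backed version would keep the same interface, so application code does not change when you swap the backing store.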

File metadata caching:

Without cache:

// ❌ Every request hits database
async function getFile(fileId) {
  const file = await prisma.file.findUnique({
    where: { id: fileId }
  });
  return file;
}
Result per 1,000 requests:
Database queries: 1,000
Avg response time: 85ms
Database load: High

With cache:

// ✅ Cache for 1 hour
import { cache } from './cache';

async function getFile(fileId) {
  const cacheKey = `file:${fileId}`;

  // Try cache first
  let file = await cache.get(cacheKey);

  if (!file) {
    // Cache miss: query database
    file = await prisma.file.findUnique({
      where: { id: fileId }
    });

    // Store in cache for 1 hour
    await cache.set(cacheKey, file, 3600);
  }

  return file;
}
Result per 1,000 requests:
Database queries: 15-20 (98% cache hit rate)
Avg response time: 8ms (10.6x faster)
Database load: Minimal

When to clear cache:

File updates:

// Clear cache when file modified
async function updateFile(fileId, updates) {
  // Update database
  const file = await prisma.file.update({
    where: { id: fileId },
    data: updates
  });

  // Invalidate cache
  await cache.delete(`file:${fileId}`);
  await cache.delete(`files:shop:${file.shopId}`);

  // Purge CDN cache
  await cloudflare.purgeCache(file.cdnUrl);

  return file;
}

Settings changes:

// Clear all shop-related caches
async function updateShopSettings(shopId, settings) {
  await prisma.shop.update({
    where: { id: shopId },
    data: settings
  });

  // Clear all shop caches
  await cache.deletePattern(`shop:${shopId}:*`);
  await cache.deletePattern(`files:shop:${shopId}:*`);
  await cache.deletePattern(`products:shop:${shopId}:*`);
}

Manual cache clear:

Settings → Advanced → Performance → Clear Cache
Options:
☐ Clear all caches (full reset)
☐ Clear file metadata only
☐ Clear product data only
☐ Clear download pages only
When to use:
• After bulk file updates
• After settings changes
• When seeing stale data
• Before testing new features

Cloudflare R2 CDN setup:

Optimal settings:

Settings → Storage → CDN → Cloudflare R2
☑ CDN enabled
☑ Global edge caching (300+ locations)
☑ HTTP/2 enabled
☑ HTTP/3 (QUIC) enabled
☑ Brotli compression
☑ Auto-minify: HTML, CSS, JS
Cache rules:
• File downloads: 24 hours
• Thumbnails: 7 days
• Static assets: 365 days
Result:
• 40-60% faster downloads globally
• 80% reduced origin requests
• 99.99% availability

Geographic performance:

Download performance by region:
North America:
• TTFB: 45ms
• Download: 35 MB/s
• CDN: 95% cache hit
Europe:
• TTFB: 65ms
• Download: 32 MB/s
• CDN: 93% cache hit
Asia:
• TTFB: 85ms
• Download: 28 MB/s
• CDN: 91% cache hit
Oceania:
• TTFB: 95ms
• Download: 25 MB/s
• CDN: 89% cache hit
Target: <100ms TTFB globally

Image optimization:

Product images:

Optimization:
• Format: WebP (70% smaller than JPEG)
• Fallback: JPEG for old browsers
• Lazy loading: Images below fold
• Responsive: Serve appropriate size
Example:
Original: product.jpg (450 KB)
Optimized: product.webp (135 KB)
Savings: 70%
HTML:
<picture>
  <source srcset="product.webp" type="image/webp">
  <img src="product.jpg" alt="Product" loading="lazy">
</picture>

Icon sprites:

Combine icons into single sprite sheet:
Before: 25 icon files, 25 HTTP requests
After: 1 sprite file, 1 HTTP request
Savings:
• 24 fewer HTTP requests
• 40% smaller total size (compression)
• Faster page loads (600ms → 350ms)

CSS and JavaScript:

Minification:

Build process:
npm run build
Result:
CSS: 125 KB → 42 KB (66% reduction)
JS: 850 KB → 320 KB (62% reduction)
Techniques:
• Remove whitespace
• Remove comments
• Shorten variable names
• Tree-shaking (remove unused code)

Code splitting:

Load only needed JavaScript:
Before (monolithic):
• Bundle: 850 KB
• Initial load: 850 KB
• Time to interactive: 4.2s
After (code splitting):
• Main bundle: 180 KB
• Route chunks: 50-120 KB each
• Initial load: 180 KB
• Time to interactive: 1.8s
• Improvement: 2.3x faster
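The mechanism behind code splitting is load-on-demand with caching: a chunk is fetched the first time it is needed, then reused. A minimal sketch of that pattern (in a real app the loader would be a dynamic `import()`, e.g. `() => import('./routes/FileLibrary')` — a hypothetical path — and `React.lazy` wraps the same idea for components):

```javascript
// Sketch: load a named chunk on first use, then reuse it.
// Caching the promise (not the result) means concurrent callers
// share a single in-flight load.
const chunkCache = new Map();

function lazyLoad(name, loader) {
  if (!chunkCache.has(name)) {
    // First request: start loading and remember the promise
    chunkCache.set(name, loader());
  }
  return chunkCache.get(name);
}
```

This is why the main bundle stays small: route chunks never load until a user actually navigates to them.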

Memoization:

Prevent unnecessary re-renders:

import { memo, useMemo } from 'react';

// ❌ BAD: Re-renders on every parent update
function FileList({ files }) {
  return (
    <div>
      {files.map(file => (
        <FileCard key={file.id} file={file} />
      ))}
    </div>
  );
}

// ✅ GOOD: Only re-renders when files change
const MemoizedFileCard = memo(FileCard);

const FileList = memo(function FileList({ files }) {
  // Copy before sorting: Array.sort mutates in place, and
  // mutating the files prop would defeat the memoization
  const sortedFiles = useMemo(() => {
    return [...files].sort((a, b) => b.createdAt - a.createdAt);
  }, [files]);

  return (
    <div>
      {sortedFiles.map(file => (
        <MemoizedFileCard key={file.id} file={file} />
      ))}
    </div>
  );
});
Result:
Before: 250ms render time (1,000 files)
After: 45ms render time
Speedup: 5.5x faster

Large lists optimization:

Without virtual scrolling:

// ❌ BAD: Renders all 5,000 files at once
function FileLibrary({ files }) {
  return (
    <div>
      {files.map(file => (
        <FileRow key={file.id} file={file} />
      ))}
    </div>
  );
}
Result:
Initial render: 8-12 seconds
Memory: 450 MB
Browser: Laggy scrolling

With virtual scrolling:

// ✅ GOOD: Only renders visible rows (~30)
import { VariableSizeList } from 'react-window';

function FileLibrary({ files }) {
  return (
    <VariableSizeList
      height={600}
      itemCount={files.length}
      itemSize={() => 80}
      width="100%"
    >
      {({ index, style }) => (
        <div style={style}>
          <FileRow file={files[index]} />
        </div>
      )}
    </VariableSizeList>
  );
}
Result:
Initial render: 180ms (44-66x faster)
Memory: 12 MB (37x less)
Browser: Smooth 60fps scrolling

Search input optimization:

Without debouncing:

// ❌ BAD: API call on every keystroke
function SearchBox() {
  const handleSearch = async (query) => {
    const results = await fetch(`/api/search?q=${query}`);
    // ...
  };

  return (
    <input
      type="search"
      onChange={(e) => handleSearch(e.target.value)}
    />
  );
}
Result:
User types "tutorial" (8 characters)
API calls: 8 (t, tu, tut, tuto, tutor, tutori, tutoria, tutorial)
Cost: Expensive, unnecessary load

With debouncing:

// ✅ GOOD: API call only after user stops typing
import { useMemo } from 'react';
import { debounce } from 'lodash';

function SearchBox() {
  const debouncedSearch = useMemo(
    () => debounce(async (query) => {
      const results = await fetch(`/api/search?q=${query}`);
      // ...
    }, 300), // Wait 300ms after last keystroke
    []
  );

  return (
    <input
      type="search"
      onChange={(e) => debouncedSearch(e.target.value)}
    />
  );
}
Result:
User types "tutorial" (8 characters)
API calls: 1 (only "tutorial")
Reduction: 87.5% fewer API calls

Protect against abuse:

Rate limit configuration:

Settings → Advanced → API → Rate Limiting
Limits:
• Anonymous: 60 requests/hour
• Authenticated: 1,000 requests/hour
• Admin API: 5,000 requests/hour
Headers returned:
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 856
X-RateLimit-Reset: 1709856000
When exceeded:
Status: 429 Too Many Requests
Retry-After: 3600 (seconds)
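The server-side check behind those limits can be sketched as a fixed-window counter per client. This is a simplified illustration (in-memory state, illustrative limit numbers from the tiers above); a multi-server deployment would keep the window counters in Redis:

```javascript
// Sketch: fixed-window rate limiter. Tracks a request count per
// client that resets when the window expires; over-limit callers
// get a retryAfterSeconds value to return in the Retry-After header.
const windows = new Map();

function checkRateLimit(clientId, limit = 1000, windowMs = 60 * 60 * 1000) {
  const now = Date.now();
  const w = windows.get(clientId);

  if (!w || now >= w.resetAt) {
    // First request, or the previous window expired: start fresh
    windows.set(clientId, { count: 1, resetAt: now + windowMs });
    return { allowed: true, remaining: limit - 1 };
  }

  if (w.count >= limit) {
    // Over limit: caller should respond 429 Too Many Requests
    return {
      allowed: false,
      remaining: 0,
      retryAfterSeconds: Math.ceil((w.resetAt - now) / 1000)
    };
  }

  w.count++;
  return { allowed: true, remaining: limit - w.count };
}
```

The `remaining` value maps directly onto the `X-RateLimit-Remaining` header shown above.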

Fraud check worker:

Inefficient polling:

// ❌ BAD: Checks every 1 second, wastes resources
setInterval(async () => {
  const jobs = await prisma.fraudCheckQueue.findMany({
    where: { status: 'pending' }
  });

  for (const job of jobs) {
    await processFraudCheck(job);
  }
}, 1000); // Every second
Result:
Database queries: 86,400/day
Most queries: No jobs (wasted)
Database load: High

Efficient polling with backoff:

// ✅ GOOD: Adaptive polling interval
let pollInterval = 5000; // Start with 5 seconds
const MIN_INTERVAL = 5000;
const MAX_INTERVAL = 60000;

async function pollJobs() {
  const jobs = await prisma.fraudCheckQueue.findMany({
    where: { status: 'pending' },
    take: 10 // Batch size
  });

  if (jobs.length > 0) {
    // Jobs found: process and reduce interval
    await Promise.all(jobs.map(processFraudCheck));
    pollInterval = Math.max(MIN_INTERVAL, pollInterval / 2);
  } else {
    // No jobs: increase interval (exponential backoff)
    pollInterval = Math.min(MAX_INTERVAL, pollInterval * 1.5);
  }

  setTimeout(pollJobs, pollInterval);
}

pollJobs();
Result:
Database queries: 2,000-5,000/day (94-98% reduction)
Database load: Minimal
Adaptability: Fast when busy, slow when idle

Optimize connections:

Configuration:

DATABASE_URL=postgresql://user:pass@host:5432/db?connection_limit=20&pool_timeout=10&connect_timeout=5
Connection pool:
• Min connections: 5
• Max connections: 20
• Idle timeout: 10 minutes
• Connection timeout: 5 seconds
Why pooling helps:
• Reuse connections (avoid TCP handshake)
• Limit max connections (prevent overload)
• Queue requests when pool full
• Auto-recovery from connection errors
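The "queue requests when pool full" behavior can be sketched as a concurrency limiter: at most `max` tasks run at once and extra callers wait for a free slot. This is a simplified illustration of the queuing aspect only — real pools (Prisma's `connection_limit`, pg-pool) also reuse live TCP connections, which this sketch does not model:

```javascript
// Sketch: concurrency-limited pool. Extra run() calls wait in a
// FIFO queue until a running task releases its slot.
class Pool {
  constructor(max) {
    this.max = max;
    this.active = 0;
    this.queue = [];
  }

  async run(task) {
    // Pool full: wait until a finishing task wakes us up.
    // Re-check after waking in case another caller took the slot.
    while (this.active >= this.max) {
      await new Promise(resolve => this.queue.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      const next = this.queue.shift();
      if (next) next(); // Hand the freed slot to the next waiter
    }
  }
}
```

With `max` set below the database's connection ceiling, a traffic spike queues in the app instead of overloading the database.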

Pool monitoring:

Monitor connection pool:
Settings → Advanced → Database → Connection Pool
Metrics:
• Active connections: 12/20
• Idle connections: 8
• Queued requests: 0
• Average wait time: 2ms
Alerts:
⚠️ Pool 90% full: Consider increasing limit
❌ Queue building up: Database overloaded

Key metrics to track:

Dashboard metrics:

Analytics → Performance
Response times:
• P50 (median): 180ms
• P95: 650ms
• P99: 1,200ms
Target: P95 < 1,000ms
Throughput:
• Requests/min: 450
• Downloads/min: 120
• Orders/min: 25
Error rates:
• 2xx success: 98.5%
• 4xx client errors: 1.2%
• 5xx server errors: 0.3%
Target: 5xx < 0.5%
Database:
• Query time P95: 85ms
• Connection pool: 65% utilization
• Slow queries (>1s): 2/hour
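The P50/P95/P99 figures above come from ranking the response-time samples: sort them and take the value at the given rank. A sketch using the nearest-rank method (monitoring tools may interpolate instead, so exact values can differ slightly):

```javascript
// Sketch: nearest-rank percentile over latency samples (ms).
// P95 = the value that 95% of samples fall at or below.
function percentile(samples, p) {
  if (samples.length === 0) return 0;
  // Copy before sorting so the caller's array is untouched
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(rank, sorted.length) - 1];
}
```

P95 is a better target than the average because a handful of slow outliers (the requests customers actually complain about) barely move the mean.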

Track actual user experience:

Core Web Vitals:

Metrics tracked:
• LCP (Largest Contentful Paint): 1.8s
Target: < 2.5s ✓
• FID (First Input Delay): 45ms
Target: < 100ms ✓
• CLS (Cumulative Layout Shift): 0.08
Target: < 0.1 ✓
• TTFB (Time to First Byte): 320ms
Target: < 500ms ✓
Score: 95/100 (Excellent)

By page type:

Dashboard:
• LCP: 1.2s
• FID: 35ms
• Score: 98/100
File Library (1,000+ files):
• LCP: 2.1s
• FID: 65ms
• Score: 89/100
Order History:
• LCP: 1.5s
• FID: 40ms
• Score: 95/100

Set up alerts:

Critical alerts (immediate action):

Settings → Monitoring → Alerts → Critical
Triggers:
☑ Error rate >2% for 5 minutes
☑ API response time P95 >2s for 5 minutes
☑ Download success rate <95% for 10 minutes
☑ Database connection pool >90% for 5 minutes
☑ Server down/unreachable
Notification:
• Email: dev-team@example.com
• SMS: +1-555-0100 (on-call)
• Slack: #alerts-critical
• PagerDuty: Escalation policy

Warning alerts (investigate soon):

Triggers:
☑ Error rate >1% for 15 minutes
☑ API response time P95 >1s for 15 minutes
☑ Download speed <5 MB/s for 30 minutes
☑ Database slow queries >10/hour
☑ CDN cache hit rate <80%
Notification:
• Email: dev-team@example.com
• Slack: #alerts-warning

Set performance targets:

Budget enforcement:

Performance budgets:
• Page weight: 500 KB max
• JavaScript bundle: 200 KB max
• CSS: 50 KB max
• Images: 200 KB max per page
• Time to interactive: 3s max
• API response: 500ms max
Enforcement:
• CI/CD checks on build
• Alerts if budget exceeded
• Block deployment if >10% over budget
Example CI check:
✅ Bundle size: 185 KB (92% of budget)
✅ Page weight: 445 KB (89% of budget)
❌ API response P95: 650ms (130% of budget)
Result: Build fails, investigate before deploying
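The CI enforcement step above can be sketched as a simple comparison of measured values against budgets, failing when any metric exceeds its budget by more than 10%. The budget and metric names here are illustrative, taken from the numbers in this guide:

```javascript
// Sketch: performance-budget gate for CI. Fails when any metric
// is more than 10% over its budget, per the enforcement rule above.
const budgets = { bundleKb: 200, pageWeightKb: 500, apiP95Ms: 500 };

function checkBudgets(measured) {
  const failures = [];
  for (const [metric, budget] of Object.entries(budgets)) {
    const value = measured[metric];
    const pct = Math.round((value / budget) * 100);
    if (value > budget * 1.1) {
      // Over the 10% tolerance: block the deployment
      failures.push(`${metric}: ${value} (${pct}% of budget)`);
    }
  }
  return { pass: failures.length === 0, failures };
}
```

Wired into CI, a non-empty `failures` list would fail the build, matching the "Block deployment if >10% over budget" rule.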

Before major releases:

Load test procedure:

Tools:
• Apache JMeter (open source)
• k6 (modern, scriptable)
• Artillery (Node.js-based)
Test scenarios:
1. Normal load: 100 concurrent users
2. Peak load: 500 concurrent users
3. Stress test: 1,000+ concurrent users
4. Spike test: Sudden traffic surge
Duration:
• Ramp-up: 5 minutes
• Sustained: 30 minutes
• Ramp-down: 5 minutes

Example k6 test:

// load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  stages: [
    { duration: '5m', target: 100 },  // Ramp up to 100 users
    { duration: '30m', target: 100 }, // Stay at 100 users
    { duration: '5m', target: 0 },    // Ramp down to 0
  ],
  thresholds: {
    http_req_duration: ['p(95)<1000'], // 95% of requests < 1s
    http_req_failed: ['rate<0.01'],    // <1% failure rate
  },
};

export default function () {
  // Test file library
  let res = http.get('https://yourshop.myshopify.com/admin/files');
  check(res, {
    'status is 200': (r) => r.status === 200,
    'response < 2s': (r) => r.timings.duration < 2000,
  });
  sleep(1);

  // Test download link generation
  res = http.post('https://yourshop.myshopify.com/api/download', {
    order_id: '12345'
  });
  check(res, {
    'link generated': (r) => r.status === 200,
    'response < 500ms': (r) => r.timings.duration < 500,
  });
  sleep(2);
}

Run test:

k6 run --vus 100 --duration 30m load-test.js
Results:
status is 200: 98.5%
response < 2s: 96.2%
link generated: 99.1%
response < 500ms: 97.8%
http_req_duration: avg=450ms, p(95)=850ms
http_req_failed: 1.2%
Conclusion: System handles load well

Diagnostic process:

Step 1: Measure

Tools:
• Browser DevTools (Network, Performance)
• Analytics dashboard
• Database query logs
• Server logs
Identify slow areas:
☐ Slow page loads (>3s)
☐ Slow API responses (>1s)
☐ Slow database queries (>100ms)
☐ High error rates (>1%)
☐ Memory leaks (growing usage)

Step 2: Profile

Use browser profiler:
1. Open DevTools → Performance
2. Click Record
3. Navigate/interact with app
4. Stop recording
5. Analyze flame graph
Look for:
• Long tasks (>50ms, blocks UI)
• Excessive re-renders
• Memory leaks
• Slow network requests

Step 3: Database profiling

-- Find slow queries (PostgreSQL)
-- Note: on PostgreSQL 13+, these columns are named
-- total_exec_time, mean_exec_time, and max_exec_time
SELECT
  query,
  calls,
  total_time / 1000 AS total_seconds,
  mean_time / 1000 AS avg_seconds,
  max_time / 1000 AS max_seconds
FROM pg_stat_statements
WHERE mean_time > 100 -- Slower than 100ms
ORDER BY total_time DESC
LIMIT 10;
Example result:
Query: SELECT * FROM files WHERE shop_id = ?
Calls: 45,000
Avg: 850ms
Max: 3,200ms
Issue: Missing index on shop_id
Fix: CREATE INDEX idx_files_shop_id ON files(shop_id);
Result: Avg drops to 12ms (70x faster)

Issue 1: Slow file library (1,000+ files)

Symptoms:
• Page load: 8-15 seconds
• Browser freezing
• High memory usage
Causes:
• Loading all files at once (no pagination)
• No virtual scrolling
• Rendering all file cards upfront
• No caching
Fixes:
✓ Enable pagination (50 files per page)
✓ Implement virtual scrolling
✓ Lazy load file thumbnails
✓ Cache file metadata (1 hour)
✓ Add database indexes
Result:
Before: 12s load, 450 MB memory
After: 1.8s load, 45 MB memory
Improvement: 6.6x faster, 10x less memory

Issue 2: Slow order processing

Symptoms:
• Webhook processing >5 seconds
• Delayed email notifications
• Fraud checks timing out
Causes:
• Synchronous processing (blocking)
• No background jobs
• N+1 database queries
• External API calls in webhook
Fixes:
✓ Move to background jobs (async)
✓ Batch database queries
✓ Cache product/file data
✓ Optimize fraud check queries
✓ Parallel processing where possible
Result:
Before: 6.5s webhook processing
After: 450ms webhook + background job
Improvement: 14x faster, non-blocking

Issue 3: Slow download link generation

Symptoms:
• Customer waits 3-5 seconds for download
• High bounce rate
• Support complaints
Causes:
• Database query for every file in order
• Generating signed URLs synchronously
• No caching of download metadata
• Checking fraud status on every request
Fixes:
✓ Cache download tokens (5 minutes)
✓ Pre-generate links on order (async)
✓ Batch database queries
✓ Cache fraud status
✓ Use database indexes
Result:
Before: 3.8s link generation
After: 180ms link retrieval
Improvement: 21x faster

Respect limits:

Rate limit tiers:

Shopify Plus: 20 requests/second
Advanced: 10 requests/second
Standard: 4 requests/second
Basic: 2 requests/second
Leaky bucket algorithm:
• Requests added to bucket
• Bucket drains at fixed rate
• Bucket full = rate limited
Header: X-Shopify-Shop-Api-Call-Limit: 32/40
Meaning: 32 of 40 requests used, 8 remaining
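The leaky-bucket behavior described above can be sketched directly: the bucket fills by one per request, drains at a fixed rate, and rejects requests only while full. The capacity and drain-rate numbers are illustrative, matching the header and the Basic-plan tier above:

```javascript
// Sketch: leaky-bucket rate limiter. level is the bucket's
// current fill; it drains continuously at leakPerSecond.
class LeakyBucket {
  constructor(capacity, leakPerSecond) {
    this.capacity = capacity;           // e.g. 40, as in the header above
    this.leakPerSecond = leakPerSecond; // e.g. 2 for the Basic plan
    this.level = 0;
    this.lastLeak = Date.now();
  }

  tryRequest(now = Date.now()) {
    // Drain the bucket for the time elapsed since the last check
    const elapsedSec = (now - this.lastLeak) / 1000;
    this.level = Math.max(0, this.level - elapsedSec * this.leakPerSecond);
    this.lastLeak = now;

    if (this.level + 1 > this.capacity) {
      return false; // Bucket full: rate limited
    }
    this.level += 1;
    return true;
  }
}
```

This is why short bursts succeed even on low tiers: a burst only fails once it outpaces the drain rate long enough to fill the bucket.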

Rate limit strategy:

// ✅ GOOD: Respect rate limits
import { delay } from './utils';

class ShopifyClient {
  constructor() {
    this.maxRequestsPerSecond = 4; // For Standard plan
    this.lastCallLimit = null;     // Header value from the previous response
  }

  async request(endpoint) {
    // Wait if the previous response showed the limit approaching
    if (this.lastCallLimit) {
      const [used, max] = this.lastCallLimit.split('/').map(Number);
      if (used >= max * 0.9) {
        // 90% of limit: wait 1 second before the next call
        await delay(1000);
      }
    }

    const response = await fetch(`https://shop.myshopify.com/admin/api/2024-01/${endpoint}`);
    this.lastCallLimit = response.headers.get('x-shopify-shop-api-call-limit');
    return response;
  }
}

Choose efficient API:

REST (multiple requests):

// ❌ SLOWER: 3 separate REST API calls
const product = await fetch('/admin/api/2024-01/products/123.json');
const variants = await fetch('/admin/api/2024-01/products/123/variants.json');
const metafields = await fetch('/admin/api/2024-01/products/123/metafields.json');
Time: 850ms (3 round trips)
API calls: 3/40 used

GraphQL (single request):

// ✅ FASTER: 1 GraphQL request
const query = `
  query {
    product(id: "gid://shopify/Product/123") {
      title
      variants(first: 10) {
        edges {
          node {
            title
            price
          }
        }
      }
      metafields(first: 10) {
        edges {
          node {
            namespace
            key
            value
          }
        }
      }
    }
  }
`;

const result = await fetch('/admin/api/2024-01/graphql.json', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query })
});
Time: 280ms (1 round trip)
API calls: 1/40 used
Improvement: 3x faster, 66% fewer API calls

Before launch:

☐ Database indexes on all foreign keys
☐ Pagination enabled for large lists (>100 items)
☐ Caching strategy implemented
☐ CDN configured and tested
☐ Assets optimized (images, CSS, JS)
☐ Virtual scrolling for long lists
☐ Background jobs for async operations
☐ Rate limiting configured
☐ Load testing completed
☐ Monitoring and alerts set up
☐ Performance budgets defined

Ongoing maintenance:

☐ Weekly database optimization
☐ Monthly index rebuild
☐ Quarterly data archival
☐ Review slow query logs weekly
☐ Monitor cache hit rates
☐ Review error logs daily
☐ Load test before major releases
☐ Update performance budgets quarterly

Immediate improvements:

1. Enable caching (5 minutes)

Settings → Performance → Caching
☑ Enable all caching layers
Impact: 50-70% faster page loads
Effort: Minimal

2. Add database indexes (10 minutes)

-- Essential indexes
CREATE INDEX idx_files_shop_id ON files(shop_id);
CREATE INDEX idx_purchases_shop_id ON purchases(shop_id);
CREATE INDEX idx_downloads_token ON downloads(token);
Impact: 10-50x faster queries
Effort: Low

3. Enable CDN (15 minutes)

Settings → Storage → CDN
☑ Enable Cloudflare R2 CDN
Impact: 40-60% faster downloads
Effort: Low

4. Implement pagination (30 minutes)

// Replace full list with pagination
<DataTable
rows={files}
pagination={true}
rowsPerPage={50}
/>
Impact: 5-10x faster page loads
Effort: Medium

5. Optimize images (20 minutes)

# Convert to WebP
npm install sharp
node scripts/convert-images-to-webp.js
Impact: 60-70% smaller images
Effort: Low

Targets:

Page load: <2s
API response: <500ms
Download speed: 10+ MB/s
Uptime: 99.5%
Configuration:
• Database: Basic optimization
• Caching: Moderate (1 hour)
• CDN: Enabled
• Monitoring: Weekly checks

Targets:

Page load: <1.5s
API response: <300ms
Download speed: 20+ MB/s
Uptime: 99.9%
Configuration:
• Database: Weekly optimization, monthly archival
• Caching: Aggressive (multi-layer)
• CDN: Optimized with custom rules
• Monitoring: Daily checks, automated alerts

Targets:

Page load: <1s
API response: <200ms
Download speed: 30+ MB/s
Uptime: 99.95%
Configuration:
• Database: Daily optimization, weekly archival
• Caching: Aggressive (Redis + CDN)
• CDN: Custom domain, optimized routing
• Monitoring: Real-time monitoring, 24/7 alerts
• Load testing: Before all releases
• Performance budgets: Strictly enforced