Tech Stack Behind 60fps Mockup

Deep analysis of the stack behind 60fps Mockup, from rendering pipeline to billing and rate limits.

March 4, 2026 · 12 min read · Updated March 13, 2026

If you want to understand how 60fps Mockup is built, this post is a direct breakdown.

You will see:

  • What stack we use and why we chose it
  • How upload to export works under the hood
  • Where performance optimizations matter most
  • Why we chose managed services for backend operations
  • What we deliberately did not build yet
Short version: We optimized for fast browser rendering, stable exports, and low operational overhead.

Quick answer

  • 60fps Mockup is built for predictable browser-based export quality.
  • The stack prioritizes rendering speed, deterministic output, and low operational overhead.
  • Managed services handle billing, auth, monitoring, and rate limits so product work stays focused on the rendering pipeline.

Stack snapshot

Core stack in one view:

  • Frontend: Next.js 16 (App Router) with a single Zustand store
  • Rendering: client-side canvas compositor plus mp4-muxer for MP4 output
  • Auth and database: Supabase
  • Billing: Polar
  • Rate limiting: Upstash Redis
  • Monitoring: Sentry

Product workflow we optimize for

This product is tuned for one workflow:

  1. User uploads an iPhone recording
  2. System detects device model and frame geometry from video dimensions
  3. User edits background, shadow, status bar, and layout
  4. Preview updates in real time
  5. Export produces a PNG image or MP4 video

Every stack decision is measured against this path. Features that do not improve this workflow are intentionally delayed. That constraint keeps the codebase focused and the product fast.

If you want the workflow view of how that output becomes a ship-ready asset, use App Demo Video: A Simple Workflow That Actually Ships.

If you want the practical source-to-output version of the same flow, use Screen Recording to Mockup: The Fastest Practical Workflow.

Frontend and state architecture

The frontend is Next.js 16 with the App Router. Server Components handle initial page loads and metadata. Client Components run all the interactive editor UI.

Frontend principles:

  • Keep initial load fast
  • Load heavy UI chunks only when needed
  • Keep editor state centralized and predictable

Heavy editor sections use dynamic imports in components/editor/editor-shell.tsx. This splits the bundle so the initial load is lighter and the editor loads only when the user is actually using it.

The state model in lib/editor/state.ts is a single Zustand store with around 40 properties:

  • Video metadata: file, dimensions, duration
  • Device config: model, frame geometry
  • Background settings: color, gradient, image
  • Effects: zoom, shadow, status bar
  • Mockup mode: standard, hand, or scene
  • App branding settings
  • Export state: progress, cancellation flag

A single Zustand store keeps updates direct. Controls write to the store and the canvas reads from it without passing props through multiple layers. Cancellation and export guards are easier to implement when the state is centralized.
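The pattern can be sketched without the Zustand dependency. This is a minimal, dependency-free illustration of the centralized-store idea and the export guard it enables; all names and fields here are illustrative, not the actual contents of lib/editor/state.ts.

```typescript
// Minimal sketch of the centralized-store pattern (the real code uses
// Zustand; names and fields are illustrative).
type EditorState = {
  deviceModel: string | null;
  background: { color: string };
  exporting: boolean;
  cancelRequested: boolean;
};

type Listener = (state: EditorState) => void;

function createStore(initial: EditorState) {
  let state = initial;
  const listeners = new Set<Listener>();
  return {
    getState: () => state,
    // Controls write partial updates; subscribers (the canvas) re-render.
    setState(partial: Partial<EditorState>) {
      state = { ...state, ...partial };
      listeners.forEach((l) => l(state));
    },
    subscribe(l: Listener) {
      listeners.add(l);
      return () => listeners.delete(l);
    },
  };
}

const store = createStore({
  deviceModel: null,
  background: { color: "#ffffff" },
  exporting: false,
  cancelRequested: false,
});

// Export guard: a second export attempt is rejected while one is running.
function tryStartExport(): boolean {
  if (store.getState().exporting) return false;
  store.setState({ exporting: true, cancelRequested: false });
  return true;
}
```

Because every control and the canvas read the same store, the guard is one flag check rather than coordination across component trees.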

Rendering pipeline

The renderer in lib/editor/canvas-compositor.ts is the core of the product. Everything from background color to device frame to video frame compositing happens here.

Layer order per frame:

  1. Background layer
  2. Shadow layer
  3. Video frame layer
  4. Status bar layer
  5. Device frame layer
  6. Optional hand or scene mockup layer
  7. Branding and final polish

This order matters. The device frame sits on top of the video frame so the frame bezels cleanly clip the content. Shadows render below so they do not clip the frame itself.
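The fixed ordering can be expressed as an ordered list of draw callbacks. In this illustrative sketch the layers just record their names instead of drawing to a canvas context, which makes the painter's-algorithm ordering visible and testable:

```typescript
// Illustrative sketch of the fixed per-frame layer order. In the real
// compositor each layer draws to a canvas context; here each one records
// its name so the ordering is explicit.
type Draw = (ops: string[]) => void;

const layers: Array<[string, Draw]> = [
  ["background", (ops) => ops.push("background")],
  ["shadow", (ops) => ops.push("shadow")], // below the frame, never clipped by it
  ["video", (ops) => ops.push("video")],
  ["statusBar", (ops) => ops.push("statusBar")],
  ["deviceFrame", (ops) => ops.push("deviceFrame")], // on top, bezels clip the video
  ["mockup", (ops) => ops.push("mockup")],
  ["branding", (ops) => ops.push("branding")],
];

function renderFrame(): string[] {
  const ops: string[] = [];
  for (const [, draw] of layers) draw(ops);
  return ops;
}
```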

Rendering optimizations:

  1. Run `prepareAssetsForFrameRendering()` once before frame loops. This pre-renders shadows, pre-decodes the device frame image, and caches layout geometry. All of this happens once before the export loop starts, not on every frame.
  2. Cache layout so geometry is reused across frames. The canvas size, device position, scale, and shadow position are identical across all frames in a video export. Computing them once and reusing them is a meaningful speedup.
  3. Reuse module-scope scratch canvases. Allocating a new canvas object per frame is expensive, so module-scope scratch canvases are reused and cleared between frames.
  4. Cache decoded images by URL. Device frame images, background images, and mockup overlays are fetched and decoded once, then reused. The cache excludes blob: and data: URLs which change.
  5. Use affine grid transforms in `perspective-warp.ts` for hand and scene output. Hand mockups and scene mockups require perspective correction. This is done with grid subdivision and affine transforms rather than WebGL, keeping the renderer compatible with all browsers without a GPU dependency.
The goal is not a flashy engine. The goal is predictable exports on normal hardware.

Status bar rendering

The status bar layer handles a subtle challenge: the status bar at the top of the iPhone frame needs to match the video content but also be clean and legible.

The implementation uses pixel extension: a dual-edge vertical stretch with horizontal gradient blending. This stretches the top pixels of the video to fill the status bar area, then blends the clean status bar icons in on top. Separately, to work around rendering edge cases on mobile Safari, a fallback path renders the shadow as an SVG element instead.
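A heavily simplified version of the pixel-extension trick can be shown on plain arrays. The real implementation is a dual-edge stretch with horizontal gradient blending on RGBA pixel data; this single-edge sketch only shows the core idea of synthesizing the status bar band from the video's top row:

```typescript
// Simplified sketch of pixel extension: stretch the video's top row of
// pixels vertically to fill the status bar band. Pixels are modeled as
// numbers instead of RGBA values to keep the sketch self-contained.
function extendTopEdge(
  frame: number[][],     // frame[y][x], y = 0 is the top row
  statusBarRows: number, // rows of status bar band to synthesize
): number[][] {
  const topRow = frame[0];
  const band = Array.from({ length: statusBarRows }, () => [...topRow]);
  return [...band, ...frame];
}
```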

Device detection

When a recording is uploaded, the tool reads the video dimensions and matches them against known device aspect ratios with a tolerance of 0.015. This covers the small dimensional variations between similar device generations and lets the tool select the correct frame without user input.

If the aspect ratio does not match a known device, the tool falls back gracefully and lets the user pick a frame manually.

The detection logic lives in lib/editor/device-detection.ts.
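The matching logic can be sketched as a tolerance comparison against a lookup table. The device entries below are illustrative placeholders, not the product's actual device list; only the 0.015 tolerance comes from the post:

```typescript
// Sketch of aspect-ratio device detection with the 0.015 tolerance
// described above. The table entries are illustrative.
const KNOWN_DEVICES: Array<{ model: string; aspect: number }> = [
  { model: "iPhone 17 Pro", aspect: 886 / 1920 }, // illustrative ratio
  { model: "iPhone SE", aspect: 750 / 1334 },     // illustrative ratio
];

const TOLERANCE = 0.015;

function detectDevice(width: number, height: number): string | null {
  const aspect = width / height;
  for (const d of KNOWN_DEVICES) {
    if (Math.abs(aspect - d.aspect) <= TOLERANCE) return d.model;
  }
  return null; // fall back to manual frame selection
}
```

The tolerance is what absorbs the small dimensional differences between similar device generations without producing false matches against unrelated aspect ratios.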

Export flow

Export behavior from lib/editor/useExports.ts:

  • Image export: single frame rendered to a 2160x2160 canvas and saved as PNG.
  • Video export: frame-by-frame render loop, frames passed to mp4-muxer, saved as MP4.
  • Early file handle acquisition happens before the export loop starts, which improves the save dialog flow.
  • Cancellation guards prevent duplicate exports from parallel button clicks.
  • Progress updates during long video exports so the user has feedback.

The video export is done entirely in the browser. No server is involved in rendering. The browser reads the video frames, the compositor draws each frame to the canvas, and mp4-muxer packages the output into a valid MP4 container. This keeps the export fast and eliminates any file transfer latency.
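The shape of the export loop, with its per-frame cancellation check and progress reporting, can be sketched with the rendering and muxing stubbed out. The hook names are illustrative; in the real code the render step drives the compositor and mp4-muxer:

```typescript
// Sketch of the export loop's cancellation and progress behavior. Frame
// rendering and muxing are stubbed; the shape (check the cancel flag each
// frame, report progress, stop cleanly) is the point.
type ExportHooks = {
  renderFrame: (index: number) => void; // compositor + muxer in the real code
  onProgress: (fraction: number) => void;
  isCancelled: () => boolean;
};

function runExportLoop(
  frameCount: number,
  hooks: ExportHooks,
): "done" | "cancelled" {
  for (let i = 0; i < frameCount; i++) {
    if (hooks.isCancelled()) return "cancelled"; // checked every frame
    hooks.renderFrame(i);
    hooks.onProgress((i + 1) / frameCount);
  }
  return "done";
}
```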

Backend checks before export

Before export starts, app/api/exports/start/route.ts runs a short server-side validation:

  • Request rate limit via Upstash Redis
  • Authenticated user check via Supabase
  • Billing state lookup from the database
  • Free vs pro path selection
  • Response headers for rate limit state

Free users get forced defaults: iPhone 17 Pro frame, white background, and 1x scale. Pro users get full access to all frames and backgrounds, plus 2x resolution.

This API call is light and fast. The actual rendering happens client-side. The server check just validates that the user has permission and has not exceeded rate limits.
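The free-versus-pro gating can be sketched as a pure function over the requested export settings. Field names are illustrative; the real route also runs the rate-limit and auth checks first:

```typescript
// Sketch of the free-vs-pro export gating described above.
// Field names are illustrative.
type ExportSettings = {
  deviceFrame: string;
  background: string;
  scale: 1 | 2;
};

function applyPlanLimits(req: ExportSettings, isPro: boolean): ExportSettings {
  if (isPro) return req; // Pro: full access to frames, backgrounds, 2x scale
  // Free tier: forced defaults regardless of what was requested
  return { deviceFrame: "iPhone 17 Pro", background: "white", scale: 1 };
}
```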

Why we chose managed services

Managed services let the team focus on product output quality instead of infrastructure.

  • Supabase handles auth (email OTP and Google OAuth) and the database (billing state, user records, webhook event idempotency)
  • Polar handles subscription checkout, billing portal, and webhook events (verified with HMAC-SHA256)
  • Upstash Redis handles rate limiting with a serverless-compatible Redis API
  • Sentry handles error visibility with session replay and performance traces
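The HMAC-SHA256 webhook verification mentioned for Polar can be sketched with Node's crypto module. The signature encoding and header handling here are assumptions; Polar's documentation defines the actual signature format:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of HMAC-SHA256 webhook verification. The hex signature encoding
// is an assumption; consult the provider's docs for the real format.
function verifyWebhookSignature(
  rawBody: string,
  signatureHex: string,
  secret: string,
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so check length first
  if (received.length !== expected.length) return false;
  return timingSafeEqual(expected, received);
}
```

Using a constant-time comparison instead of `===` avoids leaking signature bytes through timing differences.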

What we intentionally did not build yet:

  • Custom billing infrastructure
  • Server-side rendering farm
  • Broad project or asset management features
  • Custom auth system

Each of these is a significant engineering investment. Using managed services for them means that engineering time goes into the rendering pipeline and user workflow instead.

Billing flow

Plans: Monthly ($5), Yearly ($25), Lifetime ($50) via Polar.sh.

The billing state is stored in a billing_state table in Supabase with fields for user ID, is_pro status, and pro expiry date. The client polls billing state every 2 minutes when the tab is visible via a useBillingState hook. Server-side, getOrCreateBillingState() handles auto-expiry checks and row creation for new users.

Webhook events from Polar are verified and stored with idempotency tracking in a webhook_events table to prevent duplicate processing.
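The auto-expiry check implied by getOrCreateBillingState() can be sketched as a pure function: the pro flag is only honored while the expiry date is in the future. Field names mirror the ones in the post; the null-expiry handling for lifetime plans is an assumption:

```typescript
// Sketch of the billing auto-expiry check. Field names mirror the post;
// the null-expiry convention for lifetime plans is an assumption.
type BillingState = {
  userId: string;
  isPro: boolean;
  proExpiresAt: Date | null; // null = no expiry (lifetime), an assumption
};

function effectiveIsPro(state: BillingState, now: Date): boolean {
  if (!state.isPro) return false;
  if (state.proExpiresAt === null) return true; // lifetime plan
  return state.proExpiresAt.getTime() > now.getTime();
}
```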

Where 60fps Mockup fits

For teams building visual launch assets from iPhone recordings, this stack is optimized for:

  • Fast editor response in normal browser environments without a GPU requirement
  • Repeatable export behavior from the same source input every time
  • Lower maintenance burden while product scope is still focused

Snapshot:

  • Best for: teams that want to understand the technical foundation behind the product
  • Strong point: browser-first rendering, deterministic output, managed service stack
  • Watch for: the stack is optimized for one core workflow, not broad general purpose design

Build notes for similar teams

If you are building a similar browser-based canvas tool, use this checklist:

  1. Define one core workflow and optimize for it before adding features
  2. Keep rendering logic deterministic so exports are consistent
  3. Separate editor state from UI components using a centralized store
  4. Add export guards early to prevent duplicate or partial exports
  5. Cache expensive computations before the export loop, not inside it
  6. Use managed services for auth, billing, and rate limiting until scale forces custom systems
  7. Test exports on mid-range hardware, not just developer machines

Decision checklist

Use these questions if you are choosing a stack for a similar tool:

  1. Do we need browser-first rendering or backend rendering first?
  2. Is predictable export quality more important than broad feature breadth right now?
  3. Which part of our current pipeline is slowest?
  4. Can managed services remove non-core engineering work?
  5. What output workflow do users repeat most often every week?

FAQ

Why did 60fps Mockup choose a browser-first rendering stack?

A browser-first approach keeps feedback fast during editing and makes the path from source to export more direct. The user does not wait for a server round trip to see the effect of a background color change. Rendering happens immediately on their machine.

Why use managed services instead of custom backend systems?

Managed services reduce non-core engineering work and let the team focus on rendering quality and workflow speed. Building a custom billing system or auth system is significant work that does not improve the core export quality for users.

What part of the stack affects export quality most?

The rendering pipeline and export loop affect output quality the most. Layout caching, frame processing consistency, and the compositor layer order all directly affect what the exported file looks like.

Is the export done on a server?

No. The image and video rendering happen entirely in the browser. The server API call before export is only for validation (auth, rate limit, billing state). The actual canvas compositing and MP4 muxing happen client-side.

Why use mp4-muxer instead of MediaRecorder?

mp4-muxer gives precise frame-by-frame control over the video container. MediaRecorder captures a live media stream, which is less predictable for frame-accurate video that goes through a compositor. mp4-muxer also produces valid MP4 files that work across all platforms.

Does the tool work on Windows?

Yes. Because 60fps Mockup is browser-based, it works on any device with a modern browser. There is nothing to install and no Mac-only dependency.

Final summary

  • The stack is built for one workflow: recording upload to clean export.
  • Browser-first rendering keeps the editor responsive and the export direct.
  • Managed services handle everything outside the core rendering pipeline.
  • Every optimization is measured against the user's repeat weekly workflow.

Related reads