This was one of those weeks where everything clicked. I shipped a major infrastructure migration on ContentForge, added AI-powered features I've been wanting for months, and started cleaning up my older iOS video app. Here's what went down.
ContentForge Gets a Real Backend
The biggest effort this week was migrating ContentForge off SQLite and onto Supabase. This was overdue — SQLite worked fine for local dev, but I needed real auth and cloud storage to make this a multi-user product.
I added Supabase auth with a proper login/signup UI, migrated every API route to use Supabase Postgres instead of SQLite, and moved all file storage to Supabase Storage buckets. That last part was important because AI-generated images were using temporary URLs that would break. Now they get saved to Supabase Storage for permanent links.
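The temp-URL-to-permanent-link step can be sketched roughly like this. This is a minimal illustration, assuming the supabase-js v2 storage API shape: the storage client is typed structurally so the sketch stays dependency-free, and the bucket name and path scheme are made up.

```typescript
// Structural stand-in for supabase.storage.from("designs") so this
// sketch has no external dependencies. Method shapes follow supabase-js v2.
interface StorageBucket {
  upload(
    path: string,
    body: ArrayBuffer,
    opts: { contentType: string; upsert: boolean },
  ): Promise<{ error: Error | null }>;
  getPublicUrl(path: string): { data: { publicUrl: string } };
}

// Hypothetical path scheme: one object per generated slide image.
function storagePathFor(postId: string, slideIndex: number): string {
  return `generated/${postId}/slide-${slideIndex}.png`;
}

// Download the image while the temporary URL is still valid, re-upload
// it to Supabase Storage, and return a stable public URL to persist
// with the design instead of the expiring one.
async function persistGeneratedImage(
  bucket: StorageBucket,
  tempUrl: string,
  postId: string,
  slideIndex: number,
): Promise<string> {
  const res = await fetch(tempUrl);
  const bytes = await res.arrayBuffer();
  const path = storagePathFor(postId, slideIndex);
  const { error } = await bucket.upload(path, bytes, {
    contentType: "image/png",
    upsert: true,
  });
  if (error) throw error;
  return bucket.getPublicUrl(path).data.publicUrl;
}
```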
The auth migration touched almost every route, but it forced me to clean up a lot of the API surface. Worth it.
AI Design Generation
This was the fun part. I added an AI design generator that lets you type a prompt and get back a full slide design — text elements, backgrounds, images, the works. It's powered by Leonardo AI for image generation and tied into the existing canvas editor.
You can also feed it a screenshot or a website URL and it'll generate a design inspired by that. The prompt bar lives right on the dashboard, so the workflow is: describe what you want, get a design, tweak it in the editor. I also added visual slide previews to the dashboard cards so you can see what each post looks like at a glance.
One interesting bug: AI-generated text elements kept overflowing their bounding boxes. The fix was straightforward — clamp the text to fit — but it's the kind of thing you only catch when you're actually using the feature end-to-end.
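The clamp fix boils down to shrinking the font until the text's estimated height fits the box. Here's a minimal sketch of that idea; the width-based line estimate and the names (TextElement, clampTextToBox) are illustrative, not ContentForge's actual code.

```typescript
interface TextElement {
  text: string;
  fontSize: number;  // px
  boxWidth: number;  // px
  boxHeight: number; // px
}

const MIN_FONT_SIZE = 10;
const AVG_CHAR_WIDTH = 0.55; // rough char width relative to font size
const LINE_HEIGHT = 1.2;

// Cheap height estimate: how many lines does the text wrap to at this
// font size, and how tall is that stack of lines?
function estimatedHeight(el: TextElement, fontSize: number): number {
  const charsPerLine = Math.max(
    1,
    Math.floor(el.boxWidth / (fontSize * AVG_CHAR_WIDTH)),
  );
  const lines = Math.ceil(el.text.length / charsPerLine);
  return lines * fontSize * LINE_HEIGHT;
}

// Step the font size down until the text fits (or we hit the floor).
function clampTextToBox(el: TextElement): TextElement {
  let fontSize = el.fontSize;
  while (fontSize > MIN_FONT_SIZE && estimatedHeight(el, fontSize) > el.boxHeight) {
    fontSize -= 1;
  }
  return { ...el, fontSize };
}
```

In a real canvas editor you'd measure with the actual text metrics rather than a character-width heuristic, but the shape of the fix is the same.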
Render Worker and Video Export
I built a dedicated render worker that runs on Railway. It uses atomic job claiming so multiple workers can run in parallel without stepping on each other. The worker handles all the Remotion rendering — video export, TTS audio, transitions, the lot.
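Atomic claiming in Postgres is usually done with FOR UPDATE SKIP LOCKED, so two workers polling at the same time can never grab the same row. A sketch of that pattern, with an in-memory stand-in to show the claim semantics; the table and column names are assumptions, not the actual worker schema.

```typescript
// One common Postgres claim query: pick the oldest pending job, skip any
// row another transaction has already locked, and flip it to claimed in
// the same statement.
const CLAIM_SQL = `
  UPDATE render_jobs
     SET status = 'claimed', claimed_by = $1, claimed_at = now()
   WHERE id = (
     SELECT id FROM render_jobs
      WHERE status = 'pending'
      ORDER BY created_at
      LIMIT 1
      FOR UPDATE SKIP LOCKED
   )
   RETURNING *;
`;

interface Job {
  id: string;
  status: "pending" | "claimed" | "done";
  claimedBy?: string;
}

// In-memory stand-in for the same semantics: a job is claimed exactly
// once, at the moment its status flips from pending to claimed.
function claimNextJob(queue: Job[], workerId: string): Job | null {
  for (const job of queue) {
    if (job.status === "pending") {
      job.status = "claimed";
      job.claimedBy = workerId;
      return job;
    }
  }
  return null; // nothing left to claim
}
```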
Getting the Docker setup right took a few rounds. I had to add emoji font support, fix module resolution with symlinks, switch from tsc compilation to the tsx runtime (and then back to tsc + node when the tsx approach caused other issues), and copy over the right scripts. The worker now has a startup health check and proper logging, so I can actually debug issues in production.
I also added bulk video export — you can select multiple rows in the dashboard and export them all at once, each with per-row TTS audio. And I shipped Remotion transitions between slides plus TikTok-style captions, which makes the video output feel way more polished.
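The bulk-export fan-out is conceptually simple: each selected row becomes one render job carrying its own TTS text. A hedged sketch; the shapes and names (RowSelection, RenderJob, buildBulkExportJobs) are assumptions for illustration, not the real ContentForge types.

```typescript
interface RowSelection {
  rowId: string;
  caption: string; // the row's own text, used for per-row TTS
}

interface RenderJob {
  rowId: string;
  kind: "video";
  ttsText: string;
  status: "pending";
}

// One selected row -> one pending render job for the worker to claim.
function buildBulkExportJobs(rows: RowSelection[]): RenderJob[] {
  return rows.map((row) => ({
    rowId: row.rowId,
    kind: "video",
    ttsText: row.caption,
    status: "pending",
  }));
}
```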
The last piece was stitching multiple canvases into a single post. This lets you create a carousel-style video from separate slide designs, which is how most short-form content actually works.
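The stitching math is just a running offset: each slide starts at the sum of the durations before it, and the total is the final cursor position. Names here are assumptions; in the app these offsets would drive the sequencing inside a Remotion composition.

```typescript
interface Slide {
  id: string;
  durationInFrames: number;
}

interface PlacedSlide extends Slide {
  from: number; // start frame in the stitched timeline
}

// Lay the slides end to end on one timeline.
function stitchSlides(slides: Slide[]): { placed: PlacedSlide[]; totalFrames: number } {
  let cursor = 0;
  const placed = slides.map((slide) => {
    const placedSlide: PlacedSlide = { ...slide, from: cursor };
    cursor += slide.durationInFrames;
    return placedSlide;
  });
  return { placed, totalFrames: cursor };
}
```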
Simplifying VideoApp
On the iOS side, I made a significant change to my older video transcription app. I ripped out the entire AWS pipeline — S3 uploads, Lambda triggers, AWS Transcribe — and replaced it with a direct call to OpenAI's Whisper API. Transcriptions now get stored locally instead of in Firestore.
This cut out three external services and a bunch of infrastructure complexity. The app still does the same thing (upload a video, get a transcript, edit it), but the architecture is dramatically simpler. I also moved the RevenueCat and OpenAI API keys to Firebase Remote Config instead of hardcoding them.
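The whole replacement pipeline is now roughly one HTTP call. A sketch of it, written in TypeScript for consistency with the rest of this post (the app itself is Swift): the endpoint and the "whisper-1" model name are OpenAI's real transcription API, but the helper names and the assumption that the extracted audio arrives as a Blob are mine.

```typescript
// Build the multipart body the transcription endpoint expects.
function buildTranscriptionForm(audio: Blob, filename: string): FormData {
  const form = new FormData();
  form.append("file", audio, filename);
  form.append("model", "whisper-1");
  return form;
}

// One request replaces S3 upload -> Lambda trigger -> AWS Transcribe.
// The returned text gets stored locally instead of in Firestore.
async function transcribe(audio: Blob, filename: string, apiKey: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: buildTranscriptionForm(audio, filename),
  });
  if (!res.ok) throw new Error(`Transcription failed: ${res.status}`);
  const json = (await res.json()) as { text: string };
  return json.text;
}
```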
New Project: YouTube Tool
I bootstrapped a new Next.js project this week — a YouTube tool I'm experimenting with. Just the initial scaffolding so far, nothing to show yet. More on this in future weeks.
What I Learned
The theme this week was reducing complexity. Migrating to Supabase consolidated three separate concerns (auth, database, file storage) into one service. Replacing AWS Transcribe with Whisper eliminated an entire cloud pipeline. Even the Docker debugging was about finding the simplest configuration that actually works.
The AI design generator was a reminder that the "last mile" of AI features — making the output actually fit in your UI, persisting it properly, handling edge cases — takes as long as the AI integration itself. The image generation call is one line; making it production-ready was 10 commits.