Behind the Pixels: Our AI Pipeline from Prompt to Premium Wallpaper
A technical deep-dive into our Python pipeline, fal.ai integration, and the journey from concept to 4K wallpaper.
Introduction
What turns a fragment of language into a crisp, colour-true 4K wallpaper that loads instantly anywhere in the world? At Anwyll AI (“The Beloved One”), we obsess over that question. We combine cutting‑edge machine learning models with a production-hardened Python pipeline to deliver premium AI art at scale. In this technical deep‑dive, I’ll walk through our two‑script architecture, why we chose fal.ai for hosted inference, how we configure Recraft V3 for consistent style, and how our automation transforms raw generations into finished, watermarked, globally cached wallpapers ready for Next.js.
Whether you’re an ML engineer evaluating hosted inference, or a developer building an AI art workflow, this post details the end‑to‑end journey from prompt to CDN—down to Python code, AWS integration, and quality gates.
Architecture at a glance: two‑script pipeline
We keep the core of our wallpaper generation simple and composable. The system is designed as two orchestrated Python scripts:
- generate_images.py: Reads a queue of prompts, calls fal.ai’s hosted Recraft V3 model, saves raw image files and metadata.
- process_and_upload.py: Upscales, watermarks, optimises to WebP/AVIF, performs quality control, uploads to S3, and refreshes CloudFront.
This separation gives us clear concerns and reliable retries. Generation is stateless and parallel; processing is deterministic and idempotent (so we can safely rerun failed steps).
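The idempotency guard can be as simple as checking for the finished artefact before doing any work. A minimal sketch, assuming a layout where processed files share the raw file's stem (the directory names and `.webp` target here are illustrative, not our exact scheme):

```python
from pathlib import Path

RAW_DIR = Path("outputs/raw")
PROCESSED_DIR = Path("outputs/processed")

def needs_processing(raw_path: Path) -> bool:
    """Skip work whose finished artefact already exists, so reruns are safe."""
    final_path = PROCESSED_DIR / f"{raw_path.stem}.webp"
    return not final_path.exists()

def process_pending() -> list[Path]:
    """Return only the raw files that still need the processing pass."""
    PROCESSED_DIR.mkdir(parents=True, exist_ok=True)
    return [p for p in sorted(RAW_DIR.glob("*.png")) if needs_processing(p)]
```

Because every step checks for its own output, a crashed run can simply be restarted and it picks up where it left off.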
High‑level flow:
- Prompt catalogue → generate_images.py → fal.ai (Recraft V3) → raw PNGs + JSON metadata.
- process_and_upload.py → upscaling + watermark + optimisation + QC → S3 object storage → CloudFront global CDN.
- Next.js frontend pulls a catalogue JSON and renders responsive images.
We drive both scripts via GitHub Actions on a schedule and with manual triggers for curated releases.
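For context, the catalogue JSON the frontend reads is a flat list of entries. The schema below is illustrative rather than our production contract (field names, CDN host, and dimensions are assumptions):

```python
import json

# Illustrative catalogue entry; the field names are assumptions, not our exact schema.
entry = {
    "id": "a3f1c2d4",
    "title": "Minimalist Ocean Horizon",
    "aspect_ratio": "16:9",
    "formats": {
        "webp": "https://cdn.example.com/wallpapers/a3f1c2d4.webp",
        "avif": "https://cdn.example.com/wallpapers/a3f1c2d4.avif",
    },
    "width": 3840,
    "height": 2160,
}

catalogue = {"version": 1, "wallpapers": [entry]}
catalogue_json = json.dumps(catalogue, indent=2)
```

A shape like this lets the frontend render a responsive `<picture>` element per wallpaper, choosing AVIF or WebP by browser support.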
Image generation with fal.ai and Recraft V3
Why fal.ai for hosted inference
We evaluated self‑hosting GPUs versus using a provider. We chose fal.ai because:
- No cold starts: low‑latency inference with autoscaling GPUs.
- Streaming progress and webhooks for robust orchestration.
- Simple Python SDK and REST API; consistent semantics across models (Flux, SDXL, Recraft V3).
- Predictable cost controls with per‑invocation billing.
fal.ai lets us focus on prompt engineering and quality control instead of cluster maintenance and CUDA drift.
Recraft V3: capabilities and parameters
Recraft V3 is exceptional at clean, flat illustrations, vector‑like aesthetics, and brand‑ready shapes—ideal for wallpapers that must be legible on home screens and ultra‑wide monitors. Key parameters we tune:
- prompt: The main description, e.g. “serene coastal minimalism in teal gradients, soft shapes, negative space, 4K wallpaper”.
- negative_prompt: We aggressively exclude artefacts: “text, lettering, logo, watermark, grain, jpeg noise, overexposure”.
- aspect_ratio: We generate 16:9, 9:16, and 1:1 variants to support desktop, mobile, and square packs.
- style: We lean on Recraft’s “flat-illustration”/“vector” styles for crisp edges and iconographic simplicity.
- guidance_scale: 5.5–7.0 depending on prompt specificity; higher for strict style adherence.
- num_inference_steps: 24–36; increasing steps improves subtle gradients without amplifying noise.
- seed: Fixed for determinism in releases; randomised during exploration.
Example prompt template we use:
{subject}, flat vector style, soft gradients, harmonious colour palette, balanced negative space, subtle depth via long shadows, premium wallpaper, ultra‑clean edges, high contrast between focal figure and background
Negative prompt: “text, watermark, logo, noisy texture, blurry edges, chromatic aberration, harsh dithering, extra limbs, disfigured”
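The {subject} slot in the template above can be filled programmatically. A small sketch (the template text mirrors the one above; the subjects themselves are examples):

```python
TEMPLATE = (
    "{subject}, flat vector style, soft gradients, harmonious colour palette, "
    "balanced negative space, subtle depth via long shadows, premium wallpaper, "
    "ultra-clean edges, high contrast between focal figure and background"
)

NEGATIVE = (
    "text, watermark, logo, noisy texture, blurry edges, chromatic aberration, "
    "harsh dithering, extra limbs, disfigured"
)

def build_prompt(subject: str) -> dict:
    """Expand a subject into a full prompt/negative-prompt pair."""
    return {"prompt": TEMPLATE.format(subject=subject), "negative_prompt": NEGATIVE}

prompts = [
    build_prompt(s)
    for s in (
        "serene coastal minimalism in teal gradients",
        "geometric mountain silhouettes at dusk",
    )
]
```

Keeping the style clause in one template means a palette or style tweak propagates to the whole catalogue on the next batch run.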
generate_images.py (fal.ai API integration)
We use fal.ai’s Python client for simplicity. If you prefer REST, swap the subscribe call for a POST with your API key and JSON payload.
# generate_images.py
import os
import json
import uuid
import time
from pathlib import Path
from typing import Dict, Any, List

# pip install fal-client pillow requests
import fal_client
import requests

OUTPUT_DIR = Path("outputs/raw")
META_DIR = Path("outputs/meta")
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
META_DIR.mkdir(parents=True, exist_ok=True)

fal_client.api_key = os.environ["FAL_API_KEY"]

RECRAFT_MODEL = "recraft-ai/recraft-v3"  # hosted on fal.ai


def recraft_v3_generate(args: Dict[str, Any]) -> Dict[str, Any]:
    # Blocking call; for high throughput, run in threads or async with asyncio.gather
    result = fal_client.subscribe(
        RECRAFT_MODEL,
        arguments=args,
        with_logs=False,  # set True to stream logs
    )
    return result


def download(url: str, dest: Path) -> None:
    r = requests.get(url, timeout=60)
    r.raise_for_status()
    dest.write_bytes(r.content)


def run_batch(prompts: List[Dict[str, Any]]) -> None:
    for item in prompts:
        prompt_id = str(uuid.uuid4())
        args = {
            "prompt": item["prompt"],
            "negative_prompt": item.get("negative_prompt", ""),
            "aspect_ratio": item.get("aspect_ratio", "16:9"),
            "style": item.get("style", "flat-illustration"),
            "guidance_scale": item.get("guidance_scale", 6.5),
            "num_inference_steps": item.get("num_inference_steps", 28),
            "seed": item.get("seed", 42),
            "output_format": "png",
        }
        t0 = time.time()
        result = recraft_v3_generate(args)
        elapsed = time.time() - t0

        # fal.ai returns signed URLs
        images = result.get("images", [])
        if not images:
            print(f"No image returned for {prompt_id}")
            continue

        url = images[0]["url"]
        raw_path = OUTPUT_DIR / f"{prompt_id}.png"
        download(url, raw_path)

        meta = {
            "id": prompt_id,
            "args": args,
            "source": "recraft-v3",
            "runtime_s": round(elapsed, 2),
            "raw_path": str(raw_path),
            "created_at": int(time.time()),
        }
        (META_DIR / f"{prompt_id}.json").write_text(json.dumps(meta, indent=2))


if __name__ == "__main__":
    # Example batch
    prompts = [
        {
            "prompt": "Minimalist ocean horizon, teal and navy gradients, flat-illustration, calming, premium 4K wallpaper",
            "negative_prompt": "text, watermark, logo, noisy texture, glare, artifacts",
            "aspect_ratio": "16:9",
            "seed": 1234,
        },
        {
            "prompt": "Geometric mountain silhouettes, dusky pink sky, vector style, clean edges, mobile wallpaper",
            "aspect_ratio": "9:16",
            "seed": 5678,
        },
    ]
    run_batch(prompts)
Notes:
- We persist prompt, parameters, and runtime for traceability and reproducibility.
- Deterministic seeds enable exact re‑generations during curation.
- For throughput, we run multiple workers (Python multiprocessing) or switch to the async client.
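The worker fan-out can be as simple as a thread pool around the blocking subscribe call, since the work is I/O-bound. A sketch with a stand-in for the fal.ai call (swap `fake_generate` for the real `recraft_v3_generate`):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fake_generate(args: dict) -> dict:
    # Stand-in for the blocking fal.ai call; replace with recraft_v3_generate(args).
    return {"prompt": args["prompt"], "images": [{"url": "https://example.com/img.png"}]}

def run_batch_concurrent(prompt_args: list, max_workers: int = 4) -> list:
    """Fan prompts out across worker threads; results arrive in completion order."""
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fake_generate, a): a for a in prompt_args}
        for fut in as_completed(futures):
            results.append(fut.result())
    return results

batch = [{"prompt": f"wallpaper {i}"} for i in range(8)]
results = run_batch_concurrent(batch)
```

Threads suit this workload because each worker spends almost all of its time waiting on the network, so the GIL is not a bottleneck.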
Post‑processing and quality control
Raw generations aren’t yet premium wallpapers. We transform them through upscaling, watermarking, optimisation, and rigorous checks before they reach our catalogue.
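As a taste of what this pass does, here is a minimal Pillow sketch of the watermark and WebP steps. The watermark text, placement, opacity, and quality settings below are illustrative, not our production values:

```python
from pathlib import Path
from PIL import Image, ImageDraw

def watermark_and_optimise(src: Path, dst: Path, text: str = "anwyll.ai") -> None:
    """Stamp a translucent text watermark, then re-encode as WebP."""
    img = Image.open(src).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Bottom-right corner, low-opacity white text (position/opacity are illustrative).
    draw.text((img.width - 140, img.height - 40), text, fill=(255, 255, 255, 96))
    composed = Image.alpha_composite(img, overlay).convert("RGB")
    dst.parent.mkdir(parents=True, exist_ok=True)
    composed.save(dst, "WEBP", quality=82, method=6)
```

Compositing the watermark on a transparent overlay, rather than drawing on the image directly, keeps the opacity maths in one place and makes the stamp easy to restyle later.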
Upscaling
Anwyll
Senior GenAI Engineer at Anwyll AI. Passionate about democratizing AI-generated art and building production ML systems. Based in Poole, UK.