MultiGen Use Cases - Real-World Integration Patterns¶
Last Updated: October 18, 2025
Version: v0.1.98
Target Audience: Engineers integrating MultiGen into production systems
Table of Contents¶
- Embedded Systems & IoT
- Web Services & APIs
- CLI Tools & DevOps
- Performance-Critical Applications
- Data Processing Pipelines
- WebAssembly & Browser Applications
- Cross-Platform Libraries
- Research & Academia
Embedded Systems & IoT¶
Use Case: Sensor Data Processing on Microcontroller¶
Problem: Prototype algorithm in Python, deploy to resource-constrained device
Solution: C backend with minimal runtime
# sensor_processing.py
def process_sensor_data(readings: list) -> int:
    """Filter and aggregate sensor data."""
    filtered: list = []
    for reading in readings:
        if reading > 0 and reading < 100:
            filtered.append(reading)
    total: int = 0
    for value in filtered:
        total += value
    return total // len(filtered) if len(filtered) > 0 else 0

def main() -> int:
    readings: list = [15, 23, -1, 45, 200, 67, 82, 34]
    average: int = process_sensor_data(readings)
    print(average)
    return 0
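Before cross-compiling, it is worth sanity-checking the prototype on the host. A minimal pytest-style sketch (file name and expected values are illustrative, derived from the readings above):

# test_sensor_processing.py (illustrative host-side check)
from sensor_processing import process_sensor_data

def test_filters_out_of_range_readings() -> None:
    # -1 and 200 fall outside the 0-100 window; the remaining six readings average to 44
    assert process_sensor_data([15, 23, -1, 45, 200, 67, 82, 34]) == 44

def test_empty_input_returns_zero() -> None:
    assert process_sensor_data([]) == 0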
Build & Deploy:
# Compile to C
multigen build -t c sensor_processing.py
# Result: 82KB binary, zero dependencies
ls -lh build/sensor_processing
# -rwxr-xr-x 82K sensor_processing
# Cross-compile for ARM
arm-linux-gnueabi-gcc -std=c99 \
-I src/multigen/backends/c/runtime \
-I src/multigen/backends/c/ext \
build/src/sensor_processing.c \
src/multigen/backends/c/runtime/*.c \
-o build/sensor_processing_arm
Benefits:
- [x] Prototype in Python (fast iteration)
- [x] Deploy to embedded device (C binary)
- [x] No runtime dependencies (static linking)
- [x] Small binary size (fits in limited flash memory)
Web Services & APIs¶
Use Case: High-Performance API Endpoint¶
Problem: Python API too slow, need 10x speedup without full rewrite
Solution: Rust backend for safety + performance
# api_handler.py
def process_request(user_ids: list) -> dict:
    """Process batch user request."""
    results: dict = {}
    for user_id in user_ids:
        # Simulate expensive calculation
        score: int = compute_score(user_id)
        results[user_id] = score
    return results

def compute_score(user_id: int) -> int:
    """Complex scoring algorithm."""
    total: int = 0
    for i in range(100):
        total += (user_id * i) % 997
    return total

def main() -> int:
    users: list = [1, 2, 3, 4, 5]
    results: dict = process_request(users)
    for user_id in results:
        print(f"User {user_id}: {results[user_id]}")
    return 0
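The speedup figures below compare against the interpreted Python baseline; a minimal stdlib-only sketch for establishing that baseline (workload size is illustrative):

# bench_api_handler.py (illustrative baseline timing)
import time

from api_handler import process_request

if __name__ == "__main__":
    start = time.perf_counter()
    for _ in range(1000):
        process_request(list(range(1, 101)))
    elapsed = time.perf_counter() - start
    print(f"1000 batches of 100 users: {elapsed:.3f}s")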
Build & Deploy:
# Compile to Rust
multigen build -t rust api_handler.py
# Benchmark
time ./build/api_handler
# real 0m0.200s (vs Python: 2.1s = 10x faster)
# Deploy as microservice
./build/api_handler --port 8080
Integration with Existing Rust Service:
// main.rs
use actix_web::{web, App, HttpServer, Responder};

mod api_handler; // MultiGen-generated module

async fn handle_batch(users: web::Json<Vec<i32>>) -> impl Responder {
    let results = api_handler::process_request(users.into_inner());
    web::Json(results)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new().route("/batch", web::post().to(handle_batch))
    })
    .bind("0.0.0.0:8080")?
    .run()
    .await
}
Benefits:
- [x] 10x performance improvement
- [x] Memory safety (Rust borrow checker)
- [x] Easy integration with Rust web frameworks
- [x] Incremental migration (port bottlenecks first)
CLI Tools & DevOps¶
Use Case: Fast DevOps Automation Tool¶
Problem: Build internal tool, need instant compilation for rapid iteration
Solution: Go backend for 63ms compile time
# deploy_tool.py
def parse_config(lines: list) -> dict:
    """Parse deployment configuration."""
    config: dict = {}
    for line in lines:
        if "=" in line:
            parts: list = line.split("=")
            key: str = parts[0].strip()
            value: str = parts[1].strip()
            config[key] = value
    return config

def validate_deployment(config: dict) -> bool:
    """Validate deployment configuration."""
    required: list = ["app_name", "region", "instances"]
    for key in required:
        if key not in config:
            print(f"Missing required key: {key}")
            return False
    return True

def main() -> int:
    # Read config
    with open("deploy.conf") as f:
        lines: list = f.readlines()
    config: dict = parse_config(lines)
    if validate_deployment(config):
        print("[x] Configuration valid")
        # Proceed with deployment...
        return 0
    else:
        print("[ ] Configuration invalid")
        return 1
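parse_config expects one key = value pair per line; an example deploy.conf (values are illustrative):

app_name = billing-service
region = us-east-1
instances = 3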
Build & Deploy:
# Instant compilation
time multigen build -t go deploy_tool.py
# real 0m0.063s (effectively instant)
# Run
./build/deploy_tool
# Distribute
# 2.3MB self-contained binary, no dependencies
scp build/deploy_tool prod-server:/usr/local/bin/
Benefits:
- [x] Instant compilation (edit-compile-test loop < 100ms)
- [x] Single binary deployment
- [x] No runtime dependencies
- [x] Great for internal tools with frequent changes
Performance-Critical Applications¶
Use Case: Scientific Computing Algorithm¶
Problem: NumPy too slow, need maximum performance
Solution: LLVM backend with O3 optimization
# matrix_ops.py
def matrix_multiply(a: list, b: list, n: int) -> list:
    """Optimized matrix multiplication."""
    result: list = []
    for i in range(n):
        row: list = []
        for j in range(n):
            sum_val: int = 0
            for k in range(n):
                sum_val += a[i][k] * b[k][j]
            row.append(sum_val)
        result.append(row)
    return result

def main() -> int:
    n: int = 100
    # Initialize matrices
    a: list = []
    b: list = []
    for i in range(n):
        row_a: list = []
        row_b: list = []
        for j in range(n):
            row_a.append(i + j)
            row_b.append(i - j)
        a.append(row_a)
        b.append(row_b)
    # Compute
    result: list = matrix_multiply(a, b, n)
    # Print checksum
    total: int = 0
    for i in range(n):
        for j in range(n):
            total += result[i][j]
    print(total)
    return 0
Build & Benchmark:
# No optimization (debug)
multigen build -t llvm -O0 matrix_ops.py
time ./build/matrix_ops
# real 0m0.086s
# Maximum optimization
multigen build -t llvm -O3 matrix_ops.py
time ./build/matrix_ops
# real 0m0.054s (36.5% faster!)
# Compare to Python
time python matrix_ops.py
# real 0m2.450s (45x slower)
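A quick way to confirm the compiled binary still matches the prototype is to diff their output; a minimal sketch (paths assume the build layout shown above):

# verify_output.py (illustrative output comparison)
import subprocess
import sys

python_out = subprocess.run(
    [sys.executable, "matrix_ops.py"], capture_output=True, text=True
).stdout
compiled_out = subprocess.run(
    ["./build/matrix_ops"], capture_output=True, text=True
).stdout
assert python_out == compiled_out, "checksum mismatch between Python and binary"
print("outputs match:", python_out.strip())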
Benefits:
- [x] 36.5% speedup with O3
- [x] 45x faster than Python
- [x] Industry-standard LLVM optimizations
- [x] Small binary (37KB)
Data Processing Pipelines¶
Use Case: Log Processing & Analytics¶
Problem: Process millions of log lines, Python too slow
Solution: C++ backend for performance + STL
# log_processor.py
def parse_log_line(line: str) -> dict:
    """Parse Apache-style log line."""
    parts: list = line.split()
    if len(parts) < 9:
        return {}
    result: dict = {}
    result["ip"] = parts[0]
    result["method"] = parts[5].strip('"')
    result["path"] = parts[6]
    result["status"] = int(parts[8])
    return result

def analyze_logs(filename: str) -> dict:
    """Analyze log file."""
    stats: dict = {}
    stats["total"] = 0
    stats["errors"] = 0
    stats["unique_ips"] = 0
    ips: set = set()
    with open(filename) as f:
        for line in f:
            parsed: dict = parse_log_line(line)
            if len(parsed) > 0:
                stats["total"] += 1
                if parsed["status"] >= 400:
                    stats["errors"] += 1
                ips.add(parsed["ip"])
    stats["unique_ips"] = len(ips)
    return stats

def main() -> int:
    stats: dict = analyze_logs("access.log")
    print(f"Total requests: {stats['total']}")
    print(f"Errors: {stats['errors']}")
    print(f"Unique IPs: {stats['unique_ips']}")
    return 0
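To exercise the pipeline without production data, a synthetic log generator is handy; a minimal sketch (the line format matches what parse_log_line expects, values are illustrative):

# make_test_log.py (illustrative synthetic log generator)
import random

LINE = '{ip} - - [18/Oct/2025:12:00:00 +0000] "GET {path} HTTP/1.1" {status} 1024'

with open("access.log", "w") as f:
    for i in range(100_000):
        f.write(LINE.format(
            ip=f"10.0.0.{random.randint(1, 254)}",
            path=f"/item/{i % 50}",
            status=random.choice([200, 200, 200, 404, 500]),
        ) + "\n")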
Build & Deploy:
# Build with C++
multigen build -t cpp log_processor.py
# Process 10M log lines
time ./build/log_processor
# real 0m2.145s (vs Python: 45s = 21x faster)
# Small binary for container deployment
ls -lh build/log_processor
# -rwxr-xr-x 36K log_processor
# Docker integration
FROM alpine:latest
COPY build/log_processor /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/log_processor"]
Benefits:
- [x] 21x faster than Python
- [x] Tiny binary (36KB) for containers
- [x] STL containers (set, map) for efficiency
- [x] Easy integration with existing C++ pipelines
WebAssembly & Browser Applications¶
Use Case: Client-Side Algorithm in Browser¶
Problem: Run Python algorithm in browser without server
Solution: LLVM backend → WebAssembly
# fibonacci.py
def fibonacci(n: int) -> int:
    """Compute nth Fibonacci number."""
    if n <= 1:
        return n
    a: int = 0
    b: int = 1
    for i in range(2, n + 1):
        temp: int = a + b
        a = b
        b = temp
    return b

def main() -> int:
    result: int = fibonacci(40)
    print(result)
    return 0
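A one-line host-side check of the value the browser should print (the expected result is taken from the JavaScript example below):

# check_fib.py (illustrative)
from fibonacci import fibonacci

assert fibonacci(40) == 102334155
print("fibonacci(40) =", fibonacci(40))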
Build for WebAssembly:
# Compile to LLVM IR
multigen convert -t llvm fibonacci.py
# Compile IR to WASM
llc -march=wasm32 -filetype=obj build/fibonacci.ll -o build/fibonacci.wasm
# Result: 616 bytes (80% smaller than native)
ls -lh build/fibonacci.wasm
# -rw-r--r-- 616 fibonacci.wasm
JavaScript Integration:
// index.html
<script type="module">
import { loadWASM } from './wasm-loader.js';
const wasmInstance = await loadWASM('fibonacci.wasm');
// Call Fibonacci from JavaScript
const result = wasmInstance.exports.fibonacci(40);
console.log(`Fibonacci(40) = ${result}`);
// Output: Fibonacci(40) = 102334155
</script>
Benefits:
- [x] Python algorithm runs in browser
- [x] No server required
- [x] 80% smaller than native binary
- [x] Near-native performance in browser
Cross-Platform Libraries¶
Use Case: Algorithm Library for Multiple Platforms¶
Problem: Maintain algorithm in one place, deploy to iOS/Android/Desktop
Solution: LLVM backend for multi-target compilation
# crypto_hash.py
def simple_hash(data: str) -> int:
    """Simple hash function for demonstration."""
    hash_val: int = 0
    for i in range(len(data)):
        char_code: int = ord(data[i])
        hash_val = ((hash_val << 5) - hash_val) + char_code
        hash_val = hash_val & 0xFFFFFFFF  # 32-bit
    return hash_val

def main() -> int:
    result: int = simple_hash("Hello, World!")
    print(result)
    return 0
Cross-Compile for Multiple Platforms:
# x86-64 (Desktop)
multigen build -t llvm crypto_hash.py
# Output: build/crypto_hash (x86-64)
# ARM (iOS/Android)
llc -march=arm64 -filetype=obj build/crypto_hash.ll -o build/crypto_hash_arm64.o
# Output: build/crypto_hash_arm64.o
# RISC-V (Embedded)
llc -march=riscv64 -filetype=obj build/crypto_hash.ll -o build/crypto_hash_riscv.o
# Output: build/crypto_hash_riscv.o
# WebAssembly (Browser)
llc -march=wasm32 -filetype=obj build/crypto_hash.ll -o build/crypto_hash.wasm
# Output: build/crypto_hash.wasm
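The per-target llc invocations above can also be scripted; a minimal sketch that drives the same commands (the target list and output names are illustrative):

# build_all_targets.py (illustrative llc driver)
import subprocess

TARGETS = {
    "arm64": "build/crypto_hash_arm64.o",
    "riscv64": "build/crypto_hash_riscv.o",
    "wasm32": "build/crypto_hash.wasm",
}

for march, out in TARGETS.items():
    subprocess.run(
        ["llc", f"-march={march}", "-filetype=obj", "build/crypto_hash.ll", "-o", out],
        check=True,
    )
    print("built", out)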
Benefits:
- [x] Write once, compile for any platform
- [x] Single source of truth
- [x] LLVM handles target-specific optimizations
- [x] No manual porting required
Research & Academia¶
Use Case: Algorithm Study in Functional Programming¶
Problem: Teach functional programming concepts using Python syntax
Solution: Haskell backend for pure functional output
# quicksort_functional.py
def quicksort(arr: list) -> list:
    """Functional quicksort implementation."""
    if len(arr) <= 1:
        return arr
    pivot: int = arr[0]
    rest: list = arr[1:]
    less: list = []
    greater: list = []
    for x in rest:
        if x < pivot:
            less.append(x)
        else:
            greater.append(x)
    sorted_less: list = quicksort(less)
    sorted_greater: list = quicksort(greater)
    result: list = []
    for x in sorted_less:
        result.append(x)
    result.append(pivot)
    for x in sorted_greater:
        result.append(x)
    return result

def main() -> int:
    data: list = [64, 34, 25, 12, 22, 11, 90]
    sorted_data: list = quicksort(data)
    for x in sorted_data:
        print(x)
    return 0
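For classroom use, the Python version is easy to validate against the built-in sort before generating Haskell; a minimal sketch:

# check_quicksort.py (illustrative property check)
import random

from quicksort_functional import quicksort

for _ in range(100):
    data = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
    assert quicksort(data) == sorted(data)
print("quicksort agrees with sorted() on 100 random lists")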
Build & Study:
# Generate Haskell code
multigen build -t haskell quicksort_functional.py
# Examine functional output
cat build/src/quicksort_functional.hs
Generated Haskell (excerpt):
quicksort :: [Int] -> [Int]
quicksort arr =
  if length arr <= 1
    then arr
    else
      let pivot = arr !! 0
          rest = tail arr
          less = filter (< pivot) rest
          greater = filter (>= pivot) rest
          sortedLess = quicksort less
          sortedGreater = quicksort greater
       in sortedLess ++ [pivot] ++ sortedGreater
Benefits:
- [x] Learn functional programming patterns
- [x] Python syntax → Haskell semantics
- [x] Type-safe functional code
- [x] Academic research tool
Integration Patterns¶
Pattern 1: Gradual Migration¶
Start: Pure Python codebase
Step 1: Identify performance bottlenecks with a profiler (see the sketch after the example below)
Step 2: Extract the bottleneck into a separate .py file
Step 3: Compile the bottleneck with MultiGen, keeping the interface identical
Step 4: Replace the Python module with the compiled version
Step 5: Measure the improvement and iterate
Example:
# Before: Pure Python
from algorithms import expensive_computation
result = expensive_computation(data)
# After: Compiled module (same interface)
from build.algorithms import expensive_computation # MultiGen-compiled
result = expensive_computation(data) # 10x faster, same code
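For Step 1, Python's built-in profiler is usually enough to find the bottleneck; a minimal cProfile sketch (module and function names are illustrative):

# profile_hotspots.py (illustrative cProfile usage)
import cProfile
import pstats

from algorithms import expensive_computation  # candidate for compilation

def run() -> None:
    expensive_computation(list(range(10_000)))

if __name__ == "__main__":
    cProfile.run("run()", "profile.out")
    # Show the ten most expensive calls by cumulative time
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)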
Pattern 2: Library Wrapping¶
Wrap MultiGen output in language-specific library
C++ Example:
// my_library.hpp
#pragma once

#include <vector>

#include "algorithm.h" // MultiGen-generated

namespace MyLibrary {

class Algorithm {
public:
    int compute(const std::vector<int>& data) {
        return multigen_compute(data);
    }
};

} // namespace MyLibrary
Pattern 3: Microservice Deployment¶
Deploy compiled code as standalone service
# Build service
multigen build -t rust api_service.py
# Dockerize
docker build -t my-service .
docker run -p 8080:8080 my-service
# Deploy to Kubernetes
kubectl apply -f deployment.yaml
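After deployment, a stdlib-only smoke test can confirm the service answers; the endpoint and payload below are illustrative and assume the /batch route from the Rust integration example:

# smoke_test.py (illustrative post-deploy check)
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:8080/batch",
    data=json.dumps([1, 2, 3]).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.status, json.load(resp))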
Common Workflows¶
Development Workflow¶
# 1. Prototype in Python
python algorithm.py
# 2. Compile and test
multigen build -t cpp algorithm.py
./build/algorithm
# 3. Benchmark
time ./build/algorithm
# 4. Optimize
multigen build -t llvm -O3 algorithm.py
time ./build/algorithm
# 5. Deploy
./deploy.sh build/algorithm
CI/CD Integration¶
# .github/workflows/build.yml
name: Build with MultiGen
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install MultiGen
        run: pip install multigen
      - name: Build algorithms
        run: |
          multigen build -t rust src/algorithms/*.py
      - name: Test
        run: |
          for binary in build/*; do
            ./$binary || exit 1
          done
      - name: Upload artifacts
        uses: actions/upload-artifact@v2
        with:
          name: compiled-binaries
          path: build/
Summary¶
MultiGen enables:
- [x] Rapid prototyping in Python
- [x] Production deployment in compiled languages
- [x] Performance optimization without rewrites
- [x] Cross-platform development
- [x] Educational tools for language learning
Choose backend based on:
- Target platform (embedded → C, web → Rust/Go, research → Haskell)
- Performance needs (critical → LLVM O3, normal → C++)
- Integration requirements (existing codebase → matching backend)
- Deployment constraints (size → C++, portability → LLVM)
Get Started: See docs/BACKEND_SELECTION_GUIDE.md for choosing the right backend for your use case.