
Why I Chose Rust Over C and C++ for Drengr


When I tell people I built a mobile automation tool in Rust, the first question is always "why not Python?" I've answered that elsewhere. But the question that actually kept me up at night during the early architecture phase was different: why not C or C++?

Drengr is a CLI tool that talks to Android devices over ADB, iOS simulators over simctl, and cloud devices over Appium WebDriver. It parses UI trees, captures screenshots, manages concurrent device sessions, and serves as an MCP server over stdio. This is systems programming territory. C and C++ have owned this space for decades. So why Rust?

This isn't a "Rust vs C++" holy war post. I've written production code in all three languages. This is an honest account of a specific decision for a specific project, with the trade-offs I actually faced.

The Case for C

C was tempting. ADB itself is written in C++. The Android debug bridge protocol is well-documented at the C level. I could have called into ADB's libraries directly, skipping the subprocess overhead entirely. A C binary would be tiny — potentially under 1MB with static linking and aggressive stripping.

I seriously considered it. For about two days.

The problem crystallized when I started sketching the MCP server. MCP is JSON-RPC 2.0 over stdio. That means parsing JSON, routing method calls, managing request/response correlation, handling concurrent tool invocations. In C, I'd need a JSON parser (jansson? cJSON? write my own?), string handling that doesn't segfault, and manual memory management for every request/response lifecycle.

I've done this before. I know what it looks like. It looks like 60% of your code being memory management boilerplate, and the remaining 40% being the actual logic you care about. For a research project where I need to iterate fast and try experimental approaches to screen parsing and AI agent loops, that ratio is fatal.
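For contrast, the method-routing core of that server in Rust is a small handler table with no manual lifecycle management; every `String` is freed when it goes out of scope. This is a toy sketch with hypothetical handler names, not Drengr's actual MCP types:

```rust
use std::collections::HashMap;

// Toy JSON-RPC-style dispatch: method name -> handler function.
// Handler names here are illustrative, not Drengr's real tool set.
type Handler = fn(&str) -> String;

fn handle_tap(params: &str) -> String {
    format!("{{\"result\":\"tapped with {}\"}}", params)
}

fn handle_screenshot(_params: &str) -> String {
    "{\"result\":\"screenshot captured\"}".to_string()
}

fn routes() -> HashMap<&'static str, Handler> {
    let mut r: HashMap<&'static str, Handler> = HashMap::new();
    r.insert("device.tap", handle_tap);
    r.insert("device.screenshot", handle_screenshot);
    r
}

fn dispatch(routes: &HashMap<&str, Handler>, method: &str, params: &str) -> String {
    match routes.get(method) {
        Some(handler) => handler(params),
        // Unknown method: JSON-RPC reserves error code -32601 for this.
        None => "{\"error\":{\"code\":-32601,\"message\":\"method not found\"}}".to_string(),
    }
}

fn main() {
    // Response Strings are dropped automatically at end of scope --
    // no free() per request/response lifecycle.
    println!("{}", dispatch(&routes(), "device.tap", "{\"x\":100,\"y\":200}"));
}
```

The equivalent C version needs explicit allocation and cleanup on every path through `dispatch`, including the error path.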

The Case for C++

C++ was a stronger contender. Modern C++ (17/20) has smart pointers, string_view, std::optional, std::variant — many of the ergonomic features that make Rust pleasant to write. The ADB ecosystem is native C++. I could use nlohmann/json for parsing. The standard library has threads, mutexes, condition variables.

Three things killed it for me:

1. The Build System Problem

I wanted a single static binary that anyone could curl and run. No shared library dependencies, no runtime requirements, no "install libfoo-dev first." In Rust, this is cargo build --release --target x86_64-unknown-linux-musl. Done.

In C++, static linking is an odyssey. CMake or Meson? Which standard library — libstdc++ or libc++? Statically linking glibc is technically possible but discouraged, and it produces larger binaries with potential compatibility issues. Musl works, but you need a separate toolchain. Cross-compilation for Apple Silicon from Linux? I'd need a cross-compiler toolchain per target triple.

Cargo handles all of this. I add a target, run the build, get a binary. The CI matrix in my GitHub Actions workflow is 20 lines. The equivalent CMake + cross-compilation setup would be 200+.

2. Concurrency Without Fear

Drengr manages multiple concurrent operations: the MCP server handles requests while the SDK server listens for in-app network events, the OODA loop runs autonomous agent sessions, and the explore mode does BFS traversal with concurrent screen captures. These all share state — the current device transport, the screen annotation cache, the situation engine.

In C++, shared mutable state across threads means choosing between:

  • Raw mutexes with manual lock/unlock discipline (and hoping you never forget)
  • Atomic operations for primitives (and hoping your lock-free algorithm is actually correct)
  • Higher-level abstractions like folly::Synchronized (and adding Facebook's folly as a dependency)

Data races in C++ are undefined behavior. Not "your program crashes." Undefined behavior. The compiler is allowed to do literally anything. Time travel. Nasal demons. In practice, it means subtle corruption that shows up three hours into a test session as a garbled screenshot or a silently wrong element count.

In Rust, the type system prevents data races at compile time. If I try to share a mutable reference across threads without proper synchronization, it doesn't compile. Period. The compiler forces me to use Arc<Mutex<T>> or channels or atomics explicitly. I can't accidentally share a raw pointer to a screen buffer across two async tasks.
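Here's a minimal, std-only sketch of the pattern the compiler enforces. The cache name is illustrative, not Drengr's actual type; the point is that removing either `Arc` or `Mutex` and sharing the map mutably across threads would be a compile error, not a runtime race:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `workers` threads that all write to one shared cache.
// Arc provides shared ownership across threads; Mutex serializes
// mutation. The compiler rejects any attempt to bypass both.
fn fill_cache(workers: u32) -> usize {
    let cache: Arc<Mutex<HashMap<String, u32>>> = Arc::new(Mutex::new(HashMap::new()));

    let handles: Vec<_> = (0..workers)
        .map(|i| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                let mut guard = cache.lock().unwrap();
                guard.insert(format!("screen-{i}"), i);
                // Lock is released when `guard` goes out of scope.
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    let len = cache.lock().unwrap().len();
    len
}

fn main() {
    println!("cache entries: {}", fill_cache(4));
}
```

The C++ equivalent compiles fine with the mutex deleted; this one doesn't. That asymmetry is the whole argument.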

For a tool that manages real device sessions — where a bug could mean sending the wrong tap to the wrong device — this isn't a nice-to-have. It's a requirement.

3. The Dependency Story

Drengr depends on reqwest (HTTP client), tokio (async runtime), serde (serialization), image (screenshot processing), and about 30 other crates. Adding a dependency in Rust is one line in Cargo.toml. Cargo downloads, compiles, and statically links it. Version resolution is automatic. Security advisories are tracked by cargo audit.

In C++, every dependency is a project. Do they use CMake? Meson? Autotools? Their own bespoke build system? Do they support static linking? Are their transitive dependencies compatible with mine? The Conan and vcpkg package managers have improved this, but they're still far from Cargo's "it just works" experience.

I estimated that managing C++ dependencies alone would cost me 2-3 weeks of the early development timeline. In a solo project where every week counts, that's not acceptable.

What I Miss From C/C++

Honesty requires admitting what Rust costs me.

Compile Times

A clean build of Drengr takes about 90 seconds. An incremental build after touching one file takes 8-12 seconds. The equivalent C project would compile in under 5 seconds clean, under 1 second incremental. When I'm iterating on screen parsing logic and want to test against a real device, those seconds add up.

I've mitigated this with cargo watch and by structuring the crate to minimize recompilation, but it's a real cost.

The Learning Curve

I knew C and C++ before I knew Rust. The borrow checker's mental model — ownership, borrowing, lifetimes — took weeks to internalize. There were days early in the project where I spent more time fighting the compiler than writing features. Async Rust made it worse: pinning, Send/Sync bounds, the colored function problem.

If I'd written Drengr in C++, the first prototype would have been done a week earlier. No question. But I believe the Rust version has fewer bugs, and I spend almost zero time debugging memory issues. That trade-off has compounded in my favor over the months since.

FFI Friction

ADB is a C++ tool. Some interactions would be more natural in C++ — direct FFI into ADB's libraries, for example. Instead, I shell out to the adb binary as a subprocess. It works, but it adds latency (spawning a process per command) and complexity (parsing stdout). A C++ implementation could potentially link against libadb directly.

In practice, the subprocess approach has been fine. ADB commands complete in 10-50ms typically, and the parsing is straightforward. But it's an architectural compromise I wouldn't need in C++.
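To make the "parsing stdout" step concrete, here's roughly what consuming `adb devices` output looks like. In Drengr the text comes from `std::process::Command::new("adb")`; it's hardcoded here so the sketch runs without a device attached, and the parsing details are simplified:

```rust
// Parse the stdout of `adb devices` into a list of serials for
// devices that are fully booted (state "device"), skipping
// "offline" and "unauthorized" entries. In the real tool this
// string comes from a std::process::Command invocation.
fn parse_devices(stdout: &str) -> Vec<String> {
    stdout
        .lines()
        .skip(1) // skip the "List of devices attached" header line
        .filter_map(|line| {
            let mut cols = line.split_whitespace();
            match (cols.next(), cols.next()) {
                (Some(serial), Some("device")) => Some(serial.to_string()),
                _ => None, // blank lines, offline/unauthorized devices
            }
        })
        .collect()
}

fn main() {
    let sample = "List of devices attached\n\
                  emulator-5554\tdevice\n\
                  R58M12ABCDE\tunauthorized\n";
    println!("{:?}", parse_devices(sample));
}
```

A direct `libadb` link would skip the subprocess spawn and this parse entirely, which is exactly the latency trade-off described above.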

The Numbers

After six months of development:

  • ~6,300 lines of Rust — this includes the MCP server, three device transports (ADB, simctl, Appium), the OODA loop, the explore mode, the test runner, the SDK server, screen annotation, and the situation engine
  • Zero memory-related bugs in production. Not one use-after-free, double-free, buffer overflow, or data race
  • 189 tests, all passing. The test suite runs in under 3 seconds
  • Binary size: ~15MB stripped, with fat LTO enabled. A C equivalent might be 3-5MB, but 15MB for a tool that includes an HTTP client, JSON parser, image processing, and async runtime is reasonable
  • Cold start: ~15ms to first MCP response. This matters when AI agents are waiting

What I'd Do Differently

If I started over tomorrow, I'd still choose Rust. But I'd do a few things differently:

  • Start with synchronous code, add async later. I went async-first with tokio, which complicated the early prototyping phase. Many of the ADB interactions don't benefit from async — they're sequential command-response pairs. I could have started synchronous and migrated the concurrent parts later.
  • Use fewer abstractions early. I over-engineered the transport trait in the first version. Three concrete implementations of a simple interface would have been clearer than a trait with twelve methods and two associated types.
  • Accept more unsafe. I avoided unsafe entirely for the first four months. Some of the ADB binary protocol parsing would have been cleaner with unsafe pointer arithmetic in a well-tested, isolated module. Rust's unsafe isn't C — it's a clearly bounded region where you tell the compiler "I've verified this manually." I was too cautious.
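The leaner transport trait I'd write today might look like the sketch below. The method names and the stub backend are illustrative, not Drengr's real API; the design point is a small shared surface with one plain struct per protocol:

```rust
use std::io;

// A deliberately small transport surface: only what every backend
// must support. Method names here are hypothetical.
trait Transport {
    fn name(&self) -> &str;
    fn tap(&mut self, x: u32, y: u32) -> io::Result<()>;
    fn screenshot(&mut self) -> io::Result<Vec<u8>>;
}

// One concrete type per protocol, no associated types. This stub
// stands in for the ADB-backed implementation.
struct AdbTransport;

impl Transport for AdbTransport {
    fn name(&self) -> &str {
        "adb"
    }
    fn tap(&mut self, _x: u32, _y: u32) -> io::Result<()> {
        // Real implementation would shell out to:
        //   adb shell input tap <x> <y>
        Ok(())
    }
    fn screenshot(&mut self) -> io::Result<Vec<u8>> {
        // Real implementation would shell out to:
        //   adb exec-out screencap -p
        Ok(Vec::new())
    }
}

fn main() {
    let mut t = AdbTransport;
    t.tap(100, 200).unwrap();
    println!("transport: {}", t.name());
}
```

Three of these concrete types behind `Box<dyn Transport>` would have covered everything the twelve-method version did.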

The Real Answer

The real reason I chose Rust over C and C++ isn't any single technical argument. It's this: Rust lets me write systems-level code at the speed I think, with the confidence that the compiler has caught the classes of bugs that would otherwise cost me debugging days.

For a solo developer building a research project that interacts with real hardware, manages concurrent sessions, and serves as infrastructure for AI agents — that confidence isn't a luxury. It's the difference between shipping and not shipping.

I don't have a team to review my pointer arithmetic. I don't have a QA department to catch my data races. I have the Rust compiler. And it's the most reliable colleague I've ever worked with.

C and C++ are extraordinary languages. They power the systems Drengr sits on top of — the operating systems, the ADB daemon, the simctl infrastructure. I have deep respect for them. But for this project, at this scale, as a solo developer? Rust was the right call.

The binary works. The code is correct. And I sleep well at night knowing the compiler has my back.