Exploring Rust for AWS Lambda: A Proof of Concept

...

On November 14, 2025, AWS Lambda promoted Rust support from Experimental to Generally Available, backed by AWS Support and the Lambda availability SLA. This announcement prompted our team to build a REST API proof of concept using Rust on AWS Lambda with ARM64 architecture (Graviton2 processors), which offers 20% lower cost and up to 19% better performance compared to x86.

Why Rust for Serverless?

Most serverless applications run on Node.js or Python, but we wanted to explore Rust for several reasons. Rust compiles to native machine code, producing standalone binaries without runtime interpreters, which can mean faster cold starts and lower memory usage. The compiler catches bugs before deployment – no null pointer exceptions or type mismatches slip through. Plus, the Rust ecosystem has matured significantly, with tools specifically designed for Lambda deployment.

The Proof of Concept

We built a REST API with three endpoints demonstrating common serverless patterns:

  • Greeting Service: A GET endpoint handling personalized greetings with query parameters
  • Health Check: A GET endpoint returning status, timestamp, and requester IP for monitoring
  • SQS Integration: A POST endpoint accepting messages and queuing them for asynchronous processing

The stack includes Rust (2024 edition), the AWS SDK for Rust, Tokio as the async runtime, and AWS CDK with TypeScript for infrastructure deployment. Everything is compiled for ARM64 and deployed using Cargo Lambda.
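
To give a flavor of the handler code, here is a minimal sketch of the greeting endpoint built on the lambda_http and tokio crates. It is illustrative rather than the exact code from our repository; the function name, default greeting, and response format are placeholders.

    use lambda_http::{run, service_fn, Body, Error, Request, RequestExt, Response};

    // Illustrative greeting handler: read an optional `name` query parameter
    // and return a plain-text greeting, falling back to "world" if it is absent.
    async fn greet(event: Request) -> Result<Response<Body>, Error> {
        let name = event
            .query_string_parameters_ref()
            .and_then(|params| params.first("name"))
            .unwrap_or("world")
            .to_string();

        Ok(Response::builder()
            .status(200)
            .header("content-type", "text/plain")
            .body(Body::from(format!("Hello, {name}!")))?)
    }

    #[tokio::main]
    async fn main() -> Result<(), Error> {
        // Cargo Lambda packages this binary; the Lambda runtime invokes it directly.
        run(service_fn(greet)).await
    }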

Development Experience: The Good and The Challenging

Cargo Lambda proved to be a game-changer. It handles cross-compilation to ARM64, packaging, and deployment automatically. You can develop on any architecture and it produces Lambda-ready ARM64 binaries seamlessly. The Rust compiler provides excellent feedback, often suggesting fixes for errors, and the type system ensures that certain classes of errors simply cannot occur at runtime.

However, let’s be honest – Rust is more complex than JavaScript or Python. Concepts like ownership, borrowing, and lifetimes take time to understand. For this proof of concept, we spent more time wrestling with the compiler than we would have with Node.js. But once the code compiles, it tends to just work. The bugs we encountered were logic errors, not runtime crashes or type mismatches.

Testing and Production Patterns

The project includes comprehensive testing at two levels. Application tests validate endpoint behavior, input validation, edge cases, and routing logic. Infrastructure tests ensure AWS resources are configured correctly – verifying SQS queue settings, Lambda environment variables, IAM permissions, and API Gateway configuration. These infrastructure tests caught several issues that would have been painful to debug in production.
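
To illustrate the application-level tests, here is a small unit test written against the greeting handler sketched earlier; the handler name and the expected message are placeholders carried over from that sketch, not our actual assertions.

    #[cfg(test)]
    mod tests {
        use super::*;
        use lambda_http::{Body, Request};

        #[tokio::test]
        async fn greets_world_when_no_name_is_given() {
            // With no query parameters the handler should fall back to the default greeting.
            let request = Request::new(Body::Empty);
            let response = greet(request).await.expect("handler should succeed");

            assert_eq!(response.status(), 200);
            match response.body() {
                Body::Text(text) => assert_eq!(text, "Hello, world!"),
                other => panic!("expected a text body, got {other:?}"),
            }
        }
    }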

Even as a proof of concept, we built it with production patterns in mind: proper error handling with meaningful messages, structured logging for observability, least-privilege IAM permissions, and asynchronous I/O throughout for efficient concurrent request handling.
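
As a sketch of those patterns, this is roughly how the SQS endpoint might send a message with the AWS SDK for Rust while emitting structured log events and returning a meaningful error response. The QUEUE_URL variable name, handler signature, and response bodies are assumptions for the example rather than the project's exact code.

    use aws_sdk_sqs::Client;
    use lambda_http::{Body, Error, Request, Response};

    // Illustrative POST handler: forward the request body to an SQS queue.
    // Assumes the queue URL is injected as an environment variable by the CDK stack,
    // that a tracing subscriber is initialized in main(), and that the Client is
    // created once in main() and reused across invocations.
    async fn enqueue(client: &Client, event: Request) -> Result<Response<Body>, Error> {
        let queue_url = std::env::var("QUEUE_URL")?;
        let payload = match event.body() {
            Body::Text(text) => text.clone(),
            Body::Binary(bytes) => String::from_utf8(bytes.clone())?,
            Body::Empty => String::new(),
        };

        match client
            .send_message()
            .queue_url(queue_url)
            .message_body(payload)
            .send()
            .await
        {
            Ok(output) => {
                tracing::info!(message_id = ?output.message_id(), "message queued");
                Ok(Response::builder().status(202).body(Body::from("queued"))?)
            }
            Err(err) => {
                tracing::error!(error = %err, "failed to queue message");
                Ok(Response::builder()
                    .status(500)
                    .body(Body::from("failed to queue message"))?)
            }
        }
    }

Constructing the SQS client once outside the per-request handler and borrowing it inside, as above, avoids re-establishing connections on every warm invocation.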

Key Learnings

Tooling makes the difference: Without Cargo Lambda and AWS CDK, the deployment complexity would have been significantly higher. Infrastructure as Code with TypeScript provided type safety and IDE support for cloud resources, while testing infrastructure configuration before deployment caught many issues early.

The compiler is strict but helpful: The strictness was frustrating at first, but we came to appreciate that the Rust compiler catches bugs before deployment rather than letting them surface in production – the difference between a compile-time error and a 3am production page.

ARM64 matters: Using Graviton2 processors provides better price-performance – the cold starts feel snappier, and the memory footprint is smaller. Whether that matters depends on your traffic patterns and cost sensitivity.

When Does Rust Make Sense?

Based on this experiment, Rust is a good fit for performance-critical APIs where cold start time matters, high-traffic applications where memory efficiency impacts cost, applications requiring type safety and compile-time guarantees, and long-running functions that need to maximize work within timeout limits.

It might be overkill for quick prototypes, simple glue code between AWS services, teams without Rust experience, or infrequent functions where cold start doesn’t matter.

Trade-offs to Consider

Writing equivalent functionality in Node.js would have been faster – the Rust learning curve is real. Cargo Lambda handles most deployment complexity, but there are still more moving parts than a Node.js Lambda deployment. The code that compiles tends to be reliable, but finding developers comfortable with Rust is harder than finding JavaScript developers.

The Bottom Line

This proof of concept achieved its goal – we now understand what building serverless applications with Rust feels like. The tooling is mature enough to make it practical, and the deployment story is surprisingly smooth thanks to Cargo Lambda and AWS CDK.

Would we use Rust for all our Lambda functions? Probably not. The development overhead is real. But for specific use cases – performance-critical APIs, high-traffic services, or situations where type safety is paramount – it’s a viable option worth considering.

The most valuable lesson? Modern infrastructure-as-code practices and comprehensive testing make experimental projects like this manageable. If you’re curious about Rust for serverless, the barrier to entry is lower than you might think. The learning curve exists, but the tooling makes it approachable, and the community is helpful. Rust on Lambda is more than academic – it’s practical.
