MVP on a Shoestring: Our Serverless Experiment

Prashant Srinivasan, Director of Engineering

19 June 2024

Introduction
If you have a product idea, you want to keep costs down until you validate demand and find customers, right? At Codewalla we work with a number of early-stage startups, and keeping costs down is one of our top priorities. One of our clients had exactly that requirement: build an MVP with minimal fixed infrastructure cost. This led us to revisit serverless architecture. No servers to manage, automatic scaling, and paying only for what you use made serverless a compelling choice for this project.
The Experiment

After a brainstorming session, my team and I decided to use the AWS serverless ecosystem due to our familiarity with it.

  • AWS Lambda to run our Node.js code as serverless functions.
  • MongoDB Atlas for our serverless storage needs.
  • AWS API Gateway to trigger the Lambda functions.
  • AWS SQS (Simple Queue Service) for asynchronous tasks.
  • AWS CloudFront to serve the VueJS UI.

All set and ready to go. However, a few weeks into development, we encountered several significant challenges that made us question the viability of a fully serverless architecture:

Scheduling Tasks: In a traditional server-based setup, scheduling tasks with cron jobs is straightforward. In a serverless environment, scheduling was initially a headache. AWS CloudWatch Events (now Amazon EventBridge) offered some help, but setting up schedules there is far less direct than a crontab entry.

Complex Business Domains:  A single Lambda function is a poor fit for an entire business domain as one deployable unit. Real-world business logic is complex, and our Lambda functions started getting bloated.

Bloated functions → slow startup → increased latency

Long-Running Processes:  Traditional servers manage long-running processes efficiently. In contrast, long-running APIs in a serverless environment suffered from severe latency issues and cold start delays, making it difficult to develop performant APIs.
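
One mitigation we leaned on is the queue we already had: instead of running long work inside the API call, enqueue it on SQS and return immediately. A sketch of that pattern; the job type and queue URL are made up, and the SQS client is injected here for testability (with the real AWS SDK the call shape differs slightly):

```javascript
// Sketch: offload long-running work to SQS and answer 202 right away,
// so the API-facing Lambda stays fast. Job name and queue are hypothetical;
// `sqs` is injected so the pattern can be exercised without AWS.
const makeEnqueueHandler = (sqs, queueUrl) => async (event) => {
  const job = { kind: 'report-export', payload: JSON.parse(event.body || '{}') };
  await sqs.sendMessage({ QueueUrl: queueUrl, MessageBody: JSON.stringify(job) });
  return { statusCode: 202, body: JSON.stringify({ queued: true }) };
};
```

A second Lambda, subscribed to the queue, does the slow work outside the request path.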

Database Connections:  Optimizing database connections and implementing effective caching strategies is more challenging in a serverless environment. The lack of connection pooling affected performance.

Typical, right? Another technology that promises a revolution only to disappoint in the end. But does it? I want to take a step back and re-examine some of these challenges.

Function-Based Decomposition: Treating every business domain as a service gives you a microservice architecture, but not necessarily a function-based serverless architecture. I have often seen development teams squeeze code into an existing microservice because it felt too small to be its own service, and I have seen this become a maintenance headache over time. Now my team breaks code down into even smaller units, sometimes called nanoservices. Being smaller than microservices, they keep the codebase more manageable.

Cold Starts: Cold starts are still an issue, with function startup times of a few hundred milliseconds; every API call is that many milliseconds slower. However, by keeping functions lightweight we managed to reduce the latency. On a traditional server it is common practice to initialize far more at startup than any one request needs, say a thousand unnecessary environment variables. Nobody cares, because it delays only server startup, not API execution time. We cannot afford that habit when we want to be serverless.

Database Connection Management: DB connection pooling does not exist in a serverless world: you would create a connection pool only to destroy it when the function shuts down. This means connection switching and pooling optimizations are not available to me out of the box. It also means that managing orphan connections and leaks is not my problem. That's a positive, right? I have been part of painful debugging sessions when a production server runs out of connections due to leaks.
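
There is a common partial mitigation worth noting: cache the client in module scope so warm invocations of the same container reuse one connection instead of reconnecting. A sketch, with the connect function injected for testability (against MongoDB Atlas it would be something like `MongoClient.connect(uri)`):

```javascript
// Sketch: module-scope connection caching. Cold invocations connect once;
// warm invocations in the same container reuse the cached promise.
// `connect` is injected so the pattern can be exercised without a database.
const makeGetClient = (connect) => {
  let clientPromise = null;
  return () => {
    if (!clientPromise) clientPromise = connect(); // only the cold call connects
    return clientPromise; // warm calls share the same connection
  };
};
```

This is not a pool, and it does nothing across containers, but it spares each warm request a reconnect.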

Query Optimization: DB query optimization is almost always an afterthought for development teams. Most teams do not care about DB query performance on day one: during development your application doesn't have much data, so every query appears to work well. Once your software has been live for a considerable period, this changes. I have seen teams redesign features and user experience when they realize the product cannot handle its data demands, and at that stage the change becomes super expensive. With limited computing power and no connection pooling, a developer is forced to think about query optimization from day zero. Some might call it premature optimization; I call it adherence to coding best practices.
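
What "day-zero" query discipline looks like in practice: filter on indexed fields, project only the columns you need, and bound the result set. A sketch against a hypothetical `orders` collection assumed to have a compound index on `{ userId: 1, createdAt: -1 }`:

```javascript
// Sketch: a day-zero-optimized MongoDB query shape. The `orders` collection
// and its compound index on { userId: 1, createdAt: -1 } are assumptions.
const recentOrdersQuery = (userId, since) => ({
  filter: { userId, createdAt: { $gte: since } }, // matches the compound index
  projection: { _id: 0, orderId: 1, total: 1, createdAt: 1 }, // only needed fields
  sort: { createdAt: -1 }, // served by the index, no in-memory sort
  limit: 20, // bound the result set per call
});
```

Keeping each query small matters twice over in serverless: less compute per invocation, and less time holding a connection you cannot pool.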

Let me revisit my statement now. Is serverless architecture just snake oil? Not really. It doesn't sound that bad to me now, especially since we were able to cut our fixed infrastructure costs by a factor of six. This approach is ideal for products that are still finding their market fit, where minimizing fixed costs is crucial.

Conclusion

Reflecting on our journey, serverless computing has been both challenging and rewarding. It pushed us to optimize and adhere to best practices constantly. Despite the significant challenges, the benefits of serverless architecture—such as cost efficiency and scalability—make it a compelling choice for many applications. Our experience underscores the importance of disciplined development practices when embracing serverless architecture.
