Understanding serverless can be difficult. It’s an abstract concept, and there’s a fair amount of prerequisite domain knowledge you need before you can truly grok it. But at its heart, serverless is simple.
Today we’ll look at how and in which scenarios serverless works best with some real world examples. One (or hopefully many) of the examples should resonate with you, and you should soon be able to imagine how your company can leverage serverless. It’s a powerful tool and great fun to use!
To give you a sneak peek, here are the four examples we’ll cover:
- An HTTP API
- Event-driven processing
- Scheduled ops work (e.g., cron)
- Data processing
But before we dive into the details, let’s quickly get on the same page about what serverless is.
Understanding Serverless
Serverless, in technical terms, is taking cloud computing to the extreme. The server part is managed by the cloud provider, leaving the customer to focus solely on application building rather than on provisioning, operating, and managing servers.
Not only does serverless make operations simpler, it means the customer pays only for what they use. If the service runs, the customer pays for the time it runs. When it’s not running, the customer pays nothing. Simple.
As a concept, it applies to many aspects of computing: compute, databases, file storage, load balancing, etc. For instance, the AWS S3 service is known as one of the first serverless services, yet S3 is not what people typically think about when they talk about serverless. They usually mean serverless compute. Or in simple terms, computers that do things.
Serverless: It Consists Of Two Parts
Before we get into the examples, we need to introduce one thing.
Frustratingly, conversations about serverless are made difficult by a failure to distinguish two aspects.
1. On-demand compute. The first aspect of serverless is the easiest to understand. It’s the fact that you can simply upload some code (for instance, as a zip file), and your cloud provider will execute the code as many or as few times as needed. On-demand compute is the essence of serverless. It’s code on demand (there’s a small sketch of this just after the list).
2. Cloud integrations. The second part of serverless is how it integrates with the surrounding cloud provider. These integrations are where things get complicated. For instance, logs are shipped to the cloud provider’s log services, events arrive from the cloud provider’s event services, and HTTP requests come through the cloud provider’s gateway services. That said, this tight integration with the cloud provider isn’t an inherently bad thing. It has its benefits.
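To make the first point concrete, here’s a minimal sketch of “code on demand” in TypeScript. The handler signature (an event in, a result out) mirrors the general style most providers use, but the exact shapes vary by provider, so treat the types here as illustrative assumptions.

```typescript
// A minimal "code on demand" function. You zip this file up, hand it to
// your cloud provider, and the provider runs it once per invocation.
// The event and result shapes are illustrative, not provider-specific.
interface GreetEvent {
  name?: string;
}

export async function handler(event: GreetEvent): Promise<{ message: string }> {
  // No server setup, no port binding: the provider invokes this function
  // directly, as many (or as few) times as demand requires.
  return { message: `Hello, ${event.name ?? "world"}!` };
}
```

That’s the whole deployable unit: no web server to configure, no process to keep alive.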
However, I imagine that right about now you’re wondering why we’re taking this digression. How does the distinction between compute and integrations matter when we’re discussing serverless examples? Let me explain.
Why Some Examples Work Better Than Others
The reason certain examples or use cases work so well with serverless can typically be traced back to one of the two aspects above.
For instance, take a use case that has an unexpected traffic pattern. Having on-demand compute will make operations simpler because you won’t need to predict usage patterns. Or if you have a use case that would benefit from integrations with the cloud provider, serverless can speed up implementation.
But because cloud provider integrations play such a big role in determining whether an example will work well on serverless, we won’t talk about any specific cloud providers today. Bringing cloud provider specifics into the mix requires a more nuanced discussion. And to make matters more complicated, these integrations are always changing.
With this point addressed, let’s take a look at the scenarios where serverless works well. Generally speaking, there are two main areas.
Where Serverless Works
1. When a server isn’t wanted (or needed). Serverless works best when operating a service isn’t desirable, when the user wants to focus mainly on the logic of their application and not on the grunt work of managing servers.
2. When usage can’t be accurately predicted. Another scenario where serverless is particularly useful is when the user can’t accurately predict the future usage of a service and therefore doesn’t know how many “servers” they’d need in the first place. If demand is higher than expected, you simply pay a bit more for the extra usage, without impacting the end user. If there’s no demand, you pay nothing.
But serverless isn’t a silver bullet, and there are scenarios where serverless doesn’t make sense.
Where Serverless Doesn’t Work
1. When end-user latency matters. Contrary to popular belief, serverless can struggle to scale in some scenarios. Because of what is known as the “cold start,” serverless imposes a latency penalty on users as it scales. The penalty is a slower response time for the first user to hit newly spun-up serverless infrastructure.
2. When a server really is needed. There are a few exceptions where servers are truly needed. For instance, a dedicated server may be required under strict regulations, or particular software such as WordPress may simply require a server to run. But such requirements are generally the exception, not the norm.
And that should hopefully give you a good enough introduction to serverless that we can now turn our attention to some examples. Let’s start with the humble HTTP API.
Example One: Building an HTTP API
HTTP APIs are used primarily as abstraction layers on top of data. They ensure that data is protected and can be accessed easily and securely. You may have data in one (or many) data stores, and with the help of arguments passed to the API (e.g., query parameters), results are then filtered, grouped, and returned.
Let’s consider some examples.
- Creating your own HTTP payments API
- Serving static content, such as web pages
- Implementing an authentication server via HTTP methods
- Redirecting requests to a different domain or website
- Creating an HTTP service that resizes a passed image
- Exposing (or selling) data via HTTP API for third parties to leverage
Why does serverless work so well for HTTP APIs?
Serverless is compelling for this use case mainly because you can serve requests for the HTTP API while absorbing changes in demand. Serverless works whether you expect low traffic or high. If you expect low traffic, you’re not paying for a large server “just in case.” And if you expect high traffic, you don’t have to fiddle around trying to get your API to scale up and down at the right times to save money.
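To sketch what this looks like in practice, here’s a toy HTTP handler in TypeScript that filters a dataset by a query parameter. The event shape loosely follows what gateway services pass to a function; the field names are assumptions for illustration, not any one provider’s exact contract.

```typescript
// Sketch of a serverless HTTP API: filter an in-memory dataset by a
// query parameter. Real data would live in a database; the event shape
// is a simplified stand-in for a provider's gateway payload.
interface HttpEvent {
  queryStringParameters?: { [key: string]: string };
}

interface HttpResponse {
  statusCode: number;
  body: string;
}

const products = [
  { id: 1, name: "Keyboard", category: "hardware" },
  { id: 2, name: "Monitor", category: "hardware" },
  { id: 3, name: "IDE license", category: "software" },
];

export async function handler(event: HttpEvent): Promise<HttpResponse> {
  const category = event.queryStringParameters?.category;
  // Filter, then return JSON -- the "abstraction layer on top of data."
  const results = category
    ? products.filter((p) => p.category === category)
    : products;
  return { statusCode: 200, body: JSON.stringify(results) };
}
```

Whether this endpoint gets ten requests a day or ten thousand a minute, the infrastructure (and your bill) adjusts on its own.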
Example Two: Event-Driven Processing
A close (but more advanced) relative of the HTTP API example is using serverless to process events. Unlike HTTP, event processing is asynchronous. Events are put into a common component like a queue or bus and then processed at a rate that can be tolerated by the consumer.
Since events “decouple” technical processes, the initiator of a command doesn’t need to wait for a response. Events are useful in long-running scenarios where a response isn’t needed, such as making and fulfilling an order on a website.
Let’s consider some examples.
- Creating complicated e-commerce checkout processes
- Implementing complicated legal or governmental processes
- Creating a mail server to process many requests “eventually”
Why does serverless work so well for event processing?
Serverless can process events concurrently and at whatever scale the moment demands. Scaling your own infrastructure for asynchronous events is difficult precisely because you can’t predict when events will arrive.
As we covered in the introduction, many cloud providers have good integrations with their event services. For instance, AWS recently launched EventBridge, its new event service.
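As a rough sketch of the pattern, here’s a TypeScript consumer that processes a batch of order events from a queue. The record and batch shapes are assumptions for illustration; each provider delivers batches in its own format.

```typescript
// Sketch of an asynchronous event consumer. The provider's queue or bus
// delivers a batch of records; we process each one independently.
// The shapes below are illustrative, not a specific provider's format.
interface OrderEvent {
  orderId: string;
  status: "placed" | "paid" | "shipped";
}

interface QueueBatch {
  records: OrderEvent[];
}

export async function handler(batch: QueueBatch): Promise<void> {
  for (const record of batch.records) {
    // The producer moved on long ago -- we process "eventually," at
    // whatever rate this consumer can tolerate.
    console.log(`Processing order ${record.orderId}: ${record.status}`);
    // e.g., update the order record, notify the warehouse, send email.
  }
}
```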
Example Three: Scheduling Ops Work (e.g., Cron)
Building on our event example, we’ve got a similar but sufficiently different example: scheduling computation, or more specifically operations work. Some tasks, such as performing system backups, running updates, and rotating log files, need to happen on a recurring basis. And just as with event processing, serverless works well in this environment.
Let’s consider some examples.
- Log shipping
- Rotating log files
- Applying server patches
- Running security scanning and profiling
Why does serverless work so well for scheduled ops work?
Operations tasks often run on set schedules, such as every evening or morning. Operations engineers typically call these cron tasks, after cron, the classic Unix utility for scheduling them.
First, since serverless scales automatically, there’s no server sitting idle between runs; the task simply executes when it’s scheduled. Second, cloud providers typically expose simple methods for configuring task schedules. For instance, AWS performs scheduling via CloudWatch Events.
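As an illustration, here’s a minimal TypeScript sketch of a scheduled task: rotating a log file by renaming it with a date stamp. Notice that the function contains only the task logic; the schedule itself (say, a nightly cron expression) lives in the provider’s configuration, not in the code. The file paths are placeholders.

```typescript
import { rename } from "fs/promises";

// Sketch of a scheduled ops task: "rotate" a log file by renaming it
// with today's date. The schedule (e.g., a nightly cron expression) is
// configured on the provider side, not in this code.
export async function handler(): Promise<void> {
  const stamp = new Date().toISOString().slice(0, 10); // e.g., "2019-11-04"
  // Placeholder paths -- a real task would target shared storage, since
  // serverless file systems are ephemeral between invocations.
  await rename("/tmp/app.log", `/tmp/app-${stamp}.log`);
  console.log(`Rotated log to app-${stamp}.log`);
}
```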
Example Four: Data Processing
There are many occasions where data needs to be moved or streamed from one place to another. If you work at a media company, data processing is your party piece. But even regular companies often require some aspect of data processing or streaming.
Let’s consider some examples.
- Serving multimedia content, such as videos
- Validating or encrypting logs in real time
- Moving log data from one location to another
- Archiving data into a less expensive storage method
- Compressing or encrypting data for long term storage
- Moving data from a live database to an analytics database for processing
Why does serverless work for data processing?
Serverless can scale to meet very large data-processing demands. And as with the other examples, cloud providers typically provide built-in methods for common tasks, such as backing up or encrypting log files, which can simply be configured with serverless.
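As a sketch of the kind of transformation such a function might perform, here’s a TypeScript handler that gzip-compresses a file for cheap long-term storage. In a real pipeline the function would be triggered by an upload event and would read from and write to object storage; the local paths here are placeholders.

```typescript
import { createReadStream, createWriteStream } from "fs";
import { createGzip } from "zlib";
import { pipeline } from "stream/promises";

// Sketch of a data-processing step: compress a file before archiving it
// to cheaper storage. A real pipeline would be triggered by an upload
// event and use object storage; the local paths are placeholders.
export async function handler(): Promise<void> {
  await pipeline(
    createReadStream("/tmp/raw-logs.txt"),
    createGzip(),
    createWriteStream("/tmp/raw-logs.txt.gz")
  );
  console.log("Compressed raw-logs.txt for archival");
}
```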
What Will You Do With Serverless?
Hopefully this article helped you understand a bit more about what serverless is with some real-world examples of where it is used. Serverless isn’t magic, and it’s pretty simple when you peel back the layers. Serverless is not only a great opportunity for businesses, it’s also great fun!
If you’re keen to try serverless, you’ll soon find that monitoring your application is essential. So be sure to take a look at Scalyr’s demo to see the product in action and how you can gain insights into your serverless application.
This post was written by Lou Bichard. Lou is a JavaScript full stack engineer with a passion for culture, approach, and delivery. He believes the best products emerge from high performing teams and practices. Lou is a fan and advocate of old-school lean and systems thinking, XP, continuous delivery, and DevOps.