Serverless computing, or FaaS (Functions-as-a-Service), is a form of cloud computing in which application developers rely on third-party services to manage the server side of operations, allowing them to focus on building applications function by function.
The serverless platform manages the deployment, running and scaling of code, as well as the provisioning, maintenance and scaling of servers. Developers can focus on writing functions (i.e. small pieces of code) that are triggered by events, without having to worry about the whole application or the infrastructure needed to run it. Functions are pay-per-use, so there is no cost when they are not called, and they scale as needed. Serverless computing can therefore reduce the cost and resources needed to develop cloud services.
Serverless offers the opportunity to process data, integrate systems, work with IoT devices and build straightforward APIs and microservices. It’s particularly effective for tasks that you want to run on a schedule, such as image or order processing or file maintenance.
Developers don’t need to choose just one provider to work with. By working with multiple FaaS providers, you create a safety net if one of them goes down or has a bug. Applications that build this into their design from the outset can spread critical functionality across different third-party providers to improve reliability.
The “serverless revolution” began when Amazon’s Lambda was launched in 2014, and it remains the most popular serverless platform, partly because of its head start in the market. Google didn’t introduce Cloud Functions until early 2016, and Microsoft released Azure Functions a little later (although Microsoft had already been offering significant Platform-as-a-Service (PaaS) functionality for a number of years). Azure Functions has since overtaken Google Cloud Functions in both development pace and usage.
Organizations are increasingly migrating their workloads to the cloud to store, manage and process data. In this post, we ask what the two big players in serverless, Microsoft Azure Functions and AWS Lambda, have in common, and how they differ.
Microsoft Azure Functions
Azure Functions was officially launched in March 2016, and while it has a narrower range of options than AWS Lambda, it has been steadily adding functionality. Azure Functions lets you execute code in a serverless environment without first publishing a web application or creating a VM.
Languages
When you create a function, several languages are natively supported, including C#, JavaScript (Node.js) and Python; Azure Functions also supports F# and PHP. It is important to select a language that every platform in your application’s architecture supports, in order to reduce the work needed to port code between different FaaS providers.
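To make the model concrete, here is a minimal sketch of an HTTP-triggered Azure Function using the Python programming model. It assumes an HTTP trigger binding named `req` declared in the function’s function.json; the greeting logic is purely illustrative.

```python
import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # "req" must match the binding name declared in this function's function.json.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```

Everything outside this file (the trigger type, route and authorization level) lives in configuration, which is what lets the same small piece of code be wired to different events.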
Platforms and Dependencies
Functions runs in a Windows environment. It supports NuGet and NPM, allowing developers to pick their preferred libraries. It also offers straightforward integrations with other Azure services (including Azure Cosmos DB, Azure Event Hubs, Azure Event Grid and Azure Notification Hubs) and various external SaaS services, including Bitbucket, GitHub and VS Team Services.
Function Triggers
Azure Functions takes an event-driven approach to triggers. Like AWS, Azure offers dynamic, configurable triggers that can be used to invoke functions on its platform. Microsoft enables access through a web API, in addition to invoking functions on a schedule. Microsoft also provides triggers from its other services, such as Azure Storage and Azure Event Hubs, and there is even some support for SMS-triggered invocations using Twilio.
When something interesting happens within the cloud environment, you can trigger a related function. CRON expressions enable timer-based events for scheduled tasks, and events in Microsoft’s other SaaS services, such as SharePoint or OneDrive, can also be configured to trigger operations in Azure Functions.
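As a rough illustration of a scheduled trigger, the sketch below shows a Python function bound to a timer. The binding name `mytimer` and the five-minute CRON schedule are assumptions that would live in the function’s function.json.

```python
import datetime
import logging

import azure.functions as func


def main(mytimer: func.TimerRequest) -> None:
    # Fires on the CRON schedule declared in function.json,
    # e.g. "0 */5 * * * *" for every five minutes.
    if mytimer.past_due:
        logging.warning("Timer trigger is running late")
    logging.info("Scheduled task ran at %s", datetime.datetime.utcnow().isoformat())
```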
Pricing
Azure Functions offers three different plans:
Consumption plan – Functions and Lambda are billed in the same way under the consumption plan: the total cost is calculated from the number of triggers and the execution time. You only pay for the time your code runs, and you don’t pay for resource management.
Under the consumption plan, the first one million requests and 400,000 GB-s of resource consumption per month are free across all function apps in a subscription; beyond that, pay-as-you-go pricing applies.
Premium plan – The Premium plan “provides enhanced performance and is billed on a per second basis based on the number of vCPU-s and GB-s your Premium Functions consume”.
App Service plan – If App Service is already being used for other applications, you can run Azure Functions on the same plan for no additional cost.
AWS Lambda
AWS Lambda is no longer the only serverless provider in town, but getting out of the gate ahead of the other big providers has given it a major head start in terms of features. Another significant advantage is Lambda’s extensive integration with AWS’ wider cloud portfolio.
Latest Features
Among the announcements at the recently held AWS re:Invent 2018 were two related to its serverless offering: Lambda Layers and the Lambda Runtime API, both intended “to make serverless development even easier”.
Lambda Layers is “a way to centrally manage code and data that is shared across multiple functions”. It addresses a long-standing pain point: previously, shared code had to be packaged and deployed with every function that used it; now developers can put the shared elements in a single zip file and upload that resource as a Lambda Layer.
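As a hedged sketch of how this might look in practice with boto3, the snippet below publishes a zip of shared code as a layer and attaches it to an existing function; the layer name, runtime version and function name are hypothetical.

```python
import boto3

lam = boto3.client("lambda")

# Publish shared code (packaged inside shared-libs.zip) as a layer version.
with open("shared-libs.zip", "rb") as f:
    layer = lam.publish_layer_version(
        LayerName="shared-libs",            # hypothetical layer name
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],   # illustrative runtime
    )

# Attach the new layer version to an existing function.
lam.update_function_configuration(
    FunctionName="order-processor",         # hypothetical function name
    Layers=[layer["LayerVersionArn"]],
)
```

Once attached, the layer’s contents are extracted into the function’s execution environment, so the function’s own deployment package no longer has to carry the shared code.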
The Lambda Runtime API offers a new interface for using any programming language, or a particular language version, to develop functions.
The two features can be used in conjunction: runtimes shared as layers let developers use their preferred programming language when authoring Lambda functions.
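Very roughly, a custom runtime built on the Runtime API polls an HTTP endpoint for the next invocation, runs the handler, and posts the result back. The simplified loop below assumes the documented endpoint paths and the AWS_LAMBDA_RUNTIME_API environment variable, and it omits the error-reporting calls a production bootstrap would make; the handler itself is a placeholder.

```python
import json
import os
import urllib.request

# Base path of the Runtime API exposed to custom runtimes by the Lambda service.
API = f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2018-06-01/runtime/invocation"


def handler(event):
    return {"echo": event}  # placeholder business logic


while True:
    # Block until the next invocation is available.
    with urllib.request.urlopen(f"{API}/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    # Post the handler's result back for this request id.
    result = json.dumps(handler(event)).encode()
    urllib.request.urlopen(
        urllib.request.Request(f"{API}/{request_id}/response", data=result, method="POST")
    )
```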
Languages
Lambda offers support for Node.js (JavaScript), Python, Java and C#, and AWS is working on support for further languages. You can bring your own code, and any third-party library, including native ones, can be used.
Platforms and Dependencies
AWS Lambda integrates easily with the other cloud services that AWS offers; for instance, integration with S3 and Kinesis enables log analysis and on-the-fly filtering, or video transcoding and backup. AWS Lambda enables the addition of custom logic to AWS resources, including Amazon DynamoDB tables and Amazon S3 buckets, so that compute can be applied to data as it enters or moves through the cloud.
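For example, a minimal sketch of a Python function reacting to S3 object-created notifications might look like the following; the filtering logic is a placeholder.

```python
import urllib.parse


def lambda_handler(event, context):
    # S3 delivers one or more records per notification; each identifies
    # the bucket and object key that triggered the function.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New object s3://{bucket}/{key}")  # e.g. filter, transcode or back it up here
```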
AWS Lambda can be used to create new backend services for web, mobile or IoT applications, receiving client requests through Amazon API Gateway or invoked on demand using the Lambda API or custom API endpoints. This means reduced power usage on the client, easier updates and the ability to avoid client platform variations.
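A hedged sketch of such a backend handler, assuming API Gateway’s Lambda proxy integration (where the HTTP request arrives in the event and the response must supply a status code and body); the query parameter and message are illustrative.

```python
import json


def lambda_handler(event, context):
    # With the Lambda proxy integration, query parameters arrive in the event
    # and the return value must carry statusCode, headers and a string body.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```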
Triggers
Multiple Lambda functions can be managed through AWS Step Functions, which lets you coordinate several AWS services at once into serverless workflows and build or update those workflows quickly. You can define workflows that trigger a variety of Lambda functions, including sequential, parallel, branching and error-handling steps. AWS also lets users define workflows for highly complex triggers, such as decision trees for alternative business use cases originating from the same function.
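As an illustrative sketch, the snippet below defines a two-step sequential workflow in Amazon States Language and registers it with boto3; the function ARNs, state machine name and IAM role are placeholders.

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# A sequential two-step workflow: each Task state invokes a Lambda function.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Next": "ChargeCard",
        },
        "ChargeCard": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-card",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-workflow",                                        # hypothetical name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",   # placeholder role
)
```

Parallel, Choice and error-handling states slot into the same definition, which is how branching and retry logic is modelled without adding code to the functions themselves.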
Pricing
As with Azure Functions and most other serverless/FaaS platforms, you pay only for the requests served and the compute time needed to run your code; this is one of the main advantages of the serverless approach.
Amazon offers sub-second metering, billing execution time in increments of 100 ms. As with Azure, the first one million requests and 400,000 GB-seconds of compute time per month are free; every one million requests thereafter costs $0.20, with compute time billed separately per GB-second.
Lambda counts a request each time a function starts executing in response to an event notification or an invoke call, and bills on the total number of requests across all your functions.
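To see how the pieces combine, here is a back-of-the-envelope estimator for the shared consumption model (requests plus GB-seconds beyond the free grant). The per-GB-second rate is an assumption for illustration, and the sketch ignores the 100 ms rounding, so check the current price lists before relying on it.

```python
FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000
PRICE_PER_MILLION_REQUESTS = 0.20   # USD, as quoted above
PRICE_PER_GB_SECOND = 0.0000166667  # USD, illustrative rate; check the current price list


def monthly_cost(requests: int, avg_duration_s: float, memory_gb: float) -> float:
    # GB-seconds = memory allocated (GB) x total execution time (s)
    gb_seconds = requests * avg_duration_s * memory_gb
    request_cost = max(0, requests - FREE_REQUESTS) / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = max(0, gb_seconds - FREE_GB_SECONDS) * PRICE_PER_GB_SECOND
    return request_cost + compute_cost


# e.g. 3M requests/month at 200 ms average duration with 512 MB of memory
print(f"${monthly_cost(3_000_000, 0.2, 0.5):.2f}")
```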
A Comparison: Some Similarities and Differences
Both Microsoft Azure Functions and AWS Lambda support Node.js, Python and C#. Beyond that, AWS Lambda also supports Java, while Azure offers extended support for F# and PHP. However, both providers are continuing to grow their language offerings, and presumably at some point there will be parity between them.
Each framework runs on a different execution platform. While Azure Functions runs in a Windows environment, AWS Lambda’s execution environment is built from an Amazon Machine Image (AMI) running on Linux.
The container architectures also differ. In Azure Functions, resources are provisioned as needed, and functions run on Azure’s WebJobs infrastructure rather than being deployed from a zip file as they are in Lambda.
Both offer dynamic, configurable triggers that can be used to invoke functions on their respective platforms and across their wider sets of cloud services. AWS lets developers configure an API trigger using API Gateway and issue different types of trigger depending on what is needed, for instance a dynamic trigger via a DynamoDB action or a file-based trigger driven by Amazon S3 activity. AWS Step Functions is a particularly useful tool for creating state machines that model workflows and coordinate multiple AWS services in one place.
Another difference is that Azure offers an App Service plan while Lambda does not. The App Service plan lets users pay per app service, as opposed to the Consumption plan, under which users pay per function, as they do with Lambda. The App Service plan is likely a good fit for functions that take a long time to execute; for quick functions, the Consumption plan is probably more suitable. The difference in pricing between the two frameworks under the pay-per-function model is minimal.
The main current difference between the two is that Microsoft’s Functions runtime is open source, allowing users to deploy it on local servers or alternative cloud services, while Lambda is not.
Takeaway
There are a variety of FaaS alternatives to AWS Lambda and Microsoft Azure Functions, the biggest of which come from the other hyperscalers: Google Cloud Functions and IBM Cloud Functions.
AWS Lambda offers a set of features that in many ways sets the standard for FaaS platforms. Lambda is also impressive because of the size of the product portfolio behind it: what matters most in serverless is the availability and consumption of other services within the cloud provider’s ecosystem, since being able to execute functions in response to events is only as useful as the events and services you can connect them to.
Furthermore, serverless is increasingly becoming the glue, or scripting layer, that ties cloud features together. AI or mapping tools that were previously largely independent are now being linked through event-driven serverless functions. One of the quickest ways to explore other kinds of compute, such as machine learning for analytics, is to create a serverless app and begin sending events to the ML corner of the cloud.
It is currently difficult for the other providers, including Microsoft, to match the breadth of AWS’ offerings. However, all the big FaaS platforms are continually evolving and adding new features, and there are also benefits to running a hybrid or multi-cloud environment. There are a considerable number of smaller players as well, from Red Hat’s fabric8 to NStack to Platform9 Systems’ Fission. These can be strong options for edge processing, high-security apps, offline apps, or situations in which high load/utilization or heavy computational requirements are expected.
Pricing is, of course, a major consideration when choosing a framework. If execution times exceed expectations, request volumes spike or triggers are left unmonitored, costs can quickly multiply and erode the value of taking a serverless approach. A useful comparison tool is the Serverless Cost Calculator, which predicts eventual costs from the expected number of executions and the average execution time.