Developer Tools: The Command Line

Hello again!
Welcome to the next post in this series! Today we're going to go over the command line and terminal emulators. Again, this is a topic that changes quite a bit between operating systems, so check out the second post in the series for some good options! Let's jump right in!

Terminal Emulators

Let's take a trip down memory lane. Back in the early days of computing, not everyone had their own computer. Instead, an institution might have a single computer that its staff shared. To access it, you might have had a basic monitor-and-keyboard setup like the VT100. This setup was called a terminal, and it let you enter commands that were processed by the main computer. Jump forward to today and you might be using something like Terminal.app, iTerm, kitty, or GNOME Terminal. These are called terminal emulators – software designed to act like the old-school terminals, handling input to and output from a computer. The only difference is that the computer you are working with is right in front of you, not in a different building on campus.
You might be wondering why we use something that imitates an old concept – don't we have GUIs for this now? In some ways, you're right – many of the old tools have been replaced by graphical alternatives. But as it turns out, text is still a great way to interact with a computer. Until we have great voice commands built into all software, text is the fastest way to do many things.

The shell

Now we know what kind of program we use to talk to the computer with text. The shell is the program that reads that text and decides what to do with it. It takes a string like git commit -m 'Add the things!', decides it should route to git, and passes along the arguments commit -m 'Add the things!'. A shell wraps around the execution of a program – like a shell. 😉
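To get a feel for that splitting, here's a toy bash function (purely illustrative – nothing like how a real shell is implemented) that treats the first word of its input as the program and the rest as arguments:

# A toy illustration of how a shell splits a line of input:
# the first word is the program, the rest are its arguments.
show_parse() {
  echo "program: $1"
  shift
  printf "argument: %s\n" "$@"
}

show_parse git commit -m 'Add the things!'
# program: git
# argument: commit
# argument: -m
# argument: Add the things!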
There are lots of different shell programs, all with their own features. Many add widgets and simplifications to make you more productive, which often hides the actual functionality of a shell program. Some have fancy ways to interact with your previous commands and others integrate programming languages like Python directly into the shell. It’s also worth noting that in addition to being programs that you can run, each of these shells is a programming language that allows you to write command line functions and programs.

Bash

Bash (Bourne Again SHell) is the default shell on most Linux distributions (and older versions of macOS). It has a lot of great features, like Tab completion and aliases. This is the shell I use – I like it because there are lots of great resources out there and it doesn't do too much. It's kind of the baseline for good shells, and many others build on ideas set forth by Bash. Bash is also available on Windows now, which makes switching computers/OSes much easier.

zsh

zsh (Z shell) is basically a more interactive version of Bash. In addition to Tab completion, it features type-ahead search and autocompletion. Most people who use zsh install Oh My Zsh, which gives you some really good defaults for your shell. zsh is often a good choice for newer devs, as it has lots of user-friendly features and a good community around it.

fish

fish is pretty similar to zsh. It's got a lot of great user-friendly features, including good syntax highlighting out of the box. Julia Evans has a great article on the benefits of fish. I am thinking about switching to it in the future to play around with it.

xonsh

xonsh is cross-platform and deeply integrates Python into the shell experience. If you like Python and don't like writing shell scripts, this might be a good option!

The others

There are a lot of shell programs, and people have lots of opinions about them. Experiment a bit and see which you like the most. Searching for "alternatives to bash" is a good way to find starting places.

The Command Prompt

Now that we know what a terminal emulator is and how a shell works, let's talk about the command prompt. The command prompt is where you are going to type your commands – it's prompting you for input, which will be passed to the shell, which will decide how to respond. You'll most likely get some sort of output back, perhaps another string or a graphical program opening. Just like the shell, there are lots of possibilities when working with your command prompt. How the prompt is displayed is handled by the shell, and the basic way to customize it (in Bash) is to set an environment variable named PS1. Environment variables are global variables that tell your shell how to behave – they describe the environment that it is operating in.
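For example, here's the classic user@host:directory$ prompt you can try right now in a Bash session (it only lasts until you close the terminal):

# \u = username, \h = hostname, \w = current working directory
export PS1="\u@\h:\w\$ "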
If you are interested in reading a bit more about all this terminology, this StackOverflow answer is really good.

Some basic commands

cd <directory> – change to <directory> – Example: cd ~/Projects

ls <directory> – list the files in <directory> – Example: ls ~/Projects

cp <file> <location> – copy the <file> to <location> – Example: cp ~/Projects/test.txt ~/textfile.txt
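Putting those together, a quick session might look like this (assuming you have a ~/Projects folder containing test.txt):

$ cd ~/Projects                # move into the Projects directory
$ ls                           # list the files here
test.txt
$ cp test.txt ~/textfile.txt   # copy test.txt to your home folder as textfile.txt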

If you want to learn more, I’d recommend checking this tutorial out.

Giving your shell superpowers

Once you have a shell, you’ll want to upgrade it to fit your workflow!

Looking good!

First, let's start by making a good-looking prompt. In most shells, this is done by changing the PS1 environment variable. To start, make a file in your home folder (you can get there by opening a terminal and typing cd ~). Let's create a file called .bash_prompt. If you have VS Code installed, just run code .bash_prompt.
Once that file opens up, put this at the top:
#!/usr/bin/env bash

# Shell prompt
## Based on wuz/.files (github.com/wuz/.files)

bold="\[\e[1m\]"
reset="\[\e[0m\]"
black="\[\e[30m\]"
blue="\[\e[34m\]"
green="\[\e[32m\]"
cyan="\[\e[36m\]"
red="\[\e[31m\]"
yellow="\[\e[33m\]"
purple="\[\e[35m\]"
brown="\[\e[33m\]"
gray="\[\e[37m\]"
dark_gray="\[\e[90m\]"
light_blue="\[\e[94m\]"
light_green="\[\e[92m\]"
light_cyan="\[\e[96m\]"
light_red="\[\e[91m\]"
light_purple="\[\e[95m\]"
light_yellow="\[\e[93m\]"
white="\[\e[97m\]"

This gives us a bunch of colors to work with. Just for the record, those colors are defined by ANSI escape codes.
Now that we have some colors set up, let’s make a new prompt. Add this directly below what you have set up there.

# ^ rest of file ^

dir_chomp() {
  # Shorten the working directory: ~/Projects/Work/app -> ~/P/W/app
  if [[ "$PWD" == "$HOME" ]]; then
    echo "~"
  else
    local ns="${PWD//_/ }"        # replace underscores with spaces
    local nh="${ns/#$HOME/\~}"    # swap the $HOME prefix for ~
    local wd="${nh%/*}"           # drop the last path segment
    local wdnhrd                  # each parent segment chomped to its first character
    wdnhrd=$(echo "$wd" | sed -e "s;\(/.\)[^/]*;\1;g")
    echo "$wdnhrd/${ns##*/}"
  fi
}

# Set the prompt.
PS1="${bold}${light_green}\$(dir_chomp)${white}";
PS1+="${blue}${bold}\n→ ${reset}"
export PS1;

Here we are creating a function called dir_chomp, which takes a directory like ~/Projects/Work/app and shortens it to ~/P/W/app. If you don't like that, replace the line PS1="${bold}${light_green}\$(dir_chomp)${white}"; with PS1="${bold}${light_green}\w${white}";, which shows the full directory path. We are also setting some colors using the variables we defined above. Each color variable makes everything after it that color, until a ${reset} or another color variable appears.
Now that we have that in there, you can run source ~/.bash_prompt and you should see your new prompt! Try navigating around with cd and watch the directory change above the → character. If you like it, run code ~/.bashrc and add this to the end of the file:
# end of ~/.bashrc

source ~/.bash_prompt

Now your new prompt will stick around in every shell session you open.

Aliases

If you use the command line a lot, you end up running the same few commands over and over again. Sometimes it helps to shorten those commands, and this is where aliases come in. You can add these directly to your ~/.bashrc file. They look like this:
alias short='longer command --here'

Here are some examples from my aliases (feel free to steal them!):
alias d='docker'
alias di='docker images'
alias da='docker ps -a'
alias drma='docker rm -f $(docker ps -q)'

alias ip="dig +short myip.opendns.com @resolver1.opendns.com"
alias ipl="hostname -I | cut -d’ ‘ -f1"
alias ips="ifconfig -a | grep -o ‘inet6\? \(addr:\)\?\s\?\(\(\([0-9]\+\.\)\{3\}[0-9]\+\)\|[a-fA-F0-9:]\+\)’ | awk ‘{ sub(/inet6? (addr:)? ?/, \"\"); print }’"

alias c="code ."

alias be="bundle exec"

Now you're looking good and you're more productive to boot! There are lots of other customizations you can make to your command prompt – I have some git status tools in mine. For more information, check out the Awesome Bash list, or try googling "awesome zsh" or "awesome fish shell" for the equivalent lists for other shells!
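As a taste of those git tools, here's a minimal sketch that adds the current branch to the prompt we built earlier (git branch --show-current needs git 2.22 or newer):

# Print the current git branch, or nothing when outside a repo.
parse_git_branch() {
  git branch --show-current 2>/dev/null
}

# Same prompt as before, with the branch shown after the directory.
PS1="${bold}${light_green}\$(dir_chomp) ${cyan}\$(parse_git_branch)${white}";
PS1+="${blue}${bold}\n→ ${reset}"
export PS1;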
Happy hacking!

Link: https://dev.to//wuz/developer-tools-the-command-line-3f66

How a Monolith Architecture Can Be Transformed into Serverless

There is a growing audience surrounding serverless, and many are keen to take advantage of the benefits it provides. There are a lot of resources out there on serverless, but they tend to focus on how you get started. More specifically, they tend to focus on how you can build something brand new using serverless architectures.
But this misses a larger audience: those with existing codebases that can't simply be moved to serverless. These codebases often carry a large amount of technical debt that makes serverless look like nothing more than a fantasy.
So in this post, I want to focus on how a monolith application could be transformed into serverless, incrementally over time.
We already know that anything brand new could be built using serverless. But how do we take the application or service we have today and make it serverless? The simplest answer is that we rewrite it. But that is also extremely expensive from a development perspective depending on what needs to be rewritten.
Therefore, in this post, I propose that we pause a second before we jump in and begin rewriting our codebases to leverage serverless. Let’s evaluate how we would move the components of existing applications to serverless. By going through this process we can likely surface the things that will go well and the potential pain points we could encounter. Once we have those two things in hand we can develop a plan on how we would move our legacy application, or at least pieces of it, to be entirely serverless.

Thinking about serverless

There is a healthy debate surrounding serverless, its pros, its cons, and even its definition. This isn't a post where I intend to debate those things once more. However, I do think it's valuable to share the definition that resonates with me when I think of this new paradigm.

Serverless is a cloud-first architecture that allows me to focus on delivering code to my end users. There is no provisioning, maintaining, patching, or capacity planning of the underlying servers. Scale and availability to support workloads large and small are handled by the cloud provider.

In the ideal world, serverless frees me from having to deal with the servers my code runs on. With that freedom, I can focus on the code that delivers value to my end users. That is the core of serverless in my mind. Is it idealistic at times? Absolutely. As developers, maintaining servers isn't the only non-code-related thing we have to do, but it's still worth striving toward this ideal.
Therefore, when I am considering moving an existing service to serverless, I force myself to think about the cost and the reward, just like any other architectural or tech-debt decision. The reality is that making technical debt decisions and gauging the cost and reward of each one is complex, non-trivial, and hardly ever perfect.
So as we are exploring the idea of moving a legacy piece of code to serverless, we should acknowledge that this is a very hard problem to solve. It has multiple solutions, some better than others, and some not even possible for large applications. So let’s explore how we can move from a big monolithic application to a serverless architecture.

Step 1: Grounding ourselves in reality

To get started, let’s put this out there right now:
$ echo 'Not everything fits into the serverless model.'

This is an important thing to get out there from the get-go when you are considering serverless. It doesn’t work for everything, at least not today.
That said, it does work for a lot of things and initially, some of those things may not be intuitive or natural to you. Remember, this is a different paradigm than what many of us are used to but it’s not drastically different than how we might already be composing our applications.
When we're thinking about investing the effort to transform our current architecture into serverless, it's important to ground ourselves in reality. Establish a clear reason why you believe moving to serverless is best for your application or service. If you can't get past this stage, you should reevaluate the path you are about to embark on.
These reasons are going to be unique to you and your workload, so think carefully about what you want to gain by moving to serverless. Maybe your driving reason is cost, you don’t want to pay for servers running around the clock for the 1-2 hours a day you have traffic. Or maybe your driver is scale, you want to have almost infinite scalability without being responsible for the underlying infrastructure.
Whatever the driving reason is for your move to serverless, make sure you validate that the importance is what you think it is. Each step that follows is going to be a challenge and will test your reasoning for this journey.
Once we have grounded ourselves in the reality of what we want to accomplish it’s time to start evaluating our application.

Step 2: Dividing the movable from the unmovable

This is where the fun begins. It’s time to start looking at what things in your application can be moved to serverless and what things you think can’t be moved, at least initially.
There's no need to overthink here. The things in your monolith that are easy to move tend to jump out at you, while things that seem difficult from the outset are likely better to shelve for the moment.
At this stage, it is often helpful to think about the constraints that exist inside of a serverless architecture.

Depending on your serverless cloud provider, a single function has 1-15 minutes to launch, complete the necessary work, and then exit. This time limit is configurable, so you should evaluate how long you believe the work will take and set your time limit accordingly.
Cold start latency is real. Depending on your use case, you can notice considerable latency the first time your serverless function is invoked. There are a lot of factors that contribute to this, and a lot of things you can do to help minimize it.
Disk size constraints are another thing to keep in mind. Inside a serverless function, you often don't have gigabytes of storage at your disposal. However, you usually have some kind of tmp or scratch storage if you need it.
Memory is constrained as well; depending on your cloud provider, you get up to 3GB of it. This tends to be less of a constraint than the others listed here, but it's still important to keep in mind.

These are the core constraints that are important to keep in mind as we are thinking about what pieces of our monolith can be moved and what pieces can’t. There are other constraints around deployment size, payload size, environment variables, and file descriptors. However, these constraints are applicable to only a small number of workloads and likely not as important to you.
An important thing to note is that these constraints can be worked around. However, when you are just embarking on this journey you should avoid those workloads for the time being. They can tend to spiral out of control when you are new and I can assure you that there are probably other workloads that are easier to transition initially.
Here are some high-level things that I would consider movable out of the gate. Note: The services mentioned here are specific to Amazon Web Services, but applicable services exist outside of AWS.

CRON Jobs

CRON jobs are a great place to start, as long as they fit into the constraints mentioned above. These tend to be automated processes that we all have running, usually doing some mundane tasks for us. If you're lucky, they run on their own instance, which means when you move them to serverless you get to kill an instance 💀
These jobs also tend to be outside the main development flow of your application or service, which means mistakes may have a lower blast radius. CRON jobs are therefore a great place to start your serverless journey: you get to familiarize yourself with the paradigm, learn some lessons, gain a bit of value, and hopefully not disrupt your users.
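To make that concrete, here's a minimal sketch of a CRON job rewritten as a Lambda handler, wired to a CloudWatch Events schedule (for example rate(1 day)) instead of a crontab entry. The bucket name and cleanup rule are made up for illustration:

import time
import boto3

# Initialized once per execution environment, reused on warm invocations.
s3 = boto3.client("s3")

def handler(event, context):
    # Hypothetical housekeeping: delete scratch files older than 30 days.
    cutoff = time.time() - 30 * 24 * 60 * 60
    deleted = 0
    resp = s3.list_objects_v2(Bucket="my-scratch-bucket")  # made-up bucket
    for obj in resp.get("Contents", []):
        if obj["LastModified"].timestamp() < cutoff:
            s3.delete_object(Bucket="my-scratch-bucket", Key=obj["Key"])
            deleted += 1
    return {"deleted": deleted}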

‘Glue’ Services

I’m sure there is a more technical term than just ‘glue’, but let me explain what these services are in my mind. First, service is a loose definition here, it could be a separate service running on its own instance, or it could be a service layer inside of your monolith.
A glue service is a service that acts as the mediator between two or more other services. In other words, it glues services together. Often these services are doing transformations or relaying messages between services, so this workload can fit well into serverless as long as it is stateless.
Moving these types of services to serverless means you need to think about how the workload receives inputs and how it passes along outputs. The inputs could be received via an API Gateway if you already use HTTP and the client services need responses synchronously. Or, if your client services just want to send the message along, your inputs could come from SNS topics.
These ‘glue’ services can often fit into serverless right out of the gate with the caveat that they don’t hold state. State can be tricky in a serverless world because the compute power is ephemeral, even more so than a VM, so you have to keep that in mind. It’s not an impossible problem to solve, but it can move something from ‘movable’ to ‘hold-off’ for now.
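As a concrete (and entirely hypothetical) sketch: a stateless glue function that receives an order message from one SNS topic, normalizes it, and relays it to a downstream topic:

import json
import boto3

sns = boto3.client("sns")
# Hypothetical downstream topic the transformed message is relayed to.
DOWNSTREAM_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:orders-normalized"

def handler(event, context):
    # SNS wraps each delivered message in a Records list.
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        # This 'transformation' is a stand-in for whatever mediation
        # the glue service did inside the monolith.
        normalized = {
            "order_id": message["id"],
            "total_cents": int(message["total"] * 100),
        }
        sns.publish(TopicArn=DOWNSTREAM_TOPIC_ARN, Message=json.dumps(normalized))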

Email Services

A lot of applications have code or services that send emails to their users. Depending on how entangled this is in your codebase, it could be a good candidate to pull out and make serverless. Like CRON jobs, these services can likely run in the background and therefore are not necessarily user-facing. Failing to send an email does impact the user, which is bad, but the user experience shouldn't be directly tied to the actual sending of the email.
Sending emails has wildly different implementations across all different types of use cases. In general, they tend to follow a flow like this:

User completes some action or they want to be notified something has happened.
There is often an email template that is used for the event that has happened that the user needs to know about.
The email is generated via the template and the event details.
The email is then sent to the user.

There could be more or fewer steps in here depending on your use-case, but this is a general flow of logic.
This flow fits nicely into serverless. If you're lucky, your email delivery may already be separated out from your monolith. In that scenario, you can move the logic into serverless, change how services pass messages to it, and you're probably most of the way there. If your email delivery is stitched into your application, it needs to be decoupled first; then you can look into making it serverless.
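Here's that four-step flow as a Lambda handler sketch. The event shape and template store are invented for illustration; SES is one option for the actual send:

import boto3

ses = boto3.client("ses")

# Stand-in for a real template store.
TEMPLATES = {
    "password_reset": "Hi {name}, reset your password here: {link}",
}

def handler(event, context):
    # 1. An upstream service tells us an event happened (via SNS, SQS, etc.).
    # 2-3. Pick the template for that event and render it with the details.
    body = TEMPLATES[event["event_type"]].format(**event["details"])
    # 4. Send the email to the user.
    ses.send_email(
        Source="no-reply@example.com",
        Destination={"ToAddresses": [event["user_email"]]},
        Message={
            "Subject": {"Data": event["subject"]},
            "Body": {"Text": {"Data": body}},
        },
    )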
These are just some high-level ideas that fit nicely into a serverless architecture out of the gate. This is not an exhaustive list, as that depends on your own workload and requirements. However, I think we can summarize the movable things as generally meeting these three requirements:

Generally not directly customer-facing. While not a hard-and-fast rule, the first things you move to serverless are generally not customer-facing. The reasoning behind this varies, but for me, it comes down to learning this new paradigm while not disrupting user workflows.
Not coupled to some synchronous API. The more things can be decoupled and asynchronous when you're first moving to serverless, the better. Why? Because this is typically where the architecture thrives. By keeping things async and distributed, we allow our workloads to scale independently.
Aim for workloads or services that have clear boundaries. I realize with large applications this can be a real challenge. However, the more you can define a boundary for a given service, the easier it will be to move to serverless. The reasoning is rather simple: clear boundaries define clear contracts. If we move a given service and the contract isn't clear or gets muddy, we can end up with distributed tech debt rather than monolithic tech debt.

Now that we have principles that define our initially movable services, let's start thinking about how we would actually move them and the tools available to us.

Step 3: Moving the movable

Now that we have separated our services into two groups, the movable and the unmovable (at least for now), we can start thinking about how that move actually happens.
This is a variable step that is going to depend on your own codebase. Some things you might decide to rewrite to leverage the serverless paradigm to its fullest. Other things you might be able to "port" over to a serverless world because they are well suited for it. And there might even be things that you start to move and then realize aren't movable after all.
All three of these scenarios are valid. But I think it’s valuable to focus more on the first two because those are the "moving forward" stages.
With that in mind, let’s think about when a rewrite might be needed.
A rewrite of an existing service into serverless may be necessary for any of these high-level scenarios.

The language/framework won't work in serverless. Maybe the service is written in Cobol, uses Spring Boot, or makes heavy use of native binaries. This used to be a much more common problem, but with the introduction of AWS Lambda Layers it's actually less of one.
The service is tightly coupled into the monolith. This is a very common scenario that I tend to see in older code bases. We want to pull that service out but we actually probably need to strangle the old one and build up a new one. Check out the Strangler Pattern for that, even if serverless isn’t in your future.
The existing code just isn’t performant enough to run in a serverless environment. Another common scenario that is typically closely related to the earlier one. Maybe the execution time won’t work for this workload or maybe memory is too constrained.

These are the three scenarios I tend to see most often when it comes to making the case for the ‘rewrite into serverless’ case. We could likely come up with more so use your best judgment when deciding to pursue the rewrite path.
The second path is preferable because it avoids the additional overhead of recreating an existing service. It is often an easier sell to stakeholders because "I'm just going to move this over here" sounds a lot better than "I need to rewrite this thing in order to move it over here".
Like the previous path, we can envision some high-level scenarios where this path is entirely possible.

The language/framework is supported in serverless. Seems simple to say, but this is actually a huge win if your service is already in a language or framework that serverless providers support out of the box. In this scenario, we can typically port this over to run in a serverless environment. This often means adding the necessary code to run in a function handler, tweaking monitoring, updating logging, and making any additional configuration changes.
The service can run inside a Docker container or is already containerized. Remember in the earlier path when we said languages or frameworks not supported out of the box in a serverless environment probably need to be rewritten? Well, if you're using AWS, that might not be your only option. img2lambda lowers that barrier and makes it possible to bring those workloads over directly using Lambda Layers.

Those are two paths we can take when we begin moving the movable workloads to serverless. There are certainly other approaches; containerization is one that comes up quite frequently. Moving a workload to a container can be a nice middle ground when thinking about transitioning to serverless. However, closer analysis might reveal that it's an unnecessary step and that you should consider one of the two paths above.

Step 4: Moving the unmovable

But what about the things that we deemed unmovable?
The first question to answer in this scenario, hopefully after you've gotten one or two movable services under your belt, is: why was it unmovable when you first decided that it was? When we're new to serverless, we typically deem something unmovable because of the constraints around serverless.

Limited execution time, 1-15 minutes, depending on your cloud provider.
Cold start latency associated with serverless workloads.
Disk size constraints, we typically only have a small amount of scratch space.
Memory is constrained, up to 3GB, depending on your cloud provider.

Can things that run into these limitations be moved as well? Of course, they can, but you are likely going to have to do some refactoring. Let’s look at each of these and sketch out some high-level ideas you could try to remove the limitation and turn this into a ‘movable’ service.

Limited execution time

When serverless architectures were first introduced, this was a controversial limitation that stopped a lot of people. We tend to think of programs and applications as running indefinitely, but with modern cloud computing that's no longer a given.
If we think of an auto-scaling group in the cloud, it scales out and scales in, starting and killing workloads as it does so. Serverless is not drastically different in this regard, except that we have a shorter amount of time to finish our work before our 'instance' is gone from underneath us.
If you are stuck at this limitation you may need to reimagine how this particular service works.
Is this service working on things in bulk? If so, create one function that fans out the work and another serverless workload that processes a single item. By fanning out the work, we can take advantage of parallelization to lower our execution time.
Can the service be moved from synchronous to asynchronous? Often we start with the former because it is simpler to implement. However, asynchronous processing allows us to do work in the background and be even more strategic with our execution time.
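A minimal sketch of the fan-out idea, with hypothetical function names: one function splits the bulk job into items and asynchronously invokes a worker function per item:

import json
import boto3

lambda_client = boto3.client("lambda")

def fan_out_handler(event, context):
    # Split the bulk job into independent items...
    for item in event["items"]:
        # ...and hand each to its own invocation, running in parallel.
        lambda_client.invoke(
            FunctionName="process-single-item",  # hypothetical worker function
            InvocationType="Event",              # async: don't wait for the result
            Payload=json.dumps({"item": item}),
        )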
There are many other scenarios where this limitation can arise. My best advice is to think creatively about how you could refactor things to work in a serverless world. You may still decide not to go that path, and that is totally normal, but the exercise should at least get you thinking more deeply about why it’s not possible.

Cold start latency

This is still a blocker for many folks looking to move to a fully serverless architecture. The time between your function being invoked and it actually beginning its work is what we refer to as the cold start.
It typically only happens when there is no previous container/image/environment lying around for your serverless workload. In that case, a new one must be launched and the code initialized; only then can the work begin.
This problem is quite obvious in AWS Lambda when you are running your workload inside of a VPC. You often do this so you can connect to database services like RDS. Because of this problem, Lambda will actually keep these environments hot for an extended period of time. That said, your synchronous APIs will likely still notice the cold start on that initial hit.
So what are some solutions here?
The well-documented "best practice" is to ping your workloads to keep containers warm. Admittedly, this is a hack and smells funny. The Serverless Framework actually has a plugin that does exactly this. In the case of AWS, you create a CRON job that invokes your Lambda function every 5 minutes. This keeps a warm environment for that function so you can minimize the cold start.
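If you roll this yourself rather than using the plugin, the function needs to recognize the ping so the warming invocation skips the real work. A minimal sketch, relying on the fact that scheduled CloudWatch Events carry "source": "aws.events":

def do_real_work(event):
    # Stand-in for the function's actual work.
    return {"processed": event.get("payload")}

def handler(event, context):
    # The warming schedule invokes us every few minutes with a scheduled
    # event; bail out early so the ping stays cheap.
    if event.get("source") == "aws.events":
        return {"warmed": True}
    return do_real_work(event)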
But cold start latency isn't just on the cloud provider; it's on you as a developer as well. The more you do inside your function handler, the longer it takes to get to work.
Things that are global or can be reused across function invocations should be declared outside of your function handler. This leverages the cold vs. warm start distinction: things initialized outside your handler survive in the warm environment and don't need to be reinitialized on each invocation.
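In practice that looks like this (a sketch; the table name and event shape are hypothetical):

import boto3

# Initialized during the cold start, then reused by every warm invocation.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("users")  # hypothetical table

def handler(event, context):
    # Only per-request work happens inside the handler.
    resp = table.get_item(Key={"id": event["user_id"]})
    return resp.get("Item")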
Cold starts are a constraint everyone moving to serverless should think about. Are they a blocker? In most cases, no: in reasonably active applications, the environments will likely be kept warm. But if your workload is very spiky, you could see frequent cold starts, and then you will want to think about the strategies above.

Disk size and memory constraints

To be honest, I have never encountered these limitations running serverless workloads.
That said, I think there are some high-level things you can think about and consider changing if you encounter either disk or memory constraints.
In the event of a disk constraint, you are likely managing some kind of state in your function or operating on a very large file or collection of files. In the former case, keep your state external and stream it in rather than reading it all at once. This is good for memory and for disk space.
For working with large files, if it is an option, consider streaming the file and fanning out the work on it. If you can stream it, you can tell one function to work on the first chunk, the next function to work on the next chunk, and so on.
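For example, rather than downloading a large S3 object to local disk, each fanned-out worker can fetch just its byte range (the bucket and event shape are made up for illustration):

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Fetch only this worker's slice of the file, not the whole thing.
    resp = s3.get_object(
        Bucket="my-big-data-bucket",  # hypothetical bucket
        Key=event["key"],
        Range="bytes={}-{}".format(event["start"], event["end"]),
    )
    chunk = resp["Body"].read()
    # ...process just this chunk, keeping disk and memory use small.
    return {"bytes_processed": len(chunk)}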
These are pretty hard constraints to work around in a serverless environment, but they are not impossible. However, they should prompt a discussion around whether this workload is truly right for serverless right now.

Step 5: Setting yourself up for success

Once you have determined a viable path in step three, you are likely going to start thinking about implementation. Before you get too far down that trail, here are some tips, processes, and tools that can aid in your success.

Use Infrastructure as Code; your future self will thank you. Think about it: you're going from managing one big application to managing multiple serverless workloads, from centralized to distributed. A recipe for disaster is provisioning those distributed services manually by hand. Use tools like Terraform, the Serverless Framework, CloudFormation, or Pulumi to make this management far easier.
The Twelve-Factor app will make your life easier. Chris Munns, AWS Serverless Advocate, has a fantastic blog post that focuses on the methodology in a serverless environment.
Decoupling services from one another can define clear contracts and enable individual scaling. Again, not a new concept at all but one that will elevate your serverless game exponentially. The more services can be async and decoupled from one another the better. Have services pass messages to one another rather than calling each other directly. Work from queues of events rather than in response to a single request.
Walk before you run. Start with smaller, bite-sized services as you begin moving to serverless. This will help you establish good patterns and practices for yourself while also revealing some of the pain points you are likely to encounter.

Conclusion

Serverless workloads are one future for cloud computing, but they are not the future. We are going to need all kinds of computing platforms as technology advances, and some things likely won't fit into the serverless model initially.
That’s OK. But it’s not a reason to not take advantage of it where you can.
The serverless architecture frees us from the responsibility of provisioning and managing the underlying compute power our systems run on. By being free of those complexities we can focus on writing the code that delivers value to our users. That is the value add of a serverless architecture.
It's not perfect. It has warts and oddities that will get in the way of your journey, but most can be solved by taking a step back and thinking a bit differently. Some won't be solvable, and that's perfectly OK as well. Nobody is saying your monolithic applications can become serverless overnight. But they can incrementally move there with a well-thought-out plan.

Are you hungry to learn even more about Amazon Web Services?

If you are looking to begin your AWS journey but feel lost on where to start, consider checking out my course. We focus on hosting, securing, and deploying static websites on AWS, learning over six different AWS services as we use them. After you have mastered the basics, we dive into two bonus chapters covering more advanced topics like Infrastructure as Code and Continuous Deployment.

Link: https://dev.to//kylegalbraith/how-a-monolith-architecture-can-be-transformed-into-serverless-8l4