In this scenario we are going to create an AWS Lambda function in Python that automatically processes any JSON file uploaded to an S3 bucket into a DynamoDB table.
In DynamoDB I’ve gone ahead and created a table called “employees” with a primary key of employee ID. The key can be anything you like.
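As a rough sketch of the Lambda function, here is one way it could look. The table name “employees” comes from the setup above; the helper that converts floats to Decimal is there because DynamoDB rejects Python floats, and the handler assumes each uploaded JSON document already contains the table’s primary key attribute.

```python
import json
import urllib.parse
from decimal import Decimal

def floats_to_decimal(obj):
    """DynamoDB rejects Python floats, so convert them to Decimal recursively."""
    if isinstance(obj, float):
        return Decimal(str(obj))
    if isinstance(obj, list):
        return [floats_to_decimal(v) for v in obj]
    if isinstance(obj, dict):
        return {k: floats_to_decimal(v) for k, v in obj.items()}
    return obj

def lambda_handler(event, context):
    # boto3 is preinstalled in the Lambda runtime; imported here so the
    # pure helper above can be exercised without it.
    import boto3
    s3 = boto3.client("s3")
    table = boto3.resource("dynamodb").Table("employees")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        item = floats_to_decimal(json.loads(body))
        table.put_item(Item=item)  # assumes the JSON includes the primary key
```

The S3 trigger delivers an event with one or more `Records`; the handler loops over them so a single invocation can process several uploads.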
In this scenario, we have a directory that is very cluttered and we need to organize it and back it up at the same time. We can do this by writing a script that checks a local folder for specific file extensions, backs them up, and moves them to their appropriate directories in S3.
First, we need a user that has API keys to access S3.
I have uploaded my files to GitHub so you can follow along.
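A minimal sketch of the script’s core idea: map file extensions to S3 key prefixes, then upload anything that matches. The bucket name, the extensions, and the prefix categories below are placeholders, not from the article.

```python
import os
from pathlib import Path

# Hypothetical mapping of extensions to S3 "directories" (key prefixes);
# adjust to whatever categories your cluttered folder needs.
PREFIXES = {
    ".jpg": "images/", ".png": "images/",
    ".txt": "documents/", ".pdf": "documents/",
}

def dest_key(filename):
    """Return the S3 key a file should be backed up under, or None to skip it."""
    prefix = PREFIXES.get(Path(filename).suffix.lower())
    return prefix + Path(filename).name if prefix else None

def backup_folder(folder, bucket):
    import boto3  # deferred so dest_key() is testable without boto3 installed
    s3 = boto3.client("s3")
    for entry in os.scandir(folder):
        key = dest_key(entry.name) if entry.is_file() else None
        if key:
            s3.upload_file(entry.path, bucket, key)
```

Usage would be something like `backup_folder("/home/me/clutter", "my-backup-bucket")`, run by hand or on a schedule.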
This is a scenario where we are going to host a static website from a Docker container. For a lot of people this simple exercise is their first interaction with Docker, or with containers in general.
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
We’re going to spin up an Ubuntu Server 20.04 instance, install updates, and install all of the packages needed for Python 3. Lastly, we’re going to run a program to test that it works.
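The test program at the end can be as small as a version check. A minimal sketch you could save on the instance and run with `python3`:

```python
#!/usr/bin/env python3
# A minimal "does Python 3 work" check to run on the new instance.
import sys

def python_summary():
    """Return a one-line summary of the interpreter, e.g. 'Python 3.8'."""
    return "Python {}.{}".format(sys.version_info.major, sys.version_info.minor)

if __name__ == "__main__":
    print(python_summary())
```

If it prints a Python 3 version, the interpreter and your PATH are set up correctly.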
I was a little rusty with AWS, having not used it for a couple of months. I wanted a quick little project to refresh my memory and had been reading a couple of different articles on the subject. I decided to combine them into one project for fun. Follow along, it’s really quite simple to get running.
Two things we need:
I was recently asked to speak to the database experience I’ve had. Realistically, I had completed a few projects spinning up RDS Aurora instances, but that was about it. I really wanted to demonstrate my flexibility and ability to adapt using tools that are new to me. It was recommended that I run PostgreSQL in Docker and access it with DBeaver.
That’s exactly what I’m going to show you here today.
We’re going to need a few things.
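Once the Postgres container is running, any client connects with the same handful of parameters. As a small sketch, here is a helper that assembles them into a libpq-style connection URL; the defaults below assume a local container started with the stock `postgres` image mapped to port 5432, and are assumptions you should adjust.

```python
# Builds a libpq/SQLAlchemy-style connection URL from the parameters you would
# otherwise type into DBeaver's connection dialog. Defaults assume a local
# "docker run -p 5432:5432" Postgres container; change them to match yours.
def postgres_url(user="postgres", password="postgres",
                 host="localhost", port=5432, db="postgres"):
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

print(postgres_url())
```

In DBeaver itself you fill in the same host, port, database, user, and password fields rather than pasting a URL, but keeping them in one place like this makes it easy to sanity-check the container’s settings.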
In this scenario, I am going to show you how to completely configure and deploy an AWS VPC with the aid of the powerful tool Terraform. Because it is Infrastructure as Code (IaC), this project shows just how easy it can be to replicate resources.
Feel free to follow along, I have uploaded all files to GitHub.
Here is the layout of the VPC we will be creating. It will host two EC2 instances: a webserver in a public subnet, and a database in a private subnet.
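The addressing behind that layout can be sketched with Python’s stdlib `ipaddress` module: one VPC range carved into a public /24 for the webserver and a private /24 for the database. The 10.0.0.0/16 VPC CIDR is an assumption for illustration, not from the article.

```python
import ipaddress

# Carve the VPC CIDR into /24 subnets, taking the first two.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = vpc.subnets(new_prefix=24)
public_subnet = next(subnets)   # 10.0.0.0/24, webserver EC2 instance
private_subnet = next(subnets)  # 10.0.1.0/24, database EC2 instance

print("public: ", public_subnet)
print("private:", private_subnet)
```

These are exactly the CIDR blocks you would hand to Terraform’s subnet resources; computing them programmatically avoids overlapping-range typos.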
I have uploaded all my files to GitHub so you can follow along.
I am going to take you through the steps of setting up an environment for automated building, testing, and deployment, using containers and services hosted in the cloud.
What we need:
I have uploaded all the files to GitHub so you can follow along.
Docker Swarm is an open-source container orchestration platform and is the native clustering engine for and by Docker. It allows you to manage multiple containers deployed across multiple host machines.
One of the key benefits associated with Docker Swarm is the high level of availability offered for applications. In a swarm, there are multiple worker nodes and at least one manager node that is responsible for managing the worker nodes’ resources and ensuring that the cluster operates efficiently.
Let’s start setting up our cluster in…
Continuing with Terraform, it’s amazing how easily it cuts through tedious tasks, such as setting up more complicated infrastructure, turning something you would assume takes hours into minutes, and making it easily replicable if need be. The end result of this project will be a completed VPC peering connection request across two AWS accounts.
It’s assumed that the two VPCs you need peered have already been created. …
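Under the hood, Terraform’s peering resource drives the EC2 `CreateVpcPeeringConnection` API, which the accepter account then approves. As a sketch of that call via boto3 (all VPC and account IDs below are placeholders):

```python
def peering_request(requester_vpc, accepter_vpc, accepter_account, region):
    """Build the kwargs for EC2 CreateVpcPeeringConnection across accounts.

    The cross-account part is just PeerOwnerId: the other account's ID.
    All values passed in here are placeholders for your real IDs.
    """
    return {
        "VpcId": requester_vpc,          # VPC in the requesting account
        "PeerVpcId": accepter_vpc,       # VPC in the other account
        "PeerOwnerId": accepter_account, # 12-digit accepter account ID
        "PeerRegion": region,            # region of the accepter VPC
    }

def request_peering(**kwargs):
    import boto3  # deferred so peering_request() is testable offline
    return boto3.client("ec2").create_vpc_peering_connection(**kwargs)
```

The connection then sits in a `pending-acceptance` state until the accepter side approves it, which is the step Terraform automates across the two accounts.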
Today’s project will utilize two major cloud computing tools.
Terraform is an infrastructure orchestration tool (also known as “infrastructure as code” (IaC)). Using Terraform, you declare every single piece of your infrastructure once, in static files, allowing you to deploy and destroy cloud infrastructure easily, make incremental changes to it, perform rollbacks, version your infrastructure, and so on.
Amazon created an innovative solution for deploying and managing a fleet of virtual machines — AWS ECS. Under the hood, ECS utilizes AWS’s well-known concept of EC2 virtual machines, as well as CloudWatch for monitoring them, auto scaling groups (for provisioning and deprovisioning machines depending…