Cloud Resume Challenge


The aim of this project is to create a digital resume with a working visitor counter. Sounds simple enough, but as you will see, there are a lot of moving parts.

The project can be broken down into two sections: the front end and the back end.

Front End

While some may consider this the easiest part of the challenge, to me it was the most time-consuming. I planned for my resume page to be part of my blog, so I dived in trying to build this out. The official guidance was to just write some simple HTML and CSS, but naturally this was not going to be enough for my state-of-the-art blog. So I bashed my head against the wall wrestling with a variety of “low-code editors”, which were often so confusing I would have been better off just taking a 100-hour web development course.

After much time in the desert, I arrived at the promised land in the form of Hashnode. I decided to use Hashnode to manage my blog and created a separate page hosted on a subdomain for my resume. With some DNS magic, I managed to get this to work, and this setup meets my needs perfectly.

DNS, CloudFront, Hosting, SSL Certificate

With my web files hosted in an S3 bucket, I wanted visitors to be able to reach the site without making the bucket itself publicly accessible. This is possible with a CloudFront distribution, which also allows for a secure HTTPS connection; that in turn requires an SSL certificate attached to the CloudFront distribution.
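For reference, restricting a bucket this way typically comes down to a bucket policy that only trusts the CloudFront service principal for one specific distribution. A rough sketch of such a policy (the bucket name, account ID, and distribution ID below are placeholders, not my actual values):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-resume-bucket/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EXAMPLEID"
        }
      }
    }
  ]
}
```

With a policy like this in place, requests that go straight to the bucket are denied, while requests arriving through the named distribution succeed.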

As my main domain was dedicated to the blog, I chose to set up a subdomain to host my resume page. My domain is registered with a registrar other than Route 53, but after I created the required CNAME records, AWS Certificate Manager successfully issued a certificate for the subdomain. My CloudFront distribution now had a custom domain name and served traffic over HTTPS.

Back End

To create a website visitor counter, we need a database to store the count. I used DynamoDB for this, and a Lambda function written in Python performs actions on the table. We don’t want direct communication between the JavaScript code and DynamoDB, so I used API Gateway to handle that aspect.
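A minimal sketch of what such a Lambda handler might look like, assuming a table called visitor-count with a single counter item keyed by id (the table, key, and attribute names here are illustrative, not taken from my actual setup):

```python
import json


def build_response(count):
    """Format the HTTP response that API Gateway returns to the browser.
    The CORS header is what allows the JavaScript on the resume page
    to read the response."""
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json",
            # Tighten this to your site's domain in production.
            "Access-Control-Allow-Origin": "*",
        },
        "body": json.dumps({"visits": count}),
    }


def lambda_handler(event, context):
    # boto3 is available in the Lambda runtime; it is imported here so
    # the response helper above can be exercised without AWS credentials.
    import boto3

    table = boto3.resource("dynamodb").Table("visitor-count")  # hypothetical name
    # Atomically increment the counter and read back the new value.
    result = table.update_item(
        Key={"id": "resume"},
        UpdateExpression="ADD visits :inc",
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    return build_response(int(result["Attributes"]["visits"]))
```

The ADD update expression is what makes the increment a single atomic operation, so there is no separate read-then-write step to race against concurrent visitors.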

I encountered a few errors in this neck of the woods: my DynamoDB table updated successfully when the API URL was invoked, but I still could not get the visitor counter to display an appropriate response. After updating the response headers and debugging the JavaScript code, I managed to get it to work. To find out what was happening under the hood, I used CloudWatch to log errors from API Gateway, after configuring the necessary permissions in IAM.

A valuable lesson I learned while completing this section is that error logs are only useful if you filter them properly. They act as a map guiding you to the right destination; without them, you are walking blind: you don’t actually know what the problem is, and you’ll undoubtedly spend a lot of time lost.

CI/CD and Automation

I did not want to keep re-uploading files to S3 every time I made a change and manually invalidating the CloudFront distribution cache. While there are a few tools that can be used to build a CI/CD pipeline, I settled on GitHub Actions. I connected VS Code to GitHub using SSH keys, and GitHub to AWS using access keys. I then created a YAML workflow file that describes which actions should be taken when code is pushed to a GitHub repository.
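The workflow file can be sketched roughly like this (the branch, folder, bucket name, distribution ID, and secret names are placeholders for whatever your own setup uses):

```yaml
name: Deploy resume to S3

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Sync site files to S3
        run: aws s3 sync ./site s3://my-resume-bucket --delete

      - name: Invalidate CloudFront cache
        run: aws cloudfront create-invalidation --distribution-id EXAMPLEID --paths "/*"
```

Each push to the main branch then uploads the changed files and invalidates the cache automatically, replacing both manual steps at once.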

While this is not particularly complicated, I found myself making simple errors that prevented the workflow file from executing. These included creating .github/workflows in another folder instead of the repository root, and forgetting that GitHub Secrets have to be configured for every repository you create. Discovering the “empty commit” (git commit --allow-empty) was useful, as an issue I faced early on was that I could not push to GitHub if there were no changes (I would get the "everything is up to date" message). For example, when I discovered that I had forgotten to set up secrets for a new repo, my code had not changed, so I could not push to trigger the workflow.

Considerations

I plan to rewrite this entire project as IaC (infrastructure as code). I am also aware that the security of this project could be strengthened by securing the API, ensuring each service has only the permissions it requires, and adding a WAF.