7 Cloud Setup

Lab 7: Google Cloud Setup

WARNING: 🚨 DO NOT AT ANY POINT ENTER ANY CREDIT CARD INFORMATION 🚨

While not having a URL has probably left a sour taste in your mouth, don't despair. The goal here is to set up our PHP projects so that they have a link that people can access, just like we had when everything was only HTML and CSS.

Why can't we just use MoGenius or CloudFlare, etc.?

PHP and other server technologies run actual code on a server. That is scarier than just serving static content, which is what HTML and CSS are: they are just files. With PHP, however, we have code that we want to execute. While startups with some type of free tier pop up from time to time, most require a credit card. So I needed to look for a cloud hosting provider that:

  • Offered a Free Tier
  • Supported Modern CI/CD Practices
  • Did not require a credit card

AT NO POINT WILL YOU NEED TO ENTER CREDIT CARD INFORMATION. If Google does ask you to enter your credit card, please ignore it.

This is just hard to find these days, because there are a lot of fraudulent sites out there. Once you have PHP, you can start collecting banking information and building more malicious websites, so many of the free providers require a credit card, but not all of you have one. Hence why we ended up with Google Cloud. It's not as ideal as Mogenius was, but it is more established and works as Infrastructure as a Service. This means it lets you customize everything about your setup, which is different from other providers that only let you run code (but required credit cards).

To get around this, Google Cloud has an education program that we were able to utilize.

One of the drawbacks of this approach is that more setup is required to get your site up and running. I'll try my best to explain what I think is pertinent along the way.

0 - Fork your comp127 repo to your personal repo space

Just like we did in the original lab setup, you'll create a fork of your existing repo. Most of what we do here with regard to secrets will happen in that fork.

1 - Get the Cloud Credits

You should see in Discord or Canvas a link that you can use to apply for education credits. Click on that link and you should see an application to get your credits.

link to apply for credits

Fill out your information and provide your u.pacific.edu email address to verify your student status

Once you submit, you should get an email with a verification link. Click on that link, and you should see a message saying you've been verified. At that point you'll get a second email that contains your coupon code. Make sure to click to redeem it, and then Accept and Continue once you're presented with the credits agreement.

You may end up seeing something with a coupon automatically applied, like this:

credit application form

Important: If you end up seeing Google asking for credit card information, it is most likely because your browser has you signed in with your personal account. Make sure that the account selected in any Google Cloud page is your school email.

2 - Setup Google Cloud

Specify a new project

You may see something called My First Project after clicking accept, but I'm going to have you name the project something like COMP 127. If you already have other projects, then once you click accept and go to the dashboard, you should see something like this:

selecting a project dashboard

If you don't see something like this, just click this link to start a new project (opens in a new tab)

Name the project COMP 127.

naming the project 127

After creating the project, make sure to click Select Project to make it the default project.

selecting the project

Doing so will allow us to put all of the resources for this particular project into one place. Think of it as a folder that will house everything related to COMP 127.
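
If you end up using the gcloud CLI on your own machine later, the rough analogue of "Select Project" there is setting a default project. This is just a sketch; <YOUR_PROJECT_ID> is a placeholder for the project ID Google assigns you:

# Make COMP 127 the default project for any gcloud commands you run locally
gcloud config set project <YOUR_PROJECT_ID>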

Create a Service Account

Once you select the project, you should be brought to a page that looks like this (but if you aren't, keep reading)

google dashboard

Verify that your project lists COMP 127. The next step is to create a service account, which will act as a service that mimics or impersonates us when we want to build a docker image, push it, and deploy it, so that we do not have to do these steps ourselves. Ideally, we want continuous deployment just like we had with Cloudflare: when we sync our personal fork, it just deploys to the cloud. To make that happen, we need a service account that can perform these steps on our behalf.

Let's start by clicking View all Products. If you don't see View all Products, click on the three lines in the top left, hover over Solutions, and then go to All Products.

google console all products

Click on the IAM & Admin section, and you should be presented with this screen, where you should click Service Accounts.

google console iam click on service accounts

FYI: Anytime I mention a particular section or page in Google Cloud, know that since it is Google, you can always search for that page as long as you are in the Google Cloud console page (opens in a new tab)

Once you're on the Service Accounts page, you should click on Create Service Account

google console iam service new service account

Here use these options:

  1. For the name use COMP 127 Service
  2. For the id specify comp-127-service
  3. Click Create and Continue

create new service account google cloud

You'll then need to specify a few different roles for your service account, which act like permissions:

Let's start by specifying the cloud run admin role:

selecting the cloud run admin role

Then, once that's selected, you'll need to select Add Another Role.

selecting the add another role

You'll add three more roles here:

  1. Artifact Registry Writer
  2. Storage Admin
  3. Service Account User

Make sure that you specify those exact roles as there are many that look alike.

After adding the roles separately, click Continue, and then in Step 3, click Done.
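
If you prefer the command line and have the gcloud CLI installed, the console clicks above map roughly to the commands below. This is only a sketch: <YOUR_PROJECT_ID> is a placeholder for your actual project ID, and the console route is perfectly fine if you'd rather stick with it.

# Create the service account (same name and id as in the console steps)
gcloud iam service-accounts create comp-127-service \
  --display-name="COMP 127 Service" \
  --project=<YOUR_PROJECT_ID>

# Grant the four roles listed above to the new service account
for ROLE in roles/run.admin roles/artifactregistry.writer roles/storage.admin roles/iam.serviceAccountUser; do
  gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> \
    --member="serviceAccount:comp-127-service@<YOUR_PROJECT_ID>.iam.gserviceaccount.com" \
    --role="$ROLE"
done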

At this point you should be back on the Service Accounts page, where we'll save a key to use later. You'll do this by selecting the options for the Service Account and clicking Manage Keys.

Grabbing the manage keys portion

Warning: You'll notice that when you do this, Google will mention that storing keys is not secure. Realize that this is like saving your password into a file, since the key acts as the password for your account. Google recommends using Workload Identity Federation instead, but for our workload I just couldn't get it to work. Please heed their warnings: protect that file as closely as possible and keep it secure.

From here click on the Add Key->Create New Key button:

Doing the Add Key Button

Make sure the key type is JSON and then click Create. A key in JSON format will be downloaded to your computer, and Google will show a message confirming this. Make sure you save it in a secure location.
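
For reference, the same key can also be created from the command line. A rough gcloud equivalent looks like this (the file name comp-127-key.json is just an example I picked):

# Download a JSON key for the service account (treat this file like a password)
gcloud iam service-accounts keys create comp-127-key.json \
  --iam-account=comp-127-service@<YOUR_PROJECT_ID>.iam.gserviceaccount.com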

3 - Saving a Key in your personal fork

Now go to your personal fork of your repo and go to the Settings page, then click Secrets and Variables and then Actions.

Going to github actions secrets and variables

From there click on New repository secret, and then once on that page:

  1. Specify the name as GCP_SERVICE_ACCOUNT_KEY
  2. Open the key file that was saved to your computer in a text editor, copy the entire contents, and paste it into the Secret field.
  3. Click Add Secret

Adding a secret key

Once the secret has been added, you should see it listed in the settings, and this will allow your personal fork to deploy.
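
If you have the GitHub CLI installed, you can also add the secret without the web UI. A rough equivalent, with placeholder repo and file names, is:

# Store the downloaded key file as a repository secret on your fork
gh secret set GCP_SERVICE_ACCOUNT_KEY \
  --repo <your-username>/<your-fork> \
  < comp-127-key.json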

4 - Setup Google Artifact Registry

Once we build a docker image, we need a way to store the image we have just built so that it can later be pulled and run via docker. This means the image we built needs to be saved on the web somewhere. To do this, we'll set up our Artifact Registry, which Google uses to let us store containers and refer to them later.

Let's go back to Google Artifact Registry by searching for it in the google cloud console page (opens in a new tab).

You can search for artifact registry in the search bar and select the option that comes up:

searching for the artifact registry

Once you get to the page, click the CREATE REPOSITORY button either in the upper left, or in the middle of the page. Doing so will get you to the create repository page. Look at the image below and fill out the pertinent details.

create repository for artifact registry

Following the image above:

  1. Name the repository artifact-127
  2. Select the region us-west1
  3. Keep the format as Docker

create repository artifact features, google encryption, immutable disabled, cleanup delete artifacts

You'll also notice that in the image above we kept the original settings, except for one change:

  1. Select Delete artifacts

For Delete artifacts, we will need to specify a cleanup policy that just states when we should delete stuff. Go ahead and click ADD A CLEANUP POLICY and follow the image below.

create repository new cleanup policy

Make sure that you click Done and then Create, which will lead you back to our new artifact being listed below.

artifact registry listed
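
For reference, the repository you just created through the console corresponds roughly to this gcloud command (the cleanup policy is easiest to configure in the console, so it isn't shown here, and <YOUR_PROJECT_ID> is a placeholder):

# Create a Docker-format Artifact Registry repository in us-west1
gcloud artifacts repositories create artifact-127 \
  --repository-format=docker \
  --location=us-west1 \
  --project=<YOUR_PROJECT_ID>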

We'll then want to get the URL that is listed for the docker container, which is something we'll need in the next step. To get the link, click on the artifact in the registry, and then click the copy button that shows up on the next page:

artifact registry copy button

Make sure to save the URL somewhere for now, as we'll need it in the next step (but we will add /127-app to what we pasted).
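
To give you an idea of what you should end up with, Artifact Registry image URLs follow a standard pattern; with the image name appended, yours should look roughly like this (the project ID here is a placeholder):

# <REGION>-docker.pkg.dev/<PROJECT_ID>/<REPOSITORY_NAME>/<IMAGE_NAME>
us-west1-docker.pkg.dev/<YOUR_PROJECT_ID>/artifact-127/127-app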

Setup Google Cloud Run

Now that we have our artifact registry created, Google is able to hold our containers in the cloud. Next we'll need to create a service that will allow a container to actually run. We'll do this by creating a service on Google Cloud Run. Let's start by searching for cloud run in the search bar.

you should also see it on the main dashboard page

Once you are there, you should see a page like the one below, which mentions that you can try it in console.

cloud run start

There should be an option at the top to Create a Service or to select DEPLOY CONTAINER->SERVICE

Once there, you'll be at a Create Service page. Leave the default option of Deploy one revision from an existing container image as is.

Paste the copied artifact registry from before into the Container Image URL textbox (making sure that it has /127-app at the end)

Give the container the service name run-127, and select the region us-west1.

cloud run initial setup

Next select Allow unauthenticated invocations, and then click the heading Container(s), Volumes, Networking, Security, which will lead you to this page:

cloud run setup create service

As you scroll down, I want you to edit the resources to use 128 MiB of memory and then click DONE slightly lower in the box.

cloud run setup edit container

Once you click Done and scroll down, you'll see a few more options. Leave them alone, except change the Maximum number of instances to 1 and uncheck Startup CPU boost.

cloud run setup edit container page 2

Once done here, you can finally click the blue CREATE button at the bottom.
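
For reference, once an image has actually been pushed, the console settings above correspond roughly to this gcloud command. You don't need to run it yourself, since the GitHub Action below will handle deploys, but it's a useful sketch of what is being configured (project ID is a placeholder):

gcloud run deploy run-127 \
  --image us-west1-docker.pkg.dev/<YOUR_PROJECT_ID>/artifact-127/127-app:latest \
  --region us-west1 \
  --platform managed \
  --allow-unauthenticated \
  --memory 128Mi \
  --max-instances 1 \
  --no-cpu-boost \
  --project <YOUR_PROJECT_ID>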

Congrats, you just finished most of the Google Cloud portion of this doc!

5 - Configure Github Actions

Add github actions files in your comp127 repo (NOT THE FORK)

While you will not actually enable GitHub Actions in the comp127 repo, in order for our fork structure to work, we need to add the files to the original repo so that when we sync the fork, we can enable Actions to push to the cloud.

Adding these files to your comp127 repo first, rather than directly to your personal fork, also keeps them from being discarded in a future repo sync.

Add gitignore

Remember, these next steps will happen in your comp127-hosted repo, and not in your personal fork.

Make sure that you see /comp127/ as part of your repo URL. First, we need to modify our gitignore and add this portion:

# To make sure we don't commit a credentials file into our repo
gha-creds-*.json

add gitignore clause

Save and commit the file in your comp127 repo.

Add a dockerignore

A .dockerignore file is similar to a .gitignore, but it is used by docker to help streamline our builds. Having a .dockerignore tells docker to refrain from adding certain files and directories to our images. For example, we don't want docker to bundle the .git folder (the hidden git repository folder) into our docker image.

At the base of your comp127 repository, create a new file in your repo called .dockerignore and add the following information:

# Ignore node_modules or vendor if applicable
node_modules
vendor
 
# Ignore environment and configuration files
.env
*.env
 
# Ignore Git files
.git
.gitignore
 
# Ignore local Docker Compose files
docker-compose.yml
 
# Ignore IDE and editor files
.vscode
.idea

Add a Dockerfile.prod file

Because Google requires some options that are specific to running on Google Cloud, we are going to create a new Dockerfile. To keep in line with what we have been doing, however, we'll create a new file called Dockerfile.prod.

Here are the contents of that file:

FROM phpstorm/php-71-apache-xdebug
 
# Copy application files
COPY . /var/www/html/.
 
# Update Apache to listen on port 8080 and all interfaces (0.0.0.0)
RUN sed -i 's/Listen 80/Listen 8080/' /etc/apache2/ports.conf && \
    echo "ServerName localhost" >> /etc/apache2/apache2.conf && \
    sed -i 's/<VirtualHost \*:80>/<VirtualHost *:8080>/' /etc/apache2/sites-available/000-default.conf
 
# Expose port 8080 for Cloud Run
EXPOSE 8080

Please create this file at the base level of your repo, just like before with .dockerignore.
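
If you want to sanity-check the image before wiring up the cloud pieces, you can build and run it locally. The tag 127-app-local is just an example name I made up for this sketch:

# Build the production image and run it on port 8080
docker build -f Dockerfile.prod -t 127-app-local .
docker run --rm -p 8080:8080 127-app-local
# Then visit http://localhost:8080 in your browser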

Come up with variables for yaml file

Lastly, we'll need to have a github actions file that will:

  1. connect to our google cloud
  2. build docker
  3. push our docker image
  4. deploy our docker image

To do this, we need to come up with a couple of variables

Create our .github/workflows/deploy.yml file

Please make sure that you spell this correctly! I spent 15 minutes questioning whether I made the right choice to be a computer scientist, trying to figure out why this cloud stuff wasn't running, only to realize that I had mistakenly named the folder workflow instead of workflows.

But here is the content that you will need to add. Please make sure that you pay special attention to the names that you need in the highlighted portion below:

Your PROJECT_ID will most likely be comp127- followed by the part that I kept blurring out in the images above. I wanted to make sure that you all did not just blindly copy that one, which is why I kept blurring it out!

name: Deploy to Google Cloud Run with Service Account Key
 
on:
  push:
    branches:
      - main
 
jobs:
  deploy:
    runs-on: ubuntu-latest
 
    env:
      PROJECT_ID: "<YOUR_PROJECT_ID>" # Mine was based on the project name I had: comp127- plus the part that was blurred
      REGION: "us-west1" # Assuming you picked this region
      REPOSITORY_NAME: "<NAME_OF_YOUR_REPOSITORY>" # I called mine "artifact-127"
      IMAGE_NAME: "127-app"
      IMAGE_TAG: "latest"
      CLOUD_RUN_SERVICE_NAME: "run-127"
 
    steps:
      - uses: actions/checkout@v4
 
      - name: Authenticate to Google Cloud with Service Account Key
        uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SERVICE_ACCOUNT_KEY }}
 
      - name: Configure Docker for Artifact Registry
        run: |
          echo "Configuring Docker for Google Artifact Registry..."
          gcloud auth configure-docker ${{ env.REGION }}-docker.pkg.dev --quiet
 
      - name: Set IMAGE_URL Environment Variable
        run: |
          echo "IMAGE_URL=${{ env.REGION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.REPOSITORY_NAME }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }}" >> $GITHUB_ENV
 
      - name: Build Docker Image and Push to Artifact Registry
        run: |
          echo "Building and Pushing Docker image with tag: $IMAGE_URL"
          docker build -f Dockerfile.prod -t $IMAGE_URL .
          docker push $IMAGE_URL
          
      - name: Deploy to Cloud Run
        run: |
          gcloud run deploy ${{ env.CLOUD_RUN_SERVICE_NAME }} \
            --image $IMAGE_URL \
            --region ${{ env.REGION }} \
            --platform managed \
            --allow-unauthenticated \
            --project ${{ env.PROJECT_ID }}

The most important one is your project ID. You'll need to commit this file.

You'll also need to enable GitHub Actions in your personal fork: go to the Actions tab (or the Actions section of Settings) and click to enable them.

6 - Celebrate

Now that we have this, anytime we want to push our content, we'll go to our personal repo and sync the fork, and the site should appear at the URL given for the service under Cloud Run!

If you're not sure what the URL is, go back into the Google Cloud console, search for Cloud Run, click on your run-127 service, and the URL will be near the top of the screen and will start with run-127!
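
You can also grab the URL from the command line if you have gcloud installed; something like this should print just the URL (assuming the service name and region from earlier):

# Prints the public URL of the Cloud Run service
gcloud run services describe run-127 \
  --region us-west1 \
  --format="value(status.url)"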