
Syncing your database and assets across different environments (local, staging and production) is an issue that plagues most developers, whether deploying websites or apps. It’s tedious, time-consuming and error-prone 😑. Every time I find myself knee-deep in an SSH terminal importing and exporting Postgres database dumps I promise myself “never again”.

The import/export dance 💃🏾

First, let’s define a couple of common scenarios where I find myself doing the import/export dance.

1) Deploying local to staging

At the beginning of development you’ve been working locally on your machine with a test database and test user uploads.

Your colleagues or client want to see it up and running so you create a staging server. When you go to deploy your site to this staging server you do the dance:

  1. Create a local database dump
  2. Zip up existing user uploads
  3. Upload that database dump and zip file to the staging server via SFTP/SCP
  4. SSH into the staging server
  5. Import the database dump and unzip the user uploads
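
In shell terms, the dance looks something like this (a sketch only; the hostnames, paths and database names are placeholders):

# 1 & 2: dump the database and zip the user uploads locally
pg_dump -U craft -Fc craft_db > craft_db.dump
zip -r uploads.zip web/uploads

# 3: copy both up to the staging server
scp craft_db.dump uploads.zip deploy@staging.example.com:/tmp/

# 4 & 5: then, on the staging server, restore and unzip
ssh deploy@staging.example.com
pg_restore --clean -U craft -d craft_db /tmp/craft_db.dump
unzip -o /tmp/uploads.zip -d /var/www/site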

My experience is that at least 10 other things will happen in the meantime to frustrate this process: permissions, forgotten passwords, wrong versions…

2) Importing production to local

You haven’t worked on a website for a while but the client comes to you for a new feature. You still have a local copy of the code but the database and/or assets are out of date. You have to log in to the production server and do the dance.

3) Deploying staging to production

You have a staging site that is ready for deployment. The client has been loading all their content onto the staging site. You either have to repurpose the staging server as the production server or… do the dance.

The ideal, automated world

If you are regularly deploying websites to different environments, this begins to add up to an enormous amount of time doing tedious tasks.

There is a way to remove all of this faff though. First, create a more sensible architecture that centralises our backups in a single, accessible location and second, automate, automate, automate.

Single source of truth 📜

We first need to create a “single source of truth”: a location that holds the definitive versions of our database and asset backups.

All of our various environments will push and pull to and from this location. Ideally, all of the backups within this location will be versioned too.

This single source of truth needs to be in the cloud and remotely available so that our other environments (and colleagues/clients) can access it. In our case this is an AWS S3 bucket.
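
If you’re going the S3 route, creating a private, versioned bucket takes two AWS CLI commands (the bucket name here is just an example):

> aws s3 mb s3://feral-backups --region us-west-2
> aws s3api put-bucket-versioning --bucket feral-backups --versioning-configuration Status=Enabled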

Automation 🤖

We then need to automate as many of the steps from the import/export dance above as we can:

  • Automate the process of backing up our database and assets
  • Automate the process of pushing/pulling these backups to and from our single source of truth
  • Automate the process of restoring these backups once pulled down
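
As a rough illustration, the backup-and-push half of this could be a nightly cron job (a sketch, assuming Postgres and the AWS CLI; all paths and names are placeholders):

#!/bin/bash
# backup.sh: dump the database, zip the uploads and push both to S3
set -euo pipefail

STAMP=$(date +%Y%m%d-%H%M%S)
BACKUP_DIR=/var/www/site/storage/backups

# back up the database and the user uploads
pg_dump -U craft -Fc craft_db > "$BACKUP_DIR/db-$STAMP.dump"
zip -rq "$BACKUP_DIR/assets-$STAMP.zip" /var/www/site/web/uploads

# push everything to the single source of truth
aws s3 sync "$BACKUP_DIR" s3://feral-backups/craft-backups/my-site

Hook that into cron and the push side takes care of itself; pulling and restoring is a similar script in reverse.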

A scripting approach: craft-scripts

Let’s be clear, these aren’t new ideas. In fact, there is already a de facto library out there that handles many of the concepts mentioned above really well, called craft-scripts.

This library was created by Andrew Welch, a.k.a. nystudio107. As well as creating an absolute plethora of great plugins for Craft, he’s written a comprehensive blog post on the topic of DB and asset syncing that you should read before going any further.

This library uses bash scripts to give you the tools you need to do everything we mentioned above.

There are some limitations to this approach though.

  • You need to integrate it into your codebase (by adding the scripts folder to your project).
  • It uses bash scripts, so everything operates outside of the Craft ecosystem.
    • You can’t run any of the scripts via the CP, for example
    • You can’t configure anything via the CP
  • It requires additional environment configuration outside of the standard Craft .env file (via a scripts/.env.sh; see the sketch below).
  • This configuration is significantly different depending on the environment.
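
To give a flavour of that extra configuration, each environment’s scripts/.env.sh ends up holding per-environment values along these lines (the variable names here are illustrative rather than craft-scripts’ exact ones):

LOCAL_DB_NAME="craft_db"
LOCAL_DB_USER="craft"
LOCAL_ASSETS_PATH="/var/www/site/web/uploads"
REMOTE_SSH_LOGIN="deploy@staging.example.com"
REMOTE_DB_NAME="craft_db"
REMOTE_ASSETS_PATH="/var/www/site/web/uploads"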

While this certainly isn’t the end of the world, for me it becomes an issue when you are working on a number of sites and trying to automate provisioning and deployment via a service such as Laravel Forge. You have to start logging in to your servers again and copying/pasting configurations between environments, which can take a lot of extra time.

A plugin approach: craft-sync

🚨 This is a beta plugin — not yet ready for production!

To overcome some of these shortcomings we’ve created a plugin to completely handle syncing between environments: craft-sync.

This plugin lets you back up and restore both your database and assets from within the Craft 3 control panel. It also lets you push and pull these backups from AWS S3, again via the control panel. Furthermore, everything is configured via the “settings” panel and Craft’s native .env file.

The big benefit of this approach is that you only have to configure the plugin once, when you set up your site. After that you shouldn’t have to go near the command line again to back up and restore your database and assets.

You will also benefit from having a single source of versioned backups in a private S3 bucket. Even better, you can schedule these backups to run regularly. 🎉
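
You could, for instance, drive this from cron via Craft’s console runner. The command name below is hypothetical; check the plugin’s README for the real one:

# hypothetical console command: nightly backup pushed to S3 at 02:00
0 2 * * * cd /var/www/site && ./craft sync/backup >> storage/logs/sync.log 2>&1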

Prerequisites

You need to have awscli installed in every environment you plan on using the plugin in. This is the command-line tool that will perform the actual upload/download between our site and AWS.

On macOS:

> brew install awscli

On Ubuntu:

> sudo apt-get install awscli
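
You can confirm the install (and, optionally, that your keys can reach the bucket) before going any further:

> aws --version
> AWS_ACCESS_KEY_ID="..." AWS_SECRET_ACCESS_KEY="..." aws s3 ls s3://feral-backups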

Installation

Installing the plugin should be easy. Either search for it in the Craft Plugin Store or run:

composer require weareferal/craft-sync

from the command line.
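
If you go the composer route you’ll still need to install (enable) the plugin, either from the CP’s plugin settings or from the console. The sync handle below is an assumption; use whatever handle the plugin registers:

> ./craft plugin/install sync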

Configuration

First you need to add your AWS details. Go to the Settings > Plugins > Sync settings page:


You can enter the details directly, but it’s easier to migrate from environment to environment if you use environment variables instead. This means adding your AWS details to your .env file first:

AWS_ACCESS_KEY = "..."
AWS_SECRET_KEY = "..."
AWS_REGION = "us-west-2"
AWS_BUCKET_NAME = "feral-backups"
AWS_BUCKET_PREFIX = "craft-backups/my-site"

.env

and then referencing those environment variables from the settings page:
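
Environment-aware settings fields in Craft 3.1+ accept a $VARIABLE_NAME reference; assuming the plugin’s fields follow that convention, they would simply contain (field labels here are indicative):

Access Key: $AWS_ACCESS_KEY
Secret Key: $AWS_SECRET_KEY
Region: $AWS_REGION
Bucket Name: $AWS_BUCKET_NAME
Bucket Prefix: $AWS_BUCKET_PREFIX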

This is important as it means your access ID and secret key are not saved in the project.yaml file (which is checked into Git) and are instead only referenced via environment variables (which are not checked into Git).

Syncing, backing up and restoring

Once you have configured everything, head over to the Utilities > Sync section of the Craft CP to actually perform operations.

This is the guts of the plugin. Here you can:

  • Create a new local database backup (this piggybacks off the existing “Database Backup” utility action)
  • Push all local database backups to S3
  • Pull all remote S3 database backups to your local environment
  • Restore a particular local database backup

Similarly, you can do the same for your volume assets.

Volume backups

Craft natively handles database backup/restore, which makes life easier, but volume backups are another story.

This plugin collects all the volumes you have set up in your Settings > Assets section, zips them all up and places them alongside your database backups in the storage/backups folder.
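
Under the hood that amounts to something like the following (a sketch; the volume paths are placeholders):

# zip every local asset volume into storage/backups, stamped with a date
STAMP=$(date +%Y%m%d-%H%M%S)
zip -rq storage/backups/volumes-$STAMP.zip web/uploads web/images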

That should be everything you need to get started with craft-sync. If you’ve got any issues, ideas or feedback please get in touch via hello@weareferal.com and let us know.
