Recently, Firebase opened its first new location for the Realtime Database. You can read more about it here.

Betas on Firebase are usually here to stay. It can be frightening at first, because they do stipulate that “Use of multiple Realtime Database instances in different locations is a beta feature. This means that the functionality might change in backward-incompatible ways. A beta release is not subject to any SLA or deprecation policy”.

Welp, here we go.

Step 1: Think

We use three Firebase products:

  • Auth
  • Realtime Database (RTDB)
  • Functions

Our plan was not to change the Firebase project, just the RTDB location. Keeping the same project also solves the Auth problem, as Auth is linked to the project.

Migrating the RTDB meant several things:

  • Change the configuration of our React client and SSR
  • Change the configuration of our RTDB Hooks and HTTPS endpoints

We decided not to migrate the location of our functions because that changes their URLs. We have some hard-coded links in our email templates, so we did not want the URLs to change, but we still needed to point the functions to the new RTDB location.

Easy! Just update the default database?

Well, no. We couldn’t find the option in Firebase. It looks like the default database is the one named after your project, and for RTDB hooks you have to specify any other instance explicitly.


OK… then just restore a backup to the new database!

Well, no! Our database is over 1 GB, and Firebase cannot read more than 256 MB at once via their REST API. They do, however, specify “If you are having trouble restoring a backup from a very large database, please reach out to our support team.”, here. Well, thanks Firebase.


Step 2: Plan & Research

Migrating the RTDB

To avoid a long delay and back-and-forth discussions with Firebase, we decided to use the REST API to dump our data and restore it to the new location manually. We had to use the REST API because its quotas are bigger than the JS SDK’s for some reason.

To simplify this step, we used legacy tokens (database secrets) for authentication.

We wrote a Node.js script to fetch the structure of our RTDB, using the shallow flag. (We can provide the source code, but it’s not really open-source ready.)
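A minimal sketch of that first step, assuming a Node version with a global fetch; DB_URL and DB_SECRET are placeholders for the source database URL and its legacy token:

```js
// fetch-structure.js — list the top-level keys of the database without
// downloading any of the data underneath them.
const DB_URL = 'https://my-project.firebaseio.com'; // placeholder
const DB_SECRET = process.env.DB_SECRET; // legacy token (database secret)

async function listTopLevelKeys() {
  // shallow=true returns only the keys at this level, not their contents
  const res = await fetch(`${DB_URL}/.json?shallow=true&auth=${DB_SECRET}`);
  if (!res.ok) throw new Error(`Shallow fetch failed with ${res.status}`);
  return Object.keys(await res.json());
}

listTopLevelKeys().then((keys) => console.log(keys));
```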

After that, we fetched the nodes one by one. Firebase conveniently replies with a 413 HTTP status when you try to download a node that is too big, so we handled this by fetching the child nodes instead, and boom, we had our data ready to be restored.
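The “descend on 413” logic looked roughly like this (simplified, with the same DB_URL and DB_SECRET placeholders as above):

```js
// dump-node.js — fetch a node, falling back to its children when it is too big.
async function dumpNode(path) {
  const res = await fetch(`${DB_URL}${path}.json?auth=${DB_SECRET}`);

  if (res.status === 413) {
    // Node too big for a single read: list its children and fetch them one by one
    const shallow = await fetch(`${DB_URL}${path}.json?shallow=true&auth=${DB_SECRET}`);
    const keys = Object.keys(await shallow.json());
    const node = {};
    for (const key of keys) {
      node[key] = await dumpNode(`${path}/${key}`);
    }
    return node;
  }

  if (!res.ok) throw new Error(`GET ${path} failed with ${res.status}`);
  return res.json();
}
```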

Restoring required a bit of fine-tuning. We discovered that nodes over 80 MB could have trouble being ingested by Firebase, so we had to chunk the updates into pieces of 25 MB max.
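Here is a sketch of that chunked restore, with NEW_DB_URL and NEW_DB_SECRET as placeholders for the target instance and whatever token you authenticate with; only the ~25 MB chunk size comes from our actual runs:

```js
// restore-node.js — write a dumped node back in chunks of at most ~25 MB,
// grouped by top-level keys.
const NEW_DB_URL = 'https://my-project-rtdb-europe.europe-west1.firebasedatabase.app'; // placeholder
const NEW_DB_SECRET = process.env.NEW_DB_SECRET;

const MAX_CHUNK_BYTES = 25 * 1024 * 1024;

async function patchNode(path, body) {
  // PATCH writes the listed children without overwriting the rest of the node
  const res = await fetch(`${NEW_DB_URL}${path}.json?auth=${NEW_DB_SECRET}`, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`PATCH ${path} failed with ${res.status}`);
}

async function restoreNode(path, data) {
  let chunk = {};
  let chunkSize = 0;

  for (const [key, value] of Object.entries(data)) {
    const entrySize = Buffer.byteLength(JSON.stringify(value)) + key.length;
    if (chunkSize + entrySize > MAX_CHUNK_BYTES && Object.keys(chunk).length > 0) {
      await patchNode(path, chunk);
      chunk = {};
      chunkSize = 0;
    }
    chunk[key] = value;
    chunkSize += entrySize;
  }

  if (Object.keys(chunk).length > 0) await patchNode(path, chunk);
}
```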

Firebase had trouble ingesting some nodes, so we’ll have to migrate them by hand because they were badly designed and too complex.

Also, Firebase arrays are a pain in the butt: if you miss an entry in a JSON array, Firebase does not rebuild the array, freaks out, and refuses to import it.

That is how we handled RTDB.

Migrating Cloud Functions

There are a few things to consider when you migrate Cloud Functions to a new database instance.

Migrating Database Hooks to the new instance

Firebase RTDB is a reactive database, meaning you can trigger functions upon data change.

Sadly, they use the default database if not told otherwise.

We had to replace all of our default-database triggers, built with functions.database.ref(…), with triggers that explicitly target the new instance via functions.database.instance(…).ref(…).
We stored this in a global variable in the file where we initialise Firebase, for convenience.
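Something along these lines, where the instance name is a placeholder for our new database:

```js
// firebase.js — expose a trigger builder that always targets the new RTDB
// instance. (admin.initializeApp with the new databaseURL lives here too —
// see the sketch further down.)
const functions = require('firebase-functions');

// Name of the new RTDB instance (placeholder)
const RTDB_INSTANCE = 'my-project-rtdb-europe';

// Build every database trigger from this instead of functions.database directly
const rtdbInstance = functions.database.instance(RTDB_INSTANCE);

module.exports = { rtdbInstance };
```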

Then we were able to import this shiny new variable in our database hooks.
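A hook rewritten to use it would look something like this (the path and trigger here are made-up examples):

```js
// messages.js — a database hook built from the shared instance builder above.
const { rtdbInstance } = require('./firebase');

exports.onMessageCreated = rtdbInstance
  .ref('/messages/{messageId}')
  .onCreate((snapshot, context) => {
    console.log(`New message ${context.params.messageId}:`, snapshot.val());
    return null;
  });
```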

A few replaces here and there and we were good to go.

Use the correct instance in your functions.

Once you call initializeApp with the correct config (in particular the databaseURL), all your calls are made to the specified database. Nothing much to see here.
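For reference, a sketch of that initialisation; the databaseURL is a placeholder, but instances in the new locations are served from the firebasedatabase.app domain rather than firebaseio.com:

```js
// Point the Admin SDK at the new instance so admin.database() hits the right one.
const admin = require('firebase-admin');

admin.initializeApp({
  databaseURL: 'https://my-project-rtdb-europe.europe-west1.firebasedatabase.app', // placeholder
});

// From here on, admin.database().ref(...) reads and writes go to the new instance
```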

Test the deploy

We have over 250 functions.

If you’ve been using Firebase, you might have stumbled upon this error: “You have exceeded your deployment quota, please deploy your functions in batches by using the --only flag, and wait a few minutes before deploying again. Go to https://firebase.google.com/docs/cli/#deploy_specific_functions to learn more”.

The rate limit for Firebase function deploys is quickly reached, as it is only 80 write calls per minute.

To overcome this, we split our functions deploy commands into nice chunks, using the --only flag.
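We ran the chunks by hand, but the idea boils down to something like this (the function names and batch size are made up):

```js
// deploy-in-batches.js — deploy functions in groups using the --only flag.
const { execSync } = require('child_process');

const functionNames = ['onMessageCreated', 'onUserDeleted', 'sendWeeklyDigest' /* ... */];
const BATCH_SIZE = 20; // small enough to stay under the deploy quota

for (let i = 0; i < functionNames.length; i += BATCH_SIZE) {
  const batch = functionNames.slice(i, i + BATCH_SIZE);
  const only = batch.map((name) => `functions:${name}`).join(',');
  // Each deploy takes a few minutes, which naturally spaces out the batches
  execSync(`firebase deploy --only "${only}"`, { stdio: 'inherit' });
}
```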

That’s a common Firebase issue, but it is still nice to remember.

Step 3: Execute

With Vincent living on Reunion Island for now, we decided that I had to wake up early so that we’d be synchronised to do this at 7 AM today.

Our plan was:

  • Put the web app into maintenance mode by displaying a disclaimer to users (1 min)
  • Clean our database by deleting unused nodes (old logs…), making it faster to export and import (1 min)
  • Back up the RTDB (5 min)
  • Run the migration script (5 min)
  • Export, from the Firebase console, the few nodes that wouldn’t import via the script (5 min)
  • Import them into the new instance from the Firebase console (5 min)
  • Deploy all the new functions in chunks (roughly 7 min per chunk; we could run them in parallel once the rate limit had reset)
  • Deploy the web app with the new environment variables (15-20 min)

We had planned for a one-hour time frame.

It went well !

Step 4: Cleaning

This is a painful step. We wanted to delete the old database to avoid confusion.

In theory, you could just execute the CLI’s database:remove command against the root of the old instance.

A blog article describes how this works, but basically, the CLI shallow-fetches the data to find the structure, automatically detects large nodes, and performs a chunked delete efficiently.

There are a few subtleties to this, but basically, if you have some big nested nodes like we do, run the command with the --debug flag, identify which nodes are taking a long time, and delete those manually with a REST call or a ref().set(null) call.
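For the stubborn nodes, the manual ref().set(null) route is just a small Admin SDK script, roughly like this (the URL and path are placeholders, and it assumes GOOGLE_APPLICATION_CREDENTIALS points at a service account):

```js
// cleanup.js — delete a big node from the old instance by hand.
const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: 'https://my-old-project.firebaseio.com', // the old default instance (placeholder)
});

// Setting a ref to null deletes it; for huge nodes, do this child by child
admin.database().ref('/logs/2017').set(null)
  .then(() => {
    console.log('Deleted');
    process.exit(0);
  });
```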

We did not try to increase the defaultWriteSizeLimit setting on the database, but I think it could have helped delete the data in bigger chunks.

Thoughts and conclusion

We were a bit scared to proceed with such a manual migration to a beta product. However, we think having our data closer to our Clever Cloud servers and our users can only be beneficial for the platform.

We were also a bit disappointed by the lack of migration tools for Firebase, especially for migrating data from one instance to another or for setting the default database of a project. This could have saved us a lot of time.

Thanks for reading, and I hope you’ll enjoy our Belgium-powered servers as much as the good old Uncle Sam ones.

Clément Devos
