

We’ve been a bit quiet with product updates lately, so I wanted to tell you all about what we’re currently working on.
We’re working on some pretty significant infrastructure changes. In fact, we’re making the biggest, most far-reaching changes to how Beacon works that we’ve ever made.
The core of Beacon, the way in which we create, read, and update records, has remained largely unchanged since our CEO Chris built it in 2017 (for the database nerds: we’ve been using Postgres running on AWS RDS). This system went live when we had zero Beacon customers and Theresa May was Prime Minister. Things have changed a lot since then. Over 1,500 customers now rely on Beacon, and as larger charities continue to choose Beacon, the size of the biggest accounts keeps increasing. Last weekend our largest customer casually imported 12.5 million records in one go.
We want to keep using the best tools available to us, and we want to continue to innovate. Being smaller than our main competitors has a number of advantages, and the best one is that we can innovate faster and adopt new technologies more quickly. We want Beacon to be powered by the latest and greatest tools out there, and that sometimes means undertaking bigger projects to modernise and keep bang up to date. That’s what we’re doing right now.
We’re also starting to see the limits of our design choices from the early days of our infrastructure. Several customers will have noticed that some of the more complex and computationally intensive features of Beacon can struggle under certain circumstances. We want to get ahead of this and continue to deliver a product that is genuinely world class. You should expect a higher standard from us than from our competitors, and one of our core values is to be innovating, not just coping. A world in which rollup fields can take hours to recompute, workflows are unreliable, and filters can time out should not be acceptable to any of our customers. And it’s not acceptable to me.
So, what are we doing? Well, we’re moving our main data store from Amazon (AWS) to Google (GCP). Specifically, we’re moving to a database called Google Spanner. We’re currently rebuilding our main data processing and storage system from scratch around this technology. We’ve already got a lot of experience using Google’s database offerings, as a significant part of Beacon is already powered by Google BigQuery. It’s a big undertaking, but we’re confident that we’ve got the right team in place to pull this off.
Google originally developed Spanner to run their own mission-critical applications which require massive scale and strong reliability with zero downtime. Gmail and YouTube run on Spanner.
Google has since made Spanner available to other organisations, and it’s now used to power Uber, Shopify, and Walmart. It has a set of functionality that makes it perfect for powering Beacon. Heck, if we had Google’s resources we’d have built something like it ourselves. We actually sort of did - if Spanner had been around when we were first assembling Beacon we would have used it, but we had to build something ourselves from scratch to meet our needs. It’s very exciting that Google have now given us the perfect tool for the Beacon database job.
Here’s a quick summary of the benefits of moving to Spanner:
Spanner is designed with zero-downtime scaling in mind. We won’t need to turn off Beacon to upgrade our main database infrastructure any more. Spanner will scale effortlessly for our next 10,000 customers with no disruption to our current customers.
For large numbers of records or particularly complex filters, Beacon can be slower than we would like. With Spanner, loading list pages with any number of records and high filter complexity is blazing fast. A charity with 50 million People records loads at the same speed as a charity with 1,000 People records - we know because we’ve tested it at this scale. Adding more customers has a negligible effect on load speeds.
Moving to a Spanner-powered world allows us to make imports even faster. We can also speed up bulk actions and writes from the API. As with reads, writes to large Beacon accounts run at the same speed as writes to small accounts.
We are fundamentally rethinking and redesigning how rollup fields and smart fields are computed. With Spanner, delays in processing rollup fields will be a thing of the past.
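For the database nerds, here’s the general idea behind incremental rollups. This is a toy Python sketch, not our actual implementation, and all the names in it (donations, people, amounts) are made up for illustration: rather than re-scanning every child record whenever something changes, you keep a running aggregate per parent and apply only the delta of each write.

```python
# Toy illustration of incremental rollup computation (not Beacon's real code).
# Instead of recomputing a parent's total from scratch on every change, we
# keep a running total and adjust it by the difference each write introduces.

from collections import defaultdict

class RollupTotals:
    """Maintains a sum-of-donations rollup per person, updated incrementally."""

    def __init__(self):
        self.totals = defaultdict(float)   # person_id -> running total
        self.amounts = {}                  # donation_id -> (person_id, amount)

    def upsert_donation(self, donation_id, person_id, amount):
        # Remove the old contribution (if this donation existed), then apply
        # the new one - an O(1) adjustment rather than a full recompute.
        if donation_id in self.amounts:
            old_person, old_amount = self.amounts[donation_id]
            self.totals[old_person] -= old_amount
        self.totals[person_id] += amount
        self.amounts[donation_id] = (person_id, amount)

    def delete_donation(self, donation_id):
        person_id, amount = self.amounts.pop(donation_id)
        self.totals[person_id] -= amount

rollups = RollupTotals()
rollups.upsert_donation("d1", "alice", 10.0)
rollups.upsert_donation("d2", "alice", 25.0)
rollups.upsert_donation("d1", "alice", 15.0)   # edit d1: 10 -> 15
print(rollups.totals["alice"])                  # 40.0
```

The point of the technique is that the cost of keeping a rollup up to date scales with the size of the change, not the size of the account.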
Search within Beacon has never been amazing. As you’d expect from Google, Spanner has search fundamentally baked into the architecture of the database. We’ll be able to implement some super fast fuzzy search options for you.
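To give a flavour of what “fuzzy” means here, this is a toy Python sketch of trigram-based matching - the same family of technique that database-level search indexes make fast at scale. It is purely illustrative, and the example names are invented:

```python
# Toy illustration of trigram-based fuzzy matching (purely illustrative).
# A query matches a record if their sets of three-character chunks overlap,
# so near-misses and typos still rank highly.

def trigrams(text):
    t = f"  {text.lower()} "          # pad so short words still yield trigrams
    return {t[i:i + 3] for i in range(len(t) - 2)}

def similarity(a, b):
    """Jaccard similarity of the two strings' trigram sets (0.0 to 1.0)."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

names = ["Jonathan Smith", "Joanna Smythe", "John Smith", "Mary Jones"]
query = "Jon Smith"
ranked = sorted(names, key=lambda n: similarity(query, n), reverse=True)
print(ranked[0])   # "John Smith" - the closest fuzzy match
```

A plain equality or prefix search would find nothing for “Jon Smith” here; fuzzy matching surfaces the record the user almost certainly meant.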
Our codebase is becoming simpler. And this means we can go much, much faster. This may not seem terribly important, but when we can significantly reduce the hundreds of thousands of lines of code in the Beacon codebase we make our engineers’ lives vastly easier. Our systems become easier to maintain and test. This means fewer bugs, better stability, and shipping new features faster.
As I like to bang on about to anyone who’ll listen: I’ve got a master’s degree in AI. Despite having some experience in this field, I never thought that in my lifetime we’d find ourselves with the kinds of AI tools that we have today. It’s amazing, and it’s brilliant.
My hot take is that nobody really wants AI slop content (I wrote every word of this blog post myself, thank you very much), but AI is nothing short of amazing at doing the boring bits of writing software. We’re getting AI to write the boring bits so that we humans can focus on the interesting and challenging stuff!
For the Spanner project we’re taking an AI-first approach, designing our whole way of working around AI from the ground up. AI is completely changing the way Beacon engineers work day to day, through tools like Cursor and models like Claude Opus and Composer 2.
To use an analogy for how we’re working, Beacon humans have designed the bridge and are making sure it’s structurally sound, meets our requirements, and has been built properly. AI is hammering in the thousands of rivets.
This week we’ve reorganised our engineering team to accelerate this project and drive things forwards as fast as we possibly can, under the supervision of our CEO, Chris, who designed the original version of the Beacon system from scratch. We’re pouring everything into this, we’re using the latest tools and development techniques, and we’re expecting to have our new system live and ready to use by the Summer.
You won’t need to do anything, either technically or administratively. We already process your data using Google infrastructure, and, of course, we’ll be keeping everything in the UK, so there are no consequences of this work from a security or GDPR standpoint. Your data will be as safe as it has ever been.
I look forward to showing you a better, faster, more stable Beacon very soon.