Backend Architecture, Concurrency, and Data Consistency Challenges for a Restaurant Menu Website Built with Go

I am developing and maintaining a restaurant menu website where the backend services are written primarily in Go. The backend is responsible for serving menu data, handling frequent updates to items and pricing, managing caching, and responding to API requests from both the frontend and internal admin tools. While Go has generally performed well in terms of speed and simplicity, I am running into architectural challenges as the application grows. What started as a relatively simple REST API has evolved into multiple services handling menu data, availability, promotions, and analytics, and the boundaries between these components are becoming less clear. I am unsure whether my current package structure and service separation align with common Go best practices for long-term maintainability.

Concurrency and data consistency are some of the biggest challenges I am facing. Menu data is updated frequently by staff through admin interfaces, while customers simultaneously access the same data through the public website. To keep the site responsive, I use in-memory caching combined with background refresh routines, but this has introduced complexity around synchronization and stale data. In some cases, users briefly see outdated prices or items that should no longer be available. I am currently using mutexes and channels in several places, but the logic is becoming hard to reason about. I would like guidance on idiomatic Go patterns for managing concurrent reads and writes to shared data structures in a high-read, moderate-write scenario.

Database access patterns also raise concerns. The backend interacts with a relational database to store menu items, categories, and historical changes. As the dataset grows, certain queries have become slower, and I have added caching layers to compensate. However, this introduces additional complexity around cache invalidation and consistency. I am unsure whether I should rely more heavily on database-level optimizations, application-level caching, or a combination of both. Advice on structuring data access layers in Go, including the use of repositories, query batching, or connection pooling, would be extremely helpful.

API design and versioning are another area where I am seeking clarity. The menu data is consumed by multiple clients, including a web frontend, mobile views, and internal tools. As requirements change, API fields are added or modified, and maintaining backward compatibility has become increasingly difficult. I want to avoid breaking existing clients while still allowing the API to evolve. I am interested in learning how Go developers typically approach API versioning, response struct design, and backward-compatible changes in production systems.

Error handling, logging, and observability also present challenges. While Go encourages explicit error handling, the volume of error checks in a complex backend can make code verbose and harder to follow. At the same time, insufficient logging makes it difficult to diagnose production issues related to slow responses or incorrect data. I am using structured logging and basic metrics, but correlating logs across requests and goroutines is still difficult. Recommendations on logging patterns, context propagation, and observability tooling that fit well with Go applications would be very valuable.

Finally, I am thinking about scalability and future growth. The website is expected to support multiple locations, each with unique menus and update schedules, which will significantly increase the amount of data and the number of concurrent requests. I want to ensure that the current Go-based backend can scale without becoming overly complex or fragile. Insights from the GolangBridge community on structuring scalable Go services, managing concurrency safely, and maintaining clean, readable code as a project grows would be greatly appreciated. Sorry for the long post!

The caching could be a good candidate for LISTEN / NOTIFY assuming you’re using Postgres:

So instead of just checking for a new version every once in a while, you notify your API when a new version exists.
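A hedged sketch of how that could look on the Go side. On Postgres, a trigger can call `pg_notify('menu_updates', ...)` after writes; in Go you would typically receive those notifications via lib/pq's `pq.Listener` (its `Notify` channel, after `Listen("menu_updates")`) or pgx's `Conn.WaitForNotification`. To keep this runnable without a database, the notification source is modeled here as a plain channel, and all names are illustrative:

```go
package main

import "fmt"

// invalidateOnNotify consumes NOTIFY payloads (here: menu item IDs) and calls
// the invalidation callback for each one. With lib/pq you would feed it from
// pq.Listener's Notify channel; with pgx, from Conn.WaitForNotification in a
// loop. Channel and function names are made up for this sketch.
func invalidateOnNotify(payloads <-chan string, invalidate func(id string)) {
	for id := range payloads {
		invalidate(id)
	}
}

func main() {
	// Simulate two NOTIFY payloads arriving from the database.
	payloads := make(chan string, 2)
	payloads <- "burger"
	payloads <- "fries"
	close(payloads)

	var invalidated []string
	invalidateOnNotify(payloads, func(id string) {
		invalidated = append(invalidated, id)
	})
	fmt.Println(invalidated) // prints "[burger fries]"
}
```

The nice part of this push model is that the cache never has to poll: entries are dropped the moment the database reports a change, so the worst-case staleness window shrinks from your polling interval to roughly the notification latency.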

If you want to push more caching into the DB layer, check out materialized views:

Materialized views are supported by other SQL flavors as well.

TBH I am surprised you are running into problems already. DB performance problems usually arise when you’re dealing with like billions of rows. And it doesn’t sound like that is the case here. What’s your strategy for tuning queries and creating indexes?

I think there are a lot of performance tweaks you could do. But it is hard to suggest things when you don’t know what is slowing things down. If you post some code maybe people can give you some specific suggestions on how to tweak it.

What does your DB server look like? Where is it deployed? Is the CPU pegged 100% of the time?


It sounds to me like a multi-tenant scenario? Each location with its own database?

Thanks for the detailed explanation! Reading through this, I do believe there might be a fundamental issue somewhere in your architecture. To be honest, I can’t really believe there should be performance problems with this kind of application - I mean, there are really big restaurant chains out there handling this just fine, aren’t there?

You’re asking for tips? I think your own approaches might be a bit too complex, too early in the process. Here’s what I would look at:

1. Don’t over-engineer it. Keep it simple! Start with a clean data structure - this often solves more problems than fancy caching strategies. Basically KISS

2. Images. How exactly are you handling images in your app? This is often where the real bottleneck is hiding. BIG DATA compared to some menu prices. :elephant: :left_right_arrow: :mouse:

3. Reverse proxy setup. What reverse proxy are you using? Is caching enabled there? What about caching for images specifically? Is GZIP compression turned on?

4. Simple in-memory caching. Use straightforward maps with mutex locks for your cache. Go is really good at this - you can easily have millions of entries in such a cache map and save yourself a ton of database queries. - But again, this is already the next step, after you've made it work with a good data structure. :wink:
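A minimal sketch of such a map-plus-mutex cache, using `sync.RWMutex` so many readers can proceed in parallel while writers get exclusive access (all type and field names here are illustrative, not from the original post):

```go
package main

import (
	"fmt"
	"sync"
)

// MenuItem and MenuCache are illustrative names, not from the original post.
type MenuItem struct {
	Name  string
	Price float64
}

// MenuCache wraps a plain map with sync.RWMutex: many concurrent readers can
// hold RLock at once, while writers take the exclusive lock. That matches a
// high-read, moderate-write menu workload well.
type MenuCache struct {
	mu    sync.RWMutex
	items map[string]MenuItem
}

func NewMenuCache() *MenuCache {
	return &MenuCache{items: make(map[string]MenuItem)}
}

func (c *MenuCache) Get(id string) (MenuItem, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	it, ok := c.items[id]
	return it, ok
}

func (c *MenuCache) Set(id string, it MenuItem) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[id] = it
}

// Delete is what an admin-update handler would call to invalidate an entry.
func (c *MenuCache) Delete(id string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.items, id)
}

func main() {
	c := NewMenuCache()
	c.Set("burger", MenuItem{Name: "Burger", Price: 9.50})
	if it, ok := c.Get("burger"); ok {
		fmt.Println(it.Name, it.Price) // prints "Burger 9.5"
	}
}
```

RWMutex over a plain Mutex is the main design choice here: reads vastly outnumber writes for a menu, so letting readers run concurrently is where the win is.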

Hope this helps!


Just a note: you should generally not store images in a DB (there are some exceptions to this rule, but I generally try to avoid it!). Use object storage or some other similar mechanism for storing images and store a reference to them in your DB.


Exactly.
And this is one way to make your app slow. Imagine a restaurant menu with very cool, but very large, pictures. If they are stored in the DB directly and the app is server-side rendered for good SEO, then every user hit carries a lot of weight…

Thanks a lot for the feedback; it's really helpful to step back and reconsider whether I'm overcomplicating things. You're right that I may have jumped too quickly into caching and concurrency patterns without first ensuring the core data structures and flows are solid.

I’ll take a closer look at simplifying the core menu and category models, ensuring they’re consistent and easy to reason about before layering caching or async updates. Do you have any specific Go idioms for representing hierarchical menu data that scale well with multiple locations? We’re using Nginx but I haven’t fully optimized caching headers or GZIP compression yet. Do you typically rely on Nginx caching for menu API responses too, or just static assets like images? I agree starting with a simple map with mutexes seems much safer than over-engineering. Any tips for invalidating cached items safely when updates come from multiple sources?

I just had a similar situation, where I thought I had to create a complex caching mechanism because my queries were very complex. It turned out that simply optimizing queries and indexes brought the performance up to a speed where no caching is necessary at all.

As a general rule of thumb, let the database do everything a database is good at: storing data, retrieving data, concurrency, locking and handling concurrent readers/writers, low-level caching… These are all features which databases are highly optimized for by a whole army of turbo nerds who know what they are doing. Trying to outperform SQLite or Postgres in low-level performance is like trying to outperform the Go standard library in HTTP performance; it only makes sense in very, very extreme niche cases.

In general, caching is useful when you do additional complex transformations on the initial data. If you, for example, build a PDF from the menu that users can download, it would seem a good idea to cache the PDF after creation. A pattern I would use is a debounced update-and-switch for the cache: any time an admin changes anything in the menu, a goroutine is informed via a channel. The goroutine then waits a few seconds to catch possibly multiple updates at once, selects all relevant data in a single transaction (the database isolation level will ensure it is a consistent snapshot of the data if you operate within one transaction), creates a new PDF, and then simply does an atomic replacement of the pointer to the relevant PDF data. After that, the goroutine checks whether new updates have arrived (use a buffered channel of length 1, and have writers send with a select that simply skips if the channel is already full; it only matters that new updates have arrived, not how many).
