Migration architecture question

Hello All,

This is my first post here and I hope I have the right section for this type of question.

I’ve been developing a platform in Node.js for some time now; it consists of two GraphQL gateways and a dozen different microservices. The architecture is quite simple: a request hits a gateway, GraphQL routes it to the right service, the service does some work like creating data and sends the result back to the gateway, and the gateway returns it to the client.

I’ve decided to convert the microservices from Node.js to Go because I simply can’t stand working in Node.js/JS any longer. I dread having to write any more JS/Node.js on the backend, and since I started working with Go a couple of weeks ago, I’ve fallen in love and am excited to develop again. Carrying on with Node would be like carrying on in an unhealthy, dysfunctional relationship, with zero trust and daily insanity/abuse lol.

In the current platform, each microservice has access to a standard library that has approximately 24 functions for doing stuff like creating, getting, deleting, and validating data. It’s designed around polyglot persistence and we use Neo4j, MongoDB, PostgreSQL and Redis.

Each function in the main library, such as create, is broken down into smaller functions (components), which are composed using a custom pipe function. The pipe function takes an array of the create components, executes each component, and builds an “immutable” request state object with the result of each function’s execution. Some components need the result of a previous component and get that from the state object. If something fails, a handler takes the request state and ensures the databases are rolled back correctly.

I was hoping to use a similar style and write a Go library that all Go services could utilize, but I don’t know whether this architecture would be considered good practice in Go; I’m still relatively new to the language. I’d have functions for create, get, soft delete, hard delete, list, update, etc., and then functions for each step. Currently, each step (function/component) has its own file in a parent directory such as create.

I was hoping to avoid composing functions in the style of a(b(c(d()))) in favor of iterating over a slice of functions.
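To make that concrete, here’s a rough sketch of the shape I have in mind. All the names and types here are invented for illustration; it’s not code from my platform:

```go
package pipeline

import "fmt"

// State carries the accumulated results of each component,
// the same role my "immutable" request state object plays in Node.
type State struct {
	Args    map[string]any
	Results map[string]any
}

// Step is one component in the pipeline.
type Step func(State) (State, error)

// Run executes each step in order, threading the state through.
// On failure it hands the partial state to rollback so the
// databases can be unwound.
func Run(steps []Step, initial State, rollback func(State)) (State, error) {
	s := initial
	for i, step := range steps {
		next, err := step(s)
		if err != nil {
			rollback(s)
			return s, fmt.Errorf("step %d: %w", i, err)
		}
		s = next
	}
	return s, nil
}
```

Each component would read what it needs from the state and return a new one, the same way my Node pipe works today.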

Maybe this way of thinking is wrong in Go and there is a better way?

Anyway, I was hoping for some suggestions on how to translate what I’ve currently built over to Go.

Thank you!


Hi, Dan, welcome to the forum!

What I imagine you’re thinking of doing seems reasonable, but your question is very abstract, so I can’t accurately say whether or not it will work.

If you encounter an issue and come up with some sample code, I can give you a better answer, but until then my suggestion would be to give it a shot!


Thanks for the reply. I’ve done a ton of research and I’ve concluded that my pipeline design in Node.js is probably not suited for Go.


I was halfway through writing a long response, but instead I’ll say that Sean is right: the description is kind of vague, and we need some more info in order to help you.

In any case, don’t throw in the towel just yet. You seem pretty eager to ditch Node/JS, and maybe Go isn’t the answer for what you have in mind; maybe Rust, Haskell, C++, Java, or some other language is. But if you could show us some code for how one of your services works (and how you’ve tried to port it to Go), we could maybe help you with it.


Hi @iegomez,

Sorry, tomorrow I will provide more details. I am very eager to ditch Node for many reasons, and I think Go is the right language. It’s just very different from JS/Node and will require a different architecture than the dynamic one I use in Node.

For example, I have a service (CRUD) that has a route called /create. The create route is used to create “Things” in Neo4j/MongoDB. A Thing could be a Person, Car, Dog, Race Track, etc. Each Thing can have a different schema.

When a request goes from GraphQL to the create service handler, the service first validates the Thing type and then makes sure it’s active, for example. The service then loads an object for that Thing type containing its reducer functions and the Neo4j Cypher strings. A reducer function just takes the request object and returns a new object with fewer or more fields. There is a reducer function for each step in creating data: one reduces the request arguments to check for props in use, another produces the fields that go into Neo4j, another takes the results of the Neo4j insertion and combines them with certain request arguments that go into a MongoDB document, and so on. The handler can create a single Thing or handle an array with hundreds or more Things.
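In Go terms, I picture a reducer looking roughly like this. Again, these are hypothetical names, just to show the shape:

```go
package create

// Request is whatever arrives from the gateway; the concrete
// shape here is invented.
type Request map[string]any

// Reducer takes the request object and returns a new object
// with fewer or more fields.
type Reducer func(Request) Request

// keepFields is an example reducer factory: it keeps only the
// fields destined for, say, the Neo4j node.
func keepFields(allowed ...string) Reducer {
	return func(req Request) Request {
		out := Request{}
		for _, k := range allowed {
			if v, ok := req[k]; ok {
				out[k] = v
			}
		}
		return out
	}
}
```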

The create handler can create any Thing type as long as it’s given a valid Thing type the system supports. Adding a Thing type to the system just means adding the Thing type config, reducers, Cypher queries, and MongoDB schema. It is very quick and efficient to add additional Thing types.

Delete, update, read, etc. all work in a similar fashion.

In Go, from what I’m learning, you wouldn’t do something like this and would likely have a create handler or function for each Thing type, which would likely mean a lot more complication: a lot more code, duplicate code, and a lot more components.
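That said, I’ve been wondering whether a registry of per-type configs could keep it just as dynamic in Go. A rough sketch of what I mean, reusing the hypothetical Request and Reducer types from above:

```go
package create

import "fmt"

// ThingConfig bundles what a generic handler needs for one
// Thing type: its reducers and its Cypher string.
type ThingConfig struct {
	Cypher   string
	Reducers []Reducer
}

// registry maps a Thing type name ("Person", "Car", ...) to its
// config, so a single create handler can serve every type.
var registry = map[string]ThingConfig{}

// Create looks up the Thing type's config and runs its reducers;
// the actual Neo4j/MongoDB work is elided.
func Create(thingType string, req Request) (Request, error) {
	cfg, ok := registry[thingType]
	if !ok {
		return nil, fmt.Errorf("unknown Thing type %q", thingType)
	}
	out := req
	for _, r := range cfg.Reducers {
		out = r(out)
	}
	// ...run cfg.Cypher against Neo4j, then write the MongoDB document...
	return out, nil
}
```

But I don’t know yet whether that’s idiomatic or whether Go folks would consider it fighting the type system.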

My only fear so far with Go is that it will turn into a type and error handling “simulator” with a sprinkling of business logic.

I hope that makes sense. :)

Bed time here.

Thanks!

