I don’t understand how something like Rails API mode or Django REST is not a popular option in golang. I can understand that such a framework might not be used by every project, but any realistic CRUD app will have dozens of entities needing JSON decoding, database serialization, and various REST routes. That feels like a lot of simple, non-DRY code to have to maintain by hand. Using something like GORM might solve some of this, but not all of it. My conclusion is that people are not using golang for large CRUD projects: they offload that portion to Python/Django or Rails, and use golang for specialized services instead?
That is a statement, not a question? I can confirm that my attempts to create an API in Go were a bit of a challenge, mainly because I found no consensus on how to create a REST API. It was also hard to get a DRY approach. But I finally managed to at least create a draft of a dynamic REST API. I have not had much feedback, so I do not know if I am on the right track or not. https://crud.go4webdev.org
Go is almost purpose-built for web APIs and is the most performant, ergonomic language I’ve used for quickly building out web APIs. What problems specifically are you experiencing?
If you have a framework or something that works for you and your team, use it. But for things like json encoding/decoding I can’t think of another language/framework where json is actually part of the stdlib off the top of my head. That points to it being a pretty core part of go. In my experience, in the teams I’m working on, go is very well-suited for these tasks.
Also, I would contend that “dozens” of entities is a small project. I would also contend that if you are only doing simple CRUD operations your app is simpler than any real production app I’ve ever worked on. I do often use ORMs or generated SQL for the simplest of CRUD queries, but hand-coding SQL is always more performant and usually required in my experience. Again, perhaps the apps you are building are simpler than mine.
I do sometimes use GORM for the simplest of CRUD operations and for the ability to easily scan DB queries into structs. There are plenty of packages in the ecosystem to help with this, such as jmoiron/sqlx. It is a solved problem. Also, again, I think you must be using contrived examples or small projects if your main problem is writing simple queries.
A small tech startup called Google might have something to say about that. How exactly did you arrive at that conclusion? Which real-world projects are you surveying? Send me a github repo link or two, please.
I’ve been building web apps for a living for closing on two decades, and I’m choosing go for most of my web APIs moving forward. There are some cases where it isn’t my preferred tool or an environment dictates I use something else - but by and large, go is my preferred tool right now and is gaining a lot of traction at my company. I’ve seen it gaining traction elsewhere as well (New York Times, Uber, Github, I could go on). If it doesn’t work for you, use something else. It’s working incredibly well for me and a lot of other teams though.
It’s a question because I have no idea if my conclusion is valid (:
I am mostly trying to figure what startups with 5-10 devs, working on a variety of software problems are using for different parts of their backend. They might be building a monolith or a few microservices. I can imagine web CRUD might only be a small part of their overall set of software that they are building, depending on the nature of the startup.
I didn’t mean my post to be attacking anything. Sorry if it came across like that. I think golang looks really nice for high throughput, high availability, and a variety of complex backend needs. I am just trying to figure out whether people value DRY when building web CRUD in golang or not, or if they do value DRY, but just don’t use golang for that portion of their system.
I don’t claim to have worked on lots of large systems or small systems. I have worked on a scattering of systems of different sizes, and I usually did not architect or build large portions of each system. Sometimes my contributions have been valuable and large, sometimes I have made small contributions.
That’s going to depend heavily on team composition, in my experience. Is your startup 5-10 devs with years of Python experience? Obviously use Django. Though I have seen plenty of larger organizations switching to Go for infrastructure cost reasons:
… as well as due to the simplicity of go reducing long-term maintenance costs. Anyway, once again, use whatever works for you and your team (tools are always just a means to an end).
When you say “DRY”, are you talking about not writing `update foo set bar = 'hello' where id = 1` for every record type in your system? If so, yes: in every large Go app I’ve worked on we have used generated code for simple updates like this (either via something like GORM or internal SQL generation). What other kinds of things are you worried about repeating?
I’ll try to think aloud and see where it goes. Let’s start with a very contrived single entity system. Let’s say we are storing “workspaces” that have an id, a branch name, created at and updated at.
```go
type Workspace struct {
	Id         int
	BranchName string
	CreatedAt  time.Time
	UpdatedAt  time.Time
}
```
We will start defining a predictable pattern of routes:
```go
rtr.POST("/workspaces", ...)
rtr.GET("/workspaces/:id", ...)
rtr.PATCH("/workspaces/:id", ...)
rtr.DELETE("/workspaces/:id", ...)
```
The POST and PATCH will have to decode a JSON body. It’s still DRY, since we just decorate the type with json tags.
Each of the routes will need to execute queries. I guess with GORM it’s still DRY, since your type, your JSON decoding, and GORM’s generated queries are all in sync.
I think I was wrong in seeing some aspect where the Workspace type and fields would need to be updated in multiple places if one of the types changes. I guess the only remaining problem with the code above is that it is fairly predictable and noisy: the pattern of route matchers to define, and the implementations of each of the basic routes, will follow predictable patterns.
My main goal was to make the API as DRY as possible, but Go makes it a bit harder to be generic. In my approach with a query lookup database, the routes and endpoints can be even more DRY. One main route and a few generic sub routes. It works as expected, just with fewer lines of code…
IMO any type of ORM is a layer upon SQL that should simplify queries. ORM is like jQuery. An extra layer that just makes it more complicated at the end of the day. My limited experience of ORM had nothing to do with DRY. Same amount of queries but in a different language.
I am not sure if DRF is something we would want to emulate in the Go ecosystem.
It is incredibly convoluted and suffers pathologically bad performance.
In a previous job, we used Device42 to manage our datacentre. Device42 is built on DRF and could not handle more than 10 REST API requests per second! Their support told us that this was a limitation of their architecture and something that they could not fix.
(It is possible to write DRF apps which do not have terrible performance, but it is unnecessarily hard. See “Optimizing slow Django REST Framework performance”.)
If you want a REST framework in Go, have a look at the Gin tutorial.
Are you trying to say that you want to route via convention? Because I don’t see a lot of repeated code there. In my experience, routing via convention is almost always a bad idea. But, you could create a layer of abstraction that you control to more or less accomplish this. Let’s create a struct called CRUDRouter (and for fun let’s use generics since they’re new and shiny) that will register routes for us and handle them. First, let’s create an interface that defines the operations we want each item to be able to perform:
```go
type CRUDModel[T any] interface {
	Create() T
	GetList() []T
	GetSingleItem(ID int) T
	UpdateItem(ID int) T
	Delete(ID int) bool
}
```
Now that we have our interface, let’s create our `CRUDrouter` and put a type constraint on it to ensure any types passed in implement `CRUDModel`:
```go
// CRUDrouter is a simple abstraction so we don't have to repeat our standard
// GET/POST/PUT/DELETE routes for every record type. Meant for simple
// CRUD operations.
type CRUDrouter[T CRUDModel[T]] struct {
	// The base path for our multiplexer
	muxBase string
}
```
So far so good, right? But we need to handle requests. Let’s create a handler:
```go
// handle dispatches a CRUD request to the right CRUDModel method.
func (rtr *CRUDrouter[T]) handle(w http.ResponseWriter, r *http.Request) {
	id, err := parseID(r.URL.EscapedPath())
	// We need an ID for anything other than a GET to muxBase (AKA a list request)
	if r.URL.EscapedPath() != rtr.muxBase && err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Instantiate a zero value of our model
	var model T
	switch r.Method {
	case http.MethodGet:
		if id == 0 {
			// List items
			items := model.GetList()
			json.NewEncoder(w).Encode(&items)
		} else {
			// Get a single item
			item := model.GetSingleItem(id)
			json.NewEncoder(w).Encode(&item)
		}
	case http.MethodPost:
		// Create a new record.
		if err := json.NewDecoder(r.Body).Decode(&model); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		model.Create()
	case http.MethodPut:
		// Update an existing record.
		if err := json.NewDecoder(r.Body).Decode(&model); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		model.UpdateItem(id)
	case http.MethodDelete:
		// Remove the record.
		model.Delete(id)
	default:
		http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
	}
}
```
This is a pretty naive implementation of course and I didn’t even bother using a router, so we are just parsing the URL’s path using the following function:
```go
// TODO: add a router so you don't have to do this. This example is simple and
// contrived and uses only the stdlib.
func parseID(path string) (int, error) {
	splitStrs := strings.Split(path, "/")
	if len(splitStrs) == 0 {
		return 0, errors.New("invalid ID param")
	}
	return strconv.Atoi(splitStrs[len(splitStrs)-1])
}
```
OK, so now we have a generic, reusable struct that can handle `GET`, `POST`, `GET /:id`, `PUT /:id`, and `DELETE /:id`, but we have to register our routes with a multiplexer of some sort. Let’s assume we are just using the stdlib and `*http.ServeMux`:
```go
// registerRoutes registers standard CRUD routes with our multiplexer
func (rtr *CRUDrouter[T]) registerRoutes(mux *http.ServeMux) {
	mux.HandleFunc(rtr.muxBase, rtr.handle)
	mux.HandleFunc(fmt.Sprintf("%v/", rtr.muxBase), rtr.handle)
}
```
What about a convenience function that wraps this into a single line?
```go
// NewCRUDRouter instantiates a new CRUD router for a given record
// type and registers the routes with your multiplexer.
func NewCRUDRouter[T CRUDModel[T]](muxBase string, mux *http.ServeMux) *CRUDrouter[T] {
	router := CRUDrouter[T]{
		muxBase: muxBase,
	}
	router.registerRoutes(mux)
	return &router
}
```
… and how do you use it? Let’s assume we have structs of type `Workspace` (as per your example) and `User` that implement `CRUDModel`. You can use it like so:
```go
func main() {
	router := http.NewServeMux()

	// Handles the following:
	//   GET    /workspaces
	//   POST   /workspaces
	//   GET    /workspaces/:id
	//   PUT    /workspaces/:id
	//   DELETE /workspaces/:id
	NewCRUDRouter[Workspace]("/workspaces", router)

	// Handles the following:
	//   GET    /users
	//   POST   /users
	//   GET    /users/:id
	//   PUT    /users/:id
	//   DELETE /users/:id
	NewCRUDRouter[User]("/users", router)

	log.Fatal(http.ListenAndServe(":8080", router))
}
```
Here’s a playground link that won’t work on the playground, but un-comment the lines per instructions and you can run this locally (either copy/paste it locally or hit ctrl+s):
In summary: I was able to implement routing via convention using nothing but the stdlib with very little effort. You could easily take this further. If you wanted, you could never define a single route and run your web server based solely on convention. I don’t think that is a great idea (and I don’t think I’m alone there given that this isn’t a major paradigm in any of the larger go routing frameworks I know of) but you could do it if you wanted to.
This also gives you compile-time checking. Consider the following, where `BogusStruct` doesn’t implement `CRUDModel`:

```go
NewCRUDRouter[BogusStruct]("/workspaces", router)

// Compile error:
// BogusStruct does not implement CRUDModel[BogusStruct] (missing Create method)
```
When you add all this together and get used to it, trust me: you are going to end up with more predictable web APIs with fewer errors, because you’re catching them at compile time. And that becomes more important the larger the project, not less. Same with performance.
> And that becomes more important the larger the project, not less. Same with performance.
I’m definitely on board with you on that point. I’m trying to push my career more towards companies at seed stage that are still figuring out product-market fit, so I want to favor convention for now to optimize for speed of change. I know that’s not sustainable, and eventually I would want to rewrite to move away from the convention-focused approach and head in the direction you are alluding to.
It sounds like overall, it can be pretty fast to roll your own framework with conventions if I were to use golang in an early stage setting.
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.