My goal is to create an API that can serve hundreds of endpoints in the future. I may break several “rules” in my attempt to make this more generic, avoid typing errors, and make it easier to maintain.
The first rule I break is using sqlx when dealing with PostgreSQL, which reduces the code significantly.
The second rule I break is moving the queries to a lookup table. This reduces the number of endpoints, but it means every API call hits the database twice: first to fetch the query, then to execute it. A benefit is that I can maintain the queries on the fly without restarting the Go server.
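The two-step flow described above could be sketched roughly like this. To keep the sketch self-contained, a map stands in for the first database round-trip; the table name api_queries, the column sql_text, and the helper names are my own invented placeholders, not the actual schema.

```go
package main

import "fmt"

// queryTable stands in for the first database round-trip. In the real
// application the lookup would be a query such as
//
//	db.QueryRow("SELECT sql_text FROM api_queries WHERE endpoint = $1", endpoint)
//
// (api_queries and sql_text are invented names for this sketch).
var queryTable = map[string]string{
	"books": "SELECT id, title FROM books",
	"pens":  "SELECT id, color FROM pens",
}

// queryFor is step one of the two-step flow: fetch the SQL text for an
// endpoint. Step two would hand the returned string to sqlx, e.g.
// db.Queryx(sqlText), and serialize the rows.
func queryFor(endpoint string) (string, error) {
	sqlText, ok := queryTable[endpoint]
	if !ok {
		return "", fmt.Errorf("no query registered for endpoint %q", endpoint)
	}
	return sqlText, nil
}

func main() {
	sqlText, err := queryFor("books")
	if err != nil {
		panic(err)
	}
	fmt.Println(sqlText)
}
```

Because the SQL text lives in a table rather than in the binary, updating a query is an UPDATE statement instead of a redeploy, which is the on-the-fly maintenance benefit mentioned above.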
The third rule I may break is that I am using http.Get because it is simpler to understand.
This is my first attempt at creating an API as a newbie, and so far it covers only the GET part of CRUD. I may be oversimplifying. Here is my code:
Regarding the maps: I saw you use them as local variables, but generally speaking, in concurrent environments you have to be careful to lock them when writing.
About APIs, a good practice is to return JSON structures.
Just saying that it is better to avoid maps in concurrent environments, or to use them carefully.
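To make the locking point concrete, here is a minimal sketch of guarding a shared map with a sync.RWMutex. The queryCache type and its fields are invented for illustration; an unguarded map written from multiple goroutines makes the Go runtime panic with "concurrent map writes".

```go
package main

import (
	"fmt"
	"sync"
)

// queryCache guards a shared map with a sync.RWMutex so it is safe to
// use from many handler goroutines at once. (Type and field names are
// just for illustration.)
type queryCache struct {
	mu sync.RWMutex
	m  map[string]string
}

// Get takes the read lock, so concurrent readers do not block each other.
func (c *queryCache) Get(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[key]
	return v, ok
}

// Set takes the write lock, excluding all readers and writers.
func (c *queryCache) Set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = value
}

func main() {
	cache := &queryCache{m: make(map[string]string)}

	// 100 goroutines writing concurrently; without the mutex this
	// would crash the runtime.
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			cache.Set(fmt.Sprintf("endpoint-%d", i), "SELECT 1")
		}(i)
	}
	wg.Wait()
	fmt.Println(len(cache.m)) // 100
}
```

For mostly-read workloads the standard library's sync.Map is an alternative, but a plain map plus RWMutex is the simpler default.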
Well, I think keeping queries in the database can lead to security holes or application failure if somebody changes a query in the database or that record is somehow altered. I guess it is safer to hardcode the queries in the application.
The API should just have read-only access set at the role level. Managing and maintaining maybe 1000 endpoints/queries in code could be even riskier in terms of typing errors etc.
I cannot see any risk in someone altering the lookup database. But there may be other risks that I am not aware of yet.
I want to learn gRPC, but I have not been able to understand it. There are several how-tos on the net for traditional REST APIs, but basically none dealing with common CRUD using gRPC. The complexity of gRPC is higher.
payload, err := json.Marshal(data) // don't shadow the json package
if err != nil {
	http.Error(w, err.Error(), http.StatusInternalServerError)
	return
}
// present result for the web, partner or client
w.Header().Set("Content-Type", "application/json")
w.Write(payload)
The endpoint() function is basically a hand-written mux.
That’s fine. But have a look at some of the other muxes out there (including http.ServeMux in the standard library) to see how they usually behave.
In some places there are more lines of code than you need.
So
// get data and fill template
var data interface{}
data = json2map(url)
if data == nil {
	tpl.ExecuteTemplate(w, page, nil)
} else {
	tpl.ExecuteTemplate(w, page, data)
}
can be replaced by
// get data and fill template
data := json2map(url)
tpl.ExecuteTemplate(w, page, data)
Less code makes it easier for another person to read and see what is happening.
I have tried to understand how a mux works outside of “hello world”, but I have never found an example where maybe 1000 endpoints are used. Below I faked an example of how I understood it. In the long run this will be hard to maintain. Please correct me if I got it wrong.
My calculation of endpoints is based on 100 tables * CRUD = 400 endpoints, plus some extra endpoints.
Have a look at https://github.com/gorilla/mux, which does exactly what you need (look at mux.Vars). You will only need to define 4 endpoint handlers, with {title} specified as a var.
And for the other 99 tables? How do I handle them? Not just “books” but “pens”, “glasses”, “flowers” etc. Endpoints and a handler for ONE table are crystal clear to me. It is dealing with 100 SQL tables that I need to understand.
If I understand your code correctly, the REST API and the HTML Client basically do the same thing.
But the former serializes the data into JSON, while the latter renders it into a html template.
So if we extend my toy example, we could just replace the json Encode line with something like tpl.ExecuteTemplate(w, objType, obj).
My own personal approach is just to build the API, and then instead of writing an HTML client, write a front end in JS which talks to the API to render pages in the browser. This gives you a nice separation of the presentation layer from the back end - at the cost of having to mess around with JavaScript.