Go for webservices/API development

Hi All,

I am going to start backend development in Go. Can anyone suggest the best Go framework? I also want to make this server scalable [cluster].

From some blogs I read that a framework itself reduces Go's performance and that one shouldn't use any framework at all. In that case, how can we secure the application?

Also, what about cloud-native Go with microservices for API development?

My goal is to develop an API/webservice that can respond to 1 million identical requests within 1 second.

Please suggest a mechanism.

A question first: Is the HTTP server really going to be the performance bottleneck in your scenario?

Measure how long your backend code needs to handle a request (if you pass the request in without any HTTP or network in between), and then measure the time that a plain HTTP server with a dummy handler needs (you know, those handful of lines from the standard library examples).

Maybe you will find that the HTTP handler needs only a fraction of the time that your backend needs, in which case you have quite some room in choosing the web library or framework that best suits your other needs (like security etc.).
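That measurement can be done entirely in-process with the standard `net/http/httptest` package, so no sockets or network sit between "client" and handler. A minimal sketch (the request count is arbitrary):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

// measure times n requests through a dummy handler with no real
// network in between: roughly the fixed cost of the HTTP layer alone.
func measure(n int) time.Duration {
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	start := time.Now()
	for i := 0; i < n; i++ {
		req := httptest.NewRequest("GET", "/", nil)
		rec := httptest.NewRecorder()
		h.ServeHTTP(rec, req)
	}
	return time.Since(start)
}

func main() {
	const n = 100000
	d := measure(n)
	fmt.Printf("%d requests: %v total, %v per request\n", n, d, d/time.Duration(n))
}
```

Compare that per-request figure against the time your backend logic alone needs, and you know which side is the bottleneck.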


If you consider a framework/mux, here is a nice article

https://redmonk.com/fryan/2018/02/07/language-framework-popularity-a-look-at-go/

BTW! I chose go-chi and would choose it again :smile:

What is this based on?

What you want is a hard task. Look over this article, which treats the problem. Pay attention to the job queue technique.

A question first: Is the HTTP server really going to be the performance bottleneck in your scenario?

Yes. Currently we are running a web application on PHP/Zend2/HHVM which has a bottleneck in the maximum number of concurrent requests it can serve, and we currently scale it using multiple nodes, a load balancer, and RDS.

Measure how long your backend code needs to handle a request (if you pass the request in without any HTTP or network in between), and then measure the time that a plain HTTP server with a dummy handler needs (you know, those handful of lines from the standard library examples).
Our current back-end code can't scale to that level [even without DB access]; we have a requirement to serve 1 million requests per second, with or without DB access. We are trying to migrate our code to Go.

Here we have doubts about the selection: do we actually need a web framework, and if so, which is the best for our case? How can we make a cluster server in Go?

for example, to fetch a static list of languages

do we actually need a web framework

How should one answer that? No, you don't, but maybe one provides functionality you need.

how can we make a cluster server in Go?

As in every language - load balancers
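In Go that can even be done in-process: the standard `net/http/httputil` package provides a reverse proxy you can round-robin across nodes. A minimal sketch (the backend addresses are placeholders for your own cluster):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

// balancer round-robins incoming requests across backend nodes.
type balancer struct {
	proxies []*httputil.ReverseProxy
	next    uint64
}

func newBalancer(backends []string) *balancer {
	b := &balancer{}
	for _, addr := range backends {
		u, err := url.Parse(addr)
		if err != nil {
			continue // skip malformed addresses
		}
		b.proxies = append(b.proxies, httputil.NewSingleHostReverseProxy(u))
	}
	return b
}

func (b *balancer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// Atomically pick the next backend in rotation.
	i := atomic.AddUint64(&b.next, 1) % uint64(len(b.proxies))
	b.proxies[i].ServeHTTP(w, r)
}

func main() {
	lb := newBalancer([]string{"http://10.0.0.1:8080", "http://10.0.0.2:8080"})
	fmt.Println("balancing across", len(lb.proxies), "backends")
	// In production: http.ListenAndServe(":80", lb)
}
```

In practice most deployments put a dedicated load balancer (nginx, HAProxy, a cloud LB) in front instead, but nothing about clustering is Go-specific.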

for example to fetch a static list of languages

Why don’t you use a CDN for something like that, and save the power for requests that need more resources?

we have a requirement to serve 1 million requests per second

Did you already ask this in forums about other languages as well? Same here: how should one answer that? If your server is strong enough, you can. It depends on many factors.

I know there are some factors: instance type, CPU, memory, web server tuning, load balancer. My question is: is there anything we need to take care of in Go specifically?

Can you please tell me how much data you are sending, to accept 1 million requests per second?

I am afraid I still don’t get it. What do you want to take care of? Go is a language, and you can write applications in it that can handle 1 million reqs/s. Your requirement does not really address the language itself.

This problem has more aspects. The short answer can be yes, you can handle this, but let’s look at some points of view.

First of all, 1 million requests per second means one request every 1 microsecond (1/1,000,000 s).
Next, if a regular Ethernet packet has 1500 bytes (say, roughly 1.5 KB), multiply this by 1,000,000. So, at the limit, your hardware must be able to receive, store, and handle this huge amount of information in one second, and more than that: while you process this information, it keeps coming, right?
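The arithmetic above can be checked in a few lines:

```go
package main

import "fmt"

const (
	reqsPerSec     = 1_000_000 // the stated requirement
	bytesPerPacket = 1500      // one full Ethernet frame per request (simplification)
)

func main() {
	// Serial time budget per request, in nanoseconds.
	budgetNs := 1e9 / float64(reqsPerSec)
	// Required ingress bandwidth, in GB/s.
	throughput := float64(reqsPerSec*bytesPerPacket) / 1e9

	fmt.Printf("time budget: %.0f ns per request (serially)\n", budgetNs)
	fmt.Printf("ingress: %.1f GB/s at %d bytes per packet\n", throughput, bytesPerPacket)
}
```

So the serial budget is 1000 ns (1 µs) per request and the incoming data rate is about 1.5 GB/s; concurrency relaxes the per-request budget, but the aggregate bandwidth stays.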

Let’s say you have this insane hardware, and you receive packets from Go through the standard library. For the sake of simplicity, let’s suppose that one request is one packet (though in real life a TCP connection carries more). The goroutine mechanism will probably handle the number of requests, but we still have two problems here:

  • processing every request in under 1 microsecond
  • returning results (remember the huge amount of incoming information? the results can take even more space, and imagine what a bottleneck the return path can become)

As I already said in an earlier comment, there are techniques to quickly queue the requests and execute them later. Also, returning results can be, or must be, prioritized and handled (this depends on the kind of processing). Of course this is a general discussion; more specific details will probably change these things.
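The queue-then-execute technique mentioned above is commonly implemented in Go as a worker pool over a buffered channel: requests are accepted quickly into the queue, and a fixed number of goroutines drain it. A minimal sketch (the `process` function is a stand-in for real work):

```go
package main

import (
	"fmt"
	"sync"
)

// process stands in for the real per-request work.
func process(job int) int { return job * 2 }

// runPool queues all jobs on a buffered channel and processes
// them with a fixed pool of worker goroutines.
func runPool(workers int, jobs []int) []int {
	queue := make(chan int, len(jobs))
	results := make(chan int, len(jobs))
	var wg sync.WaitGroup

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range queue {
				results <- process(j)
			}
		}()
	}

	for _, j := range jobs {
		queue <- j // accepting a job is cheap; work happens later
	}
	close(queue)
	wg.Wait()
	close(results)

	var out []int
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	out := runPool(4, []int{1, 2, 3, 4, 5})
	fmt.Println(len(out), "results (in completion order)")
}
```

The buffered queue decouples the accept rate from the processing rate, which is exactly what helps under a burst of 1M requests; results then need their own prioritized return path, as noted above.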

See the following resource, which treats the problem from the operating system’s point of view. It is also useful for testing your setup.

It’s not that much of a requirement in terms of hardware, honestly: your example of 1500 bytes per packet, scaled to 1 million, equates to about 1.4 GiB, no big deal for most machines these days. Not to detract from the discussion about techniques, but the hardware isn’t going to be a major bottleneck.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.