API versioning method

The traditional way to version an API is to use paths like

/api/v1/endpoint, /api/v2/endpoint

I interpret this as there is ONE API with different routing, correct?

What about a versioning method where there are MANY APIs (Go executables) separated by subdomains? Like

api1.domain/endpoint, api2.domain/endpoint

Does the subdomain method cause any trouble in the future?

This is the correct way to version a public API:

Versioning using Accept header

Content negotiation may let you preserve a clean set of URLs but you still have to deal with the complexity of serving different versions of content somewhere. This burden tends to be moved up the stack to your API controllers which become responsible for figuring out which version of a resource to send. The end result tends to be a more complex API as clients have to know which headers to specify before requesting a resource.

e.g.

Accept: application/vnd.example.v1+json 
Accept: application/vnd.example+json;version=1.0
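As a sketch of the first style, a Go controller could branch on the version parsed out of the vendor media type itself. The media-type string and the fallback to v1 are assumptions for illustration, not a standard:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

// versionFromAccept pulls the version out of a vendor media type such as
// "application/vnd.example.v2+json". It falls back to "1" when no version
// is specified -- a policy choice, not a rule.
func versionFromAccept(accept string) string {
	const prefix = "application/vnd.example.v"
	if i := strings.Index(accept, prefix); i >= 0 {
		rest := accept[i+len(prefix):]
		if j := strings.IndexByte(rest, '+'); j > 0 {
			return rest[:j]
		}
	}
	return "1"
}

func main() {
	http.HandleFunc("/endpoint", func(w http.ResponseWriter, r *http.Request) {
		// The controller has to decide which representation to send --
		// this is the complexity being pushed up the stack.
		switch versionFromAccept(r.Header.Get("Accept")) {
		case "2":
			fmt.Fprint(w, `{"version": 2}`)
		default:
			fmt.Fprint(w, `{"version": 1}`)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```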

In the real world, an API is never going to be completely stable, so how change is managed matters. A well-documented, gradual deprecation of an API is an acceptable practice for most APIs.

2 Likes

I'd have to see its documentation. Otherwise, I assume it is 1 API (/endpoint/), 2 different routes (/api/v1 and /api/v2), and 1 version difference (v2).

It should not be an issue, since all 4 URIs are unique and you have the freedom to roll out the following updates:

  1. API pathing improvements (both backward-compatible and non-backward-compatible changes)
  2. input data schema changes
  3. the new API can have its own cache without interfering with the existing API's
  4. output data schema changes
  5. non-interfering deprecation and removal

What I can see is that the subdomain method assumes you have easy access to DNS administration to roll out API versioning updates. For a small project, or if you're the DNS administrator, this would not be an issue. Otherwise, you need to go through some red tape with the network department.

The /vX/ pathing method lets you work with the same network and DNS settings, so that's one less department to visit every time you roll out an API update.

Another matter, assuming you're serving from 1 origin server, is that you need to configure your server to be multi-facing, i.e. serving multiple subdomains. However, if each version has its own origin server cluster, then the subdomain method makes sense for separating the v1 and v2 servers.
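As a rough sketch of that single-origin, multi-facing setup (domain names and backend ports are invented), Nginx would give each subdomain its own server block proxying to its own backend process:

```nginx
# v1 backend assumed on 127.0.0.1:8081, v2 on 127.0.0.1:8082.
# TLS directives omitted for brevity.
server {
    listen 80;
    server_name v1.domain.com;
    location / {
        proxy_pass http://127.0.0.1:8081;
    }
}

server {
    listen 80;
    server_name v2.domain.com;
    location / {
        proxy_pass http://127.0.0.1:8082;
    }
}
```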


There are many ways of versioning an API. The most important parts are documentation + deprecation notices, built-in deprecation warnings, communication with your customers, and a grace period.

1 Like

Could this header manage a subdomain instead?

Accept: application/api2.example+json

Yes, I have access to the DNS. I have built 2 separate dummy APIs with the subdomain method using separate ports, so I do not think there are any technical issues.

And I interpret your answer as saying this is an OK method as long as I use common sense?

There are a few methods for API versioning. One of them is described in this article.

1 Like

I don’t think there’s any “one size fits all” solution here. I don’t see any issues with your strategy of v1.mydomain.com and v2.mydomain.com as long as you are certain you will always want to release new versions of all of your APIs in tandem. I don’t know much about your use case but I could see the potential for wanting to release a new major version of just one API and leaving the others behind. As long as that’s not the case, I think your approach seems sane.

Using headers as Jarrod recommended is also good but I sometimes like the ease of seeing which version of an API I’m using as part of the URI instead of having to inspect headers. It just makes the separation that much more obvious in my opinion.

2 Likes

Yep, you're ready to take off. Remember to handle multi-facing configurations (which I assume you naturally did with your firewall filtering in your test :joy:).

The keyword is "documentation + communications". Anything else is just continuously improved over time.


FYI

@jarrodhroberson's header recommendation is best done at the stage when your API's URI pathing is consistent and stabilized, and it's more for hardening purposes. A good indicator would be that in the past ~10 releases, the changes were only related to I/O data schemas and no pathing changes were made.

The implementation should also be seamless, as blocking an API call because of a header can seriously traumatize your customer support center. Moreover, a lot of customers (including me) rarely poke into response headers (you can do customer testing with the curl command).

If I understand your comment correctly, I do not need firewall filtering, because I use Nginx to proxy the API ports. Only 443 and 80 are open. Is there a better way to do this? Did I understand you correctly?

Not really. Multi-facing means your 1 origin server is serving both v1.domain.com and v2.domain.com simultaneously. The firewall I'm referring to here is not just conventional port restriction. Some (hardware or software) firewalls can filter on custom rules.

Depending on how you harden your security, some servers implement strict interfacing restrictions either on the load-balancer side (e.g. Nginx or Apache) or with an actual firewall rule. The goal is to prevent accidental exposure (e.g. a testing.domain.com left open by a junior developer or a sloppy outsourced contractor) by having the firewall whitelist only v1.domain.com, v2.domain.com, etc.
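On the Nginx side, that whitelist idea is often sketched as a catch-all default server that drops any Host not explicitly listed (domain names are examples):

```nginx
# Whitelisted fronts get their own server block(s).
server {
    listen 80;
    server_name v1.domain.com v2.domain.com;
    # ... normal proxying here ...
}

# Catch-all: any other Host (e.g. a forgotten testing.domain.com)
# has the connection closed without a response (Nginx's 444).
server {
    listen 80 default_server;
    server_name _;
    return 444;
}
```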

In your case, it's too early, so for the time being just keep this step in mind.

EDIT:

  1. One famous example is Cloudflare's firewall rules, which allow custom patterns if you're using their CDN+DNS as a front-facing service.

Hi,

my 2 cents here.
There's no single best solution; even though the "Accept header" approach is considered a best practice, it creates problems in the routing configuration (for instance, using a different handler depending on the version provided in the header is harder to configure on some platforms/frameworks).

As other people said, this is why many online services (GitHub, Twitter, …) specify the version in the URL.

My suggestion is to:

  1. check whether you have constraints (requirements)
  2. check whether the tool you want to use has limitations (or choose a different tool if it does not support the versioning schema you MUST use)
  3. choose what you like more

1 Like

In the beginning, both v1. and v2. will be on the same server. But I am about to learn about CORS. Would this be a way to manage multi-facing with some sort of key?

If I use the "subdomain method", the handlers for a certain version of the API will only serve that one API version? Then the routing should not be a problem? Have I interpreted that correctly?

If I'm not mistaken, CORS only makes sense for the browser and HTML+JavaScript threat model. I don't think it is applicable here (unless your response body is browsable HTML?!).

Using a key for a firewall's job is a bit overkill. Remember that you still have an auth token of your choice for authenticating and authorizing users to deal with.

You can refer to this page for securing API servers per your requirements as you go:

A bit old (2019) but still usable.

EDIT: another one:

Note: I have to stop you here now because we are going way off-topic soon and too deep into the rabbit hole :grin:. Just continuing to explore with your test servers and learning things on the spot will be better. Cheers!

1 Like

There is no good way or bad way, indeed. It depends on what you are trying to achieve, and there are a lot of pros and cons around this subject.

However, using a header forces you to add another middleware to check the version, and this can slow you down if you have a lot of requests (routing is also a bit slower). On the other hand, using subdomains makes you dependent on a DNS server that can sometimes be slow or down, which is not so good if you are looking for high availability. Hardcoded routing is fast but needs code changes, and so on.
You must evaluate it from the point of view of your particular use case.

1 Like

Hi @Sibert ,

it works fine, as well as other methods.
My point was … “choose whatever works for you” :smiley:

1 Like

My main question was about versioning with the subdomain method. My conclusion is to continue with subdomains, as there seems to be no major drawback to this method as I understand it.

Thank you all for your contributions.

1 Like