Datadog + Go (Onsite or Remote)

Did you discover Go in the past couple of years, and were you blown away by its simplicity and efficiency for distributed systems engineering? Read on…

We’re on a mission to bring sanity to cloud operations, and we need you to build the data pipelines that ingest, store, analyze and query hundreds of billions of events a day. Join us to build powerful and resilient data systems.

What you will do

Build distributed, high-throughput, real-time data pipelines
Do it in Go, with bits of Python, C or others
Use Kafka, Redis, Cassandra, Elasticsearch and other open-source components
Join a tightly knit team solving hard problems the right way
Own meaningful parts of our service, have an impact, grow with the company

Who you must be

You have a BS/MS/PhD in a scientific field
You have significant experience with Go and its standard library
Before Go, you’ve mastered Python, Java or C/C++
You can drop down to low-level details when needed
You tend to obsess over code simplicity and performance
You want to work in a fast, high growth startup environment
Your GitHub shows your chops

Bonus points

You wrote your own data pipelines once or twice before (and know what you did wrong)
You have battle scars with Cassandra, Hadoop, Kafka or NumPy

Is this you?

Send your resume (re@datadoghq.com) and a link to your GitHub