Using Go for big data?


We’re currently conducting market research on the use of Go for big data projects. We are looking for developers whose work involves analyzing large datasets and who wouldn’t mind spending a few minutes on a Google Hangout to discuss the following:

  • Are you currently processing large datasets (> 100 TB)?
  • Is your processing batch-oriented, stream-oriented, or both?
  • Which platform are you currently using (Hadoop MapReduce, Spark, Flink, Storm, Amazon EMR, Google Bigtable/BigQuery)?
  • Where are you running your workloads: on-premises or in the cloud?

If you are interested, please contact me at


