About the job
- Implement stream processing pipelines that handle hundreds of millions of messages and events daily
- Use open source tools and data stores to analyze and store billions of data points
- Build workflows to make data accessible to end users
- Work in a fast-moving, agile team of developers to build large features in a rapidly changing environment
What we're looking for
- BS in Computer Science, Computer Engineering, or an equivalent degree or work experience; MS/PhD a plus
- Extensive experience with MapReduce-style implementation using Hadoop
- Production experience with open source data stores such as HDFS (and related technologies), Cassandra, or MongoDB
- Java and/or Ruby expertise
- A passion for analyzing data and making it understandable for users
- Experience with high velocity stream processing
- Experience with social APIs
- Experience with natural language processing algorithms
- Background in statistics
Is this you?
If you think this job is right for you, email email@example.com with the subject line "Data Engineer." Include your resume and a link to your GitHub account. If you don’t have a GitHub account, send us some code you’re proud of. If you don’t have any code you’re proud of, that’s probably not a good sign.