What you need:
- BS in Computer Science (MS a plus) and 3+ years of relevant industry development experience
- Hands-on experience with the Hadoop stack (MapReduce, Sqoop, Pig, Hive, Flume)
- Hadoop administration is a plus
- Experience working in an open source, Linux environment
- Knowledge of NoSQL platforms
- Ability to read, understand, and write Java code
What you will be doing:
- Implementing new Hadoop hardware infrastructure
- Designing, developing, and optimizing high-volume big data, analytics, and Business Intelligence systems
- Developing and evolving current and new data-related scripting, automation, and other processes
- Tracking, verifying, and evolving data sources, data flows, tools, and storage mechanisms
- Producing and maintaining accurate, high-quality technical and system documentation
- Helping design, architect, and implement data engineering infrastructure
- Implementing solutions using Hadoop, MapReduce, Tableau, Hive, HBase, BigTable, MS SQL, etc.
- Working with BI, ETL, and application teams to troubleshoot issues
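To give candidates a feel for the core pattern behind the MapReduce work above, here is a minimal, framework-free Java sketch of the word-count idiom: the "map" step emits one occurrence per word and the "reduce" step sums counts per key, mirroring what a Hadoop Mapper/Reducer pair does at cluster scale. The class and method names are illustrative only, not part of any Hadoop API.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch only: map + shuffle + reduce collapsed into a
// single in-memory pipeline, standing in for a Hadoop Mapper/Reducer pair.
public class WordCountSketch {

    // "Map" emits each word, then grouping ("shuffle") and counting
    // ("reduce") produce the per-word totals.
    static Map<String, Long> wordCount(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = wordCount("to be or not to be");
        System.out.println(counts.get("to")); // 2
        System.out.println(counts.get("be")); // 2
    }
}
```

In a real Hadoop job the map and reduce steps run as separate distributed tasks with the framework handling the shuffle, but the per-key aggregation logic is the same.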