Job Type - Full Time
- Overall 7+ years of IT experience across a variety of industries, including hands-on experience in Big Data analytics and development
- Expertise with tools in the Hadoop ecosystem, including Pig, Hive, HDFS, MapReduce, Sqoop, Storm, Spark, Kafka, YARN, Oozie, and ZooKeeper.
- Solid experience building REST APIs, Java services, or Docker microservices
- Experience with data pipelines using Apache Kafka, Storm, Spark, AWS Lambda, or similar technologies
- 2+ years of experience writing PySpark for data transformation.
- 2+ years of experience and detailed knowledge of data warehouse technical architectures, ETL/ELT, reporting/analytics tools, and data security
- 2+ years of experience in designing data warehouse solutions and integrating technical components
- 2+ years of experience leading data warehousing and analytics projects, including AWS technologies (Redshift, S3, EC2, Data Pipeline) and other big data technologies
- 1+ year of experience implementing BI in the cloud
- Experience working with terabyte-scale data sets using relational databases (RDBMS) and SQL
- Experience using Agile/Scrum methodologies to iterate quickly on product changes, developing user stories and working through backlogs.
- Exposure to at least one reporting tool such as QlikView, Tableau, or similar is a plus.
- Familiarity with Linux/Unix scripting
- Experience with Hadoop, MPP database platforms, or other NoSQL technologies (MongoDB, Cassandra) is a big plus