Our client located in downtown Toronto is looking for a Big Data Engineer who will be responsible for developing new system stacks and tools for big data ingestion, processing, and analytics.
This position is perfect for a developer whose passion is to apply cutting-edge technologies to solve complex business and engineering problems.
This individual will work on a team of talented engineers responsible for the full life-cycle of production systems, software, tools, and flows.
Design, develop, and maintain the software and systems that make up the data platform that runs our entire business
Participate in multi-disciplinary projects
Partner with the Data Science and Engineering teams who use our platform by diagnosing, predicting, and correcting scaling problems
Contribute to our team's growing set of development platforms, tools, and processes
Hands-on experience with big data technologies (HBase, HDFS, Spark, and/or Hadoop)
Demonstrated proficiency with Spark, Scala, Python
Experience building stream processing systems using Spark Streaming or Storm
Experience in integration of data from multiple sources
Experience with NoSQL cluster databases such as HBase, Cassandra, or Druid
Experience with messaging systems such as Kafka or RabbitMQ
Proficient understanding of distributed computing principles
Experience with other highly scalable, low latency big data systems is a plus
Experience with Hortonworks / Cloudera distributions is a plus
Experience with cloud technology and containerization (Docker/Kubernetes) is a plus
Who you are
Able to learn and apply new technologies quickly
Proven problem-solving abilities
Able to work both independently and as part of a team
Able to multi-task in a dynamic environment
Excellent verbal and written communication skills
Software development experience in mainstream languages such as Java, Scala, and/or C++
Experience architecting, deploying, and operating mission-critical big data clusters