Working in a close-knit technical team, the successful candidate will:
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Build large-scale batch and real-time data pipelines using the latest technologies.
Help drive transformation by continuously looking for ways to automate existing processes and to test and optimize data quality.
Apply design thinking and an agile mindset when working with other engineers, data scientists, and business stakeholders to continuously experiment, iterate, and deliver on new initiatives.
Leverage best practices in continuous integration and delivery.
Explore new capabilities and technologies to drive innovation.
Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
The ideal candidate will have:
Experience leveraging big data technologies (one or more of Hadoop, Spark, Kafka, Cassandra, Elasticsearch) to build data products
Knowledge of containers and orchestration (e.g. Docker, Kubernetes, Mesos)
Experience building APIs and exposing services through APIs
Experience writing clean, concise code in Java, Scala, or Python
Experience with public cloud environments
A passion for simplifying and automating work, making things better, continuous learning, open-ended problems, efficiency, and helping others
Knowledge of data modeling, data access, and data storage techniques
An understanding of machine learning fundamentals
Our client is a renowned name in the software/technology industry and is looking for a critical hire for its strategic teams.
The company is continually developing new capabilities and is always transforming, aspiring to be the best at what it does!
In addition to excellent remuneration, you will be part of a renowned organization and grow with it.