DATA ENGINEER Position Summary
If you are a Data Engineer with a craving for making sense of structured and
unstructured data, with the goal of positively affecting people's lives, please
read on! We are looking for a Data Engineer who will collect, store, process, and analyze huge sets of data.
The focus will be on working with the Data Management Team to design technologies that wrangle, standardize, and enhance our master data and transactional data repositories, and then build operational and
monitoring processes to govern that data. You will also be responsible for federating
this data across the enterprise using batch, streaming, and microservices architectures.
Unique skills expected for this role include the ability to write clean, high-quality Spark / Python libraries that can be reused within our platform,
and the ability to create orchestration workflows that ingest structured and unstructured data in both streaming and batch modes,
enrich it, and make it available for use throughout the enterprise.
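To give a flavor of the reusable Spark / Python library code described above, here is a minimal, illustrative sketch; the standardize_columns helper and the S3 paths are hypothetical examples, not existing platform code.

# Illustrative sketch only: a small, reusable PySpark helper of the kind this
# role would add to a shared library. Paths and names are hypothetical.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F


def standardize_columns(df: DataFrame) -> DataFrame:
    """Lower-case column names and trim string values so downstream
    consumers see a consistent shape regardless of the source system."""
    renamed = df.toDF(*[c.strip().lower().replace(" ", "_") for c in df.columns])
    for name, dtype in renamed.dtypes:
        if dtype == "string":
            renamed = renamed.withColumn(name, F.trim(F.col(name)))
    return renamed


if __name__ == "__main__":
    spark = SparkSession.builder.appName("ingest-example").getOrCreate()
    raw = spark.read.json("s3://example-bucket/raw/customers/")  # hypothetical path
    standardize_columns(raw).write.mode("overwrite").parquet(
        "s3://example-bucket/curated/customers/"  # hypothetical path
    )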
What you will do
Build the infrastructure required for optimal ETL of data from a wide variety of data sources using Python, AWS services, and big data technologies
Create and maintain enterprise-wide integration pipelines leveraging Kinesis, Glue, Step Functions, Lambda, and general microservices / micro-batch architecture best practices (a minimal sketch of one such micro-batch handler appears after this list)
Manage databases running on PostgreSQL, Snowflake, Redshift, Redis,
Elasticsearch, and Neo4j
Monitor performance using CloudWatch and CloudTrail, and advise on infrastructure changes as needed
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
Work with stakeholders, including the Executive, DataOps, and Business teams, to assist with data-related technical issues and support their data infrastructure needs.
Create data tools that help analytics and data science team members build and optimize our enterprise data hub into an innovative industry leader.
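As an illustration of the micro-batch integration pattern mentioned above, here is a minimal sketch of a Kinesis-triggered AWS Lambda handler, assuming a hypothetical landing bucket and key prefix; it simply decodes a batch of records and writes them to S3 as JSON Lines.

# Illustrative sketch only: a Kinesis-triggered Lambda handler that decodes a
# micro-batch of records and lands them in S3. Bucket and prefix are hypothetical.
import base64
import json
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "example-integration-landing"  # hypothetical bucket


def handler(event, context):
    rows = []
    for record in event["Records"]:
        # Kinesis delivers each record payload base64-encoded
        payload = base64.b64decode(record["kinesis"]["data"])
        rows.append(json.loads(payload))
    if rows:
        body = "\n".join(json.dumps(r) for r in rows)
        s3.put_object(
            Bucket=BUCKET,
            Key=f"kinesis-microbatch/{uuid.uuid4()}.jsonl",  # hypothetical prefix
            Body=body.encode("utf-8"),
        )
    return {"processed": len(rows)}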
Who you are
4-6 years of experience implementing production systems in the cloud (preferably AWS)
Understanding of database design (both SQL and NoSQL)
Experience with object-oriented / functional scripting languages and libraries: Java, Python, PySpark, Pandas
Experience with stream-processing systems: Spark Streaming, Kafka, Kinesis, etc.
Excellent analytical and problem-solving skills
Application integration experience leveraging microservices and micro-batching
Experience with data cleansing, data wrangling, data quality, standardization, transformations, etc.
Experience with AWS cloud services: EC2, S3, EMR, API Gateway, IAM, Lambda, SQS
Experience with data pipeline and workflow management tools: Luigi, Airflow, StreamSets, etc. (see the Airflow sketch after this list)
Experience with relational SQL and NoSQL databases, including PostgreSQL, MSSQL, Redis, MongoDB
Experience with build and CI/CD tooling: GitHub, Bitbucket, Jenkins, Jira, Terraform
Advanced working SQL knowledge and experience with relational databases, both operational databases and data warehouses
Strong analytic skills related to working with unstructured datasets
Prior experience with Master Data Management is a plus
BS / MS in Math, Computer Science, or equivalent experience
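As an illustration of the workflow management tools listed above, here is a minimal Airflow DAG sketch; the DAG id, schedule, and task callables are hypothetical placeholders rather than a prescribed design.

# Illustrative sketch only: a minimal Airflow DAG of the kind used to
# orchestrate a daily batch ingestion. Names and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**_):
    print("pull raw files from the source system")  # placeholder step


def load(**_):
    print("load standardized data into the warehouse")  # placeholder step


with DAG(
    dag_id="example_daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task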