Senior AWS Data Engineer
Calgary, AB, CA

Summary:

Parkland Corporation is deeply invested in building a next-generation Enterprise Data, Digital & Analytics capability, with an aspiration to improve our value to customers, enhance our customers' experiences, and drive profitable growth and industry leadership for Parkland.

Parkland is seeking a Senior AWS Data Engineer to be part of our newly forming Enterprise Digital Team, with skill sets spanning machine learning, AI, statistics, data engineering, and full-stack development.

You will work as part of a high-calibre team on solutions across Parkland's business areas, including Supply, Pricing and Loyalty, Retail, Commercial, Trading, and Refining, to deliver high-value solutions.

As Canada and the Caribbean's largest, and one of America's fastest-growing, independent suppliers of fuel and marketing products, and a leading convenience store operator, Parkland's operations provide a rich and varied set of analytics opportunities, including over one million retail transactions per day.

Are you an experienced AWS Data Engineer who is passionate about building scalable, enterprise-level systems? We are looking for a Data Engineer to play a key role in building next-generation tools and solutions.

In addition to technical expertise, you will invest time to understand the needs of the business, the data behind it, and how to transform information into technical solutions that allow the business to act.

The role will build data pipelines to stream data from multiple sources, both internal and external to Parkland, leveraging proprietary AWS services.

The ideal candidate will know how to design logical schemas that organize data in a meaningful, efficient way and understand how to build scalable and maintainable solutions.

The candidate is an expert at data modeling, ETL design, and business intelligence tools, and has hands-on knowledge of databases such as Redshift, Aurora, and RDS.

The role will partner closely with internal customers to invent new technical solutions to highly complex sustainability data analytics problems.

Key Responsibilities:

  • You will be the authoritative source of enterprise metadata and a key member of the solutions team for applications that put Parkland strategies into action to better serve our customers.
  • You should have deep expertise in the design, creation, management, and business use of large datasets, across a variety of data platforms.
  • You should have excellent business and interpersonal skills to be able to work with business owners to understand data requirements, and to build ETL to ingest the data into the data lake.
  • You should be an authority on crafting, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data lake.
  • Above all you should be passionate about working with huge data sets and someone who loves to bring datasets together to answer business questions and drive growth.

  • Translate business and functional requirements into robust, scalable, operable solutions that work well within the overall data architecture.
  • Design, develop, implement, test, document, and operate large-scale, high-volume, low-latency applications.
  • Design data integrations and a data quality framework.
  • Develop and maintain scalable data pipelines and build out new API integrations.
  • Implement data structures using best practices in data modeling, ETL / ELT processes, SQL, and Oracle.
  • Manage stakeholder communication, prioritization of tasks and on time solution delivery.
  • Participate in the full development life cycle, end-to-end, from design, implementation and testing, to documentation, delivery, support, and maintenance.
  • Design and evaluate open-source and vendor tools for data lineage.
  • Work closely with all business units and engineering teams to develop a strategy for long-term data platform architecture.
  • Define company data assets (data models) and develop Spark, Spark SQL, and Hive SQL jobs to populate data models.
  • Experience with SQL Server, Redshift, AWS Glue, Python, TensorFlow, pandas, or other machine learning tools in AWS is an advantage.
  • Produce comprehensive, usable dataset documentation and metadata.
  • Design and develop operational and analytical reports to meet customer needs.
  • Evaluate and make decisions around the use of new or existing software products and tools.
  • Mentor junior data engineers.

Qualifications:

  • This position requires a Bachelor's degree in Computer Science or a related technical field, and 7+ years of relevant professional experience.
  • 5+ years of work experience with AWS Tech Stack, ETL, Data Modeling, and Data Architecture.
  • 3+ years of relevant experience in data engineering roles.
  • 7+ years of database experience in database design, data modeling, writing advanced SQL queries, data warehousing in Redshift, MySQL, or other relational database systems.
  • 3+ years of coding experience in scripting languages such as Python, Ruby, Perl, or Java.
  • 1+ years of experience creating visual data representations with tools such as Tableau, QuickSight, or other BI platforms.
  • Expert-level skills in writing and optimizing SQL.
  • Experience with Big Data technologies such as Hadoop, Hive, or Spark.
  • Experience operating very large data warehouses or data lakes.

Preferred Qualifications:

  • Authoritative in ETL optimization: designing, coding, and tuning big data processes using Apache Spark or similar technologies.
  • Experience with building data pipelines and applications to stream and process datasets at low latencies.
  • Demonstrated efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data.
  • Sound knowledge of distributed systems and data architecture (e.g., Lambda architecture): design and implement batch and stream data processing pipelines, and know how to optimize the distribution, partitioning, and MPP of high-level data structures.
  • Knowledge of Engineering and Operational Excellence using standard methodologies.
  • Strong SQL skills, with experience writing complex SQL, materialized views, and high-performance queries.
  • Experience with AWS services such as DMS, Redshift, Redshift Spectrum, S3, and RDS.
  • Experience with Big Data Technologies.
  • Strong knowledge of data management fundamentals and data storage principles.
  • Experience in working and delivering end-to-end projects independently.
  • Knowledge of best practices and IT operations for an always-up, always-available service.
  • Experience with or knowledge of Agile software development methodologies.
  • Excellent problem-solving and troubleshooting skills.
  • Process-oriented with great documentation skills.
  • Excellent oral and written communication skills across diverse audiences, with a keen sense of customer service.
  • Ability to manage and communicate data warehouse plans to internal clients.
  • Experience designing, building, and maintaining data processing systems.
  • Experience working with either a MapReduce or an MPP system at any size or scale.
We Offer:

  • A safety focused work environment and ongoing safety training.
  • A share in our success through the Employee Share Purchase Plan and 100% company matching.
  • Flexible medical and dental packages, a Health Care Spending Account, along with a supportive Employee and Family Assistance Program.
  • In-house learning and development opportunities, leadership training, international opportunities.
  • An employee referral program: earn up to $2,000 for your referral.
We thank all candidates in advance for their interest; however, only those being considered will be contacted.
