Big Data Engineer II
Amazon
Vancouver, BC, CA
3d ago
Source: BCJobs.com

DESCRIPTION

The Tech360 Data Platform team's vision is to help customers handle the full life cycle of data at all levels of granularity, simplify data collection, integration, and aggregation of AWS data assets, and provide services (compute, storage, security) to access datasets at scale.

We collect and process billions of usage and billing transactions every single day and relate them to the largest data feed supported by Salesforce.com. We transform this raw data into actionable information in the Data Lake and make it available to our internal service owners to analyze their business and serve our external customers.

We are truly leading the way to disrupt the big data industry. We are accomplishing this vision by bringing to bear Big Data technologies like Elastic MapReduce (EMR), in addition to data warehouse technologies like Spectrum, to build a data platform capable of scaling with the ever-increasing volume of data produced by AWS services.

You will have the ability to craft and build Tech360's data lake platform and supporting systems for years to come.

You should have deep expertise in the design, creation, management, and business use of large datasets, across a variety of data platforms.

You should have excellent business and interpersonal skills to work with business owners to understand their data requirements, and to build ETL pipelines that ingest data into the data lake.

You should be an authority on designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data lake.

Above all you should be passionate about working with huge data sets and someone who loves to bring datasets together to answer business questions and drive growth.

BASIC QUALIFICATIONS

  • This position requires a Bachelor's Degree in Computer Science or a related technical field, and 5+ years of meaningful employment experience.
  • 5+ years of work experience with ETL, Data Modeling, and Data Architecture.
  • Expert-level skills in writing and optimizing SQL.
  • Experience with Big Data technologies such as Hive or Spark.
  • Proficiency in a scripting language such as Python, Ruby, or shell scripting.
  • Experience operating very large data warehouses or data lakes.

PREFERRED QUALIFICATIONS

  • Expertise in ETL optimization: designing, coding, and tuning big data processes using Apache Spark or similar technologies.
  • Experience with building data pipelines and applications to stream and process datasets at low latencies.
  • Demonstrated efficiency in handling data: tracking data lineage, ensuring data quality, and improving data discoverability.
  • Sound knowledge of distributed systems and data architectures (e.g., lambda architecture): able to design and implement batch and stream data processing pipelines, and to optimize the distribution, partitioning, and MPP of high-level data structures.
  • Knowledge of Engineering and Operational Excellence using standard methodologies.

Amazon.com is an Equal Opportunity Employer - Minority / Women / Disability / Veteran / Gender Identity / Sexual Orientation.
