Hadoop Engineer
Bell Canada
Toronto, ON, CA

Bell is a truly Canadian company with a 137-year track record of success. We are defined by the passion of our team members and their belief in our company’s vast potential.

To ensure we continue to be recognized as Canada’s leading communications company, we’re committed to finding and developing the next generation of leaders.

This means creating best-in-class career and development opportunities for our employees.

If you’re passionate, driven and find yourself seeking interesting work, new challenges and continuous learning opportunities, then we want you to join our team.

An integral part of our customer service focus is our contact centres across Canada, which support all of our Bell Mobility and Bell Residential Services customers.

Our Customer Operations team is responsible for strategy, design and delivery of core call centre tools and processes, including queue strategy, routing, and call monitoring tools.

We have been building our Business Intelligence team and have made tremendous strides in creating the BEST BI environment this industry has seen! As a result, we've been able to provide strategic guidance and intelligence that has contributed to Bell's success.

If you want to work with the latest and greatest BI tools, including best-in-class Teradata, SAS and Hadoop, all within an Agile methodology environment, then this may be the role for you!

Our people are empowered to make big things happen and are supported by growth, training and personal development opportunities.

About the Role:

The Hadoop Developer will be a key member of the Bell Business Intelligence Big Data Team and work on the Hadoop platform.

The Hadoop Developer will work closely with Hadoop Administrators, Data Scientists, and business stakeholders. Responsibilities include, but are not limited to:

  • Develop high-performance data processing pipelines
  • Partner with Business Analysts and internal customers to improve our data coverage and analytic capabilities
  • Optimize ETL code to produce production-ready code
  • Document all development code
  • Test new tools in the Hadoop ecosystem
  • Cross-train in advanced analytical techniques
  • Along with the rest of the team, actively research and share learnings and advancements in the Hadoop space, especially related to development
  • Take the initiative to research, learn and recommend emerging technologies
Skills Required:

  • Experience working with Apache Spark, Kafka and other big data technologies
  • Experience coding in a variety of languages, such as Python, Scala and Java
  • Experience developing Big Data ingestion frameworks, or experience working with ingestion tools
  • Demonstrated analytical and problem-solving skills, particularly those that apply to a Big Data environment
  • Experience with data pipeline ETL tools such as Talend (an added advantage)
  • Experience with shell scripting and Python (an added advantage)
  • Previous Java experience is an added asset
Competencies:

  • Strong communication skills
  • Self-motivated
  • Willingness to learn
  • Excellent planning and organizational skills
  • Bilingualism is an asset (English and French)
