Americas-Architecture Consultant-PS
Abacus Service Corporation
Toronto, ON

Description: Downtown Toronto


Security clearance preferred (Canadian Government clearance performed by the RCMP)

Position Title: Data Engineer / Architect (Big Data)

Role Summary:

At the Teradata customer, the Data Architect plays an important role, striving to reconcile the corporate data architecture with business requirements and with the overall IT architecture as defined by the Application Services Team.

In this role, the Data Architect (Big Data) will perform analysis and provide designs and solutions using Big Data technologies for various projects, consistent with the corporate data architecture.

In discharging these duties, the Data Architect must provide leadership and guidance in the research, experimentation, business analysis, and use of Big Data technology, including its architecture, integration, and capabilities.

The Data Architect must also be able to work closely with users, application / systems designers, and developers on various projects and support teams.

The candidate is expected to have solid theoretical knowledge as well as hands-on practical experience with every step of the enterprise architecture process, from high-level logical design to the final implementation of the envisioned architecture.

Core Responsibilities:

  • Provide and influence Big Data design and architecture
  • Manage data loading into the Hadoop ecosystem
  • Develop Big Data processes for data modeling, mining, and production
  • Perform Big Data modeling leveraging Hadoop database technologies
  • Perform exploratory data analysis: query and process Big Data, provide reports, and summarize and visualize the data
  • Consult, collaborate, and recommend solutions for batch and streaming use case patterns
  • Maintain security and data privacy in data flow designs
  • Influence and evolve Big Data security models
  • Capture, maintain and integrate technical metadata in a Big Data ecosystem and external metadata repositories
  • Assist and enable the integration of business metadata in a Big Data ecosystem and external metadata repositories
  • Participate in PoC / PoT efforts to integrate new Big Data management technologies, software engineering tools, and new patterns into existing structures
  • Research opportunities for data acquisition and new uses for existing data
  • Create custom software components (e.g. UDFs) and analytics applications
  • Create Big Data warehouses that can be used for reporting or analysis by data scientists
  • Influence / recommend ways to improve data reliability, efficiency and quality
  • Collaborate with other data management and IT team members on project goals
  • Document detailed Big Data design solutions conformant to enterprise standards, architecture and technologies
  • Oversee handover to operational teams
  • Monitor Big Data databases to ensure accurate and appropriate use of data and perform quality control of database activities
  • Troubleshoot and correct problems discovered in Big Data databases
  • Follow change management procedures and help to create policies and best practices for all Big Data environments
  • Maintain involvement in continuous improvement of Big Data solution processes, tools and templates
  • Create and publish design documents, usage patterns, and cookbooks for technical community

Experience:

  • Experience with the Hortonworks Data Platform (version 2.6)
  • Experience with the Hadoop ecosystem and toolset: Sqoop, Spark, HDFS, Hive, HBase, etc.
  • Experience with Big Data streaming frameworks and tools (Spark Streaming, Storm, Kafka, etc.)
  • Experience developing Hadoop integrations (batch or streaming) for data ingestion, data mapping and data processing capabilities
  • Experience with Hadoop security frameworks: Ranger
  • Experience with Hadoop metadata frameworks and security policies: Ranger, Atlas
  • Experience automating data pipelines in a Big Data ecosystem
  • Experience programming in both compiled languages (Java, Scala) and scripting languages (Python or R)
  • Experience with SDLC methods (e.g. agile, iterative, waterfall)
  • Experience with DevOps or Continuous Delivery tools and processes a plus
  • Experience working in a cloud IaaS / PaaS environment a plus
  • Experience in designing solutions for Big Data warehouses
  • Experience in Big Data performance analysis, tuning and capacity planning
  • Experience using a source code management / version control system
  • Experience in data profiling and analysis is a plus
  • Experience in designing business intelligence systems, dashboard reporting, and analytical reporting is a plus

Technical Skills:

  • Proficiency in SQL, NoSQL, relational database design and methods
  • Proven understanding of batch-oriented Hadoop frameworks and patterns: Sqoop, Hive, HDFS, Pig
  • Proven understanding of stream-oriented Hadoop frameworks and patterns: NiFi, Kafka, HBase
  • Experience with ETL tools and methodologies (Informatica, Talend, etc.) a plus
  • Analytical and problem-solving skills, applied to the Big Data domain
  • Ability to write high-performance, reliable and maintainable data flows
  • Demonstrated understanding of unstructured data designs
  • Knowledge of Big Data scalability principles
  • Knowledge of Big Data security models and policies: Ranger, Atlas
  • Knowledge of BI tools for Big Data: Cognos, Tableau
  • Knowledge of analytics and data science tooling for Big Data: R, Python, SAS, SPSS
  • Ability to work with development teams in an Agile environment to develop new features and modify existing ones with a "design for the future" mentality
  • Able to interact with client projects in cross-functional teams
  • Strong interpersonal, written and oral communication skills
  • Proven ability to effectively prioritize and execute tasks in a team-oriented, collaborative workplace
  • Self-reliant and comfortable with a rapidly changing environment
  • Understanding of Shared Services operating model including engagement, interaction, time recording, monthly charge back cycles, and financial calendaring
  • Customer service oriented
  • Ability to quickly assimilate new skills

Soft / Leadership Skills:

  • Solution and delivery oriented with ability to work on multiple projects simultaneously
  • Strong communications and presentation skills with a proactive, energetic approach
  • Project Management and coordination skills
  • Resourceful, organized and independent self-starter with an enquiring mind
  • Conceptual thinker with the ability to understand business objectives and translate them into business solutions
  • Recognition and promotion of the value of Data Architecture Design Patterns
  • Ability to interact with different levels of technical and management personnel
  • Working experience in a large organization

Qualifications:

  • Bachelor's degree in computer science, information sciences, or a related field
  • 7+ years of IT experience within an enterprise operating environment, preferably in a financial services setting, working with ETL tools, data warehouses, RDBMS, and Hadoop / Big Data vendor technologies
  • 5+ years of proven Hadoop / Big Data implementation, operational, and development experience
  • If yes, have they worked continuously for over 12 months? No
  • Plan to Hire? No
  • Work Hours: 40
  • PS or Non-PS? PS
  • Hours per Day: 8
  • Hours per Week: 40
  • Total Hours: 4,176.00
