We are a fast-growing and pioneering people analytics company that is transforming the financial workplace. We use cutting-edge software and machine learning to generate previously unidentifiable insights into employee behavior and performance.
We have been recognized by renowned companies such as Amazon Web Services and Google Cloud for our achievements in AI, big data analytics, and machine learning.
We have also been included in the Forbes FinTech 50, CB Insights AI 100, and Tech Nation’s prestigious Future 50 program.
Our goal is to help businesses achieve better outcomes by developing and delivering data-driven solutions for compliance, CRM, HR, and workplace productivity.
We also aim to rapidly expand our worldwide customer base to include companies across all major industries.
About the Role
The engineering heart of Behavox is the Data Operating Platform (DOP).
DOP covers the fundamental aspects of data processing, handling massive volumes of data from many different sources; simply storing that data takes substantial engineering effort.
And that is before tackling the even tougher tasks of processing it and making sense of it.
Our platform is a powerful tool that helps our clients work through billions of data items by searching, filtering, and visualizing relationships between entities in the system.
We've even built our own IDE inside it!
As part of the Platform team, you will be responsible for DOP feature development and for the platform's availability, stability, and performance.
"Magic for lower operational costs" is this team's goal.
Responsibilities
Lead development of DOP components for the Big Data product;
Decompose, estimate, and deliver tasks on agreed timelines within a small team, following a micro-release model for a high-load production environment;
Contribute to improving the development process by highlighting weak points and proposing better ways of getting things done;
Contribute to the infrastructure code base when task implementation requires it (DevOps as a cultural paradigm);
Optimize storage algorithms and schemas for the Big Data product (every byte matters when you work with petabytes of data);
Optimize distributed data-processing flows;
Design performance and storage consumption metrics;
Design the DOP API to meet product implementation needs;
Mentor, educate and support those around you.
Ideal Candidate Profile
5+ years of experience building scalable and reliable distributed back-ends for web applications;
Solid Computer Science fundamentals - data structures, architecture, concurrency and various design patterns;
Strong knowledge of programming patterns in the high-load and Big Data field;
Strong knowledge of core Java and the Spring and Hibernate frameworks;
Experience with Apache HBase / Elasticsearch / Apache Spark;
Strong knowledge of relational and NoSQL databases;
Comfortable working with Linux and the command line;
Experience autonomously leading an implementation effort in a specific area over an extended period, either as a sole specialist or as the lead of a team.
What We Offer
Passionate team members who are applying cutting-edge tech to data and analytics;
Fully covered health benefits for employees and their families;
Generous time-off policy;
Flexible work schedule.
Interview Process
Interview with the hiring manager;
Take-home technical task;
Final interview with Product Team members and the CTO.