

It’s more than a job. It’s working for a purpose.

Data Engineer

Blackline Safety is looking for a Data Engineer, reporting to the Director – Data Engineering, to help build and drive the Blackline Data Platform, focused on driving analytics and innovation. We have a world-class connected safety system, and we are building a world-class data platform to match it.

You will be responsible for building real-time data streaming and transformation jobs using Java/Scala on the EMR-Spark platform and next-generation big data technologies.
Responsibilities include the development, build, and optimization of our new and existing data pipeline architecture, catered to internal and external customers. The pipeline needs to be scalable, robust, and repeatable, and to use the industry's latest best practices in big data, CI/CD, and DataOps. You will work on expanding the Blackline Data Platform alongside Product, Architecture, and Engineering teams focused on data-driven decisions, products, and analytics. The data architecture, framework, and technology stack include IoT, EC2, RDS MySQL, Redshift, Spark, EMR, Kafka, Kinesis (Streams, Firehose, Analytics), Lambda, data lakes, Delta Lake, event buses, DataDog, CI/CD pipelines, CloudFormation, Jenkins, Terraform, Python, Java, and more.

JOB ID: PSDATENG

Who are you?

You believe big data helps businesses solve real-world problems. You have an aptitude for engineering, big data architecture, and manipulating various data frameworks, including but not limited to data warehousing, modeling, analytics, and integrations from various data sources. You are curious and innovative, are not bound to a single tool or platform, and can solve problems in creative ways. You are proficient in different programming languages such as SQL, Java, Python, and Scala, and aspire to learn alternate ways of resolving issues. You are passionate about everything data and look to leverage data, analytics, and insights to solve business problems. You are a self-motivated team player who loves to collaborate, is creative, and is excited to make a difference to people and products used in the real world.

Requirements

  • 5+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
  • Experience building, architecting, and optimizing 'big data' data pipelines leveraging a mix of Java, Scala, Python, Ruby, Glue, SageMaker, EMR, Lambda, Step Functions, and CloudFormation/Terraform.
  • Experience with big data tools (Hadoop, Spark, Flink, Kafka, DataDog, EventBus) and AWS cloud services (EC2, EMR, RDS, Redshift, Redshift Spectrum), including consuming and building APIs.
  • Well versed in designing ETL frameworks built on Apache Spark streaming and batch processing.
  • Well versed in integrating data from diverse sources and formats into the data lake in structured formats such as Parquet and CSV.
  • Hands-on experience implementing cloud data warehouses (Snowflake, Redshift) and big data technologies, including knowledge of DWH principles around facts, dimensions, incremental loads, CDC, and SCD.
  • Well versed in Kafka streaming pipeline development and maintenance.
  • A successful history of manipulating, processing, and extracting value from large, disconnected datasets using ETL/ELT methodologies and technologies such as Delta Lake, Databricks, and Matillion, and of integrating data from silos into a master data source.
  • Well versed in writing Spark transformation jobs that handle complex nested JSON formats and join streaming data with Parquet files in the data lake.
  • Good knowledge of data lake administration using AWS Lake Formation and roles.
  • Ability to support production workloads (on S3, Hadoop, Hive, Spark, Flink, Kafka, etc.) and build processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • Well versed in translating business requirements into data quality (DQ) checks and best practices that enable seamless build and deployment.
  • Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets
  • Experience building data products incrementally and integrating and managing datasets from multiple sources
  • Experience providing technical leadership and mentoring other engineers on data engineering best practices.
  • Good working knowledge of AWS message queuing, stream processing, and highly scalable 'big data' data stores.
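Several of the requirements above center on handling complex nested JSON before it lands in the data lake. As a minimal sketch of the flattening logic such a transformation job typically performs, written in plain Python rather than PySpark so it stands alone (the device-event record and its field names are illustrative, not Blackline's actual schema):

```python
import json

def flatten(record, prefix=""):
    """Recursively flatten nested dicts into a single-level dict
    with dot-delimited keys, the shape a transformation job might
    produce before writing columnar Parquet files."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat

# Hypothetical nested device event, for illustration only.
event = json.loads("""
{
  "device_id": "G7-1234",
  "gas": {"type": "H2S", "ppm": 12.5},
  "location": {"lat": 51.05, "lon": -114.07}
}
""")

row = flatten(event)
print(row)
# {'device_id': 'G7-1234', 'gas.type': 'H2S', 'gas.ppm': 12.5,
#  'location.lat': 51.05, 'location.lon': -114.07}
```

In Spark itself the same idea is usually expressed by selecting nested columns directly (e.g. `col("gas.ppm")`) or by using `explode` when the nesting contains arrays.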

About Blackline Safety

Blackline Safety is a world leader in the development and manufacturing of wirelessly connected safety products. We offer the broadest and most complete portfolio available in the industry. Our products are designed to save lives and we monitor personnel working alone in populated areas, complex indoor facilities, and the remote reaches of our planet. Blackline’s products are used to keep people safe in the event of falls, missed check-ins, man-downs and exposure to explosive or toxic gas. Our design, development, sales, marketing, support, and production are all performed in-house at our headquarters in Calgary, AB. Blackline Safety is a publicly traded company (TSX: BLN). To learn more about our company visit www.blacklinesafety.com

Blackline Safety is powered by the diversity of our talented employees. We are an equal opportunity employer. We consider all applicants, regardless of age, religion, race, color, ancestry, gender, gender identity or expression, disability, national origin, or sexual orientation. We enthusiastically encourage all individuals to apply for positions that fit their passions. Come join our inclusive team and start collaborating with us on exciting projects!

Apply today

To apply for this position, please fill out the following form: