
Data Engineer

Vettura LLC

Naperville, Illinois


Full Job Description

About the job:

    • Design, document, and develop distributed, event-driven data pipelines with cloud-native data stores such as Snowflake, Redshift, BigQuery, or ADW (an illustrative sketch follows this list).
    • Consult with business, product, and data science teams to understand end-user requirements and analytics needs, and implement the most appropriate data platform technology and scalable data engineering practices.
    • Prepare data mapping, data flow, production support, and pipeline documentation for all projects.
    • Ensure completeness of source-system data by performing profiling analysis and triaging issues reported in production systems.
    • Facilitate fast and efficient data migrations through a deep understanding of the design, mapping, implementation, management, and support of distributed data pipelines.
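
For illustration only (not part of the original posting): a minimal sketch of the kind of event-driven pipeline described above, consuming JSON events from a Kafka topic and landing them in a warehouse staging table. The topic name, broker address, Snowflake credentials, and table are all hypothetical placeholders.

```python
# Minimal sketch, for illustration only: consume events from a Kafka topic
# and land them in a warehouse staging table. Topic, broker, credentials,
# and table names are hypothetical; uses kafka-python and
# snowflake-connector-python.
import json

from kafka import KafkaConsumer
import snowflake.connector

consumer = KafkaConsumer(
    "orders",                               # hypothetical topic
    bootstrap_servers=["localhost:9092"],   # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",  # placeholders
    warehouse="ETL_WH", database="RAW", schema="EVENTS",
)
cur = conn.cursor()

for message in consumer:
    event = message.value
    # Land each raw event as-is; downstream SQL models handle transformation.
    cur.execute(
        "INSERT INTO orders_raw (event_id, payload) VALUES (%s, %s)",
        (event["id"], json.dumps(event)),
    )
```

Landing raw events untransformed keeps the consumer simple and makes the pipeline replayable from the source topic.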

Requirements

  • Minimum of a Bachelor’s degree, or its equivalent, in Computer Science, Computer Information Systems, Information Technology and Management, Electrical Engineering, or a related field.
  • You have a strong background in distributed data warehousing with Snowflake, Redshift, BigQuery, and/or Azure Data Warehouse, and have productionized real-time data pipelines built on event-driven architecture (EDA) leveraging Kafka or a similar service.
  • You know what it takes to build and run resilient data pipelines in production and have experience implementing ETL/ELT to load a multi-terabyte enterprise distributed data warehouse.
  • You have a strong understanding of, and exposure to, data mesh principles for building modern data-driven products and platforms.
  • You have expert programming/scripting knowledge in building and managing ETL pipelines using SQL, Python, and Bash (a minimal sketch follows this list).
  • You have implemented analytics applications using multiple database technologies, such as relational, multidimensional (OLAP), key-value, document, or graph.
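
Likewise for illustration only: a minimal ELT-style load in Python, which lands a raw CSV extract unchanged and then transforms it with SQL inside the database. sqlite3 stands in for a cloud warehouse so the sketch runs anywhere; the extract file, columns, and table names are assumptions.

```python
# Minimal ELT sketch, for illustration only. sqlite3 stands in for a cloud
# warehouse; the extract file, columns, and table names are hypothetical.
import csv
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Load: land the raw extract without transformation.
cur.execute("CREATE TABLE orders_raw (order_id TEXT, amount TEXT, ts TEXT)")
with open("orders_extract.csv", newline="") as f:  # hypothetical extract
    rows = [(r["order_id"], r["amount"], r["ts"]) for r in csv.DictReader(f)]
cur.executemany("INSERT INTO orders_raw VALUES (?, ?, ?)", rows)

# Transform: cast and aggregate in SQL, inside the database (the "T" in ELT).
cur.execute("""
    CREATE TABLE daily_revenue AS
    SELECT date(ts) AS day, SUM(CAST(amount AS REAL)) AS revenue
    FROM orders_raw
    GROUP BY date(ts)
""")
conn.commit()
```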
