We are looking for a talented Lead Data Engineer with a passion for architecting and building analytics-ready data assets in a scalable, easy-to-deploy, easy-to-maintain cloud data platform. The Lead Data Engineer will co-own the data strategy and architect the right data platform to serve business needs. This person will lead the development of the data pipelines and data products that marketers, analysts, and data integrators across Brado’s clients need to accomplish their goals. This person will also contribute to the vision for our modern data infrastructure and will work closely with fellow engineers, data scientists, and reporting and measurement specialists to establish best practices for creating systems and data products that the business will use.
The Lead Data Engineer should possess deep technical skills, be comfortable owning our data strategy and data infrastructure, and be excited about building a strong data foundation for the company. This person will direct the work of other data engineers and develop junior talent.
While this is a remote position, ideal candidates will live near one of our collaboration centers in St. Louis, Chicago, or the Dallas/Ft. Worth area, so that they can be on-site with teams regularly.
Key Areas of Responsibility
- Drive automation efforts across the data analytics team using Infrastructure as Code (IaC) with Terraform and Microsoft Bicep, configuration management, and Continuous Integration/Continuous Delivery (CI/CD) tools such as Jenkins.
- Work with internal infrastructure teams on monitoring, security, and configuration of the Azure environment and applications as they relate to data infrastructure.
- Identify data needs for our clients, our marketing team, and our data science team; understand specific requirements for metrics and analysis; and build efficient, scalable data pipelines to enable a data-driven marketing approach.
- Design, develop and maintain marketing databases, datasets, pipelines, and warehouses to enable advanced segmentation, targeting, automation, and reporting.
- Facilitate data integration and transformation requirements for moving data between applications, ensuring interoperability with database, data warehouse, and data mart environments.
- Assist with the design and management of our technology stack used for data storage and processing.
- Develop and implement quality controls and departmental standards to ensure that quality standards, organizational expectations, and regulatory requirements are met.
- Contribute to development and education plans covering data engineering capabilities, systems, standards, and processes.
- Anticipate future demands of initiatives related to people, technology, budget, and business within your department, and design and implement solutions to meet these needs.
- Communicate results and business impacts of insight initiatives to stakeholders within and outside of the company.
Requirements
- 7 years of experience with modern data engineering projects and practices: designing, building, and deploying scalable data pipelines, with 3+ years of experience deploying cloud-native solutions.
- Strong programming skills in Python, Java, or Scala, and their respective standard data processing libraries.
- 3 years of experience building data pipelines for AI/ML models using PySpark or Python.
- Experience building data pipelines with modern tools such as Databricks, Fivetran, and dbt.
- 2 years of experience and strong working knowledge of Databricks, with familiarity with lakehouse architecture and Delta Lake.
- At least 2 years of experience with Azure, SQL, Python, Docker/Kubernetes, CI/CD, and Git.
- Strong experience in establishing and maintaining relational databases, SQL, data warehouses, and ELT pipelines.
- Experience with Spark, Kafka, etc.
- Experience integrating data from core platforms such as Marketing Automation, CRM, and Analytics into a centralized warehouse. Knowledge of Marketo, Salesforce.com, and Google Analytics is a plus.
- Software development best practices, with strong rigor in high-quality code development, automated testing, and other engineering disciplines.
- Extensive knowledge of ETL and data warehousing concepts, strategies, and methodologies.
- Experience working with structured and unstructured data.
- Experience establishing real-time data pipelines and processing.
- Familiarity with Azure services such as Azure Functions, Azure Data Lake Storage, Azure Cosmos DB, Azure Databricks, and Azure Data Factory.
- Ability to deliver forward-thinking data and analytics solutions.
- Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently.
- BS in Computer Science, Engineering, Statistics, Informatics, Information Systems, or another quantitative field.
- Master's degree in Computer Science or an Engineering field preferred.
Benefits
- Health Care Plan (Medical, Dental & Vision)
- Retirement Plan (401k, IRA)
- Life Insurance (Basic, Voluntary & AD&D)
- Paid Time Off (Vacation, Sick & Public Holidays)
- Family Leave (Maternity, Paternity)
- Short Term & Long Term Disability
- Training & Development
- Work From Home