Data Engineer III @Valenz
Software Development
Salary unspecified
Remote Location
πŸ‡ΊπŸ‡Έ USA Only
Job Type full-time
Posted 3d ago


Role Description

As a Data Engineer III, you’ll be responsible for designing, building, and evolving scalable data systems that power analytics, product, and operational decision-making across the organization. You will operate as a senior individual contributor with end-to-end ownership of complex data initiatives, contributing directly to the architecture and evolution of our Databricks-based Lakehouse platform on Azure.

Things You’ll Do Here:

  • Own the design and implementation of scalable, production-grade data pipelines using Databricks, PySpark, SQL, and Python.
  • Operationalize machine learning workflows and feature pipelines.
  • Own and deliver complex, cross-functional data initiatives end-to-end, from ingestion and data modeling through production deployment and ongoing monitoring.
  • Design robust, reusable ETL frameworks using Delta Lake best practices (incremental processing, merge/upserts, schema evolution).
  • Diagnose and resolve performance challenges in distributed Spark workloads (data skew, shuffle, memory pressure, inefficient execution plans).
  • Build and enforce strong data quality practices, including validation frameworks, observability, and automated alerting.
  • Design and evolve data models across medallion architecture layers to support analytics and downstream applications.
  • Implement modern data ingestion patterns, including API-driven, event-based, and AI-assisted ingestion workflows.
  • Partner with analytics, architecture, and engineering teams to support advanced data use cases, including feature engineering and emerging machine learning workflows.
  • Evaluate and adopt new capabilities within Azure and Databricks (e.g., MLflow, Unity Catalog enhancements, platform optimizations) to improve scalability and developer productivity.
  • Contribute to architectural decisions and platform standards, balancing short-term delivery with long-term maintainability.
  • Write high-quality, well-tested, and maintainable code; lead by example through thoughtful code reviews.
  • Act as a go-to resource for diagnosing and resolving complex production issues across systems.
  • Mentor and elevate other engineers through collaboration, design discussions, and technical guidance.
  • Perform other duties as assigned.
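As an illustration of the "merge/upserts" pattern mentioned above: in practice this would be Delta Lake's `MERGE INTO` on Databricks, but the idempotency property it provides can be sketched framework-free in plain Python (function and field names here are hypothetical, for illustration only):

```python
def merge_upsert(target, updates, key="id"):
    """Upsert rows from `updates` into `target`, matching on `key`.

    Rows whose key already exists are overwritten; new keys are appended.
    Re-running the same batch yields the same result (idempotent), which
    is the core guarantee an incremental pipeline needs for safe retries.
    """
    merged = {row[key]: row for row in target}
    for row in updates:
        merged[row[key]] = row  # update-if-matched, insert-if-not
    return sorted(merged.values(), key=lambda r: r[key])


table = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
batch = [{"id": 2, "v": "b2"}, {"id": 3, "v": "c"}]
once = merge_upsert(table, batch)
twice = merge_upsert(once, batch)   # replaying the batch changes nothing
```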

Qualifications

  • 4+ years of experience in data engineering or a related field, with a track record of delivering production-grade data systems.
  • Strong hands-on experience with Databricks, Spark/PySpark, and distributed data processing at scale.
  • Deep understanding of Delta Lake and modern Lakehouse architecture patterns.
  • Proficiency in Python and SQL for large-scale data transformation and performance optimization.
  • Proven experience building incremental, idempotent, and highly reliable data pipelines.
  • Strong experience diagnosing and optimizing Spark workloads (partitioning strategies, AQE, caching, file sizing, query tuning).
  • Experience designing data models for analytics and downstream consumption (medallion architecture, dimensional modeling, or similar).
  • Experience implementing data quality, validation, and observability frameworks in production environments.
  • Familiarity with CI/CD, version control, and modern DataOps practices.
  • Experience supporting or integrating with machine learning workflows (feature pipelines, model inputs/outputs, or ML lifecycle support).
  • Familiarity with AI/ML concepts as applied to data engineering (intelligent ingestion, anomaly detection, automation).
  • Demonstrated ability to evaluate and adopt new technologies within cloud ecosystems (Azure, Databricks).
  • Strong communication skills and ability to collaborate with both technical and non-technical stakeholders.

Preferred Qualifications

  • Familiarity with event-driven architectures (e.g., streaming, message queues, or event hubs).
  • Experience working with healthcare data (claims, eligibility, provider, or clinical datasets).

Benefits

  • Generously subsidized company-sponsored Medical, Dental, and Vision insurance, with access to services through our own products, Healthcare Blue Book and KISx Card.
  • Spending account options: HSA, FSA, and DCFSA.
  • 401(k) with company match and immediate vesting.
  • Flexible working environment.
  • Generous Paid Time Off, including vacation, sick leave, and paid holidays.
  • Employee Assistance Program that includes professional counseling, referrals, and additional services.
  • Paid maternity and paternity leave.
  • Pet insurance.
  • Employee discounts on phone plans, car rentals, and computers.
  • Community giveback opportunities, including paid time off for philanthropic endeavors.
Before You Apply
πŸ‡ΊπŸ‡Έ Be aware of the location restriction for this remote position: USA Only
β€Ό Beware of scams! When applying for jobs, you should NEVER have to pay anything. Learn more.