[Hiring] Cloud Data Engineering - Snowflake Data Engineering @Zensar
Data and Analytics
Salary unspecified
Employment Type contract
Posted 2d ago

2d ago - Zensar is hiring for a remote Cloud Data Engineering - Snowflake Data Engineering role. πŸ’Έ Salary: unspecified. πŸ“ Location: EST (UTC-5), GMT (UTC+0), IST (UTC+5:30)

Role Description

The Senior Analytics Engineer (Data Products) will partner with the SVP of Data & Product Analytics, focusing on the following:

  • Engagement model: No people-management responsibilities.
  • Primary outputs:
    • Cross-product aggregated datasets
    • A semantic layer for metrics and curated datasets
    • Clear dataset documentation and definitions
  • Ways of working: Async-first with written specs, review cycles, and lightweight live syncs during overlap hours.
  • Access and tooling: Work performed in company-managed systems (e.g., Databricks) subject to onboarding and access approvals.

What You'll Do

  • Partner on Unity Catalog operations:
    • Organize catalogs and schemas, tighten naming conventions, and implement permission patterns.
    • Fix access or discoverability issues and document patterns.
  • Partner to deliver the silver and gold layer:
    • Design transformation logic, define table and metric definitions, review outputs, and validate results.
    • Contribute directly (SQL, notebooks, documentation).
  • Build cross-product aggregated datasets:
    • Implement canonical datasets that join and roll up measures and dimensions.
    • Optimize for consistent definitions and performance.
  • Operate datasets like products:
    • Version key tables and set clear expectations for data freshness and completeness.
    • Communicate changes before they affect downstream workflows.
  • Build and publish the semantic layer:
    • Implement metric definitions and a business glossary.
    • Publish examples for self-service in Databricks.
  • Partner on data contracts and quality checks:
    • Define data contracts, schema checks, and lineage tracking so downstream consumers can trust the data.
    • Define quality standards and support triage for issues.
  • Support self-service and answer questions:
    • Publish examples and documentation for safe querying.
    • Answer questions and unblock teams.
  • Keep documentation in sync with production:
    • Maintain dataset definitions and metric standards as tables evolve.
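
The data-contract and quality-check responsibilities above can be sketched as a minimal batch validator. This is an illustrative sketch only: the column names, types, and the non-negative-premium rule are invented for the example, not taken from the role; in Databricks this logic would typically live in Delta table constraints or a testing framework rather than plain Python.

```python
# Minimal sketch of a data contract: an expected schema plus one quality rule.
# All names and sample rows are hypothetical.

EXPECTED_SCHEMA = {"policy_id": str, "product": str, "gross_premium": float}

def validate(rows):
    """Return a list of violation messages; an empty list means the batch passes."""
    violations = []
    for i, row in enumerate(rows):
        # Schema check: every contracted column must be present with the right type.
        for col, typ in EXPECTED_SCHEMA.items():
            if col not in row:
                violations.append(f"row {i}: missing column {col!r}")
            elif not isinstance(row[col], typ):
                violations.append(f"row {i}: {col!r} is not {typ.__name__}")
        # Quality rule: premiums must be non-negative.
        if isinstance(row.get("gross_premium"), float) and row["gross_premium"] < 0:
            violations.append(f"row {i}: negative gross_premium")
    return violations

good = [{"policy_id": "P1", "product": "motor", "gross_premium": 120.0}]
bad = [{"policy_id": "P2", "gross_premium": -5.0}]  # missing column + negative value

print(validate(good))        # []
print(len(validate(bad)))    # 2
```

Running contracts like this on new or changed datasets, as the 60-to-90-day goals describe, lets issues be triaged before they reach downstream consumers.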

Qualifications

  • 6 to 10 years in data engineering, analytics engineering, or a closely related role.
  • Databricks experience, including hands-on work with Unity Catalog, Delta Lake, and SQL or notebook-based development.
  • Engineering-level proficiency in Python and SQL.
  • Solid understanding of medallion architecture (bronze to silver to gold).
  • Experience building and supporting semantic layers, data catalogs, or self-service data products in production.
  • Track record of building shared, cross-domain datasets.
  • Strong stakeholder management skills.
  • Comfortable with modern engineering workflows: Git-based version control, code review, and testing.
  • Strong written communication skills.
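
The medallion architecture (bronze to silver to gold) named in the qualifications can be illustrated with a toy pipeline. This is a plain-Python sketch under stated assumptions: real Databricks work would use Delta tables with SQL or PySpark, and the claim fields here are invented for illustration.

```python
# Toy bronze -> silver -> gold flow. Field names are hypothetical.

bronze = [  # raw ingested records, possibly dirty
    {"claim_id": "C1", "product": "motor", "amount": "100.0"},
    {"claim_id": "C2", "product": "motor", "amount": "50.5"},
    {"claim_id": None, "product": "home", "amount": "75.0"},  # bad record
]

# Silver layer: cleaned and typed; records without a key are dropped.
silver = [
    {**r, "amount": float(r["amount"])}
    for r in bronze
    if r["claim_id"] is not None
]

# Gold layer: business-level aggregate, one total per product.
gold = {}
for r in silver:
    gold[r["product"]] = gold.get(r["product"], 0.0) + r["amount"]

print(gold)  # {'motor': 150.5}
```

The gold layer is where the cross-product rollups and consistent metric definitions described in this role would live.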

Nice to Have

  • Experience with Databricks Genie or AI-BI features.
  • Familiarity with MCP (Model Context Protocol), LLM tool calling, or AI agent patterns.
  • Background in financial services, insurance, or reinsurance data.

How You'll Work

You will collaborate closely with the SVP of Data & Product Analytics in an async-first model. Strong written communication and proactive flagging of blockers are essential.

Success in the First 60 to 90 Days Looks Like

  • Improved Unity Catalog navigation and access for high-value areas.
  • First set of silver and gold tables in production, validated with actuarial users.
  • Live semantic layer with core metrics defined and curated datasets published.
  • Data contracts and basic validation checks running for new or changed datasets.
  • At least one cross-product dataset shipped and used by multiple downstream consumers.

Hours

The team runs a data workstream sync at 8:45am EST on Monday, Wednesday, and Friday. Availability during this time is preferred, with self-directed work outside of these hours.

Before You Apply
⚠ Be aware of the location restriction for this remote position: EST (UTC-5), GMT (UTC+0), IST (UTC+5:30)
β€Ό Beware of scams! When applying for jobs, you should never have to pay anything.