Role Description
CTG is seeking to fill a Databricks Architect position for our client.
Location: Remote
Duration: 12 months
Duties:
- Design, develop, and implement scalable data and analytics platforms using Databricks and modern cloud technologies
- Provide technical leadership in architecting end-to-end data solutions supporting data engineering, analytics, AI/ML, and business intelligence use cases
- Translate business and IT requirements into modern data architecture components using established modernization frameworks
- Lead the design and optimization of distributed data processing solutions leveraging Apache Spark and Delta Lake
- Collaborate with data engineers, data scientists, platform engineers, and business stakeholders to deliver secure, reliable, and high-performance solutions
- Define and enforce best practices for data integration, ETL/ELT pipelines, orchestration, and data modeling
- Ensure robust implementation of data security, governance, access control, and compliance within cloud environments
- Perform performance tuning, cluster configuration optimization, and cost management across Databricks environments
- Support DevOps and CI/CD implementation for data platforms and analytics workloads
- Contribute to modernization initiatives, including legacy system analysis, code restructuring, refactoring, and business logic extraction into reusable components
- Enable integration of legacy and modern systems using cloud-native services and APIs for reuse by systems of engagement
- Reuse and enhance digital modernization assets, methods, and collateral
- Work with cloud-native services such as AWS ECS, AWS Lambda, ElastiCache, and S3 as applicable
Qualifications
- Strong experience designing and implementing data platforms using Databricks
- Must have familiarity with AI and machine learning
- Deep knowledge of Apache Spark, Delta Lake, and distributed data processing concepts
- Proficiency in Python, Scala, or SQL for data engineering and analytics
- Experience with cloud platforms such as Azure (ADF, ADLS), AWS (S3, Glue), or GCP (GCS, BigQuery)
- Hands-on experience with data integration, ETL/ELT frameworks, and orchestration tools
- Strong understanding of data security, governance, and access control in cloud environments
- Experience with performance tuning, cluster configuration, and cost optimization
- Familiarity with DevOps and CI/CD practices for data platforms
- Ability to communicate complex technical concepts to non-technical stakeholders
- Experience with legacy modernization, code refactoring, and integration of legacy systems
Requirements
- 8+ years of experience in data engineering, data architecture, or analytics engineering roles
- 3+ years of hands-on experience with Databricks in production environments
- Proven experience architecting cloud-based data platforms at scale
- Demonstrated experience working on AI/ML-enabled data solutions
- Experience leading technical teams or acting as a lead architect in enterprise environments
Education
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field, or equivalent practical experience
- Excellent verbal and written English communication skills and the ability to interact professionally with a diverse group are required.
To Apply
To be considered, please apply directly to this requisition using the link provided. For additional information, please contact Laura Dominguez at [email protected]. Kindly forward this to any other interested parties. Thank you!
The expected base pay for this position ranges from $80.00 to $85.00 per hour. Pay offers are based on a wide range of factors including relevant skills, training, experience, education, market factors, and, where applicable, licensure or certifications obtained. In addition to pay, a competitive benefit package is also offered.