[Hiring] Principal Security Architect, AI Governance and Compliance @Businessolver
Artificial Intelligence
Salary: USD 132,000 - 165,000 per year
Remote, 🇺🇸 USA Only
Employment Type: Full-time
Posted 1wk ago

Businessolver is hiring a remote Principal Security Architect, AI Governance and Compliance. 💸 Salary: USD 132,000 - 165,000 per year 📍 Location: USA

Role Description

The Principal Security Architect, AI Governance and Compliance owns the technical and operational control layer for AI governance and compliance across the company’s AI-enabled capabilities. This role ensures that AI systems are supported by the right technical standards, review workflows, control points, documentation, evidence, and risk management practices so they can be deployed and operated safely.

This leader works across Security, Legal, Privacy, Product, Engineering, and Architecture to establish practical governance mechanisms that fit how AI systems are designed, built, integrated, monitored, and changed over time. The role requires technical depth in AI system lifecycles, software delivery practices, model and prompt controls, vendor assessments, and evidence-based compliance operations.

The Gig

  • Technical Governance for AI Systems:
    • Define and maintain the governance framework for AI-enabled capabilities across the software and model lifecycle, including intake, design review, implementation controls, testing expectations, deployment review, and ongoing monitoring.
    • Establish technical control requirements for AI systems, including documentation standards, model and prompt inventories, traceability, approval paths, and change management expectations.
    • Ensure governance requirements are practical for engineering teams and embedded into delivery workflows where possible.
  • AI Compliance Operations:
    • Operate the processes required to support internal and external compliance expectations for AI-enabled products and internal AI use cases.
    • Maintain evidence, decision records, inventories, risk assessments, and control mappings needed for audits, client diligence, investor diligence, and internal reviews.
    • Coordinate responses to AI-related diligence requests and partner with subject matter experts to ensure responses are accurate and supportable.
  • Risk Controls and Review Paths:
    • Partner with Security, Privacy, Legal, and Engineering to identify and manage risks related to model behavior, data handling, access patterns, third-party AI services, output quality, explainability, and system changes.
    • Build and run review paths for new AI use cases, material updates, and exceptions requiring elevated scrutiny.
    • Define escalation criteria, mitigation tracking, and approval workflows for higher-risk AI implementations.
  • Technical Partnership with Product and Engineering:
    • Work directly with product and engineering teams to translate policy and control requirements into technical implementation guidance.
    • Help teams design compliant approaches for logging, testing, access control, human review, fallback behavior, documentation, and monitoring.
    • Influence architecture and delivery decisions so governance is built into systems rather than applied after the fact.
  • Inventory, Documentation, and Evidence Management:
    • Maintain current inventories of AI systems, models, vendors, prompts, datasets, and related technical dependencies as required by company governance standards.
    • Ensure documentation is complete and usable across lifecycle stages, including design intent, data usage, review outcomes, testing artifacts, and operational controls.
    • Improve the tooling and process model for collecting, maintaining, and retrieving governance evidence.
  • Control Automation and Operational Scale:
    • Identify opportunities to automate governance activities within engineering and product workflows, including intake routing, policy checks, documentation capture, control verification, and evidence collection.
    • Partner with engineering teams to embed governance checks into existing delivery systems and lifecycle tooling.
    • Scale governance operations in a way that increases control coverage without creating unnecessary process overhead.

Qualifications

  • Bachelor’s degree in Computer Science, Information Security, Software Engineering, Information Systems, Engineering, or a related technical field required.
  • Master’s degree in Cybersecurity, Computer Science, Engineering, Information Assurance, Artificial Intelligence, or a related discipline preferred.
  • Ongoing professional development in AI governance, secure software delivery, privacy engineering, compliance frameworks, and model risk management is expected.

Requirements

  • 8+ years of experience in technical product management, security engineering, risk engineering, compliance engineering, platform governance, or a related field.
  • Strong technical understanding of AI and software system lifecycles, including APIs, model integration patterns, testing approaches, logging, monitoring, and deployment controls.
  • Experience working with governance, compliance, privacy, or security requirements in software products, especially in environments involving sensitive data.
  • Proven ability to translate policy and control requirements into technical workflows, engineering requirements, and operating processes.
  • Experience coordinating across Legal, Privacy, Security, Product, and Engineering teams on control design and risk management.
  • Strong written communication skills, with the ability to produce clear documentation, review artifacts, and diligence materials for internal and external audiences.

Preferred Qualifications

  • Experience governing AI or machine learning systems in production environments.
  • Familiarity with emerging AI governance frameworks, model risk management practices, and responsible AI control structures.
  • Experience with technical documentation systems, workflow tools, control repositories, and audit evidence management.
  • Background in security architecture, privacy engineering, enterprise compliance, or regulated SaaS platforms.
  • Experience evaluating third-party AI vendors and integrating vendor controls into internal governance processes.

Benefits

  • The pay range for this position is $132,000 to $165,000 per year (pay to be determined by the applicant’s education, experience, knowledge, skills, and abilities, as well as internal equity and alignment with market data).
  • This role is eligible to participate in the bonus incentive plan.
  • If this position is benefits-eligible (full-time or part-time), you will receive a comprehensive benefits package, which can be viewed here.
Before You Apply
🇺🇸 Be aware of the location restriction for this remote position: USA Only
Beware of scams! When applying for jobs, you should NEVER have to pay anything. Learn more.