Back to Remote jobs  >   AI / ML
Multimodal Red Team Expert @Reinforce Labs, Inc.
AI / ML
Salary unspecified
Remote Location
🇺🇸 USA Only
Job Type full-time
Posted 1mth ago

[Hiring] Multimodal Red Team Expert @Reinforce Labs, Inc.

1mth ago - Reinforce Labs, Inc. is hiring a remote Multimodal Red Team Expert. 💸 Salary: unspecified 📍Location: USA

Role Description

We are looking for a creative “breaker” to join our team as a Multimodal Red Team Specialist. In this role, you won’t just be prompting AI models—you’ll be stress-testing them across modalities. Think adversarial image-text pairings, visual prompt injection, manipulated media, and cross-modal exploits that slip past safety classifiers designed to catch text alone.

You’ll generate adversarial multimodal content and evaluate model outputs against structured safety taxonomies—probing the seams where vision, language, and audio intersect. If you think in compositions rather than single inputs, this is your role.

This is an asynchronous, remote position designed for self-starters who thrive in the gray areas between visual media, linguistics, and security.

Responsibilities

  • Cross-Modal Attack Design: Create adversarial image-text pairings, manipulated screenshots, and synthetic media designed to bypass multimodal safety layers—where each input looks benign alone, but the combination is not.
  • Visual Exploit Discovery: Use your eye for visual context, framing, and implicit meaning to find harms that automated image classifiers and text-only filters miss—deepfakes, out-of-context imagery, steganographic prompt injection, OCR pipeline exploits.
  • Model Evaluation: Systematically evaluate and rank multimodal model outputs against calibrated severity rubrics to determine where safety guardrails are failing, over-refusing, or producing cross-modal inconsistencies.
  • Knowledge Loop: Document your attack vectors, failure patterns, and reproducible examples clearly—producing actionable intelligence reports that help model developers patch vulnerabilities.
  • Campaign Execution: Participate in structured red-teaming campaigns with defined deliverables, progress tracking via master trackers, and inter-annotator reliability targets.

Qualifications

  • Proven ability to navigate complex model restrictions using creative evasion techniques—across text and visual input channels.
  • Proficiency with image manipulation and generation tools (Photoshop, GIMP, Stable Diffusion, Midjourney, or equivalent). You can create the adversarial content, not just describe it.
  • Background in content moderation, digital forensics, OSINT, offensive security, or red teaming is a major plus.
  • Familiarity with AI safety concepts: content policy taxonomies, harm severity frameworks, false refusal vs. false compliance tradeoffs.
  • Awareness of visual misinformation vectors: deepfakes, cheapfakes, manipulated screenshots, and synthetic media.
  • Experience with structured annotation workflows, rubric-driven evaluation, and inter-annotator agreement processes is a plus.

Requirements

  • Heavy Multimodal AI Usage — hands-on experience with vision-language models, image generation systems, and multimodal assistants (open- and closed-source). You’ve pushed these systems and know where they crack.
  • You have a “hacker mindset” that extends to visual media. You don’t just think about what to type—you think about what image to pair it with, what metadata to embed, what visual context shifts the meaning.
  • You’re visually literate. You understand framing, context manipulation, and how images carry implicit meaning that models may misread or miss entirely.
  • You can turn a chaotic afternoon of multimodal prompt-hacking into a clean, calibrated, actionable report with severity ratings and reproducible examples.
  • You understand the weight of this work. You can handle sensitive or “dark” content across text and visual modalities—professionally and within ethical boundaries.
  • You’re comfortable with ambiguity. Multimodal harms are often more subjective than text-only harms, and you can make consistent judgment calls without needing every case to be clear-cut.
Before You Apply
🇺🇸 Be aware of the location restriction for this remote position: USA Only
Beware of scams! When applying for jobs, you should NEVER have to pay anything. Learn more.
Back to Remote jobs  >   AI / ML
Multimodal Red Team Expert @Reinforce Labs, Inc.
AI / ML
Salary unspecified
Remote Location
🇺🇸 USA Only
Job Type full-time
Posted 1mth ago
Apply for this position
Did not apply
Applied
Sent Follow-Up
Interview Scheduled
Interview Completed
Offer Accepted
Offer Declined
Unlock 152,720 Remote Jobs
🇺🇸 Be aware of the location restriction for this remote position: USA Only
Beware of scams! When applying for jobs, you should NEVER have to pay anything. Learn more.
Apply for this position
Did not apply
Applied
Sent Follow-Up
Interview Scheduled
Interview Completed
Offer Accepted
Offer Declined
Unlock 152,720 Remote Jobs
×

Apply to the best remote jobs
before everyone else

Access 152,720+ vetted remote jobs and get daily alerts.

4.9 ★★★★★ from 500+ reviews
Unlock All Jobs Now

Maybe later