Role Description
We are seeking a Director, Networking to lead the strategy, architecture, and customer adoption of next-generation networking solutions for AI inference data centers. This role uniquely blends internal technical leadership with external customer engagement, with approximately 50% of time dedicated to defining and evolving network strategy, and 50% spent working directly with customers as a solution architect.
You will operate at the intersection of cutting-edge AI infrastructure and real-world deployment, translating emerging technologies into scalable, high-performance, and customer-ready solutions.
Key Responsibilities
Network Strategy & Architecture
- Define and drive the end-to-end networking strategy for AI inference data centers, including fabric design, topology, and technology selection.
- Architect high-performance, low-latency, and scalable network fabrics optimized for AI inference workloads (Ethernet, RDMA/RoCE, and emerging interconnects).
- Evaluate and incorporate next-generation technologies (e.g., 400G/800G+, SmartNICs/DPUs, CXL, SONiC, Ultra Ethernet) into future-ready designs.
- Develop reference architectures, validated designs, and best practices for large-scale AI clusters and inference environments.
- Partner with engineering, product management, and infrastructure teams to align the roadmap with business and customer needs.
- Influence industry direction through standards bodies, ecosystem partnerships, and technology evaluations.
Customer-Facing Solution Architecture
- Serve as a trusted advisor and technical leader to customers designing and deploying AI inference infrastructure.
- Lead solution design, requirements gathering, and architecture reviews for enterprise and hyperscale customers.
- Present technical strategies and architectures to executive and technical stakeholders.
- Translate complex networking and AI infrastructure concepts into clear business value and deployment guidance.
- Support go-to-market efforts through customer engagements, workshops, and field enablement.
- Drive adoption by helping customers bridge the gap between early-stage innovation and production deployment.
Qualifications
- 12+ years of experience in data center networking, with deep expertise in large-scale fabric design.
- Proven experience architecting high-performance networks for AI/ML, HPC, or cloud infrastructure environments.
- Strong expertise in Ethernet-based fabrics, TCP/IP, RDMA/RoCE, and modern Clos architectures.
- Experience with AI infrastructure components such as GPUs, DPUs/SmartNICs, and high-speed interconnects.
- Demonstrated ability to define technical strategy and influence product or infrastructure roadmaps.
- Significant customer-facing experience in solution architecture, technical sales, or consulting roles.
- Strong communication and presentation skills, including executive-level engagement.
Preferred Qualifications
- Experience with AI inference and training clusters, including GPU fabrics and storage networking.
- Familiarity with SONiC, network automation (Terraform, Python), and zero-touch provisioning.
- Knowledge of emerging technologies such as CXL, NVLink, and in-network computing.
- Background in technical marketing, product management, or technology evangelism.
- Experience speaking at industry events or contributing to thought leadership.
Salary Range
$220,000–$275,000 USD + benefits + equity
Benefits
- At Gruve, we foster a culture of innovation, collaboration, and continuous learning.
- We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work.