
We're hiring: research, engineering, and security roles

Neuraphic is expanding its team across three disciplines: AI research, infrastructure engineering, and security. This is not a general call for talent. It is a description of specific problems we need people to solve, and the kind of thinking those problems require.

The research agenda

Our research program sits at the intersection of machine learning and adversarial security. The central question is whether AI systems can learn to reason about security the way experienced researchers do — not by matching known patterns, but by understanding the structural properties of systems that make them vulnerable.

Current work spans three areas:

- Adversarial AI: understanding how models behave under deliberate manipulation, including prompt injection, data poisoning, and model extraction attacks.
- Autonomous security: building AI agents that can investigate, triage, and respond to security events with minimal human direction.
- Inference-time defense: developing techniques that allow models to detect and resist adversarial inputs during inference, without retraining.
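To make the inference-time idea concrete, here is a deliberately minimal sketch of the shape of such a defense: screening an incoming prompt against known injection markers before it reaches the model, with no retraining involved. The patterns, names, and threshold are hypothetical illustrations, not our actual detector, which is learned rather than rule-based.

```python
import re

# Hypothetical illustration only: a toy inference-time input filter.
# It scores a prompt against a handful of known injection markers
# before the prompt ever reaches the model.

INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def injection_score(prompt: str) -> float:
    """Fraction of known injection markers matched (0.0 = clean)."""
    text = prompt.lower()
    hits = sum(1 for p in INJECTION_PATTERNS if re.search(p, text))
    return hits / len(INJECTION_PATTERNS)

def guard(prompt: str, threshold: float = 0.3) -> bool:
    """Return True if the prompt is safe to pass to the model."""
    return injection_score(prompt) < threshold
```

A pattern list like this is trivially evaded, which is exactly why the research question is whether models can learn the structural signature of manipulation rather than its surface form.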

We are looking for researchers with strong foundations in machine learning and a genuine interest in security. Publication record matters less than the ability to formulate precise questions and design experiments that answer them. Familiarity with adversarial machine learning, reinforcement learning, or formal verification is valuable.

The engineering challenges

Neuraphic's platform processes security telemetry in real time, orchestrates autonomous AI agents across distributed infrastructure, and does so under constraints that most ML engineering teams never encounter: zero-trust networking, project-level isolation, no public IP addresses, and compliance requirements that govern every byte of data at rest and in transit.

The engineering problems are correspondingly difficult. We need people who can build low-latency inference pipelines that operate under strict SLAs. Who can design distributed orchestration systems for AI agents that must coordinate across isolated environments. Who understand that in security infrastructure, a ninety-nine percent uptime guarantee is not a selling point — it is roughly eighty-seven hours of annual exposure.
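The arithmetic behind that last figure is worth spelling out: at 99% availability, 1% of the 8,760 hours in a year is permitted downtime, which is 87.6 hours. A minimal sketch:

```python
# Worked example of the uptime arithmetic above: how much annual
# downtime a given availability guarantee actually permits.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_downtime_hours(availability: float) -> float:
    """Hours per year a service may be down at the given availability."""
    return (1.0 - availability) * HOURS_PER_YEAR
```

At 99.99% ("four nines") the budget drops to well under an hour per year, which is the regime security infrastructure has to target.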

We value engineers who think in systems, not features. Who read post-mortems for pleasure. Who understand that the most important code they write is the code that handles failure.

Security roles

Building AI systems for security requires people who understand both sides of the equation: how attacks work and how defenses fail. We are hiring security engineers and researchers who can red-team our own systems, design threat models for AI-powered infrastructure, and ensure that the tools we build for our customers meet the standards we would demand ourselves.

This work requires a specific kind of skepticism — the ability to look at a system that appears to work and ask what it would take to make it fail. Candidates should have experience with cloud security, penetration testing, or security architecture. Experience with AI systems is a strong advantage but not a prerequisite; we can teach the AI. What we cannot teach is the adversarial mindset.

How to apply

Open positions are listed on our careers page with detailed descriptions of each role. We review every application. The process is direct: a technical conversation, a focused work sample, and a team interview. No puzzle questions, no whiteboard theater. We want to understand how you think about real problems, not how you perform under artificial pressure.