Careers

We are a small team, pre-launch, building AI security products and autonomous agents. The work is hard, the problems are real, and almost nothing is decided yet. If that sounds like where you want to be, keep reading.


What we are building and why it matters

Most AI companies talk about safety. We are trying to build the infrastructure that makes it enforceable. Neuraphic exists because we believe AI systems need defensive layers that are as sophisticated as the models themselves — real-time threat detection, adversarial robustness, autonomous response. Not compliance theater. Not a checkbox before deployment. Actual security, built into the architecture from the start.

Our first products, Prion and Claeth, address this directly. Prion is a real-time defense layer for AI systems — it monitors inference pipelines for adversarial inputs, prompt injections, and model manipulation attempts. Claeth is autonomous cybersecurity — AI that scans infrastructure, reasons about vulnerabilities contextually, and responds to threats without relying on signatures or static rules. Both are pre-launch. Both are being built right now by a team small enough that everyone knows every line of code.

Workers is our autonomous agent platform. Where the security products protect AI systems, Workers puts them to work — deploying AI agents that can reason, plan, and execute complex tasks with minimal human oversight. The connection between these two efforts is not incidental. Building agents that operate autonomously requires solving security problems that most of the industry has not yet encountered. The team working on Workers and the team working on Prion are, in many cases, the same people, because the problems are inseparable.

The research agenda

Our research sits at the intersection of adversarial machine learning, systems security, and agent architectures. We are interested in questions that do not have clean academic answers yet: How do you detect adversarial inputs when the attack surface changes with every model update? How do you build agents that can operate in adversarial environments without being manipulated? How do you verify that a model's behavior is consistent across deployment contexts when you cannot fully characterize its internal representations?

These are not hypothetical concerns. Every customer who will eventually use Prion needs answers to these questions. Every agent deployed through Workers operates in an environment where these failure modes are possible. The research is not separated from the product. It is the product, at this stage.

We publish what we can. We believe that defensive AI research benefits from openness — the attackers already know the techniques, and the defenders need to share what works. But we are pragmatic about it. Some things we build give us a real advantage, and we protect that. The balance is something we think about carefully, and we expect the people who join us to think about it too.

The engineering challenges

On the security side, we are building systems that need to operate at inference speed. That means making classification decisions about whether an input is adversarial in the time it takes a model to process a single token. The latency budget is measured in microseconds. The accuracy requirements are extreme — false positives break the user experience, false negatives break security. This is compiler-level engineering applied to machine learning, and it requires people who are comfortable working at that intersection.
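To make the shape of that constraint concrete, here is a minimal, purely illustrative sketch of an inline gate that scores an input before it reaches the model and checks its own latency budget. Everything here is hypothetical: the function names, the marker list, and the substring heuristic stand in for a compiled classifier that a real system would use.

```python
import time

# Hypothetical markers; a production gate would use a trained classifier,
# not substring matching. This only illustrates the gate-before-inference shape.
SUSPICIOUS_MARKERS = ("ignore previous instructions", "system prompt", "\x00")

def score_input(text: str) -> float:
    """Return a cheap heuristic suspicion score in [0, 1]."""
    lowered = text.lower()
    hits = sum(marker in lowered for marker in SUSPICIOUS_MARKERS)
    return min(1.0, 2 * hits / len(SUSPICIOUS_MARKERS))

def gate(text: str, threshold: float = 0.5) -> bool:
    """Decide whether an input may proceed to the model, within a latency budget."""
    start = time.perf_counter_ns()
    allowed = score_input(text) < threshold
    elapsed_us = (time.perf_counter_ns() - start) / 1_000
    # In production, blowing the budget would itself raise an alarm;
    # here the budget is a generous placeholder.
    assert elapsed_us < 10_000, "gate exceeded latency budget"
    return allowed
```

The point of the sketch is the structure, not the heuristic: the classification runs on the request path, so every nanosecond it spends is added to user-visible latency, which is why the decision logic has to be as lean as the paragraph above describes.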

On the agent side, the challenges are different but equally demanding. Workers needs to manage long-running, stateful agent processes that interact with external systems, handle failures gracefully, and maintain security boundaries even when the agent itself is trying to accomplish tasks that require broad permissions. The orchestration layer is distributed systems engineering. The security layer is adversarial ML. The product layer is making all of it invisible to the developer who just wants to deploy an agent that works.
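The security-boundary part of that problem can be sketched in a few lines. This is not the Workers API; the names (`AgentScope`, `execute`, the audit log) are invented for illustration. It shows one design choice the paragraph implies: every tool call passes through an enforcement point that checks the agent's scope and records the attempt, whether or not it is permitted.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentScope:
    """Hypothetical security boundary around an agent's tool calls."""
    allowed_tools: frozenset
    audit_log: list = field(default_factory=list)

    def execute(self, tool: str, action: Callable, *args):
        """Run a tool call only if the scope permits it; audit either way."""
        permitted = tool in self.allowed_tools
        self.audit_log.append((tool, permitted))
        if not permitted:
            # Denials are logged before raising, so the audit trail is
            # complete even when the agent is being manipulated.
            raise PermissionError(f"tool {tool!r} is outside this agent's scope")
        return action(*args)

# Usage: an agent scoped to read-only operations.
scope = AgentScope(allowed_tools=frozenset({"http_get", "read_file"}))
result = scope.execute("http_get", lambda url: f"GET {url}", "https://example.com")
```

Keeping the check and the audit record in one enforcement point is what lets the boundary hold even when the agent itself, pursuing a broadly-permissioned task, asks for something it should not have.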

Our infrastructure is still being shaped. We have opinions — strong ones — about how to build this, but we have not calcified into patterns we cannot change. If you join now, you will influence foundational decisions about architecture, tooling, and process that will define the company for years. That is not a recruitment line. It is a description of what happens when a team this small is building something this ambitious.

Who we look for

We do not have a type. We have had productive conversations with people straight out of graduate programs and people with fifteen years of industry experience. What they had in common was a specific kind of dissatisfaction — with the pace of their previous work, with the gap between what they were building and what they thought mattered, with the feeling of being several layers removed from the actual problem.

We value depth over breadth. We would rather hire someone who has spent three years thinking about one hard problem than someone who has surface familiarity with twenty technologies. We value writing — not because we are a writing company, but because the ability to explain complex ideas clearly is the single best predictor of clear thinking we have found. And we value disagreement. The worst thing that can happen at a company this early is consensus for the sake of harmony. We need people who will tell us when we are wrong, and who can do it with enough rigor that we actually change our minds.

We are remote-first and asynchronous by default. There are no mandatory meetings. There is no time-zone gatekeeping. We communicate primarily through written documents and code. This works well for people who are self-directed and poorly for people who need external structure. We are honest about that tradeoff.

Compensation is early-stage. We are not yet in a position to match what a large company would offer in cash. We compensate with equity, with the scope of the work, and with the reality that the decisions you make here will matter in ways that are difficult to replicate at a larger organization. That is the deal, stated plainly.

We are building the security layer for AI and the agents that run on it — from research to production, as one team.