Companies building powerful AI systems owe the public clarity about what they are building and why. This is not a philosophical position. It is a practical one. AI systems are being deployed in contexts that affect people's lives — their security, their privacy, their economic prospects — and the people affected have a right to understand how the technology works, what its limitations are, and what the company building it considers acceptable risk.
Neuraphic is a young company. We have not yet earned the kind of trust that comes from a long track record. What we can do is describe clearly what we believe, what we publish, how we make decisions, and where we draw the line between openness and necessary restraint. This page is our attempt to do that.
Why transparency matters
Transparency is an accountability mechanism, not a marketing strategy. When a company publishes its safety framework, its scaling policies, and its research findings, it creates a public record against which its actions can be measured. That record makes it harder to quietly abandon commitments, shift positions without explanation, or claim progress that has not been made.
The more capable the technology, the higher the standard of openness required. A company building a simple productivity tool owes its users clear terms of service. A company building AI systems that can autonomously analyze security vulnerabilities, generate code, or make decisions that affect critical infrastructure owes its users — and the broader public — considerably more than that.
We believe the AI industry as a whole has not yet met this standard. There is too much vague language about "responsible AI," too many safety commitments that lack specificity, and too little willingness to describe failures honestly. We do not claim to be immune to these tendencies. We do claim to be working against them deliberately.
What we publish
We maintain several categories of public documentation, each serving a different purpose.
Our safety framework describes how we think about the risks of increasingly capable AI systems and the principles that guide how we build, evaluate, and deploy our technology. It is not a compliance document. It is a statement of what we believe and what constraints we impose on ourselves as a result.
Our Responsible Scaling Policy defines the specific conditions under which we will and will not deploy systems at different capability levels. It includes the commitment to pause deployment when our safety measures are insufficient — a commitment we consider binding, not aspirational.
Our research publications share what we learn about adversarial AI, defense mechanisms, and autonomous security systems. We publish this work because the field advances faster when knowledge is shared, and because security research that remains private protects only the company that produced it — not the organizations that need it.
Our newsroom documents company announcements, progress updates, and developments as they happen. It is a factual record, not a publicity channel.
Our legal policies — privacy, terms of service, acceptable use, data processing — are all publicly accessible and written in plain language. We believe that legal documents should be understandable by the people they apply to, not only by the lawyers who drafted them.
How we make deployment decisions
Every system we build undergoes capability evaluation before deployment. The purpose of this evaluation is to understand what the system can do — including things it was not designed to do — and to determine whether our safety measures are adequate for those capabilities.
If a system's capabilities exceed our ability to deploy it safely, we do not deploy it. This is not a theoretical commitment. We have stated publicly that we will delay or halt deployment when safety thresholds are not met, and we have structured our internal processes to make that decision possible without requiring exceptional courage or organizational upheaval. The decision to pause is procedural, not heroic. It is built into how we operate.
For systems above defined capability levels, we maintain internal review processes that involve people outside the team that built the system. The purpose of this structure is to ensure that deployment decisions are not made solely by the people most invested in seeing the system deployed.
What we do not share
Transparency does not mean publishing everything. Some information, if released, would create more risk than it would mitigate. We are transparent about what we withhold and why.
We do not publish proprietary model architectures or training methodologies. These represent significant intellectual property, and their disclosure would also provide a roadmap for replicating capabilities that we believe require the safety infrastructure we have built around them.
We do not publish specific vulnerability details before remediation. When we discover a vulnerability — in our own systems or through our research — we follow responsible disclosure practices. The details become public once the vulnerability has been addressed, not before.
We do not share customer-specific deployment details. The organizations that use our systems have a right to confidentiality about how they deploy them, and we honor that right without exception.
In each of these cases, the decision to withhold is itself disclosed. We do not pretend that the information does not exist. We explain why it is not public, so that outside observers can evaluate whether our reasoning is sound.
Accountability
We invite scrutiny of our claims. Every commitment we make on this page, on our safety page, and in our published policies is something we expect to be held to. If we fall short, we expect to be told — by our users, by researchers, by the public — and we commit to responding with substance, not deflection.
Our governance structure reflects our commitment to accountability beyond shareholder value. We believe that companies building powerful AI systems need oversight mechanisms that go beyond the interests of their investors, and we have structured Neuraphic accordingly.
We will report publicly on our safety and compliance progress as our company matures. These reports will describe what we have accomplished, what we have not, and what has changed since the previous report. They will be honest, because a progress report that omits the problems is not a progress report — it is advertising.
Further reading
Core views on AI safety
About Neuraphic
Responsible Scaling Policy
Research publications