Acceptable Use Policy

This policy defines what you may and may not do with Neuraphic products and services, including our AI models, APIs, platforms, and applications. Last updated April 1, 2026.


1. Overview

This Acceptable Use Policy ("AUP") applies to all users of Neuraphic, Inc. ("Neuraphic," "we," "us," or "our") products and services, including but not limited to Prion, Claeth, Workers, our APIs, developer tools, console, and any other services we make available (collectively, the "Services"). By accessing or using our Services, you agree to comply with this AUP.

We designed this policy to ensure that our technology is used responsibly and ethically. It is intended to protect users, the public, and the integrity of our systems. We reserve the right to update this policy at any time; material changes will be communicated through our website or directly to affected users.

This AUP supplements our Terms of Service and Usage Policy. In the event of a conflict between this AUP and the Terms of Service, the Terms of Service shall control unless this AUP expressly provides otherwise. Violations of this policy may result in suspension or termination of access to the Services.

2. Prohibited Uses

You may not use Neuraphic Services for any purpose that is unlawful, harmful, or otherwise prohibited by this policy. The following categories of use are strictly prohibited.

2.1 Illegal Activity

You may not use the Services to facilitate, promote, or engage in any activity that violates applicable local, national, or international law. This includes, without limitation:

- Fraud, financial crimes, or money laundering schemes of any kind.
- The generation, distribution, solicitation, or possession of child sexual abuse material (CSAM) or any content that sexually exploits minors.
- Trafficking in persons, organs, wildlife, or controlled substances.
- Tax evasion, sanctions violations, or the circumvention of export controls.
- Any activity that would constitute a criminal offense under the laws of the United States or the jurisdiction in which you reside or operate.

2.2 Violence and Threats

You may not use the Services to promote, incite, plan, or facilitate violence against any individual, group, or entity. Specifically prohibited uses include:

- The development, design, manufacture, or deployment of weapons, including chemical, biological, radiological, nuclear, or conventional weapons.
- Content or activities that support or promote terrorism, violent extremism, or recruitment for violent organizations.
- The generation of content that encourages, glorifies, or provides instructions for self-harm or suicide.
- Threats of physical harm, intimidation, or coercion directed at any person or group.

2.3 Harassment and Hate Speech

You may not use the Services to harass, bully, intimidate, or demean individuals or groups. This includes the generation of content that targets individuals or groups based on race, ethnicity, nationality, religion, gender, gender identity, sexual orientation, disability, age, or any other protected characteristic. Persistent, unwanted contact or communication directed at specific individuals using our Services is also prohibited.

2.4 Deceptive Content

You may not use the Services to create, distribute, or amplify deceptive content, including but not limited to:

- Deepfakes or synthetic media designed to deceive viewers into believing a real person said or did something they did not.
- Impersonation of real individuals, organizations, or government entities without explicit authorization.
- The creation or dissemination of disinformation, including fabricated news articles, fraudulent scientific claims, or manipulated evidence.
- Content designed to deceive consumers about the nature, quality, or origin of goods or services.

2.5 Privacy Violations

You may not use the Services in ways that infringe upon the privacy rights of others. Prohibited activities include:

- Mass surveillance of individuals without lawful authority and appropriate legal process.
- Doxxing, which includes the collection, aggregation, or publication of private personal information with the intent to harass, intimidate, or endanger.
- Unauthorized biometric analysis, including facial recognition, gait analysis, voice identification, or emotion detection applied to individuals without their informed consent or lawful basis.
- The creation of profiles or dossiers on individuals by aggregating data from multiple sources without consent or lawful authority.
- Tracking, monitoring, or recording individuals without their knowledge and consent where such consent is required by law.

2.6 Election Interference

You may not use the Services to interfere with or undermine democratic processes in any jurisdiction. This includes generating misleading content about voting procedures, candidates, or election outcomes; creating synthetic media intended to influence voters through deception; impersonating election officials or institutions; suppressing voter participation through disinformation or intimidation; and any other activity designed to manipulate the outcome of an election, referendum, or plebiscite.

2.7 Malware and Cyber Attacks

You may not use the Services to develop, deploy, distribute, or facilitate malicious software or cyber attacks. This includes the creation of viruses, worms, trojans, ransomware, spyware, adware, or any other malicious code. You may not use the Services to identify vulnerabilities in systems or networks for the purpose of unauthorized exploitation, to conduct denial-of-service attacks, to gain unauthorized access to systems or data, or to intercept communications without authorization. Security research conducted in good faith under our Responsible Disclosure program is not subject to this restriction.

2.8 Circumventing Safety Measures

You may not attempt to circumvent, disable, or undermine the safety and security measures built into our Services. This includes:

- Jailbreaking or prompt injection attacks designed to cause our models to ignore their instructions, safety guidelines, or content policies.
- Systematic probing of our models to discover and exploit weaknesses in safety filters.
- The development or distribution of tools, prompts, or techniques whose primary purpose is to bypass our safety measures.
- Attempts to extract training data, model weights, system prompts, or other proprietary information from our models through adversarial techniques.

Good-faith security and safety research conducted in compliance with our Responsible Disclosure policy, including testing of model safety systems, is not subject to this restriction.

2.9 Unauthorized Professional Advice

You may not use the Services to provide professional advice in regulated fields without appropriate human oversight. Specifically, you may not present AI-generated outputs as licensed legal advice, medical diagnoses or treatment plans, or personalized financial or investment advice unless a qualified, licensed professional reviews and takes responsibility for such outputs. Any deployment of the Services in these contexts must clearly disclose that AI is involved and must maintain appropriate human supervision in accordance with applicable professional standards and regulations.

2.10 Spam and Automated Abuse

You may not use the Services to generate or distribute unsolicited bulk communications, including spam emails, messages, comments, or social media posts. You may not use the Services to create fake accounts, fake reviews, fake engagement metrics, or to artificially manipulate online platforms, search rankings, or recommendation systems. Automated content generation at scale that is designed to flood platforms or drown out legitimate discourse is prohibited.

2.11 Non-Consensual Intimate Imagery

You may not use the Services to generate, distribute, or facilitate the creation of non-consensual intimate imagery, including AI-generated synthetic intimate imagery of real persons without their explicit consent.

2.12 EU AI Act Prohibited Practices

You may not use the Services for any of the following practices prohibited under the EU Artificial Intelligence Act or analogous laws in other jurisdictions:

- Social scoring systems that evaluate or classify individuals based on their social behavior or personal characteristics, leading to detrimental or unfavorable treatment in unrelated contexts.
- Real-time remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, except where strictly necessary and authorized under applicable law.
- Emotion recognition systems deployed in workplaces or educational institutions for the purpose of monitoring or evaluating employees or students.
- AI systems that exploit the vulnerabilities of specific groups of persons due to their age, disability, or social or economic situation in a manner that is likely to cause significant harm.
- Subliminal manipulation techniques that deploy AI to materially distort a person's behavior in a manner that causes or is likely to cause physical or psychological harm.

3. Intellectual Property and Model Protections

You may not use outputs from Neuraphic models, APIs, or Services to train, fine-tune, distill, or otherwise improve competing artificial intelligence or machine learning models, whether directly or through intermediaries. This restriction applies regardless of the volume of outputs used or the method of incorporation.

You may not reverse-engineer, decompile, disassemble, or otherwise attempt to derive the source code, architecture, weights, training data, algorithms, or underlying technology of any Neuraphic model or system. You may not use automated methods to systematically extract information about the structure or behavior of our models beyond what is necessary for normal use of the Services.

4. High-Risk Use Restrictions

Certain applications of AI carry elevated risks of harm to individuals. Where Neuraphic Services are used in any of the following high-risk contexts, additional restrictions apply:

Employment and hiring decisions. You may not use the Services as the sole basis for making employment decisions, including hiring, termination, promotion, or performance evaluation. Meaningful human review must be applied before any consequential employment decision is made.

Credit and lending decisions. You may not use the Services as the sole basis for approving, denying, or setting the terms of credit, loans, insurance, or other financial products. All such decisions must include human review and must comply with applicable fair lending and anti-discrimination laws.

Criminal justice and law enforcement. You may not use the Services as the sole basis for decisions relating to criminal investigations, sentencing, parole, probation, or any other determination that affects an individual's liberty. Predictive policing applications must comply with all applicable laws and must incorporate meaningful human oversight.

Government benefits and services. You may not use the Services as the sole basis for determining eligibility for government benefits, public services, or immigration decisions.

In all high-risk contexts, you must maintain audit trails, provide affected individuals with meaningful opportunities to contest automated decisions, and ensure compliance with all applicable laws and regulations, including those governing algorithmic accountability and non-discrimination.

5. Reporting Violations

We encourage anyone who becomes aware of a violation of this policy to report it promptly. Reports may be submitted to [email protected] and should include as much detail as possible, including the nature of the violation, any relevant account information, URLs or screenshots, and the date and time of the observed activity. We will review all reports and take appropriate action. Reports may be submitted anonymously, and we will not retaliate against anyone who reports a violation in good faith.

6. Enforcement

Neuraphic reserves the right to investigate any suspected violation of this policy and to take appropriate action at its sole discretion. Enforcement actions may include, depending on the severity and nature of the violation:

Warning. For first-time or minor violations, we may issue a written warning specifying the nature of the violation and the corrective action required. Users who receive a warning are expected to promptly cease the prohibited activity.

Suspension. For repeated violations, serious violations, or failure to comply with a prior warning, we may temporarily suspend access to some or all of the Services. During suspension, we will communicate the reasons for the suspension and the steps required to restore access.

Termination. For severe violations, violations involving illegal activity, or repeated failure to comply with prior enforcement actions, we may permanently terminate access to the Services. Termination may be accompanied by deletion of associated data in accordance with our data retention policies. Users whose access has been terminated may be prohibited from creating new accounts or otherwise accessing the Services.

We may also report violations to law enforcement or other appropriate authorities where we believe in good faith that a violation involves illegal activity or poses an imminent threat to the safety of any person.

7. Contact

If you have questions about this Acceptable Use Policy, please contact us at [email protected].

Neuraphic, Inc.
A Delaware C Corporation
United States of America