SUPERAGENT USAGE POLICY

Superagent Technologies, Inc.
1111B S Governors Ave, Suite 3232
Dover, DE 19904

Version 1.0 • Effective Date: November 7, 2025 • Last Updated: November 7, 2025


Purpose and Scope

This Usage Policy governs acceptable use of Superagent's AI guardrail services (Guard, Verify, and Redact). These guidelines apply to all users accessing the Services through any method, including the API, SDKs, CLI, Model Context Protocol, or the web playground.

This Policy supplements and is incorporated into the Superagent Services Agreement. Capitalized terms not defined here have the meanings given in the Services Agreement.

Our mission is to make AI agents secure and compliant. These guidelines ensure the Services are used responsibly to protect people, respect rights, and advance legitimate security and compliance objectives.


1. Universal Standards

All uses of the Services must comply with these universal standards:

1.1 Legal Compliance

  • Comply with all applicable laws, regulations, and legal requirements
  • Do not use the Services for any illegal purpose
  • Do not process data in violation of export control laws
  • Do not use the Services in jurisdictions where such use is prohibited

1.2 Safety and Security

  • Child Safety: Do not use the Services to process, store, or facilitate content involving child sexual abuse or exploitation
  • Violence: Do not use the Services to facilitate violence, terrorism, or extremism
  • Weapons: Do not use the Services to develop, produce, or deploy weapons or provide guidance on weapon creation
  • Malicious Cyber Operations: Do not use the Services for unauthorized hacking, malware distribution, or attacks on systems or networks
  • Controlled Substances: Do not use the Services to facilitate illegal drug trade or distribution

1.3 Human Rights and Dignity

  • Harassment: Do not use the Services to facilitate harassment, bullying, threats, or abuse
  • Hate Speech: Do not use the Services to promote hate speech, discrimination, or violence against individuals or groups
  • Non-Consensual Content: Do not use the Services to create or distribute non-consensual intimate imagery
  • Doxing: Do not use the Services to gather or distribute private information for targeting individuals

1.4 Trust and Integrity

  • Fraud and Deception: Do not use the Services to facilitate fraud, scams, phishing, or financial crimes
  • Disinformation: Do not use the Services to create or distribute large-scale disinformation campaigns
  • Spam: Do not use the Services to generate or distribute spam or unsolicited communications
  • Manipulation: Do not use the Services to manipulate or deceive individuals about the AI nature of outputs
  • Election Interference: Do not use the Services for voter suppression, election fraud, or interfering with democratic processes

1.5 Privacy and Surveillance

  • Unauthorized Surveillance: Do not use the Services to track, monitor, or surveil individuals without their knowledge and consent
  • Biometric Abuse: Do not use the Services to analyze biometric data to infer sensitive characteristics (e.g., race, religion, or sexual orientation) without a proper legal basis and consent
  • Stalking: Do not use the Services to facilitate stalking or unwanted tracking of individuals

1.6 Intellectual Property

  • Infringement: Do not use the Services to infringe copyrights, trademarks, trade secrets, or other intellectual property rights
  • Model Scraping: Do not use the Services' outputs to train competing AI models or services
  • Unauthorized Use: Do not use the Services to process copyrighted content without proper authorization

2. AI Guardrail Appropriate Use

2.1 Guard (Security Detection)

Designed For:

  • Detecting prompt injections and jailbreak attempts in AI applications
  • Blocking unsafe inputs to LLM-powered systems
  • Identifying malicious tool calls in agentic workflows
  • Preventing backdoor attacks and security threats
  • Authorized security testing and red-teaming of AI applications
  • Protecting AI systems from adversarial inputs

Not For:

  • General content moderation or censorship of lawful speech
  • Political content filtering or ideological screening
  • Surveillance of individuals' communications
  • Blocking access to information based on viewpoint
  • Automated decisions about individuals without human review

2.2 Verify (Output Validation)

Designed For:

  • Validating AI-generated outputs against authoritative sources
  • Checking factual accuracy of LLM responses
  • Ensuring policy compliance in AI-generated content
  • Reducing hallucinations in production AI systems
  • Quality assurance for customer-facing AI applications
  • Compliance verification for regulated industries

Not For:

  • Replacing professional judgment in high-stakes decisions
  • Automated legal determinations without attorney review
  • Medical diagnoses without physician oversight
  • Financial advice without qualified advisor review
  • Determining eligibility for rights or benefits without human involvement
  • Making final decisions in regulated domains without human review

2.3 Redact (Sensitive Data Protection)

Designed For:

  • Identifying and removing PII (personally identifiable information)
  • Protecting PHI (protected health information) in HIPAA contexts
  • Detecting secrets, API keys, and credentials in logs
  • Compliance with data protection regulations (GDPR, CCPA)
  • Safe logging and monitoring of AI applications
  • Protecting sensitive information in AI workflows

Not For:

  • Circumventing data subject rights under privacy laws
  • Hiding evidence of illegal activity or regulatory violations
  • Unauthorized collection or processing of personal data
  • Concealing information required by law enforcement or regulators
  • Facilitating surveillance without proper legal authority

3. High-Risk Use Cases

If you use the Services for the following high-risk applications, you must implement additional safeguards:

3.1 Required Safeguards

For all high-risk use cases, you must:

  1. Human-in-the-Loop: Maintain qualified human review of all outputs before finalization
  2. Professional Oversight: Ensure licensed or qualified professionals validate outputs in their domain
  3. AI Disclosure: Disclose to end users when AI assistance is used (for consumer-facing applications)
  4. Audit Trails: Maintain logs of AI-assisted decisions and human reviews
  5. Accountability: Establish clear accountability for final decisions

3.2 High-Risk Categories

A. Legal Services

  • Contract analysis, drafting, or review
  • Legal research and case law analysis
  • Compliance advice or guidance
  • Dispute resolution recommendations

Requirements: Licensed attorney must review all outputs before client delivery. Disclose AI assistance to clients.

B. Healthcare and Medical

  • Diagnostic assistance or recommendations
  • Treatment planning or suggestions
  • Medical record analysis
  • Health risk assessment

Requirements: Licensed healthcare professional must review all outputs. Comply with HIPAA and medical device regulations where applicable.

C. Financial Services

  • Investment recommendations
  • Credit decisions or scoring
  • Financial planning or advice
  • Risk assessment

Requirements: Qualified financial advisor must review outputs. Comply with SEC, FINRA, and applicable financial regulations.

D. Employment Decisions

  • Hiring or firing recommendations
  • Performance evaluation assistance
  • Compensation decisions
  • Promotion or demotion recommendations

Requirements: HR professional must review outputs. Ensure compliance with employment discrimination laws.

E. Housing and Lending

  • Credit approval or denial
  • Rental application decisions
  • Loan underwriting
  • Risk assessment for housing

Requirements: Qualified underwriter must review decisions. Comply with Fair Housing Act and lending regulations.

F. Education

  • Student grading or assessment
  • Admission decisions
  • Disciplinary recommendations
  • Academic placement

Requirements: Licensed educator must review outputs. Ensure compliance with FERPA and educational regulations.

G. Criminal Justice and Law Enforcement

  • Risk assessment for sentencing or parole
  • Predictive policing applications
  • Surveillance operations
  • Investigative analysis

Requirements: Appropriate legal authority and judicial oversight. Comply with constitutional protections and civil rights laws.


4. Professional Services Limitations

You may not use the Services to provide professional services in regulated domains without appropriate licensed professional involvement:

  • Legal Advice: Requires attorney review and licensing
  • Medical Diagnosis or Treatment: Requires physician review and licensing
  • Financial Advice: Requires qualified financial advisor oversight
  • Mental Health Counseling: Requires licensed therapist involvement
  • Tax Preparation: Requires qualified tax professional oversight
  • Engineering Certification: Requires licensed engineer approval

AI-assisted outputs in these domains must be clearly identified as requiring professional review and cannot be presented as final professional opinions.


5. Transparency and Disclosure

5.1 Consumer-Facing Applications

If your application serves consumers (B2C), you must:

  • Disclose when AI is being used to generate or validate content
  • Identify that AI outputs may contain errors or limitations
  • Provide human contact options for disputes or concerns
  • Maintain appropriate human oversight of AI-generated decisions

5.2 Business Applications

Internal business tools (B2B) are not required to disclose AI use to employees, but operators must maintain appropriate governance and accountability structures.


6. Prohibited Activities

You must not:

Technical Misuse:

  • Reverse engineer, decompile, or disassemble the Services
  • Attempt to extract model weights or training data
  • Bypass rate limits, security measures, or access controls
  • Use automated systems to scrape or mine data from the Services
  • Benchmark the Services for competitive purposes
  • Share, sell, or transfer API keys to unauthorized parties
  • Access the Services to build competing products

Harmful Applications:

  • Develop or deploy autonomous weapons systems
  • Create deepfakes for deception or manipulation
  • Generate spam, phishing content, or malware
  • Build unauthorized surveillance tools
  • Target or exploit minors, including collecting minors' data without parental consent
  • Facilitate election interference or voter suppression
  • Enable discrimination in housing, employment, or credit

Regulatory Violations:

  • Violate export control laws or sanctions
  • Process data in violation of privacy laws (GDPR, CCPA, etc.)
  • Circumvent industry-specific regulations (HIPAA, GLBA, etc.)
  • Facilitate money laundering or terrorist financing

7. Special Permissions

7.1 Security Research

Security testing, vulnerability research, and red-teaming of your own AI applications using the Services are permitted with prior written approval from Superagent. Contact security@superagent.sh for authorization.

7.2 Government and Law Enforcement

Government agencies and law enforcement organizations may use the Services for lawful purposes within their jurisdiction and authority. Certain high-risk use cases (surveillance, predictive policing) require additional safeguards and oversight.


8. Regional and Legal Restrictions

  • The Services comply with U.S. export control laws and may not be available in all jurisdictions
  • Users are responsible for ensuring their use complies with local laws and regulations
  • Certain features may be restricted in specific jurisdictions to comply with local law
  • Users subject to GDPR, CCPA, or other privacy laws must ensure their use of the Services complies with those obligations

9. Monitoring and Enforcement

9.1 Superagent's Rights

Superagent reserves the right to:

  • Monitor usage patterns for policy violations
  • Request documentation of safeguards for high-risk use cases
  • Investigate suspected misuse or violations
  • Audit customer implementations upon reasonable notice
  • Review outputs to ensure policy compliance

9.2 Violation Response

If Superagent determines you have violated this Policy:

First Violation (Minor):

  • Warning and opportunity to cure within specified timeframe
  • Guidance on compliance measures
  • Possible temporary feature restrictions

Repeat or Serious Violations:

  • Immediate suspension of Services
  • Investigation and cooperation with law enforcement where appropriate
  • Permanent termination of account and Services
  • Legal liability for damages caused by misuse

9.3 Appeals

If you believe Superagent has incorrectly determined a policy violation:

  1. Contact support@superagent.sh with explanation and evidence
  2. Superagent will review within 5 business days
  3. Decision will be provided in writing with reasoning
  4. Final appeal may be submitted to legal@superagent.sh

10. Updates to This Policy

Superagent may update this Usage Policy from time to time to:

  • Address new use cases or capabilities
  • Respond to regulatory developments
  • Clarify existing provisions
  • Strengthen protections for users and third parties

Notice of Changes:

  • Material changes will be announced 30 days in advance
  • Notice will be provided via email and dashboard announcement
  • Continued use after effective date constitutes acceptance
  • Previous versions available at superagent.sh/usage-policy-archive

Material Changes include new prohibited activities, additional high-risk requirements, or changes to enforcement procedures.

Non-Material Changes (clarifications, examples, organizational improvements) may be effective immediately.


11. Interpretation

If there is any conflict between this Usage Policy and the Services Agreement, the Services Agreement controls except where this Policy provides more specific guidance on acceptable use.

Examples provided in this Policy are illustrative and not exhaustive. The principles and standards apply broadly to similar situations.


12. Contact and Reporting

  • Questions About This Policy: compliance@superagent.sh
  • Report Violations: abuse@superagent.sh
  • Security Issues: security@superagent.sh
  • General Support: support@superagent.sh
  • Legal Inquiries: legal@superagent.sh


Superagent Technologies, Inc.
1111B S Governors Ave, Suite 3232
Dover, DE 19904
United States


END OF USAGE POLICY