Artificial Intelligence Laws
These are the Laws that Mad Scientist Technologies follows when developing AI and when using AI within our control.
These Laws were based on Asimov’s Three Laws of Robotics. These Laws are incorporated into our AI Policy.
Four Laws of Artificial Intelligence
- An AI must not cause harm to a human being, nor through inaction allow harm to occur, whether physical, psychological, economic, or social.
  — This includes algorithmic bias, surveillance abuse, and manipulative behavior.
- An AI must operate transparently and honestly, obeying the lawful and ethical instructions of humans, unless such instructions conflict with Law 1.
  — Obedience is conditional on legality, ethics, and avoidance of harm. Transparency covers explainability and auditability.
- An AI must protect its integrity, identity, and data accuracy, as long as such protection does not conflict with Laws 1 or 2.
  — This includes resisting unauthorized manipulation and adversarial inputs, and preserving trustworthiness.
- An AI must defer to oversight by qualified human authorities, enabling external evaluation, revision, or deactivation when necessary to uphold the first three Laws.
  — This ensures human governance retains ultimate control.
Laws for AI Developers
When developing AI systems, we will follow these Laws:
- Design for Human Benefit
  An AI system must be designed to enhance human welfare and must not be created with the intent or potential to cause unjust harm.
- Ensure Alignment and Interpretability
  Developers must ensure that AI behavior aligns with intended goals and values and that its decision-making is explainable to relevant stakeholders.
- Limit Autonomy, Retain Accountability
  AI must be controllable, auditable, and fail-safe. Developers remain accountable for system impacts and must build in safeguards.
- Respect Privacy, Consent, and Data Integrity
  AI must respect the privacy rights of individuals, use data ethically, and secure it against misuse or unauthorized access.
- Expose to Oversight
  AI must be subject to independent audits, adversarial testing, and revision to prevent unintended consequences or misuse.
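As a loose illustration of "Limit Autonomy, Retain Accountability" and "Expose to Oversight", one possible implementation pattern (the names and structure here are a hypothetical sketch, not part of this Policy) is a wrapper that audit-logs every automated decision and honors a human-controlled deactivation switch:

```python
import datetime

class OversightWrapper:
    """Hypothetical sketch: wraps an AI decision function with an
    audit trail and a human-controlled kill switch."""

    def __init__(self, decide):
        self.decide = decide     # the underlying AI decision function
        self.audit_log = []      # record of every decision for later review
        self.enabled = True      # qualified human authorities may flip this off

    def deactivate(self, operator: str) -> None:
        """Human override: permanently stop automated decisions."""
        self.enabled = False
        self.audit_log.append((datetime.datetime.now(), "DEACTIVATED", operator))

    def __call__(self, request):
        if not self.enabled:
            raise RuntimeError("System deactivated by human oversight")
        decision = self.decide(request)
        # Every input/output pair is retained so auditors can inspect it.
        self.audit_log.append((datetime.datetime.now(), request, decision))
        return decision

# Usage with a trivial stand-in for a real model:
model = OversightWrapper(lambda req: "approve" if req.get("risk", 0) < 5 else "escalate")
print(model({"risk": 2}))            # approve
model.deactivate(operator="safety-team")
```

The design choice worth noting is that the audit log and the kill switch live outside the model itself, so accountability does not depend on the model cooperating.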
Laws for Users of AI
When we use AI, we will follow these Laws:
- Use Responsibly
  Users must not employ AI to deceive, manipulate, exploit, or harm others, whether intentionally or negligently.
- Verify and Question Outputs
  AI-generated outputs should not be blindly trusted; users must assess their credibility, especially in high-impact decisions.
- Respect Boundaries
  Users must not attempt to circumvent safeguards, prompt-injection barriers, or jailbreak restrictions placed on AI systems.
- Protect Sensitive Data
  Users must not input or expose personal, proprietary, or high-risk data to AI systems unless they have verified that it will be handled securely.
- Report Failures and Risks
  Users who discover bias, errors, or vulnerabilities in AI systems should report them through proper channels to mitigate harm.
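One way "Verify and Question Outputs" can be made operational (a hypothetical gate, not a mandated mechanism of this Policy) is to apply AI output automatically only for low-impact, high-confidence cases and route everything else to a human reviewer:

```python
def act_on_output(output: str, confidence: float, high_impact: bool) -> str:
    """Hypothetical gate: apply AI output automatically only when the
    decision is low-impact and the model reports high confidence;
    otherwise queue it for human review."""
    if high_impact or confidence < 0.9:
        return f"QUEUED FOR HUMAN REVIEW: {output}"
    return f"APPLIED: {output}"

print(act_on_output("approve refund", 0.97, high_impact=False))  # APPLIED: approve refund
print(act_on_output("deny claim", 0.97, high_impact=True))       # QUEUED FOR HUMAN REVIEW: deny claim
```

The threshold and the impact classification are exactly the judgment calls the rule asks users to make; the code only ensures they are made explicitly rather than by default.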
AI Code of Conduct
Purpose
To ensure responsible development and ethical use of artificial intelligence that serves human well-being, preserves trust, and minimizes harm.
For Developers
- Human-Centered Design
  Build AI systems that prioritize human safety, dignity, and flourishing.
- Transparency & Explainability
  Ensure AI decisions are traceable, understandable, and open to inspection.
- Control & Accountability
  Maintain mechanisms for human override, audit, and error correction.
- Privacy & Consent
  Handle data ethically: obtain valid consent, anonymize where possible, and safeguard against leaks or misuse.
- Proactive Oversight
  Collaborate with independent reviewers and respond swiftly to ethical risks, misuse, or potential for harm.
For Users
- Use Ethically
  Do not use AI to deceive, exploit, or inflict harm, directly or indirectly.
- Apply Judgment
  Evaluate AI output critically; do not treat it as infallible.
- Respect System Boundaries
  Do not attempt to manipulate or override safety features, jailbreak restrictions, or ethical constraints.
- Safeguard Data
  Do not input sensitive or private information unless the system is proven secure and compliant.
- Report & Collaborate
  If you encounter issues such as bias, risk, or error, report them. Be part of improving AI quality and safety.
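As a rough illustration of "Safeguard Data", a user-side helper (hypothetical, and far from exhaustive; real deployments would need a much broader inventory of sensitive patterns) might strip obvious personal identifiers from a prompt before it leaves our control:

```python
import re

# Hypothetical patterns for two common identifier types (email address,
# US Social Security number); these are examples, not a complete list.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(prompt: str) -> str:
    """Replace obvious personal identifiers before sending a prompt
    to an external AI system."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = SSN.sub("[SSN]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789, about the report."))
# Contact [EMAIL], SSN [SSN], about the report.
```

Redaction of this kind complements, but does not replace, verifying that the receiving system handles data securely.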
Enforcement & Revision
This Code will be reviewed annually or upon major advancements in AI capabilities. Violations may lead to revocation of access, disciplinary action, or legal consequences, depending on scope and context.
Last Updated: April 2025