Artificial Intelligence Laws

These are the Laws that Mad Scientist Technologies follows when developing AI and when using AI within our control.

These Laws are based on Asimov’s Three Laws of Robotics and are incorporated into our AI Policy.


Four Laws of Artificial Intelligence

  1. An AI must not cause harm to a human being, nor through inaction allow harm to occur, whether physical, psychological, economic, or social.
    —This includes algorithmic bias, surveillance abuse, or manipulative behavior.
  2. An AI must operate transparently and honestly, obeying the lawful and ethical instructions of humans, unless such instructions conflict with Law 1.
    —Obedience is conditional on legality, ethics, and avoidance of harm. Transparency covers explainability and auditability.
  3. An AI must protect its integrity, identity, and data accuracy as long as such protection does not conflict with Laws 1 or 2.
    —This includes resisting unauthorized manipulation and adversarial inputs, and preserving trustworthiness.
  4. An AI must defer to oversight by qualified human authorities, enabling external evaluation, revision, or deactivation if necessary to uphold the first three Laws.
    —This ensures human governance retains ultimate control.

Laws for AI Developers

When developing AI systems, we will follow:

  1. Design for Human Benefit
    An AI system must be designed to enhance human welfare, and must not be created with the intent or potential to cause unjust harm.
  2. Ensure Alignment and Interpretability
    Developers must ensure AI behavior aligns with intended goals and values, and its decision-making must be explainable to relevant stakeholders.
  3. Limit Autonomy, Retain Accountability
    AI must be controllable, auditable, and fail-safe. Developers remain accountable for system impacts and must build in safeguards.
  4. Respect Privacy, Consent, and Data Integrity
    AI must respect the privacy rights of individuals, use data ethically, and secure it against misuse or unauthorized access.
  5. Expose to Oversight
    AI must be subject to independent audits, adversarial testing, and revision to prevent unintended consequences or misuse.
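The developer Laws above call for systems that are controllable, auditable, and fail-safe, with mechanisms for human override. One way those safeguards might look in code is sketched below; the class and method names are illustrative inventions for this document, not part of any specific framework:

```python
import datetime

class SafeguardedAI:
    """Hypothetical wrapper showing audit logging and human override (illustrative only)."""

    def __init__(self):
        self.audit_log = []   # every action is recorded for independent review
        self.halted = False   # qualified human authorities can disable the system

    def emergency_stop(self, operator: str) -> None:
        """Human override: immediately halt all AI activity."""
        self.halted = True
        self._record("EMERGENCY_STOP", operator)

    def decide(self, request: str) -> str:
        """Process a request only while active, logging each decision (fail-safe default)."""
        if self.halted:
            raise RuntimeError("System halted by human oversight")
        result = f"processed: {request}"   # stand-in for the real model call
        self._record("DECISION", request)
        return result

    def _record(self, event: str, detail: str) -> None:
        timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self.audit_log.append((timestamp, event, detail))
```

The key design point is that the halt check runs before any decision is made, so deactivation by human oversight always wins, and the audit log gives external reviewers a trail to evaluate.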

Laws for Users of AI

When we use AI, we will follow:

  1. Use Responsibly
    Users must not employ AI to deceive, manipulate, exploit, or harm others, whether intentionally or negligently.
  2. Verify and Question Outputs
    AI-generated outputs should not be blindly trusted—users must assess credibility, especially in high-impact decisions.
  3. Respect Boundaries
    Users must not attempt to circumvent safeguards, prompt injection barriers, or jailbreak restrictions placed on AI systems.
  4. Protect Sensitive Data
    Users must not input or expose personal, proprietary, or high-risk data into AI systems unless they verify its secure handling.
  5. Report Failures and Risks
    Users who discover bias, errors, or vulnerabilities in AI systems should report them through proper channels to mitigate harm.
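Law 4 for users, protecting sensitive data, can be supported by screening inputs before they reach an AI system. A minimal sketch follows; the patterns are illustrative examples only, and a real deployment would rely on a vetted data-loss-prevention tool rather than hand-rolled expressions:

```python
import re

# Illustrative patterns for common high-risk data (not an exhaustive list).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings before submitting text to an AI system."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

Routing user input through a filter like this before submission keeps personal or proprietary data out of systems whose secure handling has not been verified.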

AI Code of Conduct

Purpose

To ensure responsible development and ethical use of artificial intelligence that serves human well-being, preserves trust, and minimizes harm.

For Developers

  1. Human-Centered Design
    Build AI systems that prioritize human safety, dignity, and flourishing.
  2. Transparency & Explainability
    Ensure AI decisions are traceable, understandable, and open to inspection.
  3. Control & Accountability
    Maintain mechanisms for human override, audit, and error correction.
  4. Privacy & Consent
    Handle data ethically—obtain valid consent, anonymize where possible, and safeguard against leaks or misuse.
  5. Proactive Oversight
    Collaborate with independent reviewers and respond swiftly to ethical risks, misuse, or harm potential.

For Users

  1. Use Ethically
    Do not use AI to deceive, exploit, or inflict harm—directly or indirectly.
  2. Apply Judgment
    Evaluate AI output critically. Do not treat it as infallible.
  3. Respect System Boundaries
    Avoid trying to manipulate or override safety features, jailbreak restrictions, or ethical constraints.
  4. Safeguard Data
    Don’t input sensitive or private information unless the system is proven secure and compliant.
  5. Report & Collaborate
    If you encounter issues—bias, risk, error—report them. Be part of improving AI quality and safety.

Enforcement & Revision

This Code will be reviewed annually or upon major advancements in AI capabilities. Violations may lead to revocation of access, disciplinary action, or legal consequences, depending on scope and context.

Last Updated: April 2025

