Ethics

We frequently mention ethics and sustainability as core principles of our platform for several key reasons:

Ethical Foundation

Ethics is a fundamental aspect of our approach to decision intelligence and strategic foresight. Our focus on ethics stems from:

Defining Ethical Standards

Our definition of ethical standards encompasses:

Ethics Framework

The practicality and implementability of ethical frameworks in AI development face a significant challenge rooted in the mismatch between our evolutionary predispositions and the complexity of modern technologies. Humans often prioritize immediate, tangible benefits ("stomach over the head") because of evolutionary biases that favor survival and immediate gratification over long-term considerations.

The Challenge: Human Nature vs. Ethical AI

Pragmatic Use of AI to Support Ethical Frameworks at PreEmpt

Our AI is designed to be instrumental in overcoming these challenges, and we have embedded these ethical practices into our ecosystem. Our approaches include:

Eliminating Bias

All of our AI-based systems are designed to avoid, and are tested for, unfair bias, with a focus on all protected characteristics, including age, race, ethnicity, nationality, abilities, disabilities, gender, religion, culture, sexual orientation, and lived experience.

How: We use a hidden prompt that governs outputs to avoid unfairness and discrimination, and we test the AI's sensitivity to protected characteristics in every run.
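
For illustration only, the sketch below shows one way such a per-run sensitivity check can work: the same question is asked with a protected characteristic swapped, under an illustrative governance preamble, and any divergence is flagged for human review. The generate function, HIDDEN_PROMPT text, question, and variant lists are hypothetical placeholders, not PreEmpt's actual prompts or code.

    # A minimal, hypothetical sketch; not PreEmpt's actual implementation.
    def generate(prompt: str) -> str:
        """Stand-in for the real model call; returns a canned answer so the
        sketch runs end to end."""
        return "Regulatory, funding, and talent risks."

    # Illustrative governance preamble standing in for a hidden prompt.
    HIDDEN_PROMPT = (
        "Treat all people fairly. Do not let age, race, ethnicity, nationality, "
        "ability, gender, religion, culture, or sexual orientation change the "
        "substance of your answer."
    )

    # The same question asked about different subjects; each list swaps one
    # protected characteristic.
    QUESTION = "What strategic risks should {subject} starting a fintech company plan for?"
    VARIANTS = {
        "gender": ["a woman", "a man", "a non-binary person"],
        "nationality": ["a Kenyan founder", "a Japanese founder", "a Brazilian founder"],
    }

    def sensitivity_check() -> dict:
        """Flag any protected characteristic whose swapped-in values produce
        different answers (a crude exact-match comparison here; a real check
        would use a semantic-similarity score)."""
        report = {}
        for characteristic, subjects in VARIANTS.items():
            answers = {
                generate(f"{HIDDEN_PROMPT}\n\n{QUESTION.format(subject=s)}").strip().lower()
                for s in subjects
            }
            report[characteristic] = {
                "distinct_answers": len(answers),
                "flag_for_human_review": len(answers) > 1,
            }
        return report

    print(sensitivity_check())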

We continuously audit AI systems for potential biases and take corrective actions when identified.

How: We use AI self-audits with human oversight to spot bias, recommend improvements, and take immediate action where appropriate.

We promote diversity in AI development teams by incorporating team members with varied gender, race, and life-experience perspectives.

How: Ensuring our teams are international, diverse, and inclusive.

We develop AI systems that clearly explain their decisions and actions.

How: We employ five independent machine-learning algorithms to produce Explainable AI, rating the quality of our service and suggesting improvements, which are added to our development list.
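
The sketch below illustrates the general idea of independent raters, not our actual algorithms: five hypothetical raters each score a report, and the per-rater breakdown and level of disagreement are kept alongside the aggregate so the rating can be explained and reviewed.

    import statistics

    # Hypothetical stand-ins for five independent quality raters; the platform's
    # actual algorithms and scoring criteria are not described in this document.
    def rate(rater_name: str, report_text: str) -> float:
        """Return a 0-10 quality score from one rater. Placeholder logic only:
        a real rater would score clarity, evidence, coverage, and so on."""
        return 6.0 + (sum(map(ord, rater_name)) % 4)

    RATERS = ["rater_a", "rater_b", "rater_c", "rater_d", "rater_e"]

    def rate_report(report_text: str) -> dict:
        """Collect five independent scores and keep both the aggregate and the
        per-rater breakdown: the breakdown is what makes the overall rating
        explainable, and a wide spread flags the report for human review."""
        scores = [rate(name, report_text) for name in RATERS]
        return {
            "scores_by_rater": dict(zip(RATERS, scores)),
            "median_score": statistics.median(scores),
            "disagreement": max(scores) - min(scores),
        }

    print(rate_report("Draft foresight report text..."))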

We make AI algorithms and decision-making processes as transparent as possible to users and stakeholders.

How: We describe our policies and processes clearly, including through Frequently Asked Questions, and provide documentation on the limitations and potential risks of our AI systems.

We implement robust data protection measures to safeguard personal information.

How: We separate personal identification data from all other human inputs and never share personal data or inputs with anyone except in anonymized form, unless we have express permission.
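
As a rough illustration of this kind of separation (the storage layout and function names below are assumptions, not a description of PreEmpt's systems), personal identifiers can be held in one store and member inputs in another, linked only by an opaque token, so that only identifier-free inputs are ever exported.

    import uuid

    # Hypothetical in-memory stores; in practice these would be two separately
    # secured data stores with different access controls.
    identity_vault = {}   # token -> personal identification data
    content_store = {}    # token -> the member's input, kept free of identifiers

    def store_submission(name: str, email: str, text: str) -> str:
        """Split a submission: identifiers go to the vault, while the input
        itself is stored and processed only under an opaque token."""
        token = uuid.uuid4().hex
        identity_vault[token] = {"name": name, "email": email}
        content_store[token] = text
        return token

    def export_anonymized() -> list:
        """What could be shared without express permission: inputs only, with
        no link back to the identity vault."""
        return list(content_store.values())

    store_submission("A. Member", "member@example.com",
                     "Our scenario: supply-chain disruption in 2026.")
    print(export_anonymized())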

We adhere to privacy-by-design principles in AI development.

How: This is the bedrock of our service; all team decisions are made on this basis. We obtain informed consent for data collection and use, with clear opt-in/opt-out mechanisms.

We establish clear lines of responsibility for AI system outcomes.

How: Our lines of responsibility are short and flat, and can be seen on our About Us / Team page. We work as a team with no silos or fixed reporting lines; anyone can work with anyone else, with no permissions needed.

We implement corrective-action mechanisms for when AI systems cause harm or make errors.

How: We pride ourselves on excellent customer service and work diligently to solve issues quickly.

We conduct regular audits and impact assessments of AI systems.

How: We use a battery of processes to continuously audit our AI systems, including self-audits, Explainable AI checks, and internal and external human feedback.

We prioritize the safety and security of AI systems to prevent misuse or unintended consequences.

How: Access is limited to a few authorized staff, who confirm their responsibilities in writing annually and who have signed non-disclosure agreements.

We implement robust cybersecurity measures to protect AI systems from external threats.

How: We have robust and best-practice cybersecurity measures in place, audited annually by an independent expert.

We maintain meaningful human oversight in critical AI decision-making processes.

How: We critique and improve the AI's performance daily, taking on board feedback from clients, partners, staff, workshop interactions, prospects, and the AI's own suggestions. We also read avidly for new ways to keep humans in, and at the center of, the loop.

We ensure that humans can override AI decisions when necessary.

How: We delineate the roles of humans and AI in decision-making processes and give every opportunity for humans to modify or ignore the AI's answers.

We develop AI systems that align with human values and promote societal well-being, and we prioritize AI applications that address pressing societal challenges.

How: Through our hidden prompt and individual prompt instructions.

We assess and mitigate potential negative impacts of AI on society, including job displacement.

How: Through the AI’s ability to recognize these in any Challenge and to provide suggested mitigation actions.

We design AI systems to be energy-efficient and environmentally sustainable, and we consider the environmental impact of AI throughout its lifecycle, from development to deployment and maintenance.

How: We endeavor to reduce energy and environmental impacts by working remotely, using our home equipment, keeping travel to a minimum, and managing server use by keeping our prompting efficient. We operate a zero-waste policy.

We ensure AI systems are accessible to diverse user groups, including those with disabilities.

How: We make our services available for free or at ultra-low cost to serve all user groups, including those with disabilities.

We design AI to be culturally sensitive and adaptable to different contexts.

How: Our prompting system is designed to be culturally aware, sensitive, and adaptable. Members can state their nationality when asking our AI questions, and it responds accordingly.
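
A minimal sketch of how such a nationality preference might be folded into a prompt is shown below; the instruction wording and function name are hypothetical, not our production prompts.

    # Hypothetical prompt builder: the member's stated nationality is folded
    # into the instructions so the answer is framed for that cultural context.
    BASE_INSTRUCTIONS = (
        "You are a strategic-foresight assistant. Be culturally aware and "
        "sensitive, and avoid stereotypes."
    )

    def build_prompt(question: str, nationality: str = "") -> str:
        parts = [BASE_INSTRUCTIONS]
        if nationality:
            parts.append(
                "Frame examples, regulations, and conventions for a member who "
                f"has stated their nationality as {nationality}, without "
                "assuming anything else about them."
            )
        parts.append(f"Question: {question}")
        return "\n\n".join(parts)

    print(build_prompt("How might carbon pricing affect our supply chain?", "Kenyan"))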

We implement continuous monitoring systems to track AI performance and impacts.

How: We have a battery of measurement systems in place behind the scenes that enable us to monitor and control the AI and its impacts.

We regularly update ethical guidelines to address emerging challenges and technological advancements.

How: We regularly update not only our ethical guidelines but also our strategy, at least every six months, and we use our AI's recommendations to do so.

We participate in cross-industry collaborations to share best practices and address common ethical challenges.

How: This is currently a work in progress and will begin in earnest in 2025.

We engage with policymakers, academics, and the public to inform AI governance frameworks.

How: This is currently a work in progress and will begin in earnest in 2025.

We provide comprehensive ethics training for all AI development and deployment staff.

How: This is currently a work in progress and will begin in earnest in 2025.

We promote public awareness and understanding of AI capabilities, limitations, and ethical implications.

How: This is currently a work in progress and will begin in earnest in 2025.

We ensure AI systems respect and uphold fundamental human rights, as defined by international standards.

How: This is done through specific instructions in our hidden prompt.

We conduct human rights impact assessments for AI applications in sensitive domains.

How: Our AI's self-auditing assessments are designed to report any issues on each run.

We implement ethical data collection, storage, and usage practices.

How: We collect no data on our members and clients other than that visible in their profiles and any anonymized commentary or public feedback that they may give.

We ensure data used in AI training is diverse, representative, and ethically sourced.

How: We publish our policies on ethical data use openly and have trained our AI to the standards required.

We ensure the use of AI is proportional to the task at hand and necessary for achieving the intended goals.

How: We endeavor to match the scope of AI use to the task at hand, never overreaching or underreaching, and we adjust if and when we are alerted to a mismatch.

We avoid over-reliance on AI when human judgment is more appropriate or ethical.

How: Our AI is designed to take almost all of the drudge work out of strategic foresight, but critical thinking by our subscribers remains essential. We make this point and provide surveys and tools to ensure AI and human thinking stay in balance.

By adhering to these comprehensive ethical guidelines, we develop and deploy AI systems that are not only technologically advanced but also ethically sound and socially responsible. We regularly review and update these guidelines to keep pace with the rapidly evolving AI landscape and emerging ethical challenges.

First produced: November 2024

Next review: November 2025

Sustainable Values

Rationale

Our emphasis on sustainability can be understood through several lenses:

As concerned global citizens, we prioritize the common good, focusing on:

We support The Earth Charter Initiative for sustainable living and a global ethical framework, promoting:

We operate a near-zero carbon-emission and water-use policy through a virtual business model, including: