Artificial Intelligence Principles

At the CES Privacy Center, we are committed to thoroughly researching and incorporating emerging AI technologies responsibly into our business and academic activities. As we integrate Artificial Intelligence (AI) into our daily operations, we prioritize maintaining our unique spiritual environment and our role as innovators in higher education. Our approach to AI is guided by the principles of Integrity, Data Protection, Transparency, and Accountability, which align with our mission to develop capable and trusted disciples of Jesus Christ. We invite all members of the CES community to actively engage with these principles and stay informed about responsible AI practices.

1. Integrity

Definition: Maintaining integrity in our data, in our processes, and in our conduct as individuals.

We ensure the quality and accuracy of data inputs and outputs when using AI systems by developing and applying standardized processes and rules. This involves measures to mitigate risks, prevent AI hallucinations (fabricated or unsupported outputs), and avoid bias and inaccuracy. Upholding integrity means committing to ethical and inclusive actions, promoting fairness, and ensuring compliance with relevant standards. Our AI technologies are leveraged to enhance our educational and spiritual values, enriching lives in positive and uplifting ways.

2. Data Protection

Definition: Emphasizing both security and privacy.

Data Protection involves safeguarding personal data against unauthorized access, use, and loss by implementing robust security measures and adhering to data privacy laws such as GDPR and FERPA. Personal data is collected, used, and stored only for specified, legitimate purposes, in line with data governance standards and retention policies. This principle prioritizes safety, avoids harm to individuals, and ensures the use of resilient and reliable AI systems.

3. Transparency

Definition: Making AI systems' functionalities, data usage, and decision-making processes open and understandable.

Transparency entails providing clear, intelligible information about how AI systems operate, the data they use, and the logic behind their decisions. This supports the right to explanation, enabling individuals to understand and, if necessary, challenge AI-driven decisions. By being transparent, we build trust and ensure that our AI practices are aligned with our values and ethical standards.

4. Accountability

Definition: Being answerable for the outcomes of the AI systems we use.

Accountability includes establishing clear governance structures that define roles and responsibilities within the institution for AI decision-making. It ensures AI systems are used in compliance with ethical standards and legal requirements. Accountability mechanisms may include policies, procedures, internal audits, training, and a committee to oversee AI practices. This principle makes us answerable for the impact of our AI systems and fosters a culture of ethical AI use.

Together, these principles form a foundation for the responsible governance of AI. They promote the use of AI technologies in ways that are ethical, legal, respectful of privacy and security, and aligned with our institutional values and goals. By adhering to these principles, we enable innovation, gain efficiency, and harness even greater untapped potential, all while maintaining our commitment to the CES mission.

For more information or assistance, please reach out to the CES Privacy Center.