Artificial intelligence has become a part of everyday life for many of us. Ongoing advancements in AI speed and performance continually reshape how it's used and regulated, making the landscape ever-evolving. This article explores AI best practices, how to avoid misuse, and how to leverage AI's capabilities effectively.
When Using AI Goes Wrong
Every day, a new article is published detailing how someone faces the consequences of misusing AI. This happens at universities and in professional workplaces alike. Examples of this are plentiful, but here are a couple that illustrate why it's crucial to ensure appropriate and safe use of AI:
The first example comes from Yale University, where a Yale News reporter explained, "On March 25, students enrolled in Computer Science 223, 'Data Structures and Programming Techniques,' received a Canvas announcement stating that 'clear evidence of AI usage' had been detected in one-third of submissions for the course's second problem set. Over 150 students are currently enrolled in the class." An incident on this scale concerns many people, as it discourages ownership of knowledge. One of the course's instructors, Ozan Erat, noted, "I tell my students that if you let AI do the job for you, AI will take your job." Students who elect to use AI need to understand what it is actually doing for them. (Source: Yale News)
The second example comes from the professional world: in February 2025, two lawyers from the law firm Morgan & Morgan were caught after submitting litigation documents that cited fake cases generated by AI. Neither had bothered to check whether the cases the AI cited were real. This led to confusion about the matter at hand, questions about their legitimacy as attorneys, and uncertainty about how to move forward with their case. This reflects an unfortunate reality: AI has become commonplace in many disciplines faster than literacy and awareness about how these tools function have spread. (Source: Reuters)
Stories like these are not uncommon. As generative AI grows, we must be increasingly concerned with using it ethically and appropriately.
Best Practices
It's natural to have questions on how and when to use the AI tools available. Some instructors have strict limitations on how AI can be used in their classes, and others encourage its use when done appropriately. When it comes to basics, here are a few good ideas to build a foundation:
Do:
- Use AI to help generate ideas and increase productivity.
- Make sure your use of AI is in line with classroom and university policies.
- Fact-check all generative AI outputs.
- Seek guidance from your institution's AI Committee for clarification on AI usage.

Don't:
- Enter non-public institutional data into an unapproved AI tool. Non-public data should be submitted only to tools approved by your institution's AI Committee.
- Trust generative AI outputs without fact-checking them.
- Claim AI-generated content as your own.
- Use generative AI for purposes outside of those for which access is granted.
Using AI Effectively
Many people are already familiar with generative AI tools like ChatGPT and Microsoft Copilot. These applications can be incredibly useful when used correctly, and, as with most software, a little formal instruction goes a long way toward unlocking their full functionality. Here are some resources from CES institutions to help you take advantage of the capabilities of generative AI while ensuring its safe use:
Similar to these CES resources, Microsoft has resources to assist users in crafting prompts that will produce the best outputs from Copilot specifically:
Like many technologies, generative AI can produce outcomes both extremely positive and extremely negative. It can be used to deceive and manipulate, as well as to enlighten and instruct. By implementing the AI Guiding Principles and the strategies provided here, we can all encourage the safe and effective use of generative AI.