On "Ethical Use" of AI: A Different Approach
Why colleges should stop trying to police AI usage, embrace this tool as part of student workflows, and adjust assignments and grading to adapt.

I just got back from the ICLCA conference. Great talks and community, as always. A lot of discussion around AI.
Some conversations didn’t quite sit right with me. Lots of terms like “ethical use”, “catching students” using AI, and how to “confront them”.
While I understand the sentiment, I feel this is the wrong approach. Policing AI usage is both (1) impractical (2) a disservice to students who are trying to learn valuable real-world skills.
Rather than trying to control AI usage, I believe we need to embrace it and incentivize producing great final products with it. While this approach has drawbacks, it presents a greater net positive than the punitive one.
This article addresses writing work only
STEM problems have different nuances and will be saved for a later discussion. Subjective assignments like essays and long-answer questions are the biggest flashpoint here.
Writing work can be further sorted into two buckets:
- Take-home work: Assignments where AI is often already in use, and where I argue this is fine and should push us to focus further on output quality and uniqueness in voice and substance.
- In-class work: This medium forces “live” thinking with no assists and will become increasingly important in the future I am advocating for.
Why policing AI use is difficult
Fighting against AI usage on take-home work is a losing battle for three reasons:
- Models are always changing
The models evolve monthly. It will remain impossible to say with certainty that one essay is 100% AI, another is 80% AI, and a third is 0%.
- Confrontations are risky
What if you get it wrong? Accusing a student of academic dishonesty is a serious, high-stakes event. If your "detector" or instinct is wrong and the student wrote the essay themselves, you may damage that student's confidence for good.
- We’re creating an adversarial classroom
AI is too powerful, and too flexible, for students to resist. If you say "reminder, no AI!" and then 70% of the class uses it anyway, you are at odds with reality, students feel guilty, and you are constantly working against your students, trying to catch them in the act. That is not a positive teaching relationship.
For these reasons, I think this is not a battle worth fighting. But it’s not all doom and gloom - I think there are real and positive reasons why we shouldn’t even WANT to fight it. Embracing AI usage, and training students on it, is in their best interests.
Is using AI "cheating"?
On the one hand, you can liken AI usage to plagiarism. The student is no longer the one doing the work - someone or something else is. This prevents the brain development that we are striving to deliver. This is why it's been labeled "cheating".
On the other hand, AI is a powerful tool that can help a student learn, grow, and produce great work. When used by a thoughtful and responsible learner, it's a positive aid that can deliver a better result in a shorter amount of time. In this view, the product of one’s output is what matters, not the process used to arrive at it.
I feel that AI is so pervasive, so useful, that we should get away from the “unethical” label and focus on productive use instead. Let’s explore what that looks like.
The AI-supported writing workflow
We can all agree that asking AI to write an essay and submitting it unedited is bad.
On the other end, using AI as a drafting partner, with the writer heavily involved in substance, feedback, and final wording, can be very productive, and perhaps offers the best balance of quality and efficiency in most fields going forward. It’s critical for students to practice this latter workflow, and college should help.
A student might follow these steps:
- Ideation: AI can help the writer understand the prompt, brainstorm potential arguments, ideate on those arguments, and find counterarguments.
- Outlining: The writer should own this step, perhaps asking the AI for feedback, but providing the overall structure and logical flow of the piece. Edit heavily until satisfied. This is also a good place to riff: jot longer statements as sub-bullets when they come easily to you, giving the AI material to draw on while drafting.
- Drafting: The AI can produce initial drafts. The writer can consider drafts from different models or with different prompts, and pick the baseline they like.
- Editing: The writer must be heavily involved here and spend time. I have found that taking my favorite AI draft, copying it into a text editor, and then literally rewriting most sentences one by one in my own words, works well. That is what I did on this piece.
- Re-editing: Do a few re-reads at different times and keep editing, spending 10-20 minutes per session.
- Review: AI can act as a final reviewer, highlighting weaknesses and guiding polishing before submission.
This process helps with the cold-start problem and also helps produce a better product. If the student wants an A and not a C, they need to spend time editing and iterating.
This workflow makes getting started feel less daunting. I just need to (1) get an idea of what I want to say, (2) put it together in a somewhat sloppy outline with a good flow, and then (3) let AI take on a large part of the stress by getting coherent words onto the page. At that point, I have something that “exists” and is somewhat my own, which is a big step. Now I can really make it my own by rewriting it sentence by sentence. This feels very achievable, since the words are already there and I am just rephrasing. As I write, new ideas often come to mind, and I take them to completion. The result has my voice and a good flow.
So, if students are all using AI, how will grading look in this new reality? I believe we can still see a clear gradient in quality between pieces, but the focus will shift more towards uniqueness vs. obvious technical issues.
Everyone is a manager now
Students and workers both are becoming managers of AI. A manager's job is to:
- Monitor the team members (AI, in this case)
- Inspect the work produced, providing feedback on what’s good and what’s bad
- Coach the team member for better performance
- Step in at times to take over and deliver the product themselves
Managers’ jobs are not easy, and you learn a lot by doing them. As AI handles more of the heavy lifting, these managerial skills will become more important over time. Yes, that means changes to how we educate and to the skills you leave college with. But I’m not sure it is productive to fight this change - better to embrace it and learn how to educate well in this new environment.
Grading must focus on uniqueness instead of mechanics
In the past, a large portion of a paper's grade was based on mechanics: grammar, spelling, sentence structure, and basic organization. AI washes these errors away. But as we all know, these AI-generated essays are often flawed and frustrating to read.
Common issues with AI generated writing include:
- The voice is bland. You get no sense of a real human, with real opinions or style, behind the words.
- The substance is weak. The arguments are often surface-level, generic, and lack novel insight or specific, well-chosen evidence.
- Structural issues are common. For example, the voice often shifts from one section to another when an AI-written paragraph is pasted in around the writer’s own original work.
As an overall theme, grading will become more focused on uniqueness: "Does this paper feel different from all the other submissions?" If not, the paper is somewhat weak, AI or not. If both the voice and the substance are bland, we are justified in calling the paper weak.
Uniqueness might come in two different flavors (perhaps many more):
- Uniqueness in voice: Do I get a sense of the person behind this writing? Are the word choices interesting? Does it sound like a college student or a corporate robot?
- Uniqueness in argument: Is this a bland, default response to the prompt, or is there an interesting, creative, or contrarian angle? Did the student connect the topic to a personal experience or a niche piece of research?
I recognize that scoring for uniqueness is harder and more subjective. It may lead to more difficult conversations with students about their grades. But we all feel it when we're reading AI drivel. That "feeling" is based on real patterns and a lack of originality that should be penalized either way, so I feel it's important to explore this further.
What we are losing (and how to get it back)
Accepting that students will use AI at home is not without issues. Writing from one's own brain is good for development. When a 17-year-old gets to college, they don’t yet have the skills or judgment to manage their AI assistant or to know what good looks like. Students need to put in the reps to become good writers, and using AI may inhibit that. This is a real concern.
So, we need to find new ways to force students to ideate and draft without AI's aid. The answer, I feel, is in creating more in-class time for writing that forces these reps.
Leaning on in-class writing
A larger percentage of class time must be dedicated to active writing work, with either immediate informal feedback from peers or TAs, or at-home grading. The structure of the class might be:
- Take-Home assignments are now focused on delivering a great, polished result. Students are encouraged to use AI, search engines, and any other tools at their disposal. They are graded on the final product's quality, substance, and unique voice.
- In-class work is now frequent, perhaps weekly, for credit. This might necessitate a bit more assigned lecture time, perhaps with one session per week run by a TA. Grading these sessions is less important, but there should be enough consequence to get students to actually try to produce a decent product. This forces the “reps” of writing from scratch.
This hybrid model accepts reality, gets away from the problems with policing AI usage, teaches the new skills required for the modern workplace, and preserves a space for the foundational, independent thinking we all value. I think it could produce more effective and confident writers by the time they graduate.
Final thought on ethics
It is easy to be paternalistic in higher ed. After all, these students are here to learn from smart and experienced people. But too much paternalism can come off as out of touch and fuel a disconnect. I think this AI debate and the label of “unethical behavior” for such a pervasive and important tool risks such a disconnect. We should consider getting on the same side of the table with students, working together with the tools we have to help them produce great work. If we want to stimulate independent thinking in this new reality, the challenge is on us to find ways to do so.
Thanks for reading! Would love to hear your thoughts.
