
Colleges across the country are scrambling to define what artificial intelligence should and should not look like in the classroom. 

Most of the public conversation has been dominated by faculty anxiety. Professors warn that AI use is dulling students' minds, automating dishonesty, and eroding the habits of reading, writing, and critical thinking that higher education claims to cultivate.

Panels have been convened. Student handbook policies have been rewritten. Course syllabi now include stern paragraphs about “unauthorized AI use.”

But what’s often missing from that debate is the perspective of the students themselves, who are actually navigating this new terrain every day.

My TikTok story offers a rare glimpse into how a few students are thinking about AI, not as a shortcut, but as a tool they are actively negotiating. Rather than presenting a single narrative about dependence or resistance, the video captures a range of approaches that complicate the idea that students are either cheating or outsourcing their brains to this technology.

Some students I talked to describe deliberately limiting their use of AI, especially in creative disciplines where originality and process are central to the work. For them, avoiding AI is less about fear of punishment and more about protecting their own intellectual development. Other students say they are turning to AI selectively, particularly in technical subjects, where it functions less as an answer machine and more as a tutor that helps them break down steps, clarify logic, and fill gaps left by lectures or textbooks that didn’t quite land.

Still others frame AI as a professional support tool rather than an academic crutch. In an era when students are expected to communicate fluently in emails, applications, and professional correspondence—often without adequate instruction—AI can help them refine tone and structure rather than generate ideas from scratch. Students are adamant that the thinking part is still theirs.

These students are already making distinctions that many college policies have yet to articulate. They differentiate between replacement and reinforcement, between passively receiving answers and actively engaging with material. They talk about learning, not just grades. And they express an awareness of AI's risks alongside its benefits, citing concerns about misinformation, deepfakes, and ethical misuse even as they incorporate the technology into their daily routines.

This student-centered perspective challenges a familiar framing in higher education that treats AI as something done to students rather than as something students thoughtfully respond to. It also exposes a disconnect between how AI is discussed in faculty meetings and how it is actually being used in dorm rooms and libraries. While professors often worry that AI flattens thinking, students describe it as something that can either dull or sharpen the mind, depending on how it’s used.

The larger question, then, may not be whether students are using AI (frankly, they are) but whether colleges are willing to meet them where they are. That means moving beyond blanket bans and panic-driven policies toward honest conversations about cognition, authorship, and learning in an AI-saturated world.

At the end of the day, everyone’s relationship to AI is different. But taken together, those differences tell a broader story about a generation being asked to learn, create, and think inside a technological shift that higher education itself is still struggling to understand.

Mekhi Neal is a junior Journalism major at Howard University with a passion for storytelling and broadcast media. He focuses on highlighting the experiences and resilience of students, especially within HBCU communities. You can follow him on Instagram.
