The emergence of generative artificial intelligence has sparked alarm in higher education circles, with headlines focused on cheating and plagiarism.
But the deeper issue is more urgent and more interesting: How should higher education evolve when AI becomes a constant companion in how students think, learn and eventually work?
Instead of resisting AI's presence in the classroom, forward-looking institutions must ask a new question: How do we prepare learners for a world where AI isn't the enemy of academic integrity, but a co-creator in cognitive processes, professional problem-solving and ethical decision-making?
Dr. Ryan Straight, an honors professor and assistant professor in cyber, intelligence and information operations in the College of Applied Science and Technology at the University of Arizona, challenges the assumption that AI is simply a threat to authentic learning.
Instead, he argues, universities must fundamentally reconsider how they define cognition, knowledge production and collaboration in this new landscape.
Designing Assessments for an AI-Integrated Classroom
Conventional assessments often assume that knowledge originates solely from the individual. But Straight counters that cognition today is distributed.
Students learn and think not in isolation but through a network of technologies, tools and interactions.
This shift invites educators to move away from closed-book exams and embrace process-based evaluations. "Instead of treating AI as a black-box threat, we should assess how students use it transparently," Straight said.
That could include:
- Assignments requiring documented human-AI collaboration
- Reflection portfolios showing how tools influenced learning decisions
- Collaborative meaning-making across human and machine perspectives
- Real-world problem-solving that values mobilizing knowledge over simply producing it
These approaches not only protect integrity but also reflect how future professionals will actually operate in AI-integrated workplaces.
Building AI Fluency in the Curriculum
To prepare students for those environments, AI literacy must go beyond tool use. Straight argues for fluency: the ability to understand and critique AI's embedded values, assumptions and impacts.
"Every conversation with AI contains echoes of human voices," he said. Large language models are trained on our collective digital footprint. That raises urgent ethical questions: Whose voices are amplified? Whose are erased? How do these patterns influence outcomes in everything from diagnostics to decision-making?
Curricula should explore:
- The environmental and labor costs of AI systems
- The offloading of cognitive effort and its effect on creativity
- Intellectual property in an era of blurred authorship
- The difference between human agency and AI mediation
As Straight puts it, we're not just adopting new tools but fundamentally shifting how knowledge is created and shared.
Just as cars changed transportation infrastructure, AI is redefining education's core assumptions. That change demands ethical frameworks that match the moment.
Helping Gen Z Move From AI Fluency to Responsibility
It's tempting to assume that today's students are "digital natives." But Straight cautions against that oversimplification. Familiarity with platforms doesn't automatically translate to critical awareness of how they shape thought.
The better approach? Don't "harness" Gen Z's tech fluency but collaborate with it.
Educators should design experiences that reflect students' multimodal thinking and fast-paced, cross-platform cognition. That might mean:
- Encouraging co-authorship with AI tools, including explicit credit and critical reflection
- Designing assignments that challenge students to identify bias or gaps in AI outputs
- Teaching "response-ability" — an ethical stance of engaging critically with AI, not passively consuming it
By moving beyond traditional pedagogies, educators can foster what Straight calls "networked wisdom," the ability to think ethically within these technological systems, not just about them from the outside.
Strategic Risks and Opportunities for AI in Higher Ed
The shift to AI-integrated learning isn't optional; it's already underway. Institutions that embrace this transformation have a chance to lead. But doing so requires humility, creativity and caution.
Straight sees several key opportunities:
- Democratizing knowledge production: With access to advanced tools, students become partners in inquiry, not just recipients
- Distributed learning models: AI becomes part of a learning network, not just a resource
- Reimagining authorship and assessment: Moving beyond outdated notions of originality unlocks more authentic, relational ways to evaluate learning
But the risks are real:
- Techno-solutionism: Believing AI can fix systemic challenges may deepen inequality
- Epistemic erosion: Without careful design, AI could reduce students' capacity for critical thought
- Surveillance creep: Data-rich platforms risk turning learners into trackable data points, undermining autonomy
The stakes are high. But so is the potential for more inclusive, dynamic and meaningful education.
The Path Forward
For higher education leaders, this isn't about choosing whether to allow AI in the classroom. It's about shaping how it's used and modeling the ethical, reflective habits students will need in their careers.
The institutions that thrive will be those that move beyond the panic and toward purposeful adaptation. That means designing programs where AI is not a shortcut or a threat, but a catalyst for deeper learning and ethical growth.
Now is the time to design programs that confront the realities of AI ethically, strategically and without compromise.