
How Higher Ed Can Reverse It



Ninety percent of faculty say that AI is weakening critical thinking, according to a January 2026 survey of 1,057 faculty members conducted by the American Association of Colleges and Universities and Elon University. That concern is echoed globally. A separate January 2026 report from the Brookings Global Task Force on AI in Education—based on an eighteen-month premortem study involving more than 500 stakeholders across 50 countries—concluded that the risks generative AI poses to children’s learning and development currently outweigh its demonstrated benefits.

Taken together, these findings suggest a real possibility of learning degradation when AI is adopted at scale without shared norms, rigorous evaluation, or institutional guardrails. As I reported last September, 90 percent of college students already use AI for academic work, but only 40 percent of institutions had AI-use policies as of 2025. Even where guidance exists, it remains underdeveloped. Digital Promise’s December 2025 review of publicly available AI evaluation frameworks across 32 states and Puerto Rico found that most are still nascent and exploratory, relying largely on perception-based indicators rather than measurable learning outcomes.

The implications extend beyond the classroom and into how students are being prepared—or not prepared—for work in an AI-saturated economy. Workforce readiness may be, in part, a downstream indicator of learning quality.

Pearson’s January 2026 workforce analysis underscores this disconnect. While 44 percent of U.S. businesses now pay for AI tools—up from just 5 percent in 2023—the majority of workers lack the skills to use them effectively, putting an estimated $4.8–$6.6 trillion in potential economic gains at risk by 2034. Higher education does not appear to be closing this gap. Sixty-three percent of faculty report that spring 2025 graduates were not well prepared to use AI effectively in the workplace, despite frequent exposure to these tools during college. At the same time, 78 percent report increases in academic integrity violations since generative AI became widely available.

Together, these findings suggest a widening disconnect: AI is increasingly present in both education and work, yet students are leaving college neither demonstrably more capable learners nor better-prepared professionals.

The evidence does not point to an inevitable failure of AI in education. It points to a failure of unstructured adoption. Across the same body of research that documents cognitive erosion and workforce misalignment, a quieter pattern also emerges: Institutions that anchor AI use to learning objectives, evidence, and governance can avoid many of these harms.

What Works: Evidence-Anchored Principles

What emerges across the literature is not a single model institution, but a shared set of practices among earlier, more intentional adopters—including public research universities, regional comprehensives, and community colleges piloting AI under faculty-led governance structures. These institutions begin by defining explicit learning objectives before deploying AI, pilot tools before scaling, measure learning-relevant outcomes rather than adoption metrics, and establish standing governance bodies that include faculty, assessment leaders, and academic leadership. Across contexts, these practices consistently resolve into a small set of design principles that determine whether AI augments learning or quietly erodes it.

1. Protect Cognitive Work

Define, at the course and program level, which forms of thinking students must perform independently—such as synthesis, evaluation, and judgment. AI may assist, but should not replace the intellectual labor tied to learning objectives. If a task can be fully outsourced to AI without loss of learning, the task—not the tool—needs redesign. One concrete example comes from Georgetown University’s required first-year writing course (WRIT 1150), where instructors used an in-class policy-and-assignment redesign to protect write-to-think cognitive work. Students in two course sections drafted and debated an AI-use policy that permitted AI for bounded, process-oriented supports (brainstorming, outlining, and sentence-level editing) but prohibited direct quotation from AI and barred its use for creative writing and personal reflections; the policy also required transparency about where AI was used and emphasized student responsibility for accuracy and bias awareness.

2. Require Evidence Before Scaling

Move AI tools from pilot to adoption only when institutions can articulate the learning outcome the tool is meant to support, how impact will be measured, and what constitutes sufficient evidence of benefit. Exploration is appropriate early. Scaling without evidence is not. One example comes from the University of London Worldwide’s online undergraduate and postgraduate Law programs, which treated generative AI tutoring as a measurable pilot rather than a campuswide rollout. In a 2023 pilot of “Walter,” an AI “study buddy,” the institution ran pre- and post-intervention surveys to compare student expectations with actual experiences after deployment, then analyzed the results quantitatively, including statistical analysis, alongside qualitative feedback. Drawing on the pilot findings, including reported engagement levels and identified limitations, the resulting report made specific recommendations about how, and under what conditions, to advance AI-driven tutoring in the Law curriculum, rather than assuming benefit and scaling by default.

3. Embed Human Oversight

AI governance should not be ad hoc or optional. Establish standing review structures—at the department or institutional level—that oversee AI use in teaching, assessment, and student support. Oversight must be ongoing, not triggered only by misconduct or crisis. One operational model comes from Brandeis University, which has established an Artificial Intelligence Steering Council as a standing, central advisory body for AI use across academics and administration. Its charge includes reviewing acceptable-use guidance for AI in instruction, research, and administration; offering advisory input on AI procurement, pilots, and funding proposals as they move through IT governance; and collaborating with legal, security, and data governance teams to flag risks related to data use, bias, and intellectual property.

4. Anchor AI Integration in Institutional Values and Scholarly Rigor

At High Point University, the integration of AI is being approached as an extension of pedagogical rigor rather than a stand-alone technical initiative. As Heidi Echols, Director of the Center for Innovative Teaching and Learning (CITL), asserts, “The integration of artificial intelligence in higher education must be guided by institutional values.” In practice, this orientation is most visible through CITL’s role as a faculty support hub—providing professional development, teaching resources, and workshops that help instructors align emerging tools with course goals and assessment practices. The emphasis is on strengthening instructional judgment, transparency, and alignment with the university’s educational mission. “The path forward [with AI integration] requires the same rigorous approach we apply to any pedagogical innovation: Articulating clear learning objectives, establishing measurable success criteria, and maintaining systematic evaluation protocols,” says Echols.

5. Ground AI Literacy in Competencies and Evidence

Julaine Fowlin, Assistant Professor and Executive Director of the Center for the Advancement of Teaching and Learning at the Medical University of South Carolina, argues that AI literacy must be grounded in core competencies and evidence. “The goal of education is to prepare students for authentic problem-solving in the real world, so we should be asking: What core competencies must our learners achieve? What does evidence of that learning look like? Does that evidence now involve an AI-augmented component? What does the research say about teaching and learning, and how can AI help us operationalize and scale these principles?”

Fowlin also emphasizes that institutions need clarity about what AI literacy means in context and that adopting a holistic framework—such as the Digital Education Council’s AI Literacy Framework—creates a shared roadmap across awareness, competency, and discipline-specific application.

6. Evaluate What Matters

At the College of Marin, faculty attention is focused squarely on assessment integrity and learning evidence in an AI-rich environment. English instructor and textbook author Anna Mills underscores the central challenge: “Whether or not our pedagogy incorporates AI, we need to know what the student did to assess their learning.” Mills argues for stronger assessment guardrails—including in-person or otherwise secured assessments—to counter what she describes as the “constant temptation of cognitive offloading” and to ensure that course credits and degrees retain meaning. While pedagogy and learning outcomes may evolve for an AI-enabled world, she notes, evaluation must still be able to distinguish student thinking from tool output, particularly in moments of insecurity or temptation. This focus reflects a broader shift away from counting AI use and toward protecting learning-relevant indicators—evidence of independent reasoning, commitment, and mastery—without which institutions cannot tell whether AI is augmenting learning or quietly eroding it.

The Choice

The evidence is clear: AI can undermine learning when it is adopted without structure, and it can support learning when institutions apply intentional design, evidence, and governance. The practices outlined here—protecting cognitive work, requiring evidence before scaling, embedding human oversight, anchoring AI use in institutional values and scholarly rigor, defining competencies and evidence for AI-mediated learning, and evaluating what actually matters—are already in use. They work.

The question now is not whether these approaches are viable, but whether institutions will apply them consistently and soon enough to matter.

A practical starting point: Choose one course or program, specify the cognitive work students must do independently, define the learning outcome an AI tool is meant to support, establish how impact will be measured, route the decision through a standing governance structure, and review the results before expanding use.

That sequence—applied deliberately and repeated—determines whether AI becomes a force for learning or a shortcut around it.


