
Issues in Artificial Intelligence in Education: Ethical, Equity, Pedagogical, and Policy Challenges in the Generative AI Era
February 2026
Abstract
The integration of artificial intelligence (AI), particularly generative AI (GenAI), into educational systems has accelerated dramatically since 2022. As of 2025, 86% of education organisations worldwide report using generative AI — the highest rate of any industry (Microsoft Education, 2025). This article provides a comprehensive, rigorous synthesis of the core issues confronting AI in education (AIED), drawing on systematic reviews, policy reports, and empirical studies published through early 2026. Key domains examined include ethical dilemmas (privacy, algorithmic bias, transparency), equity and access disparities, pedagogical impacts on human agency and teacher–student relationships, threats to academic integrity, and systemic implementation barriers. Recent trends point toward human–AI collaboration and multimodal systems, yet persistent gaps in educator training, regulatory oversight, and inclusive design remain. Anchored in UNESCO’s rights-based and human-centered frameworks (UNESCO, 2025a, 2025b), the analysis underscores the urgent need for evidence-based, inclusive policies that preserve learner autonomy, critical thinking, and educational equity while harnessing AI’s transformative potential.
Keywords: artificial intelligence in education, generative AI, ethical challenges, algorithmic bias, digital equity, academic integrity, human-centered AI, UNESCO rights-based approach
Introduction
Artificial intelligence has evolved from experimental tools to a pervasive infrastructure in education. Intelligent tutoring systems, adaptive platforms, and large language models now support curriculum design, personalised feedback, assessment, and administration across K–12, higher education, and lifelong learning contexts. The release of widely accessible GenAI tools in late 2022 triggered exponential adoption; by mid-2025, 86% of education organisations globally were using generative AI (Microsoft Education, 2025).
Yet rapid integration has exposed significant frictions. Systematic reviews document recurring clusters of challenges: technological limitations, pedagogical misalignment, ethical risks, and systemic inequities (García-López & Trujillo-Liñán, 2025). UNESCO’s 2025 anthology frames AI as a disruptive force that compels education systems to re-examine foundational assumptions about knowledge, agency, and inclusion (UNESCO, 2025a). This article synthesises the most current evidence (2024–early 2026) to offer scholars, policymakers, and practitioners a rigorous, up-to-date analysis of these issues.
Recent Trends in AI Adoption
Adoption surged in 2024–2025. In the United States, the percentage of students and educators using AI “often” for school-related purposes rose by 26 and 21 percentage points respectively from the previous year, while the share of students who had never used AI fell by 20 points (Microsoft Education, 2025). Teachers primarily employ AI for content creation, lesson preparation, differentiation, and administrative tasks; students use it for tutoring, idea generation, summarisation, and exam preparation. Fewer than half of educators and students report deep AI literacy, highlighting a critical training gap (Microsoft Education, 2025).
Research has shifted toward human–AI co-creation, multimodal analytics, and ethical governance. International frameworks from UNESCO emphasise competency development for teachers and students alongside rights-based governance (UNESCO, 2025a, 2025b).
Ethical Challenges
A 2025 systematic review of 53 peer-reviewed studies identified data privacy, algorithmic bias, misinformation, loss of cognitive autonomy, and academic plagiarism as the dominant risks of GenAI in education (García-López & Trujillo-Liñán, 2025).
Privacy and Data Security
GenAI systems ingest vast quantities of student behavioural, performance, and personal data. Education remains one of the most targeted sectors for cyberattacks, and institutional reuse of data for model training without explicit consent raises serious compliance concerns under frameworks such as FERPA in the United States and the GDPR in the European Union (Microsoft Education, 2025; García-López & Trujillo-Liñán, 2025).
Algorithmic Bias and Fairness
Training data frequently embed societal prejudices, producing discriminatory outcomes in grading, recommendations, and content generation — particularly disadvantaging non-native English speakers, racial minorities, and students with disabilities (García-López & Trujillo-Liñán, 2025).
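The bias audits implied by such findings can be made concrete with even a simple fairness metric. The sketch below computes the demographic parity difference — the gap in positive-outcome rates between groups — on synthetic automated-grading data; the function name, data, and group labels are hypothetical illustrations, not tools cited in this article, and a real audit would use far richer metrics and data.

```python
# Illustrative sketch of one bias-audit metric: demographic parity difference.
# All data below is synthetic; a large gap between groups signals that an
# automated grader's outcomes warrant closer human scrutiny.

def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rate across groups.

    outcomes: list of 0/1 flags (e.g., 1 = essay marked as passing)
    groups:   list of group labels, one per outcome
    """
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Synthetic pass/fail decisions for two hypothetical student cohorts.
outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0]
groups = ["A"] * 6 + ["B"] * 6

gap = demographic_parity_difference(outcomes, groups)
print(f"Pass-rate gap between groups: {gap:.2f}")  # prints 0.33 for this data
```

Demographic parity is only one of several competing fairness criteria; equalised odds or calibration may be more appropriate depending on the assessment context.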
Transparency and Accountability
The “black-box” character of many models impedes explainability, undermining teacher oversight and student trust. Adaptive, internationally harmonised regulatory frameworks emphasising human oversight are urgently required (García-López & Trujillo-Liñán, 2025).
Equity and Access Issues
Nearly one-third of the global population (approximately 2.6 billion people) remains offline, disproportionately affecting girls, rural communities, persons with disabilities, and low-income groups (UNESCO, 2025b). Even where connectivity exists, subscription models and English-centric training data create new layers of exclusion (UNESCO, 2025a).
Pedagogical and Human-Centric Concerns
Over-reliance on AI risks eroding critical thinking, metacognition, and the intrinsic value of effortful learning (Microsoft Education, 2025; García-López & Trujillo-Liñán, 2025). Teachers and students alike report diminished human connection and relational depth when AI mediates instruction. Many educators still lack adequate professional development, and the added burden of verifying the authenticity of student work further increases their workload.
Academic Integrity and Assessment
GenAI blurs authorship boundaries and challenges traditional assessment paradigms. Educators increasingly advocate redesigning evaluations around process, reflection, and human–AI collaboration rather than final artefacts (UNESCO, 2025a).
Implementation, Training, and Systemic Barriers
Insufficient infrastructure, high costs, and inadequate teacher training remain major obstacles. Fewer than half of educators globally have received meaningful AI professional development (Microsoft Education, 2025).
Policy Responses and Future Directions
UNESCO’s ecosystem — including the 2021 Recommendation on the Ethics of AI, 2023 Guidance on Generative AI, and 2025 competency frameworks and rights-based reports — provides the foundational architecture for responsible integration (UNESCO, 2025a, 2025b). Recommended actions include mandatory AI literacy curricula, transparent procurement standards with bias audits, hybrid pedagogical models, and investment in open, multilingual training data.
Conclusion
As AI permeates every layer of education, its challenges transcend technical hurdles to engage the very essence of teaching and learning: human relationships, effortful cognition, fairness, and agency. The evidence through early 2026 reveals a technology of immense promise shadowed by risks of dehumanisation, inequity, and ethical erosion. Realising the benefits demands deliberate, rights-based stewardship grounded in pedagogical wisdom and global collaboration. Only through sustained critical engagement, robust governance, and unwavering commitment to inclusion can education ensure that AI amplifies rather than diminishes human potential (UNESCO, 2025a, 2025b).
References
García-López, I. M., & Trujillo-Liñán, L. (2025). Ethical and regulatory challenges of Generative AI in education: A systematic review. Frontiers in Education. https://doi.org/10.3389/feduc.2025.1565938
Microsoft Education. (2025). 2025 AI in Education: A Microsoft Special Report. https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/bade/documents/products-and-services/en-us/education/2025-Microsoft-AI-in-Education-Report.pdf
UNESCO. (2025a). AI and the future of education: Disruptions, dilemmas and directions. https://www.unesco.org/en/articles/ai-and-future-education-disruptions-dilemmas-and-directions
UNESCO. (2025b). AI and education: Protecting the rights of learners. https://www.unesco.org/en/articles/ai-and-education-protecting-rights-learners








