Agenda:
Addressing the Global Impact of Artificial Intelligence on Employment and Education
Chairperson’s Letter:
Dear Delegates,
We are delighted to welcome you to the United Nations Educational, Scientific and Cultural Organisation (UNESCO) Committee for the Schoolhouse Model United Nations 2026 conference. It is an honour to serve as your chairpersons, and we are committed to making this a productive conference for you all.
Meet Your Chairpersons
Hafsah: I’m a high school junior from India. I have been at Schoolhouse since November 2021, and have served as the Secretary General for four editions of the Schoolhouse MUN (including this year!). I joined my first MUN conference as a clueless 7th grader representing Brazil in UNGA, and little did I know that this activity would become one of the most significant parts of my life. I’ve continued to participate and organise Model UNs ever since, with over 5 years of experience. The agenda for this year’s conference is one that is truly close to my heart — I intend to major in cognitive science and work in ed-tech in the future, and I’m particularly interested in areas concerning the implementation of AI systems in the realm of education. I’m very excited to chair this committee along with Sarah and witness productive debate and thoughtful solutions!
Sarah: I’m a high school senior in the US, and a few fun facts are that I love hi-chews, music, figure skating, fashion, math, and the environment! I’ve been involved in Model United Nations since freshman year. For my first two MUNs, I was a general assembly delegate, writing resolutions to increase accessibility to clean water in Kenya and then to decrease maternal mortality rates in Sierra Leone. For my third and fourth, I served as an associate justice, facilitating debate between student attorneys in the International Court of Justice. I’ve also participated in Youth in Government (essentially MUN but in regards to state-level organs), currently serving as Chief Justice of the Supreme Court for my conference! As for Schoolhouse, I’ve been here for around two years now, and chaired for both the UNSC committee of the 2024 Fall SMUN and the UNCSW committee of the 2025 SMUN. Now, I’m excited to be a part of Schoolhouse MUN once again, chairing with Hafsah over such a new and relevant topic!
Introduction to the Agenda
Artificial intelligence is reshaping societies at an unprecedented pace. From AI recruiters and workplace automation that leads to layoffs, to AI-assisted classrooms and even AI teachers, its influence cannot be ignored. Governments across the world are investing heavily in AI innovation, corporations are deploying AI at a large scale, and international organisations, including UNESCO, are racing to establish ethical and regulatory frameworks for the use of AI in education and employment. The use of AI raises concerns including, but not limited to, human rights, economic stability and educational equity. Ultimately, it all depends on how we, as a society, choose to use artificial intelligence - the question is, how do we decide how and where to use AI? What regulatory principles must be established and enforced? These questions, delegates, are just a few of those you will consider in this committee. We will discuss the impact AI has had on global labour markets and the employment process, the integration of AI tools in education systems for administrators, teachers and students alike, and the ethical concerns surrounding the use of AI in employment and education.
Delegates, over these coming days, we urge you to harness your creative thinking and problem solving, to fully immerse yourselves in the issues at hand. The solutions that you devise have the potential to set important precedents for the matter at hand. As you engage in fruitful research and thoughtful debate, we urge you, too, to maintain an open mind, to listen attentively even when you disagree on certain topics. In a world that is becoming increasingly polarized, it is more important than ever to continue to be willing to consider different ideas and perspectives.
We look forward to hearing your dynamic discussions and wish you all a fulfilling MUN!
Sincerely,
Hafsah M and Sarah W
UNESCO Chairpersons, Schoolhouse Model United Nations 2026
Background Guide
1. Committee Overview
After the horrifically destructive nature of the first two world wars, leaders from all over the world realized that political and economic arrangements alone would not be enough for long-lasting peace. "Since wars begin in the minds of men, it is in the minds of men that the defences of peace must be constructed" (UNESCO Constitution), and so UNESCO, the United Nations Educational, Scientific and Cultural Organization, was formed. Utilizing education, culture, science and information, the organization aims to bridge the gap between people across the world through the establishment of international standards and legal texts, tools for inter-state cooperation, and lists and designations, all while providing a platform for thought leaders to share important ideas. (1)
UNESCO is driven by three key bodies: the General Conference, which determines the policies and main areas of focus of the organization; the Secretariat, which oversees overall operations; and the Executive Board, which tracks implementation of initiatives.
Artificial intelligence is a topic UNESCO has dealt with before: in 2021, it published a Recommendation on the ethics of artificial intelligence as well as a guide for policy-makers on AI and education. UNESCO considers AI a technology that has the potential to contribute to the goals of the UN, but one that also raises ethical issues that must be managed with a human-centred approach in mind. (2)
2. An Introduction to Artificial Intelligence
2.1 Historical Development
1950 - Alan Turing introduces the Turing Test as a method to assess machine intelligence through an imitation game (3, 4).
1956 - The term "artificial intelligence" is first introduced at Dartmouth College, where a small group of scientists gathered for the Dartmouth Summer Research Project on Artificial Intelligence, the birth of both the field and the term (5).
1969 - Shakey, the first general-purpose robot, is developed (3).
1997 - IBM's Deep Blue, an advanced chess-playing computer, famously defeats world chess champion Garry Kasparov in a six-game match, marking a significant milestone in artificial intelligence (6).
2002 - The Roomba, a commercially successful robotic vacuum, is launched (3).
2011 - IBM's Watson, a DeepQA computer, wins the quiz show Jeopardy!, demonstrating advances in natural language processing (3, 7).
2016 - Google DeepMind's AlphaGo wins against Go world champion Lee Sedol (3).
2019 - The Organisation for Economic Co-operation and Development (OECD) adopts the first-of-its-kind intergovernmental principles on AI, promoting innovative, trustworthy AI that respects human rights and democratic values (8).
2020 - OpenAI introduces GPT-3, a language model with 175 billion parameters, making it one of the largest and most sophisticated AI models to date (9).
2021 - UNESCO adopts its global Recommendation on the Ethics of Artificial Intelligence. The protection of human rights and dignity is the cornerstone of the Recommendation, based on the advancement of fundamental principles such as transparency and fairness, always remembering the importance of human oversight of AI systems (10).
2021-2023 - OpenAI launches DALL-E, followed by DALL-E 2 and DALL-E 3, generative AI models capable of producing highly detailed images from textual descriptions (9).
2023 - The United Nations Secretary-General establishes a High-level Advisory Body on AI to support global governance discussions (11).
2023 - OpenAI's GPT-4 sets a new benchmark in AI capabilities (3).
2.2 Definitions of Key Terms
Artificial Intelligence: a catch-all term that refers broadly to a machine-based system’s ability to analyze, apply logic, and improve its capabilities through data analysis to solve tasks (12).
Machine Learning (ML): a subset of AI that enables machines to create algorithms based on data patterns (e.g., diffusion models, large language models) to perform specific tasks like predicting behavior or generating content (12).
Generative Artificial Intelligence (GenAI): a subset of ML focused on producing new content (e.g., text, video, audio, 3D models). GenAI systems are defined by their content-generating abilities (12).
Deep Learning: a subset of ML that utilizes artificial neural networks with multiple layers to process data (12).
Artificial General Intelligence (AGI): a hypothetical stage in the development of machine learning (ML) in which an artificial intelligence (AI) system can match or exceed the cognitive abilities of human beings across any task. It represents the fundamental, abstract goal of AI development: the artificial replication of human intelligence in a machine or software (13).
3. Global Actors
3.1 Governments
European Union (EU): The EU has been a leading regulatory actor in the field of AI. In April 2021, the European Commission proposed the first EU artificial intelligence law, establishing a risk-based classification system for AI. The rules establish obligations for providers and users depending on the level of risk an AI system poses. The law bans applications including voice-activated toys that encourage dangerous behaviour in children, social-scoring AI, and real-time and remote biometric identification systems, such as facial recognition in public spaces (14).
The AI regulations specify that member states must abide by their obligations under international human rights law, while private-sector activities should be in line with international frameworks such as the United Nations Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises (15).
Australia: Voluntary AI Ethics Principles guide responsible AI development in Australia, with potential reforms under consideration. These include the AI Ethics Principles published in 2019, eight voluntary principles for the responsible design, development and implementation of AI that are consistent with the OECD's Principles on AI, and the Guidance for AI Adoption published in October 2025, which replaced the 2024 Voluntary AI Safety Standard (VAISS) (16).
United Kingdom: The UK government's AI Regulation White Paper of August 3, 2023 (the "White Paper") and its written response of February 6, 2024 to the feedback it received as part of its consultation on the White Paper (the "Response") both indicate that the UK does not intend to enact horizontal AI regulation in the near future. Instead, the White Paper and the Response support a "principles-based framework" for existing sector-specific regulators to interpret and apply to the development and use of AI within their domains. The UK considers that a non-statutory approach to the application of the framework offers "critical adaptability" that keeps pace with rapid and uncertain advances in AI technology. However, the UK may choose to introduce a statutory duty on regulators to have "due regard" to the application of the principles after reviewing the initial period of their non-statutory implementation (17).
United States: Currently, there is no comprehensive federal legislation or regulation in the US that governs the development of AI or specifically prohibits or restricts its use. President Trump has signaled a permissive approach to AI regulation, issuing an Executive Order on Removing Barriers to American Leadership in AI ("Removing Barriers EO") in January 2025, which rescinds President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI ("Biden EO") (18).
3.2 International Organisations
United Nations: The UN's AI resolutions encourage Member States to adopt national rules to establish safe, secure and trustworthy AI systems and create forums to advance global cooperation, scientific understanding, and share best practices. On March 21, 2024, the United Nations (UN) adopted a draft resolution on AI, entitled "Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development.” On September 19, 2024, the UN's High-level Advisory Body on Artificial Intelligence released "Governing AI for Humanity," a final report on global AI governance. On August 26, 2025, the General Assembly adopted a further draft resolution on AI, entitled "[t]erms of reference and modalities for the establishment and functioning of the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on Artificial Intelligence Governance.” (19)
Organisation for Economic Co‑operation and Development (OECD): The OECD AI Principles have been adopted by dozens of governments, and are among the first formal intergovernmental standards for AI governance. Adopted in 2019 and updated in 2024, they are composed of five values-based principles and five recommendations that provide practical and flexible guidance for policymakers and AI actors. The five values are: 1. Inclusive growth, sustainable development and well-being, 2. Human rights and democratic values, including fairness and privacy, 3. Transparency and explainability, 4. Robustness, security and safety and 5. Accountability (8).
UNESCO: UNESCO produced the first-ever global standard on AI ethics – the ‘Recommendation on the Ethics of Artificial Intelligence’ in November 2021. It is applicable to all 194 member states of UNESCO. What makes the Recommendation exceptionally applicable are its extensive Policy Action Areas, which allow policymakers to translate the core values and principles into action with respect to data governance, environment and ecosystems, gender, education and research, and health and social wellbeing, among many other spheres (20).
Global Partnership on Artificial Intelligence (GPAI): The GPAI is an integrated partnership that brings together OECD members and GPAI countries to advance an ambitious agenda for implementing human-centric, safe, secure and trustworthy artificial intelligence (AI) embodied in the principles of the OECD Recommendation on AI (21).
4. General Ethical Concerns
AI-driven content generators raise several ethical concerns related to bias, plagiarism, intellectual property, misuse, and the potential to generate misinformation, fake news, or misleading content.
A primary ethical concern in using AI content generators is the potential for bias in their responses. As the LLMs powering content generators are trained on massive sets of pre-existing information, images, and data drawn from many sources, including the Web, biases present in the training data will be reflected in a model's output. This can lead to unfair, inaccurate, or narrowly focused responses and discriminatory outcomes, such as racial or gender discrimination.
Another major ethical concern is the potential for misuse and abuse of AI content generators. For example, text generators like ChatGPT can be used to produce content for malicious purposes, such as spreading misinformation or sexist, racist, or otherwise offensive messages. They could also be used to generate harmful content that incites violence or social unrest, or to impersonate individuals.
The use of tools like ChatGPT can trigger increased plagiarism by authors and students that is challenging to detect. To address plagiarism aided by AI tools, known as AIgiarism, special tools that aim to distinguish AI-written text from human-written text, such as GPTZero and the AI Text Classifier, have emerged.
Hackers could effectively use AI content generators to create personalized, convincing spam messages and images with hidden malicious code. This can increase cybersecurity attacks and extend their reach to a large number of victims. Additionally, users may feed sensitive personal or business information to chatbots, which could be misused by the developer. As illustrative examples, consider these two use cases. Recently, an executive cut and pasted the firm's 2023 strategy document into a chatbot and asked it to create PowerPoint slides for a presentation. In another incident, a doctor fed his patient's name and medical condition into ChatGPT and asked it to craft a letter to the patient's insurance company. These use cases highlight the ethical concerns, data privacy issues, and security risks surrounding AI content generators (22).
5. Employment
AI is reshaping employment at an unprecedented pace. Its effects range widely, from how people are hired and how they work, to the skills employers demand and broader labour market trends.
5.1 AI in the Employment Process
5.1.a Applicants
In recent years, candidates have increasingly relied on generative AI technologies to draft their resumes, cover letters and application responses. A global study of applicants and hiring professionals found that almost half (~45%) of job seekers use generative AI to craft or improve resumes and cover letters, a trend that spans multiple regions including the U.S., U.K., India, Germany, and beyond (21). Applicants often view AI as a means to overcome the "tediousness" of personalising cover letters or writing and structuring resumes, especially given larger application volumes (22). Surveys of hiring managers show a mixed reception: while many see AI use by applicants as acceptable for proofreading or structural support, a significant minority view unsolicited AI content as a red flag or may scrutinize submissions more closely (25).
5.1.b Reviewers and Employers
AI systems in recruitment are used to screen resumes, rank applicants and match skills to job descriptions. These systems promise efficiency gains but also raise concerns about bias, transparency, and fairness. A major OECD analysis states that AI adoption “may also introduce or perpetuate bias…the adverse impact of AI could be far greater by virtue of the volume and velocity of the decisions it takes” (26). In an audit study of automated resume screening using LLMs, research by the AI Equity Lab at Brookings found that “resumes with white‑associated names were preferred in 85.1% of cases” while “Black-associated names led in just 8.6% of cases” (27). LLMs can also show “self-preference” bias where resumes generated by the same model used for screening are “…23% to 60% more likely to be shortlisted than equally qualified applicants” (28).
These studies show that AI hiring tools, particularly those based on LLMs, can suffer from demographic biases that risk shaping who gets shortlisted or advanced in the hiring process unless the tools are carefully audited and calibrated.
5.2 AI Integration in the Workplace and Projected Impacts on Employment
AI integration in the workplace refers to the adoption of AI tools and systems to support, enhance or automate job tasks, management functions and business workflows. According to the International Labour Organization (ILO), approximately 2.3% of global employment is at high risk of automation due to AI exposure, especially routine, task-based jobs. In high-income countries, this exposure rate is higher (~5.1% of employment) because many roles involve digital tasks more susceptible to automation; clerical and administrative roles are particularly exposed (29). A widely cited study using U.S. occupational data estimated that about two-thirds of current U.S. jobs could be partially automated by AI, though complete replacement is less frequent; rather, AI may take over portions of the tasks that make up broader roles (30). OECD surveys reveal that many workers, especially in manufacturing and finance, expect AI to decrease their wages over the next decade, and jobs may change significantly with automation of task segments (31).
AI does have the potential to create new jobs, too. A meta‑analysis of global labour trends indicates that AI could create around 170 million new jobs by 2030, outpacing an estimated 92 million jobs displaced, resulting in a net gain of about 78 million positions globally. These new jobs will cluster in roles that involve AI development, oversight, maintenance, and complementary human skills such as creativity, problem‑solving, and interpersonal communication (32).
6. Education
6.1 Independent student/ teacher usage
Current AI tools are accessible and easy to use, unlike older technologies, which has resulted in widespread use among students without prompting from their schools. According to the Center for Democracy and Technology's (CDT) 2025 report on AI in schools, 86% of students reported using AI, whether to start assignments, summarize information, or obtain information more quickly. (33)
Likewise, teachers have begun to use AI alongside their teaching, with 85% indicating some in-class AI usage in the CDT report. Ways teachers use AI in class include brainstorming new ideas for class, making lesson materials, and identifying areas for students to improve on with AI analytics.
6.2 Frameworks and policies addressing AI
6.2.a In schools
Although at a slower rate than that at which students and teachers are adopting AI tools, schools are beginning to integrate AI into their systems.
The share of teachers reporting that their school generally permitted AI rose from 31% in 2023 to 36% in 2024 and 46% in 2025 (33).
However, most schools still lack specific frameworks. Pertaining to generative AI specifically, in CollegeBoard's 2025 AI Research Brief on US high school students' GenAI usage, 39% of schools or districts did not allow GenAI, 47% either had no policy or allowed teachers or departments to decide their own, and 13% encouraged students to use GenAI tools in each class. As illustrated by this report, only around one in five schools had a uniform schoolwide policy of some kind regarding generative AI. (34)
6.2.b Across states and countries
Large-scale policies addressing AI integration are being adopted as well.
Northern Ireland:
The Education Authority of Northern Ireland (EANI), which comprises 21,000 educators and 380,000 students, is in charge of primary and secondary education across the region. In 2024, EANI deployed the AI productivity assistant Microsoft 365 Copilot to lighten teachers' workloads. As a result, teachers were able to prepare lesson materials faster and thus focus more on students, easily creating personalized materials to address the diverse learning needs in their classrooms. (35)
United States:
In the United States, 33 states out of 50, as well as Puerto Rico, have adopted official guidance or policy on AI usage in primary and secondary education. (36)
European Union:
The EU AI Act, described earlier in section 3.1, acknowledges the considerable influence AI can have on students, whether positive or negative, and thus provides guidelines on AI use in education. AI systems with the ability to infer emotions are prohibited in educational settings (except for medical or safety reasons), and the act seeks to prevent violations of fundamental rights by requiring a "fundamental rights impact assessment". (37)
Brazil:
The Chamber of Deputies of Brazil, in September 2021, approved a bill to regulate AI across the country, inspired by the EU's ongoing AI regulation efforts. Although it shares many aspects with the EU approach, it neither prohibits specific types of AI nor designates any systems as "high-risk", instead leaving such determinations to the individual sectors or systems themselves. (38)
6.3 Impacts
Wharton, Penn and GPT Base, GPT Tutor:
In 2024, researchers from Wharton and Penn conducted an experiment in Turkey with almost 1,000 high school math students, split into three groups given different resources to practice with. One group was given GPT Base, similar to ChatGPT-4; the second was provided with GPT Tutor, also a similar interface but with built-in safeguards and teacher input that used hints to guide students rather than immediately directing them to the answer; the third was the control group and thus received no technology. (39)
The GPT Base group performed 48% better than the control group during the AI-assisted practice sessions. However, their performance was 17% worse than that of the control group when they had to take an exam without AI assistance. The GPT Tutor group, on the other hand, performed 127% better in the AI-assisted practice sessions; on the exam, though, their scores were around the same as the control group's.
Macquarie University and Virtual Peer:
Through a collaboration with Microsoft and Celebal Technologies, Macquarie University created Virtual Peer. This is an AI chatbot based on active learning and retrieval-augmented generation, which mitigates the risk of hallucinations, as it procures its answers not through independent generation but by referencing select materials curated by university personnel. In October 2024, Macquarie conducted a study with 1,400 second-year psychology students who then used Virtual Peer for two weeks before their final exams. (40)
A significant difference was observed: students scored an average of almost five marks higher. The students also provided positive feedback, and many expressed that they would be disappointed if they could no longer use the chatbot. Virtual Peer bridged the gap between students and academic support as a resource that was available even on weekends, when there typically wouldn't be office hours. By responding to routine questions from students and providing unlimited questions for students to practice with, the chatbot supported students and alleviated teachers' workload as well.
6.4 The future of AI in education and potential issues
In the future, AI usage will only continue to grow. The global AI-in-education market was valued at 7.05 billion USD in 2025 and is projected to reach 9.58 billion USD in 2026 and 136.79 billion USD by 2035, a compound annual growth rate (CAGR) of 34.52% over that decade. (41)
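As a sanity check on the projection, the implied CAGR can be computed directly from the 2025 and 2035 market figures using the standard formula. This is a minimal Python sketch; the figures are those cited above, and the function name is illustrative:

```python
# Compound annual growth rate: (end / start)^(1 / years) - 1
def cagr(start_value: float, end_value: float, years: int) -> float:
    return (end_value / start_value) ** (1 / years) - 1

# Market figures cited above, in billions of USD
rate = cagr(7.05, 136.79, 2035 - 2025)
print(f"{rate:.2%}")  # roughly 34.5% per year, consistent with the cited 34.52%
```

Note that the 34.52% figure reproduces when 2025 is taken as the base year; computing from the 2026 projection instead yields about 34.4%.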
But beyond the monetary benefits, is there more to the picture?
6.4.a Interpersonal relationships
According to the CDT report, 50% of students agreed with the idea that AI usage in class resulted in them feeling less connected to their teacher, and 38% with the statement that they’d prefer to work with AI rather than with a teacher when they couldn’t understand a concept. Additionally, more than four in ten students have had sustained conversations with AI for non-academic uses, from getting relationship advice to receiving mental health support. (33)
AI use in education is an issue regarding not just academics but also social skills, as the incentive to reach out to others for help lowers with the constant availability of AI. This can result in students receiving responses that lack the nuance humans have, especially when it comes to important life issues. Furthermore, students miss out on the opportunity to build relationships with their peers and professors, ones that could become especially important in the future, whether in the form of a job opportunity or a letter of recommendation. As AI use becomes ever more prevalent, it’s possible that the social isolation initially prompted by the rise of the Internet and social media will only become further exacerbated, as students become more and more disconnected from those around them. (42)
6.4.b Cheating
While AI tools have the potential to lighten teachers' workloads, they can also achieve the opposite. 71% of teachers believe that student use of AI has placed an additional burden on them to evaluate whether a student's work is their own or AI-generated, and 66% are worried about their ability to make that evaluation (30). Similarly, 100% of principals in CollegeBoard's study were concerned about student academic integrity, with 59% very concerned and 41% somewhat concerned. On the other hand, per Microsoft's 2025 AI in Education study, students' top concern regarding school-related AI usage was the possibility of being accused of plagiarizing or cheating. (43)
With the very presence of AI in classrooms, educators and school leaders are prone to suspecting students of cheating with AI. Despite the proliferation of AI checkers, none are 100% accurate, so such accusations are often based on intuition. While teachers have experience backing up that intuition, inherent biases are never totally absent, which can result in false accusations. The resulting tension between students and teachers widens the gap between them.
6.4.c Privacy and security
As reported by the CDT, 29% of teachers indicated that their school monitored student activity on personal devices, while 39% reported that their school monitored student activity outside of school hours as well. Furthermore, around one fourth of teachers (23%) reported that their school had undergone a large-scale data breach. Unsurprisingly, 69% of parents and 49% of students were concerned about the privacy and security of student data. (33)
As students use AI more, it is inevitable that such tools receive vast amounts of sensitive information. Many schools have already been targeted by cybercriminals, and such attacks will only increase without proper safeguards in place to protect this information. (44)
QARMA (Questions a Resolution Must Answer)
What role should UNESCO play in addressing the impact of AI on employment and education?
How can UNESCO support countries in upskilling and/or reskilling workers whose jobs are likely to be affected by AI automation?
Which issues within the topics of employment and education should UNESCO focus on? What kinds of frameworks can be built to address such problems? How can inspiration be taken from the EU's AI Act and Brazil's AI regulation legislation to build such frameworks?
How can UNESCO ensure that AI is used as a productive tool while maintaining high ethical standards that put humans first?
What policies can be implemented to ensure AI adoption does not worsen inequalities in education or employment opportunities?
In what ways can ethical guidelines for AI use be standardized globally while still allowing flexibility for national contexts?
How can UNESCO ensure that developing countries are supported in implementing AI strategies to avoid a global technological divide?
References and Resources