Among 22,963 university students surveyed across 120 countries, 91.1% had used ChatGPT in the last academic year. Only 39.1% think it makes them better critical thinkers. And 49.2% don’t know whether their school has an AI policy at all.
Most articles you’ve read about students using AI for homework are based on US teen surveys with 1,000 to 1,500 respondents. Pew, RAND, College Board: all valuable, all American, all small. The picture they paint is real but partial.
We pulled fresh cuts on a public Mendeley dataset (DOI 10.17632/nv2343nwsb.2, fielded October 2024 to February 2025) covering 120 countries and 7 languages. The sample is roughly sixteen times the size of Pew’s US teen sample and twenty-two times the size of HEPI’s UK undergraduate sample. Every percentage below is our own re-cut, with the question and sample size attached.
Key findings at a glance
- 91.1% of 22,963 university students had used ChatGPT in the 2024-25 academic year. Only 4.7% had used Claude.
- They use it most for summarizing texts (37.4%) and brainstorming (34.1%); least for math (19.2%) and creative writing (14.2%).
- 73.0% find it clarifying, only 43.0% find it reliable. That 30-point gap is the central tension in how students relate to AI.
- Students see AI as a tool, not a teacher: 54.9% say it improves AI literacy, but only 39.1% say it improves critical thinking.
- 62.4% expect employers to require AI knowledge of their workforce. Students view AI literacy as a credentialing requirement.
- 49.2% don’t know if their school has an AI policy. The institutional vacuum is global, not American.
The Dataset Most US Articles Don’t Cite
The primary source for this article is the Mendeley dataset DOI 10.17632/nv2343nwsb.2, version 2 published June 2025, licensed CC BY 4.0 and free to download.
It contains 22,963 university student responses from 120 countries, fielded between October 2024 and February 2025 in 7 languages: English, Italian, Spanish, Turkish, Japanese, Arabic, and Hebrew.
Stack it against the surveys that dominate the SERP:
| Source | Sample | Geography | Education level | Language |
|---|---|---|---|---|
| Mendeley 2024-25 | 22,963 | 120 countries | University | 7 languages |
| Pew (Feb 2026) | 1,458 | US only | Teens 13-17 | English |
| RAND (Mar 2026) | American Youth Panel | US only | Mixed | English |
| HEPI (2025) | 1,041 | UK only | Undergraduates | English |
We downloaded the public CSV (22,963 rows, roughly 180 columns), mapped 120 country-of-study responses to seven regions, and re-cut the Likert-scale questions on tool use, tasks, skill perception, and institutional policy.
Every percentage that follows is our cut, with the question code and sample size attached so the analysis is reproducible from the same public file.
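The top-2-box computation behind every “% often or always” and “% agree or strongly agree” figure is simple enough to sketch. In this toy example the column name `q18a`, the region labels, and the sample rows are all invented for illustration; they are not taken from the dataset:

```python
from collections import defaultdict

def pct_top2(codes):
    """Share of responses coded 4 or 5 ("often/always" or
    "agree/strongly agree") among valid 1-5 answers."""
    valid = [c for c in codes if c in (1, 2, 3, 4, 5)]
    return 100.0 * sum(c >= 4 for c in valid) / len(valid)

def regional_cut(rows, region_key, question_key):
    """Group respondents by region and compute % top-2 per region.
    `rows` is a list of dicts, one per respondent."""
    by_region = defaultdict(list)
    for row in rows:
        by_region[row[region_key]].append(row[question_key])
    return {region: round(pct_top2(codes), 1)
            for region, codes in by_region.items()}

# Invented rows, purely to show the shape of the computation.
rows = [
    {"region": "Africa", "q18a": 5},
    {"region": "Africa", "q18a": 4},
    {"region": "Africa", "q18a": 2},
    {"region": "Europe", "q18a": 4},
    {"region": "Europe", "q18a": 1},
    {"region": "Europe", "q18a": 3},
    {"region": "Europe", "q18a": 2},
]
print(regional_cut(rows, "region", "q18a"))
# → {'Africa': 66.7, 'Europe': 25.0}
```

Every regional percentage in the article is this computation run over the relevant question column of the full file.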
Adoption Is Near-Universal, But Africa and North America Use AI Differently
Of 21,612 students who answered the tool-use questions, 91.1% had used ChatGPT in the academic year. That makes ChatGPT roughly 3.6 times more popular than Google Gemini (25.2%), 4.8 times more popular than Microsoft Copilot (19.1%), and 19 times more popular than Claude (4.7%) among university students. Perplexity sits at 7.0%.
The regional cut is where it gets interesting:
- Europe: 91.2% (n=8,702)
- Latin America: 90.7% (n=2,164)
- North America: 87.7% (n=944)
- Middle East: 85.5% (n=3,174)
- Africa: 80.6% (n=3,253)
- Asia-Pacific: 79.3% (n=2,772)

Adoption is high everywhere. The interesting part is what students do with the tool once they have it.
Among students who use ChatGPT, the share who report using it for academic writing “often or always” runs in almost the opposite order of overall adoption:
- Africa: 32.8%
- Asia-Pacific: 31.9%
- Middle East: 31.9%
- Europe: 27.9%
- Latin America: 23.7%
- North America: 22.8%
In other words, the regions with the lowest overall adoption are the regions where students who do use ChatGPT lean on it hardest for written assignments. The North America to Africa gap is 10 percentage points, on the same Likert question, in the same field period.
The African and Asia-Pacific samples here (3,253 and 2,772) are larger than RAND’s or Pew’s entire US-only studies, which gives the comparison some real weight.
One caveat. The Mendeley dataset uses convenience sampling through partner universities. Country-level coverage is uneven, so these comparisons are directional rather than census-grade. They are still the broadest cross-regional cut available on this question.
What Students Actually Use ChatGPT For (And What They Don’t)
Across the 19,562 ChatGPT users who answered task-frequency questions, the top use is summarizing texts (37.4% reach for it “often or always”), followed by brainstorming (34.1%), exam prep (32.4%), research assistance (31.0%), and academic writing (28.9%).

What students actually use ChatGPT for: top 12 tasks ranked
That ordering matches RAND’s US-only March 2026 numbers more closely than you might expect. RAND found 38% of US students cite better explanations, 35% brainstorming, 33% looking up facts, 33% drafting and revising. Brainstorming holds up almost identically across both datasets, which is a useful sanity check.
The surprises are in what students aren’t using it for. Coding assistance sits at 19.3%, math and calculations at 19.2%, professional writing (e-mails) at 17.3%, and creative writing brings up the rear at just 14.2%. The “never” numbers tell the same story: 46.2% never use ChatGPT for coding, 36.2% never for math, 43.4% never for creative writing.
The dev-community narrative that ChatGPT is principally a coding assistant is not what the median university student experiences. It’s a reading comprehension and writing-prep tool first.
That maps onto how students rate the tool itself. They value its clarity. They doubt its reliability. The size of that gap is the most consequential finding in this dataset.

The trust gap: students value ChatGPT’s clarity, doubt its reliability
73.0% of students agree ChatGPT “simplifies complex information,” by far the highest-rated capability in the survey. Only 43.0% agree it provides reliable information, and only 40.1% are satisfied with its accuracy.
Students treat ChatGPT as a clarifier, not a source of truth. They run material through it to understand it, then double-check the conclusions elsewhere. That 30-point gap between clarity and reliability is what tools designed specifically for student workflows have to close.
Where Students Think AI Helps, And Where It Doesn’t
When students were asked whether ChatGPT improves specific skills, the answers split cleanly along one fault line. ChatGPT helps with the digital and tool-shaped. It does not help with the human and thought-shaped.

Where students think ChatGPT helps and where it doesn’t
The full ranked list shows the percentage who agree or strongly agree that ChatGPT improves each skill (n roughly 16,250 to 16,300 per question):
- AI literacy: 54.9%
- Digital communication: 54.0%
- Foreign language: 53.0%
- Data analysis: 52.9%
- Digital content creation: 52.5%
- Academic writing: 51.7%
- Information literacy: 50.9%
- Professional writing: 50.4%
- Programming: 49.9%
- Analytical skills: 46.9%
- Problem-solving: 46.2%
- Creativity: 43.5%
- Critical thinking: 39.1%
- Native language: 37.2%
- Numeracy: 35.6%
- Decision-making: 35.4%
- Interpersonal communication: 34.2%
The pattern is hard to miss. Every skill above 50% is something you do with a tool: writing, organizing, programming, analyzing data, communicating digitally. Every skill below 40% is something you do inside your own head or with another human: critical thinking, judgment, talking face-to-face, working in your mother tongue.
The 2023-24 PLOS ONE paper found the same shape. Two waves of data, roughly 46,000 students between them, pointing to the same conclusion: students think AI augments their tool-using selves and leaves their thinking selves untouched.
That matters because every WEF, McKinsey, and OECD list of “most needed skills 2030” leads with critical thinking, decision-making, and complex problem-solving. Students themselves are telling us, in tens of thousands, that ChatGPT does not develop those skills. It develops the ones around them.
You can read this two ways. Optimistically: AI takes the tooling load off so students can focus their thinking time elsewhere.
Pessimistically: students are outsourcing the parts of work that have a price tag attached and getting better at the parts that increasingly don’t. Both can be true. The data only confirms that students themselves see the gap.
Students Already Believe AI Will Be Required for Their Jobs
The skills-perception story sits inside a bigger frame: students don’t see AI use as optional. They see it as a credential they will need.

What students expect ChatGPT will do to work
When asked what ChatGPT will do to the labour market they are about to enter, the highest-consensus statements are about what employers will demand of them. 62.4% expect employers to require knowledge of AI from their workforce. 62.3% expect demand for AI-skilled employees to rise. 60.9% expect AI to require employees to acquire new skills. These three statements form the strongest cluster of agreement in the entire labour-market section of the survey.
The job-destruction question is more split. 41.2% agree ChatGPT “will reduce the number of jobs”; 26.3% disagree; 32.5% sit on the fence. Compare this against the 60.0% who agree AI will “change the nature of jobs” and 49.0% who agree it will “create new jobs.” Students see job transformation as more certain than job elimination. They are not catastrophists. They are pragmatists who expect the rules to change.
A more uncomfortable finding sits at 48.1%: nearly half of students agree AI will increase inequality between younger and older workers. The students saying this are themselves the younger workers. They see themselves on the right side of that gap, but they see the gap.
This is the missing piece in most AI-in-education narratives. Students aren’t using ChatGPT for homework because they are lazy or because their teachers are absent. They are using it because they have already concluded that AI literacy is a workforce requirement. The question is whether their institutions know this and are preparing them for it.
Half the World’s Students Don’t Know If Their School Has an AI Policy
Asked whether their university has policies or a code of ethics on ChatGPT use, only 33.2% of students said yes. 17.5% said no. The remaining 49.2%, almost half, said “I don’t know.”
That last number is the most useful one in this article. A clear “no” you can debate. “I don’t know” means the institution has either not communicated a policy or has communicated it in a way that does not reach students. Functionally, those students are operating without guidance.
What makes the policy gap painful is that students themselves are uneasy about AI use, and they want their institutions to step in.

Students want regulation. Their institutions aren’t providing it.
41 to 43% of students agree that ChatGPT encourages cheating, encourages plagiarism, misleads with inaccurate information, and hinders learning by doing the work. On the demand side, 52.4% want university and faculty ethical guidelines, 47.5% want employer guidelines, 45.5% want international regulation. Only 39.0% want government regulation. Students see this as primarily an institutional rather than a legislative problem.
RAND’s March 2026 release found that only about one-third of US students report a clear schoolwide AI policy. Our global cut confirms this is not a US-only problem. It’s a global higher-education problem.
In the institutional vacuum, students are self-regulating with tools they themselves think are dangerous. The biggest predictor of whether AI use in higher education goes well isn’t the model. It’s whether the institution has put adult guidance around it.
What This Means for Students, Educators, and Tool Builders
For students
Use AI for what your peers actually rate it as helpful for. That means summarizing complex sources, brainstorming, drafting in a foreign language, organizing research, and clarifying material you don’t yet understand. Those are the high-trust tasks in the data.
Do the rest yourself, deliberately. The same students rating AI as great for digital writing rate it weakly for critical thinking (39.1%), decision-making (35.4%), numeracy (35.6%), and native-language work (37.2%).
Mind the 73% versus 43% gap: ChatGPT is rated as good at simplifying complex information but only middling at being reliable. Use it to clarify, not to fact-check. Verify any number, citation, or claim before it ends up in submitted work.
If 62.4% of your peers are right that employers will require AI knowledge, the goal is to become AI-literate, not AI-dependent. The difference is whether you can still produce the underlying thinking when the tool is taken away.
For educators
The headline finding is the policy vacuum. 49.2% of students don’t know if your institution has an AI policy. Even a one-page schoolwide statement, sent in a single email, beats nothing.
The four concerns that show up at 41 to 43% (cheating, plagiarism, inaccurate information, hindering learning) are the ones to address explicitly. Spell out what is permitted, what is borderline, and what is not. Students themselves want this: 52.4% support faculty guidelines on AI use. The demand side is already there.
The labour-market data raises the stakes. 62% of your students expect employers to require AI knowledge of them, but they don’t know what your institution permits or expects. That mismatch is the gap. Curricula that ignore AI fail the 60% who expect it to be required. Curricula that allow uncritical AI use fail the same 60% by leaving them without the foundational thinking skills employers will still expect alongside AI literacy.
For tool builders
The addressable opportunity sits in the gap between 73.0% “simplifies complex information” and 43.0% “reliable information.” Students value LLM-style summarization and don’t trust the output. Grounded summarization, where outputs are anchored to specific source documents the user uploads, directly closes that gap. NoteGPT’s AI homework helper is one product built on exactly this pattern: summaries tied to a specific PDF or video so the student can verify each claim against the original source.
Tools that ignore the trust gap will keep losing student users to the next, equally-distrusted general LLM. Tools that close it have a defensible position. For a wider view of what’s working in this space, see the best AI productivity apps for students.
Frequently Asked Questions
How many students use AI for homework in 2026?
Globally, among university students, 91.1% have used ChatGPT (Mendeley 22,963-student survey, October 2024 to February 2025). Among US K-12 and college students, 62% used AI for homework as of December 2025 (RAND American Youth Panel). Among UK undergraduates, 92% reported using AI tools (HEPI 2025). The exact number depends heavily on geography and education level.
What do students use AI for most?
Top five use cases globally, percentage of ChatGPT users who use it “often or always”: summarizing texts (37.4%), brainstorming (34.1%), exam prep and study assistance (32.4%), research assistance (31.0%), academic writing (28.9%). Coding (19.3%) and math (19.2%) sit much lower than developer-focused commentary suggests.
Is using AI for homework cheating?
Students themselves are split. 41.9% agree AI encourages students to cheat, 41.6% agree it encourages plagiarism, but 91.1% still use ChatGPT. Most institutions haven’t drawn a clear line: only 33.2% of students confirm their school has a written AI policy, and 49.2% don’t know either way.
Does AI help students learn?
Students rate AI as helpful for digital and tool-based skills (AI literacy 54.9%, digital communication 54.0%, data analysis 52.9%, academic writing 51.7%) but markedly less helpful for foundational thinking skills (critical thinking 39.1%, numeracy 35.6%, decision-making 35.4%, interpersonal communication 34.2%). The pattern holds across both the 2023-24 and 2024-25 survey waves.
Where can I download this dataset?
The full dataset is available on Mendeley Data, DOI 10.17632/nv2343nwsb.2, licensed CC BY 4.0. It includes the questionnaire PDF and the anonymized response CSV (22,963 rows). For tools built on top of this kind of grounded analysis, see NoteGPT’s AI homework helper.
How We Did This Analysis
The numbers in this article come from re-cutting the public Mendeley dataset (DOI 10.17632/nv2343nwsb.2), released by Aristovnik, Keržič, Tomaževič, Umek, Brezovar et al. (University of Ljubljana, with roughly 200 international academic partners) in June 2025 under CC BY 4.0.
What we did:
- Downloaded the full 22,963-row response CSV and the 9-page questionnaire PDF directly from Mendeley Data.
- Decoded the Likert scale numeric codes (1 = Strongly disagree to 5 = Strongly agree for opinion questions; 1 = Never to 5 = Always for frequency questions).
- Mapped Q4 country-of-study responses (197 country options) to seven world regions (Europe, North America, Latin America, Middle East, Africa, Asia-Pacific, plus an unmapped bucket).
- Computed “% Agree or Strongly Agree” (codes 4+5) for opinion questions and “% Often or Always” (codes 4+5) for frequency questions.
- Cross-tabbed regional ChatGPT adoption (Q13a) against academic-writing intensity (Q18a) to surface the headline finding.
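A minimal sketch of that cross-tab, assuming invented column names (`country`, `q13a`, `q18a`) and a hand-rolled region table; the real file’s coding and country spellings may differ:

```python
# Hypothetical sketch of the pipeline steps above. The REGION table
# and column names are illustrative assumptions, not the dataset's.
REGION = {
    "Nigeria": "Africa", "Kenya": "Africa",
    "Italy": "Europe", "Spain": "Europe",
    "Japan": "Asia-Pacific", "Canada": "North America",
    # ...the remaining country options map the same way
}

def writing_intensity_by_region(rows):
    """Among ChatGPT adopters (q13a == 1), the share per region who
    report academic-writing use 'often or always' (q18a in {4, 5})."""
    counts = {}
    for r in rows:
        if r["q13a"] != 1:                      # adopters only
            continue
        region = REGION.get(r["country"], "Unmapped")
        used, total = counts.get(region, (0, 0))
        counts[region] = (used + (r["q18a"] >= 4), total + 1)
    return {reg: round(100.0 * u / t, 1)
            for reg, (u, t) in counts.items()}
```

Run over the full file with the real region mapping, this is the computation behind the regional academic-writing percentages quoted earlier.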
Reproducibility: the dataset is freely available; our analysis script and headline numbers are available on request. Sample sizes (n=) for each question are noted inline throughout the article.
Caveats worth knowing:
- The Mendeley survey uses convenience sampling through partner universities. Country-level coverage is uneven, so regional comparisons are directional rather than census-grade.
- Self-reported skill development perceptions (Q28, Q29) reflect what students believe about AI’s impact on their skills, not measured learning outcomes.
- Comparisons against Pew (Feb 2026), RAND (March 2026) and HEPI (2025) are directional. Sampling frames, age ranges, and question wording differ across surveys.
- The full questionnaire (Q1 through Q42) covers many topics we have not analysed here, including emotions while using ChatGPT (Q32), labour-market skills mismatch detail (Q31), and discipline-level differences (Q10). Future articles will cut those slices.

