Abstract
Background: The integration of artificial intelligence (AI) in higher education and research is expanding rapidly, driven by tools like ChatGPT. Research suggests that while AI will likely transform education and research activities, its fair implementation hinges on understanding barriers and facilitators to adoption. This study investigates perceptions of AI among students and staff at a UK university, focusing on the impact of sociodemographic factors on AI use, attitudes and AI literacy.
Method: We adopted an explanatory mixed-methods approach, beginning with an online survey of 269 students and staff across each of the host institution’s academic departments. This survey captured self-reported use of AI, attitudes towards it, and a measure of self-perceived AI literacy. From this sample, we conducted 24 semi-structured online interviews (58.3% students, 41.7% staff) to further explore barriers and facilitators to AI acceptance and adoption in higher education and research.
Results: Quantitative data revealed significant differences in perceptions of AI usage between students and staff, with both groups overestimating the use of AI by others. Males reported higher AI use, more positive attitudes towards it, and greater self-perceived AI literacy than females. Higher socioeconomic status also predicted more frequent use of AI, and older age predicted lower AI literacy. Despite varied AI use reported in academic contexts, 27% of students and 31% of staff reported they had never used AI. Qualitative findings highlighted concerns about the academic repercussions of AI use (e.g., punishment for misconduct and potentially diminished performance if AI is prohibited) and AI’s threat to job security. Participants highlighted a lack of clear guidance from universities on AI, and there were strong calls for enhanced training and support from universities to promote responsible and ethical use of AI.
Conclusions: To promote effective and responsible use of AI, universities should respond to calls for training from students and staff so that individuals can make the most of the opportunities offered by AI. Universities should also engage in greater dialogue, and provide unambiguous guidance, to address misperceptions of how AI is used by others and the staff and student fears that influence AI acceptance and adoption.
Original language | English
---|---
Place of Publication | Charlottesville, US
Publisher | Center for Open Science
Number of pages | 34
DOIs |
Publication status | Submitted - 20 Aug 2024
Keywords
- Academia
- AI
- AI guidance
- AI policy
- ChatGPT
- Education Research