Why HCI Research Matters in Higher Education Systems
Exploring how Human-Computer Interaction principles can transform student evaluation platforms, learning management systems, and academic workflows in universities.
Human-Computer Interaction research is often associated with consumer products — mobile apps, e-commerce interfaces, social platforms. But some of the most impactful applications of HCI principles are happening quietly inside university systems, shaping how thousands of students and educators interact with digital tools every day.
The Problem with Traditional Academic Platforms
Most university systems are built by IT departments prioritising functionality over usability. The result is software that works but frustrates: complex navigation, unclear feedback, inconsistent design, and forms that feel like government paperwork.
This friction has measurable consequences. Students disengage. Lecturers spend hours on administrative overhead instead of teaching. Evaluation data becomes unreliable because the interface discourages honest use.
HCI as a Research Tool, Not Just a Design Discipline
When building the Lecturer Evaluation System (LES) at Sabaragamuwa University, we approached the project through an HCI lens from the outset — not as an afterthought. This meant:
- User research first: Interviewing students and lecturers about pain points with paper-based evaluation before writing a single line of code.
- Anonymity by design: Understanding that students would self-censor without guaranteed anonymity, and architecting the data model accordingly.
- Cognitive load reduction: Simplifying the evaluation form to the minimum viable set of questions that would still yield actionable data for lecturers.
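The anonymity-by-design point can be sketched as a minimal data model. This is an illustrative example, not the actual LES schema (all table and column names here are invented): participation is recorded in one table for completion tracking, while answers live in a separate table that carries no student identifier at all, so the two can never be joined.

```python
import sqlite3

# In-memory demo of an anonymity-preserving evaluation schema.
# Table and column names are hypothetical, not the actual LES schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Who has submitted (for completion tracking); no link to answers.
    CREATE TABLE participation (
        student_id TEXT NOT NULL,
        form_id    TEXT NOT NULL,
        PRIMARY KEY (student_id, form_id)
    );
    -- The answers themselves carry no student identifier at all.
    CREATE TABLE responses (
        form_id     TEXT NOT NULL,
        question_id TEXT NOT NULL,
        answer      INTEGER NOT NULL
    );
""")

def submit(student_id: str, form_id: str, answers: dict) -> None:
    """Record participation and answers in one transaction,
    without ever storing the two together in one row."""
    with conn:
        conn.execute(
            "INSERT INTO participation VALUES (?, ?)", (student_id, form_id)
        )
        conn.executemany(
            "INSERT INTO responses VALUES (?, ?, ?)",
            [(form_id, q, a) for q, a in answers.items()],
        )

submit("s001", "cs101-2024", {"clarity": 5, "pace": 4})
submit("s002", "cs101-2024", {"clarity": 3, "pace": 2})

# Completion rate is computable; linking an answer to a student is not.
done = conn.execute(
    "SELECT COUNT(*) FROM participation WHERE form_id = ?", ("cs101-2024",)
).fetchone()[0]
print(done)  # 2
```

The design choice is that anonymity is enforced by the schema itself rather than by an access-control policy that an administrator could later relax.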
The outcome was a system that achieved near-100% digital adoption within a semester — not because it was mandated, but because it was genuinely easier to use than the paper alternative.
Actor-Perspective Modelling
One research thread I’ve been developing is an actor-perspective ontological framework for modelling student evaluation processing lifecycles. Traditional system design treats all users as generic “users.” But in a university context, the needs, mental models, and success criteria of an Administrator, a Department Operator, a Lecturer, and a Student are fundamentally different.
Modelling these actors explicitly — their goals, their information needs, their failure modes — leads to interfaces that feel intuitive because they match how people actually think about their roles, not how the database tables are structured.
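A minimal sketch of what explicit actor modelling can look like in code. The attribute names (goals, information needs, failure modes) follow the text above; the concrete entries and helper function are illustrative assumptions, not the published framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Actor:
    """One role in the evaluation lifecycle, modelled explicitly."""
    role: str
    goals: tuple
    information_needs: tuple
    failure_modes: tuple

STUDENT = Actor(
    role="Student",
    goals=("give honest feedback quickly",),
    information_needs=("which evaluations are still open",),
    failure_modes=("abandons a long form", "self-censors if identifiable"),
)

LECTURER = Actor(
    role="Lecturer",
    goals=("see actionable, aggregated feedback",),
    information_needs=("per-question trends across semesters",),
    failure_modes=("misreads small-sample noise as a trend",),
)

def landing_page_items(actor: Actor) -> list:
    """Derive what a role's landing page should surface, from the actor
    model rather than from the database schema."""
    return [f"{actor.role}: show '{need}'" for need in actor.information_needs]

print(landing_page_items(STUDENT))
# ["Student: show 'which evaluations are still open'"]
```

An interface review can then check each screen against the actor it serves: if a screen surfaces information no modelled actor needs, that is a signal the design is mirroring the database rather than the user.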
Practical Implications for System Builders
If you’re building or evaluating an academic platform, here are three HCI principles worth internalising:
- Match the user's mental model, not the system's data model. Students think in terms of "my courses this semester," not "enrolment records with active status."
- Design for trust. Academic systems often collect sensitive data. Visual cues, clear privacy language, and consistent behaviour build the psychological safety that encourages honest engagement.
- Measure adoption, not completion. A form that 90% of students complete is not necessarily a good form — it might just be mandatory. Track whether users return voluntarily and whether the data quality improves over time.
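The first principle above can be made concrete with a thin presentation layer that translates raw enrolment records into the "my courses this semester" view the student actually thinks in. The record fields here are invented for illustration; any real system would have its own schema.

```python
from dataclasses import dataclass

@dataclass
class Enrolment:
    """A hypothetical enrolment record, as the database stores it."""
    student_id: str
    course_code: str
    course_title: str
    semester: str
    status: str  # e.g. "active", "withdrawn"

def my_courses(records: list, student_id: str, semester: str) -> list:
    """Present 'my courses this semester', hiding status flags and IDs
    that belong to the system's model rather than the student's."""
    return [
        r.course_title
        for r in records
        if r.student_id == student_id
        and r.semester == semester
        and r.status == "active"
    ]

records = [
    Enrolment("s001", "CS101", "Intro to Computing", "2024-1", "active"),
    Enrolment("s001", "CS102", "Data Structures", "2024-1", "withdrawn"),
    Enrolment("s002", "CS101", "Intro to Computing", "2024-1", "active"),
]
print(my_courses(records, "s001", "2024-1"))  # ['Intro to Computing']
```

The point is not the filtering itself but where it lives: the translation from "enrolment records with active status" to "my courses" happens once, in one named place, instead of leaking status flags into every screen.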
What’s Next
I’m currently working on extending this framework to cover automated backup systems — specifically the human factors in disaster recovery workflows where operator error under stress is a significant risk factor. More on that soon.
If you’re working on HCI challenges in academic or institutional contexts, I’d love to compare notes — reach out at [email protected].
Dilan Gomas
HCI Researcher & Web Architect at Sabaragamuwa University of Sri Lanka