The Electronic Journal of e-Learning provides perspectives on topics relevant to the study, implementation and management of e-Learning initiatives
For general enquiries email administrator@ejel.org

Journal Article

Interactive Technology Impact on Quality Distance Education  pp35-44

Samer Hijazi

© Nov 1999 Volume 1 Issue 1, Editor: Roy Williams, pp1 - 50


Abstract

This paper reports on a study to determine whether existing technology is adequate for the delivery of quality distance education. The survey sample comprised 392 respondents at the non‑traditional graduate level. The study included 15 descriptive questions on course assessment and satisfaction. Three hypotheses were tested with Chi‑square analyses relating interactivity to three other variables: progress, communication mode, and the desire to take another course. Responses indicated that taking a distance education course was worthwhile. Findings, recommendations and conclusions are included.
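The Chi‑square analysis described above can be illustrated with a minimal sketch in Python using SciPy. The contingency counts and the variable pairing below are hypothetical, not the study's data; they simply show the kind of independence test used to relate interactivity to a second survey variable.

```python
# Minimal sketch of a Chi-square test of independence of the kind the study
# describes. All counts below are hypothetical, not the survey's data.
from scipy.stats import chi2_contingency

# Rows: interactivity rating (low, high);
# Columns: desire to take another distance course (no, yes)
observed = [[34, 112],
            [21, 225]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Interactivity and the second variable appear to be associated.")
```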

 

Keywords: Distance Education, Quality, Interactive, Technology Assessments, E-learning, Interactivity

 


Journal Article

Quality of e‑Learning: An Analysis Based on e‑Learners' Perception of e‑Learning  pp29-41

Rengasamy Elango, Vijaya Kumar Gudep, M. Selvam

© Mar 2008 Volume 6 Issue 1, Editor: Shirley Williams, pp1 - 75


Abstract

e‑Learning has recently witnessed an unprecedented expansion as an opportunity for higher education. This expanding alternative mode calls for ensuring that the education it delivers is sound and of high quality. The present study investigated issues related to the quality dimensions of e‑learning. Our results revealed both strengths and weaknesses in the e‑learning system. It is interesting to note that the e‑learners expressed diverse opinions with regard to administrative issues, instruction materials, instructors' support, VIPER sessions (VIPER, Voice Internet Protocol Extended Reach, is software that supports interactive learning through the Internet), grading and assessment. The findings of the study further demonstrate that if e‑learning is delivered with a better approach and perspective, its reach will be phenomenal. This study reiterates the relevance of delivering quality education through e‑learning.

 

Keywords: Online courses, e-learning, quality assessment

 


Journal Article

Cultural Impact on Online Education Quality Perception  pp161-172

Manuela Milani

© Apr 2008 Volume 6 Issue 2, Editor: Shirley Williams, pp99 - 182


Abstract

Numerous stakeholders in the field of education have been working on the development and extension of the use of ICT in different learning communities (higher education, vocational training) and in different multicultural contexts, thanks also to EU funding opportunities. In this framework, they have participated in building various cross‑national teaching and learning models. The strategies that supported such educational projects, which introduce online teaching and learning activities within the framework of European projects, generally rely on the premise that the educational systems involved are homogeneous and will use ICT resources and training devices according to similar methods. This can lead to potential discrepancies between educational systems, particularly cultural ones, being overlooked. The aim of this paper is to analyse the concept of "quality in online education within the context of European online academic education", how this concept takes shape, and how it does, or does not, become part of teaching and learning practices. We decided to focus our attention on the concept of "quality" in order to understand the possible impact of cultural factors on the developing scenario of virtual education, because this concept seems particularly revealing if we take into consideration its "open nature". The increasing number of virtual campuses shows how common it has become to develop teaching modules, and even complete degrees, based on inter‑university and transnational collaborations that aim to transfer learning objects from one educational context to another. Virtual mobility is thus becoming a reality for a growing number of students. However, the multicultural dimension of these new environments has not yet been investigated, and in particular the notion of "online teaching quality" is still under‑explored. This paper provides a review of current work on online education quality measurement in general, focusing on the investigation of cultural impact on quality issues. At the same time, the paper shifts attention from students' to teachers' perception of quality and, consequently, to the different evaluation frameworks that may be used within the same context: European online education. The paper is part of a PhD research project aimed at exploring the impact of cultural dimensions on the design of online courses offered by universities from different European areas. The research notably aims to reveal differences between online course models, in order to uncover which of them can be connected to the cultural dimensions to which they belong.

 

Keywords: cultural impact, cultural differences, quality, online education, virtual campus, virtual mobility

 


Journal Article

Learning Objects and Virtual Learning Environments Technical Evaluation Criteria  pp127-136

Eugenijus Kurilovas, Valentina Dagiene

© Jun 2009 Volume 7 Issue 2, Editor: Shirley Williams, pp85 - 190


Abstract

The main scientific problems investigated in this article concern the technical evaluation of quality attributes of the main components of e‑Learning systems (referred to here as DLEs, Digital Libraries of Educational Resources and Services), i.e., Learning Objects (LOs) and Virtual Learning Environments (VLEs). The main research object of the work is the effectiveness of methods for evaluating the quality of DLE components. The aim of the article is to analyse popular existing LO and VLE technical evaluation tools, to formulate new, more complex tools for the technical quality evaluation of LOs and VLEs based on the requirements for a flexible DLE, and to evaluate the most popular open source VLEs against the new, more complex criteria. Complex tools have been created for the evaluation of DLE components, based on a flexible approach. The authors analysed existing tools for the technical evaluation of LOs and found that they have a number of limitations: some do not examine different LO life cycle stages, others insufficiently examine technical evaluation criteria applied before LO inclusion in the repository, and all of them insufficiently examine LO reusability criteria. A more complex LO technical evaluation tool is therefore needed; it should include LO technical evaluation criteria suitable for different LO life cycle stages, covering criteria before, during and after LO inclusion in the repository, as well as LO reusability criteria. The authors also examined several VLE technical evaluation tools suitable for a flexible DLE and found that these, too, have limitations: several practically ignore VLE adaptation capability criteria, while others insufficiently examine general technical criteria. A more complex VLE technical evaluation tool is therefore needed, and the authors propose an original, more complex set of VLE technical evaluation criteria combining (1) General criteria (overall architecture and implementation; interoperability; internationalisation and localisation; accessibility) and (2) Adaptation criteria (adaptability; personalisation; extensibility and adaptivity). The authors have also selected and proposed a universal, clear and convenient rating tool for evaluating DLE components, and have evaluated the three most popular open source VLEs against the technical (both general and adaptation) criteria using this rating tool.
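As a rough illustration of the kind of criteria-based rating the abstract describes, the sketch below scores two unnamed VLEs against the General and Adaptation criteria groups listed above on a simple 0-4 expert scale. The VLE names, the rating scale and every score are hypothetical; the authors' actual rating tool and results are not reproduced here.

```python
# Illustrative only: rating hypothetical VLEs against the two criteria groups
# named in the abstract (General and Adaptation) on an assumed 0-4 expert scale.

general = ["Overall architecture and implementation", "Interoperability",
           "Internationalisation and localisation", "Accessibility"]
adaptation = ["Adaptability", "Personalisation", "Extensibility", "Adaptivity"]

# Hypothetical expert scores for two unnamed VLEs.
ratings = {
    "VLE A": dict.fromkeys(general, 3) | dict.fromkeys(adaptation, 2),
    "VLE B": dict.fromkeys(general, 4) | dict.fromkeys(adaptation, 3),
}

for vle, scores in ratings.items():
    general_avg = sum(scores[c] for c in general) / len(general)
    adaptation_avg = sum(scores[c] for c in adaptation) / len(adaptation)
    print(f"{vle}: general = {general_avg:.1f}, adaptation = {adaptation_avg:.1f}")
```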

 

Keywords: managing quality in e-learning, technical evaluation, virtual learning environments, learning objects, repositories

 


Journal Article

Multiple Criteria Evaluation of Quality and Optimisation of e‑Learning System Components  pp141-150

Eugenijus Kurilovas, Valentina Dagiene

© Mar 2010 Volume 8 Issue 2, ECEL 2009, Editor: Shirley Williams, Florin Salajan, pp51 - 208


Abstract

The main research object of the paper is the investigation and proposal of a comprehensive Learning Object Repositories (LORs) quality evaluation tool suitable for their multiple criteria decision analysis, evaluation and optimisation. Both LOR 'internal quality' and 'quality in use' evaluation (decision making) criteria are analysed in the paper. The authors have analysed several well‑known LOR quality evaluation tools. In their opinion, a comprehensive multiple criteria LOR quality evaluation tool should include both general software 'internal quality' evaluation criteria and 'quality in use' evaluation criteria suitable for the particular project or user. In the authors' opinion, the proposed LOR 'Architecture' group criteria are general 'internal quality' evaluation criteria, while 'Metadata', 'Storage', 'Graphical user interface' and 'Other' are customisable 'quality in use' evaluation criteria. The authors also present their comprehensive Virtual Learning Environments (VLEs) quality evaluation tool, which combines both 'internal quality' (i.e., 'General Architecture') and 'quality in use' (i.e., 'Adaptation') technological evaluation criteria, and they propose using a quality evaluation rating tool when evaluating LORs and VLEs. The authors argue that if we want to optimise LORs and VLEs (or other learning software packages) for individual learner needs, i.e., to personalise the learning process in the best way according to the learner's prerequisites, preferred learning speed, methods and so on, we should use the experts' additive utility function, which combines the proposed LOR and VLE expert evaluation criteria ratings with the experts' preferred weights for those criteria. In this case we have a multiple criteria optimisation task over criteria ratings and their weights. Quality evaluation criteria of the main e‑Learning system components, i.e., LORs and VLEs, are further investigated as possible optimisation parameters for learning software packages. A scalarisation method is explored in the paper for optimising the learning software packages according to individual learners' needs. Evaluation results for several open source VLEs are also presented in the paper.
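The additive utility idea mentioned in the abstract (expert criteria ratings combined with expert weights, then scalarised into a single score) can be sketched as follows. The criteria group names follow the abstract, but the weights, the ratings and the two repository names are invented for illustration and do not represent the authors' evaluation.

```python
# Sketch of a weighted-sum (additive utility) scalarisation over the criteria
# groups named in the abstract. All weights and ratings are hypothetical.

def additive_utility(ratings, weights):
    """Weighted sum of criteria ratings; weights are assumed to sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[criterion] * ratings[criterion] for criterion in weights)

# Hypothetical expert weights reflecting one learner profile.
weights = {"Architecture": 0.3, "Metadata": 0.2, "Storage": 0.2,
           "Graphical user interface": 0.2, "Other": 0.1}

# Hypothetical normalised expert ratings (0..1) for two repositories.
candidates = {
    "LOR A": {"Architecture": 0.8, "Metadata": 0.6, "Storage": 0.7,
              "Graphical user interface": 0.9, "Other": 0.5},
    "LOR B": {"Architecture": 0.7, "Metadata": 0.9, "Storage": 0.6,
              "Graphical user interface": 0.7, "Other": 0.8},
}

for name, ratings in candidates.items():
    print(name, round(additive_utility(ratings, weights), 3))
print("Preferred for this profile:",
      max(candidates, key=lambda n: additive_utility(candidates[n], weights)))
```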

 

Keywords: managing quality in e-learning, multiple criteria evaluation, learning object repositories, virtual learning environments, optimisation

 


Journal Article

How do Students Measure Service Quality in e‑Learning? A Case Study Regarding an Internet‑based University  pp151-160

María Martínez-Argüelles, José Castán

© Mar 2010 Volume 8 Issue 2, ECEL 2009, Editor: Shirley Williams, Florin Salajan, pp51 - 208


Abstract

This article discusses the importance of measuring how students perceive service quality in online higher education and reviews the existing literature on measuring users' perceptions of quality in e‑services. Although there are many articles on this topic, none of them focuses on e‑learning services, so this paper aims to fill that gap. The article proposes using the Critical Incident Technique to perform a qualitative analysis, which helps identify the main dimensions and categories that contribute to students' perception of service quality. A case study of a completely online university is presented, and the proposed model is used to obtain preliminary research results. Among these, key quality dimensions from the students' point of view are identified, including the learning process, administrative processes, and teaching materials and resources. After discussing the research results, a list of recommendations for university managers is formulated. We believe that both the proposed methodology and the case‑study recommendations are of potential interest to managers of universities offering online higher education worldwide.

 

Keywords: online higher education, perceived service quality, critical incident technique, qualitative data analysis

 


Journal Article

Methodology for Evaluating Quality and Reusability of Learning Objects  pp39-51

Eugenijus Kurilovas, Virginija Bireniene, Silvija Serikoviene

© Apr 2011 Volume 9 Issue 1, ECEL 2010 special issue, Editor: Carlos Vaz de Carvalho, pp1 - 114


Abstract

The aim of the paper is to present a scientific model and several methods for the expert evaluation of the quality of learning objects (LOs), paying special attention to their level of reusability. The activities of eQNet, the Quality Network for a European Learning Resource Exchange (LRE), which aims to improve the reusability of LOs in European Schoolnet's LRE service for schools, are analysed in more detail. As a pan‑European service, the LRE particularly seeks to identify LOs that can "travel well" (i.e., that are reusable) across national borders and can be used in a cultural and linguistic context different from the one in which they were created. The primary aim is to improve the quality of LOs in the LRE. eQNet is doing this by establishing a network of researchers, policy makers and practitioners (teachers) that develops and applies "travel well" quality criteria both to existing LRE content and to content to be selected from national repositories in the future. The vision driving the LRE is that a significant percentage of high quality LOs developed in different countries, in different languages and to meet the needs of different curricula can be re‑used at the European level. The main problem with all existing approaches in this area is the high level of subjectivity in expert evaluation. The authors analyse several scientific approaches, theories, methods and principles to minimise this subjectivity in the expert evaluation of LO quality, namely: (1) multiple criteria decision analysis approaches for the identification of quality criteria, (2) a technological quality criteria classification principle, (3) fuzzy group decision making theory to obtain evaluation measures, (4) a normalisation requirement for criteria weights, (5) a scalarisation method and (6) a trapezoidal fuzzy method for LO quality optimisation. The authors show that applying these approaches in combination could significantly improve the quality of expert evaluation of LOs and noticeably reduce its subjectivity. The paper also presents several examples of the practical application of these approaches to LO quality evaluation in Physics and Mathematics.
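A minimal sketch of the trapezoidal fuzzy aggregation step mentioned in the abstract is shown below: expert ratings on a linguistic scale are mapped to trapezoidal fuzzy numbers, averaged across experts, and defuzzified into a crisp quality score. The linguistic scale, its trapezoidal values and the expert ratings are assumptions made purely for illustration; they are not the authors' instrument.

```python
# Sketch of aggregating expert ratings with trapezoidal fuzzy numbers (a, b, c, d).
# The linguistic scale and all values are assumed for illustration only.

SCALE = {
    "poor":      (0.0, 0.0, 0.2, 0.3),
    "fair":      (0.2, 0.3, 0.5, 0.6),
    "good":      (0.5, 0.6, 0.8, 0.9),
    "excellent": (0.8, 0.9, 1.0, 1.0),
}

def fuzzy_average(linguistic_ratings):
    """Component-wise mean of the experts' trapezoidal fuzzy ratings."""
    numbers = [SCALE[r] for r in linguistic_ratings]
    return tuple(sum(component) / len(numbers) for component in zip(*numbers))

def defuzzify(trapezoid):
    """Simple average defuzzification of a trapezoid (a, b, c, d)."""
    return sum(trapezoid) / 4

# Four hypothetical experts rate one LO on one quality criterion.
aggregate = fuzzy_average(["good", "excellent", "fair", "good"])
print("aggregated fuzzy value:", aggregate,
      "crisp score:", round(defuzzify(aggregate), 3))
```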

 

Keywords: learning objects, multiple criteria decision analysis, quality evaluation, reusability, optimisation

 


Journal Article

Motivational Gaps and Perceptual Bias of Initial Motivation: Additional Indicators of Quality for e‑Learning Courses  pp3-16

Rosário Cação

© Apr 2017 Volume 15 Issue 1, Editor: Robert Ramberg, pp1 - 103


Abstract

We describe a study on the motivation of trainees in e‑learning‑based professional training and on the effect of their motivation on the perceptions they build about the quality of the courses. We propose the concepts of perceived motivational gap and real motivational gap as indicators of e‑learning quality, reflecting changes in both perceived and real student motivation. These indicators help evaluate changes in the trainees' motivation, as well as the bias that occurs in perceptions of initial motivation. In the sample analyzed, the real motivational gap was more negative when the perceived motivational gap was negative, and not as positive when the perceived motivational gap was positive. We found that there is a perceptual bias on initial motivation when the perceived motivational gap is not zero. This means that, for the sample analyzed, the trainees may have "adjusted" their perception of their initial motivation as a function of their final motivation, bringing it closer to the latter and supporting their final status. We also show that these gaps help explain how the trainees' perception of quality is affected: the gaps were minimized at higher levels of perceived quality, and when they were positive the perception of quality was higher than average. The two proposed gaps are useful for measuring quality in e‑learning and for implementing specific actions to improve it. The results of our study are useful because they provide insights into perceptions of quality in an indirect way, i.e., without asking the trainees to articulate and quantify what they believe quality to be. They also enable training companies to create additional, complementary indicators of the quality of e‑learning courses that can help explain changes in perceptions of quality.
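One possible way to operationalise the two gaps is sketched below, for illustration only; the paper's actual measurement instrument is not reproduced, and the variable names, formulas and scale are assumptions.

```python
# Hypothetical operationalisation of the gaps discussed in the abstract:
#   real gap        = final motivation - motivation reported at course start
#   perceived gap   = final motivation - initial motivation as recalled at course end
#   perceptual bias = recalled initial motivation - motivation reported at start
# Names, formulas and the 1-7 scale are illustrative assumptions, not the paper's.

def motivation_gaps(initial_at_start: float, recalled_initial_at_end: float,
                    final: float) -> tuple[float, float, float]:
    real_gap = final - initial_at_start
    perceived_gap = final - recalled_initial_at_end
    perceptual_bias = recalled_initial_at_end - initial_at_start
    return real_gap, perceived_gap, perceptual_bias

# One hypothetical trainee on a 1-7 motivation scale.
real, perceived, bias = motivation_gaps(initial_at_start=5.0,
                                         recalled_initial_at_end=5.8,
                                         final=6.0)
print(f"real gap = {real:+.1f}, perceived gap = {perceived:+.1f}, bias = {bias:+.1f}")
```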

 

Keywords: attitudes, courses, expectations, e-learning, gaps, motivational gap, motivation, motivation to learn, perception bias, quality, quality indicators, quality of e-learning, satisfaction, service, training management, training motivation

 
