STUDIES IN LOGIC, GRAMMAR AND RHETORIC 43 (56) 2015
DOI: 10.1515/slgr-2015-0051
Creating Digital Question Databases: Use of Self-Tests in Teaching Medical Subjects
Barbara Kolodziejczak1, Magdalena Roszak1, Anna Ren-Kurc2,
Andrzej Bręborowicz3, Wojciech Kowalewski2
1 Department of Computer Science and Statistics, Poznan University of Medical Sciences, Poland
2 Faculty of Mathematics and Computer Science, Adam Mickiewicz University in Poznan, Poland
3 Department of Pathophysiology, Poznan University of Medical Sciences, Poland
Abstract. Enhancement of teaching with digital materials is rapidly entering the world of medical studies. Creation of a self-learning environment supported with self-tests is received well, or even enthusiastically, by students. On the other hand, there is a relatively large group of opponents among university teachers, who see no need for changes in teaching and testing methodology. This attitude may stem from anxiety connected with implementing new technologies in teaching medical subjects, as well as from the belief that new technologies have no immediate effect on learning quality. The authors of this article attempt to demonstrate that a thoughtful choice of e-learning platform facilitates the process of implementing online learning and testing aids in medical faculties. The second part of the article presents initial results of studies concerning the efficiency of learning enhanced with self-tests. Our analysis details the results of exams in pathophysiology taken by students of the medical faculty at the Poznan University of Medical Sciences. After the course, an evaluation survey was completed by 195 students concerning the quality of teaching with the use of the OLAT (Online Learning and Training) e-learning portal. It showed that students had positive attitudes toward learning with the use of online materials, particularly with regard to the use of self-tests, which allowed them to check their knowledge independently in exam-like conditions. The article is targeted at teachers who are interested in implementing a self-study and electronic knowledge evaluation environment for their courses, not necessarily in medical subjects.
Introduction
In medical studies, verification of knowledge through electronic testing is a convenient form of examination that is gradually gaining popularity (Bijol et al., 2015; Kibble et al., 2011; Roszak et al., 2012; Stewart et al., 2014). This is supported by the fact that theoretical, yet fundamental, knowledge must be acquired by a medical student before the student can proceed to the practical aspects of their education. Preparation of a set of course tests is a multi-stage process which usually requires the cooperation of two teams. The first team is responsible for question content, and its members are physicians of the relevant specialty. The other team is usually composed of competent IT specialists responsible for the technical aspects, i.e., editing the questions, inputting questions into the course structure on the e-learning portal, question database management, support during the testing process, and archiving of results.
The separation of functions between two teams is a good practice that facilitates implementation of online knowledge testing. The team of content experts is not required to be proficient in the IT technologies used to build e-learning portals, such as Java, PHP, SCORM, and XML. Their responsibilities are limited to writing questions, updating them periodically according to the approved schedule, and defining how the exam proceeds in the portal.
The work of the technical team, which carries no less responsibility than that of the content team, should be organized so as to be "invisible" to students and teachers. Effective development of such a model requires experience and advanced IT competences (Kolodziejczak et al., 2013).
Based on the authors' several years of experience with distance learning in the sciences and humanities, as well as their experience with implementation of e-learning portals at universities, they hold that the choice of portal application is of key importance. It should be preceded by specification and analysis of the functional needs and efficiency requirements for the application, according to the characteristics of distance learning in medical studies. At the same time, implementation of an educational portal at a school or educational organization should streamline the organization of the learning process and improve its quality and efficiency. These remarks have become the starting point for viewing the issue of electronic facilitation of the knowledge testing process from two perspectives:
1. A technological aspect - Multiple factors, depending on the applied technology and database management strategy, affect the process of building and subsequently updating a test base. The article presents an overview of important aspects to be taken into consideration as early as the e-learning platform selection phase, to make further work efficient and smooth.
2. A learning aspect - The authors of the article hypothesized that the effort involved in preparation of self-tests would yield significant results in terms of improvement of education quality. To test this hypothesis, the authors analyzed students' participation in self-tests and their impact on final exam results for two tests in a pathophysiology course within a medical faculty.
Both of the aspects mentioned in the article are important for teachers who are interested in implementing online examinations for their courses. The first points to key issues in building and updating the question database and archiving exam results, on the basis of solutions implemented in popular and frequently used open-source platforms. The second supports the case for the initiative and can be used as an argument in discussions with decision-makers or those in charge of assigning funds to university development and implementation of technological solutions.
Existing sources of literature concerning the topic at hand usually present analyses of a single selected aspect of online knowledge testing, e.g. technology (Bremer et al., 2005; Costagliola et al., 2009; Sztorc, 2009). In the opinion of the authors of this article, when the issue is viewed from two sides, readers will receive more comprehensive information and decision-making will be facilitated for those who continue to hesitate.
Technological Aspect of Online Test Building
Background. The process of creating and managing a database of questions in a learning portal depends on the applied technology and assumptions undertaken by the authors during the design engineering phase. Therefore, at the point of selecting the platform, attention should be paid to several aspects which may facilitate or hinder future work of the technical team. Thus, before selecting the platform to be used, answers should be found to such questions as:
1. Whether the portal has an integrated or external test question editor.
2. What types of test questions are offered by the integrated editor.
3. Whether the questions can contain multimedia, i.e., images, sounds, videos.
4. Whether the editor supports creating tests with complex structures, e.g., divided into groups of questions.
5. How the tests and test questions are customized, i.e., whether it is possible to pick questions randomly, what time limits can be imposed, whether diverse methods of displaying results are available.
6. Whether it is possible to easily combine the contents of two or more question databases.
7. Whether, and in what format, questions can be imported to the database; commonly used text formats are particularly important.
8. Whether the portal is capable of exporting questions to different formats, including those consistent with the QTI standard.
9. What tools the portal has for analysis and archiving of test results.
This, according to the authors, is a basic set of qualities that should be determined in order to select the portal in a manner that ensures convenient work for the IT team as well as for participants of the knowledge evaluation process.
The authors will attempt to provide readers with more detailed answers to the above questions, on the basis of two open-source portals, namely Moodle and OLAT. Moodle is among the most commonly used learning portals in Poland, rated highly in terms of efficiency and functionality. OLAT, albeit less popular, is no worse than Moodle in terms of functionality and even surpasses it in user account management.
Editors and types of questions (questions 1-3). Usually, e-learning portals have an integrated question editor, with interfaces varying in terms of user-friendliness. These can typically be used to create several different types of questions. The common types of questions are: multiple-choice questions, true/false questions, multiple-response questions, and gap-filling questions. If the QTI 2.1 (IMS Global Learning Consortium, 2012) standard is implemented in the portal, the user receives an extended range of available interfaces. For example, it is possible to choose one or more answers from a list, determine the sequence of answers on a list of concepts, identify points in an image, select values with a graphic slider, and perform many other functions. Editors usually allow one to insert various types of multimedia into questions, such as videos, audio files, animations, and static images. This option is very useful in medical studies because a question can be based on a visual aid instead of a potentially long and complex description. Both portals (OLAT and Moodle) have built-in test editors with the possibility of adding multimedia. Moodle offers the user 11 types of test questions, while OLAT only offers 4 types.
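To make the QTI 2.1 question model more concrete, the following is a minimal sketch, in Python with only the standard library, of how a single-choice item of the kind described above could be generated programmatically. The element names follow the QTI 2.1 specification, while the question content and identifiers are purely hypothetical; the snippet is not taken from either portal and omits scoring declarations for brevity.

# Minimal sketch of building a single-choice QTI 2.1 assessmentItem in memory.
import xml.etree.ElementTree as ET

QTI_NS = "http://www.imsglobal.org/xsd/imsqti_v2p1"
ET.register_namespace("", QTI_NS)

def qti_choice_item(identifier, prompt, choices, correct):
    """Build a single-choice QTI 2.1 item; `choices` maps choice id -> text,
    `correct` is the id of the right answer."""
    item = ET.Element(f"{{{QTI_NS}}}assessmentItem", {
        "identifier": identifier, "title": identifier,
        "adaptive": "false", "timeDependent": "false"})
    resp = ET.SubElement(item, f"{{{QTI_NS}}}responseDeclaration", {
        "identifier": "RESPONSE", "cardinality": "single", "baseType": "identifier"})
    ET.SubElement(ET.SubElement(resp, f"{{{QTI_NS}}}correctResponse"),
                  f"{{{QTI_NS}}}value").text = correct
    body = ET.SubElement(item, f"{{{QTI_NS}}}itemBody")
    inter = ET.SubElement(body, f"{{{QTI_NS}}}choiceInteraction", {
        "responseIdentifier": "RESPONSE", "shuffle": "true", "maxChoices": "1"})
    ET.SubElement(inter, f"{{{QTI_NS}}}prompt").text = prompt
    for key, text in choices.items():
        ET.SubElement(inter, f"{{{QTI_NS}}}simpleChoice",
                      {"identifier": key}).text = text
    return item

# Hypothetical pathophysiology-style question used purely as an example.
item = qti_choice_item("patho_q1", "Which hormone raises blood glucose level?",
                       {"A": "Insulin", "B": "Glucagon", "C": "Oxytocin"}, "B")
print(ET.tostring(item, encoding="unicode"))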
Grouping of questions (question 4). Grouping of questions according to attributes specified by the author is not usually available in learning portals. In OLAT, it is possible to group questions by specifying names of sections.
You may also determine a time limit and the number of randomly picked questions for each section separately. Questions in Moodle are organized into categories, which makes it easier to find questions and to add random or matching questions from given categories to a test. However, there is no option for grouping questions within a test.
Customization of tests and questions (question 5). Customization of test questions according to the QTI 2.1 standard may cover (Roszak et al., 2013):
- time for answering a given question,
- random order of answers to a question,
- limited number of attempts to answer a question,
- methods of presenting the question and answer on a computer screen,
- highlighting the correct answer to a question,
- showing a hint or comment to a question.
Test customization, on the other hand, may include such parameters as:
- time for filling in the test,
- time for answering a group of questions,
- random drawing of the order of questions within a group,
- showing a timer during the test,
- showing the current result of the test,
- showing the number of questions,
- limiting the number of attempts at launching the test,
- methods of presentation of the whole test, e.g., test menu availability, one or more questions on a page,
- methods of presentation of test results, e.g., only the test result or presentation of correct answers,
- showing a comment to the test.
Currently, availability of the above-mentioned options is already treated as standard, and a typical learning portal should be expected to support these options. OLAT offers a full range of the above specified options. As mentioned before, Moodle only lacks grouping of test questions.
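Purely as an illustration, the fragment below shows roughly how several of the options listed above map onto QTI 2.1 test-level elements (random drawing of questions, shuffling of their order, an attempt limit, and a time limit for a section). The identifiers are hypothetical, the element set is simplified, and individual portals normally expose these settings through their own interfaces rather than raw XML.

# Simplified, hypothetical QTI 2.1 section showing a few customization settings.
test_section_xml = """
<assessmentSection identifier="seminar_block_1" title="Seminar block 1" visible="true">
  <itemSessionControl maxAttempts="3"/>   <!-- limit the number of attempts -->
  <timeLimits maxTime="1800"/>            <!-- 30 minutes for this section -->
  <selection select="15"/>                <!-- draw 15 questions at random -->
  <ordering shuffle="true"/>              <!-- randomize question order -->
  <assessmentItemRef identifier="patho_q1" href="patho_q1.xml"/>
</assessmentSection>
"""
print(test_section_xml)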
Combining question databases (question 6). The assumed life of a professional database of questions in a given subject is 5 years (Roszak et al., 2013). However, the authors' experience indicates that once built, a database tends to keep evolving and improving. Questions that are too easy or are unclear are often deleted and replaced with others.
Organizational changes in subsequent course editions further force a different division of the covered material, requiring reconstruction of the test question database contents. All this means that question editing tools should offer a simple and flexible methodology of database management. Combining the contents of two or more question databases, or replacing fragments of such databases, would be a convenient improvement here. Unfortunately, open-source LCMS portals offer relatively limited options to administrators. For example, the OLAT portal only supports deletion, copying, and reordering of questions within a single database (OLAT, 2015). To combine two or more question files, one needs to work with the XML directly, using off-platform tools.
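A rough off-platform sketch of such an operation is given below. It assumes both files are plain QTI 2.1 assessmentTest documents with the hypothetical file names shown; a real portal export may first need to be unpacked from its content-package (zip) form, and the item files referenced by each assessmentItemRef must be copied alongside the merged test.

# Rough sketch: append the sections of one QTI 2.1 test file to another.
import xml.etree.ElementTree as ET

QTI_NS = "http://www.imsglobal.org/xsd/imsqti_v2p1"
ET.register_namespace("", QTI_NS)

def merge_tests(main_path, extra_path, out_path):
    """Append every assessmentSection from `extra_path` to the first testPart
    of `main_path` and write the combined test to `out_path`.
    Assumes each file contains a single top-level testPart."""
    main = ET.parse(main_path)
    extra = ET.parse(extra_path)
    main_part = main.getroot().find(f"{{{QTI_NS}}}testPart")
    extra_part = extra.getroot().find(f"{{{QTI_NS}}}testPart")
    for section in extra_part.findall(f"{{{QTI_NS}}}assessmentSection"):
        main_part.append(section)
    main.write(out_path, xml_declaration=True, encoding="UTF-8")

# Hypothetical file names used only for illustration.
merge_tests("physiology_test.xml", "seminar_questions.xml", "combined_test.xml")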
Moodle, a more popular portal, offers more options in this respect: the same questions from the so-called question bank can be reused many times. A large resource of questions remains manageable because questions are divided into categories. Questions are selected for a specific test either by specific identification or randomly (Moodle. Managing questions; Brzozka, 2011).
Importing and exporting questions (questions 7-8). Another issue is the ability to share databases between educational portals, not necessarily at the same time. That is why it is so important for a learning portal to support export or import of a database built to a specified QTI standard. The OLAT portal offers both of these functions, while Moodle v2.9 does not support either of them.
Creating a database of questions is usually limited to copying their contents and answers from a text format (MS Word, Notepad) into a question editor in the portal. This is a relatively tedious and error-prone job, with mistakes such as omission of some answers or insertion of two identical answers. A good alternative would be an option to import questions directly from a text editor into the database. The OLAT portal does not offer this function. In Moodle, you can import and export questions as plain text, using the GIFT format. However, despite this being a text format, it requires users to know the templates for building questions of a specific type. A full description of the format is available on the Moodle project website (Moodle. GIFT format). In addition, there are dedicated macros for editing questions for the Moodle environment directly in MS Word. The entire product is then saved in Moodle XML format. Still, in this case the user also needs a certain awareness of the content templates used.
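For readers unfamiliar with GIFT, the minimal sketch below writes two hypothetical questions, one multiple-choice and one true/false, to a text file in that format; "=" marks the correct answer, "~" the distractors, and the question content and file name are invented for the example.

# Write two hypothetical questions in Moodle's GIFT text format.
gift_questions = """
::glucose_q1:: Which hormone raises blood glucose level? {
  =Glucagon
  ~Insulin
  ~Oxytocin
}

::glucose_q2:: Insulin lowers blood glucose level. {T}
"""

with open("pathophysiology_questions.gift", "w", encoding="utf-8") as f:
    f.write(gift_questions)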
The basics of using the Word Quiz Template v2.1 to create questions and prepare them for import as XML files into Moodle are described on the Moodle Forum web page (Moodle Forum, 2013). Questions created in Moodle may also be exported in Word format using a contributed plugin, Word table format (Moodle. Table format). These can then easily be used to support offline review and editing of all components of a set of questions or to create paper tests.
Analysis and archiving of results (question 9). Another important aspect is the availability of analytic tools in the LCMS portal, including statistical analysis of results for complete tests as well as for individual questions. Through a detailed analysis of questions, e.g., displaying the percentage of right answers given to a specific question, questions which are too easy, too difficult, or unclear can be removed and thus the whole test can be improved. Portals usually offer an overview of results for an individual user as well as for learning groups. Another very useful function is the export of test results outside the portal for further analysis. For example, the OLAT portal offers the option of saving the data in the popular Excel worksheet format. Another available option is to import results from a worksheet into the portal using batch evaluation functions. Moodle also supports export of results in Excel format with highly detailed analysis of results (Moodle. Quiz reports).
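As a simple illustration of such item analysis performed outside the portal, the sketch below computes the percentage of correct answers per question from an exported results table. It assumes a hypothetical CSV layout with one row per student and one 0/1 column per question, which is not the exact export format of either portal.

# Sketch: per-question percentage of correct answers from an exported results table.
import csv
from collections import defaultdict

def item_difficulty(results_csv):
    """Return {question id: percentage of students who answered correctly}."""
    correct = defaultdict(int)
    total = defaultdict(int)
    with open(results_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for question, answer in row.items():
                if question == "student_id":       # skip the identifier column
                    continue
                total[question] += 1
                correct[question] += int(answer)   # 1 = correct, 0 = wrong
    return {q: 100.0 * correct[q] / total[q] for q in total}

# Hypothetical usage: flag questions answered correctly by more than 95% of students.
for question, pct in item_difficulty("exam_results.csv").items():
    if pct > 95:
        print(f"{question}: {pct:.1f}% correct - consider revising or removing")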
Archiving the results of a single test or the entire course is the basic way to secure data from loss in case of malfunction. All LCMS portals offer options for saving specified resources on external media.
Summing up, one may claim that learning portals offer users diverse and continuously improved knowledge verification tools. They also provide the conditions for building a self-learning environment for the learner. This goal is achieved not only through the provided learning materials, but also through the use of self-checking tools. By reviewing the results of an actual course taught at a medical school, the authors have attempted to answer the question of whether, and to what extent, these tools enable the intended goal to be achieved.
Educational Aspect — Practical Use of Self-Tests
Background. In this section, the authors present the learning outcomes for Pathophysiology, taught with the use of electronic knowledge evaluation in the OLAT portal. Preparation of the database of test questions and organization of examinations in the portal proceeded in accordance with the rules specified above. Two teams worked together: the professional team,
composed of pathophysiology teachers, and IT specialists responsible for deployment and administration of the learning portal.
The pathophysiology course for the 2nd year of the faculty of medicine at the Poznan University of Medical Sciences is held with the support of online materials available to students on an e-learning portal. Access to these resources is opened for students one week before the course starts and expires after the end of the course. The pathophysiology course starts with a physiology test, for which students may prepare using a refresher test on the portal. The portal further publishes the contents of traditional lectures and self-learning materials as preparation for conventional classroom learning. These include lectures with audio commentary, clinical cases for review before in-class seminars, and self-tests. The learning resources on the portal are divided into 10 thematic areas, according to the programme of local classes. The course ends with an exam on the e-learning portal.
Participants of the pathophysiology course have a number of online refresher tests at their disposal for testing their knowledge before each of the 3 stages: the initial test in physiology, passing the seminars, and the final subject exam.
Online tests in the pathophysiology course are built on the basis of multiple-choice questions. Self-tests consist of 15 or 30 questions and the time for filling in a test is limited. A student can fill in the same test no more than 3 times, with questions randomly selected each time from the question database.
Participants. The study included the results of the pathophysiology exam that were achieved by 2nd year medical faculty students of the Poznan University of Medical Sciences during the second semester of the 2014/2015 academic year.
Data Collection and Analysis. We assessed students' participation in self-tests and their impact on the final exam results on the basis of two course tests, namely the physiology test and the test for seminar materials. Data in the form of self-test and exam results were collected on the OLAT e-learning portal. All data were entered into Excel spreadsheets and the names of students were deleted prior to further analyses to preserve anonymity. The data were analyzed using the Mann-Whitney U test, the Kruskal-Wallis test, regression analysis, and analysis of variance (ANOVA). Calculations were carried out at a significance level of α = 0.05 in STATISTICA v. 10.0 (StatSoft).
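The calculations themselves were performed in STATISTICA; purely for illustration, the sketch below applies the same kinds of tests to small, entirely hypothetical example data (exam scores of students grouped by the number of self-test attempts) using the scipy library, at alpha = 0.05.

# Illustrative sketch only: the tests named above, run on invented example data.
from scipy import stats

alpha = 0.05
# Hypothetical exam scores for students with 0, 1, 2 and 3 self-test attempts.
scores = {0: [8, 14, 17, 21, 23], 1: [12, 17, 18, 26],
          2: [10, 19, 22, 26], 3: [15, 22, 25, 27, 29]}

# Mann-Whitney U test: no attempts vs. at least one attempt.
group_0 = scores[0]
group_1_3 = scores[1] + scores[2] + scores[3]
u_stat, p_mw = stats.mannwhitneyu(group_0, group_1_3, alternative="two-sided")

# Kruskal-Wallis test and one-way ANOVA across all four groups.
h_stat, p_kw = stats.kruskal(*scores.values())
f_stat, p_anova = stats.f_oneway(*scores.values())

# Spearman correlation between number of attempts and exam score.
attempts = [k for k, v in scores.items() for _ in v]
all_scores = [s for v in scores.values() for s in v]
rho, p_rs = stats.spearmanr(attempts, all_scores)

for name, p in [("Mann-Whitney U", p_mw), ("Kruskal-Wallis", p_kw),
                ("one-way ANOVA", p_anova), ("Spearman rho", p_rs)]:
    print(f"{name}: p = {p:.4f}", "significant" if p < alpha else "not significant")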
Study 1 — Physiology self-test
The analysis was carried out on a sample of n = 246 students, taking into account the last result of the given self-test, which is usually the best result. The number of attempts at completing the self-test is presented in Table 1.
Table 1. Filling in a physiology self-test (n = 246)

Times the self-test was filled in | Number of students | Percentage
0 | 40 | 16
1 | 12 | 5
2 | 15 | 6
3 | 179 | 73
As was already mentioned, students received access to a refresher test in physiology, consisting of 200 questions, one week before the actual exam. Thirty questions were selected at random from the database, and the time for completing the test was limited to 45 minutes. Students could fill in the test three times. The test result was visible immediately after completion.
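The random-draw mechanism can be illustrated with the following toy sketch (not OLAT code): for each of up to three attempts, 30 question identifiers are sampled from a hypothetical 200-question bank, so every attempt presents a different selection.

# Toy sketch of drawing 30 random questions per attempt from a 200-question bank.
import random

QUESTION_BANK = [f"physiology_q{i:03d}" for i in range(1, 201)]  # hypothetical IDs

def draw_attempt(bank, size=30, seed=None):
    rng = random.Random(seed)
    return rng.sample(bank, size)   # 30 distinct questions for one attempt

for attempt in range(1, 4):         # at most three attempts per student
    questions = draw_attempt(QUESTION_BANK, seed=attempt)
    print(f"Attempt {attempt}: {questions[:3]} ... ({len(questions)} questions)")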
The level of interest in the physiology self-test was very high, as it was filled in by 206 (84%) out of 246 students, of which 179 (73%) students used up all three attempts available. The average number of times a test was filled in was 2.35 and the average score was 25.75 out of 30.
The physiology test consisted of 30 single-choice questions. The average result of the test was 21.61 points (72%). Fourteen students (5.7%) achieved the highest score in the group, i.e., 29 points. All of these students had filled in the self-test three times. The lowest test score in the group was 8 points; it was achieved by one person only, who had never filled in the self-test. Spearman's correlation was computed to determine the relationship between the number of attempts at the self-test and the result of the physiology test. There was a weak, positive monotonic correlation between the number of self-test attempts and the physiology test result (rS = 0.31, p < 0.0001).
The studied group was divided into 2 subgroups. Group 0 was composed of students who never filled in the self-test. Group 1-3 was composed of those who approached the self-test at least once. Statistically significant differences were shown (p < 0.0001 by Mann-Whitney U-test) in the distribution of scores in the physiology test between the two groups of students. Table 2 contains basic descriptive statistics of both groups.
Table 2. Basic descriptive statistics of physiology exam results for group 0 and group 1-3

Group | Min | Max | Mean | Standard deviation | Median | Lower quartile | Upper quartile
0 (n0 = 40) | 8 | 28 | 17.95 | 6.02 | 17 | 13.5 | 23.5
1-3 (n1-3 = 206) | 9 | 29 | 22.32 | 5.45 | 25 | 17.0 | 27.0
The analysis was conducted after dividing the students into 4 groups according to the number of times they filled in the self-test (from 0 to 3). The Kruskal-Wallis nonparametric test showed statistically significant differences in the distribution of scores in the physiology test between at least two of the studied groups (p = 0.0001). Dunn's multiple comparison test showed significant differences only between group 0, which never filled in the test, and group 3, which filled it in three times. The remaining groups showed no major differences in the distribution of their exam scores. Table 3 presents the basic descriptive statistics of the four groups under consideration.
Table 3. Basic descriptive statistics of physiology exam results for groups 0-3

Group | Min | Max | Mean | Standard deviation | Median | Lower quartile | Upper quartile
0 (n0 = 40) | 8 | 28 | 17.95 | 6.02 | 17.0 | 13.5 | 23.5
1 (n1 = 12) | 12 | 28 | 19.58 | 5.87 | 17.5 | 14.5 | 26.0
2 (n2 = 15) | 10 | 28 | 20.93 | 5.51 | 22.0 | 17.0 | 26.0
3 (n3 = 179) | 9 | 29 | 22.61 | 5.38 | 25.0 | 17.0 | 27.0
Study 2 — Self-tests for seminars
The traditional part of the course closes with an exam covering 10 thematic areas. Students can prepare for the test at their own pace and time, on the basis of contents and self-tests that are available throughout the duration of the pathophysiology course. One hundred and ninety-five students of the 2nd year of the faculty of medicine took part in the study. The smaller number of students in this part of the analysis is due to the fact that two groups of students had not yet completed their pathophysiology course at the time this article was written.
The students had 11 self-tests at their disposal because in one of the thematic areas there were two tests. Each self-test could be filled in three times by responding to 15 random questions in the given area within not more than 30 minutes.
Out of 195 students included in the study, 164 (84.1%) filled in at least one test once and 31 students (15.9%) did not fill in any test. The average number of times a test was filled in for all 11 offered tests was 1.15. In addition, 54 (27.7%) persons filled in all the tests at least once, of which 12 (6.2%) persons used up all three attempts for all 11 tests. The level of interest in the particular tests varied, and the total number of times the tests were filled in ranged from 94 to 124. For each of the 11 available tests, an average of 88 (45.2%) students out of 195 did not fill in the test, while 38 students, on average, filled in each test only once. A brief analysis of the number of times the self-tests were filled in is presented in Table 4.
Table 4. Specification of times self-tests were filled in for the 11 self-tests available (n = 195)

Times the self-test was filled in | Min number of students (%) | Max number of students (%) | Mean number of students (%)
0 | 71 (36.4%) | 101 (51.8%) | 88 (45.2%)
1 | 33 (16.9%) | 46 (23.6%) | 38 (19.6%)
2 | 13 (6.7%) | 26 (13.3%) | 20 (10.2%)
3 | 37 (19.0%) | 62 (31.8%) | 49 (25.0%)
The exam test consisted of 60 single-choice questions, with a maximum available score of 60 points. The average test result was 37.9 points (63.2%). The lowest score in the group was 16 (26.7%), and it was achieved by a person who did not fill in any of the self-tests. The highest test score was 55 points (91.7%), and it was achieved by a person who filled in only three different self-tests, each once.
A significant Spearman correlation coefficient of 0.27 confirmed a weak, positive correlation between the number of attempts across all 11 self-tests and the result of the seminar test.
Statistical analysis included two groups of students. Group 1-3 was composed of students who filled in any test at least once and group 0 included those who did not complete any of the tests. No statistically significant differences were shown (p = 0.344 by Mann-Whitney U-test) in the distribution of scores in the exam. Table 5 contains basic descriptive statistics of both groups.
Table 5. Basic descriptive statistics of class exam results for group 0 and group 1-3

Group | Min | Max | Mean | Standard deviation | Median | Lower quartile | Upper quartile
0 (n0 = 31) | 16 | 52 | 36.94 | 7.67 | 37 | 33 | 42
1-3 (n1-3 = 164) | 17 | 55 | 38.11 | 7.53 | 39 | 34 | 43
Then, the studied group was divided into four subgroups according to the criteria presented in Table 6.
Table 6. Division into 4 groups evaluated according to the number of self-test attempts (n = 195)

Group | Total times the self-tests were filled in | Descriptive criterion | Number of students | Percentage
0 | 0 | no test filled in | 31 | 15.9%
1 | 1 to 10 | several selected tests filled in, not more than once each | 64 | 32.8%
2 | 11 to 21 | selected tests filled in 1-2 times on average | 55 | 28.2%
3 | 22 to 33 | selected tests filled in 2-3 times on average | 45 | 23.1%
The comparison of average exam scores across the above groups revealed no statistically significant differences (p = 0.339 by one-way ANOVA). Table 7 presents the basic descriptive statistics of the four groups under consideration.
Table 7. Basic descriptive statistics of class exam results for groups 0-3

Group | Min | Max | Mean | Standard deviation | Median | Lower quartile | Upper quartile
0 (n0 = 31) | 16 | 52 | 36.94 | 7.67 | 37 | 33 | 42
1 (n1 = 64) | 17 | 55 | 37.06 | 8.55 | 39 | 32 | 42
2 (n2 = 55) | 21 | 54 | 38.20 | 6.96 | 38 | 34 | 43
3 (n3 = 45) | 23 | 52 | 39.49 | 6.49 | 40 | 36 | 43
Discussion
The analysis of the results of Study 1 indicates higher learning efficiency when students are offered refresher tests taken directly before the actual exam. Statistically significant differences were found between the physiology exam scores of the group of students who did not fill in any self-test (Group 0) and the remaining students, who filled in a self-test at least once (Group 1-3). When four subgroups were distinguished according to the number of self-test attempts, differences appeared only between Group 0 and Group 3. However, all descriptive statistics gradually improve with the number of attempts at the refresher test.
Although no statistically significant differences were found, analysis of the results of Study 2 (Tables 5 and 7) shows gradual improvement of course test results with the increasing number of attempts to fill in the self-tests. Within the group of 12 persons who filled in all of the tests three times, the number of points ranged from 32 to 48, with an average of 40.67. Interestingly, within the group of eight persons with the highest test scores (over 50 points), there were two who did not fill in any self-test, and three who filled in self-tests a total of 27 to 30 times. Overall, these figures indicate relatively low interest in verifying knowledge with the self-tests. Two factors could have influenced this situation:
- firstly, there were other self-learning materials available on the portal, including lectures with audio commentary and resources from local classes;
- secondly, the subject was taught as a 3-week block. Such a short and intensive course did not leave students much time for self-study, a point they emphasized in the questionnaires filled in at the end of the course.
Results of earlier studies on the efficiency of distance learning among Polish university students included a comparison of final exam results in medical subjects taught online and traditionally (Poljanowicz et al., 2013, 2014). Furthermore, students' satisfaction with participation in distance-learning or blended-learning courses has been evaluated (Kolodziejczak et al., 2014; Mokwa-Tarnowska, 2014; Poljanowicz et al., 2014). The results obtained by the authors of this study relate to a different area: the evaluation of the importance of self-tests available to students on the e-learning portal in relation to exam results, based on two tests conducted within the pathophysiology course during medical studies.
There is extensive literature showing that students who take formative practice quizzes receive better summative results (Kibble, 2011; Lahti et al., 2014; Leaf et al., 2009; McNulty et al., 2015; Panus et al., 2014), although some authors have failed to find strong associations between the two (Palmer et al., 2008; Urtel et al., 2006). Our results show a weak correlation between the number of self-test attempts and final test scores, which supports the authors' hypothesis. The obtained results are consistent with previous studies concerning medical education.
Course Evaluation Survey
After the end of the pathophysiology course, the students (n = 195) filled in an online questionnaire on the learning portal. Closed-ended questions concerned such matters as:
- level of comfort while working with the OLAT learning portal,
- usefulness of learning materials available on the portal,
- advantages and disadvantages of learning with the use of the portal,
- advantages and disadvantages of electronic knowledge testing,
- evaluation of the quality of education at the Pathophysiology Institute of the Poznan University of Medical Sciences.
In addition, the survey contained an open-ended question about changes that would allow the portal to better meet the students' expectations. In their responses, students emphasized the usefulness of materials with electronic knowledge evaluation. They suggested eliminating the limit of available attempts at filling in the tests and adding feedback to answers selected wrongly. A detailed analysis of survey results will be covered by a subsequent article.
Limitations
Because the OLAT portal was implemented for the teaching of pathophysiology in multiple stages, the authors could only use the results of self-tests for the 2014/2015 academic year. On the other hand, results achieved by medical students on tests and exams during the preceding years were produced under non-comparable conditions. This is due to changes in course organization at the medical faculty, through which the pathophysiology course was moved from the third to the second year of study, and the course came to be taught in 3-week blocks instead of classes held throughout the whole semester.
Further studies will be conducted to compare long-term results in terms of the effect of self-tests on course exam results.
Conclusions
Preparation and updating of a test question database is a task that requires close cooperation between the persons responsible for content and those acting as technical support. This division of functions is reasonable, particularly at a medical university. Medical doctors do not need to have the relevant IT competences, while IT department personnel do not need to be competent in the field of study presented in the learning resources. The effort put into building a large database of questions is distributed over time, and the deliverable is a long-term investment.
Research on learning quality and the level of students' satisfaction with classes held using online materials published on the e-learning portal shows that such effort is justified, as it translates into measurable teaching achievements.
Students appreciated the ability to test their knowledge before the actual exam. In their questionnaires, they emphasized that unlimited access to selected online resources on the portal was valuable for them, and they would be glad to see such a learning support system in their other classes as well.
REFERENCES
Bijol, V., Byrne-Dugan, C. J., & Hoenig, M. P. (2015). Medical student web-based formative assessment tool for renal pathology. Medical Education Online, 20, 26765. doi:10.3402/meo.v20.26765
Bremer, D., & Bryant, R. (2005). A comparison of two learning management systems: Moodle vs Blackboard. Proceedings of the 18th Annual Conference of the National Advisory Committee on Computing Qualifications (pp. 135-139). Retrieved from http://www.citrenz.ac.nz/conferences/2005/concise/bremer_moodle.pdf
Brzozka, P. (2011). Moodle dla nauczycieli i trenerow. Gliwice: Wydawnictwo Helion.
Costagliola, G., & Fuccella, V. (2009). Online testing, current issues and future trends. Journal of e-Learning and Knowledge Society, 5(3), 79-90.
IMS Global Learning Consortium. (2012). IMS Question and Test Interoperability v.2.1 Final Specification. Retrieved from http://www.imsglobal.org/question/index.html#version2.1
Kibble, J. D. (2011). Voluntary participation in online formative quizzes is a sensitive predictor of student success. Advances in Physiology Education, 35, 95-96.
Kibble, J. D., Johnson, T. R., Khalil, M. K., Nelson, L. D., Riggs, G. H., Borrero, J. L., & Payer, A. F. (2011). Insights gained from the analysis of performance and participation in online formative assessment. Teaching and Learning in Medicine, 23(2), 125-129.
Kolodziejczak, B., Roszak, M., Kowalewski, W., & Ren-Kurc, A. (2013). Evaluation of the students knowledge with using rapid e-learning tools. In E. Smyrnova-Trybulska (Sc. ed.), E-learning & Lifelong Learning (pp. 189-201). Katowice: Studio Noa.
Kolodziejczak, B., Roszak, M., Kowalewski, W., & Ren-Kurc, A. (2014). Educational multimedia materials in academic medical training. Studies in Logic, Grammar and Rhetoric. Logical, Statistical and Computer Methods in Medicine, 39(52), 105-122.
Lahti, M., Hatonen, H., & Välimäki, M. (2014). Impact of e-learning on nurses' and student nurses knowledge, skills, and satisfaction: A systematic review and meta-analysis. International Journal of Nursing Studies, 51(1), 136-149.
Leaf, D. E., Leo, J., Smith, P. R., Yee, H., Stern, A., Rosenthal, P. B., Cahill-Gallant, E. B., & Pillinger, M. H. (2009). SOMOSAT: Utility of a web-based self-assessment tool in undergraduate medical education. Medical Teacher, 31, e211-e219.
McNulty, J. A., Espiritu, B. R., Hoyt, A. E., Ensminger, D. C., & Chandrasekhar, A. J. (2015). Associations between formative practice quizzes and summative examination outcomes in a medical anatomy course. Anatomical Sciences Education, 8(1), 37-44.
Mokwa-Tarnowska, I. (2014). Struktury wsparcia a efektywnosc ksztalcenia w srodowisku e-learningowym. E-mentor, 2(54), 34-39.
Moodle Forum. (2013, April 21). Re: Moodle quiz template [Online forum comment]. Retrieved from https://moodle.org/mod/forum/discuss.php?d=135112&parent=986088
Moodle. GIFT format. (Moodle 2.9). Documentation for Moodle 2.9 - GIFT format. Retrieved from https://docs.moodle.org/29/en/GIFT_format
Moodle. Managing questions. (Moodle 2.9). Documentation for Moodle 2.9 - Managing questions. Retrieved from https://docs.moodle.org/29/en/Managing_questions
Moodle. Quiz reports. (Moodle 2.9). Documentation for Moodle 2.9 - Quiz reports. Retrieved from https://docs.moodle.org/29/en/Quiz_reports#Grades_report
Moodle. Table format. (Moodle 2.9). Documentation for Moodle 2.9 - Microsoft Word table format (Moodle2Word). Retrieved from https://moodle.org/plugins/view.php?plugin=qformat_wordtable
OLAT (2015). OLAT 7.8 - User Manual. Creating Tests and Questionnaires.
Retrieved from http://www.olat.org/OLATDOGS/help/en/html/unit_tests_fragebogen.html
Palmer, E. J., & Devitt, P. G. (2008). Limitations of student-driven formative assessment in a clinical clerkship. A randomised controlled trial. BMC Medical Education 8. doi:10.1186/1472-6920-8-29
Panus, P. C., Stewart, D. W., Hagemeier, N. E., Thigpen, J. C., & Brooks, L. (2014). A Subgroup Analysis of the Impact of Self-testing Frequency on Examination Scores in a Pathophysiology Course. American Journal of Pharmaceutical Education 78(9), 165.
Poljanowicz, W., Mrugacz, G., Szuminski, M., Latosiewicz, R., Bakunowicz-Lazarczyk, A., Bryl, A., & Mrugacz, M. (2013). Assessment of the Effectiveness of Medical Education on the Moodle e-Learning Platform. Studies in Logic, Grammar and Rhetoric. Logical, Statistical and Computer Methods in Medicine, 35(48), 203-214.
Poljanowicz, W., Roszak, M., Kolodziejczak, B., & Bręborowicz, A. (2014). An analysis of the effectiveness and quality of e-learning in medical education. In E. Smyrnova-Trybulska (Sc. ed.), E-learning and Intercultural Competences Development in Different Countries (pp. 177-196). Katowice-Cieszyn: Studio Noa.
Roszak, M., & Kolodziejczak, B. (2012, April). E-ewaluacja wiedzy statystycznej studentow medycyny. Kongres Statystyki Polskiej z okazji jubileuszu 100-lecia Polskiego Towarzystwa Statystycznego, Streszczenia referatow (pp. 220-222), Poznan.
Roszak, M., Kolodziejczak, B., Kowalewski, W., & Ren-Kurc, A. (2013). Standard Question and Test Interoperability (QTI) - ewaluacja wiedzy studenta. E-mentor, 2(49), 35-40.
Stewart, D. W., Panus, P. C., Hagemeier, N. E., Thigpen, J. C., & Brooks, L. (2014). Pharmacy Student Self-Testing as a Predictor of Examination Performance. American Journal of Pharmaceutical Education 78(2), 32.
Sztorc, J. (2009). Proba oceny systemow zarządzania nauczaniem rozprowadzanych na zasadach open source. Zeszyty Naukowe Uniwersytetu Ekonomicznego w Krakowie, 770, 191-201.
Urtel, M. G., Bahamonde, R. E., Mikesky, A. E., Udry, E. M., & Vessely, J. S. (2006). On-line quizzing and its effect on student engagement and academic performance. Journal of Scholarship of Teaching and Learning, 6(2), 84-92.