COMPUTER LITERACY OF HEALTHCARE STUDENTS FROM MEDICAL UNIVERSITY-PLOVDIV

Healthcare specialists should have a sound level of computer literacy to work with medical information systems, electronic health records, telecare solutions and other modern ICT applications. A successful career in an ICT-supported position, as in healthcare, requires proficiency in using computer technology to perform tasks. Knowledge of the level of computer literacy of our students allows two important decisions to be taken: first, whether the content of the subject "Informatics" needs restructuring to make up for gaps from previous education in technologies, and second, the evaluation of the volume and nature of electronic educational resources to be included in the training. The aim of the present study is to assess the computer literacy of healthcare students and to investigate the self-assessment of their computer skills. Materials and methods: The study was conducted in the period April to December 2016 among 279 students from different healthcare specialties. To receive an objective grade, computer literacy was assessed by a didactic test specially designed for this purpose. The process of creation and validation of the assessment tool is discussed. Students were also asked to self-assess their computer literacy. Results: The students demonstrated good computer literacy, with no statistically significant impact of gender and age. Female students tend to underestimate their computer skills, while men have a realistic self-assessment. Students become more critical with age: older students have a lower self-assessment compared to their real performance. Conclusion: The computer literacy of healthcare students is not alarmingly low; it allows them to take full advantage of e-learning. However, the course in informatics should include more activities that would allow them to upgrade their computer skills.
UDC Classification: 378; DOI: http://dx.doi.org/10.12955/cbup.v5.1001


Introduction
The use of information and communication technologies in medicine and healthcare requires professional competence that includes computer skills. Healthcare specialists should have a sound level of computer literacy to work with medical information systems, electronic health records, telecare solutions and other modern ICT applications. Healthcare students are not technologically oriented, since they have chosen a humanities-oriented profession. Their interests are focused on an area different from technology, and we cannot expect them to have mastered excellent digital skills. Knowledge of the level of computer literacy of our students allows two important decisions: 1. Whether the content of the subject "Informatics" needs restructuring to make up for gaps from previous education in technologies; 2. The evaluation of the volume and nature of electronic educational resources to be included in the training. The aim of the present study is to assess the computer literacy of pre-graduate healthcare students and to investigate the self-assessment of their computer skills. The construction and validation of the assessment instrument, a didactic test, is described. The results are statistically processed and analyzed, and conclusions are drawn.

Literature review
The National League for Nursing states that computer literacy content focuses on computer basics and the use of generic software applications such as word processing, databases, presentation software, and electronic communication such as email (National League for Nursing). Computer literacy is the foundation of information literacy, a broader concept covering abilities for information search, retrieval, evaluation, usage, processing, and presentation. According to Anderson and Gantz (2013), computer-related skills, and especially Microsoft Office, are among the top twenty skills for tomorrow's best jobs. A successful career in an ICT-supported position, as in healthcare, requires proficiency in using computer technology to perform tasks. As Safabakhsh et al. (2016) state, the implementation of evidence-based practice requires nurses and nursing students to be computer literate. This statement is valid not only for nurses but for all healthcare staff and students. Despite the well-understood necessity of such skills for healthcare staff, Michel-Verkerke (2010) reveals that computer literacy is not at an adequate level for all nurses. Topkaya & Kaya (2014) summarize that ICT competencies are not acquired at the undergraduate and graduate levels of nursing education. Spenser (2012) explains that the National League for Nursing, the American Association of Colleges of Nursing and other institutions prompted initiatives to make informatics a fundamental part of nursing education. Deltsidou et al. (2010) say that there is a slow emergence of nursing informatics in nursing curricula even in developed countries. At the Medical University of Plovdiv, informatics has been included in the curricula of all healthcare specialties since 1993. Since then, the course has been regularly updated to reflect technological advances and their implementation in healthcare in general and in the context of the national healthcare system. Nowadays the content focuses on two competencies:
• Information literacy and presentation skills;
• Understanding of ICT in healthcare and medical information systems, and working with software for medical practice.
As Choi & Martinis (2013) point out, studies on informatics competency assessment in undergraduate and graduate nursing students are scarce. Most of the research on the subject is based on self-reported questionnaires rather than objective tests of skills. Self-assessment reflects personal confidence and self-esteem and as such is not objective. Elder & Koehn (2009) argue that nursing students rate their skills higher than their actual performance of computer skills. Similar results are published by Grant et al. (2009). Discrepancies between assessed and perceived knowledge and skills may lead to stress and withdrawal from technologies in real settings. Lin (2011) stresses the inadequate attention paid to the validity of computer literacy and computer competency scales. Elder & Koehn (2009) also discuss the value of using a computer-graded assessment. In our research, special attention is paid to the construction, statistical analysis and validation of the assessment tool. According to the Digital Economy and Society Index 2017 (https://ec.europa.eu/digital-singlemarket/scoreboard/bulgaria), low performance in digital skills acts as a brake on the further development of Bulgaria's digital economy and society. Informatics and Information Technologies are mandatory disciplines in primary and secondary education in Bulgaria. A pilot online national external evaluation of the digital competences of tenth-grade students from secondary schools in all 28 areas of the country was conducted in 2016. The average success rate was 50.03% (Ministry of Education and Science of the Republic of Bulgaria, 2016). These facts are worrying, and the level of students' competence in working with computers and with information in electronic format is put into question.

Methods and materials
The study was conducted in the period April to December 2016. The survey was anonymous, and 279 students from different healthcare specialties at Medical University-Plovdiv agreed to participate. To receive an objective grade, computer literacy was assessed by a didactic test specially designed for this purpose. The students were also asked to self-assess their computer literacy on a five-point scale from poor (2) to excellent (6) prior to the test. The influence of gender on grade and self-assessment was analyzed by the nonparametric Mann-Whitney and Kruskal-Wallis tests for independent samples. The relationship between grade and self-assessment was analyzed by the Wilcoxon nonparametric test for two related samples. Bivariate Spearman correlation was used to study the relationship between age and grade or self-assessment. A 95% confidence level was accepted. Data were processed with the statistical package SPSS 17.0.
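The analysis workflow above can be sketched with SciPy's nonparametric tests. This is an illustrative sketch only: the arrays are small synthetic examples, not the study's data, and the paper's actual computations were done in SPSS 17.0.

```python
# Sketch of the nonparametric tests used in the study (synthetic data).
from scipy import stats

# Mann-Whitney U test: grades of two independent groups (e.g. men vs. women).
grades_men = [3, 4, 4, 5, 3, 4, 5, 4]
grades_women = [4, 3, 4, 4, 5, 3, 4, 4, 5, 3]
u_stat, p_gender = stats.mannwhitneyu(grades_men, grades_women, alternative="two-sided")

# Wilcoxon signed-rank test: paired comparison of test grade vs. self-assessment.
grade = [4, 5, 3, 4, 4, 5, 3, 4]
self_assessment = [3, 4, 3, 4, 3, 4, 2, 3]
w_stat, p_paired = stats.wilcoxon(grade, self_assessment)

# Spearman rank correlation: age vs. self-assessment.
age = [19, 20, 21, 22, 23, 25, 28, 30]
rho, p_corr = stats.spearmanr(age, self_assessment)

print(f"gender p={p_gender:.3f}, paired p={p_paired:.3f}, rho={rho:.3f}")
```

For more than two independent groups, `stats.kruskal` provides the Kruskal-Wallis test mentioned in the text; a p-value below 0.05 corresponds to the 95% confidence level the study adopted.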

Construction of the assessment instrument
The test aimed at measuring the level of knowledge and skills of students in the following areas:
• Operating systems and organization of information;
• Word processing and spreadsheets;
• Safe networking.
Windows 7, MS Word 2010 and MS Excel 2010 were accepted as working environments.
The test was criterion-referenced, i.e., it measured the degree of achievement of educational goals in the defined areas of knowledge, as set out in the Informatics and Information Technologies curricula for secondary education (Ministry of Education and Science of the Republic of Bulgaria, 2016). The originally developed test contained 30 multiple-choice questions, each with one correct answer and three distractors. The main reasons for choosing this type of question were:
- Fast and objective assessment;
- Ability to evaluate not only reproduction of knowledge, but also comprehension and application;
- Students are not required to demonstrate reasoning, give examples or original ideas, i.e., situations where multiple-choice test items are difficult to apply.
Some of the questions were taken from Mateva et al., "Medical Informatics: Test Items and Practical Exercises" (2015). Most of the questions were written specially for this assessment tool. The questions assessed the competence of students at different cognitive levels. The principles of creating quality multiple-choice questions outlined by Burton et al. (1991) were observed in adapting and writing test items. The test was content valid because: 1. Skills and knowledge in each of the three above-mentioned areas of assessment were measured by at least three test items, and 2. Each question was discussed with experts from the Department of Medical Informatics, Biostatistics and E-learning at the university. Criterion validity could not be determined because there was no benchmark test against which to compare results. Construct validity was not determined because the results of the validation testing did not meet the requirements for a factor analysis. Face validity was guaranteed by the experts' opinion that the test seemed to be a proper way of assessing the level of computer literacy of healthcare students. In order to validate the tool, 35 students agreed to take the test anonymously, conscientiously and without cheating. They were given 30 minutes. The results were processed with the program for psychometric analyses jMetrik (http://www.itemanalysis.com/index.php). The aim was to select for the final version the items which:
1. Had a difficulty level 0 < p < 1, i.e., there were no questions answered correctly by everybody, nor questions that nobody had answered correctly;
2. Had a discrimination coefficient r > 0.2, according to Shotlekov (2015);
3. Did not compromise test reliability.
The analysis of the test items revealed that all questions fulfilled the first requirement, but there were nine test items with a discrimination coefficient below 0.2. The analysis of internal consistency showed Cronbach's alpha α = 0.76. Further analysis showed that if any of the nine questions with r ≤ 0.2 were removed, Cronbach's alpha would increase. The number of test items was therefore reduced to 21, and the new version was tested again. The results showed that all questions complied with the requirements, α = 0.82, and there was no question whose removal would increase reliability. A question was considered answered correctly only if a single answer was marked and it matched the correct one. The final score was the total number of correct responses, minimum 0 and maximum 21. The students' test results were: x̄ = 12.09, Me = 13, Mo = 14. The frequency distribution of scores had a weak negative (left) skewness, so the conclusion was that the test was medium to easy for the students. In such cases, according to Ivanov (2006), the cut-off score for equating to a five-point scale is 30% of the total score, which is 7 for our test. The score was equated to a grade on the five-point scale from "Poor" (2) to "Excellent" (6), which is the formal assessment scale in Bulgaria, by splitting the scores above 7 into approximately equal intervals (Table 1), as described by Shotlekov (2015).
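The item-analysis statistics described above (item difficulty, item-total discrimination, Cronbach's alpha) and the score-to-grade equating can be sketched as follows. The response matrix and the grade intervals are illustrative assumptions: the study used jMetrik for the psychometric analysis, and the actual intervals are defined in Table 1.

```python
# Sketch of classical item analysis on a synthetic 0/1 response matrix
# (rows = students, columns = items). Not the study's data.
from statistics import pvariance

responses = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [0, 1, 1, 1, 1],
    [1, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
]
n_students = len(responses)
n_items = len(responses[0])
totals = [sum(row) for row in responses]

# Item difficulty p: proportion of correct answers (keep items with 0 < p < 1).
difficulty = [sum(row[i] for row in responses) / n_students for i in range(n_items)]

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Item discrimination: correlation of item score with total score (keep r > 0.2).
discrimination = [pearson([row[i] for row in responses], totals) for i in range(n_items)]

# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
item_vars = [pvariance([row[i] for row in responses]) for i in range(n_items)]
alpha = n_items / (n_items - 1) * (1 - sum(item_vars) / pvariance(totals))

def score_to_grade(score, cutoff=7, max_score=21):
    """Equate a raw score to the Bulgarian five-point scale (2..6).

    The 30% cut-off (7 of 21) follows the text; the split of the range
    above the cut-off into four roughly equal intervals is an assumed
    illustration of the approach, not the paper's exact Table 1.
    """
    if score <= cutoff:
        return 2  # Poor
    width = (max_score - cutoff) / 4
    return min(6, 3 + int((score - cutoff - 1) // width))
```

Removing an item and recomputing `alpha` shows whether reliability improves without it, which is the criterion the study used to prune the nine weakly discriminating items.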
Results and discussion
According to the Mann-Whitney test, there was no statistically significant difference in the age distribution between genders (p = 0.146). We might therefore expect age and gender to affect computer literacy independently.
The students' grades resulting from the didactic test and their self-assessments are given in Table 2 for men and for women. The average computer literacy of the investigated sample of healthcare students at the Medical University of Plovdiv was good. The percentage distribution of grades is given in Figure 1. The students who study healthcare specialties at Medical University-Plovdiv are computer literate.
The ratio of those who are not, i.e., who received "Poor", is low (4.42%), and the percentage of excellent students is the same. Less than a quarter of the students (24.34%) got less than 50% of the total score; these are the students with poor and fair grades. Ranasinghe et al. (2012) report that nearly half of the first-year medical students in Sri Lanka obtained a computer literacy score less than or equal to 50%. Results of a study in Iran by Zarei, Rokhafruz & Dianat (2012) showed that medical students' familiarity with computers was low. A study by Deltsidou et al. (2010) revealed a deficit in nursing students' IT competencies in Greece. On the contrary, Choi & Martinis (2013) found that undergraduate nursing students at the School of Nursing at the University of Massachusetts were competent in basic computer knowledge and skills. Compared with their colleagues from other countries, our students have a sufficient level of computer knowledge and skills. Ranasinghe et al. (2012) and Deltsidou et al. (2010) point out owning a personal computer as a strong predictor of computer skills. Our previous research, Kirkova-Bogdanova et al. (2016), revealed that the ratio of students possessing their own computer with internet connectivity is relatively high (83.75%) compared to other countries. Another major factor for sound computer literacy raised by Ranasinghe et al. (2012), previous formal education, is also fulfilled for our students. This explains their relatively good understanding of using computers. Ikolo & Okiy (2012) found gender differences in the computer literacy skills of medical students, with males more familiar with computers than females. Safabakhsh et al. (2016) found gender differences only in the connectivity aspect of computer literacy. Ranasinghe et al. (2012) did not find significant gender differences in the mean score of a computer literacy questionnaire. We did not find a statistically significant difference between the computer literacy of men and women (p = 0.080). We explain this with the equal opportunities for education for men and women. Another possible explanation is that men who choose a healthcare career are not as technologically oriented as they are believed to be.
The results of our survey showed that students tended to underestimate their computer skills: their self-assessment was statistically significantly lower than their actual grade (p = 0.0001). We found a statistically significant dependence of self-assessment on gender (p = 0.025). Further analysis showed that it is the women who have a significantly lower self-estimation (p = 0.0001), while men demonstrated a realistic judgment of their abilities to work with computers (p = 0.362). Literature findings do not support this result. According to Grant et al. (2009), undergraduate university students in North Carolina had a higher perception of their proficiency level than their performance on an assessment of word processing and spreadsheets. Elder & Koehn (2009) also support the finding that students overrate themselves. A study in Romania by Cazan et al. (2016) reveals that there are no significant differences between male and female participants concerning computer self-efficacy. Our students demonstrate critical thinking about their skills. Women do not feel well prepared to work with computers, and they need more encouragement. The implication for teachers is to include more practical training in basic computer skills in the informatics classes, so that students improve their computer literacy and increase their self-esteem and confidence in working with computers. Niyomkar (2012) found a positive relationship between age and computer literacy in undergraduate nursing students: as age increased, computer competency advanced. We did not find a significant correlation between age and computer literacy (rs = -0.088, p = 0.150). However, a statistically significant weak negative correlation existed between age and self-assessment (rs = -0.3142, p = 0.0001). The results indicate that older students have the same computer skills as their younger colleagues, but they become more self-critical with age. Educators should pay attention to older students' concerns, reassure them and make them feel comfortable with technology.

Conclusion
The computer literacy of healthcare students is not alarmingly low. It is good enough for them to enter and complete the course in informatics successfully, and it allows them to take full advantage of e-learning. Although computer literacy does not differ across gender and age, female students, who are the majority, assess themselves lower than their actual skills. The same applies to older students. The course in informatics should include more activities for upgrading computer skills on the one hand, and for building confidence and a feeling of comfort with computers on the other. Increasing the computer literacy and self-efficacy of our students is a guarantee of a successful career.

Figure 1: Distribution of grades among healthcare students

Table 1: Scale for the test assessment of the students' computer literacy

Table 2: Students' computer literacy grade and self-assessment