
Special Issue

Comprehending the Macro Through the Lens of the Micro: The Use of PIRLS 2006 Findings to Inform Comparative Case Studies Across the South African Achievement Scale

International Journal of Qualitative Methods, January-December 2016: 1-12. © The Author(s) 2016. Reprints and permissions: sagepub.com/journalsPermissions.nav. DOI: 10.1177/1609406915624576. ijqm.sagepub.com

Lisa Zimmerman1 and Brigitte Smit2

Abstract

South African Grade 4 learners had the lowest achievement mean of all participating education systems in the Progress in International Reading Literacy Study (PIRLS) 2006. Further investigation was required to explore the potential reasons for this low performance. Although secondary analyses of data from international comparative studies of learner achievement have traditionally involved quantitative procedures, the mixed methods research design used for this study, with emphasis placed on the qualitative phase, was a unique departure from this methodological status quo. The aim was to explore schooling conditions and teaching practices for Grade 4 learners' reading literacy development across the range of South African education contexts. In the first phase of the research, teacher and school survey data linked to a nationally representative learner sample (n = 14,299) for the PIRLS 2006 were used to describe Grade 4 teachers' instruction practices and schooling conditions. This description took place on the basis of the reclassification of the teacher and school survey data according to class language profiles and average class performance, as aligned to each of the achievement benchmarks of the PIRLS 2006 and to further benchmarks created to describe the performance levels of the majority of South African learners. Thereafter, in the second phase, seven qualitative school and teacher case studies, one from each reclassification subsample, were purposively selected to add illuminatory depth to the study. The sampling strategy allowed for scrutiny of cases with high PIRLS achievement profiles against cases with poor achievement profiles, and the comparisons afforded a far greater understanding of the problem. This article reflects on the methodological rationales, processes, and outcomes associated with this methodological choice.

Keywords

mixed methods, qualitative case studies, reading literacy, PIRLS 2006

The Progress in International Reading Literacy Study (PIRLS) is an international assessment study of reading literacy at Grade 4 that is conducted every 5 years by the International Association for the Evaluation of Educational Achievement (IEA). Forty countries and 45 education systems participated in the PIRLS 2006. In South Africa, more than 30,000 Grade 4 and Grade 5 learners were assessed using instruments translated into the 11 official languages of the country. The Grade 5 learner sample was included as a national option in South Africa to track achievement progression in the education system. The PIRLS 2006 focused on (a) processes of comprehension, which involved being able to focus on and to retrieve explicitly stated information, to make straightforward inferences, to interpret and to integrate ideas and information, and

to examine and to evaluate content, language, and textual elements; (b) purposes for reading, which included the examination of literary experience and the ability to acquire and to use information; and (c) reading behaviors and attitudes toward

1 Department of Psychology of Education, College of Education, University of South Africa, Pretoria, South Africa

2 Department of Educational Leadership and Management, College of Education, University of South Africa, Pretoria, South Africa

Corresponding Author:

Lisa Zimmerman, Department of Psychology of Education, College of Education, University of South Africa, Pretoria, South Africa. Email: zimmel@unisa.ac.za

Creative Commons CC-BY-NC: This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 3.0 License (http://www.creativecommons.org/licenses/by-nc/3.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).

reading. Information on the home, school, and classroom contexts of these learners also was gathered by means of questionnaires (Mullis, Kennedy, Martin, & Sainsbury, 2006). The achievement mean for the study was fixed at 500 scale points on the achievement scale. The sampled South African Grade 4 learners (n = 16,299) had the lowest performance in the PIRLS 2006, a mean of 253 scale points (SE = 4.6). Most alarmingly, in comparison to the 94% international median for Grade 4, 87% of the South African Grade 4 learners did not reach the lowest of four international benchmarks on the PIRLS achievement scale, suggesting that the majority had not achieved the basic reading skill proficiency indicative of the benchmark (Howie et al., 2008; Mullis, Martin, Kennedy, & Foy, 2007; Zimmerman, 2010).

Taken with other results from large-scale assessments of learners' reading literacy achievement in the South African education system (Department of Education [DoE], 2003, 2005; Moloi & Strauss, 2005), the PIRLS 2006 results (Howie et al., 2008) suggested that South African teachers are struggling to develop their primary school learners' reading literacy abilities, a situation that has dire consequences for their learners' learning progression throughout the remainder of their education. Pretorius and Machet (2004) stated that there is scant research on reading in South Africa, whereas Fleisch (2008) concluded that there have been few published studies that describe and explain the patterns of classroom life that lead to academic achievement or failure. Furthermore, given that school contexts play an integral role in classroom undertakings (Postlethwaite & Ross, 1992; Reynolds, 1998), there is also insufficient research on the schooling conditions that either promote or impede the teaching of reading literacy in South African primary school classrooms. Descriptive studies often might have a lower status in academic circles than do research studies wherein the researchers test a model or attempt to substantiate a prediction (Smith, 2008) and are, perhaps, seen as being less scientific or not leading to useful generalizations. However, a rush to explaining phenomena via hypotheses and models might mean that important phenomena are underdescribed and poorly measured (Smith, 2008). At the time of the PIRLS 2006, there had been scant empirical research on reading in South Africa (Pretorius & Machet, 2004), nor was there evidence of in-depth research attempts to understand why teachers were experiencing problems with the teaching of reading literacy, or even thorough descriptions of what they were doing in their classroom practices or how schools were supporting literacy development. As such, description was considered an important outcome goal: the rush to theory testing might pre-empt adequate description or measurement of the phenomenon (Smith, 2008), meaning that interventions might be based on less than solid foundational understandings of what is happening and what is needed to address the difficulties experienced by teachers and schools (Zimmerman, 2010). Indeed, a growing corpus of research linked to South African learners' reading literacy has started to emerge more recently (Klapwijk, 2012; National Education and Evaluation Development Unit, 2013; Pretorius, 2014; Zimmerman & Smit, 2014). Nevertheless, there is still much need for further research insights,

particularly because the performance profile for learners from South Africa for PIRLS 2011 showed little improvement from the 2006 cycle (Howie, van Staden, Tshele, Dowse, & Zimmerman, 2012).

Given the need to understand better the low learner outcomes for the PIRLS 2006 from a teaching and learning perspective, the PIRLS 2006 school and teacher questionnaire data were used for the purpose of secondary analysis. While traditionally, secondary analysis of PIRLS data has tended to involve only quantitative procedures, the research design discussed in this article ensued from the recognition of the important linkages that can be forged between information gleaned from larger representative samples at the macro level, such as the PIRLS data, and micro-level cases linked to these samples, which can be delved into to explore the processes and realities present in individual contexts (Fleisch, 2008). The macro-level findings of the PIRLS 2006 in South Africa offered a springboard for investigating Grade 4 teachers' reading instruction practices and schooling conditions for reading literacy development, using learner assessment outcomes as a starting point to guide the investigation (Zimmerman, 2010). In this article, we explore and reflect on the mixed methods research design utilized for this research. Firstly, the research questions for the study and the mixed methods research design used to answer these questions are explicated, particularly the design rationale and its composition. Given the centrality of the PIRLS 2006 international benchmarks for the research, a brief overview of these then is provided, with emphasis placed on the links between the benchmarks and sampling for the quantitative phase because this played the most significant role in the qualitative phase case selections. To conclude the article, we reflect on the implementation of the mixed methods research design and consider the value of using quantitative data to inform the qualitative phase of the research study.

The Use of Mixed Methods for This Study

In this section, the research questions and how these questions were addressed are briefly considered as an introduction to the research design for the study. The actual mixed methods research design chosen for the research and the reasons for this design choice then are explicated. Thereafter, the research methods, sampling, data collection, and analysis procedures for the first, quantitative phase of the research are introduced, followed by discussion of the same methodological foci for the second, qualitative phase. The approach to methodological norms and ethical considerations is also described.

Research Questions for the Study

The overall research question for the mixed methods research study was:

What influence do schooling conditions and teaching practices have on curriculum implementation for Grade 4 reading literacy development?

Answering this overall research question required integration of the findings from two research subquestions for the study. Each subquestion was addressed in both phases of the research process, one of which was quantitative and the other qualitative. The first research subquestion was:

What are the schooling conditions in which Grade 4 reading literacy instruction practices occur at each identified PIRLS 2006 achievement benchmark?

To answer this question, selected data from the PIRLS 2006 school questionnaire were used to provide a descriptive overview of a representative sample of learners' Grade 4 schooling contexts on the basis of these learners' mean performance in PIRLS 2006 and the predominant language profiles of learners in schools. The question was also addressed via school case studies selected from the representative sample to complement and to extend the findings from the first phase.

The second research subquestion that dealt with teaching practices for Grade 4 reading literacy was:

What are the practices of teaching Grade 4 reading literacy at each identified PIRLS 2006 achievement benchmark?

To answer this question, selected data from the PIRLS 2006 teacher questionnaire were used for description and comparison of practices according to class average profiles and language of teaching. Thereafter, qualitative analyses of cases characterized by learner performance trends in reading from PIRLS 2006 and language of instruction complexities, with purposively selected teacher participants, were undertaken during the second phase to complement findings from the first phase (Zimmerman, 2010).

The Mixed Methods Research Design for the Study

Rationale for use of mixed methods. The zealous following of a single research paradigm might lead to a warped sense of its value in the research process, with a failure to engage pragmatically with those aspects that it cannot, by its makeup, address (Brannen, 2004). Thus, the goal was to select research methods that were best suited to interrogating the questions to be addressed, rather than selecting methods that purely paid homage to their presumed link to the ontological and epistemological position for the research (Zimmerman, 2010).

Complementarities occur when differing data sets are used to address complementary but different aspects of the research (Hammersley, as cited in Brannen, 2004), a feature which was present for this research. It would seem that Brannen's (1992, p. 16) justification for combining qualitative and quantitative research approaches, which is one solution to the so-called "duality of structure" in understanding society reflected in both approaches, dovetails with the argument for complementarity between the two approaches. That is, there are macrostructural ways of understanding society that call for a deterministic explanatory mode associated with quantitative

research. There are also microstructural approaches to understanding society that emphasize the creative and interactive explanations and processes associated with qualitative research approaches (Brannen, 1992). Macrostructural and microstructural levels of inquiry, thus, cannot be conducted using the same methods. However, according to Brannen (1992), macro-level social phenomena need to be grounded in statements about social behavior in concrete micro-level contexts. This justification conformed to the aims for this research in that meso-level school data and micro-level data collected from teachers in Grade 4 classrooms were used both to ground and to illustrate the macro-level PIRLS 2006 systemic data used for secondary analysis. The macro, therefore, becomes more clearly known through the lens of the micro. If the macro cannot be fully understood without being contextualized through the micro, then micro and macro cannot stand in opposition to each other (Mason, 2006), which seemingly intimates that they are complementary in nature. The research design for this study thus took as its point of departure the understanding that qualitative and quantitative research can complement each other. As Johnson, Onwuegbuzie, and Turner (2007) state, mixed methods research can be used to probe a data set to determine its meaning. In this instance, one method also was used to inform the other method, which is another purpose of mixing qualitative and quantitative research (Johnson, Onwuegbuzie, & Turner, 2007). The goal of mixed methods use for this research, therefore, was to add breadth and scope to the study as well as to contribute to the knowledge base via examination of and attempts to understand different aspects of a complex phenomenon (Onwuegbuzie & Collins, 2007), such as the teaching of Grade 4 reading literacy in the educationally heterogeneous context of South Africa. The design also was based on recognition of the argument of Bryman (2007), who asserts that mixed methods research is not necessarily an exercise in testing findings against each other but is rather about forging an overall or negotiated account of the findings that brings together both components of the conversation or debate, which was a goal for this research (Zimmerman, 2010).

Partially mixed sequential equal status design. Leech and Onwuegbuzie's (2009) typology of mixed methods research design, which was developed according to three dimensions, was used to aid in the choice of a design for this study. These dimensions are (a) the level of mixing of methods (partially mixed vs. fully mixed), (b) time orientation (concurrent vs. sequential), and (c) emphasis of approaches (equal status vs. dominant status). Fully mixed methods involve the use of quantitative and qualitative methods within one or more stages of the research process or across these stages, whereas when partially mixed methods are used, the qualitative and quantitative components are conducted either concurrently or sequentially in their entirety and are only mixed at the data interpretation stage. Affording equal status means that the qualitative and quantitative phases of a study have approximately equal emphasis with respect to addressing the research questions (Leech & Onwuegbuzie, 2009).

Figure 1. The partially mixed sequential equal status research design for this study (Zimmerman, 2010).

For the purposes of this research, a partially mixed sequential equal status design (Leech & Onwuegbuzie, 2009) was considered most appropriate because the research comprised two phases, one quantitative and the other qualitative. In the first phase, teacher and school-level survey data from a nationally representative sample from the PIRLS 2006 were used to describe Grade 4 language teachers' reading literacy instruction strategies and the schooling conditions in which they were implemented. This description took place on the basis of the reclassification of the teacher and school survey data according

to class language profiles and learners' average class performance aligned to each of the benchmarks of the PIRLS 2006, discussed in more detail below. Thereafter, qualitative case studies of teachers' practices and contexts for teaching from each reclassification subsample were purposively selected to add illuminating depth to the study. The results of the quantitative first phase were used to inform the use of the qualitative method (Onwuegbuzie & Collins, 2007), particularly in terms of sampling decisions and to aid in the development of data collection strategies for the qualitative method.

Table 1. Percentage of South African Learners Reaching the PIRLS 2006 International Benchmarks Versus the International Median (Howie et al., 2008; Zimmerman, 2010).

Low International Benchmark (400-474): Basic reading skills and strategies (recognize, locate, and reproduce explicitly stated information in texts and answer some questions seeking straightforward inferences). International median: 94%. South African Grade 4: 13% (SE = 0.5).

Intermediate International Benchmark (475-549): Learners with some reading proficiency who can understand the plot at a literal level and can make some inferences and connections across texts. International median: 76%. South African Grade 4: 7% (SE = 1.1).

High International Benchmark (550-624): Competent readers who have the ability to retrieve significant details embedded across the text and can provide text-based support for inferences. International median: 41%. South African Grade 4: 3% (SE = 2.0).

Advanced International Benchmark (625+): Learners able to respond fully to the PIRLS assessment by means of their integration of information across relatively challenging texts and the provision of full text-based support in their answers. International median: 7%. South African Grade 4: 1% (SE = 1.5).

Note. PIRLS = Progress in International Reading Literacy Study; SE = standard error.

Figure 1 illustrates the partially mixed sequential equal status research design for this study. It also outlines the methodological undertakings for each phase of the research in terms of sampling choice and specific methods of data collection, aspects that will be explicated in the following sections.

Reclassification of PIRLS 2006 learner achievement data for mixed methods sampling purposes. For the PIRLS 2006, four set benchmarks along the PIRLS scoring scale were established by means of scale anchoring analysis (Kennedy & Trong, 2007) to lead to a qualitative description of learner performance and competencies linked to assessment items at these set scores on the performance continuum (Howie et al., 2012). These benchmarks comprised an Advanced International Benchmark set at 625 points, a High International Benchmark of 550 points, an Intermediate International Benchmark of 475, and a Low International Benchmark set at 400. These benchmarks are cumulative in that learners who were able to reach the higher benchmarks also demonstrated the knowledge and skills for the lower benchmarks (Howie et al., 2008; Zimmerman, 2010). Table 1 delineates the benchmarks and provides the international as well as South African Grade 4 learner achievement median for each.
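Because this cumulative benchmark logic recurs throughout the sampling and analysis that follows, a minimal sketch may make it concrete. The snippet below is ours, for illustration only; the function name and labels are not from the study. It assigns a scale score to the highest international benchmark it reaches.

```python
# A minimal sketch (illustrative, not from the study): assign a PIRLS
# scale score to the highest international benchmark it reaches.
# Benchmarks are cumulative, so a score is labelled by the highest
# threshold it meets.

PIRLS_BENCHMARKS = [  # (threshold in scale points, label)
    (625, "Advanced"),
    (550, "High"),
    (475, "Intermediate"),
    (400, "Low"),
]

def international_benchmark(score: float) -> str:
    """Return the highest PIRLS 2006 international benchmark reached."""
    for threshold, label in PIRLS_BENCHMARKS:
        if score >= threshold:
            return label
    return "Below Low"  # where most South African learners fell

if __name__ == "__main__":
    for s in (253, 430, 590, 660):
        print(s, "->", international_benchmark(s))
```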

These benchmarks acted as a starting point for the mixed methods research study (Zimmerman, 2010) reflected on in this article based on the recognition that exploration of schooling conditions and teaching practices aligned to performance at these benchmarks could lead to further insight into differences in performance levels across South African schools (Zimmerman & Smit, 2014). For the first phase of the research, the PIRLS 2006 Grade 4 learner (n = 14,299) English language and African languages achievement data were firstly reclassified according to class mean performance. Thereafter, the achievement data were further reclassified according to language of instruction. This resulted in a reclassification of data into English first language (EFL)1 or English additional

language (EAL)2 categories for those learners who wrote the PIRLS tests in an African language, their language of instruction for their first 3 years of schooling. There are 11 official languages for teaching and learning in South African schools, making the implementation of PIRLS 2006 in South Africa ". . . the largest, most ambitious and complex national design within an international comparative study ever undertaken" (Howie et al., 2008, p. v), particularly in terms of sampling linked to languages of instruction. For sampling, in this study, the mean Grade 4 class performance score of each EFL or EAL class was calculated, and, because PIRLS 2006 background questionnaire data are aligned with average learner performances, each learner (n = 14,299) in each class in the sample was allocated the mean class performance score to allow for comparison of teaching practices according to class average performance. These mean class performances then were checked for possible alignment to each of the PIRLS international benchmarks (Zimmerman, 2010; Zimmerman, Howie, & Smit, 2011).
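As an illustration of this reclassification step, the sketch below computes class means and allocates them back to learners using pandas. All column names, toy scores, and band labels are our assumptions; the actual analysis worked with plausible values and sampling weights, which are omitted here for brevity.

```python
# Illustrative sketch of the class-mean reclassification (column names,
# scores, and band labels are assumptions, not the study's variables).
import pandas as pd

learners = pd.DataFrame({
    "learner_id": [1, 2, 3, 4, 5, 6],
    "class_id": ["A", "A", "A", "B", "B", "B"],
    "language_profile": ["EFL", "EFL", "EFL", "EAL", "EAL", "EAL"],
    "score": [560, 540, 555, 180, 160, 200],
})

# Allocate each learner the mean performance score of his or her class so
# that background-questionnaire data can be compared by class average.
learners["class_mean"] = learners.groupby("class_id")["score"].transform("mean")

# Align class means to the benchmark bands used in the study; the two
# lowest bands are the further South African benchmarks introduced below.
BANDS = [
    (175, 250, "175"),
    (325, 400, "325"),
    (400, 475, "400"),
    (475, 550, "475"),
    (550, 625, "550"),
]

def band(mean: float) -> str:
    for lower, upper, label in BANDS:
        if lower <= mean < upper:
            return label
    return "unassigned"  # class means falling outside the studied bands

learners["subsample"] = learners["language_profile"] + " " + learners["class_mean"].map(band)
print(learners[["learner_id", "class_id", "class_mean", "subsample"]])
```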

This sample reclassification led to the discovery that 70% (SE = 5.3) of learners tested in English were in EFL classes where the class average achievement was outside the parameters of the PIRLS international benchmarks. Moreover, all learners (n = 11,496) tested in an African language were in EAL classes where the average class achievement was below the Low International Benchmark (Zimmerman, 2010; Zimmerman et al., 2011). Given that the vast majority of the sample were in classes with an average below the international benchmarks (suggesting that most had not yet achieved basic reading literacy competencies), it was necessary to identify further benchmarks lower on the achievement scale for investigation into teaching and learning associated with these benchmarks. The benchmarks of 325 and 175, set 75 and 225 scale points below the Low International Benchmark, respectively, were selected because they were deemed analytically meaningful in terms of learner distributions at these further South African benchmarks.

1 English as a first language classes are situated in primary schools where instruction is only offered in one language, English, from the start of school, despite the enrolment of learners with other vernaculars at these schools.

2 There are schools with classes of learners who first start using English as the language of learning and teaching (LoLT) at Grade 4. Prior to this grade, these English additional language learners have used another language, usually an African language mother tongue, as the LoLT.

Table 2. Final Sample Used for Secondary Analysis of PIRLS Questionnaire Data (Zimmerman, 2010). Learners in classes with a mean reaching the selected benchmarks, n and % (SE):

South African Benchmark 175-249: EFL learners n = 926, 25% (7.0); EAL learners n = 7,249, 59% (4.1).
South African Benchmark 325-399: EFL learners n = 484, 23% (6.4); EAL learners n = 174, 2% (1.2).
Low International Benchmark 400-474: EFL learners n = 297, 11% (4.3); EAL learners NR.
Intermediate International Benchmark 475-549: EFL learners n = 237, 13% (5.0); EAL learners NR.
High International Benchmark 550-624: EFL learners n = 84, 6% (3.9); EAL learners NR.

Note. SE = standard error of measurement; NR = not reached; EFL = English first language; EAL = English additional language.

Table 2 outlines the final sample used for analysis. This sample reclassification led to seven subsamples defined by average class performance aligned to the benchmarks and class language profile (i.e., EFL and EAL 175, EFL and EAL 325, EFL 400, EFL 475, and EFL 550; Zimmerman, 2010). These seven profiles formed the basis for secondary data analysis of the PIRLS 2006 questionnaires in the first phase of the mixed methods research study.

In mixed methods research, quantitative data can assist sampling for the qualitative component by identifying representative sample members or helping to identify outlying or deviant cases (Johnson et al., 2007). The seven benchmark and language-based profiles thus served as the basis for purposive sampling in the second phase of the research, leading to the selection of sample members from across the achievement spectrum for PIRLS 2006. EFL and EAL schools with PIRLS 2006 Grade 4 class averages at benchmarks 175, 325, 400, 475, and 550 were, therefore, included in the sampling frame for selection of cases for the qualitative phase of the research study. Schools meeting these criteria were scattered throughout all nine provinces of South Africa, but focus was first placed on sampled schools from the Gauteng province for convenience purposes, based on researcher access to the province. Five schools and their teachers in the Gauteng province agreed to participate. These included EFL schools with performance at 550, 400, and 325 as well as EAL schools with performance at 325 and 175. No EFL school with performance at 175 was available for participation at the time, and, because the Gauteng school with a mean performance at EFL 475 declined to participate, a school in the KwaZulu-Natal province was selected (Zimmerman, 2010). Each of the six cases sampled was viewed as a critical case, which involves the choice of a representative case most likely to represent the phenomenon under exploration, thought to have been achieved via the sampling criteria of class average performance and language of instruction. The main argument for the use of this type of case is that what is valid for these participants is more likely to be valid for others too (Flyvbjerg, 2004; Merriam, 1998; Zimmerman, 2010). As Baskarada (2014) points out, case studies can involve benchmarking best practices. The sampling strategy allowed for this as well as for highlighting instances chosen to represent important variations in the phenomenon being investigated (Baskarada, 2014).
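To show how the reclassified profiles drove the purposive case selection, the sketch below filters a hypothetical sampling frame by profile and province. School identifiers, provinces, and the preference logic are invented for illustration; the actual selection also depended on schools' willingness to participate.

```python
# Hypothetical sketch of the Phase 2 purposive sampling frame (all rows
# and column names invented). One school per subsample profile is sought,
# with Gauteng schools preferred for researcher-access reasons.
import pandas as pd

schools = pd.DataFrame({
    "school_id": [101, 102, 103, 104, 105, 106, 107],
    "subsample": ["EFL 550", "EFL 475", "EFL 400", "EFL 325",
                  "EAL 325", "EAL 175", "EFL 175"],
    "province": ["Gauteng", "KwaZulu-Natal", "Gauteng", "Gauteng",
                 "Gauteng", "Gauteng", "Limpopo"],
})

target_profiles = ["EFL 175", "EAL 175", "EFL 325", "EAL 325",
                   "EFL 400", "EFL 475", "EFL 550"]

frame = schools[schools["subsample"].isin(target_profiles)]

def pick(profile: str):
    """Pick one candidate school for a profile, preferring Gauteng and
    falling back to other provinces when no Gauteng school is available."""
    candidates = frame[frame["subsample"] == profile]
    gauteng = candidates[candidates["province"] == "Gauteng"]
    pool = gauteng if not gauteng.empty else candidates
    return None if pool.empty else int(pool.iloc[0]["school_id"])

cases = {profile: pick(profile) for profile in target_profiles}
print(cases)
```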


Phase 1: Secondary Data Analysis of PIRLS Teacher and School Questionnaire Data

The PIRLS 2006 school questionnaire gathered information from school principals about the availability and use of materials to teach reading, the school reading curriculum, and instructional policies, in addition to school demographics and resources. The teacher questionnaire sought information about the structure and content of reading instruction in the classroom (Kennedy, 2007). For the PIRLS 2006 main study, questionnaire data as provided by Grade 4 teachers and school principals in these teachers' schools were reported by means of the percentage of learners in each category of a variable, accompanied by the mean reading achievement of the learners in each category. Thus, the teacher and school data were presented from the perspective of learners' educational experiences (Trong & Kennedy, 2007). The same descriptive reporting occurred for this study; however, the descriptive summaries of the response distributions were considered within and across the seven reclassified subsamples according to benchmark mean performances and the class average performance assigned to each learner that was generated for this research (Zimmerman, 2010). Frequencies and mean scores were generated for selected variables in the teacher and principal questionnaire data. Where appropriate, the mean scores were calculated per benchmark and presented as cross-tabulations. The IDB Analyzer (IEA, 2009) was used to estimate the standard errors correctly, given the cluster sample design.
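PIRLS's clustered sample means that naive standard errors would be too small, which is why replication-based estimation is needed. The sketch below is a generic, simplified jackknife repeated replication (JRR) estimator for the standard error of a weighted mean; it is ours, under assumed variable names, and not the IDB Analyzer's exact implementation. Schools are paired into sampling zones, and each replicate zeroes out one member of a pair while double-weighting its partner.

```python
# Simplified jackknife repeated replication (JRR) for the standard error
# of a weighted mean under a paired-cluster design. Illustrative only;
# not the IDB Analyzer's exact procedure.
import numpy as np

def jrr_standard_error(values, weights, zones, flags):
    """values, weights: per-learner arrays; zones: sampling-zone ids;
    flags: 0/1 indicator of which school of its zone's pair a learner
    belongs to."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    zones = np.asarray(zones)
    flags = np.asarray(flags)

    def weighted_mean(w):
        return np.sum(w * values) / np.sum(w)

    full_estimate = weighted_mean(weights)
    variance = 0.0
    for zone in np.unique(zones):
        w = weights.copy()
        in_zone = zones == zone
        w[in_zone & (flags == 0)] = 0.0   # drop one school of the pair ...
        w[in_zone & (flags == 1)] *= 2.0  # ... and double-weight its partner
        variance += (weighted_mean(w) - full_estimate) ** 2
    return np.sqrt(variance)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.normal(500, 100, 200)
    weights = np.ones(200)
    zones = np.repeat(np.arange(25), 8)            # 25 zones, 8 learners each
    flags = np.tile([0, 0, 0, 0, 1, 1, 1, 1], 25)  # two "schools" per zone
    print(round(jrr_standard_error(scores, weights, zones, flags), 2))
```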

Phase 2: Case Studies of Teachers' Reading Instruction Practices and Teaching Contexts

Figure 2 illustrates the six data collection methods used for each case study of teacher practices in schooling context. Each method informed the overall case and further acted to inform either the implementation or analysis of the other methods. As such, each method led to the convergence of evidence for the overall case (Yin, 2003).

The PIRLS 2006 school and teacher questionnaires completed in the sampled schools in 2006 were used to contextualize the cases. This was viewed as important to cover the contextual conditions that formed the boundaries of these cases (Yin, 2003) because these conditions are highly pertinent to understanding teachers' practices.


Figure 2. Case study data collection methods (Zimmerman, 2010, adapted from Yin, 2003).

Teachers also were asked to review released PIRLS 2006 passages. In doing so, they completed an open-response questionnaire to indicate their opinions on the suitability of the passage for their learners and whether or not they had received exposure to similar content. This was aimed at exploring the level of exposure that the learners had to the type of texts and questions presented by the PIRLS. This was conceptualized as the opportunity that they had to learn the type of content needed to respond to the items of PIRLS 2006—also referred to as opportunity-to-learn (McDonnell, 1995).

Semistructured interviews were conducted with both the participating Grade 4 teacher and the Head of Department (HoD) responsible for overseeing the language subject area at Grade 4 in the sampled schools. The teacher interviews focused on teachers' understandings of and goals for teaching reading literacy, viewpoints on the curriculum for the teaching of reading literacy, descriptions of typical lessons, opinions on what experiences have shaped their teaching strategies, experiences in interacting with their learners for reading literacy, and ideas about which strategies are most useful. The HoD interview focused on the HoD's career path; their role as HoD at the school; the goals and planning process for reading literacy development undertaken by the HoD and teachers; the strategies for reading development used; teaching time allocation for language and for reading instruction, specifically at Grade 4; a description of a typical learner at the school; and opinions on the official teaching curriculum.

Classroom observations provided the opportunity to investigate teachers' teaching practices in situ instead of just from the second-hand accounts provided via the other data collection methods for the study (Cohen, Manion, & Morrison, 2000). One reading comprehension lesson undertaken by each teacher was observed, followed by an interview regarding the lesson. Moreover, the passage and questions used for reading comprehension during each observed lesson were collected for later comparison with the passages and questions from the lessons observed for the other cases.

In literacy research, the analysis of artefacts usually involves the examination of physical evidence of literacy instruction, learning, or practice (Purcell-Gates, 2004). For this research, this meant that a review of products of literacy instruction, learning, and practice took place. The language workbooks of a learner in each participating teacher's class were reviewed. The quantity, quality, and type of activities evident, especially for reading comprehension, were recorded as well as the quality of the learners' written responses to these activities in terms of amount, content, and developmental level.

Photographs were taken of the literacy resources available at the school, the reading materials available in the Grade 4 teachers' classrooms, and the print environment in the classroom overall.

The data were analyzed for qualitative content (Schreier, 2012) using constructivist grounded theory processes (Charmaz, 2014). The data were coded using descriptive, process, and in vivo codes (Saldaña, 2013), and categories were created in order to theme the data. Constructivist grounded theory as a method of analysis also was utilized for memo writing, which Saldaña (2013) elaborates on as analytic memo writing. This analytical writing and coding was conducted using computer-assisted qualitative data analysis software (CAQDAS), ATLAS.ti™, for transparency, credibility, and audit trail purposes (Zimmerman, 2010). Concepts had been operationalized in an exploratory conceptual framework (Zimmerman, 2010) prior to the data collection to make them measurable via the empirical research process. This gave the opportunity to extend the framework to include additional concepts and constructs as well as to comment on its applicability based on the analysis. Multiple data sources, member validation, and maintenance of a chain of evidence were used as strategies to improve construct validity. Furthermore, theoretical sampling meant that data collection activities were guided by provisional theoretical ideas to gain a deeper understanding of the cases and to facilitate further theorization (Baskarada, 2014).

Methodological Norms

The quantitative and qualitative research traditions both treat issues of validity differently, despite sometimes using similar terminology (Dellinger & Leech, 2007). Although in this research the decision was made to deal with the methodological norms for both the qualitative and the quantitative phases separately, cognizance also was taken that this study was a mixed methods undertaking, which presented other considerations in terms of ensuring the quality of the undertaking. The central consideration for the validity of the mixed methods undertaking was to ensure its design quality and interpretive rigor. Design quality refers to the quality of inputs such as data, design, and data analysis procedures (Teddlie & Tashakkori, 2009). Interpretive rigor involves the integrity of the process of meaning making (Teddlie & Tashakkori, 2009). Also, Teddlie and Tashakkori (2009) indicate that transparency is used as an indicator for quality in both quantitative and qualitative research studies. Transparency is judged by the clarity of explanations that researchers provide regarding all stages of the research process (Teddlie & Tashakkori, 2009).

Related to the validity and reliability of the data used for secondary analysis for Phase 1, any study undertaken under the auspices of the IEA, such as the PIRLS 2006, must conform to the technical standards that have been stipulated for such studies. The standards are grouped into four areas, namely, (a) the design, management, operation, and quality control of international studies; (b) the construction of instruments for measuring student achievement and background questionnaires for collecting information from students, teachers, and schools; (c) survey data collection procedures in schools; and (d) data processing, analysis, and reporting (Gregory & Martin, 2001). These rigorous standards directed the implementation of the PIRLS 2006 study in South Africa.

To ensure consistency in the fieldwork within and across countries and to ensure compliance with the IEA/PIRLS 2006 data collection guidelines and standards, a monitoring process was put into place and an international quality control manager acted as an external, objective observer of the process. National quality control officials also were appointed with monitoring occurring in 8% of the sampled South African schools (Howie et al., 2008).

For the second phase, credibility was approached in terms of the quality of the case studies by means of the collection of multiple sources of data evidence for each of the cases. As another quality check, participants had the opportunity to review, to corroborate, and to revise the research findings, should they deem it necessary, through a process of member validation (Barone, 2004; Bryman, 2004). In terms of the dependability of the second phase research, the CAQDAS package ATLAS.ti™ (v. 5) assisted in the creation of an audit trail in which the evidence for the conclusions drawn was presented in a linear manner to show how the data collection and analysis led to the conclusions drawn (Barone, 2004; Bryman, 2004). Because situated and contextual understandings are at the core of qualitative explanation and argument, Mason (2006) considers it unfortunate that qualitative explanations are seen as being too localized or contextualized to underpin generalization or theorization. She further qualifies this statement by suggesting that understanding how social processes and phenomena are contingent on or embedded within particular contexts is a crucial part of social explanation, and understanding how they play out across a range of different contexts also makes possible cross-contextual generalizations (Mason, 2006), which aligned to the case sampling strategy for this phase of the research. The illustrative, in-depth description that is afforded by the qualitative case study offers "others ... a database for making judgements about the possible transferability of findings to other milieus" (Bryman, 2004, p. 275).

Ethical Undertakings for the Research

Consent to conduct the PIRLS 2006 main study was received from the then National Minister of Education by the PIRLS national research co-ordinators in 2005. The sampled schools' teachers and learners who participated in the PIRLS 2006 gave their informed consent and assent for participation prior to data collection for the study. Permission also was sought from the learners' parents (Howie et al., 2008). Permission to use the South African PIRLS 2006 data for secondary analysis for this research was obtained from the PIRLS national research co-ordinators at the Centre for Evaluation and Assessment, University of Pretoria. For the second phase case studies, informed consent to conduct the research was obtained from the National DoE, school management at each school, and teacher participants. Clearance to undertake the research was received from the Ethics Committee of the Faculty of Education at the University of Pretoria, South Africa, and all ethical principles for conducting research were adhered to.

Implications: How Data From an International Comparative Study of Educational Achievement Informed Qualitative Research

The unique contribution of the study was its illustration of teaching practices across the PIRLS achievement spectrum particularly by means of sampling of the qualitative case studies linked to class average achievement on the PIRLS 2006 benchmarks. The rationale for the focus on the PIRLS 2006 achievement benchmarks for sampling for the case studies was not based on a goal to investigate teacher effectiveness aligned to learner performance in the PIRLS. Rather, it was to investigate how teachers engaged with learner literacy instruction, given a number of average learner performance outcomes, ranging from low to high performance, and schooling conditions. Instead of offering definitive explanations for learner performance in PIRLS 2006 in terms of teachers' practices and schooling conditions (a goal often linked to large-scale assessment studies), the goal was to offer nuanced perspectives of how teachers are addressing reading literacy instruction for learner cohorts functioning at a variety of levels on the literacy development continuum in various contexts representative of schooling in South Africa (Zimmerman, 2010). As far as we are aware, this is the first time that learner achievement profiles from an international comparative study of educational achievement such as the PIRLS have been used to inform case selections.

Given the sheer size of the South African Grade 4 sample (n = 14,299) and the data available for secondary analysis of the PIRLS 2006 teacher and school questionnaires, as well as the acknowledgment that thorough, in-depth case studies would need to be undertaken to complement the quantitative data, the mixed methods research study was conceptualized as a partially mixed sequential equal status design. Based on later reflection on which findings shed the most light on schooling practices and teaching conditions for the implementation of the curriculum for reading literacy development across a range of educational contexts in South Africa, it became evident that the qualitative phase of the research study was perhaps the dominant component of the mixed methods research study in terms of outcomes for the research. As such, it might be that the actual process of the research study leads to a better determination of the research typology used for mixed methods research and brings to light how one determines the weight of each component of the research.

Although some insights were gleaned from the quantitative phase descriptive statistics regarding teachers' comprehension instruction practices, and significant differences were found between the benchmarks, it was difficult to ascertain any major patterns of response distribution or practices that stood out from the others at each of the class average benchmarks. Although the reason for this is not entirely certain, this might have been as a result of overly positive reporting by teachers for the items or misunderstandings of the meaning of the items. The use of teacher questionnaires in relation to teaching practices in low-performance contexts such as South Africa,

therefore, might be problematic because teachers might feel vulnerable and defensive, thereby resulting in unreliable or unrealistic responses (Zimmerman, 2010). Another possible explanation as pointed out by Shiel and Eivers (2009) in relation to the PIRLS teacher questionnaire data is that:

There is difficulty in establishing associations between frequency of teaching various skills or strategies and student performance. Teachers may emphasise a particular strategy (e.g. daily teaching of decoding, engagement of students in oral reading) because a class is weak and needs additional support, or because it is on the curriculum and must be covered. Hence, many associations between frequency of instruction and achievement in PIRLS are weak, not statistically significant, or counter-intuitive. (p. 355)

Reddy (2005) further indicates that increased local engagement in the design and implementation of cross-national studies is required to inform policy and practice. For this purpose, the background questionnaires for these studies need to have contextual relevance. Where there are systemic differences between countries, these need to be reflected in the items of the background questionnaires to capture national circumstances so as to be of value in that context beyond just the provision of comparative achievement scores with other countries (Reddy, 2005).

Although secondary data analysis can reveal what is happening, it cannot disclose why in detail, because this requires combined approaches (Smith, 2008). As a result, the Phase 2 case studies of teachers' instructional practices became more important and, thus, perhaps more dominant in terms of the research design in extending the findings further. Perhaps in contexts where there is less of an issue in terms of quality of teaching and associated learning outcomes, qualitative research as a follow-up to illuminate practices might not be as crucial, but in contexts such as South Africa where the reasons for poor learner achievement are not well researched in the scholarly literature, the use of qualitative cases was particularly useful to gain further insights. The value of these case study findings to feed back into the selection of national option items for the background questionnaires for further cycles of international comparative studies such as PIRLS is an important consideration too, particularly in light of Reddy's (2005) comments and those of other researchers with regard to these assessments (Rutkowski & Rutkowski, 2010).

Additionally, most school improvement research has involved the study of places where something exceptional appears to be happening, such as those schools with high class mean achievement for the PIRLS 2006. A strength of this study is that it also involved the investigation of a range of school situations to learn what is possible under normal circumstances (Levin, 2006). The identification and sampling of schools across the achievement spectrum could not have been achieved without the reclassification of the learner achievement data from the PIRLS 2006. Levin (2006) argues that we will not learn how to improve learner outcomes broadly by

looking only at places that are already exceptional. Indeed, the research design for this study fitted with Levin's (2006, p. 401) suggestion that we need ". . . less attention to studies of effective schools that are based on outliers in favour of much broader samples of schools, including some that are failing badly."

The case studies allowed for the presentation of research in a more publicly accessible format capable of serving multiple audiences. The research process itself is thought to have been more accessible and, as such, is argued to aid in the democratization of decision making and knowledge (Cohen et al., 2000). Associated with this strength is the recognition that the concrete, practical, context-bound knowledge produced by case studies can contribute to the learning process of others who can use it to aid in their understanding of the issue that is illustrated. Because the aim of the research was to be praxis enriching, the case study approach taken provided an avenue for learning about the practical manifestations and implications of teachers' practices through case studies (Flyvbjerg, 2004). These case studies, therefore, began in the practical world of teachers' practices and experiences in specific schooling contexts, but the knowledge generated in terms of these cases is considered as being capable of contributing to practical situations and theory building (Cohen et al., 2000). Because the improvement of literacy development in South African schools is a national priority that has to involve multiple role players at national, regional, and local levels, the value of accessible, practical knowledge to aid decision making is paramount.

In consideration of more practical limitations for the first phase of the research, a number of points are offered. Firstly, the number of subsamples created and used for the secondary analysis did create complexity in the analysis and interpretation of the differences and similarities between each of the subsamples. Also, although language of instruction is a fundamental issue in South African schools, it was difficult to differentiate conditions and practices between EAL and EFL schools below the PIRLS international benchmarks via the quantitative methods used. Another limitation was the small sample size for the EAL 325 and EFL 550 subsample groupings, which meant that associated findings for these groupings are illustrative and not generalizable to the overall school population. It must also be noted that content validity for the South African benchmarks of 175 and 325 on the PIRLS achievement scale was not established because such an undertaking was outside of the parameters of the present study (Zimmerman, 2010).

A limitation also might have been created by the sequential nature of the mixed methods research design chosen for this research study. Smith (2008) mentions that a problem can be defined by large-scale analysis of relevant secondary data of a numeric nature. In a second phase, this problem can be examined in-depth with a subset of cases selected from the first phase (Smith, 2008). Because secondary data were used to inform the second phase of this research study that involved the generation of primary data, there was a delay between the collection of the PIRLS 2006 data in 2005 and data collection from schools and teachers in 2009. This time delay was not

regarded as problematic because no major changes to these educational settings, to the larger communities in which these schools are situated, to learner educational characteristics, or to teacher expertise were surmised for this time period. Moreover, formalized government initiatives to improve reading instruction in schools were introduced in the first quarter of 2008 (DoE, 2008a, 2008b, 2008c, 2008d) and were only in the process of being implemented in schools at the time of data collection. Related to this time delay, participant selection at these school sites might have been more of a limitation. Although an attempt was made to contact those teachers who participated in the 2005 PIRLS main study, this was not feasible in every instance. Thus, of the six teachers who participated, only one could definitely remember participating in the 2005 study. Nevertheless, the characteristics of the teachers who did participate were not dissimilar to the characteristics of those teachers from the 2005 study, as identified by the descriptive statistics. It was determined that their age ranges followed the same trends as those of the majority of teachers, and their years of experience also suggested similar characteristics to the participants of the 2005 study. Moreover, analysis of these teachers' practices revealed that they generally aligned to overall trends linked to whether these teachers were teaching in low- or high-performing schools from the PIRLS 2006 (Zimmerman, 2010).

As far as the improvement of learner achievement in South Africa is concerned, the most successful countries tend to be those with the lowest levels of inequality (Levin, 2010). Thus, the onus is still on all role players in the South African education system to work toward lessening the existing inequalities that perpetuate the achievement gap between privileged and nonprivileged learners. Levin and Fullan (2008) sum up the task that lies ahead for all role players most pertinently:

Large-scale, sustained improvement in student outcomes requires a sustained effort to change school and classroom practices, not just structures such as governance and accountability. The heart of improvement lies in changing teaching and learning practices in thousands and thousands of classrooms, and this requires focused and sustained effort by all parts of the education system and its partners. (p. 291)

The mixed methods research study undertaken allowed for macro-level insights into potential reasons for learners' poor achievement to use for such improvement initiatives. Most importantly, the micro-level qualitative exploration and illustration of grassroots-level schooling conditions and teaching practices across a range of educational contexts in the country linked to a larger representative sample at the macro level for PIRLS 2006 meant that, as Fleisch (2008) earlier implored, there was a contribution to much needed in-depth research into the processes and realities present in individual contexts for reading literacy development.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

Barone, D. M. (2004). Case-study research. In N. K. Duke & M. H. Mallette (Eds.), Literacy research methodologies (pp. 7-27). New York, NY: The Guilford Press.

Baskarada, S. (2014). Qualitative case studies guidelines. The Qualitative Report, 19, 1-25. Retrieved from http://www.nova.edu/sss/QR/QR19/baskarada24.pdf

Brannen, J. (1992). Combining qualitative and quantitative approaches: An overview. In J. Brannen (Ed.), Mixing methods: Qualitative and quantitative research (pp. 3-37). Aldershot, England: Avebury.

Brannen, J. (2004). Working qualitatively and quantitatively. In C. Seale, G. Gobo, J. F. Gubrium, & D. Silverman (Eds.), Qualitative research practice (pp. 312-326). London, England: Sage.

Bryman, A. (2004). Social research methods (2nd ed.). Oxford, England: Oxford University Press.

Bryman, A. (2007). Barriers to integrating quantitative and qualitative research. Journal of Mixed Methods Research, 1, 1-18. doi:10.1177/2345678906290531

Charmaz, K. (2014). Constructing grounded theory (2nd ed.). Thousand Oaks, CA: Sage.

Cohen, L., Manion, L., & Morrison, K. (2000). Research methods in education (5th ed.). London, England: Routledge Falmer.

Dellinger, A. B., & Leech, N. L. (2007). Toward a unified validation framework in mixed methods research. Journal of Mixed Methods Research, 1, 309-332. doi:10.1177/1558689807306147

Department of Education. (2003). Systemic evaluation foundation phase mainstream national report. Pretoria, South Africa: Author.

Department of Education. (2005). Systemic evaluation report: Intermediate phase Grade 6. Pretoria, South Africa: Author.

Department of Education. (2008a). Foundations for learning campaign: 2008-2011 (Government Gazette, Vol. 513, No. 30880). Pretoria, South Africa: Author.

Department of Education. (2008b). National reading strategy. Pretoria, South Africa: Author.

Department of Education. (2008c). Teaching reading in the early grades. A teacher's handbook. Pretoria, South Africa: Author.

Department of Education. (2008d). Foundations for learning campaign. Assessment framework. Intermediate phase. Pretoria, South Africa: Author.

Fleisch, B. (2008). Primary education in crisis: Why South African schoolchildren underachieve in reading and mathematics. Cape Town, South Africa: Juta & Co.

Flyvbjerg, B. (2004). Five misunderstandings about case-study research. In C. Seale, G. Gobo, J. F. Gubrium, & D. Silverman (Eds.), Qualitative research practice (pp. 420-434). London, England: Sage.

Gregory, K. D., & Martin, M. O. (2001). Technical standards for IEA studies: An annotated bibliography. Amsterdam, the Netherlands: International Association for the Evaluation of Educational Achievement.

Howie, S. J., van Staden, S., Tshele, M., Dowse, C., & Zimmerman, L. (2012). Progress in International Reading Literacy Study 2011. Summary report: South African children's reading literacy achievement. Pretoria, South Africa: Centre for Evaluation and Assessment, University of Pretoria.

Howie, S. J., Venter, E., van Staden, S., Zimmerman, L., Long, C., du Toit, C., ... Archer, E. (2008). Progress in International Reading Literacy Study 2006 summary report: South African children's reading literacy achievement. Pretoria, South Africa: Centre for Evaluation and Assessment, University of Pretoria.

International Association for the Evaluation of Educational Achievement. (2009). International database analyzer (Version 2.0.0.0) [Computer software]. Hamburg, Germany: IEA Data Processing and Research Center.

Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1, 112-133. doi:10.1177/1558689806298224

Kennedy, A. M. (2007). Developing the PIRLS 2006 background questionnaires. In M. O. Martin, I. V. S. Mullis, & A. M. Kennedy (Eds.), PIRLS 2006 technical report (pp. 23-33). Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College.

Kennedy, A. M., & Trong, K. L. (2007). Reporting student achievement in reading. In M. O. Martin, I. V. S. Mullis, & A. M. Kennedy (Eds.), PIRLS 2006 technical report (pp. 173-193). Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College.

Klapwijk, N. M. (2012). Reading strategy instruction and teacher change: Implications for teacher training. South African Journal of Education, 32, 191-204.

Leech, N. L., & Onwuegbuzie, A. J. (2009). A typology of mixed methods research designs. Quality and Quantity, 43, 265-275. doi:10.1007/s11135-007-9105-3

Levin, B. (2006). Schools in challenging circumstances: A reflection on what we know and what we need to know. School Effectiveness and School Improvement, 17, 399-407. doi:10.1080/09243450600743459

Levin, B. (2010). The challenge of large-scale literacy improvement. School Effectiveness and School Improvement, 21, 359-376. doi:10.1080/09243453.2010.486589

Levin, B., & Fullan, M. (2008). Learning about system renewal. Educational Management Administration & Leadership, 36, 289-303. doi:10.1177/1741143207087778

Mason, J. (2006). Mixing methods in a qualitatively driven way. Qualitative Research, 6, 9-25. doi:10.1177/1468794106058866

McDonnell, L. M. (1995). Opportunity to learn as a research concept and a policy instrument. Educational Evaluation and Policy Analysis, 17, 305-322. doi:10.3102/01623737017003305

Merriam, S. B. (1998). Qualitative research and case study applications in education: A conceptual introduction. San Francisco, CA: Jossey-Bass.

Moloi, M., & Strauss, J. (2005). South Africa working report. The SACMEQ II project in South Africa: A study of the conditions of schooling and the quality of education. Harare, Zimbabwe: SACMEQ; Ministry of Education, South Africa.

Mullis, I. V. S., Kennedy, A. M., Martin, M. O., & Sainsbury, M. (2006). PIRLS 2006 assessment framework and specifications. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College.

Mullis, I. V. S., Martin, M. O., Kennedy, A. M., & Foy, P. (2007). PIRLS 2006 international report: IEA's study of reading literacy achievement in primary schools. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Boston College.

National Education and Evaluation Development Unit. (2013). National report 2012: The state of literacy teaching and learning in the Foundation Phase. Pretoria, South Africa: Author.

Onwuegbuzie, A. J., & Collins, K. M. T. (2007). A typology of mixed methods sampling designs in social science research. The Qualitative Report, 12, 281-316. Retrieved from http://www.nova.edu/sss/QR/QR12-2/onwuegbuzie2.pdf

Postlethwaite, T. N., & Ross, K. N. (1992). Effective schools in reading: Implications for educational planners. An exploratory study. The Hague, the Netherlands: International Association for the Evaluation of Educational Achievement.

Pretorius, E. J. (2014). Supporting transition or playing catch-up in Grade 4? Implications for standards in education and training. Perspectives in Education, 32, 51-76.

Pretorius, E. J., & Machet, M. P. (2004). The socio-educational context of literacy accomplishment in disadvantaged schools: Lessons for reading in the early primary school years. Journal of Language Teaching, 38, 45-62. doi:10.4314/jlt.v38i1.6027

Purcell-Gates, V. (2004). Ethnographic research. In N. K. Duke & M. H. Mallette (Eds.), Literacy research methodologies (pp. 92-113). New York, NY: Guilford Press.

Reddy, V. (2005). Cross-national achievement studies: Learning from South Africa's participation in the Trends in International Mathematics and Science Study (TIMSS). Compare: A Journal of Comparative and International Education, 35, 63-77. doi:10.1080/03057920500033571

Reynolds, D. (1998). Schooling for literacy: A review of research on teacher effectiveness and school effectiveness and its implications for contemporary educational policies. Educational Review, 50, 147-162. doi:10.1080/0013191980500206

Rutkowski, L., & Rutkowski, D. (2010). Getting it 'better': The importance of improving background questionnaires in international large-scale assessment. Journal of Curriculum Studies, 42, 411-430. doi:10.1080/00220272.2010.487546

Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Thousand Oaks, CA: Sage.

Schreier, M. (2012). Qualitative content analysis in practice. Thousand Oaks, CA: Sage.

Shiel, G., & Eivers, E. (2009). International comparisons of reading literacy: What can they tell us? Cambridge Journal of Education, 39, 345-360. doi:10.1080/03057640903103736

Smith, E. (2008). Pitfalls and promises: The use of secondary data analysis in educational research. British Journal of Educational Studies, 56, 323-339. doi:10.1111/j.1467-8527.2008.00405.x

Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioural sciences. Thousand Oaks, CA: Sage.

Trong, K. L., & Kennedy, A. M. (2007). Reporting PIRLS 2006 questionnaire data. In M. O. Martin, I. V. S. Mullis, & A. M. Kennedy (Eds.), PIRLS 2006 technical report (pp. 195-122). Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College.

Yin, R. K. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage.

Zimmerman, L. (2010). The influence of schooling conditions and teaching practices on curriculum implementation for Grade 4 reading literacy development (Unpublished doctoral thesis). University of Pretoria, Pretoria, South Africa.

Zimmerman, L., Howie, S. J., & Smit, B. (2011). Time to go back to the drawing board: Organisation of primary school reading development in South Africa. Educational Research and Evaluation, 17, 215-232. doi:10.1080/13803611.2011.620339

Zimmerman, L., & Smit, B. (2014). Profiling classroom reading comprehension development practices from the PIRLS 2006 in South Africa. South African Journal of Education, 34, 1-9.