
HKJOT 2009;19(1):20-26

INVITED COMMENTARY

Evidence-based Research in Community Rehabilitation: Design Issues and Strategies

Andrew M.H. Siu1, Daniel T.L. Shek2 and Peter K.K. Poon3

1Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, 2Department of Applied Social Sciences, The Hong Kong Polytechnic University, and 3The Hong Kong Society for Rehabilitation, Hong Kong SAR, China.

Reprint requests and correspondence to: Dr. Andrew M.H. Siu, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong SAR, China. E-mail: rsandsiu@inet.polyu.edu.hk

This review highlights a number of methodological issues that arise when a randomised controlled trial (RCT) is conducted on community rehabilitation programmes. These methodological issues are discussed with reference to examples of evidence-based studies conducted with the Hong Kong Society for Rehabilitation. In conducting RCTs of community rehabilitation programmes, we recommend using randomisation, a control or comparison group, at least single-blinding, and objective outcome measures. We also discuss strategies used to control inter-subject differences, the importance of pilot testing, and follow-up assessments. Qualitative evaluation and process evaluation can provide important evidence for enhancing the quality of programmes and examining why and how programmes either work or do not work. In view of the resources available to community rehabilitation settings, we recommend a combination of four strategies in community trials: (a) quantitative evaluation using experimental or quasi-experimental designs, (b) subjective outcome evaluation, (c) qualitative evaluation, and (d) process outcome evaluation.

KEY WORDS: Chronic illness • Community rehabilitation • Evidence-based

Introduction

Evidence-based practice is a collection of methods designed to integrate research evidence into the clinical reasoning process of health professionals (Law, 2002); it also involves an intensive and judicious review of the best available evidence for specific interventions in clinical practice. While the concept of evidence-based practice has been widely adopted by rehabilitation professionals in Hong Kong, local research evidence in the field of community rehabilitation remains relatively scarce. This paper reviews our experience of evidence-based research on community rehabilitation programmes in Hong Kong, especially with reference to a series of studies conducted with the Hong Kong Society for Rehabilitation (HKSR). We discuss the key methodological issues in conducting clinical trials using experimental designs in community rehabilitation settings. We also suggest strategies for handling these methodological issues and discuss why and how qualitative evaluation methods and process outcome evaluation may enhance the information collected from quantitative outcome research.

Clinical Trials of Community Rehabilitation Programmes

The quality of evidence-based research studies is often judged by the degree to which the study design adheres to the "gold standard" of a randomised controlled trial (RCT). RCTs are true experimental designs in which the researcher can effectively manipulate the independent variables, assign subjects randomly to control and experimental groups, apply a standard intervention and data collection protocol in the research process, and use double-blinding so that both subjects and investigators are unaware of whether a subject belongs to the experimental or control group (Portney & Watkins, 2000). RCTs present strong evidence for guiding practice, because they are designed to examine causal relationships between rehabilitation interventions and outcomes while ruling out alternative explanations of results. In systems for the classification of clinical evidence (such as Scottish Intercollegiate Guidelines Network, 1999), clinical evidence is regarded as more valuable the closer the characteristics of a study are to an RCT.

While RCTs have been widely accepted as providing the best evidence, they also have many known limitations. RCTs tell us the outcome of interventions, but they do not address how a treatment works, why it fails to work, or how it could work better (Marshall, 2002). It is also difficult to conduct RCTs when a clinical condition is rare, or when it is difficult to recruit a sample large enough for randomisation. Moreover, RCTs are set up to test a specific intervention under specific experimental conditions, and practitioners may find it difficult to reproduce such optimal testing conditions in community rehabilitation settings. There is also criticism that statistically significant results obtained from RCTs may not represent clinically significant changes in patients (Marshall, 2002).

In the past few years, we have conducted a number of evidence-based studies to evaluate community rehabilitation programmes with the HKSR, including the Health-in-Action (HIA) programme (Chan, Siu, Poon, & Chan, 2005; Siu, Chan, Chui, & Poon, 2004), and the Arthritis Self-Management programme (Siu & Chui, 2004). We have identified six common issues in the set-up of an experimental design in community rehabilitation settings: (a) set-up of control groups, (b) heterogeneity in research subjects, (c) use of blinding, (d) choice of objective outcome measures, (e) a pilot study, and (f) follow-up assessment.

Randomisation and Set-up of the Control Group

First, we note a number of issues related to the set-up of control groups for clinical trials in community rehabilitation settings. These settings usually recruit programme participants through referrals from a network of professionals, promotional materials, or open seminars. In community settings, it is often quite difficult to convince participants that they may be randomly assigned to a control group after they have volunteered to join the intervention programme. Because of this difficulty in recruiting research participants, we have used comparison groups, which results in quasi-experimental designs. The first example is a clinical trial of the HIA programme (known as the Chronic Disease Self-Management Programme in English-speaking countries), in which we used a Tai Chi interest group for comparison with the HIA. While the randomisation of subjects to experimental and comparison groups still provides good protection against history and maturation effects, it is also well known that Tai Chi groups have a beneficial effect on some aspects of health. The effects of the HIA could therefore be masked in the comparison with Tai Chi. During data analysis, it was thus necessary to conduct single-group analyses as well as comparisons between groups.

In a second study evaluating an arthritis self-management programme (Siu & Chui, 2004), the comparison group was composed of participants who did not join the intervention programme after the initial briefing. We found higher baseline scores among participants in the comparison group than in the experimental group, which may explain why they were less motivated to join the intervention. While we believe that the use of comparison groups is usually more feasible in community trials than control groups, research results based on comparison (or non-equivalent) groups need to be discussed carefully with regard to self-selection biases.
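As an illustration, this kind of self-selection bias can be flagged by a simple baseline comparison before any group effects are interpreted. The following is a minimal sketch in Python, assuming the scipy library; the group sizes, scores, and variable names are hypothetical.

```python
# A minimal sketch (hypothetical data): testing whether the comparison
# group differs from the experimental group at baseline, which would
# signal self-selection bias.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline_experimental = rng.normal(50, 10, 40)  # hypothetical scores
baseline_comparison = rng.normal(56, 10, 35)    # non-joiners scored higher

t, p = stats.ttest_ind(baseline_experimental, baseline_comparison)
print(f"t = {t:.2f}, p = {p:.3f}")
# A significant baseline difference suggests adjusting for the pre-test
# (e.g. with analysis of covariance) rather than comparing raw outcomes.
```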

Control of Inter-subject and Inter-group Differences

To address the needs of people with disabilities, community rehabilitation programmes often try to be as inclusive as possible. Using the HIA project as an example, the selection criteria for the study were that participants should have a chronic illness, be aged 18 or above, and not have attended any self-management or health education programmes in the past 2 years. Under such broad selection criteria, subjects are likely to vary widely in traits (such as age, type of illness, and years of education) that could be linked to the outcome variables. The heterogeneity of research participants can also lead to potential problems in randomisation, such as an imbalance in the baselines of outcome variables between the experimental and control groups. The power of statistical analysis can be lessened as well, because the observed outcome is a compound effect of the intervention and specific subject traits.

Control over inter-subject or inter-group differences can be imposed through several methods, such as adding inclusion and exclusion criteria in subject selection (especially to control for previous exposure to similar programmes), screening subjects according to objective measures, matching subjects before randomisation, and using blocking, a repeated measures design, or analysis of covariance (statistical control) (Portney & Watkins, 2000). In the study of the HIA programme, we found it difficult to add more criteria to subject selection, as this would have made subject recruitment even more difficult. In view of these issues, we used a repeated measures design and matching of subjects (on history of illness and diagnostic group) as the key strategies for reducing the heterogeneity of subjects between the experimental and comparison groups.
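Of these methods, statistical control is often the cheapest to add. Below is a minimal sketch of an analysis of covariance (ANCOVA) that adjusts the group effect for pre-test scores, assuming the statsmodels library; the data frame and its column names are hypothetical.

```python
# A minimal sketch (hypothetical data): ANCOVA adjusting the group
# effect for pre-test scores in a non-equivalent group design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60
df = pd.DataFrame({
    "group": np.repeat(["experimental", "comparison"], n // 2),
    "pre": rng.normal(50, 10, n),
})
# Post-test = baseline + a modest intervention effect + noise.
df["post"] = (df["pre"]
              + np.where(df["group"] == "experimental", 4, 0)
              + rng.normal(0, 5, n))

# The C(group) coefficient is the group effect adjusted for baseline.
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(model.summary().tables[1])
```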

Use of Blinding

Double-blinding refers to a situation in which neither the researcher nor the research participants know to which group a participant has been assigned. It counters the threat to validity that researchers who know the grouping may give biased outcome assessment ratings to participants in the experimental and control groups. In addition, participants who are aware of their own grouping may change in motivation: participants in the control group may try to obtain compensatory treatment or put more effort into self-management to make up for their perceived loss of treatment opportunities. In the studies we carried out, it was not always possible or feasible to implement blinding. Since many community rehabilitation programmes are prolonged, participants can usually work out which group they are in. Blinding of researchers, however, can often be achieved by hiring a research or rehabilitation assistant as an independent assessor of objective outcomes. In our experience, a single-blind study, in which the therapist or researcher is blinded, is often more feasible in research on community rehabilitation programmes.

Choice of Objective Outcome Measures

In selecting appropriate outcome measures, the researcher needs to review the theory behind the design of the programme, identify the expected outcomes of the programme according to the intervention objectives, and review and critique the available outcome measures for the key dependent variables (Finch, 2002). In our studies of the self-management programme, we largely adopted outcome measures developed in English-speaking countries (Lorig et al., 1996). In adopting an overseas instrument, it is necessary to conduct a pilot study to evaluate the quality of the translation, content validity, cultural relevance, and reliability (inter-rater and test-retest; Benson & Clark, 1990). If more resources are available, the researcher may consider recruiting a larger sample to evaluate the factorial structure of the instrument and collect further evidence on construct validity (Kline, 2000).
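For the reliability checks mentioned above, the basic statistics are straightforward to compute. The following is a minimal sketch, assuming hypothetical pilot data for a 10-item instrument administered twice; Cronbach's alpha is computed from its standard formula, and test-retest reliability as a Pearson correlation of total scores.

```python
# A minimal sketch (hypothetical data): internal consistency and
# test-retest reliability for a pilot of a translated instrument.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
items = pd.DataFrame(rng.integers(1, 6, size=(30, 10)))  # 30 subjects x 10 items

def cronbach_alpha(item_scores: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")

# Test-retest: correlate total scores from two administrations.
time1 = items.sum(axis=1)
time2 = time1 + rng.normal(0, 3, len(time1))  # hypothetical retest scores
r, p = stats.pearsonr(time1, time2)
print(f"Test-retest r = {r:.2f}, p = {p:.3f}")
```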

While self-report measures are commonly used in evaluating community-based interventions, objective measures should be collected whenever possible. For example, in self-management or patient education programmes for persons with diabetes, obesity or kidney disease, suitable laboratory and blood tests can be used. On the other hand, global measures such as quality of life, well-being, and general and mental health are often less sensitive to short-term changes in outcome. These measures should be applied carefully and sparingly unless such outcomes are expected from the programme at follow-up.

Last, it is increasingly important to collect information on the cost-savings of community-based programmes. Because of the escalating costs of health care in developed countries, governments and funding bodies are increasingly interested in evidence on the cost-savings of new programmes. While most community service agencies may not have the resources and expertise to conduct a full-scale cost-benefit analysis, they should provide data on how new interventions may reduce hospital stays and visits to specialist clinics, and on whether patients become well enough to fulfil life roles (such as worker or homemaker). While we have collected this information in our studies of self-management programmes for people with chronic illness, we have seldom found significant changes in the utilisation of health care services, as patients are often reluctant to give up their place in the queue for public health care services (which are largely free in Hong Kong).

Pilot Study

Conducting an experimental design in community rehabilitation settings can be quite expensive and involves many changes to regular practices in administration, intervention, and evaluation. Our experience shows that a pilot test is essential to the success of a full-scale experiment in community settings. In a typical pilot test, the researcher conducts the study with a smaller sample using designs such as single-group pretest post-test designs, multiple baseline designs, or single-case study designs (Ottenbacher, 1986; Todman & Dugard, 2001). During the pilot test, the researcher may wish to determine whether the intervention programmes and data collection procedures run smoothly, to collect data on the reliability of the instruments, and to obtain effect sizes for estimating the required sample size. Pilot tests are also helpful for showing the workers involved that the experiment is feasible and for increasing the confidence of administrators that a bigger investment in the study is worthwhile.
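To make the sample-size step concrete, here is a minimal sketch, assuming the statsmodels power module and hypothetical pilot scores: it derives Cohen's d from pilot experimental and comparison groups, then solves for the number of subjects per group at 80% power.

```python
# A minimal sketch (hypothetical pilot scores): estimate Cohen's d from
# pilot groups, then solve for the sample size needed at 80% power.
import numpy as np
from statsmodels.stats.power import TTestIndPower

pilot_experimental = np.array([48, 49, 45, 55, 50, 51, 46, 53])
pilot_comparison = np.array([42, 45, 39, 50, 47, 44, 41, 48])

diff = pilot_experimental.mean() - pilot_comparison.mean()
pooled_sd = np.sqrt((pilot_experimental.var(ddof=1)
                     + pilot_comparison.var(ddof=1)) / 2)
d = diff / pooled_sd  # Cohen's d with a pooled standard deviation

n_per_group = TTestIndPower().solve_power(effect_size=d, power=0.80, alpha=0.05)
print(f"Cohen's d = {d:.2f}; about {int(np.ceil(n_per_group))} subjects per group")
```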

Follow-up Assessment

Follow-up assessments are often conducted in evidence-based research on community rehabilitation programmes for several reasons. First, the outcomes of many community rehabilitation programmes may take time to consolidate. For example, in the HIA programme, participants may need time to practise implementing their exercise plans and relaxation activities and to develop these habits gradually. Second, adding follow-up measurements to the research design can substantially increase the power of statistical analysis. In a repeated measures design, subjects become their own controls and we can estimate the changes over time in the within-subject effects. In most cases, this increases the power of the F test for the between-group effects.
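As a minimal sketch of such a within-subject analysis, the example below runs a repeated measures ANOVA across pre-test, post-test, and follow-up, assuming the statsmodels AnovaRM class; the long-format data are hypothetical.

```python
# A minimal sketch (hypothetical data): repeated measures ANOVA over
# pre-test, post-test and follow-up, with each subject as their own control.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
n = 20
df = pd.DataFrame({
    "subject": np.tile(np.arange(n), 3),
    "time": np.repeat(["pre", "post", "follow_up"], n),
    # Mean scores drift upwards across the three assessments.
    "score": np.concatenate([rng.normal(m, 5, n) for m in (50, 56, 58)]),
})

result = AnovaRM(df, depvar="score", subject="subject", within=["time"]).fit()
print(result)
```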

A key design issue is determining the timing of the follow-up assessment. Extended follow-up assessment can be expensive to implement, and attrition of subjects usually increases over time. To determine the timing of follow-up assessments, researchers need to consider carefully how the expected outcomes may change over the follow-up period. Attrition may be reduced by building rapport with the therapist, fostering cohesiveness among group participants, or consolidating the commitment of participants at the beginning of the programme. When attrition does occur, researchers should explore any systematic differences in profile, outcomes or satisfaction between those who drop out and those who continue to participate. The researcher might also consider using intention-to-treat analysis if needed.
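Both of these checks can be scripted simply. Below is a minimal sketch, assuming hypothetical data: it compares dropouts with completers at baseline, then applies last-observation-carried-forward (one simple imputation approach sometimes used in intention-to-treat analysis).

```python
# A minimal sketch (hypothetical data): compare dropouts with completers
# at baseline, then carry the last observation forward (LOCF) so that
# all randomised subjects stay in an intention-to-treat analysis.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(4)
n = 50
baseline = rng.normal(50, 10, n)
dropped_out = rng.random(n) < 0.2  # roughly 20% attrition

t, p = stats.ttest_ind(baseline[dropped_out], baseline[~dropped_out])
print(f"dropouts vs completers at baseline: t = {t:.2f}, p = {p:.3f}")

# Long-format scores; dropouts are missing after the first assessment.
long = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 3),
    "time": np.tile([0, 1, 2], n),
    "score": (np.repeat(baseline, 3) + np.tile([0, 4, 5], n)
              + rng.normal(0, 3, 3 * n)),
})
long.loc[np.repeat(dropped_out, 3) & (long["time"] > 0), "score"] = np.nan
long["score"] = long.groupby("subject")["score"].ffill()  # LOCF imputation
```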

Strategies for the Design of Evidence-based Research for Community Rehabilitation Programmes

Summing up the previous discussion on the design issues of community trials, we recommend the combination of four strategies for collecting evidence in community-based rehabilitation: (a) quantitative evaluation using experimental designs, (b) subjective outcome evaluation, (c) qualitative evaluation, and (d) process evaluation.

Strategy 1: Quantitative Evaluation Using Experimental Designs

Quantitative evaluation using experimental designs provides key evidence in support of community rehabilitation programmes. RCTs should be attempted whenever possible, because they provide strong evidence for new programmes. In view of the practical difficulties of setting up an RCT, the use of quasi-experiments is widespread among clinical trials in community rehabilitation services (McCall, Green, Strauss, & Groark, 1998).

Among the compendium of potentially useful quasi-experimental designs (Reichardt & Mark, 1998), non-equivalent group designs appear to be the most relevant and feasible for community clinical trials. In non-equivalent group designs, in which participants are placed into groups on a non-random basis, threats to internal and construct validity are the major design issues. Non-equivalent group designs provide partial protection against history and maturation effects but are prone to self-selection bias. On the whole, several design strategies may be applied to address threats to validity in quasi-experiments (Murray, Moskowitz, & Dent, 1996): (a) a priori matching or stratifying groups before intervention; (b) equating non-equivalent groups on the pre-test of outcome measures using analysis of covariance; (c) using objective outcome measures; (d) hiring independent evaluation personnel who are blind to the grouping of research participants; and (e) employing different research methodologies and using triangulation to examine whether results converge.

With regard to data analysis, general linear models (GLMs) have been widely employed in community group trials (Varnell, Murray, Janega, & Blitstein, 2004). It is also worth noting that recent advances in linear mixed models and growth modelling may offer a promising strategy for analysing and interpreting data from non-equivalent group designs (Bijleveld et al., 1998); a sketch follows below.
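As a minimal sketch of the mixed-model approach, the example below fits a random-intercept model with a group-by-time interaction, assuming the statsmodels mixedlm function; the data, which simulate a faster rate of change in the experimental group, are hypothetical.

```python
# A minimal sketch (hypothetical data): a linear mixed model with a
# random intercept per subject; the time:group interaction asks whether
# the experimental group improves at a faster rate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for group, slope in (("experimental", 3.0), ("comparison", 0.5)):
    for s in range(40):
        intercept = rng.normal(50, 8)  # subject-specific starting level
        for t in (0, 1, 2):  # pre, post, follow-up
            rows.append({"subject": f"{group}_{s}", "group": group,
                         "time": t,
                         "score": intercept + slope * t + rng.normal(0, 4)})
df = pd.DataFrame(rows)

model = smf.mixedlm("score ~ time * C(group)", data=df, groups=df["subject"])
print(model.fit().summary())
```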

Strategy 2: Subjective Outcome Evaluation

Subjective outcome evaluation should be conducted for community rehabilitation programmes; the major areas of evaluation include satisfaction with the programme and its perceived outcomes. With regard to satisfaction, questions can be designed to assess the perceived relevance of the programme to participants' needs, and satisfaction with the intervention or teaching-learning process, with the worker, and with the arrangement of the programme. In terms of perceived outcomes, the researcher can ask participants the degree to which they think the objectives of the programme have been achieved and what they perceive the benefits of the programme to be. Both the research participants and the rehabilitation workers should complete a subjective outcome evaluation questionnaire to assess these aspects of the programme (Shek, Ng, Lam, Lam, & Yeung, 2003), which in turn can provide valuable information for programme enhancement and development.

Strategy 3: Qualitative Evaluation

Qualitative studies can answer questions such as "What are the changes and do they exist?" as well as "questions of process" (Williams, 2002), and provide important information on the outcomes and process of an intervention (Shek, Tang, & Han, 2005). In the evidence-based studies we have carried out, we used focus groups as the main tool of qualitative evaluation. In-depth interviews were conducted for programmes run in an individual mode, such as the weight management programme or the self-hypnosis workshops offered by the HKSR. The interview schedule for focus group or in-depth interviews usually includes questions on initial perceptions of the programme before joining, reasons for joining the programme, perceived benefits and limitations, the teaching-learning or intervention process, personal factors affecting the success of the programme, satisfaction with the programme (such as its delivery or arrangement), and what may be missing from the programme (unmet needs). If resources allow, it is most helpful to have focus group interviews conducted with the participants before and after the intervention by the same interviewer. In addition to a list of semi-structured questions, we find that programme pamphlets and flip charts are very useful memory aids in focus group interviews. Furthermore, participants are encouraged to retain their programme workbooks, log books, action plans or self-monitoring records (diaries) as aids to memory and for sharing their experience with others in the discussion.

To prevent bias in responding, it is desirable that participants in intervention groups be interviewed by a researcher who is not involved in delivering the intervention. We have often noted that participants who perceive the programme as more beneficial and who are more expressive tend to be more motivated to participate in focus group evaluations. There could thus be a substantial positive bias when the perceived benefits of programmes are discussed in focus groups. Therefore, instead of focusing only on the benefits or success of the programme, the researcher may wish to devote more time and effort to understanding how and why the programme works, especially with regard to the change process leading to the positive outcomes.

Strategy 4: Process Evaluation

Process evaluations are very helpful for programme development, monitoring of programme quality, and evaluation, and should go hand-in-hand with outcome evaluation (Chen, 1996). Researchers, clinicians, participants, caregivers, workers, administrators, and other stakeholders can be invited to take part in focus group discussions and individual interviews to give their views on the programme's design and implementation. Regular process evaluation in the form of meetings, contacts, interviews, focus groups, and surveys can be conducted with the workers and instructors to help in understanding the details of implementation. Based on the views collected, issues of concern can be discussed and resolved, and proposals for changes in programme content and implementation can be identified. In community rehabilitation programmes, many process variables are known to confound programme outcomes (Markham, Basen-Engquist, Coyle, Addy, & Parcel, 2002), such as (a) inadequate training of workers, (b) low fidelity of programme implementation, (c) inadequate dosage of the programme delivered, (d) low motivation of participants, and (e) concurrent and previous exposure to other similar programmes. Several strategies can be implemented to minimise the adverse effects of these problems on the programme's effects. With regard to inadequate training, all rehabilitation workers should be required to receive standardised training on leading the programme before they can implement it. Concerning programme fidelity and dosage, information (e.g. actual hours of delivery and degree of programme fidelity) can be collected for self-monitoring and self-reflection. With reference to motivation, researchers need to plan ahead for ways to motivate workers and participants. Briefing sessions should be conducted before the start of programmes to promote the interest of participants and workers, and to help participants develop appropriate expectations of the programme.

In studies in which a large number of process variables may confound outcomes, the researcher should first conduct bivariate analyses of pairs of process and outcome variables (such as correlation and simple linear regression) before attempting multivariate analyses. The influence of a process variable can then be evaluated as a covariate in a GLM, or mediator and moderator analyses can be conducted.
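The following is a minimal sketch of this two-step approach, assuming hypothetical process variables (programme dosage in hours attended, and an implementation fidelity rating): a bivariate screen with Pearson correlations, followed by an OLS model that enters the salient process variable as a covariate.

```python
# A minimal sketch (hypothetical process variables): a bivariate screen
# with correlations, then the salient process variable as a covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(6)
n = 80
df = pd.DataFrame({
    "group": np.repeat(["experimental", "comparison"], n // 2),
    "attendance_hours": rng.normal(12, 3, n),    # programme dosage
    "fidelity_rating": rng.normal(0.8, 0.1, n),  # implementation fidelity
})
df["outcome"] = 2.0 * df["attendance_hours"] + rng.normal(0, 5, n)

# Step 1: bivariate screen of each process variable against the outcome.
for var in ("attendance_hours", "fidelity_rating"):
    r, p = stats.pearsonr(df[var], df["outcome"])
    print(f"{var}: r = {r:.2f}, p = {p:.3f}")

# Step 2: enter the salient process variable as a covariate in a GLM.
model = smf.ols("outcome ~ C(group) + attendance_hours", data=df).fit()
print(model.params)
```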

Process evaluation can also be used to test theories about the change mechanisms underlying an intervention. For example, we used social cognitive theories to theorise the change process in self-management of chronic illness. In one study (Chan, Chan, & Siu, 2004), we showed that the cultivation of social norms in HIA group programmes led to changes in values and attitudes towards self-management. In another study (Lung, Siu, Yau, & Chan, 2005), we showed that group dynamics (such as cohesiveness, interpersonal learning, and universality) induced positive changes in the self-efficacy and self-management behaviour of individual members. The evidence gathered from analysing the relationships between process variables and changes in outcomes can help to test the theory of change and highlight the factors contributing to the change process of the intervention.
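One simple way to probe such a change mechanism quantitatively is a mediation analysis. Below is a minimal sketch, assuming hypothetical data in which the intervention is expected to improve self-management behaviour through gains in self-efficacy; the indirect effect is estimated by the product-of-coefficients method with a bootstrap confidence interval.

```python
# A minimal sketch (hypothetical data): does the intervention improve
# self-management behaviour through gains in self-efficacy? Indirect
# effect = a * b, with a bootstrap confidence interval.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 100
df = pd.DataFrame({"group": rng.integers(0, 2, n).astype(float)})
df["self_efficacy"] = 2.0 * df["group"] + rng.normal(0, 1, n)
df["behaviour"] = 1.5 * df["self_efficacy"] + rng.normal(0, 1, n)

def indirect_effect(data: pd.DataFrame) -> float:
    # a: intervention -> mediator; b: mediator -> outcome, controlling for group
    a = smf.ols("self_efficacy ~ group", data=data).fit().params["group"]
    b = smf.ols("behaviour ~ self_efficacy + group",
                data=data).fit().params["self_efficacy"]
    return a * b

boot = [indirect_effect(df.sample(frac=1.0, replace=True)) for _ in range(500)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```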

Conclusion

The quality of evidence in research on community rehabilitation programmes has typically been assessed in terms of how closely an experiment adheres to the gold standard of the RCT. Given the practical difficulties of setting up RCTs in community settings, researchers should, wherever possible, randomise subjects, use control or comparison groups, apply at least single-blinding, and employ objective outcome measures to limit the key threats to validity. To control inter-subject or inter-group differences, researchers should carefully review their selection and exclusion criteria, screen subjects, and apply matching or blocking procedures. Follow-up assessments are essential in evaluating community-based programmes, and they should be carefully planned in view of the expected outcomes, the available resources, and rates of attrition over time. Pilot testing should be conducted, since it gives the researcher opportunities to estimate the effect size and required sample size, evaluate the psychometric properties of instruments, and increase the acceptance of the study by administrators and frontline workers.

Quasi-experimental designs, in particular non-equivalent group designs with repeated measurements at follow-up, are the key alternatives to RCTs in community settings. Because of the lack of randomisation, selection biases are a major threat to quasi-experimental designs. Researchers might consider a priori matching or stratification of groups before intervention, equating non-equivalent groups on pre-tests of outcome measures using analysis of covariance, and employing mixed research methodologies (qualitative and quantitative) to examine the convergence of results. Furthermore, recent advances in linear mixed models and growth curve analysis may provide an alternative method of analysis to the commonly used repeated measures analysis of variance and GLMs.

We recommend using a combination of four strategies in community trials: (a) quantitative evaluation using experimental or quasi-experimental designs, (b) subjective outcome evaluation, (c) qualitative evaluation, and (d) process outcome evaluation. Qualitative, subjective and process evaluations are not merely adjuncts to quantitative evaluation, but can provide important evidence for enhancing the quality of programmes. Qualitative and process evaluations are particularly helpful in examining the underlying change processes of a programme and process variables that need further monitoring. Subjective evaluations can be conducted to collect perceptions and satisfaction ratings of stakeholders, which in turn can provide important information for programme development and quality improvement.

References

Benson, J., & Clark, F. (1990). A guide for instrument development. In the American Occupational Therapy Foundation, Readings in occupational therapy research (pp. 275-283). Rockville, MD: American Occupational Therapy Foundation.

Bijleveld, C., van der Kamp, L., Mooijaart, A., van der Kloot, W. A., van der Leeden, R., & van der Burg, E. (1998). Longitudinal data analysis. London: Sage.

Chan, S. C. C., Chan, C. C. H., & Siu, A. M. H. (2004, December). Preliminary report on factors influencing outcomes of the Chronic Disease Self-Management Programme in Hong Kong. Paper presented at the Keys to Positive Living conference: Self-managing health and chronic illness, Hong Kong.

Chan, S. C. C., Siu, A. M. H., Poon, P. K. K., & Chan, C. C. H. (2005). Chronic disease self-management programme for Chinese patients: A preliminary multi-baseline study. International Journal of Rehabilitation Research, 28, 351-354.

Chen, H. T. (1996). A comprehensive typology for programme evaluation. Evaluation Practice, 17, 121-130.

Finch, E. (2002). Physical rehabilitation outcome measures: A guide to enhanced clinical decision making (2nd ed., pp. 6-25). Hamilton, Ontario: BC Decker.

Kline, P. (2000). A psychometrics primer. London: Free Association Books.

Law, M. (Ed.). (2002). Evidence-based rehabilitation: A guide to practice. Thorofare, NJ: Slack.

Lorig, K., Stewart, A., Ritter, P., Gonzalez, V., Laurent, D., & Lynch, J. (1996). Outcome measures for health education and other health care interventions. Thousand Oaks, CA: Sage.

Lung, G. P. Y., Siu, A. M. H., Yau, M. K. S., & Chan, C. C. H. (2005). Group process and therapeutic factors in Chronic Disease Self-Management Programme (CDSMP). Paper presented at The New Perspectives—International Conference on Patient Self-Management, Victoria, Canada.

Markham, C. M., Basen-Engquist, K., Coyle, K. K., Addy, R. C., & Parcel, G. S. (2002). Safer choices, a school-based HIV, STD, and pregnancy prevention programme for adolescents: Process evaluation issues related to curriculum implementation. In A. Steckler & L. Linnan (Eds.), Process evaluation for public health interventions and research (pp. 209-248). San Francisco, CA: Jossey-Bass.

Marshall, M. (2002). Randomised clinical trials—misunderstanding, fraud and spin. In S. Priebe & M. Slade (Eds.), Evidence in mental health care (pp. 59-71). New York: Brunner-Routledge.

McCall, R. B., Green, B. L., Strauss, M. S., & Groark, C. J. (1998). Issues in community-based research and programme evaluation. In I. E. Siegel & K. A. Renninger (Eds.), Handbook of child psychology: Vol. 4. Child psychology in practice (5th ed., pp. 955-997). New York: John Wiley.

Murray, D. M., Moskowitz, J. M., & Dent, C. W. (1996). Design and analysis issues in community-based drug abuse prevention. American Behavioral Scientist, 39, 853-867.

Ottenbacher, K. J. (1986). Evaluating clinical change: Strategies for occupational and physical therapists. Baltimore, MD: Williams & Wilkins.

Portney, L. G., & Watkins, M. P. (2000). Foundations of clinical research: Applications to practice (2nd ed., pp. 152-169). Upper Saddle River, NJ: Prentice-Hall.

Reichardt, C. S., & Mark, M. M. (1998). Quasi-experimentation. In L. Bickman & D. Rog (Eds.), Handbook of applied social research (pp. 193-228). Newbury Park, CA: Sage.

Shek, D. T. L., Ng, H. Y., Lam, C. W., Lam, O. B., & Yeung, K. C. (2003). A longitudinal evaluation study of a pioneering drug prevention program (Project Astro MIND) in Hong Kong. Hong Kong: Beat Drugs Fund Association.

Shek, D. T. L., Tang, V., & Han, X. Y. (2005). Quality of qualitative evaluation studies in the social work literature: Evidence that constitutes a wakeup call. Research on Social Work Practice, 15, 180-194.

Scottish Intercollegiate Guidelines Network (1999). An introduction to SIGN methodology for the development of evidence-based clinical guidelines. Edinburgh: Author.

Siu, A. M. H., Chan, C. C. H., Chui, D. Y. Y., & Poon, P. K. K. (2004, December). Evaluation of the Chronic Disease Self-Management Programme: Preliminary results. Paper presented at the Keys to Positive Living conference: Self-managing health and chronic illness, Hong Kong.

Siu, A. M. H., & Chui, D. Y. Y. (2004). Evaluation of a community rehabilitation service for people with rheumatoid arthritis. Patient Education and Counseling, 55, 62-69.

Todman, J. B., & Dugard, P. (2001). Single-case and small-n experimental designs: A practical guide to randomization tests. Mahwah, NJ: Lawrence Erlbaum Associates.

Varnell, S. P., Murray, D. M., Janega, J. B., & Blitstein J. L. (2004). Design and analysis of group-randomized trials: A review of recent methodological developments. American Journal of Public Health, 94, 423-432.

Williams, B. (2002). The role of qualitative research methods in evidence-based mental health care. In S. Priebe & M. Slade (Eds.), Evidence in mental health care (pp. 109-125). New York: Brunner-Routledge.