
Running head: Trial stage, developer involvement and international transferability: Meta-analysis

The impact of trial stage, developer involvement and international transferability on universal social and emotional learning programme outcomes: A meta-analysis

Wigelsworth, M., Lendrum, A., Oldfield, J., Scott, A., ten Bokkel, I., Tate, K., & Emery, C.



The impact of trial stage, developer involvement and international transferability on universal social and emotional learning programme outcomes: A meta-analysis

Abstract

This study expands upon the extant meta-analytic literature by exploring previously theorised reasons for the failure of school-based, universal social and emotional learning (SEL) programmes to produce expected results. Eighty-nine studies reporting the effects of school-based, universal SEL programmes were examined for differential effects on the basis of: 1) stage of evaluation (efficacy or effectiveness); 2) involvement from the programme developer in the evaluation (led, involved, independent); and 3) whether the programme was implemented in its country of origin (home or away). A range of outcomes was assessed, including: social-emotional competence, attitudes towards self, pro-social behaviour, conduct problems, emotional distress, academic achievement and emotional competence. Differential gains across all three factors were shown, although not always in the direction hypothesised. The findings from the current study demonstrate a revised and more complex relationship between the identified factors and dictate major new directions for the field.

Key words: meta-analysis, socio-emotional, efficacy, developer, transferability

Literature Review

There is an emerging consensus that the role of the school should include supporting children’s emotional education and development (Greenberg, 2010; Weare, 2010). This is often accomplished through the implementation of universal social and emotional learning (SEL) programmes which aim to improve learning, promote emotional well-being, and prevent problem behaviours through the development of social and emotional competencies (Elias et al., 2001; Greenberg et al., 2003).



What is SEL?

Social and emotional learning (SEL) is represented by the promotion of five core competencies: self-awareness; self-management; social awareness; relationship skills; and responsible decision-making (Collaborative for Academic, Social and Emotional Learning, 2002). Although this broad definition encompasses many aspects of the effective promotion of SEL, it does little to differentiate or identify the ‘essential ingredients’ of a programme of change. As a result, SEL is implemented through a variety of formats, with differing levels of training and support, varying degrees of intensity, and variation in the relative importance placed on each of the five core competencies. However, most SEL programmes typically feature an explicit taught curriculum, delivered by teachers (with or without additional coaching and technical support) during school hours (examples of SEL programmes can be seen at casel.org).

Currently, a wide range of SEL programmes feature in schools and classrooms across the world, including in the USA (e.g. Greenberg, Kusche, Cook, & Quamma, 1995), Australia (e.g. Graetz et al., 2008), across Europe (e.g. Holsen, Smith, & Frey, 2008), and in the UK (e.g. DfES, 2007). Although meta-analytic approaches are a poor tool for assessing the complexity of any specific curriculum intervention or context, they are required to support the hitherto theoretical assumption that SEL competencies are universally teachable. Indeed, recent meta-analyses in the United States (Durlak, Weissberg, Dymnicki, Taylor, & Schellinger, 2011) and the Netherlands (Sklad, Diekstra, Ritter, & Ben, 2012) have been used to suggest that high quality, well-implemented universal SEL interventions, designed to broadly facilitate a range of intra- and inter-personal competencies, can lead to a range of salient outcomes, including improved social and emotional skills, school attitudes and academic performance, and reduced mental health difficulties (Durlak, Weissberg, Dymnicki, Taylor, & Schellinger, 2011; Sklad, Diekstra, Ritter & Ben, 2012; Wilson & Lipsey, 2007). However, individual SEL programmes are not always able to produce the same impressive results indicated by these meta-analyses when adopted and implemented by practitioners in schools (Social and Character Development Research Consortium, 2010).

Research in prevention science suggests a number of possible reasons for this discrepancy, including implementation failure (Durlak & DuPre, 2008), a reliance on the results of ‘early’ trials focusing on the internal logic of interventions rather than their ‘real world’ applicability (Flay et al., 2005), developer involvement in trials (Eisner, 2009), and a lack of cultural transferability of interventions (Castro, Barrera, & Martinez, 2004). Although implementation fidelity is now recognised as an important feature in the successful delivery of SEL programmes (and was included in Durlak et al.’s 2011 meta-analysis), there has been no similar empirical consideration of the other factors. Underlying such explanations is an implicit assumption of a degree of invariance, or a ‘treatment’ approach, in the implementation of SEL programmes. Many consumers of educational research will recognise a ‘medical model’ of evaluation (typically involving experimental designs), an approach which is not without debate (for a brief summary see Evans and Benefield (2001)). Accordingly, prior research in educational evaluation has noted that such an approach is potentially limited, as the associated methodologies for investigation (neatly described by Elliott and Kushner (2007) as the “statistical aggregation of gross yield” (p. 324)) will fail to capture the complexities of the interactions within specific contexts required to explain findings. Indeed, a lack of attention to process and implementation has been noted in this particular field (Lendrum & Humphrey, 2010). However, suggested alternative directions (e.g. anthropological, illuminative, or case-study approaches (ibid)) can fail to capture the prevalence or magnitude of trends and, in their own way, also fail to uncover important lessons as to the successful implementation of educational interventions. Therefore, there is an opportunity to extend prior influential work (i.e. Durlak, Weissberg, Dymnicki, Taylor, & Schellinger, 2011; Sklad, Diekstra, Ritter, & Ben, 2012) utilising meta-analytic approaches, to examine key indicators potentially influencing successful outcomes and, in doing so, to consider the extent to which such techniques are useful in this context.



Key indicators

In attempting to explain the high degree of variation between programmes in achieving successful outcomes for children, this article now considers the rationale for exploring theoretically important (as conceptualised by Lendrum and Wigelsworth (2013)) but often ignored factors, and hypothesises their likely effect on SEL programme outcomes.



Stage of evaluation: Efficacy vs. effectiveness

An important (and often promoted) indicator as to the potential success of a programme is its history of development and evaluation. Ideally, an intervention should be tested at several stages between its initial development and its broad dissemination into routine practice (Greenberg, Domitrovich, Graczyk, & Zins, 2005) and frameworks have been provided in recent literature to enable this. For instance, drawn from evaluations of complex health interventions, Campbell et al. (2000) provide guidance on specific sequential phases for developing interventions: developing theory (pre-phase), modelling empirical relationships consistent with intended outcomes (phase I), exploratory trialling (phase II), randomised control trials under optimum conditions (phase III), and long term implementation in uncontrolled settings (phase IV). An intervention should pass through all phases to be considered truly effective and evidence-based (Campbell et al., 2000).

An important distinction in Campbell et al.’s framework is the recognition that interventions are typically first ‘formally’ evaluated under optimal conditions of delivery (phase III) (more broadly referred to as efficacy trials (Flay, 1986)), such as with the provision of highly-trained and carefully supervised implementation staff. Subsequent to this, a programme may be tested in more ‘real world’ or naturalistic settings, using just the staff and resources that would normally be available. This is aligned to Campbell et al.’s phase IV and is commonly known as an effectiveness trial (Dane & Schneider, 1998; Flay et al., 2005). Although both types of trial may utilise similar overarching research designs (e.g. quasi-experimental or randomised designs), they seek to answer different questions about an intervention. Whereas efficacy studies are typically conducted to demonstrate the efficacy and internal validity of a programme, effectiveness studies test whether and how an intervention works in real-world contexts (Durlak, 1998; Greenberg et al., 2005). This also allows identification of factors that may influence the successful adoption, implementation, and sustainability of interventions when they ‘go to scale’ (Greenberg, 2010), for instance by highlighting additional training needs or workload allocation pressures. Thus, a programme that demonstrates success at the efficacy stage may not yield similar results under real-world conditions. Indeed, research indicates that practitioners in ‘real-world’ settings are generally unable to duplicate the favourable conditions and access the technical expertise and resources that were available to researchers and programme developers at the efficacy stage (Greenberg et al., 2005; Hallfors & Godette, 2002), and thus fail to implement programmes to the same standard and achieve the same outcomes (Durlak & DuPre, 2008).

An example of this distinction is an effectiveness trial of the Promoting Alternative Thinking Strategies (PATHS) curriculum in the Netherlands (Goossens et al., 2012), a programme closely aligned to the general descriptors provided in the introduction. The study failed to replicate the expected effects demonstrated in earlier efficacy trials (e.g. Greenberg et al., 1995). The trial adopted an implementation strategy that allowed for high degrees of adaptation, which the authors concluded was “not a recipe for effective prevention of problem behavior on a large scale” (p. 245).

Despite calls for prevention programmes to be tested in multiple contexts before they are described as ‘evidence-based’ (Kumpfer, Magalhães, & Xie, 2012), there is, as yet, little clarification in the SEL literature (including major reviews) regarding the stage of evaluation of programmes and whether those classified as ‘successful’ or ‘exemplary’ have achieved this status on the basis of efficacy alone or have also undergone effectiveness trials.

The involvement of programme developers

There are many logical reasons why the developer of a specific SEL intervention would also conduct evaluation trials, especially during the efficacy phase (as above). However, there is evidence from associated fields to suggest that the involvement of the programme developer in an evaluation may be associated with considerably larger effects (Eisner, 2009). For example, in a review of psychiatric interventions, studies where a conflict of interest was disclosed (e.g. programme developers were directly involved in the study) were nearly five times more likely to report positive results when compared to truly ‘independent’ trials (Perlis et al., 2005). Similarly, an independent effectiveness study of Project ALERT, a substance abuse prevention programme, failed to find positive outcomes, despite successful efficacy and effectiveness studies conducted by the programme developer (St. Pierre et al., 2005).

Eisner (2009) posits two possible explanations for this phenomenon. The cynical view proposes that the more favourable results in developer-led trials stem from systematic biases that influence decision-making during a study. Alternatively, the high-fidelity view argues that implementation of a given intervention is of a higher quality in studies in which the programme developer is involved, leading to better results. In either case, developer involvement leads to an inflation of outcome effect sizes compared to those which might be expected from ‘real world’ implementation of a given programme. The obvious consequence of such an effect is the inherent difficulty of replicating expected effects in any wider dissemination or ‘roll out’ of the programme. If the intended outcomes of an intervention can only be achieved when the programme developer is available to enforce the highest levels of fidelity, then its broad dissemination and sustainability across multiple settings is unlikely to be feasible. Despite Eisner’s observations, recent reviews and meta-analyses of SEL programmes do not distinguish between evaluations conducted by external researchers and those led by, or conducted with the involvement of, programme developers or their representatives.

Cultural transferability

Issues of cultural transferability have particular implications for SEL programmes. This is because perceived success in the context of the USA (around 90% of the studies included in Wilson and Lipsey’s (2007) and Durlak et al.’s (2011) reviews originated there) has resulted in rapid global dissemination and adoption of SEL programmes. For instance, PATHS (Greenberg & Kusché, 2002), Second Step (Committee for Children, 2011), and Incredible Years (Webster-Stratton, 2011) have been adopted and implemented across the world (e.g. Henningham, 2013; Holsen et al., 2008; Malti, Ribeaud & Eisner, 2011).

International transfers of programmes provide valuable opportunities to examine aspects of implementation, with special regard to the fidelity-adaptation debate (Ferrer-Wreder, Adamson, Kumpfer, & Eichas, 2012). This is because a major factor in the successful transportability of interventions is their adaptability (Castro, Barrera, & Martinez, 2004). Accepting the view that successful outcomes may rely on at least some adaptation to fit the cultural needs, values, and expectations of adopters even within countries of origin (Castro, Barrera & Martinez, 2004), the complexities of international transferability across countries become apparent. Adaptations vary: although surface-level changes (e.g. modified vocabulary, photographs, or names) may be beneficial and enhance cultural acceptability, deeper structural modifications (e.g. different pedagogical approaches or modified programme delivery) may compromise the successful implementation of the critical components of an intervention. This may have serious negative consequences, to the extent that change is not triggered and the outcomes of the programme are not achieved. Indeed, there is arguably the potential for programmes to be adapted to cultural contexts to such an extent that they become, in effect, new programmes requiring re-validation, ideally through the use of an evidence framework such as Campbell et al.’s (2000), in order to test the underlying programme theory and internal validity.

Unsurprisingly, findings for adopted or imported programmes are mixed. For instance, a number of studies report null results for ‘successful’ USA programmes transported into the UK, including anti-bullying programmes (Ttofi, Farrington, & Baldry, 2008) and sex and drugs education (Wiggins et al., 2009). Some programmes show mixed success when transferred internationally (such as PATHS), reporting varying levels of success from null results in England (Little et al., 2012) to mixed effects in Switzerland, where treatment effects were identified for only some outcomes (Malti, Ribeaud, & Eisner, 2011). Conversely, the SEL programme ‘Second Step’ (Committee for Children, 2011) has been shown to have positive effects across several sites in the USA (e.g. Cooke et al., 2007; Frey, Nolen, Edstrom, & Hirschstein, 2005) and in Europe (Holsen et al., 2008). Therefore, there are questions as to the extent to which programmes can achieve the same intended outcomes when transported to countries with different education systems, pedagogical approaches, and cultural beliefs.



Research Questions and Aims

This study is the first of its type to examine the potential effects of the identified factors on the outcomes of universal school-based programmes. To date, previous reviews have been limited, reporting on a restricted palette of intervention types (Durlak et al., 2011), on the main effects of SEL programmes only (Sklad et al., 2012), or on the effects of programme variables themselves (Durlak et al., 2011). The purpose of the current study is to build upon this prior work to assess the extent to which meta-analytical techniques can help explain inconsistencies in demonstrating positive programme outcomes. Given the variability identified in the preceding review, the relative usefulness of meta-analytical techniques will also be considered.

As previous reviews have already established positive main effects across a variety of skills and behaviours, our hypotheses focus on the differential effects of the categories identified through the literature review, specifically:

1) Studies coded as ‘efficacy’ will show larger effect sizes compared to those coded as ‘effectiveness’

2) Studies in which the developer has been identified as leading or being involved will show larger effect sizes in relation to independent studies

3) Studies implemented within the country of development (home) will show larger effect sizes than those adopted and implemented outside country of origin (away)



Methods

A meta-analytic methodology was adopted to address the study hypotheses, in order to ensure an unbiased, representative, and high-quality process of review. ‘Cochrane protocols for systematic reviews of interventions’ (Higgins & Green, 2008) were adopted for the literature searches, coding, and analytical strategy. To address the common issue of comparing clinically diverse studies (‘apples with oranges’), outcome categories were classified on the basis of prior work in the field (e.g. Durlak et al., 2011; Sklad et al., 2012; Weare & Nind, 2010; Wilson & Lipsey, 2007) and analysed separately.

For the purposes of the current study, we have co-opted Denham’s (2005) framework of social and emotional competence. This is an extremely close fit to the five SEL competencies and provides an additional layer of specification to ensure specific outcomes are identifiable alongside broader measures of SEL.

***Table 1***

Literature search

Four search strategies were used to obtain literature. First, relevant studies were identified through searches of major scientific databases, specifically: ASSIA, CINAHL, the Cochrane Database of Systematic Reviews, EMBASE, ERIC, MEDLINE, NICE, and PsycINFO, with additional web searching using Google Scholar. Second, a number of journals most likely to contain SEL-based publications were also searched, for instance Prevention Science, Psychology in the Schools, and School Psychology Review. Third, organisational websites promoting SEL were searched to identify additional studies (e.g. casel.org). For all searches, the following key terms were used in different combinations to help maximise the search results:

“SEL, social, emotional, wellbeing, mental health, intervention, programme, promotion, initiative, pupil, school, impact, effect, outcome, evaluation, effectiveness, scale, efficacy, pilot, independent, developer”.

Fourth, the reference list of each identified study was reviewed.



Inclusion criteria

Studies eligible for the meta-analysis were: a) written in English; b) appeared in published or unpublished form between 01 January 1995 and 01 January 2013; c) detailed an intervention that included the development of one or more core SEL components as defined by Denham (2005); d) delivered on school premises, during school hours; e) delivered to students aged 4 – 18 years; f) detailed an intervention that was universal (i.e. for all pupils, regardless of need); g) included a control group; h) reported sufficient information for effect sizes to be calculated for programme effect.



Exclusion criteria

Studies with the sole intended outcome of either academic attainment or physical health (e.g. drug use or teen pregnancy) were excluded. This criterion did not apply to studies where academic attainment was presented as a distal outcome (e.g. a secondary or tertiary measure after capturing social and emotional outcomes), reflecting that academic attainment is seen as a distal outcome derived from improvements in social and emotional skills (see Zins, Bloodworth, Weissberg, & Walberg, 2007). Studies that applied universal interventions to small groups within classes (e.g. a targeted subgroup) were also excluded, as were studies without appropriate descriptive statistics (i.e. means and standard deviations).
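To illustrate how the inclusion and exclusion criteria above could be operationalised during screening, a minimal sketch follows. The record fields and helper function are hypothetical, introduced purely for illustration; in the review itself the criteria were applied by the researchers to the retrieved records rather than automated.

```python
from dataclasses import dataclass

@dataclass
class StudyRecord:
    # Hypothetical fields summarising a retrieved record; in the review these
    # judgements were made by the research team, not extracted automatically.
    language: str
    year: int
    targets_sel_component: bool        # develops one or more SEL components (Denham, 2005)
    delivered_in_school_hours: bool
    min_age: int
    max_age: int
    universal: bool                    # delivered to all pupils, regardless of need
    has_control_group: bool
    reports_means_and_sds: bool        # sufficient statistics for effect sizes
    sole_outcome_academic_or_health: bool

def is_eligible(r: StudyRecord) -> bool:
    """Apply inclusion criteria (a)-(h) and the exclusion criteria described above."""
    inclusion = (
        r.language == "English"
        and 1995 <= r.year <= 2013
        and r.targets_sel_component
        and r.delivered_in_school_hours
        and r.min_age >= 4 and r.max_age <= 18
        and r.universal
        and r.has_control_group
        and r.reports_means_and_sds
    )
    exclusion = r.sole_outcome_academic_or_health
    return inclusion and not exclusion

# Example: a record meeting all criteria
record = StudyRecord("English", 2005, True, True, 7, 11, True, True, True, False)
print(is_eligible(record))  # True
```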

A total of 24,274 initial records were identified. Inclusion/exclusion criteria were applied to these brief records and, as multiple researchers were used to search for literature, large numbers of duplicates were also removed. After this process, 327 initial records remained, for which the full text was required to assess eligibility. From these full texts, a further 238 studies were excluded, leaving a final 89 studies to be retained for coding and analyses.
Coding

Methodological variables

To account for methodological variability affecting outcomes, three methodological variables were coded. This included:



Time of assessment: post-test (0-6 months), follow-up (7-18 months), extended follow-up (over 18 months).

Period of schooling: early primary (4-6 years), late primary (7-11 years), early secondary (12-14 years), and late secondary onwards (15-18 years).

Type of design: randomised, cluster randomised, wait-list (non randomised), matched comparison, other.

Two dichotomous variables assessing implementation were also coded: implementation integrity (implementation reported: yes/no) and implementation issues (issues with implementation: yes/no).



Independent variables

In order to address the current study’s hypotheses, studies were coded by their stage of evaluation (efficacy or effectiveness), any involvement of the intervention author (developer involvement), and by place of origin and implementation (cross-cultural transferability):



Efficacy/effectiveness: Studies were coded dichotomously (efficacy or effectiveness) according to whether trial schools or teachers had received support, training, staff, or resources that would not normally be available to the organisation purchasing or adopting the intervention (efficacy) (Flay, 1986; Dane & Schneider, 1998), or whether the intervention had been conducted in naturalistic settings without developer/researcher support, using just the staff and resources that would normally be available (effectiveness) (ibid).

Developer involvement: Studies were coded as ‘led’, ‘involved’, or ‘independent’ according to whether a study was identified as receiving developer support. This was established through the list of authors, the methods section, and any declaration of competing interests in the individual papers. Where no programme developer was cited, studies were cross-referenced against programme authors (where known). Where a programme developer was not present in the author list, no developer support was referenced in the methods section (or, indeed, independence was declared), and there was no declaration of a competing interest, the programme was coded as ‘independent’. This is on the basis that, were a programme author involved in cases coded as ‘independent’, their contribution was minor enough to warrant omission from the reporting of the study.

Cross-cultural transferability: Studies were coded dichotomously (home or away) according to whether the trial had taken place in the country of programme development. We assessed transferability by national boundaries only (e.g. different States within the USA were considered ‘home’ for a USA programme). A programme that had received surface-level adaptations (e.g. translation) was considered to be a ‘transported’ programme and classified as ‘away’. We did not find any evidence of a programme being so fundamentally altered (i.e. a change to its internal logic model) as to classify as a new ‘home’ programme.
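The coding rules for the three independent variables can be summarised as simple decision functions, as in the sketch below. This is illustrative only: the boolean flags are hypothetical stand-ins for judgements that human coders made from the published reports, not fields that were extracted automatically.

```python
def code_trial_stage(extra_support_provided: bool) -> str:
    """Efficacy if the trial received support, training, staff or resources beyond
    what an adopting organisation would normally have (Flay, 1986)."""
    return "efficacy" if extra_support_provided else "effectiveness"

def code_developer_involvement(developer_in_author_list: bool,
                               developer_support_in_methods: bool,
                               developer_led_the_trial: bool) -> str:
    """'led' / 'involved' / 'independent' based on authorship, the methods section,
    and declarations of competing interests (flags are hypothetical)."""
    if developer_in_author_list and developer_led_the_trial:
        return "led"
    if developer_in_author_list or developer_support_in_methods:
        return "involved"
    return "independent"

def code_transferability(country_of_trial: str, country_of_origin: str) -> str:
    """'home' if trialled in the programme's country of development; surface-level
    adaptations (e.g. translation) still count as 'away'."""
    return "home" if country_of_trial == country_of_origin else "away"

# Example: an independently evaluated programme transported to the Netherlands
print(code_trial_stage(False),
      code_developer_involvement(False, False, False),
      code_transferability("Netherlands", "USA"))
# effectiveness independent away
```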

Outcome variables

Outcome variables were classified on the basis of prior work in this area (e.g. Durlak et al., 2011; Sklad et al., 2012; Weare & Nind, 2010; Wilson & Lipsey, 2007) with some minor adaptation to fit with the obtained literature and the adopted conceptual framework from Denham (2005). Seven different outcomes were identified:



1) Social-emotional competence (SEC): This category included measurement of general socio-emotional competency that incorporated both the emotional and relational aspects of Denham’s model. Examples include total scores on omnibus measures (either self-, peer-, or teacher-rated) such as the ‘Social Skills Improvement System’ (Gresham & Elliot, 1990), and measures of interpersonal problem-solving/conflict-resolution behaviour.

2) Attitudes towards self (ATS): Outcomes relating exclusively to the skills of self-awareness were classified in this category. This included measures of self-esteem, self-concept, and general attitudes towards self, measured by instruments such as the ‘Student Self Efficacy Scale’ (Jinks & Morgan, 1999).

3) Pro-social behaviour (PSB): This category was used to classify all measures of behaviour intended to aid others, identified as social awareness and/or social problem solving in Denham’s model. Examples include self-, peer-, and teacher-rated social behaviour and behavioural observations. An example measure is the pro-social behaviour subscale of the ‘Strengths and Difficulties Questionnaire’ (Goodman, 2001).

In addition to Denham’s framework, outcomes common to the logic model of the interventions were also coded:



4) Conduct problems (CP): Observational, self-, peer-, or teacher-rated anti-social behaviour, externalised difficulties, bullying, aggression, or physical altercations were included in this category. An example instrument categorised as CP is the ‘Child Behaviour Checklist’ (Achenbach & Edelbrock, 1983).

5) Emotional distress (ED): Outcomes grouped in this category were representative of internalised mental health issues. Examples include self-rated depression, stress, and/or anxiety, measured by instruments such as the ‘Beck Depression Inventory’ (Beck, Steer, & Brown, 1996).

6) Academic Attainment (AA): Measures of attainment were judged through the use of reading tests, other standardised academic achievement measures (such as the Peabody Picture Vocabulary Test (Dunn & Dunn, 1997)) or teacher-rated academic competence.

7) Emotional Competence Only (ECO): An additional category was coded to include measures exclusively associated with the internal domains related to emotional competency. Measures that assessed emotional skills (such as the Emotion Regulation Checklist (Shields & Cicchetti, 1997)) were included in this category, in order to differentiate them from some of the more generalised SEC inventories, which provide single scores for both intra- and inter-personal skills.

Coding reliability: Coding of study characteristics (e.g. methodological characteristics) and outcome data was completed by a small team of doctoral students who were familiar with the literature on social and emotional skills. All coders received training, which included practice double-coding of exemplar literature. Coders were allowed to proceed once they had reached full agreement with the lead authors. A training manual and an online coding system were also provided to ensure consistency in coding. Coding consistency was monitored, and some variation in agreement in later coding led the lead authors to review the full dataset. The lead authors reached 100% agreement for the final coding selection.

Analytical strategy

Standard meta-analytic protocol as specified by Higgins and Green (2008) was followed for the analytical strategy and is detailed below. As data were derived from different sources, all scores for individual studies were standardised and calculated so that positive values indicated a favourable result for intervention groups compared to control (e.g. so measures of reduction in problem behaviours did not confound the analyses). All analyses were conducted using meta-analytic software (‘MetaEasy’ v1.0.2. Kontopantelis & Reeves, 2009).

For individual studies reporting more than one intervention, results were treated as multiple, separate entries. Results examining: a) only subgroups (e.g. those identified with special educational needs); b) multiple interventions beyond the identified SEL intervention (e.g. bespoke academic add-ons); or c) comparisons of different intervention treatments with each other (i.e. no control group) were not included in the analyses. When a study reported more than one outcome from the same category (e.g. conduct problems), scores were averaged to obtain a single figure.
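As a concrete illustration of these two data-preparation steps, the sketch below (not the MetaEasy routine used in the study) flips the direction of effect sizes for outcome categories in which a reduction favours the intervention, and averages multiple effect sizes within the same outcome category. The category labels and numerical values are invented.

```python
# Outcome categories where a *decrease* favours the intervention group.
NEGATIVE_OUTCOMES = {"conduct_problems", "emotional_distress"}

def orient_effect(g: float, category: str) -> float:
    """Standardise direction so positive values always favour the intervention."""
    return -g if category in NEGATIVE_OUTCOMES else g

def pool_within_category(effects: list[float]) -> float:
    """Average multiple effect sizes reported for the same outcome category."""
    return sum(effects) / len(effects)

# Illustrative example: a study reporting two conduct-problem measures,
# both expressed as reductions (negative g before orientation).
raw = [-0.25, -0.15]
oriented = [orient_effect(g, "conduct_problems") for g in raw]
print(round(pool_within_category(oriented), 2))  # 0.2
```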

Whether one category of study was significantly different (α < .05) from another (e.g. efficacy vs. effectiveness) was determined using a technique known as proportional overlap. As described by Cumming and Finch (2005), this approach does not require a specialised understanding of meta-analytic techniques, and instead assesses the degree of overlap between two confidence intervals, such as those presented in Figures 1-3 (see ‘Results’). Using the same approach, individual variables (e.g. main programme effects) were assessed on the basis of whether their 95% confidence intervals crossed zero.
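The following sketch illustrates the proportional overlap heuristic described by Cumming and Finch (2005) for two independent 95% confidence intervals. The interval values are invented, and the 0.5 threshold is the commonly cited rule of thumb corresponding to an approximate p < .05.

```python
def proportion_overlap(ci_a: tuple[float, float], ci_b: tuple[float, float]) -> float:
    """Overlap between two 95% CIs expressed as a proportion of the average
    margin of error (half-width), following Cumming & Finch (2005)."""
    (lo_a, hi_a), (lo_b, hi_b) = ci_a, ci_b
    overlap = min(hi_a, hi_b) - max(lo_a, lo_b)        # negative => a gap between CIs
    average_moe = ((hi_a - lo_a) + (hi_b - lo_b)) / 4.0  # mean half-width of the two CIs
    return overlap / average_moe

def roughly_significant(ci_a, ci_b) -> bool:
    """Rule of thumb: overlap of no more than ~half the average margin of error
    corresponds approximately to p < .05 for independent estimates."""
    return proportion_overlap(ci_a, ci_b) <= 0.5

# Illustrative example: two pooled effects with invented 95% CIs
print(roughly_significant((0.30, 0.70), (0.05, 0.35)))  # True (only slight overlap)
```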

A low number of individual studies for some of the categories for analysis prevented statistical significance testing being carried out (at least 5 studies for each category are required to be confident that any significant difference is accurate (Hedges & Pigott, 2001)), but this did not prevent the comparison of effect sizes.

For the purposes of the current study, all effect sizes are reported as Hedges’ adjusted g (Hedges & Olkin, 1985), as described in Kontopantelis and Reeves (2009). Hedges’ g is almost identical to the often-used Cohen’s d, but provides a superior estimate of effect when sample sizes are small. Both g and d are interpreted in exactly the same way. Estimation was based on a random-effects model (DerSimonian & Laird, 1986). This technique was considered particularly suitable for the current study, as it assumes that different studies estimate different, but related, treatment effects (Higgins & Green, 2008).
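For readers unfamiliar with the computation, the sketch below shows how Hedges’ adjusted g and its approximate variance can be derived from group summary statistics. The figures in the example are invented, and this is an illustrative sketch rather than the MetaEasy implementation used in the study.

```python
import math

def hedges_g(mean_t: float, mean_c: float, sd_t: float, sd_c: float,
             n_t: int, n_c: int) -> tuple[float, float]:
    """Hedges' adjusted g and its approximate variance from group summary
    statistics (Hedges & Olkin, 1985)."""
    # Pooled standard deviation across treatment and control groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    # Small-sample correction factor J applied to Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)
    g = j * d
    var_g = j**2 * ((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))
    return g, var_g

# Illustrative example (made-up summary statistics)
g, var_g = hedges_g(mean_t=10.4, mean_c=9.8, sd_t=2.1, sd_c=2.0, n_t=120, n_c=115)
ci = (g - 1.96 * math.sqrt(var_g), g + 1.96 * math.sqrt(var_g))
print(round(g, 3), [round(x, 3) for x in ci])
```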



The degree of diversity in programme effects was examined using summary values, specifically the Q and I² statistics. Q is used to assess whether there is statistically significant variability (or ‘heterogeneity’), whereas I² provides a measure of the degree of heterogeneity, expressed as a percentage (0%-100%); values of 25%, 50%, and 75% are considered low, medium, and high degrees of heterogeneity, respectively (Higgins, Thompson, Deeks, & Altman, 2003). For instance, if a category is shown to have a significant Q value and a high I² (>75%), it can be said to be associated with a great deal of difference in how successful individual programmes are in that category.
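A minimal sketch of how Q, I², and the DerSimonian-Laird between-study variance (τ²) underpinning the random-effects model could be computed from study-level effects and variances follows. The input values are invented, and the code is illustrative rather than the software actually used.

```python
def heterogeneity_and_dl_pooling(effects: list[float], variances: list[float]):
    """Cochran's Q, I^2 (Higgins et al., 2003), and a DerSimonian-Laird (1986)
    random-effects pooled estimate from per-study effects and variances."""
    w = [1 / v for v in variances]                        # fixed-effect weights
    pooled_fixed = sum(wi * gi for wi, gi in zip(w, effects)) / sum(w)
    q = sum(wi * (gi - pooled_fixed) ** 2 for wi, gi in zip(w, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # Between-study variance (tau^2), DerSimonian & Laird method-of-moments
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau_sq = max(0.0, (q - df) / c)
    w_random = [1 / (v + tau_sq) for v in variances]      # random-effects weights
    pooled_random = sum(wi * gi for wi, gi in zip(w_random, effects)) / sum(w_random)
    return q, i_squared, tau_sq, pooled_random

# Illustrative example with invented study-level effects and variances
q, i2, tau2, pooled = heterogeneity_and_dl_pooling(
    effects=[0.10, 0.35, 0.60, 0.20], variances=[0.02, 0.03, 0.05, 0.04])
print(round(q, 2), f"{i2:.0f}%", round(tau2, 3), round(pooled, 3))
```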