
A link to the study pre-analysis plan allows forecasters to read further if they want additional information.
Study: Entrepreneurship education and teacher training in Rwanda
Authors: Todd Pugatch, Moussa P. Blimpo


It is important to inform respondents about the intervention context. Here we highlight key features of a new entrepreneurship curriculum.
Background on education in Rwanda: Primary school (grades 1-6) in Rwanda is compulsory. All Rwandan secondary students are required to enroll in entrepreneurship courses through six years of secondary school (S1-S6, equivalent to grades 7-12). In 2016, Rwanda reformed its required upper secondary (S4-S6, equivalent to grades 10-12) entrepreneurship course, introducing interactive pedagogy and a focus on business skills. The revised course covers the full cycle of business creation and development, including product development, registration and legal issues, marketing, accounting, and customer relations.

Respondents should know the sample size in the experiment, the level of randomization, and the implementing organization.
Intervention: In 2016, a subset of schools was randomly selected for two years of intensive teacher training and support (treated schools). The program covered more than 100 schools, 260 teachers, and 6,800 students, and was implemented by the government and a large international NGO. A control group of equal size received the curriculum and standard government training only. The training received by treated teachers was subject-specific (entrepreneurship), incorporated peer feedback meetings, and included follow-up support.

A figure can be used to summarize the experimental design and break up text blocks.

When providing details of an intervention, respondents may want to know how successful implementation was. For example, did all teachers attend all six training sessions? If this information is not yet known, it can be good to state this explicitly, or to provide bounds (see the example for Bouguen and Dillon).

The training had three main components:

1) Intensive teacher training: Entrepreneurship teachers received multi-day training sessions each academic term from April 2016 through January 2018. Each of the six sessions was held during holidays between terms and lasted four days. Training emphasized lesson planning, engaging students in classroom discussions, encouraging students to create entrepreneurship “portfolios” of their work, and assisting student business clubs to form and grow. Trainings culminated in a “mock day” in which teachers rehearsed upcoming lessons.

2) Exchange visits: Teachers participating in the intervention visited each other’s schools to learn from and provide feedback to their peers.

3) Outreach and support: Teachers received ongoing outreach to support their implementation of the curriculum, including visits from trained “Youth Leaders.” These visits included product-making demonstrations (e.g., for household goods such as soap or candles) co-taught with the teacher, advising of student business clubs, classroom observation, participation in teacher exchange visits, and help addressing any other concerns. Student business clubs were encouraged to submit their ideas to regular business competitions held for treated schools.

The target population should be clearly specified. It is also useful to provide the dates when the intervention was implemented.
Target population: The study focused on the cohort entering S4 (10th grade) in 2016, with training provided to this cohort’s entrepreneurship teachers as the students progressed to S6 (12th grade). Both the control group and the treated group received the new entrepreneurship curriculum. Teachers in control schools did not receive the intensive training, exchange visits, or outreach provided to treatment schools.

Here we outline the level at which study outcomes were measured and how these respondents were sampled.
Outcomes overview: Outcomes were measured at the student level. Approximately 15 students were sampled from each school. We ask you to predict the experimental results for three outcomes: scores on a standardized entrepreneurship test, whether students dropped out of school, and business participation.

It is important to highlight the timeline for measuring outcomes.
Outcome: Student dropout: One outcome we are interested in is the percent of respondents who dropped out of school. This outcome measures dropout at any time after baseline (April 2016) through the completion of the endline surveys (June-October 2018). The final training was in January 2018, with the final exchange visits and outreach in April 2018.

Respondents should know what kind of treatment effect they are predicting. If we were targeting a non-academic sample, we would have wanted to describe this differently.
Please predict the difference in the percent of respondents who dropped out of school between the group in which teachers received entrepreneurship-specific training and the control group (the average treatment effect).


  • This link takes participants to the two-page description.
  • We provide the control group’s mean and standard deviation as a reference.
  • An example can be useful to help ensure respondents understand their predictions.
Notes:
  • Click here for a reminder of the intervention and study background, which will open in a new window.
  • Reference: In the control group, an average of 9% of respondents dropped out over the duration of the study (with a standard deviation of 29 percentage points).
  • As an example, if you enter 8.7 it means you think student dropout will be 8.7 percentage points higher in the treatment group. If you enter -8.7 it means you think student dropout will be 8.7 percentage points lower in the treatment group. If you enter 0 it means you think the program had no impact.
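
To make this mapping concrete, here is a minimal sketch in Python (not part of the survey itself; the constant and helper names are illustrative) showing how an entered forecast translates into the implied treatment-group dropout rate and into standard-deviation units, using only the reference values stated in the notes above.

```python
# Minimal sketch (not part of the survey): how an entered forecast, in percentage
# points, maps to the implied treatment-group dropout rate and to SD units,
# using only the reference values stated in the notes above.

CONTROL_MEAN_PCT = 9.0   # control-group dropout rate, in percent
CONTROL_SD_PP = 29.0     # standard deviation, in percentage points

def implied_treatment_dropout(forecast_pp: float) -> float:
    """Dropout rate in the treatment group implied by the entered forecast."""
    return CONTROL_MEAN_PCT + forecast_pp

def forecast_in_sd_units(forecast_pp: float) -> float:
    """The same forecast expressed in control-group standard deviations."""
    return forecast_pp / CONTROL_SD_PP

# Entering 8.7 implies treatment dropout of 17.7% (about +0.3 SD);
# entering -8.7 implies 0.3%; entering 0 implies no impact (9%).
for entry in (8.7, -8.7, 0.0):
    print(entry, implied_treatment_dropout(entry), round(forecast_in_sd_units(entry), 2))
```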

There are many ways to elicit predictions. In DellaVigna et al. (2020) we outline four variations in elicitation strategy (two examples are provided below): (1) small versus large reference values (see last bullet above); (2) whether predictions are in raw units or standard deviations; (3) text-entry versus slider responses; and (4) small versus large slider bounds. Our results suggest that reference values and units seem to have little effect on responses, though wider slider bounds are associated with higher forecasts.
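
As an illustration only (this is not the actual implementation in DellaVigna et al. (2020) or the survey platform), the four elicitation variations could be encoded as arm-level settings along the following lines; all names and values are hypothetical.

```python
# Hypothetical sketch: the four elicitation variations expressed as survey-arm
# settings. All names and values are illustrative, not the actual implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ElicitationArm:
    reference_value: str              # (1) "small" or "large" reference value shown
    units: str                        # (2) "raw" (percentage points) or "sd"
    response_mode: str                # (3) "text" entry or "slider"
    slider_bound_sd: Optional[float]  # (4) slider half-width in control-group SDs (None for text entry)

# Two example arms: a slider arm with wide bounds and a text-entry arm in SD units.
arms = [
    ElicitationArm("small", "raw", "slider", slider_bound_sd=2.0),
    ElicitationArm("large", "sd", "text", slider_bound_sd=None),
]
```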

We bound numeric responses at ±1 SD to avoid confusion among survey respondents.

Here we bounded our slider scale at ±2 SD, but elicited predictions in raw units. To avoid acquiescence, the slider must be moved to continue to the next question (the default position is not 0).
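
For concreteness, the sketch below (Python; an assumption about how this could be implemented, not the platform's actual code) derives the raw-unit slider bounds from the ±2 SD rule and the control-group SD given above, and checks that the slider has been moved before the respondent may continue.

```python
# Minimal sketch, assuming the control-group SD of 29 percentage points from the
# reference above; not the survey platform's actual code.
CONTROL_SD_PP = 29.0   # control-group SD of dropout, in percentage points
BOUND_SD = 2.0         # slider half-width, in standard deviations

slider_min_pp = -BOUND_SD * CONTROL_SD_PP   # -58 percentage points
slider_max_pp = BOUND_SD * CONTROL_SD_PP    # +58 percentage points

def may_continue(value_pp: float, slider_moved: bool) -> bool:
    """Allow advancing only if the slider was actively moved (its default position
    is not 0, so a recorded 0 reflects an explicit 'no impact' prediction) and the
    value lies within the ±2 SD bounds."""
    return slider_moved and slider_min_pp <= value_pp <= slider_max_pp
```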