Our Evaluation Practice
We encourage high standards of evidence. We specialise in applying experimental and quasi-experimental designs that are relevant to the context. We embed these designs in our mixed methods approach to conduct evaluations focused on impact, programme activities and context, model comparisons and overall learning.
We promote relevant and inclusive processes, often working with our clients and stakeholders to ensure an appropriate evaluation. We provide technical assistance to evaluation teams.
We focus on informing decisions, helping people access quality evidence throughout the implementation process to support success, and at the end for accountability purposes.
- Impact evaluation
- Formative evaluation
- Summative evaluation
- Process evaluation
- Outcomes evaluation
- Experimental design
- Quasi-experimental design
- Randomised controlled trials (RCTs)
- Mixed methods
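To give a flavour of the experimental designs listed above, the sketch below estimates an average treatment effect from a randomised controlled trial as a simple difference in means. The outcome data, group sizes and numbers are hypothetical, and a real analysis would be tailored to the design and context.

```python
# Illustrative sketch only: estimating an average treatment effect from a
# randomised controlled trial using a difference in means.
# All data below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome data for treatment and control groups.
treatment = rng.normal(loc=5.4, scale=2.0, size=200)
control = rng.normal(loc=5.0, scale=2.0, size=200)

# With randomised assignment, the difference in group means is an
# unbiased estimate of the average treatment effect.
effect = treatment.mean() - control.mean()

# Welch's t-test does not assume equal variances across groups.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Estimated effect: {effect:.2f} (p = {p_value:.3f})")
```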
Our Research Practice
We review and synthesise evidence, ranging from reviews of literature, documents and data through to meta-analyses, and we present the synthesis so it is useful for its intended purpose.
We analyse data, using the most appropriate qualitative or quantitative techniques for both primary and secondary sources. We use the Integrated Data Infrastructure (IDI) and explore other administrative datasets.
- Statistical analysis
- Systematic review
- Meta-analysis
- Literature review
- Segmentation research
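As an illustration of the synthesis work listed above, the sketch below pools study-level effect estimates with a fixed-effect, inverse-variance weighted meta-analysis. The effect sizes and standard errors are invented for the example.

```python
# Illustrative sketch only: a fixed-effect, inverse-variance weighted
# meta-analysis pooling hypothetical effect estimates from several studies.
import numpy as np

# Hypothetical study-level effect sizes and their standard errors.
effects = np.array([0.30, 0.15, 0.42, 0.25])
standard_errors = np.array([0.10, 0.08, 0.15, 0.12])

# Weight each study by the inverse of its variance.
weights = 1.0 / standard_errors**2

# Pooled effect and its standard error under a fixed-effect model.
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# Approximate 95% confidence interval for the pooled effect.
lower = pooled_effect - 1.96 * pooled_se
upper = pooled_effect + 1.96 * pooled_se
print(f"Pooled effect: {pooled_effect:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```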
Our Monitoring Practice
We inspire progress through evidence. Data is vital when working with multiple stakeholders to achieve “collective impact”. We develop theories of change with stakeholders, to ensure measures are relevant and have the buy-in required to succeed. We also develop and assess measures to inspire confidence and ensure monitored evidence supports the overall goals.
We make quality data accessible. We create reports that pull together relevant data in real time, so they are useful to everyone who can benefit from data and continuous improvement: implementation teams, managers and funders.
- Progress reporting
- Collective impact monitoring
- Evaluative monitoring
- Data management system
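As a simple illustration of pulling monitoring data together into a progress report, the sketch below summarises delivery records against targets by site and quarter. The column names, sites and figures are invented for the example.

```python
# Illustrative sketch only: aggregating hypothetical monitoring records
# into a simple progress-against-target report.
import pandas as pd

# Hypothetical monitoring records from delivery partners.
records = pd.DataFrame({
    "site": ["North", "North", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "participants": [120, 150, 90, 140],
    "target": [130, 130, 130, 130],
})

# Summarise progress against target by site and quarter.
report = records.assign(
    progress=lambda df: df["participants"] / df["target"]
).pivot_table(index="site", columns="quarter", values="progress")

print(report.round(2))
```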
Our Measurement Practice
We design and test measures. We specialise in instruments that can inform delivery teams as well as broader monitoring and evaluation frameworks. We not only design according to best-practice principles (stage 1), but also test our measures before implementation (stage 2) and apply techniques to test the framework and/or measures after data is collected (stage 3).
We improve measurement. We provide technical assistance in sample design and survey approach to ensure reliable comparisons can be made. Using statistical and psychometric techniques, we also identify how measures can be improved, testing whether the measures reflect what was intended and identifying how the instrument and data can be modified before any “fuzzy” results are reported.
- Evaluation framework
- Implementation plan
- Instrument design
- Sampling design
- Population estimation
- Scale validation
- Measurement review
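As one example of the psychometric checks mentioned above, the sketch below computes Cronbach's alpha to gauge the internal consistency of a multi-item scale. The responses are hypothetical, and a full validation would draw on a wider set of techniques.

```python
# Illustrative sketch only: checking the internal consistency of a
# hypothetical multi-item scale with Cronbach's alpha.
import numpy as np

# Hypothetical responses: rows are respondents, columns are scale items.
items = np.array([
    [4, 5, 4, 3],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
])

k = items.shape[1]                              # number of items in the scale
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores

# Cronbach's alpha; values around 0.7 or above are often read as acceptable.
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```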
Our Data Science and Analytics Practice
We learn from data. We apply statistical modelling, data analysis and machine learning to unlock the value in your data.
- Insights generation
- Descriptive analysis and profiling
- Multivariate and regression modelling
- Validation and operationalisation of predictive models
- Randomised controlled trials, A/B and split testing
- SAS, SQL, R, Python, Cloud
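As a small illustration of the A/B and split testing listed above, the sketch below compares conversion rates between two variants with a two-proportion z-test. The counts are hypothetical, and a real analysis would also consider power, multiple looks at the data and practical significance.

```python
# Illustrative sketch only: analysing a simple A/B (split) test with a
# two-proportion z-test. All counts below are hypothetical.
import math
from scipy.stats import norm

# Hypothetical conversions and sample sizes for variants A and B.
conversions_a, n_a = 210, 2000
conversions_b, n_b = 260, 2000

p_a, p_b = conversions_a / n_a, conversions_b / n_b

# Pooled proportion under the null hypothesis of no difference.
p_pool = (conversions_a + conversions_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_b - p_a) / se
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value

print(f"Lift: {p_b - p_a:.3f}, z = {z:.2f}, p = {p_value:.3f}")
```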