Aichholzer et al. (2016) - Evaluating e-Participation. Frameworks, Practice, Evidence

From Online-Partizipation

Aichholzer, Georg, Herbert Kubicek, and Lourdes Torres, eds. 2016. Evaluating e-Participation: Frameworks, Practice, Evidence. Springer International Publishing. doi:10.1007/978-3-319-25403-6.

Summary missing


  • there can be no universal standard for evaluation criteria; rather, specific criteria are needed for each form of online participation
  • generic Input-Activities-Output-Outcome-Impact model


  • the book reports on the European Cooperation Research Project (ECRP) e2democracy, funded by the ESF, in which researchers from Austria, Germany, and Spain cooperated
  • they implemented roughly similar e-participation projects in local communities in the three countries
    • Austria: Bregenz, Mariazeller Land
    • Germany: Bremen, Bremerhaven, Wennigsen
    • Spain: Pamplona, Saragossa
  • the online participation projects were all aimed at reducing CO2 (carbon dioxide) emissions
    • each community had to sign an agreement between the administration, local businesses, and a panel of citizens committing to specific CO2 reduction targets
    • the research project provided the online tool as well as the evaluation
  • 24 months of field work with varying start dates (earliest December 2009, latest March 2011)


Georg Aichholzer, Herbert Kubicek, Lourdes Torres

  • identify the lack of comparative research as the biggest problem in e-participation research (p1/2)
    • "The biggest barrier to valid assessment is the lack of comparability in existing research, which is mostly case oriented, providing a set of highly heterogeneous cases. There is a need for international and interdisciplinary comparative empirical research. As the effects of electronic tools are highly dependent on their context, it is necessary to compare similar tools in a similar context in order to detect success factors. Success can only be assessed and success factors can only be identified by comparing a number of cases with the same kind of participation on the same subject and by the same target group of participants." p1/2
  • introduce the research project
  • outline the individual chapters

C2 Closing the Evaluation Gap in e-Participation Research and Practice

Herbert Kubicek and Georg Aichholzer

  • focus on top-down initiatives
  • offer a summary of the variety of different forms of participation on- and offline (p13pp)
    • Kubicek favours
      • Information
      • Consultation
      • Cooperation
    • International Association for Public Participation:
      • Inform
      • Consult
      • Involve
      • Collaborate
      • Empower
  • identify two gaps
    • lack of established success criteria
      • "There is neither conceptual agreement on success criteria and indicators nor any valid empirical studies assessing the expected effects in a number of comparable cases. Research on the use and the effects of e-participation is still far from being able to provide empirical evidence for success factors." p21
    • lack of systematically comparative research
      • a particular problem is identifying the effects of online tools in combination with offline tools
  • argue that there can be no universal evaluation framework:
    • "As a summary of these considerations, we do not recommend striving any longer for a general evaluation framework for e-participation. Instead, we argue for a twofold "relativity theory" of e-participation evaluation, claiming that different evaluation criteria and methods have to be chosen in relation to the kind of participation procedure (e.g., consultation or cooperation) and, for each kind of procedure, relative to different groups of actors." p39
  • instead, they propose a twofold "relativity theory" of evaluation (p38), in which criteria and methods need to be adapted to both
    • different participation formats
    • the different stakeholders involved
      • decision-makers
      • organizers
      • users/participants
      • target groups/people concerned
      • the general public
  • i.e., each combination of procedure and stakeholder group calls for specific and often distinct evaluation criteria


  • basis of their evaluation framework: the generic Input-Activities-Output-Outcome-Impact model by the OECD (see figure 2.4 on page 32); in the OECD's definition (taken from the OECD Glossary of Key Terms in Evaluation and Results Based Management)
    • inputs are the resources invested
    • activities are what is done in order to achieve the intended results
    • outputs include "products, capital goods and services which result from a development intervention" or the changes that derive from it
    • outcomes are short- or medium-term effects
    • impacts are long-term effects, whether intended or not
  • Kubicek has adapted this model to the kind of top-down initiated types of online consultation that he has been researching (see Kubicek et al. (2011) - Erfolgreich beteiligt?); making some noteworthy amendments to the OECD definition (p33)
    • input: also includes immaterial conditions, i.e. the context of participation
    • output: defined here as the 'supply side', i.e. the material that is presented to the participants
    • outcome: defined as the 'demand side', including usage figures and the quality of the contributions

C5 Evaluating Public (e-)Consultation Processes

by Herbert Kubicek

  • test evaluation tools by applying them to the six different cases
    • comparing similar cases (i.e. the two cases in each country)
    • comparing the organizers' perspective with the participants' perspective
  • evaluation tools (some examples of these tools are cited in the text)
    • a template for evaluations by external observers, in which they assess the process according to preset factors and criteria, based on both collected data and observation
    • questionnaires for interviewing organizers
    • questionnaires for interviewing participants
  • lessons learned - these apply to online as well as offline participation
    • no general template, questions need to be adapted to specific context
    • rarely is there a common view among a group of stakeholders, rather diversity of opinions
    • re panel surveys of citizens: citizens have trouble remembering their original expectations
      • "A third lesson concerns the idea of an ex-ante and ex-post comparison of expectations and actual experience. While this may work in interviews with organizers, we learned that when it comes to participating citizens, many of them did not remember in the ex-post survey what they had expected at the start." p107
      • interestingly, Kubicek argues that it suffices to conduct the ex-post survey, as what is relevant for an assessment of the whole process is whether the participants believe their expectations have been met, not whether this is actually the case: "Therefore, with regard to the cost of conducting an evaluation, for a final assessment, it is sufficient to conduct only an ex-post survey and to ask how expectations have been met or missed." p107
    • for participants it is difficult to assess the impact of a participation process because this may take months (or longer) to emerge

C9 Comparing Output and Outcome of Citizen-Government Collaboration on Local Climate Targets

C10 Attitude and Behavior Changes Through (e-)Participation in Citizen Panels on Climate Targets

C14 The Manager's View of Participation Processes with Citizen Panels

C15 What Difference Does the "E" Make? Comparing Communication Channels in Public Consultation and Collaboration Processes

C16 Summary and Outlook