Evaluation in Participatory Design: A Literature Survey

This paper is, as the title states, a literature survey that aims to determine whether, or at least to what extent, previous work in participatory design (PD) has achieved its central aims of mutual learning, democracy, workplace quality, and empowerment. The authors follow a selective approach: from a group of 143 papers drawn from PD Conferences or special journal issues on PD, 66 were considered relevant, and only 17 of these deal with the aforementioned aims of PD. Those 17 were thoroughly investigated and evaluated by answering seven questions proposed by the authors.

A typical project evaluation might focus on process, that is, how and to what extent participants were involved and influential in decision-making; on output, in terms of the people involved and the quality of the results generated; or on outcome, looking for after-effects such as whether people's lives changed after the project and whether mutual learning took place.

Normally, a PD evaluation raises the following questions: what is the purpose of the evaluation? Who conducts it, and who participates? By which criteria? And which method is followed?

Only 17 papers out of the 143 were found to be relevant to evaluation in PD projects. These were divided into three categories: papers that suggest theoretical methods or frameworks without conducting any actual evaluation; papers that address outcomes in terms of the benefits people or organizations have gained; and papers that contribute to evaluation in PD in some other way.

Setting aside the four purely theoretical papers, the authors then analyzed the remaining 13 papers against the seven proposed questions.

Most of the papers had summative purposes, aiming to evaluate what had been done, such as the gains achieved and the relation between the depth of participation and the outcome, while a few exceptions had formative purposes, focusing on informing ongoing PD projects. However, none of the papers focused on evaluating the central aims of PD, such as democracy and empowerment.

In all of the papers, the evaluations were conducted mostly by the authors themselves; a few cases involved external researchers, but their roles were unclear.

In some papers, participants took no part in the evaluation process, while in others the evaluations included participants ranging from future users and stakeholders to designers, though funding parties were left out.

All of the papers except one described their evaluation criteria well, but the criteria were mostly defined by the authors after project completion.

All papers used qualitative methods, from interviews to open-ended surveys, with a few exceptions that mixed in quantitative approaches. This is to be expected, as the number of participants was fairly small (fewer than 50).

All papers had the PD research community as their intended audience, with two exceptions in which the participants themselves were the audience, in order to support the decision-making aim and to assess the extent to which it had been met. In the remaining papers, participants' involvement in the project itself was apparently considered sufficient.

For almost all the papers, the intended use of the evaluation was learning, whether about the level of community gain in relation to the outcome, or for informing decision-making during the PD process.

Finally, the authors conclude by discussing the results of their survey, refuting Hirschheim's survey from thirty years ago: evaluation in PD does exist and is in fact a complicated phenomenon, but the evaluations that do exist are not as clear and explicit as they should be, and they are not being used to improve democracy in PD or to foster mutual learning. There is therefore a need for more systematic evaluation, and although PD evaluation frameworks do exist, they are rarely used.

The seven-questions model proposed by the paper is a really useful and handy tool for assessing PD evaluations, so it is worth listing the questions explicitly:

  1. What is the purpose(s) of the evaluation?
  2. Who conducts the evaluation?
  3. Who participates in the evaluation?
  4. Who defines the evaluation criteria?
  5. What evaluation method(s) are applied?
  6. Who is the intended audience of the evaluation?
  7. What is the intended use of the evaluation?

I believe the authors are right. Although they based their survey on only 143 papers, using simple search techniques and looking for keywords in the abstract or title, I would not be surprised if other papers, should they exist, yielded the same result. The good news is that research is moving toward evaluation in PD that focuses on empowering participants and emphasizes democracy and mutual learning.

Literature:

Bossen, C., Dindler, C., & Iversen, O. S. (2016). Evaluation in Participatory Design: A Literature Survey. Participatory Design Conference, 151–160. https://doi.org/10.1145/2940299.2940303


A paper that I think explains participatory design well is:

John Bowers and James Pycock. 1994. Talking through design: requirements and resistance in cooperative prototyping. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '94), Beth Adelson, Susan Dumais, and Judith Olson (Eds.). ACM, New York, NY, USA, 299–305. https://doi.org/10.1145/191666.191773
