Solving the Problem through ‘Problem-Solving’?

Having sparked heated academic debate at the CHI 2016 conference (or so I’ve been told by those in attendance), it can be assumed that Oulasvirta and Hornbæk’s paper “HCI Research as Problem-Solving” was not awarded best paper for its universal acceptance by fellow academics within the field. Rather, it was perhaps the bravery or, depending on which side of the debate you fall on, the audacity of the authors’ attempt to define an already contentious field, and then to evaluate existing work against that definition, that provoked such a strong reaction.

The ‘problem’ in the title of this post is the difficulty of defining and articulating a universally agreed, coherent and distinct research agenda for HCI. Although to some the multi-disciplinary nature of HCI is a virtue, to others it is a fatal flaw. Critics are concerned that this melting pot of concepts, approaches and agendas translates into an inability to evaluate either the quality or the impact of HCI research; in essence, these critics are asking, “do too many cooks spoil the broth?”

In what is an ambitious undertaking, Oulasvirta and Hornbæk seek to overcome the “messiness” of defining this multidisciplinary arena by reframing HCI as a ‘problem-solving’ discipline. In doing so, they attempt to outline a clear conceptual framework against which the relative “success” of existing and future HCI research can be assessed.

The paper draws upon and extends Larry Laudan’s philosophy of science, which measures scientific progress by a field’s growing capacity to solve problems. In an attempt to declutter HCI research, the authors propose a three-part typology that categorises research problems as empirical, conceptual or constructive. By simplifying and categorising research problems in this way, the authors suggest that a formal, universal and consistent set of assessment criteria can then be applied, enabling researchers (as well as critics) to better assess how ‘well’ a given problem has been addressed (or solved) by the proposed “solution” – that is, its problem-solving capacity. The criteria proposed by Oulasvirta and Hornbæk fall into five parts: Significance, Effectiveness, Efficiency, Transfer, and Confidence. The problem-solving capacity of a given ‘solution’ therefore depends upon how well it satisfies each of these criteria.

Not content with abstractly reframing and defining what constitutes “good” HCI research, the authors then somewhat boldly (and no doubt controversially) apply their framework retrospectively to existing HCI research as a means of highlighting current limitations and areas for improvement in its problem-solving capacity. Their content analysis of CHI 2015 best papers suggests an apparent gap in attention to conceptual problems, an omission that Oulasvirta and Hornbæk argue must be addressed if the problem-solving capacity of future HCI research is to improve. They also point to failings in the writing style of current papers, highlighting a need for authors to state more explicitly and clearly the problems their papers are tackling.

Perhaps it is my naivety as a fledgling HCI researcher that has left me dazed and confused by this paper, but I remain unpersuaded by the argument to reframe HCI as purely problem-solving in nature. Whilst the temptation to simplify the arduous task of explaining my latest academic endeavours to bewildered family and friends is admittedly appealing, I can’t help but feel slightly uncomfortable with this particular definition. The penultimate section of the paper seeks to refute the common criticism that the problem-solving approach is “solutionist”, yet I can’t shake the feeling that it is. The division of problems into three distinct ‘types’ feels uncomfortably close to the oversimplification of complex problems that lies at the heart of anti-solutionist critiques. This is not to say that aiming for solutions is a bad thing, but does classifying problems in such a simplified way negate the need to deconstruct and understand their complexities before designing solutions?

Equally, whilst the analysis of past papers through a novel lens is no doubt insightful in this context, it seems unfair to assess the problem-solving capacity and impact of papers against a standard, or set of criteria, to which the original work never set out to adhere. Is it useful to create a framework and then try to shoehorn existing work into it, or would it be more useful to let analyses of existing work inform the framework instead?

A paper that exemplifies HCI to me is ‘Evocative of Experience: Crafting cross-cultural digital narratives through stories and portraits’ by Rachel Clarke and Pete Wright.
