ReEvaluation
A series of blog posts created in the "design methodologies" module at ZHdK, taught by Dr. Joëlle Bitton.

This week was all about re-evaluating one's designs and the critical aspects of evaluation itself. As Greenberg & Buxton point out, evaluations in ACM CHI are dominated by quantitative empirical usability evaluations (about 70%), followed by qualitative usability evaluations (about 25%) - a distribution that probably reflects a methodology bias rooted in the assumption that certain methods are more correct than others. I feel this really is a problem if people adapt their research to the most credible technique instead of choosing a technique based on its efficiency and its power to filter out the critical information. It becomes especially problematic when evaluating innovative systems, where usability testing can kill great ideas: if the underlying technology is immature, usability testing will only highlight the downsides of a design.

Another problem mentioned is that researchers often use techniques and methods to prove that their assumptions are right. They formulate hypotheses - which is great for having a more substantial base to build solutions upon - but then test them in the way that supports their assumptions the most. Trying to refute one's hypotheses instead would be much more powerful for learning about all the details that need to be considered.

Finally, I'd like to point out how important it is to always reflect on one's process. Evaluation is absolutely necessary and critical in the design process, but it can cause a lot of damage if it is badly timed or wrongly applied. It is therefore really important to ask oneself what information one would like to get out of the research, and which method is the most suitable for obtaining it.

Readings

Bardzell, J., Bolter, J., & Löwgren, J. 2010. “Interaction criticism: three readings of an interaction design, and what they get us”. In Interactions, 17(2), 32–37.

Greenberg, S., & Buxton, B. 2008. “Usability evaluation considered harmful (some of the time)”. In Proceedings of CHI ’08.

Nørgaard, M., & Hornbæk, K. 2006. “What do usability evaluators do in practice? An explorative study of think-aloud testing”. In Proceedings of DIS ’06.

Preece, J., Rogers, Y., & Sharp, H. 2002. “Introducing Evaluation”. In Interaction Design. Wiley.

Sengers, P., & Gaver, B. 2006. “Staying open to interpretation: engaging multiple meanings in design and evaluation”. In Proceedings of DIS ’06.