04 Apr

Unmoderated Remote Testing

Remote testing is incredibly useful for websites.  After all, the World Wide Web is just that: global.  Remote testing means that one can get feedback unencumbered by the location of participants.  Rather than intercepting people physically, one can recruit people as they go about their business on the site you are testing, for example.  Recruitment is no longer bound to location.  And, with tools like Loop11, it is super easy to recruit users.  Just one link, and you are ready.  Without the need for a synchronous appointment, you can rack up numerous user tests.

There are drawbacks to remote testing.  The most important is that one loses much in the way of emotion, expressions, and verbal feedback from users.  This can make it challenging to understand the reasons that users click the buttons they click.

However, remote user testing can offer high-volume feedback and identify trends.  In other words, while you might not be able to say why someone did something, you can say with confidence that certain trends exist.

22 Mar

Moderated User Testing

Moderated User Testing is a useful way for testers to work with users who are not in the same location as themselves.  There are certain challenges, such as passing on incentives, but at the same time there are enormous benefits, such as being able to reach testers globally.

For the tester, videotaping the session is essential.  Moderated user testing can capture facial expressions and user quotes, but it is often challenging to read and assess all of that in real time.

One drawback is that appointments need to be made to run the test.  This isn’t an asynchronous experience, in other words.  Scheduling with a remote tester can be challenging, and in certain projects you might find that you have a lot of no-shows.  So, this method can be time-consuming.

But, once everything is captured, even with a small subset of users, one can gain quite a lot of feedback, particularly attitudinal feedback.  Moderated user testing is also useful in that it allows for the correlation of attitudinal feedback and behavioral feedback.

14 Mar

Testing from Afar

Testing can be incredibly useful, even essential, to rolling out a new product.  But it can be cost-prohibitive.  Small firms might not have the resources to find the right users, employ testers, set up a room with specialized one-way glass, etc.  Of course, people do testing in this way for important reasons: if you have a specialized setup, you need to have the testers on-site.

But remote research often has significant advantages.  If you are developing a website with global reach, testing remotely allows you to create a diverse testing base.  You save money in terms of space and setup.  Newer tools allow testers to remote in and see not only the keystrokes but also the facial expressions of the participants.

Remote testing isn’t without its challenges.  If your connection to your remote tester fails, you are out of luck.  You might not be able to observe facial expressions clearly through the interface.  You do have to find a way to send incentives to remote testers.  And you might get pushback from your stakeholders, as remote testing isn’t universally accepted.

Even with the possible downsides, the significant positive points make remote research an important tool for user experience researchers.

28 Feb

Reading all the Signs

I remember feeling like my first semiotics class was eye-opening.  I had never considered that there could be an order to language or that there was a science to understanding this order.  Now, all this is a bit of an aside, but I bring it up because there is a parallel with usability testing.  There is both an order to how people act and a tandem act in which evaluators observe to make sense of what people do.

Video helps this latter act considerably.  Without it, the evaluator will need to scribble notes and inevitably miss things.  With video, the tester has the audio, including all the verbal responses, the movement of the mouse, and the facial expressions.  All of these tools are helpful in assessing usability.

The key is for the tester to create a framework where users feel comfortable testing the site and sharing their ideas.  Once that framework is in place, one will find very useful information; without it, the user won’t feel comfortable sharing.  A script helps the tester be sure that they are saying the same thing each time, and it also helps the tester feel ready to put their user at ease.

Once that is done, one has the long task of making sense of the data.  Often wading through all the information is almost as much fun as generating the data.  Interpretation of evaluation data is the process of bringing order to disorder by noticing patterns.  Once the patterns are clear, a good tester then develops a scheme to make sure that these patterns are obvious to anyone who reads the deliverable.

16 Feb

Listening and Hearing

Talking is my occupation.  Teaching is, in a manner of speaking, about talking and talking and talking.  Or, I should say that teaching is about attempting to communicate an idea in multiple ways.  Some of those ways are about your voice, others are about hearing the voices of others, and sometimes it’s about reiterating their voice.

This week, I have found my voice increasingly muted by laryngitis, and it has made me think a little about the role of voice in my work, both in teaching and in evaluation.  It almost seems as if you might not need a voice at all in order to allow your participants to share theirs.  But, really, evaluation and testing aren’t just about listening; they are about sharing, framing, and positioning as well.  To honor the time spent by participants, one must create a situation that sets up the participant’s experience.

It isn’t just about the words that one says, but also about the tone of voice, the pacing of the things that are said, and even the inherent emotion in the phrases that are said. The evaluator or user experience tester is not unlike a hostess, setting up everything to put their guest at ease.  In a situation that is carefully organized, the participant is then able to share their ideas.

07 Feb

Quantitative and Qualitative Data

Testing and scholarly research are sort of similar. You have a problem, and you want to understand why that problem is occurring, for example.  Both use quantitative and qualitative data. But, in research, you want to be conclusive, exhaustive, and categorical.

In testing, you just want to make the problem better.  So, in testing, you don’t choose all the ways of understanding the problem, but a few methods.  The key is to choose methods that actually help you assess the problem accurately.  Success rate, for example, can help you assess whether people are accomplishing a particular task.  But if your goal is for users to explore much of your site, then you want to measure how many pages they are viewing.
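To make this concrete, here is a minimal sketch of computing those two metrics from test-session records.  The field names ("completed", "pages_viewed") and the sample data are invented for illustration, not drawn from any particular analytics tool.

```python
# Hypothetical session records from a small task test.
sessions = [
    {"completed": True, "pages_viewed": 5},
    {"completed": False, "pages_viewed": 2},
    {"completed": True, "pages_viewed": 8},
    {"completed": True, "pages_viewed": 3},
]

# Success rate: the share of participants who finished the task.
success_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Breadth of use: average pages viewed per session.
avg_pages = sum(s["pages_viewed"] for s in sessions) / len(sessions)

print(f"Success rate: {success_rate:.0%}")   # 75%
print(f"Average pages viewed: {avg_pages}")  # 4.5
```

The point is simply that each metric answers a different question: the first tells you whether a task works, the second whether people are ranging across the site.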

There is a useful diagram on the Nielsen Norman Group website that shows how particular testing tools relate to behavioral or attitudinal data.  The article also illustrates which issues are best addressed by quantitative data, such as how much or how often.

Quantitative data should likely be paired with qualitative data.  After all, if you know that most of the people using your app stop at a certain point, you don’t know why.  It might be because it is so terribly boring, or because it is so terribly interesting.  Or it could be that the text becomes illegible.  Or… well, it could be anything.  So, pairing the quantitative data, often found in analytics, with qualitative data gives you the information you need to understand the problem.

To go back to my original statement, testing helps you know enough to fix a particular app or website.  You can make the situation better for the user.  Quantitative and qualitative data are the tools that you use to make these improvement decisions.  But, in terms of scholarship, you would likely need many, many more points of feedback to make a categorical assessment.  So, while you might be able to use a small study to fix a particular mobile app, this doesn’t necessarily help you make broad generalizations about all mobile apps.

01 Feb

Tasks, Tasks, Tasks

You might have a problem and a desire to solve that problem, but where do you go next?  Imagine being in a situation where your museum app is opened regularly, but then no other features are accessed, as assessed through analytics.  You know that you need to figure out why this is happening.  What is your first step?

User testing, such as task analysis, can help you understand where the challenges lie.  To use your money wisely, you should test with demographics that mimic those who are already using your app.  Right now, you are hoping to figure out why the people who are using your app are having problems.  Of course, the challenges with the app might also be turning off those who are not even logging in.  But leave that challenge aside for now.

So, start with the types of people who are using your app.  Think of the ways that you can categorize them.  What age are they?  What gender?  Education level?  Salary level?  Are they familiar with technology?  Are they museum visitors?  Members?  After making this snapshot of users, you will need to create a screener that helps you build a testing sample that mimics your audience.  You might even create a faceted matrix to help you get the right mix of participants.
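A faceted recruiting matrix can be as simple as quotas per combination of facets, checked off as screener respondents come in.  Below is a small sketch of that idea; the facet names ("member"/"visitor", the tech-comfort levels) and the quota numbers are all invented for illustration.

```python
from collections import Counter

# Assumed target mix, mirroring the app's current audience.
quotas = {
    ("member", "tech-comfortable"): 3,
    ("member", "tech-novice"): 2,
    ("visitor", "tech-comfortable"): 3,
    ("visitor", "tech-novice"): 2,
}

recruited = Counter()

def screen(relationship, tech_level):
    """Accept a screener respondent only if their cell still has room."""
    cell = (relationship, tech_level)
    if recruited[cell] < quotas.get(cell, 0):
        recruited[cell] += 1
        return True
    return False

screen("member", "tech-novice")         # accepted
screen("member", "tech-novice")         # accepted
print(screen("member", "tech-novice"))  # False: that cell's quota of 2 is full
```

The design choice here is to reject over-represented cells at screening time, so the final panel matches the audience snapshot rather than whoever happened to answer first.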

After that, you begin thinking about the scenario and tasks you want to assess during task analysis.  You will need to think of something that is not so prescriptive as to miss challenges, but not so broad as to hide trends.  Try to think of actual scenarios in your institution.  Once you have created your scenario (say, you are a new visitor to the museum looking for ivory sculptures and you have downloaded the app onto your phone), then you need to create a list of tasks.  You want to develop tasks based on items that you have already seen.  Your tasks should help you explore the ways that users employ all the facets that you are exploring.

Finally, you will want to make sure to run this task analysis exactly the same way with each participant.  In the end, hopefully, you will be able to see trends across users’ problems.  You might find that everyone is having trouble with the login screen.  Or you might find that people in a particular demographic have a hard time seeing the exit buttons.

In the end, task analysis is quite useful, because you are creating a systematized way of observing how a number of people use the same digital product.  It allows you to see where there are challenges in order to make improvements.
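Because every participant runs the same tasks, the observations can be tallied systematically.  Here is a hypothetical sketch of surfacing trends like "novices struggle with login"; the participants, groups, and step names are all invented for illustration.

```python
from collections import defaultdict

# Invented observation log: where (if anywhere) each participant failed.
observations = [
    {"participant": "P1", "group": "tech-novice", "failed_step": "login"},
    {"participant": "P2", "group": "tech-novice", "failed_step": "login"},
    {"participant": "P3", "group": "tech-comfortable", "failed_step": "search"},
    {"participant": "P4", "group": "tech-comfortable", "failed_step": None},
]

# Tally failures per demographic group and per step.
failures = defaultdict(lambda: defaultdict(int))
for obs in observations:
    if obs["failed_step"]:
        failures[obs["group"]][obs["failed_step"]] += 1

# Report the most common failure in each group.
for group, steps in failures.items():
    worst = max(steps, key=steps.get)
    print(f"{group}: most common failure is '{worst}' ({steps[worst]} participants)")
```

With real sessions the log would hold many more rows, but the principle is the same: identical tasks make the counts comparable, and the counts make the trends visible.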

25 Jan

Formative vs. Summative

Do you have that sweet-or-salty conversation with people?  For your information, I am salty.  If you actually know me, this is not a surprise.  In reading about formative testing versus summative testing, I have been trying to really understand when each is best.  Is this more a personal preference on the part of the tester, or is it due to the requirements of the client?

Formative testing invites users to talk through choices.  It is useful for its low-tech implementation, and effective for gaining quick insight.  But, there is the challenge of having an intercessor there.  In the end, it is cost-effective, particularly when you don’t have a working prototype.  But, summative testing is useful in seeing if what you have actually works.  Additionally, if done remotely, there isn’t a moderator to intercede.

In terms of personal preference, I really like formative testing, for its mix of qualitative and quantitative data.  But I also believe that it isn’t really a personal-preference thing.  It is not so much whether you are inherently sweet or salty, but rather where in the meal you, as a consultant, have shown up.  If you are invited at the beginning, you get to choose, and you might choose the one that you prefer.  But often you show up after the meal has been ordered, and it is already being cooked.  As such, you can taste the soup on the stove and offer suggestions for improvement, but you don’t get to say which ingredients shouldn’t go in the pot.  Or, more simply: often you are going to be invited to look at an interactive or website that is already made, and so summative evaluation is the best choice for the client.

16 Jan

User Testing vs Research

When I think of the term ivory tower, I have a very clear mental image.  A glistening white tower, rectilinear in its aspect, is poised atop a rocky outcropping, on a lonely island.  The beach, an access point to the tower, has a pier on it.  Museums are like that beach.  They are in the same vicinity as the ivory tower.  They have the same zip code, if you will.  But they look drastically different, and their level of access is incredibly different.

Museums sit at this interstitial point between academia and so many other things: leisure spaces, K-12 classrooms, studio classes, edutainment.  In terms of understanding visitors and the types of digital interpretation that they produce, museums can learn from both worlds.  First, museum studies and information science both offer fruitful research that can inform practice.  But, second, research and user testing are not the same thing.  Research is in-depth and large-scale.  Research is often predicated on big numbers in order to demonstrate statistical significance.  In museums, run by people with graduate degrees earned through rigorous research and rousing defenses, there is an important role for this type of visitor research.

But user testing is a different sort of animal.  It is something that can be done in one day.  It can employ as few as three people to demonstrate a trend.  In other words, you are not writing a full report to show to the board of trustees.  Testing helps you keep the digital project going and make sure you are on the right track.  User testing is a check and a balance rather than chapter and verse on your project.

12 Dec

JOINT STATEMENT FROM MUSEUM BLOGGERS & COLLEAGUES ON FERGUSON

The recent series of events, from Ferguson to Cleveland and New York, have created a watershed moment. Things must change. New laws and policies may help, but any movement toward greater cultural and racial understanding and communication must be supported by our country’s cultural and educational infrastructure. Museums are a part of this educational and cultural network. What should be our role(s)?

Schools and other arts organizations are rising to the challenge. University law schools are hosting seminars on Ferguson. Colleges are addressing greater cultural and racial understanding in various courses. National education organizations and individual teachers are developing relevant curriculum resources, including the #FergusonSyllabus project initiated by Dr. Marcia Chatelain. Artists and arts organizations are contributing their spaces and their creative energies. And pop culture icons, from basketball players to rock stars, are making highly visible commentary with their clothes and voices.

Where do museums fit in? Some might say that only museums with specific African American collections have a role, or perhaps only museums situated in the communities where these events have occurred. As mediators of culture, all museums should commit to identifying how to connect to relevant contemporary issues irrespective of collection, focus, or mission.

We are a community of museum bloggers who write from a variety of perspectives and museum disciplines.  Yet our posts contain similar phrases such as  “21st century museums,” “changing museum paradigms,” “inclusiveness,” “co-curation,” “participatory” and “the museum as forum.”  We believe that strong connections should exist between museums and their communities. Forging those connections means listening and responding to those we serve and those we wish to serve.

There is hardly a community in the U.S. that is untouched by the reverberations emanating from Ferguson and its aftermath. Therefore we believe that museums everywhere should get involved. What should be our role — as institutions that claim to conduct their activities for the public benefit — in the face of ongoing struggles for greater social justice both at the local and national level?

We urge museums to consider these questions by first looking within. Are staff members talking about Ferguson and the deeper issues it raises? How do they relate to the mission and audience of your museum?  Do you have volunteers? What are they thinking and saying? How can the museum help volunteers and partners address their own questions about race, violence, and community?

We urge museums to look to their communities. Are there civic organizations in your area that are hosting conversations? Could you offer your auditorium as a meeting place? Could your director or other senior staff join local initiatives on this topic? If your museum has not until now been involved in community discussions, you may be met at first with suspicion as to your intentions. But now is a great time to start being involved.

Join with your community in addressing these issues. Museums may offer a unique range of resources and support to civic groups that are hoping to organize workshops or public conversations. Museums may want to use this moment not only to “respond” but also to “invest” in conversations and partnerships that call out inequity and racism and commit to positive change.

We invite you to join us in amplifying this statement. As of now, only the Association of African American Museums has issued a formal statement (show link) about the larger issues related to Ferguson, Cleveland, and Staten Island. We believe that the silence of other museum organizations sends a message that these issues are the concern only of African Americans and African American museums. We know that this is not the case. This is a concern of all Americans. We are seeing in a variety of media – blogs, public statements, and conversations on Twitter and Facebook – that colleagues of all racial and ethnic backgrounds are concerned and are seeking guidance and dialogue in understanding the role of museums regarding these troubling events. We hope that organizations such as the American Alliance of Museums; the Association of Science-Technology Centers; the Association of Children’s Museums; the American Association for State and Local History and others, will join us in acknowledging the connections between our institutions and the social justice issues highlighted by Ferguson and related events.

You can join us by…

  • Posting and sharing this statement on your organization’s website or social media
  • Contributing to and following the Twitter tag #museumsrespondtoFerguson which is growing daily
  • Checking out ArtMuseumTeaching which has a regularly updated resource, Teaching #Ferguson: Connecting with Resources
  • Sharing additional resources in the comments
  • Asking your professional organization to respond
  • Checking out the programs at The Missouri History Museum. It has held programs related to Ferguson since August and is planning more for 2015.
  • Looking at the website for the International Coalition of Sites of Conscience, which is developing information on how to conduct community conversations on race