three looks at users
a comparison of methods for studying digital library usage

user-centred design
weak version
follow user-centred guidelines
read prior user studies
follow UCD best practices
strong version
meet our own users
watch their tasks
experience their context
(and)
follow user-centred guidelines
read prior studies
follow best practices

user-centred design
“With rare exception, libraries appear to view think-aloud protocols as the premier research method for assessing the usability of OPACs, Web pages, local digital collections, and vendor products.”
- Covey, 2002, p. 24

user-centred design
strong version
meet our own users
watch their tasks
experience their context
usability testing
whomever we can recruit
watch our tasks
watch them experience our context

target studies
Variations - questionnaire study, contextual inquiry study
Variations2 - questionnaire study, activity logging study

Variations

variations2

three looks at usage
user satisfaction questionnaire (2 studies)
session activity logging
contextual inquiry

questionnaire 1
Variations usage in library
users recruited to fill out survey immediately after use
n = 30
paper-based survey including demographic questions and satisfaction rating items

results (n = 30)
frequency :: once a week (26); more than 5 times per week (7 of the 26)
purpose :: studying for an exam or completing an assignment for class (17); personal listening (5)
satisfaction (1 low, 7 high) :: 5.56 overall mean; all items averaged above 5 except for “slow...fast” (4.77)
likes :: “very useful” (2); “simply tremendous to use...a veritable heaven for all musicians here”
dislikes :: waiting to retrieve recordings, serialized retrievals (7); navigation difficulties, playback delay (2); sound skipping or cutting off (2)
recommendations :: more detail (liner notes, track times, etc.) (3); more music or types of music (2); improved search (2)

questionnaire 2
variations2 usage by a class of 30
users recruited to fill out survey immediately after use
12 responses
web-based survey including demographic questions and satisfaction rating items

results (n = 12)
frequency :: 2x/week (all); > 5x/week (3 of the 12)
typical purposes :: exam prep, class assignment (11); recital or performance prep (11); personal listening (4)
satisfaction (1 low, 7 high) :: 5.38 overall mean; all items > 5 except for “number of screens/windows: confusing…very clear” (4.86)
likes :: availability of scores & song texts (5); speed improvement over Variations (2)
dislikes :: difficulty of handling the many windows (2); many unique responses
recommendations :: want the “repeat” option from Variations (2)

session activity logging
variations2 usage by a class of 30 for a 7-song listening assignment (listen to song, write a short paragraph of analysis)
software logged user actions
quantitative analysis by scripts
detailed manual analysis
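The script-based quantitative analysis might look something like the sketch below, which derives session counts, average session length, and button-press totals from timestamped events. The log format (seconds, session id, action) is an assumption for illustration, not the actual Variations2 log schema.

```python
# Hypothetical log records: (seconds-since-start, session_id, action).
# The real Variations2 logging format is not shown here.
from collections import Counter

log = [
    (0,    "s1", "login"),
    (28,   "s1", "play"),
    (44,   "s1", "pause"),
    (300,  "s1", "play"),
    (1800, "s1", "logout"),
    (0,    "s2", "login"),
    (35,   "s2", "play"),
    (900,  "s2", "logout"),
]

# group events by session
sessions = {}
for t, sid, action in log:
    sessions.setdefault(sid, []).append((t, action))

# session count and average length (first event to last event)
lengths = [events[-1][0] - events[0][0] for events in sessions.values()]
avg_minutes = sum(lengths) / len(lengths) / 60

# total button presses per action type
presses = Counter(a for _, _, a in log if a in ("play", "pause", "stop"))

print(f"sessions: {len(sessions)}, average length: {avg_minutes:.1f} minutes")
print(f"button presses: {dict(presses)}")
```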

results
sessions :: 128, 30 minutes average length
items retrieved :: 3.5 average
maxima :: 7 simultaneous windows; 11 sessions in a day
feature usage ::
bookmarking - 11%
menubar - 17%
view record details - 23%
total button presses ::
stop - 200
pause - 385
play - 588
total manual slider adjustments :: 295

detailed analysis results
“Karita” began her session by clicking on the first song (3:02 in length) on the pilot assignment web page.  It took 28 seconds for her to log in, see the audio player, and hear the song. 16 seconds later, she paused the audio.  81 seconds later Karita clicked on the hyperlink in the audio player to view the detailed bibliographic information of the recording.  After 6 seconds, she clicked on the score link on the assignment web page.  The score viewer took 11 seconds to appear.  45 seconds later, she closed the "view details" window and maximized the score viewer… etc.
only analyzed one full session
revealed no significant issues
many unanswered questions

contextual inquiry
14 observations of normal user activity; 10 were in music library
listening assignments for class
recital planning assignment
preparing personal audition “package”
studying a piece for private lesson
detailed history/analysis of one song
exam preparation
researcher took notes, discussed w/user
analyzed data using contextual design work models

sequence model detail

work note affinity

method comparison

disclaimer
This material is based upon work supported by the National Science Foundation under Grant No. 9909068. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

for further information
http://variations2.indiana.edu
http://mypage.iu.edu/~mnotess
mnotess@indiana.edu