Course Overview: If a research project involves building human-centered computing (HCC) applications or understanding human-computer interactions, then human-subject studies are essential to assessing its success. More and more, we are building computing tools that directly interface with humans: to improve student learning, support group work, build healthy habits, provide better recommendations, crowdsource our next trip, protect our sensitive data from malicious websites, or gather critical insights from a big-data visualization. How do we know that these computing systems are effective? What evidence do we have that the research project was successful? How do we conduct a meaningful assessment? What are the appropriate metrics? How can we improve the system? To answer all these questions, an in-depth understanding of state-of-the-art empirical methods in HCC is essential.
Course Objectives: In this course, students will become broadly familiar with state-of-the-art empirical methods in human-centered computing. Students will learn how to use human-subject data to (1) systematically evaluate human-computer interactions and (2) provide insights that inform the design of human-centered computing systems.
This course will cover a range of empirical methods, such as experiment design, hypothesis testing, log analysis, grounded theory, and crowdsourcing, which will train graduate students to critically examine the implications of human-computer interactions in different CS areas such as security, machine learning, data science, or visualization. Both qualitative and quantitative methods will be covered. Emphasis will be placed on understanding the state of the art of these methods and on developing intuition about which methods are appropriate in various application contexts.
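As a small taste of the quantitative side of the course, the following is a minimal sketch of a two-sample hypothesis test, computing Welch's t statistic in plain Python. The data here are invented purely for illustration (hypothetical task-completion times under two interface conditions); in a real study the statistic would be compared against a t distribution to obtain a p-value.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    # Standard error of the difference between the two sample means.
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Invented task-completion times (seconds) for two interface conditions.
control = [52, 48, 55, 60, 51, 49]
treatment = [44, 47, 42, 50, 45, 43]
t = welch_t(control, treatment)  # larger |t| suggests stronger evidence of a difference
```

In practice one would use a statistics package (e.g., `scipy.stats.ttest_ind` with `equal_var=False`) to obtain the p-value and degrees of freedom; the numbers above are illustrative only.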
Upon successful completion of this course, students will be able to (somewhat independently) design and (reasonably well) conduct human-subject experiments in any area of computing.
Prerequisites: This course will be accessible to students with a wide range of backgrounds in different areas of computer science. Students are required to have a core research area (e.g., security, data science, NLP, or visualization) in which they have taken and passed core courses. Students who are currently active in research are strongly encouraged to enroll. A mere interest in an area without familiarity with its core concepts and research questions is not sufficient.
CS + X students are welcome to take this course. The core-research-area requirement still applies, and that area should be in CS. Examples of X include communication, health, learning sciences, and urban planning.
Some familiarity with statistics will be assumed.
Readings: We will use the following textbook for the course (which is freely available as an ebook from the UIC library):
Ways of Knowing in HCI, edited by Judith S. Olson and Wendy A. Kellogg
Research papers will be assigned based on student interests and listed topics of the course.
Method of Instruction: This class will consist of two primary components: explorations and studios. Explorations cover fundamental empirical methods and the relevant literature (research reading). Each exploration class will consist of two parts: a lecture and a student presentation.
Studios are hands-on labs where students work in groups during the class meeting and the instructor provides immediate feedback. Students will be required to undertake a project of their interest, relevant to any of the general themes of the course. The project will consist of original research and will be in groups of 2 or 3 students (depending on student enrollment). There will be weekly status reviews (for any necessary course correction), a final presentation, and a project report.
Student Deliverables: Students are expected to complete all assigned readings and to actively participate in class discussions. Each student will present at least one research paper to the class. For the final project, students will submit a proposal on their selected topic by week 5 and a final report (structured as a research paper) at the end of the class; students will also give a brief presentation on their findings. The following is a tentative breakdown of the grading plan.
| Student Deliverables | % of total grade |
| --- | --- |
| Discussion points for each assigned reading (textbook and papers) | 15% |
| Class participation + IRB certification | 15% |
Tentative Schedule:

| Week | Topic |
| --- | --- |
| 1 | Introductions and charting the landscape + Research Ethics |
| 2 | Experimental Research / Hypothesis Testing + statistics |
| 3 | Survey Research + Factor Analysis + SEM |
| 4 | Sensor Data Streams |
| 5 | Understanding User Behavior Through Log Data and Analysis |
| 6 | Curiosity, Creativity, and Surprise as Analytic Tools |
| 7 | Field Deployments: Knowing from Using in Context |
| 8 | Project updates + discussions |
| 10 | Research Through Design |
| 12 | Eye Tracking and Identifying Emotions |
| 13 | Thanksgiving break (probably) |
| 15 | Reading and Interpreting Ethnography |
| 16 | Final project presentations |
Exams: There will be no exams.
Overlap with other courses: This course does not have significant overlap with any CS 594 course from previous semesters or other professors. CS 590 (Research Methods in Computer Science) discusses how to do research in computer science and exposes students to methods from different areas of computer science, but this exposure is minimal: one guest lecture from one domain expert. In this proposed course, students will learn how to use different empirical methods in human-centered computing. The focus will be on operationalization of concepts, understanding the reliability and validity of metrics, hypothesis testing, and identifying the suitability of different data analysis methods (e.g., ANOVA, factor analysis, structural equation modeling, or contextual inquiry). Ideally, students will be trained to independently design and conduct human-subject experiments in any area of computing.
CS 422 (UI design and development) and CS 522 (Human-computer interaction) cover usability testing (particularly discount usability), a method for evaluating computer applications with end users. Usability testing is primarily a practitioner's tool, not a rigorous research method. This course will focus not on usability testing but on empirical methods such as hypothesis testing and grounded theory, with an emphasis on knowledge generation and discovery.