Posters, Demos, Workshop Papers
The diversity and scale of available online instructions introduce opportunities but also
pose challenges for users of current software interfaces.
Users have limited computational resources, and thus often make strategic decisions when
browsing, navigating, and understanding instructions to accomplish a task.
These strategic user interactions possess nuanced semantics such as users' interpretations,
intents, and contexts in which the task is carried out.
My dissertation research introduces techniques for constructing data structures that capture
the diverse strategies users employ, preserving the collective nuanced semantics across
multiple strategies.
These computational representations are then used as building blocks for designing novel
interactions that allow users to effectively browse and navigate instructions, and provide
contextual task guidance.
Specifically, I investigate 1) the structure of instructions for task analysis at scale, 2)
the structure of collective user task demonstrations, and 3) the structure of object uses in
how-to videos for tracking, guiding, and searching task states.
My research demonstrates that the user-centered organization of information extracted from
interaction traces enables novel interfaces with contextual task support.
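As a toy illustration of the kind of data structure this points to (the function and trace labels below are hypothetical, not from the dissertation), interaction traces from many users can be merged into a weighted graph whose edge counts preserve how often each strategy's transitions occur:

```python
from collections import defaultdict

def build_strategy_graph(traces):
    """Merge per-user interaction traces (lists of step labels) into a
    single weighted graph: edge weights count how many users took each
    transition, so common and rare strategies are both preserved."""
    edges = defaultdict(int)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return dict(edges)

traces = [
    ["open", "skim", "step1", "step2", "done"],
    ["open", "search", "step2", "done"],
    ["open", "skim", "step1", "step2", "done"],
]
graph = build_strategy_graph(traces)
# the transition ("step2", "done"), shared by all users, gets weight 3
```

Such a weighted representation lets an interface surface the dominant path while still exposing alternative routes users have taken.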
In this position paper, I present a set of data-driven techniques for modeling the learner
workflow and the learning task as graphical representations,
which at scale can create and support learning opportunities in the wild.
I propose that the graphical models resulting from this bottom-up approach can further serve
as proxies for representing the
learnability bounds of an interface.
I also propose an alternative approach that directly aims to "learn" the interaction bounds
by modeling the interface as an agent's sequential decision-making problem.
Then I illustrate how the data-driven modeling techniques and the algorithmic modeling
techniques create a mutually beneficial bridge for advancing the design of interfaces.
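A minimal sketch of the sequential-decision framing (all names and dynamics below are illustrative assumptions, not the paper's model): the interface becomes an environment, a simulated learner becomes the agent, and the number of steps a policy needs to reach the goal state gives a rough interaction bound:

```python
# Toy MDP-style framing: the interface is an environment with a chain of
# states; the agent's policy determines how quickly the goal is reached.
class ToyInterfaceEnv:
    def __init__(self, n_states=5):
        self.n_states = n_states  # the goal is the last state
        self.state = 0

    def step(self, action):
        # action: 1 = advance toward the goal, 0 = explore in place
        self.state = min(self.state + action, self.n_states - 1)
        done = self.state == self.n_states - 1
        reward = 1.0 if done else -0.1  # small cost per interaction step
        return self.state, reward, done

def rollout(env, policy, max_steps=50):
    """Run one episode; return (steps taken, cumulative reward)."""
    env.state, total, steps, done = 0, 0.0, 0, False
    while not done and steps < max_steps:
        _, reward, done = env.step(policy(env.state))
        total += reward
        steps += 1
    return steps, total

env = ToyInterfaceEnv()
steps, ret = rollout(env, policy=lambda s: 1)  # always advance
# an always-advance policy reaches the goal in n_states - 1 = 4 steps
```

Under this framing, comparing rollouts of learned policies against observed user traces is one way the two modeling approaches could inform each other.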
Learner-driven subgoal labeling helps learners form a hierarchical structure of solutions
with subgoal labels, which are conceptual units of procedural problem solving.
While learning with such a hierarchical structure of a solution in mind is effective for
learning problem-solving strategies,
developing an interactive feedback system to support subgoal labeling tasks at scale
requires significant expert effort,
making learner-driven subgoal labeling difficult to apply in online learning environments.
We propose SolveDeep,
a system that provides feedback on learner solutions with peer-generated subgoals.
SolveDeep utilizes a learnersourcing workflow to generate the hierarchical representation of
learner solutions, and uses a graph-alignment algorithm to generate a solution graph by
merging the populated solution hierarchies,
which are then used to generate feedback on future learners' solutions.
We conducted a user study with 7 participants to evaluate the efficacy of our system.
Participants did subgoal learning with two math problems and rated the usefulness of the
system's feedback.
The average rating was 4.86 out of 7 (1: Not useful, 7: Useful),
and the system could successfully construct a hierarchical structure of solutions with
learnersourced subgoal labels.
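A minimal sketch of what such a merge could look like (an assumption-laden toy, not SolveDeep's actual graph-alignment algorithm): subgoal hierarchies from individual learners, encoded as parent-to-children mappings, are aligned on identical labels and unioned into one solution graph:

```python
def merge_solution_graphs(graphs):
    """Merge per-learner subgoal hierarchies (parent -> list of child
    subgoals) into one solution graph by aligning identical labels;
    children are kept in first-seen order, without duplicates."""
    merged = {}
    for graph in graphs:
        for parent, children in graph.items():
            bucket = merged.setdefault(parent, [])
            for child in children:
                if child not in bucket:
                    bucket.append(child)
    return merged

learner_a = {"solve equation": ["isolate x", "simplify"]}
learner_b = {"solve equation": ["isolate x", "check answer"]}
merged = merge_solution_graphs([learner_a, learner_b])
# merged["solve equation"] == ["isolate x", "simplify", "check answer"]
```

Feedback for a future learner could then be generated by checking which branches of the merged graph their solution matches or misses.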
We conducted a series of exploratory studies on sensemaking behaviors people exhibit while
learning from how-to cooking videos. The three scenarios we examined are a) when people seek
alternatives for ingredients, tools, and
actions, b) when people seek explanations or more detail on certain instructions, and c)
when people use text
search versus video search when learning how to cook a dish.
We found that a) people often make arbitrary decisions about substituting ingredients,
cooking tools, or cooking
actions while following instructions, b) people satisfice by verifying knowledge with little
data, not wanting to deviate
from the initially chosen video, and c) people use text search for definitions and
confirmation of substitutions, while
they use video search for explanations and precise details of instruction steps.
In culture analytics, it is important to ask fundamental questions that address salient
characteristics of collective human behavior.
This paper explores how analyzing cooking recipes in aggregate and at scale identifies these
characteristics in cooking culture and answers
fundamental questions like "what makes a chocolate chip cookie a chocolate chip cookie?".
Aspiring cooks, professional chefs, and cooking hobbyists share their recipes online,
resulting in thousands of different procedural instructions toward a shared goal. However,
existing approaches focus merely on analysis at the ingredient level, for example,
extracting ingredient information from individual
recipes. We introduce RecipeScape, a prototype interface which supports visually querying,
browsing and comparing cooking recipes at scale. We also present the underlying
computational pipeline of RecipeScape that scrapes recipes online, extracts their ingredient
and instruction information, constructs a graphical representation, and computes similarity
between pairs of recipes.
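As a toy sketch of the last two pipeline stages (the encoding below is a hypothetical simplification, not RecipeScape's actual representation), a recipe can be reduced to a set of (action, ingredient) edges and pairs of recipes compared with Jaccard similarity:

```python
def recipe_graph(steps):
    """Encode a recipe as a set of (action, ingredient) edges.
    A deliberately simplified stand-in for a full instruction graph."""
    return {(action, ingredient) for action, ingredient in steps}

def similarity(g1, g2):
    """Jaccard similarity between two recipe edge sets."""
    return len(g1 & g2) / len(g1 | g2)

cookie_a = recipe_graph([("mix", "flour"), ("mix", "butter"), ("bake", "dough")])
cookie_b = recipe_graph([("mix", "flour"), ("cream", "butter"), ("bake", "dough")])
sim = similarity(cookie_a, cookie_b)
# 2 shared edges out of 4 total -> sim == 0.5
```

A pairwise similarity matrix computed this way is the kind of input a visual interface can use to cluster and compare thousands of recipes for the same dish.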