Posters, Demos, Workshop Papers
In this position paper, I present a set of data-driven techniques for modeling the learning material,
the learner workflow, and the learning task as graphical representations,
which at scale can create and support learning opportunities in the wild.
I propose that the graphical models resulting from this bottom-up approach can further serve as proxies for
the learnability bounds of an interface.
I also propose an alternative approach that aims to directly "learn" the interaction bounds by
modeling the interface as an agent's sequential decision-making problem.
Then I illustrate how the data-driven modeling techniques and the algorithmic modeling techniques can
form a mutually beneficial bridge for advancing the design of interfaces.

Learner-driven subgoal labeling helps learners form a hierarchical structure of solutions with subgoals,
which are conceptual units of procedural problem solving.
While learning with such a hierarchical structure of a solution in mind is effective for acquiring problem-solving strategies,
developing an interactive feedback system to support subgoal labeling tasks at scale requires significant expert effort,
making learner-driven subgoal labeling difficult to apply in online learning environments. We propose SolveDeep,
a system that provides feedback on learner solutions with peer-generated subgoals.
SolveDeep utilizes a learnersourcing workflow to generate the hierarchical representation of possible solutions,
and uses a graph-alignment algorithm to generate a solution graph by merging the populated solution structures,
which are then used to generate feedback on future learners' solutions.
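In highly simplified form, the merging step can be illustrated as aligning learner solutions, each represented here as a flat sequence of subgoal labels, into a single graph whose edge weights count how many solutions traverse each step. The subgoal labels and the exact-label matching below are placeholder assumptions; SolveDeep's actual graph-alignment algorithm operates on richer hierarchical solution structures.

```python
from collections import defaultdict

def merge_solutions(solutions):
    """Merge subgoal-label sequences into one weighted solution graph.
    Nodes are subgoal labels; an edge (a, b) is weighted by the number
    of learner solutions that move from subgoal a to subgoal b.
    (Exact-label matching is a simplification of real graph alignment.)"""
    graph = defaultdict(int)
    for steps in solutions:
        for a, b in zip(steps, steps[1:]):
            graph[(a, b)] += 1
    return dict(graph)

# Two hypothetical learner solutions to the same math problem:
solutions = [
    ["isolate variable", "factor", "solve for roots"],
    ["isolate variable", "apply quadratic formula", "solve for roots"],
]
graph = merge_solutions(solutions)
print(graph[("isolate variable", "factor")])  # -> 1
```

Feedback for a future learner could then be generated by locating their partial solution in this graph and surfacing the subgoals that peers most commonly reached next.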
We conducted a user study with 7 participants to evaluate the efficacy of our system.
Participants performed subgoal learning with two math problems and rated the usefulness of the system's feedback.
The average rating was 4.86 out of 7 (1: Not useful, 7: Useful),
and the system could successfully construct a hierarchical structure of solutions with learnersourced subgoal labels.

We conducted a series of exploratory studies on the sensemaking behaviors people exhibit while watching how-to-cook
videos. The three scenarios we examined are a) when people seek alternatives to ingredients, tools, and
actions, b) when people seek explanations or more detail on certain instructions, and c) when people choose text
search versus video search while learning how to cook a dish.
We found that a) people often make arbitrary decisions on substituting ingredients, cooking tools, or cooking
actions while following instructions, b) people satisfice by verifying knowledge against little data and by being reluctant to deviate
from the initially chosen video, and c) people use text search for definitions and for confirming substitutions, while
they use video search for explanations and precise details of instruction steps.

In culture analytics, it is important to ask fundamental questions that address salient characteristics of collective human behavior.
This paper explores how analyzing cooking recipes in aggregate and at scale identifies such characteristics of cooking culture and answers
fundamental questions like "what makes a chocolate chip cookie a chocolate chip cookie?".
Aspiring cooks, professional chefs, and cooking hobbyists share their recipes online, resulting in thousands of different procedural instructions toward a shared goal. However, existing approaches
focus merely on ingredient-level analysis, for example, extracting ingredient information from individual
recipes. We introduce RecipeScape, a prototype interface that supports visually querying, browsing, and comparing cooking recipes at scale. We also present the underlying
computational pipeline of RecipeScape that scrapes recipes online, extracts their ingredient and instruction information, constructs a graphical representation, and computes similarity between pairs of recipes.
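As a rough sketch of the final pipeline stage, a recipe can be represented as a small graph of cooking actions applied to ingredients, and pairwise similarity approximated with a Jaccard index over the node and edge sets. The toy recipes below, and the use of Jaccard similarity, are illustrative assumptions; the actual RecipeScape pipeline computes similarity over richer graphical representations of full instruction text.

```python
def recipe_graph(steps):
    """Build a tiny graph from (action, ingredient) instruction steps:
    nodes are actions and ingredients, edges connect an action to the
    ingredient it is applied to."""
    nodes, edges = set(), set()
    for action, ingredient in steps:
        nodes.update([action, ingredient])
        edges.add((action, ingredient))
    return nodes, edges

def jaccard(a, b):
    """Jaccard index: |intersection| / |union| of two sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def similarity(recipe1, recipe2):
    """Average Jaccard similarity over node and edge sets, a simple
    stand-in for the pipeline's pairwise recipe similarity."""
    n1, e1 = recipe_graph(recipe1)
    n2, e2 = recipe_graph(recipe2)
    return 0.5 * (jaccard(n1, n2) + jaccard(e1, e2))

# Two hypothetical chocolate chip cookie recipes:
cookie_a = [("mix", "flour"), ("mix", "sugar"), ("fold", "chocolate chips")]
cookie_b = [("mix", "flour"), ("mix", "butter"), ("fold", "chocolate chips")]
print(round(similarity(cookie_a, cookie_b), 2))  # -> 0.58
```

With pairwise similarities in hand, recipes for the same dish can be clustered and browsed at scale, which is what surfaces the shared "core" of a dish behind questions like the chocolate chip cookie one above.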