I build the systems that help organizations turn user understanding into better product and learning decisions at scale.
I'm a mixed-methods UX research leader and former STEM educator who builds the systems that turn fragmented user signals into institutional wisdom. The throughline across a decade of work is the architecture of understanding: scaling democratized research programs, governing AI-enhanced workflows with strict human-in-the-loop guardrails, and embedding rigorous methodology directly into the product lifecycle.
I bridge the technical execution of a Staff Researcher with the grounded empathy of a former K-12 teacher. That combination shows up in three places.
Research as a shared practice. Most recently, I led the UX research function at Guild. I grew the function from just me into a shared organizational practice supporting more than 100,000 working adult learners across product, design, marketing, strategy, coaching, and operations. The templates, training, and tiered governance I built let partner teams run their own tactical research while the central team focused where deep rigor was non-negotiable.
AI transformation, governed. Starting in early 2023, I led Guild's company-wide adoption of generative AI: AI-enabled research operations, human-in-the-loop validation, vendor evaluation, governance for regulated employer partners, and a sustained enablement program of training, working groups, and weekly translation of frontier developments for a non-technical audience.
Methodological depth. Underneath it all is the craft: usability testing, in-depth interviews, concept and prototype validation, surveys at scale, longitudinal and diary studies, behavioral segmentation, cross-product synthesis, funnel diagnostics, and research with constrained populations (working adults, shift workers, students). Earlier in my career I was a UX researcher at LogMeIn embedded in the GoTo product cycle, and a researcher at the Norwegian Geotechnical Institute in Oslo studying socioeconomic vulnerability. Before that, I taught high school sciences in the Bay Area.
I hold a PhD in Education and Quantitative Methods from UC Santa Barbara, with earlier graduate work in environmental and earth system science (Stanford) and engineering mathematics (Dalhousie).
My current independent work focuses on responsible AI, especially as it intersects with how people learn and how products affect users. I do applied research and synthesis on AI's effects on youth wellbeing, with co-authored pieces in Psychiatric Times and After Babel, and expert testimony to state legislatures.
Confidentiality has been respected throughout — proprietary details have been anonymized or generalized.
Built UX research as a function, a shared practice, and a culture across the company. By the time I left, research wasn't a checkbox or a service: it was a continuous input shaping product, strategy, and operations across every part of the business.
Guild was early-stage, growing fast, and figuring out what working adults needed from an education benefit. Research was scattered across functions, with no shared standard for how questions got asked or how answers traveled back into decisions. I was hired onto the product design team, but as I built credibility on early projects, requests started coming from outside product.
I built research as a shared organizational practice from the start. Templates, training, and a tiered model let partner teams own tactical research while the central team focused where rigor mattered most. By the end, "what does the user voice say?" was a default question in product reviews, strategy meetings, and roadmap discussions.
Each layer answered a different question. Templates let designers and PMs do their own tactical research without reinventing the basics. Standards kept quality from drifting as the team grew. Curriculum turned partner teams into capable collaborators. The system worked because all three did.
The methodological tension between what the rigorous answer would take and what the business needed by Friday was constant. Rigor isn't a fixed bar; it's calibrated to the consequences of being wrong. Some questions warranted six-month studies. Others needed a tight survey by end of week. The tiered model below taught partner teams how to ask "which tier is this?" before "who runs it?", and it kept the central team focused on the studies that genuinely required depth.
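As a rough illustration of that triage question, the sketch below shows the shape of the logic partner teams internalized. The tier names, criteria, and thresholds are hypothetical, not Guild's actual rubric:

```python
# Hypothetical sketch of the tiered-triage question "which tier is this?"
# Tier names and criteria are illustrative, not Guild's actual rubric.

def triage(reversible: bool, blast_radius: str, novel_question: bool) -> str:
    """Map the consequences of being wrong to a research tier."""
    if novel_question or blast_radius == "company":
        return "Tier 3: central team, full study, deep rigor"
    if not reversible or blast_radius == "product-line":
        return "Tier 2: partner-led, central review of method and sampling"
    return "Tier 1: partner-led from templates (usability test, tight survey)"

# A cheap, reversible copy tweak stays with the partner team...
print(triage(reversible=True, blast_radius="feature", novel_question=False))
# ...while a novel, company-wide strategic question goes to the central team.
print(triage(reversible=False, blast_radius="company", novel_question=True))
```

The point of the exercise was never the exact thresholds; it was that "how bad is it if we're wrong?" got asked before any study was scoped.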
By 2024, partner teams across the company were running their own usability tests, surveys, and discovery work as routine practice. Research was no longer a checkbox or a service: it was an asset that informed strategy, shaped iteration, and was part of how decisions actually got made. The cultural shift mattered as much as the operational one.
Starting in early 2023, weeks after ChatGPT's launch, I led the company's adoption of generative AI as both a research operations capability and a workforce-wide transformation.
ChatGPT launched in late November 2022. Early in 2023, I started using it in my own research work to test where it actually helped (synthesis, lit review, prompt design for survey items, faster pattern-finding) and where it produced confident nonsense.
The work started informally. Colleagues began bringing me questions ("Can I use this for X? Is this risky? What tool should I pick?") that needed answers nobody was set up to give. By 2024, "AI Transformation" was part of my title. The role grew around the work.
I structured the AI work the same way I'd structured the research function: enablement for non-experts, governance for shared practice, and a regular rhythm to keep the company connected to a fast-moving field.
The weekly newsletter was the most visible artifact of the program. Each issue translated frontier AI research, product releases, and policy debates for a non-technical company audience: Stanford labor papers, OpenAI benchmarks, compute-economics primers, the evolving conversation from "AI safety" to "responsible AI." Every issue followed the same shape: a current development, a plain-language explanation of why it mattered for our work, links to original sources, a short section on how teams across Guild were trying things and what they were finding, and a "what's changing" framing for decisions ahead.
Around the newsletter sat the practice that made it work. A cross-functional working group convened regularly to surface use cases, share what was working, and resolve hard questions before they became fires. I consulted directly with leaders across the company on AI choices their teams were facing. I gave talks at all-hands meetings to a company of more than 1,000 people and built training programs that ran across functions. AI transformation became one of Guild's small handful of formal annual priorities, with measurable goals I owned.
An early, confident, wrong AI output reaching a customer, partner, or board is the fastest way to lose trust in AI inside a company. My role wasn't just to evangelize AI. It was to know when an AI output was credible and when it wasn't, and to teach others to do the same.
Guild's employer partners ran in regulated industries (healthcare, financial services, retail) where data privacy and AI governance were not abstract. We worked closely with our partners' security and compliance teams on clear governance: what data could go through which models, how outputs were validated before reaching customers, and where human review was non-negotiable.
The principle: AI's value in research isn't speed; it's speed plus rigor. Faster synthesis only counts if the synthesis is right. The human in the loop is the one who knows when it isn't.
Internal processes that previously took days or weeks of manual spreadsheet work were redesigned with simple AI workflows and human review checkpoints. Cycle time dropped substantially, and QA quality improved because the AI surfaced patterns reviewers could check against.
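A minimal sketch of that checkpoint pattern, with the model call stubbed and the record fields hypothetical:

```python
# Minimal human-in-the-loop checkpoint: AI drafts, a reviewer approves.
# The model call is stubbed and the record fields are hypothetical.

def call_model(record: dict) -> dict:
    """Stand-in for the AI step that drafts a summary from one record."""
    return {"id": record["id"], "summary": f"draft summary of {record['id']}"}

def human_review(record: dict, draft: dict) -> bool:
    """Checkpoint: a reviewer compares the draft against the source record."""
    answer = input(f"Approve draft for {record['id']}? [y/n] ")
    return answer.strip().lower() == "y"

def process_batch(records: list[dict]) -> list[dict]:
    """Nothing leaves the pipeline without explicit human sign-off."""
    approved = []
    for record in records:
        draft = call_model(record)
        if human_review(record, draft):
            approved.append(draft)
    return approved
```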
For mature questions like value-prop wording and naming on familiar terrain, custom GPTs grounded in our accumulated research produced directionally reliable answers without standing up new studies. The boundary mattered: we used these tools where the data was deep, not on genuinely new questions where AI would have been guessing.
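That boundary is easy to express as a gate: answer from the corpus only where the corpus is deep. In the illustrative sketch below, the retrieval and synthesis steps are stubs and the depth threshold is made up:

```python
# Illustrative boundary check: answer from accumulated research only where
# the data is deep. Retrieval and synthesis are stubs; the threshold is made up.

MIN_SOURCES = 5

def retrieve(corpus: list[dict], topic: str) -> list[dict]:
    """Naive keyword retrieval, standing in for the real grounding step."""
    return [doc for doc in corpus if topic.lower() in doc["text"].lower()]

def synthesize(question: str, sources: list[dict]) -> str:
    """Stand-in for the grounded model call."""
    return f"Answer to {question!r} grounded in {len(sources)} studies."

def grounded_answer(corpus: list[dict], question: str, topic: str) -> str:
    sources = retrieve(corpus, topic)
    if len(sources) < MIN_SOURCES:
        # Genuinely new terrain: the model would be guessing, so decline.
        return "Insufficient prior research on this topic; scope a new study."
    return synthesize(question, sources)
```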
Cross-source synthesis on long-running studies became tractable for the first time. Researchers could query against months or years of accumulated qualitative data rather than rely on memory and notes. Catalog navigation, retention, and longitudinal projects all benefited.
The team developed a working pattern for AI-assisted synthesis: AI surfaces possible themes, the researcher tests them against the raw data, the researcher writes the final claim.
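In sketch form, with the model call stubbed and the matching deliberately naive, the division of labor looked like this:

```python
# Sketch of the pattern: the model proposes, the researcher disposes.
# propose_themes is a stubbed model call; excerpt matching is naive.

def propose_themes(transcripts: list[str]) -> list[str]:
    """AI step: candidate themes only, with no authority of their own."""
    return ["holiday shock", "identity transition"]

def supporting_excerpts(theme: str, transcripts: list[str]) -> list[str]:
    """Researcher step: a theme survives only if raw excerpts back it."""
    key = theme.split()[0].lower()
    return [t for t in transcripts if key in t.lower()]

def synthesis_pass(transcripts: list[str]) -> list[dict]:
    claims = []
    for theme in propose_themes(transcripts):
        evidence = supporting_excerpts(theme, transcripts)
        if evidence:
            # The final claim is written by the researcher, and every
            # claim ships with the excerpts that support it.
            claims.append({"theme": theme, "evidence": evidence})
    return claims
```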
What mattered most was the rhythm: a weekly artifact that signaled the company was paying attention, a working group that turned questions into shared practice, and a culture that took AI seriously without taking it as gospel. By the time I left in mid-2025, AI was woven into how Guild worked.
A longitudinal mixed-methods study with 12 working adult learners over a full year. Two patterns surfaced in the diaries and were validated against behavioral data at scale. The findings reshaped how four teams across the company understood retention.
Guild's members are working adults, often full-time frontline workers, often parents, whose employers cover the cost of going back to school. When a learner who started a program never returns, the loss compounds across three stakeholders: the worker who is trying to build a future for themselves, the employer partner (Walmart, Target, Chipotle, Humana, and others) who funded the benefit to retain talent, and Guild, whose revenue depends on members persisting through their programs.
The behavioral and survey infrastructure already in place was extensive. Behavioral data captured what learners did at every step of the journey: applications, accepts, program starts, first logins, in-program activity, pause triggers, graduation. Survey data captured what learners said at key moments: ingoing expectations, first-experience signals, quarterly pulse checks, pause-triggered surveys, outcomes.
What the existing data could not tell us was the texture of the weeks before a pause. We knew when learners paused, what they reported at the moment of pause, which segments paused at higher rates, and how long pauses tended to last. We did not know what differentiated learners who came back from those who did not, or what the lived experience of trying to re-engage actually looked like. The moment a learner paused was not the moment the pause began.
I designed a year-long video diary study with 12 working adult learners, purposively sampled across segments, employer partners, and program types. Each participant recorded short asynchronous video diaries (5–10 minutes) every week, and we held quarterly 60-minute interviews to go deeper. The full study ran a calendar year per participant.
The diary study traded sample size for time depth. Time depth was what was missing.
Disengagement was a cascade that began weeks before a learner disappeared from the platform. Momentum compounded in both directions: wins built on wins, setbacks built on setbacks. Across the 12 participants, the shapes of their years were different in detail but recognizably the same in structure: an initial lift, an inevitable dip, and then a fork. The spiral was the structure of the experience itself, not a property of any individual learner.
The cascade was not unusual. It was the structure. Each participant's year contained its own version: financial stress, caregiving demands, a health event, a work shift, an academic setback. The shocks were not preventable. What differentiated learners who recovered from those who did not was what happened after the shock, at the fork.
At every dip there is a branching: something pulls the learner back, or the spiral compounds further. Life shocks cannot be prevented, but the conditions at the fork can be shaped. And the fork arrives later than expected: the experience of being behind is itself a signal of the demotivation spiral, not just a precursor to it. The window for intervention is not the moment of the shock; it is the weeks of trying to come back.
Once the spiral framework was named, we tested it against the broader behavioral data. The diaries had surfaced the holidays as a spiral trigger: retail and hospitality hours spike, caregiving load increases, financial stress stacks. The naive expectation was a V-shape, with learners dropping off at the holidays and rebounding in January once life normalizes. The spiral framework predicted otherwise: recovery should lag the shock, because the fork comes in the weeks of trying to come back, not the moment the calendar turns.
Working adults front-loaded coursework in their first term, hoping to accelerate a slow part-time pace. Coaches encouraged this; "how much can you handle?" was the default conversation. But semester one is where academic re-acclimation happens, especially in gateway math and statistics. The heaviest loads collided with the hardest content. The capacity frame was wrong: semester one is not a test of throughput; it is a re-entry into the role of student.
Validation at scale: behavioral data confirmed the pattern. Learners who took 1–2 courses in their first term persisted at meaningfully higher rates than those who took 3 or 4+. The diaries had explained why.
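The shape of that at-scale check is simple to express in pandas. The file name, column names, and persistence definition below are hypothetical stand-ins for the actual behavioral schema:

```python
# Shape of the at-scale check (column names and schema are hypothetical).
import pandas as pd

df = pd.read_csv("learners.csv")  # one row per learner

df["load"] = pd.cut(
    df["first_term_courses"],
    bins=[0, 2, 3, float("inf")],
    labels=["1-2 courses", "3 courses", "4+ courses"],
)
# Persistence rate (share still enrolled the following term) by first-term load.
print(df.groupby("load", observed=True)["persisted_next_term"].mean())
```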
The spiral framework gave coaches vocabulary for disengagement patterns they could feel but could not name. Outreach moved from generic re-engagement nudges to fork-aligned, learner-specific conversations. First-semester course-load recommendations shifted: coaches recommended fewer courses up front, especially in gateway math. Lighter-load learners persisted at higher rates.
Outreach timing moved from immediate post-pause contact to the weeks after, when learners were actually trying to come back. Tone moved from performance language ("don't fall behind") to momentum language ("pick up where you left off"). Holiday-aware re-engagement campaigns were rebuilt around the post-shock recovery window, not the shock itself.
The pre-existing pause survey was rewritten to feed a real-time retention dashboard, segmented by partner and industry. Industry-specific shock calendars (retail's back-to-school plus holidays; healthcare's holidays plus shift-coverage demands) became standing input to campaign timing and coach workload planning. The spiral framework was adopted across product, coaching, marketing, and strategy as a shared vocabulary, kept alive in strategic conversations through a biweekly newsletter that surfaced ongoing learner experience as strategy was built.
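A shock calendar is ultimately a small config. The sketch below is illustrative: the date windows are invented, and the recovery-window helper encodes the study's core timing insight, that outreach keys off the weeks after the shock ends:

```python
# Illustrative shock-calendar config. Windows are invented; the industries
# match the examples above. Campaign timing keys off the recovery window
# that follows each shock, not the shock itself.
from datetime import date, timedelta

SHOCK_CALENDARS = {
    "retail": [
        ("back-to-school", date(2024, 8, 1), date(2024, 9, 10)),
        ("holidays", date(2024, 11, 15), date(2024, 12, 31)),
    ],
    "healthcare": [
        ("holidays", date(2024, 11, 15), date(2024, 12, 31)),
    ],
}

def recovery_window(shock_end: date, weeks: int = 4) -> tuple[date, date]:
    """Re-engagement outreach lands in the weeks after the shock ends."""
    return shock_end + timedelta(days=1), shock_end + timedelta(weeks=weeks)
```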
Disengagement is a gradual erosion of momentum, not a sudden decision. The work that matters most in these journeys is not preventing the shock. It is meeting people at the fork.
A funnel step had been declining year-over-year since 2020. A mixed-methods diagnostic showed the failure mode was inertia, not friction, and four teams coordinated to reverse the decline.
One step in Guild's funnel, between application approval and program start, had been declining year-over-year since 2020. A slow erosion, easy to miss in any one quarter, but compounding. These were people who had cleared eligibility, completed an application, and been approved. Then a meaningful share never started.
Existing assumptions ranged from "life got in the way" to "the product was confusing." Neither was specific enough to fix.
A touchpoint survey was already running at this funnel step, capturing hundreds of responses per month on self-reported readiness, intent, and barriers. It told us what learners said and how many said it, but the open-text answers were short and the patterns ambiguous.
I added eight in-depth interviews with members who had paused at exactly this step. Eight was enough to surface the language and lived experience of the gap. The interview findings then sharpened the survey itself, with better questions, better prompts, and better coding for the next round of responses.
Members who did not start were not blocked by obstacles. They were members for whom nothing in particular had happened. The gap simply had nothing in it strong enough to hold against the rest of their lives.
Guild's members are working adults, out of school for years or decades. Many are frontline shift workers, often parents, with little slack. School is a real source of hope; the daily reality of their lives is just as real. Four reasons the gap was particularly hard for this population:
The application gave members a concrete near-term task with clear steps. Approval replaced that with "wait six weeks for your program to start," exactly the kind of long-horizon commitment hardest for people whose attention is already consumed by this week's shift, this month's rent, or this morning's school pickup.
The application was driven by hope. The gap was where hope had to convert into logistics: how to do homework after a 10-hour shift, where the kids would be, what happens if a class is missed. Members had not yet done that practical thinking, because before approval they were focused on getting in. Reality showed up in the gap, and reality was harder than the emotional yes had accounted for.
In the application phase, members were actively making time for school: researching programs, talking to coaches, getting employer signoff. The activity was a commitment device. Approval ended the work, and nothing replaced it. When a kid got sick, or a shift changed, or rent came due, school had no anchor strong enough to defend its place.
Most had been out of school for years. There is real psychological distance between "I got accepted" and "I am a student." The gap was the period in which that identity transition had to happen, but there were no peers to meet, no classroom to walk into, no syllabus. Members were left to make the shift alone, often in environments where "student" was not a role anyone around them recognized.
The four reasons reinforced each other. Long-term planning is harder under stress. The emotional yes weakens when the practical reality has never been worked through. Life shocks land harder when there is no replacement work and no student identity to defend. The gap was not empty time. It was a period that required active support of meaning, momentum, and identity.
The interventions did not just "fill the gap." Each one targeted a specific reason the gap decayed:
Coaches reached out within the first week of approval, not to check in but to help members do the practical thinking the emotional yes had skipped: how school would fit into an actual schedule, what the first week of class would look like, what to plan for in the first month. This converted hope into logistics, and replaced the activity of applying with the activity of preparing.
Blog posts and emails featured learners further along in the journey, planning guidance written for working adults balancing shifts and caregiving, and previews of what the first weeks would actually feel like. Less brand voice, more peer voice. The shift supported the identity transition from "I got accepted" to "I am a student."
In product: orientation content, a personal to-do list for the weeks before class, and connection to peers starting at the same time. Each task was small enough to feel doable inside a busy life and concrete enough to anchor the abstract program in near-term action. This replaced long-horizon abstraction with weekly-feeling structure.
The three interventions spanned coaching, communications, and product, plus operations to support implementation. No single team owned the funnel step, and the obvious failure mode was research landing in a deck and dying there as handoffs got lost between teams.
I used a RAPID decision framework to assign explicit cross-functional ownership for each intervention. RAPID names five roles for every decision: who Recommends, who must Agree, who Performs the work, who provides Input, and who Decides.
For each intervention (coaching outreach changes, content rebuild, in-product to-do list), we named the holder of each role. That structure surfaced disagreements early, prevented work from getting stuck in the seams between teams, and meant each shipped change had a single accountable owner.
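The assignments themselves fit in a small table. The sketch below is purely illustrative; the team names are hypothetical, and the point is the shape, with exactly one decider per shipped change:

```python
# Hypothetical RAPID assignments; team names are illustrative.
RAPID = {
    "coaching outreach changes": {
        "Recommend": "UX research",
        "Agree": "Coaching leadership",
        "Perform": "Coaching team",
        "Input": "Data science",
        "Decide": "Head of member experience",
    },
    "in-product to-do list": {
        "Recommend": "UX research",
        "Agree": "Design",
        "Perform": "Product engineering",
        "Input": "Coaching, marketing",
        "Decide": "Product lead",
    },
}

# Quick audit: every intervention names a single accountable decider.
for intervention, roles in RAPID.items():
    assert "Decide" in roles, intervention
```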
The funnel step rate improved measurably after the changes shipped, and the direction held over time. Just as importantly, the framing became a shared mental model across the teams that owned the journey: motivation is perishable in the gaps between commitment and action, and the gap requires active support of meaning, momentum, and identity. Subsequent work on other funnel steps adopted the same lens.