
Responsible AI in practice

Categorizing POVs on AI for Improved Cataloging and Design of RAI Tools

The University of Texas at Austin (2024)

SITUATION

Closing the gap between ethical AI principles and responsible AI practice requires usable pathways, not more resources.

Introduction
Responsible AI (RAI) guidance has proliferated to the point of saturation, yet many resources still feel hard to apply in real settings. Practitioners need to find relevant tools quickly, understand them, and use them in context—but existing catalogs are fragmented and decentralized, and rarely designed around how people actually seek, evaluate, and operationalize RAI resources. Without better discoverability and context fit, RAI work becomes inconsistent, inequitable, and difficult to scale across teams.

Key Takeaway
When RAI tools are designed and cataloged around stakeholder context, decision workflows, and usability, practitioners can more confidently select, trust, and apply them. This reduces “principles fatigue” and increases real-world adoption.

Goals

Generate human-centered insights about what professionals value in RAI tools, what they struggle with, and what a centralized, accessible RAI resources library/cataloging system should enable. 

 

Research Questions

  • RQ1: What RAI issues do professionals with different situated perspectives want addressed, and why? 

  • RQ2: How do professionals want RAI tools to be designed/implemented and RAI practices to be measured? 

  • RQ3: Which stakeholders and human factors are relevant to RAI tool design and to how tools might be cataloged, and why? 

Desired Outcomes

  • Greater accessibility of relevant RAI tools for practitioners (especially sole proprietors, startups, governance teams, ethicists, consultants). 

  • Usability insights for creators of RAI resources/tools (agencies, NGOs, governance marketplace players). 

  • Design insights for an entity that could build a free, centralized RAI resources library. 

Challenges and Risks

  • Sensitive domain constraints: confidentiality and organizational risk forced some lines of inquiry to stay deliberately vague. 

  • Sampling skew: recruitment may bias toward UX/RAI advocates; AI developers were absent; participants skewed to startups/sole proprietors.

  • Concept ambiguity: inconsistent distinction between “AI ethics” and “RAI” occasionally confused participants. 

  • Scope evolution: research goals shifted as the literature review continued after the participant study.

  • Analysis overhead: transcript cleanup and synthesis required significant tooling and time. 

TASK

I planned and executed a participatory study to translate "RAI in the abstract" into practical, user-centered design insights over 10 months.

Team

  • Lead Researcher: Vanessa Sanchez, MSIS candidate

  • Supervisors: Dr. Min Kyung Lee, Dr. Kenneth R. Fleischmann, Dr. John Neumann 

 

What I Did

  • Framed the problem: Why RAI tools are hard to discover, evaluate, and apply. 

  • Designed a participatory research approach to elicit needs from cross-sector professionals (IRB exemption obtained). 

  • Synthesized findings into themes + actionable implications for tool design and cataloging. 

 

Timeline

  • Nov 2023–July 2024: Ongoing literature review that shaped/shifted the research gap. 

  • March 2024–April 2024: Study execution (survey + 12 remote sessions with workshop components).

  • May 2024–Aug 2024: Analysis + short reports + write-up with graphics.

Tools

  • Zoom (remote sessions). 

  • FigJam, Figma, Word, Notion, Google Scholar

  • Otter.ai + ChatGPT + Notion AI used to support transcript accuracy/insight extraction during analysis.

Skills

  • Mixed-methods qualitative research design (survey + interviews + workshop component) 

  • Participatory facilitation (remote workshop-style elicitation) 

  • Thematic analysis + cross-analysis 

  • Systems thinking: mapping needs to inform cataloging/library requirements 

Deliverables

Capstone Master's Report, "short reports" for participants, conference poster

ACTION 1

I defined the problem in "user terms."

Framing decision

I centered the study on access, usability, relevance, customization, and evaluation, rather than just “what is ethical AI.” 

 

Why it mattered

This positioned the research toward practical adoption barriers and the design of a library/catalog that supports real professional workflows. 

Output

A thesis stance: considering human factors and situated perspectives leads to more successful design + cataloging of RAI tools. 

ACTION 2

I ran participatory research with 14 cross-sector practitioners.

Design of the Study

  • Survey of 14 participants

  • 12 follow-up remote 1:1 Zoom sessions that included a workshop component


Participant Diversity
AI ethicists, startup consultants, a lawyer, engineers, a scientist, UX professionals, researchers across industries, various levels of seniority, and various levels of familiarity with AI.

Execution Choices

  • Iterated the interview protocol after early sessions when participants needed a stronger context setup.

  • Improved session flow continuously with participant feedback.

ACTION 3

I synthesized themes into design insights for cataloging + tools.

Theme synthesis

  1. Professionals struggle to determine which RAI tool fits their context. 

  2. Strong need for customized solutions and better coordination in RAI practice (often via third-party consultancy). 

  3. Strong emphasis on inclusiveness + accessibility in RAI tool design. 

 

What Participants Valued in “Ideal” Tooling

User-friendly, inclusive, credible, transparent, context-aware, aligned to org strategy/values. 

 

Prioritization Signal Captured

Participants placed the highest value on (1) governance and ethical oversight, then (2) testing/validation/compliance, (3) operations optimization, (4) community/collaboration, (5) training/education, (6) transparency/accountability, and (7) customization/context-specific solutions. 

RESULTS

A paper and a practitioner-centered map of what usable, context-aware RAI tools must enable, and of what would make a centralized RAI library usable.

Results Summary

Across a diverse group of professionals, the study surfaced that the biggest barrier is not the absence of RAI resources—it’s decision paralysis and mismatch: people can’t tell what applies to their context, want more tailored guidance and coordination support, and consistently call for tools that are accessible, inclusive, credible, and measurable. These insights translate into concrete requirements for a centralized cataloging system that supports different “entry points” and real professional constraints. 

 

Key Findings

  • Tool choice is unclear even when resources exist. 

  • Demand for standards and clearer industry guidance is strong. 

  • Accessibility/inclusivity isn’t “nice to have”; it’s core to adoption. 

 

Deliverables Produced

  • Paper synthesizing insights + implications for a centralized RAI resources cataloging system. 

  • Categorized findings across participant contexts, needs, and measurement ideas

 

Credibility Notes

Findings should be read alongside several limitations: sampling skew, the absence of AI developers, occasional ambiguity between “RAI” and “AI ethics,” confidentiality constraints, and a research gap that evolved during the study. 

Read the full paper

RELEVANCE

Why this project matters for product teams building RAI infrastructure

1) Human-Centered Responsible AI in Practice

I translated “principles fatigue” into practitioner needs, focusing on adoption barriers and measurable operations. 

2) Designing Information Systems for Real Decision-Making

This work is fundamentally about discoverability, evaluation, and workflow fit, which are the mechanics of an effective cataloging system and resource library.

3) Research-to-Design Translation

I shaped themes into actionable future artifacts: personas, journeys, affordances, classification + schema for resource records, and early user testing plans. 

EVOLUTION

Next time, I would tighten the study framing earlier to reduce participant ambiguity and improve sample balance.

Process Insights

  • What went well: The participatory format surfaced practical needs (tool choice confusion, demand for standards, accessibility expectations) that principle-only analyses often miss. 

  • How to do better next time: Narrow language earlier (RAI vs AI ethics), strengthen context onboarding from session one, and rebalance recruitment toward AI developers and additional underrepresented professional roles. 

Future work can define a user-tested blueprint for a centralized RAI resource library.

Next step to build on these findings

Create the conceptual system package (personas + journeys + taxonomy/schema + core affordances) and validate it through early user testing and stakeholder feasibility interviews with potential “owner” entities (government / NGO / academia). (p.41–42)

Future work directions

  • Develop persona groups + empathy maps for key RAI stakeholder segments. 

  • Map user journeys by entry point (technical understanding vs regulatory vs outcomes vs strategy). 

  • Define system affordances for a library: filtering, context matching, credibility signals, standards alignment. 

  • Propose a classification system for “RAI resources” + a metadata schema for resource records. 

  • Conduct early concept testing of these artifacts (no build) with practitioners. 

  • Interview potential library “owners” (e.g., public sector / NGOs / academia) to evaluate feasibility and governance.

  • Expand recruitment to include AI developers and more enterprise governance contexts to test whether needs diverge.
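To make the classification and affordance ideas above concrete, here is a minimal sketch of what a metadata schema for resource records and a context-matching filter could look like. All field names, categories, and example values are hypothetical illustrations, not artifacts from the study.

```python
from dataclasses import dataclass, field

# Hypothetical schema for one record in a centralized RAI resource library.
# Field names and controlled-vocabulary values are illustrative assumptions.
@dataclass
class RAIResource:
    title: str
    resource_type: str                 # e.g. "checklist", "audit framework", "toolkit"
    entry_point: str                   # "technical" | "regulatory" | "outcomes" | "strategy"
    stakeholders: list[str] = field(default_factory=list)        # e.g. "governance team"
    credibility_signals: list[str] = field(default_factory=list) # e.g. "peer-reviewed"
    standards_alignment: list[str] = field(default_factory=list) # e.g. "NIST AI RMF"

def match_context(catalog: list[RAIResource], entry_point: str,
                  stakeholder: str) -> list[RAIResource]:
    """Context matching: filter the catalog by a user's entry point and role."""
    return [r for r in catalog
            if r.entry_point == entry_point and stakeholder in r.stakeholders]

catalog = [
    RAIResource("Model audit checklist", "checklist", "regulatory",
                stakeholders=["governance team", "consultant"],
                credibility_signals=["peer-reviewed"],
                standards_alignment=["NIST AI RMF"]),
    RAIResource("Fairness metrics primer", "toolkit", "technical",
                stakeholders=["engineer", "startup"]),
]

# A governance-team user entering from the regulatory side sees only the audit checklist.
hits = match_context(catalog, "regulatory", "governance team")
```

Even a toy schema like this surfaces the design questions the study raises: which entry points and stakeholder categories the taxonomy should encode, and how credibility signals and standards alignment appear on each record.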

Still curious?

Let's dig deeper. Reach out for a personalized walkthrough or more case studies. 


© Vanessa Sanchez 2021 - Forever   |   Made with 💛 + ☕ + ✨
