Before Any Output is Accepted
Slide Idea
This slide establishes two mandatory governance checks that must be applied before accepting any automated output: (1) the Lived Experience Constraint, which asks whether the task requires lived experience or cultural specificity that an automated system cannot reliably replicate, and (2) the Learning Displacement Check, which asks if automating this step reduces or removes creative labor or learning opportunities. These are presented as professional governance questions—structural considerations about appropriate automation use—not personal values or individual preferences.
Key Concepts & Definitions
Lived Experience Constraint as System Boundary
The lived experience constraint as system boundary refers to the recognition that certain knowledge, perspectives, and creative expressions are fundamentally grounded in embodied human experience, cultural context, and subjective positioning that automated systems cannot access or authentically replicate. This establishes a technical limitation rather than a value judgment: systems trained on text and images lack the phenomenological dimension of the lived experiences they represent—they cannot have experienced discrimination, grief, joy, religious practice, cultural traditions, sensory disabilities, or countless other human realities that shape authentic expression. When a task requires drawing upon such lived experience (writing from a marginalized perspective, representing cultural practices accurately, expressing grief authentically, designing for accessibility needs one has directly experienced), automated systems face fundamental capability constraints: they can pattern-match against training data containing descriptions of experiences, but cannot access the experiential knowledge that would enable authentic generation. Research on AI cultural alignment demonstrates that systems systematically fail when tasks require cultural specificity, contextual nuance, or experiential authenticity that training data cannot capture. The governance check asks: does this specific task require lived experience or cultural specificity? If yes, automated generation is inappropriate regardless of technical capability, because the output will lack the experiential grounding it purports to represent.
Source: Birhane, A., & Guest, O. (2020). Towards decolonizing computational sciences. arXiv preprint arXiv:2009.14258.
Learning Displacement as Labor Substitution
Learning displacement as labor substitution refers to the phenomenon where automation of skill-building tasks prevents individuals from developing competencies they would have gained through performing those tasks manually, resulting in deskilling and loss of learning opportunities even when automation successfully completes immediate tasks. This represents a distinct concern from job displacement (automation replacing employment): learning displacement occurs when students, apprentices, or developing practitioners use automation for tasks they should perform manually to build skills. The economics literature on automation distinguishes displacement effects (automation substitutes for labor, reducing demand) from reinstatement effects (new tasks create labor demand), but learning displacement adds an educational dimension: even when automation creates economic value, it may destroy educational value by removing practice opportunities essential for skill development. A student who automates essay outlining never learns structural thinking; a designer who automates layout exploration never develops compositional judgment; a programmer who automates debugging never builds diagnostic capability. The governance check asks: whose creative labor or learning opportunity is reduced or removed if this step is automated? This question makes visible the developmental cost that immediate efficiency gains might mask.
Source: Acemoglu, D., & Restrepo, P. (2019). Automation and new tasks: How technology displaces and reinstates labor. Journal of Economic Perspectives, 33(2), 3-30.
Governance Framework Distinction from Personal Values
Governance framework distinction from personal values refers to the critical difference between structural decision-making criteria that apply systematically across contexts (governance) and individual preferences, moral positions, or personal comfort levels (values). Governance frameworks establish rules, procedures, and decision criteria that constrain acceptable actions based on organizational principles, professional standards, regulatory requirements, or ethical boundaries—these apply regardless of individual feelings. Personal values reflect individual beliefs about what's right, important, or preferable—these vary between people and contexts. The slide explicitly states that governance checks are "professional governance questions, not personal values" to prevent a common conflation: students sometimes treat governance frameworks as optional preferences ("I personally value human creativity so I won't use automation, but others might choose differently") when they actually establish boundaries that apply categorically ("if the task requires lived experience the system cannot access, automation is inappropriate regardless of personal values or preferences"). This distinction matters because governance violations represent professional failures, not mere value differences. Research on AI ethics frameworks demonstrates the importance of distinguishing governance (enforceable standards applicable across contexts) from values (important but potentially variable moral positions).
Source: Raji, I. D., et al. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33-44).
Cultural Specificity and Authentic Representation
Cultural specificity and authentic representation refers to the requirement that depictions of particular cultural practices, traditions, perspectives, or experiences be grounded in genuine cultural knowledge and lived participation rather than external observation or pattern-matching against descriptions. This recognizes an epistemological distinction between knowing about a culture (information accessible through description) and knowing from within a culture (experiential knowledge accessible only through cultural participation and lived experience). Automated systems trained on text and images describing cultural practices can generate outputs that superficially resemble authentic cultural expression but fundamentally lack the experiential grounding, contextual understanding, subtle nuance, and insider perspective that authentic representation requires. Research on AI cultural alignment demonstrates systematic failures when systems attempt to represent cultural practices without access to lived cultural knowledge: outputs may contain factual errors, perpetuate stereotypes, miss essential context, or appropriate sacred or sensitive cultural elements inappropriately. The governance check recognizes this as a technical limitation: certain representation tasks require cultural insider knowledge automated systems cannot possess, making automation inappropriate regardless of output quality as judged from an external observer's perspective.
Source: Mohamed, S., Png, M. T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659-684.
Creative Labor Value Beyond Output Quality
Creative labor value beyond output quality refers to the recognition that creative work produces value through the process of creation itself—skill development, creative problem-solving, decision-making practice, aesthetic judgment formation—not solely through finished outputs. This challenges efficiency-focused automation logic that evaluates tasks purely by whether automation produces acceptable outputs faster: even when automation succeeds by output-quality metrics, it may destroy process value that manual work generates. A student writing an essay outline through manual structural thinking develops organizational skills transferable to future work; automating outline generation produces acceptable outlines but eliminates skill-building. A designer manually exploring layout variations builds compositional judgment; automating exploration produces layouts but prevents judgment development. Professional creative practice recognizes this dual value: outputs serve immediate project needs, but process serves long-term capability development. Research on expertise development demonstrates that skill acquisition requires extended deliberate practice—repeated engagement with challenging tasks receiving feedback and refining performance. Automation that eliminates this practice eliminates the learning. The governance check surfaces this: if automating a step removes someone's creative labor, it potentially destroys their learning opportunity regardless of output quality.
Source: Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363-406.
Why This Matters for Students' Work
Understanding these governance checks as binding constraints rather than optional considerations fundamentally changes how students approach automation decisions, shifting from "can I automate this?" to "should I automate this given what automation cannot access and what learning it would eliminate?"
Students often evaluate automation solely through a capability lens: can the system do this task? If yes, automating seems reasonable; if no, automation fails. However, the lived experience constraint introduces a different evaluation dimension: even when systems technically produce outputs, those outputs may be inappropriate if they claim to represent experiences or cultural knowledge the system cannot access. A student assigned to write from the perspective of someone experiencing homelessness might discover that automated systems can generate a plausible-sounding narrative containing accurate factual details about homelessness. The capability check passes: the system produced coherent text. But the lived experience check fails: the task requires drawing on experiential knowledge of homelessness—the sensory experience of sleeping outdoors in winter, the psychological experience of social invisibility, the practical knowledge of navigating services—that training data descriptions cannot capture. Using automated output would constitute misrepresentation: claiming to speak from lived experience the writer does not possess. Understanding this constraint prevents students from conflating technical capability (the system can generate text) with appropriateness (the system should generate this specific text).
The cultural specificity dimension of the lived experience constraint proves especially important for students working across cultural contexts. Students sometimes assume that research about a culture plus automated assistance equals adequate cultural representation. However, research provides external knowledge about a culture (observable practices, historical facts, demographic patterns) while authentic representation often requires internal knowledge from within the culture (meaning and significance, contextual appropriateness, subtle nuance, insider perspective). A student creating content about religious practices might use automated assistance to generate descriptions of ritual observances. But religious practice involves experiential dimensions—spiritual significance, personal meaning, emotional resonance, sacred boundaries—that observation-based descriptions cannot capture. Someone from within a religious tradition knows which aspects are appropriate for external sharing and which are sacred or private, understands the contextual meaning that observable actions carry, and recognizes when representations perpetuate outsider misunderstandings. Automated systems lack this insider knowledge, making certain religious content generation inappropriate regardless of factual accuracy. Recognizing this teaches students an important professional lesson: not all knowledge is equally accessible, and claiming to represent experiences or cultures one hasn't lived risks misrepresentation and harm.
Understanding learning displacement as distinct from task completion changes students' automation cost-benefit analysis. Students often evaluate automation through immediate efficiency: does automation save time completing this task? If yes, automation appears purely beneficial—same output, less effort. However, learning displacement reveals a hidden cost: the eliminated effort was a skill-building opportunity, and automation's efficiency gain trades immediate time savings for long-term capability loss. Consider a student learning to structure arguments: manually creating essay outlines requires thinking through logical organization, considering alternative structures, and deciding which arrangement best serves the rhetorical purpose—this thinking builds structural reasoning skills transferable to future writing, presentations, and project planning. Automating outline generation eliminates this thinking, preventing skill development. The immediate cost is invisible (the student doesn't notice not learning structural thinking), but the long-term cost is substantial (the student enters professional practice lacking organizational skills). Understanding learning displacement enables students to recognize this trade-off: short-term efficiency versus long-term capability development.
The governance framework distinction from personal values proves crucial for professional development. Students sometimes treat ethical questions as matters of personal preference: "I personally don't feel comfortable using AI for creative work, but I respect that others make different choices." While personal values certainly matter, framing governance checks as mere preferences undermines their function. The lived experience check isn't a preference—it's a constraint based on what systems can and cannot access. If a task requires lived experience the system lacks, using automation constitutes misrepresentation regardless of personal comfort level. Similarly, learning displacement isn't a preference—it's a structural concern about skill development. If automating eliminates learning opportunities someone needs, that represents educational harm regardless of individual values about automation. Treating governance as a framework rather than as values teaches students a professional lesson: some decisions aren't matters of personal preference but requirements based on professional standards, ethical boundaries, or structural considerations that apply categorically.
Understanding whose labor or learning opportunity gets displaced develops students' stakeholder awareness. The governance check asks "whose creative labor or learning opportunity is reduced or removed?"—the "whose" matters. Students might not notice displacement when it affects others rather than themselves. A student automating research assistance might not recognize that the research process teaches information literacy, source evaluation, synthesis skills—if someone else needs to learn these skills and automation eliminates their practice opportunity, displacement occurs regardless of whether the automating student already possesses those skills. Professional contexts routinely involve this: senior practitioners might appropriately automate tasks they've already mastered, but preventing junior colleagues from performing those same tasks eliminates juniors' learning opportunities. Developing a habit of asking "whose learning am I potentially eliminating?" builds awareness essential for responsible team practice.
The governance checks together establish a framework for principled automation decisions rather than ad hoc case-by-case judgments. Students sometimes struggle with automation decisions because they lack systematic evaluation criteria: "should I use automation for this specific task?" becomes an endless series of uncertain judgment calls. The two binding checks provide a decision structure: (1) Does this require lived experience or cultural specificity the system cannot access? If yes, don't automate. (2) Does automating eliminate someone's needed learning opportunity? If yes, don't automate. If both checks pass, automation may be appropriate pending other considerations (accuracy, efficiency, professional norms). This framework doesn't eliminate judgment—students must still assess whether tasks require lived experience or whether learning opportunities matter in specific contexts—but it provides a systematic evaluation structure rather than leaving students with vague discomfort or uncritical adoption, as the sketch below illustrates.
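To make the gate structure concrete, here is a minimal sketch of the two checks in Python. Everything in it is a hypothetical teaching aid: the class, field, and function names are invented for illustration, not an established framework or library, and the boolean answers must come from human judgment, not computation. The design point the sketch encodes is the hard stop: a failed check returns immediately instead of being weighed against efficiency gains.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class TaskAssessment:
    # Answers a practitioner supplies after reflection; nothing here is
    # computed automatically. All names are illustrative, not a standard API.
    task: str
    requires_lived_experience: bool   # Check 1: experiential or cultural knowledge needed?
    displaces_needed_learning: bool   # Check 2: does automating remove someone's needed practice?
    affected_learner: Optional[str] = None  # whose learning, if anyone's


def automation_permitted(a: TaskAssessment) -> Tuple[bool, str]:
    """Apply the two binding checks in order; failing either is a hard stop.

    Passing both does not by itself make automation appropriate. It only
    clears these two gates, pending other considerations (accuracy,
    efficiency, professional norms).
    """
    if a.requires_lived_experience:
        return (False, "Fails lived experience check: the task requires "
                       "experiential or cultural knowledge the system cannot access.")
    if a.displaces_needed_learning:
        who = a.affected_learner or "someone"
        return (False, "Fails learning displacement check: automating removes "
                       + who + "'s needed practice opportunity.")
    return (True, "Both checks pass; automation may be appropriate "
                  "pending other considerations.")


# Illustrative use: a student who still needs to build structural thinking.
outline = TaskAssessment(
    task="essay outlining",
    requires_lived_experience=False,
    displaces_needed_learning=True,
    affected_learner="the student",
)
print(automation_permitted(outline))
```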
How This Shows Up in Practice (Non-Tool-Specific)
Filmmaking and Media Production
Film production applies lived experience constraints systematically when casting and content creation decisions involve authentic representation of specific identities or experiences.
Casting decisions explicitly recognize lived experience requirements. When productions seek to portray characters with disabilities, contemporary professional standards increasingly require casting actors with those lived disabilities rather than non-disabled actors simulating disability. This reflects the lived experience principle: authentically portraying the experience of navigating a world with disability requires experiential knowledge—how disability shapes daily movement, social interaction, self-perception—that research and simulation cannot replicate. An actor without disability can study descriptions of disability experience, but lacks the embodied knowledge that enables authentic portrayal. The industry shift toward authentic casting reflects a recognition that certain representation tasks require lived experience, making substitution (whether through non-disabled actors or hypothetically through automated character generation) inappropriate regardless of surface resemblance to authentic experience.
Documentary filmmaking applies cultural specificity constraints when representing communities. Documentary ethics increasingly emphasize community participation in production when films portray specific cultural groups: having community members involved in filming, editing, and narrative decisions ensures representation grounded in lived cultural knowledge rather than external observation. This recognizes that outsider filmmakers, however well-researched, lack insider understanding of cultural context, appropriate representation boundaries, and nuances of meaning. A filmmaker might observe and document cultural practices accurately from an external perspective but miss the significance, sacred boundaries, and appropriate contextualization that community members know from lived participation. Contemporary documentary practice treats this as a governance requirement rather than optional sensitivity: authentic cultural representation requires cultural insider knowledge.
Script development teams apply learning displacement awareness when allocating creative work. A production with senior and junior writers must decide which tasks junior writers perform manually versus which senior writers complete or automate. Writing dialogue, developing character arcs, structuring scenes represent learning opportunities for junior writers—performing these tasks builds screenwriting skills they need. If seniors complete all creative tasks seeking efficiency, juniors lose skill-building opportunities. Professional practice recognizes this: mentorship involves deliberately assigning challenging creative tasks to developing practitioners even when seniors could complete them faster, because learning value outweighs efficiency loss. This represents learning displacement awareness in practice.
Design
User experience design applies lived experience constraints when designing for accessibility and specific user populations.
Accessibility design increasingly requires involving people with disabilities throughout the design process—not merely as research subjects but as design collaborators with decision-making authority. This reflects the lived experience principle: designing for blind users requires knowledge of how blind people actually navigate digital interfaces using screen readers, what information architectures support versus hinder navigation, and which design patterns create barriers—knowledge accessible through lived experience of blindness, not merely through reading accessibility guidelines. Designers without disabilities can study accessibility standards, but standards codify minimum requirements, not the experiential knowledge needed for genuinely excellent accessible design. Professional practice treats disability community involvement as a requirement: designing for populations without including people from those populations produces designs grounded in assumptions rather than lived reality.
Cultural design work applies cultural specificity constraints. When design projects involve specific cultural contexts—designs for particular ethnic communities, religious groups, regional cultures—professional practice increasingly recognizes that authentic, culturally appropriate design requires cultural insider involvement. Color symbolism, visual metaphors, interaction patterns, and aesthetic preferences carry cultural meanings that outsider designers miss. A designer might research cultural preferences and find general principles, but cultural meaning operates at levels of subtlety that research cannot capture. Professional standards increasingly require cultural community participation to ensure designs are grounded in lived cultural knowledge.
Design education applies learning displacement awareness when teaching fundamental skills. Design programs must decide which manual skills students learn before introducing digital tools and automation. Sketching, manual layout exploration, and physical prototyping represent learning opportunities that build spatial reasoning, compositional judgment, and material understanding—skills foundational to design thinking. If students immediately use automated layout tools without ever engaging in manual exploration, they never develop the visual judgment that automation cannot build. Professional educators recognize this: certain skills must be learned through manual practice before automation is appropriate, because automation prevents the learning that manual practice enables.
Writing
Literary publishing applies lived experience constraints increasingly explicitly through "own voices" principles in contemporary publishing practice.
Publishers increasingly prioritize authors writing from their own lived experiences when acquiring books portraying specific marginalized identities, cultural backgrounds, or experiences. A book about being transgender authored by a transgender writer grounds its narrative in lived experiential knowledge—how gender dysphoria feels phenomenologically, how social transition affects relationships, how medical processes actually work—that cisgender writers cannot access through research. This isn't a claim that cisgender writers cannot write transgender characters, but a recognition that certain narratives claiming authentic representation of lived experience require the author to possess that lived experience. Contemporary publishing treats this as a professional standard: acquiring memoir-style or authentic-voice narratives about specific lived experiences requires authors who possess those experiences.
Journalism applies cultural specificity constraints through reporter assignment and source practices. When covering stories about specific cultural communities, professional journalism ethics increasingly emphasize assigning reporters from those communities when possible or ensuring extensive community source participation. A reporter covering Indigenous land rights issues might conduct thorough research, but Indigenous reporters bring lived cultural knowledge—historical context from community perspective, understanding of tribal sovereignty principles, awareness of appropriate terminology and sensitive issues—that outsider reporters must actively seek from sources. Professional journalism recognizes this: certain coverage requires cultural insider knowledge either from reporters or extensively quoted community sources.
Writing pedagogy applies learning displacement awareness when introducing writing assistance tools. Writing instructors must decide which writing tasks students perform manually versus which tasks automation may assist. Prewriting, outlining, drafting, revision represent distinct skill-building stages: outlining develops structural thinking, drafting develops idea articulation, revision develops critical evaluation. If students automate outlining, they never build organizational skills. If they automate drafting, they never develop articulation practice. Professional writing pedagogy treats this as a crucial decision: students need manual practice with foundational skills before automation becomes appropriate, because automation of skill-building tasks prevents the learning those tasks enable.
Computing and Engineering
Software engineering applies lived experience constraints when building systems serving specific user populations with distinct needs.
Development teams building accessibility technology increasingly include engineers with disabilities as core team members, not merely as consultants. An engineer who is blind brings lived knowledge of screen reader usage patterns, common interaction frustrations, effective workarounds—knowledge essential for building excellent assistive technology that non-disabled engineers cannot fully access through research and testing. Professional practice recognizes this: building technology for specific populations ideally includes team members from those populations contributing lived experiential knowledge that shapes design decisions.
AI/ML development applies cultural specificity constraints when building systems operating across cultural contexts. Teams building language models, content moderation systems, or recommendation algorithms for global deployment must recognize that these systems operate in culturally specific contexts they may not adequately represent. A content moderation system trained primarily on Western content may misclassify culturally appropriate content from other regions as violations. Professional practice increasingly requires cultural insider participation in dataset creation, model evaluation, and deployment decision-making, ensuring systems are grounded in diverse cultural knowledge rather than defaulting to a single cultural perspective.
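One engineering practice that follows from this concern is disaggregated evaluation: reporting error rates per cultural context rather than a single aggregate score, so regionally concentrated failures become visible. The sketch below is a minimal illustration under stated assumptions; the function name, label strings, and toy data are invented for demonstration and do not represent any particular system's API or real results.

```python
from collections import defaultdict


def false_positive_rate_by_region(examples):
    """Disaggregate a moderation classifier's false positives by region.

    `examples` is an iterable of (region, true_label, predicted_label)
    tuples with labels "acceptable" or "violation". An aggregate accuracy
    number can hide a high false-positive rate concentrated in one
    cultural context, which is the failure mode described above.
    """
    counts = defaultdict(lambda: {"fp": 0, "acceptable": 0})
    for region, truth, pred in examples:
        if truth == "acceptable":
            counts[region]["acceptable"] += 1
            if pred == "violation":
                counts[region]["fp"] += 1  # acceptable content wrongly flagged
    return {
        region: c["fp"] / c["acceptable"]
        for region, c in counts.items()
        if c["acceptable"] > 0
    }


# Toy illustration (invented labels): the same classifier shows very
# different error rates across regions despite a decent overall average.
data = [
    ("region_a", "acceptable", "acceptable"),
    ("region_a", "acceptable", "acceptable"),
    ("region_b", "acceptable", "violation"),
    ("region_b", "acceptable", "acceptable"),
]
print(false_positive_rate_by_region(data))  # {'region_a': 0.0, 'region_b': 0.5}
```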
Computer science education applies learning displacement awareness systematically when teaching programming fundamentals. Educators must decide which programming tasks students perform manually versus which development tools and code assistance students use. Algorithm design, debugging, and code optimization represent skill-building opportunities: manual debugging develops diagnostic thinking, manual algorithm design develops computational reasoning, manual optimization develops efficiency judgment. If students immediately use automated debugging, code completion, or algorithm suggestion tools, they never build the underlying skills those tools replace. Professional CS pedagogy treats this as a fundamental educational decision: students need manual practice building foundational skills before automation assistance becomes appropriate, because automation of learning tasks prevents skill development.
Engineering design education applies learning displacement similarly. Engineering students must perform calculations manually, create technical drawings by hand, analyze failures through systematic investigation—these manual processes build engineering judgment that automated analysis tools cannot teach. Once students possess foundational skills, automation becomes an appropriate professional tool. But automating skill-building tasks prevents students from ever developing the capabilities that would enable them to use automation appropriately.
Common Misunderstandings
"The lived experience check is about preventing cultural appropriation or respecting feelings—it's an ethical preference, not a technical constraint"
This misconception treats lived experience requirements as a moral position about respectful representation rather than recognizing them as technical limitations on what systems can and cannot access. The lived experience constraint isn't primarily about cultural appropriation ethics (though that's certainly relevant) or about protecting feelings (though misrepresentation can cause harm)—it's about a fundamental capability boundary: systems trained on text and images describing experiences cannot access the phenomenological, embodied, contextual knowledge that living those experiences provides. A system can read thousands of descriptions of grief written by grieving people, but it has never felt grief—the emotional weight, the physical sensations, the altered perception of time, the intrusive thoughts. When a task requires drawing on grief experience (writing an authentically grieving character, creating art expressing grief, designing grief support resources), the system faces technical limitations: it can pattern-match against grief descriptions, but cannot access experiential knowledge of grief. This isn't about moral propriety—it's about epistemological access. The governance check asks: does this task require knowledge accessible only through lived experience? This is a technical question about knowledge requirements, not a value question about appropriateness preferences. Professional contexts apply similar constraints routinely: medical diagnosis requires clinical experience, not just medical textbook knowledge; architecture requires understanding how spaces feel to inhabit, not just how they look in plans; teaching requires classroom management experience, not just pedagogy theory. These represent capability requirements grounded in what experiential knowledge enables, not preferences about respectful practice.
"Learning displacement only matters for beginners—once someone has skills, automation doesn't harm their learning"
This misconception assumes learning completes at some proficiency threshold after which automation becomes purely beneficial, ignoring that skill development continues throughout professional practice and that automation prevents learning even for experienced practitioners engaging new domains or advanced skills. While it's true that learning displacement particularly harms beginners who haven't yet built foundational skills, experienced practitioners continue learning throughout careers: learning new techniques, developing advanced capabilities, building expertise in adjacent domains, refining judgment through repeated practice. Automation that eliminates practice opportunities prevents this ongoing development regardless of existing skill level. A senior designer who automates all layout exploration never develops judgment about emerging design patterns; an experienced programmer who automates all debugging never builds diagnostic skill with new programming paradigms; a seasoned writer who automates all structural revision never develops organizational skills for new genres. Additionally, the learning displacement check asks "whose learning"—not just "my learning." An experienced practitioner might appropriately automate tasks they've mastered, but if that automation becomes standard practice preventing juniors from performing those tasks, it eliminates juniors' learning opportunities. Professional contexts recognize this: senior engineers deliberately assign debugging tasks to junior engineers even though seniors could fix bugs faster, because juniors need debugging practice to build skills. The governance check makes this visible: whose creative labor or learning opportunity is reduced? This question applies regardless of the automator's skill level, because displacement can affect others.
"These checks are subjective—different people will disagree about whether tasks require lived experience or whether learning displacement matters"
This misconception treats governance checks as subjective judgment calls admitting unlimited variation, rather than recognizing them as structured analytical questions with assessable criteria despite inevitable edge cases. While applying the checks requires judgment and some cases involve ambiguity, the core questions provide a clear evaluation structure. Lived experience check: Does this task require drawing on knowledge accessible only through having lived specific experiences or cultural participation? This question has an assessable answer: tasks explicitly requiring personal narrative from a specific subject position clearly require lived experience (writing a memoir about being a refugee, creating art expressing the experience of chronic illness, designing for a disability one has lived); tasks requiring general creative or analytical work typically don't require specific lived experience (designing a generic e-commerce interface, writing fantasy fiction, analyzing historical data). Borderline cases exist, but many applications are clear. Learning displacement check: Whose creative labor or learning opportunity is reduced? This has an identifiable answer: if a student automates essay outlining, the student's structural thinking practice is eliminated; if a professional writer automates outlining having already mastered structural thinking, no learning displacement occurs for that writer (though it might occur for apprentice writers losing practice opportunities). The checks won't resolve all cases without judgment, but they provide a systematic evaluation framework with clear criteria—this differs fundamentally from subjective preference. Professional contexts routinely use similar frameworks requiring judgment within structure: medical ethics requires assessing patient capacity (involves judgment but follows an evaluation framework), engineering requires safety factor selection (involves judgment but uses established methods), legal practice requires reasonableness assessment (involves judgment but applies legal standards). The governance checks function similarly: structured analytical questions requiring judgment but not reducible to arbitrary personal preference.
"If I cite my sources and acknowledge AI assistance, I can use automation for anything because I'm being transparent"
This misconception assumes transparency about automation use automatically makes that use acceptable, ignoring that the governance checks establish tasks for which automation is inappropriate regardless of disclosure. Transparency about automation is important professional practice—readers, collaborators, and evaluators deserve to know when automated assistance was used. However, disclosure doesn't resolve the fundamental problems the governance checks identify. If a task requires lived experience the automated system cannot access, generating output anyway and disclosing the automation doesn't fix the core problem: the output still lacks the experiential grounding it claims to represent. A student who writes a personal essay about an immigration experience using automated assistance and discloses that use has been transparent—but if the student hasn't actually lived that immigration experience, the essay still claims experiential knowledge the student lacks. Disclosure makes misrepresentation visible but doesn't eliminate it. Similarly, if automating a task eliminates needed learning opportunities, disclosure doesn't restore that learning. A student who automates structural thinking and acknowledges the automation has been transparent—but transparency doesn't replace the skill-building that automation prevented. The governance checks identify tasks where automation is inappropriate regardless of transparency: tasks requiring experiential knowledge systems lack, and tasks that eliminate needed learning opportunities. Transparency matters for tasks where automation is appropriate—it enables readers to evaluate outputs knowing how they were created—but it doesn't make inappropriate automation acceptable.
Scholarly Foundations
Birhane, A., & Guest, O. (2020). Towards decolonizing computational sciences. arXiv preprint arXiv:2009.14258.
Critical examination of how computational systems encode particular epistemologies while excluding others, particularly concerning whose knowledge and experiences get represented. Discusses how AI systems trained on text cannot access lived experiential knowledge from marginalized communities, leading to systematic misrepresentation. Establishes that certain knowledge is fundamentally inaccessible to systems lacking lived experience. Directly relevant for understanding the lived experience constraint as a technical limitation rather than a mere ethical preference.
Acemoglu, D., & Restrepo, P. (2019). Automation and new tasks: How technology displaces and reinstates labor. Journal of Economic Perspectives, 33(2), 3-30.
Economics framework analyzing how automation affects labor through displacement effects (automation substitutes for labor, reducing demand) and reinstatement effects (new tasks create labor demand). Discusses how automation changes task allocation between humans and machines. While focused on employment, the framework applies to learning displacement: automation substituting for skill-building tasks reduces learning opportunities, analogous to how it reduces employment opportunities. Establishes displacement as a systematic phenomenon with measurable effects. Relevant for understanding learning displacement as a structural concern, not an individual preference.
Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33-44).
Framework for AI auditing establishing the distinction between governance requirements (systematic standards applicable across contexts) and values (important but potentially variable moral positions). Discusses how responsible AI deployment requires enforceable governance frameworks, not merely ethical principles. Establishes accountability structures, assessment procedures, and remediation practices. Directly relevant for understanding why the slide frames the checks as "professional governance questions, not personal values"—governance establishes binding constraints rather than optional preferences.
Mohamed, S., Png, M. T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659-684.
Applies decolonial theory to AI, examining how systems perpetuate epistemic injustice by privileging certain ways of knowing while marginalizing others. Discusses cultural specificity and the importance of lived experience, particularly for communities historically marginalized in technology development. Establishes that authentic representation of cultural knowledge requires cultural insider participation, not external observation. Relevant for understanding the cultural specificity dimension of the lived experience constraint and why certain representation tasks require lived cultural knowledge.
Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363-406.
Foundational research on expertise development demonstrating that expert performance requires extensive deliberate practice—repeated engagement with challenging tasks, receiving feedback, and refining performance over extended periods. Establishes that skill acquisition cannot be shortcut: expertise requires time-intensive practice. Directly relevant for understanding learning displacement: automation eliminating practice tasks prevents the deliberate practice that expertise development requires. No amount of reading about skills replaces practicing skills. Automation preventing practice prevents expertise development.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
Analysis of how algorithmic systems perpetuate and amplify existing social biases particularly affecting marginalized communities. Discusses how systems trained on biased data lacking diverse lived experiences produce outputs that misrepresent or harm marginalized groups. Establishes that technical solutions cannot resolve problems rooted in absence of lived experiential knowledge from affected communities. Relevant for understanding why lived experience constraint matters: systems cannot authentically represent experiences they cannot access, and this limitation causes real harm.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Comprehensive examination of AI systems including discussion of labor displacement, knowledge extraction, and whose perspectives get encoded in training data. Discusses how automation affects creative labor and learning opportunities particularly in contexts where junior practitioners need skill development. Analyzes power dynamics determining whose knowledge and labor get valued versus displaced. Relevant for understanding learning displacement in the broader context of how automation affects skill development and labor practices.
D'Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press.
Framework examining how data practices and AI systems encode particular perspectives while marginalizing others, with emphasis on lived experience and standpoint epistemology. Discusses why authentic representation requires centering lived experiences of people from represented communities rather than relying on external observation. Establishes that certain knowledge is positional—accessible from particular social locations, not universal. Relevant for understanding epistemological foundations of lived experience constraint: why certain knowledge requires having lived specific experiences or cultural positions.
Boundaries of the Claim
The slide establishes two binding governance checks that must be applied before accepting automated outputs. This does not claim that these are the only relevant governance considerations, that applying these checks is always straightforward without judgment required, or that passing both checks automatically makes automation appropriate.
The characterization of these as "binding governance checks" establishes them as mandatory considerations rather than optional preferences, but doesn't specify exact decision procedures for all cases. The lived experience check asks whether tasks require lived experience or cultural specificity automated systems cannot replicate—this question has clear answers for obvious cases (memoir writing about one's lived experience clearly requires lived experience; generic data analysis typically doesn't) but involves judgment for borderline cases (creative writing drawing on but not explicitly representing specific lived experiences; research about cultures one hasn't lived within). Similarly, the learning displacement check asks whose creative labor or learning opportunity is reduced—clear for straightforward cases (student automating skill-building task clearly eliminates own learning; expert automating mastered task may not) but requires assessment for complex situations (collaborative work where automation affects multiple people differently; contexts where short-term efficiency needs compete with long-term learning goals).
The framework doesn't specify: how to resolve cases where lived experience requirements are uncertain, what constitutes sufficient cultural insider knowledge versus requiring deeper lived experience, how to weigh learning displacement against legitimate efficiency needs in professional contexts, or how to handle situations where people disagree about whether checks are satisfied. These represent ongoing judgment questions rather than algorithmic decision rules.
The emphasis on distinguishing governance from personal values clarifies that these aren't optional moral preferences, but doesn't claim that all professional or ethical AI considerations reduce to these two checks. Other governance questions matter: accuracy, bias, privacy, security, environmental impact, economic fairness. The slide presents two specific binding checks among the broader governance landscape, not a comprehensive framework capturing all relevant considerations.
The references cited point to scholarly sources (Noble, Birhane & Guest, Crawford) that ground these governance principles but aren't discussed in detail on the slide itself. Students seeking deeper understanding should consult these sources, recognizing that governance frameworks evolve as research and professional practice develop.
Reflection / Reasoning Check
1. Consider a specific creative or analytical task you might encounter in your field (writing assignment, design project, technical analysis, media production). Apply both governance checks systematically: First, does this task require lived experience or cultural specificity that an automated system cannot reliably replicate? Break this down: What knowledge does the task require? Is any of that knowledge accessible only through having lived particular experiences or participated in particular cultural contexts? If you're uncertain, what makes lived experience requirements unclear—is the task genuinely ambiguous or are you unsure what "lived experience" means? Second, if you automated this step, whose creative labor or learning opportunity would be reduced or removed? Be specific about the "whose"—your own? A collaborator's? A hypothetical future practitioner? What skills would manual performance build that automation would prevent? Now consider: If both checks indicate automation is inappropriate, but automation would save significant time and produce acceptable output, does efficiency justify proceeding anyway? What does your answer reveal about whether you're treating these as binding governance constraints versus optional preferences?
This question tests whether students can apply governance checks systematically to concrete cases, understand what constitutes lived experience requirements and learning displacement, and genuinely treat the checks as binding rather than merely aspirational. An effective response would identify a specific task with enough detail to enable analysis (not a generic "writing project" but a specific assignment with defined requirements), systematically assess lived experience requirements (articulate what knowledge the task requires, identify whether any of that knowledge is experientially grounded, explain the reasoning for the conclusion), identify whose labor or learning would be affected by automation with specificity (not a vague "someone's learning" but concrete identification: "I'm a student who needs to develop structural thinking skills, so automating outlining would eliminate my learning opportunity"), and most importantly demonstrate understanding of the binding constraint (recognize that if the checks indicate automation is inappropriate, efficiency doesn't override this—governance isn't cost-benefit optimization but boundary establishment). Common inadequate responses treat the checks as suggestions ("lived experience would be nice but automation is faster so I'll use it"), fail to identify specific lived experience requirements (concluding no lived experience is needed without analyzing what knowledge the task requires), or ignore learning displacement entirely (focusing only on output quality, not process value). This demonstrates whether students understand governance as a framework constraining acceptable practice or merely as aspirational ideals.
2. The slide explicitly states these are "professional governance questions, not personal values." Reflect on why this distinction matters: What's the difference between saying "I personally value human creativity so I choose not to use AI for creative work" (personal value) versus "This task requires lived experience the system cannot access, therefore automation is inappropriate" (governance constraint)? Think about implications: If governance, then it applies regardless of individual preferences—someone who doesn't share your values about human creativity must still comply. If personal values, then it's optional—others can make different choices. Which framing do the governance checks use, and what makes you confident in that interpretation? Now consider professional contexts: Can you think of other governance frameworks in professional practice (medical ethics, engineering safety standards, journalistic integrity, research ethics) that establish binding constraints rather than optional values? What makes them governance versus values? What happens when professionals treat governance requirements as personal preferences ("I personally prefer to get informed consent but others might not")? What does this reveal about why the distinction between governance and values matters for professional practice?
This question tests whether students understand the fundamental distinction between governance frameworks (systematic binding constraints) and personal values (individual preferences), can identify which framing applies to these checks, and recognize the professional implications of this distinction. An effective response would articulate a clear distinction (governance applies categorically regardless of personal agreement; values reflect individual beliefs that legitimately vary), correctly identify that the checks use governance framing (evidenced by "binding" language, technical capability arguments, and professional practice grounding—not moral preference language), provide concrete examples of professional governance from other fields (medical informed consent isn't an optional preference, engineering safety factors aren't a personal choice, research ethics violations aren't merely value differences), explain the consequences of treating governance as values (if informed consent becomes personal preference, patient protection collapses; if safety standards become individual choice, public safety fails; if research ethics become optional, participant harm occurs), and recognize that governance frameworks serve a protective function that personal values cannot (governance establishes boundaries protecting stakeholders even when individuals don't personally value those protections). Common inadequate responses conflate governance and values (treating them as synonyms or only superficially different), incorrectly identify the checks as personal values (interpreting the "governance questions" language as merely emphasizing the importance of values rather than establishing a different category), fail to recognize professional governance examples (cannot identify binding constraints in other fields), or don't understand the implications (thinking that treating governance as values is merely a semantic difference without practical consequences). This demonstrates whether students understand that professional practice requires systematic frameworks constraining acceptable action beyond individual moral preferences.