What We Will Make Visible Today

Slide Idea

This slide identifies four specific types of process artifacts that will be documented and preserved during the session: fragments (preliminary or partial work), rejected outputs (generated results deemed unsuitable), revised specifications (modified versions of initial requirements), and decision logs (records of choices made and reasoning applied). These artifacts represent standard pre-production and evaluation materials used in professional media practice to maintain traceable records of creative and technical decision-making processes.

Key Concepts & Definitions

Production Artifacts

Production artifacts are tangible or digital objects created during the development process that document work-in-progress, intermediate outputs, or supporting materials distinct from the final deliverable. In professional media production, these artifacts include script breakdowns, location scouts, camera tests, rehearsal footage, blocking diagrams, lighting plots, and edit decision lists—materials that support production but do not appear in finished work. Production artifacts serve multiple functions: they facilitate communication among collaborators, create historical records enabling retrospective analysis, support quality assurance by documenting process compliance, and preserve institutional knowledge that can inform future projects. While final deliverables receive public distribution, production artifacts typically remain internal documents, though they may be shared in educational contexts, behind-the-scenes features, or archival collections.

Source: Caldwell, J. T. (2008). Production culture: Industrial reflexivity and critical practice in film and television. Duke University Press. 

Fragments

In creative and technical work, fragments are incomplete, preliminary, or partial pieces of work that represent stages in an iterative development process. Fragments may include rough drafts, exploratory sketches, prototype components, partial implementations, or test samples created to investigate possibilities before committing to complete execution. Unlike polished work intended for evaluation, fragments prioritize exploration and experimentation—they need not be coherent, functional, or presentable. The value of preserving fragments lies in maintaining a record of the exploration process: what directions were tried, what possibilities were considered, and how understanding evolved through making. Fragments reveal the non-linear, exploratory nature of creative work that finished products conceal.

Source: Sterman, S., Cuenca, E., Li, L., & Drucker, S. M. (2020). Towards creative version control. ACM Transactions on Interactive Intelligent Systems, 10(2), 1-44. 

Rejected Outputs

Rejected outputs are generated results—whether by human creators or systems—that were evaluated and deemed unsuitable for use based on explicit or implicit criteria. In generative AI contexts, rejected outputs might fail to meet specifications, exhibit technical flaws, produce inappropriate content, or simply prove less effective than alternatives. The act of rejection represents a judgment: the output was assessed against requirements, constraints, or quality standards and found wanting. Documenting rejected outputs alongside the reasons for rejection creates visible records of evaluation criteria and decision-making that would otherwise remain implicit. This documentation supports learning (understanding what does not work and why), iteration (informing subsequent attempts), and quality assurance (demonstrating that alternatives were considered and evaluated).

Source: Kolko, J. (2012). Iteration and variation in creative work. https://www.jonkolko.com/writing/notes/14
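
The practice described above can be sketched in code. This is a minimal, hypothetical illustration (the names `CandidateOutput` and `rejection_log` are invented for this sketch, not drawn from any cited source) of preserving every generated alternative together with the judgment applied to it:

```python
from dataclasses import dataclass

@dataclass
class CandidateOutput:
    """One generated alternative, kept whether or not it was selected."""
    content: str
    accepted: bool
    reason: str  # why it was accepted or rejected

# Keep every candidate from an evaluation pass, not just the winner.
candidates = [
    CandidateOutput("draft A", accepted=False, reason="fails word limit"),
    CandidateOutput("draft B", accepted=True, reason="meets spec; clearest framing"),
    CandidateOutput("draft C", accepted=False, reason="off-brief tone"),
]

# Read together, the rejection reasons externalize the evaluation criteria.
rejection_log = {c.content: c.reason for c in candidates if not c.accepted}
```

Even this small amount of structure makes the implicit judgments queryable later: one can ask which criteria recur across rejections, which is exactly the retrospective analysis the section describes.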

Specification Revision

A specification revision is a modified version of requirements, constraints, or success criteria that differs from an earlier formulation. Revisions occur when initial specifications prove inadequate, ambiguous, technically infeasible, or misaligned with actual goals as understanding develops. Unlike corrections of simple errors, revisions often reflect evolved understanding of the problem space: clarification of ambiguous language, addition of overlooked requirements, relaxation of overconstraining criteria, or fundamental reframing of objectives. Documenting specification revisions—not just the final version but the progression through versions and the reasoning behind changes—makes visible how problem understanding develops through iterative engagement. This documentation prevents "specification amnesia" where teams forget why particular constraints exist or what alternatives were considered.

Source: Novakouski, M., Lewis, G., Anderson, W., & Davenport, J. (2012). Best practices for artifact versioning in service-oriented systems (Technical Note CMU/SEI-2011-TN-009). Software Engineering Institute. 
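
One lightweight way to document a specification revision, sketched here with Python's standard `difflib` and invented example content, is to pair the textual diff between versions with a one-line rationale for the change:

```python
import difflib

# Hypothetical specification versions for illustration.
spec_v1 = [
    "Video length: under 5 minutes",
    "Cover three subtopics",
]
spec_v2 = [
    "Video length: under 5 minutes",
    "Cover two subtopics in depth",
]

# The diff records *what* changed between versions...
diff = list(difflib.unified_diff(spec_v1, spec_v2, "spec_v1", "spec_v2", lineterm=""))

# ...and the rationale records *why*, which the diff alone cannot show.
rationale = "Three subtopics proved too shallow within the time limit."
```

Storing the rationale alongside the diff is what prevents the "specification amnesia" described above: the record shows not just that a constraint was relaxed, but what understanding prompted the change.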

Decision Log

A decision log is a chronological or structured record that documents significant choices made during a project, including what was decided, when, by whom, what alternatives were considered, what criteria informed the decision, and what consequences or trade-offs were anticipated. Decision logs differ from project management task lists or simple change logs: they capture reasoning, not just outcomes. Effective decision logs answer questions like "Why did we choose approach X over approach Y?" and "What constraints or information informed this choice?" Decision logs serve multiple stakeholders and timeframes: they help current team members maintain shared understanding, enable future maintainers to reconstruct reasoning when revisiting decisions months or years later, and create learning resources that externalize tacit judgment. In professional practice, decision logs may be formalized through Architecture Decision Records (ADRs), design rationale documents, or engineering notebooks.

Source: Microsoft. (2024). Design decision log. Code with Engineering Playbook.
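
A decision log entry of the kind described above can be represented with a small record structure. This sketch uses invented field names and example content; real logs (ADRs, engineering notebooks) vary in format, but capture the same elements: what was decided, when, what alternatives existed, and why.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionEntry:
    """One logged decision: what, when, what was not chosen, and why."""
    decided: str
    decided_on: date
    alternatives: list
    rationale: str

log = []
log.append(DecisionEntry(
    decided="Use SQLite for the prototype",
    decided_on=date(2024, 3, 1),
    alternatives=["PostgreSQL", "flat JSON files"],
    rationale="Zero-setup storage; team already knows SQL; scale is small.",
))

# Months later, the log answers "why X over Y?" without relying on memory.
latest = log[-1]
```

The `alternatives` and `rationale` fields are the ones that distinguish a decision log from a simple change log: they capture reasoning, not just outcomes.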

Version Control

Version control (also called revision control) is a systematic approach to managing changes to files, documents, code, or other artifacts by recording modifications, attributing them to specific contributors, and enabling retrieval of previous states. Version control systems (VCS) like Git maintain complete histories showing what changed, when, why, and by whom. Beyond simple backup functionality, version control enables collaboration by allowing multiple people to work on shared artifacts concurrently, supports experimentation by making it safe to try changes (since earlier states can be recovered), and provides auditability by creating transparent records of project evolution. In software development, comprehensive version control predicts successful continuous delivery. The practice extends beyond code to encompass test scripts, configuration files, documentation, and any artifact requiring change management. The discipline of frequent, well-described commits creates a detailed record of development thinking that functions as living documentation.

Source: DORA. (2026). Capabilities: Version control. DevOps Research and Assessment.
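
The core version-control ideas named above (recording what changed, attributing changes, retrieving earlier states) can be illustrated with a toy in-memory store. This is not how Git works internally; it is a deliberately simplified sketch with invented names, meant only to make the concepts concrete:

```python
import datetime

class VersionStore:
    """Toy illustration of version control: every save records who changed
    what and why, and any earlier state can be retrieved by revision number."""

    def __init__(self):
        self.history = []  # list of (content, author, message, timestamp)

    def commit(self, content, author, message):
        """Record a new state with attribution and a descriptive message."""
        self.history.append((content, author, message, datetime.datetime.now()))
        return len(self.history) - 1  # revision number

    def checkout(self, revision):
        """Retrieve the content as it existed at an earlier revision."""
        return self.history[revision][0]

store = VersionStore()
r0 = store.commit("Draft outline", "alex", "Initial outline")
r1 = store.commit("Draft outline + shot list", "alex", "Add shot list after scout")

# Earlier states remain recoverable, which is what makes experimentation safe.
previous = store.checkout(r0)
```

The commit messages are the part that turns this from backup into documentation: a history of well-described commits is the "detailed record of development thinking" the section describes.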

Why This Matters for Students' Work

The four artifact types named in the slide—fragments, rejected outputs, revised specifications, and decision logs—collectively address a fundamental pedagogical challenge: making learning visible to learners themselves. Creative and technical work involves continuous judgment, experimentation, and revision, yet these processes often remain invisible unless deliberately documented. Students who preserve these artifacts create external records of their thinking that can be examined, reflected upon, and learned from.

Fragments document exploration and experimentation. Students often discard preliminary work once they have moved beyond it, treating early attempts as disposable scaffolding. However, fragments reveal how understanding develops through making—what was tried first, what assumptions were operating, how approaches evolved. When students encounter difficulties later in a project, examining fragments can reveal patterns: perhaps initial intuitions were sound but prematurely abandoned, or perhaps early explorations identified problems that were subsequently forgotten. Fragments also provide evidence of effort and iterative engagement that finished work conceals. For students concerned that their work does not demonstrate sufficient process, preserved fragments provide concrete evidence of exploration.

Rejected outputs make evaluation criteria visible. When students generate multiple alternatives and select one, the rejected alternatives represent implicit judgments about quality, appropriateness, or effectiveness. Unless rejected outputs are preserved alongside reasons for rejection, these judgments remain entirely tacit. Documenting what was rejected and why externalizes evaluation criteria that can then be examined: Were rejections based on technical quality? Alignment with goals? Aesthetic preference? Practical constraints? This examination helps students develop explicit understanding of their own quality standards and refine those standards over time. For collaborative work, documented rejected outputs and rejection reasoning create shared understanding among team members about evaluation criteria.

Revised specifications document how problem understanding evolves. Students often approach projects with initial specifications that prove inadequate once work begins—too vague, too constraining, based on incorrect assumptions, or addressing the wrong problem. Revising specifications is appropriate and necessary, but when revisions go undocumented, valuable learning is lost. Students might not recognize that specification ambiguity caused implementation difficulties, or they might forget what constraints were originally proposed and why they were modified. Documented specification revisions create a record of problem-framing evolution that reveals how requirements clarification and implementation inform each other. This record helps students develop metacognitive awareness about specification quality and learn to anticipate common specification problems.

Decision logs document reasoning that would otherwise remain implicit or be forgotten. Students make hundreds of decisions during projects: what approach to take, what tools to use, what features to prioritize, what trade-offs to accept. Many decisions get made quickly based on intuition, convenience, or limited information. When work produces unsatisfactory results, students struggle to diagnose problems because they lack records of what decisions were made and why. A decision log creates an external memory that can be consulted during troubleshooting ("We chose approach X because of constraint Y—is that constraint still valid?") and reviewed afterward for learning ("We repeatedly chose approaches that optimized for A at the expense of B—was that the right priority?"). Decision logs also support collaboration by making individual reasoning accessible to team members.

Together, these four artifact types create a visible record of process that complements finished work. This record serves immediate practical functions—supporting iteration, enabling diagnosis, facilitating collaboration—and longer-term learning functions by externalizing thinking that can be reflected upon. For students developing professional practices, these documentation habits build capacity for systematic learning from experience rather than vague intuition about "what worked" or "what did not work."

How This Shows Up in Practice (Non-Tool-Specific)

Filmmaking and Media Production

Pre-production in professional film and television generates extensive documentation of process artifacts. Script breakdowns identify every element required for production; successive versions of these breakdowns reflect refined understanding of requirements as location scouts occur, budgets constrain possibilities, and technical considerations emerge. Early shot lists propose ambitious coverage that gets revised as time constraints, equipment availability, and practical blocking challenges become apparent. These revised shot lists document how creative vision adapted to production realities.

Camera tests and lighting tests produce fragments—experimental footage testing different lenses, focal lengths, color temperatures, or exposure settings. These tests inform decisions about visual approach but rarely appear in final work. Preserving test footage with annotation about what was learned enables future productions to reference earlier experiments rather than starting from scratch. Similarly, rehearsal footage captures performance variations and blocking explorations that inform final choices but are not themselves final takes.

Edit decision lists (EDLs) document every edit point in a sequence, but most editing involves trying multiple cutting patterns before selecting one. An editor might try cutting a scene with different takes, different rhythms, or different juxtapositions. The rejected edit variations represent judgments about pacing, performance, and narrative clarity. Some editors maintain "graveyard sequences"—collections of rejected edit choices—so they can revisit alternatives if later decisions create new needs. Directors reviewing these alternatives can articulate why particular choices work better, making implicit editorial judgment explicit and teachable.

Design

Designers generate numerous variations and iterations before converging on final designs. Early sketches explore different layout possibilities, visual hierarchies, or interaction patterns. These sketches are fragments—rough, quick, focused on investigating possibilities rather than presenting polished work. Design studios that maintain sketch archives create institutional memory showing how different designers approached similar problems and what exploration patterns proved productive.

Design critique involves presenting alternatives and receiving structured feedback. Rejected design directions might fail usability requirements, prove technically infeasible, violate brand guidelines, or simply be less effective than competitors. Documenting rejected alternatives with annotation about why they were rejected creates visible records of design reasoning. Junior designers learning from these records gain access to evaluation criteria that would otherwise remain implicit in senior designers' tacit judgment.

Design specifications evolve as projects develop. Initial wireframes propose interfaces that user testing reveals to be confusing. Accessibility audits identify contrast ratios that require color adjustments. Technical implementation exposes interaction patterns that perform poorly. Each specification revision reflects new information or refined understanding. Version-controlled design specifications with commit messages describing rationale create traceable records showing how requirements evolved and why changes were made.

Writing

Professional writers produce multiple drafts, each representing revisions based on rereading, feedback, or evolved understanding. Early drafts often explore tangential ideas, follow organizational structures later abandoned, or develop arguments subsequently refined. These drafts are fragments of an exploration process. Writers who maintain draft versions can examine their own revision patterns: do they typically cut material (suggesting initial overwriting) or add material (suggesting initial underspecification)? Do they restructure organization frequently (suggesting weak initial planning) or mainly refine prose (suggesting strong structural intuition)?

Editorial feedback often suggests revisions that writers accept, reject, or modify. Rejected suggestions represent judgments about voice, audience, purpose, or craft. A writer might reject a suggestion to cut a passage because, despite concerns about length, the passage serves an essential rhetorical function. Documenting these rejection decisions with brief rationale creates records of writerly judgment that can be examined retrospectively and refined over time.

Writers working with constraints—word limits, source requirements, style guidelines—often revise those constraints as work develops. An initial plan to address three subtopics proves inadequate; revised specifications narrow to two subtopics treated more thoroughly. An assigned word limit proves insufficient for adequate treatment; negotiated revisions adjust the limit or narrow scope. Documented specification revisions show how writers negotiate between constraints and communicative goals.

Computing and Engineering

Software developers create branches for experimental features, some of which never merge into production code. These experimental branches are fragments—explorations of technical approaches that inform final implementation decisions even when not directly incorporated. Development teams that maintain experimental branches can reference them when similar problems recur or when initially-rejected approaches become viable as conditions change.

Code review processes generate rejected implementations. Pull requests might be declined because they introduce technical debt, violate architectural patterns, lack adequate test coverage, or solve problems in ways that create maintenance challenges. Each rejection represents a judgment about code quality based on technical and organizational criteria. Recording rejection rationale in code review comments creates searchable institutional knowledge about coding standards and architectural principles.

Software specifications—requirements documents, technical design documents, API contracts—evolve as implementation reveals ambiguities, edge cases, or infeasibilities. Version-controlled specifications with detailed commit messages document how requirements understanding developed. When teams encounter bugs or unexpected behaviors, they can trace back through specification versions to understand what assumptions changed and whether those changes were intentional or inadvertent.

Engineering decision logs document architectural choices, technology selections, and approach trade-offs. A decision to use a particular database might be based on performance requirements, team expertise, licensing costs, and operational complexity. Six months later, when that database proves inadequate for new requirements, the logged decision provides context: what constraints drove the original choice, what alternatives were rejected and why, and what assumptions have changed. This context informs whether to address limitations through migration, adaptation, or acceptance.

Common Misunderstandings

"Documentation creates bureaucratic overhead without improving work quality"

This objection treats documentation as an external compliance requirement rather than as a tool for thinking and learning. Badly designed documentation requirements can indeed become bureaucratic burdens—extensive forms requiring information no one uses, mandatory procedures disconnected from actual decision-making, or retrospective documentation created solely for audits. However, the artifact types identified in the slide serve different functions: they externalize thinking as it occurs, create external memory supporting immediate work, and generate learning resources for future reflection. The confusion arises from experiences with documentation-as-surveillance or documentation-as-procedure-compliance rather than documentation-as-cognitive-tool. Effective documentation serves practitioners directly by clarifying their own thinking and supporting their iterative work, not solely external evaluators or compliance requirements.

"Only final work matters; intermediate artifacts are disposable"

This perspective conflates evaluation criteria (finished work demonstrates capability) with learning processes (intermediate artifacts reveal thinking and development). For assessment purposes focused solely on outcomes, intermediate artifacts may indeed be unnecessary. However, for learning purposes focused on process development and iterative improvement, intermediate artifacts are essential evidence. Students who discard fragments, rejected outputs, and revision histories eliminate the very materials needed to examine their own decision-making patterns and refine their processes. The misunderstanding treats learning as if it occurs solely through producing final work rather than through examining how that work emerged through successive decisions and iterations. Professional practice that values continuous improvement requires retrospective analysis, which depends on preserved records of process.

"Comprehensive documentation means recording everything exhaustively"

This misinterpretation imagines documentation as attempting to capture complete, exhaustive records of every micro-decision and exploratory step. Such documentation would indeed be impractical and overwhelming. The slide identifies four specific artifact types, each serving particular functions: fragments (preserving exploratory work), rejected outputs (documenting alternatives considered), revised specifications (tracking requirement evolution), and decision logs (recording significant choices with reasoning). None of these requires exhaustive documentation of every thought or action. Instead, they represent strategic documentation—capturing meaningful moments when exploration occurs, evaluation happens, understanding changes, or significant decisions get made. The skill lies in judging what is worth documenting, not in documenting everything indiscriminately.

"Documentation should be retrospective—created after work is complete"

This timing misconception treats documentation as post-hoc annotation rather than as concurrent with work itself. While retrospective documentation is sometimes necessary, it suffers from reconstruction problems: memory is selective and inaccurate, reasoning gets rationalized after-the-fact rather than captured as it occurred, and the cognitive state during decision-making is inaccessible to later reconstruction. The artifact types identified in the slide are created during work: fragments exist as byproducts of exploration, rejected outputs are generated and evaluated in real-time, specification revisions occur when understanding changes, and decision logs capture reasoning at the moment of decision. Documentation created concurrently with work requires less effort (no need to reconstruct from memory) and provides more accurate records (captures actual reasoning rather than rationalized narratives).

Scholarly Foundations (Annotated)

Caldwell, J. T. (2008). Production culture: Industrial reflexivity and critical practice in film and television. Duke University Press.

Examines cultural practices and industrial documents in professional film and television production, including analysis of production artifacts, trade materials, and the ways practitioners use documentation to reflect on and communicate about their work. Provides grounding for understanding production artifacts as standard professional practice rather than as academic impositions. Essential for situating artifact documentation within professional media production contexts.

https://www.dukeupress.edu/production-culture

Sterman, S., Cuenca, E., Li, L., & Drucker, S. M. (2020). Towards creative version control. ACM Transactions on Interactive Intelligent Systems, 10(2), 1-44.

Investigates how version control concepts from software development might apply to creative artifacts like graphics, video, and interactive media. Analyzes what creative practitioners need from history management systems, how they currently manage versions informally, and what challenges arise when adapting software version control to creative domains. Relevant for understanding how fragments and intermediate versions function in creative workflows beyond traditional software development.

https://doi.org/10.1145/3385430

Novakouski, M., Lewis, G., Anderson, W., & Davenport, J. (2012). Best practices for artifact versioning in service-oriented systems (Technical Note CMU/SEI-2011-TN-009). Software Engineering Institute.

Describes systematic approaches to versioning software artifacts including specifications, interface contracts, and service implementations. Addresses challenges of managing artifact evolution in complex systems where multiple dependencies require coordination. Relevant for understanding how specification revision management functions in professional technical practice and why version control disciplines matter.

Microsoft. (2024). Design decision log. Code with Engineering Playbook.

Provides practical guidance on maintaining design decision logs including what information to capture, when to create entries, and how decision logs support traceability, knowledge transfer, and retrospective analysis. Represents industry practice for systematic decision documentation in software engineering contexts. Useful as concrete example of how decision logs function in professional practice.

Kolko, J. (2012). Iteration and variation in creative work.

Distinguishes between iteration (informed refinement of existing work) and variation (exploration of alternatives) as complementary creative processes. Discusses how both processes generate artifacts—iterations show progressive refinement while variations preserve rejected alternatives. Addresses common tension between committing to specific directions and maintaining openness to alternatives. Relevant for understanding the functional roles of both refined and rejected outputs.

DORA. (2026). Capabilities: Version control. DevOps Research and Assessment.

Synthesizes research on version control practices in software development, demonstrating that comprehensive version control predicts continuous delivery success and team performance. Emphasizes version control not just for source code but for test scripts, configuration files, deployment scripts, and infrastructure specifications. Particularly relevant given AI adoption: strong version control practices serve as "safety net" enabling teams to experiment with AI-generated code confidently. Establishes evidence base for version control as foundational professional practice.

Rabiger, M., & Hurbis-Cherrier, M. (2013). Directing: Film techniques and aesthetics (5th ed.). Focal Press.

Comprehensive text on film directing that extensively documents pre-production processes, production planning artifacts, and evaluation procedures. Provides concrete examples of script breakdowns, shot lists, camera diagrams, blocking notation, and other production artifacts used in professional filmmaking. Situates artifact creation within actual production workflows rather than as abstract concepts.

Tang, A., & Lau, M. F. (2014). Software architecture review by association. Journal of Systems and Software, 88, 87-101.

Examines how architectural decisions get documented in software engineering practice, what information proves useful for different stakeholders, and how documentation supports architecture review, maintenance, and evolution. Provides empirical evidence about what makes decision documentation valuable versus burdensome. Relevant for understanding practical considerations in balancing documentation effort against value derived.

Boundaries of the Claim

This slide identifies four specific artifact types for documentation during a particular session; it does not claim these are the only valuable artifacts to document or that all projects require all four types. Different domains, project types, and team structures may emphasize different artifacts or employ additional documentation practices beyond these four.

The slide references these artifacts as examples of "standard pre-production and evaluation artifacts in professional media practice." This grounds the list in established professional practice but does not claim universal adoption—professional practices vary across organizations, projects, and individuals. Some professionals maintain extensive documentation; others work with minimal formal records.

The slide does not specify the format, level of detail, or tools for creating these artifacts. These are implementation questions requiring judgment about project needs, team preferences, and available resources. A fragment might be a rough sketch, a code snippet, a paragraph of text, or a prototype component; rejected outputs might be preserved with extensive annotation or minimal metadata; specification revisions might be tracked through formal version control or informal document versions; decision logs might be structured databases or informal notes.

The slide does not claim that documentation eliminates all tacit knowledge or makes all decision-making fully explicit. Some knowledge legitimately remains tacit, and some decisions occur at levels of granularity where documentation would be impractical. The goal is strategic visibility—making meaningful process moments visible—not exhaustive documentation of every thought and action.

The framework does not address important questions about access, ownership, and use of documented artifacts: who can view decision logs, whether rejected outputs become part of public portfolios, how fragments might be used in assessment, or what rights students have over process documentation. These are consequential questions with ethical and practical dimensions beyond the scope of identifying artifact types.

Reflection / Reasoning Check (Optional for Students)

1. Consider a recent project where work was ultimately satisfying. If recreating that work from scratch right now, without access to the files themselves, how much of the decision-making process could be accurately reconstructed from memory alone? What decisions would be difficult to remember: the alternatives considered, the reasons one approach was chosen over another, the sequence in which different aspects were addressed, or the problems encountered and solved? What does this reveal about the role of external documentation versus internal memory in preserving process knowledge?

This question prompts students to test assumptions about memory reliability and recognize that much decision-making becomes invisible even to the decision-makers themselves over time. An effective response would identify specific categories of information difficult to reconstruct (alternatives considered are typically forgotten faster than final choices; reasoning gets rationalized retrospectively; problem-solving sequences get smoothed into cleaner narratives than actually occurred), acknowledge gaps or uncertainties in reconstruction, and connect these limitations to the value of concurrent documentation that captures process as it unfolds rather than attempting retrospective reconstruction. The question assesses whether students understand that documentation serves cognitive limitations, not just external stakeholders.

2. Imagine collaborating with a partner on a project, both independently generating options for a particular component. Each has created five possibilities. When comparing them, three from one set and two from the other are unsuitable for reasons that become immediately apparent once discussed. If only the successful options are preserved and rejected ones discarded without annotation, what information would be lost? How might preserving rejected options with brief notes about why they were rejected support the collaborative process, both immediately and if revisiting this component weeks later?

This question tests understanding of rejected outputs as carrying information about evaluation criteria and problem constraints. An effective response would identify specific information lost when rejected options disappear without trace: the evaluation criteria applied (technical feasibility? alignment with goals? aesthetic appropriateness?), the exploration breadth (how much of the possibility space was examined?), and the reasoning that might prevent similar unsuitable options from being proposed again. The response should articulate both immediate benefits (creating shared understanding of evaluation criteria between collaborators) and longer-term benefits (preventing repeated proposal of previously-rejected approaches, documenting what constraints drove rejections in case those constraints change). The question assesses whether students recognize that rejected outputs are not merely failed attempts but rather information-bearing artifacts documenting the evaluation process.
