Research Article — 27 April 2026

Ensuring Human Authorship in the Age of Generative AI
The FERC Framework for Governed Human-AI Collaboration

Authors: Jacob Sherson, Janet Rafner, Roni Reiter-Palmon, Izabela Lebuda, Matthias Söllner, Yoed Nissan Kenett, Blerim Emruli, Manuel Rindle, Selina Weiss, Benjamin Goecke, Florent Vinchon, Yaoli Mao, Dominik Dellermann, Seyedahmad Rahimim, Janet H. Marler, Andy Nguyen, Jens Christian Bjerring, Steve Dipaola

Abstract: Generative AI has rapidly transformed professional knowledge work, yet prevailing prompt engineering paradigms frame effective human-AI interaction primarily as a matter of crafting better prompts. Even emerging multi-round optimization approaches tend to prioritize performance, consistency, and task completion over authorship preservation, evaluative reframing, and accountable decision-making. This paper introduces FERC (Frame-Explore-Refine-Commit), a governed interaction cycle for multi-shot human-AI collaboration that structurally safeguards human intent, judgment, and responsibility. Drawing on creativity research, particularly models of iterative problem construction and of co-evolution between generation and evaluation, FERC operationalizes metacognitive control in hybrid intelligence systems. We demonstrate how FERC reframes the Double Diamond design model for AI collaboration, provide practical behavioral guidelines, and introduce a Human-AI Collaboration Maturity Model tested across executive training contexts. Finally, we present FERC-bot, a metacognitive training interface that makes collaboration structures visible and measurable. Together, these contributions reposition prompt engineering from artifact optimization toward process governance, enabling scalable, authorship-preserving hybrid intelligence in organizational settings.
