In this episode of the Smarter by Design podcast, I’m joined by Clark Quinn, a cognitive scientist who has spent his career translating decades of learning research into practical guidance for organizations. He is the founder of Quinnovation and co-founder of the L&D Accelerator. His work is grounded in a simple conviction: most organizations are leaving enormous potential on the table — not for lack of effort or care, but because the science of how people actually learn has rarely made it into the room where learning decisions get made.
In most AEC firms, learning and development didn’t start with a formal strategy. It emerged organically. Executives responsible for talent came up through practice. L&D leaders stepped into their roles because they wanted to make their firms better, not because they were trained in the discipline. Subject matter experts shared what they knew without ever having been taught how to teach.
As a result, most learning organizations in the AEC industry were built by accident rather than by design. And in that gap lies a significant opportunity: to create learning that doesn’t just inform, but actually improves capability and performance.
That is what this conversation is about.
Clark walks us through the science that most accidental L&D leaders never had access to. He explains why training so often stops at information transfer, what it really takes to design for performance rather than content delivery, and what the research says about learning design that actually moves the needle. We explore the shift from content-heavy training to practice-led learning, how to identify the root causes behind critical performance gaps before reaching for a training solution, and how to determine whether learning is even the right intervention.
We also step back and look at what a true learning ecosystem requires: not just courses, but performance support, job aids, communities of practice, mentoring, and the cultural conditions where learning compounds over time. Where knowledge is shared openly. Where failure is discussed. And where leadership sets the tone.
Finally, we go deep on one of the most important dynamics in any AEC firm: how to effectively work with busy and highly billable subject matter experts by drawing out what they know, pairing them with skilled learning designers, and building a coaching culture that makes expertise transferable at scale.
If you lead an AEC firm, build learning programs, or teach others what you know—and you’ve largely been figuring it out as you go—this conversation offers a foundation for doing it smarter. By design.
▶ Watch or Listen
Watch or listen to this episode via YouTube, Spotify, Apple Podcasts or wherever you get your podcasts.
📺 YouTube
🎧 Spotify
🎧 Apple Podcasts
📚 Show Notes + Resources
Don’t miss the full list of resources, organizations, experts, books, articles, and concepts referenced in this episode following the Episode Transcript section below.
📃 Episode Transcript
This transcript was lightly edited for clarity.
Chris: Clark, you have spent years helping organizations rethink learning and development through the lens of cognitive science, performance, and innovation. And one of the ideas you're especially known for is the performance ecosystem. I really want to talk about this and dig into it today, but I want to start with how you got into this field. From my understanding, you got into this through cognitive science. Can you talk about that journey, and what drew you from cognitive science into learning and development?
Clark: I was doing tutoring on campus and got a job doing computer support for the office doing tutoring. I saw—I hate to tell you how long ago this was—that computer-supported learning was a relatively new idea. But I saw the connection and said, "Hey, maybe this is worth studying." We had a program where you could design your own major, and I ended up doing exactly that. It's been my career ever since—supporting learning through technology.
And it turns out technology means conversations as well as devices, and lots of other things. My first job out of college was designing and programming educational computer games, and that notion of engagement has remained a recurrent theme because it's relevant to learning. But I realized we didn't know enough. We were coming up with questions about how to design things, and I read an article calling for cognitive engineering—focusing on how we think, work, and learn, and designing to match how we think. My twist was, "Let's design for how people learn."
Clark: I did a PhD, was doing the academic route for a while, and when I came back to the US after a few years overseas, I said, "This makes perfect sense: to connect organizations that have learning needs with the technology and learning science I had studied." So I came in through a back way—from designing and programming educational computer games, getting that focus on cognitive science, and then meeting with organizations. I never had the traditional instructional design training. Instead, that focus on how our brains operate, and trying to design solutions that work in conjunction with that, has remained a recurrent theme.
Chris: If you go back to young Clark, learning and trying to do these things—what was the biggest surprise about how people learn that you didn't know when you first got started?
Clark: I guess the same as for a lot of people: the thought that learning really requires effort and practice. In some sense, learning came easy to me—I loved reading, loved looking at diagrams. I'd sit in front of the bookshelf with a World Book open and just work through the diagrams and understand things. So the thought that it actually took work—and that in particular, to have an ability to do, not just to know, but to do—really requires persistent practice. And the nuances get quite fractal, in the sense that everything unpacks more and more.
Just in short: if you want to be able to do something new that you can't do now, you're going to have to practice doing it and get feedback. That was what surprised me, and it ends up being true for a lot of people—that mismatch. Everybody's been through school, so we think that's what learning looks like. And school is actually a really bad model. We forget how little we actually learn from schooling.
Chris: Is that because it's more focused on information transfer versus—why did we learn so little from school, in your way of phrasing it, and what would be better?
Clark: School does have persistent, reinforced learning, which is good, but it's focused largely on knowledge. Most schools want you to be able to know things—to know history, to know science—and not do anything with it. In organizations, what we care about is people being able to do things with that knowledge. And that's what you mentioned—the performance ecosystem. There are two components to that. One is the focus on performance. That's really what differentiates, I think, school from organizational learning: we really need people to be able to do things, not just know things.
Chris: Or a traditional K–12 approach versus a vocational program, which is focused on helping you actually perform things in the world.
Clark: Sure. And those can be better or worse too. Some claim to be vocational and aren't, and others really get it. When you look at how we traditionally learned before schools—and schools are a relatively new invention, only in the past few hundred years—learning looked much more like apprenticeships. Jean Lave and Etienne Wenger talk a lot about this in historical practice. And I really like what Allan Collins and John Seely Brown talked about with cognitive apprenticeship. They looked at different initiatives—teaching reading, writing, and mathematics—and found similarities that suggested: if you really want to develop understanding, it should look a lot like apprenticeships.
Chris: "Cognitive apprenticeship" meaning—instead of physical skills—knowledge work skills?
Clark: Developing those skills to do reading, writing, arithmetic, and other things. Increasingly, our work is knowledge work, not physical work. We're getting robots that can run half marathons. We have complicated machines to harvest crops. So increasingly we're in an information age and we need to do knowledge work. That's why cognitive apprenticeship becomes a useful way to think about how we should design learning for actually being able to do mental tasks, which is largely what we do these days.
Chris: And so if the role of traditional education is not necessarily to build skills but to lay down some kind of intellectual foundation and teach you how to think—my impression from reading about you is that you're a bit of an outsider and something of a critic of the system. What I'm inferring is that we took that same approach from school and brought it into the workplace: more information transfer, and not teaching in a way that could help change behavior or change performance.
Clark: I'll push back a bit on your statement that school is about getting you foundations and learning to think—actually it doesn't do a very good job of teaching you to think. It teaches you the foundations, and you do need the foundations, as people like Paul Kirschner make quite clear. But I also think we don't do a good enough job of developing people's ability to do, and we don't develop people's ability to think, as you suggest. That's actually what we should be doing—in addition to laying those foundations—teaching you to think, expand your own learning. That's not really part of the curriculum.
The notion that you change the curriculum and the teachers will adapt is actually naive. You have to equip the teachers with skills to adapt, to add these things, to look at their overall perspective. Just assuming they can do this is not fair to them, nor to the students. So I think we could and should be adding the learning-to-think, metacognitive, meta-learning skills in K–12 and higher ed, but they're not there now. And so organizations assume it without actually testing it or developing it.
Chris: Organizations assume that when new people come into their organization, they know how to think?
Clark: They know how to learn.
Chris: They know how to learn?
Clark: They can think in particular domains they have been trained in or have prior experience in, and you'll hire for that experience. But assuming those people know how to continue to self-develop and acquire new skills is naive. Now we do have the L&D department—it's supposed to train—but their training is very much influenced by that experience with school. We also tend to take people who are good at their job and turn them into trainers, and they don't know how to train.
And then when we have disruptions like 9/11 or COVID and everybody has to go online, we say, "Here's an authoring tool, you're now an instructional designer." And we have a lot of people doing instruction who don't understand instruction. Cammy Bean wrote the book The Accidental Instructional Designer, and that's too often the case. We have people expected to know learning who don't, and yet they have to develop it. So they track back to what they learned in school about how to do education, and it looks a lot like school—presenting information and testing information. People will recite back what they've learned, and yet we have what in cognitive science we call "inert knowledge," because when they go out in the real world where that knowledge is relevant, it doesn't even get activated. They've never used it in context.
That's what I would argue L&D should care about—remedying not just people's ability to know enough stuff, but people's ability to do stuff.
Chris: You've thrown me three fastballs and I'm trying to figure out which one to swing at. I definitely want to come back to what you said about people not knowing how to learn, and I want to talk about people not knowing how to teach. Those seem like two meaty subjects. Can we start with people not knowing how to learn? How does one learn?
Clark: A good, effective self-learner should set themselves goals and design experiments. Our colleague Harold Jarche talks about personal knowledge mastery and has a workshop on it that really digs into the skills required. He's broken it up into three major components: Seek, Sense, and Share.
Seek is how you gather information. It turns out there are lots of folk psychologies about each of these steps. For seeking, people think, "If I just Google, I get an answer, that's great." And it's even worse now when you just get answers—not even links to articles—and they may be made up, they may not be right. And because you're a novice in that domain, you don't know if it's right or not. But seeking well means creating a good feed, monitoring that feed, and knowing how to write good search queries and evaluate your results.
There's this myth of the digital native—"Oh, they grew up with this technology, they're good at using it." And yet two separate studies by librarians in the UK and the US, looking at people's ability to write good search strings and evaluate results—this was before AI, but even then—found no difference by age. It depended on the individual's experience. As our colleague Jane Bozarth says, she's the oldest millennial, because she's a digital native even though she's close to my age. We have a lot of myths.
Sense is the second stage—that's involved in experimenting, applying it, testing it, refining it, making sure you understand what you've heard, that it means something to you and is worth sharing with other people. And Share is the third stage—who to share with, how to share. That creates a cycle, because people read what you say and some will comment, and that feeds back into the Seek again. When you do this well, you get a really productive learning cycle. But knowing all these phases and how to execute each well is not something you should assume, because folk psychology can get in the way.
Lots of people think, for instance, if you read and highlight, that's valuable. It turns out that's really simplistic and actually not very true. There are good ways to highlight, but there are far better ways to process material than rereading. And yet too many people just reread it and think that's valuable—instead of processing it, re-representing it in different ways, making inferences and testing them. So that aspect of "how do we be effective learners?" is just neglected.
The culture matters too. Is it safe for you to share your ideas with other people? If you have the Miranda organization—where anything you say can and will be held against you—people aren't going to share their best ideas.
Chris: Psychological safety, that kind of thing.
Clark: Exactly.
Chris: So that's like self-directed, self-motivated learning. Does the question around people not knowing how to learn also apply when there are more informal learning structures or mentorship structures? Are there still deficiencies in people being able to learn in that context?
Clark: Yes. I was just writing a post thinking about coaching. There's the standard model of sports coaching where people say, "Focus on your swing now—you were a little low consistently, let's focus on hitting it higher." That's very domain-specific. But there's been this notion of domain-independent coaching, which asks, "What did you do wrong? What should you do differently? Who might you ask?" They deliberately choose coaches who don't know the domain. To me, what they're doing is essentially doing what you would do if you were a good, self-efficacious learner. So should we instead be developing everybody's ability to be a good learner, rather than just focusing on coaching?
There are also two senses of informal learning here. One is, "I need to learn, there may be somebody out there with the answer, and I want to go find it." And then there's the other situation where there isn't anybody with the answer—so you're innovating, using skills like brainstorming to address this. Just as in formal learning, you don't know the answer when you start, and there isn't anybody who's going to provide feedback for you. The world will have to do it, or you'll have to test it. That requires the same learning skills, but it requires finding ways to get feedback from the world, because you can't get it from somebody else who knows more.
Both formal and informal learning have their role. When you're starting in a new field, you don't know what you need to know and don't know what's important, so you need some structure. I'm not one of those people who says, "Don't have courses." I say, "Do have courses that are effective." But then you need different things as you start developing and move from novice to practitioner. You know what you need to know and what's important, and you just need access to it. So you start needing performance support for those things you don't do frequently, where it's not worth trying to put it in your head.
Chris: What's an example of performance support?
Clark: The video I watched to repair my dryer. My dryer stopped working and I couldn't figure out why. I looked online, found this video, and it talked about diagnosing and replacing this particular piece. I don't even remember what it was I did—and that's fine, because I got the dryer fixed. It was performance support that allowed me to do that, and yet I didn't have to "learn" anything.
Chris: Or if I'm reviewing an NDA and I haven't done it in six months, but we have a checklist for clauses to look for—that's something that helps me in the flow of work.
Clark: Checklists, decision trees, process guides—there are lots of tools we use to make sure we don't have to put it all in our heads. We put it in the world so we can access it. But there are also times we do want to advance our understanding, and then we need resources. That's formal in a sense that somebody's created it, but it's further down the learning journey.
We also start needing community, we start needing experts for mentoring and coaching, and we start needing to interact with our colleagues to see how they did it, share what we're doing, and get feedback. So that's the ecosystem part—the performance part is making sure we can do it: as novices we need courses, then we want job aids for anything that doesn't have to be in the head, and we want to advance what is in our head. But thinking that one tool is going to be able to do all that is naive. Swiss Army knives are great when you're camping, but in the kitchen you really want the right different tools. The same applies in an organization: unless you're really small, you're going to want the right tool for the job.
Chris: Can we use a hypothetical to explore this performance ecosystem? In our industry—architecture, engineering, construction—let's say we have an architecture firm that focuses heavily on healthcare, which is a complicated discipline. There's medical equipment, jargon, compliance, different user groups. So let's say you have an undergraduate in architecture, maybe you've worked on some school projects, and now you're going to be doing some healthcare projects. There's this desire in our community to speed that learning cycle up. I've got some foundation in design and understanding how buildings go together, but I don't really know this discipline. How do you start thinking about a performance ecosystem to put around that emerging professional?
Clark: If they know buildings and design, and we're talking about buildings designed for specific healthcare needs, I'd probably also want them to get a foundation in the healthcare field—just what do they need to know? Not everything about healthcare; they may not need to know about filing forms for administration or reimbursements. But they probably need to know what are the common things people do, what are the common things people need, and who are all the different stakeholders.
And then there's a company—Jos Arets and Vivian Heijnen have Tulser, which is based in the Netherlands—and they have worked with my colleague Charles Jennings and created a whole process for thinking through exactly those situations. Because some of that's going to be performance support. What do you absolutely have to make sure a healthcare building includes? There's the standard checklist for buildings in general, but healthcare probably has its own requirements for the different machinery, and there has to be open space for wheeling complex equipment around. You start having multiple checklists from different areas.
Chris: Extra backup generators in case of earthquake, or something like that.
Clark: Right. And asking people to remember all that is probably not worthwhile. They may learn it over time, but you probably want to teach them how to use the checklist rather than memorize it. And you build that awareness of the resources available to them as part of what they're starting.
But you also want to make sure that person takes the additional healthcare course, and when they start going in, have them apprenticed with somebody—so they're watching someone, gradually handing off responsibility, with somebody looking over their work. And that person needs to know enough about learning to know, "When do I stop giving them feedback, and start asking them what feedback they should receive right now?"
So there's this transition from, "You got that wrong, you forgot to include seating for the family around the bedside"—I'm making this up, I'm not an expert in the domain—to at some point starting to say, "What have you forgotten?" "Ah, seating for the family." You're beginning to develop that self-monitoring. And then eventually you can start giving them real design requirements. It's the mentoring, and it's also the community. They should be seeing what other people on different projects are doing, and seeing the feedback they're getting, and then starting to give their own feedback, and sharing what they're doing and getting feedback. You start building this community.
Chris: If you have enough senior people in the community who can help with the domain, right?
Clark: And you also have to make it safe for them to share their good ideas, so they don't feel threatened by new people. They welcome the new generation, and if someone says something wrong, somebody else will correct them. There are a lot of factors that go into making that work. One of the best ways to find out you don't have a vibrant community is to put a social media platform in there and nobody shares. And you go, "Oh, maybe we don't have the right culture."
Chris: We lived through that when we first came into business in 2009—we were a social intranet, and it was all about, "Did they have the right culture?" Checklist number one: is there psychological safety? If somebody asks a naive question or says the wrong thing, how does the organization respond?
Clark: Right. And it takes nurturing to get there. I found out I'm not a good social media facilitator—I'm too much of an introvert. It takes time to build it, it takes cadence, it takes soliciting people, it requires people to continue to find value in the community, to gradually move from a legitimate peripheral participation to a central role.
Chris: We ran a workshop on leadership with some of our leading clients three or four weeks ago—companies that want to be learning organizations and want to push forward. And they described a bunch of disconnected learning programs and activities that weren't cohesively brought together in an ecosystem, to use your term. How does that all come together so that 1+1=7? You've got a formal course, maybe "Healthcare 101," you've got some job aids, they're part of a community, they learn vicariously through peers and mentors, they get feedback directly through mentorships, they have a social platform where they can ask questions. Many times it starts and stops with the course, and it doesn't include all those pieces as part of an integrated solution—is that correct?
Clark: Yes. And what you say is absolutely accurate, but I want to go a little bit further, because you can have all the pieces and still not have a coherent ecosystem, because the pieces aren't linked together. And there's more. Garvin, Edmondson, and Gino, in a Harvard Business Review article, talked about the dimensions of a learning culture. They really highlighted environmental factors—beyond having all the parts, you also have to care about the environment.
So, not just tolerating diversity but valuing it, recognizing the value. Not just saying "we welcome new ideas" but actually being open to them and testing them. Time for reflection—to actually stop and have time. People say, "Ah, we're too busy for reflection." It turns out if you take time for reflection, you're more productive than if you don't. And the last one is psychological safety. Amy Edmondson has continued to talk about that in a variety of useful ways in organizational culture.
But they go on to talk about practices too—you need to have an expectation and budget and time for experimentation. Amy Edmondson's most recent book is on smart failure—basically, it's about being safe to fail as long as you're doing smart experiments, because innovation requires risk. Learning requires risk, because you have to try things and be willing to get it wrong. If you know what you're going to learn if it goes wrong, and what you'll learn if it goes right, that's smart. Do it as low-investment as possible, and don't bet the farm unless you're going to lose the farm anyway.
And there's a dimension I hadn't thought a lot about. My colleague Matt Richter studied with Richard Ryan—of Deci and Ryan, the originators of self-determination theory—which is a really rigorous framework for thinking about creating an environment in which people feel autonomy, competence, and relatedness, the three elements of self-determination theory. When you have all those, you get people willing to commit and willing to contribute. He has a new book coming out called The Motivation Blueprint that talks about this quite nicely.
It's a long answer to your question, but really an ecosystem is greater than the sum of its parts, because it's also the environment in which it's operating. You can get positive outcomes from each element, but if you really want that quantum leap, you need to make sure you've got the environment right and them all synergized together, working in ways that allow people to take best advantage of it.
Chris: Right. And then, to take best advantage—performance ecosystem to what end? What is the goal of learning and development? What does good look like for an organization?
Clark: Traditionally, learning and development has been about making sure you can execute the things you know you need to do. And so we create courses to develop skills. Unfortunately, too often we develop courses for all solutions, instead of knowing we can use performance support, and that there are certain problems courses won't fix. If I could be selling solutions but I'm rewarded for selling product, I'm going to sell product until you change the incentives. Courses aren't going to fix that.
Chris: So you can put me through a course about solution selling, and if all my incentives line up for me to do the other thing, then what was the point?
Clark: Right. So there are lots of things that interfere, but ultimately we're about executing what's optimal—which is why we're too often the first part that gets cut. When things are tough, people go, "We're executing well enough, I'm not going to worry about L&D." So I argue that there's a second component to L&D that really should be part of it: facilitating innovation.
Chris: Interesting.
Clark: We in theory are the folks who know learning best. And innovation, as I've suggested, is informal learning—because you don't know the answer when you start, when you're doing the research, when you're doing troubleshooting, when you're doing design. So it's learning, it's innovation. Facilitating that, according to what we know about learning, is an opportunity for L&D. That's the principled reason. And then there's a very pragmatic reason: if you're the business unit responsible for adapting to increasing change and helping the organization meet ongoing needs, that suddenly becomes very strategic and central to the success of the organization.
Chris: Especially because we're in VUCA times right now—very fast-moving and very change-oriented. If the role of L&D is only to help scale the knowledge of the things we already know, but some of the most important things we need to know are unknown or changing very quickly—you have to be out at the edge, helping to facilitate that edge and helping the organization learn it, so that you can scale things as they settle down. Is that correct?
Clark: I like the way you put that. Yes. Doing the things you know you need to do is just going to be the cost of entry going forward. The only sustainable differentiator is going to be the ability to adapt, and that comes from that agility, that comes from that edge, that comes from experimentation and testing and tracking what's happening. I'm not saying L&D is responsible for it—I'm saying L&D is responsible for making sure that it's effective and optimal at leveraging what we know, but also facilitating that innovation process.
Chris: How can L&D do that? What are the things that help facilitate innovation? What's the role in the best examples you can think of?
Clark: To me it's about brainstorming as a practice, which I've looked at closely because I was interested in what makes creativity and good design. There was a series of articles some years ago saying, "Brainstorming doesn't work." And they were right, because the original proposal for brainstorming put people in a room, gave them a problem, and asked them to start talking. That doesn't work well because you're not leveraging the diversity of thinking in the room.
If you give everybody the problem but ask them to think independently—not come together or share ideas until everybody has come up with their own ideas—then you share. That's how you populate more broadly the solution space, and that increases the likelihood of finding the best solution.
Chris: Or maybe even letting people think about it for a couple of days in advance, for those who need to reflect and have their subconscious work on it.
Clark: That's even better. So there's time for that percolation, incubation, fermentation—pick your metaphor. We want to get those best ideas down, and then we share them. And before we evaluate them, we create—"Hey, what if we put these two together?" Let's randomly populate the solution space. There are apps that throw out random ideas just to prompt lateral thinking. Then you diverge first, get as many ideas as you can, then you converge. I knew an interface design company in Australia that set two separate teams on every problem and let them run, then brought them together and said, "Okay, what are the best parts of each?" It was a bit of a luxury, but the outputs were better.
Chris: This is a really interesting frame. Partly because we work in the design and engineering field, there's a lot of those kinds of thinkers per capita in our clients. But this idea that for a learning organization—I had written two things down and now I have a third. So: innovation, the frontier, the cutting edge. Then a function of scaling, once you've established some things that work and want more people to know how to do them. But then there's also a phase of unlearning, where we need to retire and stop doing what's no longer working or no longer accurate. There's that piece to it too.
Clark: You probably saw my reaction to the term "unlearning." I don't like that term. As somebody pointed out—maybe Tom McDowell—it makes it easy to think we can just unlearn stuff, and it's not that simple. Technically, we have to learn over the old traces, and the old traces can be quite strong and recapture our behavior under stress. Over time we'll backslide if we're not actively reminded, prodded, and trained to make sure we now have this new response.
And what Garvin, Edmondson, and Gino also talked about—beyond experimentation—was analyzing the results of those experiments, figuring out what they meant, and baking that into what you're teaching and training. If you come up with a better way to do a task, you're going to have to learn over the old way of doing it. So maybe you did Process X, and now Process Y is a better way. Let's learn Process Y.
Chris: That's interesting, though. If Process Y is only 5 or 10% better, you also have to calculate the cost of retraining your entire organization for marginal improvement. Do you actually end up positive in terms of ROI?
Clark: These are the decisions you have to make. Is the investment worth the cost? Is the benefit worth the cost it's going to take to make the change? In the short term, maybe 10% improvement seems small. But if you multiply that over multiple years or products, maybe it is worth it. It depends on the volume, what it costs to do it, and what the impact is. This is the classic ROI calculation, and my only problem with ROI is that sometimes it can mislead you. You could do something small that has a really big ROI, but something bigger may not have as big an ROI and yet is going to make a much bigger difference over time. You have to make sure you prioritize those relative impacts appropriately.
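To make Clark's caveat concrete, here is a minimal sketch in Python. All figures are invented for illustration: a small project can post a much higher ROI ratio while a larger, lower-ROI project still delivers far more absolute value, which is why ranking by the ratio alone can mislead.

```python
# Sketch: ROI ratio vs. absolute impact. All figures are invented.

def roi(benefit, cost):
    """Classic ROI: net benefit as a fraction of cost."""
    return (benefit - cost) / cost

# Small project: great ratio, modest payoff.
small_benefit, small_cost = 50_000, 10_000
# Large project: weaker ratio, much bigger payoff.
large_benefit, large_cost = 2_000_000, 1_000_000

small_roi = roi(small_benefit, small_cost)  # 4.0, i.e. 400%
large_roi = roi(large_benefit, large_cost)  # 1.0, i.e. 100%

# Ranking by ROI alone picks the small project; ranking by net benefit
# picks the large one, which moves the organization much further.
small_net = small_benefit - small_cost      # 40,000
large_net = large_benefit - large_cost      # 1,000,000
```

The point is not the formula but the prioritization: the ratio answers "is this worth doing?", while the net benefit answers "which of these matters most?"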
Chris: Can we talk about performance more specifically? We talked about the performance ecosystem but didn't really talk about performance too much. One of the things you and I had talked about before is that it feels like—let me tie two ideas together. You said "accidental instructional designer," you mentioned that book, but there's a level above that, which is like an accidental learning organization who stumbled into... I think all organizations are learning organizations because they have to be in order to survive. But there's doing it ad hoc, and then there's very intentional organizations who think about measurement, success, performance, and innovation. What's that journey look like from accidental instructional designer slash accidental learning organization to something you would consider best-in-class?
Clark: I also want to characterize the accidental learning organization. There are also organizations led by people who think they know the answers. We have this culture that says, "Oh, if you built a successful company, you're brilliant." And they totally discount the influence of luck—the research shows that luck plays a much bigger role than we credit. But sometimes they make all the decisions for everybody, instead of looking at and tapping into other perspectives. It's the old hierarchical approach.
In the information age, Keith Sawyer, an academic, documented in his book Group Genius—and Steven Johnson in his book Where Good Ideas Come From—that the myth of the lone genius who goes away and comes back with the solution isn't true. Everybody's best answer is social. "We Are Smarter Than Me," as the title of the book on my shelf puts it.
So how do you intentionally get there? It begins with figuring out how to facilitate the flow of information and communication in the organization. You make it safe, and you have the tools so you can have that conversation. And it's also what the leader does. I added to Garvin, Edmondson, and Gino's approach: it isn't just leadership that supports it—I believe leadership has to walk the walk. As Jane Bozarth's book Show Your Work suggests, you can't just say, "Make your mistakes visible." You have to do it yourself as a leader, or nobody's really going to believe it's safe to show your mistakes. There was a small software company up in Redmond, Washington, that had such an aggressive culture that if you shared a mistake, you would be eviscerated for it. So they all kept making the same mistakes, because nobody could admit to them.
So it's a conscious decision to take advantage of the people in your organization, creating an environment in which they can participate in the decision-making. You can't experiment if you don't measure, because you really don't know if the outcome is good or bad unless you have a measure.
Chris: You don't necessarily have clarity on the goal of the learning program or a specific learning activity. What needle should get moved if this goes well?
Clark: And that's one of the biggest issues in L&D—measurement. Too often, they measure, "Did people like the experience?" And there's nothing wrong with that, except that its correlation with actually having any useful impact is essentially zero. Salas et al. analyzed this in a meta-analysis and found a correlation of 0.09. That's zero with a rounding error—no correlation between whether people liked it and whether it was effective. And in fact there are plenty of examples of the instructor who was not the most liked and yet was the most effective when measured.
Chris: How do you measure if learning's effective? What does that mean?
Clark: Ultimately, it's what you said—it moves a needle in the organization. You should not actually start an initiative if you don't know what it is you're trying to fix. And there's a field called "performance consulting" that I think should essentially be part of L&D, which goes in and says, "Before we create a solution, let's make sure we understand the problem." What is the gap between what should be happening and what is happening? Is that gap significant enough that we need to remedy it? Let's look at all the gaps in the organization, figure out which ones we need to fix, and then—what's the cause? That's the gap analysis. And then there's the root cause analysis: Why do we have that gap? Is it wrong incentives? Courses aren't going to fix that. If people just don't have all the resources they need to do the job? Courses aren't going to fix that either.
Now, maybe it's knowledge? Ah, well now we're beginning to find something. Is this a skill? Something we need people to do that they can't do now, or they're doing wrong?
Chris: Or maybe they don't feel safe to raise their hand if they don't know what they're doing. So now we have skills, knowledge, and culture perhaps.
Clark: And so you work backwards and eventually say, "What would people be doing differently that would lead to this result?" And if people would be doing it differently, you look to see: are people doing this differently? Then you work back and say, "Are we training them to do it differently?" These are the measures. But it's ultimately: what is it you need to change in the organization? If you need salespeople to be shortening the sales cycle—well, you figure out what the sales cycle shortening process looks like, you train them on that, and then you look to see: is the sales cycle shortening? And is that yielding the increase in sales you want?
Chris: But only after you validated the problem isn't that no one wants to buy the product.
Clark: Right. Because no amount of training on selling will help with that. That's the performance gap, and the performance gap should be, "We're not selling enough product, and it turns out the sales cycle is too long."
Chris: So why is the sales cycle too long?
Clark: Exactly—you validate all this, and then you figure out what the intervention needs to be. And sometimes, when it's a course or a job aid, L&D steps in and develops that or curates resources. If it's a problem where you don't know the answer, that's when you assign a team.
Matt Richter likes to point to a British academic, Keith Grint, who analyzed military problems and talks about three types of problems. There are tame problems, where you just manage them. There are critical problems that are time-bounded—if you don't act, somebody's going to die—and you just need somebody to make a decision. And then there are wicked problems, where you have to assign a team, manage the process, and figure out what you're going to do. Different types of problems require different approaches. And it matters to know when you have a wicked problem, when to assign a team, and what sort of timeframe they need—in many cases, it's unrealistic to expect a significant change to come out of a week of meetings.
Chris: So we're kind of working upstream from learning, to organizations getting better at classifying and diagnosing problems correctly, so that the solution to every performance problem doesn't become training or learning.
Clark: Right. And while that may make traditional L&D teams uncomfortable in the short term, if you reduce the courses you develop to only the ones where they're really needed, and then you do them well—you have more resources to do the other things, like facilitating innovation, curating job aids and learning resources, and making sure the community is vibrant.
Chris: Do you think learning and development teams try to take on too much scope? It's kind of implicit in what you just said—maybe some of what they're doing could be scaled back in favor of more effective activities.
Clark: I think they're doing too much of the wrong stuff. They're doing too much content development. Back to how we perceive school: if I give people new information, they're going to change their behavior. And that, I believe, is implicit in so much of what we see in organizations—yearly reviews, for instance, which have been shown to be largely worthless. Why do we continue to do them? Even twice-yearly reviews aren't frequent enough; people really need regular feedback. And bullet-point training, just giving people information, isn't going to change their behavior. And yet, too often: here's the content, PDF, PowerPoint, add a knowledge quiz, and you're done. And now people want to do that faster and faster.
Chris: Why do you think—we're in 2026—I've heard what you just said for a few decades now. Why is this still a problem?
Clark: Why has education been recognized as ineffective for decades, and it's still true? I think there are multiple reasons. L&D isn't very good at communicating its value, and it has been quite comfortable just taking orders and producing courses. They don't measure, so they don't know that they're not effective. And stakeholders, if something looks like school, think it must be effective. So they're not really asking for data. I'm still waiting for that to change, by the way. Eventually the CFO is going to go, "We're spending money on training—what evidence do you have that any of this is having an impact?" And I would rather L&D be changing to prepare for that rather than be faced with it as a sudden and surprising outcome.
Chris: We have bad ideas about good course design. What is good course design?
Clark: Practice. I believe strongly that we've got our ratio wrong. We tend to have about 80% content and 20% practice, and most of that practice is knowledge testing. There's research—Pooja Agarwal showed this—that we used to believe you need to make sure learners have the information before you test whether they can use it. So there were low-level questions and high-level questions. But it turns out: if you just ask high-level questions—applying the knowledge, not just retrieving it—you still get the same transfer to the ability to do. You don't need the low-level knowledge questions, because to answer the high-level ones you have to retrieve the knowledge and apply it. So you might as well skip the retrieval-only questions and go straight to application.
Chris: Can you give me an example of what high-level versus low-level would be in this scenario?
Clark: Let me see if I can make an architecture example. A low-level question would be, "What are the requirements for a weight-bearing beam under the current code?" And you recite an answer—"Current code says the beam has to be this long and sustain this much weight." That doesn't mean you can design anything. What you then need is: "Here's a building with this much weight—how many columns does it need?" That difference in question is applying the knowledge. What are the decisions people really need to make in the real world? Not what knowledge they need, but what decisions they make. And answering those application questions requires you to retrieve the knowledge and apply it, just as the retrieval-only questions require the retrieval. So you don't need the retrieval-only questions.
Chris: Is part of the reason people stick with low-level questions that it's easier? Low-level stuff can be multiple choice and fill-in, where the other one has to be an interactive exercise and someone has to evaluate whether their design would actually work. Is that part of this?
Clark: Yes, it's part of it. It turns out—I recently ran, for the third time, a workshop on writing mini scenarios. You can take multiple choice questions and turn them into scenarios. So instead of asking a knowledge retrieval question, you ask: "Here's a building with this much weight—how many columns do you need?" And you have several options reflecting how people reliably go wrong: too many, the right number, and too few. And you have specific feedback for each wrong answer: "You were probably assuming X, but you forgot to add in Y." So you can write better multiple choice questions even with your standard tools, but you have to know that's important.
And it's not rocket science, but people can reliably go wrong. One of the things I found when first training people to write mini scenarios is that they would write decent scenarios, but then the alternatives would still be categorizing things instead of making decisions. That's why I created the workshop. It's challenging to get people to internalize that message, but it's valuable and worthwhile to start thinking about. The most important thing is to start making realistic practice and having a fair amount of it, because that's what makes learning work.
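The mini-scenario pattern Clark describes can be sketched as a small data structure: one realistic decision, distractors that mirror how people reliably go wrong, and targeted feedback per option. This is an illustrative sketch in Python only; the scenario text, numbers, and names are invented, not from any real course or authoring tool.

```python
# A minimal sketch of a mini scenario: a decision question whose wrong
# answers each carry feedback naming the likely faulty assumption.
# All names and figures here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Option:
    text: str
    correct: bool
    feedback: str  # targeted: names the assumption behind the mistake

scenario = {
    "prompt": "A 2,000 sq ft pavilion roof carries a 40 psf design load. "
              "How many columns does the layout need?",
    "options": [
        Option("12 columns", False,
               "Too many: you were probably derating each column to half "
               "its allowable load."),
        Option("8 columns", True,
               "Right: tributary area times design load stays within each "
               "column's capacity."),
        Option("4 columns", False,
               "Too few: you likely forgot to add the roof's dead load to "
               "the live load."),
    ],
}

def answer(scenario, choice_index):
    """Return (correct?, feedback) so every wrong answer teaches something."""
    opt = scenario["options"][choice_index]
    return opt.correct, opt.feedback
```

The design choice is that feedback lives on each option, not on the question: a learner who picks "too few columns" hears about the assumption they probably made, which is the "specific feedback for each wrong answer" Clark describes.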
Chris: So instead of being 80/20 content-to-practice, you flip them?
Clark: Yes. 20% content, 80% practice—I may be exaggerating, but if you include the practice after the learning event, it's probably about the right ratio. It's also important not to let content experts just pile on: "Oh, they have to know this, and this, and this." What's actually happened is that experts can't tell you about 70% of what they do—that's what research from the University of Southern California shows—because it's been compiled away; that's how our brains work. They can't always access what they know. So they recite, "They need to know this, and this, and this." And the instructional designer dutifully makes sure they cover it all, they release the course, and nothing changes.
Because what you need to do is first ask what it is people do. And they don't have good access to this. Do you remember the AI winter in the '80s? They built expert systems by asking experts what they did, and built systems that did what the experts said they did. And they tested them and they failed. They went back and said, "Expert, what's going on?" And the expert said, "I don't know." So they watched the experts—and what the experts did and what they said they did had no correlation at all.
Chris: Right. And those expert systems were so brittle because if a condition came up that they weren't trained on, they didn't know what to do. Whereas an actual expert, that's something they could probably solve—they just hadn't figured out how to articulate it to the system yet.
Clark: Yes, and that's why Paul Compton created his "ripple down rules"—any time there was an exception, you add a new rule. It was a cascading series of rules, as opposed to a complete expert model, and that was closer to the sub-symbolic approach we see today in machine learning.
Chris: Earlier you said something about teaching that I want to come back to. Experts aren't very good at teaching. So in the scenario where we're trying to teach somebody about sizing beams, and they've got an expert—instead of that person just laundry-listing everything they think someone should know, what would be a better approach? Does that person need to be a better teacher? Do they need better partners within the organization? You shook your head at the first one and not at the second.
Clark: I think it's wrong to try and force subject matter experts to become experts on learning as well. There are people who should be experts on learning, and we actually get better results if—going back to the community and social learning aspect—we put a subject matter expert and a learner together. A number of people who work with subject matter experts, including Guy Wallace and the late Roger Schank, say: "Bring multiple experts together and let them negotiate a shared understanding." You just serve as a facilitator. When you do that, you get a better outcome than just letting the subject matter expert teach it.
Chris: Meaning multiple levels of experts—so like some of it's maybe an expert or a mid-level person, where some of those learnings are more present in their brain because they've just had recent experiences, versus a more senior person where it's deeper in memory?
Clark: In fact, I argue for that strongly. Don't just have the expert who understands the deep model. You want their supervisors and managers, because they will say, "No matter the training, they still do this wrong and they still do that wrong." They see the performance gaps you were talking about. These are the misconceptions and the mistakes people make. You want to have learners get it wrong in the learning experience and get feedback, rather than get it wrong when it matters.
And as you say, you also want practitioners who are a few months into the job. And you ask them, "What is it you wish you had learned that you didn't learn?" That gives you good input about what your learning experience should include. Experts have compiled away a lot of what they do and find it hard to articulate. They can recite knowledge, but they also don't understand the struggles and transitions novices have to go through—they've lost access to that. So partnering them with somebody and having different representatives gives you the triangulating information you need.
And then not only do you need to develop the practice—and you should develop the final practice first and then work backwards until you've got practice that they can accomplish to begin with—then you've got your learning experience. But then what are the models they need? We talk about content and we don't break it down into the elements that play a cognitive role. One of them is mental models—those tell you how the world works. If you put too much weight on a beam and that beam's design interacts with the material it's made of, at some point it can fail. To make a safety requirement, we need a certain level of strength in every beam that meets this requirement. That's a model of how the world works.
Then you need examples that show those mental models in practice. And then you need to hook people emotionally upfront and deliver on that premise by giving examples and practice that learners recognize are relevant to them, pitched at the right level of challenge. If you get these things right—good mental models, a worked example or two that shows how it works in context, practice first simple and then more discriminating challenge—that's how you develop competence. And that progression of model, examples, practices, model, examples, practices is what leads to people developing real ability.
Chris: So these are kind of model and examples, practice, model and examples, practice—moving back and forth?
Clark: Model, examples, practices, model, examples, practices—yes, exactly right. And that progression is what leads to people developing skill.
My colleague got in trouble for promoting the 70/20/10 model because the numbers were too perfect, but the underlying idea is that you don't just take a course and you're done. You think about how you developed anything you're good at. Chris, were you good at interviewing people when you started out? Did you develop over time, work with people, get feedback, and improve? It was multiple practices.
Chris: For our audience that doesn't know 70/20/10, can you explain that?
Clark: Research was done at the Center for Creative Leadership in North Carolina, where they asked executives, "How did you end up being able to do what you're able to do?" They stripped away what they couldn't intervene on and found roughly: 10% was courses, 20% was interaction with others, and 70% was learning on the job. Now, they simplified the numbers and threw out some data, so some people complain it's not rigorous science. But what Charles Jennings found it useful for was when executives thought courses were the only solution: he'd say, "Think about your own development. How many courses did you take on interviewing people?"
Chris: Zero. Read books by interviewers, watched and listened to a ton, but actually a course? Zero.
Clark: Right. And that's too often what happens. What he was trying to help people recognize is that formal instruction can be really useful for stuff we're not biologically primed to learn. David Geary talks about biologically primary and secondary learning. Language, for instance—we are primed to speak and listen; brains are wired to do that. But we are not wired to learn to read and write; that takes instruction. Most of what we need to learn in the information age, including architecture, is stuff we've artificially created, and our brains aren't wired to learn it naturally.
Chris: So, earlier we talked about learners learning how to learn. The teaching thing I want to pause on. For teachers, everything you said makes sense. And in our industry—this is probably true everywhere—those experts are highly in demand, highly billable, and very busy. Another reason why we're trying to minimize the impact on them. If they're not going to be that effective doing it all by themselves, having them partner helps on the business side of the whole thing as well. What makes for a great expert on learning? This person that will partner with a subject matter expert—what have they learned how to do?
Clark: It's interesting. Jon Aleckson did a PhD with university faculty on how an instructional designer talks to a subject matter expert. One of the important things is being able to establish respect for your own knowledge of learning—to say, "Look, you're the expert in the domain, but your intuitions about how people should learn it are not always to be trusted. I'm an expert in learning," and making sure that's understood.
Then that should actually be true—they really do need to understand learning. I think everybody who designs for people should understand how people process the world: the basic human information processing loop. They should also know the basics of good learning, and by understanding how learning happens, they understand how the compilation happens. So they know how to work with subject matter experts to get out what they need.
Chris: Meaning—are they fishing for the mental conceptual model you were talking about before?
Clark: Fishing for the model. There are a number of things they need—the model, the misconceptions, the ways people reliably go wrong and why, and good stories about great successes and failures. Too often, subject matter experts don't like to talk about their failures, and they skip steps. Alan Schoenfeld did this—he was the math person in cognitive apprenticeship—and he would work on a problem on a board. Most experts will go, "Oh, you do this and this and this, and you get there." And it turns out people think it makes sense, but then when they go and try to do it themselves, they can't remember why those steps, or in that order, and so they make mistakes. They may think they're stupid, they may tune out, or they may just not really understand what's going on.
And what Mickey Chi's research showed is that people who explained the steps—the reason why—did better. Alan Schoenfeld would work on a problem and deliberately expose his reasoning: "At this first step, I could do this or this, but because of this, I did this." He even deliberately made mistakes, then continued on and said, "That can't be right because of this. Ah, I made a mistake here." He was showing the unpacking of the self-monitoring.
Chris: Because that mental model is locked so deeply that you have to pull it out like this for somebody—through a real example—in order to even understand what your mental model is.
Clark: Yes. And so a good instructional designer, when the expert says, "Do this, and then this, and this," asks: "Oh, why did you do that? What else could you have done? If you made a mistake, how would you know you got it wrong?"
Chris: So they have to be comfortable asking questions and not knowing the thing. And they also have to build enough trust with the expert that the expert will feel comfortable being vulnerable, or talking about times they failed. It's interesting.
Clark: And understanding that that time is necessary and valuable, and structuring the organization so that they do have that time for reflection. That's a reflection activity, actually. You know that they bill more and are in demand, and yet somehow you have to find a way to recognize that in the long term, if you're not giving those people time to help shape the next generation, when that knowledge walks out the door, you lose it.
Chris: Right. And that expert will spend their time late at night fixing and re-fixing work instead of trying to prevent it on the front end.
Clark: So they pull out mental models, exceptions, stories—including failures. Then what happens? They figure out learning strategies. And is it better to still have the expert teach? Now that the expert knows how they work, is it better to have them teach it, or is it better to have another person teach it?
Two reasons I'd say another person. One, I still don't expect the subject matter expert to have become a good trainer. But also, you're taking them away from that high-value stuff—that billing. You can do it with e-learning, or face-to-face instruction, or virtual online instruction. The instructional designers should create an experience that's going to lead to learning, knowing what needs to be taught, working with the subject matter expert to make sure they understand the content. They then should design the final practice and work back to early practice, then design the experience, and test it.
Because one of the things is that people don't have the same wonderful, predictable properties that concrete does. We have good learning science principles, and that designer will probably have a very good first cut. But they should test and tune. Look at people like Megan Torrance with her LLAMA approach—which is a lot like agile methodology—or Michael Allen with his SAM, the Successive Approximation Model. They're both iterative approaches. It's not "if we build it, we're done"—it's iterative learning experience design.
Chris: Iterative learning experience design. So pilot it, get feedback, adjust, before you scale it out and roll it to everybody.
Clark: Yes. And going back to the 20/80 idea—20% content, 80% actual real activities and experiences—if the expert isn't in the room and you're working through scenarios and activities, you need someone who can help guide. You might need what customer care centers have: two tiers. So the instructor may be able to answer the basic questions. If you've taken a subject matter expert and trained them to be a good instructor—which isn't unusual—they might be able to do this. But they might also have to say, "I don't know the answer to that, let me check it out and get back to you." And they can follow up with the answer afterward. I still don't think just because you deeply know the domain, you'll be a good instructor in it.
And the more upfront experience the learner has in the domain, the less that's true. As you progress to practitioner, you really don't need formal instruction so much—you just need the ability to ask the expert and get feedback, or communities of practice become very important.
Chris: Like communities of practice become very important at that stage?
Clark: Yes. And I did want to make a point—I haven't mentioned context. One of the important things is: when you're developing a skill, how broadly do you need to apply it? If you're trying to understand designing pavilions, and it could be for healthcare, or vendor displays, or any other purpose, you have a lot of broad transfer needed. You really want to make sure that all the examples span the space of possible application, so that you increase the likelihood of transferring to all the ones you're going to see. If I only train you on healthcare pavilions and then you're asked to design a vendor pavilion, you may make huge mistakes. But if you've been trained on a variety of different types of pavilions, you're more likely to successfully design any type.
Chris: Or conversely, you could overtrain somebody—if they're not ever going to do anything but healthcare work, why are we teaching the other context?
Clark: Right. So you need to know the space of application, then you want to pick the minimal set—across examples and practice, with more practice than examples—that will develop the ability but also cover the space to support appropriate transfer. If they're only doing healthcare pavilions, train them on the variety of healthcare pavilions. That's going to be much more important than training them in all the different types of pavilions.
Chris: I want to loop back to something that's sticking in my mind. We talked about why it's challenging for a subject matter expert to do the curriculum design on their own, but we didn't really talk about why it's hard for them to train or be the instructor on that material, even if they had a well-designed curriculum. What is it that makes it hard? Why aren't they necessarily good at this?
Clark: There's a lot that goes into facilitating face-to-face instruction: reading the audience, knowing how to give good feedback in a way that isn't personal and is objective, knowing what the right elements of appropriate feedback are, how to manage distractions in the classroom—if somebody's not paying attention, if somebody comes in late, if somebody's on their phone. There's a whole range of things I wouldn't expect a subject matter expert in a domain to know in order to be successful. Thinking that teaching is easy is just naive.
I don't do as much classroom instruction—I've been much more computing-and-learning-based. My colleague Matt Richter worked with Thiagi, who's one of the great activity designers. They wrote a whole book on designing interactive activities that are designed to play different roles in the learning experience. Having people active and knowing how to do that instead of just presenting content is an important aspect I wouldn't expect a subject matter expert to understand. Not because they can't—it's just, why would they? And some may be absolutely perfect. Richard Feynman just happened to be not only a great physicist but a great instructor. But that's not the way to bet.
Chris: One of the reasons I really wanted to talk to you is that for the majority of our listeners and community, they do find themselves as the accidental instructional designer, the accidental learning organization, the accidental learning and development leader. What haven't we talked about that you think is really important for them—if they want to become a modern learning organization or be very successful in learning and development?
Clark: There are sort of two things. One is: if you want to be a learning organization, don't just think that because you want it, it's going to happen. You need to take concrete steps; it takes a strategy, and it takes knowledge about how to do it. The other is at the individual level: if I want to know about learning, how do I become a learning professional? How do I self-learn enough about learning to be able to do this?
There are lots of ways people learn—they can attend events, workshops, read books. You were talking about reading books on interviewing and watching interviews. All of these things are fine. Look at good design critique, but you've got to be active, you've got to be reflective. I talk about learning as being action and reflection. In the real world, you act and you reflect on it—that's how you learn. Good instruction, then, is designed action and guided reflection.
Now, you can make your own choices about what you're going to do. People say, "Oh, you shouldn't go to conferences or lectures—lectures aren't learning experiences." Actually they are. If you're a practitioner in day-to-day work, a lecture is a reflection opportunity—a chance to take somebody else's perspective and consciously, mentally apply it to what you're doing. It's a thinking chance. So I don't have a problem with lectures; they should be good lectures, and you should self-select as a learner which ones to attend. But you need to take that knowledge and apply it, test it, and refine it. It's back to that model of seek, sense, share. Intention isn't enough. It takes deliberate choice and action, and persistence to get to where you want to be.
Chris: And I think what you're implying is that just buying learning technology or hiring a learning and development person—those aren't enough. What I've taken away from this conversation is: really getting to those performance gaps and the business goals, working back to learning, and also culture. Whether it's epistemic humility, psychological safety—thinking about where experts and leaders perhaps overestimate their certainty. And then there's this idea that the leader doesn't know everything. I think a lot of learning and development activity has been driven by employee engagement surveys saying, "We need more training," and so the organization creates more training, because that's what people said they wanted. Is that outside of our industry too? Do you see that?
Clark: Oh yeah. There are a lot of tactics—hiring an L&D person or buying a particular learning technology—without really knowing what you specifically need: buying the promises instead of identifying what you need. There are a lot of things people do to tick a box without a really focused strategy for achieving their ends.
And lots of people will tell you, "You do this and you'll have solved the problem," because they have a vested interest in it. One of the things we did at the Learning Development Accelerator—my colleague Matt Richter and I—was create a research checklist that says, "When people make claims, how do you evaluate them? Can you give it a sniff test? Do they have a vested interest? Have they done a good study? Have they got a rigorous methodology? Have they got the right subjects? Is what they've done applicable to me?" There are lots of questions about the claims, and there are lots of claims. You have to know where you are, where you want to go, and then you can start gathering the information about how to get there. But you have to have a plan.
Like everything else, it takes a strategy. And you might hire an L&D person and say, "Your job is to come up with a strategy." Okay, then that's their issue. But they need to follow a good strategy, and that means knowing how people learn, knowing the details of good instruction, and knowing the basis of how we learn and why instruction works. That gives you the basis to also start supporting informal learning, because you know what works and what doesn't. Why, for brainstorming, do you have to think on your own first before you bring everybody together? If you don't know how our brains work, that isn't going to make sense.
Chris: Do you have a book or somewhere to point? If I were telling you, "I want to be a better learning and development professional"—do you have some starter places?
Clark: Lots. On my site I have a reading list of books depending on where you want to go. I strongly recommend starting with understanding the human information processing loop. That's why I created a video on it—I think it's about 20 to 30 minutes and discusses it, gives you some experiments so you actually experience the phenomena, and then some explanation.
Chris: We'll put both of those in the links in the show notes—the human information processing loop video and the book list.
Clark: Brown, Roediger, and McDaniel wrote Make It Stick, which is a good overview of how we learn—a good summary of the research. I wrote Learning Science for Instructional Designers, just trying to boil down what you needed to know most about learning. It's fortunately short.
For the strategic picture, that's why I wrote Revolutionize Learning and Development, though it's more than a decade old and was partly too early. There are other, slightly more modern things that are more comprehensive. JD Dillon's The Modern Learning Ecosystem is one I'd point to, though I don't fully agree with all of the model. And Lori Niles-Hofmann has The Eight Levers of EdTech Transformation, which talks a bit more richly about the technology picture than I did.
I haven't found one book that covers strategy perfectly—it depends on different facets. I mentioned Matt Richter's The Motivation Blueprint, which talks about creating an environment for motivation. I talked about Garvin, Edmondson, and Gino's article about learning culture. And I talked about Collins and Brown's cognitive apprenticeship, which is a really good model for designing learning.
Chris: Let's talk about the LDA—since we're talking about conferences.
Clark: That's why the Learning Development Accelerator was created. When COVID hit, people weren't going to travel to face-to-face events. So Matt Richter and Will Thalheimer—another of our colleagues, big on evaluation—put together a conference on evidence-based instruction. They brought in a number of us who cared about that, and it was successful enough that they decided to create a society. We regularly have a conference now in the fall—a learning science conference—where we get top people and bring them in to speak. We curate a curriculum, so it's not just whatever everybody wants to submit. We say, "This is what we think needs to be covered. Would you talk about this?"
And now we also have a spring conference—in fact it just started, the live sessions will be in a few weeks. We have a pedagogy that says: view the curated content before the conference, engage in discussion forums, and then attend the live sessions to interact with the presenters live in their special sessions.
Chris: So the content's available in advance, and there are discussions in advance. Then in the live sessions, what actually happens?
Clark: It depends on the presenter. Some want to elaborate on the material, and some will reflect back on what has happened in the discussion forums that they want more people to see. It's interactive and it changes. We run the main sessions twice, trying to capture more of the world, because it's online and we're trying to reach a broad population. And we have special sessions that are only live—a panel, roundtable discussions where you hear short presentations in small groups. These are people in L&D you should know if you don't.
Chris: And the spring one is more for people in the field practically applying L&D, and the fall one is more the science and academic community?
Clark: The fall one is specifically on the learning science and how to apply it to design good courses. The spring one is much more everything else you need to succeed at L&D—systems thinking, what user experience has to tell us, what marketing has to tell us, how to do analysis right. It's a bunch of the other stuff that goes around designing learning experiences. And that's a community you can join for free. There are things we do for free—meet-the-authors sessions and webinars. Then there are two tiers of membership, and the higher the tier, the greater the discount on offerings and the greater the access to certain other resources.
This goes back to what you were saying—your organization needs to support people in developing. It can't just say, "We want you to develop," but it has to provide time and resources. Whether it's the money to attend a face-to-face event, whether it's to become a member of something like the LDA—the organization has to care about people continuing to develop. And not only just care about it, but facilitate it and ensure that it's happening.
Chris: Maybe as we wind down—two things I want to ask you. One is, you're deep in putting this spring session together, you think a lot about where learning and development is going. What are you excited about in the field as you look at the next two to five years? What's got you optimistic and interested?
Clark: It depends on the time of day and phase of the moon, because my interests vary a lot. But one thing I'm interested in right now is this bridging gap. Too often, instruction—whether in schools or in organizations—gives you training and then stops. If you're lucky, you may get coaching, or you go into an environment where coaching is expected and there are people you can talk to. But too often we just train people up and then abandon them. There's a big decrement in performance, and then if they're smart and the environment isn't too punishing, they'll gradually acquire the skills. But we can do much more about actually facilitating that transfer from coursework to actual use in the workplace and change in the workplace.
Technology is giving us tools to address this, and I think that's an area where we could be doing much more systematically to ensure that transfer happens. To me that's the most exciting and important frontier.
Chris: Is that a process thing? What does bridging look like—going from a formal learning experience to actually supporting you in the work?
Clark: It depends. It could be coaching. It could be arranging mentors and training them—so that they know when to move from giving specific feedback to asking, "What should you be giving yourself feedback on?" You're gradually scaffolding people into becoming self-improving learners. It could be spaced learning—like what I'm doing with Elevator 9, which sends out little prompts over time, at intervals that learning science tells us are the right intervals, to reactivate the knowledge and gradually scaffold from what you learned in the class, to applying it in the workplace, to actually seeing changes as a result of what you're doing. These are all opportunities we have but are using idiosyncratically. I think we should be doing them systematically across the industry.
Chris: What I'm picturing is a more intentional handoff between the instructor and whoever the project leader is for the person who just went through that training. That project leader should have awareness of what the actual learning was and what the goals were, and then understand how to support a continuum of care—they take over the learning responsibilities.
Clark: Catherine Shinners told me about a company she was working for at the time where they wouldn't release training without having thought about how it was going to be extended—whether it was coaching, whether the project lead knew what was happening and was prepared to continue it, as opposed to extinguishing it. Which can happen: "Forget that stuff you learned in class. This is how we do things here."
Chris: We don't do it that way on my team. And one of our clients said this to me a couple months ago: "I really want all the leaders of my firm, and all the leaders of our projects, to understand every single course in our catalog and all of our learning experiences—so that when I'm in the moment of need with a learner, I know exactly what resources to point to." It's so obvious, but I wonder how rare that actually is.
Clark: Underpinning that is the assumption that there's a rational structure of those resources available. I will regularly ask audiences, "How many of you have material on your LMS that is out of date, never used, or questionable?" And everybody raises their hand. You want to make sure you have a rational arrangement—a curriculum of appropriate resources: courses, PDFs and PowerPoints, videos, whatever—that covers people's needs. And yes, the leader should know it. They should also know a bit about what to recommend, to whom, and when. You're beginning to create what I think is a sort of coaching culture, which is part of a learning culture, which is to be desired. Because each one coaches everybody else, everybody improves—if they're doing it well.
Chris: And that kind of—maybe to tie a bow on this—for a learning organization that really is at the organizational level, not a learning department, not a learning person, but a learning organization: it takes on that coaching culture.
Clark: Yes. I do argue that L&D should master that itself first, because you won't have credibility taking it out to the rest of the organization if you aren't making it work in your own organizational culture. And once you've managed to make it work and documented that it's improved things, then you have credibility going out and working with early adopters and all that other smart strategy. Getting it down so that you can demonstrate that you understand what you're talking about.
Chris: Clark, you have been so generous with your time. Thank you so much for being with us. If people want to get in touch with you, connect, or learn more about your work—what's the best way to find you?
Clark: You can go to the LD Accelerator—I'm reachable there at clark@ldaccelerator.com. Quinnovation is the way I've been assisting organizations. And I do have a blog, learnlets.com, where ideas that end up being presented in conversations or in presentations or in books show up there first. And if you really need help sleeping at night, it's a great place to go.
Chris: I've been aware of your blog through us being connected on LinkedIn, and I disagree—I think it's good reading. Clark, once again, thank you for being here. We really appreciate you.
Clark: A pleasure for the conversation, Christopher. Thank you very much, and thank you to your audience.
Show Notes + Resources
📚 Books
Make It Stick — Brown, Roediger & McDaniel. Accessible research summary on how we actually learn. hup.harvard.edu
Learning Science for Instructional Designers — Clark Quinn. A concise distillation of what L&D practitioners most need to know. quinnovation.com
Revolutionize Learning and Development — Clark Quinn. Strategic picture for transforming L&D (Clark notes it's a decade old but still relevant). quinnovation.com
The Modern Learning Ecosystem — JD Dillon. A more recent take on L&D strategy. td.org
The Accidental Instructional Designer (2nd ed.) — Cammy Bean. Referenced when discussing how untrained practitioners end up running L&D. td.org
Show Your Work — Jane Bozarth. On making learning and knowledge visible inside organizations. amazon.com
Creative Change — Jennifer Mueller. How innovation gets stifled in organizations that say they want it. amazon.com
Right Kind of Wrong — Amy Edmondson. On creating a culture where smart experimentation is safe. simonandschuster.com
The Motivation Blueprint — Matt Richter. On creating environments where people are willing to commit and contribute (forthcoming; check ldaccelerator.com for availability).
Where Good Ideas Come From — Steven Johnson. A journalist's take on how innovation actually happens. penguinrandomhouse.com
The Eight Levers of EdTech Transformation — Lori Niles-Hofmann. Talks more richly about the technology picture in L&D strategy. theeightlevers.com
🎤 People & Experts to Know
Harold Jarche — Creator of the Seek-Sense-Share (Personal Knowledge Mastery) framework. jarche.com | PKM framework: jarche.com/pkm
Charles Jennings — Learning strategist, popularizer of the 70-20-10 model. charlesjennings.com
Amy Edmondson — Harvard Business School professor, leading researcher on psychological safety. hbs.edu
Matt Richter — Clark's colleague at the L&D Accelerator. Co-created the LDA research claims checklist. thiagi.com
Jane Bozarth — Learning strategist, author of Show Your Work. Mentioned as "the oldest millennial." learningguild.com
Paul Kirschner — Researcher on learning science and instructional design foundations.
Guy Wallace — Expert practitioner on working with subject matter experts to design learning. eppic.biz
Thiagi (Sivasailam Thiagarajan) — Renowned interactive activity designer. Co-authored book with Matt Richter. thiagi.com
Allan Collins & John Seely Brown — Originators of the Cognitive Apprenticeship model.
Megan Torrance — Creator of the LLAMA agile learning design methodology. torchlightconsult.com
Michael Allen — Creator of the SAM (Successive Approximation Model) for iterative course design. alleninteractions.com
Keith Grint — British historian; framework for tame / critical / wicked problems.
Richard Ryan — Co-developer of Self-Determination Theory (Autonomy, Competence, Relatedness).
Michelene (Micki) Chi — Research on self-explanation and why explaining steps improves learning.
Lori Niles-Hofmann — Author of The Eight Levers of EdTech Transformation. theeightlevers.com
🏛 Organizations & Resources
The L&D Accelerator (LDA) — Clark's professional society for evidence-based L&D. Free and paid membership tiers.
Quinnovation — Clark's consulting firm.
Learnlets.com — Clark's blog. Ideas that later appear in his presentations and books.
Clark Quinn's Human Information Processing Loop video — ~30-min video with experiments.
Clark Quinn's reading list — Recommended books, organized by where you want to go.
Tulser — L&D strategy firm (Netherlands) co-founded by Jos Arets and Vivian Heijnen. Works with Charles Jennings on 70-20-10 application.
Elevator 9 — Startup working with Clark on spaced learning technology to extend training into the workplace.
🧠 Frameworks & Concepts Referenced
Performance Ecosystem — Clark Quinn's model: formal learning + performance support + informal resources + community + environment working together.
Seek-Sense-Share — Harold Jarche's personal knowledge mastery framework. jarche.com/pkm
70-20-10 Model — Roughly 70% learning on the job, 20% from others, 10% from formal courses. Research from the Center for Creative Leadership. ccl.org
Cognitive Apprenticeship — Collins & Brown's model: apply apprenticeship principles to knowledge work.
Self-Determination Theory — Ryan & Deci: Autonomy, Competence, and Relatedness as drivers of intrinsic motivation.
Tame / Critical / Wicked Problems — Keith Grint's framework for classifying problems and choosing responses. leadershipcentre.org
LLAMA — Megan Torrance's agile, iterative approach to learning design. torchlightconsult.com
SAM (Successive Approximation Model) — Michael Allen's iterative course design methodology. alleninteractions.com
Double Diamond — Diverge then converge: referenced in discussing how good brainstorming and design works.
Biologically Primary vs. Secondary Learning — David Geary: some learning (language, walking) is wired in; most organizational learning is not.
Inert Knowledge — Knowledge people can recite but cannot activate in context because it was never practiced in use.
