TL;DR:

Critique and review are different. 🗣️ Critique is simply about making the work better. Review is about assessing readiness for the next stage in the process.

Healthy critique requires psychological safety ✨. To make the work better means being able to discuss it openly and frankly, warts and all. To do so in a constructive fashion requires that everyone involved know there will be no retribution for their work or commentary, as long as it’s offered in good faith and in the interest of learning and doing better.

Perhaps the biggest barrier to conducting critique is 🧱 structural: ‘making space’ for critique by finding time, getting the right people in the room, and ensuring that the time is well spent. When done ad hoc, the effort necessary feels greater than the reward. So, make it an ✅ operational priority, something that just happens as part of the flow of work.

Why critique matters

Critique is essential for UX/design practice. Like, literally: it’s at the essence of design.

“The practice of design is creation and criticism in dialogue with one another. And I think we’ve emphasized creation and completely lost the sense of criticism.”
Erika Hall, Finding Our Way  

If your UX/Design org wants to deliver high-quality work, a constructive critique practice is crucial.

I am moved to write this post because, across the literally dozens of design organizations I’ve worked with around the world, all have struggled to handle critique effectively. Either…

  • They don’t conduct critique
  • They conduct critique, but rarely
  • They conduct what they call critique, but it’s actually review

Wait, there’s a difference between “critique” and “review”? Most definitely.

Critique is about making the work better.

Review is about assessing readiness for the next stage in the process.

What are the barriers to critique?

There exists an array of reasons why UX/Design teams either don’t critique, or do so rarely.

Understanding barriers

Don’t know what critique is. Many practitioners just don’t know what critique is. They didn’t experience it in school (perhaps, like me, they didn’t study design), and they haven’t experienced it in prior work. Some of those folks may now be team leaders, and critique just isn’t part of their practice or even vocabulary. 

Confuse review with critique. These are teams that perform an activity they call ‘critique,’ but it’s actually review. Critique is simply about getting a group of people together to help someone improve their work. It shouldn’t be considered a stage-gate in a process, or an opportunity for senior people to unload on the less experienced people in the room.

Too often, what’s called ‘critique’ is actually a form of review: not about improving the work, but approving it, determining whether it’s ‘ready’ to move on to whatever the next stage of the process is.

Some signs that you are in a review, not a critique, include:

  • non-UX/Design people in the room
  • very senior UX/Design people in the room, whose contribution is to pass judgment on the work, and who may be seen as the final decision-makers
  • all the work shown is expected to have a high degree of polish/finish

The wrong people in the room. Defining “wrong people” can go in a lot of directions:

  • there are too many people (and thus too many voices, and it’s noise)
  • there are too few people (and thus too few voices, and the feedback feels like direction, or it’s not really critique, just an informal ‘uhh, what do you think of this?’)
  • there are non-UX/Design folks in the room (critique, in making the work better, should be limited to those with real expertise in how the work gets done, not just anyone with an opinion)
  • there are people who don’t know the material under discussion (if you have to spend an inordinate amount of time bringing people up to speed so they can provide helpful questions and commentary, they might not be the right people to have in the room)

Focus on the wrong things. Critique should be about making the work better, which in turn should be all about the impact this work will have. What are the goals, objectives, metrics, etc., that this design is meant to achieve? Ideally, what changes when it is ultimately released? Any other discussion is a distraction.

Mindset barriers

People may very well understand critique, and even wish to conduct it, but are wary of doing so because of sensitivities around criticism: how it’s given, and how it’s received. Though understandable, such sensitivities hamstring critique.

Lack of psychological safety. If you remember one thing from this article, it’s that critique is about making the work better; review is about assessing readiness. If you remember a second thing, it’s that healthy critique requires psychological safety. Psychological safety is the condition where people can try stuff, make mistakes, wander down blind alleys, and push back, and, as long as it’s done with respect and in the interest of making the work better, have no fear of retribution.

Often, teams don’t conduct critique because people are unwilling to ‘show their mess’ (work in progress): they fear that showing anything that isn’t ready will expose them to undue criticism, performance issues, etc. Critique has to be a truly ‘safe space,’ or it will be rendered performative.

Uneven power dynamics. Too often, critiques are used as opportunities for senior team members to tear into the work of junior team members. Some even think this is the point, in a ‘steel sharpens steel’ kind of way. But if there is not a healthy dynamic, criticism from senior people can come across as an attack; the junior people experience a kind of workplace trauma, and subsequently do anything to avoid being put in that situation. This is all exacerbated if the senior people never have their work subject to critique, so it only ever goes one way.

Trouble distinguishing the work from the person. Critique is about the work. But if it’s not handled well, it can feel like it’s about the person. Sometimes this is about the nature of the feedback given, “You should…” or “Why didn’t you…?” Sometimes it’s because the presenter only provides one solution or idea, and so if that idea doesn’t fly, it feels personal. If the presenter provides options or alternatives, then it’s more straightforward to make it about the work.

Discomfort with the word ‘criticism’. I’m hesitant to bring this up, but for decades now I’ve heard people take issue with the words “critique” and “criticism,” with the idea that they encourage negativity.

Operational barriers

Because it occurs outside the flow of typical product development practices, critique can feel like a ‘nice-to-have,’ and nice-to-haves rarely happen, because people already feel overburdened with their ‘must-dos.’ If critique is seen to add effort (with little direct benefit), it won’t get done.

Finding the right time with the right people. In my experience, this is probably the most acute barrier to conducting critique. When done in an ad hoc fashion, critique requires scheduling well in advance in order to get on people’s calendars. And when competing with other demands on people’s time, it often loses to work that feels more crucial.

Setting the context so that the critique can be useful. I’ve supported teams where a third of the critique time is taken up ensuring people have enough context so that they can provide helpful feedback. 

A Plan for Critique

To address these barriers, I’ve created a plan for setting up critique within a design organization. And this is very much a plan, not the plan—what works in your org will likely be different. I’m hopeful that the principles and frameworks give you a place to start to make it work for you.

Principles / requirements for critique

Psychological safety. Stated above, worth restating. It must be made explicit that critique is not an environment for personal judgment, retribution, performance, etc.

All participants must be up for being critiqued (no commentator-only roles). This raises the question, “But what about more senior leaders, Directors and VPs?” And the answer is: they’re welcome to attend critiques as silent observers (in fact, it’s encouraged), but they do not get any say in the critique unless they put their own work up for critique. And remember: any work can be critiqued. It doesn’t need to be a design effort. Christina Goldschmidt, VP Design at Warner Music Group, shared on Finding Our Way how she embraces her work being critiqued: “[it may be a] strategy on something or it might be a flow on something or something like that. Or it might be an approach to something, but so that they can actually give me input.”

Fits within people’s working schedule. Critiques should not be ad hoc, but have regularly scheduled time dedicated to them, so people can plan accordingly. 

Manageable. There’s a bit of a Goldilocks-ness to critique—the subject matter should be ‘just right,’ to allow for a critique that isn’t too long, too short, too complex, or too basic. It may take a few cycles to figure out just-right-ness.

Prepared. Anyone presenting their work must be prepared. Critique is a gift to the practitioner, an opportunity to make their work better thanks to the attention and feedback from their peers. Respect those peers by being prepared, so that you can make the most of your time together.  

Good critique requires stable structure

At the core of critique are two structural elements:

  • Regular critique pods
  • Regular critique hours

Regular Critique Pods

A challenge for critique is the amount of context necessary in order to get quality feedback. I’ve seen orgs where critique happened across all design, and where a third of the time spent was just getting everyone enough context to understand the design problem.

To address this, consider creating Critique pods of 4-5 people. You likely already have people grouped in some fashion (reporting to the same manager; working on the same product), so just use that. If you don’t, figure out what works for now; the pods will likely shift and evolve for a while.

Regular Critique Hours

Another challenge for establishing critique is simply to find the time to do it. It’s important that critique happen frequently enough that it doesn’t feel like a big deal, but just part of the process.

To address this, consider setting aside 2 hours a week for critique, at the same time each week (e.g., 4pm-6pm UK Time every Wednesday). Everyone blocks this time off in their calendar and knows it is when critique happens.  

This time block holds across all critique pods. Benefits of having critiques happen at the same time include:

  • Two pods, working on adjacent or related material, can have a co-critique
  • The VP, Directors, and others who aren’t part of specific pods know when they can join specific critiques, which is great for leadership visibility (though: if leadership is not putting themselves up for critique, then they are to be silent observers of the process)

What to Critique?

There’s some question as to what is useful for critique. This illustration from the book Discussing Design depicts the ‘sweet spot’ for critique:

Diagram showing “The Life of a Design” along a left-to-right timeline: it starts with “the first spark of an idea,” then “you understand enough of the idea to communicate it to others,” which begins the critique sweet spot; the sweet spot ends at “time to produce the idea as is and move forward,” and the timeline completes with “the final, produced product.”

Critique shouldn’t only be about detailed designs. Workflows, wireframes, content directions, etc., are all good subjects for critique, as all are designed to deliver on some objective. 

Conducting Critique

Every critique session should have at least 2, and as many as 4, critiques within it. Rotate through the pod to make sure everyone is getting critiqued every couple of weeks.

Each critique takes 30 to 60 minutes, depending on how much material there is to cover. 

Discussing Design provides this simple framework for approaching critique:

  • What is the objective of the design?
  • What elements of the design are related to the objective?
  • Are those elements effective in achieving the objective?
  • Why or why not?

Before Critique

The Practitioner should spend ~30 minutes before the critique setting it up. They should create a board (in Figma/FigJam or Miro) with the material they want to walk through, prepare a statement about the objective(s) they are trying to achieve, and pull together any other context necessary to bring people up to speed (the business problem, the user type or specific target persona).

Early / Mid / Late

When preparing for critique, it’s important to situate the design work in the overall story of the project. If Early in the project, direct people away from the details of the solution and more toward matters of structure, flow, shape, and message. If in the Middle of the project, have people critique in detail. If Late in the project, discourage commentary about structure and flow in favor of final fit and finish, adherence to standards, and the specific use of language.

The Critique (figure about an hour total, though it will vary depending on breadth and depth)

Presenting – 10-15 minutes

After folks have gathered, the Practitioner presents their work, starting with the Objective statement and any other context. Then they go through the designs, articulating why they made key decisions. 

Clarifying questions – 5 minutes

Once the design has been presented, the Critiquers ask any clarifying questions they have, to make sure they understand exactly what is being addressed by the designs.

Writing down feedback – 10 minutes

On the critique board, Critiquers write virtual stickies to capture their thoughts about the work, in context. The feedback should be rooted in a) the objective and b) design standards. The amount of time for writing down should be relative to the amount of work shown, but it is important that it is timeboxed.

Feedback should be both positive and negative. It’s okay for the feedback to be mostly negative (we are trying to improve the work), but it’s helpful to call out what is working (and should be left as is).

When a Critiquer places their thought, other Critiquers who agree or disagree with it can place a 👍 or a 👎 on the sticky to indicate that.

As Critiquers are placing their thoughts, the Practitioner is reviewing them and making their own notes in terms of follow-up questions to ask.

It’s essential that this first round of feedback is done silently. If you go immediately to oral feedback, that favors folks who are more comfortable speaking out, who ‘think out loud.’ Reflective written feedback enables greater participation from Critiquers.

For strong guidance on how to give good written feedback, read: Asynchronous Design Critique: Giving Feedback. The advice holds even for synchronous, written feedback.

Feedback Don’ts
  • Do not provide “preference” based feedback (“I like…”)
  • Do not offer solutions in the feedback (“Move this to the right”)
  • Do not make assumptions—if you’re not sure about something, ask a clarifying question
Feedback Do’s
  • Connect comments to objectives or design standards
  • Bring a perspective (either yours, or a persona’s)
  • Point out (what seems to be) the obvious—it may not be so to others
  • Indicate your level of severity. For example, the emoji model suggested by the Giving Feedback article:
    • 🟥  a red square means that it’s something potentially worth blocking
    • 🔶  a yellow diamond is something where one can be convinced otherwise, but it seems that it should be changed
    • 🟢  a green circle is a positive confirmation.
    • 🌀 a blue spiral is for something that’s uncertain, an exploration, an open alternative, or just a note.
Discussing feedback – 15-20 minutes

The Practitioner walks back through the work and the critique comments. They ask for clarification on anything that they don’t understand. For any sticky with a 👎, ask the person who disagreed to explain why.

The Practitioner should not defend their work, nor revise/refine based on feedback in the moment. All the Practitioner needs to do is take the feedback in, and make sure they understand it. 

To understand the ‘weight’ behind the feedback, conduct a ~3-minute voting session, where the Critiquers vote on the items (3-5, depending on the amount of commentary) that they feel most strongly need to be addressed. This helps the designer understand where to focus their efforts in revision.

Additional Resources

Web searching will turn up a bunch of good stuff on how to conduct critique.

This post builds on the Emerging Shape of Design Orgs.

As design organizations scale, I’ve seen a number of design leaders struggle with all that’s expected of them. Let’s look at the “HR Software” org I drew in the last post.

No Time for Creative Leadership

The VP Design is a true design executive, and, as I wrote in The Makeup of a Design Executive, is expected to deliver on Executive, Creative, Managerial, and Operational leadership. The thing is, with a team this size, and particularly if it’s growing (as so many teams are), they simply don’t have the time to do it all (unless they work 60-, 70-, 80-hour weeks). These VPs need to focus on what’s core to their role, the executive and managerial aspects, and so the creative leadership suffers.

Even the Design Director is spread thin—overseeing a team of 15-20 people, recruiting and hiring, encouraging professional development, building relationships with cross-functional peers. This takes up all their time, and, apart from weekly critique sessions, they don’t have the capacity to provide creative and strategic leadership to their teams.

Which Means No Time for Strategy

Design organizations are increasingly expected to contribute to product strategy, but these structures support little more than product delivery. If the team is asked to develop a vision for the future product experience 2-3 years out, how do they get it done? 

One way is to hire external consultancies. And that can serve as a good kickstart, but such relationships should be seen as bridges to the point where the design org is able to conduct its own strategic practice.

And as design orgs scale, and design leaders develop organizational authority, a common move is to create a Design Strategy group, a small team of senior designers to tackle wicked problems outside of the constraints of business as usual. It may look something like this (building on the depiction of the growing design org from the last post):

Scaled design organization with Strategy team tacked on at the end. (There’s an argument to be made that the Strategy Team could be a pillar of the Platform team. The point that follows wouldn’t really change.)

Separate Strategy Teams within Design Orgs suffer the same problem that any separated team has—getting traction. Now, looking at the diagram above, you could say the same about the Platform team, but in that case, the Applications teams all understand why integrating with Platform makes sense—the Application teams can focus on the higher order work specific to their business area, and move faster. 

Now take the perspective of an Application team. That Strategy Team gets to do fun vision stuff, play in a space with little accountability, and then what… tell us what to do? And if we try to work with the Strategy Team, we’re told that they’re looking at broader, end-to-end experiences, and don’t want to be confined to any particular business area.

And so the Strategy Team gets frustrated because while folks may get excited about their ideas, it’s not clear how they get purchase within product development.

Two Birds (Creative Leadership and Strategy) and One Stone: The Shadow Strategy Team 

So, scaling design orgs have a problem. The acknowledged leaders (executives and directors) don’t have the bandwidth to provide the strategic and creative leadership expected of them, and necessary for the optimal effectiveness of the team. Building a separate Strategy team addresses some of this, but is typically too removed from the actual work to make an impact.

A solution lurks within the Emerging Shape of Design Orgs, with the addition of Super Senior ICs. Design organizations are increasingly hiring Principal Designers and Design Architects, as shown in this diagram.

Scaled org with Super Senior ICs added

Design Architect. Reporting to the VP of Design, they have no managerial or operational responsibilities, and so are able to focus on creative and strategic leadership. I’ve written this job description a few times over the past couple years, and here is what the “Responsibilities include…” section looks like:

  • Provide creative and strategic leadership for design and throughout product development
  • Advocate for user-centered design best practices within product development
  • Partner with product and engineering leaders across the company
  • Spearhead the development of experience-led product vision across the entire product suite
  • Provide guidance and direction for key ‘horizontal’ activities such as Design System development
  • Create strategic design deliverables such as strategy decks, customer journeys, and visions of future experiences, and evangelize these cross-product “blueprints” across teams
  • Build and maintain a framework for establishing and assessing design quality
  • Connect design with business value
  • Work with design, research, program management, and product leaders on process for product development

Principal Designer. This role is similar to the Design Architect, just within a specific business area, reporting to a Design Director. The primary difference is that they are also involved with design delivery, playing a very active role in design direction and critique, and occasionally serving as a “big project Team Lead,” spearheading important and challenging new product development.

The Shadow Strategy Team. With a Design Architect and Principal Designers in place, you now have the constituents of your Shadow Strategy Team. Instead of a separate group of strategic designers, they are woven into the fabric of the producing design organization.

The trick is, how to get them working as a team? That’s primarily the responsibility of the Design Architect, with leadership support from the VP and Design Directors to protect some of their time for organization-wide efforts. At a minimum, this team meets weekly to share what’s happening in their worlds, and to ensure efforts are connected across the end-to-end experience. Occasionally, the Design Architect may engage Principal Designers on vision and strategy work, with the benefit being that these Principal Designers ensure that the vision is grounded in the reality of the business areas.

Recapturing some of the Dream of UX

A common frustration among digital designers is how their practice has been reduced to production. I think a reason for this is that our organizations lacked creative and strategic leadership—we assumed it was coming from the executives and directors, but they were too busy just keeping things going. So it just wasn’t happening.

By having roles with this explicit focus, these super-senior practitioners can recapture the untapped potential of thoughtful, intentional design.

When a design team is small, fewer than 10 people, design quality can be successfully managed informally—reviews, crits, swivel-the-monitor discussions. The Head of Design can reasonably keep tabs on all the work, and, through discussion, drive their team toward higher quality.

As design teams grow into design orgs, this oral-culture approach frays. The Head of Design can’t see all the work. Quality is determined by design managers and team leads, who may have varying opinions as to what good looks like. The larger the team gets, the more chaotic this view of quality becomes. Additionally, a larger design org is part of an even larger product development org, which ends up exponentially multiplying the voices commenting on design quality.

With all this noise, the only way to handle design quality at scale is to establish clear frameworks, guidelines, patterns, and measures of success that can shift local discussions of design quality away from personal preferences and toward organization-wide references.

What surprises me is that pretty much every design organization I engage with, regardless of size, still maintains that folkloric approach to quality. This is dangerous, because, at the end of the day, all a design org has to show for itself is the quality of the work it produces. If there are no standards, if that quality is all over the map, that reflects poorly on the design function as a whole.

The trick is, how does one define design quality? Our colleagues in software engineering have it easier—there are industry-standard criteria (reliability, efficiency, security, maintainability) with clear metrics. These criteria all pretty much hew to “how well does the code function for the needs of the machine?” 

Design quality, though, is perceived in the messy context of people and business. When we say that a design is “good,” what do we mean? How do we distinguish that from “great”? How do we articulate a quality framework so that everyone on the team understands what is expected in terms of the sophistication of their work? (When I work with VPs of Design, I ask them, “How do we inform a 25-year-old junior designer in your most distant office what good looks like?”)

Over time, I’ve developed an approach to establishing design quality within an organization. There are a slew of components:

Usability Heuristics

In 1997 I took Richard Anderson’s UC Extension class on “User-Centered Design and Usability Engineering.” (It is still the only formal training, outside of conference workshops, I’ve ever had in this field). Among the things he taught was “heuristic evaluation,” a method for assessing the usability of interfaces. 

24 years later—that tool is still useful. Jakob Nielsen developed an updated presentation of the heuristics late last year. This is as close to an ‘industry standard’ as we have for a quality assessment of interfaces akin to what software engineers have developed. They’re insufficient on their own, but they are a great place to start.

Brand Personality Characteristics

Usability heuristics are table stakes. Good design goes beyond that, delivering experiences specific to the company and the context it operates within. To avoid coming across as me-too, it’s important that design embody the personality of the company brand. This isn’t just for marketing design either—it is perhaps more important in product design, as that is where the promise of the brand is actually delivered.

Any reasonably mature company should have a robust brand identity. This is more than a logo, typeface, and set of colors. It also includes a set of personality characteristics specific to the brand, traits that are important to express to help strengthen that customer connection.

Take those characteristics and turn them into a set of “personality heuristics,” and as you develop or review designs, ask yourself—are we presenting ourselves in a way consistent with the personality we seek to express?

Experience Principles

Experience principles are a set of statements for how people will experience using your product. Whereas brand personality characteristics are very much inside-out (how the company wants to be perceived), good experience principles are outside-in: based in user research and distilled from insights about what qualities users seek in their experience.

Back in Ye Olden Days of UX, experience principles were all the rage. At Adaptive Path, they were a key aspect of any strategy and design work we did. From what I can tell, like other aspects of classic UX design (RIP site maps), they’ve fallen out of favor. Which is too bad—this post by Julie Zhuo makes clear how helpful they can be.

Former Adaptive Pathers Chris Risdon and Patrick Quattlebaum shared their practice in crafting principles, and here’s a website cataloging dozens of published principles. (Favorites include: TiVo’s original design principles, Microsoft Windows 7 Design Principles, Opower’s Design Principles, Asana’s Design Principles.)

As with brand traits, turn these principles into a set of heuristics, and assess your designs for how well they deliver on those heuristics. 

Design Guidelines / Design Systems

Perhaps the best-known way to maintain a certain level of acceptable quality at scale is to institute design guidelines or, if you have the resources and the need, a full-fledged design system. These help ‘raise the floor’ of your design by making sure that, at least in the content and interface, there’s consistency across the user’s entire experience.

While I support the development of design systems, I’m wary of how they’ve emerged as a panacea to solve all design problems. I take issue with this because I see design as a fundamentally human endeavor. For design to thrive, it must be rooted in a healthy and humanistic context.

Design systems are about automation and, frankly, are dehumanizing. This can be okay if there’s a strong design culture in place that can wield the systems with taste and judgment. But if there isn’t, then design systems simply support the mechanization of design, reducing design practice to asset creation to feed the engineering machine.

Inclusive design and accessibility practices

Regrettably, my commentary here will be thin, as this is an area I haven’t explored in much depth. But my neglect shouldn’t be your excuse! Because when we say “quality,” there’s an implication of “quality for whom?” When we discuss Measures of Success next, we situate design quality in a business context, and, well, if a significant portion of potential users cannot engage with your design because it is ignorant of inclusive principles or accessibility guidelines, that’s bad for business, which is bad design. 

Quality toolkits for inclusive design have been developed by the University of Cambridge and Microsoft.

Measures of Success

Fundamentally, the only measure of design quality that matters is how it contributes to (or detracts from) whatever has been agreed upon as a measure of success. Unlike engineering, where there are industry-wide standards for success, success for design cannot be extricated from what success looks like for the broader organization. 

In my experience, the most salient measures of success for design are identical to those for product management. Key “product” metrics around acquisition, retention, satisfaction, engagement, task completion, etc., are what designers should primarily be delivering against, and are the most important markers of ‘quality.’  

That said, it’s surprising how often product development work starts without the product team having a clear understanding of success. I encourage my designers, and now the design teams I consult with, to not engage on any work until there are clear, shared measures of success. Without an understanding of what success looks like, decision-making becomes arbitrary, and designers find themselves jerked around… which inevitably leads to lower-quality work, as stuff gets shipped half-baked, it’s hard to say “No” to less important projects, people are spread too thin, etc. etc.

For more on this, I appreciate this article: Empower Product Teams with Product Outcomes, not Business Outcomes. (And just remember, design ‘outcomes’ are the same as product ones.)

Explained Exemplars of Quality Work

The next step is to take the elements discussed so far—traits, principles, guidelines, and measures—and show how they are embodied and delivered in the final product. Every team should have a gallery of exemplary work, with clear explanations as to why the work can be considered, well, “good.” You can think of it as case studies, or a design team’s collective portfolio, though in this case, process is less interesting than the final product.

As we’ve discussed, whereas engineering quality is standardized and largely context-free, design quality is very much rooted in the context in which it operates. Also, design decision-making is not solely the product of a rational process. As such, there will always be subjectivity in the creation and assessment of design. By sharing exemplars in this gallery fashion, you can meld the subjective with the objective, and teach the team the language by which matters of quality can be communicated.  

Oh, and if your team doesn’t have their own quality work to share (because they’re so new, or they just haven’t been able to deliver on the kind of work they feel proud of), then start your gallery with publicly available work.

Unfortunately, there aren’t many examples of “good design” galleries in the spirit I have in mind. I’ve always dug Milton Glaser’s critique of Olympics logos, as it’s not just preferences, but rooted in robust design values.

Mature and inclusive critique practices

Critique is not a ‘nice-to-have’ in the design process. As Erika Hall said on an episode of Finding Our Way:

“The practice of design is creation and criticism in dialogue with one another. And I think we’ve emphasized creation and completely lost the sense of criticism, even though that’s fundamental, that’s one half of that dialectic.”

Critique is how we get to quality. We place our work up for review, we get feedback from other minds, and the refinements based on that input make it better.

A problem, often, with critique is that it can feel arbitrary and rooted in preferences. That’s why I’ve placed it last—critique should be rooted in all the elements shared before. 

Even with all these elements in place, it’s crucial to attend to the practice of critique to ensure that it operates in an inclusive fashion. Braden Kowitz has written on practices that lead to improved critiques.

I reviewed a number of explanations of critique processes. Some that stood out:

Design Critiques at Figma. Super extensive, and quite apt in our everything-remote world.

How to Run a Design Critique. From our pal Scott Berkun.

Defining quality is of existential importance for design organizations

Because design teams are judged by the quality of their output, it’s essential for these teams to thoughtfully establish just what quality means in their organization. Clarity around quality empowers design teams to:

  • push back on unreasonable requirements (or, if no requirements exist, insist on developing those before doing any work)
  • incorporate quality determinations into the broader product development process, to discourage shipping crap
  • protect team members’ time, focusing on prioritized efforts that are meaningful and likely have impact, and ignoring executive brain farts that everyone knows won’t go anywhere
  • staff projects and programs appropriately to drive to those quality outcomes
  • consistently deliver good work, which leads to ongoing benefits, not just with customers, but internally for morale, retention, and hiring

This post is already too long, and I feel like I’ve only scratched the surface. I’d love to hear about how you define quality for design, and what resources you’ve found valuable in that work.