Industrial Design: Claims Without Substance

Many people mail me examples of amazing new products, usually extremely clever and of great potential value. Many is the time I have visited design schools across the world to be shown wonderful examples of student and faculty work, each cleverly done, each accompanied by a long explanation of why the product solves some long-endured real problem. But do they really work? Do they really solve problems? Nobody knows. The designers simply assert that they do.

Claims are worthless unless backed up by data. We will never know if the claims are true unless they are tested in controlled, sensible trials, where performance with the new device is compared with performance without, and where the test is designed to be fair, accurate, and unbiased. No, we don't need the full-scale rigor of experimental psychology or the clinical trial of a new drug, but we do need proof of the assertions.

OK, Industrial Designers Society of America (IDSA): Here is a worthy quest. Design magazines such as ID and IDSA's Innovation love to show the virtues of their designs, but with nary a shred of evidence. The well-known weekly business magazine, Business Week, runs a yearly design issue, which awards prizes and touts the benefit of great design. I once challenged the editor, Bruce Nussbaum, to include some judgment of a product's actual utility and effectiveness. In public, he agreed with me, but his response was to say "I'll put you on the jury." Not what I meant. Worse, many months later, someone from Business Week called to say they were sorry, but it was too late to add me to the jury. Grrrr. I don't want to be on the jury: I want to see their procedures for awarding prizes improved.

What I want is truth in advertising—some sort of way of assessing the validity of the claims. Putting me on the jury would not accomplish this: you can't assess verbal claims by looking at the item in question; you have to do a real, controlled test. Submissions should be accompanied by the findings from such a test. Awards could be given to innovative designs that didn't pass the tests, but at least the award could then be stated meaningfully, perhaps something like "this is a clever, innovative design, but it still needs some work to deliver upon its promises."

Business Week: you are doing harm to the design profession by not distinguishing between true and false claims. Those who get away with false claims thereby tarnish the reputation of companies whose claims are actually true. Have you ever done a retrospective analysis of the awards? How many of the products that received highest awards failed in the marketplace? Rumor has it that winning an award can be the kiss of death for a product or company. Shouldn't that bother you?

This essay has long been simmering in the back of my mind. The straw that broke my back was the award of a grand prize to a novel bicycle design that promised to make it far easier for children to learn to ride.

Innovative Bicycle by Purdue University ID department

The bicycle is clearly a marvel of innovation: a split rear wheel that acts like a tricycle when the bike is not in motion, enabling the child to maintain balance, but once the bike gets up to speed, the rear wheels merge, forming a true bicycle. But does this really make the bike easier to learn? The designers give all sorts of logical explanations of why it would. So let me give one logical explanation of why it might not: the hard part of learning to ride a bike is getting started. This design does not overcome that problem. Who is right? We have no way of knowing.

The designers could very well be correct, but until they do a properly controlled experiment, we will never know. Wouldn't their prestige be much enhanced if they could state, "In controlled tests, children learned to ride in 25% less time than with conventional bikes. Moreover, 89% of the children succeeded, compared with 75% with conventional bikes"? (I made up those numbers.) Suppose the tests failed, so the children did not learn to ride any better with the new design. That would still be a valuable finding, perhaps leading these innovative designers to improve upon their solution, to find one that really worked. Why didn't the designers, Professor Scott Shim of Purdue University and his two students, Matthew Grossman and Ryan Lightbody, do some tests? Because testing is not in the vocabulary of industrial designers. Test, test, test. Without tests, the claims are unsubstantiated opinion. Possibly false, possibly true.

Even the design school I am most closely associated with does not test. They do careful observations to discover the problems, select a direction, do the design, and then pronounce that they have the solution. When I question why they didn't test to see if it really was the solution, they invariably respond that I am correct, but they ran out of time. Nonsense. This simply shows how little value the students, the faculty, and the school place on testing. Are they afraid of what they might find out? No, they just don't think it important.

A similar critique applies to the novel drug container invented by Deborah Adler. The article gives her rationale, and she is to be commended for the approach she has taken, moreover with full attention to real-world practicality. But the article, at least, nowhere states that any of the claims are actually true—there does not appear to have been any testing. As I told the person who brought this item to my attention:

This is indeed a very nice piece of design work, but I didn't like the fact that nowhere is there any mention of any tests to see if her claims are actually true. Designers love to explain why their design is so superior. But is it? Of course, she might have done a lot of testing, but the reporter neglected to talk about it for either lack of space or interest. Still, I'd like to know.

A challenge to the Industrial Design profession: validate your claims. The responsibility falls, first of all, upon design schools. Teach your students to validate their claims. IDSA could take the lead, especially in their juried exhibits, by requiring submissions to be accompanied by proof (or perhaps by stating that the major awards will only be given to entries accompanied by such proof). They will also need to get some experimental design experts on their juries, else the proofs will be meaningless surveys or hopelessly biased experiments. And Business Week: As the most prestigious reporter on design and products, don't you have an obligation to truth and verification? After all, you require this from your reporting staff. Why not from your awards?


POSTSCRIPT:

Bruce Nussbaum of Business Week finally discovered this article and discussed it in his blog, charmingly entitled "Don Norman is my hero." And then Chris Conley, of both The Institute of Design and GravityTank, wrote, asking me to be on the 2006 jury. Here is what he said: "I am the chair of the 2006 IDEA/Businessweek awards and would like to invite you to join me and a range of design and business leaders on the jury. I saw Bruce's post on his blog and couldn't help but chuckle!

"As you probably know the process is imperfect and last year's jury (of which I was a part) made strong recommendations to evaluate the actual products, not just pretty pictures. Alas, the logistics won't be in place until next year. But I feel that this year's jury must be strong advocates for change and improvement in this competition if it is going to build strong credibility and have its unfair share of influence!"

So, let's see what happens. As Chris says, not in 2006, but in 2007.