When managing creatives, what you say is often what you get

A new George Mason University study explores the complex connections between managerial feedback and creative outcomes.

The growing popularity of crowdsourcing and other forms of open innovation reflects companies' pressing need for creative ideas that go beyond the organizational same-old, same-old.

But once you have imaginative outsiders ready to lend you their time and attention, how do you elicit novel and useful contributions from them? It turns out to be as much about strategic communication as it is about the quality of your talent pool.

In recently published research, Pallab Sanyal, professor and area chair of information systems and operations management (ISOM) at Mason's School of Business, and Shun Ye, associate professor and assistant area chair of ISOM, focused on two types of feedback crowdsourcing participants commonly receive. Outcome feedback rates the perceived quality of the submission, with no underlying explanation (“This design is not good.”). Process feedback reveals or hints at what contest organizers are looking for (“I prefer a green background”).

Pallab Sanyal (left) and Shun Ye (right)

Sanyal and Ye analyzed data from a crowdsourcing platform covering close to 12,000 graphic-design contests from 2009 to 2014. The dataset included the contest parameters, time-stamped submissions and feedback, winning designs, and more. It also allowed the researchers to track the activity of repeat entrants from contest to contest across the sample.

This put them in a good position to measure how choosing one feedback type over the other affected contest outcomes, but not in terms of “quality” as it is traditionally defined by researchers.

“I gave a talk at a university where I showed 25 different submissions from a crowdsourcing contest and asked people to choose which one was the highest quality,” says Sanyal. “And everyone in that room picked a different one. Not only that, the one that eventually won the contest was not picked by anyone.”

“The moral of the story is, beauty is in the eye of the beholder. Whoever is the contest holder or client, whatever they think is best for their business objective, that is the highest quality.”

With this working definition in mind, Sanyal and Ye developed an artificial intelligence (AI) tool for scoring all submissions by visual similarity to the eventual winning submission.

“We use the algorithm to calculate the distance between these images and the highest-quality image, to give it a score, a quality score, between zero and one,” Sanyal explains.
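The article doesn't describe the authors' actual image-comparison algorithm, but the idea is easy to sketch. The snippet below is a minimal illustration, not the study's method: it uses perceptual hashing (via the third-party Python `imagehash` package) as a stand-in distance measure, so a submission identical to the winner scores 1 and a maximally different one scores 0.

```python
from PIL import Image
import imagehash

def quality_score(submission_path: str, winner_path: str) -> float:
    """Score a submission in [0, 1] by visual closeness to the winner."""
    sub_hash = imagehash.phash(Image.open(submission_path))
    win_hash = imagehash.phash(Image.open(winner_path))
    # Hamming distance between the two 64-bit perceptual hashes,
    # normalized by the number of hash bits to land in [0, 1].
    distance = (sub_hash - win_hash) / sub_hash.hash.size
    return 1.0 - distance
```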

They found that process feedback tended to increase the affinity of the designs; that is, on average, submissions were more similar to the winning design chosen by the client. Outcome feedback, by contrast, increased the diversity of the designs.

Sanyal and Ye theorize that precise guidance in the form of process feedback lowers ambiguity and helps competitors narrow the search space, while outcome feedback expands the search space because it leaves plenty of room for interpretation.

Very late in the contest, though, the positive relationship between process feedback and submission affinity disappeared, and may even have flipped to negative; the professors speculate this may be due to a demotivating, “now-you-tell-me” effect.
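As a rough illustration of how these two outcomes might be quantified (the article doesn't give the paper's formal measures, so both definitions here are assumptions): affinity as the average similarity-to-winner score, and diversity as the average pairwise dissimilarity among the submissions themselves.

```python
from statistics import mean

def affinity(scores_to_winner: list[float]) -> float:
    """Average similarity of all submissions to the winning design."""
    return mean(scores_to_winner)

def diversity(pairwise_similarities: list[float]) -> float:
    """Average pairwise dissimilarity among the submissions themselves."""
    return 1.0 - mean(pairwise_similarities)
```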

Shifting gears from quality to quantity, Sanyal and Ye discovered that both process and outcome feedback encouraged more submissions on the whole. However, they did so in different ways.

Process feedback lured new contributors to the contest; outcome feedback spurred more submissions per contributor. But, again, both of these effects were weakened when feedback was offered late in the game. Interestingly, this contradicts previous studies, which suggest early feedback discourages new contributors from joining. Sanyal and Ye point out that those studies used only numeric feedback. “We show that when it comes to textual feedback, it should be provided early in the game,” Ye says.

He also comments, “What we find here can very well apply to a traditional context where, say, in an organizational setting, a manager wants a creative solution, or holds a brainstorming session.

“If managers feel that the submissions are converging very quickly, but they want more innovative solutions, they can provide outcome feedback. Or they may observe, ‘Wow, the submissions are all over the place. Doesn’t look like it’s close to what I have in mind.’ Then it’s best to start to provide some process feedback.”
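Ye's rule of thumb translates almost directly into code. A hypothetical helper is sketched below; the convergence threshold is an illustrative assumption, since the study doesn't prescribe one.

```python
from statistics import mean

def suggest_feedback_type(pairwise_similarities: list[float],
                          convergence_threshold: float = 0.7) -> str:
    """Pick a feedback type from how alike the current submissions are.

    The 0.7 threshold is an illustrative assumption, not a value
    from the study.
    """
    if mean(pairwise_similarities) >= convergence_threshold:
        # Submissions are converging: outcome feedback re-widens the search.
        return "outcome"
    # Submissions are scattered: process feedback narrows the search space.
    return "process"
```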

Whichever feedback type they choose, managers should offer it promptly to maximize its impact. At the same time, they should be careful not to turn their preferences into self-fulfilling prophecies through strongly worded process feedback.

Sanyal uses an illustrative example from his own life: “Many times, if my kids are stuck with something, I hear them and I say, ‘You are on the right track. I won’t tell you the solution, I will only tell you that you’re on the right track.’ So give some overall ideas, but don’t constrain the solution space too much.”

Their work was published in .