Rules for concept development
After learning about rules to optimise brainstorming, I had an idea:
“Can rules be developed to optimise concept development?”
I shadowed three undergraduate engineering teams for five weeks to find out.
The rules of brainstorming
Created in 1953 by Alex Osborn, the rules of brainstorming were:
- No criticism of ideas is allowed.
- Freewheeling and free association are encouraged.
- Quantity is valued over quality.
- Building on ideas is encouraged.
Osborn considered the “no criticism” rule the most important of the four, as he had evidence suggesting that groups were less effective at solving problems than the collective efforts of individuals working alone.
Creating conditions that replicate individual brainstorming as much as possible is generally encouraged, with studies showing that criticism is one of three things (along with production blocking and social loafing) that differentiates a group session from an individual one.
Looking closer at criticism raised an interesting question – how many types were there?
The types of criticism
While studies have found evidence both for and against the “no criticism” rule, all of them seem to agree on three different types.
As well as standard criticism (which simply highlights an issue), there were two further types:
Constructive criticism
This highlights an issue in a considerate manner, aiming to be as useful as possible. It often includes potential solutions to the issues raised.
Destructive criticism
This highlights an issue in an inconsiderate manner, with the aim of “killing” an idea altogether.
I wanted to find out if there was a relationship between the type of criticism and its effectiveness in developing concepts, as a possible rule could be based on this.
But how was criticism delivered? As a statement, or in another way?
The types of questioning
With research suggesting that the teams might be unwilling to directly criticise each other in the early stages of the project (as they were unfamiliar with each other), I decided to look at an indirect way of criticising – questioning.
This adapted model tested the complexity of a question (see note) by categorising the level of response that was required in order to answer it. My hypothesis was that higher level questions would be desired, as research has shown that the level of response tends to be determined by the level of question.
Finding out if there was a relationship between the type of questioning and its effectiveness in developing concepts became the second area I investigated. Again, if a relationship was found, there was a possibility of creating a rule.
With the interpretation of conversations a key part of most studies in the area, I needed a way to analyse the transcripts of the group brainstorming sessions. The method I decided to use was “Interaction Dynamics Notation” (IDN) – a conversational analysis tool that visually represented dialogue by categorising it into twelve standard responses.
However, I only wanted to focus on one type of response: blocks.
Blocks are an interaction where one person attempts to stop (or “block”) an idea. Both criticism and questioning can act as blocks, as they “obstruct the flow” – and overcoming them is crucial for further developing ideas and concepts.
I planned to analyse the sessions that I shadowed using IDN. From there, I could see whether a particular category of blocking (criticism or questioning), or a particular type within each category, was more effective in developing concepts.
Effective criticism/questioning was defined as anything that helped advance an idea – either through the evolution of a previous idea or a new suggestion that addressed the issue(s) raised.
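The block-counting described above can be sketched in code. This is a hypothetical, simplified model – IDN itself defines twelve response categories, and the field names and labels here (`Interaction`, `"block"`, `advanced_idea`, and so on) are my own assumptions, not the notation's actual vocabulary:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One turn in a transcribed episode (simplified IDN-style tag)."""
    speaker: str
    category: str            # e.g. "move", "support", or "block"
    block_kind: str = ""     # "criticism" or "question" when category == "block"
    advanced_idea: bool = False  # did the idea advance after this turn?

def block_frequency(episode: list[Interaction]) -> float:
    """Fraction of turns in an episode that act as blocks."""
    if not episode:
        return 0.0
    blocks = sum(1 for turn in episode if turn.category == "block")
    return blocks / len(episode)

def effective_blocks(episode: list[Interaction]) -> int:
    """Count blocks that were 'effective': the idea advanced afterwards."""
    return sum(1 for turn in episode
               if turn.category == "block" and turn.advanced_idea)

# Example episode: a suggestion, a question acting as a block, then support.
episode = [
    Interaction("A", "move"),
    Interaction("B", "block", block_kind="question", advanced_idea=True),
    Interaction("C", "support"),
]
print(block_frequency(episode))  # one block out of three turns
print(effective_blocks(episode))
```

Tagging each turn this way makes "support for block" and "block frequency" straightforward tallies over an episode, which is the shape of data the analysis below relies on.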
What I did
I followed three undergraduate engineering teams for five weeks during the early design stages of their projects. Each team consisted of six individuals covering a range of engineering disciplines, all of whom had been made aware of Osborn’s rules before beginning their projects. To analyse their sessions, I took audio (and occasionally video) recordings of their meetings.
- The recordings were first reviewed to generate a full list of the ideas the groups generated during these early sessions. During this, all episodes (see note) relevant to a particular idea were identified and documented.
- The final designs of the groups were then observed, before being reviewed against the list to identify the ideas that were included. Episodes that discussed ideas included in the final design were now called “critical episodes”, as they contained interactions that were key to developing the overall solution.
- The recordings were reviewed again, to find:
- the “support for block” and “block frequency” for normal and critical episodes,
- the block category (criticism or question), as well as its type,
- the result of the block.
- Finally, the raw data was tested to see if there was a statistically significant change. If there was, a case could be made for including the corresponding behaviour as a rule.
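The final step – testing whether a change is statistically significant – could be done in several ways; one stdlib-only sketch is a permutation test on the difference in mean block frequency between normal and critical episodes. The function name and the sample figures are illustrative assumptions, not the study's actual data or method:

```python
import random

def permutation_test(normal, critical, n_resamples=10_000, seed=0):
    """Two-sided permutation test on the difference in mean block
    frequency between normal and critical episodes."""
    rng = random.Random(seed)
    observed = sum(critical) / len(critical) - sum(normal) / len(normal)
    pooled = list(normal) + list(critical)
    n_crit = len(critical)
    extreme = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)  # reassign episodes to groups at random
        diff = (sum(pooled[:n_crit]) / n_crit
                - sum(pooled[n_crit:]) / (len(pooled) - n_crit))
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_resamples  # p-value estimate

# Illustrative (made-up) per-episode block frequencies.
normal = [0.10, 0.12, 0.08, 0.11, 0.09, 0.13]
critical = [0.12, 0.14, 0.11, 0.15, 0.10, 0.13]
print(permutation_test(normal, critical))
```

With samples this small, a modest increase like the 7% reported below would usually return a p-value well above 0.05 – which is exactly the "not large enough to rule out chance" outcome the results describe.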
By analysing transcripts of the groups’ conversations, I found that:
Most advancement did not arise due to questioning and criticism, but rather through statements made by team members.
With low levels of criticism overall (as team members seemed reluctant to respond negatively to ideas), most advancement actually came through general discussion, with suggestions to improve an idea normally accepted by team members.
One team member did the majority of the criticism in each team, although this criticism often came in the form of questions.
This individual, according to the closing interview conducted with the teams, was also the one most likely to give destructive criticism later in the project.
It was also rare for a block to be supported – although support for the answer was common in all the groups. This was due to team members trying to be positive in the early stages of their projects, looking to build on the point(s) that had been raised.
While Results 1 presented “incidental findings”, this section focused more on finding the relationships that potential rules for concept development could be based on.
The first set of results found a small average increase (7%) in block frequency during critical episodes. However, this change wasn’t large enough to be considered statistically significant.
The second set aimed to show the difference in the type of questioning present during normal and critical episodes. It found questioning at a slightly higher level in critical episodes – fuelled by a fall in questions that required understanding to answer (-8%) and an increase in analytical questions (+8%). Again, these changes weren’t statistically significant.
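A shift like the -8%/+8% one above can be computed as the change in each question level's share between episode types. The level labels here loosely follow the adapted model mentioned earlier, but the exact names are my assumptions:

```python
from collections import Counter

def level_shift(normal_qs, critical_qs):
    """Percentage-point change in each question level's share,
    going from normal to critical episodes."""
    def shares(questions):
        counts = Counter(questions)
        return {lvl: 100 * n / len(questions) for lvl, n in counts.items()}
    n, c = shares(normal_qs), shares(critical_qs)
    return {lvl: round(c.get(lvl, 0) - n.get(lvl, 0))
            for lvl in set(n) | set(c)}

# Made-up question tags for illustration only.
normal_qs = ["recall", "understand", "understand", "analyse", "recall"]
critical_qs = ["recall", "understand", "analyse", "analyse", "recall"]
print(level_shift(normal_qs, critical_qs))
# "understand" falls and "analyse" rises by the same margin
```

Small percentage-point swings like these are easy to produce by chance in a handful of episodes, which is consistent with the non-significant result reported.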
The final set of results aimed to find out if there was an increase in advancement in critical episodes – and again found only small, non-significant increases. Rates of non-advancement were low across the board, with only 15% of episodes ending with this result.
I couldn’t recommend rules for concept development.
While the results had found small increases in many things (including block frequency and the level of questioning), these changes weren’t large enough to completely rule out chance. However, the possibility of a relationship still exists.
There are several directions that future work in this topic could take. However, two things that must be done are:
- to increase the amount of data available to analyse,
- because some of the results didn’t have enough data to reliably test for a significant result.
- to use control groups,
- because testing for relationships is easier if people are in a controlled setting. Doing this would also minimise the risk of external circumstances skewing the results in an undesired way, by removing as many of these as possible.