Consider the following workplace scenario. The manager of an R&D lab needs her engineers to solve a complex problem. There are many possible approaches and it’s unclear which will end up best. What is the best way to structure their communication?
For at least the last twenty years, the accepted answer to this question within knowledge work has been to introduce the maximum amount of communication with the minimum possible friction. Email makes it simple for engineers to swap ideas and results. Instant messenger tools like Slack reduce friction even further and increase transparency. Progress!
The logic driving this consensus is straightforward: more information is strictly better than less. This has been a veritable axiom of the burgeoning Information Age ever since Bill Gates touted his early adoption of email as a strategy to broaden the incoming stream of ideas and insights on which his algorithmic brain could churn.
But is this answer always right?
Many knowledge work sages may have overlooked a classic 2007 paper from the network science literature. It’s titled “The Network Structure of Exploration and Exploitation,” and it’s authored by two Harvard researchers, David Lazer and Allan Friedman (Lazer has since been hired away to Northeastern’s impressive network science group).
Early in the paper, Lazer and Friedman acknowledge the maximal information consensus:
“Services to increase the efficiency with which we exploit our personal networks have proliferated, and armies of consultants have emerged to improve the efficiency of organizational networks. Implicit in this discourse is the idea that the more connected we are, the better: silos are to be eliminated, and the boundaryless organization is the wave of the future.”
As they elaborate, however, the research supporting this view tended to focus on scenarios that measured individual welfare in solving a problem. In such settings, more connectivity was better, as it ensured that solutions better than your own would diffuse into your awareness as quickly as possible.
Lazer and Friedman, by contrast, were interested in the problem-solving ability of the overall group. The question they asked, in essence, was how the structure of the underlying communication network impacted a group’s ability to come up with the best possible solution to a problem.
The details of the experiment get a little tricky. They used a round-based agent simulation and focused on the NK problem space, a set of abstract problems, introduced by evolutionary biologist Stuart Kauffman, in which “solutions” are modeled as sequences of numbers, and nearby solutions (e.g., those that differ by only a small number of changes) have at most small differences in their fitness values. This creates a “rugged landscape” in which you might have to pass through worse answers to reach better options.
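To make this concrete, here is a minimal sketch of an NK-style fitness function in Python. To be clear, this is my own toy illustration, not the authors’ implementation, and it simplifies the formal model:

```python
import random

class NKLandscape:
    """A toy NK fitness landscape (after Kauffman). Each of the N positions
    contributes a fitness value that depends on its own bit and the bits at
    K other positions; total fitness is the average contribution."""

    def __init__(self, n=20, k=5, seed=0):
        self.n, self.k = n, k
        self.rng = random.Random(seed)
        # For each position, pick the K other positions it interacts with.
        self.links = [self.rng.sample([j for j in range(n) if j != i], k)
                      for i in range(n)]
        self.table = {}  # lazily filled: (position, local bit pattern) -> value

    def fitness(self, solution):
        total = 0.0
        for i in range(self.n):
            key = (i, solution[i]) + tuple(solution[j] for j in self.links[i])
            if key not in self.table:
                self.table[key] = self.rng.random()
            total += self.table[key]
        return total / self.n
```

Because each position’s contribution depends on K other bits, flipping a single bit can change several contributions at once; crank K up and the landscape grows more local peaks to get stuck on.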
Lazer and Friedman assumed that individuals were “myopic,” meaning that in a given round they could only examine solutions slightly different from their current best-known solution. They could also, however, check their neighbors in the network, discover those neighbors’ best solutions, and adopt any that beat their own.
They then studied how different network structures impacted the quality of the best solution arrived at in the network as a whole. To return to our original example, they wanted to know what type of communication network would lead our hypothetical engineers to the best possible outcome.
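As I read their setup, an agent’s behavior each round reduces to two moves: adopt the best solution visible among your neighbors if it beats your own; otherwise, tweak one element of your current solution and keep the tweak only if it helps. A sketch of that rule, reusing the NKLandscape above (again, my own simplification, not the authors’ code):

```python
def step(solutions, adjacency, landscape, rng):
    """One synchronous round. adjacency[i] lists the agents that agent i
    can see; solutions[i] is agent i's current best-known bit string."""
    fits = [landscape.fitness(s) for s in solutions]
    nxt = []
    for i, sol in enumerate(solutions):
        best = max(adjacency[i], key=lambda j: fits[j])
        if fits[best] > fits[i]:
            nxt.append(solutions[best][:])          # imitate a better neighbor
        else:
            cand = sol[:]
            cand[rng.randrange(landscape.n)] ^= 1   # myopic one-bit tweak
            nxt.append(cand if landscape.fitness(cand) > fits[i] else sol)
    return nxt
```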
Here’s a crude summary of their otherwise complex results:
- Well-connected networks, in which information flows quickly, arrive at pretty good solutions fast, but then get stuck.
- Poorly-connected networks, in which information flows slowly, arrive at much better final solutions, but it takes longer.
- This trade-off between quality of final solution and speed at which solutions are reached can be tuned by taking a poorly-connected network and then adding more and more shortcuts to the information flows.
The underlying dynamic behind these results is easy to understand. In a poorly-connected network, more solutions are being examined in parallel before the best of the bunch is able to spread enough to enforce a consensus. In the well-connected network, the first reasonable idea quickly takes over.
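If you want to watch this dynamic for yourself, wire the same agents into a sparse ring versus a fully connected clique and compare what each network finds. The parameters here are illustrative; the qualitative gap, not the exact numbers, is the point:

```python
def ring(n_agents):
    """Sparse: each agent sees only its two immediate neighbors."""
    return [[(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)]

def clique(n_agents):
    """Dense: every agent sees every other agent."""
    return [[j for j in range(n_agents) if j != i] for i in range(n_agents)]

land = NKLandscape(n=20, k=5, seed=1)
for name, adj in [("sparse ring", ring(50)), ("dense clique", clique(50))]:
    rng = random.Random(2)
    sols = [[rng.randint(0, 1) for _ in range(land.n)] for _ in range(50)]
    for _ in range(200):
        sols = step(sols, adj, land, rng)
    print(name, round(max(land.fitness(s) for s in sols), 3))
```

In runs of this toy, you should generally see the clique lock in early on whatever decent solution spreads first, while the ring keeps multiple candidate solutions alive long enough to stumble onto better peaks, which is the trade-off summarized above.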
I mention this paper to underscore an important reminder. We should be cautious about any early confidence that the way we work today is in any sense optimal. Hooking everyone up to low-friction digital communication channels and telling them to rock n’ roll is flexible, convenient, and cheap. But it might be far from the best approach to taking a collection of human minds and extracting from them the best possible results.
We still, in other words, probably have a lot of work to do in figuring out the best way to work.
Cal, could heuristics and cognitive biases come into play? I study Human Centred Security, looking at the factors that contribute to people making poor decisions and judgements relating to their use of technology. What you describe sounds a lot like cognitive biases such as ‘anchoring’ (relying on the first or most prominent piece of information) or ‘herd behaviour’ (mimicking the actions of the group rather than thinking clearly for oneself).
I think we see a lot of this in the education space, especially in instructional design. More and more attention is put into design models like DBR (Design-Based Research) and SAM (Successive Approximation Model), which are similar to rapid prototyping and agile design in other fields, while longer-term models like ADDIE are almost frowned upon, mentioned mostly as an approach that’s rarely used anymore.
Google Scholar to the rescue: https://nsr.asc.upenn.edu/files/Lazer-Friedman-2007-ASQ.pdf
I recall something similar about “brainstorming” in a group.
First, have everyone do an individual brainstorm for themselves and write it down privately.
Only then get the group back together and put all the ideas out on the table and brainstorm further from there.
Not sure whether there was research for this, or it was just a ‘common sense’ recommendation, but it made sense to me.
I use this approach in brainstorms I hold. Some people are much better at coming up with ideas on their own, while others require the interaction to be creative. This approach balances the two and gives everyone the opportunity to participate. I think I also read somewhere that it leads to more creative results, but the participation and buy-in aspect is where I see the most value.
Hi Cal,
I would like to read the 2007 paper but it’s behind an academic journal paywall so I’m going to see if I can read via my local library account. The field of socio-technical systems (STS) is rich with examples of both failures and successes. More recently I’ve been aware of disparate medical research teams being re-organized to optimize decision-making with clinical teams in adjacent campuses mostly in response to silos and long-held resentments about lack of information sharing with respect to drug trials. And, there are technology components to the network solutions to enhance workflows and planning. My own experience is that human orchestration of the management piece becomes critical to producing timely results irrespective of the technology platform. Appreciate your thought-provoking post.
https://en.wikipedia.org/wiki/Sociotechnical_system
Scott Love
Palo Alto, CA
A search on Google Scholar turns up this link to a PDF: https://journals.sagepub.com/doi/pdf/10.2189/asqu.52.4.667
Whoops, I accidentally cut off the important part of the link; here is the whole thing: https://journals.sagepub.com/doi/pdf/10.2189/asqu.52.4.667?casa_token=Uz88ivItNXsAAAAA:faqWzy7RpIo7phnEswJGP6hRvJSpIC0iZZsK9-O2jxvQWCnS66tJJWzKwZcZhgGkUqebikqrLCkd
What would be more interesting to see is the results of a simulation that looks at connections between many teams. Each team is a different function or department and has a hierarchy; e.g., the teams could be Design, Test, Customer Support, etc.
Comparing the two cases: 1) the teams are connected at the top level; 2) the teams are connected randomly at the bottom.
In fact, case 1 is the normal Functional Org structure, while case 2 is the infamous Matrix Org structure (very common in tech). It’d be interesting to study under what conditions the Matrix structure works or doesn’t work while solving a cross-functional problem.
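For what it’s worth, both cases can be wired up in the same adjacency-list style as the sketches in the post. Here’s my rough version, ignoring within-team hierarchy beyond a designated “top” member:

```python
import random

def team_blocks(n_teams, size):
    """Full connectivity inside each team, no links between teams yet."""
    adj = []
    for t in range(n_teams):
        members = range(t * size, (t + 1) * size)
        adj += [[j for j in members if j != i] for i in members]
    return adj

def connect_tops(adj, n_teams, size):
    """Case 1 (functional org): only each team's first member talks across teams."""
    tops = [t * size for t in range(n_teams)]
    for a in tops:
        adj[a] += [b for b in tops if b != a]
    return adj

def connect_randomly(adj, n_teams, size, links=8, seed=0):
    """Case 2 (matrix org): random links between members of different teams."""
    rng = random.Random(seed)
    added = 0
    while added < links:
        a, b = rng.sample(range(n_teams * size), 2)
        if a // size != b // size and b not in adj[a]:
            adj[a].append(b)
            adj[b].append(a)
            added += 1
    return adj
```

Feeding either structure into the round loop from the post would be a crude way to start probing when the matrix wiring helps or hurts.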
It is quite amazing, though, that this can be studied quantitatively using models. Seems like a beautiful piece of work. The part I find practically hardest is combining many myopic solutions into one. This activity is extremely difficult and also the most rewarding, and it requires some sort of synthesizing mind to put different solutions in their right place without conflict. I don’t think the simulation would have taken this into account!
That’s quite an interesting thought.
I remember that in one interview the late Freeman Dyson said about fusion research that there should be much more parallel research exploring different strategies conducted by smaller research teams, instead of putting all eggs in one basket of ITER.
A similar controversy happened during the Apollo program concerning the lunar flight mode. The three alternatives were direct ascent, earth orbit rendezvous, and lunar orbit rendezvous. Wernher von Braun’s team favored earth orbit rendezvous, but in the end, even von Braun came to the conclusion that lunar orbit rendezvous was the only way to go within the short time frame given by President Kennedy, despite the many unknowns pertaining to the complicated orbital mechanics of bringing two spacecraft together, let alone the practical demonstration.
I find the idea in this essay deeply intriguing and meriting a lot more thought.
So the scientific community is a poorly-connected network in this sense: the solutions are communicated through peer-reviewed papers and conference talks. Hopefully then this means that we come up with the best possible solution for problems in the long term.
Hi Cal! A little bit off topic but I’ve seen your new book is already for pre-order on Amazon. Can you confirm what’s the best time, from your POV, to buy it? Closer to the publishing date or now?
The math behind this is a little beyond me, but isn’t this going to depend a lot on what the problem space looks like? If the space is “rugged” like in the paper, then it makes sense to spread out effort as much as possible (by weakening the network’s connectivity) so you have a better chance at finding the globally best option. But if you have a relatively smooth space this isn’t as true, because you are less worried about getting stuck in local optima.
So then the real problem becomes figuring out what kind of space you’re in. Research = rugged, manufacturing/engineering = smooth?
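A toy way to see the difference (one-dimensional, nothing like the paper’s actual NK setup): run a greedy hill-climber from a bunch of random starting points on a smooth function and on a rugged one, and compare how scattered the final values are:

```python
import math
import random

def hill_climb(f, x0, steps=2000, step_size=0.1, seed=0):
    """Greedy local search: move to a nearby random point only if it scores better."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size)
        if f(cand) > fx:
            x, fx = cand, f(cand)
    return fx

smooth = lambda x: -(x - 3) ** 2                        # single peak at x = 3
rugged = lambda x: -(x - 3) ** 2 + 2 * math.sin(8 * x)  # same peak plus many local bumps

rng = random.Random(99)
starts = [rng.uniform(-5, 10) for _ in range(20)]
for name, f in [("smooth", smooth), ("rugged", rugged)]:
    finals = [hill_climb(f, x0, seed=i) for i, x0 in enumerate(starts)]
    print(name, "spread of final values:", round(max(finals) - min(finals), 2))
```

On the smooth bowl every start converges to roughly the same value; on the rugged one, where you finish depends on where you began, which is exactly the local-optima trap that spreading out effort guards against.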
I wonder if there’s a real-world example of competing firms working on similar projects but achieving different results…. Simulations are all well and good, but they are, at best, working hypotheses. That’s not to say it’s bad; how a system deviates from an ideal is often incredibly useful (it’s about the only way Hardy-Weinberg Equilibrium CAN be used).
What defines “similar” is often tricky, for example. Individual background comes into play–what we consider similar solutions depends on what we’ve done in the past. To give an example: when I was in college we had to keep sediment suspended in water for an experiment. The folks in the class were used to working with lab equipment, so were discussing magnetic stirring devices. I grew up doing home remodeling every weekend; I immediately went to paint mixing and provided a cheap, effective solution. Our different backgrounds gave us very different perspectives and therefore very different definitions of what is a similar problem/situation.
There’s also the fact that humans don’t work the way models often predict. In any group of more than 3 or 4 people there’s going to be a leader, someone everyone else defaults to. This can be seen everywhere from grade school projects to households to corporate board rooms. So the reality is that we’re not looking at perfectly smooth organizations; we’re looking at little clusters of people regardless of ease of information transfer. We should expect this to create small, highly permeable, but still identifiable silos of information.
I’d be curious to see how these–and other confounding factors–affect this simulation. A real-world example would also give us a lot of information on other, non-obvious, deviations from the simulation, and allow further fine-tuning.
The real-world example is the race to develop the most effective vaccine to neutralize the virus that causes COVID-19. As pointed out by a friend, the Albert Sabin vs. Jonas Salk race to find a polio vaccine yielded a relatively fast solution and then a longer-term solution. So, an efficient network might get us to a vaccine sooner (not necessarily the safest one), while the less efficient network is still plodding along, eventually finding an alternative approach out of necessity. Fascinating times, and a topic relevant to every one of us alive to discuss it. Thank you Cal and friends for this discussion.
https://www.cincinnati.com/story/news/2019/05/10/our-history-albert-sabin-jonas-salk-competed-for-safest-polio-vaccine/1140590001/
Researchers working independently, and then slowly sharing their best results so far–that sounds like pre-internet academia. Research teams would communicate less often, typically by snail mail, while the primary way of sharing results was by print publication.
It worked pretty well!
That’s not exactly true. Teams (labs, research partners, grad students, geographically defined groups) worked independently. Within each team, information flowed seamlessly: put two scientists working on the same project in a room for more than 30 seconds and they’ll start chatting about it. Information was transmitted between teams in a much slower, more methodical manner, however. Informal communication was common, but official communication was limited to talks, papers, and books.
This seems to allow the best of both worlds. Local groups come up with a good solution that rapidly reaches fixation, but because in science there is rarely only one team researching a question, multiple solutions are typically examined. (Ego plays a role here too: if I come up with a solution, I’m going to assume it’s better than yours, so I’m less likely to accept your solution. We’re humans, after all!) Within groups of highly knowledgeable people deep in the weeds of the problem, information is shared freely, but there’s a set point (publication) where deep thought is required.
>> Email makes it simple for engineers to swap ideas and results.
>> Instant messenger tools like Slack reduce friction even further and increase transparency.
I often see discussions about communication channels / tools / systems framed in terms of “what’s best overall”.
But there is no single best tool. Each is optimized for a particular communication type – email, threaded chat, IM, video, online meetings, documents, presentations, etc. My toolbox is overflowing with communication tools and most are already pretty friction-free. They’ll all continue to evolve. New ones will be introduced and others will fade away. I can’t control what’s available to me and I can’t control what others choose to use. Information will continue to come at me from all different directions.
What I need help with is managing and facilitating the ebb and flow of the information life-cycle.
I need to keep all the relevant information connected regardless of what channel it initially appears in.
I need the information to reside in whatever the optimum communication channel is at a point in time.
Vague email questions are better tackled in a lightweight chat, so I need to be able to move them there. That chat may evolve into a project investigation; I need to be able to lift the chat into a space with more participants and maybe some structure. Then the document and presentation files begin to get created; I need those tied together too. All the while, more emails and chats are occurring off to the side and need to be captured.
Information content needs to be able to move freely back and forth BETWEEN tools and stay connected.
This makes me think of how Reddit can upvote “sensible” sounding responses while an in-depth response with sources that took 30 minutes longer is sitting at the bottom with one upvote.
A good counter to this is to RED TEAM an important decision by assigning one smaller group to counter the proposal with constructive criticism. Since they are officially assigned to this role, it reduces the social conflict that can occur when some junior members of a team may be socially reluctant to criticize the ideas of a senior member or the person in charge.
There are so many caveats due to the extreme reductionism of that article’s methodology that I thoroughly hesitate to draw any real-world conclusions from it.
One reduction (that the authors themselves raise): this is talking about a single optimization target, whereas in reality a team will have many different goals.
“Further, of course, what we offered here was a single process model, examining how network structure affects the search of a problem space. Although this is a virtue in the theory-building process, in the real world, many processes may be operating at one time that are affected by the network’s configuration. The network configuration of a team, for example, might plausibly affect the coordination of activities, team solidarity, and opportunistic behavior.”
Related to this but even more importantly, not everyone is doing the same thing in most team scenarios because that’s not an efficient use of resources; many things can and should be done in parallel.
The new subscriber pop-up’s close button isn’t accessible when viewing the website on a mobile browser.
Thanks for sharing this interesting research. Two things come to mind:
1. I wonder if this same result plays out in the space of “opinions”. Now that we’re all connected on Twitter and Facebook, consensus on current events can be reached very quickly in a form of herd (mob?) mentality, but the “optimal”, nuanced solution is never found. Compare to the slower, less-connected networks of opinion sharing, such as conversations at the dinner table and letters to the editor. The sub-optimal solutions of today can stack up over time and lead our society in a bad direction.
2. In many contexts, such as market competition, the sub-optimal solution arrived at quickly is far preferable to an optimal solution reached slowly. There is plenty of time to find the optimal solution once the entire endeavor is no longer under existential threat.