Design Thinking vs Social Selection of Scientific Truths: The Curious Cases of Jordan Peterson and Open Office
This article starts by explaining that in the social sciences, unlike in the natural sciences, it is very often the socially fittest “empirical evidence” that survives. I call this the Social Selection of Evidence. I use examples to show how researchers from opposing camps compete to increase the survival chances of their evidence by socializing their work with different audience groups. Then I argue that while this social processing of evidence might be inevitable, it could be done methodically: we can learn from Design Thinking to elevate the audience of research from consumer or processor of evidence to creator of it.
The Case of Jordan Peterson
In October 2018, the popular psychology professor Jordan Peterson engaged in a discussion with the leader of Sweden’s Centre Party, Annie Lööf. At some point during their conversation, Lööf said that “if I raise my daughter to … be self-confident, to have a high education … she’ll have a good platform to become a civil engineer [or] a CEO of a company”. Peterson replied: “That is what people who think that the differences between people are primarily culturally constructed believe, but it’s not what the evidence suggests”, and that “it is not a contentious issue among informed scientists. We’ve known this for 25 years”.
Earlier this year, Peterson went even further, calling the American Psychological Association’s (APA) Guidelines for Psychological Practice with Boys and Men “scientifically fraudulent”. Traditionalist or not, Peterson is a legitimate clinical psychologist who offers “scientific facts” and “empirical evidence” contrary to the APA’s.
Is one of the two sides of the debate “ideologically possessed”? Could they both be true?
Conflicting Scientific Truths
In the positivist social sciences (identifiable by their quantitative research designs and methods), it’s not difficult to spot contradictory research findings published in peer-reviewed academic journals. There are also many studies with too-obvious findings, like a 2019 study published in one of my favorite peer-reviewed journals showing that when kids were exposed to nature at school, they reported a closer connection to nature! But that’s a story for another time.
The clash of scientific truths in branches of the social sciences that adhere to positivism may be partially due to varying sample sizes, sample characteristics, study lengths, measurement tools and instruments, control variables, and analysis methods, among other things. By contrast, in the natural sciences, where researchers deal mostly with non-human subjects, it is easier to, e.g., accurately replicate a study and thereby increase its explanatory power.
So, unsurprisingly, you get folks who argue that although the positivist social sciences are “surrounded by the paraphernalia of the natural sciences” and emulate the scientific method, their results are still not as reliable or useful as those of the natural sciences. This is often attributed to the emergent, highly complex, and highly contextual quality of social phenomena. They say Einstein once said, “understanding physics is child’s play compared to understanding child’s play”.
For those who know the ins and outs of the social sciences, these are not reasons enough to dismiss positivist social research as unreliable. After all, Einstein never got to see the exciting advancements in predictive analytics, data science, and AI, including efforts to understand and simulate player behavior. Moreover, the contributions of the positivist approach to, e.g., the field of clinical psychology have proven extremely useful.
Social Selection of Evidence
When it comes to debating “scientific facts”, the community of physicists doesn’t leave things at “agree to disagree”. In cases of disagreement over the interpretation of evidence, scientific consensus enters the scene. For example, although there are scientists who question aspects of the evidence on global warming, the scientific consensus is behind human-caused climate change.
There are signs that the nature and process of building scientific consensus in the social sciences differ from those in the natural sciences. In the latter, consensus is mainly driven by the validity and reliability of empirical evidence; in the former, consensus often seems to be replaced by what I refer to as the social selection of evidence: the result of a negotiation between the guardians of competing evidence, who have access to the means of socializing their narration, and the audience of that evidence.
While the process of establishing scientific truth in the natural sciences is agnostic to the audience’s participation, evidence in the social sciences needs to be socially processed. This is because, compared to the natural sciences, the social sciences are more fragmented into conflicting schools of thought. Consequently, in an evidence landscape with diverse and competing elements, only the socially fittest evidence survives.
In the case of Peterson vs the APA, both parties seem to be aware of the need for scientific truth to be socially processed by the audience. Therefore, they actively, if crudely, engage the audience with the evidence to increase its chance of survival: Peterson leverages social media platforms, while the APA dedicates dollars to crafting marketing documents and promoting them in professional and academic circles.
Researchers in the social sciences often overlook the important role of the social processes involved in the making of evidence; they seem more concerned with publishing. As a result, the principles of evolution often determine the fate of their work.
I’m not interested in predicting the conclusion of negotiations between conflicting evidence in the social realm. I’m also not interested in investigating whether this makes social sciences’ truth less or more valid or reliable. As an advocate of Neopragmatism, I consider exploring the usefulness of arguments more important than debating over their truth.
What excites me is exploring alternative ways to the survival of socially fittest evidence — carefully designed social mechanisms through which researchers, audience, and empirical findings participate in co-creating new, useful, and non-obvious social facts.
The Case of Open Plan vs Closed Office
In July 2018, a team of Harvard researchers published a study on the impact of physical space on collaboration in a peer-reviewed journal. Their study showed that as employees moved from a closed office to an open office, their collaboration decreased.
A year earlier, another team of researchers had conducted an award-winning study on the same topic with a similar methodology. That study showed that as employees moved from a closed office to an open office, their collaboration increased.
So, two research studies, exploring the same topic, using very similar methods, both regarded as rigorous studies by experts in the field, each revealing a different truth, each truth contradicting the other one. How is that possible?
OK, the two studies were not exactly identical. There was a subtle but important difference between the two:
In the second study, and during the time of transition from closed to open office, employees actively participated in creating their new open office.
If the valid and reliable empirical finding of the first study showed that the open office is bad for social interactions, that organizations moving to an open-plan office are destined to exhibit less collaborative behavior, then the employees in the second study managed to alter their fate and augment the established scientific truth by participating in building it. This happened simply because users were elevated to the level of creators of truth as opposed to consumers of it.
This changes many things. Not only does the social processing of evidence build a differentiating advantage for the social sciences over the natural sciences, it also challenges what we consider fact, evidence, or truth. At some level, it also changes why we do research:
“We cannot regard truth as a goal of inquiry. The purpose of inquiry is to achieve agreement among human beings about what to do, to bring about consensus on the ends to be achieved and the means to be used to achieve those ends.” (Rorty, 1999: xxv)
The Design Thinking Way
Richard Rorty, the Neopragmatist philosopher, says that the quest for certainty should be replaced with the demand for imagination; that one should replace knowledge with hope. He urges us to stop worrying about whether what we believe is well grounded and to start worrying about whether we’ve been imaginative enough to think up interesting alternatives to our present beliefs. In a nutshell, concern for “building a better future” takes primacy over the obsession with “correspondence to reality”.
I find this the best definition for Design Thinking.
From exploring the role of gender differences in leadership effectiveness to explaining the impact of physical space on human behavior, positivist social research has often acknowledged and appropriated the social processing of evidence, but in non-methodical ways. By contrast, as a mindset and methodology that draws extensively on qualitative research designs such as Participatory Action Research and Ethnographic Research, Design Thinking offers an alternative to social selection by suggesting mechanisms, and their corresponding rituals, through which researchers, users, and empirical findings participate in the practice of creating new, useful, and non-obvious social facts.
Design Thinking isn’t concerned with ways of socializing a certain narration of scientific truth to increase its chance of survival. As a matter of fact, it avoids the debate over truth and “correspondence to reality” altogether. And it does so by elevating the audience from consumer and processor of truth to creator of it.
Design Thinking vs Social Selection of Scientific Truths was originally published in UX Planet on Medium.