How an evidence hierarchy can inform your thinking
So here’s a good way to start a fight with your partner. Only accept an argument your partner has made when it comes from someone else. You’ll know you’ve succeeded when your partner exclaims “I’ve been saying that this whole time! Why do you only believe it when he says it!”
Partner dynamics aside, this is a common situation that neatly illustrates a fact of life: not all evidence is created equal. When an argument comes from a reputable expert within a field, I’m likely to give it more credence than when it comes from a non-expert. I’m not saying that you shouldn’t believe your partner, but it does explain how we implicitly dismiss or lean into various kinds of evidence we come across.
The E in QCE
As we’ve explored in prior posts, the promise I see in the QCE framework is that it offers a structured process for developing your personal positions and arguments on topics that matter to you.
Implementing it amounts to classifying information you consume into either a question (Q) to explore, a claim (C) that provides a potential answer to one of your questions, or evidence (E) that supports or opposes a claim.
My hypothesis is that developing a library of questions, claims, and evidence will make me a more impactful thinker and a better decision maker in work and life.
As I continue to build out a working QCE system, I realize that embracing the various shades of strength that evidence naturally comes in is an important design consideration.
A starting point
The QCE framework comes from the world of academic research, so we shouldn’t be surprised to find that this community has thought long and hard about how to handle evidence. After all, it’s potentially dangerous to generalize the result from a study on mice and assume it would hold for humans.
Academics classify the strength and validity of evidence using an evidence hierarchy that spans a continuum from “expert opinion” on the low end, through population-based observational studies, all the way to the nearly unimpeachable double-blind, randomized controlled experiment.
Such an evidence hierarchy makes a ton of sense when the main purpose of the research is to seek and establish consensus on truth. Conclusions from scientific research affect everything from how we believe the universe started, to which drugs get approved for public consumption.
My needs aren’t nearly as lofty or high-stakes, however. In my non-academic case, what I’m really after is a means of clarifying and developing my thinking so that I can deploy it at some opportune time in the future.
Here’s what a non-academic evidence hierarchy could look like, from the weakest evidence to the strongest.
Non-academic evidence hierarchy
Second-hand information
At the bottom of the list is second-hand information. This is any evidence that takes the form of “someone said this once.” Examples include anecdotes from friends of friends, posts from strangers on social media, or any unsubstantiated views from people you don’t know or can’t follow up with.
This is effectively hearsay, and shouldn’t be used for making decisions or drawing conclusions. It does have its place, though, since it can be very useful for generating lines of inquiry.
Example: Xochil, a friend of a friend, swears that her running performance improved 10x after she began weaving small amounts of walking into her longer runs.
While I shouldn’t immediately conclude that this run-walk method would work for me (or anyone else), I might want to dig into this claim to see if it might help me as a runner.
First-hand information
This is any evidence that comes from the lived experience of either yourself or someone you know, like an opinion from a spouse or friend. A defining trait of first-hand information is that you can ask follow-up questions. The reason I consider Xochil a source of second-hand information is that I don’t know her and wouldn’t be able to call her up to ask more about her claim.
By definition, the source of the first-hand evidence can speak authoritatively to their experience, and is therefore an expert on it.
While the downside to this kind of evidence is a sample size of one, it’s great for forming an initial hypothesis for individual use cases.
Example: As someone interested in habit formation, I’ve come to realize that I’m way more likely to build and stick with a habit if I can visualize my successes through a win streak. After journaling about it for a month, I realize that this is a piece of evidence I should record for my future self.
Documented results
This is any evidence taking the form of a document detailing facts or results, intended for distribution to an audience. This evidence is stronger than first-hand information since it’s usually the product of some kind of analysis and/or collaboration, and therefore has a higher standard of credibility built in. This type of evidence is usually somewhat qualitative in nature.
Examples: Project status reports, post-mortems and after-action reports, summaries of findings from A/B tests, ad-hoc analyst reports, or even well-structured meeting minutes.
Aggregate objective data
This is any form of evidence that comes from an objective report that aggregates data. Since this evidence is typically generated from systems, and not produced by any particular individual, its objectivity makes for a stronger form of evidence than documented results, which are based on subjective interpretations of data. This type of evidence is also usually quantitative in nature.
Examples: Corporate KPI scorecards, departmental dashboards, financial statements, and results from sanctioned queries.
Published sources
At the highest level of credibility are any publicly available pieces of writing that have survived a process of proper editing and fact checking by a team of experts. These forms of evidence are great for developing new claims or for supporting existing claims you’ve been mulling over. Plus, they make you sound smart when you trot them out.
Examples: Books published by a reputable publishing house, newspaper articles, and academic papers published in peer-reviewed journals.
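For readers who think in code, the five levels above can be captured as an ordered type, so any two pieces of evidence can be compared by strength. This is a hypothetical sketch (the names are mine, not part of the framework):

```python
from enum import IntEnum

# Hypothetical sketch: the five evidence levels as an ordered enum.
# IntEnum gives us comparison operators for free.
class EvidenceStrength(IntEnum):
    SECOND_HAND = 1         # "someone said this once"
    FIRST_HAND = 2          # lived experience you can follow up on
    DOCUMENTED_RESULTS = 3  # status reports, post-mortems, A/B test summaries
    AGGREGATE_DATA = 4      # dashboards, KPIs, financial statements
    PUBLISHED = 5           # books, journalism, peer-reviewed papers

# Ordering reflects the hierarchy: published sources outrank hearsay.
assert EvidenceStrength.PUBLISHED > EvidenceStrength.SECOND_HAND
```

An ordered type like this also makes it trivial to sort a pile of evidence from dimmest to brightest before deciding what to rely on.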
Handling Evidence
To see how I would process a piece of evidence relative to the QCE framework, here’s how I would handle the example given in “second-hand information” involving Xochil:
Step 1: Capture and restate
Coming away from my conversation with my friend, I would jot down the idea, and in the process, restate it in my own words. Doing this serves as a small test to see if I really do understand the idea.
Step 2: Triage
Next, I classify this information as a question, a claim, or a piece of evidence. If it’s evidence, then I need to determine its strength.
In this example, the idea gets recorded as: “Evidence, second-hand - Xochil swears by the run-walk method. She says it revolutionized her running game.”
Step 3: Elaborate
Within the next few days, I would then ask how this evidence connects to my prior thinking. Does it contradict a claim I’ve already made? Does it shed light on a question I’m interested in?
In this case, Xochil’s evidence informs an existing question I’ve been thinking about: “Question - How can I make my running as sustainable as possible?”
Since I hadn’t considered walking before, this evidence generates and supports a new claim: “Claim - Regularly walking throughout a run makes you a stronger runner.”
A quick Google search leads me to a book by Jeff Galloway called The Run Walk Run method, which I now add to my reading list. When I get around to reading it, I’ll be sifting for Galloway’s claims and evidence to add to my knowledge base.
In this way, I see the QCE system serving as an ever-growing and evolving repository of my best thinking, capable of ingesting information from multiple sources over time.
Next steps
This is an early start at an evidence hierarchy that could be applied to business and life, and I fully expect to iterate on it once I begin getting some reps in.
Maybe because Christmas is a few days away, I see processing each piece of evidence like hanging ornaments on a Christmas tree. The trunk of the tree represents a question, branches represent claims, and ornaments are the evidence. The brilliance of each ornament is a function of its place on the evidence hierarchy.
Deploying knowledge can then be informed by the state of my Christmas trees. If I’m writing about a topic where I’m simply making an observation and stating an opinion, a few dim ornaments (first- and second-hand evidence) might be sufficient reference material. If I’m making a weighty decision that could impact budgets and people’s lives, I’ll need much stronger evidence, and likely a bunch of it.
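That deployment rule can be expressed as a simple strength threshold: low-stakes writing tolerates dim ornaments, while high-stakes decisions demand brighter ones. A minimal sketch, with made-up level names:

```python
# Hypothetical sketch: gate a decision on a minimum evidence strength.
STRENGTH = {"second-hand": 1, "first-hand": 2, "documented": 3,
            "aggregate": 4, "published": 5}

def strong_enough(evidence_levels, minimum):
    """Return only the evidence that meets the bar for this decision."""
    return [e for e in evidence_levels if STRENGTH[e] >= STRENGTH[minimum]]

# A casual opinion piece can lean on dim ornaments...
blog_refs = strong_enough(["second-hand", "first-hand"], "second-hand")

# ...but a budget decision filters out anything below aggregate data.
budget_refs = strong_enough(["first-hand", "aggregate", "published"], "aggregate")
```

Here `blog_refs` keeps everything, while `budget_refs` drops the first-hand anecdote, mirroring the idea that the stakes set the bar.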
I’m excited by the potential of this evidence scoring and classification system and am looking forward to seeing how it handles some real life testing. Be sure to subscribe to follow along as I post updates on how that goes.
This post is the fourth in a series focused on how the QCE framework can aid us in the non-academic contexts of business and life. Here are links to the prior posts:
Post 1: Extending the zettelkasten beyond the ivory tower. An experiment.
Post 2: Highlighting too much? Try raising the standard.
Post 3: How bad decisions get the green light
If you have any ideas, questions, or feedback, I’d love to hear from you. Please feel free to reach me at jon@fallingtosystems.com.