Applied Unknowns Analysis
• John Vandivier
One of the major Austrian critiques of certain kinds of analysis focuses on uncertainty: you can make a forecast based on certain data, but something unexpected might come along and muck everything up. In the past I have described ways of overcoming this issue. This article contains three sections which do the following:
- Argues for a better model of overcoming uncertainty in theory
- Gives a concrete example based on some estimates I produced at work recently
- Provides links to my writings on Kirzner
> But much of the knowledge that is actually utilized is by no means "in existence" in this ready-made form. Most of it consists in a technique of thought which enables the individual engineer to find new solutions rapidly as soon as he is confronted with new constellations of circumstances.

This comes dangerously close to coinciding with the Schumpeterian notion of the entrepreneur-as-inventor, in contrast to the classic Misesian entrepreneur-as-arbitrageur or risk taker. I believe the statement renders research productivity as a major category of so-called tacit knowledge. It seems to blur the line between what might be called potential and actual knowledge. See this article where I discuss two components of research productivity: research skill and environmental facilitation.

Section II - A Concrete Example

All of the above is to argue for a better model of overcoming uncertainty in theory. This section gives a concrete example based on some estimates I produced at work recently.

I had to produce some LOEs, which are estimates of level of effort in terms of hours, for work. There were about 100 tasks to estimate and I had a couple of hours to create the estimates. The problem was that I didn't understand the requirement description for some of them. So I estimated the ones I understood and left the others as unknown. You could say their estimates were uncertain, and they are particularly appropriately called uncertain even in the Knightian sense, because I did not have a good qualitative understanding of their constitution.

Yet I was able to produce a total estimate. How? By leveraging categorical certainty and ordinal correction. Even if I didn't understand these tasks, I knew that they were tasks, and I had an estimate of the LOE for an average task. Watch out!
The average estimated task is plausibly less complex than the average non-estimated task, so we have reason to believe that unestimated tasks are systematically more complex and thus more time-consuming to complete. Here's how I could still produce a credible estimate. Consider the following simplified/fictitious set of task data:
- Produce new thing, 5 hours
- Fix broken thing, 3 hours
- Make a second thing, 4 hours
- Make another different thing, ?
- Fix another thing, ?
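Using the fictitious data above, the categorical-certainty-plus-ordinal-correction approach can be sketched in a few lines. Note that the 1.25 correction factor below is my own illustrative assumption (an upward adjustment reflecting that unestimated tasks are plausibly more complex than average), not a figure from the original estimate:

```python
# Sketch of the estimation approach: impute unknown task LOEs from the
# average of the known ones, then apply an ordinal correction upward
# because unestimated tasks are plausibly harder than average.

known_hours = [5, 3, 4]   # tasks I understood and estimated directly
num_unknown = 2           # tasks left as "?"
correction = 1.25         # hypothetical upward adjustment factor

avg_known = sum(known_hours) / len(known_hours)   # 4.0 hours per task
imputed = avg_known * correction * num_unknown    # hours imputed to unknowns
total = sum(known_hours) + imputed                # credible total estimate

print(f"Average known task: {avg_known:.1f} h")
print(f"Imputed for {num_unknown} unknown tasks: {imputed:.1f} h")
print(f"Total estimate: {total:.1f} h")
```

Even without the correction factor, treating the unknowns as average-sized tasks yields a defensible lower bound; the correction simply encodes the ordinal judgment that the unknowns sit above the average.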
Section III - My Writings on Kirzner

- Dec 2016 - 4 Problems with Austrian Economics
- Sept 2016 - Rational Estimation and Price Under Uncertainty
- Oct 2016 - The Kirznerian Delta: FSR or Abundance?