An article in Metro Pulse (May 22, 2008) on the economics of the Knoxville convention center illuminated the difficulty of predicting the success of major investments, such as a $140 million convention center. Unfortunately, Knoxville’s did not turn out to be a success. Problems such as this, for which quantitative answers like net income have to be found from incomplete information on the inputs and over a thirty-year planning horizon, are known as messy problems. In the early years of computer modeling it was thought that one could overcome this deficiency by developing mathematical simulations of the underlying processes. Very soon the conclusion was “garbage in, garbage out”: computation alone cannot overcome deficient inputs. What computer models can do, however, and what formerly could not be done, is “sensitivity analysis” or “what-if analysis,” that is, answering the question of what happens to the outcome if one or another input is changed. This number-crunching capability was an enormous step toward a more comprehensive analysis of engineering and economic problems.

Messy problems are actually the rule, as only in rare cases are the inputs known precisely. In general, every input has a range and possibly also a frequency distribution associated with it; only in very special cases is an input a deterministic variable, one with a single value and zero range. In this latter case one can
just use classic arithmetic and add, multiply, or divide the variables as the mathematical model of the problem requires. Deterministic analysis was the only way in the pre-computer age, when number crunching by hand became practically impossible beyond a certain point. The problem with the deterministic approach is that it produces a shaky result: one answer when there are usually many. This becomes rather obvious when there is no agreement on what the inputs should be. Different people, or even the same person, may consider this or that input value more or less likely to be the right one. In these situations the computer comes in handy, as it allows a rather painless repetitive analysis of numerous input sets.

An example is the calculation of income from conferences at the Knoxville convention center described in the Metro Pulse article. The mathematical model used was the product of four inputs: number of attendees, amount spent by each attendee, an economic multiplier, and the number of days in attendance. In this case almost every person asked would come up with a different number for each of these inputs, not just lay persons but also experts in the subject. I assumed
three values for each, put them in a spreadsheet, and got 3^4 = 81 results. Since I used my own estimates, my results are not comparable with those of the actual project, but they illustrated a few interesting characteristics. The range of the 81 outcomes was vast, from $275,000 to $38 million, with each value having a frequency of 1/81 (based on my assumptions). Sorting the results in descending order and accumulating their frequencies from the bottom up produced a non-exceedance probability distribution that revealed a few interesting results. The probability that the lowest value would be exceeded was 1 – 1/81 ≈ 0.99, which means that this lowest income would be exceeded with a comfortable probability. Since the simplistic model I used did not include costs, the outcomes never went negative. The probability that the highest income of $38 million would be exceeded was zero. These extreme values are a direct consequence of my range assumptions for the inputs. The average of all 81 values was $7.6 million, and it occurred at a non-exceedance probability of 60%; in other words, this “average” income is not exceeded in 60% of the outcomes. It seems rather risky to base a project on such a low probability of success.
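A minimal sketch of this kind of what-if table is shown below in Python. The three values per input are hypothetical placeholders (the actual estimates are not reproduced here); only the structure of the calculation follows the text: the product of four inputs, enumerated over all 3^4 combinations and turned into a non-exceedance distribution.

```python
from itertools import product

# Hypothetical three-value estimates for each input; placeholders for illustration only.
attendees  = [5_000, 20_000, 50_000]   # conference attendees per year
spend      = [100.0, 200.0, 400.0]     # dollars spent per attendee per day
multiplier = [1.2, 1.5, 2.0]           # economic multiplier
days       = [1, 2, 3]                 # days in attendance

# Income model: the product of the four inputs, over all 3**4 = 81 combinations.
incomes = sorted(a * s * m * d
                 for a, s, m, d in product(attendees, spend, multiplier, days))

n = len(incomes)  # 81 equally likely outcomes, each with frequency 1/n
print(f"{n} outcomes, ranging from ${incomes[0]:,.0f} to ${incomes[-1]:,.0f}")

# Non-exceedance probability: fraction of outcomes at or below each value.
for i, income in enumerate(incomes, start=1):
    if i in (1, n // 2, n):  # print a few representative points
        print(f"income ${income:,.0f}  non-exceedance probability {i / n:.2f}")

# Average income and the share of outcomes that do not exceed it.
avg = sum(incomes) / n
p_not_exceeded = sum(1 for x in incomes if x <= avg) / n
print(f"average ${avg:,.0f} is not exceeded in {p_not_exceeded:.0%} of outcomes")
```

With skewed inputs like these, the average typically sits well above the median, which is exactly why it can correspond to a non-exceedance probability above 50%.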
A method that number crunching by computers has made possible is Monte Carlo simulation. Given a number of inputs, their ranges and frequency distributions, and a mathematical process model, like the one above or, better, a more comprehensive one, the method randomly samples sets of input data and runs them through the model. Again, a good deal of attention must be paid, first, to the preparation of the inputs, second, to the adequacy of the process model, and, third, to the analysis of the results. The reliability of the outcome stands and falls with the quality of the inputs. Repeatedly sampling and evaluating input sets produces a sufficiently large number of outputs that are then statistically analyzed by determining their dispersion, mean, and range. The most important information obtained from such an analysis is the probability of achieving certain goals. One such goal is to determine how high the probability is that the project will economically fail, i.e., produce a negative return once all revenues and costs of the project are taken into account. If that probability exceeds the comfort level of the decision makers, it is back to the drawing board for a revised project alternative. The Knoxville convention center, for example, ballooned for one reason or another from a 100,000 sqft project to a 500,000 sqft one.
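A bare-bones Monte Carlo sketch of such an analysis might look like the following. The triangular and uniform distributions and the annual cost figure are assumptions made purely for illustration; none of them come from the actual project.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N = 100_000  # number of Monte Carlo trials

# Hypothetical input distributions; a real study would fit these to data.
attendees   = rng.triangular(5_000, 20_000, 50_000, N)  # per year
spend       = rng.triangular(100, 200, 400, N)          # $ per attendee per day
multiplier  = rng.uniform(1.2, 2.0, N)                  # economic multiplier
days        = rng.integers(1, 4, N)                     # 1 to 3 days
annual_cost = rng.triangular(4e6, 6e6, 9e6, N)          # assumed operating + debt service, $

# Process model: revenue minus cost, evaluated once per sampled input set.
revenue = attendees * spend * multiplier * days
net = revenue - annual_cost

# Statistical analysis of the outputs: dispersion, mean, range, and failure probability.
print(f"mean net income : ${net.mean():,.0f}")
print(f"std deviation   : ${net.std():,.0f}")
print(f"range           : ${net.min():,.0f} to ${net.max():,.0f}")
print(f"P(negative return) = {(net < 0).mean():.1%}")
```

The single most useful number such a run produces is the last line: the estimated probability that the project loses money under the assumed inputs.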
Of course, advanced decision analysis methods still require a lot of attention to detail to avoid mistakes and wrong conclusions. But it seems that, despite all advances in computation technology and mathematical modeling, the analysis at the consumer level is still stuck at the pre-computer stage, and decisions for multi-million dollar projects are based on rather simplistic calculations. If this is the case, one should not be surprised when the project turns out to be an economic sink of taxpayer money instead of a wellspring of profits, or at least a break-even proposition.
In conclusion, yet another analytical approach to messy problems should be mentioned, namely the use of fuzzy arithmetic. This approach goes back to a 1965 paper on fuzzy sets by Lotfi Zadeh, which I remember because it looked intriguing to me and I retained a copy of it. Here, input data are qualified by levels of belief instead of probabilities, and the outcomes are likewise associated with a level of belief rather than a probability. Karl E. Thorndike of FuziWare, Inc., of Knoxville, Tennessee, developed a software product, FuziCalc, that was published in 1993. At the moment I do not know how it fared. But in dealing with messy problems and multi-million dollar investments I would leave no stone unturned in an attempt to thoroughly analyze the project, especially one that taps our own money. WOW.
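To give a flavor of the idea, here is a generic sketch of fuzzy arithmetic via alpha-cuts, not a reconstruction of FuziCalc. Each input is treated as a triangular fuzzy number (low, most-believed, high), and at each level of belief the inputs become intervals that are multiplied through the income model. The numbers are the same hypothetical placeholders used above.

```python
# Generic illustration of fuzzy arithmetic via alpha-cuts (not FuziCalc).
# Each input is a triangular fuzzy number (low, most-believed, high); at each
# belief level alpha the number reduces to an interval, and intervals are
# multiplied endpoint-wise, which is valid here because all values are positive.

def alpha_cut(low, peak, high, alpha):
    """Interval of values whose membership (belief) is at least alpha."""
    return (low + alpha * (peak - low), high - alpha * (high - peak))

def multiply_intervals(*intervals):
    lo, hi = 1.0, 1.0
    for a, b in intervals:
        lo *= a
        hi *= b
    return lo, hi

# Hypothetical fuzzy inputs for the income model (low, most-believed, high).
attendees  = (5_000, 20_000, 50_000)
spend      = (100, 200, 400)
multiplier = (1.2, 1.5, 2.0)
days       = (1, 2, 3)

for alpha in (0.0, 0.5, 1.0):
    cuts = [alpha_cut(*x, alpha) for x in (attendees, spend, multiplier, days)]
    lo, hi = multiply_intervals(*cuts)
    print(f"belief level {alpha:.1f}: income between ${lo:,.0f} and ${hi:,.0f}")
```

At a belief level of 1.0 the result collapses to the single most-believed estimate, while at 0.0 it spans the full range of what is considered possible at all.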