Show me hard data...
The "hard data" bit stood out for me. Data can fool you. Big time. If you have data, even good, hard data, at most you can demonstrate a correlation. What's really lacking is an explanation; a chain of cause and effect that leads from what you are controlling (whether you do wawa or not) to the measured impact (people quitting or the stock booming). In addition, the explanation should also allow you to deduce other necessary outcomes that you can use to test the validity of the explanation. This is what separates practitioners of voodoo from practitioners of science.
Constructing correlations seems to be built into the way we think, so it's natural. We all do it. It's a fallacy, though, to treat this as science or to trust it as you would real science. Any sixth-grade science class that's covered the scientific method should laugh down ALL of this data, hard or otherwise, without a theory to explain it.
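To make the correlation-versus-causation point concrete, here is a minimal sketch with invented numbers (project size, defect counts, and meeting hours are all hypothetical): two metrics that never influence each other still come out strongly correlated because a third factor drives both.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical confounder: project size, in thousands of lines of code.
project_size = rng.uniform(10, 500, size=200)

# Two outcomes that do NOT cause each other, but both depend on project size,
# plus independent noise.
defect_count = 0.8 * project_size + rng.normal(0, 20, size=200)
meeting_hours = 0.5 * project_size + rng.normal(0, 15, size=200)

# The "hard data" shows a correlation well above 0.9 between meetings and defects...
print(np.corrcoef(meeting_hours, defect_count)[0, 1])

# ...but cancelling meetings would not reduce defects: the link runs through
# project size, which the naive correlation never reveals.

That is all a correlation can buy you on its own; the causal story has to come from somewhere else.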
After reading the first chapter (I got my copy in the mail yesterday), it's not "hard facts." As we've discussed before on this board ad nauseam, it's extremely difficult to do studies that can correctly test the effect of different tools and approaches in software.
Even if we have "hard data" we're not really in a much better position than if we rely on anecdotal evidence. It's just a more sophisticated form of voodoo if it's not coupled with a theory or model of how the system works.
Cause and effect is sometimes (not always!) hard to establish, as you observe. Does that mean we shouldn't try to understand, though? And there's nothing wrong with incomplete or limited theories, provided you are aware of the context in which they are valid. A lot of the time and expense involved in scientific research is spent trying to make local theories more universal.
There are good methods for trying to make improvements based on data gathering. Manufacturers apply tools like Design of Experiments, Statistical Process Control, and all the crap that comes with Six Sigma to their processes very effectively. Even Deming, one of the fathers of all these things, noted that a necessary condition for using these tools correctly is what he called "profound knowledge" of the system.
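For anyone who hasn't seen Statistical Process Control up close, here is a rough sketch of the core idea, with made-up defect counts: derive control limits from the process's own historical variation, then react only to points that fall outside them instead of to every wiggle.

# Minimal individuals-chart sketch (SPC); the data is illustrative only.
defects_per_build = [4, 6, 5, 7, 5, 4, 6, 5, 14, 5, 6, 4]

mean = sum(defects_per_build) / len(defects_per_build)

# The average moving range between consecutive points estimates short-term variation.
moving_ranges = [abs(a - b) for a, b in zip(defects_per_build, defects_per_build[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Standard individuals-chart limits: mean +/- 2.66 * average moving range.
ucl = mean + 2.66 * avg_mr
lcl = max(0.0, mean - 2.66 * avg_mr)

for i, x in enumerate(defects_per_build):
    flag = "out of control" if (x > ucl or x < lcl) else "common-cause noise"
    print(f"build {i}: {x} defects -> {flag}")

The build with 14 defects gets flagged; the rest is treated as ordinary variation. Deming's point stands, though: the arithmetic is trivial, and without profound knowledge of the system you still won't know what to do about the flagged point.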
Interesting that you mention the bullet points; it seems to me that reuse-in-the-small required free software and the internet. Reuse-in-the-large is achieved by everyone having a copy of Windows or some popular framework.
I agree with the other two, especially since "adding people to a late project" is open to interpretation. I'll take it to mean the reflex of adding warm bodies.
Cause and effect is sometimes (not always!) hard to establish, as you observe. Does that mean we shouldn't try to understand, though? And there's nothing wrong with incomplete or limited theories, provided you are aware of the context in which they are valid.
That is true. I just meant to say that whether you rely on anecdotes or hard data, your conclusions will be imperfect. Controlled experiments are supposedly the best evidence, but very often they do not test exactly what they were meant to test. Correlational data is very hard to interpret. Of course this is why people devote their lives to research. Expecting science to give clear, simple answers is a mistake.
Robert Glass is a GOD. Over 20 years ago I read a collection of tiny paperbacks he authored with names like "Software Projects Which Failed", "More Software Projects Which Failed". They taught me more about the people and practices of software than anything I've read since, including the oft-mentioned "Peopleware", etc.
If you asked somebody the question "Do you think it will hurt if I poke you in the eye with a stick?" that person would probably say "Yes." There's no hard data to justify the response, though.