Decisions and Info-Gaps

This blog discusses, in non-technical terms, issues relating to decisions under uncertainty, especially from an info-gap perspective. Yakov Ben-Haim.

Mathematical Metaphors (2013-04-28)

<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div style="text-align: justify;">
Theories in all areas of science tell us something about the world. They are images, or models, or representations of reality. Theories tell stories about the world and are often associated with stories about their discovery. Like the story (probably apocryphal) that Newton invented the theory of gravity after an <a href="http://en.wikipedia.org/wiki/Isaac_Newton#Apple_incident" target="_blank"><span style="color: blue;">apple fell</span></a> on his head. Or the story (probably true) that Kekule discovered the cyclical structure of benzene after <a href="http://en.wikipedia.org/wiki/Benzene#Discovery" target="_blank"><span style="color: blue;">day-dreaming of a snake</span></a> seizing its tail. Theories are metaphors that explain reality.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
A theory is scientific if it is precise, quantitative, and amenable to being tested. A scientific theory is mathematical. Scientific theories are mathematical metaphors.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
A metaphor uses a word or phrase to define or extend or focus the meaning of another word or phrase. For example, "The river of time" is a metaphor. We all know that rivers flow inevitably from high to low ground. The metaphor focuses the concept of time on its inevitable uni-directionality. Metaphors make sense because we understand what they mean. We all know that rivers are wet, but we understand that the metaphor does not mean to imply that time drips, because we understand the words and their context. But on the other hand, a metaphor - in the hands of a creative and imaginative person - might mean something unexpected, and we need to think carefully about what the metaphor does, or might, mean. Mathematical metaphors - scientific models - also focus attention in one direction rather than another, which gives them explanatory and predictive power. Mathematical metaphors can also be interpreted in different and surprising ways.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Some mathematical models are very accurate metaphors. For instance, when Galileo dropped a heavy object from the leaning tower of Pisa, the <a href="http://en.wikipedia.org/wiki/Free_fall" target="_blank"><span style="color: blue;">distance it fell</span></a> increased in proportion to the square of the elapsed time. Mathematical equations sometimes represent reality quite accurately, but we understand the representation only when the meanings of the mathematical terms are given in words. The meaning of the equation tells us what aspect of reality the model focuses on. Many things happened when Galileo released the object - it rotated, air swirled, friction developed - while the equation focuses on one particular aspect: distance versus time. Likewise, the same quadratic form that relates distance to time can also relate <a href="http://en.wikipedia.org/wiki/Mass%E2%80%93energy_equivalence" target="_blank"><span style="color: blue;">energy to mass</span></a>, or relate <a href="http://en.wikipedia.org/wiki/Logistic_function" target="_blank"><span style="color: blue;">population growth rate</span></a> to population size. In Galileo's case the metaphor describes freely falling objects.</div>
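The shared quadratic shape behind these three metaphors can be written out explicitly. The notation below is the standard textbook convention, not anything taken from the post itself:

```latex
% Free fall (Galileo): distance grows with the square of elapsed time
s(t) = \tfrac{1}{2} g t^2

% Mass-energy equivalence (Einstein): energy is quadratic in the speed of light
E = m c^2

% Logistic growth: growth rate is a quadratic function of population size N
\frac{dN}{dt} = r N \left( 1 - \frac{N}{K} \right) = r N - \frac{r}{K} N^2
```

The same quadratic shape, read three different ways: which variable plays the role of $x$ and which of $y$ is exactly the choice the metaphor makes.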
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Other models are only approximations. For example, a particular theory describes the build-up of mechanical <a href="http://en.wikipedia.org/wiki/Stress_intensity_factor" target="_blank"><span style="color: blue;">stress around a crack</span></a>, causing damage in the material. While cracks often have rough or ragged shapes, this important and useful theory assumes the crack is smooth and elliptical. This mathematical metaphor is useful because it focuses the analysis on the crack's radius of curvature, which is critical in determining the concentration of stress.</div>
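For reference, the classical result behind this metaphor (Inglis's analysis of an elliptical hole in a plate) makes the role of the curvature explicit; the symbols below follow the standard fracture-mechanics convention and are not taken from the post:

```latex
% Peak stress at the tip of an elliptical hole of half-length a and
% tip radius of curvature \rho, under remote tensile stress \sigma:
\sigma_{\max} = \sigma \left( 1 + 2 \sqrt{\frac{a}{\rho}} \right)
```

A sharper crack tip (smaller $\rho$) concentrates more stress, which is why the smooth-ellipse metaphor remains useful even for ragged real-world cracks.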
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Not all scientific models are approximations. Some models measure something. For example, in statistical mechanics, the <a href="http://en.wikipedia.org/wiki/Temperature" target="_blank"><span style="color: blue;">temperature</span></a> of a material is proportional to the average kinetic energy of the molecules in the material. The temperature, in degrees centigrade, is a global measure of random molecular motion. In economics, the <a href="http://en.wikipedia.org/wiki/Gross_domestic_product" target="_blank"><span style="color: blue;">gross domestic product</span></a> is a measure of the degree of economic activity in the country.</div>
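The proportionality mentioned above is, for an ideal monatomic gas, the standard kinetic-theory relation (with $k_B$ the Boltzmann constant, and temperature measured in kelvin, i.e. on an absolute scale, for the proportionality to hold exactly):

```latex
\left\langle E_k \right\rangle
  = \tfrac{1}{2} m \left\langle v^2 \right\rangle
  = \tfrac{3}{2} k_B T
```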
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Other models are not approximations or measures of anything, but rather graphical portrayals of a relationship. Consider, for example, the competition among three restaurants: <a href="http://www.joeseasydiner.co.za/" target="_blank"><span style="color: blue;">Joe's</span></a> Easy Diner, <a href="http://www.mcdonalds.com/us/en/home.html" target="_blank"><span style="color: blue;">McDonald's</span></a>, and <a href="http://www.maxims-de-paris.com/" target="_blank"><span style="color: blue;">Maxim's de Paris</span></a>. All three restaurants compete with each other: if you're hungry, you've got to choose. Joe's and McDonald's are close competitors because they both specialize in hamburgers but also have other dishes. They both compete with Maxim's, a really swank and expensive boutique restaurant, but the competition is more remote. To model the competition we might draw a line representing "competition", with each restaurant as a dot on the line. Joe's and McDonald's are close together and far from Maxim's. This line is a mathematical metaphor, representing the proximity (and hence strength) of competition between the three restaurants. The distances between the dots are precise, but what the metaphor means, in terms of the real-world competition between Joe, McDonald, and Maxim, is not so clear. Why a line, rather than a plane whose axes could refine the dimensions of competition (price and location, for instance)? Or maybe a hill, to reflect difficulty of access (Joe's is at one location in South Africa, Maxim's has restaurants in Paris, Peking, Tokyo and Shanghai, and McDonald's is just about everywhere). A metaphor emphasizes some aspects while ignoring others. Different mathematical metaphors of the same phenomenon can support very different interpretations or insights.</div>
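The "competition line" can be sketched in a few lines of code. The coordinates below are hypothetical illustrations invented for this example, not measured data; only the relative distances carry meaning, which is precisely the point of the metaphor:

```python
# A minimal sketch of the "competition line" metaphor.
# Positions on the line are hypothetical; only relative distances matter.
positions = {"Joe's": 0.0, "McDonald's": 1.0, "Maxim's": 8.0}

def competition_distance(a, b):
    """Distance between two restaurants on the competition line:
    a small distance means close competitors."""
    return abs(positions[a] - positions[b])

# Joe's and McDonald's are close competitors; both are far from Maxim's.
print(competition_distance("Joe's", "McDonald's"))  # 1.0
print(competition_distance("Joe's", "Maxim's"))     # 8.0
```

The precision of the numbers is deceptive: the model pins down distances exactly while leaving open what "distance" means in the real competition.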
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The scientist who constructs a mathematical metaphor - a model or theory - chooses to focus on some aspects of the phenomenon rather than others, and chooses to represent those aspects with one image rather than another. Scientific theories are fascinating and extraordinarily useful, but they are, after all, only metaphors.</div>
</div>
MOOCs and the Unknown (2013-02-09)

<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div style="text-align: justify;">
MOOCs - Massive Open Online Courses - have fed hundreds of thousands of knowledge-hungry people around the globe. Stanford University's MOOCs program has taught open online courses to tens of thousands of students per course, and has 2.5 million enrollees from nearly every country in the world. The students hear a lecturer, and also interact with each other in digital social networks that facilitate their mastery of the material and their integration into global communities of the knowledgeable. The internet, and its MOOC realizations, extend the democratization of knowledge to a scale unimagined by early pioneers of workers' study groups or public universities. MOOCs open the market of ideas and knowledge to everyone, from the preacher of esoteric spirituality to the teacher of esoteric computer languages. It's all there; all you need is a browser.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The internet is a facilitating technology, like the invention of writing or the printing press, and its impacts may be as revolutionary. MOOCs are here to stay, like the sun to govern by day and the moon by night, and we can see that they are good. But they also have limitations, and these we must begin to understand.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Education depends on the creation and transfer of knowledge. Insight, invention, and discovery underlie the creation of knowledge, and they must precede the transfer of knowledge. MOOCs enable learners to sit at the feet of the world's greatest creators of knowledge.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
But the distinction between creation and transfer of knowledge is necessarily blurred in the process of education itself. Deep and meaningful education is the creation of knowledge in the mind of the learner. Education is not the transfer of digital bits between electronic storage devices. Education is the creation or discovery by the learner of thoughts that previously did not exist in his mind. One can transfer facts per se, but if this is done without creative insight by the learner it is no more than <a href="http://www.literaturepage.com/read/huckfinn-18.html" target="_blank"><span style="color: blue;">Huck Finn's learning</span></a> "the multiplication table up to six times seven is thirty-five".</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Invention, discovery and creation occur in the realm of the unknown; we cannot know what will be created until it appears. Two central unknowns dominate the process of education, one in the teacher's mind and one in the student's.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The teacher cannot know what questions the student will ask. Past experience is a guide, but the universe of possible questions is unbounded, and the better the student, the more unpredictable the questions. The teacher should respond to these questions because they are the fruitful meristem of the student's growing understanding. The student's questions are the teacher's guide into the student's mind. Without them the teacher can only guess how to reach the learner. The most effective teacher will personalize his interaction with the learner by responding to the student's questions.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The student cannot know the substance of what the teacher will teach; that's precisely why the student has come to the teacher. In extreme cases - of really deep and mind-altering learning - the student will not even understand the teacher's words until they are repeated again and again in new and different ways. The meanings of words come from context. A word means one thing and not another because we use that word in this way and not that. The student gropes to find out how the teacher uses words, concepts and tools of thought. The most effective learning occurs when the student can connect the new meanings to his existing mental contexts. The student cannot always know what contexts will be evoked by his learning.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
As an interim summary, learning can take place only if there is a gap of knowledge between teacher and student. This knowledge gap induces uncertainties on both sides. Effective teaching and learning occur by personalized interaction to dispel these uncertainties, to fill the gap, and to complete the transfer of knowledge.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
We can now appreciate the most serious pedagogic limitation of MOOCs as a tool for education. Mass education is democratic, and MOOCs are far more democratic than any previous mode. This democracy creates a basic tension. The more democratic a mode of communication, the less personalized it is because of its massiveness. The less personalized a communication, the less effective it is pedagogically. The gap of the unknown that separates teacher and learner is greatest in massively democratic education.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<a href="http://apt46.net/2011/05/18/socrates-was-against-writing" target="_blank"><span style="color: blue;">Socrates inveighed</span></a> against the writing of books. They are too impersonal and immutable. They offer too little room for Socratic midwifery of wisdom, in which knowledge comes from dialog. Socrates wanted to touch his students' souls, and because each soul is unique, no book can bridge the gap. Books can at best jog the memory of learners who have already been enlightened. Socrates would probably not have liked MOOCs either, and for similar reasons.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Nonetheless, Socrates might have preferred MOOCs over books because the mode of communication is different. Books approach the learner through writing, and induce him to write in response. In contrast, MOOCs approach the learner through speech, and induce him to speak in response. Speech, for Socrates, is personal and interactive; speech is the road to the soul. Spoken bilateral interaction cannot occur between a teacher and 20,000 online learners spread over time and space. That format is the ultimate insult to Socratic learning. On the other hand, the networking that can accompany a MOOC may possibly facilitate the internalization of the teacher's message even more effectively than a one-on-one tutorial. Fast, multi-personal online chats and other networking can help learners rapidly find their own mental contexts for assimilating and modifying the teacher's message.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Many people have complained that the internet undermines the permanence of the written word. No document is final if it's on the web. Socrates might have approved, and this might be the greatest strength of the MOOC: no course ever ends and no lecture is really final. If MOOCs really are democratic then they cannot be controlled. The discovery of knowledge, like the stars in their orbits, is forever on-going, with occasional supernovas that brighten the heavens. The creation of knowledge will never end because the unknown is limitless. If MOOCs facilitate this creation, then they are good. </div>
</div>
Habit: A Response to the Unknown (2012-11-10)

<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div style="text-align: justify;">
<a href="http://en.wikipedia.org/wiki/David_Hume" target="_blank"><span style="color: blue;">David Hume</span></a> explained that we believe by habit that logs will burn, <a href="http://www.infidels.org/library/historical/david_hume/human_understanding.html" target="_blank"><span style="color: blue;">stones will fall</span></a>, and endless other past patterns will recur. No experiment can <a href="http://decisions-and-info-gaps.blogspot.co.il/2011/10/end-of-science.html" target="_blank"><span style="color: blue;">prove</span></a> the future recurrence of past events. An experiment belongs to the future only until it is implemented; once completed, it becomes part of the past. In order for past experiments to prove something about the future, we must assume that the past will recur in the future. That's as circular as it gets.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
But without the habit of believing that past patterns will recur, we would be incapacitated and ineffectual (and probably reduced to moping and sobbing). Who would dare climb stairs or fly planes or eat bread and drink wine without the belief that, as in the past, the stairs will bear our weight, the wings will carry us aloft, and the <a href="http://www.jewishvirtuallibrary.org/jsource/Bible/Psalms104.html" target="_blank"><span style="color: blue;">bread</span></a> and <a href="http://kodesh.mikranet.org.il/i/tr/t26a4.htm" target="_blank"><span style="color: blue;">wine</span></a> will nourish our body and soul? Without such habits we would become a jittering jelly of indecision in the face of the unknown.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
But you can't just pull a habit out of a hat. We spend great effort instilling good habits in our children: to brush their teeth, tell the truth, and not pick on their little sister even if she deserves it.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
As we get older, and I mean really older, we begin to worry that our habits become frozen, stodgy, closed-minded and constraining. Younger folks smile at our rigid ways, and try to loosen us up to the new wonders of the world: technological, culinary or musical. Changing your habits, or staying young when you aren't, isn't always easy. Without habits we're lost in an unknowable world.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
And yet, openness to new ideas, tastes, sounds and other experiences of many sorts can itself be a habit, and perhaps a good one. It is the habit of testing the unknown, of acknowledging the great gap between what we <i>do</i> know and what we <i>can</i> know. That gap is an invitation to growth and awe, as well as to fear and danger.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The habit of openness to change is not a contradiction. It is simply a recognition that habits are a response to the unknown. Not everything changes all the time (or so we're in the habit of thinking), and some things <i>are</i> new under the sun (as newspapers and Nobel prize committees periodically remind us).</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Habits, including the habit of open-mindedness, are a good thing precisely because we can never know for sure how good or bad they really are.</div>
<div style="text-align: justify;">
<br /></div>
</div>
Why We Need Libraries, Or, Memory and Knowledge (2012-05-26)

<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div style="text-align: justify;">
<i>"Writing is thinking in slow motion. We see what at normal speeds escapes us, can rerun the reel at will to look for errors, erase, interpolate, and rethink. Most thoughts are a light rain, fall upon the ground, and dry up. Occasionally they become a stream that runs a short distance before it disappears. Writing stands an incomparably better chance of getting somewhere.</i></div>
<div style="text-align: justify;">
<i><br /></i></div>
<div style="text-align: justify;">
<i>"... What is written can be given endlessly and yet retained, read by thousands even while it is being rewritten, kept as it was and revised at the same time. Writing is magic." </i></div>
<div style="text-align: right;">
Walter Kaufmann</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
We are able to know things because they happen again and again. We know about the sun because it glares down on us day after day. Scientists learn the laws of nature, and build confidence in their knowledge, by testing their theories over and over and getting the same results each time. We would be unable to learn the patterns and ways of our world if nothing were <a href="http://books.google.co.il/books?id=XtM7AAAAIAAJ&pg=PA391&lpg=PA391&dq=Apart+from+recurrence,+knowledge+would+be+impossible+alfred+north+whitehead&source=bl&ots=rPpBvqNxY0&sig=dwkTnPDbTbeVtYiwfZzPB4b0FPg&hl=en&sa=X&ei=8GW_T9aOBZKQ4gTG9u3pCQ&redir_esc=y#v=snippet&q=recurrence&f=false" target="_blank"><span style="color: blue;">repeatable.</span></a></div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
But without memory, we could learn nothing even if the world were tediously repetitive. Even though the sun rises daily in the east, we could not <i>know</i> this if we couldn't <i>remember</i> it.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The world has stable patterns, and we are able to discover these patterns because we remember. Knowledge requires more than memory, but memory is an essential element.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The invention of writing was a great boon to knowledge because writing is collective memory. For instance, the Peloponnesian wars are known to us through Thucydides' writings. People understand themselves and their societies in part through knowing their history. History, as distinct from pre-history, depends on the written word. For example, each year at the Passover holiday, Jewish families through the ages have read the <a href="http://en.wikipedia.org/wiki/Haggadah" target="_blank"><span style="color: blue;">story</span> </a>of the Israelite exodus from Egypt. We are enjoined to see ourselves as though we were <a href="http://www.biu.ac.il/JH/Parasha/eng/bo/agur.html" target="_blank"><span style="color: blue;">there,</span></a> fleeing Egypt and trudging through the <a href="http://www.yeshiva.org.il/midrash/shiur.asp?id=13423" target="_blank"><span style="color: blue;">desert.</span></a> Memory, recorded for all time, creates individual and collective awareness, and motivates aspirations and actions.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Without writing, much collective memory would be lost, just as books themselves are sometimes lost. We know, for instance, that Euclid wrote a book called <i><a href="http://www.projecthindsight.com/PHASER/web/porisms.html" target="_blank"><span style="color: blue;">Porisms,</span></a></i> but the book is lost and we know next to nothing about its message. Memory, and knowledge, have been lost.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Memory can be uncertain. We've all experienced that on the personal level. Collective memory can also be uncertain. We're sometimes uncertain of the meaning of rare ancient words, such as <i>lilit</i> in <i>Isaiah</i> (34:14) or <i>gvina</i> in <i>Job</i> (10:10). Written traditions, while containing an element of truth, may be of uncertain meaning or veracity. For instance, we know a good deal, both from the Bible and from archeological findings, about Hezekiah who ruled the kingdom of Judea in the late 8th century BCE. About David, three centuries earlier, we can be much less certain. Biblical stories are told in great detail but corroboration is hard to obtain.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Memory can be deliberately corrupted. Records of history can be embellished or prettified, as when a king commissions the chronicling of his achievements. Ancient monuments glorifying imperial conquests are invaluable sources of knowledge of past ages, but they are unreliable and must be interpreted cautiously. Records of purported events that never occurred can be maliciously fabricated. For instance, <i>The <a href="http://en.wikipedia.org/wiki/The_Protocols_of_the_Elders_of_Zion" target="_blank"><span style="color: blue;">Protocols</span></a> of the Elders of Zion</i> is pure invention, though that book has been re-published voluminously throughout the world and continues to be taken seriously by many people. Memory is alive and very real, even if it is memory of things that never happened.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Libraries are the physical medium of human collective memory, and an essential element in maintaining and enlarging our knowledge. There are many types of libraries. The family library may have a few hundred books, while the <a href="http://en.wikipedia.org/wiki/Library_of_Congress" target="_blank"><span style="color: blue;">Library of Congress</span></a> has 1,349 km of bookshelves and holds about 147 million items. Libraries can hold paper books or digital electronic documents. Paper can perish in fire, as happened to the <a href="http://ehistory.osu.edu/world/articles/articleview.cfm?aid=9" target="_blank"><span style="color: blue;">Alexandrian library,</span></a> while digital media can be erased, or become damaged and unreadable. Libraries, like memory itself, are fragile and need care.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Why do we need libraries? Being human means, among other things, having the capacity for knowledge, and the ability to appreciate and benefit from it. The written record is a public good, like fresh air. I can read Confucius or Isaiah centuries after they lived, and my reading does not consume them. Our collective memory is part of each individual, and preserving that memory preserves a part of each of us. Without memory, we are without knowledge. Without knowledge, we are only another animal.</div>
<div style="text-align: justify;">
<br /></div>
</div>

Alone (2012-05-11)

<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div style="text-align: justify;">
<i>[S]ince there is an infinity of possible worlds, there is also an infinity of possible laws, some proper to one world, others proper to another, and each possible individual of a world includes the laws of its world in its notion.</i> Gottfried Wilhelm Leibniz</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
On simple matters we can agree. Water freezes and wood burns. People can agree on social or political issues, though often more from self interest than from reasoned argument.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Agreement is rare or fleeting on what is good or bad, worthy or worthless, humane or heartless. Are we simply not wise or intelligent or patient or convincing enough to find consensus?</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Agreement is rare because the realm of possibilities is boundless. Every thought or vision carries a cosmos of variations and extensions. A good idea is one that spawns new good ideas, on and on. We are told that God the creator created man and woman in his image: as creators, to be fruitful and to multiply children, and ideas, and worlds.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
At first we think that we are the entire world. Then we discover other worlds - things and people - and we think that they are the same as us. Then we discover that they have minds that, like ours, create their own worlds. We learn to communicate with those minds out there. We think that our meanings are their meanings, and this is true for many things, and even for many thoughts. But not for all of them. Then we discover that our deepest feelings are ours alone, and that we have created a continent whose shores are only lapped by waves from distant lands. </div>
</div>

I am a Believer (2012-04-23)

<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div style="text-align: justify;">
There are many things that I don't know. <i>About the past</i>: how my great-great-grandfather supported his family, how Charlemagne consolidated his imperial power, or how Rabbi Akiva became a scholar. <i>About the future:</i> whether I'll get that contract, how much the climate will change in the next 100 years, or when the next war will erupt. <i>About why things are as they are:</i> why stones fall and water freezes, or why people love or hate or don't give a damn, or why we are, period.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
We reflect about questions like these, trying to answer them and to learn from them. For instance, we are interested in the relations between Charlemagne and his co-ruling brother <a href="http://en.wikipedia.org/wiki/Charlemagne" target="_blank"><span style="color: blue;">Carloman</span></a>. This can tell us about brothers, about emperors, and about power. We are interested in <a href="http://en.wikipedia.org/wiki/Akiva_ben_Joseph" target="_blank"><span style="color: blue;">Akiva</span></a> because he purportedly started studying at the age of 40, which tells us something about the indomitable human spirit.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
We sometimes get to the bottom of things and understand the whys and ways of our world. We see patterns and discover laws of nature, or at least we tell stories of how things happen. Stones fall because it's their nature to seek the center of the world (Aristotle), or due to gravitational attraction (Newton), or because of mass-induced space warp (Einstein). Human history has its patterns, driven by the will to power of heroic leaders, or by the unfolding of truth and justice, or by God's hand in history.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
We also think about thinking itself, as suggested by Rodin's <i><a href="http://en.wikipedia.org/wiki/The_Thinker" target="_blank"><span style="color: blue;">Thinker</span></a>.</i> What <i>is</i> thinking (or what do we think it is)? Is thinking a physical process, like electrons whirling in our brain? Or does thinking involve something transcendental; maybe the soul whirling in the spheres? Each age has its answers.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
We sometimes get stuck, and can't figure things out or get to the bottom of things. Sometimes we even realize that there is no "bottom", that each answer brings its own questions. As <a href="http://www.todayinsci.com/W/Wheeler_John/WheelerJohn-Quotations.htm" target="_blank"><span style="color: blue;">John Wheeler</span></a> said, "We live on an island of knowledge surrounded by a sea of ignorance. As our island of knowledge grows, so does the shore of our ignorance."</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Sometimes we get stuck in an even subtler way that is very puzzling, and even disturbing. Any rational chain of thought must have a starting point. Any rational justification of that starting point must have its own starting point. In other words, any attempt to rationally justify rational thought can never be completed. Rational thought cannot justify itself, which is <i>almost</i> the same as saying that rational thought is not justified. Any specific rational argument - Einstein's cosmology or Piaget's psychology - is justified based on its premises (and evidence, and many other things). But Rational Thought, as a method, as a way of life and a core of civilization, cannot ultimately and unequivocally justify itself.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
I believe that experience reflects reality, and that thought organizes experience to reveal the patterns of reality. The truth of this belief is, I believe, self-evident and unavoidable. Just look around you. Flowers bloom anew each year. Planets swoop around with great regularity. We have learned enough about the world to change it, to control it, to benefit from it, even to greatly endanger our small planetary corner of it. I believe that rational thought is justified, but that's a belief, not a rational argument.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Rational thought, in its many different forms, is not only justified; it is unavoidable. We can't resist it. Moses saw the flaming bush and was both frightened and curious because it was not consumed (<a href="http://www.biblegateway.com/passage/?search=Exodus+3&version=NIV" target="_blank"><span style="color: blue;">Exodus</span></a> 3:1-3). He was drawn towards it despite his fear. The Unknown draws us irresistibly on an endless search for order and understanding. The Unknown drives us to search for knowledge, and the search is not fruitless. This I believe. </div>
</div>
Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com3tag:blogger.com,1999:blog-9140612503596105113.post-38822805291846577922012-03-22T21:12:00.000+02:002012-03-22T21:12:55.739+02:00We're Just Getting Started: A Glimpse at the History of Uncertainty<div dir="ltr" style="text-align: left;" trbidi="on"><br />
<div style="text-align: justify;">We've had our cerebral cortex for several tens of thousands of years. We've lived in more or less sedentary settlements and produced excess food for 7 or 8 thousand years. We've written down our thoughts for roughly 5 thousand years. And Science? The ancient <a href="http://books.google.co.il/books/about/Greek_science_in_antiquity.html?id=mweWMAlf-tEC&redir_esc=y" target="_blank"><span style="color: blue;">Greeks</span></a> had some, but science and its systematic application are overwhelmingly a European invention of the past 500 years. We can be proud of our accomplishments (quantum theory, polio vaccine, powered machines), and we should worry about our destructive capabilities (atomic, biological and chemical weapons). But it is quite plausible, as <a href="http://en.wikipedia.org/wiki/The_Ghost_in_the_Machine" target="_blank"><span style="color: blue;">Koestler</span></a> suggests, that we've only just begun to discover our cerebral capabilities. It is more than just plausible that the mysteries of the universe are still largely hidden from us. As evidence, consider the fact that the main theories of physics - general relativity, quantum mechanics, statistical mechanics, thermodynamics - are still not unified. And it goes without saying that the consilient <a href="http://en.wikipedia.org/wiki/Consilience_(book)" target="_blank"><span style="color: blue;">unity of science</span></a> is still far from us.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">What holds for science in general, holds also for the study of uncertainty. The <a href="http://books.google.co.il/books/about/A_History_of_Greek_Mathematics.html?id=drnY3Vjix3kC&redir_esc=y"><span style="color: blue;">ancient Greeks</span></a> invented the axiomatic method and used it in the study of mathematics. Some <a href="http://www6.miami.edu/ethics/jpsl/archives/bookReview/franklin.html"><span style="color: blue;">medieval</span></a> thinkers explored the mathematics of uncertainty, but it wasn't until around <a href="http://www.cambridge.org/gb/knowledge/isbn/item1163358/?site_locale=en_GB"><span style="color: blue;">1600</span></a> that serious thought was directed to the systematic study of uncertainty, and <a href="http://press.princeton.edu/titles/853.html"><span style="color: blue;">statistics</span></a> as a separate and mature discipline emerged only in the <a href="http://www.hup.harvard.edu/catalog.php?isbn=9780674403413"><span style="color: blue;">19th century.</span></a> The 20th century saw a florescence of uncertainty models. <a href="http://en.wikipedia.org/wiki/Jan_%C5%81ukasiewicz"><span style="color: blue;">Łukasiewicz</span></a> discovered 3-valued logic in 1917, and in 1965 <a href="http://en.wikipedia.org/wiki/Lotfi_A._Zadeh"><span style="color: blue;">Zadeh</span></a> introduced his work on fuzzy logic. In between, <a href="http://en.wikipedia.org/wiki/Wald's_maximin_model"><span style="color: blue;">Wald</span></a> formulated a modern version of maximin in 1945. 
A plethora of other theories, including <a href="http://www.ramas.com/riskcalc.htm"><span style="color: blue;">P-boxes</span></a>, lower <a href="http://en.wikipedia.org/wiki/Imprecise_probability"><span style="color: blue;">previsions</span></a>, <a href="http://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory"><span style="color: blue;">Dempster-Shafer</span></a> theory, <a href="https://www.google.com/search?hl=en&q=Klir+GJ.+Uncertainty+and+Information"><span style="color: blue;">generalized information theory</span></a> and <a href="http://info-gap.com/"><span style="color: blue;">info-gap theory</span></a> all suggest that the study of uncertainty will continue to grow and diversify.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">In short, we have learned many facts and begun to understand our world and its uncertainties, but the disputes and open questions are still rampant and the yet-unformulated questions are endless. This means that innovations, discoveries, inventions, surprises, errors, and misunderstandings are to be expected in the study or management of uncertainty. We are just getting started. </div></div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com3tag:blogger.com,1999:blog-9140612503596105113.post-54052622703239044592012-02-20T08:31:00.000+02:002012-02-20T08:31:12.084+02:00Accidental Education<div dir="ltr" style="text-align: left;" trbidi="on"><br />
<div style="text-align: center;"><i>"He had to take that life as he best could, </i></div><div style="text-align: center;"><i>with such accidental education as luck had given him". </i></div><div style="text-align: center;"><a href="http://www.bartleby.com/159/20.html" target="_blank"><span style="color: blue;">Henry Adams</span></a></div><br />
<div style="text-align: justify;">I am a university professor. Universities facilitate efficient and systematic learning, so I teach classes, design courses, and develop curricula. Universities have tremendously benefitted technology, the economy, health, cultural richness and awareness, and many other "goods".</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Nonetheless, some important lessons are learned strictly by accident. Moreover, without accidental surprises, education would be a bit dry, sometimes even sterile. As Adams <a href="http://www.classicreader.com/book/540/5" target="_blank"><span style="color: blue;">wrote</span></a>: "The chief wonder of education is that it does not ruin everybody concerned in it, teachers and taught."</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">An example. I chose my undergraduate college because of its program in anthropology. When I got there I took a chemistry course in my first semester. I was enchanted, by the prof as much as by the subject. I majored in chemistry and never went near the anthro department. If that prof had been on sabbatical I might have ended up an anthropologist.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Universities promote lifelong learning. College is little more than a six-pack of knowledge, a smattering of understanding and a wisp of wisdom. But lifelong learning doesn't only mean "come back to grad school". It means perceiving those rarities and strangenesses that others don't notice. Apples must have fallen on lots of people's heads before some <a href="http://en.wikipedia.org/wiki/Isaac_Newton#Apple_incident" target="_blank"><span style="color: blue;">clever fellow</span></a> said "Hmmm, what's going on here?".</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Accidental education is much more than keeping your eyes and mind open (though that is essential). To understand the deepest importance of accidental education we need to enlist two concepts: the boundlessness of the unknown, and human free will. We will then understand that accidental education feeds the potential for uniqueness of the individual.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">As we have explained elsewhere, in discussing <a href="http://decisions-and-info-gaps.blogspot.com/2011/12/jabberwocky-or-grand-unified-theory-of.html" target="_blank"><span style="color: blue;">grand unified theories</span></a> and <span style="color: blue;"><a href="http://decisions-and-info-gaps.blogspot.com/2012/01/age-of-imagination.html" target="_blank"><span style="color: blue;">imagination,</span></a> </span>the unknown is richer and stranger - and more contradictory - than the single physical reality that we actually face. The unknown is the realm of all possible as well as impossible worlds. It is the domain in which our dreams and speculations wander. It may be frightening or heartening, but taken as a whole it is incoherent, contradictory and endlessly amazing, variable and stimulating.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">We learn about the unknown in part by speculating, wondering, and dreaming (awake and asleep). Imagining the impossible is very educational. For instance, most things are impossible for children (from tying their shoes to running the country), but they must be encouraged to imagine that they can or will be able to do them. Adults also can re-make themselves in line with their dreams. We are free and able to imagine ourselves and the world in endless new and different ways. Newton's apple brought to his mind a picture of the universe unlike any that had been imagined before. Surprises, like dreams, can free us from the mundane. Cynics sometimes sneer at personal or collective myths and musings, but the ability to re-invent ourselves is the essence of humanity. The children of Israel imagined at Sinai that the covenant was given directly to them all - men, women and children equally - with no royal or priestly intermediary. This launched the concept and the possibility of <a href="http://www.createdequalthebook.com/" target="_blank"><span style="color: blue;">political equality.</span></a></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The Israelites had no map of the desert because the promised land that they sought was first of all an idea. Only after re-inventing themselves as a free people created equal in the image of God, and not slaves, only after finding a collective identity and mission, only then could they enter the land of Canaan. Their wanderings were random and their discoveries were accidental, but their formative value is with us to this day. No map or curriculum can organize one's wandering in the land of imagination. Unexpected events happen in the real world, but they stimulate our imagination of the infinity of other possible worlds. Our most important education is the accidental stumbling on new thoughts that feed our potential for innovation and uniqueness. For the receptive mind, accidental education can be the most sublime.</div></div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com2tag:blogger.com,1999:blog-9140612503596105113.post-2068347791220791102012-01-28T17:01:00.000+02:002012-01-28T17:01:17.372+02:00Genesis for Engineers<div dir="ltr" style="text-align: left;" trbidi="on"><div style="text-align: justify;"></div><div style="text-align: justify;">Technology has come a long way since Australopithecus first bruised their fingers chipping flint to make knives and scrapers. We are blessed to fruitfully multiply, to fill the world and to master it (<i>Genesis</i> 1:28). And indeed the trend of technological history is towards increasing mastery over our world. Inventors deliberately invent, but many inventions are useless or even harmful. Why is there progress and how certain is the process? Part of the answer is that good ideas catch on and bad ones get weeded out. Reality, however, is more complicated: what is 'good' or 'bad' is not always clear; unintended consequences cannot be predicted; and some ideas get lost while others get entrenched. 
Mastering the darkness and chaos of creation is a huge engineering challenge. But more than that, <a href="http://decisions-and-info-gaps.blogspot.com/2011/09/pains-of-progress.html" target="_blank"><span style="color: blue;">progress is painful</span></a> and uncertain and the challenge is not only technological.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">An example of the weeding-out process, by which our mastery improves, comes to us in Hammurabi's code of law from 38 centuries ago:</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">"If a builder build a house for some one, and does not construct it properly, and the house which he built fall in and kill its owner, then that builder shall be put to death. If it kill the son of the owner the son of that builder shall be put to death." <a href="http://eawc.evansville.edu/anthology/hammurabi.htm" target="_blank"><span style="color: blue;">(Articles 229-230)</span></a></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Builders who use inferior techniques, or who act irresponsibly, will be ruthlessly removed. Hammurabi's law doesn't say what techniques to use; it is a mechanism for selecting among techniques. As the level of competence rises and the rate of building collapse decreases, the law remains the same, implicitly demanding better performance after each improvement.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Hammurabi's law establishes negative incentives that weed out faulty technologies. In contrast, positive incentives can induce beneficial invention. <a href="http://en.wikipedia.org/wiki/John_Harrison" target="_blank"><span style="color: blue;">John Harrison</span></a> (1693-1776) worked for years developing a clock for accurate navigation at sea, motivated by the British Parliament's 20,000 pound longitude prize.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Organizations, mores, laws and other institutions explain a major part of how good ideas catch on and how bad ones are abandoned. But good ideas can get lost as well. <a href="http://www.pajiba.com/book_reviews/guns-germs-and-steel-the-fates-of-human-societies-by-jared-diamond-.php" target="_blank"><span style="color: blue;">Jared Diamond</span></a> relates that bow and arrow technologies emerged and then disappeared from pre-historic Australian cultures. Aboriginal mastery of the environment went up and then down. The mechanisms or institutions for selecting better tools do not always exist or operate.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Valuable technologies can be "side-lined" as well, despite apparent advantages. The <a href="http://en.wikipedia.org/wiki/CANDU_reactor#Fuel_cycles" target="_blank"><span style="color: blue;">CANDU</span></a> nuclear reactor technology, for instance, uses natural Uranium. No isotope enrichment is needed, so its fuel cycle is disconnected from Uranium enrichment for military applications (atom bombs use highly enriched Uranium or Plutonium). CANDU's two main technological competitors - pressurized and boiling water reactors - use isotope-enriched fuel. Nuclear experts argue long (and loud) about the merits of various technologies, but no "serious" or "major" accidents (<a href="http://www.guardian.co.uk/news/datablog/2011/mar/14/nuclear-power-plant-accidents-list-rank" target="_blank"><span style="color: blue;">INES levels</span></a> 6 or 7) have occurred with CANDU reactors, whereas they have with pressurized and boiling water reactors. Nonetheless, the CANDU is a <a href="http://www.world-nuclear.org/info/inf32.html" target="_blank"><span style="color: blue;">minor contributor</span></a> to world nuclear power.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The long-run improvement of technology depends on incentives created by attitudes, organizations and institutions, like the longitude prize and the law. Technology modifies those attitudes and institutions, creating an interactive process whereby society influences technological development, and technology alters society. The main uncertainty in technological progress arises from unintended <a href="http://www.technion.ac.il/yakov/HUMANIT2.pdf" target="_blank"><span style="color: blue;">impacts of technology</span></a> on mores, values and society as a whole. An example will make the point.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Early mechanical clocks summoned the faithful to prayer in medieval monasteries. But technological innovations may be used for generations without anyone realizing their full implications, and so it was with the clock. The long-range influence of the mechanical clock on western civilization was the idea of "<i>time discipline</i> as opposed to <i>time obedience.</i> One can ... use public clocks to summon people for one purpose or another; but that is not punctuality. Punctuality comes from within, not from without. It is the mechanical clock that made possible, for better or for worse, a civilization attentive to the passage of time, hence to productivity and performance." <a href="http://books.google.co.il/books/about/Revolution_in_time.html?id=iVSOyg877usC&redir_esc=y" target="_blank"><span style="color: blue;">(Landes, p.7)</span></a></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Unintended consequences of technology - what economists call "externalities" - can be beneficial or harmful. The unintended internalization of punctuality is beneficial (maybe). The clock example illustrates how our values gradually and unexpectedly change as a result of technological innovation. Environmental pollution and adverse climate change are harmful, even when they result from manufacturing beneficial consumer goods. Attitudes towards technological progress are beginning to change in response to perceptions of technologically-induced climate change. Pollution and climate change may someday seriously disrupt the technology-using societies that produced them. This disruption may occur either by altering social values, or by adverse material impacts, or both.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Progress occurs in historical and institutional context. Hammurabi's Code created incentives for technological change; monastic life created needs for technological solutions. Progress is uncertain because we cannot know what will be invented, and whether it will be beneficial or harmful. Moreover, inventions will change our attitudes and institutions, and thus change the process of invention itself, in ways that we cannot anticipate. The scientific engineer must dispel the "darkness over the deep" (<i>Genesis </i>1:2) because mastery comes from enlightenment. But in doing so we change both the world and ourselves. The unknown is not only over "the waters" but also in ourselves.</div></div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com3tag:blogger.com,1999:blog-9140612503596105113.post-6680912363234552712012-01-09T11:35:00.000+02:002012-01-09T11:35:19.181+02:00The Age of Imagination<div dir="ltr" style="text-align: left;" trbidi="on"><br />
<div style="text-align: justify;">This is not only the Age of Information, this is also the Age of Imagination. Information, at any point in time, is bounded, while imagination is always <a href="http://decisions-and-info-gaps.blogspot.com/2011/12/jabberwocky-or-grand-unified-theory-of.html" target="_blank"><span style="color: blue;">unbounded</span></a>. We are overwhelmed more by the potential for new ideas than by the admittedly vast existing knowledge. We are drunk with the excitement of the unknown. Drunks are sometimes not a pretty sight; Isaiah (28:8) is very <a href="http://ebible.org/bible/kjv/Isaiah.htm" target="_blank"><span style="color: blue;">graphic</span></a>.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">It is true that topical specialization occurs, in part, due to what we proudly call the explosion of knowledge. There is so much to know that one must ignore huge tracts of knowledge. But that is only half the story. The other half is that we have begun to discover the unknown, and its lure is irresistible. Like the scientific and global explorers of the early modern period - <a href="http://books.google.co.il/books/about/The_discoverers.html?id=aEr07wJ21NYC&redir_esc=y" target="_blank"><span style="color: blue;">The Discoverers</span></a> as Boorstin calls them - we are intoxicated by the potential "out there", beyond the horizon, beyond the known. That intoxication can distort our vision and judgment.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Consider <a href="http://decisions-and-info-gaps.blogspot.com/2011/12/picking-theory-is-like-building-boat-at.html" target="_blank"><span style="color: blue;">Reuven's comment</span></a>, from long experience, that "Engineers use formulas and various equations without being aware of the theories behind them." A pithier version was said to me by an acquisitions editor at Oxford University Press: "Engineers don't read books." She should know.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Engineers are imaginative and curious. They are seekers, and they find wonderful things. But they are too engrossed in inventing and building The New, to be much engaged with The Old. "Scholarship", wrote <a href="http://books.google.co.il/books?id=nNbWAO1M6hsC&pg=PA28&lpg=PA28&dq=%22intimate+and+systematic+familiarity+with+past+cultural+achievements%22+Veblen&source=bl&ots=gGsp_gdrrQ&sig=zWIqfSka39V99WgFC5CQMBDwUqA&hl=en&sa=X&ei=eV8FT-WvMIbAswav2OiCDw&redir_esc=y#v=onepage&q&f=false" target="_blank"><span style="color: blue;">Thorstein Veblen</span></a>, is "an intimate and systematic familiarity with past cultural achievements." Engineers - even research engineers and professors of engineering - spend very little time with past masters. How many computer scientists scour the works of <a href="http://books.google.co.il/books/about/Charles_Babbage_on_the_principles_and_de.html?id=K20LAQAAIAAJ&redir_esc=y" target="_blank"><span style="color: blue;">Charles Babbage</span></a>? How often do thermal engineers study the writings of <a href="http://zapatopi.net/kelvin/papers" target="_blank"><span style="color: blue;">Lord Kelvin</span></a>? A distinguished professor of engineering, himself a member of the US National Academy of Engineering, once told me that there is little use for journal articles more than a few years old.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Fragmentation of knowledge results from the endless potential for new knowledge. Seekers - engineers and the scientists of nature, society and humanity - move inexorably apart from one another. But nonetheless it's all connected; <a href="http://www.americanscientist.org/bookshelf/pub/e-o-wilsons-consilience-a-noble-unifying-vision-grandly-expressed" target="_blank"><span style="color: blue;">consilient</span></a>. Technology alters how we live. Science alters what we think. How can we keep track of it all? How can we have some at least vague and preliminary sense of where we are heading and whether we value the prospect?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The first prescription is to be aware of the problem, and I greatly fear that many movers and shakers of the modern age are unaware. The second prescription is to identify who should take the lead in nurturing this awareness. That's easy: teachers, scholars, novelists, intellectuals of all sorts.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Isaiah struggled with this long ago. "Priest and prophet erred with liquor, were swallowed by wine."(<a href="http://ebible.org/bible/kjv/Isaiah.htm" target="_blank"><span style="color: blue;">Isaiah, 28:7</span></a>) We are drunk with the excitement of the unknown. Who can show the way?</div></div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com3tag:blogger.com,1999:blog-9140612503596105113.post-36706638137904783972012-01-04T08:12:00.002+02:002012-01-05T10:18:48.634+02:00Mind or Stomach? Imagination or Necessity?<div dir="ltr" style="text-align: left;" trbidi="on"><div style="text-align: justify;"></div><div style="text-align: justify;">"An army marches on its stomach" <a href="http://www.brainyquote.com/quotes/quotes/n/napoleonbo130788.html" target="_blank"><span style="color: blue;">said</span></a> Napoleon, who is also credited with <a href="http://www.brainyquote.com/quotes/quotes/n/napoleonbo150185.html" target="_blank"><span style="color: blue;">saying</span></a> "Imagination rules the world". Is history driven by raw necessity and elementary needs? Or is history hewn by people from their imagination, dreams and ideas?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The answer is simple: 'Both'. The challenge is to untangle imagination from necessity. Consider these examples:</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">An ancient Jewish saying is "Without flour, there is no Torah. Without Torah there is no flour." (<a href="http://www.chabad.org/library/article_cdo/aid/2019/jewish/Chapter-Three.htm" target="_blank"><span style="color: blue;">Avot 3:17</span></a>) Scholars don't eat much, but they do need to eat. And if you feed them, they produce wonders.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Give a typewriter to a monkey and he might eventually tap out Shakespeare's sonnets, but it's not very likely. Give that monkey an inventive mind and he will produce poetry, a vaccine against polio, and the atom bomb. Why the bomb? He needed it.</div><div style="text-align: justify;"><br />
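A back-of-the-envelope calculation makes "not very likely" concrete. This sketch is my illustration, not part of the original post; the 27-key typewriter (26 letters plus a space bar) and the roughly 600-character length of a sonnet are assumed round numbers:

```python
import math

# Illustrative assumptions: a 27-key typewriter (26 letters plus a
# space bar) and a sonnet of roughly 600 characters.
keys = 27
length = 600

# The chance that one random attempt reproduces the sonnet exactly is
# keys**(-length). That number underflows ordinary floats, so we work
# with its base-10 logarithm instead.
log10_p = -length * math.log10(keys)
print(f"P(one random attempt succeeds) is about 10^{log10_p:.0f}")
# prints: P(one random attempt succeeds) is about 10^-859
```

Even a monkey typing a million attempts per second for the age of the universe (about 4 x 10^17 seconds) makes only some 4 x 10^23 attempts, nowhere near the roughly 10^859 needed on average.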
</div><div style="text-align: justify;">Necessity is the mother of invention, they say, but it's actually a two-way street. For instance, human inventiveness includes dreams of cosmic domination, leading to war. Hence the need for that bomb. Satisfying a need, like the need for flour, induces inventiveness. And this inventiveness, like the discovery of genetically modified organisms, creates new needs. Necessity induces inventiveness, and inventiveness creates new dangers, challenges and needs. This cycle is endless because the realm of imagination is boundless, far greater than prosaic reality, as we discussed <a href="http://decisions-and-info-gaps.blogspot.com/2011/12/jabberwocky-or-grand-unified-theory-of.html" target="_blank"><span style="color: blue;">elsewhere</span></a>.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Imagination and necessity are intertwined, but still are quite different. Necessity focusses primarily on what we know, while imagination focusses on the unknown.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">We know from experience that we need food, shelter, warmth, love, and so on. These requirements force themselves on our awareness. Even the need for protection against surprise is known, though the surprise is not.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Imagination operates in the realm of the unknown. We seek the new, the interesting, or the frightful. Imagination feeds our fears of the unknown and nurtures our hopes for the unimaginable. We explore the bounds of the possible and try breaking through to the impossible.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Mind or stomach? Imagination or necessity? Every 'known' has an 'unknown' lurking behind it, and every 'unknown' may some day be discovered or dreamed into existence. Every mind has a stomach, and a stomach with no mind is not human.</div></div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com9tag:blogger.com,1999:blog-9140612503596105113.post-18711920730625942472011-12-19T09:30:00.000+02:002011-12-19T09:30:37.691+02:00Jabberwocky. Or: Grand Unified Theory of Uncertainty???<div dir="ltr" style="text-align: left;" trbidi="on"><div style="text-align: justify;"></div><br />
<div style="text-align: justify;"><a href="http://www.jabberwocky.com/carroll/jabber/jabberwocky.html" target="_blank"><span style="color: blue;">Jabberwocky</span></a>, Lewis Carroll's whimsical nonsense poem, uses made-up words to create an atmosphere and to tell a story. "Brillig", "frumious", "vorpal" and "uffish" have no lexical meaning, but they <i>could</i> have. The poem demonstrates that the realm of imagination exceeds the bounds of reality just as the set of possible words and meanings exceeds its real lexical counterpart.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Uncertainty thrives in the realm of imagination, incongruity, and contradiction. Uncertainty falls in the realm of science fiction as much as in the realm of science. People have struggled with uncertainty for ages and many theories of uncertainty have appeared over time. How many uncertainty theories do we need? Lots, and forever. Would we say that of physics? No, at least not forever.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Can you think inconsistent, incoherent, or erroneous thoughts? I can. (I do it quite often, usually without noticing.) For those unaccustomed to thinking incongruous thoughts, and who need a bit of help to get started, I can recommend thinking of "two meanings packed into one word like a portmanteau," like 'fuming' and 'furious' to get '<a href="http://www.literature.org/authors/carroll-lewis/the-hunting-of-the-snark/preface.html" target="_blank"><span style="color: blue;">frumious</span></a>' or 'snake' and 'shark' to get '<a href="http://www.literature.org/authors/carroll-lewis/the-hunting-of-the-snark" target="_blank"><span style="color: blue;">snark</span></a>'.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Portmanteau words are a start. Our task now is portmanteau thoughts. Take for instance the idea of a 'thingk':</div><br />
When I think a thing I've thought,<br />
I have often felt I ought<br />
To call this thing I think a "Thingk",<br />
Which ought to save a lot of ink.<br />
<br />
The participle is written "thingking",<br />
(Which is where we save on inking,)<br />
Because "thingking" says in just one word:<br />
"Thinking of a thought thing." Absurd!<br />
<br />
All this shows high-power abstraction.<br />
(That highly touted human contraption.)<br />
Using symbols with subtle feint,<br />
To stand for something which they ain't.<br />
<br />
<div style="text-align: justify;">Now that wasn't difficult: two thoughts at once. Now let those thoughts be contradictory. To use a prosaic example: thinking the unthinkable, which I suppose is 'unthingkable'. There! You did it. You are on your way to a rich and full life of thinking incongruities, fallacies and contradictions. We can hold in our minds thoughts of 4-sided triangles, parallel lines that intersect, and endless other seeming impossibilities from super-girls like <a href="http://en.wikipedia.org/wiki/Pippi_Longstocking" target="_blank"><span style="color: blue;">Pippi Longstocking</span></a> to life on Mars (some of which may actually be true, or at least possible).</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Scientists, logicians, and saints are in the business of dispelling all such incongruities, errors and contradictions. Banishing inconsistency is possible in science because (or if) there is only one coherent world. Belief in one coherent world and one <a href="http://en.wikipedia.org/wiki/Grand_Unified_Theory" target="_blank"><span style="color: blue;">grand unified theory</span></a> is the modern secular version of the ancient monotheistic intuition of one universal God (in which saints tend to believe). Uncertainty thrives in the realm in which scientists and saints have not yet completed their tasks (perhaps because they are incompletable). For instance, we must entertain a wide range of conflicting conceptions when we do not yet know how (or whether) quantum mechanics can be reconciled with general relativity, or Pippi's strength reconciled with the limitations of physiology. As <a href="http://www.bartleby.com/159/34.html" target="_blank"><span style="color: blue;">Henry Adams wrote</span></a>:</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">"Images are not arguments, rarely even lead to proof, but the mind craves them, and, of late more than ever, the keenest experimenters find twenty images better than one, especially if contradictory; since the human mind has already learned to deal in contradictions."</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The very idea of a rigorously logical theory of uncertainty is startling and implausible because the realm of the uncertain is inherently incoherent and contradictory. Indeed, the first uncertainty theory - probability - <a href="http://books.google.co.il/books/about/The_emergence_of_probability.html?id=Z2g0F1V5DywC&redir_esc=y" target="_blank"><span style="color: blue;">emerged</span></a> many centuries after the invention of the axiomatic method in mathematics. Today we have many theories of uncertainty: <a href="http://en.wikipedia.org/wiki/Probability_theory" target="_blank"><span style="color: blue;">probability</span></a>, <a href="http://www.sipta.org/" target="_blank"><span style="color: purple;">imprecise probability</span></a>, <a href="http://en.wikipedia.org/wiki/Information_theory" target="_blank"><span style="color: blue;">information theory</span></a>, <span style="color: purple;"><a href="http://www.carleton-scientific.com/isipta/PDF/024.pdf" target="_blank"><span style="color: purple;">generalized information theory</span></a>,</span> <a href="http://plato.stanford.edu/entries/logic-fuzzy" target="_blank"><span style="color: blue;">fuzzy logic</span></a>, <a href="http://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory" target="_blank"><span style="color: purple;">Dempster-Shafer theory</span></a>, <a href="http://info-gap.com/" target="_blank"><span style="color: blue;">info-gap theory</span></a>, and more (the list is a bit uncertain). Why such a long and diverse list? It seems that in constructing a logically consistent theory of the logically <i>in</i>consistent domain of uncertainty, one cannot capture the whole beast all at once (though I'm uncertain about this).</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">A theory, in order to be scientific, must exclude something. A scientific theory makes statements such as "This happens; that doesn't happen." Karl Popper explained that a scientific theory must contain statements that are at risk of being wrong, statements that could be <a href="http://www.experiment-resources.com/falsifiability.html" target="_blank"><span style="color: blue;">falsified</span></a>. Deborah Mayo demonstrated how science grows by discovering and recovering from <a href="http://books.google.co.il/books/about/Error_and_the_growth_of_experimental_kno.html?id=FEsAh4L9r_EC&redir_esc=y" target="_blank"><span style="color: blue;">error</span></a>.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The realm of uncertainty contains contradictions (ostensible or real) such as the pair of statements: "Nine year old girls can lift horses" and "<a href="http://en.wikipedia.org/wiki/Muscle_contraction" target="_blank"><span style="color: blue;">Muscle fiber generates tension</span></a> through the action of actin and myosin cross-bridge cycling". A logically consistent theory of uncertainty can handle improbabilities, as can scientific theories like quantum mechanics. But a logical theory cannot encompass outright contradictions. Science investigates a domain: the natural and physical worlds. Those worlds, by virtue of their existence, are perhaps coherent in a way that can be reflected in a unified logical theory. Theories of uncertainty are directed at a larger domain: the natural and physical worlds and all imaginable (and unimaginable) other worlds. That larger domain is definitely <i>not</i> coherent, and a unified logical theory would seem to be unattainable. Hence many theories of uncertainty are needed.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Scientific theories are good to have, and we do well to encourage the scientists. But it is a mistake to think that the scientific paradigm is suitable to all domains, in particular, to the study of uncertainty. Logic is a powerful tool and the axiomatic method assures the logical consistency of a theory. For instance, Leonard Savage argued that personal probability is a "<a href="http://books.google.co.il/books?id=zSv6dBWneMEC&q=code+of+consistency#v=snippet&q=code%20of%20consistency&f=false" target="_blank"><span style="color: blue;">code of consistency</span></a>" for choosing one's behavior. Jim March compares the rigorous logic of mathematical theories of decision to strict religious morality. Consistency between values and actions is commendable says March, but he notes that one sometimes needs to deviate from perfect morality. While "[s]tandard notions of intelligent choice are theories of strict morality ... saints are a luxury to be encouraged only in <a href="http://books.google.co.il/books?id=R2dleyi_iTMC&pg=PA51&lpg=PA51&dq=saints+are+a+luxury+to+be+encouraged+only+in+small+numbers&source=bl&ots=VIfFyf8Moq&sig=QhIqkc6cdUQiwPGfaeq6o1rkPlA&hl=en&ei=kI3pTtijCYPc8gOv2PGhCg&sa=X&oi=book_result&ct=result&redir_esc=y#v=onepage&q=saints%20are%20a%20luxury%20to%20be%20encouraged%20only%20in%20small%20numbers&f=false" target="_blank"><span style="color: blue;">small numbers</span>."</a> Logical consistency is a merit of any single theory, including a theory of uncertainty. However, insisting that the same logical consistency apply over the entire domain of uncertainty is like asking reality and saintliness to make peace.</div><br />
</div>
Yakov Ben-Haim, http://www.blogger.com/profile/10765902456064490854 (3 comments)

2011-12-11: Picking a Theory is Like Building a Boat at Sea
<div dir="ltr" style="text-align: left;" trbidi="on"><br />
<div style="text-align: center;"><i>"We are like sailors who on the open sea must reconstruct their ship</i></div><div style="text-align: center;"><i> but are never able to start afresh from the bottom." </i></div><div style="text-align: center;">Otto Neurath's analogy in the words of Willard V. Quine</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Engineers, economists, social planners, security strategists, and others base their plans and decisions on theories. They often argue long and hard over which theory to use. Is it ever right to use a theory that we know is empirically wrong, especially if a true (or truer) theory is available? Why is it so difficult to pick a theory?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Let's consider two introductory examples.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">You are an engineer designing a robot. You must calculate the forces needed to achieve specified motions of the robotic arms. You can base these calculations on either of two theories. One theory assumes that an object comes to rest unless a force acts upon it. Let's call this axiom <b>A</b>. The other theory assumes that an object moves at constant speed unless a force acts upon it. Let's call this axiom <b>G</b>. Axiom <b>A</b> agrees with observation: Nothing moves continuously without the exertion of force; an object will come to rest unless you keep pushing it. Axiom <b>G</b> contradicts all observation; no experiment illustrates the perpetual motion postulated by the axiom. If all else is the same, which theory should you choose?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Axiom <b>A</b> is Aristotle's law of inertia, which contributed little to the development of mechanical dynamics. Axiom <b>G</b> is Galileo's <a href="http://en.wikipedia.org/wiki/Inertia"><span class="Apple-style-span" style="color: blue;">law of inertia:</span></a> one of the most fruitful scientific ideas of all time. Why is an undemonstrable assertion - axiom <b>G</b> - a good starting point for a theory?</div><div style="text-align: justify;"><br />
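<div style="text-align: justify;">The engineering stakes of this choice can be made concrete with a small sketch. As a hedged illustration (the mass, friction coefficient, and one-dimensional force laws below are invented for the example, not taken from the post), here is how the two axioms translate into force calculations for a robotic arm:</div>

```python
# Toy comparison of the two inertia axioms as force laws (illustrative only).
# Galileo/Newton (axiom G): net force changes velocity; friction is a correction.
# Aristotle-style (axiom A): force is what sustains motion; no force, no motion.

m = 2.0   # arm mass in kg (assumed for the example)
b = 0.5   # viscous friction coefficient (assumed)

def force_galilean(v, a):
    """Force needed for acceleration a at velocity v: F = m*a + b*v."""
    return m * a + b * v

def force_aristotelian(v):
    """Force needed merely to sustain velocity v: F = b*v."""
    return b * v

# To cruise at a steady 1 m/s the two prescriptions happen to agree,
# because only friction must be overcome:
#   force_galilean(1.0, 0.0) == force_aristotelian(1.0) == 0.5
# But to accelerate the arm at 3 m/s^2 they diverge sharply:
#   force_galilean(1.0, 3.0) == 6.5, while axiom A has nothing to say.
```

<div style="text-align: justify;">The fruitfulness of axiom <b>G</b> shows up in the second calculation: it separates the inertial term (m*a) from the frictional one (b*v), a distinction axiom <b>A</b> cannot express.</div>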
</div><div style="text-align: justify;">Consider another example.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">You are an economist designing a market-based policy to induce firms to reduce pollution. You will use an economic theory to choose between policies. One theory assumes that firms face pure competition, meaning that no single firm can influence market prices. Another theory provides agent-based game-theoretic characterization of how firms interact (without colluding) by observing and responding to price behavior of other firms and of consumers.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Pure competition is a stylized idealization (like axiom <b>G</b>). Game theory is much more realistic (like axiom <b>A</b>), but may obscure essential patterns in its massive detail. Which theory should you use?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">We will not address the question of <i>how</i> to choose a theory upon which to base a decision. We will focus on the question: <i>why</i> is theory selection so difficult? We will discuss four tensions.</div><div style="text-align: justify;"><br />
</div><div style="text-align: center;"><i>"Thanks to the negation sign, there are as many truths as falsehoods;</i></div><div style="text-align: center;"><i>we just can't always be sure which are which."</i> Willard V. <a href="http://books.google.com/books?id=BVr1IHM5yXUC&q=Thanks+to+the+negation+sign#v=snippet&q=Thanks%20to%20the%20negation%20sign&f=false"><span class="Apple-style-span" style="color: blue;">Quine</span></a></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>The tension between right and right.</b> The number of possible theories is infinite, and sometimes it's hard to separate the wheat from the chaff, as suggested by the quote from Quine. As an example, I have a book called <i>A Modern Guide to Macroeconomics: An Introduction to Competing Schools of Thought</i> by <a href="http://books.google.com/books/about/A_modern_guide_to_macroeconomics.html?id=LdG7AAAAIAAJ"><span class="Apple-style-span" style="color: blue;">Snowdon, Vane and Wynarczyk.</span></a> It's a wonderful overview of about a dozen theories developed by leading economic scholars, many of them Nobel Prize Laureates. The theories are all fundamentally different. They use different axioms and concepts and they compete for adoption by economists. These theories have been studied and tested upside down and backwards. However, economic processes are very complex and variable, and the various theories succeed in different ways or in different situations, so the jury is still out. The choice of a theory is no simple matter because many different theories can all seem right in one way or another.</div><div style="text-align: justify;"><br />
</div><div style="text-align: center;"><i>"The fox knows many things, but the hedgehog knows one big thing." </i><a href="http://en.wikipedia.org/wiki/The_Hedgehog_and_the_Fox" target="_blank"><span class="Apple-style-span" style="color: blue;">Archilochus</span></a></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>The fox-hedgehog tension.</b> This aphorism by Archilochus metaphorically describes two types of theories (and two types of people). Fox-like theories are comprehensive and include all relevant aspects of the problem. Hedgehog-like theories, in contrast, skip the details and focus on essentials. Axiom <b>A</b> is fox-like because the complications of friction are acknowledged from the start. Axiom <b>G</b> is hedgehog-like because inertial resistance to change is acknowledged but the complications of friction are left for later. It is difficult to choose between these types of theories because it is difficult to balance <i>comprehensiveness</i> against <i>essentialism.</i> On the one hand, all relevant aspects of the problem should be considered. On the other hand, don't get bogged down in endless details. This fox-hedgehog tension can be managed by weighing the context, goals and implications of the decision. We won't expand on this idea since we're not considering <i>how</i> to choose a theory; we're only examining <i>why</i> it's a difficult choice. However, the idea of resolving this tension by goal-directed choice motivates the third tension.</div><div style="text-align: justify;"><br />
</div><div style="text-align: center;"><i>"Beyond this island of meanings which in their own nature are true or false</i></div><div style="text-align: center;"><i>lies the ocean of meanings to which truth and falsity are irrelevant."</i> <a href="http://books.google.com/books?id=ZIgn5l73hmEC&pg=PA80&lpg=PA80&dq=Beyond+this+island+of+meanings+which+in+their+own+nature+are+true+or+false&source=bl&ots=UZ74ALzbVE&sig=67RklvZaoe3pMQR3liLsDzkTevU&hl=en&ei=SNXVTp6iPNTP4QThs7XFAQ&sa=X&oi=book_result&ct=result&resnum=1&ved=0CB4Q6AEwAA#v=onepage&q=Beyond%20this%20island%20of%20meanings%20which%20in%20their%20own%20nature%20are%20true%20or%20false&f=false" target="_blank"><span class="Apple-style-span" style="color: blue;">John Dewey</span></a></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>The truth-meaning tension.</b> Theories are collections of statements like axioms <b>A</b> and <b>G</b> in our first example. Statements carry meaning, and statements can be either true or false. Truth and meaning are different. For instance, "Archilochus was a Japanese belly dancer" has meaning, but is not true. The quote from Dewey expresses the idea that "meaning" is a broader description of statements than "truth". All true statements mean something, but not all meaningful statements are true. That does <i>not</i> imply, however, that all untrue meaningful statements are false, as we will see.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">We know the meanings of words and sentences from experience with language and life. A child learns the meanings of words - chair, mom, love, good, bad - by experience. Meanings are learned by pointing - this is a chair - and also by experiencing what it means to love or to be good or bad.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Truth is a different concept. John Dewey wrote that</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">"truths are but one class of meanings, namely, those in which a claim to verifiability by their consequences is an intrinsic part of their meaning. Beyond this island of meanings which in their own nature are true or false lies the ocean of meanings to which truth and falsity are irrelevant. We do not inquire whether Greek civilization was true or false, but we are immensely concerned to penetrate its meaning."</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">A true statement, in Dewey's sense, is one that can be confirmed by experience. Many statements are meaningful, even important and useful, but neither true nor false in this experimental sense. Axiom <b>G</b> is an example.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Our quest is to understand why the selection of a theory is difficult. Part of the challenge derives from the tension between meaning and truth. We select a theory for use in formulating and evaluating a plan or decision. The decision has implications: what would it <i>mean</i> to do this rather than that? Hence it is important that the meaning of the theory fit the context of the decision. Indeed, hedgehogs would say that getting the meaning and implication right is the essence of good decision making.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">But what if a relevantly meaningful theory is unprovable or even false? Should we use a theory that is meaningful but not verifiable by experience? Should we use a meaningful theory that is even wrong? This quandary is related to the fox-hedgehog tension because the fox's theory is so full of true statements that its meaning may be obscured, while the hedgehog's bare-bones theory has clear relevance to the decision to be made, but may be either false or too idealized to be tested.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Galileo's axiom of inertia is an idealization that is unsupported by experience because friction can never be avoided. Axiom <b>G</b> assumes conditions that cannot be realized so the axiom can never be tested. Likewise, pure competition is an idealization that is rarely if ever encountered in practice. But these theories capture the essence of many situations. In practical terms, what it <i>means</i> to get the robotic arm from here to there is to apply net forces that overcome Galilean inertia. But actually designing a robot requires considering details of dissipative forces like friction. What it <i>means</i> to be a small business is that the market price of your product is beyond your control. But actually running a business requires following and reacting to prices in the store next door.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">It is difficult to choose between a relevantly meaningful but unverifiable theory, and a true theory that is perhaps not quite what we mean.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>The knowledge-ignorance tension.</b> Recall that we are discussing theories in the service of decision-making by engineers, social scientists and others. A theory should facilitate the use of our knowledge and understanding. However, in some situations our ignorance is vast and our knowledge will grow. Hence a theory should also account for ignorance and be able to accommodate new knowledge.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Let's take an example from theories of decision. The <a href="http://en.wikipedia.org/wiki/Independence_of_irrelevant_alternatives" target="_blank"><span class="Apple-style-span" style="color: blue;">independence axiom</span></a> is fundamental in various decision theories, for instance in von Neumann-Morgenstern <a href="http://en.wikipedia.org/wiki/Expected_utility_hypothesis" target="_blank"><span class="Apple-style-span" style="color: blue;">expected utility theory</span></a>. It says that one's choices should be independent of irrelevant alternatives. Suppose you are offered the dinner choice between chicken and fish, and you choose chicken. The server returns a few minutes later saying that beef is also available. If you switch your choice from chicken to fish you are violating the independence axiom. You prefer both chicken and fish to beef, so the beef option shouldn't alter the chicken-fish preference.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">But let's suppose that when the server returned and mentioned beef, your physician advised you to reduce your cholesterol intake (so your preference for beef is lowest) which prompted your wife to say that you should eat fish at least twice a week because of vitamins in the oil. So you switch from chicken to fish. Beef is not chosen, but new information that resulted from introducing the irrelevant alternative has altered the chicken-fish preference.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">One could argue for the independence axiom by saying that it applies only when all relevant information (like considerations of cholesterol and fish oil) is taken into account. On the other hand, one can argue against the independence axiom by saying that new relevant information quite often surfaces unexpectedly. The difficulty is to judge the extent to which ignorance and the emergence of new knowledge should be central in a decision theory.</div><div style="text-align: justify;"><br />
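<div style="text-align: justify;">The dinner story can be restated as a tiny choice model. The sketch below is a hedged illustration, not a formal statement of expected utility theory; the utility numbers and function names are invented for the example:</div>

```python
# A choice function satisfies the independence axiom when adding an
# unchosen option never flips the ranking among the original options.

def choose(menu, utility):
    """Pick the item in the menu with the highest utility."""
    return max(menu, key=lambda item: utility[item])

# Before the server returns: chicken is preferred to fish, beef ranked last.
u_before = {"chicken": 3, "fish": 2, "beef": 1}
assert choose({"chicken", "fish"}, u_before) == "chicken"
# Independence: adding beef (never chosen) does not change the winner.
assert choose({"chicken", "fish", "beef"}, u_before) == "chicken"

# But if mentioning beef triggers new information (the doctor, the spouse)
# that rewrites the utilities themselves, the chicken-fish preference flips
# even though beef is still not chosen:
u_after = {"chicken": 2, "fish": 3, "beef": 0}
assert choose({"chicken", "fish", "beef"}, u_after) == "fish"
```

<div style="text-align: justify;">The model makes the debate precise: the axiom constrains the choice function for <i>fixed</i> utilities, and the dispute is over whether utilities can be held fixed while the menu changes.</div>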
</div><div style="text-align: justify;"><b>Wrapping up.</b> Theories express our knowledge and understanding about the unknown and confusing world. Knowledge begets knowledge. We use knowledge and understanding - that is, theory - in choosing a theory. The process is difficult because it's like building a boat on the open sea as <a href="http://en.wikipedia.org/wiki/Otto_Neurath" target="_blank"><span class="Apple-style-span" style="color: blue;">Otto Neurath</span></a> once said.</div></div>
Yakov Ben-Haim, http://www.blogger.com/profile/10765902456064490854 (3 comments)

2011-11-29: Fog of War
<div dir="ltr" style="text-align: left;" trbidi="on"><br />
<div style="text-align: center;"><i>"War is the realm of uncertainty;</i></div><div style="text-align: center;"><i>three quarters of the factors on which action in war is based </i></div><div style="text-align: center;"><i>are wrapped in a fog of greater or lesser uncertainty."</i></div><div style="text-align: center;">Carl von Clausewitz, <i>On War</i></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">What makes a great general?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Hannibal changed Carthaginian strategy from naval to land warfare, and beat the Romans in nearly every encounter. Julius Caesar commanded the undying loyalty of his officers and soldiers. Napoleon Bonaparte invented the modern concept of total war with a citizen army. Was their genius in strategy, or tactics, or logistics, or charisma? Or was it crude luck? Or was it the exploitation of uncertainty?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">War is profoundly influenced by technology, social organization, human psychology and political goals. Success in war requires understanding and control of these factors. War consumes vast human and material resources and demands "genius, improvisation, and energy of mind" as <a href="http://books.google.com/books?id=m4KpV6XAbZsC&pg=PA831&lpg=PA831&dq=genius,+improvisation,+and+energy+of+mind+-+winston+churchill&source=bl&ots=w7iNsNx7so&sig=D77MS9gwe3_7WYWFzVT92qvmA0U&hl=en&ei=yvDNTtDaEZLG8QOUieHZDQ&sa=X&oi=book_result&ct=result&resnum=2&ved=0CC0Q6AEwAQ#v=onepage&q=genius&f=false"><span class="Apple-style-span" style="color: blue;">Winston Churchill said.</span></a> And yet, Clausewitz writes: "No other human activity is so continuously or universally bound up with chance."</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Why? What does this imply about the successful military commander? What does it mean for human endeavor and history in general?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Clausewitz uses the terms "chance" and "uncertainty", sometimes interchangeably, to refer to two different concepts. An event occurs by chance if it is unexpected, or its origin is unknown, or its impact is surprising. Adverse chance events provoke "uncertainty, the psychological state of discomfort from confusion or lack of information" (Katherine Herbig, reference below).</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Chance and uncertainty are dangerous because they subvert plans and diminish capabilities. Soldiers have been aware of both the dangers and the advantages of surprise since they first battered each other with sticks. Conventional military theorists aimed to avoid or ameliorate chance events by careful planning, military intelligence, training and discipline, communication, command and control. Clausewitz also recognized that steadfast faithfulness to mission and determination against adversity are essential in overcoming chance events and the debilitating effect of uncertainty. But "Clausewitz dismisses as worse than useless efforts to systematize warfare with rules and formulas. Such systems are falsely comforting, he says, because they reduce the imponderables of war to a few meagre certainties about minor matters" (Herbig). Clausewitz' most original contribution was in building a systematic theory of war in which the unavoidability of chance, and its opportunities, are central.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Why is uncertainty (in the sense of lack of knowledge) unavoidable and fundamental in war? Clausewitz' answer is expressed in his metaphor of friction. As Herbig explains:</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">"Friction is the decremental loss of effort and intention caused by human fallibility, compounded by danger and exhaustion. Like the mechanical phenomenon of friction that reduces the efficiency of machinery with moving parts, Clausewitz' friction reduces the efficiency of the war machine. It sums up all the little things that always go wrong to keep things from being done as easily and quickly as intended. ...</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">"What makes friction more than a minor annoyance in war is its confounding with chance, which multiplies friction in random, unpredictable ways."</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">War, like history, runs on the cumulative effect of myriad micro-events. Small failures are compounded because war is a coordinated effort of countless local but inter-dependent occurrences. Generals, like symphony conductors, choose the score and set the pace, but the orchestra plays the notes. A mis-tuned violin, or a drummer who mis-counts his entry, can ruin the show. Moses led the children of Israel out of Egypt, but he'd have looked pretty funny if they had scattered to the four winds. Moses' genius as a leader wasn't plied against Pharaoh (Moses had help there), but rather against endless bickering and revolt once they reached the desert.</div><div style="text-align: justify;"><br />
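<div style="text-align: justify;">Herbig's point that friction "sums up all the little things that always go wrong" can be caricatured numerically. In the toy model below (all parameters are invented, taken neither from Clausewitz nor from Herbig), overall effectiveness is the product of many nearly-perfect micro-events:</div>

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def campaign_effectiveness(n_events=200, mean_loss=0.01):
    """Compound many small random losses: each micro-event keeps a random
    fraction between (1 - 2*mean_loss) and 1 of the remaining effort."""
    eff = 1.0
    for _ in range(n_events):
        eff *= 1.0 - random.uniform(0.0, 2.0 * mean_loss)
    return eff

# 200 micro-events that each lose about 1% on average leave roughly
# (0.99)**200 of the original effort, i.e. well under a fifth, not 99%.
eff = campaign_effectiveness()
```

<div style="text-align: justify;">The compounding is the point: no single loss is large, yet together they dominate the outcome, which is why "no other human activity is so continuously or universally bound up with chance."</div>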
</div><div style="text-align: justify;">Uncertainty originates at the tactical rather than the strategic level. The general can't know countless local occurrences: a lost supply plane, failed equipment here, over-reaction there, or complacency someplace else. As an example, the New York Times reported on <a href="http://www.nytimes.com/2011/11/28/world/asia/pakistan-and-united-states-bitter-allies-in-fog-of-war.html?ref=world"><span class="Apple-style-span" style="color: blue;">27 November 2011:</span></a></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">"The NATO air attack that killed at least two dozen Pakistani soldiers over the weekend reflected a fundamental truth about American-Pakistani relations when it comes to securing the unruly border with Afghanistan: the tactics of war can easily undercut the broader strategy that leaders of both countries say they share.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">"The murky details complicated matters even more, with Pakistani officials saying the attack on two Pakistani border posts was unprovoked and Afghan officials asserting that Afghan and American commandos called in airstrikes after coming under fire from Pakistani territory."</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Central control is critical, but also profoundly limited by the micro-event texture of history.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Conversely, uncertainty can be exploited at the tactical level by flexible and creative response to random opportunities. The field commander has local knowledge that enables decisive initiative: the sleeping sentinel, the bridge not destroyed, the deserted town. The general's brilliance is in forging a war machine whose components both exploit uncertainty and are resilient to surprise.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Uncertainty is central in history at large, like in war, because they both emerge from the churning of individual events. In democratic societies, legislatures pass laws and executive branches formulate and implement policies. But only active participation of the citizenry brings life and reality to laws and policies. Conversely, citizen resistance or even apathy dooms the best policies to failure. This explains the failure of democratic institutions that are imported precipitously to countries with incompatible social and political traditions. Governments formulate policy, but implementation occurs in the context of social attitudes and <a href="http://decisions-and-info-gaps.blogspot.com/2011/11/can-we-replay-history.html"><span class="Apple-style-span" style="color: blue;">historical memory.</span></a> You can elect legislatures and presidents but you can't elect the public. Non-centralized beliefs and actions also dominate the behavior of industrial economies. The actions of countless households, firms and investors can vitiate the best laid plans of monetary and fiscal authorities. All this adds up to Clausewitz' concept of friction: global uncertainty accumulating from countless local deviations.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">In peace, as in war, the successful response to uncertainty is to face it, grapple with it, exploit it, restrain it, but never hope to abolish it. Uncertainty is inevitable, and sometimes even propitious. The propensity for war is the ugliest attribute of our species. Nonetheless, what we learn about uncertainty from the study of war applies to all our endeavors: in <a href="http://www.leaderexcel.com/040305.html"><span class="Apple-style-span" style="color: blue;">business,</span></a> in <a href="http://bilder.buecher.de/zusatz/29/29337/29337705_lese_1.pdf"><span class="Apple-style-span" style="color: blue;">politics</span></a> and beyond. Waging peace demands the same staunchness, determination and inventive flexibility in the face of the unknown as does the successful pursuit of war.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Main source:</div><div style="text-align: justify;">Katherine L. Herbig, 1989, Chance and Uncertainty in <i>On War</i>, in Michael Handel, ed., <i>Clausewitz and Modern Strategy</i>, Frank Cass, London, pp.95-116.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">See also:</div><div style="text-align: justify;">Peter Paret, 1976, <i>Clausewitz and the State: The Man, His Theories, and His Times</i>, re-issued 2007, Princeton University Press. </div></div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com9tag:blogger.com,1999:blog-9140612503596105113.post-3850059784035760962011-11-10T09:15:00.000+02:002011-11-10T09:15:35.097+02:00Can We Replay History?<div dir="ltr" style="text-align: left;" trbidi="on"><br />
<div style="text-align: justify;">After the kids' party games and the birthday cake came the action-packed Steve McQueen movie. My friend's parents had rented a movie projector. They hooked up the reel and let it roll. But the high point came later when they ran the movie backwards. Bullets streamed back into guns, blows were retracted and fallen protagonists recoiled into action. The mechanism that pulls the celluloid film forward for normal showing can pull the film in the reverse direction, rolling it back onto the feeder reel and showing the movie in reverse.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">If you chuck a round pebble off a cliff it will fall in a graceful parabolic arch, gradually increasing its speed until it hits the ground. The same pebble, if shot from the point of impact, at the terminating angle and speed, will gracefully and obligingly retrace its path. (I'm ignoring wind and air friction that make things a bit more complicated.)</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Deterministic mechanisms, like the movie reel mechanism or the law of gravity, are reversible.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">History is different. Peoples' behavior is influenced by what they know. You pack an umbrella on a trip to the UK. Google develops search algorithms not search parties because their knowledge base is information technology not mountain trekking. Knowledge is powerful because it enables rational behavior: matching actions to goals. Knowledge transforms futile fumbling into intelligent behavior.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Knowledge underlies intelligent behavior, but knowledge is continually expanding. We discover new facts and relationships. We discover that things have changed. Therefore tomorrow's knowledge-based behavior will, to some extent, be unpredictable today because tomorrow's discoveries cannot be known today. Human behavior has an inherent element of <a href="http://www.technion.ac.il/yakov/phig03.pdf"><span class="Apple-style-span" style="color: blue;">indeterminism.</span></a> Intelligent learning behavior cannot be completely predicted.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Personal and collective history does not unfold like a pre-woven rug. Human history is fundamentally different from the trajectory of a pebble tossed from a cliff. History is the process of uncovering the unknown and responding to this new knowledge. The existence of the unknown creates the possibility of free will. The discovery of new knowledge introduces indeterminism and irreversibility into history, as explained by the philosophers <a href="http://www.amazon.com/Epistemics-Economics-Critique-Economic-Doctrines/dp/1560005580"><span class="Apple-style-span" style="color: blue;">G.L.S. Shackle</span></a> and <a href="http://books.google.com/books/about/The_open_universe.html?id=wPGKjcrS6DkC"><span class="Apple-style-span" style="color: blue;">Karl Popper.</span></a></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Nonetheless history is not erratic because each increment of new knowledge adds to the store of what was learned before. Memory is not perfect, either of individuals or groups, but it is powerful. History happens in historical context. For instance, one cannot understand the recent revolutions and upheavals in the Arab world from the perspective of 18th century European revolutions; the historical backgrounds are too different, and the outcomes in the Middle East will be different as well. Innovation, even revolution, is spurred by new knowledge laid over the old. A female municipal official slapped a Tunisian street vendor, Mohamed <a href="http://en.wikipedia.org/wiki/Mohamed_Bouazizi"><span class="Apple-style-span" style="color: blue;">Bouazizi.</span></a> That slap crystallized Mr Bouazizi's knowledge of his social impotence and lit the match with which he immolated himself and initiated conflagrations around the Mideast. New knowledge acts like thruster engines on the inertial body of memory. What is emerging in the Mideast is Middle Eastern, not European. What is emerging is the result of new knowledge: of the power of networking, of the mortality of dictators, of the limits of coercion, of the power of new knowledge itself and the possibilities embedded in tomorrow's unknowns.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Mistakes are made, even with the best intentions and the best possible knowledge. Even if analysts knew and understood all the actions of all actors on the stage of history, they still could not know what those people would learn tomorrow and how that new knowledge would alter their behavior. Mistakes are made because history does not unwind like a celluloid reel.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">That's not to say that analysts are never ignorant, negligent, stupid or malicious. It's to say that all actions are, in a sense, mistakes. Or, the biggest mistake of all is to think that we can know the full import of our actions. We cannot, because actions are tossed, like pebbles, into the dark pit of unknown possible futures. One cannot know all possible echoes, or whether some echo might be glass-shatteringly cataclysmic.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Mistakes can sometimes be corrected, but never undone. History cannot be run backwards, and you never get a second chance. Conversely, every instant is a new opportunity because the future is always uncertain. Uncertainty is the freedom to err, and the opportunity to create and discover. </div></div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com7tag:blogger.com,1999:blog-9140612503596105113.post-9333513010211525892011-10-31T08:23:00.001+02:002011-11-06T13:13:02.198+02:00The Language of Science and the Tower of Babel<div dir="ltr" style="text-align: left;" trbidi="on"><br />
<div style="text-align: justify;"><i>And God said: Behold one people with one language for them all ... and now nothing that they venture will be kept from them. ... [And] there God mixed up the language of all the land.</i> (Genesis, 11:6-9)</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><i>"Philosophy is written in this grand book the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and to read the alphabet in which it is composed. It is written in the language of mathematics."</i> <a href="http://en.wikipedia.org/wiki/The_Assayer"><span class="Apple-style-span" style="color: blue;">Galileo Galilei</span></a></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Language is power over the unknown. </div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Mathematics is the language of science, and computation is the modern voice in which this language is spoken. Scientists and engineers explore the book of nature with computer simulations of swirling <a href="http://blogs.discovermagazine.com/80beats/2011/08/30/watch-this-the-most-realistic-simulation-of-spiral-galaxy-formation-to-date"><span class="Apple-style-span" style="color: blue;">galaxies</span></a> and colliding <a href="http://en.wikipedia.org/wiki/Molecular_dynamics"><span class="Apple-style-span" style="color: blue;">atoms,</span></a> crashing <a href="http://www.youtube.com/watch?v=tvNMehM0VyI"><span class="Apple-style-span" style="color: blue;">cars</span></a> and wind-swept <a href="http://www.cimne.com/eo/publicaciones/files/PI181.pdf"><span class="Apple-style-span" style="color: blue;">buildings.</span></a> The wonders of nature and the powers of technological innovation are displayed on computer screens, "continually open to our gaze." The language of science empowers us to dispel confusion and uncertainty, but only with great effort do we change the babble of sounds and symbols into useful, meaningful and reliable communication. How we do that depends on the type of uncertainty against which the language struggles.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Mathematical equations encode our understanding of nature, and Galileo exhorts us to learn this code. One challenge here is that a single equation represents an infinity of situations. For instance, the equation describing a flowing liquid captures water gushing from a pipe, blood coursing in our veins, and a droplet splashing from a puddle. Gazing at the <a href="http://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations"><span class="Apple-style-span" style="color: blue;">equation</span></a> is not at all like gazing at the <a href="http://www.google.com/search?q=droplet+splash&hl=en&prmd=imvns&tbm=isch&tbo=u&source=univ&sa=X&ei=UdWjTtKiEInm4QTr3tnJDg&ved=0CDMQsAQ&biw=1680&bih=917"><span class="Apple-style-span" style="color: blue;">droplet.</span></a> Understanding grows by exposure to pictures and examples. Computations provide numerical examples of equations that can be realized as pictures. Computations can simulate nature, allowing us to explore at our leisure.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Two questions face the user of computations: Are we calculating the correct equations? Are we calculating the equations correctly? The first question expresses the scientist's ignorance - or at least uncertainty - about how the world works. The second question reflects the programmer's ignorance or uncertainty about the faithfulness of the computer program to the equations. Both questions deal with the fidelity between two entities. However, the entities involved are very different and the uncertainties are very different as well.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The scientist's uncertainty is reduced by the ingenuity of the experimenter. Equations make predictions that can be tested by experiment. For instance, Galileo predicted that small and large balls would fall at the same rate, as he is reported to have tested from the <a href="http://en.wikipedia.org/wiki/Galileo's_Leaning_Tower_of_Pisa_experiment"><span class="Apple-style-span" style="color: blue;">tower of Pisa.</span></a> Equations are rejected or modified when their predictions don't match the experimenter's observation. The scientist's uncertainty and ignorance are whittled away by testing equations against observation of the real world. Experiments may be extraordinarily subtle or difficult or costly because nature's unknown is so endlessly rich in possibilities. Nonetheless, observation of nature remorselessly cuts false equations from the body of scientific doctrine. God speaks through nature, as it were, and "the Eternal of Israel does not deceive or console." (1 Samuel, 15:29). When this observational cutting and chopping is (temporarily) halted, the remaining equations are said to be "validated" (but they remain on the chopping block for further testing).</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The programmer's life is, in one sense, more difficult than the experimenter's. Imagine a huge computer program containing millions of lines of code, the accumulated fruit of thousands of hours of effort by many people. How do we verify that this computation faithfully reflects the equations that have ostensibly been programmed? Of course they've been checked again and again for typos or logical faults or syntactic errors. Very clever methods are available for <a href="http://prod.sandia.gov/techlib/access-control.cgi/2000/001444.pdf"><span class="Apple-style-span" style="color: blue;">code verification.</span></a> Nonetheless, programmers are only human, and some infidelity may slip through. What remorseless knife does the programmer have with which to verify that the equations are correctly calculated? Testing computation against observation does not allow us to distinguish between errors in the equations, errors in the program, and compensatory errors in both.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The experimenter compares an equation's prediction against an observation of nature. Like the experimenter, the programmer compares the computation against something. However, for the programmer, the sharp knife of nature is not available. In special cases the programmer can compare against a known answer. More frequently the programmer must compare against other computations which have already been verified (by some earlier comparison). The verification of a computation - as distinct from the validation of an equation - can only use other high-level human-made results. The programmer's comparisons can only be traced back to other comparisons. It is true that the experimenter's tests are intermediated by human artifacts like calipers or cyclotrons. Nonetheless, bedrock for the experimenter is the "reality out there". The experimenter's tests can be traced back to observations of elementary real events. The programmer does not have that recourse. One might say that God speaks to the experimenter through nature, but the programmer has no such Voice upon which to rely.</div><div style="text-align: justify;"><br />
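The "known answer" comparison mentioned above can be made concrete. Here is a minimal, hypothetical sketch (my own illustration, not from the original essay): a crude numerical solver for the decay equation dy/dt = -ky is verified against the exact solution y = y0&middot;e<sup>-kt</sup>, and the discrepancy shrinks as the step size shrinks, which is what a programmer hopes to see.

```python
import math

def euler_decay(y0, k, t_end, dt):
    """Integrate dy/dt = -k*y with the explicit Euler method."""
    n = round(t_end / dt)
    y = y0
    for _ in range(n):
        y += dt * (-k * y)   # the same simple update, applied n times
    return y

# Verification against a known analytic answer: y(t) = y0 * exp(-k*t).
y0, k, t_end = 1.0, 0.5, 2.0
exact = y0 * math.exp(-k * t_end)
for dt in (0.1, 0.01, 0.001):
    approx = euler_decay(y0, k, t_end, dt)
    print(dt, abs(approx - exact))   # error shrinks roughly in proportion to dt
```

For most large scientific codes no such exact solution exists, which is precisely the difficulty the surrounding paragraphs describe.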
</div><div style="text-align: justify;">The <a href="http://en.wikipedia.org/wiki/Tower_of_Babel"><span class="Apple-style-span" style="color: blue;">tower</span></a> built of old would have reached the heavens because of the power of language. That tower was never completed because God turned talk into babble and dispersed the people across the land. Scholars have argued whether the story prescribes a moral norm, or simply describes the way things are, but the power of language has never been disputed.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The tower was never completed, just as science, it seems, has a <a href="http://decisions-and-info-gaps.blogspot.com/2011/10/end-of-science.html"><span class="Apple-style-span" style="color: blue;">long way to go</span></a>. Genius, said <a href="http://en.wikiquote.org/wiki/Thomas_Edison"><span class="Apple-style-span" style="color: blue;">Edison,</span></a> is 1 percent inspiration and 99 percent perspiration. A good part of the sweat comes from getting the language right, whether mathematical equations or computer programs.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Part of the challenge is finding order in nature's bubbling variety. Each equation captures a glimpse of that order, adding one block to the structure of science. Furthermore, equations must be validated, which is only a stop-gap. All blocks crumble eventually, and all equations are fallible and likely to be falsified.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Another challenge in science and engineering is grasping the myriad implications that are distilled into an equation. An equation compresses and summarizes, while computer simulations go the other way, restoring detail and specificity. The fidelity of a simulation to the equation is usually verified by comparing against other simulations. This is like the dictionary paradox: using words to define words.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">It is by inventing and exploiting symbols that humans have constructed an orderly world out of the confusing tumult of experience. With symbols, like with blocks in the tower, the sky is the limit.</div></div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com3tag:blogger.com,1999:blog-9140612503596105113.post-2127818609457497492011-10-24T09:21:00.000+02:002011-10-24T09:21:17.408+02:00The End of Science?<div dir="ltr" style="text-align: left;" trbidi="on"><br />
<div style="text-align: justify;">Science is the search for and study of patterns and laws in the natural and physical worlds. Could that search become exhausted, like an over-worked coal vein, leaving nothing more to be found? Could science end? After briefly touching on several fairly obvious possible end-games for science, we explore how the vast Unknown could undermine - rather than underlie - the scientific enterprise. The possibility that science could end is linked to the reason that science is possible at all. The path we must climb in this essay is steep, but the (in)sight is worth it.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Science is the process of discovering unknowns, one of which is the extent of Nature's secrets. It is possible that the inventory of Nature's unknowns is finite or conceivably even nearly empty. However, a look at open problems in science, from astronomy to zoology, suggests that Nature's storehouse of surprises is still chock full. So, from this perspective, the answer to the question 'Could science end?' is conceivably 'Yes', but most probably 'No'.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Another possible 'Yes' answer is that science will end by reaching the limit of human cognitive capability. Nature's storehouse of surprises may never empty out, but the rate of our discoveries may gradually fall, reaching zero when scientists have figured out everything that humans are able to understand. Possible, but judging from the last 400 years, it seems that we've only begun to tap our mind's expansive capability.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Or perhaps science - a product of human civilization - will end due to historical or social forces. The simplest such scenario is that we blow ourselves to smithereens. Smithereens can't do science. Another more complicated scenario is Oswald Spengler's theory of <a href="http://www.duke.edu/~aparks/Spengler.html"><span class="Apple-style-span" style="color: blue;">cyclical history,</span></a> whereby an advanced society - such as Western civilization - decays and disappears, science disappearing with it. So again a tentative 'Yes'. But this might only be an interruption of science if later civilizations resume the search.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">We now explore the main mechanism by which science could become impossible. This will lead to deeper understanding of the delicate relation between knowledge and the Unknown and to why science is possible at all.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">One axiom of science is that there exist stable and discoverable laws of nature. As the philosopher A.N. Whitehead wrote in 1925: "Apart from recurrence, knowledge would be impossible; for nothing could be referred to our past experience. Also, apart from some regularity of recurrence, measurement would be impossible." (<i>Science and the Modern World,</i> p.36). The stability of phenomena is what allows a scientist to repeat, study and build upon the work of other scientists. Without regular recurrence there would be no such thing as a discoverable law of nature.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">However, as <a href="http://plato.stanford.edu/entries/hume"><span class="Apple-style-span" style="color: blue;">David Hume</span> <span class="Apple-style-span" style="color: blue;">explained</span></a> long ago in <i>An Enquiry Concerning Human Understanding,</i> one can never empirically prove that regular recurrence will hold in the future. By the time one tests the regularity of the future, that future has become the past. The future can never be tested, just as one can never step on the rolled up part of an endless rug unfurling always in front of you.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Suppose the axiom of Natural Law turns out to be wrong, or suppose Nature comes unstuck and its laws start "sliding around", changing. Science would end. If regularity, patterns, and laws no longer exist, then scientific pursuit of them becomes fruitless.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Or maybe not. Couldn't scientists search for the laws by which Nature "slides around"? Quantum mechanics seems to do just that. For instance, when a polarized photon impinges on a polarizing crystal, the photon will either be entirely absorbed or entirely transmitted, as <a href="http://www.informationphilosopher.com/solutions/scientists/dirac/chapter_1.html"><span class="Apple-style-span" style="color: blue;">Dirac explained.</span></a> The photon's fate is not determined by any law of Nature (if you believe quantum mechanics). Nature is indeterminate in this situation. Nonetheless, quantum theory very accurately predicts the probability that the photon will be transmitted, and the probability that it will be absorbed. In other words, quantum mechanics establishes a deterministic law describing <a href="http://www.technion.ac.il/yakov/lr05.pdf"><span class="Apple-style-span" style="color: blue;">Nature's indeterminism.</span></a></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Suppose Nature's indeterminism itself becomes lawless. Is that conceivable? Could Nature become so disorderly, so confused and uncertain, so <a href="http://www.shakespeare-literature.com/Hamlet/5.html"><span class="Apple-style-span" style="color: blue;">"out of joint: O, cursed spite"</span></a>, that no law can "set it right"? The answer is conceivably 'Yes', and if this happens then scientists are all out of a job. To understand how this is conceivable, one must appreciate the Unknown at its most rambunctious.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Let's take stock. We can identify attributes of Nature that are necessary for science to be possible. The axiom of Natural Law is one necessary attribute. The successful history of science suggests that the axiom of Natural Law has held firmly in the past. But that does not determine what Nature <i>will be</i> in the future.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">In order to understand how Natural Law could come unstuck, we need to understand how Natural Law works (today). When a projectile, say a baseball, is thrown from here to there, its progress at each point along its trajectory is described, scientifically, in terms of its current position, direction of motion, and attributes such as its shape, mass and surrounding medium. The Laws of Nature enable the calculation of the ball's progress by solving a mathematical equation whose starting point is the current state of the ball.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">We can roughly describe most Laws of Nature as formulations of problems - e.g. mathematical equations - whose input is the current and past states of the system in question, and whose solution predicts an outcome: the next state of the system. What is law-like about this is that these problems - whose solution describes a progression, like the flight of a baseball - are constant over time. The scientist calculates the baseball's trajectory by solving the same problem over and over again (or all at once with a differential equation). Sometimes the problem is hard to solve, so scientists are good mathematicians, or they have big computers, (or both). But solvable they are.</div><div style="text-align: justify;"><br />
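The "same problem over and over" picture can be made concrete. A minimal sketch (my own illustration, with hypothetical numbers): a thrown ball is stepped forward by applying the identical gravity rule at every instant, and the repeated little solutions reproduce the analytic trajectory.

```python
import math

def fly(v0x, v0y, g=9.81, dt=1e-4):
    """Fly a thrown ball by solving the same problem over and over:
    given the current state, the unchanging law gives the next state."""
    x, y, vx, vy = 0.0, 0.0, v0x, v0y
    while y >= 0.0:
        x += vx * dt      # the identical update rule at every step,
        y += vy * dt      # because the law itself (constant gravity)
        vy -= g * dt      # never changes
    return x              # horizontal range at landing

# Thrown at 30 m/s and 45 degrees; the analytic range is v**2 * sin(2*theta) / g.
v, theta = 30.0, math.pi / 4
numeric = fly(v * math.cos(theta), v * math.sin(theta))
analytic = v**2 * math.sin(2 * theta) / 9.81
print(numeric, analytic)  # the two agree to well under one percent
```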
</div><div style="text-align: justify;">Let's remember that Nature is not a scientist, and Nature does not solve a problem when things happen (like baseballs speeding to home plate). Nature just <i>does it.</i> The scientist's Law is a description of Nature, not Nature itself.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">There are other Laws of Nature for which we must modify the previous description. In these cases, the Law of Nature is, as before, the formulation of a problem. Now, however, the solution of the problem not only predicts the next state of the system, but it also re-formulates the problem that must be solved at the next step. There is a sort of feedback: the next state of the system alters the rule by which subsequent progress is made. For instance, when an object falls towards earth from outer space, the law of nature that determines the motion of the object depends on the gravitational attraction. The gravitational attraction, in turn, increases as the object gets closer. Thus the problem to be solved changes as the object moves. Problems like these tend to be more difficult to solve, but that's the scientist's problem (or pleasure).</div><div style="text-align: justify;"><br />
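The falling-object feedback can be sketched as well (again my own illustration, with round numbers): an object dropped toward Earth from twice the Earth's radius, where the acceleration used at each step is recomputed from the current position - the problem being solved changes as the object moves.

```python
G_M = 3.986e14       # gravitational parameter of Earth, m^3/s^2
R_EARTH = 6.371e6    # mean radius of Earth, m

def fall(r0, t_end, dt=0.1):
    """Drop an object, initially at rest, from radius r0 (measured from
    Earth's centre). At every step the 'problem' is re-formulated: the
    acceleration depends on where the object currently is."""
    r, v, t = r0, 0.0, 0.0
    while t < t_end and r > R_EARTH:
        a = -G_M / r**2   # the law in force *right now*, at this radius
        v += a * dt
        r += v * dt
        t += dt
    return r, v

r1, v1 = fall(2 * R_EARTH, 1000.0)
# gravity at the starting altitude is four times weaker than at the surface:
print(G_M / (2 * R_EARTH)**2, G_M / R_EARTH**2)
```

Here each step still succeeds; the essay's darker scenario is one in which the re-formulated problem at some step could not be solved at all.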
</div><div style="text-align: justify;">Now we can appreciate how Nature might become lawlessly unstuck. Let's consider the second type of Natural Law, where the problem - the Law itself - gets modified by the evolving event. Let's furthermore suppose that the problem is not simply difficult to solve, but that no solution can be obtained in a finite amount of time (mathematicians have lots of examples of problems like this). As before, Nature itself does not solve a problem; Nature just <i>does it.</i> But the scientist is now in the position that no prediction can be made, no trajectory can be calculated, no model or description of the phenomenon can be obtained. No explicit problem statement embodying a Natural Law exists. This is because the problem to be solved evolves continuously from previous solutions, and none of the sequence of problems can be solved. The scientist's profession will become frustrating, futile and fruitless.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Nature becomes lawlessly unstuck, and science ends, if all Laws of Nature become of the modified second type. The world itself will continue because Nature solves no problems, it just does its thing. But the <i>way</i> it does this is now so raw and unruly that no study of nature can get to first base.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Sound like science fiction (or nightmare)? Maybe. But as far as we know, the only thing between us and this new state of affairs is the axiom of Natural Law. Scientists assume that Laws exist and are stable because past experience, together with our psychological makeup (which itself is evolutionary past experience), very strongly suggests that regular recurrence can be relied upon. But if you think that the scientists can empirically <i>prove</i> that the future will continue to be lawful, like the past, recall that <i>all</i> experience is <i>past</i> experience. Recall the unfurling-rug metaphor (by the time we test the future it becomes the past), and make an appointment to see Mr Hume.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Is science likely to become fruitless or boring? No. Science thrives on an Unknown that is full of surprises. Science - the search for Natural Laws - thrives even though the existence of Natural Law can never be proven. Science thrives precisely because we can never know for sure that science will not someday end. </div></div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com5tag:blogger.com,1999:blog-9140612503596105113.post-19741035295824998472011-10-09T13:51:00.001+02:002011-10-09T14:03:36.000+02:00Squirrels and Stock Brokers, Or: Innovation Dilemmas, Robustness and Probability<div style="text-align: justify;"></div><div style="text-align: justify;">Decisions are made in order to achieve desirable outcomes. An <a href="http://decisions-and-info-gaps.blogspot.com/2011/08/innovation-dilemma.html"><span class="Apple-style-span" style="color: blue;">innovation dilemma</span></a> arises when a seemingly more attractive option is also more uncertain than other options. In this essay we explore the relation between the innovation dilemma and the robustness of a decision, and the relation between robustness and probability. A decision is robust to uncertainty if it achieves required outcomes despite adverse surprises. A robust decision may differ from the seemingly best option. Furthermore, robust decisions are not based on knowledge of probabilities, but can still be the most likely to succeed.</div><div style="text-align: justify;"><br />
</div><div style="text-align: center;"><b><span class="Apple-style-span" style="font-size: large;">Squirrels, Stock-Brokers and Their Dilemmas</span></b></div><div style="text-align: center;"><br />
</div><div style="text-align: justify;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-R6984VRjIMSP2R0SJFg2fq2dOZlvjkVoKR0mK07CEXiRR-6QwMbLPS3DYTpdMJL9kAeI3Xhb1c0WPOOC7RmeSUq9jJ8lLg70W2HvAU5Z7V0RSpTmlJATZtByQFRcP_tewajvV2fzSCDI/s1600/nyse-floor-fig01.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="144" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-R6984VRjIMSP2R0SJFg2fq2dOZlvjkVoKR0mK07CEXiRR-6QwMbLPS3DYTpdMJL9kAeI3Xhb1c0WPOOC7RmeSUq9jJ8lLg70W2HvAU5Z7V0RSpTmlJATZtByQFRcP_tewajvV2fzSCDI/s200/nyse-floor-fig01.jpg" width="200" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-FHexm3MH2TGlE48ZN479btCHMJ4EYsZrVofRHZFtJI_eM0Cg0-yCuZKWp8LSy26knjw-K1LxuQyo8rEN_CuJNz6g6dtwqhiuCcpFUJce7es7CJRbOuV4X4H7FidM0iLhciU5AaJvzd4g/s1600/squirrel01fig.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="128" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-FHexm3MH2TGlE48ZN479btCHMJ4EYsZrVofRHZFtJI_eM0Cg0-yCuZKWp8LSy26knjw-K1LxuQyo8rEN_CuJNz6g6dtwqhiuCcpFUJce7es7CJRbOuV4X4H7FidM0iLhciU5AaJvzd4g/s200/squirrel01fig.jpg" width="200" /></a></div><br />
<div style="text-align: justify;"><br />
</div><div style="text-align: center;"><b>Decision problems.</b></div><div style="text-align: justify;">Imagine a squirrel nibbling acorns under an oak tree. They're pretty good acorns, though a bit dry. The good ones have already been taken. Over in the distance is a large stand of fine oaks. The acorns there are probably better. But then, other squirrels can also see those trees, and predators can too. The squirrel doesn't need to get fat, but a critical caloric intake is necessary before moving on to other activities. How long should the squirrel forage at this patch before moving to the more promising patch, if at all?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Imagine a hedge fund manager investing in South African diamonds, Australian uranium, Norwegian kroner and Singapore semi-conductors. The returns have been steady and good, but not very exciting. A new hi-tech start-up venture has just turned up. It looks promising, has solid backing, and could be very interesting. The manager doesn't need to earn boundless returns, but it is necessary to earn at least a tad more than the competition (who are also prowling around). How long should the manager hold the current portfolio before changing at least some of its components?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">These are decision problems, and like many other examples, they share three traits: critical needs must be met; the current situation may or may not be adequate; other alternatives look much better but are much more uncertain. To change, or not to change? What strategy to use in making a decision? What choice is the best bet? Betting is a surprising concept, as we have <a href="http://decisions-and-info-gaps.blogspot.com/2011/09/robustness-and-lockes-wingless.html"><span class="Apple-style-span" style="color: blue;">seen before</span></a>; can we bet without knowing probabilities?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>Solution strategies.</b></div><div style="text-align: justify;">The decision is easy in either of two extreme situations, and their analysis will reveal general conclusions.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">One extreme is that the status quo is clearly insufficient. For the squirrel this means that these crinkled rotten acorns won't fill anybody's belly even if one nibbled here all day long. Survival requires trying the other patch regardless of the fact that there may be many other squirrels already there and predators just waiting to swoop down. Similarly, for the hedge fund manager, if other funds are making fantastic profits, then something has to change or the competition will attract all the business.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The other extreme is that the status quo is just fine, thank you. For the squirrel, just a little more nibbling and these acorns will get us through the night, so why run over to unfamiliar oak trees? For the hedge fund manager, profits are better than those of any credible competitor, so uncertain change is not called for.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">From these two extremes we draw an important general conclusion: the right answer depends on what you need. To change, or not to change, depends on what is critical for survival. There is no universal answer, like, "Always try to improve" or "If it's working, don't fix it". This is a very general property of decisions under uncertainty, and we will call it <b>preference reversal.</b> The agent's preference between alternatives depends on what the agent needs in order to "survive".</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The decision strategy that we have described is attuned to the needs of the agent. The strategy attempts to satisfy the agent's critical requirements. If the status quo would reliably do that, then stay put; if not, then move. Following the work of Nobel Laureate Herbert Simon, we will call this a <b>satisficing decision strategy:</b> one which satisfies a critical requirement.</div><div style="text-align: justify;"><br />
</div><div style="text-align: center;"><i>"Prediction is always difficult, especially of the future."</i> - Robert Storm Petersen</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Now let's consider a different decision strategy that squirrels and hedge fund managers might be tempted to use. The agent has obtained information about the two alternatives by signals from the environment. (The squirrel sees grand verdant oaks in the distance, the fund manager hears of a new start-up.) Given this information, a prediction can be made (though the squirrel may make this prediction based on instincts and without being aware of making it). Given the best available information, the agent predicts which alternative would yield the better outcome. Using this prediction, the decision strategy is to choose the alternative whose predicted outcome is best. We will call this decision strategy <b>best-model optimization.</b> Note that this decision strategy yields a single universal answer to the question facing the agent. This strategy uses the best information to find the choice that - if that information is correct - will yield the best outcome. Best-model optimization (usually) gives a single "best" decision, unlike the satisficing strategy, which returns different answers depending on the agent's needs.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">There is an attractive logic - and even perhaps a moral imperative - to use the best information to make the best choice. One should always try to do one's best. But the catch in the argument for best-model optimization is that the best information may actually be grievously wrong. Those fine oak trees might be swarming with insects who've devoured the acorns. Best-model optimization ignores the agent's central dilemma: stay with the relatively well known but modest alternative, or go for the more promising but more uncertain alternative.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">"Tsk, tsk, tsk" says our hedge fund manager. "My information already accounts for the uncertainty. I have used a probabilistic asset pricing model to predict the likelihood that my profits will beat the competition for each of the two alternatives."</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Probabilistic asset pricing models are good to have. And the squirrel similarly has evolved instincts that reflect likelihoods. But a best-probabilistic-model optimization is simply one type of best-model optimization, and is subject to the same vulnerability to error. The world is full of surprises. The probability functions that are used are quite likely wrong, especially in predicting the rare events that the manager is most concerned to avoid.</div><div style="text-align: justify;"><br />
</div><div style="text-align: center;"><b><span class="Apple-style-span" style="font-size: large;">Robustness and Probability</span></b></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Now we come to the truly amazing part of the story. The satisficing strategy does not use any probabilistic information. Nonetheless, in many situations, the satisficing strategy is actually a better bet (or at least not a worse bet), probabilistically speaking, than any other strategy, including best-probabilistic-model optimization. We have no probabilistic information in these situations, but we can still maximize the probability of success (though we won't know the value of this maximum).</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">When the satisficing decision strategy is the best bet, this is, in part, because it is more robust to uncertainty than any other strategy. A decision is robust to uncertainty if it achieves required outcomes even if adverse surprises occur. In many important situations (though not invariably), <i>more robustness</i> to uncertainty is equivalent to being <i>more likely to succeed</i> or survive. When this is true we say that <b>robustness is a proxy for probability.</b></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">A thorough analysis of the proxy property is rather <a href="http://www.technion.ac.il/yakov/prx27.pdf"><span class="Apple-style-span" style="color: blue;">technical.</span></a> However, we can understand the gist of the idea by considering a simple special case.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Let's continue with the squirrel and hedge fund examples. Suppose we are completely confident about the future value (in calories or dollars) of not making any change (staying put). In contrast, the future value of moving is apparently better though uncertain. If staying put would satisfy our critical requirement, then we are absolutely certain of survival if we do not change. Staying put is completely robust to surprises so the probability of success equals 1 if we stay put, regardless of what happens with the other option. Likewise, if staying put would <i>not</i> satisfy our critical requirement, then we are absolutely certain of failure if we do not change; the probability of success equals 0 if we stay, and moving cannot be worse. Regardless of what probability distribution describes future outcomes if we move, we can always choose the option whose likelihood of success is greater (or at least not worse). This is because staying put is either sure to succeed or sure to fail, and we know which.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">This argument can be extended to the more realistic case where the outcome of staying put is uncertain and the outcome of moving, while seemingly better than staying, is much more uncertain. The agent can know which option is more robust to uncertainty, without having to know probability distributions. This implies, in many situations, that the agent can choose the option that is a better bet for survival.</div><div style="text-align: justify;"><br />
</div><div style="text-align: center;"><b><span class="Apple-style-span" style="font-size: large;">Wrapping Up</span></b></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The skillful decision maker not only knows a lot, but is also able to deal with conflicting information. We have discussed the innovation dilemma: When choosing between two alternatives, the seemingly better one is also more uncertain.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Animals, people, organizations and societies have developed mechanisms for dealing with the innovation dilemma. The response hinges on tuning the decision to the agent's needs, and robustifying the choice against uncertainty. This choice may or may not coincide with the putative best choice. But what seems best depends on the available - though uncertain - information.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The commendable tendency to do one's best - and to demand the same of others - can lead to putatively optimal decisions that may be more vulnerable to surprise than other decisions that would have been satisfactory. In contrast, the strategy of robustly satisfying critical needs can be a better bet for survival. Consider the design of critical infrastructure: flood protection, nuclear power, communication networks, and so on. The design of such systems is based on vast knowledge and understanding, but also confronts bewildering uncertainties and endless surprises. We must continue to improve our knowledge and understanding, while also improving our ability to manage the uncertainties resulting from the expanding horizon of our efforts. We must identify the critical goals and seek responses that are immune to surprise. </div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com1tag:blogger.com,1999:blog-9140612503596105113.post-64646028001172044262011-10-06T14:14:00.003+02:002011-10-09T14:05:52.010+02:00Beware the Rareness Illusion When Exploring the Unknown<div style="text-align: justify;">Here's a great vacation idea. Spend the summer roaming the world in search of the 10 <a href="http://www.pbs.org/wgbh/nova/israel/losttribes.html"><span class="Apple-style-span" style="color: blue;">lost tribes of Israel,</span></a> exiled from Samaria by the Assyrians 2700 years ago (2 Kings 17:6). Or perhaps you'd like to search for <a href="http://geography.about.com/od/historyofgeography/a/presterjohn.htm"><span class="Apple-style-span" style="color: blue;">Prester John,</span></a> the virtuous ruler of a kingdom lost in the Orient? Or would you rather trace the gold-laden kingdom of <a href="http://en.wikipedia.org/wiki/Ophir"><span class="Apple-style-span" style="color: blue;">Ophir </span></a>(1 Kings 9:28)? 
Or do you prefer the excitement of tracking the <a href="http://en.wikipedia.org/wiki/Amazons"><span class="Apple-style-span" style="color: blue;">Amazons,</span></a> that nation of female warriors? Or perhaps the naval power mentioned by Plato, operating from the island of <a href="http://en.wikipedia.org/wiki/Atlantis"><span class="Apple-style-span" style="color: blue;">Atlantis?</span></a> Or how about unicorns, or the fountain of eternal youth? The Unknown is so vast that the possibilities are endless.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Maybe you don't believe in unicorns. But Plato evidently "knew" about the island of Atlantis. The conquest of Israel is known from Assyrian archeology and from the Bible. Does the fact that you've never seen a <span class="Apple-style-span" style="color: blue;"><a href="http://dictionary.reference.com/browse/reubenite"><span class="Apple-style-span" style="color: blue;">Reubenite</span></a> </span>or a <span class="Apple-style-span" style="color: blue;"><a href="http://dictionary.reference.com/browse/naphtalite"><span class="Apple-style-span" style="color: blue;">Naphtalite</span></a> </span>(or a unicorn) mean that they don't exist?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">It is true that when something really does not exist, one might spend a long time futilely looking for it. Many people have spent enormous energy searching for lost tribes, lost gold, and lost kingdoms. Why is it so difficult to decide that what you're looking for really isn't there? The answer, ironically, is that the world has endless possibilities for discovery and surprise.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Let's skip vacation plans and consider some real-life searches. How long should you (or the Libyans) look for Muammar Qaddafi? If he's not in the town of Surt, maybe he's in Bani Walid, or Algeria, or Timbuktu? How do you decide he cannot be found? Maybe he was pulverized by a NATO bomb. It's urgent to find the suicide bomber in the crowded bus station before it's too late - if he's really there. You'd like to discover a cure for AIDS, or a method to halt the rising global temperature, or a golden investment opportunity in an emerging market, or a proof of the <a href="http://en.wikipedia.org/wiki/Parallel_postulate"><span class="Apple-style-span" style="color: blue;">parallel postulate</span></a> of Euclidean geometry.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Let's focus our question. Suppose you are looking for something, and so far you have only "negative" evidence: it's not here, it's not there, it's not anywhere you've looked. Why is it so difficult to decide, conclusively and confidently, that it simply does not exist?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">This question is linked to a different question: <i>how</i> to make the decision that "it" (whatever it is) does not exist. We will focus on the "why" question, and leave the "how" question to students of <a href="http://en.wikipedia.org/wiki/Decision_theory"><span class="Apple-style-span" style="color: blue;">decision theories</span></a> such as statistics, fuzzy logic, possibility theory, Dempster-Shafer theory and <a href="http://info-gap.com/"><span class="Apple-style-span" style="color: blue;">info-gap theory.</span></a> (If you're interested in an <a href="http://info-gap.com/content.php?id=22"><span class="Apple-style-span" style="color: blue;">info-gap application to statistics</span></a>, here is an <a href="http://www.technion.ac.il/yakov/null-ht03.pdf"><span class="Apple-style-span" style="color: blue;">example.</span></a>)</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Answers to the "why" question can be found in several domains.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Psychology provides some answers. People can be very goal oriented, stubborn, and persistent. Marco Polo didn't get to China on a 10-hour plane flight. The round trip took him 24 years, and he didn't travel business class.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Ideology is a very strong motivator. When people believe something strongly, it is easy for them to ignore evidence to the contrary. Furthermore, for some people, the search itself is valued more than the putative goal.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The answer to the "why" question that I will focus on is found by contemplating The Endless Unknown. It is so vast, so unstructured, so, well ..., unknown, that we cannot calibrate our negative evidence to decide that whatever we're looking for just ain't there.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">I'll tell a true story.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">I was born in the US and my wife was born in Israel, but our life-paths crossed, so to speak, before we were born. She had a friend whose father was from Europe and lived for a while - before the friend was born - with a cousin of his in my home town. This cousin was - years later - my 3rd grade teacher. My school teacher was my future wife's friend's father's cousin.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Amazing coincidence. This convoluted sequence of events is certainly rare. How many of you can tell the very same story? But wait a minute. This convoluted string of events could have evolved in many many different ways, each of which would have been an equally amazing coincidence. The number of similar possible paths is namelessly enormous, uncountably humongous. In other words, potential "rare" events are very numerous. Now that sounds like a contradiction (we're getting close to some of <a href="http://en.wikipedia.org/wiki/Zeno's_paradoxes"><span class="Apple-style-span" style="color: blue;">Zeno's paradoxes,</span></a> and Aristotle thought Zeno was crazy). It is not a contradiction; it is only a "rareness illusion" (something like an optical illusion). The specific event sequence in my story is unique, which is the ultimate rarity. We view this sequence as an amazing coincidence because we cannot assess the number of similar sequences. Surprising strings of events occur not infrequently because the number of possible surprising strings is so unimaginably vast. <i>The rareness illusion is the impression of rareness arising from our necessary ignorance of the vast unknown.</i> "Necessary" because, by definition, we cannot know what is unknown. "Vast" because the world is so rich in possibilities.</div><div style="text-align: justify;"><br />
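The arithmetic behind the rareness illusion can be sketched in a few lines (an illustration added here, not from the original essay; the numbers are invented, and the possible coincidences are crudely treated as independent). Any one named coincidence is vanishingly unlikely, yet the chance that <i>some</i> coincidence occurs is nearly certain:

```python
# Each specific "amazing coincidence" is individually very rare...
p_one = 1e-6                  # chance of one particular coincidence
n = 10_000_000                # ...but the potential coincidences are numberless

# Chance that at least one of the n possible coincidences occurs.
p_some = 1 - (1 - p_one) ** n
print(f"P(this particular coincidence): {p_one:.0e}")
print(f"P(some such coincidence):       {p_some:.5f}")  # close to 1
```

This is the same logic as the birthday paradox: the specific string of events is rare, but the family of similar strings is so vast that surprises are routine.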
</div><div style="text-align: justify;">The rareness illusion is a false impression, a mistake. For instance, it leads people to wrongly goggle at strings of events - rare in themselves - even though "rare events" are numerous and "amazing coincidences" occur all the time. An appreciation of the richness and boundlessness of the Unknown is an antidote for the rareness illusion.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Recognition of the rareness illusion is the key to understanding why it is so difficult to confidently decide, based on negative evidence, that what you're looking for simply does not exist.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">One might be inclined to reason as follows. If you're looking for something, then look <i>very</i> thoroughly, and if you don't find it, then it's not there. That is usually sound and sensible advice, and often "looking thoroughly" will lead to discovery.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">However, the number of ways that we could overlook something that really <i>is</i> there is enormous. It is thus very difficult to confidently conclude that the search was thorough and that the object cannot be found. Take the case of your missing house keys. They dropped from your pocket in the car, or on the sidewalk and somebody picked them up, or you left them in the lock when you left the house, or or or .... Familiarity with the rareness illusion makes it very difficult to decide that you have searched thoroughly. If you think that the only contingencies not yet explored are too exotic to be relevant (a raven snatched them while you were daydreaming about that enchanting new employee), then think again, because you've been blinded by a rareness illusion. The number of such possibilities is so vastly unfathomable that you cannot confidently say that all of them are collectively negligible. Recognition of the rareness illusion prevents you from confidently concluding that what you are seeking simply does not exist.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Many quantitative tools grapple with the rareness illusion. We mentioned some decision theories earlier. But because the rareness illusion derives from our necessary ignorance of the vast unknown, one must always beware.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Looking for an exciting vacation? The Endless Unknown is the place to go. </div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com5tag:blogger.com,1999:blog-9140612503596105113.post-72987852197238647102011-09-30T10:04:00.000+03:002011-09-30T10:04:48.951+03:00The Pains of Progress<div style="text-align: center;"><i>To measure time by how little we change is to find how little we've lived, </i></div><div style="text-align: center;"><i>but to measure time by how much we've lost is to wish we hadn't changed at all.</i> Andre Aciman</div><br />
<div style="text-align: justify;">The last frontier is not the Antarctic, or the oceans, or outer space. The last frontier is The Unknown. We mentioned in an <a href="http://decisions-and-info-gaps.blogspot.com/2011/08/baseball-and-linguistic-uncertainty.html"><span class="Apple-style-span" style="color: blue;">earlier essay</span></a> that uncertainty - which makes baseball and life interesting - is inevitable in the human world. Life will continue to be interesting as long as the world is rich in unknowns, waiting to be discovered. Progress is possible if propitious discoveries can be made. Progress, however, comes with costs.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The emblem of my university entwines a billowing smokestack and a cogwheel in the first letter of the institution's name. When this emblem was adopted (probably in 1951), these were optimistic symbols of progress. Cogwheels are no longer 'hi-tech' (though we still need them), and smoke has been banished from polite company. But our emblem is characteristic of industrial society, which has seared <i>Progress</i> on our hearts and minds.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Progress is accompanied by painful tensions. On the one hand, progress is nurtured by stability, cooperation, and leisure. On the other hand, progress grows out of change, conflict, and stress. A society's progressiveness reflects its balance of each of these three pairs of attributes. In the most general terms, progressiveness reflects social and individual attitudes to uncertainty.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Let's consider the three pairs of attributes one at a time.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><i>Change and stability.</i> Not all change is progress, but all progress is change. Change is necessary for progress, by definition, and progress can be very disruptive. The disruptiveness sometimes arises from unexpected consequences. <a href="http://cscs.umich.edu/~crshalizi/Daedalus.html"><span class="Apple-style-span" style="color: blue;">J.B.S. Haldane wrote</span></a> in 1923 that "the late war is only an example of the disruptive result that we may constantly expect from the progress of science." On the other hand, progressives employ and build on existing capabilities. The entrepreneur depends on stable property rights before risking venture capital. The existing legal system is used to remove social injustice. Watt's steam engine extended Newcomen's more primitive model. The new building going up on campus next to my office is very disruptive, but the construction project depends on the continuity of the university despite the drilling and dust. Even revolutionaries exploit and react against the status quo, which must exist for a revolutionary to be able to revolt. (One can't revolt if nothing is revolting.) Progress grows from a patch of opportunity in a broad bed of certainty, and spreads out in unanticipated directions.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><i>Conflict and cooperation.</i> Conflict between vested interests and innovators is common. <a href="http://www.micheleboldrin.com/research/aim/anew01.pdf"><span class="Apple-style-span" style="color: blue;">Watt protected his inventions</span></a> with extensive patents which may have actually retarded the further development and commercialization of steam power. Conflict is also a mechanism for selecting successful ideas. Darwinian evolution and its social analogies proceed by more successful adaptations replacing less successful ones. On the other hand, cooperation enables specialization and expertise which are needed for innovation. The tool-maker cooperates with the farmer so better tools can be made more quickly, enhancing the farmer's productivity and the artisan's welfare. Conflicts arise over what constitutes progress. Stem cell research, genetic engineering, nuclear power technology: progress or plague? Cooperative collective decision making enables the constructive resolution of these value-based conflicts.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><i>Stress and leisure.</i> Challenge, necessity and stress all motivate innovation. If you have no problems, you are unlikely to be looking for solutions. On the other hand, the leisure to think and tinker is a great source of innovation. Subsistence societies have no resources for invention. In assessing the implications of industrial efficiency, <a href="http://grammar.about.com/od/classicessays/a/praiseidleness.htm"><span class="Apple-style-span" style="color: blue;">Bertrand Russell praised idleness</span></a> in 1932, writing: "In a world where no one is compelled to work more than four hours a day, every person possessed of scientific curiosity will be able to indulge it, and every painter will be able to paint without starving ...." Stress is magnified by the unknown consequences of the stressor, while leisure is possible only in the absence of fear.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">New replaces Old. <a href="http://en.wikipedia.org/wiki/Yin_and_yang"><span class="Apple-style-span" style="color: blue;">Yin and yang</span></a> are complementary opposites that dynamically interact. In <a href="http://en.wikipedia.org/wiki/Dialectic#Hegelian_dialectic"><span class="Apple-style-span" style="color: blue;">Hegel's dialectic,</span></a> tension between contradictions is resolved by synthesis. Human history is written by the victors, who sometimes hardly mention those swept into <a href="http://en.wikipedia.org/wiki/Ash_heap_of_history"><span class="Apple-style-span" style="color: blue;">Trotsky's "dustbin of history".</span></a> "In the evening resides weeping; in the morning: joy." (Psalm 30:6). Change and stability; conflict and cooperation; stress and leisure.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">No progress without innovation; no innovation without discovery; no discovery without the unknown; no unknown without fear. There is no progress without pain.</div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com4tag:blogger.com,1999:blog-9140612503596105113.post-29444630371713831472011-09-20T10:49:00.004+03:002011-10-09T14:06:17.289+02:00Robustness and Locke's Wingless Gentleman<div style="text-align: justify;">Our ancestors have made decisions under uncertainty ever since they had to stand and fight or run away, eat this root or that berry, sleep in this cave or under that bush. Our species is distinguished by the extent of deliberate thought preceding decision. Nonetheless, the ability to decide in the face of the unknown was born from primal necessity. Betting is one of the oldest ways of deciding under uncertainty. But you bet you that 'bet' is a subtler concept than one might think.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">We all know what it means to make a bet, but just to make sure let's quote the Oxford English Dictionary: "To stake or wager (a sum of money, etc.) in support of an affirmation or on the issue of a forecast." The word has been around for quite a while. Shakespeare used the verb in 1600: "Iohn a Gaunt loued him well, and betted much money on his head." (<i>Henry IV,</i> Pt. 2 iii. ii. 44). Drayton used the noun in 1627 (and he wasn't the first): "For a long while it was an euen bet ... Whether proud Warwick, or the Queene should win."</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">An even bet is a 50-50 chance, an equal probability of each outcome. But betting is not always a matter of chance. Sometimes the meaning is just the opposite. According to the OED 'You bet' or 'You bet you' are slang expressions meaning 'be assured, certainly'. For instance: "'Can you handle this outfit?' 'You bet,' said the scout." (D.L.Sayers, <i>Lord Peter Views Body,</i> iv. 68). Mark Twain wrote "'I'll get you there on time' - and you bet you he did, too." (<i>Roughing It,</i> xx. 152).</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">So 'bet' is one of those words whose meaning stretches from one idea all the way to its opposite. Drayton's "even bet" between Warwick and the Queen means that he has no idea who will win. In contrast, Twain's "you bet you" is a statement of certainty. In Twain's or Sayers' usage, it's as though uncertainty combines with moral conviction to produce a definite resolution. This is a dialectic in which doubt and determination form decisiveness.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">John Locke may have had something like this in mind when he wrote:</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">"If we will disbelieve everything, because we cannot certainly know all things; we shall do muchwhat as wisely as he, who would not use his legs, but sit still and perish, because he had no wings to fly." (<i>An Essay Concerning Human Understanding,</i> 1706, I.i.5)</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The absurdity of Locke's wingless gentleman starving in his chair leads us to believe, and to act, despite our doubts. The moral imperative of survival sweeps aside the paralysis of uncertainty. The consequence of unabated doubt - paralysis - induces doubt's opposite: decisiveness.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">But rational creatures must have some method for reasoning around their uncertainties. Locke does not intend for us to simply ignore our ignorance. But if we have no way to place bets - if the odds simply are unknown - then what are we to do? We cannot "sit still and perish".</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">This is where the <a href="http://info-gap.com/"><span class="Apple-style-span" style="color: blue;">strategy of robustness</span></a> comes in.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">'Robust' means 'Strong and hardy; sturdy; healthy'. By implication, something that is robust is 'not easily damaged or broken, resilient'. A statistical test is robust if it yields 'approximately correct results despite the falsity of certain of the assumptions underlying it' or despite errors in the data. (OED)</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">A decision is robust if its outcome is satisfactory despite error in the information and understanding which justified or motivated the decision. A robust decision is resilient to surprise, immune to ignorance.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">It is no coincidence that the colloquial use of the word 'bet' includes concepts of both chance and certainty. A good bet can tolerate large deviation from certainty, large error of information. A good bet is robust to surprise. 'You bet you' does not mean that the world is certain. It means that the outcome is certain to be acceptable, regardless of how the world turns out. The scout will handle the outfit even if there is a rogue in the ranks; Twain will get there on time despite snags and surprises. A good bet is robust to the unknown. You bet you!</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">An extended and more formal discussion of these issues can be found <a href="http://www.technion.ac.il/yakov/up03.pdf"><span class="Apple-style-span" style="color: blue;">elsewhere.</span></a></div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com3tag:blogger.com,1999:blog-9140612503596105113.post-37797771989695719302011-08-21T17:00:00.001+03:002011-09-20T10:50:28.088+03:00Baseball and Linguistic Uncertainty<div style="text-align: justify;">In my youth I played an inordinate amount of baseball, collected baseball cards, and idolized baseball players. I've outgrown all that but when I'm in the States during baseball season I do enjoy watching a few innings on the TV.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">So I was watching a baseball game recently and the commentator was talking about the art of pitching. Throwing a baseball, he said, is like shooting a shotgun. You get a spray. As a pitcher, you have to know your spray. You learn to control it, but you know that it is there. The ball won't always go where you want it. And furthermore, where you want the ball depends on the batter's style and strategy, which vary from pitch to pitch for every batter.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">That's baseball talk, but it stuck in my mind. Baseball pitchers must manage uncertainty! And it is not enough to reduce it and hope for the best. Suppose you want to throw a strike. It's not a good strategy to aim directly at, say, the lower outside corner of the strike zone, because of the spray of the ball's path and because the batter's stance can shift. Especially if the spray is skewed down and out, you'll want to move up and in a bit.</div><div style="text-align: justify;"><br />
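The pitcher's problem can be put into a toy simulation. Here is a minimal sketch; the strike-zone coordinates, the Gaussian spread, and the aim points are all invented for illustration, not taken from real pitching data. Aiming squarely at the corner throws half the spray out of the zone, while shifting the aim point "up and in a bit" raises the strike probability:

```python
import random

def strike_probability(aim_x, aim_y, spread=0.3, trials=50_000, seed=0):
    """Estimate the chance of landing in a stylized strike zone.

    The zone is taken as -1 <= x <= 1 and 0 <= y <= 2, and the pitcher's
    'spray' is modeled as Gaussian scatter around the aim point.  Every
    number here is invented purely for illustration.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = rng.gauss(aim_x, spread)
        y = rng.gauss(aim_y, spread)
        if -1.0 <= x <= 1.0 and 0.0 <= y <= 2.0:
            hits += 1
    return hits / trials

# Aiming exactly at the lower-outside corner wastes half the spray
# outside the zone; moving the aim point "up and in a bit" helps.
p_corner = strike_probability(aim_x=1.0, aim_y=0.0)
p_inside = strike_probability(aim_x=0.7, aim_y=0.3)
```

The uncertainty is not eliminated; it is managed, by choosing the aim point with the whole spray in mind.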
</div><div style="text-align: justify;">This is all very similar to the ambiguity of human speech when we pitch words at each other. Words don't have precise meanings; meanings spread out like the pitcher's spray. If we want to communicate precisely we need to be aware of this uncertainty, and manage it, taking account of the listener's propensities.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Take the word "liberal" as it is used in political discussion.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">For many decades, "liberals" have tended to support high taxes to provide generous welfare, public medical insurance, and low-cost housing. They advocate liberal (meaning magnanimous or abundant) government involvement for the citizens' benefit.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">A "liberal" might also be someone who is open-minded and tolerant, who is not strict in applying rules to other people, or even to him or herself. Such a person might be called "liberal" (meaning advocating individual rights) for opposing extensive government involvement in private decisions. For instance, liberals (in this second sense) might oppose high taxes since they reduce individuals' ability to make independent choices. As another example, John Stuart Mill opposed laws which restricted the rights of women to work (at night, for instance), even though these laws were intended to promote the welfare of women. Women, insisted Mill, are intelligent adults and can judge for themselves what is good for them.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Returning to the first meaning of "liberal" mentioned above, people of that strain may support restrictions of trade to countries which ignore the health and safety of workers. The other type of "liberal" might tend to support unrestricted trade.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Sending out words and pitching baseballs are both like shooting a shotgun: meanings (and baseballs) spray out. You must know what meaning you wish to convey, and what other meanings the word can have. The choice of the word, and the crafting of its context, must manage the uncertainty of where the word will land in the listener's mind.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Let's go back to baseball again.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">If there were no uncertainty in the pitcher's pitch and the batter's swing, then baseball would be a dreadfully boring game. If the batter knows exactly where and when the ball will arrive, and can completely control the bat, then every swing will be a homer. Or conversely, if the pitcher always knows exactly how the batter will swing, and if each throw is perfectly controlled, then every batter will strike out. But which is it? Whose certainty dominates? The batter's or the pitcher's? It can't be both. There is some deep philosophical problem here. Clearly there cannot be complete certainty in a world which has some element of free will, or surprise, or discovery. This is not just a tautology, a necessary result of what we mean by "uncertainty" and "surprise". It is an implication of limited human knowledge. Uncertainty - which makes baseball and life interesting - is inevitable in the human world.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">How does this carry over to human speech?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">It is said of the Wright brothers that they thought so synergistically that one brother could finish an idea or sentence begun by the other. If there is no uncertainty in what I am going to say, then you will be bored with my conversation, or at least, you won't learn anything from me. It is because you <i>don't</i> know what I mean by, for instance, "robustness", that my speech on this topic is enlightening (and maybe interesting). And it is because you disagree with me about what robustness means (and you tell me so), that I can perhaps extend my own understanding.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">So, uncertainty is inevitable in a world that is rich enough to have surprise or free will. Furthermore, this uncertainty leads to a process - through speech - of discovery and new understanding. Uncertainty, and the use of language, leads to discovery.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Isn't baseball an interesting game?</div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com3tag:blogger.com,1999:blog-9140612503596105113.post-27689358346326263782011-08-12T14:44:00.002+03:002011-10-03T15:25:32.177+02:00(Even) God is a Satisficer<div style="text-align: justify;">To 'satisfice' means "To decide on and pursue a course of action that will satisfy the minimum requirements necessary to achieve a particular goal." (Oxford English Dictionary). Herbert Simon (1978 Nobel Prize in Economics) was the first to use the term, an old alteration of the ordinary English word "satisfy", in this technical sense. Simon wrote (<i>Psychological Review,</i> 63(2), 129-138 (1956)) "Evidently, organisms adapt well enough to 'satisfice'; they do not, in general, 'optimize'." Agents satisfice, according to Simon, due to limitations of their information, understanding, and cognitive or computational ability. These limitations, which Simon called "bounded rationality", force agents to look for solutions which are good enough, though not necessarily optimal. The optimum may exist but it cannot be known by the resource- and information-limited agent.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">There is a deep psychological motivation for satisficing, as <a href="http://www.swarthmore.edu/SocSci/bschwar1"><span class="Apple-style-span" style="color: blue;">Barry Schwartz</span></a> discusses in <i>Paradox of Choice: Why More Is Less.</i> "When people have no choice, life is almost unbearable." But as the number and variety of choices grows, the challenge of deciding "no longer liberates, but debilitates. It might even be said to tyrannize." (p.2) "It is maximizers who suffer most in a culture that provides too many choices" (p.225) because their expectations cannot be met, they regret missed opportunities, worry about social comparison, and so on. Maximizers may acquire or achieve more than satisficers, but satisficers will tend to be happier.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Psychology is not the only realm in which satisficing finds its roots. Satisficing - as a decision strategy - has systemic or structural advantages that suggest its prevalence even in situations where the complexity of the human psyche is irrelevant. We will discuss an example from the behavior of animals.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Several years ago an ecologist colleague of mine at the Technion, <a href="http://envgis.technion.ac.il/people/yohay.htm"><span class="Apple-style-span" style="color: blue;">Prof. Yohay Carmel</span></a>, posed the following question: Why do foraging animals move from one feeding site to another later than strategies aimed at maximizing caloric intake would suggest? Of course, animals have many goals in addition to foraging. They must keep warm (or cool), evade predators, rest, reproduce, and so on. Many mathematical models of foraging by animals attempt to predict "patch residence times" (PRTs): how long the animal stays at one feeding patch before moving to the next one. A common conclusion is that patch residence times are under-predicted when the model assumes that the animal tries to maximize caloric intake. Models do exist which "patch up" the PRT paradox, but the quandary persists.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Yohay and I wrote a <a href="http://info-gap.com/content.php?id=13"><span class="Apple-style-span" style="color: blue;">paper</span></a> in which we explored a satisficing - rather than maximizing - model for patch residence time. Here's the idea. The animal needs a critical amount of energy to survive until the next foraging session. More food might be nice, but it's not necessary for survival. The animal's foraging strategy must maximize the confidence in achieving the critical caloric intake. So maximization is taking place, but not maximization of the substantive "good" (calories) but rather maximization of the confidence (or reliability, or likelihood, but these are more technical terms) of meeting the survival requirement. We developed a very simple foraging model based on <a href="http://info-gap.com/"><span class="Apple-style-span" style="color: blue;">info-gap theory.</span></a> The model predicts that PRTs for a large number of species - including invertebrates, birds and mammals - tended to be longer (and thus more realistic) than predicted by energy-maximizing models.</div><div style="text-align: justify;"><br />
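The model in the paper is more careful than anything that fits here, but the contrast between rate-maximizing and robust-satisficing residence times can be caricatured in a few lines. Every parameter value below is invented; the exponential gain curve, the survival requirement, and the robustness target are assumptions for illustration, not the paper's model:

```python
import math

# Toy patch model (every parameter value is invented for illustration):
# cumulative caloric gain after t hours in a patch is
#     G(t) = G_MAX * (1 - exp(-r * t)),
# with nominal depletion rate R0 and travel time TAU between patches.
G_MAX, R0, TAU = 100.0, 1.0, 1.0

def gain(t, r=R0):
    return G_MAX * (1.0 - math.exp(-r * t))

# Rate-maximizing residence time (the classical marginal-value recipe),
# found by crude grid search over the long-run rate G(t) / (t + TAU).
t_max_rate = max((i * 0.001 for i in range(1, 10_000)),
                 key=lambda t: gain(t) / (t + TAU))

# Robust-satisficing residence time: the depletion rate is uncertain,
# r in [R0*(1 - a), R0*(1 + a)], and the animal needs G_REQ calories to
# survive until the next foraging bout.  Staying t hours meets the
# requirement for every rate error up to a; demanding robustness ALPHA
# therefore means staying
#     t = -ln(1 - G_REQ/G_MAX) / (R0 * (1 - ALPHA))
# hours, which grows as the demanded robustness grows.
G_REQ, ALPHA = 80.0, 0.3
t_robust = -math.log(1.0 - G_REQ / G_MAX) / (R0 * (1.0 - ALPHA))
```

With these invented numbers the robust-satisficer stays roughly twice as long in the patch as the rate-maximizer: the direction in which observed PRTs deviate from energy-maximizing predictions.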
</div><div style="text-align: justify;">This conclusion - that satisficing predicts observed foraging times better than maximizing - is tentative and preliminary (like most scientific conclusions). Nonetheless, it seems to hold a grain of truth, and it suggests an interesting idea. Consider the following syllogism.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">1. Evolution selects those traits that enhance the chance of survival.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">2. Animals seem to have evolved strategies for foraging which satisfice (rather than maximize) the energy intake.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">3. Hence satisficing seems to be competitively advantageous. Satisficing seems to be a better bet than maximizing.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Unlike my psychologist colleague Barry Schwartz, we are not talking about happiness or emotional satisfaction. We're talking about survival of dung flies or blue jays. It seems that aiming to do good enough, but not necessarily the best possible, is the way the world is made.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">And this brings me to the suggestion that (even) God is a satisficer. The word "good" appears quite early in the Bible: in the 4th verse of the 1st chapter of Genesis, the very first book: "And God saw the light [that had just been created] that it was good...". At this point, when the world is just emerging out of <i>tohu v'vohu</i> (chaos), we should probably understand the word "good" as a binary category, as distinct from "bad" or "chaos". The meaning of "good" is subsequently refined through examples in the coming verses. God creates dry land and oceans and sees that it is good (1:10). Grass and fruit trees are seen to be good (1:12). The sun and moon are good (1:16-18). Swarming sea creatures, birds, and beasts are good (1:20-21, 25).</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">And now comes a real innovation. God reviews the entire creation and sees that it is <i>very</i> good (1:31). It turns out that goodness comes in degrees; it's not simply binary: good or bad. "Good" requires judgment; ethics is born. But what particularly interests me here is that God's handiwork isn't excellent. Shouldn't we expect the very best? I'll leave this question to the theologians, but it seems to me that God is a satisficer.</div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com9tag:blogger.com,1999:blog-9140612503596105113.post-32940752768403714572011-08-09T13:21:00.003+03:002011-09-20T10:51:43.595+03:00No-Failure Design and Disaster Recovery: Lessons from Fukushima<div style="text-align: justify;">One of the striking aspects of the early stages of the nuclear accident at Fukushima-Daiichi last March was the nearly total absence of disaster recovery capability. For instance, while Japan is a super-power of robotic technology, the nuclear authorities had to import robots from France for probing the damaged nuclear plants. Fukushima can teach us an important lesson about technology.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The failure of critical technologies can be disastrous. The crash of a civilian airliner can cause hundreds of deaths. The meltdown of a nuclear reactor can release highly toxic isotopes. Failure of flood protection systems can result in vast death and damage. Society therefore insists that critical technologies be designed, operated and maintained to extremely high levels of reliability. We benefit from technology, but we also insist that the designers and operators "do their best" to protect us from their dangers.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Industries and government agencies who provide critical technologies almost invariably act in good faith for a range of reasons. Morality dictates responsible behavior, liability legislation establishes sanctions for irresponsible behavior, and economic or political self-interest makes continuous safe operation desirable.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The language of performance-optimization - not only doing our best, but also achieving the best - may tend to undermine the successful management of technological danger. A probability of severe failure of <a href="http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr1829/"><span class="Apple-style-span" style="color: blue;">one in a million</span></a> per device per year is exceedingly - and very reassuringly - small. When we honestly believe that we have designed and implemented a technology to have vanishingly small probability of catastrophe, we can honestly ignore the need for disaster recovery.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Or can we?</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Let's contrast this with an ethos that is consistent with a thorough awareness of the potential for adverse surprise. We now acknowledge that our predictions are uncertain, perhaps highly uncertain on some specific points. We attempt to achieve very demanding outcomes - for instance vanishingly small probabilities of catastrophe - but we recognize that our ability to reliably calculate such small probabilities is compromised by the deficiency of our knowledge and understanding. We robustify ourselves against those deficiencies by choosing a design which would be acceptable over a wide range of deviations from our current best understanding. (This is called "<a href="http://www.technion.ac.il/yakov/IGT/faqs01.pdf"><span class="Apple-style-span" style="color: blue;">robust-satisficing</span></a>".) Not only does "vanishingly small probability of failure" still entail the possibility of failure, but our predictions of that probability may err.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Acknowledging the need for disaster recovery capability (DRC) is awkward and uncomfortable for designers and advocates of a technology. We would much rather believe that DRC is not needed, that we have in fact made catastrophe negligible. But let's not conflate good-faith attempts to deal with complex uncertainties, with guaranteed outcomes based on full knowledge. Our best models are in part wrong, so we robustify against the designer's bounded rationality. But robustness cannot guarantee success. The design and implementation of DRC is a necessary part of the design of any critical technology, and is consistent with the strategy of robust satisficing.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">One final point: moral hazard and its dilemma. The design of any critical technology entails two distinct and essential elements: failure prevention and disaster recovery. What economists call a 'moral hazard' arises because the failure-prevention team might rely on the disaster-recovery team, and vice versa. Each team might, at least implicitly, depend on the capabilities of the other team, and thereby relinquish some of its own responsibility. Institutional provisions are needed to manage this conflict.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The alleviation of this moral hazard entails a dilemma. Considerations of failure prevention and disaster recovery must be combined in the design process. The design teams must be aware of each other, and even collaborate, because a single coherent system must emerge. But we don't want either team to relinquish any responsibility. On the one hand we want the failure prevention team to work as though there is no disaster recovery, and the disaster recovery team should presume that failures will occur. On the other hand, we want these teams to collaborate on the design.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">This moral hazard and its dilemma do not obviate the need for both elements of the design. Fukushima has taught us an important lesson by highlighting the special challenge of high-risk critical technologies: design so that failure cannot occur, and prepare to respond to the unanticipated.</div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com4tag:blogger.com,1999:blog-9140612503596105113.post-39202738306089909132011-08-09T13:09:00.003+03:002011-10-09T14:06:35.429+02:00The Innovation Dilemma<div style="text-align: justify;">"If it ain't broken, don't fix it." Sound advice, but limited to situations where "fixing it" only entails restoring past performance. In contrast, innovations entail substantive improvements over the past. Innovations are not just corrections of past mistakes, but progress towards a better future.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">However, innovations often present a challenging dilemma to decision makers. Many decisions require choosing between options, one of which is potentially better in outcome but markedly more uncertain. In these situations the decision maker faces an "innovation dilemma."</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The innovation dilemma arises in many contexts. Here are a few examples.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>Technology.</b> New and innovative technologies are often advocated because of their purported improvements on existing products or methods. However, what is new is usually less well-known and less widely tested than what is old. The range of possible adverse (or favorable) surprises of an innovative technology may exceed the range of surprise for a tried-and-true technology. The analyst who must choose between innovation and convention faces an innovation dilemma.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>Investment.</b> The economic investor faces an innovation dilemma when choosing between investing in a promising but unknown new start-up and investing in a well-known existing firm.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>Auction.</b> "Nothing ventured, nothing gained" is the motto of the risk-taker, while the risk-avoider responds: "Nothing ventured, nothing lost". The innovation dilemma is embedded in the choice between these two strategies. Consider for example the "winner's curse" in auction theory. You can make a financial bid for a valuable piece of property, which will be sold to the highest bidder. You have limited information about the other bidders and about the true value of the property. If you bid high you might win the auction but you might also pay more than the property is worth. Not bidding is risk-free because it avoids the purchase. The choice between a high bid and no bid is an innovation dilemma.</div><div style="text-align: justify;"><br />
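The winner's curse lends itself to a quick Monte Carlo sketch. All the numbers below (number of bidders, common value, noise range) are invented: when every bidder naively bids his own noisy but unbiased estimate of the common value, the winner is by construction the most optimistic estimator, and on average overpays:

```python
import random

def average_winner_profit(n_bidders=5, true_value=100.0, noise=10.0,
                          trials=20_000, seed=1):
    """Monte Carlo sketch of the winner's curse (numbers invented).

    Each bidder observes the common true value plus symmetric noise and
    naively bids that estimate; the highest bidder wins and pays her bid.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bids = [true_value + rng.uniform(-noise, noise)
                for _ in range(n_bidders)]
        total += true_value - max(bids)  # winner's profit: value minus own bid
    return total / trials

# Each estimate is unbiased, yet the winner loses money on average,
# because winning selects for the most over-optimistic estimate.
avg_profit = average_winner_profit()
```

This is why sophisticated bidders shade their bids downward, and why "no bid" can be the robust choice when the information is poor.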
</div><div style="text-align: justify;"><b>Employer decision.</b> An employer must decide whether or not to replace a current satisfactory employee with a new candidate whose score on a standardized test was high. A high score reflects great ability. However, the score also contains a random element, so a high score may result from chance, and not reflect true ability. The innovation dilemma is embedded in the employer's choice between the current adequate employee and a high-scoring new candidate.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>Natural resource exploitation.</b> Permitting the extraction of offshore petroleum resources may be productive in terms of petroleum yield but may also present officials with significant uncertainty about environmental consequences.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>Public health.</b> Implementation of a large-scale immunization program may present policy officials with worries about uncertain side effects.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>Agricultural policy.</b> New technologies promise improved production efficiency or new consumer choices, but with uncertain benefits and costs and potential unanticipated adverse effects resulting from use of manufactured inputs such as fertilizers, pesticides, and machinery, and, more recently, genetically engineered seed varieties and information technology. (I am indebted to L. Joe Moffitt and Craig Osteen for these examples in natural resources, public health and agriculture.)</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">An essay like this one should - according to custom - end with a practical prescription: What to do about the innovation dilemma? You need to make a decision - a choice between options - and you face an innovation dilemma. How to choose? All I'll say is that the first step is to identify what you need to achieve from this decision. Recognizing the vast uncertainties which accompany the decision, choose the option which achieves the required outcome over the largest range of uncertain contingencies.</div><div style="text-align: justify;"><br />
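That prescription can be sketched in miniature. What follows is an info-gap-style caricature with invented numbers, not a formal treatment: give each option a nominal reward and a sensitivity to error, define robustness as the greatest error horizon at which the required outcome is still met, and watch the preferred option reverse as the requirement grows:

```python
# Toy robust-satisficing comparison of two options (numbers invented).
# Each option promises a nominal reward x_hat, but the true reward may
# fall short; at error horizon a the worst case is x_hat - s * a.  The
# robustness of an option, given a required reward req, is the largest
# horizon at which the requirement is still met:
#     alpha(req) = (x_hat - req) / s,  or zero if the nominal fails.
def robustness(x_hat, s, req):
    return max(0.0, (x_hat - req) / s)

CONVENTIONAL = (2.0, 1.0)  # modest promise, well understood
INNOVATIVE = (4.0, 3.0)    # better promise, far more uncertain

# For a modest requirement the conventional option is more robust;
# for a demanding requirement the innovation is.  The preference
# reversal as the requirement grows is the innovation dilemma in
# quantitative form.
alpha_conv_low = robustness(*CONVENTIONAL, req=0.5)
alpha_innov_low = robustness(*INNOVATIVE, req=0.5)
alpha_conv_high = robustness(*CONVENTIONAL, req=2.5)
alpha_innov_high = robustness(*INNOVATIVE, req=2.5)
```

The choice thus depends on what you need, not only on what you might get.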
</div><div style="text-align: justify;">If you want more of an answer than that, consult your favorite decision theory (like <a href="http://info-gap.com/"><span class="Apple-style-span" style="color: blue;">info-gap theory</span></a>, for instance).</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">I will conclude by drawing a parallel between the innovation dilemma and one of the oldest quandaries in political philosophy. In <i>The Evolution of Political Thought</i> C. Northcote Parkinson explains the historically recurring tension between freedom and equality.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>Freedom.</b> People have widely varying interests and aptitudes. Hence a society that offers broad freedom for individuals to exploit their abilities, will also develop a wide spread of wealth, accomplishment, and status. Freedom enables individuals to explore, invent, discover, and create. Freedom is the recipe for innovation. Freedom induces both uncertainty and inequality.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>Equality.</b> People have widely varying interests and aptitudes. Hence a society that strives for equality among its members can achieve this by enforcing conformity and by transferring wealth from rich to poor. The promise of a measure of equality is a guarantee of a measure of security, a personal and social safety net. Equality reduces both uncertainty and freedom.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The dilemma is that a life without freedom is hardly human, but freedom without security is the jungle. And life in the jungle, as Hobbes explained, is "solitary, poor, nasty, brutish and short".</div><div><div style="text-align: justify;"><br />
</div></div>Yakov Ben-Haimhttp://www.blogger.com/profile/10765902456064490854noreply@blogger.com4