Sunday, August 16, 2020

Black Swan (Swedish: Svart Svan)

Black swan theory (from Wikipedia)

The phrase "black swan" derives from a Latin expression; its oldest known occurrence is from the 2nd-century Roman poet Juvenal's characterization in his Satire VI of something being "rara avis in terris nigroque simillima cygno" ("a rare bird in the lands and very much like a black swan").

When the phrase was coined, the black swan was presumed not to exist. The importance of the metaphor lies in its analogy to the fragility of any system of thought. A set of conclusions is potentially undone once any of its fundamental postulates is disproved. In this case, the observation of a single black swan would be the undoing of the logic of any system of thought, as well as any reasoning that followed from that underlying logic.

Juvenal's phrase was a common expression in 16th-century London as a statement of impossibility. The London expression derives from the Old World presumption that all swans must be white, because all historical records of swans reported that they had white feathers. In that context, a black swan was impossible, or at least nonexistent.

However, in 1697, Dutch explorers led by Willem de Vlamingh became the first Europeans to see black swans, in Western Australia. The term subsequently metamorphosed to connote the idea that a perceived impossibility might later be disproven. Taleb notes that in the 19th century, John Stuart Mill used the black swan logical fallacy as a new term to identify falsification.

Black swan events were discussed by Nassim Nicholas Taleb in his 2001 book Fooled by Randomness, which concerned financial events. His 2007 book The Black Swan extended the metaphor to events outside of financial markets. Taleb regards almost all major scientific discoveries, historical events, and artistic accomplishments as "black swans": undirected and unpredicted. He gives the rise of the Internet, the personal computer, World War I, the dissolution of the Soviet Union, and the September 11, 2001 attacks as examples of black swan events.

What we call here a Black Swan (and capitalize it) is an event with the following three attributes.

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme 'impact'. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.

I stop and summarize the triplet: rarity, extreme 'impact', and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives.

After the first recorded instance of the event, it is rationalized by hindsight, as if it could have been expected; that is, the relevant data were available but unaccounted for in risk mitigation programs. The same is true for the personal perception by individuals.
According to Taleb, the COVID-19 pandemic is not a black swan, but is considered to be a white swan: such an event has a major effect, but is compatible with statistical properties.

Coping with black swans
The practical aim of Taleb's book is not to attempt to predict events which are unpredictable, but to build robustness against negative events while still exploiting positive ones. Taleb contends that banks and trading firms are very vulnerable to hazardous black swan events and are exposed to unpredictable losses. On the subject of business, and quantitative finance in particular, Taleb critiques the widespread use of the normal distribution model employed in financial engineering, calling it a Great Intellectual Fraud. Taleb elaborates the robustness concept as a central topic of his later book, Antifragile: Things That Gain From Disorder.

In the second edition of The Black Swan, Taleb provides "Ten Principles for a Black-Swan-Robust Society".

Taleb states that a black swan event depends on the observer. For example, what may be a black swan surprise for a turkey is not a black swan surprise for its butcher; hence the objective should be to "avoid being the turkey" by identifying areas of vulnerability in order to "turn the Black Swans white".

Epistemological approach
Taleb's black swan is different from the earlier philosophical versions of the problem, specifically in epistemology, as it concerns a phenomenon with specific empirical and statistical properties which he calls "the fourth quadrant".

Taleb's problem is about epistemic limitations in some parts of the areas covered in decision making. These limitations are twofold: philosophical (mathematical) and empirical (known human epistemic biases). The philosophical problem is about the decrease in knowledge when it comes to rare events, as these are not visible in past samples and therefore require a strong a priori, or an extrapolating theory; accordingly, predictions of events depend more and more on theories as their probability becomes small. In the fourth quadrant, knowledge is uncertain and consequences are large, requiring more robustness.
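The quadrant scheme alluded to above can be sketched as a tiny classifier. The two boolean axes (thin vs. fat tails, simple vs. complex payoffs) follow Taleb's published decision map, but the function, its name, and the example classifications below are an illustrative reconstruction, not code from the source.

```python
def quadrant(fat_tailed: bool, complex_payoff: bool) -> int:
    """Sketch of Taleb's four-quadrant decision map.

    Quadrants 1-3 are domains where standard statistical methods work
    acceptably; the fourth quadrant (fat tails combined with complex
    payoffs) is where model error dominates and robustness, rather
    than prediction, is required.
    """
    if not fat_tailed:
        # Thin-tailed domain: simple payoffs -> Q1, complex payoffs -> Q2
        return 1 if not complex_payoff else 2
    # Fat-tailed domain: simple payoffs -> Q3, complex payoffs -> Q4
    return 3 if not complex_payoff else 4
```

A call such as `quadrant(fat_tailed=True, complex_payoff=True)` returns 4, flagging a domain where, as the text puts it, knowledge is uncertain and consequences are large.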

According to Taleb, thinkers who came before him who dealt with the notion of the improbable, such as Hume, Mill, and Popper, focused on the problem of induction in logic, specifically, that of drawing general conclusions from specific observations. The central and unique attribute of Taleb's black swan event is that it is high-profile. His claim is that almost all consequential events in history come from the unexpected, yet humans later convince themselves that these events are explainable in hindsight.

One problem, labeled the ludic fallacy by Taleb, is the belief that the unstructured randomness found in life resembles the structured randomness found in games. This stems from the assumption that the unexpected may be predicted by extrapolating from variations in statistics based on past observations, especially when these statistics are presumed to represent samples from a normal distribution. These concerns often are highly relevant in financial markets, where major players sometimes assume normal distributions when using value at risk models, although market returns typically have fat-tailed distributions.
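The gap between the Gaussian assumption and fat-tailed reality can be illustrated with a small simulation. The return scale (1% volatility) and the Pareto tail exponent (alpha = 3) below are arbitrary illustrative choices, not empirical market estimates:

```python
import random

random.seed(42)
N = 100_000

# Daily returns under a Gaussian model (hypothetical 1% volatility)
gaussian = [random.gauss(0.0, 0.01) for _ in range(N)]

# Daily returns with a symmetric Pareto (power-law) tail;
# alpha = 3 is an illustrative exponent, not a market estimate
fat_tailed = [random.choice((-1, 1)) * 0.005 * (random.paretovariate(3.0) - 1.0)
              for _ in range(N)]

def var_99(returns):
    """99% value at risk: the loss exceeded on roughly 1% of days."""
    return -sorted(returns)[int(0.01 * len(returns))]

def worst_loss(returns):
    """Largest single-day loss in the sample."""
    return -min(returns)

for name, r in (("gaussian", gaussian), ("fat-tailed", fat_tailed)):
    print(f"{name:10s}  VaR99 = {var_99(r):.4f}  "
          f"worst = {worst_loss(r):.4f}  "
          f"worst/VaR99 = {worst_loss(r) / var_99(r):.1f}")
```

In a run like this, the worst day in the Gaussian sample typically sits only modestly above its 99% VaR, while the fat-tailed sample produces a worst day many multiples of its VaR: exactly the loss a normal-distribution risk model fails to anticipate.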

Taleb said: "I don't particularly care about the usual. If you want to get an idea of a friend's temperament, ethics, and personal elegance, you need to look at him under the tests of severe circumstances, not under the regular rosy glow of daily life. Can you assess the danger a criminal poses by examining only what he does on an ordinary day? Can we understand health without considering wild diseases and epidemics? Indeed the normal is often irrelevant. Almost everything in social life is produced by rare but consequential shocks and jumps; all the while almost everything studied about social life focuses on the 'normal,' particularly with 'bell curve' methods of inference that tell you close to nothing. Why? Because the bell curve ignores large deviations, cannot handle them, yet makes us confident that we have tamed uncertainty. Its nickname in this book is GIF, Great Intellectual Fraud."

More generally, decision theory, which is based on a fixed universe or a model of possible outcomes, ignores and minimizes the effect of events that are "outside the model". For instance, a simple model of daily stock market returns may include extreme moves such as Black Monday (1987), but might not model the breakdown of markets following the 9/11 attacks. Consequently, the New York Stock Exchange and the Nasdaq remained closed until September 17, 2001, the most protracted shutdown since the Great Depression.[16] A fixed model considers the "known unknowns" but ignores the "unknown unknowns", a phrase made famous by a statement of Donald Rumsfeld. The term "unknown unknowns" appeared in a 1982 New Yorker article on the aerospace industry, which cites the example of metal fatigue, the cause of crashes of Comet airliners in the 1950s.

Taleb notes that other distributions are not usable with precision, but often are more descriptive, such as the fractal, power-law, or scalable distributions, and that awareness of these might help to temper expectations.
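The difference in how these tails behave is easy to compute directly. The comparison below contrasts a standard normal tail with a Pareto (power-law) tail; the exponent alpha = 3 and the threshold values are illustrative choices, not estimates from any dataset:

```python
import math

def gaussian_tail(k):
    """P(X > k) for a standard normal, with k in units of sigma."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

def pareto_tail(k, alpha=3.0):
    """P(X > k) for a Pareto tail starting at 1 with exponent alpha
    (alpha = 3 is an illustrative choice)."""
    return k ** -alpha

# The normal tail collapses super-exponentially as k grows;
# the power-law tail shrinks only polynomially (by 2**alpha per doubling)
for k in (2, 4, 8, 16):
    print(f"k = {k:2d}   normal = {gaussian_tail(k):.2e}   "
          f"pareto = {pareto_tail(k):.2e}")
```

Each doubling of the threshold divides the Pareto tail by only 2**3 = 8, while the normal tail vanishes to practical impossibility: a "16-sigma" event is essentially ruled out by the bell curve yet remains quite plausible under a power law, which is the sense in which scalable distributions are "more descriptive" of large deviations.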

Beyond this, he emphasizes that many events simply are without precedent, undercutting the basis of this type of reasoning altogether.

Taleb also argues for the use of counterfactual reasoning when considering risk.
