Negovanje netrivialnega1

Urban Kordeš
Univerza v Ljubljani, Kognitivna znanost, Slovenija
urban.kordes@guest.arnes.si

V članku poskušamo opozoriti na potrebo po vnovičnem zaupanju v individualnost, kompleksnost in intimnost neposrednega doživljanja. Pokažemo, zakaj toka zavesti ne moremo opazovati s standardi klasične (analitično-redukcionistične) paradigme. Predlagamo uravnoteženje intersubjektivne resničnosti redukcionističnih teorij z intimno resničnostjo Gestalta zavedanja, na katero — morda bolje kakor kdorkoli — opozarjajo pisatelji, pozorni zasledovalci toka zavesti.

Ključne besede: kognitivna znanost / doživljanje / tok zavesti / individualnost / soudeleženost

UDK 165.242
UDK 159.922

Uvod

Znanost že od nekdaj poganjata dva motorja, dva kreativna nemira: radovednost in strah pred negotovostjo. Tok znanstvenega razkrivanja sveta je seveda bistveno zaznamovan z mnogimi drugimi vplivi — predvsem z ekonomskimi —, vendar sta oba kreativna nemira bistvena. Nemir radovednega otroškega iskanja nas sili, da zapuščamo udobje znanega, sili nas k čudenju, k priznanju, da ne vemo in da ne razumemo. Nemir zaradi slutnje neskončne zapletenosti vesolja in naše izgubljenosti v tem nepreglednem procesu pa nas žene v urejanje, poenostavljanje, pojasnjevanje in — če smo v tem res uspešni — v poskuse napovedovanja. Če opazujemo zgodovinski tok znanstvenega prizadevanja, se zdi, da se oba nemira stalno prepletata. V nekaterih obdobjih prevladuje eden od njiju, kar pa hitro ustvari potrebo in s tem prostor za drugega. Na področju raziskovanja duševnosti v tem trenutku prevladujejo poskusi urejanja, poenostavljanja in razlage. Hkrati pa neučinkovitost nekaterih tako pridobljenih rezultatov opozarja na potrebo po negovanju kompleksnosti, četudi za ceno teoretske jasnosti.

Kognitivna znanost

Še pred nekaj desetletji je bilo razmišljanje o teoretskih modelih delovanja duševnosti omejeno na filozofske špekulacije in nekatere psihološke parcialne modele (Freud, Piaget, James).
Zanimivo je, da se preskok ni zgodil kot posledica kakšnega empiričnega odkritja. Dramatično (in nepričakovano) spremembo je prinesla nova skupna metafora, model, ki je omogočil interdisciplinarno povezavo vseh raziskovalnih disciplin, ki se na tak ali drugačen način dotikajo z duševnostjo povezanih fenomenov. Skupna metafora — kognicija kot procesiranje informacij — izhaja iz kibernetike in danes težko razumemo revolucijo v razmišljanju, ki jo je sprožila. Podobno kakor računalniki procesirajo informacije (tj. vhodne impulze v skladu s programom prevedejo v izhode), je naloga kognicijskih sistemov prevajanje dražljajev (tj. vhodnih impulzov) v vedenje (tj. v izhode sistema). T. i. informacijskoprocesni oziroma računalniški model kognicije je nenadoma omogočil skupno koncepcijo o tem, kaj se dogaja v »črni škatli« duševnih procesov. Iz te skupne koncepcije je zrasla nova znanstvena disciplina: kognitivna znanost.

V osemdesetih letih prejšnjega stoletja je bil razcvet računalniške tehnologije skupaj z novo metaforo za delovanje kognicije vir velikega zanosa. Zmožnost računalnikov, da v nekaj sekundah opravijo naloge, ki so bile celo za najbistrejše ljudi skoraj nerešljive, je zbudila splošno vero, da smo izumili orodje, s katerim bomo lahko ne le modelirali kognitivne procese, temveč tudi presegli inteligenco samih ustvarjalcev računalnikov. To obdobje je dodobra zaznamovalo iskanje (računalniških) algoritmov, ki bi lahko simulirali inteligenco. Šele ko se je izkazalo, da razumnost računalnikov ne raste proporcionalno z njihovo zmogljivostjo (oziroma da sploh ne raste), so se raziskovalci začeli resneje ukvarjati z vprašanjem, kaj sploh je inteligenca. Konec desetletja ni prinesel ne zadovoljivega odgovora na to vprašanje ne računalnikov, ki bi jim lahko pripisali »razum«.
Izkazalo se je, da je resda mogoče precej preprosto algoritmično definirati nekatera opravila, ki nam veljajo za znak visoke inteligence oziroma ki jih pripisujemo »ekspertom«, na primer ugotavljanje diagnoze iz znanih simptomov, izračun zapletenih diferencialnih enačb ali igranje šaha. Za veliko bolj nerazumljive pa so se izkazale operacije, ki jih v našem vsakdanjem življenju navadno sploh ne opazimo: proces spoznavanja okolice in odzivanja nanjo, učenje jezika in dodeljevanje pomena, ki je računalnikom seveda povsem nedosegljivo. Dermot Furlong in David Vernon sta leta 1994 ugotovila tole:

Če natančneje pomislimo, je res nenavadno pa tudi zgovorno, da je umetna inteligenca predmet resnih raziskav, ne da bi prej raziskali področje umetnega življenja — inteligenco vendar pripisujemo zgolj živim sistemom. So znanstveniki s področja umetne inteligence na tihem računali, da bo, ko bo njihov posel končan, sistem umetne inteligence že kar živ sistem? (98)

Na začetku devetdesetih let so nekateri raziskovalci začeli opozarjati, da je kognitivna znanost, utemeljena na informacijsko-procesnem modelu, v krizi (gl. Winograd in Flores; Varela, Thompson in Rosch; Furlong in Vernon). Tem raziskovalcem je skupno, da so podvomili v primernost analitično-redukcionističnega modela za raziskovanje duševnih procesov, zavesti in življenja, medtem ko je bila večina kognitivnih znanstvenikov na tihem prepričana, da je izhod iz zastoja v nadaljnji specializaciji študija kognitivnih pojavov in s tem naposled v enotni teoriji, ki da bo zadovoljivo odgovorila tudi na širša vprašanja. Kot naročeno, je v devetdesetih letih prvenstvo umetni inteligenci prevzela nevroznanost, ki je z novimi, neinvazivnimi metodami opazovanja živih možganov lahko prvič v zgodovini začela klinično raziskovati duševne procese. To je zasenčilo predstavo, da raziskovalce kognicije druži le skupen model. Celo dejstvo, da gre zgolj za model, se je pomaknilo v ozadje.
Kljub nekaterim poskusom novih metafor (konekcionizem, utelešena kognicija) je ideja zavedajočega se bitja kot procesorja zunanjih dražljajev ostala temeljni (in vse bolj samoumeven) koncept.

Problem redukcije

Kognitivna znanost je torej obdržala računalniško metaforo kot skupni model, analitično-redukcionistično metodo pa kot ustrezen raziskovalni pristop. Na duševnost lahko gledamo z vidika kemije, biologije, filozofije, antropologije ali računalniškega modeliranja. Na primer kemik se bo lotil kemijskih procesov v živem organizmu. Seveda mu ne bo uspelo opisati celotnega (kemičnega) dogajanja na mah, zato se bo osredotočil le na določen kemičen proces v določeni vrsti organizmov. Takšno razbitje problema na preprostejše komponente je glavni adut analitično-redukcionističnega pristopa: če je sistem preveč zapleten, da bi ga razumeli, ga razdelimo na manjše in preprostejše dele. Če se izkaže, da so ti deli še zmerom preveč zapleteni, jih spet razdelimo — in tako naprej, dokler ne dobimo delov, ki so dovolj preprosti, da jih lahko razumemo in opišemo. Ackoff (8) označuje redukcionizem kot »doktrino, da so vsi objekti, njihove lastnosti ter naše izkustvo in znanje o njih sestavljeni iz osnovnih nedeljivih delov«. Tiha predpostavka takšnih pogledov je, da pot do spoznanja proučevanega objekta ali pojava (nujno) vodi skozi raziskavo »osnovnih« delov. Redukcionistična predpostavka opravičuje (in celo spodbuja) poenostavitev opazovanega sistema (pojava, objekta). To razstavljanje v manj zapleteno entiteto je lahko fizično ali smiselno, vsekakor pa ne zmore brez simplifikacije — postopka zanemarjanja »nebistvenih« lastnosti. Fiziki lahko tako v eni potezi spremenijo Zemljo v »točkasto maso«. Prednost analitično-redukcionističnega pristopa je ta, da vedno prinese rezultate. Če se lotimo drobljenja opazovanega sistema, slej ko prej pridemo do sistema, s katerim znamo ravnati. Nerodno je le to, da rezultati včasih nimajo nobene zveze z začetnim problemom.
Že Wittgenstein je zaslutil, da je moč analize enostranska: celoto sveta (»vsega, kar se primeri«) lahko z analizo razstavimo in tako pridemo do »dejstev«, v nasprotni smeri pa ne gre — iz posameznih dejstev ne moremo sestaviti celostnosti sveta. Lahko sicer naberemo ogromno podatkov o posameznih delih in zelo poglobimo znanje o njih. Vsak detajl skriva neskončno novih možnosti za še bolj specializirano raziskavo. Na tej ravni lahko poiščemo vzročno-posledične odnose in identificiramo ustrezne količine in/ali pojave. A cena je pogosto v tem, da začetni problem postane nekakšna legenda, s katero vsakdanje raziskave nimajo nobene zveze. Spoznavanje novih delov sveta, četudi na račun drobljenja »celotne slike«, seveda ni nič slabega. Težava je v tem, da raziskovalce rado zamika, da bi iz rezultatov poenostavljenih raziskav sklepali na začetni problem. Takšen postopek dobro deluje v naravoslovju, na področju raziskovanja duševnosti pa ne. Poskusov sklepanja na celoto iz (sicer dobro metodološko obdelanega) drobca je nešteto. Oglejmo si dobro znane Libetove eksperimente (gl. Libet), na podlagi katerih so mnogi kognitivni nevroznanstveniki (gl. npr. Wegner) sklepali, da svobodna volja ne obstaja. Libet je v svojih poskusih primerjal čas, ko so se udeleženci poskusov »odločili«, da bodo pritisnili gumb, s časom sprožitve možganske aktivnosti, ki označuje pripravo motorične aktivnosti (v tem primeru premika prsta). Poskusi so pokazali, da se možganska aktivnost začne precej pred pojavom zavestne odločitve. Ti poskusi so celoten spekter človeškega odločanja (ki sega od delno voljnih kretenj do kompleksnih dolgoročnih življenjskih odločitev) reducirali na »odločitev« o tem, kdaj bo udeleženec pritisnil gumb (celo to, da ga bo pritisnil, je bilo odločeno s tem, da je pristal na sodelovanje pri eksperimentu).

S katere perspektive opazuje kognitivna (nevro)znanost?

Leta 1971 je Heinz von Foerster na pol za šalo, na pol zares zapisal svoj t. i.
»prvi teorem«: »Bolj temeljen ko je problem, ki ga ignoriramo, večje so možnosti za slavo in uspeh.« (von Foerster, »Responsibilities« 1) Naj se ta trditev sliši še tako cinično, drži. Na primer kognitivna nevroznanost dosega skokovit napredek zgolj zato, ker se je odpovedala spraševanju o osnovah fenomena, ki ga raziskuje, se pravi, o tem, kaj je zavest, kaj je doživljanje in kakšen je odnos med doživljajskim in telesnim. Zanemarjanje vprašanja odnosa med doživljanjem in telesnim, t. i. »težkega problema«, še posebej bije v oči, saj naj bi bila osnovna naloga kognitivne nevroznanosti prav raziskovanje nevroloških korelatov doživljajskih procesov, tj. procesov, ki so imanentno subjektivni. Na eni strani razlagalne vrzeli imamo fiziologijo, ki se dobro razume z analitično-redukcionistično metodo. Na drugi strani pa je doživljanje, živo človekovo izkustvo, vsebina zavesti — intimno in po definiciji subjektivno področje, ki se izmika posploševanju, še bolj pa analizi. Doživljanje ni lastnost, ki bi jo lahko zadostno opredelili s končnim številom diskretnih empiričnih parametrov, ampak se kaže kot kompleksen, (vase) sklenjen in zato ireduktibilen fenomen. Doživljanje je Gestalt, več kot preprosta vsota sestavnih delov. Še več, dinamičen Gestalt je, ki ga ne moremo »zamrzniti« v trenutek. Kot pravita Furlong in Vernon: »Napaka pri aplikaciji znanosti na probleme življenja in duha je v tem, da analitični redukcionizem, ki zaznamuje gledišče opazujoče zavesti, ni sposoben ujeti posebnosti organizacije, ki so lastne živim zaznavajočim bitjem.« (96) Kljub temu pa znanost nenehno poskuša zanemariti temeljni problem — subjektivnost in ireduktibilnost doživljanja —, saj se v skladu s von Foersterjevim »prvim teoremom« to zdi edina pot naprej. Zgodovina raziskovanja duševnosti niha med neuspešnimi poskusi redukcionističnega raziskovanja doživljanja (kakršen je propadli projekt nemškega introspekcionizma z začetka 20.
stoletja) in poskusi zanemarjanja obstoja (oziroma epistemološke samostojnosti) polja zavesti (kar velja za behaviorizem in seveda v nevrologiji priljubljeno razlago doživljanja kot epifenomena). Ker je, kot rečeno, osnovna naloga nevrološko podprte kognitivne znanosti ravno iskanje fizioloških korelatov doživljanja, se ne moremo povsem odreči raziskovanju doživljanja. Zato je kognitivna znanost polna poskusov prevajanja doživljajskega Gestalta na oprijemljivejše enote, bodisi na vedenje ali pa na fiziologijo. Antonio Damasio v svoji veliki teoriji čustev priznava pomembnost doživljajske (prvoosebne) perspektive. Vendar je v nasprotju s fiziološko perspektivo nikdar ne poskuša sistematično raziskati in pojasniti njene povezave z ostalimi (fiziološkimi) sestavinami. Drug sodoben zgovoren poskus tlačenja neulovljive kompleksnosti doživljanja v pregledne kategorije, dostopne tretjeosebni perspektivi, so poskusi afektivnega računalništva, novega cvetočega področja umetne inteligence. Profesor Nicu Sebe poroča o velikem uspehu novega algoritma za interpretacijo slik, s katerim naj bi mu uspelo dešifrirati čustva Mone Lize: natančna razdelitev čustev Mone Lize je v skladu z novim programjem takšna: 83% sreče, 9% gnusa, 6% strahu in 2% jeze. Seveda je samoumevno, da profesor Sebe objavlja v najvišje indeksiranih znanstvenih revijah. Primerjajmo to z odstavkom iz Gospe Dalloway: »Se še spominjaš jezera?«, je rekla nenadoma pod pritiskom čustva, ki jih je zajelo srce, ji stiskalo grlo in ustnice v čudnem krču, ko je izgovorila besedo »jezero«. Bila je otrok, ki trosi racam krušne drobtine, stoječ med očetom in materjo, v istem času pa odrasla ženska, ki prihaja k staršem tja k jezeru, držeč vse svoje življenje v rokah, in ko se jim je bližala, je življenje postajalo v njenih rokah vse večje in večje, dokler ni postalo célo življenje, popolno življenje, in položila ga je prednje in rekla: »To sem naredila za njega! To!« In kaj je naredila iz njega? Res, kaj? 
je premišljevala, ko je to jutro sedela zraven Petra in šivala. Ozrla se je v Petra Walsha; njen pogled, ki je premeril ves ta čas in vse to čustvo, se ga je boječe dotaknil, obvisel solzen na njem, potem pa vstal in zletel proč kakor kak ptič, ki se dotakne veje in vstane in odleti. Preprosto si je obrisala oči. (Woolf 73)

S temi primeri sem poskusil pokazati obstoj dveh področij: področja, ki ga lahko uspešno raziskujemo z analitično-redukcionističnim pristopom, in področja, ki se takemu pristopu izmuzne, kakor drobna mivka steče skoz sito. V nadaljevanju želim natančneje pokazati, v čem je razlika med področjema, ki ju bom z von Foersterjevo pomočjo imenoval trivialno in netrivialno. Pokazati želim, da sta ti dve »področji« v resnici dve stališči, s katerih lahko opazujemo svet.

Trivialno

Za zdaj odmislimo zadnji pomislek o »področjih« in si oglejmo razliko med trivialnim in netrivialnim, kakor da bi se pojavi zares delili na trivialne in na netrivialne. Trivialni sistemi so tisti, ki si jih lahko predstavljamo kot »stroje« (v Turingovem pomenu), ki predelujejo (procesirajo) vhode v izhode. Takšne sisteme lahko torej modeliramo tako, da poiščemo t. i. prehodno funkcijo (transfer function) med neodvisnimi in odvisnimi spremenljivkami (vhodi in izhodi), kar je, kot rečeno, temeljni metodološki princip naravoslovne znanosti. Rečeno v računalniškem izrazju, prehodno funkcijo običajno zamenjujeta pojma algoritem ali program, ki zajameta zapis zaporedja korakov, ki jih mora stroj narediti, da se bo adekvatno odzval na dražljaj. Opisljivost z metaforo stroja je zelo pomembna lastnost sistema. Tisti sistemi, ki jih lahko opišemo s primernim strojem, so tudi kandidati za obdelavo z analitično-redukcionistično metodo. Občutki gotovosti, zanesljivosti in nezgrešljivosti, ki jih zbuja razlagalna shema vzrok — operator — posledica, so postali ključni za zahodno filozofsko in znanstveno misel. V različnih disciplinah ima ta shema različna imena.
V fiziki gre za shemo vzrok — naravni zakoni — posledica, v biologiji za dražljaj — organizem — odziv, v nekaterih delih psihologije pa za shemo motivacija — osebnost — vedenje. Zgodovina sheme sega vsaj do Aristotelovih logičnih silogizmov, zlasti do sheme deduktivnega sklepanja: velika premisa — mala premisa — sklep. Z vpeljavo matematike je naravoslovno-matematična paradigma (sodobna shema je: x — f — y) izostrila svoje orodje opisa, zato ni več le deskriptivna, ampak omogoča tudi napovedovanje. Prav možnost napovedovanja je omogočila naravoslovju tako skokovit napredek in mu prinesla današnjo moč. Prenosna funkcija je lahko tudi mnogo bolj zapletena; lahko je celo nelinearna. Ne glede na zapletenost pa jo lahko predstavimo s preprostim diagramom:

Slika 1: Trivialni sistem

V splošnem za določitev prehodne funkcije nekega trivialnega sistema potrebujemo toliko preskusov, kolikor je razločljivih vhodnih stanj. Trivialni sistemi so (a) neodvisni od časa (ahistorični) in zgodovine interakcij, (b) analitično določljivi in zato napovedljivi. Von Foerster pravi:

Ni težko razumeti velike naklonjenosti zahodne kulture trivialnim strojem. Navedel bi lahko ogromno primerov trivialnih strojev. Ko kupimo avto, z njim dobimo tudi trivializacijsko potrdilo, ki nam zagotavlja, da bo avto ostal trivialen stroj vsaj naslednjih 100 ali 1000 milj ali naslednjih pet let. In če avto nenadoma postane nezanesljiv, ga peljemo k trivializatorju, ki ga bo spravil nazaj v red. Naša ljubezen do trivialnih strojev gre tako daleč, da svoje otroke, ki so običajno zelo nepredvidljiva bitja, pošiljamo v trivializacijske institucije, zato da njihov odgovor na vprašanje »koliko je 2 krat 3?« ne bi bil »zeleno« ali »toliko sem jaz star«, ampak »6«. (von Foerster, »Uncle« 8)

Netrivialno

Razumemo torej lahko veliko hrepenenje po trivialnem (ponovljivem, napovedljivem), tako v vsakdanjem rokovanju s svetom kot v znanstvenem diskurzu.
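Razliko med trivialnim in netrivialnim strojem si lahko ponazorimo s kratko skico v Pythonu. Zgled je zgolj ilustrativen dodatek in ni del von Foersterjevega zapisa; imena funkcij in konkretne formule so izmišljeni za ponazoritev: trivialni stroj isti vhod vedno prevede v isti izhod, netrivialni stroj pa ima notranje stanje, zato je njegov odziv odvisen od zgodovine interakcij.

```python
# Ilustrativna skica (ni iz izvirnega besedila): trivialni in netrivialni
# stroj v von Foersterjevem smislu. Vse vrednosti so izmišljene.

def trivialni_stroj(x):
    """Fiksna prehodna funkcija: isti vhod -> vedno isti izhod."""
    return x * 2  # npr. f(x) = 2x

class NetrivialniStroj:
    """Izhod je odvisen od vhoda IN notranjega stanja,
    ki se z vsako interakcijo spremeni (odvisnost od zgodovine)."""
    def __init__(self):
        self.stanje = 0
    def odziv(self, x):
        y = x + self.stanje                  # izhod odvisen od trenutnega stanja
        self.stanje = (self.stanje + x) % 7  # stanje se ob vsakem vhodu spremeni
        return y

print(trivialni_stroj(3), trivialni_stroj(3))  # 6 6 — ponovljivo, napovedljivo

s = NetrivialniStroj()
print(s.odziv(3), s.odziv(3))  # 3 6 — isti vhod, različna izhoda
```

Za določitev prehodne funkcije trivialnega stroja torej zadošča toliko preskusov, kolikor je razločljivih vhodov; pri netrivialnem stroju pa isti preskus ob različnih časih vrne različne rezultate, kar ustreza von Foersterjevi oznaki sistema kot odvisnega od zgodovine in analitično nedoločljivega.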
Zanimivo pa je, da nihče — niti znanstveniki, ki posvečajo ves svoj kreativni potencial trivializaciji — ne mara jemati sebe kot trivialen stroj. Ko sem se pogovarjal s kolegom računalničarjem, ki se ukvarja z avtomatičnim prepoznavanjem čustev s slik (kakor omenjeni profesor Sebe), se je strinjal, da procentualna razdelitev čustev nima za njegovo vsakdanje doživljanje nobenega pomena. Tudi von Foerster opaža to neskladnost:

Če povprašam prijatelje, se jim zdi, da so podobni netrivialnim sistemom, in nekateri tako mislijo tudi o drugih. Ti prijatelji in vsi ostali ljudje, ki naseljujejo svet, predstavljajo temeljni epistemološki problem, kajti svet, obravnavan kot velik netrivialen sistem, je odvisen od zgodovine, analitično nedoločljiv in nenapovedljiv. Kako naj pristopimo k njemu? (von Foerster, »Through« 8)

Von Foerster našteje tri strategije pristopa k temu epistemološkemu problemu: (a) ignoriranje problema, (b) trivializiranje sveta in (c) razvijanje epistemologije netrivialnosti. Najbolj priljubljeno metodo, (a), smo že omenili. Po priljubljenosti ji sledi (b), metoda, ki jo von Foerster imenuje »Laplacova rešitev«, saj naj bi »Laplace izločil iz svojih teorij vse elemente, ki bi utegnili povzročati težave: sebe, svoje sodobnike in druge netrivialne nadloge«, in nato proglasil vesolje za trivialen stroj.2 Če priznamo obstoj imanentno netrivialnih sistemov, se s tem odpovemo možnosti poznavanja pravil transformacije, prenosne funkcije, naravnih zakonov itn. Zveza med vzrokom in posledico je pri netrivialnih sistemih analitično nedoločljiva. Sam koncept linearne vzročnosti (vzrok — operator — posledica) je brez pomena. Če jemljemo svet kot netrivialen sistem, vsekakor velja Wittgensteinova propozicija 5.1361:

5.1361 Na prihodnje dogodke ne moremo sklepati iz sedanjih. Vera v vzročno zvezo je praznoverje.

Je torej možno, da linearna vzročnost kot razlagalni princip velja v določenih delih sveta, v drugih pa ne?
Vsekakor velja pri strojih, ki smo jih zgradili, in pri pojasnjevanju dovršnega dela narave, tj. tistega, ki ga zajemajo naravoslovne znanosti. Pri gradnji strojev smo namreč izbrali omrežje, v katerem so relacijska vprašanja tipa »Zakaj Y, ko X?« odločljiva. Brž ko analiziramo sistem, ga naredimo trivialnega; izbrali smo (trivialne) aksiome in na njihovih temeljih zgradili omrežje. Drugače rečeno: izbrali smo perspektivo, s katere je videti le trivialno področje. Izbira načina opazovanja oziroma raziskovanja določa, kaj bomo videli.

Čar perspektive, ki omogoča analizo in napovedovanje, je nedvomen: vodi nas do tega, da plačujemo za zagotovilo o trivialnosti naših ur, avtomobilov, letal ... Nevarno postane, ko zahtevo po trivialnosti razširimo na soljudi, na naše otroke, na družine in na večje socialne sisteme, s tem ko zmanjšamo število njihovih izbir, namesto da bi ga povečali. (von Foerster, »Through« 9)

Pri znanosti je podobno. Naravoslovni pristop spoznavanja sveta je eden vrhuncev človeškega razuma. Poskusi odrekanja takemu pristopu in celo poskusi njegove kritike so nesmiselni in neutemeljeni. Nevarno postane tedaj, ko se ravnamo po analitično-redukcionistični paradigmi tudi pri problemih, ki jim ta ni kos — na primer pri opazovanju toka doživljanja. Trivialnost je le približek. Kjer ta približek deluje, deluje tudi naravoslovni pristop. Trivializacija je podobno kakor Newtonova mehanika v fiziki zelo uspešna idealizacija, ki funkcionira v večjem delu »uporabnega« sveta. Zagotavlja varnost in stabilnost — in seveda konsenz o tem, kaj je »res« in kaj ne. S te plati lahko na klasično (analitično-redukcionistično) znanstveno metodo gledamo kot na sito, ki ločuje trivialno od netrivialnega. Iz množice vseh naših interakcij z okoljem izbira le tiste, ki ustrezajo njenim merilom. Znanstveni postopek torej ni toliko metoda raziskovanja resničnosti, kolikor postopek za izbiro področij, ki jih je mogoče trivializirati.
Udeleženost pri opazovanju toka zavesti

Sredi 20. stoletja se je fizika znašla na robu trivialnega sveta: Heisenberg je ugotovil, da meritve vplivajo na izid eksperimenta in da zato nikoli ne moremo natančno poznati vseh lastnosti opazovanega delca. Ta ugotovitev (Heisenbergovo načelo nedoločenosti) in še nekatere druge lastnosti sveta najmanjših delcev so dodobra razburile fizike. Pokazale so na možnost, da kvantnih delcev niti teoretično ne moremo dokončno poznati, opisati in napovedovati ter da je predstava o neodvisnem opazovalcu iluzija. Fiziki so se »težavi« ognili z izbiro nove perspektive: posamezni delci so neulovljivi, vedenje velikih skupin pa je ponovljivo in predvidljivo, tj. trivialno. Po t. i. københavnski interpretaciji je najbolje obravnavati kvantni svet statistično. Temu dogovoru so tedaj nasprotovali Einstein in mnogi drugi vodilni znanstveniki, in še danes mnoge jezi ideja, da je vedenje kvantnih delcev nenapovedljivo. A ker statistični pogled na kvantno fiziko očitno deluje (fiziki lahko nadaljujejo delo po ustaljenih metodah, ne da bi se morali spraševati po globljih epistemoloških osnovah svojega početja), za tovrstne neudobne ideje ni veliko prostora v uglednih fizikalnih revijah. Družboslovci, ekonomisti in psihologi so hvaležno sprejeli københavnsko rešitev: kjer je to le mogoče, se z uporabo statistike ognejo izmuzljivosti opazovanja individuov, subjektivnega. Ne smemo pa pozabiti osnovnega motiva fizikov za uvedbo statistične interpretacije, namreč spoznanja, da je opazovalec udeležen v opazovanem sistemu. V družboslovju je ignoriranje raziskovalčevega vpliva na raziskovano mnogo težje in predvsem neskončno manj uspešno. Raziskovanje toka zavesti pa se statistični interpretaciji celo povsem upira. Vsako dejanje opazovanja je vzrok spremembe polja doživljanja; v tem polju je vpliv opazovanja neposreden: opazovanje je le dodatna oblika toka zavesti. Kako povleči ločnico med trivialnim in netrivialnim?
Do kod je približek trivialnega še sprejemljiv? Ločnica poteka na meji med tistimi deli, ki jih lahko uspešno opišemo kot od opazovalca ločene, in onimi, ki ne dopuščajo več takšne idealizacije. Netrivialno področje se začenja tam, kjer opustimo približek, ki ga je izračunal distancirani opazovalec, in sprejmemo stališče udeleženca. S tem pa sprejmemo tudi soodgovornost za svet, saj sta vsako dejanje in celo vsaka odločitev za perspektivo opazovanja dejanje kreacije.

Negovanje netrivialnega

Težnja po trivialnem izhaja iz želje po predvidljivem, varnem, urejenem svetu. Kot sem omenil na začetku, je težnja po urejanju, razumevanju, postavljanju v odnos, tj. težnja po trivializaciji, ena od glavnih pogonskih sil znanstvenega napredka. Strah pred negotovostjo nepredvidljivega je prav tako pomemben kakor njegov komplement: radovednost in čudenje netrivialnemu toku doživljanja, ki teče skoz zavest in ki je zavest. Nasprotna pola se dopolnjujeta, zato je zelo pomembno, da sta čimbolj uravnotežena: izbruhi žive, pogumne, subverzivne radovednosti morajo biti pomirjeni z modro in konservativno težnjo po urejanju. Pomirjeni, a ne zatrti. Ravno zgodovina znanstvenega prizadevanja nas uči skromnosti, in sicer tudi ob skokovitem napredku katere od disciplin. V najboljšem primeru lahko proizvedemo delovno teorijo (prehodno funkcijo), ki povezuje nekatere podatke o opazovanem sistemu — o sistemu, ki smo ga skonstruirali z izbiro perspektive opazovanja. V časih skokovitega napredka (ki smo mu prav zdaj priča na področju kognitivne nevroznanosti) se zdi, kot da nekoliko prevladuje konservativni pol. Vse prehitro pozabimo na velika vprašanja, ki smo jih morali zanemariti, da smo prišli do (delnega) uvida; prehitro verjamemo, da nam je uspelo urediti in razumeti opazovani delček sveta. Kako naj ohranimo zavedanje, da je trivialno le približek? Bi se morda morali zateči k umetnosti? Morda nas lahko literarnovedne raziskave toka zavesti spomnijo na polnost in nedeljivost doživljanja.
S tem ne želim reči, da bi lahko branje Joycea zamenjalo raziskave doživljanja. Od umetnikov ne smemo pričakovati sistematičnega proučevanja resničnosti. Umetnik je neodvisen od omejitev resničnosti in od sistematičnosti raziskovanja. Njegova svoboda izhaja iz njegove predanosti ustvarjalnemu gonu. Sistematično proučevanje resničnosti je znanstvenikov način iskanja svobode: njegova vztrajna in neomajna zvestoba empiričnim podatkom ga osvobaja zmede. Zatočišče išče s tem, ko poskuša postaviti mnenja in osebna stališča v oklepaje (v čemer ni nikdar povsem uspešen). Vsakdo naj torej ostane predan svojemu iskanju, svojemu načinu doseganja svobode. Kot znanstvenik pa vendarle slutim, da književnost prinaša pomemben nauk: nauk o netrivialnosti doživljajskega sveta, o kompleksnem, nedeljivem, prelivajočem se Gestaltu, o samonanašajoči se naravi zavesti in o naši nepreklicni odvisnosti od naše osebne zgodovine. Nekatera branja pa prinašajo še en opomin: da doživljajska pokrajina sega precej dlje od utrjenih poti, ki jih ubiramo v vsakdanjem življenju. Na vprašanje »Kako je biti človek?« nismo še niti resno začeli odgovarjati. Udobje trivialnega, ki ni bilo še nikoli tako mamljivo kakor prav zdaj — v obdobju funkcionalno usmerjene družbe —, nas z jeklenimi sponami drži v vsakdanjem, avtomatičnem, znanem. Sili nas verjeti, da poznamo svet in sebe. Sleherno opozorilo, da obstajajo doživljajske pokrajine zunaj utečenega toka, je dragoceno; še več, življenjskega pomena je, in sicer ne glede na njegov vir. Vsak poskus pobega iz doživljajske trivialnosti je bojevniška gesta.

[P]ot bojevnika je tako zelo nevarna zato, ker je ravno nasprotna življenjskemu položaju sodobnega človeka. Sodobni človek je zapustil kraljestvo neznanega in skrivnostnega ter se ustalil v kraljestvu funkcionalnega. Obrnil je hrbet preroškemu in zmagoslavnemu svetu in namesto tega sprejel svet dolgočasja.
(Castaneda 116)

Podobno je s potjo umetnika — pričevalca o človeškem doživljanju, poročevalca o človeški kompleksnosti, nelinearnosti, netrivialnosti.

OPOMBI

1 Navdih za pisanje tega prispevka je v veliki meri posledica (pre)kratke korespondence z dr. Sowon Park. Iskreno sem ji hvaležen za to, da me je spomnila na Virginio Woolf in njeno pogumno vztrajanje v toku zavesti.

2 Laplace leta 1814 piše: »Nadčloveškemu bitju, ki bi mu bila znana stanja vseh delcev [...] nič ne bi bilo negotovo; prihodnost in preteklost bi mu bila razkrita.« (von Foerster, »Through« 9)

LITERATURA

Ackoff, Russell L. Redesigning the Future: A Systems Approach to Societal Problems. New York: Wiley, 1974.
Castaneda, Carlos. Notranji ogenj. Prev. Janez Urh. Ljubljana: Gnosis, 1995.
Damasio, Antonio. Looking for Spinoza. London: Vintage Books, 2004.
Furlong, Dermot, in David Vernon. »Reality Paradigms, Perception, and Natural Science: The Relevance of Autopoiesis«. Autopoiesis and Perception 25.8 (1994): 95—120.
Libet, Benjamin. »Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action«. Behavioral and Brain Sciences 8 (1985): 529—566.
Sebe, Nicu. »Software Decodes Mona Lisa's Enigmatic Smile«. New Scientist (17 Dec. 2005): 25.
Varela, Francisco J., Evan Thompson in Eleanor Rosch. The Embodied Mind. Cambridge (MA) in London: MIT Press, 1991.
von Foerster, Heinz. »Responsibilities of Competence«. Journal of Cybernetics 2.2 (1972): 1—6.
---. »Through the Eyes of the Other«. Research and Reflexivity. Ur. Frederick Steier. London: Sage, 1991. 63—75.
---. »'Uncle Ludwig' and Other Wittgensteiniana«. [Neobjavljeno predavanje s simpozija »Wittgensteinseminara«, Skjoldnu, 24. maj 1992.]
Wegner, Daniel M. The Illusion of Conscious Will. Cambridge (MA): MIT Press, 2002.
Winograd, Terry, in Fernando Flores. Understanding Computers and Cognition. Norwood (NJ): Ablex Publishing, 1986.
Wittgenstein, Ludwig. »Logisch-Philosophische Abhandlung«. Annalen der Naturphilosophie 14 (1921): 185—262.
Woolf, Virginia. Gospa Dalloway. Prev. Jože Udovič. Ljubljana: Cankarjeva založba, 1987.

Tending to the Non-Trivial1

Urban Kordes
University of Ljubljana, Cognitive Science, Slovenia
urban.kordes@guest.arnes.si

The paper aims to acknowledge the need for renewing the trust in the individuality, complexity and intimacy of direct experience. It delineates the limitations of the analytical-reductionist paradigm in the observation of the flow of consciousness, and suggests balancing the intersubjective reductionist approximation with the intimate reality of the gestalt of awareness demonstrated, perhaps better than by anybody else, by literary writers, those careful pursuers of the flow of consciousness.

Keywords: cognitive science / experience / flow of consciousness / individuality / participation

UDK 165.242
UDK 159.922

Introduction

Science has always been driven by two motors, two types of creative unrest: curiosity and the fear of uncertainty. There have of course been numerous other influences — mostly economic ones — which substantially marked the flow of scientific discovering, but these two types of creative unrest are essential. The unrest of child-like curiosity in searching leads us to abandon the comfortable realm of the known; it forces us to wonder, to admit that we do not know and do not understand. And the unrest caused by the infinite complexity of the universe and by being lost in this colossal process leads us to organising, simplifying, clarifying and — if we are successful — attempting to make predictions. If one observes the historical flow of scientific advances, one feels that both types of unrest constantly intertwine and balance one another. In a given period, one might be dominant, which somehow produces a gap that only the other can fill. The area of research into cognitive phenomena is currently dominated by attempts to organise, simplify and explain.
The inadequacy of the results of such attempts brings forth a growing need to acknowledge the complexity of the studied phenomena, even at the expense of theoretical clarity.

Primerjalna književnost, Volume 35, Number 2, Ljubljana, August 2012

Cognitive science

Up to a few decades ago, thinking about theoretical models of the functioning of mind was limited to philosophical speculation and a few partial psychological models (Freud, Piaget, James). Interestingly enough, the eventual breakthrough did not occur as a consequence of an empirical discovery. The dramatic (and unexpected) change was introduced due to a new common metaphor, a model that allowed various areas of research on mind-related phenomena to venture interdisciplinary cooperation. This common metaphor — cognition as information processing — originated in cybernetics. Today it is hard to fathom the revolution in thinking it has triggered. Just like computers process information (that is, translate input impulses into output according to their programs), the task of cognitive systems is the translation of stimuli (input impulses) into behaviour (the system's outputs). The so-called information processing model or computer model of cognition suddenly allowed for a common conception of what goes on inside the 'black box' of mental processes. And from this common conception a new scientific discipline emerged: cognitive science.

In the 1980s, the development in computer technology, combined with the new metaphor for the functioning of cognition, stirred great excitement. The ability of computers to perform, in a matter of seconds, tasks which even the smartest people found virtually impossible to do, produced the overwhelming belief that an instrument had been invented which would enable us not only to model cognitive processes, but also to overcome the intelligence of the very creators of computers. This period was marked by the search for (computer) algorithms which could simulate intelligence.
It was only when it became clear that the intelligence of computers does not grow proportionally to their performance (or rather that it does not grow at all) that researchers started asking questions about what intelligence actually was. The end of the decade brought neither a satisfactory answer to that question nor computers that could be deemed 'intelligent'. It turned out that it was indeed rather simple to define, in terms of algorithms, certain operations which we consider to be indicative of high intelligence, or rather, those which we ascribe to 'experts': determining a diagnosis on the basis of known symptoms, calculating complex differential equations, playing chess, etc. What turned out to be much more incomprehensible were operations that we usually do not take any notice of at all in our daily lives: the process of getting to know one's surroundings and reacting to them, acquiring language and, above all, assigning meaning, a completely impossible task for a computer. In 1994, Dermot Furlong and David Vernon proposed the following conclusion:

Actually, when you ponder on it, it is indeed strange, and telling, that artificial intelligence should have been a subject of serious, detailed study before artificial life, for, actually, we never assign intelligence to anything other than living systems. Did the artificial intelligencers simply but quietly assume that when their job was done their artificial intelligence systems would in fact be living systems? (98)

In the early nineties, several researchers began pointing out that cognitive science based on the information processing model had entered a crisis (see Winograd and Flores; Varela, Thompson and Rosch; Furlong and Vernon).
These researchers were the first to suspect that the analytical-reductionist model was perhaps not appropriate for research on mental processes, consciousness and life, while most cognitive scientists shared the tacit assumption that the way out of the standstill lay in further specialisation of the study of cognitive phenomena, which would at some point lead to a unified theory that could provide satisfactory answers to questions of a wider scope. In the nineties, the centre of attention shifted from artificial intelligence to neuroscience, which applied new non-invasive methods to the observation of the brain in vivo, embarking for the first time in history on clinical research into mental processes. This overshadowed the fact that the only thing cognition researchers shared was a common model. Even the fact that it was just a model shifted into the background. Despite some attempts at introducing new metaphors such as connectionism and embodied cognition, the idea of the conscious being as a processor of external stimuli remained the fundamental (and increasingly self-evident) concept.

The problem of reduction

Thus, the computer metaphor remained the common model of cognitive science, and the analytical-reductionist method remained the appropriate research approach. Mind and cognitive phenomena can be viewed from the standpoint of chemistry, biology, physiology, anthropology or computer modelling. A chemist, for instance, may consider the chemical processes that occur inside a living organism. Of course one is unable to describe the entire (chemical) turmoil all at once, so one must focus on a specific chemical process in a specific species of living organism. This breaking-down of the problem into simpler components is the main argument in favour of the analytical-reductionist approach: if a system is too complex for us to understand, we should break it down into smaller or simpler parts.
If it turns out that these parts are still too complex, we should continue breaking them down until we come to parts that are simple enough for us to understand and describe. According to Ackoff (8), this reductionism 'is a doctrine that maintains that all objects and events, their properties and our experience and knowledge of them are made up of ultimate elements, indivisible parts'. The tacit assumption of such approaches is that the path to comprehending an object or phenomenon of research (necessarily) leads through the study of its 'basic' elements. The reductionist assumption justifies (and even encourages) the simplification of the system (that is, the phenomenon or object) under observation. This breaking-down into less complex entities can occur either at the physical or at the conceptual level, but it can never avoid simplification — the process of neglecting 'unessential' properties. And this is what enables physicists to transform the Earth into a 'punctiform mass' at a moment's notice. The advantages of the analytical-reductionist approach lie in the fact that it always brings results. If we embark on the fragmentation of the observed system, we are sooner or later left with a system that we are able to handle. The only inconvenience is that the results sometimes bear no relation whatsoever to the initial problem. It was Wittgenstein who first sensed that the power of analysis was a one-way street: the wholeness of the world ('everything that happens') can be broken down through analysis in order to gain 'facts', but this process does not work in the opposite direction — one cannot combine individual facts back into the wholeness of the world. One might be able to collect enormous quantities of data about individual parts, expanding one's knowledge of them. Each detail contains an infinite number of new possibilities for even more specific research.
At this level one is able to seek out relations of cause and effect and to identify corresponding quantities and/or phenomena. But this comes at the price of distancing oneself from the original problem, which often dims into a kind of myth that no longer relates in any way to everyday research. Of course there is no harm in learning about new parts of the world, even at the expense of breaking down 'the big picture'. The problem is that researchers are often tempted to draw inferences about the original research question on the basis of results obtained by fragmented, simplified research. While such methods work admirably well in the natural sciences, this is not the case in mind research. I could quote numerous instances of inferring about the whole from fragments (albeit methodologically well processed fragments). Let me give the example of the well-known Libet experiments (see Libet), which led many cognitive scientists (see Wegner) to conclude that there was no such thing as free will. In his experiments, Libet compared the time at which participants 'decided' to push a button with the time of the firing of the brain activity which signified the preparation of motor activity (in this case, the movement of the finger). The experiments demonstrated that brain activity significantly preceded the occurrence of the conscious decision. These experiments reduced the entire spectrum of human decision-making (which covers everything from semi-conscious movements to complex long-term decisions affecting our entire lives) to the decision about when a subject should push a button (the decision to push being already made upon agreeing to take part in the experiment).

What is the perspective of (neuro)scientific observation?

In 1971, Heinz von Foerster wrote down his tongue-in-cheek 'first theorem': 'The more profound the problem that is ignored, the greater are the chances for fame and success.' (von Foerster, 'Responsibilities' 1) However cynical this remark may seem, it is nonetheless true.
The tremendous progress in, say, cognitive neuroscience can be attributed exclusively to the fact that it gave up asking questions about the fundaments of the phenomenon it is studying: about what consciousness is, what it means to experience, and what the relationship between the experiential and the physical is. Neglecting the question of the relationship between the experiential and the physical, the so-called 'hard problem', is especially problematic, since the basic task of cognitive neuroscience is supposed to be research on the neurological correlates of experiential processes. On one side of the explanatory gap we find physiology, which goes hand in hand with the analytical-reductionist method. On the other side we find lived human experience, the content of consciousness — an intimate and by definition subjective area that resists any generalisation and analysis. Experience is not a property that could be satisfactorily defined by a finite number of discrete empirical parameters. Rather, it appears to be a complex, (self-)contained and thus irreducible phenomenon. Experience is a gestalt, more than a simple sum of its components. Moreover, it is a dynamic gestalt, one that cannot simply be 'frozen' in a moment of time. As Furlong and Vernon write, 'What is wrong with our conception of science in its application to Life and Mind is that the analytic reductionism which characterizes the spectator consciousness stance can never capture organizational distinctions which characterize living or cognizing beings.' (96) Nonetheless, science constantly tries to neglect the fundamental problem — the subjectivity and irreducibility of experience — as this seems to be the only way to get anywhere, precisely in accordance with von Foerster's prediction.
The history of research on the mind oscillates between unsuccessful attempts at reductionist research on experience (such as the failed project of German introspectionism of the early twentieth century) and attempts to ignore the existence (or epistemological independence) of the field of consciousness (such as behaviourism and the popular neuroscientific view of experience as an epiphenomenon). Since, as mentioned above, the basic task of a neurologically enhanced cognitive science is the search for physiological correlates of experience, cognitive science cannot simply give up studying experience. Accordingly, it abounds with attempts to translate the experiential gestalt into more tangible units, be they behaviour or events in the brain. In his grand theory of emotions, Antonio Damasio acknowledges the importance of the experiential (first-person) perspective. But in his work he merely mentions it without ever attempting a systematic study (at the level of, say, his study of the physiological perspective), nor does he ever clarify its connection to the other (physiological) components. Another telling contemporary attempt to fit the elusive complexity of experience into the tight shoes of comprehensive categories accessible to the third-person perspective is affective computing, a new and flourishing area of artificial intelligence. Professor Nicu Sebe reports on the huge success of a new image analysis algorithm with which he managed to 'decipher' the emotions of Mona Lisa. The exact division of Mona Lisa's emotions according to the latest software goes as follows: 83% happiness, 9% disgust, 6% fear and 2% anger. Needless to say, Professor Sebe's work is published in top-ranked research journals. Let me compare that to a passage from Mrs Dalloway: 'Do you remember the lake?'
she said, in an abrupt voice, under the pressure of an emotion which caught her heart, made the muscles of her throat stiff, and contracted her lips in a spasm as she said 'lake'. For she was a child throwing bread to the ducks, between her parents, and at the same time a grown woman coming to her parents who stood by the lake, holding her life in her arms which, as she neared them, grew larger and larger in her arms, until it became a whole life, a complete life, which she put down by them and said, 'This is what I have made of it! This!' And what had she made of it? What, indeed? Sitting there sewing this morning with Peter. She looked at Peter Walsh; her look, passing through all that time and that emotion, reached him doubtfully; settled on him tearfully; and rose and fluttered away, as a bird touches a branch and rises and flutters away. Quite simply she wiped her eyes. (Woolf 48–49)

The aim of these examples was to demonstrate the existence of two diverse areas: an area that can be successfully studied with the analytical-reductionist approach, and an area that eludes such an approach, just as fine sand sifts through a sieve. In what follows, I intend to show more precisely the difference between these two areas, which, following von Foerster, I refer to as the trivial and the non-trivial. I will claim that these are not actual areas, but rather two different types of perspective an observer can take towards the world.

The trivial

Let me leave this last qualification aside for a while and take a look at the difference between the trivial and the non-trivial as if phenomena actually were divided into these two types. Trivial systems can be thought of as 'machines' (in Turing's sense), devices for processing inputs into outputs. Such systems can be modelled by finding the so-called transfer function between the independent and dependent variables (inputs and outputs), which, as mentioned above, is the fundamental methodological principle of natural science.
In computer terms, the transfer function is usually replaced by the concepts of algorithm or programme, which grasp the sequence of steps a machine has to take in order to respond adequately to a stimulus. Describability in terms of the machine metaphor is a very important property of a system. The systems that can be successfully described by a corresponding machine are also the ones most likely to be successfully handled with the analytical-reductionist method. The sense of certainty, accountability and infallibility provided by the explanatory scheme of cause — operator — effect has become central to Western philosophical and scientific thought. This scheme has different names in different disciplines. In physics one speaks of cause — natural laws — effect, in biology of stimulus — organism — response, and in some areas of psychology of motivation — personality — behaviour. The origin of this scheme goes back at least to Aristotle and his logical syllogisms, especially the scheme of deductive reasoning: major premise — minor premise — conclusion. With the introduction of mathematics, the naturalist-mathematical paradigm (today the scheme appears in the form of x — f — y) improved its instrument of description so much that it was no longer merely descriptive, but also enabled predictions. It was this capability of prediction that allowed for the dramatic progress of the natural sciences, endowing them with the power they wield today. The transfer function can be much more complex; it can even be nonlinear. But regardless of its complexity, it can be presented by a simple diagram:

Figure 1: A trivial system

In general, to determine the transfer function of any trivial system, it takes as many attempts as there are distinguishable input states. Trivial machines are (a) independent of time (ahistorical) and of their history of interactions, and (b) analytically determinable, and therefore predictable.
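Von Foerster's trivial machine can be illustrated with a minimal sketch. The mapping below is a hypothetical toy example (not taken from the article); the point it shows is the one just stated: because the machine is ahistorical, probing each distinguishable input state once determines its transfer function completely.

```python
# A sketch of a trivial machine in von Foerster's sense: a fixed,
# history-independent mapping from inputs to outputs.
# The toy mapping is purely illustrative.

def trivial_machine(x: str) -> str:
    """Transfer function f: the same input always yields the same output."""
    table = {"2 x 3": "6", "red": "stop", "green": "go"}
    return table[x]

# Because the machine has no internal history, probing each
# distinguishable input state exactly once reconstructs it in full:
reconstructed = {x: trivial_machine(x) for x in ("2 x 3", "red", "green")}
print(reconstructed)  # the complete 'natural law' of this little system
```

Every later trial is guaranteed to repeat these answers, which is exactly what the trivialisation certificate for a car, in von Foerster's example below, promises.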
Von Foerster writes:

It is easy to understand the attraction of trivial machines for western culture. We could enumerate infinite examples of trivial machines. When we buy a car we get a trivialisation certificate guaranteeing that the car would stay a trivial machine for at least the next 100 or 1,000 miles or the next five years. And if the car suddenly becomes unreliable, we take it to a trivialisator to put it back in order. Our love for trivial machines is so great that we even send our children, who are usually very unpredictable beings, into trivialising institutions, so that when asked 'How much is 2 times 3?' their answer would not be 'green' or 'that's how old I am', but '6'. (von Foerster, '"Uncle Ludwig"' 8)

The non-trivial

The persistent longing for the trivial (the repeatable, the predictable) thus becomes more understandable, be it in everyday dealings with the world or in scientific discourse. An interesting point, however, is that nobody — not even those scientists who dedicate all of their creative potential to trivialisation — is ready to accept her/himself as a trivial machine. A computer expert active in the area of automated recognition of emotions in images (in much the same manner as Professor Sebe) readily agreed with me that the distribution of emotions into percentages bears no meaning whatsoever in his daily experience. Von Foerster notices the same discrepancy:

When asked, all my friends consider themselves to be like non-trivial machines, and some of them think likewise of others. These friends and all the others who populate the world create the most fundamental epistemological problem, because the world, seen as a large non-trivial machine, is thus history dependent, analytically indeterminable, and unpredictable. How shall we go about it?
(von Foerster, 'Through the Eyes' 8)

Von Foerster speaks of three strategies for approaching this epistemological complication: (a) ignore the problem, (b) trivialise the world, and (c) develop an epistemology of non-triviality. I have already discussed the most popular solution, (a). It is followed in popularity by (b), the method that von Foerster dubs 'the Laplace solution', alluding to Laplace's 1814 statement that 'an intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed [...] for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes'. For von Foerster, 'Laplace eliminated from his considerations all elements that could cause trouble for his theory: himself, his contemporaries, and other non-trivial annoyances', in order to present the universe as a trivial machine (von Foerster, 'Through the Eyes' 9). If we admit to the existence of intrinsically non-trivial systems, we lose the chance of knowing the rules of transformation, the transfer function, natural laws, etc. The relationship between cause and effect in non-trivial systems is analytically indeterminable. The concept of linear causality itself (cause — operator — effect) becomes meaningless. If we consider the world to be a non-trivial system, then Wittgenstein's proposition 5.1361 applies:

5.1361 We cannot infer the events of the future from those of the present. Belief in the causal nexus is superstition.

Could it be that linear causality as an explanatory principle is applicable to some areas of the world while ineffective in others? It certainly seems to work for the machines we build ourselves, and it helps explain a large part of nature — the part covered by the natural sciences. For in building machines, we have chosen a network inside which relational questions of the type 'Why Y when X?' are determinable.
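The contrast with the trivial machine can be sketched in the same vein, again as a hypothetical toy example: in a non-trivial machine every input silently rewrites an internal state, so the identical stimulus produces a different response on each trial. In this miniature case an observer could of course still infer the hidden rule by experiment; von Foerster's point is that for systems with an astronomical number of internal states such analysis becomes practically, and then in principle, hopeless.

```python
# A sketch of a non-trivial machine: the output depends on a hidden
# internal state, and every input changes that state. Toy example only.

class NonTrivialMachine:
    def __init__(self) -> None:
        self.state = 0  # internal state, hidden from the observer

    def respond(self, x: int) -> int:
        y = (x + self.state) % 4           # output depends on input AND state
        self.state = (self.state + x) % 4  # the input also rewrites the state
        return y

m = NonTrivialMachine()
# The identical stimulus yields a different response on every trial:
print([m.respond(1) for _ in range(4)])  # -> [1, 2, 3, 0]
```

Such a machine is history-dependent: what it answers now depends on everything it has been asked before, which is precisely why the cause — operator — effect scheme fails to describe it.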
The moment we analyse a system, we make it trivial; we have chosen (trivial) axioms and built a network based on them. In other words, we have chosen a perspective that allows us to see only the trivial area. The choice of our manner of observation, or research, determines what we see. The allure of a perspective that allows us to analyse and make predictions is obvious. It leads one to pay for guarantees that our watches, lawnmowers, airplanes, etc., maintain their no-choice quality. The danger begins when we extend this demand to others, to our children, our families and other larger social bodies, by trying to trivialize them, that is, by reducing their number of choices instead of enlarging it (von Foerster, 'Through the Eyes' 9). A similar situation occurs in science. The natural-science approach to knowing the world is one of the climaxes of human reason. It would be pointless and ungrounded to criticise, let alone try to forsake, such an approach. The danger lies in trying to apply the analytical-reductionist paradigm to problems that it is unable to handle, say, to observing the flow of experience. Triviality is just an approximation. Where such approximations work, the natural-science approach is effective. A trivialisation similar to Newton's mechanics in physics is a very successful idealisation, functional in much of the 'useful' world. It guarantees safety and stability — and of course a consensus about what is 'real' and what is not. From this point of view, the traditional analytical-reductionist scientific method can be seen as a sieve separating the trivial from the non-trivial. From the set of all our interactions with the environment, it selects only those that suit its standards. The scientific procedure is hence not a method for research on reality, but rather a procedure for determining the areas susceptible to trivialisation.
Participation in the observation of the flow of consciousness

In the mid-twentieth century, physics reached the edge of the trivial world: Heisenberg realised that the act of measuring affects the outcome of the experiment and that, as a consequence, we can never know all the properties of the observed particle. This insight (Heisenberg's uncertainty principle) and some other properties of the world of elementary particles stirred an uproar in physics. It was an indication that even in theory we might be unable to learn everything about quantum particles 'as they are' — that is, to describe these particles and predict their behaviour — and that the conception of an independent observer is an illusion. Physicists avoided this problem by choosing a different perspective: individual particles may be elusive, but the behaviour of large groups is repeatable and predictable — in a word, trivial. According to the so-called Copenhagen interpretation, it is best to treat the quantum world statistically. This agreement met with strong opposition from some of the leading scientists of the time, including Einstein. Even today, the idea that the behaviour of quantum particles is unpredictable is a source of great frustration for some. But since the statistical view of quantum physics appears to be working (physicists are able to proceed with their work according to accepted methods without having to question the deeper epistemological fundaments of what they are doing), little attention is paid to such killjoys in respectable physics journals. Social scientists, economists and psychologists gratefully embraced the Copenhagen solution: whenever possible, they tend to take a statistical perspective to escape the elusiveness of observing individuals, the subjective component. We should not forget, however, the physicists' original motive for introducing the statistical interpretation: the realisation that the observer participates in the observed system.
In the social sciences it is much harder, and also infinitely less successful, to ignore the researcher's involvement in the subject of research, and research on the flow of consciousness utterly resists the statistical interpretation. Any act of observing causes a change in the field of experience; in this field, the influence of observation has direct consequences, because observation itself is just another form of the flow of consciousness. Where do we draw the dividing line between the trivial and the non-trivial? Up to which point is the trivial approximation still acceptable? The line runs along the border between the parts that can be successfully described as separate from the observer and the parts where such idealisation is no longer viable. The non-trivial area begins at the point where it becomes necessary to give up the approximation of the remote observer and to accept the participatory point of view. By accepting the participatory point of view, one also takes upon oneself part of the responsibility for the world. For any act of observation, and even any decision on the perspective of observation, is also an act of creation.

Tending to the non-trivial

The inclination towards the trivial originates in the wish for a predictable, safe, organised world. As I mentioned at the beginning, the tendency to organise, understand and relate, that is, the tendency to trivialise, is one of the principal motors of scientific progress. The fear of the uncertainty of the unpredictable is just as important as its complement: the curiosity and wonder at the complex flow of experience, which runs through, and which is, our consciousness. As these polar opposites complement each other, it is very important to keep them in some kind of balance: the outbursts of lively, daring, subversive curiosity should be checked by the conservative tendency for orderliness and explanation. But checking should not mean suppressing.
The history of scientific endeavour teaches us to remain modest even in the face of dramatic progress in one of the disciplines. At best, we can produce a working theory (a transfer function) that connects some of the data about the observed system — a system constructed by choosing the perspective of observation. In periods of great progress (like the one we are currently witnessing in the field of cognitive neuroscience), the conservative pole appears to have the upper hand. It is so easy to forget about the big questions we had to neglect in order to reach our (partial) insight; it seems that we are too quick to convince ourselves that we have finally managed to organise and understand the researched fragment of the world. How can we remain aware of the fact that the trivial is merely an approximation? Should we perhaps look to art in order to find answers? Perhaps literary studies, with its analyses of the flow of consciousness, can remind us of the fullness and indivisibility of experience. This is not to say that we should replace research on experience with reading Joyce. We cannot expect artists to study reality systematically. The artist's freedom is not bound by the limitations of reality or systematic exploration; it originates in his or her fidelity to the creative drive. Systematic exploration of, and faithfulness to, reality is a scientist's way of searching for freedom: his/her persistent, unconditional and systematic fidelity to empirical data liberates him/her from confusion. S/he seeks shelter by attempting, without ever fully succeeding, to place opinions and personal thoughts into brackets. So, each of us should remain dedicated to our own way of searching, our own way of attaining freedom.
As a scientist I nonetheless feel there is an important lesson to be learned from literature: a lesson about the non-triviality of the experiential world, about the complex, indivisible, fluid, overflowing gestalt, about the self-referring nature of consciousness and our irrevocable dependence upon our personal history. Some readings teach us yet another lesson: that the experiential landscape reaches far deeper than the well-trodden paths upon which we walk in our daily lives. We have not even really begun to answer the question 'What is it like to be human?' The comfort of the trivial, which has never been as alluring as in this very moment — in the era of a functionally oriented society — holds us in its iron grip of the mundane, the automatic, the well-known. It forces us to believe that we know the world and ourselves. So any reminder of the existence of experiential landscapes beyond the routine is precious; moreover, it is of vital importance, regardless of its origin. Any attempt to escape experiential triviality is an act of a warrior.

What makes the warrior's path so very dangerous is that it is the opposite of the life situation of modern man. The modern man has left the realm of the unknown and the mysterious, and has settled down in the realm of the functional. He has turned his back to the world of the foreboding and the exulting and has welcomed the world of boredom. (Castaneda 72)

This also applies to the path of the artist, as s/he reminds us of human experience, complexity, non-linearity and non-triviality.

NOTE

1 This article is largely inspired by a (too) short correspondence with Sowon Park. I am sincerely grateful to her for reminding me about Virginia Woolf.

WORKS CITED

Ackoff, Russell L. Redesigning the Future: A Systems Approach to Societal Problems. New York: Wiley, 1974.
Castaneda, Carlos. The Fire from Within. New York: Washington Square Press, 1984.
Damasio, Antonio. Looking for Spinoza. London: Vintage Books, 2004.
Furlong, Dermot and David Vernon. 'Reality Paradigms, Perception, and Natural Science: The Relevance of Autopoiesis'. Autopoiesis and Perception 25.8 (1994): 95–120.
Libet, Benjamin. 'Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action'. Behavioral and Brain Sciences 8 (1985): 529–566.
Sebe, Nicu. 'Software Decodes Mona Lisa's Enigmatic Smile'. New Scientist (17 Dec. 2005): 25.
Varela, Francisco J., Evan Thompson and Eleanor Rosch. The Embodied Mind. Cambridge (MA) and London: MIT Press, 1991.
von Foerster, Heinz. 'Responsibilities of Competence'. Journal of Cybernetics 2.2 (1972): 1–6.
---. 'Through the Eyes of the Other'. Research and Reflexivity. Ed. Frederick Steier. London: Sage, 1991. 63–75.
---. '"Uncle Ludwig" and Other Wittgensteiniana'. Lecture at the 'Wittgensteinseminara' symposium (24 May 1992), Skjoldnu, unpublished.
Wegner, Daniel M. The Illusion of Conscious Will. Cambridge (MA): MIT Press, 2002.
Winograd, Terry and Fernando Flores. Understanding Computers and Cognition. Norwood (NJ): Ablex Publishing, 1986.
Wittgenstein, Ludwig. 'Logisch-Philosophische Abhandlung'. Annalen der Naturphilosophie 14 (1921): 185–262.
Woolf, Virginia. Mrs Dalloway. Oxford: Oxford University Press, 1992.