. . . 2000-03-25 |
Neuraesthetics: The Poetic
Many disciplines implicitly assume a distinction between literal and figurative language, with the figurative posited as secondary and "poetic," a hoity-toity exception to the common run of the tongue. However, working language doesn't make much of that distinction. Rather than being treated as optional ornamentation, metaphor and simile are essential aspects of normal speech.
First comes use of figurative language ("my truck died"); then recognition of the correct paraphrases for figures of speech; then the ability to paraphrase; and finally the ability to explain the figures.
Speakers produce an average of 15 novel and 34 cliched figures of speech per 1000 words. One study of 500,000 words of American literature found only 3 novel figures per 1000 words. (A study of English Renaissance literature or hip-hop lyrics might show different results....)
(Data from "Figurative language and cognitive psychology," Pollio, Smith, & Pollio, Language and Cognitive Processes, 1990)
. . . 2000-05-21 |
Free and direct discourse
Was writing, considered as external memory storage, truly a revolutionary leap in cognitive evolution?
It was an advance in shopping list technology, sure. But, considered as very long-term external memory storage, writing relies on the kindness of strangers almost as much as that other external memory storage, oral culture, does. Look at how few "immortal masterworks" since the invention of writing have survived to reach us. Whether kept in the noggin or kept on parchment or kept busily transferring from one mechanically-interpreted-medium-of-the-decade to the next, words' persistence and accessibility are almost completely dependent on interested individuals. Parchment just has an edge as far as dumb luck goes.
Similarly, the contractual use of writing as external evidence of intent wasn't a revolutionary leap in social development. Forgeries can be made and denounced; libel is only slightly easier than slander; witnesses' depositions are just as unreliable as their oral testimony....
But writing's use as external object is another matter, and not one that gets mentioned much in the cognitive science texts.
Person-to-person, we use language to express and to manipulate. To have one's words be understood is an ambition that's hard to even describe without the assumption of distance. It's not the noisy-channel-between-transmitter-and-receiver described by information theory. It's a channel between transmitter and object, followed by a completely different group of channels between object and receivers, channels whose "success" can't be measured by eliminating the middleman and totting up the error rate because the middleman is the point. I'm not standing behind my words to guarantee them; I'm standing there because you're not supposed to see me. I'm no longer the "message source"; I've handed that status over to an inanimate object, and that object can't be queried as to the success of the transmission.
. . . 2000-06-03 |
An Introduction to Neuraesthetics
In their standard creation myths, philosophy and its natural child, science, assume that presentation can be separated from thought, form separated from meaning. Ideas and facts can be expressed baldly, blankly; with detached justice they can be put to the test, condemned to death, or exalted to eternal life.
Art has been the unwashable sin and bad conscience of both disciplines. Those expressions which most clearly are directed towards unreliable (if sometimes enticing) human beings, which most clearly do not stake claim on a winning slot in a truth table, are only "artistic" -- sometimes condescended to as weak but charming, sometimes attacked as a corrupting influence, sometimes glorified as transcendent, but never quite comprehended. You can't live with it and you can't live without it.
With Nietzsche, philosophers stopped trying to pluck this thorn from their side and instead began to cultivate it; nowadays even academics will admit that aesthetic considerations can't be left out, although they can demonstrably be bungled. Most recently, the neo-rhetoricians of philosophy have attempted to advance the subjective offensive into "harder" disciplines under the oddly bland banner of "Science Studies," with mixed results.
Meanwhile, happily isolated from those poststructuralist battles, neurology, psychology, physiology, mathematics, and computer research have been mingling in the not-particularly-disciplined discipline of cognitive science, a mudpuddle with pretensions to primal soupdom, reviving along their way such tired philosophical issues as rationality, free will, the nature of consciousness, and, with less fanfare, the place of art.
One of the main lines of research in cognitive science models a "mind" as dynamic patterns of activity across groups of smaller units (brain cells, for example), with a single unit taking part in many different patterns. Depending on the home discipline, such a model might be referred to as a neural net, parallel distributed processing, or connectionism. Neural nets are self-organizing: the act of perception itself can generate an ability to match patterns and an ability to recreate them. Also, neural nets are biologically plausible and can be simulated by computers, both important points for research funding.
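(For the code-minded: here's a minimal sketch of the kind of model described above -- a Hopfield-style network, my own toy example rather than anything drawn from the research mentioned, in which a couple of arbitrary patterns are stored across all the units by Hebbian learning and then recreated from a damaged cue. The unit count and the patterns themselves are invented.)

```python
# A minimal, made-up illustration of a self-organizing pattern network:
# patterns are stored across all units by Hebbian learning, every unit takes
# part in every pattern, and a whole pattern can be recreated from a damaged cue.
import numpy as np

rng = np.random.default_rng(0)
n_units = 64

# Two arbitrary +1/-1 patterns, each distributed over every unit.
patterns = rng.choice([-1, 1], size=(2, n_units))

# Hebbian storage: units that are active together get a positive connection.
W = np.zeros((n_units, n_units))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Let the network settle from a noisy cue toward a stored pattern."""
    state = cue.astype(float).copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0
    return state

# Damage a stored pattern by flipping a quarter of its units, then recall it.
cue = patterns[0].copy()
flipped = rng.choice(n_units, size=n_units // 4, replace=False)
cue[flipped] *= -1
overlap = int(recall(cue) @ patterns[0])  # equal to n_units means perfect recovery
print("overlap with the original pattern:", overlap, "out of", n_units)
```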
Fifty years ago, when the leading scientific model of mind was stimulus-response and the leading philosophic model of mind was mathematical logic, if you'd asked which discipline handled the interweaving of biological, psychological, and intellectual patterns, treating them all on more or less equal terms, the (reluctant) answer would've been aesthetics, since art would be (reluctantly) admitted to speak to all these "interconnections" in some mysteriously unified way by means of physical perception and mental context. But, being a matter of mysterious interconnections, art would also seem flimsily secondary....
In cognitive science, such mysteriously unified interconnections can be seen as the basic stuff of mind, and actually generated (in the combined senses of woven and powered) by physical perception and internal crosstalk. Logical reasoning derives from exactly the same source as sensual wallowing or recognizing a facial expression or geometric doodling or language acquisition or the illusion of consciousness: they all result from one mechanism, with no one outcome innately privileged as The Goal. And where do they all meet again?
So we might expect insights to pass fairly easily between the two disciplines, although -- perhaps due to the ultra-conservative tastes of most scientists, or perhaps due to those research funding issues -- cognitive science has shied away from emphasizing artistic problems until fairly recently.
. . . 2000-09-19 |
Neuraesthetics: Hypnotic Narratology
The influence of Ernest R. Hilgard's Divided Consciousness: Multiple Controls in Human Thought and Action, first published in 1977, was somewhat hobbled by Hilgard's up-front completely speculative application of his brilliant hypnosis research to his not-very-rigorous understanding of multiple personality disorder. That kind of "these conditions are obviously completely different but their descriptions have some words in common and therefore they must actually be exactly the same" move is best left to us popularizers; in Hilgard's book, it's just a distraction from the stuff Hilgard really knows about.
That stuff started in a classroom demonstration when Hilgard hypnotized a blind guy and told him he would be completely deaf until a hand was placed on his shoulder. The students had their usual sadistic fun trying to make little Tommy jump with handclaps, gunshots, and Keith Moon imitations, and then some Psych-for-Poets throwback asked about subconscious memories (a red herring, but good bait). OK, if you're curious, said Hilgard to the student, and then to the blind deaf guy, "If there's some part of you that hears me, I'd like your right index finger to raise."
It did.
And the blind guy said "Please restore my hearing so you can tell me what you did. I felt my finger rise and it wasn't a twitch."
Post-hand-on-shoulder, Hilgard asked the blind guy what he remembered.
"Everything was quiet for a while. It was a little boring just sitting here so I busied myself with a statistical problem. I was still doing that when I felt my finger lift."Hilgard hypnotized the guy again and told him, "I can be in touch with that part of you which made your finger rise and it can answer me when I put my hand on your arm."
"It" did. "It" answered, for example, every question about what kind of noises had assaulted the deafened blind guy, and reported the lifting finger command too.
Then Hilgard asked the hypnotized "subject" what "he" thought was happening.
"You said something about placing your hand on my arm and some part of me would talk to you. Did I talk?"
. . . 2000-09-20 |
Neuraesthetics: Hypnotic Narratology, cont.
Lacking both a theoretical foundation and any profitability for pharmaceutical companies, hypnosis has never been a particularly easy sell to scholars. Ernest Hilgard's Stanford lab was a great legitimizer, and it was justified largely by hypnosis's usefulness as a pain reliever (although only as a last resort -- there are those pharmaceutical companies to consider, after all).
How do you measure pain without damaging the experimental subjects? (After all, there are only so many political prisoners in the world.) With a bucket of ice water, that's how: a freezing hand has the useful quality of ramping up its agony fairly predictably over time. Once you get yourself a time-coded diary of reported pain levels ("bad," "awful," "fucking awful," and so on) and reported discomfort levels ("I'd rather be doing something else," "I can't stand it any more," "I'll kill you, I swear I'll kill you" and so on), all you have to do is put the subject through it again after hypnosis and measure how the new reports differ.
While the speaking "hypnotized subject" was told not to feel pain, the "part that knew about the hypnotic suggestion" was given control of the subject's non-freezing hand and told to report its feelings in writing. And while the "subject" then verbally reported low pain and discomfort levels, the "outside the performance" hand reported pretty much the same pain levels as before and discomfort levels midway between the unhypnotized state and the hypnotized state.
So some aspect of the mind could accurately perceive what had been hidden from the unifying verbal consciousness that we usually metonymize as "The Mind." Moreover, that "aware" aspect had privileged access even to perceptions that the explicitly "conscious" aspect was allowed to monitor. In the ice water tests, the hypnotized subjects may not feel pain qua pain, but they can still feel a variable rate of throbbing in their hand. When the "aware" aspect and the "conscious" aspect had to make their reports by taking turns using the same medium, the "aware" reporter described slow continuous change in the throbbing while the "conscious" reporter described sudden jumps correlating to the time taken up by the "hidden" reports.
Hilgard says that it was only later that he realized how these experiments duplicated some results from the ooh-spooky! days of hypnosis research, when there was similarly trendy interest in automatic writing: an arm was hypnotically anesthetized and then "the arm" was given a pencil and paper and permission to write; when the anesthetized arm was pricked with a pin, "the subject" felt nothing but "the arm" was vehement in its complaints. (Unfortunately, automatic writing research continues to languish in the spiritualist ghetto from which Hilgard and company partially rescued hypnotism.)
. . . 2000-09-21 |
Neuraesthetics: Hypnotic Narratology
Continuing our summary of Ernest R. Hilgard's out-of-print Divided Consciousness....
Hilgard called the aware-but-not-included-in-unified-consciousness portion of the communicating subject "the hidden observer," partly because of its privileged access to sensation and partly because it denied being the boss: when a hypnotized "subject" thinks he has an unmovable arm, the "observer" knows that the arm is actually just stiff but still doesn't feel itself as stiffening the muscles of the arm.
If the "hidden observer" had been asked what was controlling the arm, the answer would presumably have been the hypnotist, because, by definition, that's the story that's agreed to during the hypnotic state. According to Hilgard, after all other suggested attributes were successfully argued away, the final explanation of the hypnotized as to how they know they're hypnotized is "I know I'll do what you tell me."
But the hypnotist's assignments still give a lot of leeway, and hypnotized subjects aren't puppets: they come up with their own back story fantasies to explain the suggestion and their own strategies for performance. Which is why Hilgard describes hypnotism not as control so much as goal-setting. What the successfully hypnotized report about their experience of hypnotic suggestion isn't a feeling of subjection to the suggestion but a complete lack of interest in not following the suggestion; e.g., "I didn't blink because it just didn't seem worth the effort to blink."
And, Hilgard guesses, it's also why, when he looked for some personality trait that highly hypnotizable subjects have in common (because we're only talking about a subclass of the most hypnotizable subjects here; less hypnotizable people don't enter into this -- or into most of the other interesting clinical results, for that matter), the only correlation he found was "imaginativeness." His best hypnotic subjects were already comfortable with contorting themselves into somewhat arbitrary new goals and rules; in fact, they sought them out: they included pleasure readers, sensualists, and adventurers, used to vivid fantasies and compartmentalized emotions.
Thus they also included a number of what Hilgard describes as "excellent storytellers," one of whom was asked, under hypnosis, to produce a story with "you and some friends in front of a cave." What spilled out was fifteen minutes of smoothly related, vividly detailed narrative which started by exploring the cavern's chambers and then moved into a Lost World adventure.
Afterward, the storyteller explained his apparent effortlessness: "In hypnosis, once I create the pattern, I don't have to take any more initiative; the story just unfolds. I knew ahead of time that there would be another room inside the cavern, but I didn't know what it would look like until I walked through and was describing it. In the waking state, storytelling seems more fabricated. I don't see the things that I describe in the way I actually see them in hypnosis."
And, backing up this report, while still in the hypnotic state, a "hidden observer" had claimed to be handling the distracting structural work of planning the move to the Lost World and monitoring the story's length, thus letting "the subject" concentrate on a description of the passively viewed surface fantasy.
. . . 2000-09-22 |
Concluding the Neuraesthetics: Hypnotic Narratology saga...
Sometimes all it takes to leap to a conclusion is the choice of an article. Whereas calling a communicating-but-unavailable-to-consciousness mental process "a hidden observer" would have brought out its transient nature, calling it "the hidden observer" made it sound like an invincible Masked Avenger, predisposing Hilgard to treat it as more persistent than his evidence would support.
Once on that train of thought, multiple personality disorder may have seemed like a natural stop, but you'd think a clinical hypnotist would be more reluctant to draw attention to a "hypnotist = trauma," "suggestion = repressed trauma," and "hypnosis = serious mental illness" equation.
Anyway, there's no need to board that train. What makes MPD socially and emotionally problematic isn't its modularity per se but its assignment of personalities to the modules. What makes the term "modularity" problematic isn't the notion that the mind is multiply streamed, but the notion that the streams can all be divvied up neatly into things called modules. And what makes Hilgard's hypnosis research interesting isn't how it maps dysfunction but the insight it offers into function.
(Walter Jon Williams's Aristoi takes a similar wrong turn: the novel assumes that modularity makes for efficient thinking, but it takes a trauma-and-MPD route rather than the practice-and-hypnosis route.)

Post-Hilgard hypnosis researchers have been at pains to point out that "hypnosis entails social interaction as well as alterations in conscious awareness"; what they forget and what Hilgard's research underscores is the extent to which conscious awareness is also a matter of social interaction.
One of the noisiest "paradoxes" of the cognitive sciences is that the mind handles tasks faster than consciousness possibly can. But as a paradox, it shows the same kind of naivete as artsy types blathering about quantum theory. Philosophers got over that one in the nineteenth century (and before the nineteenth century, they handled it with stuff like the humours and astrology): we're just talking about the fact that consciousness, by definition, has to pretend to be the boss even when it's perfectly obviously not. Maybe Nietzsche overstated the case when he described consciousness as only a kind of surface froth on the driving waves (Nietzsche exaggerating for effect? What are the odds?), but he was clearly right that the conscious self's usefulness and power get overestimated to justify concepts of legal and religious responsibility.
The corresponding problem is, most folks who latch onto the non-unified-self drop into one of two camps: a New Agey hippyish irresponsibility groovin' on its drives, man, or a Calvinistic morose fatalism where the lucky ones happen to be born naturally more unified (and then fool themselves that it's their super-thick undisturbed froth of will power that's doing the trick) and the rest of us are born permanent losers.
Hilgard's work points toward a less melodramatically binary state of affairs. Rather than a sharp contrast between the self-deceived "self" and the uncontrollable mind-flood, it indicates a constantly shifting array of simultaneous processes, capable of handing off tasks and even of taking over communication. It's not so much that "consciousness" is inherently modular as that modularity is a useful mental technique, with the narrating "consciousness" a specific case in point.
(Man, it's hard to figure out where to put the scare quotes with this stuff. At least I'm leaving sous rature out of the toolbox....)
A narrating consciousness doesn't exclude the possibility of other modules, nor is it invalidated by their existence. When we're all at our best, it's more a matter of efficient mutual support, like in a WPA poster, or like in Hilgard's storyteller story.
When I read it, I thought of Samuel R. Delany, in The Motion of Light in Water:
"... but now what pressed me to put words on paper -- what made me open my notebook and pick up my ball-point -- were comparatively large, if vague, blocks of language that came.... It was as if the whole writing process had finally secreted another, verbal layer. These 'language blocks' were not, certainly, lengths of finished prose, all words in place. But now, as well as the vague images and ideas that formed the prewritten story, I would also envision equally vague sentences or paragraphs, sometimes as much as a page and a half of them -- which was when I knew it was time to write."The skeptical reaction to Hilgard's work was that since hypnotism research depends on introspection by the lab animals, the research is untrustworthy. In particular, since we know that distraction reduces the affect of pain for less hypnotizable subjects, and since hypnosis reduces the affect of pain for highly hypnotizable subjects, how do we know, first, that the supposedly hypnotizable subjects aren't lying, and second, that the supposedly hypnotizable subjects aren't just distracted drama queens?
So someone compared how well distracted subjects in pain did on a vocabulary test with how well hypnotized subjects in pain did on a vocabulary test. The distracted subjects seemed... distracted. The hypnotized subjects did just as well as if they weren't in pain at all.
The researchers expressed surprise, because surely the limited store of "cognitive energy" (I'm picturing a sort of green glowing fluid) would be even more depleted by ignoring pain and forgetting that you're ignoring the pain than it would be by just taking the pain straight.
As if we have more energy when we aren't doing anything! No, as sensibly evolved organisms, we're more likely to produce energy when there's a reason to do so: when we have an achievable goal and we're achieving it efficiently. Thus the appeal of hypnotism to the hypnotizable, and thus the dismay that Hilgard reports after he revealed her "hidden observer" to a hypnotized woman:
"There's an unspoken agreement that the hidden observer is supposed to stay hidden and not come out. He broke his promise, he's not abiding by the rules..."
. . . 2000-11-04 |
Neuraesthetics: Writerly FAQs
. . . 2000-11-11 |
Addenda
David Auerbach supplements our Neuraesthetics Writerly FAQs:
"Writing has served, for authors from Rilke to Fitzgerald, as a way to make safe concessions without having to make them to anyone -- to give them an out for self-recrimination and self-doubt that does not have to manifest itself in any concrete way, allowing the social persona that may have discomfitted them to continue on its way. The depression would come from this containment, an only half-expressed portrayal of misery that they can't permit to overflow its literary bounds. Hence, I'd say, why so many writers are jerks in spite of being depressives. And partially (to answer a third question) why so many writers mistake cruelty for honesty." |
. . . 2001-05-06 |
I started reading Derrida immediately after taking a class on Nagarjuna's masterwork, Codependent No More. It's always pleasant to fantasize that autobiographical accident somehow counts as critical insight, and so my rolling bloodshot eyes paused over Curtis White's latest confident assertion:
Anyone who has taken the trouble to understand Derrida will tell you that this putative incoherence was the discovery that the possibility for the Western metaphysics of presence was dependent on its impossibility, an insight that Derrida shared with Nietzsche, Hegel, and the Buddhist philosopher of sunyata, Nagarjuna, who wrote that being was emptiness and that emptiness was empty too.

And I don't care much what club is used to belabor Harold Bloom so long as he gets lumpier.... But White's unadorned Adorno is not much more palatable:
"Aesthetic experience is not genuine experience unless it becomes philosophy."An ambitious mission if taken seriously, but a terrible guide if taken as Adorno does (i.e., "Art that is not easily explained by my philosophy does not count as art" -- if Adorno was doing philosophy of math, he'd declare that 5 wasn't an integer because it's not divisible by two). Aesthetics is empirical philosophy, and if pursued without attention to particulars, you quickly end up with nonsense like using a single-dimensional scale of "complexity" to ascertain "greatness." (Scientific attempts at researching "complexity" make for amusing reading, given the muddling ambiguity that attentiveness brings in, and the inevitable toppling over of increasing perceived complexity either into perceived organizational structure -- i.e., greater simplicity -- or into perceived noise -- i.e., greater simplicity.)
More pernicious, and indicative of why smart lads like Derrida avoid the whole question of "greatness," is White's contrast of a "simple folk tune" with what "a Bach or Beethoven will then make of this tune," the former being prima facie non-great and the latter being great. Note the indefinite article: we're now so far from particulars that we're not even sure how many Bachs or Beethovens there are.
And note the isolation of the tune, floating in space, divorced from performer or listener: no wonder the poor thing is simple. Why doesn't White contrast "a simple folk singer" with "a Beethoven" instead? How about with "an Aaron Copland"? Or with "a John Williams"? Is Skip James's 1964 studio performance of "Crow Jane" less complex and therefore less great than a slogging performance of an aria from "Fidelio"? How about if the former is given close attention and the latter only cursory? Is study of a printed orchestral score somehow more aesthetically valid a response to music than, say, dancing?
I'm pretty sure how Adorno would answer all those questions, to give the dickens his due. White, I think, would rather avoid them.
I owe 'im, though, for providing the following lovely quote from Viktor Shklovsky, which for some reason makes me think of The Butterfly Murders:
"Automization eats away at things, at clothes, at furniture, at our wives, and at our fear of war."
. . . 2002-04-16 |
Neuraesthetics: Negative Correlations
I distrust taxonomy partly because I've noticed my own misuse of categories and partly because I've noticed misuse by others. We're all editorial cartoonists at heart: the first thing we want to do with any generalization is to treat it as an instance, or better yet a personification. Classifications are played as if they were the same sort of tokens as the objects classified, and what started as descriptive analysis becomes a competitive comparison against an enticingly novel and coherent ideal.
Similar confusions arise from the "statistical significances" of contemporary medical and psychological research. Narratives aren't built of percentages, much less of scatterplots. Following our narrative impulse, we'll reify a positive correlation from a controlled experiment into an exemplary (but imaginary) real-life case, adding a dollop of assumed causality for dramatic emphasis.
Providing fine backup of my prejudices and a fine example of how a non-scientist like myself will caricature complex research given half a chance, a couple of recent studies ("Psychophysics and Physiology of Figural Perception in Humans", "Visual Categorization and Object Representation in Monkeys and Humans") have looked into how closely various models of classification match what humans (and rhesus monkeys) actually do when they categorize. The models tested included exemplar, boundary, prototype, and cue-validity models.
And so it turned out, for us and the rhesus monkeys both.
The researchers kept tight control over instances and implied categories, making it less likely that their four-variable diagrams would be shanghaied into pre-existing classifications such as "Looks more like my boyfriend's family than mine" or "Probably edible." All the researchers sought, of course, was a cleaner description of how the mind works.
But my half-assed misappropriation would apply their results to how to work with our minds:
Categories that are constructed and maintained using an exemplar or boundary model are more likely to be useful and less likely to be misunderstood than categories that require a prototype or cue-validity model. Because we're probably going to insist on using the exemplar or boundary model regardless.
"Race" and "sex" being obvious examples: a pre-existing classification based on boundary conditions fuels research whose results -- valid only (if at all) as descriptive scatterplot with plenty of outliers -- are then misapplied to intensify our focus on those boundary conditions.
. . . 2002-05-02 |
Neuraesthetics: Foundations
You may have already heard this news -- it was all over Herodotus -- but when the ancient Egyptians went to mummify somebody, the first thing they did was take out the brain.
They had these narrow metal rods, and some had a hook on the end, like a nut pick, and some had more like a corkscrew, and supposedly they pushed them up the nose and, you know, pulled. (I'm kind of surprised that it hasn't become standard non-Western medicine out here in California: direct stimulation of your frontal lobes, smart massage....)
Well, some ancient Egyptians, anyway. And also some researchers at the U. Maryland Med School, who got permission to try this out on some poor 70-year-old Baltimorean. According to them, it was actually much too frustrating to try to drag the brain out in chunks, so -- you can hear the exasperated sighs echo through the millennia -- they finally just shoved the pick up there and whisked it around to get sort of a slurry, and then held the old guy upside down....
Especially after that level of detail, an afterlife wandering around with an empty skull probably doesn't sound so attractive. To me, it brings back those Wizard of Oz nightmares -- you know, the ones with zombie Ray Bolger? "Brain...! Give me... brain...!"
But to the ancient Egyptians it made sense. They were obsessed with fighting death. Death meant turning into rotten stinking gooey stuff. Therefore rotten stinking gooey stuff was the enemy: Their physicians specialized in enemas and emetics, and those who could afford the time devoted three days of each month to purgation.
If we assume that the soul is immortal, the soul can't be in stuff that decays. And the brain is too icky and gooey to embalm, so it must be useless to the soul.
Of course, the ancient Egyptians were right that the brain is an obstacle to immortality. Sadly, that's because the brain is what defines mortality. They were right that we rot from the head down. They were just wrong about there being an alternative.
But their technique was embalming, so that's what they went by. It's a common enough fallacy -- a subspecies of what experimental psychologists tend to call "priming." We apply (shape, mask, filter) what we've encountered (whether we realize it or not) to our actions, and we apply our actions (whether we realize them or not) to what we perceive.
. . . 2002-05-03 |
Neuraesthetics: Foundations, cont.
It's inherent to the mind's workings that we'll always be blinded and bound by our own techniques.
Another example of this -- which actually bugs me even more than ancient Egyptians taking brains out of mummies -- is when essayists or philosophers or cognitive scientists or divorced alcoholic libertarians or other dealers in argumentative prose express confusion (whether positive or negative) before the very existence of art, or fantasy, or even of emotion -- you know, mushy gooey stuff. Sometimes they're weirdly condescending, sometimes they're weirdly idealizing, and sometimes, in the great tradition of such dichotomies, they seem able to, at a glance, categorize each example as either a transcendent mystery (e.g., "Mozart," "love") or a noxious byproduct (e.g., "popular music," "lust").
(Should you need a few quick examples, there are some bumping in the blimp-filled breeze at Edge.org -- "Why do people like music?", "Why do we tell stories?", "Why do we decorate?" -- a shallow-answers-to-snappy-questions formula that lets me revisit, through the miracle of the world-wide web, the stunned awful feeling I had as a child after I tackled the Great Books Syntopicon....) |
These dedicated thinkers somehow don't notice, even when they're trying to redirect their attention, what they must already know very well from intellectual history and developmental psychology both: that their technique is blatantly dependent on what seems mysteriously useless to the technique.
Cognition doesn't exist without effort, and so emotional affect is essential to getting cognition done. Just listen to their raised or swallowed, cracked or purring voices: you'll seldom find anyone more patently overwhelmed by pleasure or anger or resentment than a "rationalist," which is one reason we rationalists so often lose debates with comfortably dogmatic morons.
Similarly, "purely observational" empiricism or logic could only produce a sedately muffled version of blooming buzzing confusion -- would only be, in fact, meditation. Interviews, memoirs, and psych lab experiments all indicate that scientists and mathematicians, whether students or professionals, start their work by looking for patterns. Which they then try to represent using the rules of their chosen games (some of the rules being more obviously arbitrary than others). And they know they're done with their new piece when they've managed to find satisfying patterns in the results. It's not that truth is beauty so much as that truth-seekers are driven by aesthetic motives. ("It's easier to admit that there's a difference between boring and false than that there's a difference between interesting and true.")
Studies in experimental psychology indicate that deductive logic (as opposed to strictly empirical reasoning) is impossible without the ability to explicitly engage in fantasy: one has to be able to pretend in what one doesn't really believe to be able to work out the rules of "if this, then that" reasoning. The standard Piaget research on developmental psychology says that most children are unable to fully handle logical problems until they're twelve or so. But even two-year-olds can work out syllogisms if they're told that it's make-believe.
Rationality itself doesn't just pop out of our foreheads solitary and fully armed: it's the child of rhetoric. Only through the process of argument and comparison and mutual conviction do people ever (if ever) come to agree that mathematics and logic are those rhetorical techniques and descriptive tools that have turned out to be inarguable. (Which is why they can seem magical or divine to extremely argumentative people like the ancient Greeks: It's omnipotence! ...at arguing!)
An argument is a sequence of statements that makes a whole; it has a plot, with a beginning, a middle, and an end. And so rhetoric is, in turn, dependent on having learned the techniques of narrative: "It was his story against mine, but I told my story better."
As for narrative.... We have perception and we have memory: things change. To deal with that, we need to incorporate change itself into a new more stable concept. (When young children tell stories, they usually take a very direct route from stability back to initial stability: there's a setup, then there's some misfortune, then some action is taken, and the status quo is restored. There's little to no mention of motivation, but heavy reliance on visual description and on physically mimicking the action, with plenty of reassurances that "this is just a story." Story = Change - Change.)
And then to be able to communicate, we need to learn to turn the new concept into a publicly acceptable artifact. "Cat" can be taught by pointing to cats, but notions like past tense and causality can only be taught and expressed with narrative.
It seems clear enough that the aesthetic impulse -- the impulse to differentiate objects by messing around with them and to create new objects and then mess around with them -- is a starting point for most of what we define as mind. (Descartes creates thoughts and therefore creates a creator of those thoughts.)
So there's no mystery as to why people make music and make narrative. People are artifact-makers who experience the dimension of time. And music and narrative are how you make artifacts with a temporal dimension.
Rational argument? That's just gravy. Mmmm... delicious gravy....
Further reading
I delivered something much like the above, except with even less grammar and even more whooping and hollering, as the first part of last week's lecture. Oddly, since then on the web there's been an outbreak of commentary regarding the extent to which narrative and rationality play Poitier-and-Curtis. Well, by "outbreak" I guess I just mean two links I hadn't seen before, but that's still more zeitgeist than I'm accustomed to.
. . . 2002-05-18 |
In a 1989 study that's been much cited by those looking to improve their toddlers' SAT scores and investment portfolios, a bunch of 4-year-olds were tortured by being told that they could have a few pretzels now if they wanted, but that if they waited they could have some cookies instead. Researchers then tracked how long it took for the children to crack.
Unsurprisingly, it was harder for the children to delay gratification if the cookies were displayed in plain sight, or if they were told to think hard about cookies while waiting.
But that effect reversed when representations of the rewards came into play:
"Children who saw images of the rewards they were waiting for (shown life-size on slides) delayed twice as long as those who viewed slides of comparable control objects that were not the rewards for which they were waiting, or who saw blank slides.Which may provide insight into cave paintings and pornography and Marcel Duchamp, even if it does toss up advertising as a topic for further research...."... children facing pictures of the rewards delayed almost 18 minutes, but they waited less than 6 minutes when they pretended that the real rewards, rather than the pictures, were in front of them. Likewise, even when facing the real rewards they waited almost 18 minutes when they imagined the rewards as if they were pictures."
. . . 2002-06-12 |
I got my act together and it closed in Boston
My characteristically glum affect skewed yesterday's scattered reflections, but that's what happens with scattered reflections....
As far as I know, pseudonymous publication did begin as a way to avoid persecution. But as authorship became established as a central marketing and critical unifier, pseudonyms became something more like theatrical roles, or like writing fiction, or hoaxes, or drag names: an assumption of a voice explicitly "not yourself" that permits the existence of what "you yourself" could never have made or imagined making, thus changing the boundaries of "you yourself."
I feel especially foolish for not making that step since it's a tenet of neuraesthetics that form causes content. Or, as Visible Darkness puts the fallacious contrary:
"Facts are objects to be mined and refined, and are not created through social interaction. "
. . . 2002-06-29 |
"Who's the weirdo?"
"The object of group truth is group-confirmation and perpetuation." |
As usual, Laura Riding is unpleasantly correct. Science says so! A top priority for any social group is to protect the integrity of the group by erasing disputes within the group and exaggerating disputes with those outside the group. We need to synchronize our beliefs; their truth is a secondary (if that) consideration. One might even speculate that the very concept of verifiable "truth" develops -- not invariably -- from the social pressure to eliminate disagreement.
One well-documented result is that group discussions polarize attitudes, leading not so much to the lowest common denominator as to the most extreme tenable discriminator. Stereotyping of other groups, for example, follows that pattern: after a good hearty talk about Those People, mild prejudices become more vicious, and, having been publicly stated, more clung to.
Sadly (for those of us who perceive innate value in "truth"), just providing evidence to the contrary to everyone in the group isn't enough to interfere with this high-contrast-filter transformation. People -- not being essentially rational -- don't waste attention on evidence unless there's a reason to. If everyone in the group shares familiarity with the same counter-stereotypic information, they don't feel compelled to bring that information up. It's old news, as the saying goes.
But in "The Communication of Social Stereotypes" (Journal of Personality & Social Psychology, Vol. 81, No. 3), Markus Brauer, Charles M. Judd, & Vincent Jacquelin found a loophole:
If only one member of the group knows the counter-stereotypic information, the information is grounds for disagreement. Thus it becomes interesting, attention is drawn to it, and polarization doesn't occur. Heterogeneity within the group increases the visibility of evidence, and thus the validity of group opinions.
Some comments:
. . . 2002-07-01 |
"Who's the weirdo?", cont.
You know what makes me happy?
... well, yeah, "trying the patience of the reader" works, but you know what else?
It's when results reported in the Journal of Personality & Social Psychology match results reported in cognitive science texts from MIT Press. As for example, on the good ship Cognition in the Wild, skippered by Edwin Hutchins, who I trust, among other reasons, 'cause he says "the real value of connectionism for understanding the social distribution of cognition will come from a more complicated analogy in which individuals are modeled by whole networks or assemblies of networks, and in which systems of socially distributed cognition are modeled by communities of networks." Boo-yah!
Cap'n Hutchins set up a constraint-satisfaction connectionist network to simulate hypothesis resolution between communicating gatherers of evidence for or against conflicting hypotheses.
Consider a simulation experiment in which all the networks have the same underlying constraint structure, and all have the same access to environmental evidence, but each has a slightly different initial pattern of activation than any of the others. Furthermore, all the networks communicate with one another, all the units in each network are connected to all the units in the other networks, and the communication is continuous. This can be regarded as a model of mass mental telepathy. With a nonzero persuasiveness, each individual network moves toward [the same] interpretation more quickly. ... Once there, they respond only a little to additional evidence from the environment. Once in consensus, they stay in consensus even if they have had to change their minds in order to reach consensus.... a group mind would be more prone to confirmation bias than any individual mind.

Given that, he went on to set up primitive models of such painfully familiar conflict-resolution approaches as monarchy, Quaker-style consensus, and majority-rule voting. No surprises as to the plusses (shortened time to resolution) or minuses (d'oh!) of monarchy. Or of consensus:

... diversity of interpretations is fairly easy to produce as long as the communication among the members of the community is not too rich. If they are allowed to go their own ways for a while, attending to both the available evidence and their predispositions, and then to communicate with one another, they will first sample the information in the environment and then go (as a group) to the interpretation that is best supported.

if some individuals arrive at very well-formed interpretations that are in conflict with one another before communication begins, there may be no resolution at all.

With majority rule, he points out:

voting does not always produce the same results that would be achieved by further communication. That this is so can easily be deduced from the fact that the result of a voting procedure for a given state of the community is always the same, whereas a given state of the community may lead in the future to many different outcomes at the group level (depending on... the bandwidth of subsequent communication).

Probably because of wanting to keep the models simple, he doesn't mention another serious problem with working democracies, or at least the one I'm in right now: Block-voting by a prematurely and persistently frozen-state monoculture of theocratic fundamentalists. Once a plurality of voters has arrived at very well-formed interpretations, they may ignore any evidence that contradicts their hypotheses and still be able to win control of the government.
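(A toy caricature, mine and far cruder than Hutchins' actual constraint-satisfaction networks, of the "nonzero persuasiveness" passage above: agents who continuously hear one another converge on a shared interpretation and then, once in consensus, barely budge when the environmental evidence reverses. The agent count and all the weights are invented.)

```python
# Invented weights and agent count; a crude stand-in for Hutchins' networks.
import numpy as np

rng = np.random.default_rng(0)
n_agents = 10
gain = 3.0        # how sharply opinions saturate toward +1 or -1
w_self = 0.4      # weight on an agent's own current interpretation
w_group = 0.6     # "persuasiveness": weight on everyone else's opinions
w_evidence = 0.5  # weight on fresh environmental evidence

# Slightly different initial predispositions, as in the quoted setup.
beliefs = rng.normal(0.0, 0.05, size=n_agents)

def communicate(beliefs, evidence, rounds=20):
    """Continuous everyone-hears-everyone communication plus shared evidence."""
    for _ in range(rounds):
        group_view = beliefs.mean()
        beliefs = np.tanh(gain * (w_self * beliefs + w_group * group_view)
                          + w_evidence * evidence)
    return beliefs

beliefs = communicate(beliefs, evidence=+1.0)   # evidence favors hypothesis +1
print("consensus reached: ", beliefs.round(3))

beliefs = communicate(beliefs, evidence=-1.0)   # the evidence now reverses
print("after the reversal:", beliefs.round(3))  # the group barely budges
```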
Hutchins speculates that "in some environments, chronic indecision may be much less adaptive than some level of erroneous commitment." And I have a further obsessive (with the combined force of two obsessions) speculative comment of my own:
5. The computer simulations I've seen of language and other human memory-experience-extenders assume constant access and transmission.
That might be true of oral culture. The only thing transmitted through time is what's always important at each time. (Which may in turn be how the notion of sacred narratives and formulae developed: as a way of keeping seemingly arbitrary language in place, working against the ravages of convenience.) But artifacts -- such as writing -- can outlast their time and their popularity, and survive to transmit new information -- that is, to transmit old information to new recipients. Anything that develops outside of our own cultural circumstances provides, by definition, that healthy "diversity of interpretation" based on "broken communication" between entities that have "gone their own ways for a while."

My quixotic rage against copyright extension has nothing to do with those profitable works that get all the publicity -- those which are popular and reprinted. I don't care whether Disney gets the money for Disney properties or not, so long as the Disney properties are available. No, the utterly blankly death-reeking evil aspect of copyright extension and extension and extension is our forced regression to a secular oral culture, crushing into dust (if paper) or vinegar-reeking glue (if film) those artifacts that aren't currently -- at every moment -- obviously overwhelmingly profitable.
. . . 2003-12-07 |
Francis at the Mitchell Brothers Theater
Lawrence L. White extends the popular series:
Jessie Ferguson is okeh with lack of consensus in "pure aesthetics," but how are you going to keep it pure? & I'm not talking about keeping the non-aesthetic out of aesthetics: how do you keep the aesthetic out of sociology, history, etc?

I like those questions.

I have this idea (one that I can't explain or justify to any acceptable degree) that it's all about poesis, that is, "making" in a general sense, "creation," as in things made by humans. Things such as society. That the poem and the economy are variants (mostly incommensurate) of the same drive. That "I'm going to put some stuff together" drive. To go kill me that deer. To clothe my child. To flatter the chief. To exchange for some stuff that guy in the other tribe put together.
Does this insight have any practical application? Not that I relish exposing my reactionary tendencies (yet again), but among those practicing sociological versions of aesthetics, the cultural studies crowd, I'd like less of the scientistic model — let me tell you how things are!— and more of the belletristic model — here's something I wrote! I would like to practice good making in criticism. (& as an inveterate modernist, I'm willing to call obscure frolics good making.)
But what of socio-economics? Is that supposed to be more like a poem, too? Perhaps there are other models. If I can throw out another murky notion to cushion my fall, Wittgenstein seems to say as much when he speaks of "grammar." I always took that term to contain potential pluralities, as if every discipline had a somewhat distinct way of talking, of presenting evidence, making inferences, etc. Which is not to say everything goes. He also spoke of needing to orient our inquiries around the "fixed point of our real need." Not that there's much consensus on that. But let me offer a suggestion: the inability of the English Department to come to a "consensus" severely debilitates its ability to ask for funding. Because the folks with the purse do want to know what exactly it is that you do.
Let me try it from another angle, through another confusion, this time not even so much a notion as a suggestion. Allen Grossman, the Bardic Professor, once reminded us that "theater" and "theory" have the same root. He, too, seems not to have said something Bacon didn't know. Grossman, though, as a reader of Yeats & Blake, wouldn't take it where our Francis wants to go. Perhaps the Baron Verulam's heart might be softened by this plea: isn't the point of the socially inflected sciences to make things as we "would wish them to be"? For example, don't the "true stories out of histories" serve to help us order our current situation, despite Santayana's overstatement of the case? (John Searle once told us in lecture that the drive behind philosophy was nothing more radical than simple curiosity. I found the answer to be unsatisfying philosophically and reprehensible politically.)
To add a trivial one, though: Hasn't the English Department's problem always-already been self-justification? Poetry and fiction weren't very long ago exclusively extracurricular activities, and it takes a while to explain why it should be otherwise. Isn't jouissance its own reward? Or do students pay to be titillated and spurred forward by the instructors' on-stage examples? (No wonder consensus isn't a goal.)
. . . 2004-05-15 |
Aristotle, being a philosopher, naturally thought the purpose of art was to convey philosophy.
Helen Vendler, being a pompous blowhard, naturally thinks the purpose of art is to generate feelings of gratitude toward pompous blowhards.
Of course, I'm just a simple kindly old country software engineer, but it 'pears to me like in suggesting that art classes replace history, Vendler conveniently forgets that most artwork is at least as dull as most history, and that most arts teachers are at least as incompetent as most history teachers, and that, dealing in such vast categories, any example she cites will be easily defeated by a counterexample. If you want to find a purpose for all art, including the stuff you don't like, you'd better start by looking at the boundaries you've set. (Not that art is usually made for art's sake, or that art is very often noticed for art's sake. But any coherent generalization about art-as-art perforce restricts itself to the art's-sake part.)
Art consists of artifacts. An artifact isn't delineated by its profundity but by:
This has nothing to do with the artifact's narrative or argumentative aspects. Aristotle's favorite artwork portrayed one type of idealized individual; Vendler's favorite artwork portrays another type. Doesn't matter: in noticing a Euripides show or a Carver story, they've drawn their lines.
Sure, as soon as we see those particulars, we'll grab 'em and start abstracting again. But the initial move must be toward particularization. You can't write your Comparative History of the Misericord in Medieval Southern England until you've distinguished misericords from how it feels to go to church.
Despite their privileged auctorial position, artisans tend to be the first to enforce this separation of object and context. It's hard to learn a craft while thinking of it as a gift from heaven or an emotional outburst.
So long as an artifact is wholly subsumed in message, we don't perceive it as art. We only perceive it as message.
What purpose is served by serving something other than purpose? The British Navy eventually found limes necessary despite their obvious lack of nutritional value. Similarly, in an ascetic culture, the artifactual excess may hide sensual pleasure; in a consumer culture, it may supply the illusion of community, or spiritual afflatus, or empathy (Vendler's sop of choice, scored to "It's a Small World, After All").
In any case, what makes it art isn't the exact utility of the dietary supplement but its being supplemental.
Excess and particularization lead to the artifact's mysterious ability to maintain a social life apart from the artifact's creators. According to everything I was told as a child, the Mona Lisa was a great painting because its eyes seem to follow you around the room; Leonardo had other goals in mind; the Mona Lisa remained the Mona Lisa regardless. As John Clute has several times pointed out, art still (or best) fulfills its artistic role when misinterpreted. Or, to paraphrase Plato, the creepy thing about poetry is that it can't be tortured to ascertain its point.
This mystery, or threat, is typically explained as an infusion of life-force from, for example, divine or demonic inspiration, the collective unconscious, the folk, or, in my time and place, the sincerity of the individual or the irreducible physicality of the medium (poetry as self-conscious language, music as self-conscious sound). Again, though, the category of art doesn't depend on whatever justification is currently fashionable, but instead on the ability of a constructed object to call for such justification.
So when I go neuraesthetics hunting in science journals, I look not for the sublime or for the cathartic, but for the particularizing non-utilitarian urge — a balance to those abstracting and tool-making tendencies generally celebrated as our highest achievement.
I quite like the early cyborg you've unearthed for your current header.
Yes, if it weren't for the genius of Adolf Giesin, my view of the racetrack would be interrupted every time I removed my pipe from my mouth. Bless you, Herr Giesin!
Lawrence White points out (very graciously) that I was kinda carrying some baggage of my own up there:
Hey, synchronicity, man! Just last night my wife & I were wondering how someone could read history but not novels, and vice versa. The results of our dishwashing seminar were such: literature develops the imagination, and history provides the best object for such an imagination, actual people's lives. Are there holes in our theory? You betcha. & Professor Vendler's got hers, too. (Don't you have to admit for a pompous blowhard she's awful decorous? Not at all rude.) But Jonathan Dresner, in the comments on Burstein's site, says it all. A) it's a stupid choice in targets. We're surrounded, kids. Let's not start shooting at each other. & B) since when has a literary critic known about anything beyond their own field? Now, this last question does have some good answers. (I'd start w/Benjamin.) But as Tim Yu's response to Stefans points out, Vendler will be a bad answer, in large part because she doesn't know her own field that well. You don't have to like Language Poetry, but you should understand it better, if you want to understand poetry. You see, I'm a conservative. I take Modernism as a given, as an undeniable precedent. & I'm perfectly happy w/calling Postmodernism the child that denies he looks & acts just like his dad. Vendler reads Stevens by converting him into a trippy Keats, w/the trippy part left out. (By the way, Allen Grossman, though more than a touch thaumaturgical, is much better on all this.)
P.S. One of my life-assignments is to make a defense of the MFA program. Because as Yu pointed out, everyone despises them. Doesn't that make them an underdog of some sort?
Well, as we've learned from the outraged screeching of slightly slighted conservatives, underdoggedness makes slippery moral high ground. But you're right: Vendler and MFA programs have never directly insulted me personally, and so I should have tempered my language. The irony of an unqualified weblogger calling anyone a pompous blowhard is manifest but hardly transformative.
I'm sorry to disagree with dishwashers, having just come from a long stint myself, but literature is used to close down the imagination as often as to open it, and history has stimulated the imagination of many a good writer. I agree it's hard to see how an enthusiastic reader of one wouldn't as a matter of course be a reader of the other. But what do I know? I still haven't even figured out why the right and left brain hemispheres are supposed to be competing. Does our left foot battle our right foot for supremacy when we walk?
Since when does a cosmologist know about anything outside her field?
PF gently takes me to task for mischaracterizing (or at least drastically oversimplifying):
Aristotle (but maybe I'm generous because I always have Plato in mind) seems very interested in what art is as it is, not what use it can have for philosophy. Catharsis, as you recall, is central (and forbiddingly obscure) in the Poetics, and it's not a philosophical state, neither of thumos nor psyche.
Not much is clear about catharsis, but given Aristotle's sensible insistence on centering his aesthetic studies in pleasure, you're also right: it's likely that he was no more reductionist than I strive to be. I have a hard time following Aristotle in general, what with my heels being dug in, which is all the more reason for me not to talk trash about him; he wouldn't have entered my mind if The Little Professor hadn't reminded me of his prioritization of "universals," which does seem a philosopher's preference. In hindsight, I should have dropped those first two sentences. And changed the title, I guess.
. . . 2004-07-10 |
Language and Creativity: The art of common talk
by Ronald Carter, Routledge, 2004
An affable celebration of the formal qualities of informal conversation, backed by two big assets:
The book is therefore recommended to one and all, although it suffered a persistent limp after its first misstep into "Creativity," the gopher hole.
What Carter means by creative seems something more exactly named non-semantic, and better approximated by aesthetic, prosodic, performative, hedonic, ludic, or even politic.
What a difference a bad word makes.
For starters, and harrumphing as a math major and computer programmer, it's kind of offensive to presume (as Carter's forced to) that there's no creativity in semantics. Where do new abstractions and techniques come from? Yeah, I know some people think they're just lying around in the cave waiting for us to trip over them, but some people think that about alliteration too.
Attacking on the other front, prosodic patterning relies on formula. Tags, well-worn puns and rhymes, simple repetition, are all aspects of conversation that Carter wants to bring out, but calling them "creative" stretches the flavor out of the word.
CANCODE documents the impulse to self-consciously draw attention to the material units of supposedly transparent communication: a social need to undo meaning in favor of surface. That's worth documenting, all right. But Carter's "creative" slant gives preferential treatment to idiomatic metaphors when virtually any non-core aspect of speech or gesture can be fucked with: a proper name, for example, or an instruction manual.
Here's Carter's example of language which thoroughly "lacks the property of literariness":
"Commence by replacing the hub-bearing outer race (33), Fig. 88, which is a press fit and then drop the larger bearing (32) into its outer member followed by oil seal (31), also a press fit, with lip towards bearing. Pack lightly with grease."
Only a little earlier he had transcribed a group of friends making double-entendre hash of the job of drilling a hole in a wall. Imagine what they could've done reading this aloud. Imagine it in a political poetry anthology under the title "White Man's Burden". It doesn't take much effort to re-insert "literariness" into writing.
Re-insert the literary into writing.... That has a peculiar sound, doesn't it?
Writing, in our current origin myths, was designed to carry an ideational burden, starting with ledgers, shopping lists, and rule books. If that's the case, then it would require special writerly effort to reinstate the non-semantic balance conversation achieves so effortlessly. That special effort, which we might call "literary," would then receive special notice. When the social cues that hold conversation together changed, so would "literary" style, and, for example, the current fiction-writer's hodgepodge of brand-and-band names wouldn't be a sign of fiction's decline, but of its continued adaptability. (Man, I wish I felt this as easily as I argue it.) In a focus-driven reversal of perspective, the written, having gotten such abundant credit for its efforts to mimic ordinary prosody, would eventually become the norm for prosodic effects.
And so we end up here, praising quotidian conversation for possessing the very "poetic" qualities that originated in it. Carter's use of the term "creative" (as in "Creative Writing Department") reinforces this confusion while his evidence clears it up.
Finally, the positive self-help connotations of "creativity" somewhat obscure one of the most intriguing trails through CANCODE's walled garden: the extent to which playful, euphonic, and memorable language is prompted by hostility. Or, more precisely, how the verbal dance between meaning and surface mutually instigates and supports the social dance between individual aggression and communal solidarity.
This might help explain the peculiarly bickering or bitchy tone that emerges in the extended nonsense of Lewis Carroll, Walt Kelly, Krazy Kat, and Finnegans Wake, and why many a delightful bit of fluff begins life as vicious parody. (Also for the record, I think how the fluff ends up is just as important a part of the story as how it began. May all your unintended consequences be little ones.)
Now that's nattering!
Recognizing the essential truth of adaptability doesn't mean you have to like or even think well of the Thing, Adapted. (The eohippus was sweet, after all: was it too high a price to pay for the horse? Can't we all just get along?)
Danged good point. Almost fell into prescriptive grammarian hell there.
You were a math major?
The hows and wherefores have been mentioned here before, but, to my surprise, the whys have not, although they might be guessed at easily enough by more general remarks. In brief, given an apparent choice between paying more lip-service to my pleasures and being allowed to keep them, I preferred the latter.
But where does it all come from?
I ask the same thing every time I have a sinus infection.
"Pack lightly with grease" has such a delicate feel to it . . . very nice, very literary.
The always rewarding Tom Matrullo has found a particularly challenging angle to strike his flint against. May sparks fly high and wide.
John Holbo (aided by Vladimir Nabokov) combines the topics of abstraction, art, and aggression in a lovely meditation on chess.
. . . 2004-09-06 |
The Extent of the Literal
by Marina Rakova, 2003
Neurological, developmental, and ethnographic evidence all resist clean divisions of literal from metaphorical meaning. So far, so George Lakoff.
But the Lakoff posse, like a certain brand of postmodernist, tends to understate precursors and overstate conclusions. In this case, their precursors include Kant, Nietzsche, and a lot of twentieth century literature, and their conclusion is that "all meaning is metaphorical."
As Marina Rakova objects, that stretches the meaning of "metaphor" so far as to make "metaphor" meaningless. Linguistic units like hot, sharp, bright, and disgusting cut across sensory and social realms at too early an age in too many cultures to establish precedence of the physical over the psychic. Instead of fat primal sensory perceptions from which all other meanings are metaphorically derived, Rakova posits even fatter primal synesthetic-social conceptual bundles from which the categories of "literal meaning" and "sensory realms" are eventually (and socially) abstracted.
This gumbo is starting to smell good. But — dissertation pressure, maybe? — Rakova sometimes comes close to implying a one-to-one relation between concept and word. As a handful of okra I'd add that these seemingly primal concepts of hers can only be communicated (or, if you prefer, approximate mirroring by another human being can only be cued) by nonconceptual means which carry a whole 'nother set of fuzzies.
"Concepts are not meanings." And neither are words meanings. We're fortunate critters in that definitions can help us learn new words. But, speaking as an uncomfortably monolingual subject, I can assure you that access to definitions doesn't guarantee communication. Meanings need to triangulate against something meatier before they cohere into language.
And, although words provide Rakova's chief evidence for concepts, neither are words concepts. Even the experience of watching someone cry differs from the experience of crying. Like the individualistic experience Rakova calls concept, the extended empathy we call language precedes any notion we might form of the "literal"; like (but distinguishable from) the concept, the word tangles the physical and the abstract. If concepts are fat, words are hairy.
And we haven't even started trying to do anything yet.
. . . 2006-01-11 |
Moretti sounds like a happy guy. And it's infectious. Why pledge allegiance to a groove and turn it into a rut? Get out of that stuffy coffee shop and into a cool refreshing stats lab. Live a little! (With the aid of twenty grad students.) An OuBelLetriPo is overdue. Let's pick a quantitative approach and a subject out of the hat: "Pie charts" and "Coming-out stories"—wait, um, I wasn't ready; can I try again? "Income distribution" and "Aphra Behn"? Perfect!
Will you end up with a demolished bit of received wisdom? A sociological footnote? Or just graphic representation of a critical triteness? You don't know! You think Perec knew the plot of La Disparition before he started?
From this set of ongoing experiments, "Graphs" seem to be going best. Those cross-cultural Rise-of-the-Novel curves hold immediate appeal.
And what they appeal for is correlation with something else. Moretti plausibly and skeptically explains who might've stepped on the brakes when the curve dips, but who revs the engine? Do accelerators vary as inhibitors do?
Even more intriguing is Moretti's report that nineteenth-century English fiction genres tended to turn over in a clump every twenty-five or thirty years, rather than smoothly year by year. But his report relies exclusively on secondary sources, and risks echo chamber artifacts. Are generational timespans a convenience for the researchers he draws from? What if dialogic genres ("Jacobin novel" and "Anti-Jacobin novel") weren't shown separately? How closely do the novel's clumps lock step with transitions in other forms? How far can we reliably carry statistical analysis of a non-random sample of forty-four?
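(A concrete aside, strictly mine and not Moretti's: the clumping question is the sort of thing a quick permutation test can at least frame. The Python sketch below uses deliberately invented end-dates for forty-four genres, every number hypothetical, and asks how often uniformly scattered dates pile into a few five-year bins as heavily as the invented clumped ones do.)

import random
random.seed(0)

SPAN = (1740, 1900)   # years covered, inclusive
BIN = 5               # width of each histogram bin, in years

# Hypothetical, deliberately clumped end-dates standing in for the 44 genres:
# four waves of turnover roughly a generation apart. Not Moretti's data.
waves = [1790, 1820, 1848, 1875]
end_years = [w + random.randint(-4, 4) for w in waves for _ in range(11)]

def clumpiness(years):
    # Variance of bin counts: high when end-dates pile into a few bins,
    # low when they trickle out smoothly year by year.
    counts = [0] * ((SPAN[1] - SPAN[0]) // BIN + 1)
    for y in years:
        counts[(y - SPAN[0]) // BIN] += 1
    mean = sum(counts) / len(counts)
    return sum((c - mean) ** 2 for c in counts) / len(counts)

observed = clumpiness(end_years)

# Null model: 44 end-dates scattered uniformly over the same span.
trials = 10_000
hits = sum(
    clumpiness([random.randint(*SPAN) for _ in range(44)]) >= observed
    for _ in range(trials)
)
print(f"observed clumpiness {observed:.2f}; "
      f"chance matches it in {hits} of {trials} trials")

Small samples and non-random sampling remain exactly the problems named above; a test like this only says whether a clumped pattern could plausibly be noise, not where the clumps came from.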
Plenty of intrigue, then, and plenty of opportunity to re-make the mistakes of others who've tried to turn history into a "real science."
Since maps are often referred to by writers (and, when otherwise unavailable, as in fantasy genres, often passed along to the reader), their re-use by critics tends to be confirmatory rather than revelatory — most dramatically when Clive Hart walked each character's path through the "Wandering Rocks". In "Maps", Moretti's diagrams make a good case for a not very startling thesis: a nostalgic series of "village stories" will most likely feature a village from which meanderings are launched but which fades into insignificance over time. As he admits, his scatter plot of Parisian protagonists provides even less novelty: if you have an ambitious young French hero, you start him in the Latin Quarter and aim him elsewhere. (In 1987 Pierre Bourdieu diagrammed The Sentimental Education's character trajectories on a Parisian map and similarly found graphic confirmation of what was never really in doubt.)
Judging by early fruit, "Trees" hold the least promise. As presented, the "free indirect discourse" evolutionary tree doesn't meet Moretti's own standards of rigor, since he offers no material justification for either his selection of source material or his linkages.
His other evolutionary trees may be most interesting for failing to justify their initiating assumption: that visible decipherable clues define the classic mystery genre. Extending the branches to verifiable examples of "fair play" might draw the tree-builder into unabstractable tangles. In the classic blend of detection with gothic and horror elements, consider how often the resolution seems arbitrary, delivered with a wink. Given how poorly most human beings follow a logical argument, does anything more than lip service have to be paid to rationality? To what extent was that expectation set by reviewers rather than noticed by readers? How quickly after the rule's formulation was it challenged by re-assertion of other aspects of crime melodrama in spy stories, thrillers, procedurals, and hard-boiled stories, and then how quickly was it undermined by "cross-breeding"? (My own experience of genre change seems closer to Alfred Kroeber's self-grafting Tree of Human Culture than to species divergence. You only go so hardcore before background singers return to the mix.)
More exhaustive and more focused, Moretti's "everything published in the Strand" tree carries more conviction (and much less tidiness) than his initial "Conan Doyle and his rivals" tree. Exhaustively constrained to such an extent, though, the tree may describe something less than Moretti seems to hope for. I can imagine a tree tracing certain ingredients of virtual reality stories in 1980s science fiction. But would that graph evolution or just Gardner Dozois's editorial obligation to avoid strict repetition?
Moretti closes his trilogy with two general remarks.
One is a call for materialism, eclecticism, and description. This I applaud, since the most interesting scholarship I've read lately includes interdisciplinary studies of "accidentals", histories of readership and publishing, text-crunching of non-canonical sets, whether mechanically or passionately.... There's plenty of life even in purely literary anti-interpretive experiments such as those collected in Ben Friedlander's Simulcast.
The other "constant" Moretti claims is "a total indifference to the philosophizing that goes by the name of 'Theory' in literature departments." (He doesn't define "Theory" more precisely, but Novalis is apparently not on the prohibited list.) And here, I think, I'll keep my hands quietly folded.
I agree that twentieth century philosophers and psychologists have made awful interpretation factories, and that literary studies sometimes reek of old shit under new labels. But interpretations generated from political science, economics, quantum physics, or fMRI averaging tend to be just as inane. What makes such readings tedious isn't which foreign discipline has been used to slap together a mold, but the inherent moldiness of the affair.
For a critic and pleasure reader like myself, Moretti's text-twice-removed findings fit best in the foundations and framework of aesthetics, clearing false assumptions and blocking overly confident assertions. That's also where neurobiology, developmental and social psychology, and other cognitive sciences seem most useful.
Along with philosophy. Having agreed to open up the field, why ban one of the original players? This isn't the sort of game that's improved by team spirit.
Moretti didn't he do those beige still lifes Thibaud was so fond of? Or no, I remember now it was that beer. All well and good really, but what have you bastards done with Wealth Bondage?
That information is available only on a need to know basis.
And so is the Tutor, come to think of it.
And the only people who need to know are the ones gonna join him.
(You have to respect Candida's choice of domain registrars, though.)
+ + +
"My interest is really, why do our senses start being filtered? And what does it do to our history and our art?"- bhikku
Eidetic imagery — the ability to retain in detail a pictorial configuration — is found in approximately 8% of the school population, but almost never in adults, aside from artists.
In "Senses, Symbols, Operations" (The Arts and Cognition, David Perkins & Barbara Leondar, eds.), H. Gardner compared performances of a group of eleven-year-olds and a group of fourteen-year-olds across a wide variety of perceptual, motor, and cognitive tasks. For the most part, there was no improvement with age, or there was a slight decline. Improved: solving brain-teasers; recall of important narrative details. Significantly worse: memory of irrelevant details; dot-counting.
"...we must ask whether a cultural emphasis on operative thinking has had, as an unintended consequence, a deletrious effect upon figurative capacities. ...the decline of incidental learning, the waning of interest in the arts which is so characteristic of adolescence, and contrasting strategies of adolescents and preadolescents in the style discrimination tasks [adolescents tending to compare, preadolescents tending to describe] at least hint at the possibility...."
"Even as a schoolboy I took tremendous delight in Shakespeare, especially the historical plays. I have also said that formerly pictures gave me considerable and music very great delight. But now for many years I cannot endure to read a line of poetry. I have tried lately to read Shakespeare and found it so intolerably dull that it nauseated me. I have also lost my taste for pictures and music.... My mind seems to have become a kind of machine for grinding general laws out of large collections of facts but why this should have caused the atrophy of that part of the brain alone, on which the higher states depend, I cannot conceive."- Charles Darwin
'Shades of the school-house begin to close/Upon the growing Boy', eh? A rather worrying interpretation, but I suppose a logical one. My only quibble is with the phrase 'almost never in adults, aside from artists.' Is there not a spectrum between adults and artists? (strikes pose)
P F demurs mildly:
History sure as heck ought to be a science! (or rather there ought to be scientific work done in history, you can make room for belles lettres in it still), it just shouldn't import standards of sciences alien to it. Where would we be without scientific standards for the use of documents and a scientific orientation to the weighing of evidence? - but accepting that does not mean that you should demand accurate predictions from it, nor that it needs to be quantified, any more than taxonomy or paleontology.
And Genevieve-or-Michael Tucker more strongly:
Surprised to hear the conclusions you have drawn from Gardner's writing - only one set of experiments after all, there is plenty of literature on the eidetic elsewhere I should think. Personally I have had a very strong visual memory for most of my life, particularly for music on a page, but also for people, places, events with strong emotional connections. Sure, the details are rather corrupted now, but the visual is hard to drive out in those for whom it's the main learning strategy, I think.
Disappointment may be the reviewer's bread and butter, but it's cold mutton for an essayist. I tried to lend interest to this "book event" contribution by borrowing Moretti's bluff tone. If the results escaped fraudulence, it's only by their incoherence: flashy threads on a sketchy character.
As punishment, I was condemned to explain my reaction all over again in comments, first unpacking the reference to "echo chamber artifacts," and then taking up the perennial topic of cultural evolution.
Following Moretti's dismissive response, I became increasingly irascible. The reception of Graphs, Maps, Trees seemed to repeat a familiar irony: confidently self-proclaimed scientific objectivity met by hero-worship, declarations of allegiance, and inattention to the evidence. By the end of the Valve event, I felt like the hapless (and dickless) EPA inspector of Ghostbusters, pointing out the right things in the wrong movie.
Occasionally over the next few months I attempted to draw attention to related research, most pointedly in this post from March 4, 2006.
Eventually, though, I think I got the message. Graphs, Maps, Trees wasn't a collection of research papers. It was a celebration, a manifesto whose solipsism gave it the appeal of a human interest story. Any questioning of its results would inevitably be taken as dissent from its cause. In more ways than one, my lack of enthusiasm was genre-based.
. . . 2006-09-04 |
The following entry treats both the author himself and a more sapient object with insulting familiarity. Anticipating community outrage, this commentator may be excused for repeating that the relationship of author to post is no more identity than the relationship of fence-builder to post, and most resembles the relationship of Edgar Bergen to Charlie McCarthy: accepting liability; denying agreement.
A vulgarized notion of "subconscious" encourages the popular horror fantasy of the ventriloquist's dummy sitting amok. Given due ponderosity, however, the figure might provide insight into the workings of a psychology which takes conscious craft as evidence and takes art criticism as retirement home. "Sub" misleads: What consciousness disavows lies not beneath it but beside it, is supported by and supports it, is not its dirty root but a fellow twine-trunk of the hollow banyan.
Or it might not.
+ + +
"The Role of Aesthetic Judgments in Psychotherapy"
by John S. Callender,
Philosophy, Psychiatry, & Psychology 12.4 (2005)
(via Project MUSE or Cap'n Morgan the scurvy intellectual pirate)
It's become fashionable to frame (in a nice way) recently discredited psychiatric theorists as fashioners of useful myths, or as literary critics who discovered a more direct way to soak neurotics. The Bible and Shakespeare provide eternal wisdom despite weak grasps of history; similarly, Freud and Lacan provide eternal wisdom despite lapses in scientific hygiene. Cheaper than taking 'em off the reading lists.
And hey, I want to look as fashionable as the next guy, though I do wonder when someone will join William S. Burroughs in acknowledging the eternal wisdom of L. Ron Hubbard.
Meanwhile, the pseudomedical side of the divide has done their bit with best-sellers like Proust, Not Hydrotherapy. (How is it that the mental health professionals always grab the lucrative end of the stick?)
Now John S. Callender takes another step towards reunification of the pre-analytic State of Philosophy.
You know how patients have this exasperating tendency to walk agreeably along with the therapist, step by reasonable step, yet then deny his conclusion? "Yes, I see what you mean, but I still feel dirty and disgusting." "I would prefer not to." What are you gonna do with someone like that? They continue to believe something they admit they can't prove, even when it can't be justified as investor exuberance!
Well, Kantian aesthetics might explain this puzzle. Firmly held beliefs, check; unprofitable, check; irrational, check; demonstrably not universal yet more than purely personal, check; fiercely judgmental and painfully trivializable, hell yeah.
As the proud-yet-humble coiner of "neuraesthetics", I applaud Dr. Callender's essay and fully concur with the parallel he draws between cognitive therapy and art appreciation.
What I don't understand is why he finds the resemblance mutually flattering.
. . . 2007-03-18 |
Factual Fictions: The Origins of the English Novel
by Lennard J. Davis, 1983 (2nd ed. 1996)
Both Tom Jones's hero and genre were mysterious bastards. Unlike the hero, the genre's parentage remained open to question, and, in '83, Davis ambitiously aimed to prune classical romances (and even the mock-heroic anti-romance) from its family tree.
In place of that noble lineage, he proposed a three-act structure:
In his own storytelling, Davis sometimes stumbled — most painfully, he blew the punchline — and I wished he'd included a chapter on "secret histories", whose length, legal issues, and formatting (memoirs, correspondence, oddly well-informed third-person narrators) all seem to make them at least as germane as ballads. Most of all, without broad quantitative analysis to back them up, such ventures can always be suspected of cherry-picking the evidence.
But I'm an irresponsibly speculative collagist myself, and these cherries are delicious. I already understood how framing narratives relieve pressure, how they establish both authenticity and deniability: "I don't know, but I been told." But I hadn't realized how often pre-fictional writers had felt the need for such relief. Not having read a life of Daniel Defoe, I hadn't known how brazenly he forged even his own letters. And, speaking of letters, I hadn't read Samuel Richardson's flip-flops on the question of his real-world sources.
The sheer number of examples convinces us that something was shifting uncomfortably, tangled in the sheets of the zeitgeist. How else explain, across decades and forms and class boundaries, this increasingly vexed compulsion to face the old question head on, like a custard pie?
And by the end of the book, we still haven't found fully satisfying answers; the process continues. Recently and orally, for example, our impulse to simultaneously avow and disavow narrative discovered a felicitous formula in the adverbial interjections "like" and "all like".
We haven't even fully agreed to accept the terms of the problem. Remember those quaint easy-going characters in Lennard Davis's Act I? Believe it or not, living fossils of unperplexed truthiness roamed the Lost World of rural America during our lifetimes! My own grandmother sought out no journalism and no novels; she read only True Confessions and watched only her "stories" — that is, soap operas, "just like real life" they were, another quotidian reconfiguration.
* * *
All novelists descend from Epimenides.
Well, OK, if you want to get technical about it, so do novel readers ("All Cretans know my belief is false"), and so does everyone else.
That's the problem with getting technical. (Or, Why I Am Not an Analytic Philosopher, Again.)
But what about memory retrieval?

In contrast to common past-future activity in the left hippocampus, the right hippocampus was differentially recruited by future event construction. This finding is notable, not only because others report right hippocampal activity to be common to both past and future events (Okuda et al., 2003) but also because it is surprising that future events engage a structure more than the very task it is thought to be crucial for: retrieval of past autobiographical events....

It does seem strange that no regions were more active for memory than for imagination. So memory doesn't differ from fiction? At the very least, it didn't result in greater brain activity than fiction, not in this particular study (an important point).

There was no evidence of any regions engaged uniquely by past events, not only in the PFC but across the entire brain. This outcome was unexpected in light of previous results (Okuda et al., 2003). Moreover, regions mediating retrieval processes (e.g., cue-specification, Fletcher et al., 1998) such as right ventrolateral PFC (e.g., BA 47) should be engaged by a pure retrieval task (i.e., past events) more than a generation task (i.e., future events). More surprising was the finding that right BA47 showed more activity for future than past events, and that past events did not engage this region significantly more than control tasks.
(I should admit, even though that re-citation honestly conveys what's on my mind — I happened to read it while writing this, and so there it is — it doesn't honestly convey what I consider a strong argument. Like The Neurocritic, I'm skeptical about the functional neuroimaging fad; it seems too much like listening to a heart pound and deducing that's where emotion comes from. Reaching just a bit farther, then — from my keyboard to my bookshelf....)
For researchers in the cognitive sciences, a narrative works like a narrative, whether fictional or not:
... with respect to the cognitive activities of readers, the experience of narratives is largely unaffected by their announced correspondence with reality. [...] This is exactly why readers need not learn any new "rules" (in Searle's sense) to experience language in narrative worlds: the informatives are well formed, and readers can treat them as such.- Richard J. Gerrig, Experiencing Narrative Worlds
According to Davis, modern mainstream genres partly result from legal changes which forced propositionally ambiguous narratives to face courtroom standards of truth. I didn't find his evidence completely convincing, but there's something that felt right about his tale.
A narrative is not a proposition. When narrative is brought into a courtroom, interrogation attempts to smash it into propositional pieces.
But any hapless intellectual who's made a genuine effort to avoid perjury can testify how well that works. We don't normally judge narratives: we participate in them, even if only as what Gerrig calls (following H. H. Clark) a side-participant. If we restricted ourselves to "deciding to tell a lie" or "trying to tell the truth," there wouldn't be much discourse left. Depending on personal taste, you may consider that a worthwhile outcome; nevertheless, you have to admit it's not the outcome we have.
We've been bred in the meat to notice the Recognizable and the Wondrous. The True and the False are cultural afterthoughts: easily shaken off by some, a maddening itch for others, hard to pin down, and a pleasure to lay aside:
At the tone, it will not be midnight. In today's weather, it was not raining.
January 2009: Since I haven't found anyplace better to note it, I'll note here that the best academic book I read in 2008 (unless Victor Klemperer's The Language of the Third Reich counts) was Reading Fictions, 1660-1740: Deception in English Literary and Political Culture, by Kate Loveman, whose metanarrative convincingly allows for (and relies on) pre-"novel" hoaxes and satires while not erasing generic distinctions.
. . . 2007-07-17 |
Scientists love abstractions, whereas humanistic scholars are buried in the concrete. They cannot see the forest for the trees. I'm sure we all have an Uncle Fred who thinks that we shall be quite fascinated as to where he plans to plant the carrots next Spring and as to what his wife said about Mildred to her second cousin. Uncle Fred is buried in the concrete. This can be verified if one tries to tell him about abstractions; he'll not understand them and become quite bored. There can be many reasons for this, but a very likely one is that he is not very intelligent. Thus, he cannot understand abstractions. Could the problem with humanistic scholars be that they tend to rather untelligent? This is in fact no doubt one of their many problems. |
From "The proper place of humanism: Qualitative versus scientific studies of literature"
by Colin Martindale
in The Psychology and Sociology of Literature: In honor of Elrud Ibsch,
ed. Dick Schram and Gerard Steen,
John Benjamins Publishing Company, 2001.
BYO "[sic]"s.
"they tend to rather untelligent"? wot?
There you go again, noticing things....
ROTFL!!, as they say
Asperger militias. They love them concrete abstractions. The ones you can get your hands on, and do things to.
T.V. points us to Dr. Martindale's Regressive Imagery Dictionary. I foresee many Creative Writing assignments....
. . . 2007-10-21 |
There's a lot of ink spilled over 'meaning' by literary theorists (you noticed that, too?) There isn't much discussion of 'function' (in the relevant sense). But, actually, there is a pretty obvious reason why 'function' would be the preferable point of focus. It's more neutral. It is hardly obvious that every bit of a poem that does something has to mean something. Meter doesn't mean anything. (Not obviously.) But it contributes to the workings of the work. (If you are inclined to insist it 'means', probably all you really mean is that it 'does'. It is important.) "A poem should not mean but be" is somewhat overwrought, in a well-wrought urnish way; but 'a poem should not mean, but do' would be much better.

Has any literary theorist really written about 'functions', in this sense?
- John Holbo (with a follow-up on intention and Wordsworth-quoting water sprites)
Analytic philosophers often sound like a blind man describing an elephant by holding the wrong end of a stick several blocks away from the zoo. This is one of those oftens.
When talking about species-wide traits, we need to keep track of teleological scales. One can easily invent (and very rarely find evidence for) evolutionary justifications for play or sexual variability among mammals. But that's not quite the same as asking the function of this tennis ball to the dog who wants it thrown, or of this foot to the cat who's ready for a game of tag, or of this photo of Keanu Reeves to the man gazing so intently. Particulars call for another vocabulary, and art is all about the particulars.
In the broadest sense, art doesn't have a function for homo sapiens — it is a function of homo sapiens. Humans perceive-and-generate patterns in biologically and socially inseparable processes which generally precede application of those patterns. That's what makes the species so adaptable and dangerous. Even in the most rational or practical occupations, we're guided to new utilitarian results by aesthetics. Software engineers, for example, are offended by bad smells and seek a solution that's "Sweet!"
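(A toy illustration of that engineering aesthetic, mine alone and fed with an invented shopping cart: the two Python functions below compute the same total. The first merely works; the second is the one that gets called "Sweet!", and the itch to swap one for the other arrives well before anyone tallies a utilitarian payoff.)

def total_price_v1(cart):
    # Works, but smells: manual indexing, nested conditionals, needless reassignment.
    total = 0
    i = 0
    while i < len(cart):
        if cart[i]["qty"] > 0:
            if cart[i]["price"] > 0:
                total = total + cart[i]["qty"] * cart[i]["price"]
        i = i + 1
    return total

def total_price_v2(cart):
    # Same arithmetic; the shape of the code now matches the shape of the idea.
    return sum(
        item["qty"] * item["price"]
        for item in cart
        if item["qty"] > 0 and item["price"] > 0
    )

cart = [{"qty": 2, "price": 3.50}, {"qty": 0, "price": 9.99}, {"qty": 1, "price": 12.00}]
assert total_price_v1(cart) == total_price_v2(cart) == 19.0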
Making of art in the narrower sense may be power display or sexual display, may be motivated by devotion or by boredom. Taking of art touches a wider range of motives, and covers a wider range of materials: more art is experienced than art is made. Clumped with all possible initiators or reactions, an artwork or performance doesn't have a function — it is a function: a social event. Whether a formal affair or strictly clothing-optional, the take-away's largely up to the participant.
As you can probably tell by my emphasized verb switches, I disagree with John Holbo's emendation of Archibald MacLeish. Yes, Ezra Pound and the Italian Futurists thought of their poems as machines which made fascists; yes, Woody Guthrie thought of his guitar as a machine that killed them. But I've read the ones and heard the other and I didn't explode, and so the original formula's slightly more accurate, if only because it's slightly vaguer.
Still, when you get down to cases, "to be" and "to do" are both components of philosophical propositions. Whereas, as bog scripture teaches, the songness of song springs from their oscillation.
Like function, intentionality tends to be too big a brush wielded in too slapdash a fashion. CGI Wordsworth and miraculous slumbers in the sand sound closer to "performance art" than "poetry," but obviously such aberrations can't accurately be consigned to any existing genre. Nevertheless, I honestly and in natural language predict that insofar as my reaction to them wasn't a nervous breakdown or a religious conversion, it would have to be described as aesthetic: a profound not-obviously-utilitarian awareness of pattern.
Most art is intentionally produced, and, depending on the skill and cultural distance of the artists, many of its effects may be intended. And yes, many people intentionally seek entertainment, instruction, or stimulation. But as with any human endeavor, that doesn't cover the territory. (Did Larry Craig run his fingers under the toilet stall with political intent? Did that action have political consequences?) Acknowledging a possible typo doesn't make "Brightness falls from the air" any less memorable; the Kingsmen's drunken revelation of infinite indefinite desire made a far greater "Louie Louie" than the original cleanly enunciated Chuck Berry ripoff. Happy accident is key to the persistence of art across time, space, and community, and, recontextualized, any tool can become an object of delight or horror. A brick is useful in a wall, or as a doorstop, or as a marker of hostility or affection. But when the form of brick is contemplated with pleasure and awe and nostalgia, by what name may we call it?
a poet should not be
A poet should not be mean.
Jordan Davis writes:
Setting aside Holbo's unfortunate reversion to MacLeish's formula, I find his distinction between function and meaning (use and mention) useful when discussing the 99 percent of poetry that does not get discussed. To play in prime time, every last function has to show up dressed as meaning.

Ben Friedlander on a tangent: 'it's the obscurity of the near-great and almost-good that gets to the heart of things. Which for me is not the bad conscience of tradition (the correction of perceived injustice, which is where tradition and avant-garde clasp hands and sing), but its good conscience, the belief that there are those who "deserve to be forgotten."'
You're right that I sound too dismissive of a valuable insight. It's that damned analytic-philosophy lack of noticing that gets me down. A poem or play does "function" when it works as a poem or play, but how and why it functions isn't shared by all poems or plays, or by all experiences of the poem or play. The Shepheardes Calender and "Biotherm" functioned differently for their contemporaries than for me; even further, the effect of the Calender likely varied between Gabriel Harvey and John Taylor, and "Biotherm" likely did different numbers on Bill Berkson and Robert Lowell. None of this is news to you, of course, but Holbo's formulation doesn't seem to allow room for it.
Jordan again:
To take your point about Holbo -- I appear to have misread him as having spoken about apparent aporias -- I thought he meant that one accustomed to kaleidoscopes might not know how to hold a bicycle.
Beautifully put. And it's just as likely that I misread him, lured into hostile waters by the chance to make that graffiti joke.
But surely 'functions' is a plural enough noun to cover a plurality of cases, no? (How not?)
Way to bum my disputatious high, dude. I've been kneejerking against the vocabulary of John's examples, but, yes, another way to read him (and it's the way Joseph Kugelmass and Bill Benzon read him) puts us more in the same camp.
. . . 2008-05-20 |
The one part of Raymond Tallis's polemic I left uncheered was its Axis-of-Evil association of neuroscience-based foolishness with "Theory"-based foolishness. The intersection of "writers classified as Theory" and "writers I follow" offers no support, and most cognitive-aesthetics bosh I've seen was written by Theory-bashers. (Unless I count. I don't count, do I? ... do I?)
Well, I haven't been following Sabine Sielke, who's just transported Derrida's structural trademark "I wrote these words on a bar napkin last night so they must be very important but I can't for the life of me remember why" into a new era:
What, then, does it mean to "re-cognize Dickinson"? And why re-cognize the poet in the first place? ... In some way, we have always been — and cannot help — re-cognizing Emily Dickinson.
I look forward to her take on re-ader re-sponse studies.
But given the essay's many admiring citations of Camille Paglia, we might still need a broader label for that brush. How about "foolishness at large"?
For an alternative application of neuroscience to poetry criticism, see (possibly with the aid of your friendly local pirate) Hisao Ishizuka.
. . . 2011-05-15 |
Imagining Minds: The Neuro-Aesthetics of Austen, Eliot, and Hardy
by Kay Young, Ohio State, 2010
Young earns her blurbs: you could hardly find a purer product of citational discipline than this compare-and-compare of three canonical novelists, several well-established academic critics, and snippets from a tall stack of popular cognitive science and psychology books — but mostly William James — assembled with affable deafness to any intrastack squabbling. Nor do pop-explicators and novelists ever disagree; nor does any reader find any novel disagreeable.
"It really makes no difference what it is that is to be proved by such means." Still, by the end, I was convinced that nineteenth-century English fiction writers and recent Anglo-American purveyors of generalized anecdotes share many notions of human nature.
Suggested follow-ups:
The Man Who Mistook His Dad For The Law: And Other Bad Calls
rather lacanic
. . . 2011-06-04 |
"An Attentional Theory of Continuity Editing" by Tim J. Smith, dissertation (2005)
"Edit Blindness: The relationship between attention and global change blindness in dynamic scenes" by Tim J. Smith & John M. Henderson, Journal of Eye Movement Research (2008)
"Film, Narrative, and Cognitive Neuroscience" by Jeffrey M. Zacks & Joseph P. Magliano,
from Art & the Senses, ed. D. P. Melcher and F. Bacci (in press)
"Watching you watch THERE WILL BE BLOOD" by Tim J. Smith,
from Observations on film art, ed. Kristin Thompson & David Bordwell
The most suspenseful serial I've watched this year is Tim Smith & the Secret of the Hollywood Edit. (Least suspenseful: The League of Extraordinary Plutocrats & the Treasure of Depression.)
Yeah, so what's the big secret? Assuming we've somehow found something other than a monitor to look at, we'd probably be startled if our entire field of vision were replaced by something new while we sat in one spot looking straight ahead. And yet we remain calm in the face of such transitions in a movie theater, even if we've somehow found a movie theater screen large enough to fill our field of vision. The seemingly more natural transitional device of a whip-pan disrupts us more than the seemingly impossible straight cut. The film industry has built up a store of standard wisdom regarding which cuts are disruptive and which are close to indiscernible, and a few editors have even tried to explain how indiscernibility works.
This topic cluster was bound to attract the attention of the perception-driven cognitive sciences — Daniel T. Levin and Daniel J. Simons have some nice overviews. From the distinguished crowd, Smith's Harold-Lloyd-ish gumption wins my heart. Smith extends a generous line of credit to filmmakers and does all he can within reason to underwrite them; reason dictates, however, that their working hypotheses might fail, and Smith apologetically but thoroughly reports a wide range of negative results.
Besides making him a more sympathetic character, the underbrush cleared by his latest batch of invalidations leaves room for what should be (to judge by Zacks & Magliano) some very sweet new growth.
. . . 2011-06-19 |
Embodied Visions: Evolution, Emotion, Culture and Film
by Torben Grodal, Oxford, 2009
I've been a season-ticket-holding fan of the cognitive sciences since 1993, but it's no secret that I've been disappointed by their aesthetic and critical applications. And I suppose no surprise, given how disappointed I was by applications of close reading, deconstruction, feminism, Marxism, evolutionary biology, and so forth. (Lacanian criticism had the great advantage of being disappointment-proof.) All these approaches snapped off their points while scribbling across a professionally sustainable territory, all in the same way: Mysteries do not survive levels of indirection.
Mortality is a mystery. Why Roger Ackroyd died is a different sort of mystery. Once we've assumed mortality, however, why Agatha Christie died is no sort of mystery at all: she died because people are mortal. Too often writers like Grodal and Kay Young inform us that Agatha Christie died because species propagation does not require individuals to survive long past childrearing age! And also Roger Ackroyd died! And also Henry VIII!
As if to underline the over-specification, much of what Grodal says about his chosen films applies equally well to their adapted sources:
Although love often leads to integration in the prevailing social order, just as often it leads to a conflict with the existing social order, as in Luhrmann's Romeo + Juliet....
What can be gained by explaining Forrest Gump or Mansfield Park with what lies beneath human culture and history? At such removes, "mirror neurons" add nothing to the already biologically-marked "monkey-see-monkey-do." At any remove, "lizard brains" add nothing to anything besides lizards. Why not read David Bordwell straight? Grodal answers by pitting his truisms against the falsehoods of ad-absurdum Derrida, ad-absurdum Foucault, ad-absurdum Mulvey, and ad-absurdum Barthes, just as earlier critical fads attacked an ad-absurdum T. S. Eliot. We could call these strawmen arguments, except that the strawmen demonstrably were made and sent out onto the field. Let's call it a battle of scarecrows.
Grodal, to his credit, is no scarecrow. He cites Ramachandran's discovery that the universal standard of feminine beauty is an anorexic with a boob job but immediately points out why it's false. He's noticed that genres are ambiguous and that evolution is not a particularly useful concept to apply to them. He doesn't insist that narratives need a narrator other than the audience. He doesn't always remember that a significant number of human beings are not heterosexually paired and reproducing, but he remembers it at least once.
Sadly for the cause of sanity, banishing arrant nonsense from his shop leaves Grodal without novelties to peddle and leaves the book's first half undermotivated. A professional scarecrow like David Brooks strews fallacy wherever he flails, but he achieves a recognizable goal: to grab attention.
The second half of Grodal's book is less Movie-Goers Guide to Consciousness and far more compelling. Now here, for example, is a first-order mystery: How can generic signals such as by-the-negative-numbers continuity flips, an unlikely proliferation of masochists, and long takes with nothin'-happenin'-at-all reliably induce sensations of depth and uncanniness and individuality among film-festival audiences when it's obvious that the auteur's just slapping Bresson patties and Godard cheese on the grill? (I should emphasize that this is my problem statement rather than Grodal's.)
Periods of temps mort evoke a sense of higher meaning for two intertwined reasons. The first is that streams of perceptions are disembodied, insofar as they are isolated from any pragmatic concerns that might link them to action. Temps mort thus serves expressive and lyrical functions that give a feeling of permanence. The second reason is a special case of the first: since the viewer is unable to detect any narrative motivation for a given temps mort — a given salient and expressive perceptual experience — he or she may look for such motivation in his or her concept of the addresser, the filmmaker.... The perceptual present is ultimately transformed into the permanent perceptual past of the auteur's experience.
These excess features therefore activate particularly marked attention, switching on feelings and emotions which suggest that these features contain a meaning that the viewer cannot fully conceptualize. The viewer is therefore left with the sense that there must be some deep meaning embedded in these stylistic features, because the emotional motivation for making meaning out of salient features cannot be switched off. Style thus serves as an additional guarantee for some higher or deeper meaning, while at the same time giving rise to a feeling of permanence, since the perceptual, stylistic cues continue to trigger meaning-producing processes without reaching any final result.
...aspects of a film that are easily linked to the actions of one of the main characters are experienced as objective, but if there are no protagonists, or the characters' or viewers' action tendencies are blocked or impeded, this will lend a subjective toning to our experience of the film. This subjective toning expresses intuitive feelings of the action affordances of what we see: subjective experiences may be more intense and saturated but at the same time felt as being less real, because the feeling as to whether a given phenomenon is real depends on whether it offers the potential for action.
Subjectivity by default is much more obvious when it is cued in films than in real life. In real life, our attention is controlled mainly by our current interests. If we have exhausted our interest in one aspect of our surroundings, we turn our attention to something else. But when we watch a film, we are no longer able to focus our attention on the basis of our own interests because the camera prefocuses our attention. Provided that the film catches our attention by presenting us with a focused narrative or salient audiovisual information, this lack of control of our attention does not disturb us. Potential conflict over control of the viewer's attention surfaces only when the filmmaker confronts the viewer with images that do not cue focused propositions or that have no links to the protagonists' concerns. Most ordinary filmgoers shun such films, labeling them dull because they do not have the motivation or the skills necessary to enjoy what they see. More sophisticated viewers switch into a subjective-lyrical mode, seeking at the same time to unravel parts of the associative network to which the film gives rise.
Reviewing these sketches of frustrated drives, congested animal spirits, and spiritual afflatus, I'm not sure Grodal needs a scientific vocabulary younger than Nietzsche or William James. But if his solutions aren't quite as first-order as his mystery, they at least let me dismiss it for a while. Lunchtime!
fine thing, needling the haystack
Reminds me of the election in the Buffalo English dept ten or twelve years ago, wherein there were something like eighteen votes for Professor Conte, twenty for Professor Bono, fifteen for Professor Dauber, and five for lunch.
The afore-and-oft-cited David Bordwell sketches how some individual quirks became genre markers.
. . . 2011-09-23 |
Forgetful Muses: Reading the Author in the Text
by Ian Lancashire, Toronto, 2010
Fearing another polemic, I flinched at the opening Barthes joke. No need; the point is that Barthes and Foucault helped make room for scholarship like Lancashire's own.
Barthes's target was a particular sort of Critic (James Wood or Jonathan Franzen or Allen Tate, say) who allows only a particular sort of Work (a well-wrought urn full of well-burned ash), the titular "Author" being the Critic's implausible prop. With Critic and Author removed, Franco Moretti recently went on to elbow Reader, Theorist, and Work out of the Howard-Roark-ish Researcher's spotlight.
More generous, Lancashire instead invites another figure onto the stage: the Writer Writing. His book directs the tourist's attention to the distance between conscious intent and the actuality of creation, both as living process and in posthumous analysis — in two words, muse and style.
Comfortably switching between the roles of researcher and reader, Lancashire can treat the "General Prologue" as Chaucerian Work or as bundle of characteristic tics; he can study Shakespeare as Author and as tic-bundler; he can turn Agatha Christie's pageturners and then winnow them for symptoms. To show his peaceful intent, he goes so far as to intersperse goofy monologues from his own first-draft stream-of-consciousness, who proves as insulting towards his fister as any other vent dummy. And again I marvel at how free discourse flows to bicker.
Even more again I marvel at how, gathered at the Author Function, we collectively manage such both-ands. When Lancashire uncorks his inner Jerry Mahoney, by what magic do the undergrads believe that communication has occurred?
The accuracy of that belief is questionable, yes, but not its transient certainty. I still recall the shudder with which Henry James's overstuffed suspension settled into clear solution, counterpane into windowpane. And farther back in adolescent memory how the fraudulent assemblages of T. S. Eliot and cheesy pop musicians gathered warmth and depth and breath. And, more near and less pleasantly, watching in embarrassment Jean-Luc Godard's miraculous constructs scurf, slough, and collapse into scrap. How do the imprecise and formulaic grunts of Homer or Dan Brown transform, in the susceptible listener, into vividly imprecise and formulaic experience?
A question which may animate a less particular sort of Critic.
. . . 2013-01-01 |
Long before 1993, I'd thought of art(-in-the-most-generalized-sense-possible)-making as a human universal, and since I don't believe homo sapiens was formed de limo terræ on the sixth day by that ginormous Stephen Dedalus in the sky, I must perforce believe the inclination to have evolved(-in-the-most-generalized-sense-possible).
But scientists' applications of neuroscience, neural nets, and comparative zoology to art were sheer inanity, and with a few very welcome exceptions the "neuro-aesthetics" and "evolutionary turns" which migrated to humanities journals and popularized books catered no better fare. As Paul Bloom put it in a recent issue of Critical Inquiry:
Surely the contemporary human's love of literature has to have some evolutionary history, just as it has a cultural history, just as it has an instantiation in the brain, just as it emerges in the course of child development, and so on. Consider, as a concrete example, the proposal by the English professor Lisa Zunshine. She argues that humans have evolved a taste for stories because they exercise the capacity for social reasoning or theory of mind. Suppose, contrary to my own by-product view, Zunshine is correct. Why should this matter to your average Jane Austen scholar (to use a common synecdoche for English professors everywhere)? It would seem to be relevant in exactly the same way as finding that stories are processed in a certain part of the frontal lobe — that is, not at all.

While literary critics can safely ignore those interested in theories of the origin and nature of stories, the converse isn't true.
To a published-or-perished team-player, my little biographia literaria may sound naïvely promiscuous: tacking to each newly prevailing wind without a glance at the charts, discarding yesterday's party allegiance in the face of today's confident campaign ad.
I swear, however, this ever unrulier tangle springs from one integrated ground, albeit of well-manured soil rather than bedrock: a faith born at pubescence in the realization that mumbling through Shakespeare's King John was a different thing, a different incarnate thing, than speed-reading Isaac Asimov or Ellery Queen; a faith which developed through adolescence and reached near-final form by age twenty.
This chapel's sacrament is aesthesis, sense-perception, rather than "high art":
For it is false to suppose that a child's sense of beauty is dependent on any choiceness or special fineness, in the objects which present themselves to it, though this indeed comes to be the rule with most of us in later life; earlier, in some degree, we see inwardly; and the child finds for itself, and with unstinted delight, a difference for the sense, in those whites and reds through the smoke on very homely buildings, and in the gold of the dandelions at the road-side, just beyond the houses, where not a handful of earth is virgin and untouched, in the lack of better ministries to its desire of beauty.
But honest attention to sensibility finds social context as well as sensation. Words have heft; the color we see is a color we think. And art(-in-the-most-general-sense-possible) wins special interest as a sensible experience which is more or less bounded, shared, repeatable, and pre-swaddled in discourse.
Pluralism is mandated by that special interest. Any number of functions might be mapped into one chunk of multidimensional space. Integer arithmetic and calculus don't wage tribal war; nor do salt and sweet. We may not be able to describe them simultaneously; one may feel more germane to our circumstances than another; on each return to the artifact, the experience differs. But insofar as we label the experiential series by the artifact, all apply; as Tuesday Weld proved, "Everything applies!" And as Anna Schmidt argued, "A person doesn't change because you find out more." We've merely added flesh to our perception, and there is no rule of excluded middle in flesh.
Like other churches, this one doesn't guarantee good fellowship, and much of the last decade's "aesthetic turn" struck me as dumbed-down reactionism. But The New Aestheticism was on the whole a pleasant surprise. Its reputation (like the reputation of most academic books, I suppose) is based on a few pull-quotes from the editors' introduction; the collection which follows is more eclectic. Howard Caygill sets a nice Nietzschean oscillation going in Alexandria, Gary Banham's "Kant and the ends of criticism" nostalgically resembles what I smash-&-grabbed from the display case back in college, and Jonathan Dollimore snaps at ethical presumptions with commendable bloodlust.
The contributors keep their disagreements well within the disciplinary family, however. They cite Adorno, Kant, and Heidegger very frequently, Wilde once, and Pater never, and disport themselves accordingly. After all, Adorno was a contentious fussbudget and therefore makes a respectable academic role model, whereas Pater was an ineffectual sissy.
Till at a corner of the way
We met with maid Bellona,
Who joined us so imperiously
That we durst not disown her.
My three companions coughed and blushed,
And as the time waxed later,
One murmured, pulling out his watch,
That he must go—'twas Pater.
- "The Traveller" by Arthur Graeme West
Some (Adorno for starters) might feel at home in a community of li'l Adornos; whereas a majority of such as Pater, "the very opposite of that which regards life as a game of skill and values things and persons as marks or counters of something to be gained, or achieved, beyond them," would admittedly be the heat death of the world. But there's more to existence than procreation, and aesthetic philosophers, of all people, should be able to appreciate the value of one-offs and nonreproducible results. We can no more say that Derrida "proved" Searle wrong than that Bangs "proved" the Godz brilliant musicians or Flaubert "proved" us all doomed to follow Frédéric Moreau. That doesn't mean Derrida was therefore best when dishing unset Jello like Glas and Lester Bangs was therefore best when writing fiction and Flaubert was therefore best avoiding emotionally hot topics. Every flounder to its own hook.
Back in the land o'Adorno, if false dilemmas and Mitty-esque battles against empire or barbarism were what's needed to drag some of these white bellies to the surface, well, I suppose that's no more ridiculous a procedure than our own, of constructing imaginary villages with real explainers in them. I wouldn't presume to say it's all good, but it is all that is the case.
giddy upon the Hobby-Horse
Whoa! Whoa! Whoa! Aw, come on, horsey! Please, horsey? Please, whoa. Purty please? Doggone it now, horsey! Won't you please whoa?
Has Dollimore gotten less irritating since his days of applying Godwin's law to literature? ("Here essence and teleology are explicitly affirmed while history becomes the surrogate absolute. If we are used to finding this kind of utterance in our own cultural history it comes as something of a shock to realise that these were the words of Alfred Bäumler, a leading Nazi philosopher writing on race." etc etc)
He kicks off with Hesse, so probably not.
Dollimore kicked off Radical Tragedy with Hesse as well! So this is a rerun, I gather.
A preview of the third edition intro, looks like.
. . . 2017-07-03 |
From an otherwise innocuous cog-lit volume's bullet-pointed list of
answers that have been given [to the question "What is literature for?"], starting with the negative or dismissive ones:
- ...
- It's a means of wish-fulfilment, fantasy, escapism ("cognitive pornography"), or at best a form of self-indulgence or entertainment (like eating chocolate, or having sex without procreative intent).
- It has affinities with delusion...
Now, masturbation (solitary and unwitnessed, presumably) is a familiar exemplar of self-indulgence. But what's being contrasted to "sex without procreative intent"? Even in financial terms (our culture's current exemplar of properly motivated action), intentionally-procreative sex offers ambiguous rewards unless you're a slave-owner supplementing your assets. Whereas I understand there's a fair amount of evidence that cooperative sexual acts have proven utility in strengthening social bonds, or, at minimum, as a (sometimes brutal) rebuke to solipsism.
Are strengthened social bonds, then, "at best a form of entertainment"? If so, the book at least taught me something about entertainment.
Copyright to contributed work and quoted correspondence remains with the original authors.
Public domain work remains in the public domain.
All other material: Copyright 2024 Ray Davis.