Last conference presentation to live-blog from the SC symposium.
A sonification project. Alberto de Campo is consulting on the project.
A project 7 years in the works, inspired by a test from the '70s: you can distinguish an educated from an uneducated background by how people speak. Sociologists picked up on this. There was an essay about it, using Chomsky's grammar ideas. Learning grammar as a kid may help with maths and programming. Evidence of how programmers speak would seem to contradict this . . .
But these guys had the idea of sonifying grammar and not the words.
Sapir-Whorf: how much does language influence what we think? This also has implications for programming languages. How does your medium influence your message?
(If this stuff came from the '70s and was used on little kids, I wonder if I got any of this.)
Get unstuck from hearing only the meaning of words.
Don't use grammar as a general rule: no top-down. Instead, use bottom-up! Every rule comes with an example. Ambiguous and interesting cases.
- syntax categories: noun phrases, prepositional phrases, verb phrases. These make up a recursive tree.
- word position: verbs, nouns, adverbs
- morphology: plural/singular, word forms, etc.
- function: subject, object, predicate. <-- This is disputed
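(A sketch of my own, not the presenters' code: the syntax categories above form a recursive tree, which in Python might look like this. The example sentence and category labels are my assumptions.)

```python
# Hypothetical sketch: a parse tree of syntax categories.
# Non-terminals hold children; terminals (leaves) hold a word.

class Node:
    def __init__(self, category, children=None, word=None):
        self.category = category        # e.g. "S", "NP", "VP", "Det", "N"
        self.children = children or []  # branches (non-terminals)
        self.word = word                # set only on terminals

# "The cat sat on the mat" as a simplified parse tree
sentence = Node("S", [
    Node("NP", [Node("Det", word="the"), Node("N", word="cat")]),
    Node("VP", [
        Node("V", word="sat"),
        Node("PP", [
            Node("P", word="on"),
            Node("NP", [Node("Det", word="the"), Node("N", word="mat")]),
        ]),
    ]),
])
```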
The linguistics professor in the audience says everything is disputed. "We don't even know what a word is."
They're showing an XML file of "terminals": the actual words, the leaves where the tree ends.
They're showing an XML file of non-terminals.
Now a graph of a tree, which represents a sentence diagram. How to sonify a tree? There are several nodes in it. Should you hear the whole sentence the whole time? The first branch? Should the second noun phrase have the same sound as the first, or should it be different because it's lower in the tree?
Now they have a timeline associated with the tree.
they're using depth first traversal.
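(To make "depth-first traversal" concrete, here's a minimal Python sketch of my own; the `Node` class is a hypothetical stand-in for whatever structure they parse from their XML, and the events are in the order a sonifier might schedule them.)

```python
# Depth-first traversal of a parse tree: parents are visited before
# children, left branches before right, which matches reading order.

class Node:
    def __init__(self, category, children=None, word=None):
        self.category = category
        self.children = children or []
        self.word = word

def depth_first(node, depth=0):
    """Yield (depth, category, word) for every node in DFS order."""
    yield (depth, node.category, node.word)
    for child in node.children:
        yield from depth_first(child, depth + 1)

tree = Node("S", [
    Node("NP", [Node("N", word="cats")]),
    Node("VP", [Node("V", word="sleep")]),
])

events = list(depth_first(tree))
# events: [(0, 'S', None), (1, 'NP', None), (2, 'N', 'cats'),
#          (1, 'VP', None), (2, 'V', 'sleep')]
```

The depth value is what could drive the "same sound or different because it's lower in the tree" decision mentioned above.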
Now the audience members are being solicited for suggestions.
(My thought is that the tree is implicitly timed, because sentences are spoken over time. So the sonification should reflect that, I think.)
Ron Kuivila is bringing up Indeterminacy by John Cage. He notes that the pauses have meaning when Cage speaks slowly. One graph could map to many, many sentences.
Somebody else is recommending an XML-like approach with only tags sonified.
What they're thinking: chord structures by relative step, which is hard for users to understand; or chord structures by assigning notes to categories. They also thought maybe they could build a UGen graph directly from the tree, but programming is not language. Positions can be triggers, syntax as filters.
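(One way to picture "chord structures by assigning notes to categories", as a Python sketch of my own; the category-to-pitch mapping is entirely my assumption, not the presenters'. The idea: while a word sounds, every category on the path from root to leaf is active, so that path becomes a chord.)

```python
# Hypothetical mapping from syntactic category to MIDI pitch.
CATEGORY_PITCH = {
    "S": 48,    # C3
    "NP": 55,   # G3
    "VP": 60,   # C4
    "PP": 64,   # E4
    "N": 67,    # G4
    "V": 72,    # C5
}

def chord_for(categories):
    """Return the MIDI pitches sounding for the currently active categories."""
    return sorted(CATEGORY_PITCH[c] for c in categories)

# While the word "cat" sounds, S, NP, and N are all active
# on the path from the root to that leaf:
chord = chord_for(["S", "NP", "N"])
# chord: [48, 55, 67]
```

Deeper words get taller chords, which is one answer to whether a lower noun phrase should sound different.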
Ron Kuivila is suggesting substituting other words: noun for noun, etc, but with a small number of them, so they repeat often.
They're not into this (but I think it's a brilliant idea, sort of reminiscent of aphasia).
Now a demonstration!
Dan Stowell wants to know about the stacking of harmonics idea. Answer: it could lead to ambiguity.
Somebody else is pointing out that language is recursive, but music is repetitive.
Ron Kuivila points out that the rhythmic regularity is coming from the analysis rather than from the data. Maybe the duration should come from how long it takes to speak the sentence. The beat might be distracting for users, he says.
Sergio Luque felt an intuitive familiarity with the structure.