Monday, 21 July 2014

This is a test

The revolution has not actually been cancelled, but I am curious about using other people's photos without asking... 'liberating' them, if you will. So this is actually part of the revolution, maybe? Assuming revolutions conform to the API's TOS - so I guess Facebook is running my revolution now. Anyway, check out my invite for a general strike...

What I've learned from this exercise:

  • You can't sign up for Instagram without putting their app on your smart device, so no losers with cameras that are just cameras.

  • If you want to grab somebody's picture (somebody much cooler than me, because they have a smartphone and I don't), click the '...' button in the lower right corner, click 'Embed', copy the embed code, and paste it into your blog post.

  • This embed code does not seem to support alt text and so is completely shit on accessibility. Only use it if you're happy with parts of your website being inaccessible to people using screen readers.

  • Facebook cares very deeply about your copyright and gives users loads of control over their content and how it is shared... ha ha ha. Yeah, I won't be getting an account there, ever.

By the way, for all my Lynx readers: the image above is a page of 1950s-style line drawings plus text in fancy fonts, including the phrase 'Sorry, the Revolution has been cancelled'.

Friday, 18 July 2014

Review: Digital Revolution

I went to see the Digital Revolution exhibition at the Barbican yesterday. It was much better than I expected. The exhibition includes part of Google's DevArt initiative, a funding framework that is unusual for arts projects because it requires the use of Google APIs, and is thus problematic in terms of the control it gives Google over artists. Also, things I've read have implied it is a sort of anti-historical move, as if Google invented the idea of making art with a computer.

The exhibition, therefore, provides a solid grounding in the history of computer creativity. Many historical digital arts projects are displayed, some of them running in emulation, but many running on the original machines they were designed for. Thus I learned about an art movement called the Algorists, whose ideas live on in the Algorave movement. Many of the works were interactive, giving visitors a chance to interact with old hardware and software platforms as much as with the artwork. For example, there was a piece of web art running on an early version of Netscape Navigator, on a period machine.

This is where the exhibition lost focus. Not satisfied with showing a record of historical projects on historical machines, they went further and amassed a collection of old digital stuff better suited to the Science Museum. So next to an iPad showing Conway's Game of Life, there was a completely unrelated early computer. Across from the Algorists was a LinnDrum in a glass case, with headphones playing a song by the Human League that used this drum machine (which created some cacophony, discussed below).

The historical computers included working versions of several home game computers, so kids could get their first taste of Super Mario Brothers in 8-bit. Near that was a NeXT cube running the web browser written for it. This part of the exhibit was apparently calculated to make me feel old. Just to reinforce that, I will now complain about the loudness.

The room itself had a lot of projection displays and loud sounds that seemed to lack context. Sounds played through the entire room would turn out to be linked to one of the displays in the middle: a short excerpt from some display somewhere, and then on to the next loud thing. So the projections might show Super Mario or something else, and then there would be a blast of the Human League, all to an effect full of sound and fury, but with very fuzzy signification.

Other parts were blatantly corporate. The huge installation explaining how they did the animation and lighting for the film Gravity was interesting, but, again, perhaps better suited to the Science Museum.

The new artworks did tend to be quite good and were, thankfully, mostly not in the room of old computers and flashing lights. One piece was a flock of birds made of upcycled mobile phones, with bird heads on the phones' colour displays. There was an (even louder) pop music video experience which used the hollow-face illusion on a projector screen, which was stunning when experienced for short periods. (Alas, I did not take notes on titles and artists, and this information appears not to be on their website.)

Next along was an installation using Kinect, which was strikingly well constructed. Users had to assume the arms-over-head Kinect calibration pose. However, rather than being an annoying prerequisite to further interaction, the piece used the starting position as an essential element of verticality which began with the hands. In the final part, hands stretched upwards dramatically opened into wings stretched upwards.

All of the new art in the first section was interesting, a lot of it was fun (Shelly Knotts burst out laughing at one point) and some of it, such as the Kinect piece, was inspired.

There are also interactive installations in the free parts of the building, for example a video game that tracks where you're looking to control the direction of the player's actions. A Kinect-using robotic petting zoo was enigmatic. The robots were definitely interacting, but with what? The small display screens up top suggested that the robots' gaze was not directed where you might expect (or that the Kinects were aimed poorly, but let's be generous).

The second part of the exhibition was a cage full of modern computers running indie games. I didn't have time to hang around in a cage trying out a lot of games. Many modern games are really complex and beautiful, and take hours to figure out what the environment really makes possible, so I'm not sure about this format, but I have a feeling a similar format is used at industry events.

The third and final part was interactive laser beams in a fog-machine-filled room deep in the basement. After I got over making quiet jokes about sharks with laser beams (which took longer than it should for an adult), the piece was fun and the interactivity well designed.

Standard tickets are £12.50, which seems steep, especially considering the amount of corporate branding all over the signs and website. If Google is going to pay for art that's designed to promote Google, then they should actually pay for it. Steep ticket prices also do nothing to ameliorate the digital divide, nor do displays encouraging people to download apps to their smartphones. I've programmed (as in curated) smartphone-based art in the past, at the Network Music Festival, and I'm not against it in general; it just seemed off in this context. To be fair, for pieces that were driven by apps, the Barbican had installed tablets running the app, so us smartphoneless riffraff could still use the piece. However, in a time of austerity, I feel public institutions such as the Barbican should be making an effort to encourage open access, especially for something that is intended to be the next major cultural export from the UK. The exhibition is clearly intended to promote this field and recruit people into it, so I feel the high price is an impediment to the (obviously, blatantly) commercial goals of the project.

Is it worth the price? I don't know, but it's a lot easier to get to than ZKM in Karlsruhe or the Musée des Arts et Métiers in Paris.

Thursday, 3 July 2014

Musical Interface Design - experience-oriented frameworks

Why define frameworks? It's established practice in human-computer interaction, because frameworks are useful for designing: they propose heuristics or interaction dimensions.

Existing frameworks have little use in practice. Also, 'interactive installations are often missing' (I don't understand this). Things are very complex and often arbitrary.

He's showing some cool instruments/installations and asking us to consider the player's experience.

Their framework is the Musical Interface for User Experience Tracking - MINUET.

It focuses on player experience. It is a design process unfolding in time. It includes DMIs and interactive installations (by which they mean they consider installations and not just instruments).

Design goal: the purpose of the interactions. Design specification: how to fulfil the objectives.

Design goals are about a high-level user story, like a playful experience, stimulating creativity, education, etc.

Goals can be about people, activities or contexts.

Contexts: musical style, physical environment, social environment.

Case study: Hexagon

An educational interface to train musical perception

They wanted to make the interface easy to master but to have a high ceiling for the educational part.

It's for tonal music....

He's speeding through the last slides.

Questions

Maraije (sp) wants to know about player, participant and audience as three user types on a scale - more or less informed participants. She also wants to know about presentation: how to teach people to use the instrument or environment.

Interesting points!

The prospects for eye-controlled musical performance

Instrument controlled only by rotation of eyeballs!

He's only found 6 examples of eyeball music.

Eye trackers shine an IR light on your eyeball and then look for a reflection vector with a camera. Calibrate by having users look at spots on a display.
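As I understand it, that calibration step amounts to fitting a mapping from the tracked reflection vector to screen coordinates. A minimal sketch of the idea in Python - my own reconstruction with made-up numbers, not anybody's actual tracker code:

    import numpy as np

    # Each row: the (x, y) reflection vector seen by the camera while the
    # user stared at a known calibration spot on the display.
    eye = np.array([[0.1, 0.2], [0.8, 0.3], [0.4, 0.9], [0.7, 0.7]])
    # The known screen positions of those calibration spots, in pixels.
    screen = np.array([[100, 100], [1800, 150], [900, 1000], [1600, 800]])

    # Fit an affine map screen ~ [eye, 1] @ A by least squares.
    X = np.hstack([eye, np.ones((len(eye), 1))])
    A, _, _, _ = np.linalg.lstsq(X, screen, rcond=None)

    def gaze_to_screen(v):
        # Map a new reflection vector to an estimated screen position.
        return np.array([v[0], v[1], 1.0]) @ A

    print(gaze_to_screen([0.5, 0.5]))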

Eyes move to look at objects in peripheral vision. Eyes jump fast.

Data is error-prone. Eyes need visual targets. Eyes can only manage about 4 movements per second.

Eyes can only move to focus on something, so an eyeball piece needs visual targets.

Audiences have nothing to see with eyeball instruments. Also, eyeball trackers are super expensive.

People have built eyeball instruments for computer musicians or for people with disabilities.

He is showing us documentation of 6 existing pieces.

"It's sad to see a person [who] is severely disabled" says the presenter, and yet nobody in the documentation looks sad at all....

Questions

Q: Many of the NIME performances were designed for people with disabilities.

A: Ok

Dimensionality and appropriation in digital music instrument design

Musicians play instruments in unexpected ways. So they decided to build something and see what people did with it.

Appropriation is a process in which a performer develops a working relationship with the instrument. It is exploitation or subversion of the design features of a technology (e.g. turntablism).

Appropriation is related to style. Appropriation is a very different way of working with something.

How does the design of the instrument affect style and appropriation?

They gave 10 musicians a highly constrained instrument to see if they got diversity of style. They made a box with a speaker in it, with position and pressure sensors. They mapped timbre to pressure and pitch to position. A control group had pitch only.

People did some unexpected things, like tapping the box, putting a hand over the speaker and licking the sensor (ewww).

The users developed unconventional techniques because of, and in spite of, the constraints.

One degree of freedom had more hidden affordances. Two degrees of freedom yielded only 2 additional variations and no additional affordances.

Users in the 1-degree-of-freedom group described it as a richer and more complex device which they had not fully explored. Users of the more complex instrument felt they had explored all its options and were upset about the pitch range.

The presenter believes there is a cognitive bandwidth for appropriation: more built-in options limit exploration of hidden affordances.

This was a very information-rich presentation of a really interesting study.

Questions

Q: Is pitch so dominant that it skews everything? What if you made an instrument that did just timbre?

A: Nobody complained about loudness.

Q: Given that participants were all musicians, did their primary instrument affect their performance?

A: Some participants were NIMErs, others were acoustic players. They're studying whether backgrounds affected performance style.

Constraining movement for DMI .....something....

Someone is playing a sort of squeeze box that is electronic, so you can spin the end of it. Plus it has accelerometers. It's kind of demo-y, but there's potential there.

This comes from a movement-based design approach. They came up with the movement first. Design for actions.

Movement-based design need not use the whole body. It need not be Kinect-y and non-touch. The material has a physicality.

Now we see a slide with tiny, tiny words. It's about LMA - Laban Movement Analysis - for doing a taxonomy of movements. They want to make expressive movement available to more than just dancers.

Observe movement, explore movement, devise movement, design to support the devised movement. (Is this how traditional instrument makers work??)

They analysed a violinist and a no-input mixer player. They made a shape-change graph. There is a movement called 'carving' which he liked. The squeeze box uses this movement and is called 'the Twister'.

They gave the prototype to user-study participants and told them to try moving it (with no sound). Everybody did the carving movement. People asked for buttons and accelerometers (really??).

The demo video was an artistic commission, specifically meant to be demo-y (obvs).

Questions

Laban theory includes 'effort theory' about the body. Did the instruments offer resistance?

They looked at interface stiffness, but decided not to focus on it. Effort describes movement dynamics, but is personal to performers.

From Shapes to Bodies: Design for Manufacturing in the Prosthetic Instruments

The Prosthetic Instruments are a family of wearable instruments designed for use by dancers in a professional context. The instruments go on tour without their creators.

The piece was called 'Les Gestes' and toured Canada. The instrument designers and composers were from McGill. The choreography and dancing were from a professional dance company in Montreal, Van Grimde Corps Secret.

There were a fuckton of people involved in the production. Lighting designers, costume designers, etc. all had a stake in instrument design.

One of these instruments was in a concert here and was great. It looks like part of a costume.

The three instruments are called the spine, ribs and visors. They are hypothetical extensions to the body - extra limbs. Dancers wear them in performance, and they are removable in this context.

Ribs and Visors are extremely similar. They are touch sensitive. The spine has vertebrae connected by PVC tubing and a PET-G rod.

Professional artistic considerations: durability and usability; backups required; limited funding and timeframes; small-scale manufacturing. How are these stored and transported? What about batteries? Is there anything that needs special consideration or explanation (e.g. how to reboot)?

Collaboration requires iterative design and tweaking.

Bill Buxton talks of the 'artist spec', the most demanding standard of design. People have spent years developing a technique, and your tool needs to fit into that technique.

Questions

  • Why mix acrylic and pvc?

    There is a lot of stress on the instruments, so they use tough materials.

  • Can you talk about the dancers' experiences?

    The dancers did not seek technical knowledge, but they wanted to know how to experience and interact with it. They had preferences for certain instruments.

Fan Mail

Found this in my inbox this morning:

I suggest you take down http://www.berkeleynoise.com/celesteh/podcast/ that noise seriously sucks and is pretty much good for NOTHING but giving someone a head ache!!!

I would put it’s “usefulness” at the same level as Yoko Ono sounds… USELESS !

George
a musician with respect for real music

The subject line is: 'Noise ? yep... pure sh*t.' Which, coincidentally, is the title of my next album.

Wednesday, 2 July 2014

CloudOrch - A portable SoundCard in the Cloud

There is a beautiful graphic, and another beautiful graphic. All presentations should be made up of cartoons like this.

Cloud computing keeps you from having to drag your entire wolfram cluster to a gig. However, the cloud does not have a speaker at your venue. Unless you use internet audio streaming.

The graphic is slightly less beautiful

Anyway, there are latency issues, compatibility issues, packet size issues... You can get fragmentation, and TCP has ACKs and all that.

OMG, the speed of light is TOO SLOW

Timing can get jittery.

Big buffers smooth over the jitter, but they add latency. HTML5 has big buffers.

The compression window size adds delay as well. He says to send raw, uncompressed audio.

Use HTML5 to play audio in the browser, and then you get portability.

He suggests sending 256 or 357 bytes per packet

There's a 70 ms delay in sending an HTTP request. Sound takes 160 ms to Alberta. Eduroam is like 300 ms.
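Some back-of-envelope arithmetic on those packet sizes - my numbers, assuming 16-bit mono PCM at 44.1 kHz, which may not be what he assumes:

    # How much audio fits in one small packet, assuming 16-bit mono at 44.1 kHz.
    sample_rate = 44100          # samples per second
    bytes_per_sample = 2         # 16-bit PCM, mono
    packet_bytes = 256

    samples_per_packet = packet_bytes / bytes_per_sample       # 128 samples
    ms_per_packet = 1000 * samples_per_packet / sample_rate    # ~2.9 ms of audio
    packets_per_second = sample_rate / samples_per_packet      # ~345 packets/s

    print(ms_per_packet, packets_per_second)

So each packet holds under 3 ms of sound, which is why the network round trip, not the packet buffering, dominates the latency.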

The granular synth is a Csound synth running on 14 virtual computers. (oh my god)

Oh my god

Star networks are fast but don't scale. Tree networks scale, but add latency per hop.
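My reading of the star-versus-tree tradeoff, as a sketch - the per-hop latency figure is made up for illustration:

    import math

    def tree_latency(n_clients, fanout, hop_ms):
        # Worst-case latency from the root to a leaf in a tree of given fanout.
        depth = math.ceil(math.log(max(n_clients, 2), fanout))
        return depth * hop_ms

    # A star is a tree of depth 1: one hop, but the hub must talk to every
    # client directly, which is what stops it scaling.
    print(tree_latency(14, fanout=14, hop_ms=5))    # star-ish: 5 ms
    print(tree_latency(100, fanout=4, hop_ms=5))    # deeper tree: 20 ms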

The demo is, wow, less than compelling.

Ping Pong: Musically Discovering Locations

This is a video of a talk by a guy stuck in visa hell. Alas, I've been there.

This is for mobile phone ensembles. Devices do not normally know each other's physical positions; this system discovers them, so performers need not stand in a pre-arranged formation.

They play sounds at each other to measure location. This is a piece in and of itself.

They measure pairwise distances and then compute three-dimensional positions from those distances. An error of 1 sample is a 1.5 cm error. There are also clock issues.

One device hears a message; the other plays a sound back. They have an agreed-upon delay, so as to duck the clock issue. They have an onset detector that compares fast and slow envelopes, like yesterday's presentations. The measurements were repeatable and accurate.

They do matrix maths on the distance vectors to estimate positions.
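The standard trick for turning pairwise distances into coordinates is classical multidimensional scaling, so I'd guess the 'vector matrixes' are something along these lines. A sketch, not their algorithm:

    import numpy as np

    def positions_from_distances(D, dims=3):
        # Classical MDS: recover coordinates (up to rotation and
        # translation) from a matrix of pairwise distances D.
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n   # centring matrix
        B = -0.5 * J @ (D ** 2) @ J           # double-centred squared distances
        vals, vecs = np.linalg.eigh(B)
        idx = np.argsort(vals)[::-1][:dims]   # keep the largest eigenvalues
        return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

    # Four phones at known spots, to check the reconstruction works.
    pts = np.array([[0, 0, 0], [2, 0, 0], [0, 3, 0], [0, 0, 1.5]])
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    print(positions_from_distances(D))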

The ping-pong notes follow a MIDI score. There are some ways to recover from failures.

There is geometric ambiguity. Background noise creates a problem, as do reflections. They are wondering how to solve this without resorting to harsh noise, but I say embrace it. Why not hardcore mobile phone ensembles?

Questions

Will the system work with sounds outside of the human range of hearing?

Probably not with iPhones or iPods, but it could work.

Why use the audio clock instead of the CPU clock?

The audio clock is much more reliable because it runs in real time. The CPU clock has a lot of jitter.

Architecture for Server-based Indoor Audio Walks

Use case: Lautlots

People walked around wearing headphones with a mobile phone stuck on top like extra silly cybermen.

They had six rooms, including two with position tracking. They used camera tracking in one room, and Bluetooth plus a step counter in the other. They had LEDs on the headset for the camera tracking.

He is showing a video of the walk.

They used a client/server architecture, so the server knows where everyone is. This is to prevent the guided walk from directing people to sit on each other.

Clients ask for the messages they want to receive.
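Something like this, I imagine - a toy sketch of that subscription pattern, invented by me to illustrate the idea, not their actual protocol:

    from collections import defaultdict

    subscriptions = defaultdict(set)   # message type -> set of client ids
    positions = {}                     # client id -> last known position

    def subscribe(client_id, message_type):
        subscriptions[message_type].add(client_id)

    def publish(message_type, payload, send):
        # Fan a message out to exactly the clients who asked for this type.
        for client_id in subscriptions[message_type]:
            send(client_id, payload)

    # Only clients subscribed to position updates get them, which is what
    # lets the server steer walkers away from each other.
    subscribe("walker-1", "position")
    positions["walker-2"] = (3.2, 1.1, "room-a")
    publish("position", positions, send=lambda cid, msg: print(cid, msg))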

He is showing his Pd code, which makes me happy that I never have to code in Pd.

This is also on GitHub.

Questions

What did users think of this?

Users were very happy and came out smiling.

Communication Control and Stage Sharing in Networked Live Coding

Collaborative live coding is more than one performer live coding at the same time, networked or not, he says.

Network music can be synchronous or asynchronous, collocated or remote.

There are many networked live coding environments.

You can add instrumental performers to live coding, for example by live-generating notation, or by having somebody play an electronic instrument that is being modified on the fly in software.

How can a live coding environment facilitate mixed collaboration? How and what should people share? Code text? State? Clock? Variables? How should they communicate? How do you share control? SO MANY QUESTIONS!!

They have a client/server model where only one machine makes sound. No synchronisation is required; there is only one master state. However, there are risks of collision and conflict, and of version control problems.

The editor runs in a web browser, because every fucking thing is in a browser now.

The editor shows variables in a window, plus a chat window and a big section of code text. It shows the live value of every variable in the program state. It can also show the network/live value.

Now he's showing the collision risk in this: if two coders use the same variable name, it creates a conflict. Alice is corrupting Bob's code, but maybe Bob is actually corrupting her code. Anyway, every coder has their own namespace and can't access each other's variables, which seems reductive. Maybe Bob should just be less of a twat. The live variable view shows both Alice's and Bob's variables under separate tabs.

His demo slide says at the top 'skip this demo if late'.

How do people collaborate if they want to mess around with each other's variables? They can put some variables in a shared namespace: click your variables and hit the share button, and woo, shared.
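A toy model of how I understand the namespace scheme - hypothetical Python, not UrMus:

    # Each coder gets a private namespace; a variable only becomes visible
    # to others after an explicit share.
    class CoderNamespace:
        def __init__(self, shared):
            self.private = {}
            self.shared = shared        # one dict common to all coders

        def set(self, name, value):
            # Writes land in the private space by default, so Alice can't
            # clobber Bob's 'freq' just by reusing the name.
            self.private[name] = value

        def share(self, name):
            self.shared[name] = self.private[name]

        def get(self, name):
            # Private names shadow shared ones.
            return self.private.get(name, self.shared.get(name))

    shared = {}
    alice, bob = CoderNamespace(shared), CoderNamespace(shared)
    alice.set("freq", 440)
    bob.set("freq", 220)       # no conflict: separate namespaces
    alice.share("freq")
    print(bob.get("freq"))     # 220 - Bob's private value still shadows it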

How do you share control?

Chat messages show up on the mobile instrument screen for the iPad performer. The programmer can submit a function to the performer in such a way that the performer has agency in deciding when to run it.

The tool for all of this is called UrMus.

Questions

Would live coders actually be monitoring each other's variables in a performance?

Of course, this is used in general coding... and some hand waving.

NEXUS UI: simplified expressive mobile development

This is a distributed performance system for the web. It started out focussed on the server, but changed to help with user interface development tools. Anything that uses a browser can use it, but they're into mobile devices.

They started with things like knobs and sliders, and now offer widgets of various sorts. This is slightly gimmicky, but OK.

NexusUI.js allows you to access the interface. The example is very short and has some toys on it.

They're being very hand-wavy about how and where audio happens. (They say this runs on a refrigerator (with a browser), but tilt might not be supported in that case.)

Audio! You can use Web Audio if you love JavaScript. You can use AJAX to send data to servers, or Node.js, Rails, whatever. You can also send to libPD on iOS. nx.sendTo('node') for Node.js.

They are showing a slide of how to get OSC data from the UI object.
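I haven't checked NexusUI's actual message format, but 'getting OSC data from the UI object' presumably lands you somewhere like this on the receiving end. A generic sketch using the python-osc package, with a made-up address pattern:

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_slider(address, value):
        # Drive a synth parameter, log it, whatever.
        print(address, value)

    dispatcher = Dispatcher()
    # '/nexus/slider1' is my invention; check what your widgets actually emit.
    dispatcher.map("/nexus/slider1", on_slider)
    BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()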

This is a great competitor to touchOSC, as far as I can tell from this paper.

However, Nexus is a platform. There is a template for building new interfaces. It's got nifty core features.

They are showing a demo of a video game for iPhone that uses libPD.

Now they are testifying as to its ease of use. They have made a bunch of Max tutorials for each Nexus object, plus tutorials on how to set up a local server. They have nexusDrop, a UI interface builder, which makes it very competitive with touchOSC but more generally useful. It comes with an included server or something.

NexusUP is a Max thingee that will automagically build a NexusUI based on your pre-existing Max patch. (whoah)

Free and open source software

Building a bunch of tools for their mobile phone orchestra.

Tactile overlays

Laser-cut a thingee in the shape of your UI, put it on your iPad, and you get a tactile sense of the interface.

Questions

Can they show this on the friday hackathon?

Yes

Making the most of WiFi

'Wires are not that bad (compared to wireless)' - Perry R. Cook 2001

Wireless performance is riskier, slower, etc. than wired, but dancers don't want to be cabled.

People use Bluetooth, ZigBee and WiFi. Everything is in the 2.4 GHz ISM band, so all of these technologies use the same frequencies. Bluetooth has 79 narrowband channels; it will always collide, but it will always find a gap, leading to a large variance in latency.

ZigBee has 16 channels and doesn't hop.

WiFi has 11 channels in the UK. Many of them overlap, but 1, 6 and 11 don't. It has broad bandwidth, and it will swamp out ZigBee and Bluetooth.

They have developed XOSC, which sends OSC over WiFi. It hosts ad-hoc networks. The presenter is rubbing a device, and a fader is going up and down on a screen. The device is configured via a web browser.

You can further optimise on top of WiFi: use a high-gain directional antenna, and optimise router settings to minimise latency.

Normally, access points are omnidirectional, which picks up signals from audiences, like mobile phone WiFi or Bluetooth; people's phones will try to connect to the network. A directional antenna does not take in as much of the audience. They tested the antenna patterns of routers. Their custom antenna has three antennas in it, in a line. It is ugly, but solves many problems. The test results show it's got very low gain at the rear, partly because it is mounted on a grounded copper plate.

Even commercial routers can have their settings optimised. This is detailed in their paper.

Packet size in routers is optimised for web browsing and is biased towards large packets, which have high latency. Tiny packets work much better for musical applications.

Under ideal conditions, they can get 5ms of latency.

They found that channel 6 does overlap a bit with 1 and 11, so if you have two different devices, put them on the far outside channels.
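The channel arithmetic behind that advice, using the standard 2.4 GHz figures (mine, not from their paper):

    # 2.4 GHz WiFi channel centres are 5 MHz apart, but each channel is
    # roughly 22 MHz wide, so neighbouring channels overlap heavily.
    def centre_mhz(channel):
        return 2412 + 5 * (channel - 1)

    width = 22  # MHz, approximate occupied bandwidth

    def overlaps(a, b):
        return abs(centre_mhz(a) - centre_mhz(b)) < width

    print(centre_mhz(1), centre_mhz(6), centre_mhz(11))  # 2412 2437 2462
    print(overlaps(1, 6))   # False on paper: 25 MHz spacing vs 22 MHz width
    print(overlaps(1, 3))   # True: centres only 10 MHz apart

Real spectral masks bleed a bit beyond the nominal 22 MHz, which fits their finding that channel 6 still overlaps 1 and 11 slightly.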

Questions

UDP vs TCP - have you studied this wrt latency?

No, they only use UDP

How many dropped packets do they get when there is interference?

That's what the graph showed.

Tuesday, 1 July 2014

To gesture or not? An analysis of terminology in NIME proceedings 2001-2013

How many papers use the word 'gesture'?

Gesture can mean many different things. (my battery is dying.)

Gesture is defined as movement of the body in dictionaries. (59 slides, 4 minutes of battery)

Research definitions of gesture: communication, control, metaphor (movement of sound or notation).

Who knows what gesture even means??

He downloaded NIME papers and ran searches in them. 62% of all NIME papers have mentioned gesture. (Only 90% of 2009 papers use the word 'music'.)

Only 32% of SMC papers mention gesture, and 17% of ICMC papers.

He checked what words 'gesture' appeared next to - collocation analysis.
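Collocation analysis here just means counting which words turn up within a small window of 'gesture'. A sketch of the sort of thing he presumably did - my reconstruction, not his code:

    from collections import Counter
    import re

    def collocates(text, keyword="gesture", window=2):
        # Count the words appearing within `window` words of the keyword.
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter()
        for i, w in enumerate(words):
            if w.startswith(keyword):   # catches 'gesture', 'gestures', ...
                lo, hi = max(0, i - window), i + window + 1
                counts.update(words[lo:i] + words[i + 1:hi])
        return counts

    text = "gesture recognition is hard; gesture control is everywhere"
    print(collocates(text).most_common(3))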

NIME papers are good meta-research material

He suggests people define the term when they use it.

Data is available.

Questions

I can't tell if this question is a joke or not..... Oh no, we're on semiotics.... Maybe the pairing of the word 'gesture' with 'recognition' says something fundamental about why we care about gesture.

The word 'gesture' goes in and out of fashion.

Maybe 'movement' is a more meaningful word sometimes.

How often is gesture defined?

He should have checked that, he says.

Harmonic Motion: a toolkit for processing gestural data for interactive sound

They want to turn movement data into music.

This has come out of a collaboration with a dancer, using Kinect. It was an exploration. He added visualisation to his interface, and eventually 240 parameters. The interface ended up taking over compared to the sound design.

They did a user survey to find out what other people were doing. They wanted to write something that people could use for prototyping: something easy, extensible and re-usable.

They wanted something stable, fast, free and complementary, so you could use your prototype in production. Not GPL, so you can sell stuff.

A patch-based system, because MAX is awesome all of the time.

This system is easily modifiable. He's making it sound extremely powerful. Parameters are easy to tweak and are saved with the patch, because parameters are important.

It has a simple SDK. Save your patch as a library, and you can run it in your project without the GUI. This really does sound very cool.

Still in alpha.

http://harmonicmotion.timmb.com

Questions

CNMAT is doing something he should look at, says a CNMAT guy.

Creating music with Leap Motion and the Big Bang rubette

Leap Motion is cool, he says.

Rubato Composer is software that allows people to do stuff with musical and mathematical structures and transforms. It's Max-ish, but with maths.

The maths is forms and denotators, which are based on category theory and something about vector spaces. You can define vectors and do map stuff with them. He's giving some examples, which I'm sure are meaningful to a lot of people in this room. Alas, both the music AND the maths terms are outside of my experience. .... Oh no wait, you just define things and make associations between them. .... Or maybe not.....

It sounds complicated, but you can learn it while doing it. They want to make it intuitive to enter matrices via a visual interface, by drawing stuff.

This is built on ontological levels of embodiment: facts, processes, gestures (and perhaps jargon). Fortunately, he has provided a helpful diagram of triangular planes in different colours, with little hand icons and wavy lines in a different set of colours, all floating in front of a star field.

Now we are looking at graphs that have many colours, which we could interact with.

Leap Motion

A cheap, small device that tracks hands above it. More embodied than a mouse or multitouch, as it's in 3D and you can use all your fingers.

Rubato

Rubato is built in Java, as all excellent music software is. You can grab many possible spaces. Here is a straightforward one, a five-dimensional space, which we can draw in with a mouse - but sadly, not in five dimensions. Intuitively, his GUI plays audio from right to left. The undo interface is actually kind of interesting. This also sends MIDI....

The demo seems fun.

Now he's showing a demo of waving his hands over a MIDI piano.

Questions

Is the software available?

Yes, on SourceForge, but that version is crashy. And there will be an Android version.

Are there strategies to use it without looking at the screen?

That's what was in the video, apparently.

Can you use all 3 dimensions?

Yes

Triggering Sounds From Discrete Gestures

Studying air drumming

Air instruments, like the theremin, need no physical contact. The Kinect has expanded this field.

Continuous air gestures are like the theremin.

Discrete movements are meant to be triggers.

Air instruments have no tactile feedback, which is hard. They work ok for continuous air gestures, though. Discrete ones work less well.

He asked users to air drum along to a recorded rhythm.

Sensorimotor Synchronization research found that people who tap along to metronomes are ahead of the beat by 100ms.

They recorded motion with sensors on people.

All participants had musical experience and were right-handed.

They need to analyze the audio to find drum sounds.

The analysis looks for a 'sudden change of direction' in the user's hand motion.

They have envelope followers that are fast and slow, and they compare those results. The hit occurs at the velocity minimum (usually).
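My guess at what that fast/slow comparison looks like in code - a sketch under my own assumptions, not their implementation:

    def detect_hits(speed, fast=0.8, slow=0.3, ratio=0.6):
        # Fast and slow one-pole envelope followers over hand speed. When
        # the fast envelope collapses well below the slow one, the stroke
        # is decelerating; the hit is flagged at the minimum that follows.
        env_fast = env_slow = 0.0
        decelerating, hits = False, []
        for i, s in enumerate(speed):
            env_fast += fast * (s - env_fast)
            env_slow += slow * (s - env_slow)
            if env_fast < ratio * env_slow:
                decelerating = True
            elif decelerating:          # speed rising again:
                hits.append(i - 1)      # the minimum just passed
                decelerating = False
        return hits

    # Two fake drum strokes: speed rises, collapses at the 'impact', rises again.
    speed = [0, 2, 5, 3, 0.2, 0.1, 2, 5, 3, 0.2, 0.1, 1]
    print(detect_hits(speed))   # [5, 10]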

Acceleration peaks take place before audio events, but very close to them.

Fast notes and slow notes have different mean velocities, but acceleration is unaffected.

Questions

Can this system be used to predict notes to fight latency in the kinect system?

Hopefully

Will result be different if users have drum sticks?

Maybe?

NIME live blog: Conducting analysis

They asked people to conduct however they wanted, to build a data set. The focus is on the relationship between motion and loudness.

25 subjects conducted along to a recording. They used Kinect to sample motion data and LibXtract to measure loudness in the recordings.

Users listen to the recording twice and then conduct it 3 times.

They got joint descriptors: velocity, acceleration and jerk; distance to torso.

They got general descriptors about quality of motion, such as maximum hand height.

They looked for descriptors highly correlated with loudness. They found none. Some participants said they didn't follow dynamics, so 8 subjects were removed.
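The analysis step, as I understand it, is just computing a correlation between each motion descriptor and the loudness curve - something like this sketch with fake data:

    import numpy as np

    rng = np.random.default_rng(0)
    loudness = rng.random(500)           # loudness curve of the recording
    descriptors = {
        # Fake series: one loosely related to loudness, one pure noise.
        "hand_height": 0.6 * loudness + 0.4 * rng.random(500),
        "hand_speed": rng.random(500),
    }

    # Pearson correlation of each descriptor against loudness.
    for name, series in descriptors.items():
        r = np.corrcoef(series, loudness)[0, 1]
        print(f"{name}: r = {r:.2f}")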

Some users used hand height for loudness; others used larger gestures. They separated users into two groups.

They have been able to find tendencies across users. However, a general model may not be the right approach.

Questions

How do they choose users?

People with no musical training were in the group that raised hand height.