Tuesday, 20 January 2015

It's Time to Relabel the Brain

Another day, another study finds that 'visual' cortex is activated by something other than information from the eyes:
A research team from the Hebrew University of Jerusalem recently demonstrated that the same part of the visual cortex activated in sighted individuals when reading is also activated in blind patients who use sounds to “read”. The specific area of the brain in question is a patch of left ventral visual cortex located lateral to the mid-portion of the left fusiform gyrus, referred to as the “visual word form area” (VWFA). Significant prior research has shown the VWFA to be specialized for the visual representation of letters, in addition to demonstrating a selective preference for letters over other visual stimuli. The Israeli-based research team showed that eight subjects, blind from birth, specifically and selectively activated the VWFA during the processing of letter “soundscapes” using a visual-to-auditory sensory substitution device (SSD) (see www.seeingwithsound.com for description of device).
There's lots of research like this. People are excited by mirror neurons because they are cells in premotor cortex that are activated both by performing a motor act and by perceiving that same act. It's incredible, people cry - cells in a part of the brain that we said 30 years ago does one thing seem to also do another thing. How could this be??

I would like to propose a simple hypothesis to explain these incredible results: we have been labeling the brain incorrectly for a long time. The data telling us this has been around for a long time too and continues to roll in, but for some reason we still think the old labels are important enough to hold onto. It's time to let go.

I'm going to go out on a limb and say it's a bit more complicated than this.

Thursday, 15 January 2015

The Size-Weight Illusion Induced Through Human Echolocation

Echolocation is the ability to use sound to perceive the spatial layout of your surroundings (the size and shape of objects, the distance to them, etc). Lots of animals use it, but humans can too, with training. Some blind people have taught themselves to echolocate using self-generated sounds (e.g. tongue clicks or finger snaps) and the result can be amazing (I show this video of Daniel Kish in class sometimes; see the website for the World Access for the Blind group too).

In humans, this is considered an example of sensory substitution; using one modality to do what you would normally do with another. This ability is interesting to people because the world is full of people with damaged sensory systems (blind people, deaf people, etc) and being able to replace, say, vision with sound is one way to deal with the loss. Kish in particular is a strong advocate of echolocation over white canes for blind people because canes have a limited range. Unlike vision and sound, they can only tell you about what they are in physical contact with, and not what's 'over there'. 'Over there' is a very important place for an organism because it's where almost all of the world is, and if you can perceive it you give yourself more opportunities for activity and more time to make that activity work out. This is why Kish can ride a bike.

A recent paper (Buckingham, Milne, Byrne & Goodale, 2014; Gavin is on Twitter too) looked at whether information about object size gained via echolocation can create a size-weight illusion (SWI). I thought this was kind of a fun thing to do and so we read and critiqued this paper for lab meeting.

Wednesday, 22 October 2014

Do people really not know what running looks like?

Faster, higher, stronger -
When we run, our arms and legs swing in an alternating rhythm: the left arm swings back as the left leg swings forward, and the same goes for the right. This contralateral rhythm is important for balance; the arms and legs counterbalance each other and help reduce the rotation of the torso created by swinging the limbs.

It turns out, however, that people don't really know this and they draw running incorrectly surprisingly often. Specifically, they often depict people running in a homolateral gait (with arms and legs on the same side swinging in the same direction at the same time; see the Olympics poster). I commented on a piece by Rose Eveleth at the Atlantic about a paper (Meltzoff, 2014) that identifies this surprising confusion in art throughout history and all over the world, and that then reports some simple studies showing that people really don't know what running is supposed to look like.

Rose covered the topic well; here I want to critique the paper a little because it's a nice example of some flawed cognitive-psychology-style thinking. That said, I want to say that I did like this paper. It's that rare thing - a paper by a single author who just happened to notice something, think about it a little, and report what he found in case anyone else thought it was cool too. This is a bit old school and I approve entirely.

Tuesday, 14 October 2014

Your hand is not a perceptual ruler

Visual perception has a problem; it doesn't come with a ruler. 

Visual information is angular, and the main consequence of this is that the apparent size of something varies with how far away it is. This means you can't tell how big something actually is without more information. For example, the Sun and the Moon are radically different actual sizes, but because of the huge difference in how far away they are, they are almost exactly the same angular size; this is why solar eclipses work. (Iain Banks suggested in 'Transition' that solar eclipses on Earth would be a great time to look for aliens among us, because it's a staggering coincidence that they work out and they would make for great tourism :) 
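The Sun/Moon coincidence is easy to check with the basic visual-angle formula. Here's a quick sketch (the diameters and distances are standard round numbers, not from the post):

```python
import math

def angular_size_deg(diameter_km, distance_km):
    """Visual angle subtended by an object, in degrees: 2 * atan(d / 2D)."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

# Sun: ~1.39 million km across, ~149.6 million km away
sun = angular_size_deg(1.39e6, 1.496e8)
# Moon: ~3,474 km across, ~384,400 km away
moon = angular_size_deg(3474, 384400)
# Wildly different physical sizes, but both come out at roughly half a degree
```

Despite a diameter ratio of about 400:1, the two angular sizes agree to within a few hundredths of a degree, which is exactly why the Moon can just cover the Sun during an eclipse.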

This lack of absolute size information is a problem because we need to know how big things actually are in order to interact with them. When I reach to grasp my coffee cup, I need to open my hand up enough so that I don't hit it and knock it over. Now, I can actually do this; as my reach unfolds over time, my hand opens to a maximum aperture that's wide enough to go round the object I'm reaching for (e.g. Mon-Williams & Bingham, 2011). The system therefore does have access to some additional information it can use to convert the angular size to a metric size; this process is called calibration and people who study calibration are interested in what that extra information is.

The ecological approach to calibration (see anything on the topic by Geoff Bingham) doesn't treat this as a process of 'detect angular size, detect distance, combine and scale', of course. Calibration uses some information to tune up the perception of other information so that the latter is detected in the calibrated unit. The unit chosen will be task specific because calibration needs information and tasks only offer information about themselves. A commonly discussed unit (used for scaling the perception of long distances) is eye height, because there is information in the optic array for it and it provides a fairly functional ruler for measuring distances out beyond reach space. 

Linkenauger et al (2014) take a slightly different approach. They suggest that what the system needs is a ruler it carries with it and that remains constant (not just constantly specified, as with eye height). They present some evidence that the dominant hand is perceived to remain a fairly constant length even when magnified, and suggest that this stored length is used by the system to calibrate size perception in reach space. There are, let's say, a few problems with this paper.

Wednesday, 8 October 2014

Limits on action priming by pictures of objects

If I show you a picture of an object with a handle and ask you to make a judgment about that object (say, whether it's right side up or not) you will be faster to respond if you use the hand closest to the handle. This is called action priming (Tucker & Ellis, 1998) and there is now a wide literature using this basic setup to investigate how the perception of affordances prepares the action system to do one thing rather than another.
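For readers unfamiliar with the paradigm, the priming effect in these studies is typically quantified as a reaction-time difference. A minimal sketch, using made-up reaction times (the numbers below are purely illustrative, not data from any of the papers discussed):

```python
from statistics import mean

# Hypothetical reaction times (ms).
# "Compatible" = responding hand is on the same side as the object's handle.
compatible_rts = [512, 498, 530, 505, 521]
incompatible_rts = [548, 560, 537, 555, 541]

# A positive difference is taken as evidence of action priming
priming_effect = mean(incompatible_rts) - mean(compatible_rts)
```

The debates in this literature are then about whether this difference reliably appears, and what (affordance perception vs. some confounded task demand) actually produces it.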

There is, of course, a problem here. These studies all use pictures of objects, and pictures are not the same as the real thing. These studies therefore don't tell us anything about how the perceived affordances of objects make us ready to act on those objects. This is only a problem because it's what these researchers think they are studying, which means they don't pay attention to the nature of their stimuli. The result is a mixed bag of findings.

For example, a recent article (Yu, Abrams & Zacks, 2014) set out to use this task to ask whether action priming was affected by where the hand had to go to make a response. Most tasks involve a simple button press on a keyboard, so they were interested to see whether asking people to respond using buttons on the monitor might enhance priming. The logic was that the spatial location of the response would be an even stronger match or mismatch to the location of the object's handle. However, they accidentally discovered that a) action priming is not reliably replicable and b) the factor that seems to determine whether it shows up is a confounding task demand. This again highlights just what a problem this experimental setup is.

Tuesday, 23 September 2014

Visual illusions and direct perception

A while back I reviewed a bunch of papers by Rob Withagen, who is currently arguing that while perception is not typically based in specifying variables, it can and should still be ecological in nature. While we are also developing an account of information that goes beyond specification, I still have some reservations about the details of Rob's work. That said, I do think there is a lot of overlap and I'm still interested in figuring out what that is.

Rob's latest paper (de Wit, van der Kamp & Withagen, 2015) is about visual illusions, and how to talk about them ecologically. There is a lot of very useful stuff in here, not least the review of the many things Gibson said about illusions. After this review, the paper tries to put all of this into Rob's and Tony Chemero's evolutionary framework and uses this to formalise and tidy up what Gibson was up to. I'm adding this paper to the set I've covered from Withagen as I continue to think through these issues.

Wednesday, 30 July 2014

Rhythmic constraints on stress timing in English

What kind of embodied constraints affect the production of speech? Can we say anything we like when we like, or are there constraints in play that make some things easier than others? This is the question asked in Cummins & Port (1998) which we recently read in lab meeting (with our PhD student Agnes).

Cummins and Port asked participants to produce sentences over and over and examined when during the cycle a certain stress beat occurred. They set it up so that the beat was cued by a beep that could occur anywhere in the cycle, but showed that people could actually only place the beat reliably at 2 or 3 points in the cycle. The big picture result is that speech production is shaped, in part, by the underlying dynamics of production, described in terms of the rhythms the system is set up to produce.
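The dependent measure in this kind of speech cycling task is the relative phase of the stressed syllable within the repetition cycle. A minimal sketch of how that measure works (the onset times below are invented for illustration, not Cummins and Port's data):

```python
def relative_phase(cycle_start, cycle_end, beat_time):
    """Where in the repetition cycle the stressed syllable falls, as a
    fraction from 0 (cycle onset) to 1 (next cycle onset)."""
    return (beat_time - cycle_start) / (cycle_end - cycle_start)

# One hypothetical 1.2 s repetition cycle, with the stress beat at 0.6 s
phase = relative_phase(0.0, 1.2, 0.6)  # beat lands halfway through the cycle
```

The finding is then that produced phases cluster at a few simple fractions of the cycle (rather than wherever the beep asked for), which is the signature of the rhythmic constraints on production.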

The nice detail here comes from the theoretical set up and analysis that drives this study. Cummins and Port are directly inspired and guided by work in coordination dynamics. Agnes is interested in this work because she's looking at ways to investigate language and speech using the tools of dynamical systems and embodied cognition - remember, our big pitch is that language is special but not magical and we should be able to study it the way we study, say, rhythmic movement coordination.