Alex Holcombe & Patrick Cavanagh's Binding Experiment
The binding process is too slow to bind fast changes in color and orientation
Slider controls:

  Temporal frequency (tf): 5 Hz
  Color ↔ Brightness (mode): 0
  Separated, No ↔ Yes (separated): 1
v1.0
© 2020 KyberVision - Innovation in Vision Sciences
If features such as color and orientation are processed separately by the brain at early stages, how does the brain subsequently match the correct color with the correct orientation? That is the question Alex Holcombe and Patrick Cavanagh addressed with this stimulus combination. They found that spatially superposed pairings of orientation with either color or luminance could be reported even at extremely high presentation rates, suggesting that early stages code these features explicitly in combination, eliminating the need for any subsequent binding of information. In contrast, reporting the pairing of spatially separated features required rates an order of magnitude slower, suggesting that perceiving these pairs requires binding at a slow, attentional stage.

References:

  Holcombe & Cavanagh (2001) Early binding of feature pairs for visual perception. Nature Neuroscience 4:127–128

  Holcombe (2009) Seeing slow and seeing fast: two limits on perception. Trends in Cognitive Sciences 13(5):216–221
Here is the math behind this stimulus:

  tmod = rectanglewave(time,1/tf,0,0.5)
  halfup = (x*x+(y-sep)*(y-sep))<radius*radius & (y-sep)>0
  halfdown = (x*x+(y+sep)*(y+sep))<radius*radius & (y+sep)<0
  leftgrating = sign(sin(2*pi*(x+y)*sf))
  rightgrating = sign(sin(2*pi*(x-y)*sf))
  env = halfup+separated*halfdown
  r = sqrt(x*x+y*y)
  fixation = (abs(x)<2 | abs(y)<2) & r<sep/2
  back = (1-env) - fixation
  composition = halfdown*separated+(1-separated)*halfup
  brightonly = 1+mode*(1-separated)/2
  gratingmod = (tmod*rightgrating*brightonly+(1-tmod)*leftgrating/brightonly)
  nocompose = (1-mode)*(1-separated)
  halfdowngrating = (1-nocompose)*(cnt*0.5*gratingmod+0.5)
  ngrating = (gratingmod+1)/2
  nocomposecommon = nocompose*(1-ngrating)*cnt
  common = composition*(nocomposecommon+halfdowngrating)+back
  bmode = 0.5*mode*halfup*((1-tmod)*(1+cnt)+tmod*(1-cnt))
  cmode = (1-mode)*halfup
  composegrating = nocompose*composition*ngrating
  zr = separated*(tmod*cmode+bmode)+tmod*composegrating+common
  zg = separated*((1-tmod)*cmode+bmode)+(1-tmod)*composegrating+common
  zb = separated*bmode+common
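For readers who want to experiment with these formulas outside the browser, the equations above can be sketched directly in NumPy. The variable names mirror the equations; coordinates are in pixels centered on the image, and the image size, radius, sep, sf, and cnt defaults below are illustrative assumptions, not the widget's actual values.

```python
import numpy as np

def rectanglewave(t, period, phase, duty):
    """1 during the first `duty` fraction of each period, 0 otherwise."""
    return float(((t / period + phase) % 1.0) < duty)

def binding_stimulus(time, tf=5.0, mode=0.0, separated=1.0,
                     size=128, radius=50.0, sep=4.0, sf=0.05, cnt=1.0):
    """Render one RGB frame of the binding stimulus (sketch).

    Parameter defaults other than tf, mode, and separated are guesses;
    sf is in cycles per pixel.
    """
    half = size / 2.0
    # y varies along rows, x along columns, both centered on zero
    y, x = np.meshgrid(np.arange(size) - half, np.arange(size) - half,
                       indexing="ij")
    r = np.sqrt(x*x + y*y)
    tmod = rectanglewave(time, 1.0/tf, 0.0, 0.5)
    # two half-disc envelopes, offset by `sep` above/below the midline
    halfup = ((x*x + (y - sep)**2 < radius**2) & (y - sep > 0)).astype(float)
    halfdown = ((x*x + (y + sep)**2 < radius**2) & (y + sep < 0)).astype(float)
    # square-wave gratings at +/-45 degrees
    leftgrating = np.sign(np.sin(2*np.pi*(x + y)*sf))
    rightgrating = np.sign(np.sin(2*np.pi*(x - y)*sf))
    env = halfup + separated*halfdown
    fixation = (((np.abs(x) < 2) | (np.abs(y) < 2)) & (r < sep/2)).astype(float)
    back = (1 - env) - fixation
    composition = halfdown*separated + (1 - separated)*halfup
    brightonly = 1 + mode*(1 - separated)/2
    gratingmod = tmod*rightgrating*brightonly + (1 - tmod)*leftgrating/brightonly
    nocompose = (1 - mode)*(1 - separated)
    halfdowngrating = (1 - nocompose)*(cnt*0.5*gratingmod + 0.5)
    ngrating = (gratingmod + 1)/2
    nocomposecommon = nocompose*(1 - ngrating)*cnt
    common = composition*(nocomposecommon + halfdowngrating) + back
    bmode = 0.5*mode*halfup*((1 - tmod)*(1 + cnt) + tmod*(1 - cnt))
    cmode = (1 - mode)*halfup
    composegrating = nocompose*composition*ngrating
    zr = separated*(tmod*cmode + bmode) + tmod*composegrating + common
    zg = separated*((1 - tmod)*cmode + bmode) + (1 - tmod)*composegrating + common
    zb = separated*bmode + common
    # guard against small excursions outside the displayable range
    return np.clip(np.stack([zr, zg, zb], axis=-1), 0.0, 1.0)
```

With the defaults (separated=1, mode=0, tf=5 Hz), successive frames alternate the upper half-disc between red and green every 100 ms, e.g. `binding_stimulus(0.0)` versus `binding_stimulus(0.11)`.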
The whole stimulus is generated in real time by a GLSL shader that runs directly in your WebGL-compatible browser. The plain math behind the stimulus was converted to this optimized GLSL shader using the new Psykinematix Pro Edition. Translation to Matlab and Python code is also possible!

This whole widget was also fully generated with Psykinematix Pro Edition. The parameters controlled by the sliders are the same ones you would define as dependent or independent variables when running the stimulus in an actual psychophysical experiment in Psykinematix. Widget creation is otherwise fully customizable with your own logo, copyright, links, etc.

To learn more about widget creation, click the "Made With" button above!