Display++ makes it easy to display calibrated visual stimuli with precision timing, and provides robust and reliable synchronisation of the stimulus presentation with external data collection equipment, at an affordable price.
Configurable contrast resolution combined with fast panel drive rate, custom lag-free electronics, and a strobing LED backlight are some of the tools that make Display++ ideal for cognitive, psychophysical and neurophysiological investigations of vision and the brain.
Guide Price: £5250
Deliver synchronised multisensory stimuli and measure accurate reaction times
Display++ integrates all the benefits of Cambridge Research Systems’ proven technology into an LCD display device designed from the ground up for science. It’s as easy to use as a normal computer monitor, and compatible with community tools like Psychtoolbox and PsychoPy, commercial tools like Presentation and Psykinematix, or your own software. Configurable contrast resolution combined with fast panel drive rate, custom lag-free electronics, real-time luminance calibration, perfect greyscale tracking and accurate colour reproduction make Display++ the ideal solution for a wide range of visual stimuli.
Robust, high quality infra-red touch screen technology is built directly into Display++. This gives a streamlined, easy-to-clean design with a durable glass surface and no additional layers to obscure your stimulus. Full integration delivers precise timing of visual stimuli and touch registration. All touches are time stamped internally, and are therefore unaffected by non-deterministic host computer uncertainties. Touch coordinates and time stamps are returned to the host computer over USB. The optional analogue I/O module provides positional information directly encoded on 2 DACs. The internal timer can also register visual stimuli (e.g. onset or offset), response box presses and external triggers.
AudioFile is a novel USB soundcard that combines local solid-state storage of audio samples with an intelligent digital I/O interface. Audio streams are simply selected and triggered via Display++, ensuring that the onset of the audio sample is perfectly synchronised with the desired video frame. This scheme eliminates variable host operating system delays, providing deterministic timing with no significant latency.
Multisensory Stroop Effect: Application Note
What is the Stroop effect?
The classic Stroop effect:
It is difficult to name the colour in which the word is printed (e.g. the word "blue" printed in red ink):
(Correct answer is "red")
The effect disappears when the task is to point to a matching patch of colour.
The reverse Stroop effect:
It is difficult to identify the printed word ("blue") and point to the matching colour patch.
(Correct answer is "blue")
Durgin [Psych. Bulletin & Review, 7(1), 121-125, 2000] used a new manual task that almost completely eliminated the traditional Stroop interference and produced strong colour-based interference when the task was to identify the words (reverse Stroop).
Experience the difference between the classic and reverse Stroop
"Colour" Task: Touch the patch that matches the colour of the word.
"Word Task": Touch the patch that matches the word.
What happens with conflicting audio information?
"Colour" task + "word" audio: Touch the patch that matches the colour of the word.
"Word" task + "colour" audio: Touch the patch that matches the word.
How the Multisensory Stroop demo was implemented
The observer is instructed to touch the patch that matches either the colour of the word (“colour” condition) or the spelling of the word (“word” condition).
A calibrated 32" 1920x1080 IPS LCD (Display++) is used to display the visual stimuli. The stimuli are programmed in Psychtoolbox using the BitsPlusPlus function. Observers’ responses are collected using the infra-red touch technology integrated into Display++.
The timing of the visual presentation is managed by Display++.
Synchronous presentation of the visual stimuli and the audio stimuli stored in the auditory stimulator (AudioFile) is controlled by Display++.
The visual stimuli consist of 60-pixel Courier New lower-case words displayed in the centre of a black background. The colours used correspond to red (1,0,0), green (0,1,0), blue (0,0.5,1), and yellow (1,0.9,0). The coloured response patches are 300x400 pixels and are placed at the corners of the display. The audio stimuli consist of WAV files of the spoken names of the set of colours.
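The stimulus parameters above can be captured in a short configuration sketch. This is an illustrative reconstruction, not the demo's actual code: the names (`COLOURS`, `patch_rects`) are hypothetical, and the corner placement simply anchors each 300x400 patch flush to a corner of the 1920x1080 display.

```python
# Illustrative sketch of the stimulus parameters (hypothetical names).
# Colours are RGB triplets on a 0-1 scale, as given in the text.
SCREEN_W, SCREEN_H = 1920, 1080
PATCH_W, PATCH_H = 300, 400

COLOURS = {
    "red":    (1.0, 0.0, 0.0),
    "green":  (0.0, 1.0, 0.0),
    "blue":   (0.0, 0.5, 1.0),
    "yellow": (1.0, 0.9, 0.0),
}

def patch_rects():
    """Return (left, top, right, bottom) rects for the four corner patches."""
    return {
        "top_left":     (0, 0, PATCH_W, PATCH_H),
        "top_right":    (SCREEN_W - PATCH_W, 0, SCREEN_W, PATCH_H),
        "bottom_left":  (0, SCREEN_H - PATCH_H, PATCH_W, SCREEN_H),
        "bottom_right": (SCREEN_W - PATCH_W, SCREEN_H - PATCH_H,
                         SCREEN_W, SCREEN_H),
    }
```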
At the beginning of each trial, a white fixation cross is presented in the centre of the screen for 500 ms, after which it is replaced by the coloured word and the four response patches. These remain on the screen until the observer responds. The audio stimulus is optionally delivered at the onset of the visual stimulus.
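The trial sequence can be sketched in plain Python. This is a minimal illustration of the logic only, with stubbed `draw`, `get_touch` and `play_audio` callables standing in for the real Psychtoolbox/Display++ calls; in the actual demo, touch time stamps and audio triggering are handled by the Display++ hardware, not by host-side timing as here.

```python
import time

def run_trial(word, ink, task, get_touch, draw, play_audio=None):
    """One Stroop trial (illustrative sketch, not the demo's real code).

    word/ink: colour names; task: 'colour' (match ink) or 'word' (match spelling).
    draw, get_touch, play_audio are caller-supplied stand-ins for the real
    display, touch-screen and AudioFile interfaces.
    """
    draw("fixation")               # white fixation cross...
    time.sleep(0.5)                # ...for 500 ms
    onset = time.monotonic()
    draw("stimulus", word=word, ink=ink)   # coloured word + four patches
    if play_audio:
        play_audio()               # in the demo this trigger is routed through
                                   # Display++, locking audio onset to the frame
    patch = get_touch()            # blocks until a patch is touched
    rt = time.monotonic() - onset
    correct = patch == (ink if task == "colour" else word)
    return rt, correct
```

A stub such as `get_touch=lambda: "red"` is enough to exercise the scoring logic without any hardware attached.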
Median reaction times and percent of correct responses are provided at the end of each session.
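The end-of-session summary reduces to a few lines of Python. This sketch assumes (the source does not say) that the median reaction time is computed over correct trials only; the function name and trial representation are hypothetical.

```python
from statistics import median

def session_summary(trials):
    """Summarise one session (illustrative sketch).

    trials: list of (rt_seconds, correct) tuples, one per trial.
    Returns (median RT over correct trials, percent correct) -- the
    restriction to correct trials is an assumption, not stated in the text.
    """
    correct_rts = [rt for rt, ok in trials if ok]
    pct_correct = 100.0 * sum(ok for _, ok in trials) / len(trials)
    return median(correct_rts), pct_correct
```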