
i-PERCEPTION

Pion publication

i-Perception (2014) volume 5, pages 205-405

ISSN 2041-6695

APCV 2014, 19-22 July 2014, Takamatsu, Japan ABSTRACTS

O1A-1: Two distinct mechanisms underlying facial expression perception

Chien-Chung Chen

Department of Psychology, National Taiwan University, Taiwan

c3chen@ntu.edu.tw

Shin-Yin Sheu

Department of Psychology, National Taiwan University, Taiwan

We used a noise masking paradigm to investigate the tuning properties of the visual mechanisms underlying facial expression perception. The targets were emotional faces (happy, sad, fearful and angry) of six individuals while the distractors were neutral faces of the same individuals. The face stimuli were embedded in band-passed noise created by applying vertical or horizontal band-pass filters (central frequency 2.5 to 40 cycles/image) to white noise. In a 2AFC trial, the target was presented in one interval while the distractor was in the other. The observer was to indicate which interval contained the emotional face. The target contrast threshold was measured with a PSI staircase procedure at 75% correct level. Under the horizontal noise, target threshold increased and then decreased with noise spatial frequency, suggesting a band-pass tuning. The threshold elevation produced by the vertical noise decreased monotonically with spatial frequency, showing a low-pass tuning. The horizontal noise produced much larger threshold elevation than the vertical noise. The horizontal superiority is consistent with the "bar-code" theory (Dakin & Watt, 2008, Journal of Vision) for face identification. However, different spatial frequency properties suggest that vertical information does play a role in facial expression perception and is processed by different mechanisms from those for face identity.

Supported by NSC 102-2410-H-002-050.
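As an illustration of the kind of masking stimuli described above, the sketch below generates oriented band-pass noise by filtering white noise in the Fourier domain. It is not the authors' stimulus code; the image size, the one-octave bandwidth, and the convention that "horizontal" noise carries horizontally oriented structure (frequencies along the vertical axis) are assumptions.

```python
# Sketch: horizontally vs. vertically oriented band-pass noise masks,
# loosely following the masking paradigm described above.
import numpy as np

def oriented_bandpass_noise(size=256, centre_cpi=10.0, bandwidth_oct=1.0,
                            orientation="horizontal", rng=None):
    """White noise filtered to a band of spatial frequencies along one axis."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal((size, size))
    f = np.fft.fftfreq(size) * size              # frequency in cycles/image
    fx, fy = np.meshgrid(f, f)
    # Assumption: horizontal ("bar-code") structure varies along the vertical axis.
    freq = np.abs(fy) if orientation == "horizontal" else np.abs(fx)
    lo = centre_cpi / 2 ** (bandwidth_oct / 2)
    hi = centre_cpi * 2 ** (bandwidth_oct / 2)
    band = (freq >= lo) & (freq <= hi)
    filtered = np.real(np.fft.ifft2(np.fft.fft2(noise) * band))
    return filtered / filtered.std()             # unit-RMS noise image

noise_h = oriented_bandpass_noise(centre_cpi=10.0, orientation="horizontal")
```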

O1A-2: Seeing objects as faces enhances object detection

Kohske Takahashi

Research Center for Advanced Science and Technology, The University of Tokyo, Japan

takahashi.kohske@gmail.com

Katsumi Watanabe

Research Center for Advanced Science and Technology, The University of Tokyo, Japan

Objects are sometimes erroneously seen as faces (the pareidolia phenomenon). We investigated whether seeing objects as faces would affect object detection performance. Participants were asked to indicate whether a target, presented briefly (60 ms) before a mask pattern, was present or absent. The participants were randomly assigned to a face task or a triangle task. In the face task, the target was either three dots arranged in a triangle that could be seen as a face or a cartoon face, and the participants were instructed to detect a face. In the triangle task, the target was either the same three dots as in the face task or a line-drawing triangle, and the participants were instructed to detect a triangle. We found that detection performance (d') for the three-dots target was higher in the face task than in the triangle task. The results suggest that seeing objects as faces could enhance object detection performance.

Supported by JST CREST and JSPS KAKENHI (25700013).

O1A-3: Holistic processing in visual expertise acquisition: An inverted U-shape function in the development of Chinese character recognition

Ricky Van Yip Tso

Department of Psychology, The University of Hong Kong, Hong Kong

richic13@connect.hku.hk

Terry Kit-fong Au

Department of Psychology, The University of Hong Kong, Hong Kong

Janet Hui-wen Hsiao

Department of Psychology, The University of Hong Kong, Hong Kong

Holistic processing (HP) behaviorally marks expert face and object recognition. In contrast, expert Chinese character recognition involves reduced HP. Tso, Au and Hsiao (2013) showed that this reduction in HP can be better explained by writing than reading experience; similarly, face-drawing artists recognized faces less holistically than ordinary controls. Using the complete composite paradigm, here we examined the developmental trend of HP in Chinese character recognition in Chinese and non-Chinese children, and its relationship with literacy abilities. Chinese first-graders showed increased HP compared with non-Chinese first-graders; nevertheless, the HP effect in Chinese children was reduced as they reached higher grades. These effects suggest an inverted U-shape pattern of HP: an increase in HP due to initial reading experience, followed by a decrease in HP due to writing practice. In contrast, HP in non-Chinese children was similar across grades. Additionally, we found that writing performance predicts reading performance through reduced HP as a mediator. This suggests that writing hones analytic processing, which is essential for Chinese character recognition, thus facilitating reading in Chinese. This result confirms that writing/sensorimotor experience can modulate HP effects, and suggests that both holistic and analytic processing skills may be important in the development of visual expertise.

Supported by the Research Grants Council (Project Nos. HKU 745210H and HKU 758412H to J. H. Hsiao).

O1A-4: Gaze constancy in adults and infants

Yumiko Otsuka

School of Psychology, UNSW Australia, Australia

yumikoot@gmail.com

Isabelle Mareschal

School of Biological and Chemical Sciences, Queen Mary University of London, UK

Hiroko Ichikawa

Department of Psychology, Chuo University, Japan

So Kanazawa

Department of Psychology, Japan Women's University, Japan

Masami K Yamaguchi

Department of Psychology, Chuo University, Japan

Colin W G Clifford

School of Psychology, UNSW Australia, Australia

Constancy in the perception of gaze direction across lateral head rotation depends on the integration of information from the eye region and information about head rotation (Otsuka, Mareschal, Calder, & Clifford, in press). We examined whether such an integration process in adults is affected by facial inversion, which is known to disrupt integration of information across features in face processing. Adults performed categorical judgements of perceived gaze direction for faces seen in three lateral rotation poses (-20°, 0°, 20°) in upright and inverted facial orientations. There were three image conditions: a normal face condition, an eyes-only condition, and a Wollaston condition (eyes from the 0° pose placed in the angled face). Integration of eye and head information was inferred by comparing the effect of pose between the eyes-only and the normal face conditions, and by examining the effect of pose in the Wollaston condition. The results provided little evidence that facial inversion impairs the integration of information from the eye region and information about head rotation in gaze processing. Instead, upright and inverted faces yielded similar levels of gaze constancy. We will also present the results of a preferential looking study examining the integration effect in infants using Wollaston images.

Supported by ARC Discovery Project DP120102589. CC is supported by ARC Future Fellowship FT110100150.

O1A-5: The composite effect in static and dynamic familiar face perception

Simone Favelle

School of Psychology, University of Wollongong, Australia

simone_favelle@uow.edu.au

Alanna Tobin

School of Psychology, University of Wollongong, Australia

Daniel Piepers

School of Social Science and Psychology, University of Western Sydney, Australia

Rachel Robbins

School of Social Science and Psychology, University of Western Sydney, Australia

Darren Burke

School of Psychology, University of Newcastle, Australia

Much research has investigated the utility of motion for face perception and recognition; however, the question of how motion influences the way in which faces are processed has been less well studied. Recent studies claiming to test holistic processing for moving faces using the composite task have failed to present faces in the same format (that is, static or dynamic) at both study and test. In this study, we asked participants to learn faces in motion as well as test with composite faces in motion, as compared to static faces at learning and test. We also tested inverted conditions in order to determine the contribution of the motion signal per se to performance. We found a clear composite effect for upright static and upright dynamic faces, and there was no significant difference in the magnitude of those effects as measured by naming reaction time (RT), inverse efficiency or baseline corrected RT. Further, there was no evidence of composite or motion effects in the inverted conditions, ruling out use of the motion signal itself as an explanation of performance. Together, these results show that upright faces in motion are processed holistically in a similar manner to static faces.
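For readers unfamiliar with the inverse-efficiency measure mentioned above, the snippet below computes it in the usual way (mean correct RT divided by proportion correct). The data are made up for illustration; this is not the authors' analysis code.

```python
# Minimal illustration of the inverse-efficiency score (higher = worse performance).
import numpy as np

def inverse_efficiency(rts_ms, correct):
    """Mean RT on correct trials divided by the proportion of correct trials."""
    rts_ms = np.asarray(rts_ms, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return rts_ms[correct].mean() / correct.mean()

rt = [612, 655, 701, 580, 640, 690]   # hypothetical naming RTs (ms)
acc = [1, 1, 0, 1, 1, 1]              # hypothetical correctness per trial
print(round(inverse_efficiency(rt, acc), 1))
```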

O1A-6: Human variation in autistic traits predicts the perception of direct gaze for males, but not for females

Daisuke Matsuyoshi

Research Center for Advanced Science and Technology, The University of Tokyo, Japan

matsuyoshi@fennel.rcast.u-tokyo.ac.jp

Kana Kuraguchi

Graduate School of Letters, Kyoto University, Japan

Yumiko Tanaka

College of Arts and Sciences, The University of Tokyo, Japan

Seina Uchida

College of Arts and Sciences, The University of Tokyo, Japan

Hiroshi Ashida

Graduate School of Letters, Kyoto University, Japan

Katsumi Watanabe

Research Center for Advanced Science and Technology, The University of Tokyo, Japan

Individuals with autism spectrum disorders (ASD) exhibit atypical behavior in perceiving others' eye gaze and eye contact, a crucial factor underlying social communication. Despite the characterization of ASD as a continuum of atypical social behaviors and the sexual heterogeneity of its phenotypic manifestations, whether gaze processing constitutes an autistic endophenotype in both sexes remains unclear. Using the Autism-Spectrum Quotient (AQ) and a psychophysical approach in a normal population (N = 128; 64 females and 64 males), here we demonstrated that individual differences in autistic traits predicted direct-gaze perception for males, but not for females. Our findings suggest that direct-gaze perception may not constitute an autistic endophenotype in both sexes, and highlight the importance of sex differences when considering relationships between autistic traits and behaviors.

Supported by JSPS #26540061, #22220003, and #24300279; CREST, JST.

O1A-7: Accurate computation of average attractiveness of faces

Anna Xiao Luo

Department of Psychology, The University of Hong Kong, Hong Kong

h1080074@hku.hk

Guomei Zhou

Department of Psychology, Sun Yat-sen University, China

Automatic, accurate extraction of feature means is evolutionarily significant, enabling fast, efficient processing of meaningful information. Our within-subject-design study tested the human ability to compute the mean attractiveness of a set of faces. We morphed two extreme faces to produce a 50-face attractiveness spectrum. In the first two experiments, we examined participants' ability to compare the mean attractiveness of a set of four faces with the attractiveness of a single face presented subsequently. Results revealed that participants could perform as accurately when the four faces were heterogeneous in attractiveness (mean computation was required) as when they were homogeneous (the mean was directly seen) (correct rates of 73.5%-94.9%); performance was not affected by the within-set attractiveness variability of the heterogeneous faces. To verify that participants relied on mean computation, rather than precise memory of individual faces in a set, to respond, Experiment 3 asked participants to judge whether a single face presented after a set of four heterogeneous faces was a set member. Performance was significantly worse than in the first two experiments, suggesting the necessity of mean computation in the previous experiments. Taken together, our findings indicate a human ability to accurately compute the mean attractiveness of faces regardless of attractiveness variability within the set.

O1A-8: Effects of viewing time on rightward bias in aesthetic impression formation in viewing pictures

Makoto Ichikawa

Department of Psychology, Chiba University, Japan

ichikawa@L.chiba-u.ac.jp

Airi Motoki

Department of Psychology, Chiba University, Japan

Although some previous studies have reported that right-handed observers preferred pictures that show their main objects in the right side of the picture (Levy, 1976), other studies have demonstrated that observers preferred pictures in which the centroid of the objects corresponds to the center of the picture frame (Morinaga, 1954). We have demonstrated that observers' right-biased preference was restricted to pictures with a concrete background (Nakajima & Ichikawa, 2008). In this study, we examined how viewing time affects aesthetic impression formation in viewing pictures. In the experiment, viewing time was restricted to 50, 250, or 500 ms for pictures (10.0 x 16.2 arc deg) with three objects (diameter about 1.3 arc deg) on a concrete background. The centroid of the three objects was located at -10.8, -5.4, 0, 5.4, or 10.8 arc deg (positive numbers indicate positions to the right of the center of the picture frame). In ratings of impressions, observers showed a leftward bias with the 50 ms viewing time, whereas they showed a rightward bias with the 250 and 500 ms viewing times. These results are compatible with the notion that preference in aesthetic impression formation depends upon the fluency of perceptual processing in viewing pictures.

Supported by JSPS Grant-in-Aid for Scientific Research (B) #25285197.

O1B-1: Aging effects on chromatic discrimination measured by the Cambridge Colour Test

Keizo Shinomori

School of Information, Kochi University of Technology, Japan

shinomori.keizo@kochi-tech.ac.jp

Athanasios Panorgias

Department of Ophthalmology & Vision Science, University of California, Davis, USA

John S. Werner

Department of Ophthalmology & Vision Science, University of California, Davis, USA

Chromatic discrimination along L-, M- and S-cone confusion lines was measured with the Cambridge Colour Test (CCT). 168 color normal observers (16 to 88 years old) and an additional 44 anomalous trichromats participated in this study. They were screened and categorized by a Neitz anomaloscope.

All three chromatic vectors measured with the CCT on color-normal observers showed significant age-related increases. Both protan and deutan vector thresholds increased linearly with age while the tritan vector threshold was described with a bi-linear model. The influence of age-related changes in ocular media density was modeled to evaluate whether there were significant shifts in the cone vectors with age. In addition, we evaluated whether the CCT classified protan and deutan individuals of various ages in a manner that is consistent with their Rayleigh matches.

Modeling suggested that protan and deutan vector thresholds were determined by the response of an L-M cone-opponent mechanism. However, it was difficult to explain tritan vector thresholds quantitatively using a traditional color discrimination model. The CCT classification of protan observers agreed with the Rayleigh matches in all 9 cases. For 27 of 35 deuteranomalous observers the CCT vector thresholds were consistent with the corresponding anomaloscope classification. Importantly, while age-related changes in ocular media density contributed significantly to a reduction in discrimination along tritan vectors, these changes did not lead to misclassification.

Supported by NIA (AG04058) and KAKENHI (24300085 and 24650109).

O1B-2: Color terms and basic color categories in Mandarin Chinese characters

Vincent C Sun

Center for Color Culture and Informatics, Chinese Culture University, Taiwan

csun@faculty.pccu.edu.tw

Chien-Chung Chen

Department of Psychology, National Taiwan University, Taiwan

We investigated basic color categories among Taiwanese native Mandarin speakers. Color samples (1" square chips) comparable with the Berlin and Kay (1969) color survey stimulus arrays were chosen from a collection of NCS color papers and mounted on a neutral gray cardboard, which in turn was mounted on a touch-screen monitor, under D65 illumination. Thirty-four single-character color terms were used. They were selected from a publicly available Chinese character database based on usage frequency. For each word surveyed, the participants were to select the color chips that matched the word by pressing a virtual button presented on the touch-screen under each chip. Because of the limited monitor size, three separate cardboards were used to present the whole set of color stimuli. The results, when displayed in the naming arrays used by Berlin and Kay (1969), show that the terms that may be translated into basic color terms in English have concentrated term maps comparable to the results of the WCS. However, some terms, such as the characters glossed as "ink", "iron", and "vegetal", show widely spread term maps and are inconsistent among subjects. Our findings suggest that the basic color categories and basic color terms in Mandarin Chinese are similar to those found in the WCS.

Supported by NSC 101-2410-H-034-036-MY2.

O1B-3: Effects of luminance contrast on the color selective responses in macaque V4 and inferior temporal cortex

Tomoyuki Namima

National Institute for Physiological Sciences, Japan

The Graduate University for Advanced Studies (SOKENDAI), Japan

namima@nips.ac.jp

Masaharu Yasuda

National Eye Institute, NIH, USA

Taku Banno

National Institute of Neuroscience, Japan

Gouki Okazawa

National Institute for Physiological Sciences, Japan

Hidehiko Komatsu

National Institute for Physiological Sciences, Japan

The Graduate University for Advanced Studies (SOKENDAI), Japan

Color appearance of a stimulus is significantly affected by its luminance contrast relative to the background. To understand the underlying mechanisms, we examined the effect of luminance contrast on the responses of color selective neurons in the macaque V4 and the anterior and posterior inferior temporal (IT) color areas (AITC and PITC), using color stimuli that were evenly distributed on the CIE xy chromaticity diagram. We found that the effect of luminance contrast on the color selectivity of individual neurons was larger for V4 and PITC than for AITC. The correlation between the population responses to a bright stimulus and a dark stimulus with the same chromaticity was high for all colors in AITC, whereas V4 and PITC neurons showed low correlation for colors from cyan to blue and for achromatic colors, respectively. These results suggest that V4, PITC and AITC neurons encode color and luminance signals in different ways and that they play different roles when the color and luminance signals interact in color perception.

O1B-4: Color constancy of color deficient observers under illuminant changes on confusion lines and off confusion lines

Ruiqing Ma

School of Information, Kochi University of Technology, Japan

ma.ruiqing@kochi-tech.ac.jp

Keizo Shinomori

School of Information, Kochi University of Technology, Japan

The color constancy mechanism of red-green color deficient observers (dichromats) has been investigated by few scientists. The blue-yellow subsystem of color vision and adaptation of the retinal photoreceptors have been thought to be significant factors in the color constancy of dichromats. Here we explored this idea by asking dichromats (3 deutan, 3 protan, and 1 protanomalous observer) to make an asymmetric simultaneous surface match (haploscopic paper match) under illuminant changes on and off individual confusion lines. When the illuminant was changed along an individual confusion line, the dichromats could not discriminate the test illuminants, which appeared greenish or reddish, from the D65 illuminant.

The results showed that dichromats have some color constancy under illuminant changes both on and off confusion lines. The dichromats showed the best color constancy under the yellow illuminant, which causes relatively large changes in S-cone stimulation from surfaces, reflecting the fact that L- or M-cone adaptation to the illumination mainly contributes to brightness (luminance) matching. This indicates that the color constancy of dichromats is mainly related to the perception of illuminant color, that is, to the bluish or yellowish change obtained from the scene surfaces.

Supported by Kakenhi 24300085.

O1B-5: Hue selectivity in human visual cortex studied by fMRI with a novel stimulation paradigm

Ichiro Kuriki

Research Institute of Electrical Communication, Tohoku University, Japan

ikuriki@riec.tohoku.ac.jp

Pei Sun

RIKEN Brain Science Institute, Japan

Kenichi Ueno

RIKEN Brain Science Institute, Japan

Keiji Tanaka

RIKEN Brain Science Institute, Japan

Kang Cheng

RIKEN Brain Science Institute, Japan

Representation of colors at the intermediate level of the human visual system, particularly between the cone-opponent and categorical levels, is still unclear. We studied the variability of hue-selective neurons in human visual cortex by BOLD fMRI using a novel stimulation paradigm, a variation of the phase-encoding technique. The visual stimulus was a checker pattern reversing between the background gray and a hue with a 15% luminance pedestal. The hue changed continuously along an elliptic locus defined in each subject's isoluminant plane. The BOLD signal time course was analyzed after taking the difference between BOLD responses for two stimulus blocks that changed hues in counter phase. The preferred hue in each voxel (2 x 2 x 3 mm³ in size) was estimated by fitting a cosine curve. The histogram showed the presence of abundant voxels selective for directions intermediate between the cone-opponent axes at the level of V1. To rule out the possibility of combined responses of cone-opponent mechanisms, responses selective for intermediate hues were also tested using an adaptation paradigm. The results support the presence of neurons selective for intermediate hues in cone-opponent space at an early stage of human visual cortex. However, we found no clear correspondence between hue selectivity and individual subjects' unique hues.

Supported by JSPS KAKENHI 21330165 and 24330205 to IK.
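The cosine-fitting step described above can be sketched as follows: a voxel's preferred hue is taken as the phase of a cosine fitted to its responses across hue directions. This is a minimal illustration with synthetic data, not the authors' analysis pipeline.

```python
# Estimate a voxel's preferred hue by least-squares fitting of a cosine.
import numpy as np

def preferred_hue(hue_angles_deg, responses):
    """Fit r(theta) ~ A*cos(theta - phi) + c and return phi in degrees."""
    theta = np.deg2rad(np.asarray(hue_angles_deg, dtype=float))
    # A*cos(theta - phi) = a*cos(theta) + b*sin(theta), with phi = atan2(b, a)
    X = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
    a, b, _ = np.linalg.lstsq(X, np.asarray(responses, dtype=float), rcond=None)[0]
    return np.rad2deg(np.arctan2(b, a)) % 360.0

hues = np.arange(0, 360, 30)                       # hypothetical hue directions
resp = 1.0 + 0.8 * np.cos(np.deg2rad(hues - 140))  # synthetic voxel responses
print(round(preferred_hue(hues, resp)))            # recovers ~140 degrees
```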

O1B-6: Various double-pulse resolutions of color opponent channels

Lin Shi

Information Engineering and Automation School, Kunming University of Science and Technology, China

Yunnan Provincial Computer Application Key Laboratory, China

lin.shi@live.cn

Luminance, red-green, and blue-yellow opponent channels are critical in human color vision, yet the temporal resolution of these channels remains unclear. We measured inter-stimulus-interval (ISI) thresholds for distinguishing a temporal double-pulse in positive and negative luminance contrast, isoluminant red-green contrast, and isoluminant blue-yellow contrast (denoted LM+, LM-, L-/M+, L+/M-, S-/LM+, and S+/LM-, respectively) at spatial frequencies of 0, 2, and 8 cpd in 13 observers with normal color vision. For each of the six directions, the stimulus intensity was set individually to the single-pulse detection threshold (95% correct), the double pulse lasted twice the duration of the single pulse, and display error was kept below 3% of the CIE (2006) tristimulus values L, M, and S. Results showed that: (1) ISI thresholds for S-/LM+ were significantly higher than those for S+/LM-, suggesting that the temporal resolutions of the blue-yellow and yellow-blue opponent channels differ; (2) ISI thresholds for S-/LM+ at 0 cpd differed from those at 2 and 8 cpd; (3) ISI thresholds for LM+ and LM- were similar at all three spatial frequencies, as were those for L-/M+ and L+/M-, indicating similar temporal resolutions for positive and negative luminance contrast and for red-green and green-red contrast.

Supported by National Natural Science Foundation of China (61368005).

O1B-7: Influence of glossiness and shape of surface on color constancy

Yoko Mizokami

Graduate School of Advanced Integration Science, Chiba University, Japan

mizokami@faculty.chiba-u.jp

Asuka Akahori

Faculty of Engineering, Chiba University, Japan

Hirohisa Yaguchi

Graduate School of Advanced Integration Science, Chiba University, Japan

In our previous study, we examined the effect of specularity on color constancy using real matt and gloss paper and did not find any systematic differences, suggesting that the contribution of the specular component is small in a real environment. However, we used a rather complex wavy-shaped sample with uneven shading and specular regions, which could make judgments of color appearance difficult. Here, we examine whether the surface shape of a glossy object influences color constancy. We built a booth arranged like a normal room, illuminated by white and reddish lamps. We compared test samples with half-cylindrical and wavy surfaces, covered with gloss, semi-gloss and matt paper, and tested achromatic and four-color samples. Observers evaluated the color appearance of the test samples using an elementary color naming method. The color appearance of the test samples under the two illuminations showed only small differences, indicating good color constancy, but there were no systematic differences across the glossiness of the paper for either surface shape. Our results suggest that the specular reflection of colored paper has little effect on color constancy in a real environment, regardless of surface shape.

Supported by MEXT KAKENHI 23135506.

O1B-8: Comparison of fMRI measurements in lateral geniculate nucleus (LGN) and primary visual cortex (V1) with visual deficits in glaucoma

Sophie Wuerger

University of Liverpool, UK

s.m.wuerger@liverpool.ac.uk

Joanne Powell

University of Liverpool, UK

Anshoo Choudhary

University of Liverpool, UK

Laura Parkes

University of Manchester, UK

The purpose of our study was to determine whether, in glaucoma patients, selective behavioural deficits in the three main visual pathways (magnocellular, parvocellular, koniocellular) are associated with selective changes in the neural activity in LGN and V1.

A POAG group (n = 20) and a control group (n = 20) were examined using the following tests: standard automated perimetry (SAP); the Cambridge Colour Vision Test (CCT); and thresholds along the three cardinal directions of colour space (BW, RG, YV). For a subset of the participants (n = 9 in each group), BOLD signals in the LGN and V1 were measured in response to supra-threshold modulations along the cardinal directions.

We find significant visual threshold differences between the Glaucoma and the control group, in particular for S-cone isolating stimuli. The average BOLD signal in primary visual cortex (V1) is lower in the Glaucoma group compared to the controls; contrary to our expectations, the LGN signal is increased in POAG patients compared to the controls. We speculate that this may reflect altered feedback from primary visual cortex.

Equipment: Wellcome Trust; JP and scans were supported by the Eire & UK Glaucoma Society.

O2A-1: The visuotypy of autistic tendency

David P Crewther

Swinburne University of Technology, Australia

dcrewther@swin.edu.au

Happé (2006) argues that there will be no single (genetic or cognitive) cause for the diverse symptoms defining autism, on the basis of low correlations between its classic triad of behavioural symptoms. However, there is growing evidence of a common visuotypy related to the degree of autistic tendency, even across the "normal" population. This paper presents evidence from six studies (psychophysical and electrophysiological) asking whether autistic vision is explained by an afferent magnocellular abnormality or by altered cortico-cortical processing leading to enhanced local perception. Reliably, cohorts with high versus low scores on Baron-Cohen's Autism-Spectrum Quotient (AQ) show some reduction in the initial negativity of the first-order multifocal VEP response, together with increased amplitude of the magnocellular-generated second-order nonlinearity. The competition between global and local percepts in the diamond illusion reveals a peripheral global neglect in those with high AQ, while with Navon figures, a high-AQ group had difficulty in withdrawing attention from the salient local level when identifying the (incongruent) global level. Evidence from saccadic suppression indicates that the answer may lie in abnormal suppression of transient attention in those with high versus low autistic tendency.

Supported by NHMRC Australia.

O2A-2: Motor Intention on a complex airplane piloting task is predicted by attentional modulation of visual brain areas

Daniel E Callan

Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT) and Osaka University, Japan

Multisensory Cognition and Computation Laboratory, NICT, Japan

ATR Neural Information Analysis Laboratories, Japan

dcallan@nict.go.jp

Daniel Cassel

ATR Neural Information Analysis Laboratories, Kyoto, Japan

The Hospital for Sick Children, Toronto, Canada

Cengiz Terzibas

Universal Communication Research Institute, National Institute of Information and Communications Technology (NICT), Japan

Center for Information and Neural Networks (CiNet), NICT and Osaka University, Japan

Mario Gamez

ATR Neural Information Analysis Laboratories, Kyoto, Japan

Hiroshi Ando

Universal Communication Research Institute, National Institute of Information and Communications Technology (NICT), Japan

Center for Information and Neural Networks (CiNet), NICT and Osaka University, Japan

The objective of this experiment was to determine the goal-directed attentional neural correlates underlying movement intention in a complex piloting task. Each trial started with the plane flying toward a set of cones directly in front of it. Once the plane passed these cones, the subject piloted it to the left or right set of cones (as previously instructed) at a specific altitude and in a vertical orientation. The first experiment (3T fMRI) showed that direction of flight is related to greater visual cortex activity in the hemisphere opposite the direction of movement. The second experiment (3T fMRI) contrasted the flying task with a watching-only task. Although the standard SPM fMRI analysis did not reveal a significant difference in direction of flight between flying and watching conditions, support-vector-machine decoding of voxels in visual areas did reveal a significant difference in predicting direction of flight between flying and watching conditions that occurs prior to movement onset and is correlated with task performance. These results suggest that goal-directed attentional processes may modulate activity in visual areas predicting future direction of flight, serving as a target space for intentional motor control.
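A minimal sketch of the kind of decoding analysis described above: a linear support vector machine predicting flight direction (left vs. right) from visual-cortex voxel patterns, evaluated with cross-validation. The data shapes, the scikit-learn implementation, and the preprocessing are assumptions, not the authors' pipeline.

```python
# Cross-validated linear SVM decoding of a binary condition from voxel patterns.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500
X = rng.standard_normal((n_trials, n_voxels))   # synthetic voxel patterns (trials x voxels)
y = rng.integers(0, 2, n_trials)                # 0 = left, 1 = right (hypothetical labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()   # chance level is ~0.5 for random data
print(f"cross-validated decoding accuracy: {acc:.2f}")
```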

O2A-3: Biased competition between vision and biomechanics in human walking

Ute Leonards

University of Bristol, UK

ute.leonards@bristol.ac.uk

John Fennell

University of Bristol, UK

Charlotte Goodwin

University of Bristol, UK

Jeremy Burn

University of Bristol, UK

What determines where we put our feet when walking on a pavement covered in slippery leaves? Do we choose dry patches to avoid slipping or also step on leaves to avoid losing balance? To start answering such questions, we investigated the impact of perceptual grouping on foot placement: participants performed a stepping stone task in which pathways consisted of targets placed for optimal balance, with visual distractors in their proximity. Targets and distractors differed in shape and colour so that each subset of stones could be easily grouped perceptually. In half of the trials, one target swapped shape and colour with a distractor in its close proximity. We show that in these "swapped" conditions participants chose the visual instead of the balance-optimized stepping stone in over 40% of trials, even if the distance to awkward swapped steps was considerable, jeopardising balance. These results suggest biased competition between the biomechanically driven balance system and low-level visual mechanisms when it comes to foot placement, even at a clear cost of balance. Findings are discussed with regard to their importance in understanding increased falls risk in older adults who tend to rely more on visual input during walking than younger adults.

Supported by the BRACE charity.

O2A-4: The critical delay for changing self-body sensations and self-view sensations

Seiya Kamiya

Tokyo Institute of Technology, Japan

kamiya.s.ac@m.titech.ac.jp

Takako Yoshida

Tokyo Institute of Technology, Japan

The temporal delay between human action and its visual feedback is critical for self-body sensation. Previous research on self-body sensation has examined two types of sensation, the sense of ownership and the sense of agency, but only for a single body part, not for eye movements and vision. The assumption of ownership over one's vision may be related to the first-person perspective. In this study, we tested how visual feedback delay changes these two sensations for the hand shown on a visual display, and the sense of ownership over the image on the display, as well as eye and hand behaviour. Participants executed a block-copying task that involved manually collecting and arranging colored blocks to duplicate a pattern observed on a delayed video image. As the delay increased, questionnaire scores for self-body sensations and self-view sensations decreased. The decay slope changed after a delay of approximately 350 ms, at which point the distribution of fixation durations and hand velocity also showed qualitative changes. These results suggest that 350 ms is the critical delay for changes in self-body and self-view sensations. Whether this change was due to visual and tactile asynchrony is uncertain. However, this is the first report on ownership of the first-person field of view with eye-tracking data, and further research on visual agency is required.

Supported by COI-T and JST to TY.

O2A-5: Attentive tracking of switching-in-depth moving objects in 3D space

Anis Ur Rehman

Graduate School of Science and Engineering, Kagoshima University, Japan

anis@ibe.kagoshima-u.ac.jp

Akiko Matsumoto

Graduate School of Science and Engineering, Kagoshima University, Japan

Ken Kihara

Graduate School of Science and Engineering, Kagoshima University, Japan

Sakuichi Ohtsuka

Graduate School of Science and Engineering, Kagoshima University, Japan

Multiple object tracking allows participants to successfully track up to four moving targets. A recent study reported that participants performed better when the moving targets were equally divided between two planes separated in stereoscopic depth by 2 cm. However, it has not been confirmed whether humans can track moving targets presented not only in stereoscopic but also in real 3D environments. In this study, four targets and four distractors were presented on nearer and/or farther depth planes, which were created with a half-mirror and two CRT monitors. We found that, compared with the single-plane condition, tracking performance for targets divided across two depth planes was similar when the depth separation was 10 cm, but dropped when it was 50 cm. Our results also demonstrated that participants failed to follow switching-in-depth targets (nearer-to-farther or farther-to-nearer) if all targets were presented on a single plane at the beginning of each trial; however, participants could track the switching-in-depth targets if they appeared on both planes at the beginning. These results suggest that humans have sufficient flexibility to attend to multiple depth planes over a wide range, up to about 50 cm, in real 3D environments.

Supported by JSPS KAKENHI Grant Number 25730095.

O2A-6: The mirror illusion: Does proprioceptive drift go hand in hand with the sense of agency?

Daisuke Tajima

Tokyo Institute of Technology, Japan

tajima.d.aa@m.titech.ac.jp

Tota Mizuno

The University of Electro-Communications, Japan

Yuichiro Kume

Tokyo Polytechnic University, Japan

Takako Yoshida

Tokyo Institute of Technology, Japan

When participants viewed their left hands in a mirror positioned along the midsagittal axis, they hardly noticed the spatial offset between the hand in the mirror and the obscured real right hand while moving both hands synchronously. This is called the mirror illusion. The illusion encompasses two phenomena: proprioceptive drift and the sense of agency. Proprioceptive drift refers to a perceptual shift of the obscured hand's position toward the hand in the mirror. The sense of agency refers to the subjective sense of controlling the body image as one's own body. We examined the spatial overlap between the two. Participants responded to a two-alternative forced choice (2AFC) question about the subjective position of the right hand, and to questionnaires about the sense of agency, at various right-hand positions. We analysed the 2AFC data using a support vector machine and compared its classification results with the questionnaire results. Our analysis suggested that the two phenomena can be observed in concentric regions of space, but the estimated range of proprioceptive drift was narrower than the range of agency. This outcome may be due to differences in measurement or analysis; nonetheless, this is the first report to suggest that proprioceptive drift and the sense of agency do not perfectly overlap.

Supported by COI-T and JST to TY.
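The abstract above mentions analysing the 2AFC data with a support vector machine. The sketch below shows one way such an analysis could locate the position at which responses flip, using a linear SVM on hand position; the positions, the simulated responses, and the boundary estimate are hypothetical and not the authors' procedure.

```python
# Locate the flip point of 2AFC responses with a 1-D linear SVM decision boundary.
import numpy as np
from sklearn.svm import SVC

pos_cm = np.array([-8, -6, -4, -2, 0, 2, 4, 6, 8], dtype=float)  # hypothetical hand offsets
resp = (pos_cm > -1).astype(int)                                 # hypothetical 2AFC answers

svm = SVC(kernel="linear").fit(pos_cm.reshape(-1, 1), resp)
boundary_cm = -svm.intercept_[0] / svm.coef_[0, 0]               # where the decision flips
print(round(float(boundary_cm), 2))
```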

O2A-7: Multiple gaze saccades during unrestrained eye-head movement in visual search

Yu Fang

Graduate School of Information Sciences, Tohoku University, Japan

fangyork@riec.tohoku.ac.jp

Ryoichi Nakashima

Research Institute of Electrical Communication, Tohoku University, Japan

CREST, Japan Science and Technology Agency (JST), Japan

Kazumichi Matsumiya

Graduate School of Information Sciences, Tohoku University, Japan

Research Institute of Electrical Communication, Tohoku University, Japan

CREST, Japan Science and Technology Agency (JST), Japan

Ichiro Kuriki

Graduate School of Information Sciences, Tohoku University, Japan

Research Institute of Electrical Communication, Tohoku University, Japan

Satoshi Shioiri

Graduate School of Information Sciences, Tohoku University, Japan

Research Institute of Electrical Communication, Tohoku University, Japan

CREST, Japan Science and Technology Agency (JST), Japan

Gaze shifts involve rotations of both the eyes and the head. Previous studies have focused on eye-head coordination during a gaze shift between two simple targets, such as LED lights displaced along the horizontal axis, and reported systematic relationships between eye and head movements in such simple gaze shifts. However, multiple saccades frequently occur during a single head movement when observers perform an everyday task. Here, we examined eye-head coordination during a visual search task in which observers rotated their body to conduct the task inside a 360° display system. The results showed, first, that multiple saccades were frequently observed during one head movement. Second, the head moved with gaze even within the range of gaze shifts for which the head moves little according to previous studies. Third, the amplitude and duration of head movements increased with the number of saccades during a head movement. These results suggest that eye-head coordination works differently in visual search from what is predicted from single gaze shifts. Head movements appear to be programmed for a period as long as several saccadic eye movements, which may be evidence for a contribution of relatively high-level stages of visual processing to eye-head coordination.

O2A-8: Motion aftereffect induced by actively moving one's own hand

Kazumichi Matsumiya

Research Institute of Electrical Communication, Tohoku University, Japan

kmat@riec.tohoku.ac.jp

Satoshi Shioiri

Research Institute of Electrical Communication, Tohoku University, Japan

We use our own hands to manipulate a variety of objects. Previous studies have suggested that seeing the hands during object manipulation is important for successful interactions with objects. Here we investigated how seeing one's own hand influences the perception of visual motion around the hand. The results indicate that a visual motion aftereffect (MAE) shows spatial selectivity in hand-centered coordinates. This selectivity appeared only when the participants saw their own hand actively moving, and was not due to attentional modulation on the adaptation grating. Moreover, we investigated how the senses of hand ownership and agency influence the hand-centered MAE using the rubber hand illusion. We found that the hand-centered MAE occurs only when the participants have both senses of hand ownership and agency for a virtual hand, and also found a positive correlation between the magnitude of the hand-centered MAE and the rubber hand illusion. These results reveal that, for visual motion analysis anchored to one's own hand, both senses of body ownership and agency for a seen hand generate a perceptual representation of the space encoded in hand-centered coordinates. This suggests that bodily self-consciousness plays a functional role in the perceptual processing of peripersonal space.

This work was partially supported by KAKENHI grant number 23500251 and by funding from the RIEC, Tohoku University Original Research Support Program to KM.

O2A-9: Visual search for self-controlled vs. non-controlled object

Hideyuki Kobayashi

Tokyo Institute of Technology, Japan

koabayashi.luia@nUitecluic.jp

Takako Yoshida

Tokyo Institute of Technology, Japan

A self-controlled mouse cursor among randomly moving distractors is easier to find for the operator than for observers (Watanabe et al., 2013). The role of attention in this phenomenon was assessed with a visual search experiment with a self condition (participants controlled a target object via a computer mouse and searched for the target among randomly moving distractors) and an observe condition (participants searched for a target object that moved in accordance with recorded mouse movements). The search slope was shallower for the self condition than for the observe condition, but was not flat like a "pop-out" search function. These results resemble "pop-out" driven by higher-level features, suggesting that the saliency of the target was relatively high due to an expectancy driven by a feed-forward process. To further test the contribution of multimodal discrepancy and a sense of intuitive controllability or agency over the cursor, a temporal delay was inserted between the mouse and cursor movement. As the delay increased, the search function steepened. The estimated temporal delay matching performance between the "delayed" self and observe conditions was 4043 ms. This delay might be the critical timeframe within which individuals determine causality between hand movement and delayed visual feedback.

Supported by COI-T and JST to TY.

O2B-1: Electrophysiological responses to visual symmetry

Marco Bertamini

Department of Psychological Sciences, University of Liverpool, UK

m.bertamini@liv.ac.uk

Alexis D J Makin

Department of Psychological Sciences, University of Liverpool, UK

In a series of experiments we have explored visual symmetry processing by measuring event-related potentials and neural oscillatory activity. We report a summary of results organised in six points. (1) There is a sustained posterior negativity (SPN) related to the presentation of symmetry. This response is automatic and independent of task, which supports preattentive symmetry processing. (2) The SPN is generated in extrastriate visual cortex, and it is therefore the electrophysiological correlate of the activity measured using fMRI (Sasaki et al., 2005). (3) This response is not unique to reflection: the amplitude is largest for reflection but present for other regularities (rotation and translation). (4) It does not matter whether symmetry is associated with an object, as opposed to a ground region, although the exact procedure may be important. (5) The SPN generated by symmetry is independent of view angle: we confirmed it for symmetry viewed at a 50-degree slant. However, slant compensation is not automatic; observers have to focus on the slanted frame of reference. (6) Symmetrical and random patterns produce bilateral alpha desynchronization. When people are actively classifying regularity, this signal is stronger in the right hemisphere, supporting a specialization of the right hemisphere for global spatial processing.

O2B-2: The law of proximity in new light

Cees van Leeuwen

Laboratory for Perceptual Dynamics, KU Leuven, Belgium

Department of Cognitive Science, TU Kaiserslautern, Germany

RIKEN Brain Science Institute, Japan

cees.vanleeuwen@ppw.kuleuven.be

Sergei Gepshtein

RIKEN Brain Science Institute, Japan

SALK Institute, USA

Aleksandra Zharikova

Laboratory for Perceptual Dynamics, KU Leuven, Belgium

Ekaterina Levichkina

RIKEN Brain Science Institute, Japan

University of Melbourne, Australia

The Gestalt principle of proximity is one of the basic perceptual grouping principles in vision. Whereas this principle leads to a good quantitative description of static stimuli, this is not always the case for moving stimuli. The perception of moving textures (e.g., dot lattices) and objects, both in conditions well above detection threshold, as well as the detection of moving gratings at threshold, all appear to follow the same psychophysical law of motion sensitivity. I will illustrate these conclusions based on recent experimental and computational studies from the Laboratory for Perceptual Dynamics.

Supported by an Odysseus grant from the Research Foundation Flanders (FWO).

O2B-3: Perceptual constancy and subjective/objective modes of perception

John O'Dea

Department of Interdisciplinary Cultural Studies, The University of Tokyo, Japan

odea@chora.c.u-tokyo.ac.jp

Perceptual constancy is normally defined as the tendency to see objects as stable with respect to a perceived attribute through a range of viewing conditions. However, this definition does not differentiate perceptual constancy from merely veridical perception. I show in this paper that there is an important difference. Constancy phenomena involve not just the recovery of veridical intrinsic object properties, but the disambiguation of an intrinsic/extrinsic property pair (surface colour/illumination; shape/orientation; size/distance). The link between members of these pairs is such that the value of one constitutively depends on the value of the other, not just computationally but phenomenologically. The failure to notice the phenomenological closeness of the intrinsic/extrinsic attribute pair that characterises constancy has led to continuing arguments for a sensation/perception distinction in visual experience. This is held to be needed to account for such phenomena as the "small" look of distant objects. In this paper I present an alternative possibility, a way of understanding these phenomena without needing to make a distinction between subjective and objective, or "proximal" and "distal", modes of perception. This alternative draws on the idea that mechanisms responsible for multistable perception can result in different ways of seeing a scene.

O2B-4: Relationship between brightness enhancement and self-luminosity of the glare illusion

Hideki Tamura

Department of Computer Science and Engineering, Toyohashi University of Technology, Japan

tamura13@vpac.cs.tut.ac.jp

Shigeki Nakauchi

Department of Computer Science and Engineering, Toyohashi University of Technology, Japan

Kowa Koida

Electronics-Inspired Interdisciplinary Research Institute, Toyohashi University of Technology, Japan

The glare illusion is an optical illusion that induces brightness enhancement and self-luminosity in the center region. It is not clear whether these two aspects of the illusion arise independently or are causally related to each other. To test this, we measured the effect of the stimulus luminance of the glare illusion while the background luminance was kept constant. First, eight participants compared the brightness of a glare stimulus and a control; they then performed a categorical naming task with four alternatives: "black", "gray", "white" and "glowing white". The glare stimulus consisted of an achromatic uniform disk surrounded by a luminance ramp. The control was the same, except that the surround was uniform. The luminance of both the center and the surround was modulated in a multiplicative way. We found that brightness enhancement of the glare stimulus was observed across the luminance range of the stimulus except at low luminance (<15% of the background). However, a "glowing white" response was obtained for the glare stimulus only at a high luminance range (>145%). In fact, brightness enhancement was observed even when the stimulus was categorized as gray. Thus, we exclude the theory that brightness enhancement is induced because the stimulus appears self-luminous.

O2B-5: Shape and material from intensity gradient: A hypothesis

Masataka Sawayama

NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Japan

masa.sawayama@gmail.com

Shin'ya Nishida

NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Japan

Human perception of shape from shading is affected little by certain changes in surface material, including addition/removal of specular highlights (Nefs, Koenderink, & Kappers, 2006). While highlights drastically change the pixel intensity pattern of the surface image, particularly the steepness of intensity gradient, they affect the pixel intensity order only slightly. This is why an appropriate compressive tone remapping can make a glossy object look like a matte object (Motoyoshi, Nishida, Sharan, & Adelson, 2007). Considering these facts together with the importance of orientation field in shape perception (Fleming, Holtmann-Rice, and Bülthoff, 2011), we hypothesized that human shape-from-shading perception might be more sensitive to image features given by the intensity order information than those conveyed by the detailed intensity gradient information, while the gradient information might be dominantly used for material perception of the object surface. As one test of this hypothesis, we applied a variety of non-linear remappings to several object images and had observers estimate the perceived shape of the objects by setting a gauge probe with the matching apparent surface slant/tilt. In agreement with our hypothesis, the perceived shape of the tone-remapped images was similar to that of the original images unless their intensity order was disrupted.

Supported by: Grant-in-Aid for Scientific Research on Innovative Areas "Shitsukan" (No. 22135004) from MEXT, Japan.
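The manipulation described above, remapping image intensities while either preserving or disrupting their rank order, can be illustrated as below. The specific remapping functions are assumptions chosen for illustration, not the authors' stimuli.

```python
# Monotonic (order-preserving) vs. non-monotonic tone remapping of intensities.
import numpy as np

def remap_monotonic(img, gamma=0.4):
    """Compressive remapping: changes intensity gradients but keeps pixel rank order."""
    return np.clip(img, 0.0, 1.0) ** gamma

def remap_nonmonotonic(img):
    """Sine-based remapping that folds intensities, disrupting their rank order."""
    return 0.5 * (1.0 + np.sin(2.5 * np.pi * np.clip(img, 0.0, 1.0)))

img = np.linspace(0.0, 1.0, 11)                        # stand-in for image intensities
assert np.all(np.diff(remap_monotonic(img)) >= 0)      # order preserved
assert np.any(np.diff(remap_nonmonotonic(img)) < 0)    # order disrupted
```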

O2B-6: The development of contrast sensitivity for gratings and natural images: revisiting the gold standard

Dave Ellemberg

Department of Kinesiology, University of Montreal, Canada

dave.ellemberg@umontreal.ca

We compared root-mean-square contrast sensitivity (CS) for natural images, phase-scrambled images, and Gabors in children aged 6, 8, and 10 years (n = 16 per age) and in adults. Natural and phase-scrambled images were band-pass filtered at one of five frequencies (0.33, 1, 3, 10, and 20 cpd). Detection thresholds were measured using a temporal 2AFC task. CS with Gabors was adult-like for 8-year-olds. However, our results raise three new issues regarding CS. First, for both adults and children, the shape of the CSF is different for natural images compared to gratings and phase-scrambled images. For natural images, peak sensitivity lies at higher spatial frequencies and the slope of the high spatial frequency turn-down is shallower. Second, adult sensitivity is higher for natural images. Finally, CS for natural images is still immature for 10-year-olds and the difference in threshold between children and adults is greater for natural images than for gratings or phase-scrambled images, indicating that sensitivity to natural images develops more slowly. Given the important developmental differences between traditional measures of CS using gratings and CS measured with natural images, the latter might be more relevant for the clinical assessment of visual development and visual pathology.

Supported by the Natural Sciences and Engineering Council of Canada.
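The band-pass filtering and RMS-contrast measure described above can be sketched as follows. The isotropic Fourier-domain filter, the one-octave bandwidth, and the pixels-per-degree conversion are assumptions, not the authors' exact stimulus pipeline.

```python
# Band-pass filter an image around a centre spatial frequency and compute RMS contrast.
import numpy as np

def bandpass(img, centre_cpd, pix_per_deg, bandwidth_oct=1.0):
    h, w = img.shape
    fy = np.fft.fftfreq(h) * pix_per_deg            # cycles per degree along y
    fx = np.fft.fftfreq(w) * pix_per_deg            # cycles per degree along x
    r = np.hypot(*np.meshgrid(fx, fy))              # radial spatial frequency
    lo = centre_cpd / 2 ** (bandwidth_oct / 2)
    hi = centre_cpd * 2 ** (bandwidth_oct / 2)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * ((r >= lo) & (r <= hi))))

def rms_contrast(img):
    """One common definition: standard deviation divided by mean luminance."""
    return img.std() / img.mean()

img = np.random.default_rng(1).random((256, 256))   # stand-in for a natural image
filtered = bandpass(img, centre_cpd=3.0, pix_per_deg=30.0)
print(round(rms_contrast(img), 3))
```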

O2B-7: Image deformation as a perceptual cue to a transparent layer

Takahiro Kawabe

NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Japan

kawabe.takahiro@lab.ntt.co.jp

Kazushi Maruya

NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Japan

Shin'ya Nishida

NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Japan

Human observers effortlessly recognize various types of transparent materials. While the traditional study of phenomenal transparency assumes a situation wherein a static transparent material partially reflects and/or absorbs light, dynamic image deformations can produce perception of transparent layers that neither reflect nor absorb light (Kawabe, Maruya, & Nishida, 2013). The present study examined how dynamic transparency perception utilizes two basic components of image deformation, compressive and shearing deformations, wherein opposite image movements occur coaxially and non-coaxially, respectively. Our stimulus consisted of a static natural image deformed by a sinusoidally drifting or standing wave of compressive or shearing deformation. We also manipulated the spatial and temporal frequencies of the waves. The task of observers was to judge whether the movie appeared to depict a single deforming layer or dual layers wherein a transparent layer was seen in front of a static natural scene layer. Generally, dual layers were perceived when the deformation spatial frequency was high. More frequent reports of a dual layer were obtained with compressive than shearing deformations, and drifting than standing wave deformations. The results suggest that the visual system can utilize basic image deformations and their flows to generate perception of a transparent layer.

Supported by: Grant-in-Aid for Scientific Research on Innovative Areas "Shitsukan" (No. 22135004) from MEXT, Japan.
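A rough sketch of the two deformation components described above: a sinusoidal displacement field applied either along the modulation axis (compressive) or perpendicular to it (shearing), with the image resampled accordingly. Amplitudes, frequencies, and the interpolation scheme are illustrative assumptions, not the authors' stimulus code.

```python
# One frame of a sinusoidal compressive or shearing image deformation.
import numpy as np
from scipy.ndimage import map_coordinates

def deform(img, amp=2.0, spatial_freq=4.0, mode="shear", phase=0.0):
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    wave = amp * np.sin(2 * np.pi * spatial_freq * y / h + phase)
    if mode == "shear":            # displacement perpendicular to the modulation axis
        xs, ys = x + wave, y
    else:                          # "compressive": displacement along the modulation axis
        xs, ys = x, y + wave
    return map_coordinates(img, [ys, xs], order=1, mode="reflect")

img = np.random.default_rng(2).random((128, 128))    # stand-in natural image
frame = deform(img, mode="compressive", phase=0.5)   # advance phase over frames to drift
```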

O2B-8: A number of experiments on number

Nicholas E Scott-Samuel

School of Experimental Psychology, University of Bristol, UK

n.e.scott-samuel@bris.ac.uk

Natasha Davies

School of Experimental Psychology, University of Bristol, UK

Caitlin Molloy

School of Experimental Psychology, University of Bristol, UK

Alan To

School of Experimental Psychology, University of Bristol, UK

There are thought to be up to three processes which underlie enumeration: subitizing, counting and estimation. Opinions vary as to whether these different processes are underpinned by separate mechanisms, and there is also some debate about whether estimation of numerosity can be reduced to estimation of density. Subjects reported the number of dots presented in apertures of differing sizes containing varying numbers of dots (1 to 3660), which either moved or remained static. In a second set of experiments, a temporal 2AFC task was used to investigate whether static or moving dots looked more numerous. Changes in density had modest effects on estimates of number. The numerosity of static dots was veridical for lower numbers and systematically underestimated for higher numbers. Moving dots were overestimated at lower numbers and underestimated for higher. Estimates of moving dots were generally more accurate than those for static ones. 10% fewer moving dots appeared as numerous as static dots. This effect decreased with the speed of the moving dots. We conclude that: (i) disagreements about the role of density could be attributed to methodological differences; (ii) a single process cannot underpin enumeration of static and moving objects; and (iii) movement improves estimates of number.

O2B-9: Optimal audiovisual integration of object appearance and impact sounds in human perception of materials

Waka Fujisaki

Human Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Japan

w-fujisaki@aist.go.jp

Naokazu Goda

Division of Sensory and Cognitive Information, National Institute for Physiological Sciences, Japan

Isamu Motoyoshi

Department of Life Sciences, The University of Tokyo, Japan

Hidehiko Komatsu

Division of Sensory and Cognitive Information, National Institute for Physiological Sciences, Japan

Shin'ya Nishida

NTT Communication Science Laboratories, Nippon Telegraph & Telephone Corporation, Japan

Suppose you find an elegant wine glass in a store. Although it looks like a high-quality item, tapping on it with your finger unexpectedly produces a dull sound, and you realize that it is a cheap plastic imitation. Perception of the material of which an object is made is a fundamental function of the human brain for evaluating the value of objects in the real world. Nevertheless, the underlying computational and neural mechanisms remain largely unknown. In our previous report (Nishida et al., 2012 ECVP), we psychophysically examined how information about material category and material property is combined across different sensory modalities, and found dramatic audiovisual interactions—e.g., the appearance of ceramic paired with a paper sound was perceived as plastic. In the present report, we further explore the computational principle of the observed audiovisual integration. Our data indicate that ratings of material category likelihood follow a multiplicative integration rule, in that only the categories consistent with both the visual and the auditory stimuli are judged to be likely, whereas ratings of material properties, such as roughness and hardness, follow a weighted average rule. Despite the difference in their integration calculations, both rules can be interpreted as optimal Bayesian integration of independent audiovisual estimates.
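For concreteness, the two integration rules can be written down directly: a multiplicative rule for category likelihoods and a weighted average for property ratings. The sketch below is purely illustrative; the category set, likelihood values, and weight are hypothetical and are not taken from the study's data.

```python
import numpy as np

categories = ["ceramic", "plastic", "paper", "metal"]  # hypothetical category set

# Hypothetical per-modality category likelihood ratings (e.g., vision of a
# ceramic-looking object paired with a paper-like impact sound).
visual_likelihood   = np.array([0.60, 0.25, 0.10, 0.05])
auditory_likelihood = np.array([0.05, 0.30, 0.60, 0.05])

# Multiplicative rule for category judgments: only categories consistent
# with BOTH modalities keep a high likelihood after normalization.
combined_category = visual_likelihood * auditory_likelihood
combined_category /= combined_category.sum()

# Weighted-average rule for property ratings (e.g., hardness on a 1-7 scale),
# with a hypothetical visual weight of 0.6.
visual_hardness, auditory_hardness, w_visual = 6.0, 2.0, 0.6
combined_hardness = w_visual * visual_hardness + (1 - w_visual) * auditory_hardness

print(dict(zip(categories, np.round(combined_category, 2))), combined_hardness)
```

With these hypothetical numbers the multiplicative rule makes "plastic" the most likely combined category, mirroring the ceramic-appearance-plus-paper-sound example in the abstract.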

Supported by Grant-in-Aid for Scientific Research on Innovative Areas (No. 22135004, 22135007) from the Ministry of Education, Culture, Sports, Science and Technology, Japan.

O3A-1: Effect of changing vertical disparity on perceived trajectory of moving object Hirohiko Kaneko

Department of Information Processing, Tokyo Institute of Technology, Japan

kaneko@ip.titech.ac.jp Takashi Adachi

Department of Information Processing, Tokyo Institute of Technology, Japan Toru Maekawa

Department of Information Processing, Tokyo Institute of Technology, Japan

Theoretically, one can estimate the direction of an object relative to the head using the vertical disparity produced by the object. However, several reports have described that vertical disparity has little or no effect on the perceived visual direction of a static object. Our previous research has shown that changing the vertical disparity of a stimulus affects the perception of absolute distance and of head movement. These findings suggest that the visual system might detect the temporal change of vertical disparity and use it as information for visual direction and for motor control. This study investigated whether changing vertical disparity affects the perceived trajectory of a moving object and eye movements to it. The stimulus was a circular disk, a set of horizontal lines or a cloud of random dots. We independently manipulated the changing horizontal disparity, changing size and changing vertical disparity of the stimulus to simulate an object approaching to the right or left of the observer. We measured observers' perceived trajectory and eye movements to the stimulus. The results showed that changing vertical disparity affected the perceived trajectory of the object. The effect was consistent among observers and with the geometry of the situation, although its magnitude was small.

Supported by a Grant-in-Aid for Scientific Research (No. 24500236) from the Japan Society for the Promotion of Science.

O3A-2: Sensory and decisional factors involved in resolving audio-visual motion sequences

Philip M Grove

School of Psychology, The University of Queensland, Australia p.grove@psy.uq.edu.au

Grove, Ashton, Kawachi, and Sakurai (2012) investigated the sensory and decisional factors underlying the audiovisual "stream/bounce" illusion using signal detection theory. Observers distinguished between objectively streaming and bouncing events, either with a transient sound presented at the moment of coincidence or without a sound. Sensitivity (d') was the same between the sound and no-sound conditions, but the criterion (c) changed significantly across these conditions, suggesting that decisional processes underlie the illusion. However, Grassi and Casco (2012) reported evidence for both sensory and decisional processes in the illusion. One important difference between these studies is that Grassi and Casco required participants to make judgments about the perceived overlap of the targets near the point of overlap, while Grove et al. had observers report which sequences depicted streaming or bouncing. Grassi and Casco's task restricted judgments to epochs close to the time of coincidence, whereas Grove et al. had participants respond after viewing the entire motion sequence. To investigate this discrepancy, I manipulated the duration for which the targets were visible after coinciding. Sensitivity was the same between sound and no-sound conditions across duration conditions, but the criterion changed significantly between sound conditions. These data are further evidence for decisional, but not sensory, processes in our version of the stream/bounce illusion.
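For readers less familiar with the signal detection measures referred to here, sensitivity (d') and criterion (c) can be computed from hit and false-alarm rates as in the sketch below. The trial counts are hypothetical and the helper is a generic textbook formulation, not the study's analysis script.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d', c) from trial counts, with a log-linear correction
    to avoid infinite z-scores when a rate is 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    return d_prime, criterion

# Hypothetical counts: 'bounce' reports to objective bounces (hits) and to
# objective streams (false alarms), with and without the sound.
print(sdt_measures(70, 30, 30, 70))   # no sound
print(sdt_measures(85, 15, 45, 55))   # sound: similar d', shifted criterion
```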

O3A-3: When a Mackintosh chair looks like a young woman: A study on visual object personification

Ryosuke Niimi

The University of Tokyo, Japan

niimi@L.u-tokyo.ac.jp

Humans like to personify non-human objects, and just a single observation might lead to personification. The current study examined underlying factors related to the personification of common objects. In Experiment 1, participants were asked to personify 32 cars/chairs. For every object, its gender and age were reported, and confidence in the personification judgments was also rated. Overall, participants successfully personified objects, and mean personified gender and age varied across objects. There was a significant positive correlation (r = 0.44-0.49) between gender confidence and age confidence. For cars, color saturation was significantly correlated with personified gender and age (e.g., cars with saturated colors were, on average, personified as young females). Although personification can elicit positive affect, likability ratings were not correlated with any personification measure. Previous studies reported that individuals attribute a user/owner to common objects; thus, knowledge of a typical user/owner of an object might dominate personification. In Experiment 2, participants estimated the age/gender of a car/chair user. Estimated age/gender was correlated with personified age/gender (r = 0.63-0.87). Overall, the results showed that personification of visual objects is influenced by an observer's knowledge base, while visual features (color) also play a role.

O3A-4: Integration of multiple spatial frequency channels in V1 disparity detectors Mika Baba

Department of Frontier BioSciences, Osaka University, Japan rappaman@Jbs.osaka-u.ac.jp Kota S Sasaki

Department of Frontier BioSciences, Osaka University, Japan Izumi Ohzawa

Department of Frontier BioSciences, Osaka University, Japan

A small difference between the images on the left and right retinas, called 'binocular disparity', produces phase shifts of different magnitudes depending on the spatial frequency (SF). While many binocular neurons in the primary visual cortex (V1) are selective to binocular disparity, it has remained unclear whether information from different SF channels is integrated in single V1 cells such that these cells can be tuned to the same disparity across a broad range of SFs. To address this question, we performed extracellular single-unit recordings from V1 of anesthetized and paralyzed cats. First, some complex cells showed binocular interaction only for stimuli whose SFs matched between the two eyes; that is, they did not show it for unmatched stimuli. This characteristic cannot be explained by a single disparity detector. Therefore, the binocular receptive fields of a subset of single complex cells in V1 may be constructed by pooling multiple disparity detectors tuned to different SFs. Second, binocular receptive fields reconstructed in a restricted SF range were tuned to the same disparity across different SF bands. This suggests that disparity detectors for different SF channels are pooled in these neurons to encode the same disparity across a broad range of SFs.

O3A-5: V1 mechanisms related to encoding global visual motion

Choongkil Lee

Seoul National University, Korea cklee@snu.ac.kr Kayeon Kim

Seoul National University, Korea Eunyoung Kim

Seoul National University, Korea Taehwan Yoon Seoul National University, Korea

A fundamental goal of the central visual system is thought to be the integration of global stimulus information, such as motion, from the activity of neurons with local receptive fields (RFs). In the current presentation, we summarize experimental results on the characteristics of neuronal activity in primate V1 related to the processing of global visual motion. A sequence of two identical stationary Gabor stimuli, optimal for the cell under study when presented over the RF, was used to examine the modulation of spike activity of V1 neurons in fixating macaque monkeys. The first stimulus was presented outside the classical RF, and after a variable temporal interval the second was presented over the RF. The spatial location and timing of the first stimulus with respect to the second determined the direction and speed of visual motion. Although the first stimulus did not evoke a spike response by itself, it evoked a robust change in the local field potential (LFP), and this subthreshold change modulated spike activity in response to the RF stimulus. The modulation was selective for sequence direction and speed, and differed across cortical layers. These results suggest that V1 neurons participate in encoding global visual motion based on fine-tuned surround interaction.

Supported by the Cognitive Neuroscience Program of the Korea Ministry of Science, ICT and Future Planning.

O3A-6: Non-uniform flow aftereffect produced by adaptation to sparse uni-directional motion flows

Kazushi Maruya

NTT Communication Science Laboratories, Nippon Telegraph & Telephone Corporation, Japan

maruya.kazushi@lab.ntt.co.jp Shin'ya Nishida

NTT Communication Science Laboratories, Nippon Telegraph & Telephone Corporation, Japan

The motion flow produced by natural scenes is often complex and spatially non-uniform. Whereas the visual system has an excellent ability to recognize complex non-uniform motion flows (e.g., liquid), little is known about the underlying neural computation. As a potential clue to the non-uniform flow processing mechanism, we recently reported a non-uniform flow aftereffect (NUFA; Maruya & Nishida, VSS2014). This aftereffect is characterized by an elevation of the minimum direction difference required to see a checkerboard structure in an array of bi-directionally moving Gabor plaids after prolonged viewing of similar non-uniform flows. NUFA can be ascribed to the adaptation of non-uniform flow detectors in the visual system. Here we further report that NUFA is also generated by adaptation to uni-directional sparse motion fields: i.e., a motion-defined checkerboard pattern in which half of the squares contained flow in a common direction while the remaining squares were stationary or uniform grey. This finding is consistent with the idea that non-uniform motion flow is represented as multiple spatial maps for different directions, each being an amplitude-modulation map of motion signals in a given direction.

O3A-7: Perception of global image contrast is predicted by the same spatial integration model of gain control as detection and discrimination

Tim Simon Meese

Aston University, UK t.s.meese@aston.ac.uk Daniel H Baker Aston University, UK Robert J Summers Aston University, UK

How are the image statistics of global image contrast computed? We addressed this question using a contrast-matching task for checkerboard configurations of 'battenberg' micro-patterns, in which the contrasts and spatial spreads of interdigitated pairs of micro-patterns were adjusted independently. Test stimuli were 20 x 20 arrays with clusters of various widths, matched to standard patterns of uniform contrast. When one of the interdigitated patterns had much higher contrast than the other, it alone determined global pattern contrast, as in a max() operation. Crucially, however, the full matching functions had a curious intermediate region in which adding low contrast to one pattern, at intermediate contrasts of the other, caused a paradoxical reduction in perceived global contrast. None of the following models predicted this: RMS, energy, linear sum, max, Legge and Foley. However, a gain control model incorporating wide-field integration and suppression of nonlinear contrast responses predicted the results with no free parameters. This model was derived from experiments on summation of contrast at threshold, and on masking and summation effects in dipper functions; those experiments were also inconsistent with the failed models above. Thus, we conclude that our contrast gain control model (Meese & Summers, 2007) describes a fundamental operation in human contrast vision.
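The class of model referred to, in which pointwise nonlinear contrast responses are pooled over a wide field separately for excitation and suppression before division, can be sketched generically as follows. The exponents and saturation constant are placeholders rather than the published parameter values of Meese & Summers (2007); the sketch only illustrates the structure of the computation.

```python
import numpy as np

def pooled_contrast_response(contrast_map, p=2.4, q=2.0, z=1.0):
    """Generic wide-field contrast gain control: nonlinear contrast
    responses are pooled over the whole image for excitation (exponent p)
    and for suppression (exponent q) before division. The parameter
    values here are placeholders, not the published ones."""
    c = np.abs(contrast_map)
    excitation = np.sum(c ** p)
    suppression = z + np.sum(c ** q)
    return excitation / suppression

# Example: a 'battenberg'-style checkerboard in which half of the clusters
# carry 32% contrast and the interdigitated half carry 4% contrast.
high, low = 0.32, 0.04
checks = np.where(np.indices((20, 20)).sum(axis=0) % 2 == 0, high, low)
print(pooled_contrast_response(checks))
```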

Supported by Engineering and Physical Sciences Research Council, UK.

O3A-8: Modelling the human visual cortex, a complete model from visual stimulus to BOLD measurement

Mark M Schira

School of Psychology, University of Wollongong, Australia Neuroscience Research Australia, Australia mschira@uow.edu.au Peter Robinson

School of Physics, University of Sydney, Australia Michael J Breakspear

QIMR Berghofer Medical Research Institute, Australia Kevin M Aquino Neuroscience Research Australia, Australia School of Physics, University of Sydney, Australia

Functional magnetic resonance imaging (fMRI) has become a standard tool in vision science, and many properties of visual cortex are fairly well understood and modelled, such as retinotopic organisation and contrast response functions.

However, fMRI is an indirect measure resting upon the blood oxygen level dependent (BOLD) signal. Hence, there are many steps from an experimental manipulation, such as a visual stimulus, to the BOLD response. As our understanding of each of these processes matures, more and increasingly sophisticated models have been proposed. We present a framework that integrates an assembly of existing models and generates concrete, applicable predictions of the BOLD measurements for any experiment with a simple visual stimulus. We combine previously independent models of the spatiotemporal properties of the BOLD response (Aquino et al., 2012, PLoS CB) with an algebraic model of the transformation of visual space onto early visual cortex (Schira et al., 2010, PLoS CB) and further onto an average cortical surface (Benson et al., Current Biology). This allows concrete (FreeSurfer average brain) and detailed predictions of responses in space and time for an arbitrary visual stimulation (movie), providing a first-pass, bottom-up prediction for testing, validating and optimizing visual experiments and the space-time separability of the response.

Supported by ARC DP120100614, ARC DP130100437.

O3A-9: The effect of flicker on apparent spatial frequency Sae Kaneko

Department of Psychology, University of California, San Diego, USA Japan Society for the Promotion of Science, Japan sakaneko@ucsd.edu Deborah Giaschi

Department of Ophthalmology and Visual Sciences, University of British Columbia, Canada Stuart Anstis

Department of Psychology, University of California, San Diego, USA

Adaptation to spatially uniform luminance flicker can raise the apparent spatial frequency of a coarse test grating. After adaptation to a flickering Gaussian blob, two Gabor patches were presented for 500 ms (adapted and non-adapted side). Subjects judged which Gabor had higher spatial frequency. The adapting flicker rate was 0.5 Hz (slow) or 10 Hz (fast), and the spatial frequency of the test gratings was 0.5 cpd or 4 cpd (subjective contrasts of adapted and matching gratings were kept equal). Fast adapting flicker raised by ~10% the perceived spatial frequency of the 0.5 cpd but not the 4 cpd test gratings. This effect was not seen when we adapted to equiluminant color flicker. Slow adapting flicker had no effect. Also, spatially uniform 10Hz flicker, transparently superimposed on a 0.5 cpd grating, raised the grating's apparent spatial frequency by ~10% (this is not the frequency-doubling illusion). The 4 cpd grating showed no effect.

These interactions between flicker and spatial frequency imply that perceived spatial frequency is determined by a balance between transient/magno channels, which respond to high temporal and low spatial frequencies, and sustained/parvo channels, which respond to low temporal and high spatial frequencies plus color.

O3B-1: Evolving the stimulus to fit the brain: A genetic algorithm reveals the brain's feature priorities in visual search

Erik Van der Burg

University of Sydney, Australia erik.vanderburg@sydney.edu.au John Cass University of Western Sydney, Australia Jan Theeuwes

VU University, Amsterdam, The Netherlands

David Alais University of Sydney, Australia

Finding a target in clutter is a common task that has been studied for 30 years using the visual search paradigm. A reliance on factorial experimental designs, however, has limited visual scene complexity to impoverished displays. Here we examine search in complex displays using a genetic algorithm (GA). Human subjects searched a series of complex displays, and those displays supporting the fastest search were selected to reproduce ('survival of the fittest'). Their display properties ('genes') were crossed and combined to create a new generation of 'evolved' displays. Displays evolved quickly over generations towards a stable, efficiently searched array. Contrary to current models, evolution was serial, not parallel: colour evolved first, followed by orientation. The evolved displays also contained spatial patterns suggesting a coarse-to-fine search strategy not predicted by current models. The GA, therefore, not only simplifies evaluation of complex search spaces; it also adapts the display to the brain and reveals the brain's own search strategies.
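The selection-and-recombination loop described above can be sketched as follows. The display encoding (one colour/orientation pair per item), the population size, the mutation rate, and the search-time function are all hypothetical stand-ins for illustration; in the experiment the fitness value is the observer's actual search time.

```python
import random

N_ITEMS, POP_SIZE, N_GENERATIONS = 36, 20, 50

def random_gene():
    # One display item: a (colour, orientation) pair.
    return (random.choice(["red", "green"]), random.uniform(-40.0, 40.0))

def random_display():
    # A display 'genome' is one gene per search item.
    return [random_gene() for _ in range(N_ITEMS)]

def search_time(display):
    # Stand-in fitness: in the experiment this is the human reaction time
    # for searching the actual display; here it is a toy cost function.
    return sum(1.0 for colour, ori in display if colour == "red" or abs(ori) < 10)

def offspring(parent_a, parent_b, mutation_rate=0.05):
    # Cross the two parents gene by gene, then apply random mutations.
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    return [random_gene() if random.random() < mutation_rate else gene
            for gene in child]

population = [random_display() for _ in range(POP_SIZE)]
for _ in range(N_GENERATIONS):
    population.sort(key=search_time)               # fastest-searched displays first
    parents = population[:POP_SIZE // 2]           # 'survival of the fittest'
    children = [offspring(random.choice(parents), random.choice(parents))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best simulated search time:", search_time(min(population, key=search_time)))
```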

O3B-2: Visual Search in heterogeneous displays is not a categorical process: Evidence from genetic algorithms

Garry Kong University of Sydney, Australia garry.kong@sydney.edu.au David Alais University of Sydney, Australia Erik van der Burg University of Sydney, Australia

In this study we examine how observers search heterogeneous displays that are far too complex to understand using factorial designs. In two experiments, participants were asked either to search for a vertical target line amongst 23 distractor lines of varying orientations (ranging from -40° to 40° from vertical), or for a pink target line amongst 23 distractor lines of varying colours (from white to red). We evolved the search displays using a genetic algorithm, in which the displays with the fastest RTs ('survival of the fittest') were selected and recombined to create new displays. In both experiments, search times declined over generations. This decline was due to a reduction of distractors within a certain orientation and colour range. Interestingly, within this window of interference, the decrease in distractors was strongest for distractors closest to the target orientation or colour, and weaker for distractors further away. The results suggest that top-down driven visual search is not a strict categorical process as proposed by some models of visual search.

O3B-3: Human visual search performance for military issued camouflaged targets

Olivia Emily Matthews

School of Experimental Psychology, University of Bristol, UK Olivia.Matthews@bristol.ac.uk Tim Volonakis

School of Experimental Psychology, University of Bristol, UK Innes C Cuthill

School of Biological Sciences, University of Bristol, UK Nicholas E Scott-Samuel

School of Experimental Psychology, University of Bristol, UK Roland Baddeley

School of Experimental Psychology, University of Bristol, UK

There is a paucity of systematic research investigating object detection within the military context. Here, we establish baseline human detection performance for five standard military-issued camouflage patterns. Stimuli were drawn from a database of 1242 calibrated images of a mixed deciduous woodland environment in Bristol, UK. Images either contained a PASGT helmet or did not. Twenty subjects discriminated between the two image types in a temporal 2AFC task (500 ms presentation for each interval). Cueing (cued/not cued to target location), colour (colour/greyscale) and distance from the observer (3.5/5/7.5 m) were manipulated, as was the helmet camouflage pattern. A Generalized Linear Mixed Model revealed significant interactions between all variables on participant performance, with greater accuracy when stimuli were in colour and when the target location was cued. There was also a clear ranking of patterns in terms of effectiveness of camouflage. We also compared the results to a computational model based on low-level vision, with encouraging agreement. Our methodology provides a controlled means of assessing any camouflage in any environment, and the potential to implement a machine vision solution to assessment. In this instance, we show that existing solutions to the problem of concealment on the battlefield are not equally effective.

O3B-4: The many ways to hide: a computer vision-based model of visual similarity between camouflage patterns

Laszlo Talas

University of Bristol, UK l.talas@bristol.ac.uk Roland Baddeley

School of Experimental Psychology, University of Bristol, UK Innes C Cuthill

School of Biological Sciences, University of Bristol, UK

Humans use camouflage to disguise themselves in battle environments; however, the function of uniform patterns goes beyond concealment—it also helps to distinguish between friends and foes. The conflicting forces of concealment vs recognition have resulted in an extraordinary number of camouflage patterns around the world. Establishing a metric of visual similarity between textures is essential for understanding this diversity and for detecting divergence or convergence in pattern designs over time. As our current database holds over 600 patterns, we devised an automatic, computer vision-based system to compare camouflage uniform patterns. The model uses mixtures of factor analysers to cluster pre-processed images of patterns into classes. We show that the posterior probabilities of a pattern belonging to a set of classes can be used as a similarity metric between patterns. We tested the model against human performance in a task in which participants were asked to group 64 randomly selected camouflage patterns using various kinds of information, such as colour and structure. We compare the model's results to human performance for both colour- and pattern-based metrics, and show the strengths and limitations of the approach.
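The similarity metric described, based on the posterior probabilities of a pattern belonging to a common set of texture classes, can be sketched as below. Because a mixture-of-factor-analysers implementation is not assumed here, a plain Gaussian mixture stands in for the clustering step, and the 'image features' are random placeholders rather than real pattern images.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder feature vectors for image patches drawn from many camouflage
# patterns (in the real model these come from pre-processed pattern images
# and the clustering uses mixtures of factor analysers, not a plain GMM).
features = rng.normal(size=(600, 16))

gmm = GaussianMixture(n_components=8, random_state=0).fit(features)

def pattern_signature(patches):
    """Average posterior class probabilities over a pattern's patches."""
    return gmm.predict_proba(patches).mean(axis=0)

def pattern_similarity(patches_a, patches_b):
    """Similarity between two patterns = overlap of their posterior signatures."""
    p, q = pattern_signature(patches_a), pattern_signature(patches_b)
    return float(np.sum(np.sqrt(p * q)))   # Bhattacharyya coefficient

print(pattern_similarity(rng.normal(size=(40, 16)), rng.normal(size=(40, 16))))
```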

O3B-5: The collinear masking effect can emerge as fast as 40 ms Li Jingling

Graduate Institute of Neural and Cognitive Sciences, China Medical University, China jlli@mail.cmu.edu.tw Ching-Wen Chiu

Graduate Institute of Neural and Cognitive Sciences, China Medical University, China

A salient target is usually easier to find in visual search. However, perceptual grouping, collinearity in particular, can reverse this observation. The phenomenon, called the collinear masking effect, was observed for a search display filled with horizontal bars in which one column of bars was vertical (the collinear column). This collinear column is thus salient and well grouped. The task was to discriminate a small oriented bar located either in the collinear column or in one of the other columns in the background. The collinear masking effect refers to the finding that discrimination was slower for a target in the collinear column than for one in the background. Since feature saliency affects visual search relatively early (e.g., 40-70 ms) whereas feature conjunctions act late (e.g., 150-200 ms) in the processing time course, we examined the time course of the collinear masking effect in this study. We found that the collinear masking effect was reliable at all limited presentation durations (40, 70, 150, or 300 ms). This result implies that the collinear masking effect occurred as soon as saliency was calculated, suggesting that collinear grouping interferes with visual search very early in the information processing stream.

Supported by NSC101-2410-H-039-001-MY2.

O3B-6: Habitual video gamers can still learn from new games Michael D Patterson

Nanyang Technological University, Singapore mdpatterson@ntu.edu.sg Han Jing Yang

Nanyang Technological University, Singapore Qinyuen Wong Nanyang Technological University, Singapore

The effect of playing video games on cognition has been studied using two methods: cross-sectional, by comparing habitual gamers with non-gamers, and longitudinal, by comparing performance of non-gamers before and after 10-50 hours of video game playing. We combined these methods by examining training effects on non-gamers, action gamers, or other gamers (without action-game experience). Each group was then subdivided into three training groups and played one of three video games, a hidden object game, an action game, or a physics-based puzzle game for twenty hours over four weeks. Participants were tested before and after training on measures of visual attention and perception. The training video games were chosen based on the degree they resembled habitually played games. We expected action gamers would only show post-training differences after playing the puzzle or hidden object game. For other gamers, we expected only effects from playing action games, and for non-gamers we expected training effects from all three video games. The results largely matched our expectations for attentional measures, with action gamers significantly improving in alerting and executive attention only after playing non-action games. Thus, even habitual gamers can benefit from playing video games of a different style than they usually play.

Supported by DSO grant to Michael D. Patterson.

O3B-7: Eye of origin information does not always facilitate target search

Sunny Meongsun Lee

Department of Psychology, University of Hong Kong, Hong Kong sunnylee@connect.hku.hk Yuk Sheung Yeung

Department of Psychology, University of Hong Kong, Hong Kong Hiu Mei Chow

Department of Psychology, University of Hong Kong, Hong Kong Department of Psychology, University of Massachusetts Boston, USA Chia-huei Tseng Department of Psychology, University of Hong Kong, Hong Kong

Despite being unconsciously experienced, eye-of-origin information (Zhaoping, 2008) and an invisible collinear structure (Tseng & Chow, 2013) have both been reported to guide attention in visual search. Here we investigated the interaction between the facilitative effect of ocularity and the impairment from collinearity. Observers searched for a target (a tilted gap on a bar) in a 9 x 9 display of horizontal bars, except that the bars in one column were vertical (the collinear item) and another column was presented to the other eye (the ocular item). The target could overlap with both, either, or neither of these items. Surprisingly, search performance was worsened when the target overlapped with the ocular item, opposite to Zhaoping's (2008) finding. To isolate the ocular effect from collinear suppression, we removed the collinear column and adopted a homogeneous display containing only horizontal bars, with an ocular item comprising parallel horizontal bars. The worsened search performance persisted when the target overlapped with the ocular item. Our results imply that the effect of ocular information on visual search may be task-dependent and may interact with the perceptual organization of the display and the relative target size.

O3B-8: Influence of divided visual attention on the visual control of steering toward a goal

Rong Rong Chen

University of Hong Kong, Hong Kong rainerrchen@gmail.com Li Li

University of Hong Kong, Hong Kong

We examined the influence of divided visual attention on the visual control of steering toward a goal.

The display (113° H x 88° V) simulated a participant traveling at 2 m/s or 15 m/s over a textured ground. Participants used a joystick to control the curvature of their traveling path to steer toward a target post while simultaneously tracking one dot (low attentional load) or three dots (high attentional load) among eight dots that moved randomly inside a circle (radius = 3.5°) on top of the target post. Participants' virtual heading was offset from their straight ahead by 10°, such that aligning the target with their straight ahead would cause a 10° heading error. Across 20 participants, tracking accuracy decreased with attentional load, and more so at 15 m/s than at 2 m/s. For steering performance in trials with accurate tracking, the target-heading angle decreased faster at 15 m/s than at 2 m/s. Across both speeds, the target-heading angle increased with attentional load during the initial 2-6 s of steering but converged to participants' optimal performance near the end of the 10 s trial.

We conclude that participants have more difficulty with the high attention-demanding task when traveling at high than low speed. Divided attention affects only the early stage of the visual control of steering.

Supported by Hong Kong Research Grant Council, HKU 7480/10H and 7482/12H.

O3B-9: Correlation between amplitude and phase of SSVEP as an attention measure Satoshi Shioiri

Research Institute of Electrical Communication, Tohoku University, Japan Graduate School of Information Sciences, Tohoku University, Japan shioiri@riec.tohoku.ac.jp Yoshiyuki Kashiwase

Graduate School of Information Sciences, Tohoku University, Japan Kazumichi Matsumiya

Research Institute of Electrical Communication, Tohoku University, Japan Graduate School of Information Sciences, Tohoku University, Japan Ichiro Kuriki

Research Institute of Electrical Communication, Tohoku University, Japan Graduate School of Information Sciences, Tohoku University, Japan

We previously reported the time course of attention measured with the amplitude and phase coherence of the steady-state visual evoked potential (SSVEP). That study found that the modulation latency of phase coherence, rather than that of amplitude, was consistent with the latency of behavioral performance in an experiment in which the observer shifted attention to a location after a cue presentation. In the present study, we analyzed the time course of visual attention shifts using the correlation between amplitude and phase signals across individual variations. This approach was motivated by our previous finding that the two measures correlate more strongly at the location of the attentional focus, so that the degree of correlation can be used to estimate attentional modulation. The present analysis showed that the correlation index increased over time after cue presentation, as did amplitude and phase coherence. Since its time course is more similar to that of phase coherence, the increase in correlation can be attributed to an increase in amplitude caused by the increase in phase coherence. These findings are consistent with the assumption that attentional modulation comprises two factors: gain and phase coherence.
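For concreteness, the two SSVEP measures, the amplitude and the inter-trial phase coherence at the tagging frequency, and their correlation across time windows might be computed roughly as in the sketch below. The sampling rate, tag frequency, and data are hypothetical, and this is not the authors' analysis pipeline.

```python
import numpy as np

fs, f_tag = 500.0, 12.0                  # hypothetical sampling rate and tag frequency
rng = np.random.default_rng(1)
eeg = rng.normal(size=(100, 40, 250))    # trials x time-windows x samples (placeholder)

def ssvep_measures(epochs):
    """Amplitude and inter-trial phase coherence at the tag frequency
    for one time window (crude single-frequency Fourier coefficient)."""
    t = np.arange(epochs.shape[-1]) / fs
    coeff = (epochs * np.exp(-2j * np.pi * f_tag * t)).mean(axis=-1)
    amplitude = np.abs(coeff).mean()                          # mean single-trial amplitude
    phase_coherence = np.abs((coeff / np.abs(coeff)).mean())  # inter-trial coherence
    return amplitude, phase_coherence

amp, coh = zip(*(ssvep_measures(eeg[:, w, :]) for w in range(eeg.shape[1])))

# Correlate the two time courses across windows, as in the attention index.
print(np.corrcoef(amp, coh)[0, 1])
```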

Supported by the Core Research for Evolutional Science and Technology of the Japan Science and Technology Corporation.

O3C-1: Neural correlates of vection under wide-view stereoscopic visual display Atsushi Wada

Universal Communication Research Institute, National Institute of Information and Communications Technology (NICT), Japan

Center for Information and Neural Networks (CiNet), NICT and Osaka University, Japan a-wada@nict.go.jp Yuichi Sakano

Universal Communication Research Institute, National Institute of Information and Communications Technology (NICT), Japan

Center for Information and Neural Networks (CiNet), NICT and Osaka University, Japan Hiroshi Ando

Universal Communication Research Institute, National Institute of Information and Communications Technology (NICT), Japan

Center for Information and Neural Networks (CiNet), NICT and Osaka University, Japan

In the present study, we conducted an fMRI experiment to investigate the neural substrates of visually induced self-motion perception (vection), using a custom-developed wide-view stereo display system to elicit a robust perception of self-motion. We used random-dot stimuli that were either static, in random motion, or in coherent motion simulating forward self-motion. The stimuli were presented stereoscopically or non-stereoscopically (binocularly), in four stimulus size conditions. An ROI analysis combined with a psychophysical measurement of subjective vection strength identified significant correlations between the neural responses and the vection ratings in hMT+, V6, the precuneus motion area (PcM), the cingulate sulcus visual area (CsV), and the posterior insular vestibular cortex (PIVC). The correlation was significantly higher for V6, PcM, and CsV compared with V1, and for CsV compared with hMT+. Furthermore, a multi-voxel correlation analysis found that the spatial response pattern within each of hMT+, V6, and CsV differed significantly among motion patterns. Taken together, these findings suggest that the optic-flow responsive regions in the medial wall, especially CsV, may play an important role in giving rise to vection.

This study was conducted as part of "Development of technology for 3D image without stereo glasses," commissioned by the Ministry of Internal Affairs and Communications of Japan.

O3C-2: Far from the skies, close to the ground—why do we misperceive distances? Oliver M Toskovic

Laboratory for Experimental Psychology, Faculty of Philosophy, University of Belgrade, Serbia

otoskovi@f.bg.ac.rs

In previous research we showed that participants tend to equalize shorter vertical with longer horizontal distances, meaning that vertical distances are perceived as longer. This anisotropy might reflect a benefit for action: in order to reach something towards the zenith we must oppose gravity, and if the perceived distance is longer, we invest more effort and oppose gravity more easily. If this is true, action towards the ground would be aligned with gravity, and perceived distance should be shorter. We performed two experiments in which 27 participants (14+13) had the task of equalizing the perceived distances of stimuli in three directions (horizontal, tilted 45 degrees, and vertical). In the first experiment the tilted and vertical directions were towards the zenith, and in the second towards the ground. One of the stimuli was the standard, and participants matched the distances of the other two to the standard. Results of the first experiment show a significant difference in matched distances among all three viewing directions. In the second experiment, results show a significant difference between the horizontal and the other two directions. Distances towards the zenith were perceived as longer, while distances towards the ground were perceived as shorter. Both of these findings are in line with our hypothesis.

Supported by Ministry of Education and Science of Republic of Serbia, project number ON179033.

O3C-3: Relationship between maximum screen disparity for viewing stereoscopic images without discomfort and individual variation in visual function

Haruki Mizushina

Universal Communication Research Institute, National Institute of Information and Communications Technology (NICT), Japan

mizushina@nict.go.jp Hiroshi Ando

Universal Communication Research Institute, National Institute of Information and Communications Technology (NICT), Japan Center for Information and Neural Networks (CiNet), NICT and Osaka University, Japan

Three-dimensional displays sometimes cause visual fatigue and discomfort. Some viewers experience severe discomfort when watching stereoscopic content, whereas others do not, and the source of these individual differences remains unidentified. To examine this source, we focused on individual differences in visual function, especially in vergence, accommodation, and their interaction. We measured the vergence and accommodative responses to stereoscopic and real targets, as well as the AC/A (accommodative convergence/accommodation) and CA/C (convergence accommodation/convergence) ratios, which express the degree of cross-interaction between the vergence and accommodation mechanisms. In addition to visual function, we measured each participant's maximum screen disparity for viewing stereoscopic targets without discomfort, to evaluate the correlation between visual function and tolerance for excessive screen disparity. Our results showed that participants whose AC/A ratios were closer to one had a smaller limit of screen disparity for stereoscopic viewing without discomfort. The limit of screen disparity correlated with neither the CA/C ratio nor the accommodative response to the stereoscopic target. These results suggest that people whose accommodative convergence is optimized for the real-world environment (AC/A = 1) are susceptible to excessive screen disparity.

O3C-4: Effect of number of surfaces on perceived depth between two outermost surfaces of a stereo-transparency stimulus

Koichi Shimono

Graduate School of Marine Science and Technology, Tokyo University of Marine Science and Technology, Japan

shimono@kaiyodai.ac.jp Saori Aida

Graduate School of Marine Science and Technology, Tokyo University of Marine Science and Technology, Japan Wa James Tam

University of Ottawa, Canada

We examined the effect of the number of surfaces on the magnitude of perceived depth between the two outermost surfaces of a random-dot stereogram depicting n parallel, overlapping, transparent stereoscopic surfaces (POTS). Experiment 1 showed that the perceived depth of a three-POTS or four-POTS stereogram was smaller than that of a two-POTS stereogram, even when they had the same horizontal disparities. Experiment 2 showed that the viewer-matched disparity of the two outer surfaces of a three-POTS stereogram was smaller than that of the two outermost surfaces of a four-POTS stereogram, even when their perceived depths were the same. These results indicate that the magnitude of perceived depth can be reduced as the total number of surfaces increases, even when the two outermost surfaces have the same disparity. A cross-correlation analysis, whose operation is assumed to represent an early disparity-detection process, showed that the cross-correlation function profiles corresponded well to the depicted surface locations of the POTS stereograms used in the studies, but not to the perceived depths. We suggest that the perceived depth of an n-POTS stereogram is mediated by a higher-order process whose output, representing perceived depth magnitude, is weakened as the number of surfaces increases.
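The early disparity-detection stage assumed in the analysis, a cross-correlation of the two half-images as a function of horizontal shift, can be illustrated with the sketch below. The random-dot images, dot density, and disparities are hypothetical toy values; peaks in the resulting function stand in for the depicted surface locations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-surface (two-POTS) random-dot stereogram: half the dots are shifted
# by +4 px and half by -4 px between the two eyes (hypothetical values).
size, n_dots = 200, 4000
left = np.zeros((size, size))
ys, xs = rng.integers(0, size, n_dots), rng.integers(0, size, n_dots)
left[ys, xs] = 1.0
shifts = np.where(rng.random(n_dots) < 0.5, 4, -4)
right = np.zeros((size, size))
right[ys, (xs + shifts) % size] = 1.0

def disparity_correlation(left_img, right_img, max_shift=10):
    """Normalized cross-correlation between the half-images as a function of
    horizontal shift; peaks mark candidate surface disparities."""
    l = left_img - left_img.mean()
    out = []
    for d in range(-max_shift, max_shift + 1):
        r = np.roll(right_img, d, axis=1) - right_img.mean()
        out.append((d, float((l * r).sum() / np.sqrt((l ** 2).sum() * (r ** 2).sum()))))
    return out

# Print the shifts whose correlation stands well above the chance level.
for d, c in disparity_correlation(left, right):
    if c > 0.2:
        print(d, round(c, 2))
```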

Supported by Sasakawa Scientific Research (23-208) and Grant-in-Aid for Scientific Research (B)(23330215).

O3D-1: Semantic processing of invisible words under interocular suppression: Evidence from evoked-related potentials

Su-Ling Yeh

Department of Psychology, National Taiwan University, Taiwan Neurobiology and Cognitive Neuroscience Center, National Taiwan University, Taiwan Graduate Institute of Brain and Mind Sciences, National Taiwan University, Taiwan suling@ntu.edu.tw Jifan Zhou

Department of Psychology, National Taiwan University, Taiwan Kuei-An Li

Department of Psychology, National Taiwan University, Taiwan Yung-Hsuan Tien

Department of Psychology, National Taiwan University, Taiwan Yi-Huan Chen

Department of Psychology, National Taiwan University, Taiwan Yih-Shiuan Lin

Department of Psychology, National Taiwan University, Taiwan Alan Pegna

Laboratory of Experimental Neuropsychology, Neuropsychology Unit, Neurology Clinic, Geneva University Hospitals, Switzerland Faculty of Psychology and Educational Science, University of Geneva, Switzerland

Whether word meaning can be accessed under interocular suppression remains debated. We examine whether invisible words presented under the continuous flash suppression paradigm lead to an N400 effect—a highly sensitive index of semantic processing. ERPs were recorded when a prime (word or non-word) was presented to one eye and the dynamic Mondrian masks to the other eye. After viewing the suppressed prime, participants were instructed to judge the lexicality of a subsequent, unsuppressed target (word or non-word), and then report whether the prime was visible. The contrast of the prime was adaptively modulated according to the visibility reports to maintain comparable contrast and number of visible and invisible trials. Based on the visibility reports, the ERP data were divided into visible and invisible trials for further analysis. We found a larger N400 effect for non-words than for words in invisible trials, whereas the lexicality effect was mainly reflected in the P600 time window in visible trials; both were frontally distributed. Consistent with our previous behavioral findings (Yang & Yeh, 2011), the current ERP results provide further psychophysiological evidence for unconscious semantic processing under CFS, and suggest that semantic processing of subliminal words occurs early at an N400 time window.

Supported by National Science Council (NSC 101-2410-H-002-083-MY3).

O3D-2: Aesthetic influences on visual awareness?: An 'inward bias' in inattentional blindness

Yi-Chia Chen

Department of Psychology, Yale University, USA

yi-chia.chen@yale.edu Brian J Scholl

Department of Psychology, Yale University, USA

Seeing is often an intrinsically aesthetic experience, since even with abstract patterns we cannot help liking or disliking what we perceive. Aesthetic experience is not often studied in vision research, however, and the few exceptions still treat it as independent from the rest of perception: seeing, in this view, simply provides the inputs to liking. In contrast, we suggest that liking is a part of seeing, and so may influence other aspects of vision. We focus on the 'inward bias': when an object with a salient 'front' is placed near the border of a frame, observers find the image more aesthetically pleasing if the object faces inward (toward the center) vs outward (away from the center). We demonstrate that this factor can also influence visual awareness itself, in an inattentional blindness task. Observers performed a demanding task on simple, sparse, framed visual displays. After a few uneventful trials, an unexpected figure with a salient 'front' appeared in the display. Many observers failed to see it, but this inattentional blindness was much less severe when the figure faced inward—even though the facing direction was physically quite subtle. Aesthetic factors may thus influence what we see in the first place.

O3D-3: Perceptual learning weakens crowding by reducing identity but not position errors

JunYun Zhang

Department of Psychology, Peking University, China zhangjunyun@pku.edu.cn YingZi Xiong

Department of Psychology, Peking University, China Cong Yu

Department of Psychology, Peking University, China

Reporting errors due to crowding may result from failing to identify the target letter (identity errors) or from misperceiving a correctly identified target letter at a flanker location (position errors); the two can be distinguished by comparing partial report (reporting the flanked target letter only) with whole report (reporting the target and flanking letters). In addition, reporting errors are associated with frequent reports of a flanker as the target letter; hence flanker substitution has been hypothesized as one source of crowding. On the other hand, crowding can be eased through perceptual learning, but how the above types of errors are affected is unclear. Here we trained observers to recognize a central target letter in a trigram, and assessed the impact of training on unflanked single-letter recognition, identity and position errors, and flanker substitution errors. Our results show that crowding learning had no significant impact on single-letter recognition or on position errors for the flanked target letter; its effect was mainly on identity errors. Crowding learning also had no significant impact on flanker substitution errors, suggesting that flanker substitution is more likely a by-product, rather than a cause, of crowding. An observer may report a more visible flanker when failing to recognize the central letter.

Supported by Natural Science Foundation of China grants 31000459 (JYZ) and 30725018 (CY).

O3D-4: Filling in the blanks in cortical blindness Juno Kim

School of Optometry and Vision Science, The University of New South Wales, Australia

juno.kim@unsw.edu.au Stuart Anstis

Department of Psychology, University of California, San Diego, USA

Lesions of any part of the visual system, from retina to cortex, can lead to areas of blindness (scotomata). This blindness does not typically result in apparent voids in visual information, because so-called "filling-in" processes serve to promote continuity in visual experience. We conducted experiments in a patient with a missing quadrant due to cortical blindness to determine whether visual completion was driven by an active or a passive process. We found that sensitivity to misalignment between two lines—a vernier—that bridged the blind quadrant was lower than that across an equal gap in the intact quadrant. Misaligned parallel lines within the intact or missing quadrants became perceptually aligned over time, with greater physical misalignments able to appear co-aligned across the blind quadrant. We also found that spots placed across the blind quadrant appeared farther apart than spots placed across the intact quadrant. When the spots were "jittered" in local motion, the perceived distances were similar across blind and intact quadrants. These observations suggest that active processes of form and motion, both at and within the perimeter of the blind quadrant, serve to preserve visual awareness in the partially blind.

P1-1: A case of impaired color knowledge but spared color perception Yuka Ohishi

Department of Clinical Neuroscience, Yamagata University Graduate School of Medicine, Japan

glucks.schlussel@gmail.com Hikaru Nagasawa

Department of Neurology, Yamagata Prefectural Central Hospital, Japan Kazuyo Tanji

Department of Clinical Neuroscience, Yamagata University Graduate School of Medicine, Japan

Naohiro Saito

Department of Clinical Neuroscience, Yamagata University Graduate School of Medicine, Japan

Kyoko Suzuki

Department of Clinical Neuroscience, Yamagata University Graduate School of Medicine, Japan

We report on a patient with a left medial occipitotemporal lesion who exhibits a total loss of color knowledge but spared color perception. A 79-year-old right-handed man showed right upper quadrantanopsia, agraphia, and impaired color knowledge following a cerebral infarction. No other neurological abnormalities were found. Detailed neuropsychological evaluations were performed one year after onset. He was able to arrange a set of color plates in the correct order in the Farnsworth Panel D-15 and performed flawlessly in the City University Colour Vision Test III, indicating that his color perception was preserved. In contrast, his performance was poor in various tasks tapping knowledge of object color. He named all the objects correctly, yet colored them with incorrectly colored pencils. He could neither choose the correct picture of an object based on its color nor recall the appropriate color name from a line drawing of an object. On the other hand, his knowledge of object form, size and function was well preserved. This suggests that knowledge of object color and knowledge of other object properties might be represented separately in the brain. This case provides evidence for a critical involvement of the left medial occipitotemporal region in color knowledge.

Supported by Grant-in-Aid for Scientific Research on Innovative Areas "Shitsukan" (No. 25135703) from MEXT, Japan.

P1-2: Population responses in the macaque inferior temporal cortex encode perceptual gloss parameters

Akiko Nishio

National Institute for Physiological Sciences, Japan nishio@nips.ac.jp Takeaki Shimokawa

ATR Neural Information Analysis Laboratories, Japan Naokazu Goda National Institute for Physiological Sciences, Japan The Graduate University for Advanced Studies (SOKENDAI), Japan Hidehiko Komatsu

National Institute for Physiological Sciences, Japan

The Graduate University for Advanced Studies (SOKENDAI), Japan

Surface gloss affects the appearance of objects and provides important information for object recognition. We previously found that there are neurons in the inferior temporal (IT) cortex of the monkey that selectively respond to specific ranges of gloss characterized by various combinations of three physical reflectance parameters (Nishio et al., 2012). In the present study, we studied how the activities of gloss-selective IT neurons are related to perceived gloss. Because a previous psychophysical study (Ferwerda et al., 2001) identified a perceptually uniform gloss space, we tested the responses of gloss-selective neurons to stimuli in this perceptual gloss space. We found that gloss-selective neurons systematically changed their responses within the perceptual gloss space, and that the distribution of their tuning directions is biased toward directions in which perceived gloss increases. Furthermore, we found that a set of perceptual gloss parameters, as well as surface albedo, can be well explained by the population activity of gloss-selective neurons, and it is likely that these parameters are coded by the gloss-selective neurons in this area to represent a variety of glosses. These results suggest that the IT cortex represents perceptual gloss space.

P1-3: Color discrimination of color vision defectives for monochromatic stimuli and natural image

Mitsuru Suzuki

Graduate School of Advanced Integration Science, Chiba University, Japan

jumatsuki@chiba-u.jp Hirohisa Yaguchi

Graduate School of Advanced Integration Science, Chiba University, Japan Yoko Mizokami

Graduate School of Advanced Integration Science, Chiba University, Japan

The color discrimination of color-vision defectives has been investigated previously. However, most research has used simple monochromatic color stimuli, and natural images, which contain various colors, have not been examined. In this research, we conducted color discrimination experiments using image stimuli and monochromatic square stimuli with color-defective and color-normal observers. Both types of stimuli were displayed on a CRT monitor in a dark room. For the monochromatic stimuli, four squares were displayed on a white background and one of them temporarily changed color; observers reported which square changed color. For the image stimuli, three images were displayed successively on a black background; the second image was always the original, and observers reported whether the first or the third image was the modulated one. The color of the stimuli was modulated in the ATD color space. Two color-normal and three deuteranomalous observers participated. Our results show that the color discrimination of color defectives was better for image stimuli than for monochromatic stimuli, whereas color normals did not show this difference. These results suggest that only color defectives use cues arising from the unique characteristics of an image, such as the variety of colors and object and material recognition, for color discrimination.

P1-4: Are cast-shadows coarsely processed? Using cue-conflict stimuli to explore perceptual weightings

P George Lovell

Division of Psychology, Abertay University, UK p.g.lovell@abertay.ac.uk Ken Scott-Brown

Division of Psychology, Abertay University, UK

Cast shadows provide a strong cue to depth (Mamassian, Knill, & Kersten, 1998). Search times for inconsistent shadows differ when images are presented upside-down compared to upright. Some studies have shown faster detection in upside-down images (Rensink & Cavanagh, 2004), while others have found faster detection for upright images with large shadow discrepancies, but the opposite pattern for small discrepancies (Lovell et al., 2009). Lovell et al. suggest that this pattern of results is explained by a coarsely scaled shadow processing mechanism that only comes into play with light-from-above stimuli. Here we report a series of experiments that explore whether shadows are coarsely processed. Stimuli featured floating discs casting shadows onto a vertical (fronto-parallel) surface. The observer was asked to identify which disc was located furthest towards the observer; the only available cue was the cast shadow. Stimuli contained a cue conflict, in which the higher and lower spatial frequency components conveyed different physical depths. By examining the location of the point of subjective equality relative to the two cue-specified depths, we can estimate perceptual weightings for the coarse- and fine-scale information; we can also examine the differences in these weightings for upright and upside-down images. In upright images, coarser cues seem to receive stronger perceptual weighting.
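The weighting estimate described, which locates the point of subjective equality (PSE) relative to the two cue-specified depths, reduces to a simple linear read-out under the assumption that perceived depth is a weighted average of the two cues. The depth values below are hypothetical.

```python
def coarse_cue_weight(pse_depth, depth_coarse, depth_fine):
    """Perceptual weight of the coarse-scale shadow cue, assuming the
    perceived depth is a weighted average of the two cue-specified depths:
    pse = w * depth_coarse + (1 - w) * depth_fine."""
    return (pse_depth - depth_fine) / (depth_coarse - depth_fine)

# Hypothetical example: coarse components specify 10 cm of depth, fine
# components specify 4 cm, and the matched PSE lands at 8 cm.
print(coarse_cue_weight(pse_depth=8.0, depth_coarse=10.0, depth_fine=4.0))  # 0.67
```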

P1-5: Effect of object motion on perceiving a thick transparent object Shohei Ueda

Graduate School of Engineering, Toyohashi University of Technology, Japan s-ueda@real.cs.tut.ac.jp Yusuke Tani

Department of Computer Science and Engineering, Toyohashi University of Technology, Japan Takehiro Nagai

Graduate school of Science and Engineering, Yamagata University, Japan Kowa Koida

Electronics-Inspired Interdisciplinary Research Institute, Toyohashi University of Technology, Japan

Shigeki Nakauchi

Department of Computer Science and Engineering, Toyohashi University of Technology, Japan Michiteru Kitazaki

Department of Computer Science and Engineering, Toyohashi University of Technology, Japan

When we place a thick transparent object in front of a textured background, the background texture is perceived as distorted through the transparent object. This distortion field is a critical cue for perceiving transparency (Fleming, Jaekel, & Maloney, Psychological Science, 2011). The purpose of this study was to investigate the effect of object motion, i.e., a moving distortion field, on perceiving a thick transparent object. We predicted that motion would improve perception of the thick transparent object. We presented a test stimulus and a matching stimulus side by side, and asked subjects to adjust the refractive index of the matching stimulus so that its material appeared identical to that of the test stimulus. The test stimulus was randomly chosen from stimuli of five different refractive indices, and either rotated around the vertical axis or remained stationary. The matching stimulus was stationary, and its refractive index could be varied manually by the subjects. We found that the perceived refractive index was higher and less accurate with the moving transparent object than with the static object, contrary to our prediction. This result suggests that object motion may not contribute to accurate perception of refractive index and may instead increase perceived refractive index.

Supported by Grant-in-Aid for Scientific Research on Innovative Areas (22135005).

P1-6: Binocular color perception based on integration of unbalanced monocular color information

Takashi Mitsunaga

Graduate School of Advanced Integration Science, Chiba University, Japan

x0t1571@students.chiba-u.jp Yoko Mizokami

Graduate School of Advanced Integration Science, Chiba University, Japan Hirohisa Yaguchi

Graduate School of Advanced Integration Science, Chiba University, Japan

We perceive the world by integrating information from the right and left eyes. Usually there is little difference in the color information from the two eyes, but the inputs can differ in particular situations, such as after a cataract operation on a single eye. It is therefore important to study the integration of monocular color information. The purpose of this study was to clarify how binocular color perception integrates different monocular color information from each eye. We examined the achromatic color perception of the left eye, the right eye, and both eyes when a yellow or blue filter was placed in front of one eye only. Observers adapted to the viewing condition (with filter) binocularly in a normal room for 20 minutes, then performed achromatic judgments of test stimuli on a display in a dark room. As a result, the color perception of the eye with the filter shifted in the direction opposite to the color of the filter, while that of the eye without the filter showed only a small shift. Binocular achromatic color perception fell between that of the eyes with and without the filter, and was generally closer to that of the eye without the filter. This implies that binocular color perception is rather stable even with unbalanced monocular color information.

P1-7: A local display characterization method to improve calibration accuracies and to achieve quick chromaticity reproductions

Hiroshi Ban

Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT) and Osaka University, Japan Graduate School of Frontier Biosciences, Osaka University, Japan ban.hiroshi@nict.go.jp Hiroyuki Yamashiro

Aino University, Japan Hiroki Yamamoto

Graduate School of Human and Environmental Studies, Kyoto University, Japan

We propose a novel display characterization procedure for vision experiments. In this procedure, all calibrations and estimations are performed only in a local luminance/color space, ignoring the irrelevant input range. Our aim is to develop a quick and efficient method for finding display inputs that produce specific, pre-specified luminance and chromaticity outputs. Specifically, our method consists of two estimation steps. First, linearity between video inputs and luminance is attained by measuring luminance outputs over a limited input range and generating Color LookUp Tables (CLUTs) for that local space (Local Gamma Correction). Note that these local CLUTs can, if the range is properly selected, largely preserve output quality; in our tests, we found that local CLUTs for video inputs of 0.4-0.9 can cover 70-90% of the luminance range of CLUTs generated over the whole input space. Second, the RGB video inputs required to present the target chromaticity values are estimated with a local color transformation matrix obtained by a least-squares method (Local Color Estimation). This local estimation method is robust against measurement noise. The whole procedure is integrated into GUI-based display characterization software, Mcalibrator2, written in MATLAB and now publicly available (https://github.com/hiroshiban/Mcalibrator2).

Supported by MEXT (23135517), JSPS (22530793).
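The two-step procedure above can be illustrated with a minimal sketch. The code below is not taken from Mcalibrator2 (which is written in MATLAB); it is a hypothetical Python illustration, assuming luminance and XYZ measurements have already been collected within the local input range, of (1) building a local CLUT that linearizes luminance and (2) fitting a local least-squares color transformation matrix and inverting it to obtain RGB inputs for a requested chromaticity. All function names are our own.

```python
import numpy as np

def build_local_clut(measured_inputs, measured_luminance, n_entries=256):
    """Local gamma correction: invert the measured input-to-luminance curve using
    measurements taken only within the local input range of interest (e.g. video
    inputs 0.4-0.9). Measurements must be sorted by increasing luminance."""
    target_luminance = np.linspace(measured_luminance[0], measured_luminance[-1], n_entries)
    return np.interp(target_luminance, measured_luminance, measured_inputs)

def fit_local_color_matrix(linear_rgb, measured_xyz):
    """Local color estimation: least-squares 3x3 matrix mapping linearized RGB
    samples (n x 3) to the XYZ values measured for them (n x 3)."""
    matrix, _, _, _ = np.linalg.lstsq(linear_rgb, measured_xyz, rcond=None)
    return matrix  # xyz is approximately linear_rgb @ matrix

def rgb_for_target_xyz(target_xyz, matrix):
    """Invert the local matrix to find the linearized RGB input for a target XYZ."""
    return np.asarray(target_xyz) @ np.linalg.inv(matrix)
```

In practice, the RGB values returned by the last step would then be passed through the local CLUTs before being sent to the display.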

P1-8: Conditions of a multi-view autostereoscopic 3D display for high-quality glossiness reproduction

Yuichi Sakano

Universal Communication Research Institute, National Institute of Information and Communications Technology (NICT), Japan Center for Information and Neural Networks (CiNet), NICT and Osaka University, Japan yuichi@nict.go.jp Hiroshi Ando

Universal Communication Research Institute, National Institute of Information and Communications Technology (NICT), Japan Center for Information and Neural Networks (CiNet), NICT and Osaka University, Japan

It is important to determine the system requirements of multi-view (autostereoscopic) 3D displays for accurate reproduction of appearance. In this study, we focused on the reproduction of surface glossiness. We previously found that a current multi-view 3D display is better than a two-view 3D display in terms of glossiness reproduction when a viewer moves the head. This better glossiness reproduction can be explained by the fact that a glossy surface presented on a multi-view display, but not on a two-view display, produces appropriate luminance changes when the viewer moves the head. In the present study, we developed a simulator of a multi-view display that enabled us to change the viewpoint interval and the magnitude of crosstalk. Using the simulator, we conducted a psychophysical experiment and found that glossiness reproduction is most accurate when the viewpoint interval is small and there is a moderately small amount of crosstalk. The impaired perceived glossiness under conditions with too little crosstalk was presumably due to blanks between the viewpoints, which produced a flickering appearance of the whole image when the head moved. Lastly, we examined whether the results can be explained by the magnitude of luminance change accompanying the head movements.

This study was conducted as part of "Development of technology for 3D image without stereo glasses," entrusted by the Ministry of Internal Affairs and Communications of Japan.
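As a rough illustration of the kind of simulator described above, the following hypothetical Python sketch blends the rendered views of a multi-view display as a function of eye position, viewpoint interval and a crosstalk parameter. The Gaussian leakage profile and all names are illustrative assumptions; real panels have device-specific crosstalk profiles, and the actual simulator is not described at this level of detail in the abstract.

```python
import numpy as np

def simulate_eye_image(view_images, eye_position, viewpoint_interval, crosstalk):
    """Blend the views a multi-view display would emit toward one eye position.

    view_images: array (n_views, H, W), one rendering per viewpoint
    eye_position: lateral eye position, in the same units as viewpoint_interval
    crosstalk: 0 means each viewing zone shows only its own view; larger values
               leak neighbouring views into the zone (assumed Gaussian profile)
    """
    n_views = view_images.shape[0]
    centers = np.arange(n_views) * viewpoint_interval
    sigma = max(crosstalk * viewpoint_interval, 1e-6)
    weights = np.exp(-0.5 * ((eye_position - centers) / sigma) ** 2)
    if weights.sum() == 0.0:
        # no measurable leakage: show only the nearest view
        weights[np.argmin(np.abs(eye_position - centers))] = 1.0
    weights /= weights.sum()
    return np.tensordot(weights, view_images, axes=1)
```

Sweeping eye_position over time then reproduces the head-movement-dependent luminance changes that the abstract links to glossiness reproduction.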

P1-9: Practical color visibility evaluated by response time of search and selection rate in paired comparison

Toshihiro Toyota

Industrial Research Institute of Shizuoka Prefecture, Japan toshihiro_toyota@iri.pref.shizuoka.jp Keizo Shinomori

School of Information, Kochi University of Technology, Japan Taka-aki Suzuki Industrial Research Institute of Shizuoka Prefecture, Japan Shigeki Nakauchi

Department of Computer Science and Engineering, Toyohashi University of Technology, Japan

To investigate a rational approach for evaluating the visibility of color in practice, we performed two measurements: the response time (RT) to search for the image of a colored handrail, and the selection rate (SR) of each handrail color judged to have higher visibility in a paired comparison. The stimulus images were generated using physically based spectral rendering under three types of lighting: a fluorescent lamp (FLD condition), and LED lamps with color temperatures of 3200 K and 5000 K (LED_A and LED_D conditions). The color of the handrail was one of 8 Munsell colors at the highest chroma reproducible on the monitor: 5B6/8, 5G7/10, 5GY8.5/10, 5Y8.5/12, 5YR7/12, 5R4/14, 5RP5/12 and 5P5/10. Twenty-six elderly observers (64 to 83 years old) participated in this study.

The RT results indicated that under the LED_D and FLD conditions, the RT for 5P was significantly longer (p < 0.05). Under the FLD condition, RTs for 5YR, 5R and 5RP were significantly shorter (p < 0.001). Under the LED_A condition, however, RTs for 5GY and 5Y were significantly longer and RTs for 5R and 5RP significantly shorter (p < 0.001). The SR results followed these tendencies, although statistical significance was reached only in part. Both data sets were transformed to z-scores, which were significantly correlated (r = 0.850 and p < 0.001). Overall, 5R had an advantage in color visibility over the other colors, and 5P was the worst. The proposed approach can provide a rational method for function-oriented color selection, at least with respect to the visibility of color.

Supported by a research grant from Aronkasei Co. Ltd. and KAKENHI 24300085. The authors thank N. Murai and Y. Ozaki of Aronkasei Co. Ltd. for their experimental work.
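The z-score comparison of the two visibility measures can be sketched as follows. The values below are random placeholders rather than the study's data, and negating RT so that larger z-scores mean higher visibility is our assumption about the sign convention.

```python
import numpy as np

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

rng = np.random.default_rng(0)
mean_rt = rng.uniform(0.8, 1.6, 8)          # placeholder per-color search RTs (s)
selection_rate = rng.uniform(0.2, 0.8, 8)   # placeholder per-color selection rates

z_rt_visibility = zscore(-mean_rt)          # shorter RT -> higher visibility (assumed)
z_sr_visibility = zscore(selection_rate)
r = np.corrcoef(z_rt_visibility, z_sr_visibility)[0, 1]
print(f"correlation between the two visibility measures: r = {r:.3f}")
```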

P1-10: A colour vision test optimised for the clinical population

Caterina Ripamonti

UCL Institute of Ophthalmology, UK

c.ripamonti@ucl.ac.uk Sarah Kalwarowsky

UCL Institute of Ophthalmology, UK Marko Nardini

UCL Institute of Ophthalmology, UK

Many popular colour vision tests (e.g. Ishihara, FM100HT) developed to screen and classify dichromats require relatively good visual acuity. Thus, individuals affected by low vision may fail to perform these tests. To solve this problem, we have developed a Universal Colour Discrimination Test (UCDT), which is particularly suitable for individuals with visual acuity worse than 1.0 logMAR. The test consists of a coloured 5 deg square which can be distinguished from the background only on the basis of its hue. Using a 2AFC procedure, observers indicated the position of the square as its saturation changed during the experiment. The task was easy to perform, even by 7-year-old observers. Participants were labelled as Normal or Affected according to their performance on additional colour vision tests included in the protocol. Normal saturation thresholds agreed with those obtained with the Cambridge Colour Test (CCT). When a comparison was possible, Affected thresholds were consistent with the observer's performance on other tests. More importantly, Affected observers who failed to perform the conventional colour vision tests were able to perform the UCDT. This result has important clinical implications: in patients undergoing novel treatments, it allows chromatic discrimination baselines to be determined and monitored over time.

Supported by Fight for Sight, BBSRC, Moorfields Eye Hospital, NIHR Biomedical Research Centre for Ophthalmology at Moorfields and the UCL Institute of Ophthalmology.

P1-11: Local regions with normal brightness contribute more to color constancy.

Yongjie Li

School of Life Science and Technology, University of Electronic Science and Technology of China, China liyj@uestc.edu.cn Shaobing Gao

School of Life Science and Technology, University of Electronic Science and Technology of China, China Wangwang Han

School of Life Science and Technology, University of Electronic Science and Technology of China, China Ruoxuan Li

School of Life Science and Technology, University of Electronic Science and Technology of China, China

Chaoyi Li

School of Life Science and Technology, University of Electronic Science and Technology of China, China

Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, China

In a recently published work (Gao et al., A color constancy model with double-opponency mechanisms, ICCV 2013), we proposed a physiologically inspired color constancy model, which estimates the light source color of color-biased scenes by simulating the properties of double-opponent (DO) cells with concentrically organized receptive fields in the primary visual cortex (V1) of the primate visual system. In this work, based on our published DO-inspired color constancy model, we further investigated the role of local regions with different brightness in accurately estimating the illuminant. We fed regions with various levels of brightness into our model. The results show that local regions with different brightness contribute differently to color constancy performance. In particular, local regions with very low brightness contribute negligibly, and local regions with very high brightness contribute negatively. That is, removing the local regions with very high brightness significantly improved the accuracy of light source color estimation. In short, for color-biased images, mainly the local regions with normal brightness contribute to color constancy, which was also preliminarily validated by psychophysical experiments with an eye tracker.

Supported by 973 Project (#2013CB329401), NSFC (#61375115).
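A minimal sketch of the region-selection idea, with a simple gray-world estimator standing in for the double-opponent model: exclude very dark and very bright pixels before estimating the light source color. The function name, brightness proxy and quantile cut-offs are illustrative assumptions; the abstract does not specify how "normal brightness" was defined.

```python
import numpy as np

def estimate_illuminant_normal_brightness(image, low=0.05, high=0.95):
    """Gray-world illuminant estimate restricted to pixels of 'normal' brightness.

    image: float array (H, W, 3) with values in [0, 1]. Very dark and very bright
    pixels are excluded before averaging, mirroring the region-removal analysis
    described in the abstract (the DO model itself is not implemented here).
    """
    brightness = image.max(axis=2)                  # per-pixel brightness proxy
    lo, hi = np.quantile(brightness, [low, high])
    mask = (brightness > lo) & (brightness < hi)    # keep 'normal' brightness only
    illuminant = image[mask].mean(axis=0)           # gray-world over selected pixels
    return illuminant / np.linalg.norm(illuminant)  # unit-norm light source color
```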

P1-12: Contrast-dependent variations in the excitatory classical receptive field and suppressive nonclassical receptive field of cat primary visual cortex

Ke Chen

Key Laboratory for Neuroinformatics, Ministry of Education of China, University of Electronic Science and Technology of China, China

bewildboy@163.com Jiaojiao Yin

Key Laboratory for Neuroinformatics, Ministry of Education of China, University of Electronic Science and Technology of China, China Chaoyi Li

Key Laboratory for Neuroinformatics, Ministry of Education of China, University of Electronic Science and Technology of China, China

The spatial summation of excitation and inhibition determines the final output of neurons in cat V1. To characterize the spatial extent of the excitatory classical receptive field (CRF) and the inhibitory nonclassical receptive field (nCRF), we examined the spatial summation properties of 169 neurons in cat V1 at high (20-90%) and low (5-15%) stimulus contrasts. Neurons were classified into three categories based on differences in the contrast dependence of their surround suppression. We found that the three categories differed significantly in CRF size, peak firing rate, and the proportion of simple to complex cells. Both the CRF and the nCRF of all three types expanded at low contrast, but the expansion was more marked for the CRF than for the nCRF. Although the effect of contrast on surround suppression varied, overall suppression increased significantly at high contrast. Moreover, the contrast-dependent change in the extent of the CRF was independent of the change in suppression strength. Overall, our results in cat are in agreement with those obtained in macaque monkey.

P1-13: Crowding effect and reading performance among children with dyslexia in Klang Valley area

Indira Madhavan

Programme of Optometry and Vision Science, Faculty of Health Science, Universiti Kebangsaan Malaysia, Malaysia

indira_inthu@yahoo.com Mohd Izzuddin Hairol

Programme of Optometry and Vision Science, Faculty of Health Science, Universiti Kebangsaan Malaysia, Malaysia

The crowding effect is one of the factors that cause reading difficulties in children with dyslexia. The purpose of this study was to investigate the effect of increasing the spacing between letters, words and lines on reading performance in children with dyslexia. Reading performance was assessed by measuring reading time and reading rate. The study was divided into four experiments. In Experiments One to Three, the optimum spacing between letters, words and lines, respectively, that improved reading performance was determined. In Experiment Four, reading performance was compared between standard-spacing text and optimum-spacing text. The optimum-spacing text was developed using the optimum spacing determined in Experiments One to Three, while the standard-spacing text used the school textbook spacing. The standard spacing in the school textbook is 0.15°, 0.46° and 0.61° for letter, word and line spacing, respectively. A total of 20 students with dyslexia who were screened to have normal vision (average age 8.10 ± 0.78 years) were chosen randomly for the first stage of the study. The results showed that reading time was shortest and reading rate was highest at spacings of 0.46° (p < 0.05), 1.14° (p < 0.05) and 1.21° (p < 0.05) between letters, words and lines, respectively. Subjects showed a significant decrease (p < 0.05) in reading time and a significant increase (p < 0.05) in reading rate with the optimum-spacing text compared to the standard-spacing text. The results suggest that increasing the spacing between letters, words and lines improves reading performance in children with dyslexia. In conclusion, applying the optimum spacing to the typographical settings of school textbooks is expected to reduce reading difficulties among children with dyslexia.

P1-14: Binocular contrast interactions in iso-oriented surround suppression Pi-Chun Huang

Department of Psychology, National Cheng Kung University, Taiwan pichun_huang@mail.ncku.edu.tw Chien-Chung Chen Department of Psychology, National Taiwan University, Taiwan

The detectability of a visual target can be reduced by the presence of a surround. We used a pattern masking paradigm to investigate whether such surround suppression occurs before or after binocular integration, and whether the surround modifies the response gain or the contrast gain of the target mechanisms. The detection threshold of the target (horizontal Gabor, 2 cpd) was systematically measured in the presence of a pedestal and an iso-oriented surround at different contrast levels in order to derive the contrast-response functions of the target detection mechanisms. The modulation effects were compared across different eye-origin combinations. In the absence of the surround, the target threshold first decreased and then increased with pedestal contrast. The suppression effect was stronger for dichoptic pedestals. The surround modulation effects varied across pedestal contrast levels and viewing configurations. The data were fitted well with a two-stage binocular gain control model. The values of the model parameters suggested that post-binocular-summation surround suppression alone cannot explain our results. Instead, both monocular and interocular surround suppression were involved. The effect of the surround was to change the contrast gain of the target detection mechanisms.

Supported by NSC-101-2401-H-006-003-MY2 and NSC 102-2420-H-006 -010 -MY2 to PCH and NSC 102-2420-H-002 -018 -MY3 to CCC.

P1-15: Directional perception of briefly presented motion-defined motion

Junji Yanagi

Department of Psychology, Chiba University, Japan

yanagi@L.chiba-u.ac.jp Makoto Ichikawa

Department of Psychology, Chiba University, Japan

The percept of the movement of a motion-defined figure, such as its speed or direction, can be affected by the movement that defines the figure (inner dot motion). This influence seems most pronounced immediately after the stimulus begins to move. This observation motivated us to investigate how motion-defined motion is perceived when it is presented briefly. Participants observed motion-defined motion and made judgments on: (1) the direction of the inner dot motion, (2) the shape of a static motion-defined rectangle, and (3) the direction of the motion-defined motion. We found that the duration necessary for correct judgment was longest for the third task. This result indicates that one does not always correctly perceive the direction of motion-defined motion at the moment its shape is detected. Some additional processing time might be needed to establish the "proper" percept of motion-defined motion. We also found an anisotropy between centripetal and centrifugal motion when stimuli were presented peripherally. That is, centrifugal motion could be detected faster than centripetal motion when the inner dots moved opposite to the figure itself. This result coincides with previous reports using other types of non-Fourier motion, which suggests a centrifugal preference of higher-order motion processing.

P1-16: Fluttering-heart illusion caused by coexistence of first- and second-order edges

Masahiro Suzuki

Kanagawa Institute of Technology, Japan

msuzuki@ctr.kanagawa-it.ac.jp Kazuhisa Yanaka

Kanagawa Institute of Technology, Japan

We examined whether the fluttering-heart illusion occurs under conditions in which first- and second-order edges, different in latency of edge detection, coexist. The fluttering-heart illusion is a phenomenon in which the motion of the inner figures appears to be unsynchronized with that of the outer figures surrounding them but, in actuality, the motion of both sets of figures is objectively synchronized. In Experiment 1, edges of the outer and inner figures were defined by luminance and texture, respectively, and the point of subjective equality of the fluttering-heart illusion was measured using the method of adjustment. The results indicated that observers saw synchronous motions of the inner and outer figures when, in actuality, the motion of both sets of figures was objectively unsynchronized; that is, the fluttering-heart illusion occurred. Experiment 2 was conducted using the same method as in Experiment 1, except that the edges of the inner figures were defined by temporal modulation of luminance. Again, the results indicated that the fluttering-heart illusion occurred. These findings support our hypothesis that different latencies of edge detection cause the fluttering-heart illusion.

P1-17: A directional bias of apparent position shift of a moving element with a hard edge

Rumi Hisakata

Japan Society for the Promotion of Science (JSPS), Japan NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Japan rumi.hisakata@icloud.com Shin'ya Nishida

NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Japan

De Valois and De Valois (1991) showed that the position of a Gabor patch with a moving carrier appears to shift in the carrier direction. Linares and Holcombe (2008) showed that the position shift was greater for the motion away from the fovea. We found a bias in the opposite direction when using a grating with a hard edge. In our experiment, a white reference probe was presented 6 deg right or left of the fixation point. Above the probe, a circular grating with a diameter of 4 deg and contrast of 0.1 was presented. The grating moved rightward, leftward, or was static. The horizontal position of the target grating was shifted by a small amount relative to the probe. The observers judged in which direction the target-probe orientation was tilted. We found that the perceived vertical for the motion toward the vertical median line was distorted, while the perceived vertical for the motion in the opposite direction was nearly the same as that for the static condition. We speculate that some mechanisms for the second-order/third-order orientation perception might be related to this bias because we found a different pattern of bias when we changed the probe to nonius lines.

P1-18: Effect of display inclination on the vertical-horizontal illusion

Akiko Yasuoka

Graduate School of Design, Sapporo City University, Japan a.yasuoka@scu.ac.jp Masahiro Ishii Graduate School of Design, Sapporo City University, Japan

A stimulus that consists of two lines of the same length, one horizontal and the other vertical, joined at a 90 degree angle to form an inverted-T shape, creates an illusion: observers overestimate the length of the vertical line relative to the horizontal line (the vertical-horizontal illusion). One explanation is that observers perceive the vertical line as extending in depth, out of the frontal plane. In this research, we measured the illusion when the stimulus was presented on a physically inclined surface. A monitor was rotated about a horizontal axis (backward or forward inclination). The lines were white on a black background, and the horizontal line subtended 6.3 deg. The experiment used the method of adjustment; subjects were asked to adjust the length of the vertical line so that it appeared the same length as the horizontal line. Four subjects participated, and they viewed the stimuli with both eyes. The results indicate that the illusion declined when the display was inclined forward (bottom side away). This suggests that observers underestimate the length of a vertical line when the display surface is inclined forward.

Supported by CREST, JST.

P1-19: Illusory movement in depth-inverted, but otherwise photorealistic, Bas Relief art

Norman D. Cook

Department of Informatics, Kansai University, Osaka, Japan cook@res.kutc.kansai-u.ac.jp Gianluca Sanvido Japan Medical Planning, Co., Japan Ryu Satoh

Green Art Gallery, Japan Takefumi Hayashi

Department of Informatics, Kansai University, Osaka, Japan

"Reverse perspectives" (Wade & Hughes, 1999 Perception, 28, 1115) are popular 3D visual illusions, typically created by inverting the apparent depth structure of buildings such that protruding corners recede and receding corners protrude. Motion parallax effects produce illusory movement of the buildings, as changes in the visual scene with self motion create counterintuitive effects. Those effects are accompanied by strong activation of area MT in visual cortex (Hayashi et al., 2007 Brain Research, 72, 1163). We have created a new form of this illusion through depth inversion of more complex (non-planar) shapes sculpted in "Bas Relief'. Similar to the classic reverse perspectives of Patrick Hughes, the illusory movement of depth-inverted Bas Reliefs relies upon the viewer's perception of the 3D shape of objects in the artwork due to linear perspective and shadow/shading pictorial depth cues. Here, we illustrate the more complex depth reversal technique with an inverted Bas Relief that was sculpted after the painting "Three Graces" (1504) by Raffaello (Raphael, 1483-1520). Provided only that the viewer perceives the apparent 3D structure of the dancing women, viewer movement produces counter-intuitive motion parallax that the human brain inevitably interprets as "spontaneous movement" within the artwork itself.

P1-20: Random-sampling leads to multiplicative noise in crowded displays

Carl Michael Gaspar

Center for Cognition and Brain Disorders, Hangzhou Normal University, China

earl.leatherman@yahoo.co.uk Wei Chen

Center for Cognition and Brain Disorders, Hangzhou Normal University, China

Is crowding stochastic? Dakin et al. (2010) showed that crowding may result in random sampling of flankers. This model predicts that internal noise should increase with the variance of flanker orientation, a form of multiplicative noise. We tested this hypothesis directly by measuring the effect of flanker-orientation variance on response consistency in a 2AFC orientation discrimination task. Two observers identified which of two peripheral Gabor patches (2.5 deg eccentricity), each surrounded by 8 Gabor flankers, was tilted (clockwise or counterclockwise from horizontal). Orientation thresholds and response consistency were measured separately for various levels of flanker-orientation variance. Critically, the mean flanker orientation was always horizontal for every stimulus, so performance could only be affected by flanker variance. Orientation thresholds increased with flanker variance with a log-log slope of 1/2. Most importantly, response consistency showed that the slope between proportion-correct and proportion-agreement reached an upper asymptote at the highest flanker variance, which is the signature of multiplicative noise and consistent with the random-sampling model suggested by Dakin et al. (2010). In an ongoing experiment, we are estimating the contribution of Gestalt factors.
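The multiplicative-noise prediction can be illustrated with a small simulation of a random-sampling observer, under assumptions of our own (two flankers sampled with replacement and averaged with the target); this is a sketch of the general idea rather than the model fitted by the authors. The effective internal noise grows in proportion to the flanker SD, i.e. with the square root of flanker variance, matching the log-log slope of 1/2 reported above.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_noise(flanker_sd, n_sampled=2, n_flankers=8, n_trials=20000):
    """Random-sampling observer: the perceived tilt is the mean of the target
    (here 0 deg) and a few flankers sampled at random from the 8 flankers,
    whose orientations are drawn around horizontal with SD flanker_sd."""
    flankers = rng.normal(0.0, flanker_sd, size=(n_trials, n_flankers))
    picks = rng.integers(0, n_flankers, size=(n_trials, n_sampled))
    sampled = np.take_along_axis(flankers, picks, axis=1)
    estimates = sampled.sum(axis=1) / (n_sampled + 1)  # target tilt of 0 included
    return estimates.std()  # effective internal noise induced by the flankers

for sd in [2, 4, 8, 16]:    # doubling flanker SD doubles the induced noise
    print(f"flanker SD {sd:>2} deg -> induced noise SD {estimate_noise(sd):.2f} deg")
```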

P1-21: New simple geometrical illusion figure: Slightly bent thick line looks straight when thin straight line is added to it

Teluhiko Hilano

Kanagawa Institute of Technology, Japan

hilano@ic.kanagawa-it.ac.jp

We report a new geometrical illusion figure that consists of two lines. One is a thick line that is slightly bent at its center, and the other is a thin straight line that crosses the center of the former at a small angle. The bent line appears straight when the thin straight line is added. When the thicknesses of the two lines are equal, the straight one is perceived as slightly bent in the opposite direction. The illusory effect of this figure also changes when the figure is rotated. In a figure consisting of a thin line crossing a thick line at a small angle, known as a Poggendorff figure, the thin line appears disconnected (Ninio, 2001 The science of illusions, page 136, figure 12-4, Ithaca, NY: Cornell University Press); in our figure, however, the thin line is perceived as connected, i.e. the Poggendorff illusion does not occur. We discuss how the illusory effect of this figure changes with the size of the figure, the thicknesses of the lines, and the degree of bend of the thick line.

P1-22: Functional architecture for "far" and "near" judgment in cat's V1 Ling Wang

Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, China

w_ling@uestc.edu.cn Zhengqiang Dai

Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, China Jiaojiao Yin

Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, China Chaoyi Li

Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, China

Shanghai Institutes of Biological Sciences, Chinese Academy of Sciences, China

In mammals, neurons encoding the depth of moving objects are first found in early visual cortex (V1/V2). Single-unit studies of binocular disparity, the key cue for depth perception, have been carried out in V1/V2. However, there are few studies on the precise functional architecture for depth discrimination in V1/V2. Recently, researchers have described a micro-architecture for binocular disparity in the second visual cortex (V2, area 18) using two-photon calcium imaging or intrinsic signal optical imaging. However, little is known about the functional architecture in V1 (area 17). Combining intrinsic optical imaging and single-unit electrophysiology, and using dynamic random-dot stereoscopic gratings at different depths as stereo stimuli, we show that disparity-sensitive neurons are also clustered into a functional architecture. In short, these neurons in cat V1 (area 17) gathered into two categories distinguishing "near" from "far", but were unable to identify precisely how near or how far. The result indicates that a rough judgment of "far" versus "near" can be made even in early V1, the starting point of binocular fusion.

Supported by grants from the National Natural Science Foundation of China (NSFC 31000492, 91120013) and the Fundamental Research Funds for the Central Universities (ZYGX2011X017).

P1-23: Spatial summation organization of visual cortical neurons embedded in the pinwheel-like-orientation map

Xuemei Song

Shanghai Institutes of Biological Sciences, Chinese Academy of Sciences, China xmsong@sibs.ac.cn Chaoyi Li

Shanghai Institutes of Biological Sciences, Chinese Academy of Sciences, China Ming Li

National University of Defense Technology, China Tao Xu

Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, China Dewen Hu

National University of Defense Technology, China Kaifu Yang

Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, China Yongjie Li

Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, China

In the visual cortex of the cat, neurons are arranged in iso-orientation domains (IODs) in pinwheel-like patterns. Using optical imaging combined with single-unit recording, we found a sub-structure within the IOD based on the spatial summation properties of the neurons. By precisely locating the recording sites of the electrodes, we found that the neurons could be subdivided into three subgroups according to their locations within the IOD. The first group was clustered around the pinwheel center; these neurons possessed the smallest receptive field size (2.7 ± 0.2 deg) and strong orientation-non-selective surround suppression (suppression index (SI) = 0.55 ± 0.03). The second group was located in the middle of the IOD, with a medium-sized receptive field (3.4 ± 0.2 deg) and strong orientation-selective surround suppression (SI = 0.52 ± 0.03). The third group was located around the midpoint between two neighboring pinwheel centers, with the largest receptive field size (4.4 ± 0.4 deg) and weak or no surround suppression (SI = 0.30 ± 0.04). These findings provide direct electrophysiological evidence that the orientation pinwheel structure is further subdivided into three sub-regions according to differences in the spatial summation properties of the neuron clusters.

P1-24: Effect of analytic observation on line length judgments of geometrical optical illusion stimulus

Ayane Murai

Graduate School of Design, Sapporo City University, Japan 1462012@st.scu.ac.jp Ryuichi Yokota

Graduate School of Design, Sapporo City University, Japan Masahiro Ishii Graduate School of Design, Sapporo City University, Japan

A stimulus that consists of two lines forming an inverted-T shape creates an optical illusion: the vertical line appears longer than a horizontal line of equal length (the vertical-horizontal illusion). Ordinarily, one judges line length simply by its look and feel, but one can also observe the figure with an explicit strategy. An observer, for instance, might mentally divide the linked lines into two disconnected lines, then rotate and translate one of them to compare them side by side. In this research, we examined whether such analytic observation modulates the judged length of the vertical line in the vertical-horizontal illusion stimulus. The results indicated that observers underestimated the length of the vertical line relative to the horizontal line when they observed the stimulus with such a strategy. We then conducted a similar experiment using the Oppel-Kundt illusion stimulus (a horizontal line with vertical graduated scale lines appears longer than a horizontal line without such vertical lines). There, the observers' strategies had no effect on line length judgments. This research suggests that, for some illusion stimuli, the perceptual distortion dissipates when the relevant geometrical feature is ignored during observation.

Supported by CREST, JST.

P1-25: Stereo-curvature aftereffects from multi-level adaptation Pengfei Yan

Graduate School of Engineering, Kochi University of Technology, Japan 166001m@gs.kochi-tech.ac.jp Hiroaki Shigemasu

School of Information, Kochi University of Technology, Japan

It is still unclear whether the adaptation producing stereo-curvature aftereffects occurs at levels other than that of shape-curvature processing. To investigate this, we examined whether the aftereffects on centrally located test stimuli depend on the retinal position or the scale of the adaptation stimuli. A comparison among different retinal-position conditions showed that the aftereffects were retinal-position dependent, suggesting that adaptation also occurs at a level lower than the shape-curvature level. We also found that the aftereffects from adaptation to dynamic-scale stimuli were not weaker than those from adaptation to fixed-scale stimuli, suggesting that the aftereffects are scale-independent and that adaptation also occurs at levels higher and/or lower than the shape-curvature level. To clarify what caused the retinal-position dependence, we further investigated the effect of stimulus eccentricity on adaptation strength. Aftereffect strength was compared among three eccentricity conditions in which the adaptation and test stimuli were presented at the same retinal position on each trial. The results indicated significantly less adaptation in the near-eccentricity condition than in both the central and far-eccentricity conditions. Therefore, multi-level adaptation can account for stereo-curvature aftereffects, whose retinal-position dependence results, to some extent, from the eccentricity effect.

P1-26: Improvement of sharpness by the addition of graininess Xiazi Wan

Graduate School of Advanced Integration Science, Chiba University, Japan natsukowan@yahoo.co.jp Yuya Nakao

Graduate School of Advanced Integration Science, Chiba University, Japan Naokazu Aoki

Graduate School of Advanced Integration Science, Chiba University, Japan Hiroyuki Kobayashi Graduate School of Advanced Integration Science, Chiba University, Japan

This paper extends a previous study by using natural color images, rather than black-and-white single-frequency patterns, as the evaluation images. The results indicate that the effect of noise is more evident in a blurred image than in a sharp image, and in a large blurred image than in a small blurred image. In our previous paper (1), we investigated how noise affects the perceived sharpness of an image using white noise and one- and two-dimensional single-frequency patterns as stimuli. The results showed that the perceived sharpness of higher-frequency stimuli decreased with increasing noise, whereas that of lower-frequency stimuli increased at certain noise levels. The frequency dependence of this sharpness improvement by added noise was consistent with the results of previous studies. The results obtained with the black-and-white single-frequency patterns were also confirmed in the color images of natural objects. Further, we obtained new findings on the influence of texture, which was not considered in previous studies: the effect is negligible in an image with many edges, whereas the largest effect was observed in an image with no texture.

(1) T. Kurihara et al. (2011). Analysis of sharpness increase and decrease by image noise. Journal of Imaging Science and Technology, 55, 030504.

P1-27: Accuracy and resolution differences in time perception between the central and peripheral visual field

Riku Asaoka

Graduate school of Arts and Letters, Tohoku University, Japan

rikuasaoka@s.tohoku.ac.jp Jiro Gyoba

Graduate school of Arts and Letters, Tohoku University, Japan

How humans perceive duration is a matter of ongoing debate. Recently, it has been suggested that modality-specific time perception systems exist and that duration is judged according to these systems. Many studies have examined the relationship between time perception and visual information processing, but the influence of visual field differences on time perception has not yet been investigated. The present study examined whether duration discrimination differs between the central and peripheral visual fields. In each trial, a standard and a comparison stimulus were presented in the central or peripheral visual field, and the participant judged which stimulus was presented for the longer duration. Each participant's responses were fitted with a logistic function to determine the point of subjective equality (PSE) and the just noticeable difference (JND). We found that, when the standard and comparison stimuli were presented in the peripheral field, the PSE was lower and the JND was higher than in the central vision condition. These results show that peripheral vision has lower accuracy and resolution than central vision in the duration discrimination task. They therefore raise the possibility that central vision dominates over peripheral vision in time perception tasks.
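The psychometric fitting step can be sketched as follows. The logistic parameterization, the duration values and the response proportions are illustrative placeholders rather than the study's data, and the JND convention used here (half the 25-75% interval of the fitted function) is one common choice among several.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, scale):
    """Probability of judging the comparison as longer than the standard."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / scale))

durations = np.array([400, 450, 500, 550, 600, 650, 700], dtype=float)  # ms (placeholder)
p_longer = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])         # placeholder

(pse, scale), _ = curve_fit(logistic, durations, p_longer, p0=[550.0, 50.0])

# x(0.75) - x(0.25) = 2 * scale * ln(3); the JND is taken as half of that interval.
jnd = scale * np.log(3.0)
print(f"PSE = {pse:.1f} ms, JND = {jnd:.1f} ms")
```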

P1-28: Perception of shape from shadows in infancy Kazuki Sato

Society for the Promotion of Science, Chuo University, Japan a12.rrx5@g.chuo-u.ac.jp So Kanazawa

Department of Psychology, Japan Women's University, Japan Masami K. Yamaguchi

Department of Psychology, Chuo University, Japan

We investigated whether infants can discriminate shapes defined by attached shadows and cast shadows. Adults perceive an illusory ball shape from an attached shadow and an illusory disk shape from a cast shadow (Elder et al., 2004); however, when the contrast of the shadows is reversed, adults perceive only crescent shapes. In infants, the ability to represent a 3D shape across pictorial depth cues develops at 6 to 7 months of age (Tsuruhara et al., 2009). We therefore hypothesized that infants around 7 months of age could perceive shapes from shadows. We used the familiarization method to test whether 5-to-8-month-olds could discriminate a ball (attached shadow) and a disk (cast shadow) produced by the two types of shadows (Experiment 1). Infants were familiarized with the ball or the disk and then tested for a novelty preference for whichever shape had not been presented during familiarization. If infants could perceive an illusory shape from these shadows, they would prefer the novel shape after familiarization. The results showed that only 7-to-8-month-olds could discriminate these shapes. In Experiment 2, when the contrast of the crescent areas was reversed so that these areas were not perceived as shadows, the infants did not show the novelty preference. These results suggest that 7-to-8-month-olds can perceive different shapes from attached and cast shadows.

Supported by Grant-in-Aid for Scientific Research on Innovative Areas "Shitsukan" (No. 25135729) from MEXT, Japan.

P1-29: Reduced perceptual error in action observation induced by body ownership illusion

Shigeaki Nishina

Honda Research Institute, Japan

nishina@jp.honda-ri.com

Imitating others' movements by observation may be the most efficient way to learn a new motor skill. Although human adults and children seem able to perform this learning effectively, imitation requires solving computationally difficult problems, including the self-other correspondence problem and the generation of motor commands that appropriately reproduce the observed motion. In order to understand the underlying mechanism of this learning, it is important to examine which process is actually improved. Previously, we reported that imitative learning of a sequential finger movement was significantly improved when the learner experienced a body ownership illusion over the computer-generated instructor's hand (Ishii et al., ECVP 2012). In this study, we examined the performance of action observation independently of motor execution, and found that action observation itself was significantly improved by the body ownership illusion. The results indicate that the observed improvement of imitative motor learning could primarily take place in the action observation process.

P1-30: Perceptual narrowing toward adult faces in Japanese infants: a behavioral and a near-infrared spectroscopic study

Megumi Kobayashi

National Institute for Physiological Sciences, Japan Japan Society for the Promotion of Science (JSPS), Japan megumik@nips.ac.jp Viola Macchi Cassia

University of Milano-Bicocca, Italy So Kanazawa

Department of Psychology, Japan Women's University, Japan Masami K Yamaguchi

Department of Psychology, Chuo University, Japan Ryusuke Kakigi

National Institute for Physiological Sciences, Japan

Recent studies have reported that adults (e.g., Kuefner et al., 2008) and children (e.g., Macchi Cassia et al., 2009) show an advantage in processing adult versus non-adult faces. This processing bias emerges in Caucasian infants by the age of 9 months (Macchi Cassia et al., 2014), through a process known as perceptual narrowing. That is, 9-month-olds discriminate adult faces but not infant faces on the basis of identity, while 3-month-olds can discriminate faces from both age groups. The aims of the current study were (1) to extend evidence of perceptual narrowing toward adult faces to Japanese infants (Experiment 1), and (2) to investigate the neural correlates of the discrimination advantage for adult faces in 9-month-old infants using near-infrared spectroscopy (NIRS) (Experiment 2).

Results of Experiment 1 showed that Japanese 9-month-old infants discriminated individual adult faces but not newborn faces, while Japanese 3-month-olds discriminated identity in both adult and newborn faces. Results of Experiment 2 indicate that the increase in hemodynamic response during stimulus presentation was larger for adult faces compared to newborn faces (p < 0.01). Overall, our data suggest that perceptual narrowing toward adult faces in infancy is a cross-cultural phenomenon, which, by the age of 9 months, translates into specialized brain responses.

Supported by a Grant-in-Aid for Scientific Research on Innovative Areas, "Face perception and recognition," from the Ministry of Education, Culture, Sports, Science and Technology KAKENHI (20119002), and a Grant-in-Aid for Scientific Research.

P1-31: Relationship between timing of object category representation and the level of category abstraction in the human visual cortex

Masashi Sato

Graduate School of Informatics and Engineering, The University of Electro-Communications, Japan

m.sato@cns.mi.uec.ac.jp Yoichi Miyawaki

Center for Frontier Science and Engineering, The University of Electro-Communications, Japan yoichi.miyawaki@uec.ac.jp

Objects are grouped into categories that share common properties, and these categories can be hierarchically ordered from abstract to concrete levels. Previous studies suggest that spatial patterns of human brain activity evoked by various objects show relationships consistent with this hierarchical structure. However, it remains unclear when each object category is represented in human brain activity patterns and whether the timing is also ordered hierarchically according to the level of category abstraction. In this study, we measured human brain activity at high temporal resolution using magnetoencephalography while subjects observed object images selected from multiple categories. We reconstructed cortical current distributions using an fMRI-constrained source estimation method and then applied multivariate pattern analyses to the reconstructed cortical currents on the visual cortex to estimate the timing at which each object category was represented in brain activity patterns. Our data showed no significant difference in the latency of object category representation across levels of category abstraction. These results suggest the possibility that object category information is represented at similar latencies in the human brain regardless of the level of abstraction.

Supported by JSPS KAKENHI Grant Number 26120514, KDDI Foundation, and Narishige Neuroscience Research Foundation.
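A minimal sketch of time-resolved multivariate pattern analysis on source-space data, assuming the reconstructed currents are arranged as trials x sources x time points; the classifier choice, array layout and names are our assumptions for illustration rather than details taken from the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def time_resolved_decoding(source_data, labels, cv=5):
    """Cross-validated category decoding accuracy at each time point.

    source_data: array (n_trials, n_sources, n_times) of reconstructed cortical
                 currents on visual cortex (hypothetical layout)
    labels:      object-category label for each trial
    """
    n_trials, n_sources, n_times = source_data.shape
    accuracy = np.zeros(n_times)
    for t in range(n_times):
        patterns = source_data[:, :, t]              # spatial pattern at time t
        clf = LogisticRegression(max_iter=1000)
        accuracy[t] = cross_val_score(clf, patterns, labels, cv=cv).mean()
    return accuracy
```

The latency of a category representation could then be read off the returned time course, for example as the time of peak decoding accuracy or the first time point reliably above chance.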

P1-32: Asymmetrical effects of smile and sad expressions on attractiveness

Ryuhei Ueda

Graduate School of Letters, Kyoto University, Japan spc24@icloud.com Hiroshi Ashida

Graduate School of Letters, Kyoto University, Japan Kana Kuraguchi Graduate School of Letters, Kyoto University, Japan

Many psychological studies have shown that facial expression affects the evaluation of facial attractiveness: smiles can enhance attractiveness, whereas negative expressions can decrease it. However, the evaluation of sad faces could be more complex, because a sad face might also motivate other people to help the person. We investigated how attractiveness differs between smiling and sad faces using a rating task on female faces with three expressions (smile, neutral, sadness). The results showed four points. (1) Participants gave higher ratings to smiles than to neutral faces, and this positive effect was stronger for female observers than for male observers. (2) They gave lower ratings to sad faces than to neutral faces, and this negative effect did not depend on the observer's gender. (3) The positive effect of smiling did not differ with the level of attractiveness, while (4) the negative effect of sadness was stronger for high-attractiveness faces than for low-attractiveness faces. The results demonstrate that facial expression has complex effects on attractiveness that are neither uniform across genders nor symmetrical between the positive and negative sides. Our findings also suggest that females might be more affected by social context than males, and that expression-related deformation might have more impact on high-attractiveness than on low-attractiveness faces.

P1-33: Reliability and interpretation of visually evoked single-trial peaks Wei Chen

Center for Cognition and Brain Disorders, Hangzhou Normal University, China emma.chen.w@gmail.com Carl Michael Gaspar Center for Cognition and Brain Disorders, Hangzhou Normal University, China

Single-trial (ST) methods in EEG are a promising development in visual electrophysiology. Most ST methods have 2 steps (De Vos et al., 2012), and it is step 1 that varies across methods: (1) clean the data to enhance the detectability of peaks on individual trials; and (2) find peaks within a time window. Researchers can then compare the mean ST peak amplitude and variance of ST peak latency (jitter) across experimental conditions. Here, we evaluate the efficacy of ST methods using face-evoked responses from Gaspar et al. (2011). First we determine whether ST-specific cleaning (Ahmadi & Quiroga, 2013) enhances the reliability of ST amplitude and jitter, compared to generic cleaning. We find no benefit of ST-specific cleaning. Second, we question the interpretation of ST jitter. High jitter reliability could reflect individual differences in the variability of true peak latency across trials. Some subjects appear to have no peaks; 'peak' latencies are defined purely by the edges of the time-window (high correlations between window-size and latency distance from window-center). These same subjects also show the highest values of jitter. We conclude that significant experimental variations in ST jitter are confounded by the disappearance of peaks due to noise.
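Step 2 of a generic single-trial procedure, finding a peak within a time window on each cleaned trial and summarizing the mean single-trial amplitude and the latency jitter, could be sketched as follows. The window limits, polarity convention and names are illustrative assumptions rather than parameters reported by the authors; note also the caveat above that when a trial contains no real peak, the returned "latency" is determined by the window edges.

```python
import numpy as np

def single_trial_peaks(trials, times, window=(0.13, 0.20), polarity=-1):
    """Find one peak per trial inside a latency window (step 2 of a two-step ST method).

    trials: array (n_trials, n_times), cleaned single-trial waveforms for one channel
    times:  array (n_times,) of sample times in seconds
    window: search window in seconds (the values here are only placeholders)
    polarity: -1 to search for a negative-going peak, +1 for a positive-going peak
    """
    in_window = (times >= window[0]) & (times <= window[1])
    window_times = times[in_window]
    flipped = polarity * trials[:, in_window]
    idx = flipped.argmax(axis=1)                                 # per-trial extremum
    amplitudes = polarity * flipped[np.arange(trials.shape[0]), idx]
    latencies = window_times[idx]
    # mean ST amplitude and jitter (SD of single-trial latencies) per condition
    return amplitudes, latencies, amplitudes.mean(), latencies.std(ddof=1)
```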

P1-34: The aftereffect of different mouth shapes in Korean language system

Chunmae Lee

Department of Psychology, Yonsei University, Korea chunmei127@gmail.com Dayi Jung

Department of Psychology, Yonsei University, Korea Kyongmee Chung Department of Psychology, Yonsei University, Korea

An aftereffect is the phenomenon whereby adaptation to certain stimuli influences subsequent perception. Previous research has suggested an aftereffect of mouth shape (sustained /m/ or /u/) in English speakers. The current study investigated whether an aftereffect of mouth shape (pronunciations similar to /m/ or /u/ in Korean) occurs in Korean speakers and estimated the threshold range of the aftereffect. Fourteen students (6 males and 8 females), aged 25 to 29, participated in this study. They were adapted to faces in a consistent condition ("m" or "u" mouth shape) before judging the shape of the mouth. A repeated-measures ANOVA revealed a main effect of adaptation condition. In addition, only the aftereffect sizes for the 25% "u" shape and the 50% "u" shape were larger than 1. These results indicate that an aftereffect of mouth shape existed for participants in the "u" shape condition. However, these aftereffect sizes were not as large as those reported in previous research. Additionally, our estimation suggests that the threshold of the aftereffect for the 100% "u" shape lies between 0 and 25%. Finally, our findings suggest that aftereffects of mouth shape may vary according to the language system in use.

P1-35: Degraded recognition of facial expressions in patients with Parkinson's Disease

Li-Chuan Hsu

School of Medicine, China Medical University, Taiwan

Graduate Institute of Neural and Cognitive Sciences, China Medical University, Taiwan lchsu@mail.cmu.edu.tw Chia-Yao Lin,

Graduate Institute of Neural and Cognitive Sciences, China Medical University, Taiwan Chon-Haw Tsai,

Neuroscience Laboratory, Department of Neurology, China Medical College Hospital, Taiwan Yi-Min Tien

Department of Psychology, Chung Shan Medical University, Taiwan

Because of dopaminergic neurodegeneration, not only the motor and cognitive functions of patients with Parkinson's Disease (PD) but also their emotion recognition deteriorates over time. We aimed to determine whether PD patients with advanced motor problems show a much greater deficit in recognizing facial expressions than patients with mild motor problems. We recruited two groups of PD patients, with low (LMS) and high motor severity (HMS), compared them with age-matched healthy controls, and adopted an emotion identification task. All participants were asked to identify positive (happiness) or negative (sadness, fear, anger) faces. Results revealed that, compared to healthy controls, the LMS group performed worse only for sad faces. The HMS group was impaired in recognizing not only negative emotions (sadness and anger) but also the positive emotion. To verify that disease progression results in a decline in emotion recognition, we further divided the LMS group into two subgroups (low severity, LMS-L; high severity, LMS-H) according to the patients' motor scores while matching their depression scores. Results showed that LMS-H patients were more impaired than LMS-L patients for negative emotions (sadness, fear and anger). No deficits for the positive emotion were found at this early disease stage. We conclude that patients' emotion recognition worsens as the disease progresses: negative emotions are impaired first, followed by the positive one.

Supported by NSC 102-2410-H-039 -001.

P1-36: The difference of visual ability between experts and novices in visual art

Yusuke Tani

Toyohashi University of Technology, Japan tani@vpac.cs.tut.ac.jp Ryo Nishijima

Toyohashi University of Technology, Japan Takehiro Nagai

Graduate school of Science and Engineering, Yamagata University, Japan Kowa Koida

Electronics-Inspired Interdisciplinary Research Institute, Toyohashi University of Technology, Japan

Michiteru Kitazaki

Department of Computer Science and Engineering, Toyohashi University of Technology, Japan Shigeki Nakauchi

Department of Computer Science and Engineering, Toyohashi University of Technology, Japan

Experts in visual art, such as painters and designers, undoubtedly have special talents for depicting objects, and it is reasonable to suppose that their visual inspection is also superior. However, it remains unclear how their vision differs from that of novices. This study investigated differences in their ability to perceive surface quality. Highlights, or local bright regions, signal that a surface is glossy; however, if the positional congruence between highlights and shading is destroyed, the surface looks matte with bright spots rather than glossy. Images of bumpy glossy surfaces and modified versions, in which the specular reflection component was rotated, were used as stimuli. We used a 2AFC incongruence detection task and a 2AFC glossiness judgment task with the same stimuli, and compared the performance of professional designers with that of novices. The detection thresholds of the two groups did not differ, whereas their glossiness judgments did. The task order affected only the novices' judgments, especially at shorter durations: novices who performed the glossiness judgment first performed worse than the others. We conclude that both experts and novices can detect highlight incongruence, but only experts utilize it to perceive glossiness without any explicit cue.

Supported by Grant-in-aid for Scientific Research on Innovative areas #22135005.

P1-37: The effects of a first person shooter game on cognitive task performance Yasuhiro Seya

College of Information Science and Engineering, Ritsumeikan University, Japan

yseya@fc.ritsumei.ac.jp Hiroyuki Shinoda

College of Information Science and Engineering, Ritsumeikan University, Japan

It is well known that video game playing enhances various cognitive functions, such as attention and working memory. Recent studies have reported that this enhancement depends on the type of game played and the skills required. In the present study, we investigated the effects of a first person shooter (FPS) game on cognitive functions using a reaction time (RT) task, a useful field of view (UFOV) task, and a visual working memory (VWM) task. In Experiment 1, we measured performance on the three cognitive tasks for FPS players and non-FPS players. In Experiment 2, we examined differences in task performance before and after 10 hours of training on the FPS game. The results of Experiment 1 showed that FPS players performed better than non-FPS players on the UFOV and VWM tasks. The results of Experiment 2 showed increases in game scores and in performance on all cognitive tasks after training. These results support the findings of previous studies and suggest that FPS game playing may be useful, at least in part, for enhancing cognitive functions.

P1-38: Visual statistical learning of temporal structure in different hierarchical levels Jihyang Jun

Graduate Program in Cognitive Science, Yonsei University, Korea jihy.jun@gmail.com Sang Chul Chong Graduate Program in Cognitive Science, Yonsei University, Korea Department of Psychology, Yonsei University, Korea

Human observers can extract temporal regularities while they perceive a complex scene (Fiser & Aslin, 2002). A typical scene contains various objects that are processed at different hierarchical levels, such as the global and local levels. However, how such hierarchical structure influences the ability to learn statistical regularities has not been investigated. The current study investigated whether participants could learn temporal regularities of objects presented in triplet streams at both the global and local levels. In a learning phase, participants passively viewed Navon-like object streams in which temporal regularities were present at the two levels. In a subsequent 2AFC familiarity judgment, the learned triplets were compared with foil triplets violating the learned regularities at either the local (Experiment 1) or the global (Experiment 2) level. We found that participants could extract the statistical regularities of the temporal sequences at both the global and local levels. In addition, there was no difference in the degree of learning between the two levels. Given that the degree of learning did not differ between the global and local levels, regularities at different levels might be learned independently.

Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST, No. 2011-0025005).

P1-39: Responsive extrastriate maps despite a V1 lesion and quarterfield blindness Hiroshi Horiguchi

Ophthalmology, Jikei University School of Medicine, Japan hhiro@jikei.ac.jp Brian A Wandell

Psychology, Stanford University, USA Yaping Joyce Liao

Ophthalmology, Stanford University, USA Jonathan Winawer

Psychology, New York University, USA

Primary visual cortex (V1) projects signals to many regions of visual cortex, but other pathways bypass V1 and also convey signals to extrastriate visual areas. We report a study of a 68-year-old man with a lesion to right ventral occipital cortex, spanning ventral V1/V2/V3 and extending into hV4 and VO-1/2. Visual field testing showed a corresponding homonymous left upper quadrantanopia. We measured responses in spared portions of extrastriate maps that were deprived of V1 input. Visual field coverage was assessed by superimposing population receptive fields (pRFs) within a map, measured with fMRI. Spared right V1/V2/V3 covered only the lower left quarterfield, as expected from the lesion. Surprisingly, responses in several extrastriate map clusters (V3-A/B, temporal-occipital, and lateral occipital) covered the complete left hemifield. Within these clusters, pRF size did not differ systematically between the upper and lower visual fields. These measurements show that extra-geniculo-striate pathways can bypass V1/V2/V3, support complete visual field coverage, and maintain normal population receptive field sizes in extrastriate maps. Nonetheless, the patient remains blind in the quarterfield that is missing in V1/V2/V3. The presence of these extrastriate responses suggests that some visual function can be preserved or restored despite the loss of V1/V2/V3.
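Superimposing pRFs to assess visual field coverage can be sketched as below, assuming each pRF is modelled as a 2D Gaussian defined by its centre and size, and taking coverage at each visual field location as the maximum over the pRFs of a map; the function name, grid extent and the max-over-pRFs convention are illustrative assumptions rather than the exact procedure used in the study.

```python
import numpy as np

def visual_field_coverage(x0, y0, sigma, extent=12.0, n_grid=101):
    """Visual field coverage of one retinotopic map from its pRF parameters.

    x0, y0, sigma: arrays of pRF centres (deg) and sizes (deg) for the voxels or
    vertices of the map (hypothetical inputs). Returns the sampling grid and a
    coverage image in which each location holds the maximum pRF profile value.
    """
    grid = np.linspace(-extent, extent, n_grid)
    gx, gy = np.meshgrid(grid, grid)
    coverage = np.zeros_like(gx)
    for xc, yc, s in zip(x0, y0, sigma):
        prf = np.exp(-((gx - xc) ** 2 + (gy - yc) ** 2) / (2 * s ** 2))
        coverage = np.maximum(coverage, prf)
    return grid, coverage
```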

P1-40: The effect of depth information on visual attention under monocular and stereoscopic viewing: An object substitution masking study

Kana Miyaji

Graduate School of Science and Engineering, Kagoshima University, Japan sc110076@ibe.kagoshima-u.ac.jp Ken Kihara

Graduate School of Science and Engineering, Kagoshima University, Japan Jun Shimamura

NTT Media Intelligence Laboratories, Nippon Telegraph and Telephone Corporation, Japan Yukinobu Taniguchi

NTT Media Intelligence Laboratories, Nippon Telegraph and Telephone Corporation, Japan Sakuichi Ohtsuka

Graduate School of Science and Engineering, Kagoshima University, Japan

Because augmented reality (AR) has become so popular, we must determine how visual information on different depth planes can affect attention. Object substitution masking (OSM) is a phenomenon in which dots surrounding a target mask it when they remain after the target is offset; OSM depends on the division of attention. Our previous OSM study presented the target and masking dots on different depth planes. The results suggested that OSM was attenuated under binocular observation because the depth cues of vergence, disparity, and accommodation were available simultaneously. However, it was unclear whether OSM was affected by the depth difference between the target and mask dots when either vergence and disparity or accommodation alone could be used as the depth cue(s). To resolve this issue, we investigated the effect of depth separation between the target and mask dots on OSM under monocular and stereoscopic viewing. The results revealed that, unlike under binocular viewing, target accuracy did not improve, regardless of the depth difference and viewing style. These results suggest that using a combination of depth cues is important if we are to benefit from depth differences in visual information, and so facilitate attentional orienting in AR environments.

Supported by JSPS KAKENHI Grant Number 25730095.

P1-41: The level of categorical knowledge affects visual search efficiency Kanji Tanaka

Research Center for Advanced Science and Technology, The University of Tokyo, Japan kanji.t9@gmail.com Na Chen

Research Center for Advanced Science and Technology, The University of Tokyo, Japan Katsumi Watanabe

Research Center for Advanced Science and Technology, The University of Tokyo, Japan

Determining whether a visual target (e.g., a digit) is present or not is more difficult with distractors from the same category (e.g., digits) than with distractors from a different category (e.g., letters). We examined how the efficiency of visual search is related to the level of categorical knowledge. For this purpose, we used Kanji characters that represented the names of prefectures in Japan, categorized into six regions on the basis of their geographical relations, and recruited native Japanese observers and Chinese observers who could read the Kanji characters but whose categorical knowledge of the Japanese prefecture-region relationships varied. The observers searched for a specific target prefecture name among distractor prefecture names and indicated whether the target was present or not. We found that correctly reporting the absence of the target in target-absent trials was slower when the target and distractor prefectures were from the same region (e.g., Aomori among Tohoku-region names) than from different regions (e.g., Kyoto among Kanto-region names). The magnitude of this within-category effect correlated positively with test performance on the Japanese prefecture-region relationships. These results suggest that the level of categorical knowledge affects rejection efficiency in visual search.

P1-42: Time course of interference by task-irrelevant distractors in the Stroop and Simon effects

Satoko Ohtuka

Saitama Institute of Technology, Japan

satoko@sit.ac.jp Reishi Kogure

Saitama Institute of Technology, Japan

Human information processing can be slowed by task-irrelevant distractors (e.g., in the well-known color-word Stroop effect and in the spatial Simon effect). This study examined the time course of the interference to clarify its underlying mechanisms. In Experiment 1, which examined the Stroop effect, a color-word stimulus was presented in which the ink color was either consistent or in conflict with the word meaning. Participants had to identify the ink color. In Experiment 2, which examined the Simon effect, a directional stimulus was presented at a consistent or conflicting relative position. Participants were asked to identify the relative position of the stimulus. The stimulus onset asynchrony (SOA) between the relevant and irrelevant components varied from -300 ms to 300 ms: the irrelevant component preceded the relevant component at negative SOAs, and the relevant component appeared first at positive SOAs. Mean reaction time (RT) was longest when the SOA was 0 ms in both experiments, whereas the shape of the RT function against SOA differed between them. The consistent conditions yielded generally shorter RTs than the conflict conditions, and the discrepancy was more substantial in Experiment 1. These results imply that the two interference effects occur at different processing stages.

P1-43: Vection inhibition by adding semantic meanings to optic-flow stimuli

Masaki Ogawa

Department of Design, Kyushu University, Japan masakiogawa@outlook.com Takeharu Seno Institute for Advanced Study, Kyushu University, Japan Faculty of Design, Kyushu University, Japan

Research Center for Applied Perceptual Science, Kyushu University, Japan

Self-motion perception determined by vision alone is termed "vection". We hypothesized that vection strength is modulated by the semantic meaning of the stimuli. In Experiment 1, we used three different objects as downward motion signals, along with three meaningless shapes matched to the objects in size, luminance, and color. For the object conditions, the three moving components were a feather, a petal, and a leaf. Participants were asked to press a button when they perceived self-motion. After each trial, the participants rated the subjective vection strength on a 101-point rating scale ranging from 0 (no vection) to 100 (very strong vection). The results showed that vection was strongly inhibited when the stimuli had semantic meaning. In Experiment 2, we used the downward motion of identical dots to induce vection. Participants observed the stimuli while holding either an umbrella or a wooden sword. The results showed that vection was inhibited when participants held the umbrella and the stimuli were perceived as having the semantic meaning of falling rain or snow. Together, the two experiments suggest that vection is modulated by the semantic meaning of stimuli.

P1-44: Cognitive processing of gaze and arrow in a flanker task Huan Yu

School of Psychology, Beijing Normal University, China carolll@126.com Xuemin Zhang School of Psychology, Beijing Normal University, China

State Key Laboratory of Cognitive Neurosciences and Learning, Beijing Normal University, China

As a representative social cue, gaze differs from an arrow, which is a symbolic cue. Although many studies have examined how the human mind selects relevant information from the environment, the results remain controversial, and gaze has rarely been studied in this context. This study employed behavioral and psychophysiological measures (event-related potentials; ERP) to examine the cognitive processing of gaze and arrows in a flanker task. In gaze (arrow) task blocks, participants indicated the orientation of a central target gaze (arrow) while ignoring flanking gazes (arrows) oriented in either the same or a different direction from the target. The task in mixed blocks was almost the same as in the gaze (arrow) task, except that the distractors were arrows when the target was a gaze, and vice versa.

Amplitudes between 500 ms and 800 ms were more positive for incongruent than for congruent orientations of target and flankers. Amplitudes between 200 ms and 350 ms were more positive for the mixed task than for the gaze (arrow) task. The results suggest that arrows are more powerful than gaze at distracting attention and that it may be easier to select relevant information in the context of gaze; gaze distractors induce less cognitive conflict than arrow distractors.

P1-45: Contrast effect on legibility of Thai letters presented on LED display

Kitirochna Rattanakasamsuk

Color Research Center, Faculty of Mass Communication Technology, Rajamangala University of Technology, Thailand

kitirochna@gmail.com

Several Thai letters differ from other letters in only a small part of the letter, such as the presence of a letter head. When those Thai letters are presented on a bright display, the elderly may have difficulty identifying them because glare from the text itself or from the background may obscure the letters' details. In this research, we investigated the effect of text/background contrast on the legibility of Thai letters. Three groups of subjects participated. The first group consisted of 15 young people with normal or corrected-to-normal visual acuity. The second group was a simulated-elderly group: the young people from the first group wore cataract-simulating goggles while performing the experiment. The third group consisted of five elderly people. The stimulus configuration was a row of ten random Thai letters presented on an LED display. Text/background contrast comprised five levels of positive and five levels of negative contrast. The subjects viewed the stimuli in a room illuminated at 0 or 300 lux. We found that the results of the young and the simulated elderly were quite similar to each other, whereas the results of the real elderly differed substantially from those of the other two groups: the text/background contrast required by the elderly was higher than that required by the other two groups.

P1-46: The effects of attentional concentration on dynamic characteristics of drift eye movements

Takeshi Kohama

Faculty of Biology-Oriented Science and Technology, Kinki University, Japan kohama@info.waka.kindai.ac.jp Daisuke Noguchi

Faculty of Biology-Oriented Science and Technology, Kinki University, Japan Sho Kikkawa

Faculty of Biology-Oriented Science and Technology, Kinki University, Japan Hisashi Yoshida

Faculty of Biology-Oriented Science and Technology, Kinki University, Japan

In this study, we analyzed drift eye movements while observers gazed with their attention concentrated in the foveal region, in order to objectively evaluate the effects of visual attention on the dynamics of drift eye movements. We propose a signal processing method that separates microsaccades and drift eye movements from fixational eye movements, and we compared the properties of the extracted drift eye movements across conditions of attentional intensity. Subjects performed RSVP tasks with instructions to maintain fixation on a stream of alphabet characters. The subjects' attention was controlled by changing the contrast of the target characters. Microsaccades were detected using an order-statistic low-pass differentiation filter. The start and end points of each microsaccade were identified by a discrete pulse transform analysis, and the microsaccades were then removed from the data. The gaps left by the removed microsaccades were filled using an autoregressive model to extract pure drift eye movements. After extracting the drift eye movements, we analyzed their frequency components and mean-square displacements to examine the fluctuation properties of the drifts. The results showed that drift eye movements were not influenced by foveal attention allocation but were instead affected by diffusing attention to the peripheral visual field.
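As a rough illustration of the final analysis step, the sketch below computes the mean-square displacement (MSD) of a microsaccade-free drift trace as a function of time lag. It assumes the detection, removal, and autoregressive gap-filling steps have already been applied, uses a toy random-walk trace in place of real data, and does not reproduce the authors' order-statistic or discrete pulse transform filters.

```python
import numpy as np

def mean_square_displacement(x, y, max_lag):
    """Mean-square displacement of a 2D drift trace as a function of lag.

    x, y: gaze position samples (deg) with microsaccades already removed
    and the gaps filled (e.g., by an autoregressive model), so the trace
    is treated as pure drift.
    """
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        dx = x[lag:] - x[:-lag]
        dy = y[lag:] - y[:-lag]
        msd[lag - 1] = np.mean(dx ** 2 + dy ** 2)
    return msd

# Toy drift trace: a 2D random walk standing in for real drift data.
rng = np.random.default_rng(0)
steps = rng.normal(scale=0.01, size=(2, 2000))   # deg per sample
x, y = np.cumsum(steps, axis=1)
msd = mean_square_displacement(x, y, max_lag=200)

# The slope of log(MSD) vs log(lag) characterizes the diffusive behavior
# of drift (~1 for normal diffusion, <1 sub-diffusive, >1 super-diffusive).
lags = np.arange(1, 201)
slope = np.polyfit(np.log(lags), np.log(msd), 1)[0]
print("diffusion exponent ~", round(slope, 2))
```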

P1-47: Measuring attention around the hand by using flash lag effect

Ryota Nishikawa

Graduate School of Information Sciences, Tohoku University, Japan rnishi@riec.tohoku.ac.jp Kazumichi Matsumiya Graduate School of Information Sciences, Tohoku University, Japan Research Institute of Electrical Communication, Tohoku University, Japan Ichiro Kuriki

Graduate School of Information Sciences, Tohoku University, Japan

Research Institute of Electrical Communication, Tohoku University, Japan Satoshi Shioiri

Graduate School of Information Sciences, Tohoku University, Japan

Research Institute of Electrical Communication, Tohoku University, Japan

Recent studies have suggested an enhancement of visual attention near the hand. We measured the flash lag effect (FLE) under different hand positions. The FLE is an illusion in which a flash presented next to a moving object is perceived as lagging behind it, and the FLE has been used to measure the degree of attention. We first confirmed that the FLE was smaller at attended locations than at unattended locations. Two stimuli presented to the left and right of the central fixation on the display were used to examine the attention effect. When the location of the flash was fixed at either the left or the right, the FLE was smaller than when it was presented randomly on the left or right. Second, we compared the FLE with either hand positioned right below a display corner (left bottom for the left hand and right bottom for the right hand). An attention effect near the hand predicts a smaller FLE for the left stimulus when the left hand is under the left bottom corner than for the right stimulus, and vice versa. There was, however, no clear effect of hand position, whereas we found that the FLE was smaller on the right than on the left for either hand position.

P1-48: Visual crowding distorts oculomotor space

Masahiko Terao

The University of Tokyo, Japan masahiko_terao@mac.com Ikuya Murakami Department of Psychology, The University of Tokyo, Japan

When multiple objects are presented in close proximity in peripheral vision, the identification of visual features becomes difficult. This effect is known as visual crowding. Visual crowding also distorts the apparent positions of objects in a cluttered scene (Greenwood et al., 2009). Although accurate peripheral position information is important for efficient eye movements, it is unclear how visual crowding influences oculomotor behavior. Here, we investigated whether visual crowding also distorts the landing position of a saccade. A target cross and two flanker crosses were presented at 12 degrees to the right of the fixation point. In this configuration, the apparent crossing point of the target is shifted toward the crossing points of the two flankers. Observers were asked to make a saccade to the crossing point of the target as soon as the target appeared. We found that the landing positions of saccades were also biased toward the crossing points of the two flankers. This effect could not be ascribed to a general tendency for saccades to land between two nearby objects. Therefore, in a cluttered scene the oculomotor system is driven by perceived positions resulting from spatial interactions, rather than by physical retinal positions.

Supported by a JSPS Grant-in-Aid for Scientific Research on Innovative Areas (25119003).

P1-50: Guidance or interference? Augmented feedback benefits bimanual coordination even after removal

Shiau-Chuen Chiou

Institute of Cognitive Neuroscience, National Central University, Taiwan sylvia.chiou@gmail.com Erik Chihhung Chang

Institute of Cognitive Neuroscience, National Central University, Taiwan

Previous studies have shown that learning of bimanual coordination is more resistant to the removal of feedback when acquired with auditory feedback than with visual feedback. However, it is unclear whether this differential "guidance effect" between feedback modalities is due to better sensorimotor integration via the non-dominant auditory feedback channel or to better linkage to kinesthetic information under rhythmic input. The current study aimed to distinguish how the modality (visual vs auditory) and information type (visuospatial vs rhythmic) of concurrent augmented feedback influence bimanual coordination learning. Specifically, the feedback provided was either a Lissajous plot indicating the integrated position of both arms or a visual or auditory rhythm reflecting the relative timing of the movement. The results showed differential progression of error reduction under these three conditions during acquisition, and diverse performance changes after acquisition when feedback was removed, depending on the feedback condition, implying that the guidance effect may be jointly determined by the modality and information type of feedback. Furthermore, a similar tune-in effect in an additional no-feedback interference task suggested that an internal control strategy could have been acquired. Feedback removal may shift participants' attention from an external to an internal focus, and such a conscious control strategy of movement may actually interfere with bimanual coordination.

P1-51: Orientation of the palms alters bistable visual motion perception

Godai Saito

Tohoku University, Japan

godai.saito@gmail.com Jiro Gyoba

Graduate School of Arts and Letters, Tohoku University, Japan

Bistable visual motion perception, as in the stream-bounce phenomenon, can be altered by the presentation of sound, whereas little is known about the effect of proprioceptive information. This study investigated whether the orientation of the palms of the hands, when located just below a visual display, would alter bistable motion perception. Participants reported whether two moving objects appeared to "stream through" or "bounce off" each other. They performed the judgment task while keeping their palms together, using only the right palm, making contact between the backs of their hands just below the coincidence point of the two moving objects, or resting both hands on their laps. The results showed that the participants tended to report a bouncing percept more frequently when they kept their palms together than they did in the other palm-orientation conditions. The findings of the present study suggest that motion-related perceptual ambiguity was reduced by proprioceptive information regarding the orientation of the palms.

P2-1: Recognition of illumination and object color changes on an object with specular reflection components

Takehiro Nagai

Department of Informatics, Yamagata University, Japan tnagai@yz.yamagata-u.ac.jp Shigeyuki Kaneko

Department of Informatics, Yamagata University, Japan Yuki Kawashima

Department of Informatics, Yamagata University, Japan Yasuki Yamauchi

Department of Informatics, Yamagata University, Japan

One potential cue for estimating illumination color from retinal images (e.g., for color constancy), which is an ill-posed problem, is the color of specular reflection components on objects in a scene, because they typically reflect the illumination color regardless of object colors. In this study, we investigated the effects of specular reflection components on the recognition of illumination color change and object color change. The stimulus in our psychophysical experiment was an object on a black background created with computer-graphics software. It was either a rough object with specular reflection components or a flat object with no specular components. The color of all pixels on the object, including the specular reflection components, abruptly changed from gray to a chromatic color, interleaved by a black screen, in a trial. The observer reported whether the color change was recognized as an illumination color change or an object color change. In the results, a small color change was recognized as an illumination change and a large one as an object change, regardless of the object type. These results suggest that the colors of specular components rarely contribute to illumination color recognition; instead, experience of different illumination colors may more strongly affect the recognition of illumination colors.

P2-2: Brightness induction similar to the surround modulation Makiko Miyasaka

Graduate School of Art and Design, Joshibi University of Art and Design, Japan 132102@isis.joshibi.jp Katsuaki Sakata Joshibi University of Art and Design, Japan

The perceived contrast of a grating in the center region is modulated by the orientation of the surround region. We investigated the perceived brightness of a uniform gray center region while changing the difference in angle between the gratings. As a result, the strength of the brightness induction in the center region changed with increasing angle between the two gratings. The brightness contrast observed in the center region was strongest when the surround was iso-oriented and weakest when the surround was cross-oriented. These results are similar to the changes observed in surround modulation by orientation contrast, suggesting that inhibition by orientation-selective cells affects the strength of brightness induction.

P2-3: Colorimetry-free color management system for displays Naoki Kurita

Graduate School of Information Science and Engineering, Ritsumeikan University, Japan is0064ik@ed.ritsumei.ac.jp Hiroyuki Shinoda

College of Information Science and Engineering, Ritsumeikan University, Japan Yasuhiro Seya

College of Information Science and Engineering, Ritsumeikan University, Japan

Although the phenomenon of color constancy enables the surface colors of objects to be perceived as constant even when the illumination environment changes, changes in illumination can alter color appearance on self-emitting displays. Several studies have proposed color management systems (CMSs) that keep the color appearance of displays constant. However, most CMSs are based on measurements made with a colorimeter; as a result, several factors (e.g., individual differences and adaptation) are not taken into account. In the present study, we propose a new CMS that does not require a colorimeter. In our method, observers performed a color-matching task: a color chip and a display presenting a square were located in two different booths under different illuminations. Observers repeatedly moved between the two booths, viewing the chip and the display, and adjusted the color of the square on the display until it appeared equivalent to that of the chip. From the RGB values set by the observers, a color conversion matrix was derived. Results showed that RGB values predicted from the conversion matrix fit quite well with those obtained in the color-matching task, indicating that our CMS can accurately predict how colors appear to observers under different illuminations.
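The abstract does not specify how the conversion matrix is derived; a minimal sketch, assuming a simple least-squares fit of a 3x3 matrix between observer-matched RGB triplets under two illuminations, might look like the following (all RGB values are invented for illustration and are not the authors' data):

```python
import numpy as np

# Hypothetical observer matches: rows are RGB values (0-1) of the same
# color chips matched on the display under illumination A and B.
rgb_A = np.array([[0.80, 0.20, 0.20],
                  [0.20, 0.75, 0.25],
                  [0.25, 0.25, 0.80],
                  [0.70, 0.70, 0.20],
                  [0.60, 0.60, 0.60]])
rgb_B = np.array([[0.72, 0.25, 0.28],
                  [0.18, 0.70, 0.30],
                  [0.30, 0.28, 0.72],
                  [0.66, 0.68, 0.26],
                  [0.58, 0.60, 0.63]])

# Least-squares 3x3 matrix M such that rgb_B is approximately rgb_A @ M.
M, *_ = np.linalg.lstsq(rgb_A, rgb_B, rcond=None)

# Predict an appearance-preserving RGB under illumination B for a new color.
new_rgb_A = np.array([0.5, 0.4, 0.3])
print("predicted RGB under B:", new_rgb_A @ M)
```

Because the matrix is fit from the observers' own matches, individual differences and adaptation are folded into the transform, which is the point the abstract emphasizes.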

P2-4: Perceptual rivalry in afterimages

Hiroyuki Ito

Department of Design, Kyushu University, Japan ito@design.kyushu-u.ac.jp Erika Tomimatsu

Department of Design, Kyushu University, Japan

After a scene is viewed steadily, its afterimage is seen in complementary colors (negative afterimage). In this study, perceptual rivalry in afterimages is reported. In Experiment 1, participants viewed a crosshatch pattern consisting of thin vertical and horizontal lines in red or green colors. After ten seconds of steady viewing, participants saw afterimages on a blank grey screen. A group of cyan or magenta lines alternately appeared as afterimages. When same-color lines were used as adaptation stimuli, an afterimage in a crosshatch pattern appeared with little rivalry. In Experiment 2, the horizontal and vertical lines were presented to the left and right sides of fixation, respectively. Even when the two groups of lines were different colors, little rivalry occurred. The results show that a negative afterimage tends to appear as decomposed basic visual elements in the brain. The occurrence of perceptual rivalry may indicate mutual inhibition between represented visual objects that share a space.

Supported by KAKENHI (23243076).

P2-5: Spatial layout of a natural scene with no luminance information influences lightness perception

Kei Kanari

Department of Information Processing, Tokyo Institute of Technology, Japan

kei.kanari@ip.titech.ac.jp Hirohiko Kaneko

Department of Information Processing, Tokyo Institute of Technology, Japan

Our visual system may estimate a scene's illumination from its spatial layout, based on an apparent correlation between the illumination and the layout of the scene. For instance, indoor and outdoor spaces are typically illuminated by artificial lighting and the sun, respectively. This study investigated the influence of the surrounding spatial layout on lightness perception when no luminance information is given. The illumination and spatial layout of natural scenes were measured to examine their correlation. The stimulus consisted of random dots with binocular disparity, representing a 3D layout based on scanned data of a distance distribution. Observers reported the lightness of a test patch presented in the stimulus space by adjusting the luminance of a comparison patch. Results showed that the matched luminance was greater when the test patch was presented in the spatial layout of the indoor scene than when it was presented in that of the outdoor scene, indicating that the visual system interpreted the illumination in the indoor scene as weaker than that in the outdoor scene. This was consistent with the field measurement data. These results suggest that the visual system can infer the illumination of a scene from its spatial layout, with no luminance information, for lightness perception.

P2-6: Development of contrast dependent pattern discrimination in the macula during childhood

Li-Ting Tsai

School of Occupational Therapy, College of Medicine, National Taiwan University, Taiwan tingwind718@gmail.com Chien-Chung Chen

Department of Psychology, National Taiwan University, Taiwan Yu-Chin Su

Stroke Center and Department of Neurology, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taiwan. Chien-Te Wu

School of Occupational Therapy, College of Medicine, National Taiwan University, Taiwan Kuo-Meng Liao

Department of Endocrine and Metabolism, Zhong-Xiao branch, Taipei City Hospital, Taiwan

We investigated pattern discrimination performance at various contrast levels across the horizontal and vertical meridians of the macular visual field in children. Participants were adults (n = 7, age 22-33 years) and five groups of 44 children with normal or corrected-to-normal visual acuity. The stimuli were Lea symbols with contrasts of 10%, 20%, 40% and 80%, randomly presented at the fovea and at 2°, 4°, 6° and 8° eccentricity along the upper, lower, right, and left meridians. At each location, the size threshold was measured with a staircase procedure. A linear regression model was fit to size threshold as a function of eccentricity. The results of the model fitting were affected by age, meridian, and contrast. The slope of the linear function was steeper along the vertical than the horizontal meridian and at low than at high contrast. Pattern discrimination of the children in the 8-12-year-old group reached the adult level at all contrasts. Pattern discrimination of the children in the 4-year-old group was more mature in the high-contrast conditions. In conclusion, pattern discrimination at high contrast levels matures more rapidly than at low contrast levels. There is an interaction among age, location, and contrast in their effects on pattern discrimination.

Supported by Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation (TCRD-TPE-103-57).
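The linear regression described in the preceding abstract can be sketched in a few lines. The threshold values below are invented placeholders, and the slope comparison only illustrates the kind of fit the abstract reports, not the authors' actual analysis.

```python
import numpy as np

# Hypothetical size thresholds (arbitrary units) at five eccentricities
# (deg) for one meridian and one contrast level.
ecc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
threshold = np.array([1.0, 1.4, 1.9, 2.3, 2.9])

# Fit threshold = slope * eccentricity + intercept by least squares.
slope, intercept = np.polyfit(ecc, threshold, 1)
print(f"slope = {slope:.3f} per deg, foveal intercept = {intercept:.3f}")

# Comparing this slope across age groups, meridians, and contrast levels
# is the analysis the abstract describes: steeper slopes indicate faster
# deterioration of discrimination with eccentricity.
```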

P2-7: Highly saturated color evoked human dorsal pathway: fMRI studies Ippei Negishi

School of information, Kochi University of Technology, Japan negishi.ippei@kochi-tech.ac.jp Keizo Shinomori

School of information, Kochi University of Technology, Japan

We investigated the processing of saturation in the human cortex using fMRI. We presented 10 circles filled with 10 different hues (5R, 5YR, 5Y, 5GY, 5G, 5BG, 5B, 5PB, 5P and 5RP in the Munsell color system) on a background of an achromatic luminance-dithering pattern, and manipulated the Munsell chroma of the circles at four levels (6, 4, 2 and 0). The diameter of each circle was 3.0 deg and its center was 6.2 deg from the fixation point. In each trial, the visual stimuli described above and their scrambled patterns were alternated at 1 Hz. Subjects simply fixated the center of the screen during the trial. Each block lasted 15 s, followed by a 15 s interval. In the first 5 s of the interval, subjects performed an easy perceptual task concerning the shape of the fixation point to confirm that they were awake.

The results showed that brain activations related to the value of chroma were observed in the dorsal ("where") visual pathway, whereas no significant activations were observed in the ventral ("what") pathway when the significance threshold for the BOLD signal was set to p < 0.001 (uncorrected). This suggests that the saturation of the visual stimuli evoked the saliency of their positions. The results will be verified with an analysis of the retinotopic coordinates in each subject's cortex.

Supported by KAKENHI 24300085 and 24650109.

P2-8: Adding blue or yellow sector onto Benham's top

Chiemi Miyata

Graduate School of Science and Engineering, Kagoshima University, Japan sc109069@ibe.kagoshima-u.ac.jp Ken Kihara

Graduate School of Science and Engineering, Kagoshima University, Japan Sakuichi Ohtsuka

Graduate School of Science and Engineering, Kagoshima University, Japan Hiroshi Ono

Department of Psychology, York University, Canada

Schramme showed that the S and (M+L) cone-opponent color mechanism is responsible for seeing a yellowish apparent color at the transition from black to white and a bluish one at the transition from white to black when Benham's top is spinning. To better understand this mechanism, we investigated the effects of adding a blue or yellow sector at the transition edges. Nine observers with normal color vision were asked to judge whether the bluish tint component was greater in the experimental stimulus than in the spinning Benham's top. We found that (1) the stimulus with yellow (or blue) colored sectors at black-to-white (or white-to-black) transitions induced a bluish (or yellowish) color at both transitions, and (2) the stimulus with blue (or yellow) colored sectors at black-to-white (or white-to-black) transitions induced a highly saturated bluish color at the white-to-black edges but induced the same amount of yellowish-colored arcs at the black-to-white edges as the normal Benham's top. Together, (1) and (2) show that the yellow sector affects percepts at both the black-to-white and white-to-black transitions, whereas the blue sector affects percepts only at the white-to-black transition.

Supported by JSPS KAKENHI Grant Number 24500149.

P2-9: Time course of color appearance change on mobile display under illuminance change Hidenori Tanaka

Graduate School of Advanced Integration Science, Chiba University, Japan aada0372@chiba-u.jp Hirohisa Yaguchi

Graduate School of Advanced Integration Science, Chiba University, Japan Yoko Mizokami

Graduate School of Advanced Integration Science, Chiba University, Japan

The color appearance of a self-luminous mobile display changes as the surrounding illuminance changes. To maintain a constant appearance, the colors on the display should be corrected according to the change in color appearance. We investigated the time course of color appearance change on a mobile display under a rapid change in the surrounding illuminance.

We used a haploscopic color matching technique. A display was placed in each of two boxes set side by side, and the color of a square test patch on each display was compared. Test colors were red, green, yellow, and blue. We tested chroma, hue, and lightness appearance in separate sessions. After adaptation to the same illuminance level (1000 or 0 lx) with both eyes, the illuminance of the right box was changed (to 0 or 1000 lx) and observers adjusted the color of the test patch in the right box to match that in the left box. They repeated the judgment twelve times during 0-120 s after the illuminance change.

We found that, when illuminance decreased, the appearance of chroma decreased, red and green shifted toward yellow, and lightness increased. The opposite trend was observed when illuminance increased. Additionally, chroma and hue appearance changed rapidly for the first 10 s and became stable thereafter.

P2-10: Measurement of color appearance with color category rating method

Yasuki Yamauchi

Department of Informatics, Yamagata University, Japan yamauchi@yz.yamagata-u.ac.jp Yuhei Shoji

Yamagata University, Japan Tatsuya Tajima

Kyoto University, Japan Takehiro Nagai

Department of Informatics, Yamagata University, Japan Taiichiro Ishida

Kyoto University, Japan

Categorical color naming has been used as a method to measure color appearance. Describing color space with the 11 basic color terms (red, blue, green, yellow, purple, pink, orange, brown, black, grey and white) is useful for grasping general characteristics. To measure color appearance in more detail, a category rating estimation method was proposed that allowed a subject to use up to three basic color terms, ranked in order. However, when the same three color names are used in the same order for two colors, it is impossible to distinguish the two colors even if they differ in appearance. We modified this method by permitting subjects to rate the intensity of each color element freely. A subject reported the colors perceived in a color chip, in order, using up to three of the 11 basic color terms, and also rated the weight of each selected color so that the weights summed to 10 points. For example, a subject might describe the color appearance of a chip as purple 7, blue 2, and gray 1. We conducted an experiment with Munsell color chips. Our results will be compared with those obtained from an elementary color naming experiment.

P2-11: Your eyes want to see the illusion: Directional asymmetry of eye movements increases illusory motion

Soyogu Matsushita

Ritsumeikan Global Innovation Research Organization, Ritsumeikan University, Japan Graduate School of Human Sciences, Osaka University, Japan soyogu@hus.osaka-u.ac.jp Shigeru Muramatsu

Department of Psychology, Ritsumeikan University, Japan Akiyoshi Kitaoka Department of Psychology, Ritsumeikan University, Japan

A stationary picture can induce strong motion perception—as if it is drifting—when the picture is composed of systematically arranged patches of luminance gradients (Kitaoka and Ashida, 2003, Vision, 15, 261-262). By manipulating the direction of both luminance gradients and observers' eye movements, Matsushita et al. (2013, Perception, 42, 39) showed that the magnitude of illusory motion is greater when those directions are orthogonal than when they are parallel. In the current study, participants observed the stimuli with their natural eye movement, and we measured the distances that the eyes traveled in orthogonal and parallel directions. The results demonstrated that observers moved their eyes more in the direction orthogonal to the luminance gradient direction. This suggests that observers unconsciously moved their eyes to perceive greater illusory motion.

P2-12: Effect of distribution of elements between front and back surfaces on perceived numerosity for a stereo-transparency stimulus

Saori Aida

Graduate School of Marine Science and Technology, Tokyo University of Marine Science and Technology, Japan

Japan Society for the Promotion of Science (JSPS), Japan saori.aida.t@gmail.com Tsutomu Kusano

Graduate School of Marine Science and Technology, Tokyo University of Marine Science and Technology, Japan

Koichi Shimono

Graduate School of Marine Science and Technology, Tokyo University of Marine Science and Technology, Japan

Elements composing a three-dimensional (3D) stereoscopic stimulus (which, when fused, is seen as two transparent surfaces) are estimated to be more numerous than elements composing a two-dimensional (2D) stereoscopic stimulus (which, when fused, is seen as a single surface), even when the two stimuli contain the same elements (Aida et al., 2012). We examined the effect of the distribution of elements between the two surfaces of the fused 3D stimulus on this numerosity overestimation in two experiments in which 2D and 3D stimuli (random-dot stereograms) were presented side by side. In Experiment 1, observers judged which of the two stimuli had the larger number of elements; in Experiment 2, they judged whether the front (or back) surface of the 3D stimulus or the single surface of the 2D stimulus had the larger number of elements. Results showed that the ratio of the number of elements on the front surface to that on the back surface had no effect on the numerosity overestimation of the 3D stimulus. Further, overestimation was observed for the back surface but not for the front surface, suggesting that the perceived elements of the back surface may play a role in the phenomenon, although the overall numerosity overestimation is not simply that of the back surface.

P2-13: Two distinct patterns of activity in early visual areas during binocular rivalry

Hiroyuki Yamashiro

Aino University, Japan yamashiro.hiroyuki@gmail.com Hiroki Yamamoto

Graduate School of Human and Environmental Studies, Kyoto University, Japan Hiroaki Mano

Departments of Medical Informatics, Meiji University of Integrative Medicine, Japan Masahiro Umeda

Departments of Medical Informatics, Meiji University of Integrative Medicine, Japan Toshihiro Higuchi

Departments of Neurosurgery, Meiji University of Integrative Medicine, Japan Jun Saiki

Graduate School of Human and Environmental Studies, Kyoto University, Japan

Previous fMRI and electrophysiological studies of binocular rivalry have yielded inconsistent results regarding the relationship between visual awareness and the visual processing hierarchy. In the present human fMRI study, we found two distinct patterns of activity in early visual areas during binocular rivalry, each of which was consistent with previous fMRI or electrophysiological findings. We measured brain activity while subjects viewed a novel binocular rivalry stimulus, which allowed us to measure not only the response modulations around the time of perceptual transitions but also the responses evoked by the physical onset of the stimulus, whether it was suppressed or dominant during binocular rivalry. The onset responses were larger when the stimulus was visible than when it was invisible in all areas. This elevation of the responses increased along the visual processing hierarchy, consistent with previous electrophysiology. On the other hand, the response modulations around the time of perceptual transitions were equally large across V1, V2, V3, and V4v, which is consistent with previous fMRI studies. Our results suggest that the discrepancy between fMRI and electrophysiology may not be an artifact of intrinsic signal differences but may arise from two distinct processes in early visual areas underlying binocular rivalry.

Supported by JSPS KAKENHI Grant Numbers 25135720, 24240041.

P2-14: Perceived speed of apparent motion in high-speed conditions

Yutaka Nakajima

The University of Electro-Communications, Japan nakajima@hi.is.uec.ac.jp Yutaka Sakaguchi

The University of Electro-Communications, Japan

Apparent motion can be perceived as faster than continuous motion of the same physical speed in some cases (Castet, 1995; Giaschi & Anstis, 1989). We investigated whether this perceptual difference also occurs for motion stimuli at much higher speeds than previously examined. A moving rectangle was displayed with a high-speed projector at a 500 Hz refresh rate (i.e., a frame duration of 2 ms). The travel distance was 480 pixels (16 deg of visual angle). Continuous motion consisted of 240 frames of rectangles moving at 2, 4, 6, or 8 pixels/frame (33, 66, 99, or 132 deg/s). Apparent motion was presented with SOAs of 8, 16, 24, or 40 frames (16, 32, 48, or 80 ms) and an on-time of 4 frames (8 ms). Observers were asked to report which stimulus moved faster, the continuous or the apparent motion (with various speed combinations), in a two-interval forced-choice task. The results showed that apparent motion was perceived as faster than continuous motion even when the two stimuli moved at the same speed, and that larger SOAs enhanced this effect. This indicates that the perceptual speed-up of apparent motion is consistently observed in this higher speed range.

Supported by JST CREST.
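The stimulus parameters in the preceding abstract can be checked with a quick unit conversion, assuming, as stated, that 480 pixels span 16 deg and that the projector runs at 500 Hz; this is only an arithmetic sanity check under those assumptions, not part of the authors' methods.

```python
# Quick unit check for the stimulus parameters stated in the abstract.
px_per_deg = 480 / 16.0          # 30 px/deg (assumed from the stated geometry)
frame_rate = 500.0               # Hz, so one frame lasts 2 ms

# Continuous-motion speeds: ~33.3, 66.7, 100.0, 133.3 deg/s, close to the
# rounded values (33, 66, 99, 132 deg/s) quoted in the abstract.
for px_per_frame in (2, 4, 6, 8):
    deg_per_s = px_per_frame * frame_rate / px_per_deg
    print(f"{px_per_frame} px/frame -> {deg_per_s:.1f} deg/s")

# Apparent-motion SOAs in milliseconds: 16, 32, 48, 80 ms.
for soa_frames in (8, 16, 24, 40):
    print(f"SOA {soa_frames} frames -> {soa_frames / frame_rate * 1000:.0f} ms")
```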

P2-15: Impaired speed estimation of overlapping moving objects

Kentaro Yamamoto

The University of Tokyo, Japan

yamaken@fennel.rcast.u-tokyo.ac.jp Katsumi Watanabe

Research Center for Advanced Science and Technology, The University of Tokyo, Japan

The present study examined whether the overlapping order of multiple moving objects influences speed perception. In Experiment 1, white and black cross-shaped objects rotating in opposite directions were presented simultaneously, one as the standard (constant speed of 0.2 rps) and the other as the test (various speeds) stimulus. The crosses were presented at the same location (i.e., one overlapping the other at the fixation point) or at separate locations (i.e., left and right). The results of the speed comparison task showed that, when the stimuli were presented at the same location, the discrimination threshold was higher and the speed of the back stimulus was perceived to be slower than that of the front stimulus. In Experiment 2, we presented the test stimulus after presenting two overlapping standard stimuli and asked observers to compare the rotation speeds of the test stimulus and the front or back standard stimulus. We found that the back stimulus was still perceived as slower, but the discrimination thresholds did not differ. These results suggest that overlap impairs the simultaneous comparison of the speeds of two moving objects and that perceived speed changes depending on the overlapping order.

P2-16: Motion illusion induced by color changes

Akiyoshi Kitaoka

Department of Psychology, Ritsumeikan University, Japan akitaoka@lt.ritsumei.ac.jp

A motion illusion in a stationary image was proposed by Fraser and Wilcox (1979). This illusion was enhanced with a new pattern produced by Kitaoka and Ashida (2003): "Rotating snakes" is one of the applied works. Although the Fraser-Wilcox illusion chiefly depends on luminance, its color-dependent variant was proposed by Kitaoka (2010). The color-dependent Fraser-Wilcox illusion is produced with a combination of four elemental parts: long-wavelength part, short-wavelength part, dark part, and bright part. The present study reports a new motion illusion induced by color changes, a strong effect that occurs when an image of the color-dependent Fraser-Wilcox illusion is smoothly alternated with a homogeneously colored blank. The direction of illusory motion depends on whether the blank color is a long-wavelength or short-wavelength one. Moreover, the illusion is fully observed in the central vision as well as the peripheral vision. Possible mechanisms are discussed.

P2-17: The effect of color on vection Megumi Yamaguchi

Graduate School of Information Science and Engineering, Ritsumeikan University, Japan sj0012hx@ed.ritsumei.ac.jp Yasuhiro Seya

College of Information Science and Engineering, Department of Computer and Human Intelligence, Ritsumeikan University, Japan Hiroyuki Shinoda

College of Information Science and Engineering, Department of Computer and Human Intelligence, Ritsumeikan University, Japan

In the present study, we investigated the effect of color on forward and backward vection. Approaching or receding optical flow, as observed during forward or backward locomotion, was simulated using random dots with the depth cues of changing size, velocity, and disparity. The dots were presented in equiluminant white, red, yellow, green, or blue on a black background. The velocity of the optical flow was also manipulated (10 km/h and 20 km/h). Participants observed the optical flow for 60 seconds through shutter goggles and reported whether they perceived vection. Vection onset latency, duration, and magnitude were measured. Results showed that vection onset was faster and magnitude was stronger with chromatic dots than with achromatic dots. Vection onset was slower but magnitude was stronger with red dots than with other chromatic dots. On the other hand, vection onset was faster but magnitude was weaker with blue dots than with other chromatic dots. These results suggest that stimulus color can affect vection. The present results are discussed with regard to color sensitivity in mesopic vision and chromatic stereopsis.

P2-18: What makes space-time interactions in human cognition asymmetrical?

Chizuru Take Homma

Graduate School of Letters, Kyoto University, Japan chizuru_take@live.jp Hiroshi Ashida

Graduate School of Letters, Kyoto University, Japan

The interaction of space and time affects the perception of extents: (1) the longer the exposure duration, the longer a line's length is perceived to be, and vice versa; (2) the shorter a line's length, the shorter its exposure duration seems. Previous studies have shown that space-time interactions in humans are asymmetrical: spatial cognition has a larger effect on temporal cognition than vice versa (Merritt et al., 2010). What makes the interactions asymmetrical? One factor might be the saliency of the spatial and temporal information; the balance between the difficulty of the spatial and temporal tasks may affect the extent of space-time interactions. In this study, participants were asked to judge the exposure duration of lines that differed in length, or to judge the lengths of lines presented for different exposure times. The temporal task was relatively easy, whereas the spatial task was relatively difficult compared with previous studies. The results suggested that the saliency of spatial and temporal information affects the extent of space-time interactions.

P2-19: Anisotropies in apparent displacements of a visual target during large-field visual motion stimulation

Wataru Teramoto

Muroran Institute of Technology, Japan teramoto@csse.muroran-it.ac.jp Hiromu Hoshiga Muroran Institute of Technology, Japan

Exposure to constant angular acceleration around the yaw axis causes illusory motion perception of a stationary visual target and its apparent displacement in the direction of acceleration (the "oculogyral illusion"; Graybiel & Hupp, 1946; Lackner, 1976). The same phenomenon occurs when a large-field visual motion is presented. Although the illusory motion perception during large-field visual motion stimulation has been investigated extensively, the apparent displacement component has been relatively neglected. To address its underlying mechanism, the present study investigated whether the apparent displacement occurs uniformly across the visual field. Participants stereoscopically viewed a fixation point superimposed on a large-field dot pattern rotating around the participant's yaw axis. Soon after the disappearance of the fixation point, a visual target was presented for 17 ms and participants pointed to its perceived spatial location. Results showed that the apparent displacements were not uniformly distributed across the visual field but occurred mainly in the part of the visual field in the direction opposite to the visual motion. We discuss the contribution of eye-fixation signals that override covert optokinetic nystagmus in this phenomenon.

Supported by a Grant-in-Aid for Young Scientists (B) (No. 23730693) and a Grant-in-Aid for Scientific Research (B) (No. 26280067) from the Japan Society for the Promotion of Science.

P2-20: A model of depth perception using relative motion velocity in motion parallax

Aya Shiraiwa

Kwansei Gakuin University, Japan

shiraiwa.aya@kwansei.ac.jp Takefumi Hayashi

Kansai University, Japan

When depth from motion parallax is perceived, the magnitude of the depth is sometimes underestimated. Relevant factors, such as head movement, the temporal integration process, and pursuit eye movements, have previously been considered. We have additionally found that discontinuity of the velocity field also causes a reduction of the perceived depth (Shiraiwa & Hayashi, 2012). The main objective of this study was to construct a model of depth perception that takes these factors into account. The model is divided into two stages. In the first stage, the relative motion velocity is extracted and the entire three-dimensional surface is reconstructed, accompanied by detection of its discontinuities. In the second stage, the magnitude of depth is corrected on the basis of the perceived surface; the influences of surface discontinuity and head movement are also added at this stage. Hosokawa et al. (2013) and Caudek et al. (2002) have also proposed two-stage models in which the first stage calculates depth from the local distribution of the velocity field. In contrast to their bottom-up models, our model contains a top-down aspect in which depth is corrected from the globally perceived surface. The present model predicts the experimental results by assessing whether or not the perceived 3D surface is continuous.

P2-21: Measurements of subjective length of filled duration determined by dynamic random dots

Erika Tomimatsu

Faculty of Design, Kyushu University, Japan tomimatsu@kyudai.jp Yoshitaka Nakajima

Faculty of Design, Kyushu University, Japan Hiroyuki Ito

Faculty of Design, Kyushu University, Japan

The duration for which a moving object travels from one point to another is often perceived as longer than that of a static object, even when the two durations are physically equal. Instead of a moving object, we employed a visual field filled with dynamic random dots to examine whether the duration during which this dynamic field is presented is also perceived as longer than when the same field is filled with static random dots. The stimulus durations ranged from 100 to 900 ms. Participants adjusted an empty duration delimited by two random-dot flashes to make it subjectively equal to the duration filled with the dynamic or the static random dots. We also employed comparable empty durations to serve as standards in control conditions. The durations filled with dynamic dots were indeed perceived as longer than the durations filled with static dots. Both were overestimated compared with the empty control durations. This overestimation increased with stimulus duration in the dynamic random-dot condition, whereas it was almost constant in the static random-dot condition. Thus, the overestimation of a dynamically filled duration occurs even when the dynamic stimulus pattern occupies the same spatial area throughout the presentation.

P2-22: Motion processing in retinotopic and spatiotopic coordinates at low light levels

Sanae Yoshimoto

Japan Women's University, Japan Japan Society for the Promotion of Science (JSPS), Japan n1384003ys@gr.jwu.ac.jp Mariko Uchida-Ota

Japan Women's University, Japan Tatsuto Takeuchi Japan Women's University, Japan

This study aimed to clarify the effects of light level on motion processing in retinotopic and spatiotopic coordinates. For this purpose, we used a phenomenon called visual motion priming, in which the perceived direction of a directionally ambiguous test stimulus is influenced by the motion direction of a priming stimulus. Previous studies have indicated that negative and positive priming are induced by low-level and high-level motion mechanisms, respectively. In the experiments, subjects made a saccade after the termination of the priming stimulus and judged the perceived direction of the subsequently presented test stimulus in retinotopic or spatiotopic coordinates at different light levels. We found that, in retinotopic coordinates, negative priming was observed at all light levels. In spatiotopic coordinates, positive priming was observed at photopic and scotopic levels, whereas at mesopic levels positive priming was observed only when the priming and test stimuli were presented in the reverse order. These results suggest that the low-level motion mechanism functions in a similar manner across light levels in retinotopic coordinates, whereas the high-level motion mechanism functions differently in spatiotopic coordinates at mesopic light levels. This may be caused by the different spatiotemporal properties of cones and rods.

P2-23: The optimal density ratio for aesthetic judgment in symmetric patterns

Chia-Ching Wu

Department of Psychology, Fo Guang University, Taiwan ccwu@mail.fgu.edu.tw Chien-Chung Chen Department of Psychology, National Taiwan University, Taiwan

The density ratio is the ratio of the number of dots in an image component to that in the whole random-dot pattern or, if there is only one image component, the ratio of the number of dots in the image to the number of pixels in the display. We investigated the effect of density ratio on the aesthetic value of symmetric patterns with one or two image components (defined by the color of the dots). In each 2AFC trial, a reference symmetric pattern was randomly presented in one interval and a test symmetric pattern in the other. The density ratio of the reference was 0.275, while that of the test varied from 0.05 to 0.5. The density of the 2-component pattern was 0.382. The observer was to select the more appealing interval. We measured the probability of selecting the test at different density ratios. Preference increased and then decreased with density ratio for both 1- and 2-component patterns. The most preferred density ratio, estimated from fitted quadratic functions, was 0.32 and 0.34 for the 1- and 2-component patterns, respectively. Thus, there is an optimal density ratio for the whole image and for an individual image component. This suggests a visual preference mechanism tuned to the golden ratio.

Supported by NSC102-2420-H-431-002-MY2.
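A minimal sketch of the quadratic-fit step described in the preceding abstract, with invented choice probabilities: it fits p = a*r^2 + b*r + c to the data and reads the most preferred density ratio off the vertex of the parabola. This illustrates the form of the analysis rather than reproducing the authors' fit or data.

```python
import numpy as np

# Hypothetical probability of choosing the test pattern at each tested
# density ratio (the reference ratio is fixed at 0.275 in the abstract).
ratio = np.array([0.05, 0.10, 0.20, 0.30, 0.40, 0.50])
p_choose_test = np.array([0.25, 0.38, 0.55, 0.62, 0.52, 0.35])

# Fit p = a*r^2 + b*r + c and take the vertex as the preferred ratio.
a, b, c = np.polyfit(ratio, p_choose_test, 2)
preferred = -b / (2 * a)
print(f"most preferred density ratio ~ {preferred:.2f}")
```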

P2-24: Effects of horizontal vs vertical presentations of two successive flashes on duration categorization

Tsuyoshi Kuroda

Kyushu University, Japan tkuroda@neurophy.med.kyushu-u.ac.jp Simon Grondin Université Laval, Canada Shozo Tobimatsu Kyushu University, Japan

Empty time intervals are said to be perceived as longer when they are marked by two successive signals located farther away from each other. This phenomenon, called the kappa effect, is often cited as an example of spatiotemporal integration, but it has been tested mainly with three signals marking two neighboring intervals, rather than with two signals marking single intervals. The present experiment examined whether space modulates the perceived duration of single intervals. Each of the two flashes was delivered from the left or right side in one session (horizontal direction), and from the upper or lower side in the other session (vertical direction). Participants tended to overestimate duration when the two flashes were delivered from different locations compared with when they were delivered from an identical location, but only when the flashes were presented in the horizontal direction. However, both the horizontal and the vertical presentations reduced the level of discrimination. The implications of these results are discussed.

Supported by JSPS KAKENHI Grant Number 25-3275.

P2-25: Spatial frequency in paintings by artists with/without migraine: a pilot study

Shu Imaizumi

Chiba University, Japan

Japan Society for the Promotion of Science, Japan shuimaizumi@gmail.com Akira Iwaya

Chiba University, Japan Haruo Hibino

Chiba University, Japan Shinichi Koyama Chiba University, Japan Nanyang Technological University, Singapore

Visual discomfort can be induced by paintings with excessive energy at medium spatial frequencies (Fernandez & Wilkins, 2008). Migraineurs are especially susceptible to visual discomfort (Marcus & Soso, 1989). Some lesser-known painters with migraine are thought to represent their experience of migraine and visual discomfort in their paintings, so-called "migraine art". Several famous painters, including Picasso, van Gogh, and de Chirico, have also been anecdotally reported to suffer from migraine. Despite this, it is still unknown whether paintings by migraine artists and these master painters contain spatial characteristics that contribute to visual discomfort. We performed spatial-frequency analyses on five groups of paintings: those by migraine artists, Picasso, van Gogh, de Chirico, and non-migraine artists. Results showed that paintings by migraine artists possessed more energy at medium spatial frequencies than those by non-migraine artists. Paintings by Picasso, van Gogh, and de Chirico also possessed this spatial characteristic. Furthermore, Picasso's and van Gogh's paintings had more energy at high spatial frequencies than the migraine artists' paintings. These results suggest that painters with migraine may create paintings with spatial characteristics capable of inducing visual discomfort in viewers, and that the spatial characteristics of paintings by migraine artists differ from those of the famous painters.

Supported by Grant-in-Aid for JSPS Fellows (25-943) from Japan Society for the Promotion of Sciences.
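The spatial-frequency analysis referred to above can be approximated, in outline, by measuring the fraction of Fourier amplitude falling in a medium-frequency band. The following sketch assumes a square grayscale image array; the band limits and normalization are illustrative and not necessarily those used by the authors.

```python
import numpy as np

def band_energy(image, low_cpi, high_cpi):
    """Fraction of Fourier amplitude between low_cpi and high_cpi
    (cycles per image) for a square grayscale image (2-D array)."""
    n = image.shape[0]
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / n))  # cycles per image
    fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
    radius = np.sqrt(fx ** 2 + fy ** 2)
    band = (radius >= low_cpi) & (radius < high_cpi)
    return spectrum[band].sum() / spectrum.sum()

# Example with a random stand-in for a digitized painting:
rng = np.random.default_rng(0)
painting = rng.random((256, 256))
print(band_energy(painting, low_cpi=8, high_cpi=32))  # illustrative medium band
```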

P2-26: The role of upper and lower faces in discriminating happy and sad facial expressions

Pei-Yin Chen

Department of Psychology, National Taiwan University, Taiwan d02227102@ntu.edu.tw Yi-Cheng Pan

Department of Psychology, National Taiwan University, Taiwan Chien-Chung Chen

Department of Psychology, National Taiwan University, Taiwan

We investigated how the human visual system integrates facial features to discriminate subtle changes in faces and thereby perceive facial expression. The stimuli were morphed faces interpolated between happy and sad faces of the same six models, selected from a facial expression image database (Chen et al., 2009). Each morphed image was then cut in half into upper and lower parts. In each 2AFC trial, the observer compared a reference face at a designated morphing level with a test face that differed slightly in morphing level in either the upper or the lower part. We measured the discrimination threshold, that is, the minimal morphing-level difference needed for an observer to perceive a change in expression, at the 75% correct response level. For each specific upper face, the discrimination threshold for the lower face was the same regardless of the morphing level of the lower reference face, suggesting that the upper face plays little role in happy-sad discrimination. When the lower face was held constant, the threshold decreased with the morphing level of the upper reference face, suggesting an expansive nonlinearity in the underlying expression discrimination mechanisms.
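As a hedged illustration of how a 75%-correct discrimination threshold can be estimated in a 2AFC task like the one above, the sketch below fits a cumulative-Gaussian psychometric function to hypothetical proportion-correct data; both the data and the choice of function are assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical 2AFC data: morphing-level difference vs. proportion correct.
delta_morph = np.array([0.02, 0.05, 0.10, 0.15, 0.20, 0.30])
p_correct = np.array([0.52, 0.60, 0.71, 0.80, 0.90, 0.97])  # illustrative values

def psychometric(x, mu, sigma):
    # 2AFC performance rises from chance (0.5) towards 1.0.
    return 0.5 + 0.5 * norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, delta_morph, p_correct, p0=[0.1, 0.05])

# The morph difference yielding 75% correct (here simply mu, because
# 0.5 + 0.5 * 0.5 = 0.75 at the midpoint of the cumulative Gaussian).
threshold_75 = norm.ppf(0.5, loc=mu, scale=sigma)
print(f"75%-correct threshold: {threshold_75:.3f}")
```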

P2-27: Experience-driven perceptual bias in face processing for 8- to 11-month-old infants

Nga Ki Kong

Department of Psychology, The University of Hong Kong, Hong Kong

mickong@connect.hku.hk

Cynthia Yui Hang Chan

Department of Psychology, The University of Hong Kong, Hong Kong

Kevin Ho Man Cheng

Department of Psychology, The University of Hong Kong, Hong Kong

Chia-huei Tseng

Department of Psychology, The University of Hong Kong, Hong Kong

Adults inspect the left side (from the viewer's perspective) of others' faces first and for a longer time (left gaze bias), and use left-side information when making face-related perceptual judgments such as similarity, gender, and emotion (left perceptual bias). Infants are reported to exhibit the left gaze bias, and we examined whether they also possess the perceptual bias.

We habituated 19 infants to a real face. During the test stage, two faces, each consisting of one half of the habituated face aligned with its own mirror image, hence a left-left face (LL face) and a right-right face (RR face), were presented side by side on the screen. If infants looked longer at either face, this would indicate that they found that face more novel, implying a perceptual bias. We used a Tobii T120 to track infants' eye movements during both the habituation and test stages. We did not find a gaze bias during free-viewing habituation or a perceptual bias during the test. Instead, we found a right-side bias: infants looked at faces on the right side of the screen significantly longer than at faces on the left side. Additionally, we observed a tendency for infants' gaze history during habituation to predict their preference at the test stage: those who fixated longer on the left side of a face during habituation were more likely to look longer at the RR face in the test phase, and vice versa. This implies a preference in face perception driven by immediate past experience during infancy, which has not been reported before.

P2-28: A novel facial identity aftereffect elicited by a simple curve

Miao Song

School of information and engineering, Shanghai Maritime University, China songmiaolm@gmail.com

Previous research has reported a repulsive perceptual-shift facial aftereffect elicited by real faces. Here, we used simple curves, Chinese characters, and real faces as adaptation stimuli and tested on real faces. The results show that the curves and Chinese characters generated significant facial aftereffects on real faces, although the aftereffect produced by non-face stimuli was much weaker than that produced by real faces. These results indicate a hierarchical contribution from low- and/or mid-level processing to the high-level face system. We also performed the converse experiment, in which real faces were used as adapting stimuli and the test was on curves. However, we did not find an aftereffect of faces on curves. Together, these experiments suggest an asymmetrical interference between the face system and the low-/mid-level visual system.

P2-29: Race-specific aftereffect using anti-face: A pilot study

Euihyun Kwak

Yonsei University, Korea kehmms@gmail.com Chunmae Lee

Yonsei University, Korea Kyong Mee Chung Yonsei University, Korea

This research investigated whether the size of the aftereffect is influenced by the racial characteristics of participants during emotion discrimination tasks that employ an adaptation paradigm with anti-faces. Thirteen typically developing Asian graduate students participated. The participants were first adapted to one of three anti-emotional faces (anti-happy, anti-sad, and anti-angry) of either an Asian or a Caucasian. They were then asked to identify the emotion of neutral faces presented for 500 ms after the anti-emotional faces. The results revealed that the size of the aftereffect was significantly smaller when the anti-emotional faces were of a race incongruent with the participants than when they were of the congruent race. These results suggest that the racial characteristics of observers influence the size of the aftereffect during facial expression recognition. This experimental design could be applied to clinical groups such as individuals with autism spectrum disorder, who have deficits in face recognition, and is expected to contribute to characterizing the perceptual features of those individuals.

Supported by the National Research Foundation of Korea.

P2-30: Infants' visual discrimination of mirror letter images

Wakayo Yamashita

Graduate School of Science and Engineering, Kagoshima University, Japan wyamashita@ibe.kagoshima-u.ac.jp Yumiko Otsuka

School of Psychology, UNSW Australia, Australia Ayanori Tanaka

Department of Psychology, Chuo University, Japan Kazuki Sato

Department of Psychology, Chuo University, Japan So Kanazawa

Department of Psychology, Japan Women's University, Japan Masami K Yamaguchi

Department of Psychology, Chuo University, Japan

This study examined whether 7- to 8-month-old infants have the ability for invariant perception of mirror-letter images, using a change-detection paradigm. We reasoned that if infants have this ability, they should show greater difficulty in discriminating a mirror letter from the original letter than in discriminating between non-mirror images. In each trial, infants were shown two displays (a Change and a No-change display) containing flashing image sequences, side by side. In both displays, each image flashed for 300 ms followed by a 450 ms blank screen. In the No-change display, the same letters were shown repeatedly throughout the trial, whereas in the Change display an original letter and its mirror image were shown alternately. Each infant underwent two 15-second trials in which the positions of the two displays were reversed across trials. Our current results from 12 infants showed no significant looking preference for either display, suggesting that the infants failed to discriminate between the original and mirror-image letters. Using the same paradigm, we will further test whether infants can discriminate between non-mirror letter images. Results from this further test will be presented at the conference.

P2-31: The BOLD and the Beautiful: Neural responses to natural scene statistics in early visual cortex

Zoey Jeanne Isherwood

The University of New South Wales, Australia zoeyji@gmail.com Mark M Schira The University of Wollongong, Australia Neuroscience Research Australia, Australia Branka Spehar

The University of New South Wales, Australia Neuroscience Research Australia, Australia

Natural scenes are known to share a characteristic distribution of spatial frequencies and associated luminance intensity variations known as the 1/f amplitude spectrum (with a slope of around -1.2). We sought to investigate the response profile of early visual areas to random noise images with varying 1/f slopes (-0.25, -0.75, -1.25, -1.75 and -2.25) across two contrast levels (10% and 30%) and two viewing conditions (aesthetic rating and an unrelated central visual search task). The aesthetic condition was chosen because images sharing natural scene characteristics are frequently reported to be more aesthetically pleasing than images that do not. The two viewing conditions were directly compared to identify brain areas related to aesthetics. Participants (n = 12) underwent fMRI scanning whilst viewing these images. In each visual area analysed (V1 to V4), BOLD responses were 1.5 to 2.5 times higher for natural slopes (-1.25) than for unnatural slopes (-0.25 or -2.25) across both contrasts and tasks. Only during the aesthetic condition were the putamen and mOFC found to be active. Together, our results show that early visual areas are optimally tuned toward processing images with natural scene statistics, potentially contributing to their aesthetic appeal.

Supported by Australian Research Council DP120103659 (to B. Spehar).
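Noise images with a specified 1/f amplitude-spectrum slope, of the kind described above, are commonly synthesized by imposing a radial amplitude falloff on random phases. The sketch below is one such construction; the image size and normalization are illustrative, not the authors' exact stimulus code.

```python
import numpy as np

def noise_with_slope(size, slope, rng=None):
    """Random noise image whose Fourier amplitude spectrum falls off as
    frequency ** slope (e.g. slope = -1.25 for roughly natural statistics)."""
    rng = np.random.default_rng() if rng is None else rng
    f = np.fft.fftfreq(size, d=1.0 / size)          # cycles per image
    fx, fy = np.meshgrid(f, f)
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                              # avoid division by zero at DC
    amplitude = radius ** slope
    phase = rng.uniform(0.0, 2.0 * np.pi, (size, size))
    img = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    return (img - img.mean()) / img.std()           # rescale by the desired contrast afterwards

img = noise_with_slope(256, slope=-1.25)
```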

P2-32: Does cuteness differ from beauty in peripheral vision?

Kana Kuraguchi

Graduate School of Letters, Kyoto University, Japan kuraguchi.kana.23c@st.kyoto-u.ac.jp Hiroshi Ashida Graduate School of Letters, Kyoto University, Japan

Guo, Liu, and Roebuck (2011) showed that attractiveness is detectable in peripheral vision. As there are different kinds of attractiveness (Rhodes, 2006), we investigated how beauty and cuteness are detected in peripheral vision with brief presentation. The results showed that both beauty and cuteness were detectable in peripheral vision, but not in the same way. Accuracy for judging beauty was invariant across peripheral and central vision, whereas accuracy for judging cuteness declined in peripheral vision compared with central vision. In addition, male participants found it more difficult to judge cuteness than beauty in peripheral vision, suggesting gender differences, especially in judging cuteness. A control experiment with spatially low-pass filtered facial images in central vision indicated that the lower resolution of peripheral vision might not be the main cause of this tendency for cuteness. Central vision might therefore be suited to judging cuteness, whereas judging beauty might not depend on whether faces are viewed centrally or peripherally. This might be related to a functional difference between beauty and cuteness.

P2-33: Unconscious contour integration based on binocular rivalry

Hongmei Yan

Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, China

hmyan@uestc.edu.cn Huiyun Du

Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, China Cheng Chen

Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, China

Contour detection and contour integration are important functions of the human visual system, and the mechanisms of contour integration have been a hot topic in vision research. A large number of physiological and psychophysical experiments have shown that contours can be integrated according to basic Gestalt perceptual grouping rules. This is a complex process that may depend on contextual interactions involving feedforward input from the primary visual pathway and top-down feedback from higher cortical areas, and it is also influenced by contours in the background. However, it remains unclear whether contour integration requires conscious participation and where along the visual pathway it occurs. In this study, we used a psychophysical method, a binocular-rivalry flash-suppression paradigm, to examine whether peripheral contour cues could influence central contour integration in the absence of awareness. The results showed that conscious peripheral contour cues improved the performance of central contour integration, and so did unconscious peripheral contour cues.

Supported by the 973 project (2013CB329401), the Natural Science Foundations of China (91120013, 61375115, 31300912), and the fundamental research funds for the Central Universities (ZYGX2013J098).

P2-34: Comparison of effectiveness of different types of misdirection: using card magic illusions

Ryo Tachibana

Department of Psychology, Graduate School of Arts and Letters, Tohoku University, Japan

ryotachibana.315@gmail.com Jiro Gyoba

Graduate School of Arts and Letters, Tohoku University, Japan

Misdirection is a skill developed by magicians, who have long used this technique to manipulate the attention of their audiences to perform persuasive magical illusions. Psychologists currently utilize misdirection to elucidate the characteristics of perception and cognition in our daily lives by capitalizing on its ability to establish realistic experimental settings. Although the number of studies investigating misdirection has recently increased, it is still unclear as to which specific forms of misdirection most successfully manipulate our attention. In order to examine this issue, this study presented movie clips of three typical playing card magic illusions (novelty, movement, and contrast) to participants and measured their gaze using an eye tracker. Results showed that the duration of fixation in misdirected areas induced by novelty misdirection (the sudden appearance of a new object) was significantly higher than that caused by the other two misdirection types. This finding corroborates previous research that suggests that the abrupt onset of stimuli strongly captures attention and indicates that misdirection by novelty most effectively manipulates our attention and enhances magical illusions.

P2-35: Attention-dependent modulation of spike synchrony and firing rates for border-ownership selective neurons in a network model

Nobuhiko Wagatsuma

School of Science and Engineering, Tokyo Denki University, Japan Krieger Mind/Brain Institute, Johns Hopkins University, USA nwagatsuma460@gmail.com Rudiger von der Heydt

Krieger Mind/Brain Institute, Johns Hopkins University, USA Ernst Niebur

Krieger Mind/Brain Institute, Johns Hopkins University, USA

Border-ownership selectivity (BOS) signals the side of the figure with respect to a border. Attention modulates the responses of BOS neurons in V2. Martin and von der Heydt (abstract presented at VSS 2013) have physiologically characterized top-down attentional effects on BOS neurons in terms of spike synchrony and firing rates. To understand the mechanisms of attentional modulation of BOS neuron activity, we propose a network model with spiking neurons. To account for the observed physiological results, it was necessary to include two types of inhibitory neurons in our model, feedforward inhibitory (FFI) and top-down inhibitory (TDI) neurons (Buia & Tiesinga, 2008), in addition to excitatory border-ownership (EBO) neurons. Top-down attention was mediated by the projection from TDI to FFI neurons, whereas bottom-up inputs projected to both EBO and FFI neurons. Model simulations showed that attention decreased spike synchrony between pairs of EBO neurons, independently of the attentional increase in their firing rates. These contrasting modulation patterns of spike synchrony and firing rate are consistent with the physiological reports. The simulation results suggest that the behavior of the two types of inhibitory neurons plays a critical role in attentional modulation during figure-ground segregation.

P2-37: Can unaware emotional information activate space-valence association?

Kyoshiro Sasaki

Kyushu University, Japan k-ssk@kyudai.jp Yuki Yamada

Kyushu University, Japan Kayo Miura Kyushu University, Japan

Emotion affects our body movements on the basis of space-valence metaphors (e.g., positive emotion guides upward body movement). The present study examined whether the same holds even when emotional information is presented unconsciously. To eliminate emotional information from awareness, we adopted the continuous flash suppression (CFS) technique, in which a stimulus presented to one eye is interocularly masked by a continuous flash of Mondrian images presented to the other eye. Participants were shown an emotional image (positive, negative, or neutral) for 2500 ms. After the image disappeared, they were asked to move a dot stimulus from the center of the display to an arbitrary position using a joystick fixed on a vertical surface: if participants moved their arm upward (downward), the dot moved upward (downward) on the display. The emotional image was either masked (masked condition) or not (unmasked condition) by CFS. The results revealed that, in the unmasked condition, the average position of the dot was significantly higher when participants observed the positive image than the negative image. However, this modulation disappeared in the masked condition. Our findings suggest that conscious access to emotional information is necessary to activate the space-valence association.

Supported by The Japan Society for the Promotion of Science.

P2-38: Focus size of attention: Exploring the focus hypothesis on the collinear masking effect in visual search

Ching-Wen Chiu

Graduate Institute of Neural and Cognitive Sciences, China Medical University, Taiwan

weinsy75@gmail.com Li Jingling

Graduate Institute of Neural and Cognitive Sciences, China Medical University, Taiwan

When searching for an item in our visual field, a salient or behaviorally relevant item usually captures our attention. However, Jingling and Tseng (2013) reported the interesting finding that a salient target can actually be more difficult to identify than a non-salient target. This novel phenomenon was observed in a specific search display filled with regularly aligned small bars, in which a salient structure was formed by several bars collinear with each other. When the target was on the salient collinear structure, search was more difficult than when the target was in the background; this is called the collinear masking effect. Since the target was a small bar while the collinear structure was longer, they may elicit different focus sizes that conflict with each other. The goal of this study was to test whether this conflict in focus size can explain the collinear masking effect. We systematically manipulated the size of the target and the size of the collinear structure. The results of five experiments consistently showed the collinear masking effect regardless of target size. We conclude that the collinear masking effect is likely associated with perceptual properties other than the size of the attentional focus.

Supported by NSC101-2410-H-039-001-MY2.

P2-39: Contextual cueing effect for unseen targets

Masayuki Kobayashi

Graduate School of Information Sciences, Tohoku University, Japan m-koba@riec.tohoku.ac.jp York Fang

Graduate School of Information Sciences, Tohoku University, Japan Ryoichi Nakashima

Research Institute of Electrical Communication, Tohoku University, Japan CREST, Japan Science and Technology Agency (JST), Japan Kazumichi Matsumiya

Graduate School of Information Sciences, Tohoku University, Japan Research Institute of Electrical Communication, Tohoku University, Japan Ichiro Kuriki

Graduate School of Information Sciences, Tohoku University, Japan Research Institute of Electrical Communication, Tohoku University, Japan Satoshi Shioiri

Graduate School of Information Sciences, Tohoku University, Japan Research Institute of Electrical Communication, Tohoku University, Japan CREST, Japan Science and Technology Agency (JST), Japan

The spatial layout of visual search displays can be learned implicitly through repeated exposure to the same layouts (the contextual cueing effect). The contextual cueing effect has been shown with whole displays seen simultaneously. Visual contexts in everyday life, however, include information all around the viewer, not only the information visible at a given moment, and this information is used when searching for things in natural environments. There may or may not be a contextual cueing effect for visual information that is not seen simultaneously. We investigated the effect of repeated layouts for a 360° stimulus display (six LCDs surrounding the observer), in which the context of the front view may, after learning, inform the location of a target behind the observer. Search time was analyzed as the sum of the time required to reach the display containing the target (global RT) and the search time within the target display (local RT). We found that global RTs shortened after repetition of the same search layouts, suggesting that spatial layouts beyond the visual field can be learned by repetition. We also confirmed that the learning effect was implicit, using a recognition test after the search experiment: the average rate of judging repeated layouts as 'seen' was 56.9%.

P2-40: Seeing the word I and my face activates my name: Close link between self-awareness and one's own name

Kuei-An Li

Department of Psychology, National Taiwan University, Taiwan

n60213@hotmail.com Jen-Hao Li

Department of Psychology, National Taiwan University, Taiwan Su-Ling Yeh

Department of Psychology, National Taiwan University, Taiwan

Object naming is important for mental processing of the perceived world. The phenomenon of tip-of-the-tongue and the clinical symptom of anomia, however, indicate that naming is not a prerequisite for object perception. Here we used the Stroop paradigm to examine whether one's own name is automatically triggered when the referring object is oneself. Our participants were restricted to those whose family name is Huang (黃), meaning "yellow". In Experiment 1, the colored Chinese character 我 ("I", painted in yellow or blue) was used to induce self-awareness. Participants were asked to respond to the ink color as quickly as possible. Results showed shorter RTs to the yellow I than to the blue I, compared with control words. In Experiment 2, a priming procedure was adopted, and the participant's own face was used to activate self-awareness; the participant responded to the color of a square that appeared after the face. Results showed shorter RTs to the yellow square when the prime was the participant's own face, compared with control faces. These results suggest that self-awareness triggered by the word I or by one's own face automatically activates one's own name. This supports our hypothesis that self-awareness and one's own name are tightly linked.

Supported by grants from the National Science Council (NSC 101-2410-H-002-083-MY3).

P2-41: Pre-cueing of correct response enhances the Simon Effect

Fei Tian

Graduate School of Human Sciences, Osaka University, Japan

hnhnhnwflying@gmail.com Kazumitsu Shinohara

Graduate School of Human Sciences, Osaka University, Japan

The purpose of this study was to examine whether pre-cueing the correct response could ease the interference caused by incompatible spatial correspondence between stimulus and response in a Simon task. According to the dual-route model, the delayed response in the incompatible condition is caused by conflict between the automatic response induced by the stimulus position and the correct response specified by the task instruction. Based on this explanation, we predicted that a cue indicating the correct response would allow response determination to be initiated before stimulus presentation. This would reduce the burden of processing the correct response in the indirect route, leaving more cognitive resources available for inhibiting the response conflict and thereby mitigating the Simon effect. In this experiment, a left or right arrow was presented in advance of the Simon stimulus to indicate the correct response. The results showed that, with the pre-cue, response time in the incompatible condition did not change, but response time in the compatible condition was shorter. The size of the Simon effect thus seemed to increase when the pre-cue was provided. These results can be interpreted in terms of the attention-shift account.

P2-42: Does ocular origin of stimuli always help you find a target? It depends!

Yuk Sheung Yeung

Department of Psychology, The University of Hong Kong, Hong Kong yeungys518@gmail.com Sunny Meougsun Lee

Department of Psychology, The University of Hong Kong, Hong Kong Hiu Mei Chow

Department of Psychology, The University of Hong Kong, Hong Kong Department of Psychology, University of Massachusetts Boston, USA Chia-huei Tseng

Department of Psychology, The University of Hong Kong, Hong Kong

Salience maps guide attentional deployment. The ocular origin of stimuli has recently been found to contribute to salience computation, in addition to orientation, luminance and color (Zhaoping, 2008). However, how it interacts with other simple features is little understood. We investigated how the ocular origin of stimuli interacts with color and luminance in a visual search task. The search display contained a 9 × 9 array of vertical green bars against a black background (Experiment 1) or of dark-grey bars against a mid-grey background (Experiment 2). One randomly selected singleton column, in one of five possible locations, was colored red (Experiment 1) or light grey (Experiment 2). Another, independently selected column was presented to a different eye (ocular singleton) from the other columns. Observers reported the orientation of the target, a small tilted gap on one of the bars.

In Experiment 1, we found that targets congruent with the ocular singleton were detected more slowly than those incongruent with it, but there was no effect on the color-defined singleton. Interestingly, in Experiment 2, we found no effect of ocular origin but a search facilitation for targets that overlapped with the luminance singleton. Our finding that the ocular origin of stimuli interacts differently with color and luminance implies that these features are interactively coded during salience computation in visual search.

P2-43: Global versus local saliency: determinants of access to visual awareness

Yung-Hao Yang

Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Germany

Department of Psychology, National Taiwan University, Taiwan yunghaoyang@gmail.com Su-Ling Yeh

Department of Psychology, National Taiwan University, Taiwan Andreas Bartels

Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Germany

What contributes to the phenomenon that a salient singleton attracts attention even without awareness (Hsieh, Colas, & Kanwisher, 2011)? Two possibilities were proposed and tested: (1) a global averaging process that pools over the whole stimulus and detects the outlier (Chong & Treisman, 2005), and (2) local gradient detection that compares neighboring elements and finds the contrast (Julesz, 1986). We manipulated three conditions: singleton (a red blob among green blobs, or the reverse), alternation (alternating red and green blobs), and homogeneity (all red-green blobs). The singleton condition contained a strong outlier for global averaging but weak local contrast, whereas the alternation condition was strong in local contrast but lacked saliency in global averaging. By presenting these stimuli to one eye and dynamic high-contrast masks to the other eye in a continuous flash suppression paradigm, we measured how long these stimuli took to be released from suppression. We found faster release from suppression for the alternation condition than for the singleton and homogeneity conditions, a result that was unchanged when blob intensity was varied. These results demonstrate that local contrast, not global saliency, was the primary driver of release from suppression. We thus conclude that local contrast contributes to saliency in visual processing without awareness.

Supported by Deutsche Forschungsgemeinschaft (DFG grant number BA4914/1-1 to A.B.); the Centre for Integrative Neuroscience (DFG grant number EXC 307); Taiwan's National Science Council (NSC101-2410- H-002-083-MY3 to S.L.Y.; NSC102-2917-I-002-096 to Y.H.Y.).

P2-44: Effect of optical flow in the entire visual field on attentional blink

Shinya Takagawa

Graduate School of Science and Engineering, Kagoshima University, Japan sc109036@ibe.kagoshima-u.ac.jp Ken Kihara

Graduate School of Science and Engineering, Kagoshima University, Japan Sakuichi Ohtsuka

Graduate School of Science and Engineering, Kagoshima University, Japan

Recently popularized see-through augmented-reality (AR) systems produce optical flow across the entire visual field. It is possible that this optical flow interferes with the visual information presented by the AR system. Previous studies have shown that optical flow generated in a limited background affects visual attention when stimuli are presented successively in the central visual field; however, the relationship between whole-field optical flow and visual attention is still unclear. To address this issue, we used the attentional blink (AB) phenomenon: with two targets embedded in a rapid stream of stimuli, identification of the second target becomes difficult if it appears 200-500 ms after the first target. We examined the effect on the AB of optical flow induced by moving dots scattered across the entire visual field. The characteristics of the optical flow were varied by changing the direction, speed and number of moving dots. The results revealed no specific differences in AB magnitude across optical flow conditions, suggesting that optical flow in the entire field may not have a significant impact on visual attention in a see-through AR environment.

Supported by JSPS KAKENHI Grant Number 25730095.

P2-45: Visual search for a walker and a box: Search asymmetry in approaching and deviating walkers

Rie Ishimoto

Graduate School of Engineering, Toyohashi University of Technology, Japan

ishimoto@real.cs.tut.ac.jp Michiteru Kitazaki

Department of Computer Science and Engineering, Toyohashi University of Technology, Japan

We aimed to investigate whether visual search for a human walker differs from that for ordinary objects. We presented six computer-graphics walkers side by side on a CRT display. Subjects were asked to search for an approaching walker (target) among five deviating walkers (distractors), or for a deviating walker among five approaching walkers. The orientation difference was either 9 or 30 deg. We found that search for an approaching walker was significantly faster than search for a deviating walker at the 9 deg difference, but search for a deviating walker was faster than the opposite search at the 30 deg difference. In a follow-up experiment in which the walkers were replaced with boxes, search for a deviating box among approaching boxes was faster than the opposite search at 6, 30, and 60 deg differences. Thus, the search asymmetry for an approaching versus deviating human walker at the small orientation difference differed from the other cases. This result suggests that searching for an approaching walker in a difficult situation has special value for human observers, and may engage a socio-perceptual mode or mechanism that requires the appearance of a human walker.

P2-46: Impaired biological motion perception and action recognition in children with autism spectrum disorder

Liang Hui Wang

Graduate Institute of Neural and Cognitive Sciences, China Medical University, Taiwan Department of Physical Medicine & Rehabilitation, China Medical University Bei-Gang Hospital, Taiwan

wlhui0815@gmail.com Tzu-Yun Chen

Department of Physical Medicine & Rehabilitation, China Medical University Bei-Gang Hospital, Taiwan Hsin-Shui Chen

Department of Physical Medicine & Rehabilitation, China Medical University Bei-Gang Hospital, Taiwan

Department of Physical Medicine & Rehabilitation, China Medical University, Taiwan Sarina Hui-Lin Chien

Graduate Institute of Neural and Cognitive Sciences, China Medical University, Taiwan

Biological motion perception is present at birth and plays an important role in helping individuals adapt to their social environment. Recent studies have revealed impaired biological motion perception in children with autism spectrum disorder (ASD), who are characterized by marked deficits in social interaction and communication. Using point-light displays, the present study examined looking preferences for human and non-human biological motion stimuli (Exp. 1) and action recognition performance (Exp. 2) in typically developing (TD) and ASD children. Forty-two participants (21 ASD and 21 TD children) aged 3-7 years were included. In Exp. 1, we found that ASD children did not preferentially attend to biological motion as TD children did; the ASD group also exhibited shorter total fixation times for the test displays than the TD group. In the action recognition task of Exp. 2, ASD children made more naming errors and spent more time responding than the TD children. In conclusion, children with ASD lack a "normal" preference for biological motion stimuli. This abnormality might reflect an overall deficit in processing biological motion information and may explain the poor action recognition performance of individuals with ASD.

Supported by China Medical University Bei-Gang Hospital.

P2-47: The intention of an action distorts the timing of a Go/No-go signal of the action

Yoshiko Yabe

The Brain and Mind Institute and the Department of Psychology, University of Western Ontario, Canada

Research Institute, Kochi University of Technology, Japan yyabe@uwo.ca Melvyn Goodale

The Brain and Mind Institute and the Department of Psychology, University of Western Ontario, Canada

Research has shown that the intention to perform an action can distort one's perception of the timing of sensory events triggered by the action. Here we show that, when participants react to sensory events that trigger an action, the triggering events are perceived to have occurred later than they really did.

Participants viewed a black dot on a monitor while depressing a key. A second-hand-like line rotated around the dot. In a Go/No-go task, participants were asked to release the key if the "clock" turned green and not to release it if the clock turned red. In the control condition, the participants saw the same changes in the color of the clock but continued to depress the key. In both tasks, the participants were required to report, at the end of each trial, the location of the clock hand at the moment the clock changed color.

In the Go/No-go condition, the color change was perceived to occur significantly later, even when participants continued to depress the key on No-go trials. This result suggests that the temporal distortion of an event triggering an action is related to the intention to act, not to the action itself.

Supported by JSPS KAKENHI Grant Number 25750265 to Y.Y.

P2-48: Visual field asymmetry in auditory facilitation effects for visual identification and localization performance

Yasuhiro Takeshima

Tohoku University, Japan yasuhiro.takeshima@gmail.com Jiro Gyoba

Graduate School of Arts and Letters, Tohoku University, Japan

In audio-visual integration, attention directed to vision can spread to encompass simultaneous signals from audition, resulting in enhanced processing. This effect is known as cross-modal attentional spread. Through this effect, auditory stimuli can improve visual performance even though they convey no information about the visual stimuli. Meanwhile, a visual field asymmetry has been reported with respect to attentional bias in visual processing: the right visual field is more involved in recognition and verbal processing, whereas the left visual field excels in spatial processing. We therefore hypothesized that a visual field asymmetry would be observed in the enhancement of visual processing by sound, depending on the type of task required. In the present study, we conducted target identification and localization tasks using a dual-stream rapid serial visual presentation. The results supported our hypothesis: auditory stimuli improved second-target identification performance when the visual stimuli were presented in the right visual field, whereas they improved second-target localization performance when the visual stimuli were presented in the left visual field. Thus, sound can enhance visual processing, especially in the dominant visual field.

Supported by the Japanese Society for the Promotion of Science KAKENHI (Grant-in-Aid for JSPS Fellows: No. 24-4354).

P2-49: The influence of gravity on length perception: The vestibular system delays eye movements, which affects length judgment

Shuilan He

Graduate School of Joshibi University of Art and Design, Japan heshuilan@gmail.com Katsuaki Sakata

Joshibi University of Art and Design, Japan

A vertical line is reported to be perceived as longer than a horizontal line of the same length (Avery & Day, 1969). There is growing evidence that this phenomenon is related to gravity input: the organization of the brainstem pathways for upward eye movements suggests that the gravity-vestibular system exerts an "anti-gravitational" influence that makes upward eye movements slower (C. Pierrot, 2009). Given this structure, we hypothesized that this delay makes upward eye movements slower than movements in other directions, increasing the saccade time along a vertical line and thereby causing overestimation of vertical length. Experiment 1 showed that, during eye movements from a fixation cross to an inducing dot, low-Michelson-contrast targets looming between them were harder to perceive in the upward direction than in the other directions. Experiment 2 showed that L-shaped configurations of same-length vertical and horizontal lines, flashed between the fixation cross and the inducing dot, produced length overestimation only in the upward direction. These results suggest that the influence of gravity on eye movements affects length perception in a direction-dependent manner.

P2-50: Facilitation of visual processing by head direction

Ryoichi Nakashima

Research Institute of Electrical Communication, Tohoku University, Japan CREST, Japan Science and Technology Agency (JST), Japan rnaka@riec.tohoku.ac.jp Satoshi Shioiri

Research Institute of Electrical Communication, Tohoku University, Japan CREST, Japan Science and Technology Agency (JST), Japan

People generally prefer frontal viewing (the head and eyes pointing in the same direction) to lateral viewing (the head and eye directions differing greatly), because lateral viewing interferes with visual processing (Nakashima & Shioiri, 2014). There are two possible explanations: performance improves because the head is directed toward the target, or performance improves because the head and eyes are aligned. To examine which factor underlies the preference for frontal viewing, we conducted a visual task in peripheral vision. Participants were instructed to identify the orientation of a target presented peripherally (15° from fixation) and briefly (100 ms). The critical manipulation was the participants' head direction relative to the fixation position: the head was directed toward the fixation location, the target position, or the side opposite the fixation. These conditions were blocked, and participants directed their eyes to the same position within each block. Performance was highest when the head was directed toward the target position. Based on these results, we suggest that visual processing is influenced not only by eye direction but also by head direction, and that visual processing can be facilitated in the direction of the head.

P3-1: Egg camouflage is maximised by matching chromatic but not textural cues in the Japanese quail

Karen Anne Spencer

School of Psychology & Neuroscience, University of St Andrews, UK Kas21@st-Andrews.ac.uk Camille Duval

School of Psychology & Neuroscience, University of St Andrews, UK P George Lovell

School of Psychology & Neuroscience, University of St Andrews, UK Division of Psychology, Abertay University, UK

Bird eggs vary greatly in their degree of patterning and colouration, which is thought to be an anti-predator defence. Female Japanese quail are known to select laying positions that optimally hide their eggs by disrupting the segmentation of the eggs' outline from the laying substrate (Lovell et al., 2013). In this study, we further explored the visual basis for laying choices by making textured substrates available in addition to varying substrate colours. In total, 128 eggs were laid by 19 female quail over a period of 7 days. Egg detectability was quantified by measuring the amount of the egg-to-substrate boundary identified by the Canny edge-detection algorithm. We also assessed the visual match between the egg patterning and the available substrates using a parametric model of visual texture (Portilla & Simoncelli, 2000). We found that very few eggs were laid on textured substrates; birds preferred substrates that gave the smallest chromatic difference (CIE LAB) between the egg maculation and the substrate colour, regardless of available texture matches. Females also chose substrates that optimally disrupted edge-detection processes. These results confirm that disruptive colouration may be the mechanism employed to maximise egg camouflage.
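As an illustration of the Canny-based detectability measure described above, the sketch below computes the fraction of a known egg outline that the edge detector marks as an edge, assuming OpenCV; the thresholds and the binary outline mask are hypothetical choices, not the study's calibration.

```python
import cv2
import numpy as np

def boundary_visibility(image_bgr, egg_outline_mask, low=50, high=150):
    """Fraction of a known egg-to-substrate boundary (non-zero pixels in
    egg_outline_mask) that the Canny detector marks as an edge."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)       # hysteresis thresholds are illustrative
    on_boundary = (edges > 0) & (egg_outline_mask > 0)
    n_boundary = np.count_nonzero(egg_outline_mask)
    return np.count_nonzero(on_boundary) / max(n_boundary, 1)
```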

P3-2: Chromatic aberration of the human eye and corneal and crystalline lens powers

Masashi Nakajima

Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Japan

TOPCON Corp., Japan m.nakajima@topcon.co.jp Takahiro Hiraoka

Institute of Clinical Medicine, University of Tsukuba, Japan Tetsuro Oshika

Institute of Clinical Medicine, University of Tsukuba, Japan Yoko Hirohara

Eye Care Company, TOPCON Co., Japan Toshifumi Mihashi Department of Information Processing, Tokyo Institute of Technology, Japan

We measured the chromatic aberrations of a relatively large number of human eyes, compared with previous studies, using a Shack-Hartmann wavefront aberrometer (SHWA). Several researchers have studied relationships between the longitudinal chromatic aberration (LCA) of the eye and other parameters; one interesting result of these studies is that no age-related change in the LCA was found. We developed an SHWA with three different light sources. We obtained spherical equivalents and LCAs with the SHWA, corneal curvatures with an auto-keratometer, and axial lengths with a low-coherence interferometer (IOLMaster, Carl Zeiss, Germany) for 67 normal eyes in 45 cases. We estimated the refractive powers of the ocular inner optics using an eye model (Le Grand, 1967), replacing the model's corneal curvatures and axial lengths with our measured values and adjusting the inner ocular powers so that optical simulations with the modified model matched the measured refractions. We found that LCA depends on the corneal power and the inner ocular power, but not on the ratio of inner ocular power to total ocular power.

P3-3: Color constancy effect measured by color naming method with functional filters for optical simulations of aged ocular lens and dichromats

Mio Hashida

Graduate School of Engineering, Kochi University of Technology, Japan

175083b@gs.kochi-tech.ac.jp Ippei Negishi

School of information, Kochi University of Technology, Japan Keizo Shinomori

School of information, Kochi University of Technology, Japan

It is scientifically and practically important to know how the color constancy effect differs in dichromats and aged observers. To estimate the possible difference, we measured color constancy using a color-naming method in young, color-normal observers wearing one of two kinds of functional filters: "Simulation Filters of an Aged Human Lens" (Geomatec Co.), which simulate for a 32-year-old observer the age-related ocular lens density of a 75-year-old person, and "Variantor" (Itoh Optical Industrial Co.), which simulates the color discrimination of dichromats for color-normal observers. Actual aged observers and dichromats are expected to perform better at color naming, because they may benefit from compensation through long-term adaptation and/or developmental or educational training. Thus, this study was expected to show worst-case estimates of color constancy for aged people and dichromats.

Under one of four illuminants [bright white (223 cd/m2), white, red and blue (17.5 cd/m2)], observers performed categorical color naming with the 11 basic color terms on 558 OSA color chips, either without a filter or with one of the two filters. The results with the aged-ocular-lens filter showed little difference in color naming with the basic terms, suggesting that color constancy is little affected, if at all, by normal aging. The results with the dichromatic filter showed that color naming changed in a typical and predictable way, implying that color names are first obtained through the color constancy effect operating in color-normal vision and then altered by the simple filter effect.

Supported by KAKENHI 24300085 and 24650109.

P3-4: Evaluation of the relationship between color discriminability, color categorization, and glare perception while wearing colored lenses in young adults

Kenichiro Kawamoto

Faculty of Health Science and Technology, Kawasaki University of Medical Welfare, Japan Research Institute for Visual Science, Kanagawa University, Japan kawamoto-k@mw.kawasaki-m.ac.jp Keiko Yamamoto

Faculty of Health Science and Technology, Kawasaki University of Medical Welfare, Japan Natsumi Noguchi

Kawasaki University of Medical Welfare, Japan Hidetsugu Kawashima

Faculty of Health and Medical Sciences, Aichi Shukutoku University, Japan Akio Tabuchi

Faculty of Health Science and Technology, Kawasaki University of Medical Welfare, Japan Tenji Wake

Research Institute for Visual Science, Kanagawa University, Japan

Colored lenses are visual aids used by people with low vision and by elderly people to protect their eyes and reduce glare; they do so by reducing spectral transmittance at short to middle wavelengths of the visible range. A larger reduction in glare affords greater protection; however, it may also lead to a deterioration in color vision.

We conducted three experiments to evaluate the relationship between color discriminability, color categorization, and glare perception. First, color discriminability was measured using the Farnsworth-Munsell 100-hue test. Color categorization was then assessed using a series of Munsell chips. Finally, the magnitude of perceived glare was estimated. Five types of colored lenses (Tokai Optical: TS, FR, LY, UG, and RO) were used, and ten young adults with normal color vision participated in this study.

The findings of this study revealed that color discriminability and color categorization were affected more by the relative S-cone stimulus value than by luminous transmittance (L + M), whereas the magnitude of perceived glare was affected more by L + M. These findings suggest that the effects of colored lenses on glare reduction and on color sensation and perception should be assessed using different criteria.

This work was partially supported by Grant-in-Aid for Scientific Research (B) 21300211 (JSPS KAKENHI).

P3-6: Contrast sensitivity function measured by a four-primary illumination system

Naoshi Hamazono

Graduate School of Science and Engineering, Kagoshima University, Japan k8852928@kadai.jp Katsunori Okajima

Faculty of Environment and Information Sciences, Yokohama National University, Japan Sei-ichi Tsujimura

Faculty of Science and Engineering, Kagoshima University, Japan

It is widely accepted that apparent contrast is one of the most basic visual attributes for image perception. The aim of this study was to investigate how modulation of a test stimulus with a complex spectral radiant power distribution influences the temporal contrast sensitivity function. A four-primary illumination system generated test stimuli with complex spectral radiant power distributions, enabling independent stimulation of each photoreceptor class. Two types of test stimuli were presented: one varying L-, M- and S-cone stimulation without changing the stimulation of ipRGCs (cone stimulus), and another varying the radiant flux of the stimulus without changing its spectral composition, i.e., reducing or increasing the radiant flux uniformly at all wavelengths (light-flux stimulus). The contrast threshold for temporally modulated sinusoidal gratings was measured.

The two thresholds differed for the modulated gratings: the contrast thresholds for the cone grating and the light-flux grating became distinct at low temporal frequencies, whereas at high temporal frequencies the thresholds were almost identical for the two stimuli. These results suggest that ipRGCs play an important role in achromatic vision at low temporal frequencies.
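The photoreceptor-isolating stimulation described above relies on the silent-substitution logic of a multi-primary system: each primary's effect on the L, M and S cones and on melanopsin is characterized, and the primary modulations that produce the desired receptor contrasts are solved for. The sketch below shows that computation with a placeholder 4 × 4 matrix; the numbers are illustrative, not the calibration of the authors' device.

```python
import numpy as np

# Rows: photoreceptor classes (L, M, S, melanopsin/ipRGC); columns: the four
# primaries. Entries give each primary's contribution to each photoreceptor's
# excitation. These are placeholder values; a real system uses the measured
# primary spectra weighted by the photoreceptor spectral sensitivities.
A = np.array([
    [0.60, 0.30, 0.05, 0.10],   # L cone
    [0.30, 0.55, 0.10, 0.15],   # M cone
    [0.02, 0.05, 0.70, 0.20],   # S cone
    [0.10, 0.20, 0.25, 0.60],   # melanopsin
])

# Desired excitation change: modulate L, M and S cones by 10% each while
# leaving ipRGC excitation unchanged (the "cone stimulus" condition).
target = np.array([0.10, 0.10, 0.10, 0.00])

delta_primaries = np.linalg.solve(A, target)
print("Required primary modulations:", delta_primaries)
print("Resulting photoreceptor changes:", A @ delta_primaries)
```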

P3-7: Measurements of daylight during a day and at various locations

Cong Zhang

Department of Information Processing, Tokyo Institute of Technology, Japan zhang.c.aj@m.titech.ac.jp Tomohisa Matsumoto

Department of Information Processing, Tokyo Institute of Technology, Japan Kazuho Fukuda

Department of Information Processing, Tokyo Institute of Technology, Japan Toshifumi Mihashi

Department of Information Processing, Tokyo Institute of Technology, Japan Keiji Uchikawa

Department of Information Processing, Tokyo Institute of Technology, Japan

Daylight is the most important illuminant in our environment. The spectral power distribution of daylight is known to vary with observing conditions such as time of day, weather and location. In this study we aimed to examine differences in the spectral power distribution of daylight through a day, from sunrise to sunset, and at various locations with different surrounding objects. In our measurements we used a spectral illuminometer and a spectral radiometer covering the visible spectrum. We chose a wide-field location to measure changes in the spectral power distribution of daylight over a whole day, and a few different locations to measure changes caused by environmental surrounds. The results showed that the chromaticity of daylight during a day varied over quite a large range along the CIE daylight locus in the CIE 1931 (x, y) chromaticity diagram, between 0.25 and 0.33 in x, with only a small change between 14:00 and 16:00 on that day. Spectral distribution changes across locations were not small, but comparable to those during a day. It may be surprising that the environmental light surrounding us is so variable, depending both on the time of day and on surrounding objects.

Supported by JSPS KAKENHI Grant Numbers 25245065, 22135004.
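Chromaticity coordinates of the kind reported above (x between 0.25 and 0.33) are obtained from a measured spectral power distribution by weighting it with the CIE 1931 color-matching functions. The sketch below assumes those functions are already available as an array; loading them from a CIE data file is left out.

```python
import numpy as np

def spd_to_xy(wavelengths_nm, spd, cmf):
    """CIE 1931 (x, y) chromaticity of a spectral power distribution.
    cmf is an (N, 3) array of the x-bar, y-bar, z-bar color-matching
    functions sampled at wavelengths_nm (assumed loaded from CIE data)."""
    X, Y, Z = (np.trapz(spd * cmf[:, i], wavelengths_nm) for i in range(3))
    total = X + Y + Z
    return X / total, Y / total
```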

P3-8: Connectivity between brain regions associated with optic flow processing

Maiko Uesaki

Graduate School of Letters, Kyoto University, Japan m.uesaki@googlemail.com Hiroshi Ashida Graduate School of Letters, Kyoto University, Japan

Optic flow is one of the most important visual cues to perception of self-motion (Gibson, 1950; Warren et al., 1988). Previous research suggests that the analysis and processing of optic flow involves a number of brain regions, including visual, multi-sensory and vestibular areas (Cardin & Smith, 2010). However, little is known about connectivity between some of those regions. In recent years, the diffusion magnetic resonance imaging (MRI) technique has been applied in research as a non-invasive method to estimate brain fibre structures. This study aimed to investigate structural connectivity between the brain regions associated with optic flow processing, using diffusion MRI.

Supported by the Japan Society for the Promotion of Science.

P3-9: Individual differences in stereopsis and affecting factors

Masayuki Sato

University of Kitakyushu, Japan msato@kitakyu-u.ac.jp

To examine the extent of individual differences in stereopsis and the factors affecting them, we conducted three experiments on human depth perception from binocular disparity. In the first experiment, stereo thresholds were measured for 118 observers. In the second experiment, the weights of disparity and perspective depth information in multiple depth-cue integration were measured. The obtained weights of perspective ranged from 0 to 1.5; that is, some observers gave disparity a negative weight. The correlation between perspective weight and stereo threshold was low. In the third experiment, the influence of stimulus motion on apparent depth from large binocular disparities was examined for 19 observers. Ten observers perceived very large depth in diplopic targets when they oscillated horizontally, while six observers did not see depth based on disparity. Moreover, three perceived the opposite depth to the geometrical prediction, and this reversed depth was also enhanced by stimulus motion. These results were partially, but not completely, related to the basic stereo ability determined in the first two experiments. It appears that other factors, such as stereo sensitivity in the peripheral visual field, should be measured to understand human stereo ability comprehensively.
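A minimal sketch of how a perspective weight greater than 1 (implying a negative disparity weight) can arise in a linear cue-combination analysis is given below; the cue-conflict data and the regression are illustrative assumptions, not the procedure actually used in the second experiment.

```python
import numpy as np

# Illustrative cue-conflict trials: depth specified by disparity and by
# perspective, and the depth the observer reports (hypothetical values).
d_disparity = np.array([2.0, 4.0, 6.0, 2.0, 4.0, 6.0])
d_perspective = np.array([3.0, 3.0, 3.0, 5.0, 5.0, 5.0])
d_perceived = np.array([2.8, 3.4, 3.9, 4.3, 4.9, 5.4])

# Linear model: perceived = w * perspective + (1 - w) * disparity.
# Rearranged: (perceived - disparity) = w * (perspective - disparity),
# so w follows from no-intercept least squares.
x = d_perspective - d_disparity
y = d_perceived - d_disparity
w_perspective = (x @ y) / (x @ x)
print(f"Estimated perspective weight: {w_perspective:.2f}")  # > 1 implies a negative disparity weight
```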

P3-10: A new bistable percept: the bistable fan illusion

Yung-Hsuan Tien

Department of Psychology, National Taiwan University, Taiwan

comet10281@hotmail.com Su-Ling Yeh

Department of Psychology, National Taiwan University, Taiwan

Bistable perception, the phenomenon that the same stimulus (e.g., the Necker cube) can be experienced as two equally competing percepts, provides a fascinating window into the dynamics of human attention and awareness. Here we demonstrate a new bistable stimulus created by superimposing two sets of moving fan figures: one rotating clockwise and the other counterclockwise. The percept alternates between a sense of continuous rotation in one direction and bouncing back and forth (see our demo). To determine the optimal parameters for bistable perception, we manipulated fan width (Experiment 1) and rotation speed (Experiment 2), recording participants' percepts and alternation rates. Results show that the proportion of bouncing-motion percepts increases with fan width, whereas the alternation rate decreases with it. Speed does not modulate the dominant percept; however, the alternation rate increases with rotation speed. In our current setting, a stimulus consisting of two fans of 10 deg width rotating at 72 deg/s yielded the best bistability, which was replicated in the two experiments. This new fan stimulus, along with its associated bistable percept, provides an excellent tool for future studies of bistable perception.

Supported by the National Science Council (NSC 101-2410-H-002-083-MY3).

P3-11: Binocular directional capture of monocular stimuli without allelotropia

Tsutomu Kusano

Graduate School of Marine Science and Technology, Tokyo University of Marine Science and Technology, Japan

tkusan0@kaiyodai.ac.jp Saori Aida

Graduate School of Marine Science and Technology, Tokyo University of Marine Science and Technology, Japan

Koichi Shimono

Graduate School of Marine Science and Technology, Tokyo University of Marine Science and Technology, Japan

According to the Wells-Hering's laws of visual direction, the difference in visual direction, or the relative visual direction, between monocular stimuli is determined by the difference in their retinal positions. However, two monocular stimuli at physically identical vertical/horizontal retinal positions can be seen in different directions when embedded in binocular stimuli that carry vertical/horizontal retinal disparities; this is called "binocular capture of monocular visual direction". Hariharan-Vilpuru and Bedell (2009) suggested that two factors may affect binocular capture: binocular averaging of the retinal positions of binocular stimuli ("allelotropia") and the perceived depth of the monocular stimuli. Here, we show that binocular capture can occur without allelotropia. In our experiment, two monocular horizontal bars were embedded in binocular random-dot planes that had horizontal disparities and a horizontal-shear disparity. When viewed stereoscopically, the planes were perceived to be at different depths and to have the same slant in depth about a horizontal axis. We found that the relative visual direction between the bars varied with the horizontal disparity and the slant of the planes. Because the random-dot planes had no vertical disparity, this result indicates that binocular capture of monocular visual direction can occur without allelotropia.

Supported by JSPS KAKENHI Grant Number 23330215.

P3-12: Temporal relationship between the flash-drag effect and the flash-lag effect: psychophysics and modeling

Yuki Murai

Department of Life Sciences, The University of Tokyo, Japan ymurai@fechner.c.u-tokyo.ac.jp Ikuya Murakami Department of Psychology, The University of Tokyo

The location of a stationary flashed object appears shifted in the direction of nearby motion (flash-drag effect: FDE). On the other hand, a flash adjacent to a moving stimulus appears to lag behind it (flash-lag effect: FLE); this effect has been explained by several models, including the differential latency hypothesis, which holds that a moving stimulus has a shorter latency than a flash. The FDE occurs even when the flash comes earlier than the motion stimulus (Roach & McGraw, 2009), and the temporal relationship between the FDE and the FLE remains unclear. Here, we measured both simultaneously using a random motion stimulus (Murakami, 2001) and compared their temporal properties. While the FLE was measured by judging the offset between a randomly moving bar and a flashed bar, a drifting grating appeared next to the flash at various SOAs to induce the FDE. A grating present up to 200 ms after flash onset induced the FDE, and this temporal tuning of the FDE was quantitatively accounted for by a computational model incorporating stochastic fluctuations of the differential latency, estimated from the FLE data, and a transient-sustained temporal profile of the motion signal. Our findings suggest that the FDE is processed after the FLE.
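
The model is described only in outline here, so the following is a heavily simplified, hypothetical sketch of the general idea: the flash's registration time fluctuates trial by trial (a Gaussian differential latency, as estimated from FLE data), the motion signal has a transient-sustained profile, and the predicted FDE at each SOA is taken as the average motion signal present when the flash is registered. All parameter values are illustrative, not the authors'.

    # Hypothetical sketch of a differential-latency account of FDE temporal tuning.
    import numpy as np

    rng = np.random.default_rng(1)
    mean_latency, sd_latency = 80.0, 40.0     # ms; flash latency relative to the motion signal

    def motion_signal(t_ms):
        # transient burst plus a weaker sustained component; zero before motion onset
        transient = np.exp(-t_ms / 50.0)
        sustained = 0.3 * (1.0 - np.exp(-t_ms / 50.0))
        return np.where(t_ms >= 0, transient + sustained, 0.0)

    def predicted_fde(soa_ms, n_trials=10000):
        # soa_ms: grating onset relative to flash onset (positive = grating after flash)
        latencies = rng.normal(mean_latency, sd_latency, n_trials)
        t_registered = latencies - soa_ms     # flash registration time relative to grating onset
        return motion_signal(t_registered).mean()

    for soa in (-200, -100, 0, 100, 200, 300):
        print(soa, round(float(predicted_fde(soa)), 3))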

Supported by the Japan Society for the Promotion of Science (JSPS) Funding Program for Next Generation World-Leading Researchers (NEXT, LZ004) and a JSPS Grant-in-Aid for Scientific Research on Innovative Areas (25119003).

P3-13: Afterimage duration reflects orientation-selectivity of interocular suppression Motomi Shimizu

Graduate School of Humanities and Social Sciences, Chiba University, Japan

shimizumtm@chiba-u.jp Eiji Kimura

Faculty of Letters, Chiba University, Japan

Strong interocular suppression such as continuous flash suppression (CFS) can render a high-contrast pattern invisible. This invisibility makes psychophysical investigation of the nature of suppression difficult. Recently, we reported that the afterimage duration of a test stimulus that had been completely suppressed from awareness by CFS varied with the orientation difference between the suppressor and test Gabors (VSS 2014). These results suggest that interocular suppression is orientation selective. However, it is also possible that the suppressor directly disturbed the perception of the afterimage and that this disturbance, rather than interocular suppression, was orientation selective. To investigate the direct effect of the suppressor on subsequently presented spatial patterns, we measured the detection threshold of a target presented after a suppressor. The suppressor was a 5 Hz counterphase-flickering Gabor patch (1.5 cpd, ~100% contrast) presented to one eye for 3 s. Immediately after the suppressor offset, the target Gabor was presented to the other eye for 1 s. The orientation of the target was either identical or orthogonal to that of the suppressor. Results showed that the threshold changes caused by the suppressor were small and did not exhibit orientation specificity. Having excluded the possibility of a direct disruptive effect of the suppressor, we conclude that afterimage duration reflects the orientation selectivity of interocular suppression.

P3-14: Illusory contours on random dot images

Masahiro Ishii

Graduate School of Design, Sapporo City University, Japan

m.ishii@scu.ac.jp

An important task of the visual system is to segregate the scene into objects. A variety of visual cues are used for segregation. In random dot stereograms, for instance, binocular disparity defines the figure region. When viewed monocularly, neither boundary nor shape is recognizable. When viewed stereoscopically, the dots in the figure appear on a plane either in front of or behind the rest of the dots. With higher dot densities, the region appears as a bounded shape formed by grouping of the dots. In this research, illusory contours in random dot images created by a variety of visual cues were investigated. In particular, the threshold dot density required to yield an illusory contour was measured. Random dot images with white dots were used in the experiment. The dots in the figure region had disparity, motion, dynamic change, color, length, size, increased dot density, or reduced brightness to define a figure. The results indicate that random dot images with lower dot densities can create illusory contours when the dots in the figure region have disparity, motion, or dynamic change. This suggests that the illusory contours in random dot images arise in brain area(s) involved in motion perception and stereoscopic depth perception.
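
As an illustration of the disparity cue (a minimal sketch, not the authors' stimuli; image size, dot density, and the pixel shift are arbitrary), a disparity-defined figure can be produced by horizontally shifting the dots of the figure region in one eye's image of an otherwise identical random dot pair:

    # Minimal random-dot-stereogram sketch: the figure region is defined only by disparity.
    import numpy as np

    rng = np.random.default_rng(2)
    h, w, density, shift = 200, 200, 0.2, 4                  # shift in pixels = horizontal disparity

    left = (rng.random((h, w)) < density).astype(np.uint8)   # 1 = white dot, 0 = background
    right = left.copy()

    # Rectangular figure region; shift its dots horizontally in the right eye's image
    r0, r1, c0, c1 = 60, 140, 60, 140
    region = left[r0:r1, c0:c1]
    right[r0:r1, c0:c1] = 0
    right[r0:r1, c0 - shift:c1 - shift] = region

    # Fill the strip uncovered by the shift with fresh random dots
    right[r0:r1, c1 - shift:c1] = (rng.random((r1 - r0, shift)) < density).astype(np.uint8)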

Supported by CREST, JST.

P3-15: Active task measurements of tolerance to stereoscopic 3D image distortion

Robert Scott Allison

Centre for Vision Research, York University, Canada allison@cse.yorku.ca Karim Benzeroual

Centre for Vision Research, York University, Canada Laurie M Wilcox Centre for Vision Research, York University, Canada

An intriguing aspect of picture perception is the viewer's tolerance to variation in viewing position, perspective, and display size. These factors are also present in stereoscopic media, where there are additional parameters associated with the camera arrangement (e.g., separation, orientation). The predicted amount of depth from disparity can be obtained trigonometrically; however, perceived depth in complex scenes differs from geometrical predictions. It is not clear to what extent these differences are due to cognitive as opposed to perceptual factors. To evaluate this, we recorded stereoscopic movies of an indoor scene with a range of camera separations (3 to 95 mm) and displayed them on different screen sizes (all subtending 36 deg). Participants reproduced the depth between pairs of objects in the scene using reaching or blind walking. The effects of camera separation (interaxial distance, IA) and screen size were consistently much smaller than predicted, suggesting that observers compensate for distortion in the scene. These results echo those obtained previously with depth magnitude estimation (Benzeroual et al., ECVP 2011). We conclude that the presence of multiple realistic depth cues drives normalization of perceived depth from binocular disparity. Further, these normalization processes are not task-specific; they are evident in both 'perception' and 'action' oriented tasks.
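
For context, the trigonometric prediction mentioned above can be written in a simplified form for a symmetrically positioned viewer at distance D from the screen with interocular separation IPD (a sketch with assumed example values, not the study's parameters):

    # Simplified geometric prediction of depth from on-screen disparity.
    def predicted_depth(screen_disparity_m, viewing_distance_m=2.0, ipd_m=0.065):
        """Predicted depth relative to the screen plane, in metres:
        positive = behind the screen (uncrossed disparity), negative = in front (crossed)."""
        d = screen_disparity_m
        return viewing_distance_m * d / (ipd_m - d)

    # Example: 5 mm of uncrossed screen disparity viewed from 2 m
    print(predicted_depth(0.005))   # ~0.167 m behind the screen plane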

Supported by OMDC, OCE.

P3-16: Effect of the target object visibility on velocity perception during smooth pursuit eye movement

Minhyeok Kim

Graduate School of Engineering, Chiba University, Japan leo7984@gmail.com Keita Ishibashi

Graduate School of Engineering, Chiba University, Japan Koichi Iwanaga Graduate School of Engineering, Chiba University, Japan

Smooth pursuit eye movement (SPEM) keeps a moving target fixed on the fovea. The aim of this study was to investigate the effect of sensitivity in the peripheral visual field, in relation to task difficulty, on velocity perception during SPEM. In the experiment, the velocity of the end point of a comparison stimulus, which traced a sine curve from left to right on a PC screen, was subjectively compared with that of a standard stimulus. Task difficulty was adjusted through target visibility, using different color schemes for the background grating pattern and different thicknesses of the sine curve line. We set two grating-pattern conditions (black/white or gray/white) and three line-thickness conditions (0.06 deg, 0.18 deg, 0.36 deg) for the comparison stimulus; the sine curve was black. Perceived velocity was significantly faster when the grating pattern was gray/white and when the sine curve line was thicker. These results indicate that velocity perception during SPEM depends on the visibility of the target movement.

P3-17: Ambiguity of rotating directions in the silhouette figure Yin Zhu

Department of Psychology, Hokkaido University, Japan tayama@let.hokudai.ac.jp Tadayuki Tayama

Department of Psychology, Hokkaido University, Japan

The silhouette illusion shows an ambiguity in rotation direction: when the silhouette figure physically rotates clockwise, it is often perceived as rotating anti-clockwise, or vice versa. In a previous study, we found that perception of this reversal is affected by the dominant eye, the viewing eye, and stimulus position (Zhu & Tayama, 2013, Suzhou). The present study investigated whether factors such as body parts (whole, upper half, lower half), starting position of rotation, and rotation direction (clockwise, anti-clockwise) affect the reversal in the silhouette figure. In each trial, a rotating stimulus similar to the silhouette illusion figure was presented for 1.2 s at the center of the screen. The observer judged whether the rotation direction was clockwise or anti-clockwise. The results showed that the proportion of anti-clockwise responses was higher than that of clockwise responses, even though the two physical rotation directions were presented equally often. Body parts and starting positions had no effect on judgment accuracy or response time. These results indicate that the effect of presentation position found in the previous study was indeed based on the positions of the stimuli and not on the body parts shown.

P3-18: Phase tagging for single trial MEG responses under binocular rivalry Takashi Shinozaki

Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT) and Osaka University, Japan tshino@nict.go.jp Yasushi Naruse

Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT) and Osaka University, Japan

Binocular rivalry is a phenomenon caused by simultaneously presenting similar but different images to each eye (Blake & Logothetis, 2002). Under binocular rivalry, perception of the visual stimuli alternates spontaneously and randomly every few seconds. Because noninvasive recording methods (EEG, MEG, fMRI, etc.) usually require many repetitions, it is difficult to measure the temporally random responses of binocular rivalry. In this study, we analyzed single-trial MEG responses to binocular rivalry using a phase template analysis in order to capture responses that fluctuate in time across trials. Phase template analysis is a variant of frequency tagging (Tononi et al., 1998) that uses phase information to discriminate tagged stimuli. The visual stimuli were rectangular gratings flickering at 20 Hz, tagged with one of two temporal phases: 0 deg for the right eye and 180 deg for the left eye. Phase templates were prepared from the brain response to a flickering checkerboard and applied to the single-trial brain responses in the binocular rivalry experiment. The results showed that it may be possible to discriminate the perceptual state during binocular rivalry with a temporal resolution of 200 ms.
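
A minimal sketch of the phase-tagging idea follows (not the authors' analysis pipeline: here idealized 20 Hz sinusoidal templates and simulated data are assumed, whereas in the study the templates were derived from measured checkerboard responses):

    # Project a single-trial response onto 20 Hz templates differing by 180 deg in phase.
    import numpy as np

    fs, f_tag = 1000.0, 20.0                   # sampling rate (Hz), tagging frequency (Hz)
    t = np.arange(0, 0.2, 1 / fs)              # 200 ms analysis window

    template_right = np.cos(2 * np.pi * f_tag * t)          # 0 deg phase (right-eye stimulus)
    template_left = np.cos(2 * np.pi * f_tag * t + np.pi)   # 180 deg phase (left-eye stimulus)

    def classify(trial):
        """Return which eye's tag matches the trial better."""
        return "right" if np.dot(trial, template_right) > np.dot(trial, template_left) else "left"

    # Synthetic trial dominated by the right-eye (0 deg) tag plus noise
    rng = np.random.default_rng(3)
    trial = np.cos(2 * np.pi * f_tag * t) + rng.normal(0, 1.0, t.size)
    print(classify(trial))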

Supported by KAKENHI (24700271).

P3-19: Visual fatigue of viewing stereoscopic display with different ranges of binocular disparity

Hiroyasu Ujike

Human Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Japan

h.ujike@aist.go.jp Hiroshi Watanabe

Health Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Japan

Potential adverse effects of viewing stereoscopic images, such as visual fatigue, have been attributed especially to the vergence-accommodation conflict produced by stereoscopic 3D displays. The literature on this issue, however, has mainly used rather short observation periods, under which visual fatigue may not be detected. In the present study, we used two relative ranges of disparities (23, 60 arcmin) and five center values (-20, -10, 0, 10, 20 arcmin; negative values indicate positions in front of the display surface). We used ten-minute observation periods during which observers performed visual search tasks. When the relative range of stimulus disparities was wider and/or the center value was smaller, the results showed the following: (i) comfort ratings obtained every two minutes were lower (greater discomfort), (ii) total scores on Ukai's questionnaire (Kuze & Ukai, 2008) were higher (greater discomfort), and (iii) search times were longer. The result for the different center values of stimulus disparity is not necessarily consistent with previous reports. We speculate that the absolute values of simulated depth, rather than of disparity, may affect search performance and, in turn, visual fatigue.

Supported by the project in 2013 of a Japan-US collaboration for standardization on emerging technologies from the Ministry of Economy, Trade and Industry (METI), Japan.

P3-20: Visually induced motion sickness breaks the synchronization of fMRI time-series from the left and right human MT+

Jungo Miyazaki

Frontier Research Center, Canon Inc., Japan miyazaki.jungo@canon.co.jp Hiroki Yamamoto

Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, Japan Yoshikatsu Ichimura

Frontier Research Center, Canon Inc., Japan Hiroyuki Yamashiro

Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, Japan Tomokazu Murase

Departments of Neurosurgery, Meiji University of Integrated Medicine, Japan Masahiro Umeda

Departments of Medical Informatics, Meiji University of Integrated Medicine, Kyoto, Japan Toshihiro Higuchi

Departments of Neurosurgery, Meiji University of Integrated Medicine, Kyoto, Japan

Certain classes of visual motion stimuli provoke signs of motion sickness such as dizziness and nausea; these effects are called visually induced motion sickness (VIMS). VIMS is often accompanied by visual deficits such as diplopia, implying that it may affect neural processing in the visual system. Several previous functional magnetic resonance imaging (fMRI) studies have used model-based analyses to examine changes in cortical activity during VIMS, but have found no such changes in the visual cortex. We speculated that more complex spatiotemporal changes in cortical activity could be missed by these model-based approaches and that VIMS does alter visual processing. To test this hypothesis, we investigated the effects of VIMS, induced by a video containing both translational and rotational camera motion, on human visual cortical activity, using a model-free fMRI analysis that included left-right inter-hemispheric correlation. The inter-hemispheric correlation of the motion-sensitive visual area MT+ decreased significantly during VIMS, whereas that of the primary visual area V1 did not change. Our findings provide the first evidence for an effect of VIMS on visual cortical activation, and further suggest that desynchronized MT+ activity may contribute to VIMS as well as underlie the visual dysfunction and/or homeostatic responses associated with VIMS.

P3-21: Estimation of functional specificity of visual areas by a transverse relaxation profile Daehun Kang

Kansei Fukushi Research Center, Tohoku Fukushi University, Japan Graduate School of Information Sciences, Tohoku University, Japan kdhun.fine@gmail.com Yulwan Sung

Kansei Fukushi Research Center, Tohoku Fukushi University, Japan Satoshi Shioiri

Research Institute of Electrical Communication, Tohoku University, Japan Graduate School of Information Sciences, Tohoku University, Japan Seiji Ogawa

Kansei Fukushi Research Center, Tohoku Fukushi University, Japan

Some brain areas specialized for visual recognition can be activated by non-preferred as well as preferred stimuli. In functional MRI, activations in response to different stimulus types can be detected and may show the same or different amplitude changes. However, it is uncertain whether the responses actually originate from the same neuronal populations or from heterogeneous ones. To address this concern, we propose a novel method to evaluate stimulation-induced changes in the transverse relaxation profile of the MR signal. Time courses of the profile were obtained with a multi-echo echo-planar imaging sequence, and functional specificity in areas related to visual recognition, such as the fusiform face area and the parahippocampal place area, was estimated from the transverse relaxation profiles. The proposed method could be useful for probing the microscopic functional specificity of brain areas.

P3-22: Perceiving group-wide attractiveness for human faces

Hidekazu Yarimizu

Graduate School of Psychology, Chukyo University, Japan yarimizh@gmail.com Yoshitaka Makino

Graduate School of Psychology, Chukyo University, Japan Jun Kawahara Graduate School of Psychology, Chukyo University, Japan

Extant studies have examined factors contributing to the perceived attractiveness of individual human faces. Because those studies primarily focused on attractiveness ratings of a single target face, it was unclear whether observers can perceive the attractiveness of a group of people as a whole. The present study examined whether observers can compare group-wide attractiveness between two groups of multiple members. We predicted that observers should be able to discriminate which of the two groups had higher attractiveness. Observers were briefly (1500 ms) exposed to two frames of images, each consisting of four faces, and indicated which group they believed was more attractive as a whole. The results indicated that discrimination accuracy was above chance level. We also confirmed that observers did not adopt an unintended strategy of basing their judgment on a single face randomly selected from the four. A virtually identical pattern of results was obtained when each group consisted of eight faces in Experiment 2. These results suggest that observers can perceive the attractiveness of a group of people as a whole when discriminating between the attractiveness of two groups. We therefore conclude that people can perceive group-wide attractiveness for human faces.

P3-23: The temporal dynamics of object detection between target and non-target response by using fragmented object contours

Kosuke Taniguchi

Research Center for Psychological Science, Doshisha University, Japan

kosuket314@gmail.com

To develop a detailed recognition model, we need to understand the difference between target and non-target responses in a 2AFC task. In this study, to address this question, the temporal dynamics of behavioral performance (accuracy and RT) in object detection were measured. The task was to decide whether a stimulus, presented for one of three durations (50, 100, or 200 ms), was an object or a non-object. The object stimuli were 30 fragmented object contours from Panis et al. (2008), manipulated by fragment length (short versus long). The non-object stimuli were scrambled contours, made by shuffling each fragmented object contour. The data were analyzed with survival analysis to study how stimulus factors and presentation duration influence the reaction time distribution. The analysis indicated that the accuracy of both 'object' and 'non-object' responses increased with presentation duration. With longer durations, however, 'non-object' responses became slower, whereas 'object' responses became faster. These results imply that more complex processing is needed for an accurate non-target response, whereas the target response arises accurately and promptly when there is enough information to decide.
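
A minimal sketch of a survival analysis of RT distributions of this general kind (not the authors' exact analysis), assuming that trials without a response before a deadline are treated as right-censored; it uses the third-party lifelines package and simulated data:

    # Kaplan-Meier estimate of the RT 'survival' function for two hypothetical response types.
    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(4)
    rt_object = rng.gamma(shape=5.0, scale=90.0, size=300)      # ms, simulated 'object' responses
    rt_nonobject = rng.gamma(shape=6.0, scale=95.0, size=300)   # ms, simulated 'non-object' responses
    deadline = 1500.0                                           # ms, hypothetical response deadline

    def fit_km(rts, label):
        observed = rts <= deadline                  # False = no response in time (right-censored)
        kmf = KaplanMeierFitter()
        kmf.fit(np.minimum(rts, deadline), event_observed=observed, label=label)
        return kmf

    km_obj = fit_km(rt_object, "object")
    km_non = fit_km(rt_nonobject, "non-object")
    print(km_obj.median_survival_time_, km_non.median_survival_time_)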

P3-24: Dynamic face adaptation in facial expression judgment and its position specificity

Yu Ting Lew

Nanyang Technological University, Singapore

ytlew1@e.ntu.edu.sg Hong Xu

Nanyang Technological University, Singapore

We have previously shown that adapting to dynamic bubbled faces generates a significant adaptation aftereffect (Luo & Xu, 2014; Xu et al., 2013). However, those dynamic faces were generated by varying the location of the bubbles within the same face. In the current study, we investigated whether the sequence of the facial expression matters in adaptation; for example, a neutral face turning into a happy face might generate a different adaptation aftereffect than a happy face turning into a neutral one, as the latter contains predictive movement toward a sad expression. We adapted subjects to videos of a happy face turning neutral ('happy-to-neutral') and a neutral face turning happy ('neutral-to-happy'), as well as a baseline condition with no adaptor ('0-f'). Interestingly, we found a similar adaptation aftereffect in the happy-to-neutral and neutral-to-happy conditions. This suggests that it is the visual information itself, rather than the predictive motion of the facial features, that drives the adaptation aftereffect. To test whether this dynamic face adaptation is local or non-local, we investigated its positional specificity by shifting the test face away from the adapting location. The aftereffect disappeared when the test face was moved away from the location of the adapting dynamic face, with no overlap between the two. This suggests that dynamic face adaptation is a local effect.

P3-25: The effect of verbal overshadowing on face identification: A pilot study Minje Kim

Yonsei University, Korea wansogomtang@gmail.com Kyong-Mee Chung

Yonsei University, Korea Woo Hyun Jung Chungbuk National University, Korea

Verbal rehearsal is generally considered an effective method for improving recognition. Yet recent studies have shown that it works in the opposite way for face recognition; this is called the verbal overshadowing effect. One possible explanation is that verbal overshadowing interferes with global processes of face perception. The purpose of this study was to investigate whether the verbal overshadowing effect is also observed when recognizing inverted faces, which is believed to rely on local processes of face perception. Nine undergraduate students were randomly assigned to three conditions (verbal description of a whole face, verbal description of eyes and mouth, and control) and were given face recognition tasks. Two upright faces and two inverted faces were randomly presented as target stimuli. A one-way ANOVA suggested no main effect of face type (F = 1.28, p > 0.05). However, contrary to our expectation, participants in the control group showed significantly better performance on inverted than on upright face recognition (F = 9.00, p < 0.05). These results might be due to several factors, including ceiling effects and the long display time. Alternatively, they suggest that attentional bias, rather than global processing, is a key mechanism underlying the verbal overshadowing effect. Further research and clinical implications are discussed.

P3-26: Positive eyes facilitate use of pigment and shape information from the rest of the face

Harold C H Hill

University of Wollongong, Australia harry@uow.edu.au Sarah Shrimpton

University of Dundee, UK Yumiko Otsuka

UNSW Australia, Australia Mark D Shriver Penn State, USA Peter D Claes

KU Leuven, Belgium

A positive eye region offsets the detrimental effect of photographic negation on recognition by making negated information from the rest of the face available (Gilad, Meng, & Sinha, 2009). We tested whether the added information is shape- or pigment-based by contrasting the effects of polarity on shape-only and shape-with-pigment ('texture-mapped') stimuli. Face and eye areas could be either positive or negative, and observers performed a same/different identity matching task or a sex judgment task. The contrast chimera effect (i.e., performance for stimuli with positive eyes and a negative face as good as for fully positive stimuli) was replicated for the sex judgment task when pigment information was available, suggesting that positive eyes make negated facial pigment information usable. Positive eyes also improved performance for positive face shape without pigment, suggesting a more general benefit of positive eyes as well as an effect of negation for shape-only stimuli. For same/different judgments, positive eyes were not sufficient to offset the effects of negation but again improved performance with positive face shape. The results cannot be attributed to eye contrast alone, and are interpreted as evidence that positive eyes facilitate access to degraded information in the rest of the face.

Supported by ARC DP09986898.

P3-27: Configural face processing in serial visual presentation Jasmina Stevanov

Graduate School of Letters, Kyoto University, Japan jasminastevanov@yahoo.com Zorica Stevanov

Department of Psychology, University of Belgrade, Serbia Dejan Todorovic

Department of Psychology, University of Belgrade, Serbia

This study investigated the processing mechanisms of featural and configural face information using serial visual presentation. Configural information refers to the canonical configuration of facial features and their spatial relations. Scrambling features impairs configural processing; however, we argue that simultaneous visibility of features can still initiate configural processing, even with disrupted feature interrelations. Under the assumption that serial presentation may impair any kind of configural processing, we devised a method for presenting features in rapid succession or interleaved with two-second blanks. The regions around the eyes, eyebrows, tip of the nose, and mouth were cut out from achromatic real-face images and presented in sequence at their canonical or random locations on the screen. Results showed that simultaneous presentation and interleaved serial presentation of features at their canonical locations gave the best recognition rates. We suggest that interleaved serial presentation allows enough time to initiate configural processing, as participants try to retrieve a mental representation of the familiar face and compare each presented feature against it. With rapid serial presentation, recognition rates dropped dramatically, trial repetitions increased significantly, and reaction times were substantially longer, consistent with a failure to initiate configural processing.

P3-29: Importance of the stimulus intensity in perceptual adaptation

Dayi Jung

Yonsei University, Korea dayi0220@hotmail.co.kr Euihyun Kwak

Yonsei University, Korea Kyong-Mee Chung

Yonsei University, Korea

Substantial evidence suggests that adaptation to visual patterns can improve the sensitivity of face recognition. The present study was designed to investigate how perceptual adaptation changes with stimulus intensity. In study 1, participants (n = 8) were asked to distinguish between different female faces that varied in eight intensity levels, from 10% to 80%. Participants then completed an identification test after adapting to an average face for five minutes. In study 2, participants (n = 8) were asked to distinguish between the same female faces, now varying in 10 intensity levels from 5% to 50%, and completed the identification test after a five-minute adaptation task. A significant enhancement in face discrimination ability was observed only in study 2. The results suggest that stimulus intensity influences the adaptation process in face recognition. This experiment also suggests that stimulus intensity should be carefully considered when examining the mechanisms of face recognition in clinical populations.

Supported by the National Research Foundation of Korea.

P3-30: Rapid and efficient processing of frequently used meaningful visual symbols Naoto Sakuma

Graduate School of Humanities and Social Sciences, Chiba University, Japan nsakuma714@gmail.com Eiji Kimura

Faculty of Letters, Chiba University, Japan Ken Goryo Faculty of Letters, Chiba University, Japan

When viewing brief displays of numerals, observers can rapidly indicate which side of the display contains the larger number of target numerals without knowing every numeral in the display (Corbett et al., 2006, Vision Research, 46, 1559-1573). This phenomenon seems related to gist extraction from visual scenes, but has been suggested to be unique to Arabic numerals. However, aside from representing quantities, Arabic numerals have notable characteristics (e.g., they are ideograms and are used very frequently) that may mediate the rapid processing. This study systematically investigated the influences of these characteristics. Results showed that Kanji letters (ideograms) that do not represent quantities could be processed rapidly if they were frequently used. Moreover, when tested with physically identical stimulus sets that could be interpreted as either Kanji numerals or Kana letters (phonograms), the rapid processing was observed when the stimuli were regarded as Kanji numerals, but not when they were regarded as Kana letters, even though these were also frequently used letters. Therefore, we conclude that for the rapid processing to occur, the stimuli need to be meaningful symbols that are used frequently. Finally, we confirmed this conclusion by demonstrating the rapid processing with meaningful symbols that are neither letters nor numerals but are used frequently.

P3-31: Inhibition of return to eye gaze develops in teenage years Hui Fang Lin

Department of Special Education, National Changhua University of Education, Taiwan t02204@mail.dyjh.tc.edu.tw Li Jingling

Graduate Institute of Neural and Cognitive Sciences, China Medical University, Taiwan Chih Chien Lin

Department of Psychiatry, Taichung Veterans General Hospital, Taiwan Chia Jui Tsai

Department of Psychiatry, Changhua Christian Hospital Taiwan Yen Ching Wu

Department of Psychiatry, Taichung Veterans General Hospital, Taiwan Joung Kung Yih

Department of Psychiatry, Taichung Veterans General Hospital, Taiwan Lin Me Chen

Department of Psychiatry, Taichung Veterans General Hospital, Taiwan Wan Ru Tzeng

Department of Psychiatry, Taichung Veterans General Hospital, Taiwan Mei Jung Chen

Department of Psychiatry, Taichung Veterans General Hospital, Taiwan

Gaze is an important cue in social interaction. Gaze direction can attract attention automatically and, for adults, can produce inhibition of return (IOR)—a slower response to an item at a previously attended location. We examined how early in development eye gaze can induce IOR by recruiting three typically developing groups: 6-8, 9-12, and 13-16 year-olds. Gaze cues were delivered by photos of real faces but were non-informative about the target location (the appearance of a star), with delay intervals of 200 ms, 1200 ms, or 2400 ms. Results showed reliable gaze-induced IOR only in the 9-12 and 13-16 age groups, while the 6-8 age group attended to the gaze direction regardless of cue duration. Further, the 13-16 age group showed gaze-induced IOR earlier in the time course than expected. Consistent with the idea that eye gaze is an innate social cue from infancy, our results showed that the cueing effects of gaze were significant and stable from 6-year-old children to older teenagers. However, we found no gaze-induced IOR in 6-8-year-old children. Thus, our results suggest that gaze-induced IOR reflects a developmental trend in social interaction.

P3-32: Moderate transient noise improves response inhibition in children with attention-deficit / hyperactivity disorder

Yi-Min Tien

Department of Psychology, Chung Shan Medical University, Taiwan tien@csmu.edu.tw Yu-Shu Liang

Department of Psychology, Chung Shan Medical University, Taiwan Hisn-Wei Wu

Department of Psychology, Chung Shan Medical University, Taiwan Jeng-Yi Tyan

Department of Psychiatry, Chung Shan Medical University Hospital, Taiwan Tun-Shin Lo

Department of Speech Language Pathology and Audiology, Chung Shan Medical University, Taiwan

Kuo-You Huang

Department of Speech Language Pathology and Audiology, Chung Shan Medical University, Li-Chuan Hsu

Institute of Neural and Cognitive Sciences, China Medical University, Taiwan School of Medicine, China Medical University, Taiwan Vincent Chin-Hung Chen Department of Psychiatry, Chung Shan Medical University Hospital, Taiwan School of Medicine, China Medical University, Taiwan

Researchers have suggested that a deficit in response inhibition is critical to the clinical syndrome of attention-deficit/hyperactivity disorder (ADHD). Continuous noise is beneficial for the memory performance of children with ADHD (Söderlund et al., 2007). However, intense noise such as 80 dB may potentially induce hearing loss in some cases. The current study addressed whether moderate transient noise (35 or 55 dB) can benefit response inhibition in ADHD and how its influence differs across ADHD subtypes. Children and adolescents were recruited and assigned to groups according to predominantly inattentive presentation (ADHDin) or combined presentation with additional hyperactivity-impulsivity criteria (ADHDcom). We adopted a stop-signal task with a visual Go stimulus and an auditory Stop signal and measured performance under different levels of transient noise. The results revealed that the ADHDcom group showed lower Go accuracy and longer stop-signal reaction times than the ADHDin group under the no-noise condition. However, these performance differences vanished under the transient-noise conditions; that is, transient noise improved the performance of the ADHDcom group. Our results suggest that moderate transient noise, rather than intense continuous noise, is sufficient to selectively improve the inhibition function of the ADHDcom group.
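
For reference, the stop-signal reaction time (SSRT) reported in such tasks is commonly estimated with the integration method; the sketch below uses simulated values and is not necessarily the exact procedure used in this study:

    # SSRT via the integration method: the p(respond|stop) quantile of the go RT
    # distribution minus the mean stop-signal delay (SSD). Data are simulated.
    import numpy as np

    rng = np.random.default_rng(5)
    go_rt = rng.normal(550, 100, 400)        # ms, correct go-trial RTs
    mean_ssd = 250.0                         # ms, mean stop-signal delay from the tracking staircase
    p_respond_given_stop = 0.45              # proportion of failed stop trials

    def ssrt_integration(go_rt, mean_ssd, p_respond):
        return np.quantile(go_rt, p_respond) - mean_ssd

    print(ssrt_integration(go_rt, mean_ssd, p_respond_given_stop))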

Supported by NSC 102-2410-H-039-001, CS 11088.

P3-33: Longer stimulus presentation decreases the magnitude of cross-domain priming

Kiyofumi Miyoshi

Graduate School of Letters, Kyoto University, Japan miyoshi80@gmail.com Hiroshi Ashida Graduate School of Letters, Kyoto University, Japan

Previous studies suggest that longer stimulus presentation decreases the magnitude of priming. However, most of these studies employed the repetition priming paradigm, which activates perceptual as well as higher-level processes (e.g., conceptual, decisional, and motor processes). Thus, it is uncertain which level of processing the decrease in priming reflects. To address this issue, we assessed the impact of stimulus duration on the magnitude of priming while manipulating the stimulus domain. In the 'within-domain' condition, words were presented in both study and test sessions, and participants judged whether each stimulus was natural or man-made. In the 'cross-domain' condition, pictures were presented in the study sessions, but their semantically matched words were presented in the test sessions. Stimulus duration did not have a significant impact on priming in the within-domain condition. However, 250 ms of stimulus presentation led to significantly greater priming than 2000 ms of presentation in the cross-domain condition. The results suggest the involvement of higher-level processes, beyond the perceptual level, in the decrease in priming with longer presentation.

P3-34: Qigong dandao meditation improves visual attention Weilun Chou

Department of Psychology, Fo Guang University, Taiwan,

basilpudding@gmail.com Min-Hui Tsai

Department of Psychology, Fo Guang University, Taiwan,

Dandao meditation, a kind of traditional Chinese qigong, trains practitioners to focus their attention on specific body parts, including areas of the brain. The practice has been speculated to have beneficial effects on these body parts. Because meditators frequently focus attention on many attention-related brain areas, we investigated the hypothesis that dandao meditation improves attentional performance. Three functionally and neuroanatomically distinct attentional subsystems were examined: alerting, orienting, and conflict monitoring. We adopted the Attention Network Test (ANT; Fan, McCandliss, Sommer, Raz, & Posner, 2002). In Experiment 1, we compared the performance of an expert group with that of a control group. In Experiment 2, before the ANT task, meditators were asked to focus on only the parietal, temporal, or prefrontal lobe during one period of meditation. By comparing the performance of the different attentional components under these three conditions, we could determine whether specific aspects of attention are improved when meditators focus on specific brain areas. In Experiment 3, a 12-week dandao training program was provided to a control group. We found that participants who practiced dandao meditation demonstrated improved orienting and conflict monitoring performance. In addition, we found that different attentional subsystems were affected when the meditators focused on different brain areas.
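
For reference, the three ANT network scores are conventionally computed as reaction-time differences (Fan et al., 2002); the mean RTs in the sketch below are hypothetical:

    # Conventional ANT network scores from condition-mean reaction times (ms).
    def ant_scores(rt_no_cue, rt_double_cue, rt_center_cue, rt_spatial_cue,
                   rt_incongruent, rt_congruent):
        alerting = rt_no_cue - rt_double_cue        # benefit of a temporal warning signal
        orienting = rt_center_cue - rt_spatial_cue  # benefit of a spatially informative cue
        conflict = rt_incongruent - rt_congruent    # cost of flanker conflict (executive control)
        return alerting, orienting, conflict

    print(ant_scores(620, 580, 600, 560, 680, 610))   # -> (40, 40, 70)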

Supported by Taiwan's National Science Council (NSC 102-2420-H-431-001-MY2 and NSC 102-2410-H-431-008).

P3-35: Sequence effects of symbolic cueing: the role of voluntary control and attentional control settings

Qian Qian

Yunnan Key Laboratory of Computer Technology Applications, Kunming University of Science and Technology, China

qianqian1025@gmail.com Raodong Yu

Yunnan Key Laboratory of Computer Technology Applications, Kunming University of Science and Technology, China Feng Wang

Yunnan Key Laboratory of Computer Technology Applications, Kunming University of Science and Technology, China

People tend to automatically orient their attention to the object that another person is looking at or that an arrow is pointing at. In a symbolic cueing paradigm, this cue-following behavior has been found to be influenced by the orienting processes of previous trials. The present study investigated the influence of voluntary control on the sequence effect of arrow cueing by manipulating the cue's predictive value (80%, 50%, or 20% probability of predicting the target location), and the influence of attentional control settings by manipulating task demands (detection, discrimination, or localization). The results showed that cue predictive value had no significant influence on the magnitude of the sequence effect, whereas task demands had a significant influence. The results support the automatic retrieval hypothesis of the sequence effect, but also suggest that the implicit processes involved in the sequence effect are sensitive to participants' attentional set.

P3-36: Spatial spread of visual attention while tracking a moving object Kei Ishii

Graduate School of Information Sciences, Tohoku University, Japan keishii@riec.tohoku.ac.jp Kazumichi Matumiya

Graduate School of Information Sciences, Tohoku University, Japan Research Institute of Electrical Communication, Tohoku University, Japan Ichiro Kuriki

Graduate School of Information Sciences, Tohoku University, Japan Research Institute of Electrical Communication, Tohoku University, Japan Satoshi Shioiri

Graduate School of Information Sciences, Tohoku University, Japan Research Institute of Electrical Communication, Tohoku University, Japan

Visual attention selects important information from the vast amount of visual input for efficient processing in the human brain. We studied the spatial spread of visual attention during attentive tracking of a moving object, using the steady-state visual evoked potential (SSVEP). The SSVEP is a component of the visual evoked potential generated by cortical neurons responding synchronously to the flicker frequency of a visual stimulus; its amplitude is known to increase for attended and decrease for unattended stimuli at the corresponding temporal frequency. We measured SSVEP amplitude and inter-trial phase coherence (ITPC) while subjects performed an attentive tracking task with a bistable apparent motion stimulus, which yields either clockwise or counterclockwise apparent motion around the fixation point. We analyzed the SSVEP amplitude and ITPC at various distances from the focus of attention. Both the SSVEP amplitude and the ITPC showed similar changes with distance, with broad spatial tuning. This tuning was also similar to the tuning reported in previous studies for attention to a stationary target. These findings suggest that spatial tuning is similarly broad for attention to moving and stationary targets.
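
A minimal sketch of how SSVEP amplitude and ITPC can be computed at a tagged frequency from single trials (illustrative only; the flicker frequency, sampling rate, and simulated data are assumptions, not the study's values):

    # SSVEP amplitude and inter-trial phase coherence (ITPC) from the Fourier
    # component of each trial at the stimulus frequency.
    import numpy as np

    fs, f_stim, dur, n_trials = 1000.0, 15.0, 1.0, 60   # Hz, Hz, s, trials (all assumed)
    t = np.arange(0, dur, 1 / fs)
    rng = np.random.default_rng(6)

    # Simulated trials: a phase-locked 15 Hz response plus noise
    trials = np.sin(2 * np.pi * f_stim * t) + rng.normal(0, 2.0, (n_trials, t.size))

    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    bin_idx = np.argmin(np.abs(freqs - f_stim))
    spectra = np.fft.rfft(trials, axis=1)[:, bin_idx]    # complex Fourier component per trial

    amplitude = np.abs(spectra).mean()                   # mean SSVEP amplitude (a.u.)
    itpc = np.abs(np.mean(spectra / np.abs(spectra)))    # 0 = random phase, 1 = perfect locking
    print(amplitude, itpc)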

P3-37: Studying effects of shifting gaze with subtle visual stimuli during driving Kohei Oka

Institute of Industrial Science, The University of Tokyo, Japan oka-k@iis.u-tokyo.ac.jp Yusuke Sugano

Institute of Industrial Science, The University of Tokyo, Japan Yoichi Sato

Institute of Industrial Science, The University of Tokyo, Japan Akihiro Sugimoto

National Institute of Informatics, Japan, Toshiyuki Kondo DENSO Co., Japan Keisuke Hachisuka DENSO Co., Japan Katsunori Abe DENSO Co., Japan Eiichi Okuno DENSO Co., Japan

Techniques for naturally shifting human gaze have the potential to play an important role in attentive user interfaces in various application fields, such as driving assistance systems. In recent years, several techniques have been proposed to modulate the scene so that the user's gaze is guided toward, for example, potential obstacles while driving, without interrupting the main task. The goal of this work is to investigate how the effectiveness of such gaze shifting techniques can be evaluated quantitatively. Specifically, we conducted an experiment to see whether visual saliency can be used to evaluate the subtleness of the scene modulation. We employed a technique that modulates the luminance around the target location with a 10 Hz carrier wave to create subtle visual stimuli for gaze shifting, and we showed modulated videos of driving scenes to participants at various modulation strengths. In addition to eye tracking, we conducted a test to assess whether participants noticed the modulation. By analyzing the visual saliency values of the subtle visual stimuli together with the experimental data, we show the relationship between gaze shifting, modulation awareness, and visual saliency, and discuss how gaze shifting systems should be evaluated.
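
A minimal sketch of the kind of luminance modulation described (not the authors' implementation; the frame size, window width, and modulation strength are hypothetical): a 10 Hz sinusoidal carrier applied within a Gaussian window around the target location.

    # Apply a subtle 10 Hz luminance modulation around a target location in a video frame.
    import numpy as np

    h, w, fps = 480, 640, 60
    yy, xx = np.mgrid[0:h, 0:w]

    def modulate(frame, t, cx, cy, sigma=30.0, strength=0.05, f_carrier=10.0):
        """Return 'frame' (float luminance in 0-1) with a sinusoidal modulation of the
        given strength, windowed by a Gaussian centred on (cx, cy), at time t (s)."""
        window = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
        carrier = np.sin(2 * np.pi * f_carrier * t)
        return np.clip(frame + strength * carrier * window, 0.0, 1.0)

    frame = np.full((h, w), 0.5)                     # stand-in for a video frame
    modulated = modulate(frame, t=12 / fps, cx=500, cy=240)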

P3-38: Relationships between emotions and colors: a comparison between Japanese and Chinese

Xiangyuan Zeng

Graduate School of Letters, Kyoto University, Japan sousyougen@163.com Kana Kuraguchi

Graduate School of Letters, Kyoto University, Japan Hiroshi Ashida Graduate School of Letters, Kyoto University, Japan

A previous study proposed a concordance among humans in perceiving emotional expressions, colors, and their relationships, possibly on the basis of universal biological mechanisms (Osvaldo & Paul, 2007). However, most participants in that study were English speakers, mainly from Australia and Europe. Therefore, to examine this proposition, we tested Japanese and Chinese students in the present study. Six female and six male faces, each expressing one of six basic emotions (anger, surprise, disgust, sadness, happiness, and fear), were used in the experiment. For each face, participants were asked to select the color that best fitted it. In the data analysis, each color was transformed into lightness and chromaticity in the CIELAB color space. ANOVAs showed that the relationship between lightness and the six emotions was similar for Japanese and Chinese participants, as expected from the previous study. However, the relationship between chromaticity and the emotions differed significantly. Moreover, we investigated the positive/negative feelings evoked by 12 basic colors, and the results showed significant differences between Japanese and Chinese participants. These results suggest that the sense of color differs across cultures and languages.
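
A minimal sketch of the colorimetric step (one possible tool, not necessarily the authors' pipeline), converting a selected color from sRGB to CIELAB lightness and chroma with scikit-image:

    # Convert a selected sRGB color to CIELAB and extract lightness (L*) and chroma (C*_ab).
    import numpy as np
    from skimage import color

    rgb = np.array([[[0.8, 0.2, 0.3]]])       # the selected color as sRGB values in [0, 1]
    lab = color.rgb2lab(rgb)[0, 0]            # CIELAB: L*, a*, b*
    lightness = lab[0]
    chroma = np.hypot(lab[1], lab[2])         # chromatic content: sqrt(a*^2 + b*^2)
    print(lightness, chroma)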

P3-39: Understandability and aesthetic preference in diagram design: A case study on the design of visual pathway diagrams

Yu-Ying Lin

Graduate School of Letters, Kyoto University, Japan

ayucrown1988@gmail.com Hiroshi Ashida

Graduate School of Letters, Kyoto University, Japan

Although numerous studies have demonstrated the educational advantages of diagrams, little effort has been invested in identifying how different visual representations influence readers' understanding and aesthetic preference. This study investigated the effect of diagram design on understandability and aesthetic preference. Four representations of the visual pathways, varying in concreteness and complexity, were used. Participants rated their impressions of each design as well as its understandability and aesthetics. Subjective evaluations of each design were analyzed using semantic differential profiles and factor analysis. The results indicated that concreteness plays the key role in understandability, whereas visual complexity can affect the legibility and memorability of a diagram. In terms of aesthetic evaluation, participants showed a preference for a medium level of complexity and concreteness. Interestingly, no preference was exhibited for colorful and realistic representations, which contradicts what is considered effective in other contexts such as advertising. Finally, participants generally gave high aesthetic ratings to easily understandable designs, which supports the processing fluency theory: the more fluently perceivers can process an object, the more positive their aesthetic response.

P3-40: Synesthetic colors affect visuospatial working memory

Hiroyuki Mitsudo

Kyushu University, Japan hmitsudo@lit.kyushu-u.ac.jp Yukiko Nishi Kyushu University, Japan

A change-detection task was used to examine the relationship between synesthetic colors and visuospatial working memory. Grapheme-color synesthetes and controls were asked to detect changes among achromatic letters presented successively in two arrays. We measured sensitivity to change in two conditions: the change of a letter either was or was not accompanied by a change in the synesthetic color. When the change was accompanied by a synesthetic-color change, the synesthetes' sensitivity fluctuated considerably compared with that predicted from the controls' sensitivity. These results suggest that synesthetic colors do not necessarily enhance change detection, but are held in visuospatial working memory.
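
For reference, change-detection sensitivity of this kind is conventionally expressed as d' computed from hit and false-alarm rates; the sketch below uses illustrative values, not data from this study:

    # d' from hit and false-alarm rates, with a log-linear correction for extreme proportions.
    from scipy.stats import norm

    def dprime(hit_rate, fa_rate, n_change, n_no_change):
        hr = (hit_rate * n_change + 0.5) / (n_change + 1)
        fr = (fa_rate * n_no_change + 0.5) / (n_no_change + 1)
        return norm.ppf(hr) - norm.ppf(fr)

    print(dprime(0.85, 0.20, 80, 80))   # illustrative hit/false-alarm rates and trial counts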

P3-41: Action Comparison Influences Size Perception

Wenbo Luo

Kyushu University, Japan luo@kyudai.jp Yongning Song Kyushu University, Japan East China Normal University, China Kayo Miura Kyushu University, Japan

We examined whether size perception changes when people engage in action comparison. We instructed participants to roll balls into a goal or goals and then to immediately reproduce the size of the goal(s) on a computer. Participants were randomly assigned to one of three conditions: a single big goal, a single small goal, or a big goal and a small goal presented simultaneously (the comparable condition). The results indicated that participants perceived the small goal as smaller in the comparable condition than in the single-small-goal condition. However, the perceived size of the big goal was not affected by the action comparison. Moreover, when participants estimated the size of the small goal without rolling balls, it was not perceived as smaller, showing that the effect in the comparable condition was not due to simple size contrast. The results suggest that action comparison influences size perception: participants tend to perceive the small goal as smaller in the comparable condition because rolling balls into it becomes more difficult and requires more effort in comparison with the big goal.

P3-42: Bouba/Kiki effect in deaf people

Shuichiro Taya

Taisho University, Japan

s_taya@mail.tais.ac.jp

We have a strong tendency to associate specific shapes with specific sounds. For example, when asked which of two figures is "bouba" and which is "kiki", most people associate the rounder shape with "bouba" and the angular shape with "kiki". It has been proposed that this shape-sound matching bias can be explained by the similarity between the shape of the visual stimulus and the mouth shape used to pronounce the word; for example, pronouncing "bouba" requires a more rounded mouth, whereas pronouncing "kiki" makes the mouth more angular. If this is true, deaf people who do not use oral communication in their daily life should show no, or a weaker, sound-shape matching bias. To test this, we gave deaf people the shape-sound matching task. The results showed that the matching tendency of the deaf participants did not differ from that of the non-deaf participants. However, most of the deaf participants in our study use oral communication on a daily basis. Interestingly, one deaf participant whose daily communication relied largely on sign language showed no shape-sound matching bias (i.e., no bouba/kiki effect). The role of daily experience of the coupling between articulatory movement and heard sound is discussed.

P3-43: Interference of manual reactions by concurrent saccades

Hiroshi Ueda

The University of Tokyo, Japan uedahi64@graco.c.u-tokyo.ac.jp Kohske Takahashi

Research Center for Advanced Science and Technology, The University of Tokyo, Japan Katsumi Watanabe

Research Center for Advanced Science and Technology, The University of Tokyo, Japan Yasushi Yamaguchi The University of Tokyo, Japan

In order to investigate how concurrent saccadic and manual reactions interfere with each other, we used a gap-overlap reaction task in which a central fixation point either disappeared shortly before the peripherally presented target (gap) or remained present until the end of the trial (overlap). Observers were asked to fixate the central fixation point and then respond to the target as quickly and accurately as possible, in separate blocks, by making a saccade toward the target (saccade-only), by pressing the corresponding left or right arrow key while maintaining fixation (keypress-only), or by doing both (concurrent). The manual reaction time was longer in the concurrent condition than in the keypress-only condition, while the saccadic latency was not influenced by the concurrent manual response. The gap effect—response facilitation in the gap condition compared with the overlap condition—was observed in all conditions; the concurrent task affected neither the saccadic nor the manual gap effect. The results suggest that the saccadic and manual gap effects are independent of each other, but that saccade reactions interfere with manual choice reactions.

P3-45: The dilations and compressions of the interval-time perception caused by visual flickers and auditory flutters

Kenichi Yuasa

Graduate School of Arts and Sciences, The University of Tokyo, Japan Japan Society for the Promotion of Science (JSPS), Japan yuasa@fechner.c.u-tokyo.ac.jp Yuko Yotsumoto

Graduate School of Arts and Sciences, The University of Tokyo, Japan

When an object is presented for a certain amount of time, the perception of its duration is called interval-time perception. When a visually presented object moves or flickers, its interval time tends to be overestimated; such overestimation is called time dilation. The aim of this study was to investigate the mechanisms of supra-second interval-time perception and to examine whether the interval times of visually and aurally presented objects are processed in common neural networks.

In the experiments, two stimuli were presented sequentially, and subjects reported which was presented for longer. One stimulus was always a standard presented for a fixed duration; the other was a comparison stimulus with various flicker (visual) or flutter (auditory) rates, presented for various durations. When the visual stimuli flickered at 10.9 Hz for 1 s or 3 s, the perceived interval time was dilated. In contrast, when the auditory stimuli fluttered at 10.9 Hz for 1 s, the perceived time was compressed. When flickering visual and fluttering auditory stimuli were presented simultaneously, neither dilation nor compression was observed. The results give insight into audio-visual interactions and the modality dependence of neural oscillations in interval-time perception.
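
A minimal sketch of how a point of subjective equality (PSE) can be estimated from such a duration-comparison task (illustrative simulated data, not the authors' analysis): fit a cumulative Gaussian to the proportion of "comparison judged longer" responses. With the comparison carrying the flicker or flutter, a PSE below the standard duration indicates that the flickering/fluttering stimulus was perceived as longer (dilation), and a PSE above it indicates compression.

    # Fit a cumulative Gaussian psychometric function to duration-comparison data.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    comparison_s = np.array([0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3])     # comparison durations (s)
    p_longer = np.array([0.05, 0.15, 0.35, 0.65, 0.85, 0.95, 0.99])  # proportion "longer" responses

    def psychometric(x, pse, sigma):
        return norm.cdf(x, loc=pse, scale=sigma)

    (pse, sigma), _ = curve_fit(psychometric, comparison_s, p_longer, p0=[1.0, 0.1])
    print(pse, sigma)   # compare pse with the 1 s standard assumed in this example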

Supported by JSPS KAKENHI-25119003.

Copyright 2014

Published under a Creative Commons Licence. A Pion publication