Perception

DOI: 10.1068/p2806ed

Guest editorial

Virtual psychophysics

I took up visual psychophysics in the mid sixties. Setting up a new experiment was considered a major undertaking that involved considerable expertise in general optics and kinematic design (Strong 1946). Often an experimental setup would fill the best part of a laboratory room, typically dominated by a huge metal table with pneumatic dampers in the legs. On this frame a heavy granite slab was placed. The more mass the better! This granite tombstone was the posh solution; one might get by with a thick metal plate. A wooden plank, the poor man's solution, was frowned upon, though. On this table, optical rails would be mounted on which the various parts moved on gliders. Assorted parts were such things as lenses, prisms, beamsplitters, mirrors, filters, shutters, mirror boxes, integrating spheres, and so forth. At the business end of the apparatus one mounted at least a head and/or chin rest, but even better a 'bite board'. I still shudder at the thought. The bite board might have been invented by the Inquisition. At the far end of the table one had one or more light sources. The sources I fancied most were ribbon tungsten lamps, actually secondary photometric standards, very stable and precisely calibrated. When more punch was desirable I threw in a two-kilowatt xenon arc discharge lamp. These were somewhat dangerous beasts, unstable and noisy, water-cooled, ozone-producing, but they featured a huge radiant output and a flat radiant power spectrum from the ultraviolet to the infrared. It might easily take half a year to a year to really get things going and have it all properly lined up and calibrated. Most people I knew proudly owned such setups and were understandably reluctant to adopt any novel experimental paradigm that would force them to start from scratch. One planned not just a single experiment, but much more likely a suite of experiments that could be run from the same platform for at least a few years. 'Quickies' were unheard of.
A thesis at my university would typically take five years, of which one or two were devoted to setting up shop, so to speak, and the remainder to reaping the harvest (see figure 1).

Figure 1. Lab table from the seventies. [From the thesis of one of our students, Lo Bour (Bour 1980).]

Aren't computers great? They have changed the scene completely. The overwhelming majority of psychophysical experiments nowadays involve putting subjects in front of computer screens. Even at a cursory glance, almost all vision labs look completely different from the old days. On closer scrutiny you find that students know next to nothing about optics—that is to say, practical optics. They don't know the front from the back of a lens. They also hardly know a screwdriver from a soldering iron. Twenty years ago (say) they had to know how to program in some low-level language, but nowadays they use mostly canned applications of various types. The big gain is that they can concentrate on the science instead of the technology; moreover, they can produce stimuli at the drop of a hat that would have been completely out of the scope of the old-day optical setups. For old-style visual psychophysics the CRT screen has its drawbacks though: generally mediocre uniformity of screen radiance, mediocre geometry due to both the curvature of the CRT face and the deflection driver electronics, bad linearity of response, limited dynamic range, and limited temporal resolution, that is to say, the frame rate at best. For some of these factors one has partial cures: one sets up lookup tables and so forth. Other inconveniences, like limited temporal resolution, simply have to be accepted. You end up with systems that are in many respects inferior to the traditional setups, but in some respects much more powerful and without a doubt far more flexible. There are many applications where the drawbacks hardly count, and then computers are indeed a great asset to the lab. Many people simply don't even consider the type of experiments that could not be done on a computer; such ideas appear to lie outside their mental framework. This certainly makes life easy for them, though perhaps less fun.
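The lookup-table 'partial cure' mentioned above amounts to inverting the screen's measured nonlinearity. A minimal sketch, assuming (purely for illustration) a power-law response with exponent 2.2; in practice one would measure the luminance of each drive level with a photometer and invert the measured curve instead:

```python
import numpy as np

# Assumption for illustration only: the screen follows a power-law
# response, luminance proportional to (drive/255) ** GAMMA.
GAMMA = 2.2

def build_lut(gamma=GAMMA, levels=256):
    """8-bit lookup table: index = desired relative luminance step,
    entry = drive value that (under the model) produces it."""
    desired = np.arange(levels) / (levels - 1)            # 0 ... 1
    drive = np.round(desired ** (1.0 / gamma) * (levels - 1))
    return drive.astype(np.uint8)

lut = build_lut()
# Half luminance needs a drive value well above half scale:
print(lut[128])
```

Loading such a table into the display hardware linearises the mapping from requested to displayed luminance, which is the 'cure'; it does nothing, of course, for the geometry, uniformity, or frame-rate limitations.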

The new methods have opened the way to completely novel investigations. I mean studies where the stimuli are, at first blush, remarkably like glimpses of the real world instead of variously prepared flashes emerging from the dark as if by magic. In the old days, one would either have to go to the real world for such stimuli or use photographic material. In either case, it would have been very hard, often prohibitively so, to put stimulus parameters under experimental control or to implement 'natural' modes of observer interaction with the scene. The computer has made this all not just possible, but actually easy. This is indeed a great asset that has opened up the way to entirely novel psychophysical methods. I am acutely sensitive to this, and indeed have been ready to develop and exploit several such novel methods myself. I am especially excited about the possibilities, because one now has a powerful handle on the study of vision in circumstances that are much closer to real-life situations than ever before. I consider this enormously important, as I have always mistrusted the value of results obtained in cases of either extreme stimulus reduction (faint points in the dark, hardly supraliminal contrast patterns, ...), extreme restriction of viewing conditions (head fixed by bite board, strict fixation, small field of view, flashed presentations, ...), or extreme response reduction (push either the right or the left button, ...) for the understanding of 'generic vision'. The reduced paradigms seem more suited to address the physiology of vision. For me 'generic vision' implies free head and eye movements, large fields, lots to look at in general, some reasonable—perhaps even interesting—task to do, and the time to do it. So far so good.

Now for the critical remarks: Why do I speak—indeed in a derogatory fashion—of 'virtual psychophysics'? The reason is that, though many of the computer graphics used in psychophysics nowadays look great, they are actually quite different from, say, a straight photograph of a real scene. There may exist scenes that might have yielded the stimulus, in the sense of not being forbidden by any known law of physics. In fact, there typically exist infinitely many such scenes, most of them really weird and unlikely to be intended by the investigator. The pattern of luminous dots on the CRT itself is one obvious example, of the 'tame' variety, although typically not intended. There exists no theory, from the machine-vision literature, say, that would allow one actually to construct all these possible 'causes' of the proximal stimulus, except for some limited domains (Koenderink and van Doorn 1997; Kriegman and Belhumeur 1998). Hopefully, the one intended by the investigator is a member of that set. It is often impossible to be sure of that, though. When the observer's response is in partial agreement with this intention, one is often prepared to pronounce the response 'veridical'. Notice that responses that fail to be veridical in this narrowest sense may very well be 'correct' in the sense that they indicate a perfectly possible distal cause of the proximal stimulus. But, since most of these infinitely many correct responses are so weird that they would never occur to the observer, chances are that the response will at least not be too far removed from the fiducial 'veridical' one. This is probably the reason that so many of my colleagues wonder why I am so preoccupied with the 'veridicality' issue, which appears to them a mere triviality. Why bother?

All this is well and good as long as the radiance distribution on the CRT actually reflects the investigator's intention. This is fortunately frequently the case, especially when the stimuli are simple ones, and then I heartily agree that computers are a great tool to use. [But see the nice editorial by Hurlbert (1998)!] My issue is with the cases in which this fails to be so and—even worse—in which this failure is neither noticed by the investigator, nor is part of the description of the experiment in the eventual scientific report. Regrettably, there are good reasons to believe that this happens far more frequently than is usually assumed. People seem to have almost infinite trust in anything as advanced as a computer-graphics pipeline (Foley et al 1990). Most of my colleagues don't even give it a thought and dismiss me as a grumpy old man when I question this. The cases in point are those where the 3-D graphics rendering pipeline does not implement the actual laws of optics, but only approximate versions of them that allow the hardware designers and software developers to 'cut corners' and thus achieve impressive speeds at reasonable cost. In a nutshell:

To compute the wrong answer really fast.

That this is more the rule than the exception is not usually recognised outside parts of the computer-graphics community. In rather obvious cases, such oddities are fortunately often noticed and mentioned in the reports. For instance, in many systems opaque objects fail to cast shadows. In the costlier systems they may do so on planar surfaces, though often not on arbitrary ones. Since cast shadows often cause the highest contrasts in a scene and are indeed the most obvious features of distant objects—for instance, on a distant tree you easily see the shadows of branches on the trunk when the branches themselves are no longer visible—this makes quite a difference (Ruskin, no date, 1st edition 1873): see figures 2 and 3. Interestingly, observers often miss the fact. In paintings, shadows are often left out too, and nobody complains (Schöne 1954). More importantly, investigators often fail to mention it. The latter omission is more serious, of course, since it should really be an important part of the description of the stimulus. That the observers don't notice is hardly a valid reason to assume that their perceptions are not influenced by the fact. I am sure that one often forgets to mention such important facts because one has grown accustomed to the 'virtual' world of computer graphics: it tends to look 'real', especially when any comparison with reality is avoided.

Other cases of 'cut corners' are far more subtle and quite frequently go unrecognised. In fact, they almost always go unnoticed. In many cases where I pointed out such facts to authors as an occasional reviewer of one of their papers, they failed to grasp the point and often were rather annoyed with the—in their eyes—nitpicking on irrelevant details of their work. Yet the effects need by no means be marginal: they can be qualitatively important and quantitatively huge.

Figure 2. I took this photograph in Central Park, New York, in April this year. I was thinking of John Ruskin (no date, 1st edition 1873, page 187): "shadows are ... the most conspicuous thing in a landscape".

Figure 3. I took this photograph in Central Park, New York, in April this year. I was planning a direct illustration of John Ruskin's (no date, 1st edition 1873, page 186) observations: "Go out some bright sunny day in winter, and look for a tree with a broad trunk ... . You will find that the boughs between you and the trunk ... are very indistinct ... but the shadows which they cast upon the trunk, you will find clear, dark, and distinct ... . And if you retire backwards, you will come to a point where you cannot see the intervening boughs at all ... but can still see their shadows perfectly plain."

Here is one example that occurs commonly. Most graphics systems don't let you specify an extended light source such as the overcast sky. This is a nuisance, because a collimated beam (typically called a 'point source', a misnomer) irradiates roughly only half of an object, leaving the other half in 'body shadow'. The usual way to ameliorate this situation (Foley et al 1990) is to add a certain amount of 'ambient illumination', by which is meant a Ganzfeld. Not a 'real' Ganzfeld, mind you, but a 'virtual' one that yields constant irradiance on any surface element, even when part of the source is actually screened off by other parts of the object—an embarrassment by itself (see figure 4). Now the 'point source' gives rise to a surface irradiance that depends on the local surface attitude via 'Lambert's cosine law' (Foley et al 1990). In the body-shadow region, the cosine and thus the irradiance becomes negative, because the radiation miraculously 'hits the surface from the inside' (see figure 5). This is purely formal nonsense, of course. But the graphics pipeline cheerfully accepts this oddity as a description of the actual physics. It is considered not to be a problem because the graphics pipeline clips negative radiance prior to driving the CRT's RGB guns anyway. When you have a 'point source with ambient illumination' (going by the menu items of a generic graphics program), the negativity of the surface attitude effect is conveniently compensated by the ambient term, which is a positive additive constant. Thus one obtains nice graphics indeed (Foley et al 1990): even in the former body-shadow area you get relief due to the surface attitude effect caused by the point source. However, notice that in real life the point source would not be able to irradiate that part of the surface at all, let alone cause relief through shading! The standard method is indeed utterly nonrealistic.
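The 'point source with ambient illumination' recipe takes only a few lines to state. This is a minimal sketch under my own naming and constants, not the code of any actual pipeline; it shows how the unclamped Lambert cosine, lifted above zero by the ambient constant, still produces attitude-dependent 'relief' in the body shadow, where a real collimated source could produce none:

```python
import numpy as np

def pipeline_shade(normal, light_dir, ambient=0.4, diffuse=0.6):
    """'Point source plus ambient' as a corner-cutting pipeline computes
    it: the Lambert cosine is left unclamped, so elements facing away
    from the source receive 'negative' direct illuminance, compensated
    by the ambient term; only the final value is clipped before driving
    the RGB guns."""
    cos_theta = np.dot(normal, light_dir)              # may be negative
    return float(np.clip(ambient + diffuse * cos_theta, 0.0, 1.0))

def physical_shade(normal, light_dir, ambient=0.4, diffuse=0.6):
    """Physically sensible variant: no direct light reaches the body
    shadow, so the cosine is clamped at zero there."""
    cos_theta = max(np.dot(normal, light_dir), 0.0)
    return ambient + diffuse * cos_theta

def sphere_normal(theta_deg):
    """Surface normal on a sphere, theta degrees away from the source."""
    t = np.radians(theta_deg)
    return np.array([0.0, np.sin(t), np.cos(t)])

light = np.array([0.0, 0.0, 1.0])   # collimated beam from above

# Two points in the body shadow (more than 90 degrees from the source):
a, b = sphere_normal(100.0), sphere_normal(120.0)

# The pipeline shows relief (different values) where no direct light falls,
print(pipeline_shade(a, light), pipeline_shade(b, light))
# whereas physically both points receive only the ambient term.
print(physical_shade(a, light), physical_shade(b, light))
```

The spurious shading gradient in the body shadow is exactly the 'relief' discussed above; it exists only because the negative cosine is carried along until the final clip.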
One can show that this standard recipe mimics an extended source—which would have been the correct solution from the start—in the singular case that the object happens to be convex, say a sphere (Moon and Spencer 1981). Indeed, the technique was probably 'invented' through an erroneous interpretation of the formulas that apply to the photometry of this case. However, the method is applied willy-nilly to any object, be it convex or not. Even for a sphere you obtain the correct result only for the global surface. Any texture due to surface irregularities is rendered in an utterly unrealistic way. Popular techniques like 'texture mapping' or 'bump mapping' yield results that may look good on a cursory view, but can be shown to be completely off, both qualitatively and quantitatively (Koenderink and van Doorn 1996). It is easy enough to train yourself to see such glaring inconsistencies once you know what to look for. Indeed, the fact that each year's new graphics looks 'even better' than last year's should make one wary. Here I mean computer graphics, not trucage: the old 'Star Wars' movies don't look particularly primitive compared to 'Jurassic Park', but that is only because there is not so much actual computer graphics in them.

Figure 4. In a real Ganzfeld, point A in the concavity receives only half of the illuminance of point B on the protrusion. The reason is simply that point A can see only part of the (very extended) source. In computer-graphics pipelines this is no problem though: the 'virtual Ganzfeld' has the virtue that it shines right through opaque objects. So points A and B receive the same virtual illuminance, thus correcting reality.

Figure 5. An opaque spherical object is illuminated by a collimated beam hitting it vertically from above. At point A the illuminance is less than at the top point, by a factor equal to the cosine of the angle subtended by the surface normal and the direction towards the source. In real life, point B would be in the body shadow. In virtual life it receives a share of negative illuminance though (the cosine turns out negative). This is due to light that travels straight through the opaque object. Again, virtual reality has the edge over the reality we have to live with.
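The point of figure 4, that a real Ganzfeld respects occlusion while the 'virtual' one does not, is easy to check numerically. Here is a minimal sketch of my own construction (not any pipeline's code): a Monte Carlo estimate of the cosine-weighted illuminance from a uniform sky, for a point that sees the whole hemisphere and for one whose concavity wall blocks half of it.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
L = 1.0                              # sky radiance (arbitrary units)

# Uniform directions on the hemisphere above a surface element:
# for uniform solid-angle sampling, cos(theta) is uniform on [0, 1).
cos_t = rng.random(N)
phi = rng.random(N) * 2.0 * np.pi

# Illuminance E = integral of L cos(theta) over the visible sky;
# Monte Carlo estimate: mean integrand times hemisphere solid angle 2*pi.
full_sky = 2.0 * np.pi * np.mean(L * cos_t)         # point B: sees all

# Point A sits in a concavity whose wall blocks half the sky
# (all directions with azimuth phi < pi, say):
visible = phi >= np.pi
in_concavity = 2.0 * np.pi * np.mean(np.where(visible, L * cos_t, 0.0))

print(full_sky / np.pi)              # ~1: full sky gives E = pi * L
print(in_concavity / full_sky)       # ~0.5: half the sky, half the light

# The pipeline's 'virtual Ganzfeld' would instead assign both points the
# same constant ambient term, ignoring the occlusion altogether.
```

The factor of one half is exactly the A-versus-B difference of figure 4, which the constant ambient term of the graphics pipeline throws away.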

It would be fun—for me at least—to go into more technical detail: for instance, the handling of surface-scattering properties would make an attractive topic. But this editorial is clearly not the place to enter this chamber of horrors. The take-home message I want to convey here is that the—in itself understandable—trend to do psychophysics in a virtual rather than the real world is likely to lead to the lamentable result that much of the work published today will be discarded offhand in ten or twenty years' time, or at least should be. It should be because it applies to the perception of virtual worlds as they incidentally existed around the turn of the second millennium. The worry is that this will not be immediately obvious from the papers themselves, because present authors take familiarity with their virtual world pretty much for granted. This will make it very hard, not to say near impossible, to judge the relevance of a particular piece of work, say, twenty years hence. One would have to be a historian of science in order to stand a chance of reproducing the actual stimuli! My colleagues of the future might well do worse than to discard work from the present period pretty much on the basis of their understanding of history, rather than waste their time trying to figure out its intrinsic relevance and sift a few pearls from the dung. A sad thought.

Jan J Koenderink

References

Bour L J, 1980 Investigations Relating to the Accommodation Response of the Human Eye thesis, Universiteit Utrecht, Utrecht, The Netherlands
Foley J D, van Dam A, Feiner S K, Hughes J F, 1990 Computer Graphics: Principles and Practice 2nd edition (Reading, MA: Addison-Wesley)
Hurlbert A, 1998 "Illusions and reality-checking on the small screen" Perception 27 633-636
Koenderink J J, Doorn A J van, 1996 "Illuminance texture due to surface mesostructure" Journal of the Optical Society of America A 13 452-463
Koenderink J J, Doorn A J van, 1997 "The generic bilinear calibration-estimation problem" International Journal of Computer Vision 23 217-234
Kriegman D J, Belhumeur P N, 1998 "What shadows reveal about object structure", in Computer Vision, ECCV'98 Springer Lecture Notes in Computer Science 1407, Eds H Burkhardt, B Neumann (Berlin: Springer) pp 399-414
Moon P, Spencer D E, 1981 The Photic Field (Cambridge, MA: MIT Press)
Ruskin J, no date, 1st edition 1873 Modern Painters volume 1 (London: Routledge)
Schöne W, 1954 Über das Licht in der Malerei (Berlin: Gebr. Mann)
Strong J, 1946 Modern Physical Laboratory Practice 12th edition (London: Blackie & Sons)

© 1999 a Pion publication