A subset of these readings is cited in the online lecture notes, and a subset of those will be handed out in class. In the list below, the "Comments:" fields were added informally by Steve or Marc and are not guaranteed to be correct. If you see an error, let us know. The "File: directory/filename" fields refer to /u/levoy/downloaded/directory/filename; you must be logged on to one of the graphics lab machines to access these files. These files have typically been downloaded from the web, so they should be easy to find using Google.
Adams, A., The Camera, Little, Brown, and Co., 1976.
London, B., Upton, J., Photography, fifth edition, HarperCollins, 1994.
Hedgecoe, J., The Photographer's Handbook, third edition, Alfred A. Knopf, 1993.
Frost, L., The Complete Guide to Night and Low-Light Photography, Watson-Guptill, 1999.
Peterson, L., Learning to See Creatively, Watson-Guptill, 1988.
Professional Photographic Illustration, Antonio LoSapio, ed., Kodak, 1994.
Hecht, E., Optics, second edition, Addison-Wesley, 1987.
Kingslake, R., Optics in Photography, SPIE Press, 1992.
Kingslake, R., Optical System Design, Academic Press, 1983.
Kingslake, R., A History of the Photographic Lens, Academic Press, 1989.
Smith, W. J., Modern Optical Engineering, McGraw-Hill, 2000.
Goldberg, N., Camera technology: the dark side of the lens, Academic Press, 1992.
Kolb, C., Mitchell, D., Hanrahan, P., A Realistic Camera Model for Computer Graphics, Proc. Siggraph '95.
Levoy, M., Hanrahan, P., Light field rendering, Proc. Siggraph '96. URL: http://graphics.stanford.edu/papers/light/
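The core operation in light field rendering is a 4D lookup: each viewing ray is intersected with the two parameterization planes to get (u,v,s,t) coordinates, and radiance is quadrilinearly interpolated from the 16 nearest samples. A minimal sketch of that interpolation step (array layout and function name are my own, not from the paper):

```python
import numpy as np

def sample_light_field(L, u, v, s, t):
    """Quadrilinearly interpolate radiance L(u,v,s,t) from a discrete
    4D light field array (two-plane parameterization)."""
    coords = [u, v, s, t]
    lo = [int(np.floor(c)) for c in coords]
    frac = [c - l for c, l in zip(coords, lo)]
    result = 0.0
    # Sum over the 16 corners of the 4D cell, weighting each corner
    # bilinearly per axis (linear weight in each of the 4 dimensions).
    for corner in range(16):
        w = 1.0
        idx = []
        for axis in range(4):
            bit = (corner >> axis) & 1
            i = min(lo[axis] + bit, L.shape[axis] - 1)  # clamp at the edge
            idx.append(i)
            w *= frac[axis] if bit else (1.0 - frac[axis])
        result += w * L[tuple(idx)]
    return result
```

Because the weight is multilinear, this lookup is exact for any light field that is itself multilinear in (u,v,s,t), which makes it easy to sanity-check.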
Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M.F., The Lumigraph, Proc. Siggraph '96. .file: image-based-rendering/gortler-lumigraph-sig96.pdf
Gershun, A., The Light Field, Moscow, 1936. Translated by P. Moon and G. Timoshenko, Journal of Mathematics and Physics, Vol. XVIII, MIT, 1939, pp. 51-151.
Moon, P., Spencer, D.E., The Photic Field, MIT Press, 1981.
Adelson, E.H., Bergen, J.R., The plenoptic function and the elements of early vision, Computational Models of Visual Processing, M. Landy and J.A. Movshon, eds., MIT Press, Cambridge, 1991. .file: image-based-rendering/adelson-plenoptic-cmvp91.pdf
Langer, M.S., Zucker, S.W., What is a light source? Proc. CVPR '97. .comment: taxonomy of various light sources using 4D light field rep .file: image-based-rendering/langer-light-source-cvpr97.pdf
Gu, X., Gortler, S.J., Cohen, M.F., Polyhedral geometry and the two-plane parameterization, Proc. Eurographics Rendering Workshop '97. .file: image-based-rendering/gortler-polyhedral-rend97.pdf .comment: Some observations on the geometric and algebraic structure of linear subspaces of 4D space, i.e., slices of light fields: e.g., what is the space of all lines passing through a point, line segment, or triangle? No applications given, but presages Cohen's video cube and Zomet's cross-slit panoramas.
Sloan, P.-P., Cohen, M.F., Gortler, S.J., Time Critical Lumigraph Rendering, Proc. 1997 Symposium on Interactive 3D Graphics. .file: image-based-rendering/sloan-timecritical-i3d97.pdf
Camahort, E., Lerios, A., Fussell, D., Uniformly sampled light fields, Proc. Eurographics Rendering Workshop '98. .comment: 2 parameterizations: sphere x sphere, and direction x position .file: image-based-rendering/camahort-uniformLF-rend98.pdf
Camahort, E., Fussell, D., A Geometric Study of Light Field Representations, Technical Report TR99-35, Department of Computer Sciences, The University of Texas at Austin, 1999. .comment: analysis of errors in various light field configurations .file: image-based-rendering/camahort-lightfield-tr99-35.pdf
Camahort, E., 4D Light-Field Modeling and Rendering, PhD dissertation, University of Texas at Austin, 2001. .comment: summary of alternative parameterizations of light fields .file: image-based-rendering/camahort-dissertation.pdf
Chai, J.-X., Tong, X., Chan, S.-C., Shum, H.-Y., Plenoptic Sampling, Proc. Siggraph 2000. .comment: Z-disparity versus light field sampling rate, Fourier analysis .file: image-based-rendering/shum-plenoptic-sig00.pdf
Lin, Z., Shum, H.-Y., On the number of samples needed in light field rendering with constant-depth assumption, Proc. CVPR 2000. .comment: more disparity analysis, ideal st-plane = harmonic mean depth .file: image-based-rendering/shum-light-field-sampling-cvpr2000.pdf
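The "ideal st-plane = harmonic mean depth" result summarized in the comment above can be written out explicitly: if scene depths range over [z_min, z_max], the best constant-depth focal plane sits at the harmonic mean of the extremes:

```latex
% Optimal constant-depth plane for light field rendering
% (the result summarized in the comment above):
\frac{1}{z_{opt}} \;=\; \frac{1}{2}\left(\frac{1}{z_{min}} + \frac{1}{z_{max}}\right)
% i.e., z_opt is the harmonic mean of z_min and z_max, which balances
% the worst-case disparity error on the near and far sides of the plane.
```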
Kang, S.B., Seitz, S.M., Sloan, P.-P., Visual tunnel analysis for visibility prediction and camera planning, Proc. CVPR 2000. .comment: view planning for light field acquisition, in particular, .comment: defines a visual tunnel P(x,y,z,theta,w) and derived functions: .comment: for flatland, f(x,y) = density of rays available at that point, .comment: and viewing volume within which virtual cameras with specific .comment: orientations and FOV could be populated from the light field .file: image-based-rendering/kang-visual-tunnel-cvpr2000.pdf
Halle, M., Multiple viewpoint rendering, Proc. Siggraph '98. .file: image-based-rendering/halle-multiview-rendering-sig98.pdf
Isaksen, A., McMillan, L., Gortler, S.J., Dynamically reparameterized light fields, Proc. Siggraph 2000. .file: image-based-rendering/isaksen-reparameterized-sig00.pdf
Buehler, C., Bosse, M., McMillan, L., Gortler, S., Cohen, M., Unstructured Lumigraph rendering, Proc. Siggraph 2001. .file: image-based-rendering/buehler-unstructured-sig01.pdf
Lin, Z., Wong, T.-T., Shum, H.-Y., Relighting with the Reflected Irradiance Field: Representation, Sampling and Reconstruction, Proc. CVPR 2001. .comment: fixed viewpoint, point light moves across a plane, so still 4D .comment: object depth and surface BRDF -> est. of radiometric error .comment: ignores interreflection, see also Koudelka et al., CVPR 2001
Miller, G., Volumetric Hyper-Reality: A Computer Graphics Holy Grail for the 21st Century?, Proc. Graphics Interface '95, Canadian Information Processing Society, 1995, pp. 56-64.
Teller, S., Bala, K., Dorsey, J., Conservative radiance interpolants for ray tracing, Proc. Eurographics Rendering Workshop '96. .comment: aliasing in ray tracing due to gaps, blockers, funnels, peaks .comment: radiance interpolation within a nodes of a hierarchy of 4D .comment: ray spaces defined by two planes, similar to light fields, .comment: for the radiance leaving each surface in a scene, .comment: also uses lazy evaluation as observer moves
Miller, G., Rubin, S., Ponceleon, D., Lazy decompression of surface light fields for precomputed global illumination, Proc. Eurographics Rendering Workshop '98. .comment: per-surface light fields, DCT-based compression
Wood, D.N., Azuma, D.I., Aldinger, K., Curless, B., Duchamp, T., Salesin, D.H., Stuetzle, W., Surface Light Fields for 3D Photography, Proc. Siggraph 2000. .file: image-based-rendering/wood-surfacelfs-sig00.pdf
Wilburn, B., Smulski, M., Lee, K., Horowitz, M.A., The Light Field Video Camera, Proc. SPIE Electronic Imaging 2002. .file: sensing-hardware/wilburn-lfcamera-spie02.pdf
The MIT light field camera array URL: http://graphics.lcs.mit.edu/~jcyang/CameraArray/cameraarray.htm
The CMU Virtualized Reality dome URL: www-2.cs.cmu.edu/afs/cs/project/VirtualizedR/www/VirtualizedR.html
Kanade, T., Saito, H., Vedula, S., The 3D Room: Digitizing Time-Varying 3D Events by Synchronized Multiple Video Streams, Technical Report CMU-RI-TR-98-34, Carnegie Mellon University, 1998. .comment: 49 video cameras streaming raw video to 17 PCs .file: image-based-rendering/kanade-3droom-tr98.pdf
Naemura, T., Yoshida, T., Harashima, H., 3-D computer graphics based on integral photography, Optics Express, Vol. 8, No. 2, February 12, 2001. .comment: lens array -> HDTV signal -> traditional light field rendering .comment: low-res (54 x 63 pixels, 20 x 20 angles), no depth assumption .comment: or constant Z depth assumption .comment: see also Ooi, ICIP 2001 .file: stereoscopic-displays/naemura-lens-array-optics01.pdf
Ooi, R., Hamamoto, T., Naemura, T., Aizawa, K., Pixel Independent Random Access Image Sensor for Real Time Image-Based Rendering System, Proc. ICIP 2001. .comment: CMOS sensor with selectable readout region .comment: see also Naemura, Optics Express, 2001 .file: image-based-rendering/ooi-array-icip02.pdf
Schirmacher, H., Ming, L., Seidel, H.-P., On-the-fly processing of generalized Lumigraphs, Proc. Eurographics 2001. .comment: "Lumishelf" of 6 firewire cameras (3 x 2 array) .file: image-based-rendering/schirmacher-lumigraphs-eg01.pdf
Adelson, E.H., Wang, J.Y.A., Single Lens Stereo with a Plenoptic Camera, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 14, No. 2, 1992, pp. 99-106. .comment: lenticular screen over sensor, one main lens, to extract depth .file: sensing-hardware/adelson-plenoptic-pami92.pdf
Taylor, D., Virtual Camera Movement: The Way of the Future?, American Cinematographer, Vol. 77, No. 9, September, 1996, pp. 93-100.
Chen, S.E., QuickTime VR - An Image-Based Approach to Virtual Environment Navigation, Proc. Siggraph '95.
Szeliski, R., Shum, H.-Y., Creating Full View Panoramic Image Mosaics and Environment Maps, Proc. Siggraph '97, pp. 251-258. .file: image-based-rendering/szeliski-mosaics-tr97.pdf
Shum, H., He, L.-W., Rendering with Concentric Mosaics, Proc. Siggraph '99. .file: image-based-rendering/shum-concentric-sig99.pdf
Quan, L., Lu, L., Shum, H., Lhuillier, M., Concentric Mosaic(s), Planar Motion and 1D Cameras, Proc. ICCV 2001. .comment: decomposes CM into 1D perspective x 1D affine cameras, .comment: implications for calibration and shape-from-motion
Debevec, P., Hawkins, T., Tchou, C., Duiker, H.-P., Sarokin, W., Sagar, M., Acquiring the Reflectance Field of a Human Face, Proc. Siggraph 2000. .file: image-based-rendering/debevec-reflectance-sig00.pdf
Herman, G.T., Image Reconstruction from Projections, Academic Press, New York, 1980.
Gering, D.T., Wells III, W.M., Object Modeling using Tomography and Photography, Proc. CVPR '99. .comment: reconstruction from backprojection of light fields .file: shape-from-light-fields/gering-backprojection-cvpr99.pdf
Seitz, S.M., Dyer, C.R., Photorealistic Scene Reconstruction by Voxel Coloring, Int. J. Computer Vision, Vol. 35, No. 2, 1999, pp. 151-173. .comment: the first paper on voxel algorithms for shape from many cameras .file: shape-from-light-fields/seitz-voxel-coloring-ijcv99.pdf
De Bonet, J., Viola, P., Roxels: responsibility weighted 3D volume reconstruction, Proc. ICCV '99. .comment: ART-like iterative backprojection/forward projection .file: shape-from-light-fields/debonet-roxels-iccv99.pdf
Broadhurst, A., Drummond, T., Cipolla, R., A Probabilistic Framework for Space Carving, Proc. ICCV 2001. .comment: similar to De Bonet and Viola, ICCV '99
Dachille, F. IX, Mueller, K., Kaufman, A., Volumetric Backprojection, Proc. 2000 Symposium on Volume Visualization and Graphics, ACM, October, 2000. .comment: for global illumination, but should suffer from .comment: self-illumination like volumetric shadowing algorithms, and for .comment: shape-from-light-fields using filtered backprojection and ART
Marks, D.L., Stack, R.A., Brady, D.J., Munson Jr., D.C., Brady, R.B., Visible Cone-Beam Tomography With a Lensless Interferometric Camera, Science, Vol. 284, No. 5423, June 25, 1999, pp. 2164-2166. .comment: lensless (infinite depth-of-field) interferometric images .comment: of opaque object followed by tomographic reconstruction, .comment: see also Fetterman et al., (Optics Express 7(5)) .file: shape-from-light-fields/marks-visible-tomography-science99.pdf
Fetterman, M.R., Tan, E., Ying, L., Stack, R.A., Marks, D., Feller, S., Cull, E., Sullivan, J., Munson, D.C.Jr., Thoroddsen, S.T., Brady, D.J., Tomographic imaging of foam, Optics Express, Vol. 7, No. 5, August 28, 2000. .comment: tomographic reconstruction of pins and foam from light images .comment: see also Marks, Stack, et al (Science 284(5423)) .file: shape-from-light-fields/fetterman-tomography-foam-optics00.pdf
See also the section on "optical tomography" in Berthold Horn's Spring 2002 course (at Berkeley) on computational imaging.
Matusik, W., Buehler, C., Raskar, R., Gortler, S.J., McMillan, L., Image-Based Visual Hulls, Proc. Siggraph 2000. .file: image-based-rendering/buehler-visualhulls-sig00.pdf
Faugeras, O., Keriven, R., Complete Dense Stereovision using Level Set Methods, Proc. ECCV '98. .file: shape-from-X/faugeras-levelset-eccv98.pdf .comment: Determining shape from a circle of cameras (extendable to a surface of cameras) by evolving a surface toward points in 3-space that maximize a photo-consistency metric, continually updating this volumetric function to account for occlusion by the evolving surface. The metric is cross-correlation of camera intensities, which differs from that used in Seitz's voxel coloring algorithm. Seems to work, but no running times are given, and there is no evaluation of convergence, robustness to noise, lack of texture, changes in number of cameras, etc.
Chen, S.E., Williams, L., View Interpolation for Image Synthesis, Proc. Siggraph '93. .comment: the first view interpolation paper, assumes Z-information .file: image-based-rendering/chen-viewinterp-sig93.pdf
McMillan, L., Bishop, G., Plenoptic Modeling: An Image-Based Rendering System, Proc. Siggraph '95 .file: image-based-rendering/mcmillan-plenoptic-sig95.pdf
Sillion, F., Drettakis, G., Bodelet, B., Efficient impostor manipulation for real-time visualization of urban scenery, Computer Graphics Forum (Proc. Eurographics '97), Vol. 16, No. 3, 1997, pp. 207-218. .comment: textured range images (=3D imposters) for bkg, geometry for fgd .file: image-based-rendering/sillion-imposter-eg97.pdf
Mark, W.R., McMillan, L., Bishop, G., Post-Rendering 3D Warping. Proc. 1997 Symposium on Interactive 3D Graphics. .comment: view interpolation (with Z) instead of rendering every frame .file: image-based-rendering/mark-warping-i3d97.pdf & -plates.pdf
Shade, J., Gortler, S.J., He, L., Szeliski, R., Layered Depth Images, Proc. Siggraph '98. .file: image-based-rendering/shade-layereddepth-sig98.pdf
Debevec, P., Yu, Y., Borshukov, G., Efficient view-dependent image-based rendering with projective texture-mapping, Proc. Eurographics Rendering Workshop '98. .comment: compute visibility map from each camera, split polygons, .comment: view-dependent texture mapping by hardware-projected textures
Pulli, K., Surface Reconstruction and Display from Range and Color Data, PhD dissertation, University of Washington, 1997. .file: 3D-shape-acquisition/pulli-dissertation.pdf .comment: chapters on global registration, image-guided registration, .comment: viewpoint-dependent textures
Aliaga, D.G., Lastra, A.A., Automatic Image Placement to Provide a Guaranteed Frame Rate, Proc. Siggraph '99. .comment: image caching, with guaranteed frame rate .file: image-based-rendering/aliaga-framerate.sig99.pdf
Magnor, M., Geometry-Adaptive Multi-View Coding Techniques for Image-based Rendering, PhD dissertation, University of Erlangen, 2000. .file: image-based-rendering/magnor-dissertation.pdf
Magnor, M., Girod, B., Data Compression for Light-Field Rendering, IEEE Transactions on circuits and systems for video technology, Vol. 10, No. 3, April, 2000. .comment: disparity-compensated compression and interpolation .file: image-based-rendering/magnor-lfcomp-tcsvt00.pdf
Magnor, M., Girod, B., Model-based coding of multi-viewpoint imagery, Proc. VCIP 2000. .comment: surface-based light fields using vision-based 3D voxel model Folder: image-based rendering
Tong, X., Gray, R.M., Coding of multi-view images for immersive viewing, Proc. ICASSP 2000.
Maciel, P.W.C., Shirley, P., Visual navigation of large environments using textured clusters, Proc. 1995 Symposium on Interactive 3D Graphics, ACM, 1995, pp. 95-102. .comment: general LOD framework with (static) sprites .file: image-based-rendering/maciel-navigation-i3d95.pdf
Aliaga, D.G., Visualization of complex models using dynamic texture-based simplification, Proc. Visualization '96, IEEE Computer Society Press, October, 1996, pp. 101-106. .comment: texture as surrogate for geometry in the usual way, but morph .comment: remaining geometry to match texture rather than warping texture .file: image-based-rendering/aliaga-caching-vis96.pdf
Shade, J., Lischinski, D., DeRose, T.D., Snyder, J., Salesin, D.H., Hierarchical image caching for accelerated walkthroughs of complex environments, Proc. Siggraph '96. .comment: render portions of scene, cache images, map onto planes .file: image-based-rendering/shade-caching-sig96.pdf
Schaufler, G., Sturzlinger, W., A three-dimensional image cache for virtual reality, Computer Graphics Forum (Proc. Eurographics '96), Vol. 15, No. 3, 1996, pp. 227-235. .comment: very similar to Shade et al., Siggraph '96 .file: image-based-rendering/schaufler-cache-eg96.pdf
Wilson, A., Mayer-Patel, K., Manocha, D., Spatially-encoded far-field representations for interactive walkthroughs, Proc. Multimedia 2001. .comment: scene divided into cells with far-field images on each wall, .comment: similar to Regan's concentric environment maps, .comment: n-D MPEG encoding used to capture inter-cell coherence, .comment: general survey of IBR, with many references .file: image-based-rendering/wilson-spatialvideo-mm01.pdf
Seitz, S., Dyer, C.R., View Morphing, Proc. Siggraph '96. .file: image-based-rendering/seitz-view-morphing-sig96.pdf
Debevec, P.E., Taylor, C.J., Malik, J., Modeling and Rendering Architecture from Photographs, Proc. Siggraph '96. .comment: also introduces view-dependent texture mapping .file: image-based-rendering/debevec-modeling-sig96.pdf
Debevec, P., Rendering Synthetic Objects Into Real Scenes: Bridging Traditional and Image-Based Graphics With Global Illumination and High Dynamic Range Photography, Proc. Siggraph '98.
Debevec, P., Yu, Y., Borshukov, G., Efficient view-dependent image-based rendering with projective texture-mapping, Proc. Eurographics Rendering Workshop '98. .comment: compute visibility map from each camera, split polygons, .comment: view-dependent texture mapping by hardware-projected textures
Raskar, R., Welch, G., Cutts, M., Lake, A., Stesin, L., Fuchs, H., The Office of the Future: A Unified Approach to Image-Based Modeling and Spatially Immersive Displays, Proc. Siggraph '98. .file: image-based-rendering/fuchs-office-future-sig98.pdf
Irani, M., Peleg, S., Improving Resolution by Image Registration, Graphical Models and Image Processing, Vol. 53, No. 3, May, 1991, pp. 231-239.
Mann, S., Picard, R., Virtual Bellows: Constructing High Quality Stills From Video Proc. IEEE Int. Conf. on Image Processing, 1994. .comment: superresolution via upsampling -> adding -> deblurring .file: image-based-rendering/mann-stillsfromvideo-cip94.pdf
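The pipeline in the comment (upsample each frame, accumulate on a fine grid, then deblur) reduces, for known subpixel shifts and no blur, to classic shift-and-add. A 1D toy sketch under those assumptions (names are mine; the deblurring stage is omitted):

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Superresolution by shift-and-add: place each low-res frame's
    samples at its known subpixel offset on a grid `factor` times finer,
    then average wherever samples coincide."""
    n = len(frames[0]) * factor
    acc = np.zeros(n)
    cnt = np.zeros(n)
    for frame, s in zip(frames, shifts):
        # Scatter this frame's samples onto the fine grid at offset s.
        acc[s::factor][:len(frame)] += np.asarray(frame, dtype=float)
        cnt[s::factor][:len(frame)] += 1
    cnt[cnt == 0] = 1        # avoid division by zero where no sample landed
    return acc / cnt
```

With shifts covering every subpixel phase and a noiseless signal, the high-resolution sequence is recovered exactly.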
Zomet, A., Peleg, S., Super-Resolution from Multiple Images having Arbitrary Mutual Motion, in S. Chaudhuri (ed.), Super-Resolution Imaging, Kluwer Academic, September 2001.
Finkelstein, A., Jacobs, C.E., Salesin, D.H., Multiresolution Video, Proc. Siggraph '96.
Heeger, D.J., Bergen, J.R., Pyramid-Based Texture Analysis/Synthesis, Proc. Siggraph '95. .comment: a good representative of statistical synthesis techniques
Efros, A., Leung, T., Texture synthesis by non-parametric sampling, Proc. ICCV '99. .comment: for each pixel, search for similar neighborhoods in sample, .comment: very slow, but the basis for Wei and Levoy and others .file: texture-synthesis/efros-texture-iccv99.pdf
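The per-pixel search described in the comment is easy to state in one dimension: to choose each new value, scan the sample for the contexts that best match the last k outputs and copy what follows one of them. A toy sketch (function and parameter names are mine; the real algorithm uses 2D neighborhoods and samples randomly among near-best matches):

```python
import random

def synthesize(sample, length, k=3, seed=0):
    """Grow a sequence value by value: for each new value, find positions
    in the sample whose preceding k values best match the current context
    (sum of squared differences), then copy the value that follows one of
    the best matches -- a 1D toy of Efros-Leung non-parametric sampling."""
    rng = random.Random(seed)
    out = list(sample[:k])              # seed with the start of the sample
    while len(out) < length:
        context = out[-k:]
        best, cands = None, []
        for i in range(len(sample) - k):
            d = sum((a - b) ** 2 for a, b in zip(sample[i:i+k], context))
            if best is None or d < best:
                best, cands = d, [sample[i + k]]
            elif d == best:
                cands.append(sample[i + k])
        out.append(rng.choice(cands))   # break ties randomly
    return out
```

On a periodic sample the synthesized sequence simply continues the period, which is the degenerate case of the texture-copying behavior the paper analyzes.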
Wei, L.-Y., Levoy, M., Fast texture synthesis using tree-structured vector quantization, Proc. Siggraph 2000. .comment: based on Efros and Leung
Praun, E., Finkelstein, A., Hoppe, H., Lapped textures, Proc. Siggraph 2000. .comment: manually specified patches laid down randomly on 2D manifold, .comment: works surprisingly well for some textures, extremely fast .file: texture-synthesis/praun-lapped-sig00.pdf
Efros, A.A., Freeman, W.T., Image quilting for texture synthesis and transfer, Proc. Siggraph 2001. .comment: Efros-Leung search + minimum-error cut between patches .file: texture-synthesis/efros-quilting-sig01.pdf
Hertzmann, A., Jacobs, C.E., Oliver, N., Curless, B., Salesin, D.H., Image Analogies, Proc. Siggraph '01. .file: texture-synthesis/hertmann-analogies-sig01.pdf
Sawhney, H.S., Guo, Y., Hanna, K., Kumar, R., Adkins, S., Zhou, S., Hybrid Stereo Camera: An IBR Approach for Synthesis of Very High Resolution Stereoscopic Image Sequences, Proc. Siggraph '01. .file: texture-synthesis/sawhney-stereo-sig01.pdf
Mann, S., Picard, R.W., On being 'undigital' with digital cameras: Extending Dynamic Range by Combining Differently Exposed Pictures, M.I.T. Media Laboratory Perceptual Computing Section Technical Report No. TR-323. Also in Proc. IS&T's 48th Annual Conference, May 7-11, 1995, pp. 422-428. Folder: image-based rendering
Debevec, P., Malik, J., Recovering High Dynamic Range Radiance Maps from Photographs, Proc. Siggraph '97. .file: sensing-hardware/debevec-highrange-sig97.pdf
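Once pixel values are linear in exposure, merging multiple exposures into a radiance map is a weighted average of Z/t, with a hat-shaped weight that distrusts under- and over-exposed pixels. A minimal sketch assuming an already-linear response (Debevec and Malik additionally recover the nonlinear response curve; all names here are mine):

```python
def radiance_map(exposures, times, z_min=0.05, z_max=0.95):
    """Merge differently exposed images (linear response assumed, values
    in [0,1]) into a relative radiance map: average Z/t per pixel, with a
    hat weight favoring mid-range values and zeroing clipped ones."""
    n = len(exposures[0])
    out = []
    for p in range(n):
        num = den = 0.0
        for img, t in zip(exposures, times):
            z = img[p]
            # Hat weight: largest for mid-range z, zero outside (z_min, z_max).
            w = min(z - z_min, z_max - z) if z_min < z < z_max else 0.0
            num += w * (z / t)
            den += w
        out.append(num / den if den > 0 else 0.0)
    return out
```

For noiseless linear data every exposure votes for the same radiance, so the weighted average recovers it exactly; the weighting matters when short exposures are noisy in the darks and long ones clip in the brights.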
Mitsunaga, T., Nayar, S., Radiometric self calibration, Proc. CVPR '99. .comment: creating high-dynamic range radiance maps from ratios of pixel .comment: values in multiple exposures *without* knowing exposure times, .comment: as is required in Debevec and Malik .file: sensing-hardware/mitsunaga-calibration-cvpr99.pdf
Nayar, S.K., Mitsunaga, T., High dynamic range imaging: spatially varying pixel exposures, Proc. CVPR 2000. .comment: collaboration with Sony on placing a mask in front of sensor .file: sensing-hardware/nayar-high-dyanmic-range-cvpr2000.pdf
Aggarwal, M., Ahuja, N., High dynamic range panoramic imaging, Proc. ICCV 2001. .comment: stepped grayscale mask placed over stepwise rotating sensor .comment: survey of high dynamic range imaging .comment: (see also Schechner and Nayar in these proceedings)
Cohen, J., Tchou, C., Hawkins, T., Debevec, P., Real-time High Dynamic Range Texture Mapping, Proc. Eurographics Rendering Workshop 2001. .file: image-based-rendering/debevec-highrange-rend01.pdf
DiCarlo, J., Wandell, B., Rendering High Dynamic Range Images, Proc. SPIE Electronic Imaging 2000, Vol. 3965, San Jose, CA, January 2000. .file: sensing-hardware/wandell-dynamic-range-spie00.pdf
Wandell, B., Catrysse, P., DiCarlo, J., Yang, D., El Gamal, A., Multiple Capture Single Image Architecture with a CMOS Sensor, Proc. Chiba Conference on Multispectral Imaging, Chiba, Japan, 1999. .file: sensing-hardware/wandell-MCSI-chiba99.pdf .comment: trading off spatial resolution and dynamic range
Tumblin, J., Rushmeier, H., Tone reproduction for realistic images, IEEE Computer Graphics and Applications, Vol. 13, No. 6, November, 1993, pp. 42-48. .comment: dealing with gamma correction, CRT contrast, ambient .comment: illumination, and computed luminance all at once, tone .comment: reproduction operator
Tumblin, J., Turk, G., LCIS: A Boundary Hierarchy For Detail-Preserving Contrast Reduction, Proc. Siggraph '99. .file: sensing-hardware/tumblin-highrange-sig99.pdf
Edgerton, H., Stopping Time, Abrams, 1987. .comment: his classic high-speed photographs
Ray, S.F., ed., High Speed Photography, Focal Press, 1997.
Nayar, S.K., Karmarkar, A., 360 x 360 mosaics, Proc. CVPR 2000. .comment: panning a camera around the equator, (1) using a parabolic .comment: mirror each frame sees (overlapping) longitudinal strips, .comment: which are then mosaiced together to make a 360 x 360, .comment: or (2) using a conical mirror each frame sees a single arc .comment: of longitude, with each latitude ray smeared into a line .comment: segment on the sensor, thereby permitting superresolution .file: sensing-hardware/nayar-360-360-cvpr2000.pdf
Schechner, Y., Nayar, S., Generalized Mosaicing, Proc. ICCV 2001. .comment: continuous grayscale mask placed over moving color camera, or .comment: continuous rainbow mask placed over moving B&W camera .comment: special technique for registering differently filtered images .comment: suggests varying defocus, suggests many variables at once, etc. .comment: (see also Aggarwal and Ahuja in these proceedings) .comment: later tried polarization and extended depth-of-field, see: .comment: http://www.cs.columbia.edu/CAVE/, demos, video demonstrations
Dowski, E.R., Johnson, G.E., Wavefront Coding: A modern method of achieving high performance and/or low cost imaging systems, Proc. SPIE, August, 1999. .comment: xy-separable sinusoidal (anamorphic) lens -> afocal image .comment: in which objects at different depths are equally misfocused -> .comment: digital processing -> extended depth-of-field, seems to work! .file: image-processing/dowski-wavefront-coding-spie99.pdf .comment: see also Dowski (OSA '95) and Bradburn (Applied Optics '97) .comment: see also http://www.cdm-optics.com, esp. .comment: http://www.cdm-optics.com/wave/pubs/papers/edf/paper.html
Dowski, E.R., An Information Theory Approach to Incoherent Information Processing Systems, Signal Recovery and Synthesis V, OSA Technical Digest Series, pp. 106-108, March, 1995. .comment: theory behind Dowski and Johnson (SPIE '99) .comment: see also Bradburn et al. (Applied Optics '97) .file: image-processing/dowski-infotheory-osa95.pdf
See also the section on "coded aperture imaging" in Berthold Horn's Spring 2002 course (at Berkeley) on computational imaging.
Ogden, J.M., Adelson, E.H., Bergen, J.R., Burt, P.J., Pyramid-Based Computer Graphics, RCA Engineer, Vol. 30, No. 5, Sept./Oct., 1985, pp. 4-15. Folder: texture rendering .comment: includes extended depth-of-field, .comment: cited by Haeberli's Graphica Obscura article on depth of field, .comment: see http://www.sgi.com/grafica/depth/
Yang, D.X.D., El Gamal, A., Fowler, B., Tian, H., A 640×512 CMOS Image Sensor with Ultra Wide Dynamic Range Floating-Point Pixel-Level ADC, Proc. IEEE International Solid-State Circuits Conference (ISSCC), 1999.
Chen, T., Catrysse, P., El Gamal, A., Wandell, B., How Small Should Pixel Size Be? Proc. SPIE Electronic Imaging 2000, Vol. 3965, San Jose, CA, January 2000. .file: sensing-hardware/gamal-pixel-size-spie00.pdf .comment: trading off spatial resolution, dynamic range, and SNR
El Gamal, A., Yang, D., Fowler, B., Pixel Level Processing - Why, What, and How? Proc. SPIE Electronic Imaging '99, Vol. 3650, January 1999. .file: sensing-hardware/gamal-pixel-processing-spie99.pdf .comment: CMOS imaging, analog versus digital interpixel processing
Lim, S.H., El Gamal, A., Integrating Image Capture and Processing -- Beyond Single Chip Digital Camera, Proc. SPIE Electronic Imaging 2001, Vol. 4306, San Jose, CA, January 2001. .file: sensing-hardware/gamal-motion-est-spie01.pdf .comment: motion estimation from high-speed imaging (10,000fps)
Liu, X.Q., El Gamal, A., Photocurrent Estimation from Multiple Non-destructive Samples in a CMOS Image Sensor, Proc. SPIE Electronic Imaging 2001, Vol. 4306, San Jose, CA, January 2001. .file: sensing-hardware/gamal-high-range-spie01.pdf .comment: improvement (for darks) on their high dynamic range sensor
Kleinfelder, S., Lim, S., Liu, X., El Gamal, A., A 10,000 Frames/s 0.18 µm CMOS Digital Pixel Sensor with Pixel-Level Memory, Proc. 2001 International Solid-State Circuits Conference. .comment: latest 10,000 frame/sec CMOS sensor .file: sensing-hardware/gamal-10kfps-isscc01.pdf
Srinivasan, S., Chellappa, R., Image sequence stabilization, mosaicking, and superresolution, in Handbook of Image and Video Processing (Chapter 3.13), Al Bovik, ed., Academic Press, 2000. .comment: survey of these subjects, including ideas for the future
Srinivasan, S., Chellappa, R., Image stabilization and mosaicking using the overlapped basis optical flow field, Proc. ICIP '97. .comment: paper underlying their chapter (3.13) in Al Bovik's book .file: image-processing/srinivasan-stabilization-icip97.pdf
Buehler, C., Bosse, M., McMillan, L., Non-Metric Image-Based Rendering for Video Stabilization, Proc. CVPR 2001.
Kemp, M., The Science of Art, Yale University Press, 1990.
Cole, A., Perspective, Dorling Kindersley, 1992.
Leonardo Da Vinci, Leonardo on Painting, translated by M. Kemp and M. Walker, Yale University Press, 1989.
Rademacher, P., Bishop, G., Multiple-Center-of-Projection Images, Proc. SIGGRAPH '98. .file: image-based-rendering/rademacher-mcop-sig98.pdf
Andrew Davidhazy's articles and examples of panoramic, strip, and peripheral (rollout) photographs URL: http://www.rit.edu/~andpph/
Zomet, A., Peleg, S., Arora, C., Rectified mosaicing: mosaics without the curl, Proc. CVPR 2000. .comment: if the camera is tilted, the resulting panorama is curved, .comment: solved by cutting imagery into strips and unkeystoning each one
Rousso, B., Peleg, S., Finci, I., Rav-Acha, A., Universal mosaicing using pipe projection, Proc. ICCV '98. .comment: if a camera moves or zooms toward a focus-of-expansion (FOE), .comment: for each frame, project one circle of pixels around the FOE .comment: onto the surface of a pipe extruded along the 3D camera motion, .comment: creates a single pipe-shaped mosaic for the image sequence, .comment: which can then be reprojected onto a plane for viewing, if .comment: camera moves and scene is not flat, result is multi-perspective
Peleg, S., Herman, J., Panoramic Mosaics by Manifold Projection, Proc. CVPR '97. .comment: swept video camera -> grab central column from each frame -> .comment: panoramic image, result is ultra-wide-angle planar projection .file: image-based-rendering/peleg-panoramic-cvpr97.pdf
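The pipeline in the comment (sweep a video camera, grab the central column of each frame, concatenate the columns) is a pushbroom-style mosaic; ignoring inter-frame alignment it is just a gather over frames. A toy sketch with list-of-lists images (names mine):

```python
def strip_panorama(frames):
    """Build a strip panorama from a horizontally swept video: take the
    central column of each frame and lay the columns side by side.
    Each frame is a 2D list indexed [row][col]; registration is omitted."""
    mid = len(frames[0][0]) // 2          # index of the central column
    rows = len(frames[0])
    return [[frame[r][mid] for frame in frames] for r in range(rows)]
```

The result is one column per input frame, which is why the comment describes the output as an ultra-wide-angle projection: horizontal position in the panorama corresponds to time (camera position), not to viewing angle within a single frame.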
Seitz, S., The Space of All Stereo Images, Proc. ICCV 2001.
Agrawala, M., Zorin, D., Munzner, T., Artistic multiprojection rendering, Proc. Eurographics Rendering Workshop 2000.
Weinshall, D., Lee, M.S., Brodsky, T., Trajkovic, M., Feldman, D., New View Generation with a Bi-centric Camera, Proc. ECCV '02. .file: image-based-rendering/weinshall-bicentric-eccv02.pdf .comment: Panoramas from a translating, sideways-looking camera, where the horizontal and vertical centers of projection lie at different points along the virtual camera's optical axis, applied to architectural interiors. Also movie sequences composed of such panoramas, where one center of projection moves. Note: straight lines map to hyperbolae in these images, and these distortions change over time in the movies. Finally, since they take strips from each image, and the scene is not flat, there are discontinuities at strip boundaries in the panoramas. These discontinuities move coherently in the movies.
Zomet, A., Feldman, D., Peleg, S., Weinshall, D., Non-Perspective Imaging and Rendering with the Crossed-Slits Projection. Technical report #2002-41, Hebrew University, July, 2002. .file: image-based-rendering/zomet-xslits-TR02.pdf .comment: Generalization of Weinshall et al. (ECCV '02). Two slits (line segments) in arbitrary position. Parameterization of the set of rays passing through the slits is forced by their intersection with a rectangularly parameterized image plane in general position. Analyzes the algebraic and geometric properties of these images, including the effects of rotating the image plane and tilting the slits. Also discusses variants on slits, including one linear and one circular. Applications include sideways-looking video taken from a helicopter.
Nayar, S.K., Catadioptric omnidirectional camera, Proc. CVPR '97. .comment: hemispherical video camera using a mirror and a single sensor
Nayar, S.K., Peri, V., Folded catadioptric cameras, Proc. CVPR '99. .comment: imaging systems with two conic mirrors followed by lenses
Baker, S., Nayar, S., A Theory of Catadioptric Image Formation, Proc. ICCV '98. .file: sensing-hardware/baker-nayar-catadioptric-iccv98.pdf .comment: see later and expanded version in IJCV '99
Baker, S., Nayar, S., A Theory of Single-Viewpoint Catadioptric Image Formation, International Journal of Computer Vision (IJCV), 1999. .file: sensing-hardware/baker-nayar-catadioptric-ijcv99.pdf .comment: tutorial on geometrical optics of mirror/lens systems .comment: earlier version appeared in ICCV '98
Gluckman, J., Nayar, S.K., Rectified catadioptric stereo sensors, Proc. CVPR 2000. .comment: using planar mirrors to obtain stereo views using one camera, .comment: arrangements that yield parallel scanlines in both cameras .file: sensing-hardware/nayar-rectified-stereo-cvpr2000.pdf
Swaminathan, R., Grossberg, M., Nayar, S., Caustics of Catadioptric Cameras, Proc. ICCV 2001. .comment: locus of viewpoints of single-reflector conical systems
(under construction)
Rusinkiewicz, S., Hall-Holt, O., Stripe Boundary Codes for Real-Time Structured-Light Range Scanning of Moving Objects, Proc. ICCV '01. URL: http://graphics.stanford.edu/papers/realtimerange/
Rusinkiewicz, S., Real-time Acquisition and Rendering of Large 3D Models, PhD dissertation, Stanford University. URL: http://graphics.stanford.edu/papers/smr_thesis/
Rusinkiewicz, S., Hall-Holt, O., Levoy, M., Real-Time 3D Model Acquisition, To appear in Siggraph '02. URL: https://graphics.stanford.edu/papers/rt_model/
Oh, B.M., Chen, M., Dorsey, J., Durand, F., Image-based modeling and photo editing, Proc. Siggraph 2001.
Levoy, M., Spreadsheets for Images, Proc. SIGGRAPH '94.
Siggraph '98 course #15, Debevec et al. URL: http://www.cs.berkeley.edu/~debevec/IBMR98/
Siggraph '99 course #39, Debevec et al. URL: http://www.debevec.org/IBMR99/
CMU 15-869, Seitz and Heckbert URL: http://www-2.cs.cmu.edu/~ph/869/www/869.html
Karlsruhe IBMR-Focus resource page URL: http://i31www.ira.uka.de/~oel/ibmr-focus/
UNC IBR resource page URL: http://www.cs.unc.edu/~ibr/
Aaron Isaksen's web page on autostereoscopic display of light fields URL: http://graphics.lcs.mit.edu/~aisaksen/projects/autostereoscopic/
Intel's work on surface light fields URL: http://www.intel.com/research/mrl/research/lfm/
U. Washington's work on surface light fields URL: http://grail.cs.washington.edu/projects/slf/
Korean page of links for autostereoscopic displays: URL: http://vr.kjist.ac.kr/~3D/Research/Stereo/display.html
The Page of Omnidirectional Vision URL: http://www.cis.upenn.edu/~kostas/omni.html