Glossary

A

A-buffer or Alpha-buffer

An extra color channel that holds transparency information; pixels become quadruple values (RGBA). In a 32-bit frame buffer there are 24 bits of color, 8 each for red, green, and blue, along with an 8-bit alpha channel. Alpha is used for determining and displaying transparency, shadows, and anti-aliasing.

Related Glossary Terms: Antialiasing, Frame buffer

Term Source: Chapter 19 – Noise functions and Fractals

 

Abel, Robert

Robert Abel was a pioneer in visual effects, computer animation and interactive media, best known for the work of his company, Robert Abel and Associates. He received degrees in Design and Film from UCLA. He began his work in computer graphics in the 1950s, as an apprentice to John Whitney. In the 1960s and early 1970s, Abel wrote or directed several films, including The Making of the President, 1968, Elvis on Tour and Let the Good Times Roll.

In 1971, Abel and Con Pederson founded Robert Abel and Associates (RA&A), creating slit-scan effects and using motion-controlled cameras for television commercials and films. RA&A began using Evans & Sutherland computers to pre-visualize their effects; this led to the creation of the trailer for The Black Hole, and the development of their own software for digitally animating films (including Tron). In 1984, Robert Abel and Associates produced a commercial named Brilliance for the Canned Food Information Council for airing during the Super Bowl. It featured a sexy robot with reflective environment mapping and human-like motion.

Abel & Associates closed in 1987 following an ill-fated merger with the now defunct Omnibus Computer Graphics, Inc., a company which had been based in Toronto. In the 1990s, Abel founded Synapse Technologies, an early interactive media company, which produced pioneering educational projects for IBM, including “Columbus: Discovery, Encounter and Beyond” and “Evolution/Revolution: The World from 1890-1930”. He received numerous honors, including a Golden Globe Award (for Elvis on Tour), 2 Emmy Awards, and 33 Clios.

Abel died from complications following a myocardial infarction at the age of 64.

Related Glossary Terms: DOA

Term Source: Chapter 6 – Robert Abel and Associates

 

Abstract expressionism

A painting movement in which artists typically applied paint rapidly, and with force, to their huge canvases in an effort to show feelings and emotions, painting gesturally and non-geometrically, sometimes applying paint with large brushes, sometimes dripping or even throwing it onto the canvas. Their work is characterized by a strong dependence on what appears to be accident and chance, but which is actually highly planned. Some Abstract Expressionist artists were concerned with adopting a peaceful and mystical approach to a purely abstract image. Usually there was no effort to represent subject matter. Not all the work was abstract, nor was all of it expressive, but it was generally believed that the spontaneity of the artists’ approach to their work would draw from and release the creativity of their unconscious minds. The expressive method of painting was often considered as important as the painting itself.

Related Glossary Terms:

Term Source: Chapter 9 – Ed Emshwiller

 

Affine transformation

In geometry, an affine transformation or affine map or an affinity (from the Latin, affinis, “connected with”) is a transformation which preserves straight lines (i.e., all points lying on a line initially still lie on a line after transformation) and ratios of distances between points lying on a straight line (e.g., the midpoint of a line segment remains the midpoint after transformation). It does not necessarily preserve angles or lengths.
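
The defining property above can be checked numerically. This is a minimal sketch (the matrix, translation, and point names are illustrative): a 2-D affine map p → Ap + t, applied to a segment's endpoints and to its midpoint.

```python
# Minimal sketch: a 2-D affine map p -> A p + t, and a check that it
# preserves the midpoint of a segment. All values here are illustrative.

def affine(p, A, t):
    """Apply x' = A x + t to a 2-D point p."""
    x, y = p
    return (A[0][0] * x + A[0][1] * y + t[0],
            A[1][0] * x + A[1][1] * y + t[1])

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# A shear plus a translation: angles and lengths change, ratios do not.
A = [[1.0, 0.5],
     [0.0, 1.0]]
t = (3.0, -2.0)

p, q = (0.0, 0.0), (4.0, 2.0)
# Mapping the midpoint equals the midpoint of the mapped endpoints.
assert affine(midpoint(p, q), A, t) == midpoint(affine(p, A, t), affine(q, A, t))
```

The same check would fail for a non-affine map such as a perspective projection, which preserves straight lines but not ratios of distances.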

Related Glossary Terms:

Term Source: Chapter 19 – Plants

 

Alpha channel

The concept of an alpha channel was introduced by Alvy Ray Smith in the late 1970s, and fully developed in a 1984 paper by Thomas Porter and Tom Duff. In a 2D image element, which stores a color for each pixel, additional data is stored in the alpha channel with a value between 0 and 1. A value of 0 means that the pixel does not have any coverage information and is transparent; i.e., there was no color contribution from any geometry because the geometry did not overlap this pixel. A value of 1 means that the pixel is opaque because the geometry completely overlapped the pixel.
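
The central use of these values is compositing. The sketch below shows the Porter–Duff "over" operator on premultiplied RGBA pixels (the pixel values are invented for illustration):

```python
# Hedged sketch of the Porter-Duff "over" operator on premultiplied
# (r, g, b, a) pixels, all floats in [0, 1]. "Premultiplied" means the
# rgb components have already been scaled by the pixel's alpha.

def over(fg, bg):
    """Composite premultiplied RGBA pixel fg over bg."""
    fr, fgr, fb, fa = fg
    br, bgr, bb, ba = bg
    k = 1.0 - fa                      # how much of the background shows through
    return (fr + br * k, fgr + bgr * k, fb + bb * k, fa + ba * k)

# A 50%-opaque red composited over an opaque blue background:
red_half = (0.5, 0.0, 0.0, 0.5)      # premultiplied: 0.5 * (1, 0, 0)
blue     = (0.0, 0.0, 1.0, 1.0)
print(over(red_half, blue))           # -> (0.5, 0.0, 0.5, 1.0)
```

The alpha of 0.5 lets exactly half of the blue background contribute, yielding a purple result that is itself fully opaque.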

Related Glossary Terms: Frame buffer

Term Source: Chapter 5 – Cornell and NYIT, Chapter 15 – Early hardware

 

Analog

Relating to, or being a device in which data are represented by continuously variable, measurable, physical quantities, such as length, width, voltage, or pressure; a device having an output that is proportional to the input.

Related Glossary Terms: Digital

Term Source: Chapter 1 – Early analog computational devices

 

Anisotropic reflection

Anisotropic reflections are just like regular reflections, except stretched or blurred according to the orientation of small grooves (bumps, fibers or scratches) on a reflective surface. They appear on any object with a fine grain running predominantly in one direction. Good everyday examples are hair, brushed metals, pots and pans, or reflections in water that is being perturbed (for example, by falling rain).

Related Glossary Terms:

Term Source: Chapter 5 – Cal Tech and North Carolina State

Antialiasing

Antialiasing is a software technique for diminishing jaggies – stair-step-like lines that should be smooth. Jaggies occur because the output device, the monitor or printer, does not have a high enough resolution to represent a smooth line. Antialiasing reduces the prominence of jaggies by surrounding the stair-steps with intermediate shades of gray (for gray-scale devices) or color (for color devices). Although this reduces the jagged appearance of the lines, it also makes them fuzzier.
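
One common way to compute those intermediate shades is supersampling. The sketch below (the edge, grid size, and names are illustrative) shades one pixel by the fraction of its subsamples covered by an ideal edge:

```python
# Illustrative sketch: antialias one pixel by 4x4 supersampling against
# an ideal half-plane edge y < x (everything below the diagonal is "inked").
# The pixel's gray level becomes the fraction of subsamples covered.

def coverage(px, py, n=4):
    """Fraction of n*n subsamples of pixel (px, py) lying under y = x."""
    hits = 0
    for i in range(n):
        for j in range(n):
            # Subsample centers inside the unit pixel.
            x = px + (i + 0.5) / n
            y = py + (j + 0.5) / n
            if y < x:
                hits += 1
    return hits / (n * n)

# A pixel the diagonal passes through gets an intermediate gray,
# instead of the all-or-nothing value that causes jaggies.
print(coverage(2, 2))   # pixel straddling the edge -> 0.375
print(coverage(5, 1))   # pixel fully below the edge -> 1.0
```

Writing that fractional coverage as a gray level is exactly the "intermediate shades" described above; a higher subsample count gives smoother edges at more cost.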

Related Glossary Terms: A-buffer or Alpha-buffer, Jaggies

Term Source: Chapter 15 – Early hardware

 

API

API, an abbreviation of application program interface, is a set of routines, protocols, and tools for building software applications. A good API makes it easier to develop a program by providing all the building blocks. A programmer then puts the blocks together.

Most operating environments, such as the Apple Quartz API, provide an API so that programmers can write applications consistent with the operating environment. Although APIs are designed for programmers, they are ultimately good for users because they guarantee that all programs using a common API will have similar interfaces. This makes it easier for users to learn new programs.

Related Glossary Terms:

Term Source: Chapter 15 – Graphics Accelerators

 

Atkinson, Bill

Bill Atkinson is a computer engineer and photographer. Atkinson worked at Apple Computer from 1978 to 1990. He received his undergraduate degree from the University of California, San Diego, where Apple Macintosh developer Jef Raskin was one of his professors. Atkinson continued his studies as a graduate student at the University of Washington. Atkinson was part of the Apple Macintosh development team and was the creator of the ground-breaking MacPaint application, among others. He also designed and implemented QuickDraw, the fundamental toolbox that the Macintosh used for graphics. QuickDraw’s performance was essential for the success of the Macintosh’s graphical user interface. Atkinson also designed and implemented HyperCard, the first popular hypermedia system.

Related Glossary Terms:

Term Source: Chapter 16 – Apple Computer

 

Augmented reality

Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one.

Related Glossary Terms: Virtual reality

Term Source: Chapter 17 – Virtual Reality

 

B

B-rep

In solid modeling and computer-aided design, boundary representation—often abbreviated as B-rep or BREP—is a method for representing shapes using their limits. A solid is represented as a collection of connected surface elements, which form the boundary between solid and non-solid.

Boundary representation models are composed of two parts: topology and geometry (surfaces, curves and points). The main topological items are faces, edges and vertices. A face is a bounded portion of a surface; an edge is a bounded piece of a curve; and a vertex lies at a point. Other elements are the shell (a set of connected faces), the loop (a circuit of edges bounding a face) and loop-edge links (also known as winged-edge links or half-edges), which are used to create the edge circuits. The edges are like the edges of a table, bounding a surface portion.

Related Glossary Terms: Solids modeling

Term Source: Chapter 10 – SDRC / Unigraphics

 

Badler, Norman

Norman I. Badler is professor of computer and information science at the University of Pennsylvania and has been on that faculty since 1974. He has been active in computer graphics since 1968, with research interests centered on computational connections between language and human action. Badler received the B.A. degree in creative studies mathematics from the University of California at Santa Barbara in 1970. He received the M.Sc. in mathematics in 1971 and the Ph.D. in computer science in 1975, both from the University of Toronto. He directs the SIG Center for Computer Graphics and the Center for Human Modeling and Simulation at Penn.

Related Glossary Terms:

Term Source: Chapter 5 – Illinois-Chicago and University of Pennsylvania

 

Baecker, Ron

Dr. Baecker is an expert in human-computer interaction (“HCI”) and user interface (“UI”) design. His research interests include work on electronic memory aids and other cognitive prostheses; computer applications in education; computer-supported cooperative learning, multimedia and new media; software visualization; groupware and computer-supported cooperative work; computer animation and interactive computer graphics; computer literacy and how computers can help us work better and safer; and entrepreneurship and the management of small business as well as the stimulation of innovation. Baecker is also interested in the social implications of computing, especially the issue of responsibility when humans and computers interact.

Related Glossary Terms:

Term Source: Chapter 5 – UNC and Toronto

 

Baraff, David

David Baraff is a Senior Animation Scientist at Pixar Animation Studios. He received a BSE in Computer Science from the University of Pennsylvania, and a Ph.D. in Computer Science from Cornell. From 1992 to 1998 Baraff was a professor of robotics at Carnegie Mellon University in Pennsylvania. Simulation software from Physical Effects, Inc., a software company he co-founded, has been used in numerous movies at studios outside of Pixar. In 2006 he received a Scientific and Technical Academy Award for his work on cloth simulation.

Related Glossary Terms:

Term Source: Chapter 19 – Physical-based Modeling

 

Barr, Al

Al Barr, PhD RPI, now on the faculty at Caltech, works “to enhance the mathematical and scientific foundations of computer graphics, extending it beyond mere picture-making to the point that reconfigurable models have great predictive power.”

Related Glossary Terms:

Term Source: Chapter 5 – Cal Tech and North Carolina State

 

Bass, Saul

Saul Bass was a graphic designer and filmmaker, perhaps best known for his design of film posters and motion picture title sequences. During his 40-year career Bass worked for some of Hollywood’s greatest filmmakers, including Alfred Hitchcock, Otto Preminger, Billy Wilder, Stanley Kubrick and Martin Scorsese. Amongst his most famous title sequences are the animated paper cut-out of a heroin addict’s arm for Preminger’s The Man with the Golden Arm, the credits racing up and down what eventually becomes a high-angle shot of the C.I.T. Financial Building in Hitchcock’s North by Northwest, and the disjointed text that races together and apart in Psycho.

Bass designed some of the most iconic corporate logos in North America, including the AT&T “bell” logo in 1969, as well as AT&T’s “globe” logo in 1983 after the breakup of the Bell System. He also designed Continental Airlines’ 1968 “jetstream” logo and United Airlines’ 1974 “tulip” logo which became some of the most recognized airline industry logos of the era.

Related Glossary Terms:

Term Source: Chapter 6 – Robert Abel and Associates

 

Bergeron, Philippe

Philippe Bergeron holds a B.Sc. and M.Sc. in Computer Science from the University of Montreal. He has written over a dozen articles on computer graphics. He co-directed the short “Tony de Peltrie,” which featured the world’s first 3-D CGI human character to display emotions; it closed SIGGRAPH ’85. He was technical research director at Digital Productions, and head of Production Research at Whitney/Demos Productions, where he did character animation for “Stanley and Stella in Breaking The Ice.” Bergeron is also an actor and landscape designer.

Related Glossary Terms:

Term Source: Chapter 8 – Introduction

 

Bezier curves

A Bézier curve is a parametric curve frequently used in computer graphics and related fields. Generalizations of Bézier curves to higher dimensions are called Bézier surfaces, of which the Bézier triangle is a special case.

In vector graphics, Bézier curves are used to model smooth curves that can be scaled indefinitely. “Paths,” as they are commonly referred to in image manipulation programs, are combinations of linked Bézier curves. Paths are not bound by the limits of rasterized images and are intuitive to modify. Bézier curves are also used in animation as a tool to control motion.
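
A Bézier curve can be evaluated by repeated linear interpolation (de Casteljau's algorithm). The sketch below does this for control points that are purely illustrative:

```python
# Sketch: evaluating a Bezier curve with de Casteljau's algorithm,
# i.e. repeated linear interpolation of the control polygon.

def lerp(p, q, t):
    """Linear interpolation between points p and q at parameter t."""
    return tuple(a + (b - a) * t for a, b in zip(p, q))

def bezier(points, t):
    """Evaluate a Bezier curve of any degree at parameter t in [0, 1]."""
    pts = list(points)
    while len(pts) > 1:
        pts = [lerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

ctrl = [(0, 0), (0, 2), (2, 2), (2, 0)]   # a cubic: four control points
print(bezier(ctrl, 0.0))   # -> (0, 0): the curve starts at the first point
print(bezier(ctrl, 1.0))   # -> (2, 0): and ends at the last
print(bezier(ctrl, 0.5))   # top of this symmetric arch -> (1.0, 1.5)
```

Note that the curve passes through the first and last control points but only approaches the inner ones, which act as "handles" – the property drawing programs expose when you drag a path's control handles.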

Related Glossary Terms:

Term Source: Chapter 14 – CGI and Effects in Films and Music Videos

 

Bézier, Pierre

Pierre Étienne Bézier was a French engineer and one of the founders of the fields of solid, geometric and physical modeling, as well as of the field of representing curves, especially in CAD/CAM systems. As an engineer at Renault, he became a leader in the transformation of design and manufacturing, through mathematics and computing tools, into computer-aided design and three-dimensional modeling. Bézier patented and popularized, but did not invent, the Bézier curves and Bézier surfaces that are now used in most computer-aided design and computer graphics systems.

Related Glossary Terms:

Term Source: Chapter 4 – Other research efforts

 

Bit BLT

Bit BLT (which stands for bit-block [image] transfer but is pronounced bit blit) is a computer graphics operation in which several bitmaps are combined into one using a raster operator.

The operation involves at least two bitmaps, a source and destination, possibly a third that is often called the “mask” and sometimes a fourth used to create a stencil. The pixels of each are combined bitwise according to the specified raster operation (ROP) and the result is then written to the destination.

This operation was created by Dan Ingalls, Larry Tesler, Bob Sproull, and Diana Merry at Xerox PARC in November 1975 for the Smalltalk-72 system.
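
The per-pixel bitwise combination described above can be sketched in a few lines; here bitmaps are represented as lists of integers, one bit per pixel, and the raster operation is passed in as a function (all names and data are illustrative):

```python
# Minimal sketch of a BitBLT-style raster operation: bitmaps as lists of
# integer rows, one bit per pixel, combined bitwise per a chosen ROP.

def bitblt(dest, src, rop):
    """Combine src into dest row-by-row with the raster operation rop."""
    return [rop(d, s) for d, s in zip(dest, src)]

dest = [0b1100, 0b1100]
src  = [0b1010, 0b0101]

print(bitblt(dest, src, lambda d, s: d | s))  # OR  (paint)  -> [14, 13] i.e. [0b1110, 0b1101]
print(bitblt(dest, src, lambda d, s: d ^ s))  # XOR (invert) -> [6, 9]   i.e. [0b0110, 0b1001]
print(bitblt(dest, src, lambda d, s: s))      # COPY         -> src itself
```

Real implementations add a mask/stencil operand and handle arbitrary rectangle offsets and bit alignment, but every variant reduces to this bitwise-combine-and-write core.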

Related Glossary Terms:

Term Source: Chapter 15 – Early hardware, Chapter 16 – Xerox PARC

 

Blinn, James

James F. Blinn is a computer scientist who first became widely known for his work as a computer graphics expert at NASA’s Jet Propulsion Laboratory (JPL), particularly his work on the pre-encounter animations for the Voyager project, his work on the Carl Sagan Cosmos documentary series and the research of the Blinn–Phong shading model.

Blinn devised new methods to represent how objects and light interact in a three dimensional virtual world, like environment mapping and bump mapping. He is well known for creating animation for three television series: Carl Sagan’s Cosmos: A Personal Voyage; Project MATHEMATICS!; and the pioneering instructional graphics in The Mechanical Universe. His simulations of the Voyager spacecraft visiting Jupiter and Saturn have been seen widely. He is now a graphics fellow at Microsoft Research. Blinn also worked at the New York Institute of Technology in the summer of 1976.

Related Glossary Terms:

Term Source: Chapter 4 – JPL and National Research Council of Canada

 

Blue Screen

Chroma key compositing, or chroma keying, is a special effects / post-production technique for compositing (layering) two images or video streams together, used heavily in many fields to remove a background from the subject of a photo or video – particularly in the newscasting, motion picture and video game industries. A color range in the top layer is made transparent, revealing another image behind. The technique is also referred to as color keying, color-separation overlay (CSO), or by terms for specific color variants such as green screen and blue screen. Chroma keying can be done with a background of any color that is uniform and distinct, but green and blue backgrounds are more commonly used because they differ most distinctly in hue from most human skin colors; no part of the subject being filmed or photographed may duplicate the color used in the background.

Related Glossary Terms: Chroma key compositing

Term Source: Chapter 14 – CGI and Effects in Films and Music Videos

 

Brooks, Frederick

Frederick Phillips Brooks, Jr. is a software engineer and computer scientist, best known for managing the development of IBM’s System/360 family of computers and the OS/360 software support package, then later writing candidly about the process in his seminal book The Mythical Man-Month. Brooks has received many awards, including the National Medal of Technology in 1985 and the Turing Award in 1999. It was in The Mythical Man-Month that Brooks made the now-famous statement: “Adding manpower to a late software project makes it later.” This has since come to be known as Brooks’s law.

Related Glossary Terms:

Term Source: Chapter 17 – Interaction

 

Bump mapping

Bump mapping is a technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normal during lighting calculations. The result is an apparently bumpy surface rather than a smooth surface, although the surface of the underlying object is not actually changed. Bump mapping was introduced by Blinn in 1978.
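
The core idea can be sketched as follows: tilt a flat surface normal by the gradient of a height map, then light with the tilted normal. The height map, light direction, and scale here are invented for illustration; this is the principle, not Blinn's exact formulation.

```python
# Hedged sketch of bump mapping's core idea: perturb a flat normal with
# the finite-difference gradient of a height map, then use the perturbed
# normal in a Lambertian (N.L) lighting calculation.

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def perturbed_normal(height, x, y, scale=1.0):
    """Normal of the height field at (x, y) via central differences."""
    dhdx = (height(x + 1, y) - height(x - 1, y)) / 2.0
    dhdy = (height(x, y + 1) - height(x, y - 1)) / 2.0
    # The flat normal is (0, 0, 1); bumps tilt it against the gradient.
    return normalize((-scale * dhdx, -scale * dhdy, 1.0))

def lambert(normal, light):
    """Diffuse brightness: clamped dot product of normal and light."""
    return max(0.0, sum(n * l for n, l in zip(normal, normalize(light))))

ripple = lambda x, y: math.sin(x * 0.5)   # an invented "bumpy" height map
light = (0.3, 0.0, 1.0)

# The geometry is a flat plane, yet the shading now varies across it:
for x in (0, 3, 6):
    print(round(lambert(perturbed_normal(ripple, x, 0), light), 3))
```

The printed brightnesses differ from point to point even though the underlying surface is perfectly flat – exactly the illusion the entry describes.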

Related Glossary Terms: Blinn, University of Utah

Term Source:

 

Burtnyk, Nestor

NRC scientists Nestor Burtnyk and Marceli Wein were honored at the Festival of Computer Animation in Toronto, where they were recognized as the fathers of computer animation technology in Canada. Burtnyk, who began his career with NRC in 1950, started Canada’s first substantive computer graphics research project in the 1960s. Wein, who joined this same project in 1966, had been exposed to the potential of computer imaging while studying at McGill. He teamed up with Burtnyk to pursue this promising field.

One of their main contributions was the Academy Award nominated film “Hunger/La Faim” (produced by the National Film Board of Canada) using their famous key-frame animation approach and system.

Related Glossary Terms: Wein, Marceli

Term Source: Chapter 4 – JPL and National Research Council of Canada

 

Buxton, Bill

William Arthur Stewart “Bill” Buxton (born March 10, 1949) is a Canadian computer scientist and designer. He is currently a Principal researcher at Microsoft Research. He is known for being one of the pioneers in the human–computer interaction field.

Related Glossary Terms:

Term Source: Chapter 5 – UNC and Toronto

 

C

CAD

CAD – computer-aided design

The use of computer programs and systems to design detailed two- or three-dimensional models of physical objects, such as mechanical parts, buildings, and molecules.

Related Glossary Terms: CADD, CAE, CAID, CAM

Term Source: Chapter 3 – General Motors DAC, Chapter 10 – Introduction

 

CADD

CADD – Computer Aided Drafting and Design, Computer-Aided Design & Drafting, or Computer-Aided Design Development

The use of the computer to help with the drafting of product plans.

Related Glossary Terms: CAD, CAE, CAID, CAM

Term Source: Chapter 3 – General Motors DAC, Chapter 10 – Introduction

 

CAE

CAE – computer-aided engineering

Use of computers to help with all phases of engineering design work. Like computer-aided design, but also involving the conceptual and analytical design steps.

Related Glossary Terms: CADD, CAD, CAID, CAM

Term Source: Chapter 10 – Introduction, Chapter 10 – SDRC / Unigraphics

 

CAID

Computer-aided industrial design (CAID) is CAD adapted and specialized for aesthetic design. From a designer’s point of view, CAD is for the pocket-protector brigade, while CAID is for the creative.

Related Glossary Terms: CADD, CAE, CAD, CAM

Term Source: Chapter 8 – Alias Research

 

CAM

CAM – computer-aided manufacturing

The process of using specialized computers to control, monitor, and adjust tools and machinery in manufacturing.

Related Glossary Terms: CADD, CAE, CAID, CAD

Term Source: Chapter 10 – Introduction, Chapter 10 – MCS / CalComp / McAuto

 

Carpenter, Loren

Loren Carpenter is a computer graphics researcher and developer. He is co-founder and chief scientist of Pixar Animation Studios and the co-inventor of the Reyes rendering algorithm. He is one of the authors of the PhotoRealistic RenderMan software which implements Reyes and is used to create the imagery for Pixar’s movies. Following Disney’s acquisition of Pixar, Carpenter became a Senior Research Scientist at Disney Research.

Carpenter began his career at Boeing Computer Services in Seattle, Washington. In 1980 he gave a presentation at the SIGGRAPH conference in which he showed “Vol Libre,” a two-minute computer generated movie. This showcased his software for generating and rendering fractally generated landscapes. Carpenter later worked on the “genesis effect” scene of Star Trek II: The Wrath of Khan, which featured an entire fractally-landscaped planet.

Related Glossary Terms:

Term Source: Chapter 19 – Noise functions and Fractals

 

Cathode Ray Tube

A vacuum tube generating a focused beam of electrons that can be deflected by electric fields, magnetic fields, or both. The terminus of the beam is visible as a spot or line of luminescence caused by its impinging on a sensitized screen at one end of the tube. Cathode-ray tubes are used to study the shapes of electric waves, to reproduce images in television receivers, to display alphanumeric and graphical information on computer monitors, as an indicator in radar sets, etc. Abbreviation: CRT

Related Glossary Terms: Vacuum tube

Term Source: Chapter 1 – Electronic devices

 

Caustics

In optics, a caustic or caustic network is the envelope of light rays reflected or refracted by a curved surface or object, or the projection of that envelope of rays on another surface. The caustic is a curve or surface to which each of the light rays is tangent, defining a boundary of an envelope of rays as a curve of concentrated light. Therefore in an image the caustics can be the patches of light or their bright edges. These shapes often have cusp singularities.

In computer graphics, most modern rendering systems support caustics. Some of them even support volumetric caustics. This is accomplished by raytracing the possible paths of the light beam through the glass, accounting for the refraction and reflection. Photon mapping is one implementation of this.

Related Glossary Terms: Photon mapping, Ray-trace

Term Source: Chapter 20 – CG Icons

 

Charactron

A cathode-ray tube used in information display units to reproduce letters, numbers, map symbols, and other characters. Invented in the USA in 1941, the Charactron is an instantaneous-operation numerical indicator tube.

In the Charactron, the characters reproduced on the tube’s screen are formed by means of a matrix, which is an opaque plate containing a set of 64 to 200 microscopic openings in the shape of the characters to be displayed. The matrix is located in the path of the electron beam between two deflection systems. The first deflection system guides the beam to the desired character on the matrix; the second system guides the shaped beam to the desired location on the screen. When the beam passes through the matrix, the cross section of the beam takes on the shape of the character through which it has passed. Hence, an image of the desired character—rather than a point, as in ordinary cathode-ray tubes—is illuminated at the place where the beam strikes the screen.

Related Glossary Terms:

Term Source: Chapter 3 – Other output devices

 

Chroma key compositing

Chroma key compositing, or chroma keying, is a special effects / post-production technique for compositing (layering) two images or video streams together, used heavily in many fields to remove a background from the subject of a photo or video – particularly in the newscasting, motion picture and video game industries. A color range in the top layer is made transparent, revealing another image behind. The technique is also referred to as color keying, color-separation overlay (CSO), or by terms for specific color variants such as green screen and blue screen. Chroma keying can be done with a background of any color that is uniform and distinct, but green and blue backgrounds are more commonly used because they differ most distinctly in hue from most human skin colors; no part of the subject being filmed or photographed may duplicate the color used in the background.
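
The decision "is this pixel close enough to the backing color?" can be sketched per-pixel. The hue tolerance and test colors below are invented; production keyers use far more sophisticated spill and edge handling.

```python
# Illustrative sketch of keying: a pixel whose hue is close enough to the
# backing color becomes transparent (alpha 0). Thresholds are invented.

import colorsys

def key_alpha(pixel, backing_hue=1/3, tol=0.08):
    """Alpha for an (r, g, b) pixel (floats in [0,1]) keyed against green."""
    h, s, v = colorsys.rgb_to_hsv(*pixel)
    near = abs(h - backing_hue) < tol and s > 0.4   # saturated and green-ish
    return 0.0 if near else 1.0

green_screen = (0.1, 0.9, 0.15)
skin_tone    = (0.9, 0.7, 0.6)
print(key_alpha(green_screen))   # -> 0.0 (made transparent)
print(key_alpha(skin_tone))      # -> 1.0 (kept)
```

The saturation test illustrates why a uniform, vivid backing matters: desaturated or unevenly lit green drifts out of the keyed range and leaves fringes.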

Related Glossary Terms: Blue Screen

Term Source:

 

Clipping

Any procedure that identifies the portion of a picture lying inside (or outside) a region to be displayed on a CRT or screen is referred to as a clipping algorithm, or simply clipping.

The region against which an object is to be clipped is called the clipping window.
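
A concrete example, in the spirit of the Liang–Barsky parametric method (the window bounds and segments are illustrative): trim a line segment to a rectangular clipping window.

```python
# Sketch of parametric line clipping against a rectangular window,
# in the spirit of Liang-Barsky. Window bounds are illustrative.

def clip_segment(p0, p1, xmin, ymin, xmax, ymax):
    """Return the clipped segment, or None if it lies entirely outside."""
    x0, y0 = p0
    dx, dy = p1[0] - x0, p1[1] - y0
    t0, t1 = 0.0, 1.0
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return None          # parallel to this edge and outside it
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)      # entering the window
            else:
                t1 = min(t1, t)      # leaving the window
    if t0 > t1:
        return None                  # the segment misses the window
    return ((x0 + t0 * dx, y0 + t0 * dy), (x0 + t1 * dx, y0 + t1 * dy))

# A segment crossing a 10x10 window is trimmed to the window's edges:
print(clip_segment((-5, 5), (15, 5), 0, 0, 10, 10))   # -> ((0.0, 5.0), (10.0, 5.0))
print(clip_segment((20, 20), (30, 30), 0, 0, 10, 10)) # -> None
```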

Related Glossary Terms:

Term Source: Chapter 3 – General Motors DAC, Chapter 4 – MIT and Harvard

 

Colormap

Color mapping is a function that maps (transforms) the colors of one (source) image to the colors of another (target) image. A color mapping may be referred to as the algorithm that results in the mapping function or the algorithm that transforms the image colors. Color mapping is also sometimes called color transfer or, when grayscale images are involved, brightness transfer function (BTF).
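
A minimal illustration of such a mapping function, here a lookup from a normalized scalar to an RGB color by linear interpolation between two invented anchor colors:

```python
# Illustrative sketch: a colormap as a function from a scalar in [0, 1]
# to an RGB triple, interpolating between anchor colors (blue -> red).

def colormap(value, anchors=((0.0, (0, 0, 255)), (1.0, (255, 0, 0)))):
    """Map value in [0, 1] to an RGB color between the two anchors."""
    (t0, c0), (t1, c1) = anchors
    t = (min(max(value, t0), t1) - t0) / (t1 - t0)   # clamp, then normalize
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

print(colormap(0.0))   # -> (0, 0, 255)   pure blue
print(colormap(0.5))   # -> (128, 0, 128) midway purple
print(colormap(1.0))   # -> (255, 0, 0)   pure red
```

Real colormaps typically chain many anchors (or use perceptually uniform curves), but each segment reduces to this interpolation.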

Related Glossary Terms:

Term Source: Chapter 18 – Hardware and Software

 

Combinatorial geometry

Computational (sometimes referred to as combinatorial) geometry is a branch of computer science devoted to the study of algorithms which can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry.

The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization.

Related Glossary Terms:

Term Source: Chapter 6 – MAGI

 

Computational fluid dynamics

Computational fluid dynamics, usually abbreviated as CFD, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions.

Related Glossary Terms:

Term Source: Chapter 18 – Introduction

 

Computer graphics

  1. pictorial computer output produced on a display screen, plotter, or printer.
  2. the study of the techniques used to produce such output.
  3. the use of a computer to produce and manipulate pictorial images on a video screen, as in animation techniques or the production of audiovisual aids

Related Glossary Terms:

Term Source: Preface – Preface

 

Computer-generated art

Digital art is a general term for a range of artistic works and practices that use digital technology as an essential part of the creative and/or presentation process. Since the 1970s, various names have been used to describe the process including computer art, computer-generated art, and multimedia art, and digital art is itself placed under the larger umbrella term new media art.

Related Glossary Terms:

Term Source: Chapter 9 – Lillian Schwartz

 

Constructivist

Constructivism, Russian Konstruktivizm, Russian artistic and architectural movement that was first influenced by Cubism and Futurism and is generally considered to have been initiated in 1913 with the “painting reliefs”—abstract geometric constructions—of Vladimir Tatlin.

Related Glossary Terms:

Term Source: Chapter 9 – Manfred Mohr

 

Continuous shading

Continuous shading is the smooth shading of polygons by interpolating intensity across each polygon. In other words, the brightness of the shading varies within individual polygons, without changing the color being applied. It is often referred to as Gouraud shading.
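
One way to realize this idea (Gouraud's original formulation interpolated along scan lines; the barycentric form below is equivalent for a triangle, with invented vertex intensities):

```python
# Sketch of continuous (Gouraud-style) shading: per-vertex intensities
# are interpolated across a triangle's interior via barycentric weights,
# so brightness varies smoothly within the one polygon.

def barycentric(p, a, b, c):
    """Barycentric weights of point p in triangle (a, b, c)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w0 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    w1 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return w0, w1, 1.0 - w0 - w1

def shade(p, tri, intensities):
    """Interpolated brightness at p from the three vertex intensities."""
    return sum(w * i for w, i in zip(barycentric(p, *tri), intensities))

tri = ((0, 0), (10, 0), (0, 10))
vertex_intensity = (0.2, 1.0, 0.6)   # normally computed per-vertex by lighting
print(shade((0, 0), tri, vertex_intensity))   # at a vertex -> 0.2
print(shade((5, 0), tri, vertex_intensity))   # halfway along an edge -> 0.6
```

Because the interpolation is continuous across shared edges, adjacent polygons blend smoothly instead of showing a faceted, flat-shaded look.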

Related Glossary Terms: Gouraud shading

Term Source: Chapter 17 – Virtual Reality

 

Contour plots

A contour plot is a graphical technique for representing a 3-dimensional surface by plotting constant z slices, called contours, in a 2-dimensional format. That is, given a value for z, lines are drawn connecting the (x,y) coordinates where that z value occurs.

The contour plot is an alternative to a 3-D surface plot.
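
The basic operation behind drawing those lines can be sketched as finding where the chosen z level crosses a grid edge, by linear interpolation between the two samples (values here are illustrative):

```python
# Sketch of the basic contouring step: locate where a chosen z level
# crosses a grid edge by linear interpolation between its two samples.

def crossing(z0, z1, level):
    """Parameter t in [0, 1] where the contour crosses the edge, or None."""
    if (z0 - level) * (z1 - level) > 0:
        return None                   # both samples on the same side
    if z0 == z1:
        return None                   # flat edge: no single crossing point
    return (level - z0) / (z1 - z0)

# Samples at the two ends of a grid edge, contour level z = 5:
print(crossing(2.0, 8.0, 5.0))   # -> 0.5  (crosses midway along the edge)
print(crossing(6.0, 9.0, 5.0))   # -> None (edge entirely above the level)
```

Algorithms such as marching squares apply this edge test to every cell of the grid and join the crossings into the contour polylines.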

Related Glossary Terms: Isolines, Isosurfaces

Term Source: Chapter 18 – Algorithms

 

Coons, Steven

Steven Anson Coons (March 7, 1912 – August 1979) was an early pioneer in the field of computer graphical methods. He was a professor at the Massachusetts Institute of Technology in the Mechanical Engineering Department. Steven Coons had a vision of interactive computer graphics as a design tool to aid the engineer.

The Association for Computing Machinery SIGGRAPH has an award named for Coons. The Steven Anson Coons Award for Outstanding Creative Contributions to Computer Graphics is given in odd-numbered years to an individual to honor that person’s lifetime contribution to computer graphics and interactive techniques. It is considered the field’s most prestigious award.

Related Glossary Terms:

Term Source: Chapter 4 – MIT and Harvard

 

Core memory

Magnetic-core memory was the predominant form of random-access computer memory for 20 years (circa 1955–75). It uses tiny magnetic toroids (rings), the cores, through which wires are threaded to write and read information. Each core represents one bit of information. The cores can be magnetized in two different ways (clockwise or counterclockwise) and the bit stored in a core is zero or one depending on that core’s magnetization direction. The wires are arranged to allow an individual core to be set to either a “one” or a “zero”, and for its magnetization to be changed, by sending appropriate electric current pulses through selected wires. The process of reading the core causes the core to be reset to a “zero”, thus erasing it. This is called destructive readout.

Such memory is often just called core memory or, informally, core. Although core memory had been superseded by semiconductor memory by the end of the 1970s, memory is still occasionally called “core”.

Each core was a donut-shaped piece of metal, often ferrite, with two electrical wires strung through it. Neither wire alone carried enough current to change the magnetic state of the core, but together they did. Thus it was a randomly addressable storage and access medium.

Related Glossary Terms:

Term Source: Chapter 2 – Whirlwind and SAGE

 

Cray

Cray Inc. is an American supercomputer manufacturer based in Seattle, Washington. The company’s predecessor, Cray Research, Inc. (CRI), was founded in 1972 by computer designer Seymour Cray. Seymour Cray went on to form the spin-off Cray Computer Corporation (CCC) in 1989, which went bankrupt in 1995, while Cray Research was bought by SGI the next year. Cray Inc. was formed in 2000 when Tera Computer Company purchased the Cray Research Inc. business from SGI and adopted the name of its acquisition. Their computers included the Cray-1, Cray-2, and Cray X-MP.

Related Glossary Terms:

Term Source: Chapter 6 – Digital Productions (DP)

 

CSG

Constructive solid geometry (CSG) is a technique used in solid modeling. It allows a modeler to create a complex surface or object by using Boolean operators (union, intersection, and difference) to combine simpler objects. Often CSG yields a model or surface that appears visually complex but is actually little more than cleverly combined primitive objects.

In 3D computer graphics and CAD, CSG is often used in procedural modeling. CSG can also be performed on polygonal meshes, and may or may not be procedural and/or parametric.
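For illustration, the Boolean operators can be sketched with signed distance functions, where a negative value means the point is inside the solid (a minimal Python sketch, not tied to any particular modeler):

```python
import math

# Signed distance to a sphere: negative inside, zero on the surface, positive outside.
def sphere(cx, cy, cz, r):
    return lambda x, y, z: math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - r

# The three CSG Boolean operators expressed on distance functions.
def union(a, b):        return lambda x, y, z: min(a(x, y, z), b(x, y, z))
def intersection(a, b): return lambda x, y, z: max(a(x, y, z), b(x, y, z))
def difference(a, b):   return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A sphere with a smaller sphere carved out of one side.
shape = difference(sphere(0, 0, 0, 1.0), sphere(1.0, 0, 0, 0.5))
print(shape(0, 0, 0) < 0)    # True: the center survives the subtraction
print(shape(0.9, 0, 0) < 0)  # False: this point was carved away
```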

Related Glossary Terms: Solids modeling

Term Source: Chapter 10 – Introduction

 

Csuri, Charles

Charles Csuri is best known for pioneering the field of computer graphics, computer animation and digital fine art, creating the first computer art in 1964. Csuri has been recognized as the father of digital art and computer animation by Smithsonian, and as a leading pioneer of computer animation by the Museum of Modern Art (MoMA) and The Association for Computing Machinery Special Interest Group Graphics (ACM SIGGRAPH). Between 1971 and 1987, while a senior professor at the Ohio State University, Charles Csuri founded the Computer Graphics Research Group, the Ohio Super Computer Graphics Project, and the Advanced Computing Center for Art and Design, dedicated to the development of digital art and computer animation. Csuri was co-founder of Cranston/Csuri Productions (C/CP), one of the world’s first computer animation production companies.

Related Glossary Terms: The Ohio State University

Term Source: Chapter 4 – University of Utah, Chapter 4 – The Ohio State University

 

Cuba, Larry

Larry Cuba is a computer-animation artist who became active in the late 1970s and early 80s. He received an A.B. from Washington University in St. Louis in 1972 and his master’s degree from the California Institute of the Arts. In 1975, John Whitney, Sr. invited Cuba to be the programmer on one of his films. The result of this collaboration was “Arabesque”. Subsequently, Cuba produced three more computer-animated films: 3/78 (Objects and Transformations), Two Space, and Calculated Movements. Cuba also produced computer graphics for Star Wars Episode IV: A New Hope in 1977 on Tom DeFanti’s Grass system at EVL. His animation of the Death Star is shown to pilots in the Rebel Alliance. Cuba received grants for his work from the American Film Institute and the National Endowment for the Arts.

Related Glossary Terms: EVL

Term Source: Chapter 9 – Larry Cuba

 

D

DAC-1

DAC-1, for Design Augmented by Computer, was one of the earliest graphical computer-aided design systems. It was developed by General Motors; IBM was brought in as a partner in 1960, and the two companies developed the system and released it to production in 1963. It was publicly unveiled at the Fall Joint Computer Conference in Detroit in 1964. GM used the DAC system, continually modified, into the 1970s, when it was succeeded by CADANCE.

Related Glossary Terms:

Term Source: Chapter 3 – General Motors DAC

 

Data-driven

Computer graphics visualization has evolved by focusing on algorithmic approaches to the synthesis of imagery. Recently, various methods have been introduced that exploit pre-recorded data to improve the performance and/or realism of things like dynamic deformations. This data can guide the algorithms or, in some cases, determine which algorithms are used in the synthesis process. The approach has seen successful use in visualizations of music and in the dynamic deformation of faces, soft volumetric tissue, and cloth, as examples.

Related Glossary Terms:

Term Source: Chapter 19 – Data-driven Imagery

 

Dataflow

Dataflow is a software architecture based on the idea that changing the value of a variable should automatically force recalculation of the values of variables which depend on its value.

There have been a few programming languages created specifically to support dataflow. In particular, many (if not most) visual programming languages have been based on the idea of dataflow.
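A minimal Python sketch of the idea (hypothetical, spreadsheet-style): writing to a cell automatically pushes recalculation to every cell that depends on it:

```python
# A toy dataflow "cell": dependent cells recompute when an input changes.
class Cell:
    def __init__(self, value=None, formula=None, inputs=()):
        self.formula, self.inputs, self.dependents = formula, inputs, []
        for c in inputs:
            c.dependents.append(self)   # register for change notifications
        self._value = value
        if formula:
            self._recompute()

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        self._value = v
        for d in self.dependents:       # push the change downstream
            d._recompute()

    def _recompute(self):
        self._value = self.formula(*[c.value for c in self.inputs])
        for d in self.dependents:
            d._recompute()

a = Cell(2)
b = Cell(3)
total = Cell(formula=lambda x, y: x + y, inputs=(a, b))
print(total.value)  # 5
a.value = 10        # changing an input forces recalculation
print(total.value)  # 13
```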

Related Glossary Terms: Modular visualization environments

Term Source: Chapter 18 – Visualization Systems

 

Debevec, Paul

Paul Debevec is a researcher in computer graphics at the University of Southern California’s Institute for Creative Technologies. He is best known for his pioneering work in high dynamic range imaging and image-based modeling and rendering. Debevec received a Ph.D. in computer science from UC Berkeley in 1996; his thesis research was in photogrammetry, or the recovery of the 3D shape of an object from a collection of still photographs taken from various angles. In 1997 he and a team of students produced The Campanile Movie, a virtual flyby of UC Berkeley’s famous Campanile tower. Debevec’s more recent research has included methods for recording real-world illumination for use in computer graphics; a number of novel inventions for recording ambient and incident light have resulted from the work of Debevec and his team, including the light stage, of which five or more versions have been constructed, each an evolutionary improvement over the previous. Techniques based on Debevec’s work have been used in several major motion pictures, including The Matrix (1999), Spider-Man 2 (2004), King Kong (2005), Superman Returns (2006), Spider-Man 3 (2007), and Avatar (2009). In addition, Debevec and his team have produced several short films that have premiered at SIGGRAPH’s annual Electronic Theater, including Fiat Lux (1999) and The Parthenon (2004).

Debevec, along with Tim Hawkins, John Monos and Mark Sagar, was awarded a 2009 Scientific and Engineering Award from the Academy of Motion Picture Arts and Sciences for the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures.

Related Glossary Terms:

Term Source: Chapter 19 – Global Illumination

 

DeFanti, Tom

Tom DeFanti is a computer graphics researcher and pioneer. His work has ranged from early computer animation, to scientific visualization, virtual reality, and grid computing. He is a distinguished professor of Computer Science at the University of Illinois at Chicago, and a research scientist at the California Institute for Telecommunications and Information Technology. DeFanti did his PhD work in the early 1970s at Ohio State University, under Charles Csuri in the Computer Graphics Research Group. For his dissertation, he created the GRASS programming language, a three-dimensional, real-time animation system usable by computer novices.

In 1973, he joined the faculty of the University of Illinois at Chicago. With Dan Sandin, he founded the Circle Graphics Habitat, now known as the Electronic Visualization Laboratory (EVL). At UIC, DeFanti further developed the GRASS language, and later created an improved version, ZGRASS. The GRASS and ZGRASS languages have been used by a number of computer artists, including Larry Cuba, in his film 3/78 and the animated Death Star sequence for Star Wars. Later significant work done at EVL includes development of the graphics system for the Bally home computer, invention of the first data glove, co- editing the 1987 NSF-sponsored report Visualization in Scientific Computing that outlined the emerging discipline of scientific visualization, invention of PHSColograms, and invention of the CAVE Automatic Virtual Environment.

DeFanti contributed greatly to the growth of the SIGGRAPH organization and conference. He served as Chair of the group from 1981 to 1985, co-organized early film and video presentations (which became the Electronic Theatre), and in 1979 started the SIGGRAPH Video Review, a video archive of computer graphics research.

DeFanti is a Fellow of the Association for Computing Machinery. He has received the 1988 ACM Outstanding Contribution Award, the 2000 SIGGRAPH Outstanding Service Award, and the UIC Inventor of the Year Award.

Related Glossary Terms: EVL, Sandin, Dan

Term Source: Chapter 5 – Illinois-Chicago and University of Pennsylvania

 

DeGraf, Brad

DeGraf has been an innovator in computer animation in the entertainment industry since 1982, particularly in the areas of realtime characters, ride films, and the Web. He founded and/or managed several ground-breaking animation studios including Protozoa (aka Dotcomix), Colossal Pictures Digital Media, deGraf/Wahrman, and Digital Productions. In 2000, Wired called Brad “an icon in the world of 3D animation”. Brad is currently CEO and co-founder (with Michael Tolson formerly of XAOS and Envoii) of Sociative Inc.

His film credits include: Duke2000.com, a campaign with Garry Trudeau to get his Ambassador Duke character elected president; Moxy, emcee for the Cartoon Network and the first virtual character for television; Floops, the first Web episodic cartoon; Peter Gabriel’s Grammy award-winning video, Steam; “The Funtastic World of Hanna-Barbera”, the first computer-generated ride film; and the feature films “The Last Starfighter”, “2010”, “Jetsons: the Movie”, and “Robocop 2”.

Related Glossary Terms:

Term Source: Chapter 6 – Digital Productions (DP)

Demos, Gary

Gary Demos was one of the principals of the Motion Picture Project at Information International Inc. (1974–1981), Digital Productions (1981–1986), and Whitney/Demos Productions (1986–1988). In 1988 Demos formed DemoGraFX, which became involved in technology research for advanced television systems and digital cinema, as well as consulting and contracting for computer companies and visual effects companies. DemoGraFX was sold to Dolby Labs in 2003. Demos attended Caltech and worked with Ivan Sutherland at E&S and later at the Picture/Design Group before co-founding the graphics group at Triple-I.

Related Glossary Terms: DOA

Term Source: Chapter 6 – Digital Productions (DP)

 

Diffuse reflection

Diffuse reflection is the reflection of light from a surface such that an incident ray is reflected at many angles rather than at just one angle as in the case of specular reflection. An illuminated ideal diffuse reflecting surface will have equal luminance from all directions in the hemisphere surrounding the surface (Lambertian reflectance).
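As a sketch of Lambertian shading, the reflected intensity depends only on the angle between the unit surface normal N and the unit light direction L, never on the viewer’s position:

```python
# Lambertian diffuse shading: intensity = k_d * I_light * max(0, N.L).
# All vectors are assumed to be unit length.
def lambert(normal, light_dir, k_diffuse=1.0, light_intensity=1.0):
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return k_diffuse * light_intensity * max(0.0, n_dot_l)

print(lambert((0, 0, 1), (0, 0, 1)))  # light head-on -> 1.0
print(lambert((0, 0, 1), (0, 1, 0)))  # grazing light -> 0.0
```

The `max(0, ...)` clamp keeps surfaces facing away from the light unlit rather than negatively lit.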

Related Glossary Terms: Lambertian, Specular reflection

Term Source:

 

Digital

A description of data which is stored or transmitted as a sequence of discrete symbols from a finite set, most commonly this means binary data represented using electronic or electromagnetic signals.

Related Glossary Terms: Analog

Term Source: Chapter 1 – Early digital computational devices

 

Digital compositing

Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene. Live-action shooting for compositing is variously called “chroma key”, “blue screen”, “green screen” and other names. Today, most, though not all, compositing is achieved through digital image manipulation. Pre-digital compositing techniques, however, go back as far as the trick films of Georges Méliès in the late 19th century; and some are still in use.

Related Glossary Terms:

Term Source:

 

Digital painting

Digital painting differs from other forms of digital art, particularly computer-generated art, in that it does not involve the computer rendering from a model. The artist uses painting techniques to create the digital painting directly on the computer. Digital painting programs try to mimic the use of physical media through various brushes and paint effects. Many programs include brushes that are digitally styled to represent traditional media such as oils, acrylics, pastels, charcoal, and pen, and even techniques such as airbrushing. There are also effects unique to each type of digital paint, such as portraying the realistic behavior of watercolor in a digital ‘watercolor’ painting.

Related Glossary Terms:

Term Source: Chapter 11 – Pixar

 

Digital Scene Simulation

Digital Scene Simulation was Digital Productions’ philosophy for creating visual excellence in computer-generated imagery and simulation. The approach it advocated required the use of powerful hardware, sophisticated software, and top creative talent. With a CRAY supercomputer at the heart of its computer network and its own proprietary image rendering and simulation software, Digital Productions was revolutionizing state-of-the-art computer graphics. At the forefront of computer graphics technology, Digital Productions was redefining traditional methods of visual communications and creating new forms of self-expression, instruction, and entertainment.

(From the Abstract of the invited paper “Digital scene simulations: The synergy of computer technology and human creativity”, by Demos, G.; Brown, M.D.; and Weinberg, R.A. Proceedings of the IEEE, Volume: 72 , Issue: 1, Jan. 1984, Page(s): 22 – 31 )

Related Glossary Terms:

Term Source: Chapter 6 – Information International Inc. (Triple-I), Chapter 6 – Information International Inc. (Triple-I)

 

Digitize

  1. to convert (data) to digital form for use in a computer.
  2. to convert (analog physical measurements) to digital form.

Related Glossary Terms: Digital

Term Source: Chapter 4 – MIT and Harvard

 

DOA

DOA == Digital/Omnibus/Abel

In about 1985, the Digital Productions board went along with a hostile takeover bid by Omnibus and its leader, John Pennie, breaking the agreement with partners John Whitney Jr. and Gary Demos. Later that same year, Omnibus also purchased Robert Abel and Associates. The huge amount of debt, much of it provided by the Royal Bank of Canada, proved to be a burden for the company, and it declared bankruptcy only nine months later, on April 13, 1987. The closure had significant rippling effects on the CG industry and impacted the lives of many top-flight CG professionals.

Related Glossary Terms: Abel, Robert, Demos, Gary, Pennie, John

Term Source: Chapter 8 – Wavefront Technologies

 

Drum plotter

A graphics output device that draws lines with a continuously moving pen on a sheet of paper rolled around a rotating drum that moves the paper in a direction perpendicular to the motion of the pen.

Related Glossary Terms:

Term Source: Chapter 10 – MCS / CalComp / McAuto

 

Dynamics

In the field of physics, dynamics is the study of the causes of motion and of changes in motion: in other words, the study of forces and of why objects move. Dynamics includes the study of the effect of torques on motion. It stands in contrast to kinematics, the branch of classical mechanics that describes the motion of objects without consideration of the causes leading to that motion.
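For illustration (a hypothetical falling body, stepped with simple Euler integration), a dynamics simulation starts from the force and derives the motion via F = ma, where kinematics would simply prescribe the positions:

```python
# Dynamics sketch: gravity (the cause) produces the motion (the effect).
mass, g, dt = 1.0, -9.8, 0.1   # kg, m/s^2, s
pos, vel = 100.0, 0.0          # body dropped from 100 m

for _ in range(10):            # simulate one second in 0.1 s steps
    force = mass * g           # the force acting on the body
    vel += (force / mass) * dt # acceleration changes velocity
    pos += vel * dt            # velocity changes position

print(round(pos, 2))           # 94.61 with this step size; the exact
                               # answer is 100 - 0.5*9.8*1^2 = 95.1
```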

Related Glossary Terms: Kinematics

Term Source: Chapter 8 – Introduction

 

E

Elin, Larry

Larry Elin started his career in 1973 as an animator at Mathematical Applications Group, Inc. in Elmsford, NY, one of the first 3-D computer animation companies. By 1980, Elin had become head of production, and hired Chris Wedge, who later founded Blue Sky Studios, among others. Elin and Wedge were the key animators on MAGI’s work on the feature film Tron, which included the Lightcycle, Recognizer, and Tank sequences. Elin later became executive producer at Kroyer Films, which produced the animation for FernGully: The Last Rainforest.

Related Glossary Terms:

Term Source: Chapter 6 – MAGI

 

Ellipsoids

A geometric surface, symmetrical about the three coordinate axes, whose plane sections are ellipses or circles. Standard equation: x²/a² + y²/b² + z²/c² = 1, where ±a, ±b, and ±c are the intercepts on the x-, y-, and z-axes.
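A quick way to use the standard equation is to evaluate its left-hand side: a value less than 1 means the point is inside the ellipsoid, exactly 1 means it lies on the surface (an illustrative sketch):

```python
# Evaluate x^2/a^2 + y^2/b^2 + z^2/c^2 for a point (x, y, z).
def ellipsoid_value(x, y, z, a, b, c):
    return (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2

print(ellipsoid_value(2, 0, 0, 2, 1, 1))       # 1.0: on the surface (an x-intercept)
print(ellipsoid_value(0, 0, 0, 2, 1, 1) < 1)   # True: the center is inside
```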

Related Glossary Terms:

Term Source: Chapter 13 – Other Approaches

 

Em, David

David Em is one of the first artists to make art with pixels. He was born in Los Angeles and grew up in South America. He studied painting at the Pennsylvania Academy of the Fine Arts and film directing at the American Film Institute. Em created digital paintings at the Xerox Palo Alto Research Center (Xerox PARC) in 1975 with SuperPaint, “the first complete digital paint system”. In 1976, he made an articulated 3D digital insect at Information International, Inc. (III) that could walk, jump, and fly, the first 3D character created by a fine artist.

Related Glossary Terms:

Term Source: Chapter 9 – David Em

 

Emshwiller, Ed

Emshwiller was one of the earliest video artists. With Scape-Mates (1972), he began his experiments in video, combining computer animation with live-action. In 1979, he produced Sunstone, a groundbreaking three-minute 3-D computer-generated video made at the New York Institute of Technology with Alvy Ray Smith. Now in the Museum of Modern Art’s video collection, Sunstone was exhibited at SIGGRAPH 79, the 1981 Mill Valley Film Festival and other festivals. In 1979, it was shown on WNET’s Video/Film Review, and a single Sunstone frame was used on the front cover of Fundamentals of Interactive Computer Graphics, published in 1982 by Addison-Wesley.

Related Glossary Terms:

Term Source: Chapter 9 – Ed Emshwiller

 

Engelbart, Douglas C.

Douglas Carl Engelbart (born January 30, 1925) is an American inventor and an early computer and internet pioneer. He is best known for his work on the challenges of human–computer interaction, particularly while at his Augmentation Research Center Lab at SRI International, resulting in the invention of the computer mouse and the development of hypertext, networked computers, and precursors to graphical user interfaces.

Related Glossary Terms:

Term Source: Chapter 3 – Input devices

 

ENIAC

ENIAC stands for Electronic Numerical Integrator and Computer. It was a secret World War II military project carried out by John Mauchly, a 32-year-old professor at Penn’s Moore School of Electrical Engineering and John Presper Eckert Jr., a 24-year-old genius inventor and lab assistant. The challenge was to speed up the tedious mathematical calculations needed to produce artillery firing tables for the Army. ENIAC was not completed until after the war but it performed until 1955 at Aberdeen, Md. ENIAC was enormous. It contained 17,500 vacuum tubes, linked by 500,000 soldered connections. It filled a 50-foot long basement room and weighed 30 tons. Today, a single microchip, no bigger than a fingernail, can do more than those 30 tons of hardware.

Related Glossary Terms:

Term Source: Chapter 2 – Programming and Artistry

 

Environment mapping

Environment mapping is a technique that simulates the results of ray-tracing. Because environment mapping is performed using texture mapping hardware, it can obtain global reflection and lighting results in real-time.

Environment mapping is essentially the process of pre-computing a texture map and then sampling texels from this texture during the rendering of a model. The texture map is a projection of 3D space to 2D space.
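An illustrative sketch of the lookup, assuming a latitude-longitude projection (the projection choice is an assumption of this example; sphere and cube maps are also common):

```python
import math

# Map a reflection direction R to (u, v) texture coordinates in [0, 1).
def reflect_to_uv(rx, ry, rz):
    u = 0.5 + math.atan2(rz, rx) / (2 * math.pi)             # longitude
    v = 0.5 - math.asin(max(-1.0, min(1.0, ry))) / math.pi   # latitude
    return u, v

# Sample a texel from a precomputed environment image (nearest neighbor).
def sample(envmap, u, v):
    h, w = len(envmap), len(envmap[0])
    return envmap[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

# A tiny 2x4 "environment": bright sky on top, dark ground below.
envmap = [[255, 255, 255, 255],
          [ 20,  20,  20,  20]]
print(sample(envmap, *reflect_to_uv(0, 1, 0)))   # reflection pointing up -> 255
print(sample(envmap, *reflect_to_uv(0, -1, 0)))  # reflection pointing down -> 20
```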

Related Glossary Terms: Reflection mapping

Term Source:

 

Euler operators

In mathematics, Euler operators are a small set of operators to create polygon meshes. They are closed and sufficient on the set of meshes, and they are invertible.

A “polygon mesh” can be thought of as a graph, with vertices, and with edges that connect these vertices. In addition to a graph, a mesh has also faces: Let the graph be drawn (“embedded”) in a two-dimensional plane, in such a way that the edges do not cross (which is possible only if the graph is a planar graph). Then the contiguous 2D regions on either side of each edge are the faces of the mesh.

Related Glossary Terms:

Term Source: Chapter 5 – Other labs and NSF

 

Evans, David

David Cannon Evans (February 24, 1924 – October 3, 1998) was the founder of the computer science department at the University of Utah and co-founder (with Ivan Sutherland) of Evans & Sutherland, a computer firm which is known as a pioneer in the domain of computer-generated imagery.

Related Glossary Terms:

Term Source: Chapter 4 – University of Utah

 

F

Facial animation

Computer facial animation is primarily an area of computer graphics that encapsulates models and techniques for generating and animating images of the human head and face. Due to its subject and output type, it is also related to many other scientific and artistic fields from psychology to traditional animation. The importance of human faces in verbal and non-verbal communication and advances in computer graphics hardware and software have caused considerable scientific, technological, and artistic interests in computer facial animation.

Related Glossary Terms: Kinematics, Motion capture

Term Source: Chapter 8 – Alias/Wavefront

 

Farnsworth, Philo

Philo Taylor Farnsworth was an American inventor and television pioneer. Although he made many contributions that were crucial to the early development of all-electronic television, he is perhaps best known for inventing the first fully functional all-electronic image pickup device (video camera tube), the “image dissector”, the first fully functional and complete all-electronic television system, and for being the first person to demonstrate such a system to the public. Farnsworth developed a television system complete with receiver and camera, which he produced commercially in the firm of the Farnsworth Television and Radio Corporation, from 1938 to 1951.

Related Glossary Terms:

Term Source: Chapter 1 – Electronic devices

 

Fetter, William

William Fetter was a graphic designer for Boeing Aircraft Co. and in 1960, was credited with coining the phrase “Computer Graphics” to describe what he was doing at Boeing at the time.

Related Glossary Terms:

Term Source: Chapter 2 – Programming and Artistry

 

Film recorder

A Film Recorder is a graphical output device for transferring digital images to photographic film.

All film recorders work in roughly the same manner. The image is fed from a host computer as a raster stream over a digital interface. A film recorder then exposes the film through one of several mechanisms: a flying spot (early recorders); photographing a high-resolution video monitor; an electron beam recorder (Sony); a CRT scanning dot (Celco); a focused beam of light from an LVT (Light Valve Technology) recorder; a scanning laser beam (ARRILASER); or, more recently, full-frame LCD array chips.

Related Glossary Terms: Optical printers

Term Source: Chapter 6 – Digital Effects

 

Finite Element Analysis

The finite element method (FEM) (its practical application often known as finite element analysis (FEA)) is a numerical technique for finding approximate solutions of partial differential equations (PDE) as well as integral equations. The solution approach is based either on eliminating the differential equation completely (steady state problems), or rendering the PDE into an approximating system of ordinary differential equations, which are then numerically integrated using standard techniques such as Euler’s method, Runge-Kutta, etc.
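As a toy illustration (the simplest possible steady-state problem, chosen for this sketch, not a production FEM code): solving -u'' = 1 on [0,1] with u(0) = u(1) = 0 using piecewise-linear elements reduces the PDE to a small tridiagonal linear system K u = f.

```python
# 1D finite element sketch for -u'' = 1, u(0) = u(1) = 0.
n = 4                         # number of elements
h = 1.0 / n
# interior nodes only; the stiffness matrix has 2/h on the diagonal, -1/h off it
diag = [2.0 / h] * (n - 1)
off  = [-1.0 / h] * (n - 2)
rhs  = [h * 1.0] * (n - 1)    # load vector from the constant source term

# Thomas algorithm (tridiagonal Gaussian elimination)
for i in range(1, n - 1):
    m = off[i - 1] / diag[i - 1]
    diag[i] -= m * off[i - 1]
    rhs[i] -= m * rhs[i - 1]
u = [0.0] * (n - 1)
u[-1] = rhs[-1] / diag[-1]
for i in range(n - 3, -1, -1):
    u[i] = (rhs[i] - off[i] * u[i + 1]) / diag[i]

print([round(v, 5) for v in u])  # compare with the exact solution u(x) = x(1-x)/2
```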

Related Glossary Terms:

Term Source: Chapter 10 – SDRC / Unigraphics

 

Flicker

A visual sensation, often seen in a television or CRT image, produced by periodic fluctuations in the brightness of light at a frequency below that covered by the persistence of vision; it is often due to the rate at which the image on the screen is refreshed.

Related Glossary Terms: Cathode Ray Tube

Term Source: Chapter 3 – Other output devices

 

Floating point

A real number (that is, a number that can contain a fractional part). The following are floating-point numbers: 3.0, -111.5, 1⁄2, and 3E-5. The last example is computer shorthand for scientific notation: it means 3×10⁻⁵ (3 multiplied by 10 to the negative 5th power).

The term floating point is derived from the fact that there is no fixed number of digits before and after the decimal point; that is, the decimal point can float. There are also representations in which the number of digits before and after the decimal point is set, called fixed-point representations. In general, floating-point representations are slower and less accurate than fixed-point representations, but they can handle a larger range of numbers.
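A short Python sketch of the trade-off (the bit layout shown is IEEE 754 single precision, the format most graphics hardware uses):

```python
import struct

x = 3e-5                          # 3 * 10^-5 in scientific notation
print(x)                          # 3e-05

# Fixed point with 4 fractional digits: store an integer count of 1/10000ths.
fixed = round(0.1 * 10_000) + round(0.2 * 10_000)
print(fixed / 10_000)             # 0.3 exactly, within the fixed scale
print(0.1 + 0.2)                  # 0.30000000000000004 in binary floating point

# The raw 32-bit IEEE 754 pattern behind the float 1.5:
bits, = struct.unpack(">I", struct.pack(">f", 1.5))
print(f"{bits:032b}")             # 1 sign bit, 8 exponent bits, 23 fraction bits
```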

Related Glossary Terms:

Term Source: Chapter 15 – Graphics Accelerators

 

Foonly F1

Foonly was the computer company formed by Dave Poole, who was one of the principal Super Foonly designers. The Foonly was to be a successor to the DEC PDP-10, and was to have been built (along with a new operating system) by the Super Foonly project at the Stanford Artificial Intelligence Laboratory (SAIL). The intention was to leapfrog from the old DEC timesharing system SAIL was then running to a new generation, bypassing TENEX which at that time was the ARPANET standard. ARPA funding for both the Super Foonly and the new operating system was cut in 1974. The design for Foonly contributed greatly to the design of the PDP-10 model KL10. One of the prototype models was built for Information International Incorporated (Triple-I) and was used to compute CG for TRON.

Related Glossary Terms:

Term Source: Chapter 6 – Information International Inc. (Triple-I)

 

Forced perspective

Forced perspective is a technique that employs optical illusion to make an object appear farther away, closer, larger or smaller than it actually is. It is used primarily in photography, filmmaking and architecture. It manipulates human visual perception through the use of scaled objects and the correlation between them and the vantage point of the spectator or camera.

Related Glossary Terms:

Term Source: Chapter 14 – CGI and Effects in Films and Music Videos

 

Foreshortening

Foreshortening occurs when an object appears compressed when seen from a particular viewpoint, and the effect of perspective causes distortion. Foreshortening is a particularly effective artistic device, used to give the impression of three-dimensional volume and create drama in a picture.

Foreshortening is most successful when accurately rendered on the picture plane to create the illusion of a figure in space.

Related Glossary Terms:

Term Source: Chapter 20 – CG Icons

 

Form factor

In radiative heat transfer, a form factor is the proportion of the radiation leaving surface A that strikes surface B.

In radiosity calculations, the “form factor” describes the fraction of energy which leaves one surface and arrives at a second surface. It takes into account the distance between the surfaces, computed as the distance between the center of each of the surfaces, and their orientation in space relative to each other, computed as the angle between each surface’s normal vector and a vector drawn from the center of one surface to the center of the other surface. It is a dimensionless quantity.
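The center-to-center approximation described above can be sketched as follows (illustrative only; a full radiosity solver integrates over both patch areas and accounts for occlusion):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# F ~= (cos theta1 * cos theta2 * A2) / (pi * r^2), using patch centers and normals.
def form_factor(c1, n1, c2, n2, area2):
    d = [b - a for a, b in zip(c1, c2)]   # vector from patch 1's center to patch 2's
    r2 = dot(d, d)                        # squared center-to-center distance
    r = math.sqrt(r2)
    d = [x / r for x in d]                # normalize
    cos1 = max(0.0, dot(n1, d))           # angle at patch 1's normal
    cos2 = max(0.0, dot(n2, [-x for x in d]))  # angle at patch 2's normal
    return cos1 * cos2 * area2 / (math.pi * r2)

# Two unit-area patches facing each other, 1 unit apart.
f = form_factor((0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1), 1.0)
print(round(f, 4))  # 1/pi, about 0.3183
```

The result is dimensionless, as the entry notes: the area and the squared distance cancel each other’s units.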

Related Glossary Terms: Radiosity

Term Source: Chapter 19 – Global Illumination

 

Fractal

A geometrical or physical structure having an irregular or fragmented shape at all scales of measurement between a greatest and a smallest scale, such that certain mathematical or physical properties of the structure (such as the perimeter of a curve or the flow rate in a porous medium) behave as if the dimensions of the structure (its fractal dimensions) are greater than the spatial dimensions.

A fractal is a rough or fragmented geometric shape that can be subdivided in parts, each of which is (at least approximately) a reduced-size copy of the whole. Fractals are generally self-similar and independent of scale, that is they have similar properties at all levels of magnification or across all times.
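The self-similarity can be illustrated with the Koch curve, a standard textbook example (used here only for illustration): each refinement replaces every segment with four segments one-third as long, so the total length grows by a factor of 4/3 at every level, the hallmark of a fractal dimension greater than 1.

```python
# Total length of the Koch curve after a given number of refinement levels.
def koch_length(levels, base=1.0):
    length = base
    for _ in range(levels):
        length *= 4 / 3   # 4 new segments, each 1/3 the old length
    return length

for lvl in range(4):
    print(lvl, koch_length(lvl))   # the length grows without bound as levels increase
```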

Related Glossary Terms:

Term Source: Chapter 19 – Noise functions and Fractals

 

Frame buffer

A frame buffer (or framebuffer) is a video output device that drives a video display from a memory buffer containing a complete frame of data. The information in the memory buffer typically consists of color values for every pixel (point that can be displayed) on the screen. Color values are commonly stored in 1-bit binary (monochrome), 4-bit palettized, 8-bit palettized, 16-bit high color and 24-bit true color formats. An additional alpha channel is sometimes used to retain information about pixel transparency. The total amount of memory required to drive the frame buffer depends on the resolution of the output signal, and on the color depth and palette size.

Frame buffers differ significantly from the vector displays that were common prior to the advent of the frame buffer. With a vector display, only the vertices of the graphics primitives are stored. The electron beam of the output display is then commanded to move from vertex to vertex, tracing an analog line across the area between these points. With a frame buffer, the electron beam (if the display technology uses one) is commanded to trace a left-to-right, top-to-bottom path across the entire screen, the way a television renders a broadcast signal. At the same time, the color information for each point on the screen is pulled from the frame buffer, creating a set of discrete picture elements (pixels).
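A minimal sketch of the idea: the frame buffer is just an array of per-pixel RGBA values that the display reads back in scan order (the tiny dimensions here are illustrative):

```python
# A toy frame buffer: one RGBA tuple per pixel, scanned top-to-bottom,
# left-to-right like a raster display.
WIDTH, HEIGHT = 4, 3

# 24-bit color plus an 8-bit alpha channel, initialized to opaque black.
framebuffer = [[(0, 0, 0, 255) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, rgba):
    framebuffer[y][x] = rgba

set_pixel(2, 1, (255, 0, 0, 255))   # write one red pixel

# "Scan out" the buffer the way a raster display reads it.
for row in framebuffer:
    print(" ".join("R" if p[0] else "." for p in row))

# Memory needed at 4 bytes (RGBA) per pixel:
print(WIDTH * HEIGHT * 4, "bytes")
```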

Related Glossary Terms: A-buffer or Alpha-buffer

Term Source: Chapter 6 – Information International Inc. (Triple-I)

 

Frame-grabbing

A frame grabber is an electronic device that captures individual, digital still frames from an analog video signal or a digital video stream. It is usually employed as a component of a computer vision system, in which video frames are captured in digital form and then displayed, stored or transmitted in raw or compressed digital form. Historically, frame grabbers were the predominant way to interface cameras to PCs.

Related Glossary Terms:

Term Source: Chapter 16 – Amiga

 

Free-form surface

Free-form surface, or freeform surfacing, is used in CAD and other computer graphics software to describe the skin of a 3D geometric element. Freeform surfaces do not have rigid radial dimensions, unlike regular surfaces such as planes, cylinders and conic surfaces. They are used to describe forms such as turbine blades, car bodies and boat hulls. Initially developed for the automotive and aerospace industries, freeform surfacing is now widely used in all engineering design disciplines, from consumer goods to ships. Most systems today use nonuniform rational B-spline (NURBS) mathematics to describe the surface forms; however, there are other methods such as Gordon surfaces or Coons surfaces.

Related Glossary Terms: B-rep, Solids modeling

Term Source: Chapter 10 – Intergraph / Bentley / Dassault

 

Fuchs, Henry

Prof. Henry Fuchs is a fellow of the American Academy of Arts and Sciences (AAAS) and the Association for Computing Machinery (ACM), and the Federico Gil Professor of Computer Science at the University of North Carolina at Chapel Hill (UNC). He is also an adjunct professor in biomedical engineering. His research interests are in computer graphics, particularly rendering algorithms, hardware, virtual environments, telepresence systems, and applications in medicine. In 1992, he received both the ACM SIGGRAPH Achievement Award and the Academic Award of the National Computer Graphics Association (NCGA).

Related Glossary Terms:

Term Source: Chapter 5 – UNC and Toronto

 

G

Gates, Bill

William Henry “Bill” Gates III is the former chief executive and current chairman of Microsoft, the world’s largest personal-computer software company, which he co-founded with Paul Allen. He is consistently ranked among the world’s wealthiest people. During his career at Microsoft, Gates held the positions of CEO and chief software architect, and remains the largest individual shareholder.

Related Glossary Terms:

Term Source: Chapter 16 – The IBM PC and Unix

 

Gehring, Bo

Bo Gehring was hired by Phil Mittleman of MAGI in 1972 to develop the division of the company focused on computer image making (MAGI Synthavision). He was the principal of Gehring Aviation and Bo Gehring Associates in Venice, California, and originally came to the west coast to do computer animation tests for Steven Spielberg’s Close Encounters of the Third Kind.

Related Glossary Terms:

Term Source: Chapter 6 – Bo Gehring and Associates

 

Genlocking

Genlock (generator locking) is a common technique where the video output of one source, or a specific reference signal from a signal generator, is used to synchronize other television picture sources together. The aim in video applications is to ensure the coincidence of signals in time at a combining or switching point. When video instruments are synchronized in this way, they are said to be generator locked, or genlocked.

Related Glossary Terms:

Term Source: Chapter 16 – Amiga

 

Global illumination

Global illumination is a general name for a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light which comes directly from a light source (direct illumination), but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not (indirect illumination).

Related Glossary Terms:

Term Source: Chapter 19 – Global Illumination

 

Glyphs

A glyph (pronounced GLIHF; from a Greek word meaning carving) is a graphic symbol that provides the appearance or form for a character. A glyph can be an alphabetic or numeric character or some other symbol that pictures an encoded character.

It is a particular graphical representation, in a particular typeface, of a grapheme, or sometimes several graphemes in combination (a composed glyph), or a part of a grapheme. It can also be a grapheme or grapheme-like unit of text, as found in natural language writing systems (scripts). It may be a letter, a numeral, a punctuation mark, or a pictographic or decorative symbol such as dingbats. A character or grapheme is an abstract unit of text, whereas a glyph is a graphical unit.

For example, the sequence ffi contains three characters, but can be represented by one glyph, the three characters being combined into a single unit known as a ligature. Conversely, some typewriters require the use of multiple glyphs to depict a single character (for example, two hyphens in place of an em-dash, or an overstruck apostrophe and period in place of an exclamation mark).

Related Glossary Terms:

Term Source: Chapter 16 – Xerox PARC

 

Gouraud shading

Gouraud shading, named after Henri Gouraud, is an interpolation method used in computer graphics to produce continuous shading of surfaces represented by polygon meshes. In practice, Gouraud shading is most often used to achieve continuous lighting on triangle surfaces by computing the lighting at the corners of each triangle and linearly interpolating the resulting colors for each pixel covered by the triangle. Gouraud first published the technique in 1971.
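The corner-lighting-then-interpolation idea can be sketched in a few lines. This is a minimal illustration, not production renderer code; the function name and the use of barycentric coordinates for the per-pixel weights are assumptions for the example:

```python
import numpy as np

def gouraud_shade(vertex_colors, bary):
    """Interpolate colors computed at the three triangle vertices
    across the interior using barycentric weights (Gouraud shading)."""
    # bary is a length-3 weight vector summing to 1; the result is the
    # weighted blend of the three per-vertex colors.
    return bary @ vertex_colors

# Lighting is evaluated once per vertex (here just fixed RGB values),
# then linearly interpolated for every covered pixel.
vertex_colors = np.array([[1.0, 0.0, 0.0],   # color at vertex A
                          [0.0, 1.0, 0.0],   # color at vertex B
                          [0.0, 0.0, 1.0]])  # color at vertex C
centroid = np.array([1/3, 1/3, 1/3])         # barycentric coords of the centroid
print(gouraud_shade(vertex_colors, centroid))  # an even blend of all three
```

Because only the vertex colors are interpolated, a highlight that falls entirely inside a triangle can be missed, which is the classic limitation Phong shading addresses.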

Related Glossary Terms: Continuous shading, Phong shading

Term Source: Chapter 14 – CGI and Effects in Films and Music Videos

 

Graphics acceleration

Graphics accelerators are a type of graphics hardware that contains its own processor to boost performance levels. These processors are specialized for computing graphical transformations, so they achieve better results than the general-purpose CPU used by the computer. In addition, they free up the computer’s CPU to execute other commands while the graphics accelerator is handling graphics computations.

The popularity of graphical applications, and especially multimedia applications, has made graphics accelerators not only a common enhancement, but a necessity. Most computer manufacturers now bundle a graphics accelerator with their mid-range and high-end systems.

Related Glossary Terms:

Term Source: Chapter 13 – Evans and Sutherland, Chapter 15 – Graphics Accelerators

 

Graphics processing unit

A graphics processing unit or GPU (also occasionally called visual processing unit or VPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory in such a way as to accelerate the building of images in a frame buffer intended for output to a display.

Related Glossary Terms: Graphics acceleration

Term Source: Chapter 15 – Graphics Accelerators

 

Graphics tablet

A graphics tablet (or digitizing tablet, graphics pad, drawing tablet) is a computer input device that allows one to hand-draw images and graphics, similar to the way one draws images with a pencil and paper. These tablets may also be used to capture data of handwritten signatures.

A graphics tablet (also called a pen pad) consists of a flat surface upon which the user may “draw” an image using an attached stylus, a pen-like drawing apparatus. The image generally does not appear on the tablet itself but, rather, is displayed on the computer monitor. Some tablets, however, come as a functioning secondary computer screen that the user can interact with directly using the stylus.

Related Glossary Terms:

Term Source: Chapter 3 – Input devices

 

Graphics workstation

A workstation is a high-end microcomputer designed for technical or scientific applications. Workstations are intended primarily to be used by one person at a time, are commonly connected to a local area network, and run multi-user operating systems.

Historically, workstations had offered higher performance than desktop computers, especially with respect to CPU and graphics, memory capacity, and multitasking capability. Graphics workstations are optimized for the visualization and manipulation of different types of complex data such as 3D mechanical design, engineering simulation (e.g. computational fluid dynamics), animation and rendering of images, and mathematical plots. Consoles consist of a high resolution display, a keyboard and a mouse at a minimum, but also offer multiple displays, graphics tablets, 3D mice (devices for manipulating 3D objects and navigating scenes), etc.

Related Glossary Terms:

Term Source: Chapter 15 – Apollo / Sun / SGI

 

Greenberg, Donald P.

Donald Peter Greenberg is the Jacob Gould Schurman Professor of Computer Graphics at Cornell University. He joined the Cornell faculty in 1968 with a joint appointment in the College of Engineering and College of Architecture. He currently serves as Director of the Program of Computer Graphics.

In 1971, Greenberg produced an early sophisticated computer graphics movie, Cornell in Perspective, using the General Electric Visual Simulation Laboratory. Greenberg also co-authored a series of papers on the Cornell Box.

An internationally recognized pioneer in computer graphics, Greenberg has authored hundreds of articles and served as a teacher and mentor to many prominent computer graphic artists and animators. Greenberg was the founding director of the National Science Foundation Science and Technology Center for Computer Graphics and Scientific Visualization when it was created in 1991.

Greenberg received the Steven Anson Coons Award in 1987, the most prestigious award in the field of computer graphics.

Related Glossary Terms:

Term Source: Chapter 5 – Cornell and NYIT

 

GUI (Graphical User Interface)

An interface for issuing commands to a computer utilizing a pointing device, such as a mouse, that manipulates and activates graphical images on a monitor.

Related Glossary Terms:

Term Source: Chapter 3 – Work continues at MIT, Chapter 16 – Xerox PARC

 

H

Haptic

Haptic technology, or haptics, is a tactile feedback technology which takes advantage of the sense of touch by applying forces, vibrations, or motions to the user. This mechanical stimulation can be used to assist in the creation of virtual objects in a computer simulation, to control such virtual objects, and to enhance the remote control of machines and devices (telerobotics). It has been described as “doing for the sense of touch what computer graphics does for vision”. Haptic devices may incorporate tactile sensors that measure forces exerted by the user on the interface.

Related Glossary Terms:

Term Source: Chapter 17 – Interaction

 

Hausdorff-Besicovich dimension

The Hausdorff dimension (also known as the Hausdorff–Besicovitch dimension) is an extended non-negative real number associated with a metric space. The Hausdorff dimension generalizes the notion of the dimension of a real vector space in that the Hausdorff dimension of an n-dimensional inner product space equals n. This means, for example, the Hausdorff dimension of a point is zero, the Hausdorff dimension of a line is one, and the Hausdorff dimension of the plane is two. There are, however, many irregular sets that have noninteger Hausdorff dimension. The concept was introduced in 1918 by the mathematician Felix Hausdorff. Many of the technical developments used to compute the Hausdorff dimension for highly irregular sets were obtained by Abram Samoilovitch Besicovitch.

Related Glossary Terms:

Term Source: Chapter 19 – Noise functions and Fractals

 

Head-mounted displays

A head-mounted display or helmet mounted display, both abbreviated HMD, is a display device, worn on the head or as part of a helmet, that has a small display optic in front of one (monocular HMD) or each eye (binocular HMD).

A typical HMD has either one or two small displays with lenses and semi-transparent mirrors embedded in a helmet, eye-glasses (also known as data glasses) or visor. The display units are miniaturized and may include CRTs, LCDs, liquid crystal on silicon (LCoS), or OLEDs.

Related Glossary Terms: Stereoscopic display

Term Source: Chapter 17 – Virtual Reality

 

Heads-up display

A head-up display or heads-up display—also known as a HUD—is any transparent display that presents data without requiring users to look away from their usual viewpoints. The origin of the name stems from a pilot being able to view information with the head positioned “up” and looking forward, instead of angled down looking at lower instruments.

Although they were initially developed for military aviation, HUDs are now used in commercial aircraft, automobiles, and other applications.

Related Glossary Terms:

Term Source: Chapter 17 – Interaction

 

Height maps

In computer graphics, a height map or height field is a raster image used to store values, such as surface elevation data, for display in 3D computer graphics. A height map can be used in bump mapping to calculate where this 3D data would create shadow in a material, in displacement mapping to displace the actual geometric position of points over the textured surface, or for terrain where the height map is converted into a 3D mesh.
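The terrain case can be made concrete with a small sketch: each grid cell of the raster supplies an (x, y) position and the stored sample supplies the height z. The function name and the nested-list representation of the raster are illustrative assumptions, not from the source:

```python
def heightmap_to_vertices(heights, scale=1.0):
    """Convert a 2D grid of elevation samples into (x, y, z) mesh
    vertices: grid position gives x and y, the stored value gives z."""
    verts = []
    for y, row in enumerate(heights):
        for x, h in enumerate(row):
            verts.append((x * scale, y * scale, h))
    return verts

# A tiny 2x2 height field; a real terrain would triangulate these
# vertices into a mesh of connected faces.
terrain = [[0.0, 0.2],
           [0.1, 0.5]]
print(heightmap_to_vertices(terrain))
```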

Related Glossary Terms:

Term Source: Chapter 13 – Other Approaches

 

Hidden line elimination

Hidden line elimination is an extension of wireframe model rendering where lines (or segments of lines) covered by surfaces of a model are not drawn, resulting in a more accurate representation of a 3D object.

Related Glossary Terms: Hidden surfaces

Term Source:

 

Hidden surfaces

In 3D computer graphics, hidden surface determination (also known as hidden surface removal (HSR), occlusion culling (OC) or visible surface determination (VSD)) is the process used to determine which surfaces and parts of surfaces are not visible from a certain viewpoint. A hidden surface determination algorithm is a solution to the visibility problem, which was one of the first major problems in the field of 3D computer graphics.

Related Glossary Terms: Hidden line elimination

Term Source: Chapter 17 – Virtual Reality

 

Hopper, Grace

Rear Admiral Grace Murray Hopper was an American computer scientist and United States Navy officer. A pioneer in the field, she was one of the first programmers of the Harvard Mark I computer, and developed the first compiler for a computer programming language. She conceptualized the idea of machine-independent programming languages, which led to the development of COBOL, one of the first modern programming languages. She is credited with popularizing the term “debugging” for fixing computer glitches (motivated by an actual moth removed from the computer).

Related Glossary Terms:

Term Source: Chapter 2 – Programming and Artistry

 

I

I&D architectures

I&D (instructions and data) refers to the ability to address instructions and data in the same computer “word.”

Related Glossary Terms:

Term Source: Chapter 3 – TX-2 and DEC

 

Image processing

In imaging science, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.

Related Glossary Terms:

Term Source: Chapter 13 – NASA

 

Imax

IMAX is a motion picture film format and a set of proprietary cinema projection standards created by the Canadian company IMAX Corporation. IMAX has the capacity to record and display images of far greater size and resolution than conventional film systems.

Related Glossary Terms:

Term Source: Chapter 11 – Sogitec Audiovisuel

 

Ink and paint

Digital ink-and-paint is the computerized version of finalizing animation art using scanning, instead of inking, for each pencil drawing, and digitally coloring instead of hand-painting each cel. With all the ink-and-paint programs now available it is possible to drop fill (single-click paint an entire enclosed area) or use a digital paintbrush to fill colors into characters.

Related Glossary Terms:

Term Source: Chapter 11 – Metrolight / Rezn8

 

Integrated circuit

A circuit of transistors, resistors, and capacitors constructed on a single semiconductor wafer or chip, in which the components are interconnected to perform a given function. Abbreviation: IC

Related Glossary Terms:

Term Source: Chapter 1 – Electronic devices

 

Interpolation

Linear interpolation, in computer graphics often called “LERP,” is a very simple (if not the simplest) method of interpolation.

For a set of discrete values, linear interpolation can approximate other values in between, assuming a linear development between these discrete values. An interpolated value, calculated with linear interpolation, is calculated only with respect to the two surrounding values, which makes it a poor choice if the desired curve should be smooth. If a smoother interpolation is needed, cubic interpolation or splines might be an option.

Linear interpolation is the simplest method of getting values at positions in between the data points. The points are simply joined by straight line segments.

Cubic interpolation is the simplest method that offers true continuity between segments. As such it requires more than just the two endpoints of the segment but also the two points on either side of them. So the function requires 4 points in all.
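The two methods above can be sketched side by side. This is a minimal illustration; the cubic form shown is the Catmull-Rom variant (one common choice of 4-point cubic), and the function names are assumptions for the example:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def catmull_rom(p0, p1, p2, p3, t):
    """Cubic interpolation between p1 and p2, using the neighboring
    points p0 and p3 to establish continuity across segments
    (Catmull-Rom form, one common 4-point cubic)."""
    return 0.5 * ((2 * p1) +
                  (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2 +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

print(lerp(0.0, 10.0, 0.25))         # 2.5
print(catmull_rom(0, 1, 2, 3, 0.5))  # 1.5 -- midway between p1=1 and p2=2
```

Note how the cubic form consumes four points to produce a value between the middle two; chaining such segments gives a curve that is smooth where linearly interpolated segments would have corners.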

Related Glossary Terms:

Term Source: Chapter 8 – Introduction

 

Isolines

An isoline (also contour line, isopleth, or isarithm) of a function of two variables is a curve along which the function has a constant value. For example, in cartography, a contour line (often just called a “contour”) joins points of equal elevation (height) above a given level, such as mean sea level. A contour map is a map illustrated with contour lines, for example a topographic map, which thus shows valleys and hills, and the steepness of slopes. The contour interval of a contour map is the difference in elevation between successive contour lines.

Related Glossary Terms: Isosurfaces

Term Source: Chapter 18 – Introduction

 

Isosurfaces

An isosurface is a three-dimensional analog of an isoline. It is a surface that represents points of a constant value (e.g. pressure, temperature, velocity, density) within a volume of space; in other words, it is a level set of a continuous function whose domain is 3D-space.

Related Glossary Terms: Contour plots, Isolines

Term Source: Chapter 18 – Introduction

 

Iterated function systems

In mathematics, iterated function systems or IFSs are a method of constructing fractals; the resulting constructions are always self-similar.

IFS is the term originally devised by Michael Barnsley and Steven Demko for a collection of contraction mappings over a complete metric space, typically compact subsets of Rⁿ. The landmark papers of John Hutchinson and, independently, Barnsley and Demko showed how such systems of mappings with associated probabilities could be used to construct fractal sets and measures: the former from a geometric measure theory setting and the latter from a probabilistic setting.

http://links.uwaterloo.ca/ResearchIFSFractalCoding.html
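A standard way to render an IFS attractor is the “chaos game”: repeatedly apply a randomly chosen contraction to a point, and the points settle onto the self-similar set. The sketch below uses the three-map IFS whose attractor is the Sierpinski triangle; the function name and the particular corner coordinates are assumptions for the example:

```python
import random

# Three contraction mappings, each moving a point halfway toward
# one corner of a triangle -- together they define an IFS whose
# attractor is the Sierpinski triangle.
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(n, seed=1):
    """Iterate a randomly chosen contraction; after a few steps the
    points lie (to within floating-point precision) on the attractor."""
    random.seed(seed)
    x, y = 0.5, 0.5
    points = []
    for _ in range(n):
        cx, cy = random.choice(CORNERS)
        x, y = (x + cx) / 2, (y + cy) / 2  # contract toward the chosen corner
        points.append((x, y))
    return points

pts = chaos_game(10000)
print(len(pts), "points on the Sierpinski triangle")
```

Plotting the returned points (e.g. as a scatter plot) reveals the familiar self-similar triangular gasket.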

Related Glossary Terms:

Term Source: Chapter 19 – Plants

 

J

Jaggies

“Jaggies” is the informal name for artifacts in raster images, most frequently from aliasing, which in turn is often caused by non-linear mixing effects producing high-frequency components and/or missing or poor anti-aliasing filtering prior to sampling.

Jaggies are stair-like lines that appear where there should be smooth straight lines or curves. For example, when a nominally straight line rendered without anti-aliasing steps across one pixel, a dogleg occurs halfway through the line, where it crosses the threshold from one pixel row to the next.

Related Glossary Terms: Antialiasing

Term Source: Chapter 15 – Graphics Accelerators

 

Jobs, Steve

Steven Paul “Steve” Jobs was an American entrepreneur who is best known as the co-founder, chairman, and chief executive officer of Apple Inc. Through Apple, he was widely recognized as a charismatic pioneer of the personal computer revolution and for his influential career in the computer and consumer electronics fields. Jobs also co-founded and served as chief executive of Pixar Animation Studios; he became a member of the board of directors of The Walt Disney Company in 2006, when Disney acquired Pixar.

Related Glossary Terms:

Term Source: Chapter 16 – Apple Computer

 

K

Kajiya, Jim

Jim Kajiya is a pioneer in the field of computer graphics. He is perhaps best known for the development of the rendering equation. Kajiya received his PhD from the University of Utah in 1979, was a professor at Caltech from 1979 through 1994, and is currently a researcher at Microsoft Research.

Related Glossary Terms:

Term Source: Chapter 5 – Cal Tech and North Carolina State, Chapter 19 – Global Illumination

 

Kawaguchi, Yoichiro

Yoichiro Kawaguchi is a Japanese computer graphics artist and a professor at the University of Tokyo. Kawaguchi rose to international prominence in 1982 when he presented “Growth Model” at the international conference SIGGRAPH.

Related Glossary Terms:

Term Source: Chapter 9 – Yoichiro Kawaguchi

 

Keyframe

A key frame in animation and filmmaking is a drawing that defines the starting and ending points of any smooth transition. They are called “frames” because their position in time is measured in frames on a strip of film. A sequence of keyframes defines which movement the viewer will see, whereas the position of the keyframes on the film, video or animation defines the timing of the movement. Because only two or three keyframes over the span of a second do not create the illusion of movement, the remaining frames are filled with in-betweens.

Related Glossary Terms:

Term Source: Chapter 4 – University of Utah, Chapter 4 – The Ohio State University, Chapter 4 – JPL and National Research Council of Canada

 

Kinematics

Forward kinematic animation is a method in 3D computer graphics for animating models.

The essential concept of forward kinematic animation is that the positions of particular parts of the model at a specified time are calculated from the position and orientation of the object, together with any information on the joints of an articulated model. So for example if the object to be animated is an arm with the shoulder remaining at a fixed location, the location of the tip of the thumb would be calculated from the angles of the shoulder, elbow, wrist, thumb and knuckle joints. Three of these joints (the shoulder, wrist and the base of the thumb) have more than one degree of freedom, all of which must be taken into account. If the model were an entire human figure, then the location of the shoulder would also have to be calculated from other properties of the model.

Forward kinematic animation can be distinguished from inverse kinematic animation by this means of calculation – in inverse kinematics the orientation of articulated parts is calculated from the desired position of certain points on the model. It is also distinguished from other animation systems by the fact that the motion of the model is defined directly by the animator – no account is taken of any physical laws that might be in effect on the model, such as gravity or collision with other models.
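The arm example above reduces to accumulating joint angles down the chain and summing segment offsets. Here is a minimal planar (2D) sketch of that calculation; the function name and the two-segment arm are illustrative assumptions:

```python
import math

def forward_kinematics(lengths, angles):
    """End-effector position of a planar articulated chain.

    Each joint angle is measured relative to the previous segment,
    so the absolute orientation accumulates as we walk down the chain
    -- exactly the forward-kinematic calculation described above."""
    x = y = 0.0
    total_angle = 0.0
    for length, angle in zip(lengths, angles):
        total_angle += angle
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
    return x, y

# A two-segment "arm": shoulder bent 90 degrees up, elbow bent 90
# degrees back, so the forearm ends up horizontal.
x, y = forward_kinematics([1.0, 1.0], [math.pi / 2, -math.pi / 2])
print(round(x, 6), round(y, 6))  # 1.0 1.0
```

Inverse kinematics runs this relationship in the other direction: given a desired (x, y) for the end effector, solve for the joint angles, typically iteratively since the mapping is nonlinear.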

Related Glossary Terms: Dynamics

Term Source: Chapter 8 – Introduction

 

Kinetic Art

Kinetic art is art that contains moving parts or depends on motion for its effect. The moving parts are generally powered by wind, a motor or the observer. Kinetic art encompasses a wide variety of overlapping techniques and styles.

Related Glossary Terms:

Term Source: Chapter 9 – Vera Molnar

 

Kleiser, Jeff

Jeff Kleiser is widely recognized as a leader in animation and visual effects. He has produced and directed visual effects for numerous award-winning television commercials, and has created unique location-based entertainment projects such as the 3D stereoscopic films Corkscrew Hill (for Busch Gardens), Santa Lights up New York (for Radio City Music Hall), and The Amazing Adventures of Spider-Man (for Universal Studios). Kleiser’s film credits range from Walt Disney’s Tron, the ground-breaking CGI movie released to critical acclaim in 1982, to recent Hollywood releases such as X-Men (including X-Men 2 and X-Men: The Last Stand), Fantastic Four, Scary Movie (3 and 4), Slither, Son of the Mask, Exorcist: The Beginning, and many more. In 1987 Kleiser and partner Diana Walczak founded the visual effects studio Kleiser-Walczak and together coined the term “synthespian” to describe digital actors (synthetic thespians). In 2005 Kleiser and Walczak founded Synthespian Studios (synthespians.net) to create original projects for animated characters.

Related Glossary Terms:

Term Source: Chapter 6 – Digital Effects

 

Kodalith

A high-contrast black-and-white film made by Kodak, also used as a special-effect film in the darkroom (it allowed for recording ultra-high-contrast images).

Related Glossary Terms:

Term Source: Chapter 2 – Programming and Artistry

 

Kovacs, Bill

Bill Kovacs received a Bachelor of Architecture degree from Carnegie Mellon University in 1971. He worked for Skidmore, Owings and Merrill (New York office) while getting a Masters of Environmental Design from Yale University (1972). He was then transferred to the Chicago Office, where he worked on a computer-aided design system.

In 1978, Kovacs left SOM to become VP of R&D for the early computer animation company Robert Abel and Associates (1978-1984). At Abel, Kovacs (along with Roy Hall and others) developed the company’s animation software, which he and others used on the film Tron. He later co-founded Wavefront Technologies as CTO (1984-1994), leading the development of products such as The Advanced Visualizer as well as animated productions. Along with Richard Childers and Chris Baker, he was a key organizer of the Infinite Illusions exhibit at the Smithsonian Institution in 1991.

Following retirement from Wavefront, Kovacs co-founded Instant Effects, worked as a consultant to Electronic Arts and RezN8, serving as RezN8’s CTO from 2000 until his death. In 1998, Kovacs received a 1997 (Scientific and Engineering) Academy Award from the Academy of Motion Picture Arts and Sciences. In 1980, he received two Clio Awards for his work on animated TV commercials.

Related Glossary Terms:

Term Source: Chapter 6 – Robert Abel and Associates

 

Kristoff, Jim

President of Cranston/Csuri Productions, and founder of Metrolight Productions in Los Angeles.

Related Glossary Terms:

Term Source: Chapter 6 – Cranston/Csuri Productions

 

Krueger, Myron

Myron Krueger is an American computer artist who developed early interactive works. He is also considered to be one of the first generation of virtual reality and augmented reality researchers. He earned a Ph.D. in Computer Science at the University of Wisconsin–Madison, and in 1969 he collaborated with Dan Sandin, Jerry Erdman and Richard Venezky on “Glowflow,” a computer-controlled light and sound environment that responded to the people within it. Krueger went on to develop Metaplay, an integration of visuals, sounds, and responsive techniques into a single framework. A later project, “Videoplace,” was funded by the National Endowment for the Arts, and a two-way exhibit was shown at the Milwaukee Art Museum in 1975. From 1974 to 1978 Krueger performed computer graphics research at the Space Science and Engineering Center of the University of Wisconsin–Madison in exchange for institutional support for his “Videoplace” work. In 1978, he joined the computer science faculty at the University of Connecticut, where he taught courses in hardware, software, computer graphics and artificial intelligence.

Related Glossary Terms:

Term Source: Chapter 17 – Hypermedia and Art

 

L

L-systems

An L-system or Lindenmayer system, is a parallel rewriting system, namely a variant of a formal grammar, most famously used to model the growth processes of plant development, but also able to model the morphology of a variety of organisms. An L-system consists of an alphabet of symbols that can be used to make strings, a collection of production rules which expand each symbol into some larger string of symbols, an initial “axiom” string from which to begin construction, and a mechanism for translating the generated strings into geometric structures. L-systems can also be used to generate self-similar fractals such as iterated function systems.
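The parallel-rewriting core of an L-system fits in a few lines: every symbol in the string is replaced by its production simultaneously, once per iteration. The sketch below uses Lindenmayer's original algae model as the example grammar; the function name is an illustrative assumption:

```python
def l_system(axiom, rules, iterations):
    """Apply the production rules in parallel to every symbol in the
    string, repeating for the given number of iterations. Symbols
    without a rule are copied through unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae model: A -> AB, B -> A.
rules = {"A": "AB", "B": "A"}
for n in range(5):
    print(l_system("A", rules, n))
# A, AB, ABA, ABAAB, ABAABABA -- lengths follow the Fibonacci sequence
```

Graphical L-systems feed the generated string to a turtle interpreter (e.g. F = draw forward, + / - = turn), which is the usual mechanism for turning these strings into plant-like geometry.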

Related Glossary Terms:

Term Source: Chapter 19 – Plants

 

Lambertian

If a surface exhibits Lambertian reflectance, light falling on it is scattered such that the apparent brightness of the surface to an observer is the same regardless of the observer’s angle of view. More technically, the surface luminance is isotropic. For example, unfinished wood exhibits roughly Lambertian reflectance, but wood finished with a glossy coat of polyurethane does not, since specular highlights may appear at different locations on the surface. Not all rough surfaces are perfect Lambertian reflectors, but this is often a good approximation when the characteristics of the surface are unknown. Lambertian reflectance is named after Johann Heinrich Lambert.

In computer graphics, Lambertian reflection is often used as a model for diffuse reflection. This technique causes all closed polygons (such as a triangle within a 3D mesh) to reflect light equally in all directions when rendered.
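The diffuse model above comes down to scaling the surface color by the cosine of the angle between the normal and the light direction (N · L), clamped at zero for back-facing light. A minimal sketch, with illustrative function and parameter names:

```python
import numpy as np

def lambertian(normal, light_dir, albedo, intensity=1.0):
    """Lambertian diffuse reflectance: brightness depends only on the
    angle between the surface normal and the light direction (N dot L),
    never on the viewer's position."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    # Clamp at zero: light arriving from behind the surface contributes nothing.
    return albedo * intensity * max(np.dot(n, l), 0.0)

up = np.array([0.0, 0.0, 1.0])
# Light head-on along the normal, then 60 degrees off the normal.
print(lambertian(up, np.array([0.0, 0.0, 1.0]), albedo=0.8))
print(lambertian(up, np.array([np.sqrt(3) / 2, 0.0, 0.5]), albedo=0.8))  # about half
```

Because the result is independent of the view direction, the same surface point looks equally bright from every angle, which is exactly the isotropic-luminance property described above.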

Related Glossary Terms: Diffuse reflection, Specular reflection

Term Source: Chapter 19 – Global Illumination

 

Langlois, Daniel

Daniel Langlois is the president and founder of the Daniel Langlois Foundation, Ex-Centris, and Media Principia Inc. He also founded Softimage Inc., serving as its president and chief technology officer from November 1986 to July 1998. The company is recognized in the fields of cinema and media creation for its digital technologies and especially its 3-D computer animation techniques. Softimage software was used to create most of the 3-D effects in the movies Star Wars Episode I: The Phantom Menace, The Matrix, Titanic, Men in Black, Twister, Jurassic Park, The Mask and The City of Lost Children.

Related Glossary Terms:

Term Source: Chapter 8 – SoftImage

 

Laposky, Ben

Ben Laposky was a mathematician and artist from Iowa. In 1950, he created the first graphic images generated by an electronic (in his case, an analog) machine.

Related Glossary Terms:

Term Source: Chapter 2 – Programming and Artistry

 

Lasseter, John

John Alan Lasseter is an American animator, film director and the chief creative officer at Pixar and Walt Disney Animation Studios. He is also currently the Principal Creative Advisor for Walt Disney Imagineering. Lasseter’s first job was with The Walt Disney Company, where he became an animator. Next, he joined Lucasfilm, where he worked on the then-groundbreaking use of CGI animation. After the Graphics Group of the Computer Division of Lucasfilm was sold to Steve Jobs and became Pixar in 1986, Lasseter oversaw all of Pixar’s films and associated projects as executive producer and he directed Toy Story, A Bug’s Life, Toy Story 2, Cars, and Cars 2.

He has won two Academy Awards, for Animated Short Film (for Tin Toy), as well as a Special Achievement Award (for Toy Story).

Related Glossary Terms:

Term Source: Chapter 6 – MAGI

 

Light pen

A rodlike device which, when focused on the screen of a cathode-ray tube, can detect the time of passage of the illuminated spot across that point, thus enabling a computer to determine the position on the screen being pointed at.

Related Glossary Terms: Cathode Ray Tube

Term Source: Chapter 3 – General Motors DAC, Chapter 3 – Input devices

 

Lofting

The creation of a 3D surface model by joining adjacent cross-sectional data with surface elements, such as triangles.

Related Glossary Terms:

Term Source: Chapter 18 – Algorithms

 

Lytle, Wayne

Wayne Lytle is the founder of Animusic, an American musical computer animation company. In 1988, he joined the Cornell Theory Center, where he could experiment with his idea as a scientific visualization producer. He created the piece More Bells & Whistles at Cornell in 1990 and composed Beyond The Walls in 1996. Lytle founded Animusic (originally under the name Visual Music) in 1995 with his associate David Crognale.

Related Glossary Terms:

Term Source: Chapter 19 – Data-driven Imagery

 

M

Machover, Carl

Carl Machover, a computer graphics pioneer and graphics “evangelist,” was president of Machover Associates Corp (MAC), a computer graphics consultancy he founded in 1976, which provided a broad range of management, engineering, marketing, and financial services worldwide to computer graphics users, suppliers, and investors. Machover was also an adjunct professor at RPI, president of ASCI, and past president of NCGA, SID, and the Computer Graphics Pioneers, and served on the editorial boards of many industry publications. He wrote and lectured worldwide on all aspects of computer graphics, and was guest editor of special computer graphics art issues of Computers & Graphics and IEEE Computer Graphics and Applications. Machover received the North Carolina State University Orthogonal Award and the NCGA Vanguard Award, and was inducted into the FAMLI Computer Graphics Hall of Fame. He passed away in 2012.

Related Glossary Terms:

Term Source: Chapter 6 – MAGI

 

Mandelbrot, Benoit

Benoît B. Mandelbrot was a French American mathematician. Born in Poland, he moved to France with his family when he was a child. Mandelbrot spent much of his life living and working at IBM in the United States, where he worked on a wide range of mathematical problems, including mathematical physics and quantitative finance. He is best known as the father of fractal geometry. He coined the term fractal and described the Mandelbrot set.

Related Glossary Terms:

Term Source: Chapter 19 – Noise functions and Fractals

 

Marks, Harry

Harry Marks is considered by many to be the founding father of modern broadcast design. He began his career as a typographer and publications designer at Oxford University Press. In the mid-1960s, he moved to Los Angeles and landed a job at ABC-TV, where his assignment was to improve the on-air graphic appearance of the network. He is also known for his work as an independent graphics consultant, including six years of on-air graphics for NBC-TV, brand packaging for international TV networks, and an Emmy-winning main title for Entertainment Tonight. Harry is well known for his innovative use of emerging technologies, such as computer graphics and slit scan. He has earned nearly every award in broadcast design and promotion, including an Emmy and the first Lifetime Achievement Award from the Broadcast Design Association. In 1984, Harry had the notion of facilitating a gathering of people from the converging worlds of technology, entertainment, and design, so he partnered with Richard Saul Wurman and created the TED Conference.

Related Glossary Terms:

Term Source: Chapter 6 – Pacific Data Images, Chapter 6 – Robert Abel and Associates

 

Max, Nelson

Nelson Max’s research interests are in the areas of scientific visualization, computer animation, and realistic computer graphics rendering. In visualization, he works on molecular graphics and volume and flow visualization, particularly on irregular finite element meshes. He has rendered realistic lighting effects in clouds, trees, and water waves, and has produced numerous computer animations shown at the annual SIGGRAPH conferences and in Omnimax at the Fujitsu pavilions at Expo ’85 in Tsukuba, Japan, and Expo ’90 in Osaka, Japan. His early work was done at Lawrence Livermore, and he is currently affiliated with UC-Davis.

Related Glossary Terms:

Term Source: Chapter 4 – Bell Labs and Lawrence Livermore

 

Metaballs

Metaballs are, in computer graphics, organic-looking n-dimensional objects. The technique for rendering metaballs was invented by Jim Blinn in the early 1980s. Each metaball is defined as a function in n-dimensions.
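As an illustrative sketch (not Blinn’s exact formulation), each metaball contributes a falloff field, and the surface is the iso-contour where the summed field crosses a threshold; the inverse-square field below is one of several common choices:

```python
def metaball_field(point, balls):
    """Sum each ball's falloff at `point`; here the classic inverse-square
    field r^2 / d^2 (one of several possible field functions)."""
    total = 0.0
    for center, radius in balls:
        d2 = sum((p - c) ** 2 for p, c in zip(point, center))
        if d2 > 0.0:
            total += radius * radius / d2
        else:
            return float("inf")  # at a ball center the field is unbounded
    return total

def inside_surface(point, balls, threshold=1.0):
    """The metaball surface is the iso-contour where the field equals the
    threshold; points with a larger field value are 'inside'."""
    return metaball_field(point, balls) >= threshold
```

When two balls approach each other, their fields add, so the threshold contour smoothly merges them into one blob, which is what gives metaballs their organic look.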

Related Glossary Terms:

Term Source: Chapter 8 – Side Effects, Chapter 9 – Yoichiro Kawaguchi

 

MIP mapping

In 3D computer graphics texture filtering, mipmaps (also MIP maps) are pre-calculated, optimized collections of images that accompany a main texture, intended to increase rendering speed and reduce aliasing artifacts. They are widely used in 3D computer games, flight simulators and other 3D imaging systems. Mipmapping was invented by Lance Williams in 1983 and is described in his paper Pyramidal parametrics.
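A simplified sketch of how the pre-calculated pyramid can be built, halving the texture at each level by box-filtering 2x2 blocks (production renderers filter more carefully):

```python
def build_mipmaps(image):
    """Build the pre-calculated image pyramid by repeatedly box-filtering
    2x2 blocks; `image` is a square power-of-two 2D list of gray values."""
    levels = [image]
    while len(levels[-1]) > 1:
        src = levels[-1]
        half = len(src) // 2
        levels.append([[(src[2*y][2*x] + src[2*y][2*x+1] +
                         src[2*y+1][2*x] + src[2*y+1][2*x+1]) / 4.0
                        for x in range(half)] for y in range(half)])
    return levels
```

At render time, the level whose texel size best matches the screen-space footprint of the pixel is sampled, which is what reduces the aliasing artifacts described above.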

Related Glossary Terms:

Term Source:

 

Modular visualization environments

Several systems have been developed around the concepts of applying visual languages to visualization application building; decomposing a visualization application into separable processes (such as data analysis, geometric representation, and rendering); and creating a real-time development environment where applications are created interactively. These systems have given rise to disposable applications by utilizing reusable visualization and graphics algorithms. These techniques can be connected in a visual manner to create problem-targeted applications with a short lifetime, which dramatically reduces the time devoted to problem solving.

Because of their focus, these systems blur the distinction between program visualization (the process of dynamically viewing the execution ordering of a program), visualization programming (creating visualization applications using graphics libraries), and visualization prototyping (building visualization applications interactively).

Related Glossary Terms: Dataflow

Term Source: Chapter 18 – Visualization Systems

 

Monte Carlo method

Monte Carlo methods (or Monte Carlo experiments) are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used in computer simulations of physical and mathematical systems. These methods are most suited to calculation by a computer and tend to be used when it is infeasible to compute an exact result with a deterministic algorithm. This method is also used to complement theoretical derivations.

Monte Carlo methods are especially useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model). They are used to model phenomena with significant uncertainty in inputs, such as the calculation of risk in business. They are widely used in mathematics, for example to evaluate multidimensional definite integrals with complicated boundary conditions. When Monte Carlo simulations have been applied in space exploration and oil exploration, their predictions of failures, cost overruns and schedule overruns are routinely better than human intuition or alternative “soft” methods.

The term Monte Carlo method was coined in the 1940s by John von Neumann, Stanislaw Ulam and Nicholas Metropolis, while they were working on nuclear weapons projects (the Manhattan Project) at the Los Alamos National Laboratory.
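The classic textbook illustration of the idea is estimating pi by repeated random sampling: the fraction of points in the unit square that fall inside the quarter circle approaches pi/4.

```python
import random

def estimate_pi(samples=100_000, seed=1):
    """Estimate pi by repeated random sampling: the fraction of points in
    the unit square that land inside the quarter circle approaches pi/4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples
```

The error shrinks only as the square root of the sample count, which is why Monte Carlo methods favor problems (such as high-dimensional integrals) where deterministic methods fare even worse.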

Related Glossary Terms:

Term Source: Chapter 19 – Global Illumination

 

MOOG synthesizer

Moog synthesizer refers to any number of analog synthesizers designed by Dr. Robert Moog or manufactured by Moog Music, and is commonly used as a generic term for older-generation analog music synthesizers.

Related Glossary Terms: Sandin, Dan

Term Source: Chapter 5 – Illinois-Chicago and University of Pennsylvania

 

Morphing

Morphing is a special effect in motion pictures and animations that changes (or morphs) one image into another through a seamless transition. Most often it is used to depict one person turning into another through technological means or as part of a fantasy or surreal sequence. Traditionally such a depiction would be achieved through cross-fading techniques on film. Since the early 1990s, this has been replaced by computer software to create more realistic transitions.
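The cross-fading step alone can be sketched as a per-pixel linear blend (a full computer morph also warps the geometry of both images before blending, which is omitted here):

```python
def cross_dissolve(img_a, img_b, t):
    """Per-pixel linear blend between two equally sized grayscale images;
    t = 0 gives img_a, t = 1 gives img_b."""
    return [[(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]
```

Animating t from 0 to 1 over a sequence of frames produces the traditional film-style cross-fade; adding the warp step is what makes the transition read as one shape turning into another.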

Related Glossary Terms:

Term Source: Chapter 4 – University of Utah, Chapter 4 – The Ohio State University

 

Motion blur

Motion blur is the apparent streaking of rapidly moving objects in a still image or a sequence of images such as a movie or animation. It results when the image being recorded changes during the recording of a single frame, either due to rapid movement or long exposure.

In computer animation (2D or 3D), motion blur is simulated on each frame so that the rendered sequence looks as though it had been shot with a real camera filming fast-moving objects, making the motion appear more natural and smoother.
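One common way to simulate this is temporal supersampling: evaluate the moving scene at several instants within the shutter interval and average the results. A minimal sketch (function and parameter names are illustrative):

```python
def motion_blur_average(position_at, frame_time, shutter, samples=8):
    """Temporal supersampling: evaluate the moving quantity at several
    instants within the shutter interval and average the results."""
    total = 0.0
    for k in range(samples):
        t = frame_time + shutter * (k + 0.5) / samples
        total += position_at(t)
    return total / samples
```

In a renderer the averaged quantity would be the shaded pixel color rather than a position, but the principle of integrating over the shutter interval is the same.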

Related Glossary Terms:

Term Source: Chapter 19 – Noise functions and Fractals, Chapter 11 – ILM

 

Motion capture

Motion capture, or mocap, is a technique of digitally recording movements for entertainment, sports, and medical applications. It started as an analysis tool in biomechanics research, but has grown increasingly important as a source of motion data for computer animation, as well as for education, training, and sports, and more recently for both cinema and video games. A performer wears markers of one type (acoustic, inertial, LED, magnetic, or reflective), or a combination of types, at each joint to identify the motion of the joints of the body. Sensors track the position or angles of the markers, optimally at a rate at least twice that of the desired motion. The motion capture computer program records the positions, angles, velocities, accelerations, and impulses, providing an accurate digital representation of the motion.

Related Glossary Terms: Performance animation

Term Source: Chapter 4 – University of Utah

 

Mouse

In computing, a palm-sized, button-operated pointing device that can be used to move, select, activate, and change items on a computer screen.

Related Glossary Terms:

Term Source: Chapter 3 – Input devices

 

Multi-texturing

Texture mapping is a method for adding detail, surface texture (a bitmap or raster image), or color to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in his 1974 Ph.D. thesis. Multitexturing is the use of more than one texture at a time on a polygon. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface, such as tree bark or rough concrete.
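The light-map case reduces to modulating one texture’s color by another’s; a minimal sketch of that combine step:

```python
def apply_lightmap(base_texel, light_texel):
    """One multitexturing combine: modulate the base texture color by a
    precomputed light-map texel instead of re-lighting every frame."""
    return tuple(b * l for b, l in zip(base_texel, light_texel))
```

Graphics hardware performs this per-fragment across both textures in a single pass, which is what makes precomputed lighting cheaper than recomputing it.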

Related Glossary Terms: Texture Mapping

Term Source:

 

Multitasking

In computing, multitasking is a method where multiple tasks, also known as processes, are performed during the same period of time.

Related Glossary Terms:

Term Source: Chapter 16 – Apple Computer

 

Multivariate data

Data collected on several variables for each sampling unit. For example, if we collect information on weight (w), height (h), and shoe size (s) from each of a random sample of individuals, then we would refer to the triples (w1, h1, s1), (w2, h2, s2),…as a set of multivariate data.

Related Glossary Terms:

Term Source: Chapter 18 – Algorithms

 

N

Noise functions

Perlin noise is a procedural texture primitive, a type of gradient noise used by visual effects artists to increase the appearance of realism in computer graphics. The function has a pseudo-random appearance, yet all of its visual details are the same size. This property allows it to be readily controllable; multiple scaled copies of Perlin noise can be inserted into mathematical expressions to create a great variety of procedural textures. Synthetic textures using Perlin noise are often used in CGI to make computer-generated visual elements – such as fire, smoke, or clouds – appear more natural, by imitating the controlled random appearance of textures of nature.

Noise functions are also frequently used to generate textures when memory is extremely limited, such as in demos, and are increasingly finding use in Graphics Processing Units for real-time graphics in computer games.

Related Glossary Terms: Fractal, Procedural rendering

Term Source: Chapter 6 – MAGI, Chapter 19 – Noise functions and Fractals

 

Numerical-control

A control system in which numerical values corresponding to desired tool or control positions are generated by a computer. Abbreviated CNC. Also known as computational numerical control, soft-wired numerical control, or stored-program numerical control.

Related Glossary Terms:

Term Source: Chapter 10 – MCS / CalComp / McAuto

 

NURBS

Non-uniform rational basis spline (NURBS) is a mathematical model commonly used in computer graphics for generating and representing curves and surfaces. It offers great flexibility and precision for handling both analytic shapes (surfaces defined by common mathematical formulae) and modeled shapes.
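A minimal sketch of the underlying mathematics: the Cox-de Boor recursion for the B-spline basis, made rational with per-control-point weights (valid for parameter values inside the knot span):

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, control_points, weights, degree, knots):
    """Evaluate a NURBS curve: a weighted (rational) combination of the
    control points; the weights give the 'rational' part of the name."""
    dim = len(control_points[0])
    num = [0.0] * dim
    den = 0.0
    for i, (cp, w) in enumerate(zip(control_points, weights)):
        b = bspline_basis(i, degree, u, knots) * w
        den += b
        num = [a + b * c for a, c in zip(num, cp)]
    return tuple(a / den for a in num)
```

Increasing one control point’s weight pulls the curve toward that point, which is how NURBS represent conics such as circles exactly while plain B-splines cannot.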

Related Glossary Terms:

Term Source: Chapter 8 – Alias Research

 

O

Object-oriented programming

Object-oriented programming (OOP) is a programming paradigm using “objects” – data structures consisting of data fields and methods together with their interactions – to design applications and computer programs. Programming techniques may include features such as data abstraction, encapsulation, messaging, modularity, polymorphism, and inheritance.
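A minimal sketch of encapsulation, inheritance, and polymorphism in Python (the class names are invented for illustration):

```python
import math

class Shape:
    """Base class: bundles data fields and methods together (encapsulation)."""
    def __init__(self, name):
        self.name = name

    def area(self):
        raise NotImplementedError  # subclasses must provide this

class Circle(Shape):  # inheritance: a Circle is-a Shape
    def __init__(self, radius):
        super().__init__("circle")
        self.radius = radius

    def area(self):  # polymorphism: overrides Shape.area
        return math.pi * self.radius ** 2
```

Code that holds a `Shape` reference can call `area()` without knowing the concrete class, which is the messaging and polymorphism the definition describes.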

Related Glossary Terms:

Term Source: Chapter 16 – Apple Computer

 

Olsen, Ken

Kenneth Harry Olsen was an American engineer who co-founded Digital Equipment Corporation (DEC) in 1957 with colleague Harlan Anderson.

Related Glossary Terms:

Term Source: Chapter 3 – TX-2 and DEC

 

Omnimax

A variation of the IMAX film format that is projected on an angled dome.

Related Glossary Terms:

Term Source: Chapter 5 – Cal Tech and North Carolina State

 

Op-art

Op art, also known as optical art, is a style of visual art that makes use of optical illusions.

Related Glossary Terms:

Term Source: Chapter 9 – Vera Molnar

 

Operating system

The collection of software that directs a computer’s operations, controlling and scheduling the execution of other programs, and managing storage, input/output, and communication resources. Abbreviation: OS.

Related Glossary Terms:

Term Source: Chapter 3 – TX-2 and DEC

 

Optical printers

An optical printer is a device consisting of one or more film projectors mechanically linked to a movie camera. It allows filmmakers to re-photograph one or more strips of film. The optical printer is used for making special effects for motion pictures, or for copying and restoring old film material.

Common optical effects include fade outs and fade ins, dissolves, slow motion, fast motion, and matte work. More complicated work can involve dozens of elements, all combined into a single scene.

Related Glossary Terms: Film recorder

Term Source: Chapter 6 – Robert Abel and Associates

 

Orthographic

Orthographic projection (or orthogonal projection) is a means of representing a three-dimensional object in two dimensions. It is a form of parallel projection, where all the projection lines are orthogonal to the projection plane, resulting in every plane of the scene appearing in affine transformation on the viewing surface. It is further divided into multi-view orthographic projections and axonometric projections. A lens providing an orthographic projection is known as an (object-space) telecentric lens.
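In code, parallel projection along the z axis amounts to discarding the depth coordinate; a trivial sketch:

```python
def orthographic_project(point):
    """Parallel projection onto the z = 0 plane: projection lines run along
    the z axis, so x and y pass through unchanged and depth is discarded."""
    x, y, z = point
    return (x, y)
```

Because no division by depth occurs (unlike perspective projection), parallel lines stay parallel and distant objects do not shrink.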

Related Glossary Terms:

Term Source: Chapter 4 – Bell Labs and Lawrence Livermore

 

Oxberry animation camera

An animation camera, a type of rostrum camera, is a movie camera specially adapted for frame-by-frame shooting animation or stop motion. It consists of a camera body with lens and film magazines, a stand that allows the camera to be raised and lowered, and a table, often with both top and underneath lighting. The artwork to be photographed is placed on this table. The Oxberry was made by Oxberry LLC in New Jersey.

Related Glossary Terms:

Term Source: Chapter 11 – Sogitec Audiovisuel

 

P

Paged architecture

In paging, the memory address space is divided into equal, small pieces, called pages. Using a virtual memory mechanism, each page can be made to reside in any location of the physical memory, or be flagged as being protected. Virtual memory makes it possible to have a linear virtual memory address space and to use it to access blocks fragmented over physical memory address space.
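A toy sketch of the address translation that paging enables (the 4 KB page size and dict-based page table are illustrative):

```python
PAGE_SIZE = 4096  # bytes; a common, illustrative page size

def translate(virtual_addr, page_table):
    """Split a virtual address into (page number, offset within page), then
    map the page to a physical frame via a toy page table."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]  # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset
```

Because each page maps independently, a contiguous virtual address space can be backed by frames scattered anywhere in physical memory, as the definition describes.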

Related Glossary Terms:

Term Source: Chapter 3 – TX-2 and DEC

 

Parametric modeling

Parametric modeling uses parameters to define a model (dimensions, for example). Examples of parameters are: dimensions used to create model features, material density, formulas to describe swept features, imported data (that describe a reference surface, for example). The parameter may be modified later, and the model will update to reflect the modification.

Related Glossary Terms:

Term Source: Chapter 10 – SDRC / Unigraphics

 

Particle system

The term particle system refers to a computer graphics technique to simulate certain fuzzy phenomena, which are otherwise very hard to reproduce with conventional rendering techniques. Examples of such phenomena which are commonly replicated using particle systems include fire, explosions, smoke, moving water, sparks, falling leaves, clouds, fog, snow, dust, meteor tails, hair, fur, grass, or abstract visual effects like glowing trails, magic spells, etc.
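A minimal sketch of the usual emit-and-update loop (the emitter parameters are invented for illustration):

```python
import random

def spawn(rng, origin=(0.0, 0.0)):
    """Emit one particle with a random upward velocity and finite lifetime."""
    return {"pos": list(origin),
            "vel": [rng.uniform(-1.0, 1.0), rng.uniform(2.0, 5.0)],
            "life": rng.uniform(0.5, 2.0)}

def step(particles, dt, gravity=-9.8):
    """Advance each particle with Euler integration; drop expired ones."""
    alive = []
    for p in particles:
        p["vel"][1] += gravity * dt
        p["pos"][0] += p["vel"][0] * dt
        p["pos"][1] += p["vel"][1] * dt
        p["life"] -= dt
        if p["life"] > 0.0:
            alive.append(p)
    return alive
```

Rendering each particle as a small textured sprite, with the fuzz coming from thousands of such randomized trajectories, is what produces effects like sparks, smoke, and fire.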

Related Glossary Terms:

Term Source: Chapter 4 – University of Utah, Chapter 4 – The Ohio State University

 

Patch panel

A panel of electronic ports contained together that connects incoming and outgoing lines of a LAN or other communication, sound, electronic or electrical system. Connections are made with patch cords. The patch panel allows circuits to be arranged and rearranged by plugging and unplugging the patch cords.

Related Glossary Terms:

Term Source: Chapter 12 – ANIMAC / SCANIMATE

 

Pennie, John

President of Omnibus

Related Glossary Terms: DOA

Term Source: Chapter 6 – Omnibus Computer Graphics

 

Performance animation

Performance animation could be described as ‘improvisation meets CG (computer graphics)’. It involves providing real-time rendered 3D animated characters that are doing the same movements as actors, at the same time. The 3D character(s) can exist within a computer-generated ‘virtual set’ or can interact with human characters in a real environment (often seen in dance performances) or human characters in a virtual environment.

Related Glossary Terms: Motion capture

Term Source: Chapter 6 – Pacific Data Images (PDI)

 

Perlin, Ken

Ken Perlin is a professor at New York University, founding director of the Media Research Lab at NYU, and the Director of the Games for Learning Institute. He developed or was involved with the development of techniques such as Perlin noise, hypertexture, real-time interactive character animation, and computer-user interfaces such as zooming user interfaces, stylus-based input, and most recently, cheap, accurate multi-touch input devices. He is also the Chief Technology Advisor of ActorMachine, LLC. His invention of Perlin noise in 1985 has become a standard that is used in both computer graphics and movement.

Perlin was founding director of the NYU Media Research Laboratory and also directed the NYU Center for Advanced Technology from 1994 to 2004. He was the System Architect for computer generated animation at Mathematical Applications Group, Inc. 1979-1984, where he worked on Tron.

Related Glossary Terms:

Term Source: Chapter 19 – Noise functions and Fractals

 

Perspective (or Perspective Projection)

Perspective projection is a type of drawing that graphically approximates on a planar (two-dimensional) surface (e.g. computer display) the images of three-dimensional objects so as to approximate actual visual perception. It is sometimes also called perspective view or perspective drawing or simply perspective.

Related Glossary Terms:

Term Source: 

 

Phong shading

Phong shading refers to an interpolation technique for surface shading in 3D computer graphics. It is also called Phong interpolation or normal-vector interpolation shading. Specifically, it interpolates surface normals across rasterized polygons and computes pixel colors based on the interpolated normals and a reflection model. Phong shading may also refer to the specific combination of Phong interpolation and the Phong reflection model.

Phong shading and the Phong reflection model were developed by Bui Tuong Phong at the University of Utah, who published them in his 1973 Ph.D. dissertation. Phong’s methods were considered radical at the time of their introduction, but have evolved into a baseline shading method for many rendering applications.
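A sketch of the Phong reflection model for a single light, combining ambient, diffuse, and specular terms (the coefficient values are illustrative):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, light_dir, view_dir,
                    ka=0.1, kd=0.7, ks=0.2, shininess=32):
    """Phong reflection model for one light: ambient + diffuse + specular.
    All direction vectors point away from the surface point."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diff = max(dot(n, l), 0.0)
    # Reflect the light direction about the normal: r = 2(n.l)n - l
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    spec = max(dot(r, v), 0.0) ** shininess if diff > 0.0 else 0.0
    return ka + kd * diff + ks * spec
```

Phong shading proper evaluates this model per pixel using normals interpolated across the polygon, which is what removes the faceted look of flat or Gouraud shading on specular highlights.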

Related Glossary Terms: Gouraud shading

Term Source: Chapter 14 – CGI and Effects in Films and Music Videos

 

Photon mapping

In computer graphics, photon mapping is a two-pass global illumination algorithm developed by Henrik Wann Jensen that approximately solves the rendering equation. Rays from the light source and rays from the camera are traced independently until some termination criterion is met, then they are connected in a second step to produce a radiance value. It is used to realistically simulate the interaction of light with different objects. Specifically, it is capable of simulating the refraction of light through a transparent substance such as glass or water, diffuse interreflection between illuminated objects, the subsurface scattering of light in translucent materials, and some of the effects caused by particulate matter such as smoke or water vapor. It can also be extended to more accurate simulations of light such as spectral rendering.

Related Glossary Terms:

Term Source: Chapter 19 – Global Illumination

 

Pixel

In digital imaging, a pixel, or pel (picture element), is a physical point in a raster image, or the smallest addressable element in a display device; it is thus the smallest controllable element of a picture represented on the screen. The address of a pixel corresponds to its physical coordinates. LCD pixels are manufactured in a two-dimensional grid, and are often represented using dots or squares, but CRT pixels correspond to their timing mechanisms and sweep rates.

Related Glossary Terms: Voxels

Term Source: Chapter 4 – MIT and Harvard

 

Plasma panel

A plasma display panel (PDP) is a type of flat panel display now commonly used for large TV displays (typically above 37-inch or 940 mm). Many tiny cells located between two panels of glass hold an inert mixture of noble gases. The gas in the cells is electrically turned into a plasma which then excites phosphors to emit light. Plasma displays are commonly confused with LCDs, another lightweight flatscreen display but with very different technology.

Related Glossary Terms: Cathode Ray Tube

Term Source: Chapter 3 – Other output devices

 

Post production

Post-production is part of filmmaking and the video production process. It occurs in the making of motion pictures, television programs, radio programs, advertising, audio recordings, photography, and digital art. It is a term for all stages of production occurring after the actual end of shooting and/or recording the completed work.

Post-production is, in fact, many different processes grouped under one name. These typically include:

  • Video editing the picture of a television program using an edit decision list (EDL)
  • Writing, (re)recording, and editing the soundtrack.
  • Adding visual special effects, mainly computer-generated imagery (CGI), and producing the digital copy from which release prints will be made (although this may be made obsolete by digital-cinema technologies).
  • Sound design, Sound effects, ADR, Foley and Music, culminating in a process known as sound re-recording or mixing with professional audio equipment.
  • Transfer of Color motion picture film to Video or DPX with a telecine and color grading (correction) in a color suite.

Related Glossary Terms:

Term Source: Chapter 6 – Cranston/Csuri Productions

 

Pre-visualizing

Pre-visualization (also known as pre-rendering, preview, or wireframe windows) is a function for visualizing complex scenes in a movie before filming. It is also a concept in still photography. Pre-visualization is applied to techniques such as storyboarding, either as charcoal-drawn sketches or with digital technology, in the planning and conceptualization of movie scenes. The advantage of pre-visualization is that it allows directors to experiment with different staging and art-direction options, such as lighting, camera placement and movement, stage direction, and editing, without having to incur the costs of actual production.

Related Glossary Terms:

Term Source: Chapter 8 – Wavefront Technologies

 

Procedural modeling

Procedural modeling is an umbrella term for a number of techniques in computer graphics to create 3D models and textures from sets of rules. L-Systems, fractals, and generative modeling are procedural modeling techniques since they apply algorithms for producing scenes. The set of rules may either be embedded into the algorithm, configurable by parameters, or the set of rules is separate from the evaluation engine. The output is called procedural content, which can be used in computer games, films, be uploaded to the internet, or the user may edit the content manually. Procedural models often exhibit database amplification, meaning that large scenes can be generated from a much smaller amount of rules. If the employed algorithm produces the same output every time, the output need not be stored. Often, it suffices to start the algorithm with the same random seed to achieve this.

Although all modeling techniques on a computer require algorithms to manage and store data at some point, procedural modeling focuses on creating a model from a rule set, rather than editing the model via user input. Procedural modeling is often applied when it would be too cumbersome to create a 3D model using generic 3D modelers, or when more specialized tools are required. This is often the case for plants, architecture or landscapes.
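A minimal sketch of rule-based generation, using the algae L-system from Lindenmayer’s original work; the entire "model" is an axiom plus two rewrite rules, illustrating the database amplification mentioned above:

```python
def lsystem(axiom, rules, iterations):
    """Apply the production rules in parallel to every symbol, repeatedly;
    symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

With the rules A -> AB and B -> A, the string grows as A, AB, ABA, ABAAB, ...; interpreting such strings as turtle-graphics drawing commands is how procedural plant models are generated.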

Related Glossary Terms:

Term Source: Chapter 8 – Side Effects

 

Procedural rendering

Procedural generation (procedural modeling, procedural rendering) is a widely used term in the production of media; it refers to content generated algorithmically (procedurally) rather than manually. Often, this means creating content on the fly rather than prior to distribution. This is often related to computer graphics applications and video game level design.

Related Glossary Terms:

Term Source: Chapter 6 – MAGI

 

Projective texture-mapping

Projective texture mapping is a method of texture mapping that allows a textured image to be projected onto a scene as if by a slide projector. Projective texture mapping is useful in a variety of lighting techniques and it is the starting point for shadow mapping.

Related Glossary Terms: Texture Mapping

Term Source: Chapter 19 – Global Illumination

 

Prusinkiewicz, Przemyslaw

Przemyslaw (Przemek) Prusinkiewicz advanced the idea that Fibonacci numbers in nature can be in part understood as the expression of certain algebraic constraints on free groups, specifically as certain Lindenmayer grammars. Prusinkiewicz’s main work is on the modeling of plant growth through such grammars.

Prusinkiewicz is currently a professor of Computer Science at the University of Calgary. Prusinkiewicz received the 1997 SIGGRAPH Computer Graphics Achievement Award for his work.

Related Glossary Terms:

Term Source: Chapter 19 – Plants

 

Q

Quantitative invisibility

In CAD/CAM, quantitative invisibility (QI) is the number of solid bodies that obscure a point in space as projected onto a plane. Often, CAD engineers project a model into a plane (a 2D drawing) in order to denote edges that are visible with a solid line, and those that are hidden with dashed or dimmed lines.

Related Glossary Terms: Hidden line elimination, Hidden surfaces

Term Source: Chapter 4 – Other research efforts

 

R

Radiosity

Radiosity is a rendering algorithm that gives a realistic rendering of shadows and diffuse light.

Radiosity is a global illumination algorithm used in 3D computer graphics rendering. Unlike direct illumination algorithms (such as Ray tracing), which tend to simulate light reflecting only once off each surface, global illumination algorithms such as Radiosity simulate the many reflections of light around a scene, generally resulting in softer, more natural shadows and reflections.
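A toy sketch of the gathering iteration at the heart of the method, assuming the form factors between patches are already known: each patch’s radiosity B is its emission E plus its reflectance rho times the light gathered from every other patch.

```python
def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Iterative 'gathering': B_i = E_i + rho_i * sum_j F_ij * B_j,
    repeated until the patch radiosities B stop changing."""
    n = len(emission)
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b
```

Each pass propagates light one more bounce around the scene, which is how radiosity captures the soft indirect illumination that single-bounce methods miss.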

Related Glossary Terms: Form factor

Term Source: Chapter 5 – Cornell and NYIT, Chapter 19 – Global Illumination

 

Random Access Memory

A type of computer memory that can be accessed randomly; that is, any byte of memory can be accessed without touching the preceding bytes. RAM is the most common type of memory found in computers and other devices, such as printers.

Related Glossary Terms:

Term Source: Chapter 15 – Early hardware

 

Range image

Range imaging is the name for a collection of techniques which are used to produce a 2D image showing the distance to points in a scene from a specific point, normally associated with some type of sensor device.

The resulting image, the range image, has pixel values which correspond to the distance, e.g., brighter values mean shorter distance, or vice versa. If the sensor which is used to produce the range image is properly calibrated, the pixel values can be given directly in physical units such as meters.

Related Glossary Terms:

Term Source: Chapter 20 – CG Icons

 

Raster-scanned

A raster scan, or raster scanning, is the rectangular pattern of image capture and reconstruction in television. By analogy, the term is used for raster graphics, the pattern of image storage and transmission used in most computer bitmap image systems. The word raster comes from the Latin word rastrum (a rake), which is derived from radere (to scrape).

Related Glossary Terms: Cathode Ray Tube, Vector

Term Source: Chapter 1 – Electronic devices

 

Ray casting

Ray casting is the use of ray-surface intersection tests to solve a variety of problems in computer graphics. The term was first used in computer graphics in a 1982 paper by Scott Roth to describe a method for rendering CSG models. The first ray casting (versus ray tracing) algorithm used for rendering was presented by Arthur Appel in 1968. The idea behind ray casting is to shoot rays from the eye, one per pixel, and find the closest object blocking the path of that ray – think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the eye normally sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models. One important advantage ray casting offered over older scan-line algorithms is its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modeling techniques and easily rendered.

Ray casting for producing computer graphics was first used by scientists at Mathematical Applications Group, Inc., (MAGI) of Elmsford, New York.

Roth, Scott D. (February 1982), “Ray Casting for Modeling Solids”, Computer Graphics and Image Processing 18 (2): 109–144

Goldstein, R. A., and R. Nagel. 3-D visual simulation. Simulation 16(1), pp. 25–31, 1971.
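The core operation, an analytic ray-surface intersection test, can be sketched for a sphere by solving the usual quadratic in the ray parameter t:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Nearest positive ray parameter t where origin + t*direction meets
    the sphere, or None if the ray misses (solves the quadratic in t)."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None
```

Casting one such ray per pixel and keeping the closest hit over all objects is exactly the eye-ray loop described above; shading is then computed at the hit point.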

Related Glossary Terms: Ray-trace, Scanline rendering

Term Source: Chapter 11 – R/Greenberg Associates / Blue Sky Studios

 

Ray-trace

Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it.

Ray tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and dispersion phenomena (such as chromatic aberration).

Related Glossary Terms: Radiosity, Reflection mapping, Rendering, Scanline rendering

Term Source: Chapter 5 – Cal Tech and North Carolina State

 

Reeves, Bill

William “Bill” Reeves is the technical director who worked with John Lasseter on the animation breakthrough shorts Luxo Jr and The Adventures of André and Wally B. at ILM and Pixar. After obtaining a Ph.D. at the University of Toronto, Reeves was hired by George Lucas as a member of Industrial Light and Magic. He was one of the founding employees of Pixar when it was sold in 1986 to Steve Jobs. Reeves is the inventor of the first Motion Blur algorithm and methods to simulate particle motion in CGI. Reeves received the Academy Award for Best Animated Short Film (Oscar) in 1988 for his work (with John Lasseter) on the film Tin Toy. Their collaboration continued with Reeves acting as the Supervising Technical Director of the first feature length, computer-animated film Toy Story.

Related Glossary Terms:

Term Source: Chapter 19 – Particle Systems and Artificial Life

 

Reflection mapping

In computer graphics, environment mapping, or reflection mapping, is an efficient image-based lighting technique for approximating the appearance of a reflective surface by means of a precomputed texture image. The texture is used to store the image of the distant environment surrounding the rendered object.

Several ways of storing the surrounding environment are employed. The first technique was sphere mapping, in which a single texture contains the image of the surroundings as reflected on a mirror ball. It has been almost entirely surpassed by cube mapping, in which the environment is projected onto the six faces of a cube and stored as six square textures or unfolded into six square regions of a single texture.

Related Glossary Terms: Environment mapping, Radiosity

Term Source: Chapter 5 – Cornell and NYIT

 

Refraction

Refraction is the change in direction of a wave due to a change in its speed, most notably when the wave passes from one medium to another. It is most commonly discussed in reference to the change in the path of a light beam, but it affects other waves, such as sound, as well.

The rule that describes this change in direction is known as Snell’s Law, which says that the ratio of the sines of the angles of incidence and refraction is equal to the ratio of the wave velocities in the two media, and to the inverse ratio of the indices of refraction.
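
As an illustrative sketch (the helper name is hypothetical), Snell's law in the form n1 sin θ1 = n2 sin θ2 can be solved for the refraction angle:

```python
import math

def refract_angle(theta_i_deg, n1, n2):
    """Angle of refraction (degrees) from Snell's law n1*sin(t1) = n2*sin(t2).
    Returns None when there is no refracted ray (total internal reflection)."""
    s = n1 / n2 * math.sin(math.radians(theta_i_deg))
    if abs(s) > 1:
        return None  # total internal reflection
    return math.degrees(math.asin(s))
```

For example, light entering glass (n = 1.5) from air at 30 degrees bends toward the normal, to about 19.5 degrees.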

Related Glossary Terms:

Term Source: Chapter 20 – CG Icons

 

Remote sensing

Remote sensing is the acquisition of information about an object or phenomenon without making physical contact with the object. In modern usage, the term generally refers to the use of aerial sensor technologies to detect and classify objects on Earth (both on the surface, and in the atmosphere and oceans) by means of propagated signals (e.g. electromagnetic radiation emitted from aircraft or satellites).

Related Glossary Terms:

Term Source: Chapter 18 – Algorithms

 

Render farm

A render farm is a high-performance computer system, e.g. a computer cluster, built to render computer-generated imagery (CGI), typically for film and television visual effects.

The rendering of images is a highly parallelizable activity, as each frame usually can be calculated independently of the others, with the main communication between processors being the upload of the initial source material, such as models and textures, and the download of the finished images.

Related Glossary Terms:

Term Source:

Chapter 11 – Rhythm and Hues / Xaos

 

Rendering

Rendering is the process of generating an image from a model (or models in what collectively could be called a scene file), by means of computer programs.

Related Glossary Terms:

Term Source: Chapter 4 – University of Utah, Chapter 4 – The Ohio State University

 

Rendering equation

The rendering equation is an integral equation in which the equilibrium radiance leaving a point is given as the sum of emitted plus reflected radiance under a geometric optics approximation. It was simultaneously introduced into computer graphics by David Immel et al. and James Kajiya in 1986. The various realistic rendering techniques in computer graphics attempt to solve this equation.

The physical basis for the rendering equation is the law of conservation of energy. Assuming that L denotes radiance, we have that at each particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light itself is the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and cosine of the incident angle.
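
In symbols, with Ω the hemisphere of incoming directions above the surface, n the surface normal, and f_r the bidirectional reflectance distribution function (BRDF) describing the surface reflection, the equation as described above reads:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
```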

Related Glossary Terms:

Term Source: Chapter 19 – Global Illumination

 

Reynolds, Craig

Craig W. Reynolds (born March 15, 1953), is an artificial life and computer graphics expert, who created the Boids artificial life simulation in 1986. Reynolds worked on the film Tron (1982) as a scene programmer, and on Batman Returns (1992) as part of the video image crew. Reynolds won the 1998 Academy Scientific and Technical Award in recognition of “his pioneering contributions to the development of three-dimensional computer animation for motion picture production.” He is the author of the OpenSteer library.

Related Glossary Terms:

Term Source: Chapter 6 – Information International Inc. (Triple-I)

 

Rig removal (wire removal)

A post-production technique used to remove elements of an image or sequence that were needed during principal photography but must be taken out for the finished shot.

For example, a production technique called a “wire gag” is used where the talent is fitted with wires to either assist him to jump, fall or otherwise move in a non-normal way, or as a safety feature to save him from injury or death.

Wires are often also used for explosions. An explosion that would blow the extras into the air is not possible, so wire harnesses are added to the on-screen talent and they are yanked away from the explosion as or before it takes place.

Another example similar to wire removal is rig removal. A rig is any kind of device used on the set to hold an item up for filming, or any item in a scene that can’t be eliminated before the shot. After shooting, it must be removed from the scene.

The post process requires replacing the area covered by the rig or wire with a clean view. This can come from a clean background frame for the area covered by the offending item (a clean plate), and may also require a matte painting if a clean plate can’t be provided.

Related Glossary Terms:

Term Source: Chapter 6 – Pacific Data Images (PDI)

 

RISC

Reduced instruction set computing, or RISC, is a CPU design strategy based on the insight that simplified (as opposed to complex) instructions can provide higher performance if this simplicity enables much faster execution of each instruction. A computer based on this strategy is a reduced instruction set computer also called RISC.

Related Glossary Terms:

Term Source: Chapter 15 – Apollo / Sun / SGI

 

Roberts, Lawrence

Lawrence G. Roberts designed and managed the first packet network, the ARPANET (the precursor to the Internet). In 1967, Dr. Roberts became the Chief Scientist of ARPA, taking on the task of designing, funding, and managing the radically new communications network concept of packet switching. Since then, he has founded five startups: Telenet, NetExpress, ATM Systems, Caspian Networks, and Anagran.

Roberts wrote the first algorithm to eliminate hidden or obscured surfaces from a perspective picture. In 1965, Roberts implemented a homogeneous coordinate scheme for transformations and perspective. His solutions to these problems prompted attempts to find faster algorithms for generating hidden surfaces.

Related Glossary Terms:

Term Source: Chapter 4 – MIT and Harvard

 

Rosebush, Judson

Judson Rosebush is a director and producer of multimedia products and computer animation, an author, artist and media theorist. He is the founder of Digital Effects Inc. and the Judson Rosebush Company. He is the former editor of Pixel Vision magazine, the serialized Pixel Handbook, and a columnist for CD-ROM Professional magazine. He has worked in radio and TV, film and video, sound, print, and hypermedia, including CD-ROM and the Internet. He has been an ACM National Lecturer since the late 1980s and is a recipient of its Distinguished Speaker Award.

Related Glossary Terms:

Term Source: Chapter 6 – Digital Effects

 

Rosendahl, Carl

Carl graduated with a BSEE from Stanford University in 1979 and founded Pacific Data Images in 1980. PDI became, and continues to be, one of the pioneering and most highly innovative creators of computer animation for film and television. During his 20 years of leading the organization, PDI produced over 700 commercials, worked on visual effects for over 70 feature films and, in partnership with DreamWorks SKG, produced the hit animated film “Antz” and the Academy Award winning “Shrek.” Carl received multiple Emmy Awards and in 1998 was recognized with a Technical Achievement Academy Award for PDI’s contributions to modern filmmaking. In early 2000 he sold PDI to DreamWorks SKG, where the company continues to develop and produce animated feature films, including the “Shrek” series and “Madagascar.”

Carl is currently a faculty member of Carnegie Mellon’s Entertainment Technology Center. Prior to joining Carnegie Mellon, Carl was the CEO and founder of Uth TV, a television and web outlet tapping into the exploding power of youth voice and digital storytelling.

From 2000 through 2002, Carl was a Managing Director at Mobius Venture Capital (formerly Softbank Venture Capital) where he focused on investments in the technology and media space.

Carl is also active with a number of non-profit organizations and was a founding board member of the Visual Effects Society (VES) in 1995 and served as the Chair of the Society’s Board of Directors from 2004 through 2006.

Related Glossary Terms:

Term Source: Chapter 6 – Pacific Data Images (PDI)

 

Rotoscoping

1. To rotoscope is to create an animated matte indicating the shape of an object or actor at each frame of a sequence, as would be used to composite a CGI element into the background of a live-action shot. 2. Historically, a rotoscope was a kind of projector used to create frame-by-frame alignment between filmed live-action footage and hand-drawn animation. Mounted at the top of an animation stand, a rotoscope projected filmed images down through the actual lens of the animation camera and onto the page where animators draw and compose images.

Related Glossary Terms: Motion capture

Term Source: Chapter 6 – Bo Gehring and Associates

 

Run length encoding

Run-length encoding (RLE) is a very simple form of data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most useful on data that contains many such runs: for example, simple graphic images such as icons, line drawings, and animations. It is not useful with files that don’t have many runs as it could greatly increase the file size.
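
A minimal sketch of the scheme described above (illustrative, not any particular file format): runs of equal values are collapsed into (value, count) pairs, and expanded again on decode.

```python
def rle_encode(data):
    """Collapse runs of equal values into (value, count) pairs."""
    if not data:
        return []
    runs = []
    current, count = data[0], 1
    for value in data[1:]:
        if value == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = value, 1
    runs.append((current, count))  # emit the final run
    return runs

def rle_decode(runs):
    """Expand (value, count) pairs back into the original sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out
```

Note how data with few runs (e.g. "ABCD") would encode to one pair per element, which is larger than the original, exactly the caveat mentioned above.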

RLE may also be used to refer to an early graphics file format supported by CompuServe for compressing black and white images; it was widely supplanted by their later Graphics Interchange Format (GIF).

Related Glossary Terms: Raster-scanned

Term Source: Chapter 4 – University of Utah, Chapter 4 – The Ohio State University

 

 

S

SAGE

The Semi-Automatic Ground Environment (SAGE) was the Cold War operator environment created for the automated air defense (AD) of North America and, by extension, the name of the associated network of radars, computer systems, and aircraft command and control equipment (the “SAGE Defense System”).

Related Glossary Terms:

Term Source:

Chapter 2 – Whirlwind and SAGE

 

Sandin, Dan

Daniel J. Sandin (born 1942) is a video and computer graphics artist/researcher. He is a Professor Emeritus of the School of Art & Design, University of Illinois at Chicago, and Co-director of the Electronic Visualization Laboratory at the University of Illinois at Chicago. He is an internationally recognized pioneer in computer graphics, electronic art and visualization.

Related Glossary Terms: MOOG synthesizer

Term Source: Chapter 5 – Illinois-Chicago and University of Pennsylvania

 

Scanline rendering

Scanline rendering is an algorithm for visible surface determination, in 3D computer graphics, that works on a row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top y coordinate at which they first appear. Each row, or scan line, of the image is then computed from the intersection of the scan line with the polygons at the front of the sorted list, and the list is updated to discard no-longer-visible polygons as the active scan line advances down the picture.
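
The row-by-row idea can be illustrated with a single-polygon scanline fill (a simplification I am sketching here: a real scanline renderer maintains a depth-sorted active list across many polygons, as described above). For each scan line, the crossings with the polygon's edges are found, sorted, and filled between pairs:

```python
import math

def scanline_fill(polygon, height, width):
    """Rasterize one polygon row by row. `polygon` is a list of (x, y)
    vertices; returns the set of filled (x, y) pixel coordinates."""
    filled = set()
    n = len(polygon)
    for y in range(height):
        yc = y + 0.5  # sample at pixel centers to avoid vertex double-counting
        xs = []
        for i in range(n):
            (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
            if (y0 <= yc < y1) or (y1 <= yc < y0):  # edge spans this scan line
                t = (yc - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        xs.sort()
        for left, right in zip(xs[::2], xs[1::2]):  # fill between crossing pairs
            for x in range(max(0, math.ceil(left - 0.5)),
                           min(width, math.ceil(right - 0.5))):
                filled.add((x, y))
    return filled
```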

Related Glossary Terms:

Term Source:

 

Schure, Alexander

Alexander Schure founded the New York Institute of Technology (NYIT) in 1955. He also served as the Chancellor of Nova Southeastern University (NSU) from 1970 until 1985.

Schure was an early and decisive champion of computer animation. For almost five years, NYIT gave research funding and a home to the brain trust that would evolve into Pixar Animation Studios. In November, 1974, Schure hired recent University of Utah doctoral graduate Edwin Catmull to direct NYIT’s fledgling computer graphics lab. The core technical team included computer animation pioneers Catmull, Alvy Ray Smith, David DiFrancesco, Ralph Guggenheim, Jim Blinn, and Jim Clark.

Related Glossary Terms:

Term Source: Chapter 5 – Cornell and NYIT

 

Schwartz, Lillian

Lillian F. Schwartz is an American artist who is known for being a creator of 20th century computer-developed art. One notable work she created is Mona Leo, where she morphed the image of a Leonardo da Vinci self-portrait with the Mona Lisa. She made one of the first digitally created films to be shown as a work of art, Pixillation, which shows diagonal red squares and other shapes such as cones and pyramids on black on white backgrounds. She worked in the early stages of her career with Bell Laboratories, developing mixtures of sound, video, and art. Afterwards, during the 1980s, Schwartz experimented with manipulating artwork images using computer technology and creating artwork of her own.

Related Glossary Terms:

Term Source: Chapter 9 – Lillian Schwartz

 

Scientific visualization

Scientific visualization (also spelled scientific visualisation) is an interdisciplinary branch of science, described by Friendly as “primarily concerned with the visualization of three-dimensional phenomena (architectural, meteorological, medical, biological, etc.), where the emphasis is on realistic renderings of volumes, surfaces, illumination sources, and so forth, perhaps with a dynamic (time) component”. It is also considered a branch of computer science that is a subset of computer graphics. The purpose of scientific visualization is to graphically illustrate scientific data to enable scientists to understand, illustrate, and glean insight from their data.

Michael Friendly (2008). “Milestones in the history of thematic cartography, statistical graphics, and data visualization”

Related Glossary Terms: Modular visualization environments, Visualization, Volume visualization

Term Source: Chapter 18 – Introduction

 

Sims, Karl

Karl Sims is a computer graphics artist and researcher, best known for using particle systems and artificial life in computer animation. Sims received a B.S. from MIT in 1984 and an M.S. from the MIT Media Lab in 1987. He worked for Thinking Machines as an artist-in-residence and for Whitney-Demos Productions as a researcher, and co-founded Optomystic. He currently heads GenArts, a Cambridge, Massachusetts company that develops special effects plugins used by motion picture studios.

Related Glossary Terms:

Term Source: Chapter 19 – Particle Systems and Artificial Life

 

Sketchpad

Sketchpad was a revolutionary computer program written by Ivan Sutherland in 1963 in the course of his PhD thesis, for which he received the Turing Award in 1988. It helped change the way people interact with computers. Sketchpad is considered to be the ancestor of modern computer-aided drafting (CAD) programs as well as a major breakthrough in the development of computer graphics in general. For example, both the graphical user interface and modern object-oriented programming trace their lineage to Sketchpad.

Related Glossary Terms:

Term Source: Chapter 3 – Work continues at MIT

 

Slit-scan

Originally used in static photography to achieve blurriness or deformity, the slit-scan technique was perfected for the creation of spectacular animations. It enables the cinematographer to create a psychedelic flow of colors. It was adapted for film by Douglas Trumbull during the production of Stanley Kubrick’s 2001: A Space Odyssey and used extensively in the “stargate” sequence.

This type of effect was revived in other productions, for films and television alike. For instance, slit-scan was used by Bernard Lodge to create the Doctor Who title sequences for Jon Pertwee and Tom Baker, used between December 1973 and January 1980. Slit-scan was also used in Star Trek: The Next Generation to create the “stretching” of the starship Enterprise-D when it engaged warp drive. Due to the expense and difficulty of this technique, the same three warp-entry shots, all created by Industrial Light and Magic for the series pilot, were reused throughout the series virtually every time the ship went into warp.

Related Glossary Terms:

Term Source: Chapter 6 – Robert Abel and Associates

 

Solids modeling

Solid modeling (or modelling) is a consistent set of principles for the mathematical and computer modeling of three-dimensional solids. Solid modeling is distinguished from the related areas of geometric modeling and computer graphics by its emphasis on physical fidelity. Together, the principles of geometric and solid modeling form the foundation of computer-aided design and, in general, support the creation, exchange, visualization, animation, interrogation, and annotation of digital models of physical objects.

Related Glossary Terms: B-rep, CSG

Term Source: Chapter 6 – MAGI

 

Spacewar

Spacewar! is one of the earliest known digital computer games. It is a two-player game, with each player taking control of a spaceship and attempting to destroy the other. A star in the centre of the screen pulls on both ships and requires maneuvering to avoid falling into it. In an emergency, a player can enter hyperspace to return at a random location on the screen, but only at the risk of exploding if it is used too often.

Steve “Slug” Russell, Martin “Shag” Graetz, and Wayne Wiitanen of the fictitious “Hingham Institute” conceived of the game in 1961, with the intent of implementing it on a DEC PDP-1 at the Massachusetts Institute of Technology.

Related Glossary Terms:

Term Source: Chapter 3 – TX-2 and DEC

 

Specular reflection

Specular reflection is the mirror-like reflection of light (or of other kinds of wave) from a surface, in which light from a single incoming direction (a ray) is reflected into a single outgoing direction. Such behavior is described by the law of reflection, which states that the direction of incoming light (the incident ray), and the direction of outgoing light reflected (the reflected ray) make the same angle with respect to the surface normal, thus the angle of incidence equals the angle of reflection.
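
The law of reflection has a compact vector form, R = I − 2(I·N)N, for an incident direction I and unit surface normal N; a minimal sketch (function name illustrative):

```python
def reflect(incident, normal):
    """Mirror reflection of an incident direction about a unit surface normal:
    R = I - 2 (I . N) N. `incident` points toward the surface."""
    dot = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2 * dot * n for i, n in zip(incident, normal))
```

Reflecting a ray arriving at 45 degrees off a horizontal surface returns a ray leaving at 45 degrees, so the angle of incidence equals the angle of reflection.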

Related Glossary Terms: Diffuse reflection, Lambertian

Term Source:

 

Splatting

In direct volume rendering, splatting is a technique which trades quality for speed. Every volume element is splatted, as Lee Westover put it, like a snowball onto the viewing surface, in back-to-front order. These splats are rendered as disks whose properties (color and transparency) vary diametrically in a normal (Gaussian) manner. Flat disks, and disks with other kinds of property distribution, are also used depending on the application.

Related Glossary Terms: Volume Rendering

Term Source: Chapter 18 – Volumes

 

Sprite

In computer graphics, a sprite is a two-dimensional image or animation that is integrated into a larger scene. Initially used to describe graphical objects handled separately from the memory bitmap of a video display, the term has since been applied more loosely to refer to various elements of graphical overlays.

Originally, sprites were a method of integrating unrelated bitmaps so that they appeared to be part of the normal bitmap on a screen, such as creating an animated character that can be moved on a screen without altering the data defining the overall screen. Such sprites can be created by either electronic circuitry or software. In circuitry, a hardware sprite is a hardware construct that employs custom DMA channels to integrate visual elements with the main screen, superimposing two discrete video sources. Software can simulate this through specialized rendering methods.

As three-dimensional graphics became more prevalent, the term came to describe a technique whereby flat images are seamlessly integrated into complicated three-dimensional scenes, often as textures on 2D or 3D objects whose normal always faces the camera.

Related Glossary Terms:

Term Source: Chapter 15 – Influence of Games

 

Stereoscopic display

Stereoscopy (also called stereoscopics or 3-D imaging) refers to a technique for creating or enhancing the illusion of depth in an image by presenting two offset images separately to the left and right eye of the viewer. These two-dimensional images are then combined in the brain to give the perception of 3-D depth. Besides the technique of free viewing, which must be learned by the viewer, three strategies have been used to mechanically present different images to each eye: have the viewer wear eyeglasses to combine separate images from two offset sources, have the viewer wear eyeglasses to filter offset images from a single source separated to each eye, or have the light source split the images directionally into the viewer’s eyes (no glasses required; known as autostereoscopy).

Related Glossary Terms: Head-mounted displays

Term Source: Chapter 17 – Virtual Reality

 

Stop-Motion

Stop motion (also known as stop frame) is an animation technique to make a physically manipulated object appear to move on its own. The object is moved in small increments between individually photographed frames, creating the illusion of movement when the series of frames is played as a continuous sequence. Dolls with movable joints or clay figures are often used in stop motion for their ease of repositioning. Stop motion animation using clay is called clay animation or “claymation”.

Related Glossary Terms:

Term Source: Chapter 14 – CGI and Effects in Films and Music Videos

 

Storage tube

An electron tube in which information is stored as charges for a predetermined time.

Related Glossary Terms: Cathode Ray Tube, Vacuum tube

Term Source: Chapter 1 – Electronic devices

 

Storage tube vector graphics

A storage tube is a special monochromatic CRT whose screen has a kind of ‘memory’ (hence the name): when a portion of the screen is illuminated by the CRT’s electron gun, it stays lit until a screen erase command is given. Thus, screen update commands need only be sent once, and this allows the use of a slower data connection, typically serial—a feature very well adapted to computer terminal use in 1960s and 1970s computing. The two main advantages were:

▪ Very low bandwidth needs compared to vector graphics displays, thus allowing much longer cable distances between computer and terminal

▪ No need for display-local RAM (as in modern terminals), which was prohibitively expensive at the time.

Related Glossary Terms:

Term Source: Chapter 3 – Other output devices

 

Supercomputing

A powerful computer that can process large quantities of data of a similar type very quickly.

Related Glossary Terms:

Term Source: Chapter 4 – Bell Labs and Lawrence Livermore

 

Surface of revolution

A surface of revolution is a surface in Euclidean space created by rotating a curve (the generatrix) around a straight line in its plane (the axis).
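
The definition can be sampled numerically by sweeping each point of the generatrix around the axis; a minimal sketch, assuming a profile of (x, r) samples with r the distance from the axis of rotation (the function name is illustrative):

```python
import math

def surface_of_revolution(profile, n_steps):
    """Rotate a profile curve of (x, r) samples about the x axis,
    returning one ring of 3D points per profile sample."""
    rings = []
    for x, r in profile:
        ring = []
        for k in range(n_steps):
            theta = 2 * math.pi * k / n_steps
            ring.append((x, r * math.cos(theta), r * math.sin(theta)))
        rings.append(ring)
    return rings
```

Every generated point on a ring lies at the same distance r from the axis, which is exactly what rotating the curve guarantees.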

Related Glossary Terms:

Term Source: Chapter 20 – CG Icons

 

Sutherland, Ivan

Ivan Edward Sutherland (born May 16, 1938) is an American computer scientist and Internet pioneer. He received the Turing Award from the Association for Computing Machinery in 1988 for the invention of Sketchpad, an early predecessor to the sort of graphical user interface that has become ubiquitous in personal computers. He was a professor at the University of Utah when he co-founded the computer graphics company Evans and Sutherland (E&S) in 1968.

Related Glossary Terms:

Term Source: Chapter 3 – Work continues at MIT

 

Synthespians

A virtual human or digital clone is the creation or re-creation of a human being in image and voice using computer-generated imagery and sound. The process of creating such a virtual human on film, substituting for an existing actor, is known, after a 1992 book, as Schwarzeneggerization, and in general virtual humans employed in movies are known as synthespians, virtual actors, vactors, cyberstars, or “silicentric” actors.

Related Glossary Terms:

Term Source: Chapter 11 – Kleiser Walczak Construction Company

 

T

Taylor, Richard

Richard Taylor is a director, production designer and special effects supervisor. He was the Visual Effects Supervisor for the movie TRON and was responsible for organizing the effects and designing the film’s graphics and costumes, as well as blending the live-action footage with the CGI animation.

He began his career as an artist and holds a BFA in painting & drawing from the University of Utah. After graduation he co-founded Rainbow Jam, a multi-media light show and graphics company which gave concert performances in tandem with top musical groups such as The Grateful Dead, Santana, Led Zeppelin and Jethro Tull. In 1971 he received the Cole Porter Fellowship from USC where he earned an MFA in Print Making and Photography. In 1973, Richard joined Robert Abel and Associates. He directed many award-winning television commercials and received four Clio awards for his work on the 7UP Bubbles “See the Light”, 7UP “Uncola” and the Levi’s “Trademark” commercials. During his tenure at the Abel Studio he created many of the on air graphics for ABC television and designed new theatrical logos for CBS Theatrical Films and Columbia Pictures.

He supervised the design and construction of the miniatures and designed and directed special effects sequences for Paramount’s STAR TREK: THE MOTION PICTURE. In 1978, he became the creative director at Information International Inc. (III). While at III, Richard directed many of the first computer generated commercials and designed and directed the special effects for the feature film LOOKER, which was written and directed by Michael Crichton.

In 1981 Richard became the Special Effects Director of Walt Disney’s TRON, the innovative film that introduced America to the world of computer simulation. Following TRON, Richard opened the West Coast office of Magi Synthavision, the computer animation studio that along with III generated the computer simulation scenes for Tron. One of the first commercials Richard directed at Magi, “Worm War One” won the first Clio for Computer Animation.

He was also at Apogee Production Inc., Lee Lacy & Associates, Image Point Productions, Dryer/Taylor Productions, and Rhythm & Hues Studios.

Related Glossary Terms:

Term Source: Chapter 6 – MAGI

 

Terzopoulos, Demetri

Demetri Terzopoulos is a professor at the University of California, Los Angeles, where he directs the UCLA Computer Graphics and Vision Laboratory. After graduating from MIT, he was a research scientist at the MIT Artificial Intelligence Lab, then joined the University of Toronto. His published work is in computer vision, computer graphics, medical image analysis, computer-aided design, and artificial intelligence/life. Professor Terzopoulos is the recipient of a 2005 Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his pioneering work on realistic cloth simulation for motion pictures. In 2007, he was the inaugural recipient of the Computer Vision Significant Researcher Award from the IEEE “For his pioneering and sustained research on Deformable Models and their applications”.

Related Glossary Terms:

Term Source: Chapter 19 – Physical-based Modeling

 

Tesler, Larry

Larry Tesler is a computer scientist working in the field of human-computer interaction. Tesler studied computer science at Stanford and worked for a time at the Stanford Artificial Intelligence Laboratory. From 1973 to 1980, he was at Xerox PARC, where, among other things, he worked on the Gypsy word processor and Smalltalk. Copy and paste was first implemented in 1973-1976 by Tesler while working on the programming of Smalltalk-76 at Xerox Palo Alto Research Center.

In 1980, Tesler moved to Apple Computer, where he held various positions, including Vice President of AppleNet, Vice President of the Advanced Technology Group, and Chief Scientist. He worked on the Lisa team, and was enthusiastic about the development of the Macintosh as the successor to the Lisa. In 1985, Tesler worked with Niklaus Wirth to add object-oriented language extensions to the Pascal programming language, calling the new language Object Pascal. He also was instrumental in developing MacApp, one of the first class libraries for application development. Eventually, these two technologies became shipping Apple products. Starting in 1990, Tesler led the efforts to develop the Apple Newton, initially as Vice President of the Advanced Development Group, and then as Vice President of the Personal Interactive Electronics division.

Related Glossary Terms:

Term Source: Chapter 16 – Xerox PARC

 

Texture Mapping

Texture mapping is a method for adding detail, surface texture (a bitmap or raster image), or color to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Dr. Edwin Catmull in his Ph.D. thesis of 1974.

Related Glossary Terms: Multi-texturing

Term Source:

 

Transistor

A semiconductor device that amplifies, oscillates, or switches the flow of current between two terminals by varying the current or voltage between one of the terminals and a third: although much smaller in size than a vacuum tube, it performs similar functions without requiring current to heat a cathode.

Related Glossary Terms: Vacuum tube

Term Source: Chapter 1 – Electronic devices

 

Troubetzkoy, Eugene

Dr. Eugene Troubetzkoy held a PhD in Theoretical Physics from Columbia and worked as a nuclear physicist creating computer simulations of nuclear particle behavior. He is credited with helping develop ray tracing, a rendering technique for capturing 3D scenes with remarkable realism. He was one of the founders of Blue Sky.

Related Glossary Terms:

Term Source: Chapter 6 – MAGI

 

Turnkey

A computer system purchased from hardware and software vendors, customized and put in working order by a firm that then sells the completed system to the client that ordered it.

Related Glossary Terms:

Term Source:

Chapter 10 – Auto-trol / Applicon / ComputerVision

 

Tweening

Short for in-betweening, the process of generating intermediate frames between two images to give the appearance that the first image evolves smoothly into the second image. Tweening is a key process in all types of animation, including computer animation. Sophisticated animation software enables you to identify specific objects in an image and define how they should move and change during the tweening process.
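
A minimal sketch of the idea, linearly interpolating a position between two keyframes (names are illustrative; production software interpolates many attributes and typically applies easing curves rather than straight lines):

```python
def tween(start, end, n_frames):
    """Generate the intermediate frames between two keyframe positions,
    endpoints included, by linear interpolation."""
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)  # t runs from 0 (first key) to 1 (second key)
        frames.append(tuple(a + t * (b - a) for a, b in zip(start, end)))
    return frames
```

For instance, tweening from (0, 0) to (10, 20) over three frames yields the midpoint (5, 10) as the single in-between.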

Related Glossary Terms: Keyframe

Term Source: Chapter 8 – Introduction

 

U/V

Vacuum tube

  1. Also called, especially in British usage, vacuum valve: an electron tube from which almost all air or gas has been evacuated, formerly used extensively in radio and electronics.
  2. A sealed glass tube with electrodes and a partial vacuum or a highly rarefied gas, used to observe the effects of a discharge of electricity passed through it.

The triode vacuum tube was invented by Lee de Forest in 1906. It was an improvement on the Fleming tube, or Fleming valve, introduced by John Ambrose Fleming two years earlier. De Forest’s tube contains three components: the anode, the cathode and a control grid. It can therefore control the flow of electrons between the anode and cathode using the grid, and so act as a switch or an amplifier.

Related Glossary Terms: Cathode Ray Tube, Transistor

Term Source: Chapter 1 – Electronic devices

 

Van Dam, Andy

Andries “Andy” van Dam (born 8 December 1938, Groningen) is a Dutch-born American professor of computer science and former Vice-President for Research at Brown University in Providence, Rhode Island. Together with Ted Nelson he contributed to the first hypertext system, the Hypertext Editing System (HES), in the late 1960s. He co-authored Computer Graphics: Principles and Practice along with J.D. Foley, S.K. Feiner, and John Hughes. He also co-founded the precursor of today’s ACM SIGGRAPH conference.

Related Glossary Terms:

Term Source: Chapter 5 – Other labs and NSF

 

Vector Graphics

Vector graphics is the use of geometrical primitives such as points, lines, curves, and shapes or polygons, all defined by mathematical expressions, to represent images in computer graphics. “Vector”, in this context, refers to a mathematically defined path, not merely a straight line.

Vector graphics is based on images made up of vectors (also called paths, or strokes) which lead through locations called control points. Each of these points has a definite position on the x and y axes of the work plane. Each point also carries a variety of data, including its location in the work space and the direction of the vector (which defines the direction of the path). Each path can be assigned a color, a shape, a thickness, and a fill. None of this substantially affects file size, because all of the information resides in the structure; it simply describes how to draw the vector.
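As an illustrative sketch (function and point names are hypothetical), a curved vector stroke can be evaluated from its control points, which is all a renderer needs to draw it at any resolution:

```python
def quadratic_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1],
    given start point p0, control point p1, and end point p2."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

# Sample a curved stroke defined by three control points into 11 points.
samples = [quadratic_bezier((0, 0), (50, 100), (100, 0), i / 10)
           for i in range(11)]
```

Because only the three control points are stored, the stroke can be rescaled or re-rendered at any size without loss, which is the essential difference from raster images.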

Related Glossary Terms: Cathode Ray Tube, Raster-scanned

Term Source: Chapter 1 – Electronic devices

 

Video synthesizer

A video synthesizer is a device that electronically creates a video signal. It is able to generate a variety of visual material without camera input through the use of internal video pattern generators. It can also accept and “clean up and enhance” or “distort” live television camera imagery. The synthesizer creates a wide range of imagery through purely electronic manipulations. This imagery is visible within the output video signal when this signal is displayed. The output video signal can be viewed on a wide range of conventional video equipment, such as TV monitors, theater video projectors, computer displays, etc.

Related Glossary Terms:

Term Source: Chapter 12 – Image West / Dolphin Productions / Ron Hays

 

Virtual memory

In computing, virtual memory is a memory management technique developed for multitasking kernels. This technique virtualizes a computer architecture’s various forms of computer data storage (such as random-access memory and disk storage), allowing a program to be designed as though there is only one kind of memory, “virtual” memory, which behaves like directly addressable read/write memory (RAM).
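A minimal sketch of the address translation that makes virtual memory work, assuming a toy page table and a common 4 KB page size (all names and numbers hypothetical):

```python
PAGE_SIZE = 4096  # bytes per page; 4 KB is a common choice

# Hypothetical page table: virtual page number -> physical frame number.
# Pages not listed here are not resident in RAM.
page_table = {0: 7, 1: 3, 2: 9}

def translate(virtual_addr):
    """Translate a virtual address to a physical address via the page table."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn not in page_table:
        # A real system would trap to the kernel, which could fetch the
        # page from disk storage -- the "virtualization" of memory.
        raise LookupError("page fault: page %d not resident" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset
```

The program only ever sees virtual addresses; the kernel and hardware decide, per page, whether the backing storage is RAM or disk.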

Related Glossary Terms:

Term Source: Chapter 3 – TX-2 and DEC

 

Virtual reality

Virtual reality (VR) is a term that applies to computer-simulated environments that can simulate physical presence in places in the real world, as well as in imaginary worlds. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications. Virtual reality also covers remote-communication environments that provide the virtual presence of users, via the concepts of telepresence and telexistence, or a virtual artifact (VA), whether through standard input devices such as a keyboard and mouse or through multimodal devices such as a wired glove, trackers such as the Polhemus, and omnidirectional treadmills.

Related Glossary Terms: Augmented reality

Term Source: Chapter 17 – Virtual Reality

 

VistaVision

VistaVision is a higher resolution, widescreen variant of the 35mm motion picture film format which was created by engineers at Paramount Pictures in 1954.

Paramount did not use anamorphic processes such as CinemaScope but refined the quality of their flat widescreen system by orienting the 35mm negative horizontally in the camera gate and shooting onto a larger area, which yielded a finer-grained projection print.

Related Glossary Terms:

Term Source: Chapter 11 – Kleiser Walczak Construction Company

 

Visualization

Visualization is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci’s revolutionary methods of technical drawing for engineering and scientific purposes.

Visualization today has ever-expanding applications in science, education, engineering (e.g., product visualization), interactive multimedia, medicine, etc. Typical of a visualization application is the field of computer graphics. The invention of computer graphics may be the most important development in visualization since the invention of central perspective in the Renaissance period. The development of animation also helped advance visualization.

Related Glossary Terms: Scientific visualization

Term Source: Chapter 18 – Introduction

 

VLSI

Very-large-scale integration (VLSI) is the process of creating integrated circuits by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device.

Related Glossary Terms:

Term Source: Chapter 15 – Apollo / Sun / SGI

 

Volume Rendering

In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set.

A typical 3D data set is a group of 2D slice images acquired by a CT, MRI, or MicroCT scanner. Usually these are acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel represented by a single value that is obtained by sampling the immediate area surrounding the voxel.
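One simple volume rendering technique, maximum-intensity projection, can be sketched as follows (a toy example on a tiny hand-written volume, not a production renderer):

```python
# A tiny 3D data set: a list of 2D "slices" indexed [z][y][x],
# as might come from a CT or MRI scanner.
volume = [
    [[0, 1], [2, 3]],  # slice z = 0
    [[5, 0], [1, 9]],  # slice z = 1
]

def max_intensity_projection(vol):
    """Project a 3D volume onto a 2D image by keeping the maximum value
    encountered along each ray cast through the stack of slices."""
    ny, nx = len(vol[0]), len(vol[0][0])
    return [[max(sl[y][x] for sl in vol) for x in range(nx)]
            for y in range(ny)]

image = max_intensity_projection(volume)
```

Production volume renderers instead integrate color and opacity along each ray via a transfer function, but the "one 2D pixel per ray through the 3D grid" structure is the same.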

Related Glossary Terms: Volume visualization

Term Source: Chapter 18 – Volumes

 

Volume visualization

Volume visualization (Kaufman, 1992) is a direct technique for visualizing volume primitives without any intermediate conversion of the volumetric data set to a surface representation.

Related Glossary Terms: Volume Rendering

Term Source: Chapter 18 – Introduction

 

Voxels

A voxel (volumetric pixel or Volumetric Picture Element) is a volume element, representing a value on a regular grid in three dimensional space. This is analogous to a pixel, which represents 2D image data in a bitmap (which is sometimes referred to as a pixmap). As with pixels in a bitmap, voxels themselves do not typically have their position (their coordinates) explicitly encoded along with their values. Instead, the position of a voxel is inferred based upon its position relative to other voxels (i.e., its position in the data structure that makes up a single volumetric image). In contrast to pixels and voxels, points and polygons are often explicitly represented by the coordinates of their vertices. A direct consequence of this difference is that polygons are able to efficiently represent simple 3D structures with lots of empty or homogeneously filled space, while voxels are good at representing regularly sampled spaces that are non-homogeneously filled.
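The point that voxel coordinates are implicit in storage order, rather than stored with each value, can be illustrated with a small sketch (names hypothetical) that recovers a voxel's grid position from its index in a flat array:

```python
def voxel_coords(index, nx, ny):
    """Recover the (x, y, z) grid position of a voxel from its position in
    a flat array stored in x-fastest order.  Nothing is looked up: the
    coordinates follow purely from the index and the grid dimensions."""
    z, rem = divmod(index, nx * ny)
    y, x = divmod(rem, nx)
    return (x, y, z)

def voxel_index(x, y, z, nx, ny):
    """Inverse mapping: grid position back to flat-array index."""
    return (z * ny + y) * nx + x
```

This is why a voxel grid needs no per-element coordinates, while a polygon mesh must store explicit vertex positions.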

Related Glossary Terms: Pixel

Term Source: Chapter 18 – Volumes

 

W

Walker, John

John Walker is a computer programmer and a co-founder of the computer-aided design software company Autodesk, and a co-author of early versions of AutoCAD, a product Autodesk originally acquired from programmer Michael Riddle.

Related Glossary Terms:

Term Source: Chapter 8 – Autodesk/Kinetix/Discreet

 

Warnock, John

John Warnock is best known as the co-founder with Charles Geschke of Adobe Systems Inc., the graphics and publishing software company. Warnock has pioneered the development of graphics, publishing, Web and electronic document technologies that have revolutionized the field of publishing and visual communications. He was part of the pioneering work at the University of Utah while a graduate student there.

In 1976, while Warnock worked at Evans & Sutherland, the computer graphics company, the concepts of the PostScript language were seeded. Prior to co-founding Adobe, Warnock worked at Xerox’s Palo Alto Research Center (Xerox PARC). Unable to convince Xerox management to commercialize the InterPress graphics language for controlling printing, he left Xerox to start Adobe with Geschke in 1982. At their new company, they developed an equivalent technology, PostScript, from scratch, and brought it to market for Apple’s LaserWriter in 1984.

In his 1969 doctoral thesis, Warnock invented the Warnock algorithm for hidden surface determination in computer graphics. It works by recursive subdivision of a scene until areas are obtained that are trivial to compute. It solves the problem of rendering a complicated image by avoiding the problem. If the scene is simple enough to compute then it is rendered; otherwise it is divided into smaller parts and the process is repeated.
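A simplified sketch of the recursive subdivision described above, using axis-aligned boxes in place of real polygons and a deliberately crude "trivial" test (an illustration of the idea, not Warnock's original formulation):

```python
def warnock(region, polygons, min_size=1):
    """Warnock-style subdivision.  If a region is trivial (at most one
    polygon overlaps it, or it has shrunk to pixel size), it is rendered
    directly; otherwise it is split into four quadrants and the process
    repeats.  Regions and polygons are boxes (x0, y0, x1, y1)."""
    def overlaps(p):
        return not (p[2] <= region[0] or p[0] >= region[2] or
                    p[3] <= region[1] or p[1] >= region[3])

    inside = [p for p in polygons if overlaps(p)]
    w, h = region[2] - region[0], region[3] - region[1]
    if len(inside) <= 1 or (w <= min_size and h <= min_size):
        return [(region, inside)]  # trivial: "render" this region

    mx, my = region[0] + w / 2, region[1] + h / 2
    quadrants = [(region[0], region[1], mx, my),
                 (mx, region[1], region[2], my),
                 (region[0], my, mx, region[3]),
                 (mx, my, region[2], region[3])]
    leaves = []
    for q in quadrants:
        leaves.extend(warnock(q, inside, min_size))
    return leaves

leaves = warnock((0, 0, 8, 8), [(0, 0, 2, 2), (6, 6, 8, 8)])
```

Note how the complicated case is never solved directly: the algorithm only ever renders regions that have become simple, exactly as the definition above states.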

In the Spring of 1991, Warnock outlined a system called “Camelot” that evolved into the Portable Document Format (PDF) file-format. The goal of Camelot was to “effectively capture documents from any application, send electronic versions of these documents anywhere, and view and print these documents on any machines”.

One of Adobe’s popular typefaces, Warnock, is named after him.

Related Glossary Terms:

Term Source: Chapter 16 – Xerox PARC

 

Wedge, Chris

Chris Wedge received his BFA in Film from State University of New York at Purchase in Purchase, New York in 1981, and subsequently earned his MA in computer graphics and art education at the Ohio State University. He has taught animation at the School of Visual Arts in New York City where he met his future film directing partner, Carlos Saldanha. Wedge is co-founder and Vice President of Creative Development at Blue Sky Studios and is the owner of WedgeWorks, a film production company founded by Wedge.

In 1982, Wedge worked for MAGI/SynthaVision, where he was a principal animator on the Disney film Tron, credited as a scene programmer. Some of his other works include Where the Wild Things Are (1983), Dinosaur Bob, George Shrinks, and Santa Calls. In 1998, he won an Academy Award for the short animated film, Bunny. He is also the voice of Scrat in the Ice Age film series, performing the character’s “squeaks and squeals”.

Related Glossary Terms:

Term Source: Chapter 6 – MAGI

 

Wein, Marceli

NRC scientists Nestor Burtnyk and Marceli Wein were honored at the Festival of Computer Animation in Toronto, where they were recognized as the Fathers of Computer Animation Technology in Canada. Burtnyk, who began his career with NRC in 1950, started Canada’s first substantive computer graphics research project in the 1960s. Wein, who joined this same project in 1966, had been exposed to the potential of computer imaging while studying at McGill. He teamed up with Burtnyk to pursue this promising field.

One of their main contributions was the Academy Award nominated film “Hunger/La Faim” (produced by the National Film Board of Canada) using their famous key-frame animation approach and system.

Related Glossary Terms: Burtnyk, Nestor

Term Source: Chapter 4 – JPL and National Research Council of Canada

 

Whirlwind

The Whirlwind computer was developed at the Massachusetts Institute of Technology. It was the first computer that operated in real time, the first to use video displays for output, and the first that was not simply an electronic replacement of an older mechanical system. Its development led directly to the United States Air Force’s Semi-Automatic Ground Environment (SAGE) system, and indirectly to almost all business computers and minicomputers in the 1960s.

Related Glossary Terms:

Term Source: Chapter 2 – Whirlwind and SAGE

 

Whitney, John Sr.

John Whitney, Sr. (April 8, 1917 – September 22, 1995) was an American animator, composer and inventor, widely considered to be one of the fathers of computer animation.

Related Glossary Terms:

Term Source: Chapter 2 – Programming and Artistry

 

Whitted, Turner

Turner Whitted is senior researcher and area manager at Microsoft Research. Whitted is an Association for Computing Machinery fellow and a member of the National Academy of Engineering. Whitted has served as a distinguished lecturer in the Rice University Department of Electrical and Computer Engineering. He is on the editorial boards of IEEE Computer Graphics and Applications and Association for Computing Machinery Transactions on Graphics. Whitted is credited with being the “father” of ray tracing, as exemplified by his famous short movie The Compleat Angler.

Related Glossary Terms:

Term Source: Chapter 5 – Cal Tech and North Carolina State

 

WIMP

In human–computer interaction, WIMP stands for “windows, icons, menus, pointer”, denoting a style of interaction using these elements of the user interface. It was coined by Merzouga Wilberts in 1980. Other expansions are sometimes used, substituting “mouse” and “mice” or “pull-down menu” and “pointing” for pointer and menus, respectively.

Related Glossary Terms: GUI (Graphical User Interface)

Term Source: Chapter 16 – Apple Computer

 

Wireframe

A wire frame model is a visual presentation of a three-dimensional or physical object used in 3D computer graphics. It is created by specifying each edge of the physical object where two mathematically continuous smooth surfaces meet, or by connecting an object’s constituent vertices using straight lines or curves. The object is projected onto the computer screen by drawing lines at the location of each edge. The term wireframe comes from designers using metal wire to represent the three-dimensional shape of solid objects. 3D wireframe models allow solids and solid surfaces to be constructed and manipulated, while 3D solid modeling techniques draw higher-quality representations of solids than conventional line drawing.
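A minimal sketch of wireframe display (all names hypothetical): enumerate a cube's vertices, find the edges that connect them, and perspective-project each edge's endpoints into 2D line segments ready for drawing:

```python
# Unit cube: 8 vertices, and the 12 edges connecting vertices that
# differ in exactly one coordinate.
vertices = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
edges = [(a, b) for a in range(8) for b in range(a + 1, 8)
         if sum(va != vb for va, vb in zip(vertices[a], vertices[b])) == 1]

def project(v, distance=4.0):
    """Perspective-project a 3D point onto the screen plane, with the
    camera on the z axis, `distance` units from the origin."""
    x, y, z = v
    scale = distance / (distance - z)
    return (x * scale, y * scale)

# The wireframe itself: one 2D line segment per edge.
segments = [(project(vertices[a]), project(vertices[b])) for a, b in edges]
```

Handing `segments` to any 2D line-drawing routine produces the familiar see-through cube; hidden-line removal is a separate problem the basic wireframe does not solve.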

Related Glossary Terms:

Term Source: Chapter 15 – Graphics Accelerators

 

Witkin, Andrew

Andrew P. Witkin was an American computer scientist who made major contributions in computer vision and computer graphics. Witkin worked briefly at SRI International on computer vision, then moved to Schlumberger’s Fairchild Laboratory for Artificial Intelligence Research, later Schlumberger Palo Alto Research, where he led research in computer vision and graphics; here he invented scale-space filtering, scale-space segmentation and Active Contour Models. From 1988 to 1998 he was a professor of computer science, robotics, and art at Carnegie-Mellon University, after which he joined Pixar in Emeryville, California. At CMU and Pixar, with his colleagues he developed the methods and simulators used to model and render natural-looking cloth, hair, water, and other complex aspects of modern computer animation. Witkin received the ACM SIGGRAPH Computer Graphics Achievement Award in 2001 “for his pioneering work in bringing a physics based approach to computer graphics.” As senior scientist at Pixar Animation Studios, Witkin received a technical academy award in 2006 for “pioneering work in physically-based computer-generated techniques used to simulate realistic cloth in motion pictures.”

Related Glossary Terms:

Term Source: Chapter 19 – Physical-based Modeling

 

Wozniak, Steve

Steve Wozniak (the Woz) is an American computer engineer and programmer who founded Apple Computer (now Apple Inc.) with Steve Jobs and Ronald Wayne. Wozniak is the inventor of the Apple I computer and its successor, the Apple II computer, which contributed significantly to the microcomputer revolution.

Related Glossary Terms:

Term Source: Chapter 16 – Apple Computer

 

WYSIWYG

An acronym for What You See Is What You Get. The term is used in computing to describe a system in which content (text and graphics) displayed onscreen during editing appears in a form closely corresponding to its appearance when printed or displayed as a finished product, which might be a printed document, web page, or slide presentation.

Related Glossary Terms:

Term Source: Chapter 16 – Xerox PARC

 

X/Y/Z

Zajac, Edward

Zajac is recognized internationally as the first person in history to create computer animation, initially as a visual means of sharing with his colleagues the positions of satellites as they orbit Earth. Appearing antiquated and simple in today’s world, his early computer-animated films produced at Bell Labs won much acclaim at the time, and awards in the U.S. and overseas, and are considered classics today.

Related Glossary Terms:

Term Source: Chapter 4 – Bell Labs and Lawrence Livermore

License