Film and Photography

Film is traditionally understood to be the material medium that unites photography and cinematography. Photography and cinema are, together, distinguished from other media by sharing this material basis. In fact, the earliest photographers did not use film, but, rather, glass or polished metal plates, and a modern photographer or cinematographer is increasingly likely to use electronic means of recording rather than film. Nevertheless, film is the best starting point for appreciating both what these media have in common and what differentiates them. Specifically, they differ in respect of their representation of time: photography produces a still image of a moment frozen in time, whereas cinema employs film to produce a moving image that shows time passing. The representation of time, the technological control of time, and the viewer’s experience of time are all features that have crucial importance for the application and the value of film media.

The term photography was established by John Herschel (1792-1871), in an 1839 presentation to the Royal Society in England. The Greek words photos (light) and graphis (paintbrush) or graphe (drawing) convey the idea of “drawing with light.” Light from a scene is recorded on a photosensitive surface, such as chemically treated film or an electronic sensor. In the case of photography, the data are subsequently processed and printed to produce a still image. A photographic image, or photograph, is sometimes called an exposure because the photosensitive surface is exposed to light for a finite duration of time and records only the data available during that time. In the case of cinema, a large number of individual exposures are projected sequentially onto a screen and, from the perspective of the viewer, the result is a moving image—known as a motion picture or movie. The term cinematography derives from kinesis (movement). The photographic image typically has just one relevant time constraint: the duration of the exposure time. The cinematic image has other relevant time constraints: in addition to the exposure time, the image is also a function of the speed of the cine-camera—the rate at which frames are exposed—and the projection speed—the rate at which frames are screened.

History of Photography and Cinema

A camera obscura (Latin: camera—room, obscura—dark) is a darkened room in which light from an outside scene is channeled through a narrow aperture, by lenses and mirrors, to form an image on a screen. However, the resulting image is not a photograph; instead, the image we view inside a camera obscura moves in real time on the screen and leaves no permanent record. Artists can produce a hand-traced outline from the image, and light-sensitive surfaces can react to produce a temporary image; but these are not photographs. A photograph occurs when an image produced by exposure to light is fixed and made to persist over time. The process can be entirely chemical or mechanical, so it can occur without human involvement. This led the first practitioners to claim that photography was the “spontaneous” or “automatic” reproduction of nature by itself, and some argued that photography was a discovery rather than an invention. The term camera is now used for any mechanical apparatus that records data in this way. A camera typically has three key elements: lens, shutter, and photosensitive surface. The lens, along with an adjustable opening (the aperture), focuses light onto the photosensitive surface. The shutter mechanism opens and shuts to control the duration of the exposure to light.

The Heliograph

In 1826, Joseph Nicephore Niepce (1765-1833) succeeded in producing a fixed image from a camera obscura by a process he called Heliography (sun-writing). His image of the view from an attic window achieved important status as the first photograph precisely because it is an enduring image created entirely by an exposure of light on a photosensitive surface. Niepce is not considered the sole originator of photography, as history recognizes technical contributions from numerous people working independently and simultaneously. As many as 24 individuals have some claim to have invented photography. Two early contributions to the photographic process deserve special note: the daguerreotype and the calotype.

The Daguerreotype

Niepce’s 1826 heliograph, View From a Window at Gras, required an exposure time of some 8 hours. With the aim of creating a faster exposure time, he shared details of his process with Louis-Jacques-Mande Daguerre (1787-1851), who took over the project after Niepce’s death and gave his name to the resulting process. A daguerreotype began as a sheet of copper, plated with silver and highly polished. The plate was treated with iodine fumes to give it a light-sensitive surface of silver iodide. The first daguerreotype images had an exposure time of 4 or 5 minutes, and the photographic image became visible when the latent image was first treated with mercury fumes, then fixed with a solution of table salt (sodium chloride). The process results in a “direct positive” print, which is extremely delicate, and each image is unique. Still Life (Interior of a Cabinet of Curiosities), 1837, is thought to be the oldest surviving example. In 1839 the process was made public to the world by the French Academy of Sciences.

The Calotype, or Talbotype

In England, William Henry Fox Talbot (1800-1877) developed a process he called “photogenic drawing.” Unlike the daguerreotype, which produced a single inverted image, Talbot’s process involved the creation of a negative on chemically sensitized paper, from which multiple positive prints could be made. He called this image a calotype—from kalos (beauty); an example of a calotype negative survives from 1835, titled Lattice Window Taken With the Camera Obscura. Calotypes could be reproduced many times over, but, hindered by long exposure times and problems arising from paper texture, the positive prints were usually indistinct. Daguerreotypes offered a superior quality image, with exceptionally sharp detail. For many years the daguerreotype was more popular among the public, particularly for portraiture. However, in 1851 the collodion wet-plate process replaced Talbot’s paper negatives with glass plates and speeded up the exposure time. Ultimately, Talbot’s negative-positive process proved to be the prototype for the future of commercial and popular photography.

The Stereograph

Throughout the 19th and early 20th centuries, the stereograph, from stereos (“solid”) and graphein (“to draw”), was one of the most popular forms of photography. A stereograph is created by recording two images of a scene, simultaneously, but from slightly different locations. The two prints were placed side by side in a special viewer, the stereoscope (from skopein, “to look through”), one print to be viewed by the left eye and one by the right eye. Viewing a stereograph in this way generates an illusion of three-dimensional space. Such was the demand for this format that a stereoscopic camera was produced. Equipped with two lenses, this camera was specifically designed to take two photographs at the same time. This format has subsequently all but disappeared, not least because the stillness of the scene fails to entertain modern eyes more accustomed to moving images.

The Kinetoscope and Cinematograph

Even prior to the invention of photography, various devices had been designed to produce a moving image from static pictures. A moving picture is, really, an illusion of movement, created when one image is replaced by another in such a short period of time that the human eye registers the change as though it were the movement of an object. This phenomenon is known as “the persistence of vision.” Many of the earliest devices used a rotating cylinder to generate a rapid succession of images. The viewer looked at a fixed point through a series of narrow slits in the rotating drum and saw an apparently moving image. These devices were merely entertaining parlor toys, whereas the invention of cinema in the late 19th century owed its origins to a breakthrough from the world of photography.

Two basic elements are essential for cinema: First, we need an apparatus that records a series of images in sequential time order; second, we need an apparatus that screens the series of images in sequential time order. There are numerous different ways to satisfy one or the other of these requirements, but the invention of celluloid photographic film in the late 1880s provided a straightforward way to satisfy both at once. The advent of film caused the birth of cinema, so it is understandable that the term film is used, both as a noun and in verb form (“to film”), as an abbreviation for cinema, but not for photography, even though film has been the material medium of both. Roll film provided not just a means of recording a sequence of exposures, but, crucially, also a means of controlling the time sequence when screening the images. The innovations leading up to this point show why film provided the breakthrough that led to the cinematic revolution.

The pioneers of photography investigated two approaches for recording a series of time-ordered images: one solution is to have multiple cameras, each making a single exposure at timed intervals; another solution is to have a single camera making multiple exposures one after another. In the 1870s, Eadweard Muybridge (1830-1904) produced a time-ordered sequence of a running horse using a row of 12 cameras. The shutter of each camera was triggered by cotton threads as the horse ran past. In 1874 scientists were able to use a “revolver camera” to record the transit of the planet Venus across the sun. This camera gun, devised for the purpose by Pierre Jules Cesar Janssen (1824-1907), made 48 individual exposures around a circular daguerreotype plate.

Inspired by these ideas, Etienne Jules Marey (1830-1904) developed a process he called chronophotography (time photography) in the early 1880s. Marey used a spinning disk in front of an ordinary camera lens. The disk had slots at regular intervals so that a moving subject appeared at a different position every time an open slot permitted an exposure. The result showed the movement of the subject as a series of overlapping images on a single photographic plate. Eventually, Marey designed a camera to produce a series of multiple exposures that did not overlap. To achieve this he required the light-sensitive surface to move for each new exposure, so the medium needed to be flexible and robust. Glass and metal plates were unsuitable, but, when celluloid photographic film became available in 1889, Marey was able to produce a short “film” showing the movement of the human hand. A contemporary inventor, Louis Le Prince (1841-1890), patented a 16-lens motion-capture camera in 1886, but this was unsatisfactory as it recorded images from slightly different angles. By 1888 Le Prince had developed a single-lens camera-projector and used this to record exposures at 12 frames per second on paper-based Eastman Kodak film. Fragments of his first motion picture, Roundhay Garden Scene, have survived, and Le Prince is one of several figures who are credited with the invention of cinema.

Thomas Edison (1847-1931) invented the kinetoscope in 1894. This was known also as the peep show because images were screened inside a box and viewed by one person at a time. The kinetoscope screened a 50-foot length of celluloid film at 48 frames per second, and the entire show lasted only 13 seconds. In 1895 the Lumiere brothers, Auguste (1862-1954) and Louis (1864-1948), unveiled their Cinematograph, which used a film projector to screen films to a group audience. The majority of early films were studies of human or animal movement and, although cinema would very quickly expand into creative and unconventional treatments of the time sequence, the first film audiences were fascinated simply by viewing the moving image of an ordinary event occurring in real time.

Technology and the Control of Time

Photography and cinema are highly dependent on technical apparatus and therefore particularly sensitive to technological progress. Whenever practitioners have reached the limits of existing equipment, they have invented new camera apparatuses and devised innovative means of production. In this way the creative impetus has driven technological progress, but new technical advances have also produced surprising applications. A survey of the most significant technological advances points to one primary technical challenge: the control of time. The production of a photograph has three stages: preparation time, exposure time, and processing time. Technological progress has dramatically altered the time and resources needed for each of these stages.

Film and Film Speed

Early photographers were forced to spend hours working with raw materials to prepare wet-plates. The laborious preparation time for each photograph removed any prospect of spontaneity, and the size and amount of equipment required were an encumbrance to the photographer. Photography required wealth, leisure time, technical skill, and some degree of physical strength. A pre-prepared dry-plate process was developed in the 1880s, and in 1889 George Eastman (1854-1932) launched the first celluloid roll film. These technical advances dramatically reduced the burden of preparation time and expensive equipment, but also generated an important further advantage: faster film speeds. Film “speed” is a measure of the threshold sensitivity of the film surface. The current international measurement of film speed is the ISO scale. A representative section of the scale (from slow to fast) is: ISO 100, ISO 200, ISO 400, ISO 800. With greater sensitivity, a film is quicker to respond to the available light. Hence as film speed is increased, exposure times can be decreased. Long exposure times require a camera to be securely mounted, but short exposures make it possible to use handheld cameras. Thus by the time the first drop-in film cartridge was marketed in 1963, cameras were cheap, quick, simple to use, and easily portable, making spontaneous photography available in any location and at any moment.
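The reciprocal trade-off between film speed and exposure time can be sketched in a few lines (a minimal illustration, assuming fixed aperture and lighting; the function name and numbers are hypothetical, not drawn from any standard):

```python
# Illustrative sketch: each doubling of ISO film speed halves the
# exposure time needed, assuming aperture and lighting stay fixed.

def exposure_time(metered_time_s, metered_iso, new_iso):
    """Exposure time (seconds) needed at new_iso, given a metered
    time at metered_iso under the same light and aperture."""
    return metered_time_s * (metered_iso / new_iso)

# If ISO 100 film needs 1/60 of a second, ISO 400 film
# (four times as sensitive) needs only 1/240 of a second.
print(exposure_time(1 / 60, 100, 400))
```

This is why faster films freed photographers from tripods: a quarter of the exposure time is well within the range a handheld camera can tolerate without blur.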

Automation: Shutter, Exposure, and Flash

Exposure times can be reduced by faster film speed, but also by increasing the level of available light. From the 1860s, photographers created additional light by burning highly explosive magnesium flash powder. Flash technology became less dangerous with the invention of the flashbulb in the 1920s, but photographers still needed to activate shutter and flash separately. In the mid-1930s the mechanisms for flash and shutter were made to synchronize, thus providing far greater control over the timing of the photograph. By the 1950s this became a standard feature on popular cameras.

The first cameras were limited by manual exposure times. A photographer removed the lens cap to start the exposure and replaced the cap to finish. The introduction of mechanized shutters provided much greater precision and control and made it possible to work with exposure times far faster than human action allows. By the end of the 19th century, shutter speeds as high as 1/5000th of a second were possible. When combined with roll film, faster shutter speeds also increased the number of separate exposures that could be taken during a given period of time. The fastest modern SLR (single lens reflex) cameras can make 10 exposures per second.

Even with these mechanical aids, photography still required human expertise for judging the optimum exposure time. However, in 1938, Kodak manufactured a camera equipped with a sensor to calculate the exposure time. Combined with the other technical advances, it was marketed as the world’s first fully automated camera.

Developing and Printing

Wet-plate photography required that photographs be processed within a short time after the plate had been exposed to light; otherwise the image would not be preserved. This made it extremely difficult for a photographer to freely choose the time to take a photograph, a constraint that is evident in the subject matters favored in the 19th century. For example, photojournalism during the American Civil War was limited to images of the battlefield before or after the action took place. Even when dry-plate and film technology made it possible to process photographs after a reasonable delay, photographers were forced to spend long hours in the darkroom to develop film and enlarge prints from the negatives.

In the 1880s, Eastman marketed the first commercial developing and printing service, which gave the public an alternative to this time-consuming occupation. Kodak publicized the service with a now legendary slogan: “You press the button, we do the rest.” With the advent of a developing and printing service, photography enthusiasts were free to pursue their hobby in two alternative directions. Many welcomed the convenience of the commercial service, but others saw processing as an integral part of generating their desired final image. This included techniques such as dodging and burning, where light projected through the negative in the enlarger is selectively blocked to compensate for areas of overexposure or underexposure in the negative. In the 20th century, photography clubs became enormously popular and provided ample evidence that photographers across the social spectrum were prepared to invest the time required to master the darkroom arts.

The introduction of color film revealed the extent of this trend. Setting aside earlier experimental versions, the first color film was introduced for movie cameras in 1935, followed by a version for still cameras in 1936. The procedures required for developing color film and producing color prints were disproportionately laborious, even for the enthusiast, and the results were typically less successful than commercial prints. Hence, although color photography became increasingly common for the commercial mass market, enthusiasts in photography clubs preferred to print from black and white film, at least until the advent of digital photography in the late 20th century.

Whether processed by a commercial company or in the darkroom at a photography club, there was still a considerable time lag between making an exposure and viewing the printed image. To satisfy demand for instantaneous results, the first “instant” camera was produced by the Polaroid Corporation in 1948. Invented by Edwin Land (1909-1991), the automated process was able to deliver a sepia-colored print within one minute. The mechanism spreads chemicals over the surface of the film, and each chemical reacts according to a time delay, so that each layer of the film is processed in the right order. By 1963 the Polaroid camera was able to produce “instant” prints in color.

Digital cameras do not require a chemical developing process, and an electronic image can be viewed immediately on the camera’s own LCD (liquid crystal display) screen. The speed with which a photograph can be viewed is highly important because the power of film media does not lie just in its ability to record events, but also in the possibilities for disseminating images widely and quickly. A daguerreotype image could be disseminated only if it was first translated into an engraving, so rapid publication, such as for newspapers, was out of the question. In the present day, it is commonplace for a mobile phone to contain a digital camera, so photographs and video clips can easily be sent around the world within seconds.

Camera Speed and Projection Speed

The earliest cine-cameras were cranked by hand, so the speed at which exposures were recorded could be uneven. Most early movies were recorded at between 16 and 23 frames per second. However, nitrate-based film was highly flammable and liable to catch fire if projected at low speeds. Movies were therefore often projected at speeds higher than 18 frames per second, and this produced the appearance of accelerated, unnatural action in the moving image. This is because the perceived speed of an event, from the perspective of a viewer, is calculated by dividing the projection speed by the camera speed. At present, when a moving picture is screened at a cinema theater, images are invariably projected at a rate of 24 frames per second. Hence, when old movies are screened, the images appear to move at an absurdly fast pace. In some cases this phenomenon was deliberately exploited to comic effect by cinematographers.
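The arithmetic behind this effect is a single division, which can be sketched as follows (the function name is for illustration only):

```python
# Apparent speed of on-screen action relative to real time:
# the projection speed divided by the camera (recording) speed.

def apparent_speed_factor(projection_fps, camera_fps):
    """How many times faster than real time the action appears."""
    return projection_fps / camera_fps

# A movie cranked at 16 frames per second but projected at 24
# appears to run one and a half times faster than real life.
print(apparent_speed_factor(24, 16))  # 1.5
```

A factor above 1 yields the comically hurried motion of old footage; a factor of exactly 1, with projection matching recording, reproduces real time.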

Even when the standard projection speed of 24 frames per second is maintained, from the perspective of the audience, events can be made to occur faster or slower than the real-time event. The techniques of high-speed photography and time-lapse photography both exploit the principle that the time-span of the recording can differ from the time-span of the screening. A cine-camera typically exposes film at a rate of 24 frames per second. Each frame is exposed to light for half the time and blocked by the shutter for half the time (i.e., each frame has an exposure time of 1/48 of a second). Time-lapse photography makes exposures at an extremely slow rate—this is sometimes called undercranking. If a camera records a 4-hour event by taking an exposure every minute and the exposures are screened at 24 frames per second, then the 4-hour event will be speeded up and appear to last only 10 seconds.
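The time-lapse calculation just described can be checked with a short sketch (an illustrative helper function, not a standard formula):

```python
# Time-lapse (undercranking): frames are exposed far more slowly
# than they are screened, so a long event is compressed on screen.

def screened_seconds(event_seconds, seconds_per_frame, projection_fps=24):
    """Screen time for an event captured at one frame per interval."""
    frames = event_seconds / seconds_per_frame
    return frames / projection_fps

# A 4-hour event, one exposure per minute, screened at 24 fps:
# 240 frames play back in just 10 seconds.
print(screened_seconds(4 * 3600, 60))  # 10.0
```

The same relation run in reverse describes overcranking: capturing more than 24 frames per second stretches a brief event across a longer stretch of screen time.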

High-speed photography is known as overcranking because it requires the cine-camera to record exposures faster than 24 frames per second. When the exposures are screened at 24 frames per second, the event will appear to take place in slow motion.

The earliest cine-cameras recorded only the data of light from the scene, so until the 1920s, the results were moving pictures without sound: silent movies. In a significant departure from still photography, cinematographers sought to find a way to record sound—and their main problem became how to make the sound synchronize with the visual action. Again, film provided an elegant technical solution: Sound was recorded onto a strip running along the edge of the photographic film. This ensured that an actor’s voice would be heard to speak at the same time as his lips were seen to move. In fact, in 1929, the projection speed standard of 24 frames per second was established precisely because this was the optimal speed for the kind of film used for “talking movies” (talkies).

The Digital Future

Many interesting and important questions about photographic technology have arisen since the rapid emergence of digital photography. Digital cameras were designed by NASA in the 1960s for use in satellite photography. In 1981 Sony demonstrated the first consumer electronic still camera, and since 2003 digital cameras have outsold film cameras.

In a digital camera, light is captured by an electronic photosensitive surface and the data are digitized into binary code. One significant difference is that the wavelength of light is “interpreted” by the camera software, rather than registered as chemical changes in the surface of the film. In early digital cameras this process created an inconvenient time lag between triggering the shutter and recording the data; however, advances in microprocessor technology have subsequently solved this problem.

Digital photography does not, strictly speaking, involve a negative, so it might appear to have ended the dominant tradition of the negative-positive process. In fact, the digital file that stores data performs a function similar to the latent image on a negative because the binary data it contains needs to be accessed and processed in order to produce a visible “print.” Postprocessing of an image, using computer software, has replaced the traditional darkroom work of dodging and burning, and has made possible many other manipulations, so it is still possible to spend many hours on an image before the final version is printed or screened.

It is increasingly common for digital photographs to be viewed and stored only on computers, rather than printed on photograph paper. Digital images are stored as binary data in computer files, hence in principle they can be copied, transferred, and saved for an indefinitely long period of time without degrading. However, archivists of digital photographs face the task of constantly updating the existing digital files, otherwise they could be lost as the retrieval technology becomes obsolete.

Unlike fully manual film cameras, digital photography requires electrical power, and a battery provides power for only a finite period of time. Modern photographers are thus subject to at least one time constraint that is different from, but reminiscent of, the problems that faced their predecessors.

Applications of Film and Photography

Film media are thought to hold a privileged epistemic position by virtue of having a guaranteed causal and temporal relation to reality. Each individual image has a causal relation to the light reflected or emitted by real objects in the world at an actual moment in time. Although a highly abstract image may not resemble any recognizable objects, nonetheless that image was caused by capturing the light from particular objects during a particular period of time. This essential characteristic shapes the applications and the value of photography and cinema.

The Documentary Function

Much of the power of film media stems from their having a documentary function and, when viewed as a document of an event, it seems to matter greatly that a photograph or film was actually exposed at the time it appears to have been exposed. Periodically this feature has been emphasized and exploited, or subverted and resisted. There are two strong trends throughout the history of photography: one emphasizes the ideal of an unaltered image, produced without any retouching of the negative or print. The other embraces manipulation and postprocessing as an intrinsic aspect of a photographic image. Associated with the former we find the idea that a photograph stands in a special relation to the specific moment in time when the data were recorded. Nothing is subsequently allowed to add or take away data recorded at that moment. With the latter trend, we find the idea that a photographic image is not straightforwardly a record of a specific moment in time. Rather, it is the product of various processes and decisions that occur over an extended time and include the exposure as only one factor.

Although this debate persists, it is a fact that manipulation has been a feature of photography from the very beginning. The negative of a calotype could be retouched by scratching the surface before a print was made. In 1851, Edouard Baldus joined together parts from 10 negatives to produce a single print, the Cloister of Saint Trophîme, Arles, thus challenging the idea that a photograph necessarily represents a single exposure time.

There has long been a demand for additional evidence to verify when, in physical time, a photograph was taken. In 1914 the Kodak “autographic” camera enabled the photographer to write notes directly onto the undeveloped film while it was still in the camera. Later cameras automated the process of time and date stamping each image, and digital cameras record these data as part of the digital file. However, even a time-stamped image is not a guarantee of epistemic authenticity. It is possible for a photographer to reconstruct an event or reconfigure a scene in order to control the image. This, too, has been a feature of photography since its inception. The public did not object when Alexander Gardner (1831-1882) positioned a rifle next to a dead body for his 1863 Civil War photograph Home of a Rebel Sharpshooter; yet the 1945 World War II photograph Marines Raising the American Flag on Iwo Jima has been attacked following allegations that Joe Rosenthal (b. 1911) had merely photographed a repetition of the original event.

Despite these concerns, from photojournalism to the photo-finish image that gives a race result, we rely on photographs to stand as evidence that an event took place, or to provide evidence about visual appearance at a particular time. This is true of formal contexts, such as law courts and passport photographs, and of informal contexts, such as the holiday and wedding photographs in a family album. In both public and personal uses of photography, it seems that we use its documentary function on the one hand to highlight change and on the other hand to preserve against change.

Scientific Applications

The first scientific applications of photography were reference book collections of scientific specimens. For this use photographs were treated simply as a superior substitute for hand drawings. Before long, however, scientists employed photography to make scientific observations that would not be possible by any other means. In particular, photography makes it possible to observe events that occur too fast for human perception and also to observe light from sources too faint for the human eye to detect. These applications require, respectively, very short and very long exposure times.

Muybridge’s and Marey’s studies of human and animal motion attracted scientific attention, and Muybridge’s 1878 series, The Horse in Motion, conclusively proved that a galloping horse raises all its hooves off the ground at the same time. In these experiments exposure time was controlled by shutter speed, but to achieve motion-stop photography of much faster objects, electronic flash photography was essential. In 1851 Talbot created the first flash photograph, using the illumination from an electric spark in a dark room. The exposure time was 1/100,000 of a second—far quicker than any shutter action could produce. In 1931 Harold Edgerton (1903-1990) invented an instantly rechargeable flash mechanism he called the stroboscope and produced motion-stop photographs that fascinated the scientific world, including a bullet piercing a line of balloons and the coronet formed by a splashing drop of milk. Exposure times of a billionth of a second are possible, and the sound waves from bullets moving at 15,000 miles an hour have been “frozen” in a photographic image, making photography an instrument for scientific discovery.

As early as 1839 Daguerre produced a photograph of the moon, and astronomers soon understood that the exceptionally faint light from stellar objects could be recorded on film using extremely long exposures. Light that is too faint for the human eye to detect can accumulate over time on a photosensitive surface and eventually produce a strong image. Through photography, it is possible to observe areas of the universe that are invisible even with the most powerful telescopes—and, uncannily, to observe stars that ceased to exist millions of years ago.

Artistic Applications

The artistic movements in film and photography are too many and too varied to detail here, but it is significant that the supposed objectivity of photography, which made it so suitable for recording scientific observations, initially made it seem unsuitable as a basis for artistic work. Even now, despite the photographs exhibited in art galleries and a long tradition of art discourse, there are still figures who deny that it is possible for a photograph to be an artwork. More generally, it is recognized that, although a fully mechanical or automated photograph is possible, many photographs are valued artistically due to the creative contribution and thought processes of the photographer. Henri Cartier-Bresson (1908-2004) encapsulated this idea when he epitomized the composition of his photographs as “the decisive moment.” Elsewhere, even stronger claims have been made to support the view that photographs can be an art form: the critic Susan Sontag (1933-2004) went as far as to claim that the passing of time gives aesthetic value to most photographs, however mundane.

The Viewer’s Experience

The viewer of a photograph is able to control the amount of time spent viewing the photograph. The viewer of cinema has a significantly different experience. Photographs are, first and foremost, viewed privately. Multiple copies exist and individuals view individual copies in their own time. Cinema, first and foremost, involves a public viewing. The audience shares a period of time together, and the duration of the viewing experience is dictated by the projection time of the movie.

Film media have a unique standing among visual media because they are considered to be such a close substitute for a visual experience that they seem more reliable than human memory, or even can be treated as surrogates for memory after time has passed. This, along with the idea that every photograph is essentially a memento mori—a reminder of the inevitability of death—is condemned as a cliche by critics and psychologists, but it is undeniable that photography retains powerful associations with poignancy, nostalgia, and loss. The experience of viewing a photograph of one’s own ancestor, long dead, is commonly felt to be disturbing and moving in a manner that is unlike viewing even the most realistic painting. A photograph, it has been argued, is “transparent” to the scene at the time it was taken; hence when we look at such a photograph, we are genuinely looking back in time.

In our digital, postfilm era, it is no longer strictly correct to characterize photography and cinema as film media. However, film remains an important feature of their shared history. And history shows that, thanks to film and photography, we have gained new and remarkable ways to experience time and we now have a relation to time and the world that would otherwise not be open to us.

Dawn M. Phillips

See also Arthur C. Clarke; Perception; Time-Lapse Photography

Further Readings

Barthes, R. (2001). Camera lucida. New York: New Library Press.

Clarke, G. (1997). Oxford history of art: The photograph. Oxford, UK: Oxford University Press.

Ford, C. (Ed.). (1989). The Kodak museum: The story of popular photography. North Pomfret, VT: Trafalgar Square Publishing.

Hoy, A. H. (2005). The book of photography. Washington, DC: National Geographic.

Marien, M. W. (2006). Photography: A cultural history (2nd ed.). London: Laurence King Publishing.

Sontag, S. (1979). On photography. London: Penguin.

Szarkowski, J. (1966). The photographer’s eye. London: Secker & Warburg.
