History of Photometry

Photometric History

A quick glance at the sky on a dark and clear night reveals that not all stars are the same brightness. Staring at Vega too long, for instance, might cause one to see spots afterwards, but staring at one of the many fainter stars scattered over the sky does no such thing. Look carefully at the photograph of the Pleiades, a cluster of stars, at right. You might notice that the brightest stars in the photograph also seem the largest to our eyes. If you go outside on a clear night, you will notice the same thing. This is not because the brighter stars are physically larger and so seem brighter, since they are much too far away to appear of any size at all. To us on Earth, they are just points of light in the sky.

The reason that brighter stars seem larger is due to the way light behaves in our eyes. It is convenient to draw an analogy to a grid of water buckets. Let a star pour its energy (water, for the sake of the analogy) into one of the buckets. If it is a very bright star with a lot of energy (water), then eventually this bucket will fill up and overflow. The overflowing water then seeps into the buckets adjacent to the original one, and so rather than water in just the one bucket, we end up with water in several buckets near the first. Depending on how bright and energetic the star is (how much water it has), we could end up with a great many buckets filled. It is easy to see from this analogy that a brighter star will seem to extend over a greater area, simply because its energy is too great to be contained in a single bucket, or retinal cell, in the case of our eyes.

Photometry is the study of the brightness of things. Most early astronomical research dealt with stellar photometry, the brightness of stars, although since then the field has broadened to the photometry of galaxies, nebulae, supernovae, and pretty much everything else in the universe. There are several ways to determine the brightness of a star. The earliest studies were done by eye. Hipparchus, in about 130 B.C., classified all the stars in the sky that he could see according to their brightnesses. He had no special tools or equipment, just his naked eyes. When the telescope was invented many centuries later, astronomers realized that they could measure more exact brightnesses by examining how large a star appeared through the telescope. This, too, was done in several ways. First, the astronomer needed to know the brightness of at least one other star in the field of the star of interest. He could then visually compare the unknown star to the one he did know and so make an estimate of the brightness of the unknown star. The main problem with this method was that it was not very objective: different observers could measure very different brightnesses. A more exact way to get an answer was to use diaphragms, devices which could measure the diameter of a star's image as seen through the telescope. If the astronomer knew the relation between the diameter seen and the brightness of the star, then it would be easy to calculate the stellar brightness. The main problem with this method was that the relation between diameter and brightness could never be known very exactly.

A star photographed on a night with much atmospheric turbulence, and the same star on a night with little atmospheric turbulence
Nightly differences in atmospheric conditions, temperature, and the telescope itself caused the relation to vary somewhat. Note the photographs above, of the same star, first on a night with much atmospheric turbulence and second on a night with little atmospheric turbulence. The difference is quite noticeable and also difficult to calibrate. The process which seemed to work best for these early astronomers was a combination of both methods: they used the diaphragm to measure the star of interest plus at least one comparison star, so that they could calibrate the diameter-brightness relation on every night, no matter what kind of conditions existed.
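The nightly calibration these astronomers performed can be illustrated with a short sketch in modern code. It assumes, purely for illustration, a straight-line relation between magnitude and the logarithm of the measured image diameter; the function names and numbers below are invented, not taken from any historical procedure. The comparison stars fix the relation for the night, and the unknown star is then read off from it:

```python
import math

def calibrate(diameters, magnitudes):
    """Least-squares fit of m = a + b * log10(d) using comparison stars
    whose image diameters and magnitudes are both known for this night."""
    xs = [math.log10(d) for d in diameters]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(magnitudes) / n
    b = sum((x - mean_x) * (m - mean_y) for x, m in zip(xs, magnitudes)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

def estimate_magnitude(diameter, a, b):
    """Read an unknown star's magnitude off tonight's calibrated relation."""
    return a + b * math.log10(diameter)

# Hypothetical comparison stars measured on the same night:
a, b = calibrate([2.0, 4.0, 8.0], [6.0, 4.5, 3.0])
m = estimate_magnitude(5.0, a, b)
```

Because the fit is redone from comparison stars each night, changes in seeing, temperature, or the telescope are absorbed into the coefficients rather than contaminating the result.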

The introduction of photography to the field of photometry ushered in a century of much more accurate and reproducible results. It allowed astronomers to keep a permanent record of their observations, so that they could photograph the stars on one night and analyze them some other time. Astronomers could then take more photographs each night, since they did not need to measure anything while they were observing. Furthermore, after developing the photograph, they had a record which could be examined time and time again, for greater precision and objectivity.

But how were these photographs to be measured? Certainly one could measure the diameter of stars and so relate them to their brightness, as astronomers had already been doing. But photographing stars presents a new problem, in that the energy in the starlight (the water from our earlier analogy) can overflow buckets not just across the surface of the photograph, but also through its depth. Photographic plates have a small depth to them, and the energy from the star often produces a reaction in the plate below its surface. So this needs to be taken into account. It was Edward Pickering (at left) at Harvard University who in 1910 suggested a solution to this problem. He proposed that photographic plates be measured objectively by passing either heat or light through the plate and then measuring how much of it reached the other side. Where less light came out, there must be a larger density of grains produced by the starlight in the photograph. Brighter stars produced more chemical reactions in the photographic plate and thus denser regions of grains, which block more light from exiting the other side of the plate. Pickering calibrated how much light blockage corresponded to how much brightness, and he could then determine the brightnesses of many stars.
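Pickering's scheme can be sketched in a few lines of modern code. The sketch is purely illustrative: the names and numbers are invented, and a real plate follows a nonlinear characteristic curve rather than the simple interpolation used here. It shows the two essential steps, though: compute a density from the fraction of probing light that passes through the star's image, then convert density to magnitude using comparison stars of known brightness measured on the same plate.

```python
import math

def density(transmitted, incident):
    """Photographic density: how strongly the plate blocks the probing
    light. Denser grain deposits (brighter stars) transmit less light."""
    return -math.log10(transmitted / incident)

def magnitude_from_density(d, calib):
    """Linearly interpolate a magnitude from (density, magnitude) pairs
    measured for comparison stars on the same plate."""
    calib = sorted(calib)
    for (d0, m0), (d1, m1) in zip(calib, calib[1:]):
        if d0 <= d <= d1:
            t = (d - d0) / (d1 - d0)
            return m0 + t * (m1 - m0)
    raise ValueError("density outside the calibrated range")

# Hypothetical measurements: 1% of the probing light passes through the
# star's image, and two comparison stars bracket it in density.
d = density(transmitted=1.0, incident=100.0)
m = magnitude_from_density(1.5, [(1.0, 6.0), (2.0, 4.0)])
```

Calibrating against stars on the same plate, like the nightly diameter calibration before it, cancels out differences in emulsion and development from one plate to the next.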

A year after Pickering designed this scheme, Harlan Stetson at Dartmouth College built a thermopile photometer (at right) for this very purpose, to measure the intensity of light which passed through a photographic plate. In his design, an illuminated pinhole diaphragm was projected onto one side of a plate, and on the other side were placed a thermopile and galvanometer to measure how much light was received. A strikingly similar machine (at left) was built at the same time by Jan Schilt at Groningen University, which ended up being adopted by many observatories, including Mt. Wilson and Yerkes.

In 1934, an adjustable iris was added to the same design by Heinrich Siedentopf (at right) at Jena in Germany, so that the light beam could be reduced or intensified until the observer had any given brightness of light directed at the plate. This new development greatly improved the range of magnitudes which could be measured on a given plate, from 4-5 magnitudes for the Schilt design to 11 magnitudes for Siedentopf's. This allowed astronomers to measure both very bright and very faint stars in the same photograph.

By 1960, several observatories had digitized, semi-automatic iris photometers, so that many stars could be measured without much human subjectivity in the measurement. A year later, Peter Fellgett proposed a fully automated plate-measuring machine which would give both positions and magnitudes of stars without any human intervention. The first successful realization of his idea came in 1969, with the Edinburgh GALAXY machine. A scanning light spot from a cathode ray tube was focused onto a photographic plate so as to measure the light emerging from the other side in a 16 micrometer pixel. This machine could measure 1000 stars in a single hour. Two years later, the Cambridge Automatic Plate-Measuring Project went a step further. The new development in this design was a flying laser beam guided by two computer-controlled mirrors with orthogonal axes. The new set-up allowed for the measurement of 10 stars each second, or 36,000 stars each hour. Quite an improvement over an astronomer or graduate student sitting at the telescope and estimating the brightness of one star with respect to another!

During the 1970s and 1980s, further developments made measurements much more accurate. Scanning beams were narrowed and their intensities made much more constant, the light detector became sensitive to even minute differences in brightness, and the motion of the plate and chart was synchronized further. Additionally, the density data collected in the newer machines could be automatically converted to brightness data and then sent to a computer for reduction and analysis. Even more recent developments have included the use of television cameras, electronic cameras, and charge-coupled devices (CCDs), for which there is no intermediate step between photograph and computer, as photographs are almost literally taken on computers.

Photometry at Leander McCormick Observatory

Astronomers at McCormick Observatory have taken part in almost all stages in the development of photometry. Its first director, Ormond Stone, began a systematic program of visually observing variable stars with long periods in 1902. After he retired and S.A. Mitchell took over as director of the observatory, Mitchell and Harold Alden continued this work by improving the magnitudes of the comparison stars in the fields of these long-period variables. Most of the observatory's energy was devoted to astrometry, but when Mitchell realized that the midnight hours when astrometry could not be done could be used for photometry, he and Alden enthusiastically started a new observing program. They were able to improve greatly over previous calibrations by using standardized disks to represent stars of different brightnesses and by having better draughtsmanship and consistency in their measurements. Throughout the 1920s, McCormick Observatory worked in cooperation with Harvard, Lick, and Yerkes Observatories to determine accurate brightnesses of these standard stars. In 1930, Mitchell was honored by being asked by the American Association of Variable Star Observers (AAVSO) to revise the magnitudes of all the comparison stars in the fields of the variable stars on their observing program.

This early photometric work was mostly done visually, meaning that measurements were made while observing at the telescope. Mitchell obtained a wedge photometer for the observatory when this work began, which allowed an observer to measure the brightness of a star by comparing it to a "fake" star whose brightness could be varied until it matched that of the actual star in the field. Several stars in the field of the variable would be measured in this way, covering a wide range in brightness. Then the variable itself would be measured and compared to the field stars, and so a skilled observer could estimate the brightness of the variable to an accuracy of 0.1 magnitude.

Photographic photometry at McCormick Observatory was first done in 1916, but it was not until Alexander Vyssotsky developed a technique for it on the 26 inch telescope in 1932 that much science was done in this way. After Vyssotsky figured out a way to do photometry using photographic plates, this method of photometric study became the main focus of attention at the observatory. It was much more convenient and less time-consuming than using the wedge photometer, and one could get more accurate results. The first generation of photometric plates, including Vyssotsky's famous catalog of M stars, were measured on a thermoelectric microphotometer modeled after the one built by Schilt in 1911. In later years, the observatory obtained automated photometers, and plates were measured on those without any human intervention. Currently, photographic plates are measured on a PDS Microdensitometer.

