History of Photometry in Astronomical Observations
A quick glance at the sky on a dark, clear night reveals that not all stars are the same brightness. Staring at Vega too long, for instance, might leave spots in your vision afterwards, while staring at one of the many fainter stars scattered over the sky does no such thing. Look carefully at the photograph of the Pleiades, a cluster of stars, at right. You might notice that the brightest stars in the photograph also seem the largest to our eyes. If you go outside on a clear night, you will notice the same thing. This is not because the brighter stars are physically larger and so seem brighter; all stars except the Sun are much too far away to show any size at all. To us on Earth, they are just points of light in the sky. Brighter stars seem larger because of the way light behaves in our eyes. It is convenient to draw an analogy to a grid of water buckets. Let a star pour its energy (water, for the sake of the analogy) into one of the buckets. If it is a very bright star with a lot of energy (water), this bucket will eventually fill up and overflow. The overflowing water seeps into the adjacent buckets, so rather than water in just the one bucket, we end up with water in several neighboring buckets. Depending on how bright the star is (how much water it has), we could end up with a great many filled buckets. A brighter star will therefore seem to extend over a greater area, simply because its energy is too large to be contained in a single bucket, or retinal cell, in the case of our eyes.
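The overflow picture can be sketched as a toy simulation; this is a deliberately simplified illustration of the analogy, not a physical model of the retina or of a photographic emulsion:

```python
# Toy sketch of the bucket analogy: energy poured into one cell of a grid
# overflows into its four neighbors once the cell reaches capacity, so a
# brighter source ends up covering a larger patch of cells.

def pour(grid, row, col, energy, capacity=1.0):
    """Deposit `energy` into grid[row][col]; any excess spreads outward."""
    grid[row][col] += energy
    excess = grid[row][col] - capacity
    if excess <= 1e-9:          # nothing significant to spill
        return
    grid[row][col] = capacity
    neighbors = [(r, c) for r, c in
                 [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
                 if 0 <= r < len(grid) and 0 <= c < len(grid[0])]
    for r, c in neighbors:      # split the overflow equally among neighbors
        pour(grid, r, c, excess / len(neighbors), capacity)

def filled_cells(grid):
    """Count cells that have received any energy at all."""
    return sum(cell > 0 for row in grid for cell in row)

faint = [[0.0] * 9 for _ in range(9)]
bright = [[0.0] * 9 for _ in range(9)]
pour(faint, 4, 4, 0.5)   # a faint star fits in a single bucket
pour(bright, 4, 4, 5.0)  # a bright star overflows into its neighbors
print(filled_cells(faint), filled_cells(bright))
```

Running this, the faint star occupies a single cell while the bright one spreads across several, which is exactly why the bright stars in the Pleiades photograph look larger.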
Photometry is the study of the brightness of things. Most early astronomical research dealt with stellar photometry, the brightness of stars, although the field has since broadened to include galaxies, nebulae, supernovae, and nearly everything else in the universe. There are several ways to determine the brightness of a star. The earliest studies were done by eye. Hipparchus of Nicaea, working in Rhodes c. 129 B.C., apparently produced a catalog of about 850 stars, with positions and brightnesses, without special tools or equipment, just his naked eyes. He called the brightest stars "of the first magnitude" and the faintest stars "of the sixth magnitude". Ptolemy of Alexandria seems to have copied this system in his Almagest (c. 170 A.D.), which was the basis of astronomical learning for the next 1400 years. When the telescope was invented in the seventeenth century, astronomers realized that they could measure brightnesses more exactly by examining how large a star appeared through the telescope. This was done in several ways. First, the astronomer needed to know the brightness of at least one other star in the field of the star of interest. He could then visually compare the unknown star to this reference star and so make an estimate of the brightness of the unknown star. The main problem with this method was that it was not very objective: different observers could measure very different brightnesses for the same star. A more exact approach was to use diaphragms, devices which could measure the diameter of a star as seen through the telescope. If the astronomer knew the relation between the apparent diameter and the brightness of the star, it was then easy to calculate the stellar brightness. The main problem with this method was that the relation between diameter and brightness could never be known very exactly.
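Hipparchus' first-through-sixth magnitude classes survive in the modern magnitude scale, which Norman Pogson formalized in 1856 so that a difference of 5 magnitudes corresponds to a brightness ratio of exactly 100. A one-line helper shows the conversion (the function name is ours, for illustration):

```python
# Modern (Pogson, 1856) formalization of Hipparchus' magnitude scale:
# 5 magnitudes of difference = a brightness ratio of exactly 100,
# so each magnitude step is a factor of 100**(1/5), roughly 2.512.

def flux_ratio(m_faint, m_bright):
    """How many times brighter a star of magnitude m_bright appears than
    one of magnitude m_faint (smaller magnitude = brighter star)."""
    return 100 ** ((m_faint - m_bright) / 5)

print(flux_ratio(6, 1))   # 1st-magnitude star vs. 6th-magnitude star -> 100.0
```

By this definition, a first-magnitude star is exactly 100 times brighter than a sixth-magnitude star, the faintest Hipparchus could see.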
The introduction of photography to the field of photometry ushered in a century of much more accurate and reproducible results. It allowed astronomers to keep a permanent record of their observations, so that they could photograph the stars on one night and carefully analyze them at some other time. Astronomers could then take more photographs each night, since they did not need to measure anything while observing. Furthermore, after developing the photograph, they had a record which could be examined time and again, for greater precision and objectivity.
But how were these photographs to be measured? Certainly one could measure the diameters of stars and relate them to their brightnesses, as astronomers had already been doing. But photographing stars presents a new problem: the energy in the starlight (the water from our earlier analogy) can overflow buckets not just across the surface of the photograph, but also through the depth of the photographic emulsion. Photographic emulsions have a slight thickness to them, and the energy from the star often produces a reaction in the material below the surface of the emulsion, making it impossible to measure magnitudes by directly comparing the sizes of stars on a plate. It was Edward Pickering (at left) at Harvard College Observatory who in 1910 suggested a solution to this problem. He proposed that photographic plates be measured objectively by passing either heat or light through the plate and measuring how much passed through to the other side. Where less light came out, there must be a greater density of grains (produced by the reaction with starlight) in the photograph. The more light that strikes a particular area of the plate, the more grains in that area react and turn dark when the plate is developed, and the more light they block from exiting the other side of the plate. Pickering calibrated what amount of light blockage corresponded to what stellar brightness, and he was then able to determine the brightnesses of many stars.
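Pickering's transmitted-light measurement corresponds to what is now called photographic density, the base-10 logarithm of incident over transmitted light, with the magnitude then read off a calibration built from stars of known brightness. A short sketch of the idea follows; the calibration pairs are invented for illustration and are not Pickering's actual values:

```python
import math

def photographic_density(incident, transmitted):
    """Density D = log10(I_in / I_out): the darker the plate, the less
    light gets through and the higher the density."""
    return math.log10(incident / transmitted)

def magnitude_from_density(density, calibration):
    """Linearly interpolate a stellar magnitude from (density, magnitude)
    pairs measured for stars of known brightness on the same plate."""
    pairs = sorted(calibration)
    for (d0, m0), (d1, m1) in zip(pairs, pairs[1:]):
        if d0 <= density <= d1:
            return m0 + (m1 - m0) * (density - d0) / (d1 - d0)
    raise ValueError("density outside calibrated range")

# A star image that lets through 1% of the probing light has density 2:
d = photographic_density(incident=100.0, transmitted=1.0)

# With two (invented) calibration stars bracketing it, interpolate a magnitude.
# Note the sign: a denser, darker image means a brighter star, i.e. a smaller magnitude.
m = magnitude_from_density(1.5, [(0.5, 6.0), (2.5, 2.0)])
```

The interpolation step is the modern, automated version of Pickering's hand calibration of "light blockage" against stellar brightness.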
A year after Pickering proposed this method, Harlan Stetson at Dartmouth College built a thermopile photometer (at left) for this very purpose: to measure the intensity of light passing through a photographic plate. In his design, an illuminated pinhole diaphragm was projected onto one side of the plate, and on the other side a thermopile and galvanometer were placed to measure how much light passed through. A strikingly similar machine (at right) was built at the same time by Jan Schilt at Groningen University, and Schilt's design was eventually adopted by many observatories, including Mt. Wilson and Yerkes.
In 1934, Heinrich Siedentopf (at left) at Jena in Germany added an adjustable iris to the same design, so that the light beam could be widened or narrowed until a chosen brightness of light was directed at the plate. This development greatly increased the range of magnitudes which could be measured on a given plate, from 4-5 magnitudes for the Schilt design to 11 magnitudes for Siedentopf's, allowing astronomers to measure both very bright and very faint stars on the same photograph.
By 1960, several observatories had digitized, semi-automatic iris photometers, so that large numbers of stars could be measured without the bias introduced by the subjectivity of a human measurer. A year later, Peter Fellgett proposed a fully automated plate-measuring machine which would give both positions and magnitudes of stars without any human intervention. The first successful realization of his idea came in 1969 with the Edinburgh GALAXY machine (superseded first by the COSMOS and now the SuperCOSMOS machines). A scanning light spot from a cathode ray tube was focused onto a photographic plate, and the light emerging from the other side was measured in 16 micrometer pixels. This machine could measure 1000 stars in a single hour. Two years later, the Cambridge Automatic Plate-Measuring Project (APM) went a step further. The new development in this design was a flying laser beam guided by two computer-controlled mirrors with orthogonal axes. The new set-up allowed for the measurement of 10 stars a second, or 36,000 stars an hour. Quite an improvement over an astronomer or graduate student sitting at the telescope and estimating the brightness of one star with respect to another!
During the 1970's and 1980's, further developments made measurements much more accurate. Scanning beams narrowed and their intensities became much more constant, the light detectors became sensitive to even minute differences in brightness, and the motion of the plate and chart were better synchronized. Additionally, the density data collected in the newer machines could be automatically converted to brightness data and sent to a computer for reduction and analysis. More recent developments have included the replacement of photographic plates at the telescope with television cameras, electronic cameras, and charge-coupled devices (CCDs). These remove the intermediate step (measurement) between photograph and computer, as photographs are almost literally taken on computers.
Photometry at Leander McCormick Observatory
Astronomers at McCormick Observatory have taken part in almost all stages of the development of photometry. Its first director, Ormond Stone, began a systematic program of visually observing long-period variable stars in 1902. After he retired and Samuel A. Mitchell took over as director of the observatory, Mitchell and Harold Alden continued this work by improving the magnitudes of the comparison stars in the fields of these long-period variables. Most of the observatory's energy was devoted to astrometry, but Mitchell realized that the midnight hours (when astrometry could not be done) could be used for photometry, so he and Alden enthusiastically started a new observing program. They were able to greatly improve on previous calibrations by using standardized disks to represent stars of different brightnesses and through better draughtsmanship and consistency in their measurements. Throughout the 1920's, McCormick Observatory worked in cooperation with Harvard, Lick, and Yerkes Observatories to determine accurate brightnesses of these standard stars. In 1930, Mitchell was honored when the American Association of Variable Star Observers (AAVSO) asked him to revise the magnitudes of all the comparison stars in the fields of the variable stars on their observing program.
This early photometric work was mostly done visually, that is, by measuring while observing at the telescope. Mitchell obtained a wedge photometer for the observatory when this work began, which allowed an observer to measure the brightness of a star by comparing it to an artificial "star" whose brightness could be varied until it matched that of the actual star in the field. Several stars in the field of the variable would be measured in this way, covering a wide range of brightnesses. Then the variable itself would be measured and compared to the field stars, and a skilled observer could estimate the brightness of the variable to an accuracy of 0.1 magnitude.
Photographic photometry at McCormick Observatory was first attempted in 1916, but little photometry was done until Alexander Vyssotsky developed a technique for it on the 26¼ inch telescope in 1932. Once Vyssotsky had worked out a way to do photometry with photographic plates, this method of photometric study became the main focus of attention at the observatory: it was more convenient and less time-consuming than the wedge photometer, and it gave more accurate results. The first generation of photometric plates, including Vyssotsky's famous catalogue of M stars, was measured on a thermoelectric microphotometer modeled after the one built by Schilt in 1911. In later years, the observatory obtained automated photometers, and plates were measured on those without any human intervention. Currently, photographic plates are measured on a PDS Microdensitometer.
Return to Hall of Precision Astrometry