Using whatever computing environment you prefer, generate plots of the
following related LFs:
(a) |
Log (L) dL | vs | Log L/L_{} |
(b) |
Log (M) dM | vs | M - M_{} |
(c) |
Log N(>L) | vs | Log L/L_{} |
(d) |
Log N(< M) | vs | M - M_{} |
where the second two are cumulative functions integrated over L or M to brighter galaxies.
Take the normalization n_* to be unity; take the range in L/L_* to be from 10^{-2} to 10; and overplot lines with three values of α: -1.5, -1.0, -0.5
(dotted, solid, dashed). Be careful to account for the fact that graph (a) is expressed per unit interval of luminosity (dL), while graph (b) is expressed per unit magnitude (dM, which is an interval in Log L). Also, note that graphs (c) and (d) are not expressed per interval but are integrated, and so they should look the same (except, possibly, for the direction of the x-axis).
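One possible sketch of the four panels in Python, assuming NumPy and matplotlib are available; the helper names `phi_L`, `phi_M`, and `N_brighter` and the upper cutoff for the cumulative integral are my own choices, not part of the problem:

```python
import numpy as np

ALPHAS = [-1.5, -1.0, -0.5]   # faint-end slopes to overplot
STYLES = [":", "-", "--"]      # dotted, solid, dashed, as specified

def _trapz(y, x):
    """Trapezoidal integral (avoids NumPy version differences in np.trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def phi_L(x, alpha):
    """Schechter LF per unit luminosity, with n_* = 1 and x = L/L_*:
    phi(L) dL = x^alpha exp(-x) dx."""
    return x**alpha * np.exp(-x)

def phi_M(x, alpha):
    """Same LF per unit magnitude: dM = -2.5 dLog(L), so
    phi(M) dM = 0.4 ln(10) x^(alpha+1) exp(-x) dM."""
    return 0.4 * np.log(10.0) * x**(alpha + 1) * np.exp(-x)

def N_brighter(x0, alpha, x_hi=100.0, n=4000):
    """Cumulative count N(>L): integrate phi(L) dL from x0 up to a cutoff
    well past the exponential turnover (x_hi is an arbitrary choice)."""
    g = np.logspace(np.log10(x0), np.log10(x_hi), n)
    return _trapz(phi_L(g, alpha), g)

x = np.logspace(-2, 1, 300)    # L/L_* from 10^-2 to 10
M = -2.5 * np.log10(x)         # M - M_*

try:
    import matplotlib
    matplotlib.use("Agg")      # render off-screen
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots(2, 2, figsize=(9, 8))
    for alpha, ls in zip(ALPHAS, STYLES):
        Ncum = np.array([N_brighter(xi, alpha) for xi in x])
        ax[0, 0].plot(np.log10(x), np.log10(phi_L(x, alpha)), ls)  # (a)
        ax[0, 1].plot(M, np.log10(phi_M(x, alpha)), ls)            # (b)
        ax[1, 0].plot(np.log10(x), np.log10(Ncum), ls)             # (c)
        ax[1, 1].plot(M, np.log10(Ncum), ls)                       # (d)
    for a_ in (ax[0, 1], ax[1, 1]):
        a_.invert_xaxis()      # flip magnitude axes so brighter is rightward
    fig.savefig("schechter_lf.png")
except ImportError:
    pass  # matplotlib not installed; the functions above still work
```

The per-magnitude factor 0.4 ln(10) x^(alpha+1) comes from the change of variable dL = 0.4 ln(10) L dM, which is also why panel (b) has faint-end slope α + 1 rather than α.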
Summarize, briefly, the various features you see in the plots and their differences. Why does the graph of Log φ(M) dM immediately tell you that α = -1.0 is the critical value separating finite from infinite numbers of galaxies?
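The convergence behavior behind that last question can also be checked numerically. This is a hedged sketch, not part of the problem statement; `faint_count` is my own helper. The idea: since the integrand goes as x^α below L_*, the faint-end count N = ∫ φ(L) dL converges only for α > -1, and pushing the lower limit ε toward zero makes the three cases visible:

```python
import numpy as np

def faint_count(alpha, eps, n=4000):
    """N(L > eps) over [eps, L_*]: trapezoidal integral of x^alpha e^{-x}."""
    g = np.logspace(np.log10(eps), 0.0, n)
    y = g**alpha * np.exp(-g)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(g)))

for alpha in (-0.5, -1.0, -1.5):
    counts = [faint_count(alpha, 10.0**(-k)) for k in (2, 4, 6)]
    print(alpha, counts)
# alpha = -0.5: counts approach a finite limit as eps -> 0
# alpha = -1.0: counts grow like ln(1/eps) -- marginal, logarithmic divergence
# alpha = -1.5: counts grow like eps^(-1/2) -- power-law divergence
```

This matches what the φ(M) plot shows at a glance: per magnitude the LF goes as x^(α+1), so α = -1.0 is exactly the slope at which the faint end of φ(M) is flat.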