Friday, July 24, 2015

Development of modern chronostratigraphy


One of the central themes of the development of modern chronostratigraphy has been the work to establish an accurate geological time scale. The geological problems and questions that such a time scale would help to address include:
  1. Rates of tectonic processes.
  2. Rates of sedimentation and accurate basin history. 
  3. Correlation of geophysical and geological events.
  4. Correlation of tectonic and eustatic events.
  5. Are epeirogenic movements worldwide?
  6. Have there been simultaneous extinctions of unrelated animal and plant groups?
  7. What happened at era boundaries?
  8. Have there been catastrophes in Earth history which have left a simultaneous record over a wide region or worldwide?
  9. Are there different kinds of boundaries in the geologic succession? (That is, “natural” boundaries marked by a worldwide simultaneous event versus “quiet” boundaries, man-made by definition.)
It is, in fact, fundamental to the understanding of the history of Earth that events be meticulously correlated in time. For example, current work to investigate the history of climate change on Earth during the last few tens to hundreds of thousands of years has demonstrated how important this is, because of the rapidity of climate change and because different geographical regions and climatic belts may have had histories of climate change that were not in phase. If we are to understand Earth’s climate system thoroughly enough to determine what we might expect from human influences, such as the burning of fossil fuels, a detailed record of past climate change will be of fundamental importance. That we do not now have such a record is in part because of the difficulty in establishing a time scale precise enough and practical enough to be applicable in deposits formed everywhere on Earth in every possible environmental setting.

Until the early twentieth century, the geologic time scale in use by geologists was a relative time scale dependent entirely on biostratigraphy. The standard systems had nearly all been named. Estimates of the duration of geologic events, including that of chronostratigraphic units, varied widely, because they depended on diverse estimation methods, such as attempts to quantify rates of erosion and sedimentation.

The discovery of the principle of radioactivity was fundamental, providing a universal clock for direct dating of certain rock types, and for the calibration of the results of other dating methods, especially the relative scale of biostratigraphy. Radiometric dating methods may be used directly on rocks containing the appropriate radioactive materials. For example, volcanic ash beds intercalated with a sedimentary succession provide an ideal basis for precise dating and correlation. Volcanic ash contains several minerals that include radioactive isotopes of elements such as potassium and rubidium.
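The “universal clock” that radioactivity provides rests on a simple age equation: if a parent isotope decays with constant λ, the elapsed time is t = (1/λ)·ln(1 + D/P), where D/P is the measured ratio of radiogenic daughter to remaining parent. A minimal sketch, using the rubidium–strontium system mentioned above as an illustration (the decay constant and sample ratio below are illustrative values, not taken from the text):

```python
import math

# Decay constant for 87Rb -> 87Sr, a commonly cited value
# (~1.42e-11 per year); treat it as illustrative here.
LAMBDA_RB87 = 1.42e-11

def radiometric_age(daughter_parent_ratio, decay_constant=LAMBDA_RB87):
    """Basic radiometric age equation: t = (1/lambda) * ln(1 + D/P),
    where D/P is the ratio of radiogenic daughter atoms to remaining
    parent atoms measured in the mineral today. Returns age in years."""
    return math.log(1.0 + daughter_parent_ratio) / decay_constant

# A daughter/parent ratio of about 0.00142 corresponds to
# roughly 100 million years for this decay constant.
age_years = radiometric_age(0.00142)
```

The same equation underlies all the parent–daughter systems the text mentions; only the decay constant (and the corrections for inherited daughter product, which are omitted here) changes.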
Modern methods can date such beds to an accuracy typically in the ±2% range, that is, ±2 million years at an age of 100 Ma, although locally, under ideal conditions, accuracy and precision are now considerably better than this (±10⁴–10⁵ years). Where a sedimentary unit of interest (such as a unit with a biostratigraphically significant fauna or flora) is overlain and underlain by ash beds it is a simple matter to estimate the age of the sedimentary unit. The difference in age between the ash beds corresponds to the elapsed time represented by the succession of strata between the ash beds. Assuming the sediments accumulated at a constant rate, the rate of sedimentation can be determined by dividing the thickness of the section between the ash beds by the elapsed time. The amount by which the sediment bed of interest is younger than the lowest ash bed is then equal to its stratigraphic height above the lowest ash bed divided by the rate of sedimentation, thereby yielding an “absolute” age, in years, for that bed. This procedure is typical of the methods used to provide the relative biostratigraphic age scale with a quantitative basis. The method is, of course, not that simple, because sedimentation rates tend not to be constant, and most stratigraphic successions contain numerous sedimentary breaks that result in underestimation of sedimentation rates. Numerous calibration exercises are required in order to stabilize the assigned ages of any particular biostratigraphic unit of importance. Initially, the use of radiometric dating methods was relatively haphazard, but gradually geologists developed the technique of systematically working to cross-calibrate the results of different dating methods, reconciling radiometric and relative biostratigraphic ages in different geological sections and using different fossil groups.
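The interpolation just described can be sketched in a few lines. This is a minimal illustration of the constant-rate assumption the text warns about; the ages, thicknesses, and function name are hypothetical, not from the original:

```python
def interpolate_age(lower_ash_age_ma, upper_ash_age_ma,
                    section_thickness_m, height_above_lower_m):
    """Estimate the age of a bed lying between two radiometrically
    dated ash beds, assuming a constant sedimentation rate.
    Ages in Ma (the lower ash bed is the older one); lengths in metres.
    Sedimentary breaks in the section would bias this estimate,
    as noted in the text."""
    elapsed_myr = lower_ash_age_ma - upper_ash_age_ma
    rate_m_per_myr = section_thickness_m / elapsed_myr
    # Time by which the bed of interest is younger than the lower ash bed:
    offset_myr = height_above_lower_m / rate_m_per_myr
    return lower_ash_age_ma - offset_myr

# Hypothetical example: ash beds dated 100 Ma and 96 Ma bracket 200 m
# of strata; the bed of interest lies 50 m above the lower (older) ash.
age_ma = interpolate_age(100.0, 96.0, 200.0, 50.0)
print(age_ma)  # 99.0 -> the bed is ~99 Ma
```

The rate here is 200 m / 4 Myr = 50 m per Myr, so the bed 50 m up is 1 Myr younger than the 100 Ma ash bed; real applications must correct for hiatuses and compaction.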
In the 1960s the discovery of preserved (“remanent”) magnetism in the rock record led to the development of an independent time scale based on the recognition of the repeated reversals in magnetic polarity over geologic time. Cross-calibration of radiometric and biostratigraphic data with the magnetostratigraphic record provided a further means of refinement and improvement of precision. These modern developments rendered irrelevant the debate about the value and meaning of hypothetical chronostratigraphic units. The new techniques of radiometric dating and magnetostratigraphy, where they are precise enough to challenge the supremacy of biostratigraphy, could have led to the case being made for a separate set of chronostratigraphic units, as Hedberg proposed. However, instead of a new set of chronostratigraphic units, this correlation research is being used to refine the definitions of the existing, biostratigraphically based stages. Different assemblages of zones generated from different types of organism may be used to define the stages in different ecological settings (e.g., marine versus nonmarine) and in different biogeographic provinces, and the entire data base is cross-correlated and refined with the use of radiometric, magnetostratigraphic and other types of data. The stage has now effectively evolved into a chronostratigraphic entity of the type visualized by Hedberg. For most of Mesozoic and Cenozoic time the standard stages, and in many cases, biozones, are now calibrated using many different data sets, and the global time scale, based on correlations among the three main dating methods, is attaining a high degree of accuracy. The Geological Society of London time scale (GSL, 1964) was an important milestone, representing the first attempt to develop a comprehensive record of these calibration and cross-correlation exercises. 
Formal methods of accounting for “time in stratigraphy”, including the use of “Wheeler plots” for showing the time relationships of stratigraphic units, provided much needed clarity in the progress of this work. Timescales for the Cenozoic and the Jurassic and Cretaceous are particularly noteworthy for their comprehensive data syntheses, although all have now been superseded. More recent detailed summation and reconciliation of the global data provided a comprehensive treatment of the subject.

In the 1960s, several different kinds of problems with stratigraphic methods and practice had begun to be generally recognized. There are two main problems. Firstly, stratigraphic boundaries had traditionally been drawn at horizons of sudden change, such as the facies change between marine Silurian strata and the overlying nonmarine Devonian succession in Britain. Changes such as this are obvious in outcrop, and would seem to be logical places to define boundaries. Commonly such boundaries are unconformities. However, it had long been recognized that unconformities pass laterally into conformable contacts. This raised the question of how to classify the rocks that formed during the interval represented by the unconformity. When it was determined that rocks being classified as Cambrian and Silurian overlapped in time, a new chronostratigraphic unit, the Ordovician, was erected as a compromise unit straddling the Cambrian-Silurian interval. The same solution could be used to define a new unit corresponding to the unconformable interval between the Silurian and the Devonian. In fact, rocks of this age began to be described in central Europe after WWII, and this was one reason why the Silurian-Devonian boundary became an issue requiring resolution.
A new unit could be erected, but it seemed likely that with additional detailed work around the world many such chronostratigraphic problems would arise, and at some point it might be deemed desirable to stabilize the suite of chronostratigraphic units. For this reason, the development of some standardized procedure seemed desirable.

A second problem is that to draw a significant stratigraphic boundary at an unconformity or at some other significant stratigraphic change is to imply the hypothesis that the change or break has a significance relative to the stratigraphic classification, that is, that unconformities have precise temporal significance. This was specifically hypothesized by Chamberlin, who was one of many individuals who generated ideas about a supposed “pulse of the earth”. In the case of lithostratigraphic units, which are descriptive, and are defined by the occurrence and mappability of a lithologically distinctive succession, a boundary of such a unit coinciding with an unconformity is of no consequence. However, in the case of an interpretive classification, in which a boundary is assigned time significance (such as a stage boundary), the use of an unconformity as the boundary is to assume that the unconformity has time significance; that is, that it is of the same age everywhere. This places primary importance on the model of unconformity formation, be this diastrophism, eustatic sea-level change or some other cause. From the methodological point of view this is most undesirable, because it negates the empirical or inductive nature of the classification. It is for this reason that it is inappropriate to use sequence boundaries as if they are chronostratigraphic markers. A time scale is concerned with the continuum of time. Given our ability to assign “absolute” ages to stratigraphic units, albeit not always with much accuracy and precision, one solution would be to assign numerical ages to all stratigraphic units and events.
However, this would commonly be misleading or clumsy. In many instances stratigraphic units cannot be dated more precisely than, say, “late Cenomanian” based on a limited record of a few types of organisms (e.g., microfossils in subsurface well cuttings). Named units are not only traditional, but also highly convenient, just as it is convenient to categorize human history using such terms as the “Elizabethan” or the “Napoleonic” or the “Civil War” period. The familiar terms for periods (e.g., Cretaceous) and for ages/stages (e.g., Aptian) offer such a subdivision and categorization, provided that they can be made precise enough and designed to encompass all of time’s continuum. The solution to this problem was explained in this way:

There is another approach to boundaries, however, which maintains that they should be defined wherever possible in an area where “nothing happened.” The International Subcommission on Stratigraphic Classification has recommended that “Boundary-stratotypes should always be chosen within sequences of continuous sedimentation. The boundary of a chronostratigraphic unit should never be placed at an unconformity. Abrupt and drastic changes in lithology or fossil content should be looked at with suspicion as possibly indicating gaps in the sequence which would impair the value of the boundary as a chronostratigraphic marker and should be used only if there is adequate evidence of essential continuity of deposition. The marker for a boundary stratotype may often best be placed within a certain bed to minimize the possibility that it may fall at a time gap.” This marker is becoming known as “the Golden Spike.” By “nothing happened” is meant a stratigraphic succession that is apparently continuous. The choice of boundary is then purely arbitrary, and depends simply on our ability to select a horizon that can be the most efficiently and most completely documented and defined. This is the epitome of an empirical approach to stratigraphy. Choosing to place a boundary where “nothing happened” is to deliberately avoid having to deal with some “event” that would require interpretation. This recommendation was accepted in the first International Stratigraphic Guide, although the Guide noted the desirability of selecting boundary stratotypes “at or near markers favourable for long-distance time-correlation”, by which was meant prominent biomarkers, radiometrically datable horizons, or magnetic reversal events. Boundary stratotypes were to be established to define the base and top of each chronostratigraphic unit, with a formal marker (a “golden spike”) driven into a specific point in a specific outcrop to mark the designated stratigraphic horizon.
Such boundary-stratotypes could be used to define both the top of one unit and the base of the next overlying unit. However, further consideration indicates an additional problem, which was noted in the North American Stratigraphic Code:

Designation of point boundaries for both base and top of chronostratigraphic units is not recommended, because subsequent information on relations between successive units may identify overlaps or gaps. One means of minimizing or eliminating problems of duplication or gaps in chronostratigraphic successions is to define formally as a point-boundary stratotype only the base of the unit. Thus, a chronostratigraphic unit with its base defined at one locality will have its top defined by the base of an overlying unit at the same, but more commonly, another locality.

Even beds selected for their apparently continuous nature may be discovered at a later date to hide a significant break in time. Detailed work on the British Jurassic section, using what is probably the most refined biostratigraphic classification scheme available for any pre-Neogene section, has demonstrated how common such breaks are. The procedure recommended by NACSN is that, if it is discovered that a boundary stratotype does encompass a gap in the temporal record, the rocks (and the time they represent) are assigned to the unit below the stratotype. In this way, a time scale can be constructed that can readily accommodate all of time’s continuum, as our description and definition of it continue to be perfected by additional field work. This procedure means that, once designated, boundary stratotypes do not have to be revised or changed. This has come to be termed the concept of the “topless stage.”

The modern definition of the term “stage” indicates how the concept of the stage has evolved. The Guide states that “The stage has been called the basic working unit of chronostratigraphy. The stage includes all rocks formed during an age. A stage is normally the lowest ranking unit in the chronostratigraphic hierarchy that can be recognized on a global scale. A stage is defined by its boundary stratotypes, sections that contain a designated point in a stratigraphic sequence of essentially continuous deposition, preferably marine, chosen for its correlation potential”. The first application of the new concepts for defining chronostratigraphic units was to the Silurian-Devonian boundary, the definition of which had begun to cause major stratigraphic problems as international correlation work became routine in post-WWII years.
A boundary stratotype was selected at a location called Klonk, in what is now the Czech Republic, following extensive work by an international Silurian-Devonian Boundary Committee on the fossil assemblages in numerous well-exposed sections in Europe and elsewhere. The establishment of the new procedures led to a flood of new work to standardize and formalize the geological time scale, one boundary at a time. This is extremely labour-intensive work, requiring the collation of data of all types (biostratigraphic, radiometric and, where appropriate, chemostratigraphic and magnetostratigraphic) for well-exposed sections around the world. In many instances, once such detailed correlation work is undertaken, it is discovered that definitions for particular boundaries being used in different parts of the world, or definitions established by different workers using different criteria, do not in fact define contemporaneous horizons. This may be because the original definitions were inadequate or incomplete, and have been subject to interpretation as practical correlation work has spread out across the globe. Resolution of such issues should simply require international agreement; the important point is that there is nothing significant about, say, the Aptian-Albian boundary, just that we should all be able to agree on where it is. If boundaries are to be placed where “nothing happens”, the sole criterion for boundary definition is that such definitions be as practical as possible. The first “golden spike” location was chosen because it represents an area where deepwater graptolite-bearing beds are interbedded with shallow-water brachiopod-trilobite beds, permitting detailed cross-correlation among the faunas and thereby permitting the application of the boundary criteria to a wide array of different facies. In other cases, the presence of radiometrically datable units or a well defined magnetostratigraphic record may be helpful.
In all cases, accessibility and stability of the location are considered desirable features of a boundary stratotype, because the intent is that it serves as a standard. Perfect correlation with such a standard can never be achieved, but careful selection of the appropriate stratotype is intended to facilitate future refinement in the form of additional data collection. Despite the apparent inductive simplicity of this approach to the refinement of the time scale, further work has been slow, in part because of the inability of some working groups to arrive at agreement. In addition, two contrasting approaches to the definition of chronostratigraphic units and unit boundaries have now evolved, each emphasizing different characteristics of the rock record and the accumulated data that describe it. The first approach, which Castradori described as the historical and conceptual approach, emphasizes the historical continuity of the erection and definition of units and their boundaries, the data base for which has continued to grow since the nineteenth century by a process of inductive accretion. The alternative, which has been termed the “pragmatic” approach, focuses on the search for and recognition of significant “events” as providing the most suitable basis for rock-time markers, from which correlation and unit definition can then proceed. The choice of the term “pragmatic” is unfortunate in this context, because the suggested method is certainly not empirical. The followers of this method suggest that in some instances historical definitions of units and their boundaries should be modified or set aside in favour of globally recognizable event markers, such as a prominent biomarker, a magnetic reversal event, an isotopic excursion, or, eventually, events based on cyclostratigraphy.
This method does not require that boundaries be defined in places where “nothing happened,” although it is in accord with suggestions in the first stratigraphic guide that “natural breaks” in the stratigraphy could be used, or boundaries defined “at or near markers favorable for long-distance time-correlation”. The virtue of this method is that, where appropriately applied, it may make boundary definition easier to recognize. The potential disadvantage is that it places prime emphasis on a single criterion for definition. From the perspective of this book, which has attempted to clarify methodological differences, it is important to note that the hyper-pragmatic approach relies on assumptions about the superior time-significance of the selected boundary event. The deductive flavour of hypothesis is therefore added to the method. In this sense the method is not strictly empirical. As has been demonstrated elsewhere, assumptions about global synchroneity of stratigraphic events may in some cases be misguided. The hyper-pragmatic approach builds assumptions into what has otherwise been an inductive method free of all but the most basic of hypotheses about the time-significance of the rock record. The strength of the historical and conceptual approach is that it emphasizes multiple criteria, and makes use of long-established practices for reconciling different data bases, and for carrying correlations into areas where any given criterion may not be recognizable. For this reason, proposals to eliminate the distinction between time-rock units (chronostratigraphy) and the measurement of geologic time (geochronology) are problematic. History has repeatedly demonstrated the difficulties that have arisen from reliance on single criteria for stratigraphic definitions, and the incompleteness of the rock record is why “time” and the “rocks” are so rarely synonymous in practice. The latter article provides several case studies of how each approach has worked in practice.
For our purposes, the importance of this history of stratigraphy is that the work of building and refining the geological time scale has been largely an empirical, inductive process. Note that each step in the development of chronostratigraphic techniques, including the multidisciplinary cross-correlation method, the golden spike concept, and the concept of the topless unit, is designed to enhance the empirical nature of the process. Techniques of data collection, calibration and cross-comparison evolved gradually and, with that development, came many decisions about the nature of the time scale and how it should be measured, documented, and codified. These decisions typically were taken at international geological congresses by large multinational committees established for such purposes. For our purposes, the incremental nature of this method of work is significant because it is completely different from the basing of stratigraphic history on the broad, sweeping models of pulsation or cyclicity that have so frequently arisen during the evolution of the science of geology.