Comments on Common-Focus-Point Gathers

William S. Harlan

Feb 1999

Introduction

At the 1996 SEG Annual Meeting in Denver, three papers first introduced the concept of velocity-independent imaging with common-focus-point (CFP) gathers. I came to some rather negative conclusions about the method at the time and haven't seen any reason to change this opinion since. Here are my remarks after first seeing the papers.

Common-focus-point gathers

Three papers discussed common-focus-point (CFP) gathers: “Seismic processing between two focusing steps” (MIG 1.1) by A.J. Berkhout, “Migration velocity analysis using the common focus point technology” (MIG 1.2) by M.M. Nurul Kabir and D.J. Verschuur, and “Automating prestack migration analysis using common focal point gathers” (MIG 1.3) by Scott A. Morton and Jan Thorbecke. These gathers are equivalent to those used by conventional depth-focusing analysis [2,1], but are put to a slightly different use.

Scott A. Morton of Cray Research defined a CFP gather simply with a Kirchhoff implementation. (A.J. Berkhout used his operator notation, with less explicit arguments.)

  $\displaystyle {\rm CFP} ({\svector x}_s , {\svector x}_m , \tau) = \int \! \! \int
  {\rm Data} [ {\svector x}_s , {\svector x}_r ,
  t = \tau + T({\svector x}_m , {\svector x}_s) + T({\svector x}_m , {\svector x}_r) ]
  \; d^2 {\svector x}_r ,
  $ (1)
where ${\svector x}_s , {\svector x}_r , {\svector x}_m $ are the spatial coordinates of the source, receiver, and focus point, $t$ is the recorded time, and $\tau$ is a downward-continued time. The functions $T$ give the one-way traveltime between two points for a particular velocity model. Each output CFP trace extrapolates the receivers down to the depth of the focus (focal) point and subtracts the traveltime to the source. (The source shift appeared in Scott Morton's slide presentation, but not in the abstract.) Ideally, a good velocity model should produce a flat, consistent phase at zero time across different sources. A conventional Kirchhoff depth migration would produce an amplitude at the CFP location ${\svector x}_m$ by summing over all source positions (${\svector x}_s$) at zero time.
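
To make the operator concrete, here is a minimal numerical sketch of equation (1) in Python. The array layout, sample interval, and linear interpolation of traveltimes onto the time grid are my own assumptions; the abstracts give no implementation details.

  import numpy as np

  def cfp_gather(data, dt, t_src, t_rcv, n_tau):
      # Kirchhoff-style CFP gather for one focus point x_m (equation 1).
      #   data  : (n_src, n_rcv, n_t) recorded traces, sample interval dt
      #   t_src : (n_src,) one-way times T(x_m, x_s) to each source
      #   t_rcv : (n_rcv,) one-way times T(x_m, x_r) to each receiver
      # Returns one receiver-summed trace per source: (n_src, n_tau).
      n_src, n_rcv, n_t = data.shape
      taus = np.arange(n_tau) * dt
      gather = np.zeros((n_src, n_tau))
      for i in range(n_src):
          for j in range(n_rcv):
              # Evaluate the data at t = tau + T(x_m, x_s) + T(x_m, x_r),
              # then sum over receivers (the d^2 x_r integral).
              k = (taus + t_src[i] + t_rcv[j]) / dt
              k0 = np.floor(k).astype(int)
              w = k - k0
              ok = (k0 >= 0) & (k0 < n_t - 1)
              gather[i, ok] += ((1.0 - w[ok]) * data[i, j, k0[ok]]
                                + w[ok] * data[i, j, k0[ok] + 1])
      return gather

With a good velocity model, the zero-time samples gather[:, 0] should carry the flat, consistent phase described above.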

Most CFP gathers are not perfectly flat at zero time because of suboptimal velocities. Conventional depth-focusing analysis locates the flattest reflections at earlier or later times and then displays this error as an equivalent depth or average velocity correction. The correct depth of a flat event at non-zero time $\tau$ is expected to fall halfway between the CFP depth ${\svector x}_m$ and the depth at which the event would migrate to zero time without flattening. These depth or velocity errors give only an average correction to the velocity model from the surface down to the reflector depth. Some sort of tomographic back-projection is necessary to distribute these velocity errors correctly through the overlying model and to reconcile them with the errors for other reflections.
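
As a rough illustration of the halfway rule (my own constant-velocity, near-vertical-ray arithmetic, not a statement from the papers): moving the focus depth by $\Delta z$ changes each traveltime leg in equation (1) by about $\Delta z / v$, so a flat event at time $\tau$ would migrate to zero time near a depth $v \tau / 2$ below the CFP depth, and the halfway rule places the reflector near

  $\displaystyle z \approx z_m + \frac{v \, \tau}{4} ,
  $

where $z_m$ is the depth of the focus point ${\svector x}_m$ and $v$ is the local velocity.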

The new CFP papers assume that velocity models will be layered and that layer boundaries will produce reflections that can be identified in unstacked CFP gathers. Velocity models are optimized by layer-stripping—one layer velocity and boundary at a time.

At this point CFP analysis begins to depart from depth-focusing analysis. A user identifies the next significant reflection, chooses an initial velocity for the overlying layer, and proposes corresponding depths for the reflector (perhaps from the depth image for the previous iteration). The user examines CFP gathers at the proposed depths and then looks for the unflattened reflection that was expected to image at this depth—or for any other reflection that might now appear easier to pick. Because the mislocated reflection is not flat, the coherence cannot be identified as automatically as for depth-focusing analysis. The reflection may also lie several cycles away from the CFP zero time, so snapping would appear impossible.

Instead of attempting to use this imaging error to update velocities, these authors update the traveltimes for the Kirchhoff operator by adding half the picked time errors to the traveltimes used previously for this CFP position. They produce a new CFP gather without a more expensive remodeling of traveltimes. Again, the procedure has converged when the CFPs are flat at zero time. Although the abstracts do not say, I expect the CFP depth positions are also revised by half the difference with the image depth of the intended reflection. (Otherwise the final CFPs will not track the reflection.)
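
A minimal sketch of this bookkeeping, under my own assumptions that the picked residual is stored per source trace of the gather and that the source-side traveltimes absorb the correction (the abstracts specify neither):

  def update_cfp_operator(t_src, tau_picked):
      # Fold half of each picked residual time back into the Kirchhoff
      # operator for this focus point.  Rebuilding the CFP gather with
      # the revised times should move the event closer to zero time,
      # with no remodeling of traveltimes through a velocity model.
      #   t_src      : (n_src,) current one-way times T(x_m, x_s)
      #   tau_picked : (n_src,) residual times of the picked event,
      #                one per source trace; zero when already flat
      return t_src + 0.5 * tau_picked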

This splitting of time errors would appear to assume that velocity errors are well behaved in the lateral direction from near to far offset. Conventional focusing analysis makes the same assumption to split depth errors.

Scott Morton states that the final unimplemented step of the algorithm is to revise velocities by a tomographic inversion of the updated Kirchhoff operators. There is no guarantee that revised traveltimes can be fit by a single velocity model.

Hans Tieman of GDC pointed out an interesting degenerate case to me. For a single layer beginning at the surface, one could imagine that the data had been migrated with a zero velocity at zero depth. The CFP gathers then become identical to the original shot profiles. Picking residual moveout amounts to picking the raw prestack moveouts. The data could be stacked and imaged perfectly in the next iteration. The final nontrivial step is to convert all these picked traveltimes into a velocity model (tomography). CFPs would remain at zero depth until we revised our reference velocity model.
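
Tieman's observation is easy to check in equation (1): with zero velocity at zero depth, both traveltime terms vanish, $T \equiv 0$, leaving

  $\displaystyle {\rm CFP} ({\svector x}_s , {\svector x}_m , \tau) = \int \! \! \int
  {\rm Data} ( {\svector x}_s , {\svector x}_r , \tau ) \; d^2 {\svector x}_r .
  $

The operator applies no time shift at all, so (apart from the receiver summation built into this Kirchhoff form) the gather carries exactly the moveouts of the raw shot profile.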

A constant-offset implementation would better avoid artifacts from the limited range of offsets present in common-source profiles, but might violate some of the (unstated) assumptions in these three papers.

All in all, I find it difficult to extract a practical algorithm from these details, assuming that we desire to arrive at a meaningful depth section. The authors do not say how to revise CFP depths for the intended reflection. Without revision, why should a reflection be forced to produce a flat CFP at an arbitrarily chosen depth? Nevertheless, some features are interesting, and many listeners were enchanted by the idea that the imaging operator could be revised directly without a physically consistent revision of the velocities.

Two admirers of the CFP approach, who read my description above, believe the method is not intended to estimate meaningful depths directly. They stress that the method uses downward continuation to simplify the coherence and improve the signal-to-noise ratio of reflections before picking. The revised traveltime operators are the final objective: these picks provide a robust estimate of reflection moveouts for input to tomography.

I have already used several forms of prestack moveout picking as input to reflection tomography: moveouts after constant-offset depth or time migration, after DMO only, from combinations of prestack moveouts and poststack picks, and other gathers that appear conveniently during processing. The moveouts of all such picks are modeled to invert the geometric effects of the imaging and produce equivalent tables of unmigrated traveltimes. After conversion, the same reflection tomography program inverts them all. It would not be difficult to add CFP picks to this list and use them as a new alternative. Nevertheless, I find few advantages. Shot-profile migration produces too many artifacts compared to constant-offset migration. Picking residual moveouts is easy unless we are expected to track specific reflections before and after imaging. I would prefer to pick the moveouts of the flattest reflections in a CFP gather, as in conventional depth-focusing analysis. Unstacked prestack depth migration with a reference model enhances the signal-to-noise ratio.
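
For CFP picks the conversion is especially direct, since equation (1) is an explicit time shift. A sketch, under the hypothetical assumption that residual times can be attributed to individual source-receiver pairs (say, picked before the receiver summation):

  def unmigrated_traveltime(tau_pick, t_src, t_rcv):
      # Invert the time shift of equation (1): a residual time tau_pick
      # associated with source x_s and receiver x_r corresponds to a raw
      # recorded traveltime t = tau_pick + T(x_m, x_s) + T(x_m, x_r).
      # Tables of such times can join picks from other imaging domains
      # as input to the same reflection tomography program.
      return tau_pick + t_src + t_rcv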

Imaging algorithms cannot leave velocity estimation as an exercise for the reader. A solid tomography algorithm probably takes an order of magnitude more computer code than an imaging algorithm. Velocities are the hard part. It would be convenient if we could produce depth images without velocities, but we would be obliged to accept an arbitrarily scaled depth axis.

Later remarks

In 1998, this method continues to be discussed, although I have yet to see anyone estimate a velocity model from recorded data. The fatal flaw remains the same.

One must choose a CFP gather for a particular image depth, then identify, at a non-zero image time, the reflection that one expected to see at zero time. This seems fundamentally impractical. The mislocated reflection will not be flat or have any other distinctive coherence. Instead, one must recognize a reflection that one has seen before imaging. Not surprisingly, the only examples I have seen use synthetic data with a few isolated strong reflections. On Gulf Coast data, with many weak reflections, such picking would be impossible.

Conventional depth-focusing analysis uses similar image gathers, but allows one to pick the flattest reflection at a non-zero imaging time. Recognizing flatness is easy with numerical tools like semblance. It is not necessary to know where this reflection came from before imaging.
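
Flatness of this sort can be scanned with ordinary semblance across the traces of a gather; a minimal sketch in Python (the smoothing window is my choice):

  import numpy as np

  def flatness_semblance(gather, half_window=5):
      # Semblance per time sample for a (n_trace, n_t) gather.  Values
      # near 1 mark times where all traces carry the same flat event;
      # the flattest reflection is simply the semblance peak, with no
      # need to know where the event came from before imaging.
      n_trace, n_t = gather.shape
      num = np.sum(gather, axis=0) ** 2            # (sum of traces)^2
      den = n_trace * np.sum(gather ** 2, axis=0)  # n * sum of squares
      kernel = np.ones(2 * half_window + 1)        # short time window
      num = np.convolve(num, kernel, mode="same")
      den = np.convolve(den, kernel, mode="same")
      return num / (den + 1e-12)

  # Example: pick the flattest event time in a CFP gather.
  # tau_flat = np.argmax(flatness_semblance(gather)) * dt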

Bibliography

1. Jean Pierre Faye and Jean P. Jeannot. Prestack migration velocities from focusing depth analysis. In 56th Annual International Meeting, SEG, Expanded Abstracts, Session S7.6. Soc. Expl. Geophys., 1986.

2. S. MacKay and R. Abma. Imaging and velocity estimation with depth-focusing analysis. Geophysics, 57(12):1608–1622, 1992.