Saturday, September 26, 2009

Heat Flow Prediction

How should I go about estimating heat flow when the nearest well is 50 miles away? The common wisdom in basin modeling is to calibrate a heat flow from temperatures in a well and use the same heat flow in the nearby kitchen. The problem is that the kitchen usually has a different heat flow than the locations where we drill wells. Heat flow is mainly a function of crustal thickness and sedimentation rate, among a few other variables. So the correct approach is to estimate the crustal thickness in the kitchen relative to the well location. The effect of the sedimentation rate can be taken care of with a 1D basin model, provided that the model includes the entire lithosphere. This allows the best prediction of heat flow away from wells - even 50 miles away. Here is an example from NW Australia.
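To make the crust-thickness dependence concrete, here is a minimal sketch of steady-state conductive surface heat flow, assuming uniform radiogenic heat production in the crust on top of a constant mantle heat flow. All parameter values (mantle flux, heat production, thicknesses) are illustrative assumptions, not calibrated values from the NW Australia example, and the sketch ignores the transient mantle contribution in a young rift.

```python
# Sketch: how surface heat flow varies with crust thickness, assuming
# uniform radiogenic heat production A in the crust above a constant
# mantle heat flow q_m.  All numbers below are illustrative assumptions.

def surface_heat_flow(crust_thickness_m, q_mantle=0.030, heat_prod=1.0e-6):
    """Steady-state surface heat flow in W/m^2.

    crust_thickness_m : crust thickness in metres
    q_mantle          : mantle heat flow in W/m^2 (assumed)
    heat_prod         : radiogenic heat production in W/m^3 (assumed)
    """
    return q_mantle + heat_prod * crust_thickness_m

# Thicker crust at the well location vs thinned crust in the kitchen:
q_well    = surface_heat_flow(35_000)   # ~35 km crust at the well
q_kitchen = surface_heat_flow(20_000)   # ~20 km stretched crust in the kitchen

print(f"well:    {q_well * 1000:.1f} mW/m^2")
print(f"kitchen: {q_kitchen * 1000:.1f} mW/m^2")
```

The point is only that the radiogenic contribution scales with crustal thickness, so two locations with different crustal thickness should not be assigned the same calibrated heat flow.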

There are a couple of sources of surface heat flow data. One is surface probes, which measure heat flow in the first few meters below the sea floor using a transient measurement; the technique has improved over the past few years. The other is the seismic bottom-simulating reflector (BSR). These techniques may be used to provide a sense of relative regional heat flow variations, but they should not be used directly in basin models to estimate temperatures at reservoir or source depths. Aside from the large uncertainties, in most offshore regions the heat flow varies significantly with depth, and the surface heat flow fluctuates over time due to short-term (1 - 100 thousand year scale) changes in sedimentation rates.
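For context, a BSR-based estimate works roughly like this: the BSR is taken to mark the base of the gas-hydrate stability zone, whose temperature follows from the hydrate phase curve at that pressure, so an average gradient and a surface heat flow can be backed out. The sketch below assumes the BSR temperature is already known; the seafloor temperature, BSR temperature and depth, and conductivity are all illustrative assumptions.

```python
# Sketch: backing out a geothermal gradient and surface heat flow from a
# BSR, assuming the BSR marks the base of the methane-hydrate stability
# zone.  All input numbers are illustrative assumptions; in practice the
# BSR temperature comes from the hydrate phase curve at the BSR pressure.

def gradient_from_bsr(t_seafloor_c, t_bsr_c, bsr_depth_m):
    """Average geothermal gradient (C/km) between seafloor and BSR."""
    return (t_bsr_c - t_seafloor_c) / bsr_depth_m * 1000.0

def heat_flow_from_gradient(gradient_c_per_km, conductivity=1.2):
    """Surface heat flow in mW/m^2, given conductivity in W/m/K (assumed)."""
    return conductivity * gradient_c_per_km

grad = gradient_from_bsr(t_seafloor_c=4.0, t_bsr_c=20.0, bsr_depth_m=400.0)
q = heat_flow_from_gradient(grad)
print(f"gradient ~ {grad:.0f} C/km, heat flow ~ {q:.0f} mW/m^2")
```

This only constrains the shallow average, which is exactly why such values should not be pushed directly down to reservoir or source depths.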


  1. I think a valid question to your points is:

    Is there a danger that your observations are artifacts of doing 1-D calibrations? How close can two 1-D calculations (with different thermal boundary conditions) be before they give significantly different results compared to a 2-D calculation over the same area with the same thermal boundary conditions?

  2. That is a good question. Three-dimensional effects may contribute to the difference in heat flow between a structural high and low, especially at smaller scales in steeply dipping situations. At larger scales, changes in sedimentation rate and variations in crustal thickness and composition are probably the overwhelming causes of the lateral variation. At that scale, a 3D model will probably make little difference compared with the uncertainty (such as lateral compositional change of the crust, which is not considered here) and may not be worth the time. The more common situation is that we have a few wells to calibrate to but need to extrapolate to areas without wells. There is usually a strong correlation between the calculated heat flows and basement geometry, and using this correlation to extrapolate the heat flow would also correct for any 3D effects, since we usually cannot distinguish the contributions from sedimentation rate variation, crustal thickness change, crustal compositional change, and 3D effects.

    There is a tendency for some 2D and 3D numerical models to artificially elevate the temperatures on a high because the numerical grids are not orthogonal where beds dip or are faulted. Bethke's Basin2 has this effect corrected, but I suspect it is not taken care of in some of the commercial codes.
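The basement-geometry correlation mentioned above can be sketched simply: fit the relationship between calibrated well heat flows and basement depth, then use it to predict heat flow where there is no well. The well values and the linear form below are made up for illustration; a real study would choose the regression form from the data.

```python
import numpy as np

# Sketch: extrapolating heat flow from its correlation with basement
# geometry.  The well calibrations below are illustrative assumptions.

basement_depth_km = np.array([6.0, 8.0, 10.0])   # basement depth at wells (assumed)
heat_flow_mw = np.array([62.0, 55.0, 48.0])      # calibrated heat flows, mW/m^2 (assumed)

# Fit a straight line: heat flow as a function of basement depth.
slope, intercept = np.polyfit(basement_depth_km, heat_flow_mw, 1)

def predict_heat_flow(depth_km):
    """Extrapolated heat flow (mW/m^2) from basement depth."""
    return slope * depth_km + intercept

# Predict at a location with no well, e.g. deeper basement in the kitchen:
print(f"predicted at 12 km basement: {predict_heat_flow(12.0):.1f} mW/m^2")
```

As the comment notes, such a fit lumps together sedimentation rate, crustal thickness and composition, and 3D effects; it corrects for them collectively without distinguishing them.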

  3. I think I should have formulated myself differently. I was not referring to technical problems of solving the heat equation on difficult geometries. The errors you introduce with some grids and formulations (plain finite difference, finite volume, or finite element, with or without the off-diagonal elements of the thermal diffusion tensor that arise when the anisotropy is not aligned with the grid orientation) are small compared to the issue I was trying to address.

    What I was fishing for is the need for a conscious evaluation of boundary conditions, and to make modelers aware that there is no red-alert mechanism in 1-D models if you use physically or geologically infeasible variations in boundary conditions between the 1-D runs at different locations. In 2-D and 3-D you do have some possibilities to check for "unreasonableness" and to see how rapidly the effect of spatially variable boundary conditions is diffused away within the more central parts of the computational domain.

    In 2-D and 3-D you will also have a smaller chance of developing incorrect "common wisdom" regarding which parameters actually vary much in nature and how they influence the real system.

    In models, boundary conditions are independent "tuning parameters", and that is the problem: the user must set them to be physically feasible in both space and time. You cannot play around with them independently, because then they become mutually unbalanced forcings in the model.

    To illustrate the issue, I will use a silly but dramatic example. Consider 2 wells 1 km apart. In 1-D you are free (after smoking something illegal :-) ) to have one well model with a 1 HFU flux and the other with 2 HFU, let us say at 20 km. Now set up a similar 2-D model with the same boundary conditions and run both. After a short period, the thermal difference between the 2 wells is minimized in the 2-D case, while the steep difference is preserved between the 1-D models. Now change the type of boundary condition from flux to fixed T in the 2-D model and run it. Then extend the computational domain 1 km down, to 21 km, run it again, and compare the locations where we previously had fixed temperatures. The silly (geologically meaningless) horizontal gradient at the previous boundary is rapidly diffused away over time. Hence, in the 2-D case, simply by extending the computational domain, you would see that the boundary conditions are mutually inconsistent. Naturally, for a simulation to be meaningful, it must be invariant to this kind of perturbation in the boundary conditions, and also to the size of the computational domain (probably most dramatic for flow simulations).

    If we use 1-D models as "calibrators" there is a danger that we derive rules of thumb that are "stable" in further 1-D calculations with similar 1-D models and similar sets of parameters, BUT they could be more like multidimensional statistical models than process models; it is like fitting a statistical model to a set of measurements. I do not say that this is the case here; I only ask if you are certain you have eliminated the problem, and that your observed relationships / rules of thumb are invariant to using 1-D vs properly scaled and parameterized 2-D models (that are "invariant" to spatial variation in the boundary conditions)?
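The two-well thought experiment can be sketched numerically. The toy model below is a simplified analogue of the example above: rather than contrasting basal fluxes, it imposes an initial lateral temperature contrast on a small 2-D grid and shows lateral conduction removing it, whereas two independent 1-D columns would preserve it indefinitely. Grid size, diffusivity, and time step are illustrative assumptions.

```python
import numpy as np

# Sketch: a steep horizontal temperature contrast between two nearby
# "wells" diffuses away in 2-D; independent 1-D columns would keep it.
# Grid, diffusivity and time step are illustrative assumptions.

nz, nx = 20, 20               # depth x horizontal grid
kappa, dx, dt = 1.0, 1.0, 0.2 # diffusivity, spacing, time step (dt <= dx^2/(4*kappa))

T = np.zeros((nz, nx))
T[:, : nx // 2] = 100.0       # "well 1" side hot
T[:, nx // 2 :] = 50.0        # "well 2" side cooler

def lateral_contrast(T):
    """Mid-depth temperature difference between the two sides."""
    return abs(T[nz // 2, 0] - T[nz // 2, -1])

initial = lateral_contrast(T)
for _ in range(800):
    # explicit finite differences with insulated (zero-flux) boundaries
    Tp = np.pad(T, 1, mode="edge")
    lap = (Tp[:-2, 1:-1] + Tp[2:, 1:-1] + Tp[1:-1, :-2] + Tp[1:-1, 2:] - 4 * T) / dx**2
    T = T + kappa * dt * lap

print(f"mid-depth contrast: {initial:.1f} -> {lateral_contrast(T):.1f} C")
```

The initial 50 C contrast decays toward zero once lateral conduction is allowed, which is the commenter's point: the 2-D model exposes boundary-condition combinations that two isolated 1-D columns would happily preserve.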

  4. Thanks for taking the time to comment. I believe I understand your points well. The 1D models in the example are about 100 km apart (note that the slope of the basement is about 3 km over 100 km). The thermal boundary condition is 1300 C at the base of the lithosphere, at a constant depth of about 120 km below the sediment-water interface.

    I have also run the models with the 1300 C isotherm at the same depth below the base of the sediments instead of the same depth below the top of the sediments; it makes very little difference.

    The lithosphere thickness was assumed constant through time. It could also be varied based on a stretching rift model (Jurassic rift), but that would have essentially no effect on the point this post is trying to make.