Interactive comment on “Can we detect regional methane anomalies? A comparison between three observing systems”

The comments are in bold and our answers in normal font.


I think the main issue with this paper is the fact that it does not clarify well enough the concepts of signal and, in particular, noise. As such, it is not clear why REFSURF is the signal?
We chose REFSURF as the signal since it is the inversion that covers the longest time period. Therefore, as explained in Section 2.4, we assume that the inter-annual variability of the inverted fluxes can be more robustly computed over this period (2000–2011) than over only the 2 or 3 years available with the other inversions (e.g. satellite-based).

If you are defining the fluxes from REFSURF as the signal, does this mean this is a pseudo-data experiment, or, if it is a real data experiment, isn't REFSURF expected to have the same shortcomings as the SURF inversion?
REFSURF is a real-data experiment since we use the actual observations in this inversion. SURF refers to an ensemble of inversions from which posterior (Bayesian) error statistics are computed. Therefore, the information we get from SURF (posterior flux errors, referred to as the noise) is not the same as what we get from REFSURF (the fluxes themselves).
Please also expand on how noise is defined, and why GOSAT has less noise than SURF and these two have less noise than IASI?
We agree that clarifications are needed. The noise is the posterior error variance computed for the IAV of the posterior fluxes over the ensemble of inversions. For each dataset (surface observations = SURF, GOSAT and IASI), we perform 10 one-year inversions, varying the inversion setup according to the objective analysis described in Cressot et al. (2014). The standard deviation of each ensemble provides an estimate of the residual error on the methane fluxes, which we call the "noise". We thus get three "noises", one for each dataset. We will clarify these explanations in the revised version of the manuscript. The differences between the noises are mainly due to the constraints that each observing system brings on the fluxes. This is linked to the number of data, to their distribution in time and space, to their sensitivity to methane fluxes (whether they "see" the actual surface fluxes) and to their uncertainty. The noise depends on the region, but very often GOSAT has a smaller noise than SURF and IASI because it has more data and these data are more sensitive to the surface (GOSAT "sees" the boundary layer) than IASI data (IASI "sees" the free troposphere only). SURF sometimes does better than IASI because the stations are mostly in the boundary layer (and therefore "see" the surface fluxes directly), whereas IASI provides more integrated information.
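The ensemble-based noise estimate described above can be sketched as follows. This is a minimal illustration with made-up numbers: the observing-system names come from the paper, but the flux values and the simple signal-above-noise criterion are assumptions for demonstration only, not the actual inversion output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior flux anomalies for one region (arbitrary units):
# 10 ensemble members per observing system, each from a different inversion setup.
ensembles = {
    "SURF":  rng.normal(0.0, 1.2, size=10),
    "GOSAT": rng.normal(0.0, 0.8, size=10),
    "IASI":  rng.normal(0.0, 2.0, size=10),
}

# "Noise" = standard deviation of each ensemble (residual flux error).
noise = {name: np.std(members, ddof=1) for name, members in ensembles.items()}

# "Signal" = inter-annual variability of the REFSURF fluxes (illustrative value).
signal = 1.5

for name, n in noise.items():
    print(f"{name}: noise={n:.2f}, signal/noise={signal / n:.2f}, "
          f"detectable={signal > n}")
```

The detectability test shown here (signal larger than noise) is only a stand-in for the detection-rate statistics reported in the paper's tables.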
Another issue is the fact that too few details are provided in the method section. Please consider expanding on the following issues:
-The driving meteorology is nudged to what?
We forgot to clarify this: we use ECMWF analysed winds. The method section will be expanded, also taking the previous comment into account.

-Why is only OH loss considered and not the stratosphere, soil and Cl in the marine boundary layer?
The stratospheric sink is considered through the O(1D) loss and also the OH loss, which applies in both the troposphere and the stratosphere. The soil sink is not considered as such, as the inversions infer net surface emissions, which include the soil uptake. The Cl loss in the marine boundary layer is not implemented yet in our model. This is a limitation of the model that will be acknowledged in the revised version; the implementation of the required reaction is currently in development.

Is the uncertainty in MCF emissions considered?
Yes, it is considered: it is taken into account in the B matrix, as for the methane emissions, with an uncertainty set at 1% (MCF emissions are fairly well known and therefore allow constraining OH concentrations effectively, at least until these emissions became negligible).
-The fact that surface observations are not used in the inversions with GOSAT and IASI should be made clear earlier in the paper.
We will do this in Section 2.1.
-Are there also no long-term trends in the anthropogenic emissions?
Only net emissions are inferred, and REFSURF includes the regional trend of the anthropogenic emissions.
-Review spatial and temporal correlations assumed in the prior.
As stated in Cressot et al. (2014): "spatial correlations are defined by an e-folding length of 500 km over land and 1000 km over the ocean, without correlation between land and ocean. Temporal correlations are defined by an e-folding length of 2 weeks".

It was checked that combining all errors (variances, and covariances from the correlations) leads to a budget uncertainty consistent with that of current bottom-up inventories, as described in Kirschke et al. (2013). This point will be clarified in the revised version.
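For illustration, exponentially decaying prior-error correlations of the kind quoted above (e-folding length of 500 km over land) can be assembled into a small B matrix as follows. This is a sketch with arbitrary pixel positions and error magnitudes, not the setup actually used in the inversions:

```python
import numpy as np

# Illustrative pixel set: along-track distances (km) between four land pixels.
positions_km = np.array([0.0, 250.0, 600.0, 1500.0])
sigma = 10.0    # prior flux error standard deviation per pixel (arbitrary units)
L_land = 500.0  # e-folding correlation length over land (km)

# Exponentially decaying spatial correlations: corr(d) = exp(-d / L).
dist = np.abs(positions_km[:, None] - positions_km[None, :])
corr = np.exp(-dist / L_land)

# Prior error covariance matrix B, with uniform variances sigma**2.
B = sigma * corr * sigma
```

The off-diagonal terms of B shrink with distance, so nearby pixels share prior error information while distant ones are treated as nearly independent; over the ocean, or between land and ocean, the correlation model quoted above would differ.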
-PBSURF inversion should be introduced earlier and the differences with respect to REFSURF made clearer.
We introduce PBSURF only later so as not to confuse the reader with the different inversions defining the signal and the noises. The main differences between REFSURF and PBSURF are:
• PBSURF uses an analytical inversion whereas REFSURF is variational;
• because of this, PBSURF solves for big regions whereas REFSURF works at the pixel scale;
• as a consequence, the B matrices of the two inversions are quite different;
• PBSURF uses monthly means of the surface observations as constraints whereas REFSURF uses hourly data;
• PBSURF retrieves monthly fluxes whereas REFSURF retrieves fluxes at a weekly resolution.
-Treatment of input data (e.g. discarding non-background conditions, treatment of flask pairs, more on the model data mismatch).
More on these topics is available in Cressot et al. (2014). We will provide a more extensive summary of Cressot et al. (2014) in the revised version, but details can be found in that former paper.
-Please expand a bit more on how the Monte Carlo ensemble works and explain if you calculate fluxes or only error statistics.
We have generated ensembles of fluxes from ensembles of inversions, which allowed us to compute Bayesian error statistics. We will expand the description.
In general, I found the way much of the results are given in tables quite difficult to understand, particularly at the regional spatial scale. This could be substituted in the following ways:
-Maps for each observation system and each temporal scale showing the detection rates at the regional spatial scale.
For the revised version, we can propose the attached maps which correspond to the detection rates given in Tables 1, 2, 5 and 6.
-A map for each observation system showing the time scale at which the best detection rates were found.
We can propose the attached synthetic maps, but we are not fully convinced that they are very useful compared to the tables.
-I think it would be a great contribution to provide maps that delineate the regions of spatial aggregation that provide the best detection rates for a chosen observing system.
This would indeed be interesting, but it goes beyond the aim of this paper. Finding the associations of pixels which optimize the signal-to-noise ratio would be very costly, as explained in Section 3. The paper is focused on a signal-to-noise ratio analysis for the IAV rather than on a detailed analysis of the seasonality of methane fluxes. We think the paper already contains a lot of material, and adding a seasonal analysis would take another angle than the one chosen here.
Finally, please expand more on the section where you compare with Bergamaschi (2013).
We will expand the discussion in the revised version.

Fig. 4. Detection rate (%) of the signal consisting in the anomalies at the yearly time-scale, as in Table 2, for GOSAT.
Fig. 5. Detection rate (%) of the signal consisting in the anomalies at the yearly time-scale, as in Table 2, for IASI.
Fig. 6. Detection rate (%) of the signal consisting in the anomalies at the yearly time-scale, as in Table 2, for SURF.
Fig. 7. Detection rate (%) of the signal consisting in the anomalies at the "seasonal" time-scale, as in Table 5, for GOSAT.
Fig. 8. Detection rate (%) of the signal consisting in the anomalies at the "seasonal" time-scale, as in Table 5, for IASI.
Fig. 9. Detection rate (%) of the signal consisting in the anomalies at the "seasonal" time-scale, as in Table 5, for SURF.
Fig. 10. Detection rate (%) of the signal consisting in the anomalies at the yearly time-scale, as in Table 6, for GOSAT.
Fig. 11. Detection rate (%) of the signal consisting in the anomalies at the yearly time-scale, as in Table 6, for IASI.
Fig. 12. Detection rate (%) of the signal consisting in the anomalies at the yearly time-scale, as in Table 6, for SURF.