amruiz

  • Hello,

    For the first time, I am looking at velocity standard deviations.

    With a coherence of 0.88, the velocity standard deviation is 1.4 mm/y, and with a coherence of 0.46, the estimated velocity standard deviation is 1.7 mm/y (velocity 18 mm/y). Is this right, having a set of 33 scenes? I am (hopefully) attaching the CSV.
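
    As a rough cross-check of such numbers (not SarProz's internal estimator, just the standard error of the slope of an ordinary least-squares line fitted to the exported time series), a minimal MATLAB sketch could look like the one below; the dates and displacement values are placeholders taken from elsewhere in this thread and should be replaced by the 33 entries in the CSV.

      % Placeholder inputs: fill in all 33 acquisition dates and LOS displacements (mm)
      t = datenum({'2012-08-03'; '2012-08-19'; '2015-01-12'});
      d = [0.00; 0.43; -14.53];

      ty = (t - t(1)) / 365.25;            % time in years since the first acquisition
      A  = [ty, ones(size(ty))];           % design matrix for d = v*ty + c
      x  = A \ d;                          % least-squares velocity (mm/y) and offset
      r  = d - A * x;                      % residuals
      s2 = (r' * r) / (numel(d) - 2);      % residual variance
      C  = s2 * inv(A' * A);               % covariance of the estimated parameters
      fprintf('velocity %.2f mm/y, std %.2f mm/y\n', x(1), sqrt(C(1, 1)));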

  • Sorry, you are right.
    The cumulative displacement is calculated using the model you chose for the time series analysis.
    This reduces possible noise.

  • Then, shouldn’t the cumulative displacement be equal to the displacement at the last acquisition?

    Let me give you a concrete example:

    20120803 (First Acquisition): 0.00
    20120819: 0.43
    .
    .
    .
    20150112 (Last Acquisition): -14.53 -> Is this the displacement of 20150112 with respect to 20120803?
    CUMULDISP: -15.54
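
    To make the distinction concrete, here is a sketch of how a model-based cumulative displacement can differ from the raw value at the last acquisition. It assumes, purely for illustration, a linear deformation model (the actual model is whatever was chosen in the time series analysis), and the helper name is hypothetical.

      function [cumuldisp, raw_last] = cumdisp_linear(t, d)
      % t: acquisition dates (datenum); d: LOS displacements (mm) referred to
      % the first acquisition (d(1) = 0).
      ty = (t - t(1)) / 365.25;
      x  = [ty, ones(size(ty))] \ d;          % least-squares velocity and offset
      cumuldisp = x(1) * (ty(end) - ty(1));   % modelled displacement over the full span
      raw_last  = d(end);                     % raw value at the last acquisition
      % In the example above, CUMULDISP (-15.54) is the modelled quantity, while
      % -14.53 is the raw (noisier) value of the last acquisition.
      end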

  • The time series in the site processing are referred to the first acquisition.
    In the small area processing you can choose whether to refer them to the first acquisition or to the Master one.
    However, the Master acquisition may be noisy, so the software estimates that error and removes it (so the value at the Master may not be zero).
    The cumulative…
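
    A hedged sketch of the re-referencing idea follows; the way the Master error is estimated here, via the residual of a fitted linear model, is only an assumption for illustration and not necessarily what the software does, and the function name is hypothetical.

      function d_ref = rereference(t, d, im)
      % t: dates (datenum); d: displacement time series (mm); im: Master index.
      % Referring to the first acquisition is simply d - d(1). Referring to the
      % Master naively would be d - d(im), but if the Master is noisy one can
      % remove a model-based estimate of its value instead (assumed linear model).
      ty = (t - t(1)) / 365.25;
      x  = [ty, ones(size(ty))] \ d;        % fitted velocity and offset
      master_est = x(1) * ty(im) + x(2);    % modelled value at the Master date
      d_ref = d - master_est;               % the Master sample may now be non-zero
      end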

  • Hello Daniele,

    I have a doubt regarding the YYYYMMDD fields (created with the time series option), which seem to show displacement values. Do these values refer to a cumulative displacement (i.e. the cumulative displacement at date 20010721) or to the single displacement registered for that acquisition (in that case, with respect to what or when?)…

  • Yes, that is something to be tidied up.
    Initially, the Cum Disp tool was generated to calculate it based on the estimated coefficients of a polynomial.
    Later on, I invented the smart tool, which is a non-parametric time series analysis.
    So, the Cum Disp tool is consistent with the rest of the results (and it calculates the proper things) only if…
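
    For what it's worth, the polynomial-coefficient route described above can be sketched as follows; this is a hypothetical helper, and the polynomial order and units are assumptions.

      function cumuldisp = cumdisp_poly(t, d, order)
      % Cumulative displacement from the coefficients of a fitted polynomial:
      % evaluate the model at the last and first epochs and take the difference.
      ty = (t - t(1)) / 365.25;               % years since the first acquisition
      p  = polyfit(ty, d, order);             % estimated polynomial coefficients
      cumuldisp = polyval(p, ty(end)) - polyval(p, ty(1));
      end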

  • The small area processing has been designed as a process independent of the full site analysis, even though the two can interact (see b).
    So, my suggestion is:
    (a) if you want to carry out different processing sessions with different parameters, you should save a different small area dataset each time. You do this using the “save” button in the small…

  • Hello Daniele,

    I’ve tried to generate Cumulative Displacement files covering different periods of time, entering different dates in the Cumulative Displacement tool. However, when exporting to KML/KMZ I think I’m always getting the same file as the one initially generated for the complete temporal length covered by my images. Looking at the logfile…

  • Hello,

    I would like to ask how saving result variables in the small area works and what is the right procedure to estimate one variable (ktemp) first and another one (defo) later.

    I have saved the small area results using save (computed without linear deformation) before, and now I have only loaded them. The file ktemp.mat is old. So, without any…

  • Hello Daniele,

    Thank you, now it is clear to me.

    Cheers,
    S

  • Hi Santiago,
    this is a small misunderstanding. To answer your question we first need to clarify some parameter names.
    – Temporal Coherence: it’s the coherence calculated from the time series phase residues (the classic PSI parameter)
    – Synthetic Coherence: it will be removed. It was a simulation
    – Spatial Coherence: it’s the mean interferometric…
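
    For reference, the classic PSI temporal coherence mentioned above is usually computed as the magnitude of the mean complex phasor of the phase residues; a minimal sketch (SarProz may apply additional weighting):

      function gamma = temporal_coherence(phi_res)
      % phi_res: time-series phase residues in radians (one value per acquisition).
      gamma = abs(mean(exp(1i * phi_res(:))));
      end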

  • Hello,
    After finishing the processing and opening a new SarProz session, is it possible to re-plot the histograms and graphs showing the coherence (both before and after APS removal)?

    I have saved the Parameters and the Coherence in the Sparse Points Processing section, and then I’ve tried to map the Coer. First and Coer. Second in View…

  • -local maxima extraction with range lobes suppression
    -local maxima extraction with range and azimuth lobes suppression

    These two options are useful for suppressing ghost targets generated by side lobes of the impulse response of the system (cardinal sine). In fact, side lobes could already be suppressed by filters used in the focusing process…
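
    As a rough illustration of the plain local-maxima idea (not SarProz's exact implementation, and without the lobe-suppression step, which additionally rejects maxima aligned in range and/or azimuth with a much stronger nearby target):

      function mask = local_maxima(refl, win)
      % refl: reflectivity map; win: window size in pixels (default 3).
      % A pixel is selected if it equals the maximum of its win-by-win neighbourhood.
      if nargin < 2, win = 3; end
      mx   = movmax(movmax(refl, win, 1), win, 2);   % sliding neighbourhood maximum
      mask = (refl == mx);
      end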

  • Hello d,

    Normally I’ve created the mask with the ‘Local Maxima from Reflexivity’ option. Could you explain how the other options work for the mask creation?

    -local maxima extraction with range lobes suppression
    -local maxima extraction with range and azimuth lobes suppression
    -regular grid, spaced dS by dL

    thanks

  • periz replied to the topic Exporting products in the forum Sarproz Forum 9 years, 5 months ago

    In such a case, I suggest you read the data directly from disk.
    All parameters are written as SxL matrices, float (real or complex depending on the case).
    You can use the function leggi included in the pcodes:
    Data=leggi(FileName,S,L);
    FileName is a string containing the name of the file with full path.
    S and L are samples and lines.
    This…
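
    If the leggi p-code is not at hand, the same kind of file can in principle be read with plain fread; the float32 precision, byte order and sample-major layout below are assumptions that may need adjusting to match leggi's output.

      fid  = fopen(FileName, 'r');              % FileName, S, L as defined above
      Data = fread(fid, [S, L], 'float32');     % real-valued parameter, S samples x L lines
      fclose(fid);
      % A complex parameter would typically be stored as interleaved real/imaginary
      % pairs; in that case read 2*S values per line and recombine, e.g.:
      %   raw  = fread(fid, [2*S, L], 'float32');
      %   Data = raw(1:2:end, :) + 1i * raw(2:2:end, :);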

  • Hello Daniele,

    When exporting a parameter in GeoTIFF format, what I have been able to create is a file containing a color ramp (from 0 to 255) with the corresponding reference scale, but I would like to have the actual parameter values in order to compare with outputs from other software. Is this possible?

    Thanks,
    Santiago
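
    One possible workaround, assuming the parameter has already been read into a matrix V and its latitude/longitude limits latlim/lonlim are known (all placeholders here), is to write the raw values as a single-band GeoTIFF with MATLAB's Mapping Toolbox, so other software sees the actual numbers instead of a 0-255 ramp; row order may need flipping depending on how the matrix is stored.

      % V, latlim, lonlim are placeholders: the parameter matrix and its geographic limits.
      R = georefcells(latlim, lonlim, size(V));        % geographic referencing object
      geotiffwrite('parameter_values.tif', single(V), R, ...
                   'CoordRefSysCode', 4326);           % plain WGS84 lat/lon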

  • If you have more images, use them.
    Using more images makes all statistical parameters more stable, and estimates are much more accurate.
    If you are interested only in a given period of time, you will focus your attention there. But you have to use all images you can get.
    You can e.g. use the ampl. stab. index estimated on many more images, get the…
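
    The amplitude stability index mentioned here is commonly taken as one minus the amplitude dispersion (standard deviation over mean) across the stack, which is why more images make it more reliable; a minimal sketch (check that it matches SarProz's exact convention):

      function asi = amp_stability_index(A)
      % A: amplitude stack, size [rows x cols x N images].
      asi = 1 - std(A, 0, 3) ./ mean(A, 3);   % per-pixel 1 - sigma/mu
      end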

  • Hello,

    I’ve been carrying out tests of PS processing with a stack of 9 CSK images, always selecting the Sparse Points according to their Amplitude Stability Index.

    What I would like to know is whether the results obtained with this approach are reliable (the coherence I get both before and after APS removal is high), or if I should increase the…
