[gradsusr] mysterious bug with merra-2 data
Muhammad Yunus Ahmad Mazuki
ukm.yunus at gmail.com
Tue Mar 6 03:58:01 EST 2018
Hi,
If you are using Windows 10, I highly recommend that you try the Bash
shell it provides.
Yunus
On Mon, Mar 5, 2018 at 9:24 PM, Antoine Molin <amolin at plenr.fr> wrote:
> Hi,
>
> Thanks, but I am working under Windows, and I am not using the NetCDF library...
> Well, I downloaded my November MERRA-2 files again (I had the problem with a descriptor file beginning 1st November…), and now it works well. I had 30 files for November before, all the same size, but there was a bug somewhere....
> Thanks for your answer; I will have to find a way to check my downloaded data!
>
> Best regards
> Antoine Molin
>
>
>
> -----Original Message-----
> From: gradsusr [mailto:gradsusr-bounces at gradsusr.org] On behalf of Muhammad Yunus Ahmad Mazuki
> Sent: Monday, 5 March 2018 12:39
> To: GrADS Users Forum
> Subject: Re: [gradsusr] mysterious bug with merra-2 data
>
> Hi,
>
> It is a guess, but from the description in https://gmao.gsfc.nasa.gov/pubs/docs/Bosilovich785.pdf , the data you downloaded may be 'packed'. If you could provide the output of ncdump -h for the files, that would be great. Please look at the variable attributes scale_factor and add_offset. If their values differ from file to file, you will run into a little bug in the GrADS template option.
> The problem is that GrADS only takes the attributes from the first file and applies them to all files. This leads to incorrect reading of all the data except the first file in the template.
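>
> For example, you could check the attributes in every file with something like this (the file name pattern is only an illustration; adjust it to your own files):
>
>   for f in MERRA2_400.tavg1_2d_slv_Nx.*.SUB.nc4 ; do
>     echo $f
>     ncdump -h $f | grep -E 'scale_factor|add_offset'
>   done
>
> If the printed values change from file to file, you are hitting the issue described above.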
>
> There are a few ways to overcome this. The first is to use cdo mergetime with the option -b F32 or -b F64 (see https://code.mpimet.mpg.de/projects/cdo/wiki#netCDF-with-packed-data); the output will be unpacked and therefore large. Since the bug is old, I'm not sure of the current status of cdo in handling packed data.
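>
> For example (the file names are only placeholders):
>
>   cdo -b F32 mergetime MERRA2_400.tavg1_2d_slv_Nx.2017*.SUB.nc4 merra2_2017_unpacked.nc
>
> The -b F32 makes cdo write 32-bit floats, i.e. the merged output is stored unpacked.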
> The second is to unpack all the data using nco, merge the files, and repack the result. This will create one large packed file.
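>
> A rough sketch of that with nco (ncpdq -U unpacks, ncrcat concatenates along the record/time dimension, ncpdq without -U repacks; the file names are only placeholders, and if time is not the record dimension in your files you may first need to make it one with ncks --mk_rec_dmn time):
>
>   for f in MERRA2_400.tavg1_2d_slv_Nx.2017*.SUB.nc4 ; do
>     ncpdq -U $f unpacked_$f
>   done
>   ncrcat unpacked_MERRA2_400.tavg1_2d_slv_Nx.2017*.SUB.nc4 merged_unpacked.nc
>   ncpdq merged_unpacked.nc merged_packed.nc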
> The third is to unpack all the data using nco, find a common scale_factor and add_offset (which you have to calculate yourself), and repack each file with the new common values. This retains the original number of files, with only slight variations in size.
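>
> For reference, the packing convention nco uses by default is roughly
>
>   scale_factor = (max - min) / (2^16 - 2)
>   add_offset   = (min + max) / 2
>   packed_value = short((unpacked_value - add_offset) / scale_factor)
>
> so for a common pair you would take min and max over all the files together, repack each file with those values, and overwrite the attributes (for example with ncatted) so that every file carries the same scale_factor and add_offset.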
>
> Do note that nco can be quite RAM intensive, in the sense that it is better suited to a cluster job than cdo is. Whether to use cdo or nco depends on the overall size of the data being worked on and the amount of RAM in your machine.
>
> Yunus
>
> On Fri, Mar 2, 2018 at 10:11 PM, Antoine Molin <amolin at plenr.fr> wrote:
>> Hi,
>>
>>
>>
>> I am using MERRA-2 U and V wind speeds at 50 m for wind energy
>> studies in France; very good data…
>>
>>
>>
>> Getting daily "MERRA2_400.tavg1_2d_slv_Nx.%y4%m2%d2.SUB.nc4" files, I
>> read the data in GrADS using xdfopen and the enclosed data descriptor files.
>>
>>
>>
>> I really don't understand why I get different values with different
>> descriptor files for the same point and time:
>>
>>
>>
>> As an example, at point (49°N, 4.375°E) at 2017-12-01 00h, I get
>> U50m = 5.31 m/s using "merra2_dec2017.ctl" (lat = 49, lon = 4.375, t = 1),
>> and U50m = 5.22 m/s using "merra2_2017.ctl" (lat = 49, lon = 4.375, t = 8017)…
>>
>>
>>
>> Why ????????????????
>>
>>
>>
>> Thanks !
>>
>>
>>
>>
>>
>> Antoine Molin
>>
>> Etudes de vent PlenR
>>
>> Tel : 03 20 47 99 76
>>
>>
>>
>>
> _______________________________________________
> gradsusr mailing list
> gradsusr at gradsusr.org
> http://gradsusr.org/mailman/listinfo/gradsusr