Alyono wrote:wxmann_91 wrote:ConvergenceZone wrote:This is slightly off-topic, but I'm always hearing about some of the really bad models from year to year, and if the performance of these models is so bad (which I agree with, by the way), why do they keep using them year after year? Why not either improve them or just get rid of them completely? I just don't understand that.
It's easy to say "oh, let's improve the models," but it's hard to actually do it.
TCs are small on global scales. To depict a TC accurately, you must first improve the resolution. But with global models you can only go so fine, because higher resolution demands much more computing power and time to produce the output.
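Rough arithmetic shows why the cost climbs so fast. This is a sketch under a common simplifying assumption: cost scales with the two horizontal dimensions plus a CFL-limited timestep, i.e. roughly cubically in the grid-spacing ratio. Real models add vertical levels, physics, and I/O on top, so treat the numbers as illustrative only.

```python
def relative_cost(dx_old_km: float, dx_new_km: float) -> float:
    """Approximate compute-cost multiplier when refining horizontal
    grid spacing from dx_old_km to dx_new_km.

    Assumption: two horizontal dimensions plus a proportionally
    shorter timestep (CFL condition) -> cubic scaling.
    """
    return (dx_old_km / dx_new_km) ** 3

# Halving the grid spacing costs roughly 8x the compute:
print(relative_cost(10, 5))   # 8.0

# Going from ~9 km to 5 km is already close to a 6x increase:
print(round(relative_cost(9, 5), 2))   # 5.83
```

That cubic growth is why "just run it at higher resolution" is an expensive proposition for a global model.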
After you improve the resolution, you have to figure out how to model thunderstorms. Thunderstorm development is largely random: you can't say thunderstorms will develop at exactly location X at time Y. Models have to approximate that randomness with parameterizations, which are imprecise and a big source of error in TC forecasting, because thunderstorm development can alter the surrounding environment, and TCs themselves undergo convection-driven fluctuations (think ERCs).
I think in 2-3 years, the ECMWF will be able to resolve storms explicitly. They need to reduce their grid spacing to 5 km or less. At resolutions that high, the model can handle convection without any parameterization. That improvement is going to be the next breakthrough in track and intensity forecasting.
Will be interesting to see that happen. Improvements in computing power and technology are probably the number one contributor to improvements in models, simply because they let us go to higher resolution.