Weather prediction models, like the atmosphere itself, may be viewed as dynamic nonlinear systems in which the initial state highly influences the evolution of events. Most numerical models have inadequacies, and these lead to forecast errors that grow with increasing forecast lead time. The growth of these errors also depends on the flow of the system, so ensemble forecasting aims to quantify the flow-dependent forecast uncertainty. This can be quite difficult, because numerical weather prediction is itself a field that must base its judgments on uncertain information. In particular, the initial conditions of a numerical weather prediction model can only be determined to a certain accuracy.
Numerical weather prediction uses mathematical models to predict the weather, with the base assumption relying on the present weather conditions. It was first attempted in the first two decades of the twentieth century but did not produce practical results until computer simulation was introduced as a viable method of weather analysis (Leutbecher and Palmer, 2008). These mathematical models incorporate weather factors and are used to generate both short- and long-term weather models and climate predictions. The aim of this paper is to arrive at an unbiased system with a minimal RMS error against the observed weather conditions.
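As a concrete illustration of these two criteria, a minimal sketch of how bias (the "unbiased" requirement) and RMS error would be measured; the observation and forecast values here are made up for illustration, not real data:

```python
import numpy as np

# Hypothetical observed temperatures and matching model forecasts (deg C);
# illustrative numbers only, not Bureau of Meteorology data.
observed = np.array([18.2, 19.5, 21.0, 20.3, 17.8])
forecast = np.array([17.9, 20.1, 21.4, 19.6, 18.5])

# Bias: the mean signed error; an unbiased model has bias near zero.
bias = np.mean(forecast - observed)

# RMS error: the overall magnitude of the forecast error.
rmse = np.sqrt(np.mean((forecast - observed) ** 2))

print(f"bias = {bias:+.2f} C, RMSE = {rmse:.2f} C")
```

Minimizing the RMSE while holding the bias near zero is the yardstick used for the models discussed below.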
The available data are provided for Australian urban areas and significant locations by the Bureau of Meteorology. Most Australian weather forecasts supplied by the Weatherzone site originate from the Bureau of Meteorology or come from in-house models. The Bureau of Meteorology also provides marine and aviation forecasts. Weatherzone gets its information from a variety of places and provides custom packages for weather websites and television broadcasters. It also runs a WRF model in house, which produces highly independent and accurate sets of forecasts.
Sometimes Weatherzone may apply formulae to the data to create additional fields that serve specific purposes; examples of such derived fields include 'heat stress' and 'wind chill'.
Aims and objectives of the model
The main aim of the project is to present a self-sustaining model that provides weather forecasting capabilities. Some of the stated aims of ensemble forecasting include estimating the ideal probability density function, or simply obtaining a rough guide to the reliability of today's best-guess control forecast. The ideal model would accurately reflect the likelihood of observing future conditions, which means providing a series of calibrated forecasts. Last but not least, the time scales for which the models are created should be independent of the measures used to define chaos, which are based on the statistics of infinitesimals.
Background analysis of weather forecasting models in use
Over the past 15 years, ensemble weather forecasting has become established in prediction centers as a response to the limitations imposed by the uncertainties of the process itself. The major goal of ensemble forecasting is to quantitatively predict the probability density of atmospheric states at a future date. For many uses of forecast information, estimates of the future probability density of a weather-related variable may carry information beyond that seen in a single forecast obtained from the best available estimate of the initial state (Leutbecher and Palmer, 2008).
The first theoretical studies of error growth in atmospheric prediction, of the limits of prediction, and of probabilistic prediction appeared between the fifties and the early seventies. This field of meteorological research is commonly known as predictability. In this area, Thompson, Lorenz, and Epstein were critical to the emergence of research on predictability and ensemble forecasting. Since then there has been growing interest in filling the gaps by quantifying uncertainty in many areas of computing and statistics related to weather models.
As a result, techniques developed in other fields such as oceanography have proved useful in the context of numerical weather modeling, and sometimes vice versa. Most numerical weather prediction models use spatial discretisation to represent the partial differential equations that govern the dynamics of the atmosphere as a system of N ordinary differential equations. Ensemble forecasting and state estimation, referred to in the meteorological context as data assimilation, are closely related, as both require an estimate of the prediction uncertainty. The difference lies in the time range at which the uncertainty estimates are needed (Leutbecher and Palmer, 2008).
When it comes to weather modeling, computers are an essential component, and their advancement affects the errors expected in forecasting. Current estimates suggest that the computing power available on the market doubles about every 18 months. This rate could change in the future depending on market activity and new technology. Thus, the probable increase in computing power will continue to improve the performance of operational numerical prediction systems, whether in data assimilation or in numerical modeling. In the first case, it will be possible to increase the spatial resolution of the models to take ever-finer scales into account (Coiffier, 2011).
Computing power will therefore be used to generalize ensemble prediction systems, both for forecasting at all ranges and for simulating atmospheric flows. In this way, atmospheric prediction will not only make it possible to better imitate the evolution of the actual atmosphere but also to explore the impacts of the various evolutions that could occur under certain conditions. In other words, one will be able to create a model characterized by scenarios put in place by the architect and play those scenarios through the numerical models. As such, the simulation system detailed in this paper aims to perpetuate an unbiased model.
Original aims and supposed results of the Weather Model
There are qualities that are desirable for a computer-based objective analysis process and that are well known to personnel who have constructed manual or subjective analyses of observations. First and foremost, one must construct an overall weather pattern (Warner, 2011). This provides any analyst with a context for the observations. The basis may be a map, a recent forecast, personal knowledge of typical weather patterns, or the climatology. These variables should not be analyzed separately. For example, on larger scales, areas with strong gradients are used to infer areas of high wind speed when drawing weather contours and isotachs.
The weather patterns may provide information that is useful for interpolation between observation points. For instance, when analyzing a jet maximum, most isotachs are stretched out in the direction of the wind. The spatial density of the observations is crucial to the analysis process. In areas where observations happen to be dense, the analysis can be drawn toward them. However, in places where observations are sparse or do not exist, the analysis relies on previous knowledge of the climatology (Warner, 2011).
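The idea of letting nearby, dense observations dominate the analysis can be sketched with a simple inverse-distance weighting scheme. This is a deliberate simplification of operational objective analysis, and the station coordinates and temperatures below are hypothetical:

```python
import numpy as np

def idw_analysis(obs_xy, obs_vals, grid_xy, power=2.0):
    """Inverse-distance-weighted interpolation of scattered observations
    onto analysis points: close observations receive large weights, so
    dense data dominates locally while sparse regions rely on distant data."""
    grid_vals = []
    for gx, gy in grid_xy:
        d = np.hypot(obs_xy[:, 0] - gx, obs_xy[:, 1] - gy)
        if np.any(d < 1e-12):               # analysis point on an observation
            grid_vals.append(obs_vals[np.argmin(d)])
            continue
        w = 1.0 / d ** power                # closer stations weigh more
        grid_vals.append(np.sum(w * obs_vals) / np.sum(w))
    return np.array(grid_vals)

# Three hypothetical station temperatures and one analysis point.
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
temps = np.array([15.0, 17.0, 16.0])
print(idw_analysis(stations, temps, [(0.5, 0.5)]))
```

Operational schemes add a background (first-guess) field exactly so that data-void regions fall back on climatology or a recent forecast, as described above.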
The Lorenz equations have long been used as a paradigm of the extreme sensitivity of some models to their initial conditions. This is a major source of uncertainty in numerical weather prediction. It is also known that chaotic systems such as the Lorenz equations are pointwise sensitive to model error. Edward Lorenz was a mathematician and meteorologist who derived the Lorenz equations. He began by focusing on the basic concept of convection, designing a simple model: when air at the bottom becomes warmer than the air above it, the warm air rises and the volume left behind is replaced by cooler air.
The model that Lorenz designed is a system of three ordinary differential equations:

dx/dt = σ(y − x)
dy/dt = rx − y − xz
dz/dt = xy − bz
Here x, y, and z do not represent coordinates in space. Instead, x represents the convective overturning on the plane, while y and z represent the horizontal and vertical temperature variation, respectively. The parameters of the model are σ, the Prandtl number, which is the ratio between the fluid viscosity and the thermal conductivity; r, which is related to the temperature difference between the top and the bottom of the atmospheric plane; and b, which relates the width of the plane to its height. Lorenz used the values σ = 10 and b = 8/3.
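A minimal numerical integration of this system, using a fixed-step fourth-order Runge-Kutta scheme with σ = 10, b = 8/3, and r = 28 (the value used for the solution plots discussed later), might look like:

```python
import numpy as np

# Right-hand side of the Lorenz system with the classical parameter values.
def lorenz(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])

# One classical fourth-order Runge-Kutta step of size dt.
def rk4_step(f, state, dt):
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([2.0, 2.0, 2.0])   # the initial point used later in the text
dt, nsteps = 0.01, 2000
traj = np.empty((nsteps, 3))
for i in range(nsteps):
    state = rk4_step(lorenz, state, dt)
    traj[i] = state
```

Plotting the columns of `traj` against one another reproduces the familiar butterfly-shaped attractor.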
The state of the thermosyphon is described, for every pocket of fluid, by the position and temperature of that pocket, among other variables. In principle, knowing the position, velocity, and temperature of the fluid is all that is needed to determine its future state. However, one cannot measure the temperature and velocity of every pocket of fluid in the thermosyphon; one can only measure the temperature difference between the warm rising fluid and the cool falling fluid on the opposite side of the tube. A technique known as state reconstruction, though, can be used to recover more information about the device, yielding a Lorenz-like attractor that describes the behavior of the thermosyphon (Alligood, Sauer and Yorke, 1996).
The sensitivity of the long-term behavior to very small changes in the initial conditions presents a problem for quantitative prediction with these equations. This is what makes long-term prediction quite difficult. In real-world systems it is often impossible to know the initial conditions to a great level of precision, and in the case of weather, any imprecision may lead to drastically different solutions. The same sensitivity has equally significant effects on the numerical computations.
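This sensitivity is easy to demonstrate numerically: two initial states differing by one part in a billion quickly evolve into trajectories that bear no resemblance to each other. A sketch, reusing the Lorenz right-hand side:

```python
import numpy as np

def lorenz(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])

def rk4(f, s, dt):
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Two initial states differing by one part in a billion.
a = np.array([2.0, 2.0, 2.0])
b_state = a + np.array([1e-9, 0.0, 0.0])

dt = 0.01
initial_gap = np.linalg.norm(a - b_state)
for _ in range(3000):                # integrate both for 30 time units
    a, b_state = rk4(lorenz, a, dt), rk4(lorenz, b_state, dt)
final_gap = np.linalg.norm(a - b_state)
print(initial_gap, final_gap)        # the gap grows by many orders of magnitude
```

The perturbation grows roughly exponentially until it saturates at the size of the attractor itself, which is precisely why small analysis errors cap the useful forecast range.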
The model has been used in the past to predict weather patterns, but it rests on the assumption of chaos in all systems. The equations thus exhibit the same kind of unpredictability that occurs within weather. One such factor is the heat produced by solar radiation and the variation of its intensity over different parts of the Earth's surface. Vorticity is another major factor: because of the Earth's rotation, the geostrophic wind keeps changing, creating twists and spirals that generate areas of high and low pressure. When the air pressure decreases, the temperature begins to drop. Since precipitation depends on air pressure and air pressure behaves chaotically, it would be quite difficult to use the model more than one week into the future.
Outside of a handful of special cases, it is usually quite difficult to find explicit solutions of nonlinear differential equations. For intermediate values of r, most solutions tend toward one of the nontrivial equilibria. From a physical perspective, this means the system approaches a steady convective state. Computer simulations of the Lorenz equations in a three-dimensional display are represented below.
These two images represent two views of a solution to the Lorenz equations. In the right-hand panel, the plot of x versus t is shown for the Lorenz equations with r = 28 and initial point (2, 2, 2). One should note that the solution only appears to cross itself; this is an artifact of printing a three-dimensional curve on a flat page. The solution curve looks more intriguing as it passes through x-y-z space. Though the solution never comes close to either equilibrium, it does seem to weave its way between the two nontrivial equilibria.
Near one of these equilibria, the solution spirals slowly outward. When the radius of the spiral becomes too large, the solution is ejected into the neighborhood of the other equilibrium and begins another outward spiral. In this way it perpetuates a never-ending, self-sustaining cycle. The resulting visual image looks something akin to the wings of a butterfly (Lorenz, 1963).
If one uses a different numerical method, or the same method with even a minutely different step size, the resulting small differences in the computed solution will generally grow into major deviations between the answers. On the other hand, even though similar conditions may lead to different solutions, especially in the x-versus-t view, the solution curve in the three-dimensional sense generally retains a consistent shape. The time-step sensitivity of the Lorenz equations leads to some interesting consequences for the numerical convergence involved.
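The same experiment can be run against the step size rather than the initial condition: integrating one initial state with two slightly different time steps produces solutions that disagree completely after a moderate integration time. A sketch using forward Euler, chosen purely for brevity:

```python
import numpy as np

def lorenz(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])

# Fixed-step forward Euler: the truncation error per step is O(dt^2),
# so different step sizes accumulate slightly different errors.
def integrate_euler(s0, dt, t_end):
    s = np.array(s0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        s = s + dt * lorenz(s)
    return s

# Same initial condition, two slightly different step sizes.
end_a = integrate_euler([2.0, 2.0, 2.0], 0.010, 20.0)
end_b = integrate_euler([2.0, 2.0, 2.0], 0.009, 20.0)
gap = np.linalg.norm(end_a - end_b)
print(gap)   # the two computed "solutions" disagree completely
```

Both runs stay on an attractor of the familiar shape, yet their pointwise states diverge, which is exactly the time-step sensitivity discussed above.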
The Excel spreadsheets are shown below:
This paper illustrates that in fully chaotic systems, numerical convergence as a concept is not guaranteed.
Similarly, the theory states that for regimes that are not fully chaotic, different numbers of time steps lead to different time climates. The time-step sensitivity of a quasi-geostrophic (QG) potential vorticity model is analyzed in order to study in more detail a model whose dynamics are closer to the real atmosphere, yet still relatively simple (Marshall and Molteni, 1993). The usefulness of this model for atmospheric predictability and ensemble prediction research has been well documented in previous studies. The QG model is well suited to this type of study, since it is complex enough to capture the synoptic-scale processes that are fundamental to forecast error growth, while being simple enough to allow multiple extended integrations and complete linear stability analyses. The QG model study suggests results regarding time-step sensitivity that are qualitatively similar to those obtained with the Lorenz equations. We analyze the error due to the different time steps and compare it with errors due to the initial conditions. The truncation error growth over time exhibits an interesting behavior, and we develop a simple model for the evolution of this error growth.
Because of inevitable uncertainties in initial conditions and imperfect models, operational weather forecasts should also be viewed in a probabilistic sense: forecasts should provide probabilities of the occurrence of specific events, as well as estimates of forecast skill. A practical and computationally feasible approach to this problem is to perform an ensemble of forecasts from equally plausible initial states. Ensembles of forecasts created by perturbing the initial state have been produced operationally at centers like the National Centers for Environmental Prediction.
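A toy version of such an ensemble can be sketched using the Lorenz system as a stand-in for a forecast model, with Gaussian perturbations around an assumed analysis state; the perturbation size and member count are arbitrary choices:

```python
import numpy as np

def lorenz(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])

def rk4(f, s, dt):
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
analysis = np.array([2.0, 2.0, 2.0])    # best estimate of the initial state
n_members, dt, nsteps = 20, 0.01, 1000

# Each member starts from an equally plausible perturbed initial state.
members = analysis + 1e-3 * rng.standard_normal((n_members, 3))
for _ in range(nsteps):
    members = np.array([rk4(lorenz, m, dt) for m in members])

mean = members.mean(axis=0)             # ensemble-mean forecast
spread = members.std(axis=0)            # flow-dependent uncertainty estimate
print("mean:", mean, "spread:", spread)
```

The ensemble spread is the quantity of interest: it grows at a rate that depends on the flow, giving the flow-dependent uncertainty estimate that a single deterministic run cannot provide.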
Evaluation of the model
For the Lorenz equations, the third dimension in our phase space is recovered by creating a third function, δT(t − 2τ). If we then plot the three functions together, [δT(t), δT(t − τ), δT(t − 2τ)], we create an approximation of the Lorenz attractor. We now reconstruct the two-dimensional Lorenz attractor from our thermosyphon measurements of δT, taking τ = 15 seconds as the time delay. This τ is necessarily large because data were only taken every 15 seconds. Figure 12 plots the measurements of δT taken over the course of an hour (Alligood, Sauer and Yorke, 1996). The two-dimensional delay-reconstructed attractor for our data is shown in. In this reconstructed phase plot, points are plotted every 15 seconds and joined by straight line segments to make their progression easier to follow. Consider now plotting δT versus δT(t − 15 s) versus δT(t − 30 s) in three-space.
This gives a three-dimensional phase portrait of the behavior of the fluid in our thermosyphon. Note how the three-dimensional plot appears to have a shape similar to that of the Lorenz attractor. While the nature of the attractor is far more apparent when viewed as the picture is created, the state reconstruction succeeds in recovering the Lorenz attractor for our thermosyphon data. In particular, there do not appear to be any self-intersections.
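The delay-coordinate reconstruction itself is straightforward to sketch. Here a Lorenz x-series stands in for the δT measurements (the real data are not reproduced in this paper), and the delay τ is counted in samples rather than seconds:

```python
import numpy as np

# Stand-in for the thermosyphon record: the x component of a Lorenz
# trajectory sampled every dt, playing the role of the scalar series dT(t).
def lorenz(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])

s, dt = np.array([2.0, 2.0, 2.0]), 0.01
series = []
for _ in range(5000):
    k1 = lorenz(s); k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2); k4 = lorenz(s + dt * k3)
    s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    series.append(s[0])
series = np.array(series)

# Delay-coordinate embedding: rows are [dT(t), dT(t - tau), dT(t - 2*tau)].
tau = 15                                   # delay in samples
embedded = np.column_stack([series[2 * tau:],
                            series[tau:-tau],
                            series[:-2 * tau]])
print(embedded.shape)
```

Plotting the three columns of `embedded` against one another produces the reconstructed attractor described above.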
Removing bias and improving accuracy
There are ordinary differential equation solvers, available as computer applications, that may help with the accuracy of the weather model. The ode45 solver, one of seven in MATLAB's ODE suite, chooses its own step size in order to achieve a predetermined level of accuracy. Although the default level is sufficient for many problems, it will sometimes be necessary to tighten that "predetermined level of accuracy."
The accuracy is controlled by the user through two optional inputs to ode45 and the other solvers: the relative tolerance, RelTol, and the absolute tolerance, AbsTol. If y_k is the computed solution at step k, then each component of the solution is required to satisfy its own error restriction. That is, we consider an estimated error vector with a component for every component of y_k, and for each j the j-th component of the error vector must satisfy

|estimated error_{k,j}| ≤ tolerance_j,
tolerance_j = max(|y_{k,j}| × RelTol, AbsTol_j)
Notice that the relative tolerance is a scalar, while the absolute tolerance is a vector with a component for each equation in the system being solved. The philosophy behind the tolerances is that the relative tolerance should bound the estimated error by a fraction of the size of the computed quantity. This works for the Lorenz attractor. The Lorenz and Rössler attractors are used as examples of deterministic chaos, and they demonstrate differences in the efficiency of symbolic regression systems. The low magnitude of the fitness functions for particular solutions tends to eliminate the errors while increasing the number of evolutionary systems. The results are significant for the prediction of unknown system states, such as errors in the Lorenz equations.
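The component-wise test described by these formulas can be sketched directly. This mirrors the RelTol/AbsTol convention of MATLAB's solvers in Python; the error estimates fed in below are hypothetical, since a real solver would produce them internally:

```python
import numpy as np

def within_tolerance(y_k, err_est, rel_tol=1e-3, abs_tol=1e-6):
    """Accept a step when every component of the estimated error vector
    satisfies |err_j| <= max(|y_j| * RelTol, AbsTol_j)."""
    abs_tol = np.broadcast_to(abs_tol, np.shape(y_k))  # AbsTol may be a vector
    tolerance = np.maximum(np.abs(y_k) * rel_tol, abs_tol)
    return np.all(np.abs(err_est) <= tolerance)

# A step whose error estimate is small relative to the solution is accepted...
print(within_tolerance(np.array([10.0, -5.0, 0.0]),
                       np.array([1e-3, 1e-4, 1e-7])))   # True
# ...while one component violating its restriction rejects the whole step.
print(within_tolerance(np.array([10.0, -5.0, 0.0]),
                       np.array([1e-3, 1e-1, 1e-7])))   # False
```

When the test fails, an adaptive solver shrinks the step and retries, which is how the "predetermined level of accuracy" is enforced along the whole trajectory.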
Nonetheless, a weather forecast made with a deterministic model may be accurate up to a point, but it gives only one possible forecast out of a large number of possibilities. There is no denying that advances in atmospheric modeling have accelerated, so forecasting is ever more accurate over shorter periods such as days or weeks.
Findings, strengths, weaknesses and recommendations
The Lorenz formula was once quite popular but proved to exhibit a great deal of unpredictability. This led to the practical abandonment of its application; however, recent error-combating methods may yet make the concept yield more accuracy. Lorenz's initial work implies a key fact: people will never be able to exactly predict the future behavior of these complex weather systems. One of the strengths of this system is the statistical reliability of the parameters used, such as temperature and the physics of fluid behavior. The weakness is the chaotic behavior that comes with the formula, which introduces an element of unpredictability. Overall, the introduction of error-curbing systems should significantly improve the error characteristics when using the Lorenz model as a weather predictor, and the model is best suited to a short-term range, from days up to about a week.
References
Alligood, K., Sauer, T., and Yorke, J. (1996). Chaos: An Introduction to Dynamical Systems. New York: Springer.
Coiffier, J. (2011). Fundamentals of Numerical Weather Prediction. New York: Cambridge University Press.
Leutbecher, M., and Palmer, T. N. (2008). Ensemble forecasting. Journal of Computational Physics, 227, 3515-3539.
Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20, 130-141.
Marshall, J., and Molteni, F. (1993). Toward a dynamical understanding of planetary-scale flow regimes. Journal of the Atmospheric Sciences, 50, 1792-1818.
Warner, T. T. (2011). Numerical Weather and Climate Prediction. New York: Cambridge University Press.