Updated: Apr 26, 2021
After a technical improvement to a wind turbine, the question often arises of what effect the improvement has on the turbine's performance. For example, one would like to know how much more power is produced after a nacelle misalignment has been calibrated.
An obvious first step is to compare the turbine's power curve in the periods before and after the measure.
In this article, we highlight inaccuracies in this approach and present a completely new approach that allows reliable power curve comparisons to be made with the help of machine learning.
Example: Correction of Yaw Misalignment
We would like to illustrate our thoughts using the example of a nacelle misalignment correction.
One can assume that the calibration of a systematic yaw error, e.g. by our Turbit Measurement System (TMS), leads to an improvement in the overall performance of the wind turbine. This article describes the theoretical and physical background of the effects of a yaw error on the performance of wind turbines.
Theoretical Increases in Total Power
If you take the annual yields of a wind turbine and add a theoretical additional output of 1-3%, you can quickly see the economic benefit of a nacelle misalignment calibration.
Evidence of an Increase in Yield?
The question arises, however, as to how this increase in performance can be demonstrated by real power measurements. The process of converting kinetic wind energy into electrical energy is very complex and highly dependent on meteorological parameters.
Problems with a Normal Power Curve Comparison
If you make a change to a wind turbine, you want to know how much that change affects the turbine's performance. However, the performance of a wind turbine does not depend on the wind speed alone: under different meteorological conditions, the power curve (power over wind speed) itself shifts.
A plain power curve comparison between two time periods is therefore not sufficient to identify exactly what caused an observed difference. The meteorological conditions, for example, can change over time and thus affect the power curve.
Dependence of the power curve on the air density
Besides the wind speed, the air density is one of the most important influences on the power curve of a wind turbine. At the same wind speed, the energy content of the wind changes as a function of the air density: the denser the air, the more energy the wind carries and the more power the wind turbine can extract from it.
Air density, in turn, depends mainly on temperature, air pressure and humidity.
The figure on the left shows a clear difference in the power curve of a turbine between summer and winter. The main reason is the difference in average temperature and the associated difference in air density.
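The physics behind this can be sketched in a few lines. The snippet below estimates the density of humid air via the ideal gas law (using the Magnus approximation for saturation vapour pressure) and the resulting wind power density; the function names and the chosen approximation are illustrative, not part of Turbit's software:

```python
# Sketch: air density from temperature, pressure and humidity, and the
# kinetic power density of the wind. Illustrative, simplified physics.
import math

R_DRY = 287.05    # J/(kg*K), specific gas constant of dry air
R_VAPOUR = 461.5  # J/(kg*K), specific gas constant of water vapour

def air_density(temp_c, pressure_pa, rel_humidity):
    """Density of humid air (kg/m^3) via the ideal gas law."""
    t_k = temp_c + 273.15
    # Magnus approximation for saturation vapour pressure (Pa)
    p_sat = 610.94 * math.exp(17.625 * temp_c / (temp_c + 243.04))
    p_vap = rel_humidity * p_sat   # partial pressure of water vapour
    p_dry = pressure_pa - p_vap    # partial pressure of dry air
    return p_dry / (R_DRY * t_k) + p_vap / (R_VAPOUR * t_k)

def wind_power_density(rho, v):
    """Kinetic power of the wind per rotor area: P/A = 0.5 * rho * v^3 (W/m^2)."""
    return 0.5 * rho * v ** 3

# Winter (0 degC) vs. summer (30 degC) at the same pressure and humidity:
rho_winter = air_density(0.0, 101325.0, 0.5)
rho_summer = air_density(30.0, 101325.0, 0.5)
print(rho_winter, rho_summer)                      # colder air is denser
print(wind_power_density(rho_winter, 8.0))         # more power at the same 8 m/s
print(wind_power_density(rho_summer, 8.0))
```

At the same wind speed, the winter density comes out noticeably higher than the summer density, which matches the seasonal power curve difference described above.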
Dependence of the power curve on the turbulence intensity
Likewise, the power at the same wind speed depends on the turbulence intensity.
We cannot assume that the power curves of two temporally separated measurement periods are comparable if we only measure the wind speed.
Conventional measurements with meteorological masts are too expensive and complex for this purpose.
New approach: Machine Learning
Neural networks are very good at learning complex relationships. The nonlinear and therefore complex relationship between the meteorological parameters and the power output of a wind turbine can be learned very well from historical SCADA data with this method. The 10-minute data from all turbines of a wind farm are used as input for the model, which is trained to predict the power output of the test turbine.
The neural network thus learns the complex power behavior under different wind directions, air pressures and wind speeds. If this model is applied to new data, the power output of the test turbine can be simulated for different weather conditions.
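As a minimal, self-contained sketch of the idea, the snippet below trains a small one-hidden-layer network on synthetic "SCADA-style" features (wind speed and air density) to predict power. The synthetic data, the architecture and the plain gradient-descent training loop are illustrative stand-ins, not Turbit's actual model:

```python
# Sketch: fit a tiny neural network to predict power from meteorological
# features. Synthetic data replaces real 10-minute SCADA records.
import numpy as np

rng = np.random.default_rng(0)

n = 2000
wind = rng.uniform(3.0, 14.0, n)     # wind speed (m/s)
rho = rng.uniform(1.10, 1.30, n)     # air density (kg/m^3)
# Toy ground truth: power ~ 0.5 * rho * v^3, plus measurement noise
power = 0.5 * rho * wind**3 + rng.normal(0.0, 5.0, n)

# Normalise features and target so training is well-conditioned
X = np.column_stack([wind, rho])
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = (power - power.mean()) / power.std()

# One hidden tanh layer, trained with full-batch gradient descent
W1 = rng.normal(0.0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = (h @ W2 + b2).ravel()      # predicted (normalised) power
    err = pred - y
    # Backpropagation of the mean-squared-error loss
    g_pred = (2.0 / n) * err[:, None]
    g_h = (g_pred @ W2.T) * (1.0 - h**2)   # computed before updating W2
    W2 -= lr * (h.T @ g_pred)
    b2 -= lr * g_pred.sum(axis=0)
    W1 -= lr * (X.T @ g_h)
    b1 -= lr * g_h.sum(axis=0)

mse = float(np.mean(err**2))
print("training MSE (normalised units):", mse)
```

In practice the input would contain many more features (wind direction, temperature, signals from the neighbouring turbines), but the principle is the same: the network approximates the site-specific mapping from weather conditions to power.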
Once the nacelle misalignment of the test turbine has been calibrated, the simulation can be compared with the real data. The resulting power difference, which is adjusted for the site and the meteorological conditions, can be used to evaluate the success of the measure.
1. Train/Validation Split
To train the model, we use only a part of the data (training data). This leaves a separate data set (validation data) that we can use to check how well the neural network's power prediction works.
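For time-series data such as 10-minute SCADA records, the split is typically chronological. A minimal sketch (the 80/20 ratio is an assumption for illustration):

```python
# Sketch: chronological train/validation split of time-ordered records.
records = list(range(1000))           # stand-in for 10-minute SCADA rows
split = int(len(records) * 0.8)       # assumed 80/20 split
train_data = records[:split]          # used to fit the model
validation_data = records[split:]     # held out to evaluate the prediction
print(len(train_data), len(validation_data))  # → 800 200
```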
2. Training Loss vs. Validation Loss
If we now compare the error on the training data with the error on the validation data, we must make sure that no so-called overfitting occurs, meaning the neural network learns the training data by heart instead of the underlying relationships.
3. Self-Consistency Check
To see if the network has learned the performance behaviour well, we use historical data from the complete training and validation data set and check how well the simulation works.
We usually achieve an accuracy of 99%.
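One natural way to express such an accuracy figure is one minus the mean relative deviation between simulated and measured power; the exact metric behind the 99% figure is not specified here, so the following is an illustrative sketch:

```python
# Sketch: accuracy as 1 - mean(|simulated - actual| / actual).
# The metric and sample values are illustrative assumptions.
def simulation_accuracy(simulated, actual):
    errors = [abs(s - a) / a for s, a in zip(simulated, actual)]
    return 1.0 - sum(errors) / len(errors)

sim = [1010.0, 1990.0, 1495.0]   # model output (kW)
act = [1000.0, 2000.0, 1500.0]   # measured power (kW)
print(simulation_accuracy(sim, act))
```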
Finally, we use data measured in the test period, e.g. after a calibration, as input and compare the results of the machine-learning model with the real measured values. The difference then corresponds to the increase or decrease in output.
If these differences are integrated over a certain period of time, a direct difference in kWh is obtained between the expected and the actually generated energy of the test turbine.
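With 10-minute records, this integration reduces to a simple sum, since each record covers 10/60 of an hour. A minimal sketch (the sample values are made up):

```python
# Sketch: integrate 10-minute power differences (kW) into an energy
# difference (kWh). Each 10-minute record covers 10/60 = 1/6 hour.
INTERVAL_H = 10.0 / 60.0

def energy_difference_kwh(expected_kw, measured_kw):
    """Sum of (expected - measured) power times the interval length."""
    return sum((e - m) * INTERVAL_H for e, m in zip(expected_kw, measured_kw))

expected = [2000.0, 2100.0, 1950.0]   # model prediction per interval (kW)
measured = [1940.0, 2040.0, 1890.0]   # real SCADA power (kW)
print(round(energy_difference_kwh(expected, measured), 6))  # → 30.0
```

A positive result means the turbine produced less than the model expected for the given weather conditions; after a successful calibration, the measured values should exceed the prediction of the model trained on pre-calibration data.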