Solar Clipping State¶
Overview¶
The FeatureCalcSolarClippingState class is a subclass of FeatureCalculator designed to calculate the clipping state of solar inverters. This calculation identifies periods when the inverter is operating at or near its maximum possible power output, indicating that the inverter is "clipping" and unable to convert all available DC power from the solar array into AC power due to its own power limitations.
The class uses a deterministic approach based on inverter datasheets (specifically, Huawei) and real-time operational data. It considers both the inverter's temperature derating curve and the P-Q capability curve (active vs. reactive power) to determine the theoretical maximum AC power output at each timestamp. The clipping state is then defined by comparing the actual measured active power to this calculated maximum possible power.
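The clipping criterion itself is simple; the sketch below is illustrative only (the helper name is not part of the class), but the 0.5% tolerance matches the calculation described in the next section.

def is_clipping(p_actual_kw: float, p_max_possible_kw: float, tol: float = 0.005) -> bool:
    """Illustrative helper: flag clipping when the actual power is within 0.5% of the maximum possible power."""
    return abs(p_actual_kw - p_max_possible_kw) <= tol * p_max_possible_kw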
Calculation Logic¶
The calculation proceeds as follows (an illustrative code sketch is given after the list):
- Data Preparation
  - Retrieve the required features for the inverter and its associated weather station, including active power, grid line voltages, reactive power, curtailment state, theoretical active power, and ambient temperature.
  - Clean the data by forward- and then backward-filling missing values.
- Calculate Average Grid Voltage
  - Compute the average of the three grid line voltages to represent the grid voltage at each timestamp.
- Select the Active Power for Comparison
  - When curtailment is active (CurtailmentState = 1), use the theoretical active power; otherwise, use the measured maximum active power.
- Temperature Derating
  - Use the ambient temperature to determine the inverter's maximum possible power output according to the manufacturer's derating curve.
  - This curve is piecewise linear, with different power limits per temperature range (e.g., constant up to 30°C, decreasing linearly between 30–50°C, etc.).
- P-Q Capability Curve
  - Calculate the maximum possible active power from the inverter's capability curve, which depends on the average grid voltage (in per-unit) and the measured reactive power.
- Select Closest Maximum Power
  - For each timestamp, select whichever of the two limits (temperature derating or P-Q curve) is closer to the selected active power.
  - This accounts for real-world inverter behavior, which does not always follow the strict minimum of the two theoretical limits.
- Determine Clipping State
  - If the absolute difference between the selected active power and the chosen maximum possible power is less than or equal to 0.5% of the maximum possible power, set the clipping state to 1 (ON); otherwise, set it to 0 (OFF).
- Result Formatting
  - Align the result with the requested period and save or return the calculated clipping state as a pandas Series.
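A minimal, self-contained sketch of these steps is shown below. The derating breakpoints and the P-Q limit are placeholders, not the Huawei datasheet values, and the rated power argument is an assumption introduced only for the illustration; the column names follow the features listed in the next section.

import numpy as np
import pandas as pd

def clipping_state_sketch(df: pd.DataFrame, nominal_ac_voltage: float, rated_power_kw: float) -> pd.Series:
    """Illustrative sketch of the clipping-state logic (placeholder curves, not datasheet values)."""
    # 1. Clean the data: forward- then backward-fill missing values.
    df = df.ffill().bfill()
    # 2. Average of the three grid line voltages.
    avg_voltage = df[["GridLineABVoltage_5min.AVG", "GridLineBCVoltage_5min.AVG", "GridLineCAVoltage_5min.AVG"]].mean(axis=1)
    # 3. Active power used for the comparison: theoretical power when curtailed, measured maximum otherwise.
    p_actual = np.where(
        df["CurtailmentState_5min.REP"] == 1,
        df["ActivePowerTheoretical_5min.AVG"],
        df["ActivePower_5min.MAX"],
    )
    # 4. Temperature derating: piecewise-linear curve (placeholder breakpoints:
    #    full power up to 30°C, 80% at 50°C, 50% at 60°C).
    p_derating = rated_power_kw * np.interp(df["AmbTemp_5min.AVG"], [30.0, 50.0, 60.0], [1.0, 0.8, 0.5])
    # 5. P-Q capability limit: simplified apparent-power cap scaled by the per-unit
    #    grid voltage (placeholder for the datasheet P-Q curve).
    v_pu = avg_voltage / nominal_ac_voltage
    s_max = rated_power_kw * np.clip(v_pu, 0.9, 1.1)
    p_pq = np.sqrt(np.maximum(s_max**2 - df["ReactivePower_5min.AVG"] ** 2, 0.0))
    # 6. Keep whichever limit is closer to the actual power.
    p_max = np.where(np.abs(p_pq - p_actual) < np.abs(p_derating - p_actual), p_pq, p_derating)
    # 7. Clipping when the actual power is within 0.5% of the selected limit.
    return pd.Series(np.where(np.abs(p_actual - p_max) <= 0.005 * p_max, 1, 0), index=df.index)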
Database Requirements¶
- Object Attributes: The following object attributes must be present for the inverter being calculated:
- reference_weather_stations: Dictionary indicating the associated weather stations (must include "complete_ws").
- nominal_ac_voltage: The nominal AC voltage of the inverter.
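For illustration, the inverter's object attributes might look like the following (the weather-station name and the voltage value are hypothetical; only the keys are required by the calculator):

# Hypothetical object attributes for an inverter (example values only)
object_attributes = {
    "reference_weather_stations": {"complete_ws": "WS-01"},  # "WS-01" is a placeholder station name
    "nominal_ac_voltage": 800.0,  # nominal AC voltage in volts (example value)
}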
Required Features¶
The following features must be available in the Bazefield database for the calculation:
For the inverter:
- ActivePower_5min.MAX: Maximum active power (kW)
- GridLineABVoltage_5min.AVG, GridLineBCVoltage_5min.AVG, GridLineCAVoltage_5min.AVG: Grid line voltages (V)
- ReactivePower_5min.AVG: Reactive power (kVAR)
- CurtailmentState_5min.REP: Curtailment state (used to select the active power for the comparison)
- ActivePowerTheoretical_5min.AVG: Theoretical active power (kW)
For the complete weather station:
- AmbTemp_5min.AVG: Ambient temperature (°C)
Feature Attribute:
- The feature being calculated must have the attribute server_calc_type set to 'solar_clipping_state' in the database.
Class Definition¶
FeatureCalcSolarClippingState(object_name, feature)¶
Class used to calculate the ClippingState feature for solar assets.
For this class to work, the feature must have the attribute 'server_calc_type' set to 'solar_clipping_state'.
Parameters:
- object_name (str) – Name of the object for which the feature is calculated. It must exist in performance_db.
- feature (str) – Feature of the object that is calculated. It must exist in performance_db.
Source code in echo_energycalc/feature_calc_solar_clipping_state.py
def __init__(
self,
object_name: str,
feature: str,
) -> None:
"""
Class used to calculate ClippingState Feature for solar assets.
For this class to work, the feature must have the attribute 'server_calc_type' set to 'solar_clipping_state'.
Parameters
----------
object_name : str
Name of the object for which the feature is calculated. It must exist in performance_db.
feature : str
Feature of the object that is calculated. It must exist in performance_db.
"""
# initialize parent class
super().__init__(object_name, feature)
# Defining which object attributes are required for the calculation.
self._add_requirement(
RequiredObjectAttributes(
{
self.object: [
"reference_weather_stations",
"nominal_ac_voltage",
],
},
),
)
self._get_required_data()
    # Getting the complete weather station name for the specific object.
complete_ws = self._get_requirement_data("RequiredObjectAttributes")[self.object]["reference_weather_stations"]["complete_ws"]
# Defining the features that will be required for the calculation.
reference_features = [
"ActivePower_5min.MAX",
"GridLineABVoltage_5min.AVG",
"GridLineBCVoltage_5min.AVG",
"GridLineCAVoltage_5min.AVG",
"ReactivePower_5min.AVG",
"CurtailmentState_5min.REP",
"ActivePowerTheoretical_5min.AVG",
]
    # Adding the suffix _b# to the feature names -> necessary to acquire data from bazefield
    features = {
        self.object: [f"{feat}_b#" for feat in reference_features],
        complete_ws: ["AmbTemp_5min.AVG_b#"],
    }
self._add_requirement(RequiredFeatures(features=features))
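An illustrative instantiation is shown below; the import path is inferred from the source file above, and the object name is a placeholder that must exist in performance_db with the attributes listed earlier.

from echo_energycalc.feature_calc_solar_clipping_state import FeatureCalcSolarClippingState

# "INV-01" is a hypothetical object name; the feature must carry the
# server_calc_type = 'solar_clipping_state' attribute.
calc = FeatureCalcSolarClippingState(
    object_name="INV-01",
    feature="ClippingState_5min.REP",
)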
feature property¶
Feature that is calculated. This will be defined in the constructor and cannot be changed.
Returns:
- str – Name of the feature that is calculated.
name property¶
Name of the feature calculator. Is defined in child classes of FeatureCalculator.
This must be equal to the "server_calc_type" attribute of the feature in performance_db.
Returns:
- str – Name of the feature calculator.
object property¶
Object for which the feature is calculated. This will be defined in the constructor and cannot be changed.
Returns:
- str – Object name for which the feature is calculated.
requirements property¶
List of requirements of the feature calculator. Is defined in child classes of FeatureCalculator.
Returns:
- dict[str, list[CalculationRequirement]] – Dict of requirements. The keys are the names of the requirement classes and the values are lists of requirements of that class. For example:
  {"RequiredFeatures": [RequiredFeatures(...), RequiredFeatures(...)], "RequiredObjects": [RequiredObjects(...)]}
result property¶
Result of the calculation. This is None until the method "calculate" is called.
Returns:
- Series | DataFrame | None – Result of the calculation if the method "calculate" was called, None otherwise.
calculate(period, save_into=None, cached_data=None, **kwargs)¶
Method that will calculate the feature ClippingState_5min.REP.
The logic follows these steps:
1. Calculate the Maximum Possible Power based on the inverter derating and P-Q curve from Huawei datasheets.
2. Inverter derating is defined by the ambient temperature.
3. The P-Q curve is defined by the reactive power and the average grid voltage (average grid line voltage).
4. If the absolute difference between the Maximum Possible Power and ActivePower_5min.MAX is less than or equal to 0.5% of the Maximum Possible Power -> ClippingState = 1 (ON), otherwise ClippingState = 0 (OFF).
Parameters:
- period (DateTimeRange) – Period for which the feature will be calculated.
- save_into (Literal['all', 'performance_db'] | None, default: None) – Argument that will be passed to the method "save". The options are:
  - "all": The feature will be saved in performance_db and bazefield.
  - "performance_db": The feature will be saved only in performance_db.
  - None: The feature will not be saved.
  By default None.
- cached_data (DataFrame | None, default: None) – DataFrame with features already queried/calculated. This is useful to avoid needing to query all the data again from performance_db, making chained calculations a lot more efficient. By default None.
- **kwargs (dict, default: {}) – Additional arguments that will be passed to the "save" method.
Returns:
- Series – Pandas Series with the calculated feature.
Source code in echo_energycalc/feature_calc_solar_clipping_state.py
def calculate(
self,
period: DateTimeRange,
save_into: Literal["all", "performance_db"] | None = None,
cached_data: DataFrame | None = None,
**kwargs,
) -> Series:
"""
Method that will calculate the feature ClippingState_5min.REP.
    The logic follows these steps:
1. Calculate the Maximum Possible Power based on the inverter derating and P-Q curve from Huawei datasheets.
2. Inverter derating is defined by the ambient temperature.
3. P-Q curve is defined by the reactive power and the average grid voltage (average grid line voltage).
    4. If the absolute difference between the Maximum Possible Power and ActivePower_5min.MAX is less than or equal to 0.5% of the Maximum Possible Power -> ClippingState = 1 (ON), otherwise ClippingState = 0 (OFF).
Parameters
----------
period : DateTimeRange
Period for which the feature will be calculated.
save_into : Literal["all", "performance_db"] | None, optional
Argument that will be passed to the method "save". The options are:
- "all": The feature will be saved in performance_db and bazefield.
- "performance_db": the feature will be saved only in performance_db.
- None: The feature will not be saved.
By default None.
cached_data : DataFrame | None, optional
DataFrame with features already queried/calculated. This is useful to avoid needing to query all the data again from performance_db, making chained calculations a lot more efficient.
By default None
**kwargs : dict, optional
Additional arguments that will be passed to the "save" method.
Returns
-------
Series
Pandas Series with the calculated feature.
"""
t0 = perf_counter()
nominal_ac_voltage = self._get_requirement_data("RequiredObjectAttributes")[self.object]["nominal_ac_voltage"]
# adjusting period to account for lagged timestamps
adjusted_period = period.copy()
# creating a series to store the result
result = self._create_empty_result(period=adjusted_period, freq="5min", result_type="Series")
# getting feature values
self._get_required_data(
period=adjusted_period,
reindex=None,
round_timestamps={"freq": timedelta(minutes=5), "tolerance": timedelta(minutes=2)},
cached_data=cached_data,
)
# getting DataFrame with feature values
df = self._get_requirement_data("RequiredFeatures")
t1 = perf_counter()
# Dataframe structure adjustment
df.columns = df.columns.get_level_values("feature")
# Remove the suffix _b# from the columns
df.columns = df.columns.str.replace("_b#$", "", regex=True)
# Filling missing values, first forward and then backward
df[df.columns] = df[df.columns].ffill().bfill()
t2 = perf_counter()
# Defining crucial columns for calculation
# Average Grid Voltage to define the pu voltage and apply the corresponding maximum P-Q curve, following inverter datasheet.
df["AverageGridVoltage_5min.AVG"] = df[
["GridLineABVoltage_5min.AVG", "GridLineBCVoltage_5min.AVG", "GridLineCAVoltage_5min.AVG"]
].mean(axis=1)
# Define which active power feature to use based on curtailment state
# When curtailment is active (CurtailmentState == 1), use theoretical power; otherwise use measured max power
df["ActivePowerForClipping"] = np.where(
df["CurtailmentState_5min.REP"] == 1,
df["ActivePowerTheoretical_5min.AVG"],
df["ActivePower_5min.MAX"],
)
    # Calculating the maximum possible power based on temperature derating following the inverter datasheet.
df["TemperatureDeratingPower_5min.AVG"] = self._inverter_derating_function(df["AmbTemp_5min.AVG"])
# Calculating the maximum possible power based on P-Q curve following inverter datasheet.
df["CapabilityCurvePowerMax_5min.AVG"] = self._get_p_max_from_pq_curve_vectorized(
df["ReactivePower_5min.AVG"],
df["AverageGridVoltage_5min.AVG"],
nominal_ac_voltage,
)
# Defining the maximum possible power based on the inverter derating and P-Q curve.
# Here we are considering the one that is closer to the active power on each timestamp.
    # This happens because there are cases in which the inverter does not follow the minimum value between these two possibilities.
df["MaximumPossiblePower_5min.AVG"] = np.where(
np.abs(df["CapabilityCurvePowerMax_5min.AVG"] - df["ActivePowerForClipping"])
< np.abs(df["TemperatureDeratingPower_5min.AVG"] - df["ActivePowerForClipping"]),
df["CapabilityCurvePowerMax_5min.AVG"],
df["TemperatureDeratingPower_5min.AVG"],
)
# Defining the clipping state based on the maximum possible power and the active power.
# If the difference between the maximum possible power and the active power is less or more than 0.5% -> ClippingState = 1
df["ClippingState_5min.REP"] = np.where(
np.abs(df["ActivePowerForClipping"] - df["MaximumPossiblePower_5min.AVG"]) <= 0.005 * df["MaximumPossiblePower_5min.AVG"],
1, # Clipping state is ON
0, # Clipping state is OFF
)
t3 = perf_counter()
# Adjusting the Series index with the results.
    # This is done to prevent missing indexes in the requested calculation period, that is, if the calculated df has fewer points than expected for the period.
wanted_idx = result.index.intersection(df.index)
result.loc[wanted_idx] = df.loc[wanted_idx, "ClippingState_5min.REP"].values
# Trimming result to the original period, just to be sure
result = result[(result.index >= period.start) & (result.index <= period.end)].copy()
# Adding calculated feature to class result attribute
self._result = result.copy()
# Saving results
self.save(save_into=save_into, **kwargs)
logger.debug(
f"{self.object} - {self.feature} - {period}: Requirements during calc {t1 - t0:.2f}s - Data adjustments {t2 - t1:.2f}s - Calculation core {t3 - t2:.2f}s - Final adjustments {perf_counter() - t3:.2f}s",
)
return result
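A hedged usage sketch of calculate follows; the DateTimeRange construction is not documented on this page, so it is left as a placeholder.

# Sketch: compute the clipping state for a period without saving it.
period = ...  # a DateTimeRange covering the desired 5-minute timestamps
clipping = calc.calculate(period=period, save_into=None)
print(clipping.value_counts())  # number of samples flagged 1 (ON) vs 0 (OFF)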
save(save_into=None, **kwargs)¶
Method to save the calculated feature values in performance_db.
Parameters:
- save_into (Literal['all', 'performance_db'] | None, default: None) – Argument that controls where the results are saved. The options are:
  - "all": The feature will be saved in performance_db and bazefield.
  - "performance_db": The feature will be saved only in performance_db.
  - None: The feature will not be saved.
  By default None.
- **kwargs (dict, default: {}) – Not being used at the moment. Here only for compatibility.
Source code in echo_energycalc/feature_calc_core.py
def save(
self,
save_into: Literal["all", "performance_db"] | None = None,
**kwargs, # noqa: ARG002
) -> None:
"""
Method to save the calculated feature values in performance_db.
Parameters
----------
save_into : Literal["all", "performance_db"] | None, optional
Argument that will be passed to the method "save". The options are:
- "all": The feature will be saved in performance_db and bazefield.
- "performance_db": the feature will be saved only in performance_db.
- None: The feature will not be saved.
By default None.
**kwargs : dict, optional
Not being used at the moment. Here only for compatibility.
"""
# checking arguments
if not isinstance(save_into, str | type(None)):
raise TypeError(f"save_into must be a string or None, not {type(save_into)}")
if isinstance(save_into, str) and save_into not in ["all", "performance_db"]:
raise ValueError(f"save_into must be 'all', 'performance_db' or None, not {save_into}")
# checking if calculation was done
if self.result is None:
raise ValueError(
"The calculation was not done. Cannot save the feature calculation results. Please make sure to do something like 'self._result = df[self.feature].copy()' in the method 'calculate' before calling 'self.save()'.",
)
if save_into is None:
return
if isinstance(save_into, str):
if save_into not in ["performance_db", "all"]:
raise ValueError(f"save_into must be 'performance_db' or 'all', not {save_into}.")
upload_to_bazefield = save_into == "all"
elif save_into is None:
upload_to_bazefield = False
else:
raise TypeError(f"save_into must be a string or None, not {type(save_into)}.")
# converting result series to DataFrame if needed
if isinstance(self.result, Series):
result_df = self.result.to_frame()
elif isinstance(self.result, DataFrame):
result_df = self.result.droplevel(0, axis=1)
else:
raise TypeError(f"result must be a pandas Series or DataFrame, not {type(self.result)}.")
# adjusting DataFrame to be inserted in the database
# making the columns a Multindex with levels object_name and feature_name
result_df.columns = MultiIndex.from_product([[self.object], result_df.columns], names=["object_name", "feature_name"])
self._perfdb.features.values.series.insert(
df=result_df,
on_conflict="update",
bazefield_upload=upload_to_bazefield,
)
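In practice, saving is usually driven through calculate rather than by calling save directly; a brief sketch under the same assumptions as the earlier usage example:

# "performance_db" writes only to the performance database;
# "all" would additionally upload the series to bazefield.
calc.calculate(period=period, save_into="performance_db")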