Solar Tracker Loss¶
Overview¶
SolarEnergyLossTracker calculates the active power loss at 5-minute resolution caused by solar tracker misalignment. When trackers are misaligned, the panels are not optimally oriented toward the sun, reducing energy production. The loss is quantified by comparing the misaligned inverter's power against a high-performance benchmark derived from neighboring inverters that are operating normally.
Calculation Logic¶
1. Data Preparation¶
Fetches the inverter's ActivePower_5min.AVG, MisalignedTrackers_5min.REP, LostActivePowerOpenStrings_5min.AVG, and operational state features from Bazefield. The active power is adjusted by adding back the open string losses — this ensures the tracker loss is measured against a baseline that excludes string-level effects:
adjusted_active_power = ActivePower_5min.AVG + LostActivePowerOpenStrings_5min.AVG
2. Night Filtering¶
Uses pvlib with the object's latitude and longitude to identify nighttime periods. All feature values (power, flags) are set to zero during night hours so they do not contribute to the loss calculation.
3. Early Exit — No Misalignment¶
If the target inverter has no MisalignedTrackers_5min.REP == 1 flags anywhere in the period, the result is set to 0 kW for all timestamps and the calculation ends immediately (no neighbor data is fetched).
4. Neighbor Reference Map¶
If misalignment is detected, the calculator fetches data for all neighboring inverters listed in neighbor_inverters. For each neighbor:
- Instantiates a SolarEnergyLossTracker for the neighbor (to read its features using the same pipeline).
- Applies the same open-string adjustment: neighbor_adjusted_power = ActivePower + LostActivePowerOpenStrings.
- Builds a wide DataFrame with columns <neighbor>@ActivePower_5min.AVG and <neighbor>@MisalignedTrackers_5min.REP.
5. Benchmark Power Calculation¶
For each timestamp, the benchmark power represents the expected output of a healthy inverter:
- Null out power for any neighbor with MisalignedTrackers_5min.REP != 0 (exclude misaligned neighbors).
- Compute Q3 (the 75th percentile) across all valid neighbor powers.
- Keep only neighbors with power at or above Q3 (upper quartile).
- Benchmark = mean of the upper-quartile neighbor powers.
If no healthy neighbor remains at a timestamp (all are misaligned), the benchmark falls back to the maximum of the masked neighbor powers, which defaults to 0 kW when every value is null.
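The per-timestamp benchmark logic above can be sketched in plain Python; the inverter names and readings are hypothetical, and the standard-library quantile uses linear interpolation, which may differ slightly from the implementation's polars quantile method:

```python
from statistics import mean, quantiles

# Hypothetical neighbor readings at one 5-minute timestamp:
# (adjusted power in kW, misalignment flag).
neighbors = {
    "INV-02": (940.0, 0),
    "INV-03": (910.0, 0),
    "INV-04": (955.0, 0),
    "INV-05": (700.0, 1),  # misaligned -> excluded from the benchmark
}

# Keep only neighbors that are operating normally (flag == 0).
valid = [p for p, flag in neighbors.values() if flag == 0]

if valid:
    q3 = quantiles(valid, n=4, method="inclusive")[2]  # 75th percentile
    benchmark = mean(p for p in valid if p >= q3)      # upper-quartile mean
else:
    benchmark = 0.0  # all neighbors misaligned -> fallback
# q3 = 947.5, benchmark = 955.0
```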
6. Loss Calculation¶
power_loss = max(benchmark_power - adjusted_active_power, 0) if MisalignedTrackers == 1 else 0
Special conditions:
| Condition | Tracker loss |
|---|---|
| Communication failure (CommunicationState != 0) | 0 kW |
| Inverter stopped (IEC-OperationState < 2) | 0 kW |
| Night (sun elevation < 0) | 0 kW |
| No misalignment flag in period | 0 kW (early exit) |
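The loss formula and the special conditions above can be combined into one small helper; this is an illustrative sketch (the function name and keyword arguments are hypothetical, not part of the library's API):

```python
def tracker_loss(
    benchmark_kw: float,
    adjusted_power_kw: float,
    misaligned: bool,
    comm_fail: bool = False,
    stopped: bool = False,
) -> float:
    """Per-timestamp tracker loss in kW (illustrative sketch)."""
    # Loss only counts while the misalignment flag is raised and the
    # inverter is communicating and running.
    if not misaligned or comm_fail or stopped:
        return 0.0
    # Clipped at zero: a misaligned inverter out-producing the
    # benchmark reports no loss.
    return max(benchmark_kw - adjusted_power_kw, 0.0)


tracker_loss(955.0, 870.0, misaligned=True)   # 85.0 kW lost
tracker_loss(955.0, 980.0, misaligned=True)   # 0.0 (clipped)
tracker_loss(955.0, 870.0, misaligned=False)  # 0.0 (no flag)
```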
Database Requirements¶
Feature Attribute¶
| Attribute | Value |
|---|---|
| server_calc_type | solar_energy_loss_trackers |
Object Attributes¶
| Attribute | Required | Description |
|---|---|---|
| neighbor_inverters | Yes | List of neighboring inverter object names. Used to establish the healthy-inverter benchmark. |
| latitude | Yes | Geographic latitude (decimal degrees). Used for pvlib nighttime masking. |
| longitude | Yes | Geographic longitude (decimal degrees). Used for pvlib nighttime masking. |
Features (inverter — from Bazefield)¶
| Feature | Description |
|---|---|
| ActivePower_5min.AVG | AC active power (kW) |
| LostActivePowerOpenStrings_5min.AVG | Open string losses (kW), added back to isolate tracker loss |
| MisalignedTrackers_5min.REP | Tracker misalignment flag (1 = misaligned) |
| IEC-OperationState_5min.REP | IEC operation state (< 2 = stopped) |
| CommunicationState_5min.REP | Communication failure flag (non-zero = failure) |
Features (neighbor inverters — same set, fetched via reference_map)¶
Same feature set as the target inverter. The LostActivePowerOpenStrings_5min.AVG for each neighbor must already be calculated and available in Bazefield before tracker losses are calculated — this implies correct feature calculation order.
Class Definition¶
SolarEnergyLossTracker(object_name, feature)
¶
Base class for solar energy loss due to tracker misalignment.
Parameters:
- object_name (str) – Name of the object for which the feature is calculated. It must exist in performance_db.
- feature (str) – Feature of the object that is calculated. It must exist in performance_db.
Source code in echo_energycalc/solar_energy_loss_tracker.py
def __init__(self, object_name: str, feature: str) -> None:
"""
Class used to calculate active power losses due to tracker misalignment Feature for solar assets.
Parameters
----------
object_name : str
Name of the object for which the feature is calculated. It must exist in performance_db.
feature : str
Feature of the object that is calculated. It must exist in performance_db.
"""
# initialize parent class
super().__init__(object_name, feature)
# Defining which object attributes are required for the calculation.
self._add_requirement(
RequiredObjectAttributes(
{
self.object: [
"neighbor_inverters",
"latitude",
"longitude",
],
},
),
)
self._fetch_requirements()
    # Defining the features required for the calculation: active power,
    # open-string losses, and the state/flag features.
features = [
"ActivePower_5min.AVG",
"LostActivePowerOpenStrings_5min.AVG",
"MisalignedTrackers_5min.REP",
"IEC-OperationState_5min.REP",
"CommunicationState_5min.REP",
]
    # Adding suffix _b# to features -> necessary to acquire data from bazefield
features = {self.object: [f"{feat}_b#" for feat in features]}
self._add_requirement(RequiredFeatures(features=features))
feature
property
¶
Feature that is calculated. This will be defined in the constructor and cannot be changed.
Returns:
- str – Name of the feature that is calculated.
name
property
¶
Name of the feature calculator. Is defined in child classes of FeatureCalculator.
This must be equal to the "server_calc_type" attribute of the feature in performance_db.
Returns:
- str – Name of the feature calculator.
object
property
¶
Object for which the feature is calculated. This will be defined in the constructor and cannot be changed.
Returns:
- str – Object name for which the feature is calculated.
requirements
property
¶
List of requirements of the feature calculator. Is defined in child classes of FeatureCalculator.
Returns:
- dict[str, list[CalculationRequirement]] – Dict of requirements. The keys are the names of the classes of the requirements and the values are lists of requirements of that class. For example:
  {"RequiredFeatures": [RequiredFeatures(...), RequiredFeatures(...)], "RequiredObjects": [RequiredObjects(...)]}
result
property
¶
Result of the calculation. This is None until the method "calculate" is called.
Returns:
- DataFrame | None – Polars DataFrame with a "timestamp" column and one or more feature value columns. None until calculate is called.
calculate(period, save_into=None, cached_data=None, **kwargs)
¶
Method that will calculate the loss due to misaligned trackers.
The calculation is done following these steps, for each inverter:
1. Get the complete time-series data for the target inverter, including its Active Power and Misalignment Flag for the specified period.
2. Pre-filter the target inverter's data to exclude nighttime records, ensuring the analysis is performed only on data from sunlight hours.
3. Perform a preliminary check on the target inverter's data. If no misalignment flags (flag == 1) are found within the entire period, the production loss is considered zero for all timestamps, and the algorithm proceeds to the next inverter.
4. If misalignment flags are present for the target inverter, get the complete time-series data for all of its predefined neighboring inverters to be used as a reference group.
5. Establish a high-performance benchmark power for each 5-minute timestamp by calculating the mean of the upper quartile of the healthy neighbors. This involves identifying all neighbors with a flag of 0, calculating the 75th percentile of their power values, and then averaging only those powers at or above this percentile.
6. Calculate the final production loss for the target inverter. For every timestamp where the target inverter has a misalignment flag, the loss is (Benchmark Power) - (Target Inverter's Power), clipped at zero. For all other timestamps, the loss is set to zero.
7. Perform final data sanitization on the resulting loss series by ensuring that any timestamps where a benchmark could not be calculated (e.g., all neighbors were also misaligned) also result in a final loss of zero.
Parameters:
- period (DateTimeRange) – Period for which the feature will be calculated.
- save_into (Literal['all', 'performance_db'] | None, default: None) – Argument that will be passed to the method "save". The options are:
  - "all": The feature will be saved in performance_db and bazefield.
  - "performance_db": The feature will be saved only in performance_db.
  - None: The feature will not be saved.
- cached_data (DataFrame | None, default: None) – DataFrame with features already queried/calculated. This is useful to avoid needing to query all the data again from performance_db, making chained calculations a lot more efficient. By default None.
- **kwargs (dict, default: {}) – Additional arguments that will be passed to the "save" method.
Returns:
- DataFrame – Polars DataFrame with the calculated feature.
Source code in echo_energycalc/solar_energy_loss_tracker.py
def calculate(
self,
period: DateTimeRange,
save_into: Literal["all", "performance_db"] | None = None,
cached_data: pl.DataFrame | None = None,
**kwargs,
) -> pl.DataFrame:
"""
Method that will calculate the loss due to misaligned trackers.
The calculation is done following those steps, for each inverter:
1. Get the complete time-series data for the target inverter, including its Active Power and Misalignment Flag for the specified period.
2. Pre-filter the target inverter's data to exclude nighttime records, ensuring the analysis is performed only on data from sunlight hours.
3. Perform a preliminary check on the target inverter's data.
If no misalignment flags (flag == 1) are found within the entire period, the production loss is considered zero for all timestamps, and the algorithm proceeds to the next inverter.
4. If misalignment flags are present for the target inverter, get the complete time-series data for all of its predefined neighboring inverters to be used as a reference group.
5. Establish a high-performance benchmark power for each 5-minute timestamp by calculating the mean of the upper quartile of the healthy neighbors.
This involves identifying all neighbors with a flag of 0, calculating the 75th percentile of their power values, and then taking the average of only those powers that are above this percentile.
6. Calculate the final production loss for the target inverter.
For every timestamp where the target inverter has a misalignment flag, the loss is calculated as the (Benchmark Power) - (Target Inverter's Power).
For all other timestamps, the loss is set to zero.
7. Perform final data sanitization on the resulting loss series by ensuring that any timestamps where a benchmark could not be calculated (e.g., all neighbors were also misaligned) also result in a final loss of zero.
Parameters
----------
period : DateTimeRange
Period for which the feature will be calculated.
save_into : Literal["all", "performance_db"] | None, optional
Argument that will be passed to the method "save". The options are:
- "all": The feature will be saved in performance_db and bazefield.
- "performance_db": the feature will be saved only in performance_db.
- None: The feature will not be saved.
By default None.
cached_data : DataFrame | None, optional
DataFrame with features already queried/calculated. This is useful to avoid needing to query all the data again from performance_db, making chained calculations a lot more efficient.
By default None
**kwargs : dict, optional
Additional arguments that will be passed to the "save" method.
Returns
-------
pl.DataFrame
Polars DataFrame with the calculated feature.
"""
t0 = perf_counter()
nearby_inverters = self._requirement_data("RequiredObjectAttributes")[self.object]["neighbor_inverters"]
# Build expected timestamp range
start_ts = period.start.replace(hour=0, minute=0, second=0, microsecond=0)
ts_range = pl.datetime_range(
start=start_ts,
end=period.end,
interval="5m",
eager=True,
time_unit="ms",
).alias("timestamp")
ts_range = ts_range.filter((ts_range >= period.start) & (ts_range <= period.end))
result_df = pl.DataFrame({"timestamp": ts_range, self.feature: pl.Series([None] * len(ts_range), dtype=pl.Float64)})
# getting feature values
self._fetch_requirements(
period=period,
reindex=None,
round_timestamps={"freq": timedelta(minutes=5), "tolerance": timedelta(minutes=2)},
cached_data=cached_data,
)
t1 = perf_counter()
# getting polars DataFrame with feature values
raw_df = self._requirement_data("RequiredFeatures")
# Build rename map: strip "Obj@" prefix and "_b#" suffix
rename_map = {c: c.split("@", 1)[1].removesuffix("_b#") for c in raw_df.columns if c != "timestamp"}
df = raw_df.rename(rename_map)
# Fill NaN values with forward and back fill
data_cols = [c for c in df.columns if c != "timestamp"]
df = df.with_columns([pl.col(c).forward_fill().backward_fill() for c in data_cols])
# Cast flag/state columns to Int32 (data may arrive as strings)
rep_cols = [c for c in df.columns if c.endswith(".REP")]
if rep_cols:
df = df.with_columns([pl.col(c).cast(pl.Int32, strict=False) for c in rep_cols])
# Add lost open-string power back to active power to isolate tracker-only losses
if "LostActivePowerOpenStrings_5min.AVG" in df.columns:
df = df.with_columns(
(pl.col("ActivePower_5min.AVG") + pl.col("LostActivePowerOpenStrings_5min.AVG")).alias("ActivePower_5min.AVG"),
)
df = df.drop("LostActivePowerOpenStrings_5min.AVG")
t2 = perf_counter()
# Align result_df timestamps: set power_loss = 0 where we have data from df
result_df = (
result_df.join(
df.select(["timestamp"]).with_columns(pl.lit(0.0).alias("_zero")),
on="timestamp",
how="left",
)
.with_columns(
pl.when(pl.col("_zero").is_not_null()).then(0.0).otherwise(pl.col(self.feature)).alias(self.feature),
)
.drop("_zero")
)
# Trim result to the original period
result_df = result_df.filter((pl.col("timestamp") >= period.start) & (pl.col("timestamp") <= period.end))
# Zero values during night (sun below horizon)
if df.height > 0:
obj_attrs = self._requirement_data("RequiredObjectAttributes")[self.object]
is_night = self._get_night_mask(df["timestamp"], obj_attrs["latitude"], obj_attrs["longitude"])
df = df.with_columns(
pl.when(is_night)
.then(
pl.when(pl.col("MisalignedTrackers_5min.REP").is_not_null()).then(0.0).otherwise(pl.col("MisalignedTrackers_5min.REP")),
)
.otherwise(pl.col("MisalignedTrackers_5min.REP"))
.alias("MisalignedTrackers_5min.REP"),
)
# Zero all numeric columns at night timestamps
df = df.with_columns(
[
pl.when(is_night).then(0.0).otherwise(pl.col(c)).alias(c)
for c in df.columns
if c not in ("timestamp", "MisalignedTrackers_5min.REP")
],
)
t3 = perf_counter()
# First verification: if there are no misalignment flags, return zeros
has_misalignment = df.filter(pl.col("MisalignedTrackers_5min.REP") == 1).height > 0
if not has_misalignment:
self._result = result_df
self.save(save_into=save_into, **kwargs)
logger.debug(
            f"{self.object} - {self.feature} - {period}: Requirements during calc {t1 - t0:.2f}s - Data adjustments {t2 - t1:.2f}s - Saving data {perf_counter() - t2:.2f}s",
)
return result_df
# Getting data from neighboring inverters to be used as reference
reference_data = SolarEnergyLossTracker.reference_map(
neighbor_inverters_list=nearby_inverters,
feature=self.feature,
period=period,
cached_data=cached_data,
)
# Power and flag columns for each neighbor
power_cols = [f"{inv}@ActivePower_5min.AVG" for inv in nearby_inverters if f"{inv}@ActivePower_5min.AVG" in reference_data.columns]
flag_cols = [
f"{inv}@MisalignedTrackers_5min.REP"
for inv in nearby_inverters
if f"{inv}@MisalignedTrackers_5min.REP" in reference_data.columns
]
# Valid power: null out power for misaligned neighbors (flag != 0)
valid_power_exprs = [
pl.when(pl.col(f_col) == 0).then(pl.col(p_col)).otherwise(None).alias(p_col)
for p_col, f_col in zip(power_cols, flag_cols, strict=True)
]
reference_data = reference_data.with_columns(valid_power_exprs)
# Q3 per timestamp across valid power columns
ref_list = pl.concat_list([pl.col(c) for c in power_cols]).list.drop_nulls()
q3_series = ref_list.list.eval(pl.element().quantile(0.75)).list.first().alias("_q3_ref")
reference_data = reference_data.with_columns(q3_series)
# Upper quartile powers: null out powers below Q3
upper_q_exprs = [
pl.when(pl.col(p_col) >= pl.col("_q3_ref")).then(pl.col(p_col)).otherwise(None).alias(p_col) for p_col in power_cols
]
reference_data = reference_data.with_columns(upper_q_exprs)
# Final benchmark: mean of upper quartile powers
final_ref = pl.mean_horizontal([pl.col(c) for c in power_cols]).alias("_final_ref")
reference_data = reference_data.with_columns(final_ref)
    # Alternative reference: max across the (masked) neighbor powers.
    # Misaligned neighbors were already nulled out above, so this is the
    # maximum among healthy neighbors; when every neighbor is misaligned
    # the row is all-null and fill_null makes the fallback 0.
    alt_ref = pl.max_horizontal([pl.col(c) for c in power_cols]).fill_null(0.0).alias("_alt_ref")
reference_data = reference_data.with_columns(alt_ref)
# Final benchmark: fill nulls in final_ref with alternative ref
reference_data = reference_data.with_columns(
pl.col("_final_ref").fill_null(pl.col("_alt_ref")).fill_null(0.0).alias("benchmark_power"),
)
# Join benchmark power to df on timestamp
df = df.join(reference_data.select(["timestamp", "benchmark_power"]), on="timestamp", how="left")
df = df.with_columns(pl.col("benchmark_power").fill_null(0.0))
# Power loss: benchmark - active power, but only where misalignment flag == 1
target_condition = pl.col("MisalignedTrackers_5min.REP") == 1
df = df.with_columns(
pl.when(target_condition)
.then((pl.col("benchmark_power") - pl.col("ActivePower_5min.AVG")).clip(lower_bound=0.0))
.otherwise(0.0)
.alias("power_loss"),
)
# Zero losses during communication failure and stopped operation
comm_failure_mask = pl.col("CommunicationState_5min.REP") != 0
stopped_mask = pl.col("IEC-OperationState_5min.REP") < 2
df = df.with_columns(
pl.when(comm_failure_mask | stopped_mask).then(0.0).otherwise(pl.col("power_loss")).alias("power_loss"),
)
t4 = perf_counter()
# Update result_df with calculated power_loss (matching by timestamp)
result_df = (
result_df.join(
df.select(["timestamp", "power_loss"]),
on="timestamp",
how="left",
)
.with_columns(
pl.when(pl.col("power_loss").is_not_null()).then(pl.col("power_loss")).otherwise(pl.col(self.feature)).alias(self.feature),
)
.drop("power_loss")
)
self._result = result_df
self.save(save_into=save_into, **kwargs)
logger.debug(
f"{self.object} - {self.feature} - {period}: "
f"Requirements during calc {t1 - t0:.2f}s - "
f"Data adjustments {t2 - t1:.2f}s - "
f"Solar position calc {t3 - t2:.2f}s - "
f"Neighbor reference calc {t4 - t3:.2f}s - "
f"Saving data {perf_counter() - t4:.2f}s",
)
return result_df
reference_map(neighbor_inverters_list, feature, period, cached_data=None)
staticmethod
¶
Create a map of neighboring inverters to be used as reference for each inverter.
Parameters:
- neighbor_inverters_list (list[str]) – List of neighboring inverters to be used as reference for each inverter. They must exist in performance_db.
- feature (str) – The name of the feature being calculated. Needed to instantiate neighbor objects.
- period (DateTimeRange) – Period for which the feature will be calculated.
- cached_data (DataFrame | None, default: None) – Pre-fetched cached data.
Returns:
- DataFrame – A polars DataFrame with "timestamp" plus columns named "<neighbor>@ActivePower_5min.AVG" and "<neighbor>@MisalignedTrackers_5min.REP" for each neighboring inverter.
Source code in echo_energycalc/solar_energy_loss_tracker.py
@staticmethod
def reference_map(
neighbor_inverters_list: list[str],
feature: str,
period: DateTimeRange,
cached_data: pl.DataFrame | None = None,
) -> pl.DataFrame:
"""Create a map of neighboring inverters to be used as reference for each inverter.
Parameters
----------
neighbor_inverters_list : list[str]
list of neighboring inverters to be used as reference for each inverter. It must exist in performance_db.
feature : str
The name of the feature being calculated. Needed to instantiate neighbor objects.
period : DateTimeRange
Period for which the feature will be calculated.
cached_data : pl.DataFrame | None
Pre-fetched cached data.
Returns
-------
pl.DataFrame
A polars DataFrame with "timestamp" plus columns named
"<inv_name>@ActivePower_5min.AVG" and "<inv_name>@MisalignedTrackers_5min.REP"
for each neighboring inverter.
"""
result_df: pl.DataFrame | None = None
for inv_name in neighbor_inverters_list:
# Creating an instance of SolarEnergyLossTracker for each neighboring inverter
neighbor_inv = SolarEnergyLossTracker(object_name=inv_name, feature=feature)
neighbor_inv._fetch_requirements(
period=period,
reindex=None,
round_timestamps={"freq": timedelta(minutes=5), "tolerance": timedelta(minutes=2)},
cached_data=cached_data,
)
raw = neighbor_inv._requirement_data("RequiredFeatures")
# Build rename map: "Obj@Feat_b#" -> "Feat" (strip obj prefix and _b# suffix)
rename_map = {}
for c in raw.columns:
if c == "timestamp":
continue
feat_part = c.split("@", 1)[1].removesuffix("_b#")
rename_map[c] = feat_part
inv_df = raw.rename(rename_map)
# Fill NaN values with forward and back fill
data_cols = [c for c in inv_df.columns if c != "timestamp"]
inv_df = inv_df.with_columns([pl.col(c).forward_fill().backward_fill() for c in data_cols])
# Cast flag/state columns to Int32 (data may arrive as strings)
rep_cols = [c for c in inv_df.columns if c.endswith(".REP")]
if rep_cols:
inv_df = inv_df.with_columns([pl.col(c).cast(pl.Int32, strict=False) for c in rep_cols])
# Adding the lost power due to open strings to get only the losses due to misalignment
inv_df = inv_df.with_columns(
(pl.col("ActivePower_5min.AVG") + pl.col("LostActivePowerOpenStrings_5min.AVG")).alias("ActivePower_5min.AVG"),
)
inv_df = inv_df.drop("LostActivePowerOpenStrings_5min.AVG")
# Rename feature columns to be prefixed with inv_name for joining
inv_rename = {c: f"{inv_name}@{c}" for c in inv_df.columns if c != "timestamp"}
inv_df = inv_df.rename(inv_rename)
result_df = inv_df if result_df is None else result_df.join(inv_df, on="timestamp", how="full", coalesce=True)
return result_df if result_df is not None else pl.DataFrame({"timestamp": pl.Series([], dtype=pl.Datetime)})
save(save_into=None, **kwargs)
¶
Method to save the calculated feature values in performance_db.
Parameters:
- save_into (Literal['all', 'performance_db'] | None, default: None) – Controls where the feature values are saved. The options are:
  - "all": The feature will be saved in performance_db and bazefield.
  - "performance_db": The feature will be saved only in performance_db.
  - None: The feature will not be saved.
- **kwargs (dict, default: {}) – Not being used at the moment. Here only for compatibility.
Source code in echo_energycalc/feature_calc_core.py
def save(
self,
save_into: Literal["all", "performance_db"] | None = None,
**kwargs, # noqa: ARG002
) -> None:
"""
Method to save the calculated feature values in performance_db.
Parameters
----------
save_into : Literal["all", "performance_db"] | None, optional
Argument that will be passed to the method "save". The options are:
- "all": The feature will be saved in performance_db and bazefield.
- "performance_db": the feature will be saved only in performance_db.
- None: The feature will not be saved.
By default None.
**kwargs : dict, optional
Not being used at the moment. Here only for compatibility.
"""
# checking arguments
if not isinstance(save_into, str | type(None)):
raise TypeError(f"save_into must be a string or None, not {type(save_into)}")
if isinstance(save_into, str) and save_into not in ["all", "performance_db"]:
raise ValueError(f"save_into must be 'all', 'performance_db' or None, not {save_into}")
# checking if calculation was done
if self.result is None:
raise ValueError(
"The calculation was not done. Please call 'calculate' before calling 'save'.",
)
if save_into is None:
return
upload_to_bazefield = save_into == "all"
if not isinstance(self.result, pl.DataFrame):
raise TypeError(f"result must be a polars DataFrame, not {type(self.result)}.")
if "timestamp" not in self.result.columns:
raise ValueError("result DataFrame must contain a 'timestamp' column.")
# rename feature columns to "object@feature" format expected by perfdb polars insert
feat_cols = [c for c in self.result.columns if c != "timestamp"]
result_pl = self.result.rename({col: f"{self.object}@{col}" for col in feat_cols})
self._perfdb.features.values.series.insert(
df=result_pl,
on_conflict="update",
bazefield_upload=upload_to_bazefield,
)