Power at Connection Point

Overview

FeatureCalcPowerCP calculates the power delivered at the connection point (CP) for a specific Collecting Substation (CS) power meter. The output is a 5-minute average power time series.

The challenge is that CP meters measure total power for the entire plant, while each CS meter measures only one section. The calculator distributes CP power proportionally to each CS meter based on its share of total CS production, and falls back to electrical loss curves when meter data is unavailable.

Note

The calculator runs only for objects with power_meter_location = "collecting substation". Objects whose names end in SMF2 are skipped entirely (secondary bus meters are handled separately).


Calculation Logic — 3-Step Fallback

Steps are applied sequentially to fill null timestamps. Each step only fills timestamps that remain null after the previous step.

Step 1 — CS and CP Meter Scaling

Requires: valid readings from all CS meters sharing the same CP, plus the CP meter itself.

For each timestamp where data from all CS meters and the CP meter is available:

Text Only
power_at_CP = (power_this_CS_meter / sum_all_CS_meters) × power_CP_meter

This step is skipped for a timestamp if:

  • Any CS meter has a missing reading (the sum would be incomplete).
  • The sum of all CS meters is less than the CP meter reading (physically invalid, as it would imply a negative loss).

A warning is logged when negative-loss timestamps are detected; this usually indicates a CS power meter that is not configured in performance_db.
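
For a single timestamp, Step 1 can be sketched as follows (a minimal scalar sketch; the helper name is hypothetical, and the real calculator operates on whole polars time series rather than single readings):

```python
def step1_scale_to_cp(cs_readings, cp_reading):
    """Distribute CP power to the first CS meter proportionally to its share.

    Hypothetical scalar helper. `cs_readings` lists all CS meters sharing
    the same CP, with the meter being calculated first; `None` marks a
    missing reading.
    """
    # skip if the CP meter or any CS meter reading is missing
    if cp_reading is None or any(r is None for r in cs_readings):
        return None
    cs_sum = sum(cs_readings)
    # skip if the CS sum is below the CP reading (would imply negative loss)
    if cs_sum < cp_reading:
        return None
    # this meter's share of total CS production, applied to the CP reading
    return cs_readings[0] / cs_sum * cp_reading
```

For example, with two CS meters reading 600 kW and 400 kW and a CP reading of 950 kW, the first meter carries 60 % of CS production and is assigned 570 kW at the CP.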

Step 2 — CS Meter + Loss Curve

Requires: the loss_curve_cs_cp object attribute on the SPE, and a valid reading from this CS meter.

Uses a historical loss curve (fraction of power lost between CS and CP) as a function of CS power:

Text Only
power_at_CP = power_CS × (1 - loss_fraction(power_CS))

The loss_fraction function is interpolated from loss_curve_cs_cp using convert_curve_df_to_func (with extrapolation enabled).
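
Using the bin_mean/value_mean curve format described under Database Requirements, Step 2 can be sketched as follows (a sketch with sample curve values; the function names are illustrative, and the interpolation mimics the extrapolating behaviour attributed to convert_curve_df_to_func):

```python
BIN_MEAN = [0.0, 500.0, 1000.0, 2000.0]    # CS power bins (kW)
VALUE_MEAN = [0.005, 0.007, 0.008, 0.009]  # fractional loss per bin

def loss_fraction(power, bins=BIN_MEAN, values=VALUE_MEAN):
    """Piecewise-linear loss interpolation, extrapolating linearly
    beyond the curve edges (sketch)."""
    if power <= bins[0]:
        i = 0                  # extrapolate below the first bin
    elif power >= bins[-1]:
        i = len(bins) - 2      # extrapolate above the last bin
    else:
        i = max(j for j in range(len(bins) - 1) if bins[j] <= power)
    slope = (values[i + 1] - values[i]) / (bins[i + 1] - bins[i])
    return values[i] + slope * (power - bins[i])

def power_at_cp(power_cs):
    # power_at_CP = power_CS * (1 - loss_fraction(power_CS))
    return power_cs * (1.0 - loss_fraction(power_cs))
```

With these sample values, 1000 kW at the CS maps to 992 kW at the CP (0.8 % loss).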

Step 3 — Asset Power + Loss Curve

Requires: loss_curve_asset_cp object attribute on the SPE, and ActivePower_10min.AVG from the SPE.

When both CS and CP meter data are unavailable, the calculator falls back to the SPE's total asset power:

Text Only
power_at_CP = power_asset × (1 - loss_fraction(power_asset))

Asset power is resampled to 5-minute frequency with a 1-period backward-fill to align timestamps.
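
The alignment can be sketched with plain dicts keyed by timestamp (an assumed interface; the real implementation resamples polars frames):

```python
from datetime import datetime, timedelta

def resample_to_5min(series_10min):
    """Upsample a 10-min series onto a 5-min grid with a 1-period
    backward fill (sketch).

    A 5-min timestamp with no reading takes the value of the following
    5-min slot, if present, so each 10-min average also covers the
    preceding 5-min slot.
    """
    if not series_10min:
        return {}
    stamps = sorted(series_10min)
    step = timedelta(minutes=5)
    out = {}
    t = stamps[0]
    while t <= stamps[-1]:
        if t in series_10min:
            out[t] = series_10min[t]
        elif (t + step) in series_10min:  # backward fill, limit of one period
            out[t] = series_10min[t + step]
        t += step
    return out
```

So a 10-minute reading stamped 00:10 also fills the 00:05 slot, while larger gaps stay null.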

Pre-Operation Date Zeroing

If test_operation_date or commercial_operation_date is defined on the SPE or CS meter object, all readings before that date are set to 0. This prevents unreliable commissioning data from affecting the calculation.
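
A minimal sketch of the zeroing, assuming a dict keyed by timestamp (the function name is illustrative):

```python
from datetime import datetime

def zero_before_operation(series, test_operation_date=None,
                          commercial_operation_date=None):
    """Zero all readings before the operation start date (sketch).

    test_operation_date takes precedence; commercial_operation_date is
    the fallback. With neither defined, the series is returned unchanged.
    """
    start = test_operation_date or commercial_operation_date
    if start is None:
        return dict(series)
    return {ts: (0.0 if ts < start else value) for ts, value in series.items()}
```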


Database Requirements

Feature Attribute

| Attribute | Value |
| --- | --- |
| server_calc_type | power_connection_point |

Object Attributes (CS meter object)

| Attribute | Required | Description |
| --- | --- | --- |
| power_meter_location | Yes | Must be "collecting substation". Other values cause the calculator to skip this object. |
| parent_meter_name | Yes | Object name of the CP meter in performance_db. Used to identify sibling CS meters and to fetch CP power. |
| test_operation_date | No | Date from which CS meter data is considered reliable. Readings before this date are set to 0. |
| commercial_operation_date | No | Fallback start date if test_operation_date is absent. |

Object Attributes (SPE object)

The SPE name is derived from the CS meter object name by stripping the trailing -SMF1 or -SMF2 suffix.
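
The derivation amounts to splitting on the last hyphen (a sketch; the meter name in the example is hypothetical):

```python
def spe_name_from_meter(meter_name: str) -> str:
    """Strip the trailing -SMF1/-SMF2 segment from a CS meter name (sketch)."""
    return meter_name.rsplit("-", maxsplit=1)[0]

# e.g. "PLANT-SPE01-SMF1" -> "PLANT-SPE01"
```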

| Attribute | Required | Description |
| --- | --- | --- |
| loss_curve_cs_cp | No (Step 2) | Loss curve from CS to CP. Dict with bin_mean and value_mean arrays defining fractional loss vs. CS power. Without this, Step 2 is skipped. |
| loss_curve_asset_cp | No (Step 3) | Loss curve from asset (total SPE) to CP. Same format. Without this, Step 3 is skipped. |
| test_operation_date / commercial_operation_date | No | Same purpose as on the CS meter object. |

Loss curve format:

JSON
{
    "bin_mean": [0.0, 500.0, 1000.0, 2000.0],
    "value_mean": [0.005, 0.007, 0.008, 0.009]
}

Note

Loss curves are computed by the script at manual_routines\postgres_update_electrical_losses on the Performance Server.

Features

| Feature | Object | Required | Description |
| --- | --- | --- | --- |
| ActivePower_5min.AVG | CP meter (parent_meter_name) | Step 1 | Total plant active power at the connection point (kW). |
| ActivePower_5min.AVG | All CS meters sharing the same CP | Step 1 | CS section active power (kW). |
| ActivePower_10min.AVG | SPE object | Step 3 | Total SPE active power aggregated from assets (kW). Resampled to 5 min. |

Class Definition

FeatureCalcPowerCP(object_name, feature)

FeatureCalculator class used to calculate the power at the connection point for an SPE.

This will go through the following steps to try to calculate the power at the connection point. The steps are only used to fill the missing data from the previous step.

  1. Calculate using power from all CS meters and power of CP meter. This basically does a scaling of the power from the CS meters so that the sum of all CS meters is equal to the power of the CP meter.

    This step will be skipped for timestamps in the following cases:

    • The sum of all CS meters is lower than the power of the CP meter.
    • Data for any of the CS or CP meters is missing.
    • There is significant difference between the sum of all CS meters and the power of the CP meter. This usually happens if there is a CS power meter not configured in the performance_db.
  2. Calculate using power from the specific CS meter and loss curve from CS to CP.

  3. Calculate using power from assets (wind turbine or solar inverter) from the specific CS meter and loss curve from asset to CP.

For this calculation to work, the following object attributes must be defined:

  • Power meter:
    • power_meter_location: Must be collecting substation, others will be skipped.
    • parent_meter_name: Must be the name of a valid power meter at the connection point.
  • SPE:
    • loss_curve_cs_cp: Losses curve must be defined for the respective SPE for filling gaps using active power at CS and historical curve
    • loss_curve_asset_cp: Losses curve must be defined for the respective SPE for filling gaps using active power at asset and historical curve

Parameters:

  • object_name

    (str) –

    Name of the object for which the feature is calculated. It must exist in performance_db.

  • feature

    (str) –

    Feature of the object that is calculated. It must exist in performance_db.

Source code in echo_energycalc/feature_calc_electrical_loss.py
Python
def __init__(
    self,
    object_name: str,
    feature: str,
) -> None:
    """
    FeatureCalculator class used to calculate the power at the connection point for an SPE.

    This will go through the following steps to try to calculate the power at the connection point. The steps are only used to fill the missing data from the previous step.

    1. Calculate using power from all CS meters and power of CP meter. This basically does a scaling of the power from the CS meters so that the sum of all CS meters is equal to the power of the CP meter.

        This step will be skipped for timestamps in the following cases:

        - The sum of all CS meters is lower than the power of the CP meter.
        - Data for any of the CS or CP meters is missing.
        - There is significant difference between the sum of all CS meters and the power of the CP meter. This usually happens if there is a CS power meter not configured in the performance_db.

    2. Calculate using power from the specific CS meter and loss curve from CS to CP.
    3. Calculate using power from assets (wind turbine or solar inverter) from the specific CS meter and loss curve from asset to CP.

    For this calculation to work, the following object attributes must be defined:

    - Power meter:
        - `power_meter_location`: Must be `collecting substation`, others will be skipped.
        - `parent_meter_name`: Must be the name of a valid power meter at the connection point.
    - SPE:
        - `loss_curve_cs_cp`: Losses curve must be defined for the respective SPE for filling gaps using active power at CS and historical curve
        - `loss_curve_asset_cp`: Losses curve must be defined for the respective SPE for filling gaps using active power at asset and historical curve

    Parameters
    ----------
    object_name : str
        Name of the object for which the feature is calculated. It must exist in performance_db.
    feature : str
        Feature of the object that is calculated. It must exist in performance_db.
    """
    # initialize parent class
    super().__init__(object_name, feature)

    # skipping calculation if current object is the secondary SMF
    if self.object.endswith("SMF2"):
        return

    # getting power meter location to be sure this is a power meter that needs to be calculated
    self._add_requirement(RequiredObjectAttributes({self.object: ["power_meter_location"]}))
    self._fetch_requirements()
    self._meter_location: str = self._requirement_data("RequiredObjectAttributes")[self.object]["power_meter_location"]
    if self._meter_location != "collecting substation":
        logger.warning(
            f"'{self.object}' - '{self.feature}': Skipping calculation as this power meter is not a 'collecting substation' meter. Please check 'power_meter_location' object attribute.",
        )
        return

    # defining attributes
    self._add_requirement(RequiredObjectAttributes({self.object: ["parent_meter_name"]}))
    self._fetch_requirements()
    self._cp_meter: str = self._requirement_data("RequiredObjectAttributes")[self.object]["parent_meter_name"]

    # getting collecting substation meters (the ones that share the same connection point meter)
    self._cs_meters = list(
        self._perfdb.objects.instances.get_ids(
            attributes={"parent_meter_name": self._cp_meter},
            object_types=["power_meter"],
        ).keys(),
    )

    # defining spe names (as most attributes are stored in the respective spes)
    # to do that we will remove the -SMF1 or -SMF2 from the end of the object name
    # split by "-" in reverse order and get the first element
    self._spe_name = self.object[::-1].split("-", maxsplit=1)[-1][::-1]
    # getting the loss curves
    self._add_requirement(
        RequiredObjectAttributes(
            {self._spe_name: ["loss_curve_cs_cp", "loss_curve_asset_cp"]},
            optional=True,
        ),
    )
    self._fetch_requirements()

    # adding start of operation dates as requirement
    self._add_requirement(
        RequiredObjectAttributes(
            {
                self._spe_name: ["test_operation_date", "commercial_operation_date"],
                self.object: ["test_operation_date", "commercial_operation_date"],
            },
            optional=True,
        ),
    )
    self._fetch_requirements()

    # defining the required features
    required_features = {
        self._cp_meter: ["ActivePower_5min.AVG"],
    }
    for meter in self._cs_meters:
        required_features[meter] = ["ActivePower_5min.AVG"]
    self._add_requirement(RequiredFeatures(required_features))

feature property

Feature that is calculated. This will be defined in the constructor and cannot be changed.

Returns:

  • str

    Name of the feature that is calculated.

name property

Name of the feature calculator. Is defined in child classes of FeatureCalculator.

This must be equal to the "server_calc_type" attribute of the feature in performance_db.

Returns:

  • str

    Name of the feature calculator.

object property

Object for which the feature is calculated. This will be defined in the constructor and cannot be changed.

Returns:

  • str

    Object name for which the feature is calculated.

requirements property

List of requirements of the feature calculator. Is defined in child classes of FeatureCalculator.

Returns:

  • dict[str, list[CalculationRequirement]]

    Dict of requirements.

    The keys are the names of the classes of the requirements and the values are lists of requirements of that class.

    For example: {"RequiredFeatures": [RequiredFeatures(...), RequiredFeatures(...)], "RequiredObjects": [RequiredObjects(...)]}

result property

Result of the calculation. This is None until the method "calculate" is called.

Returns:

  • DataFrame | None

    Polars DataFrame with a "timestamp" column and one or more feature value columns. None until calculate is called.

calculate(period, save_into=None, cached_data=None, **kwargs)

Run the calculation for the given period and optionally save the result.

Calls _compute to get the result, stores it in result, then calls save. Subclasses should implement _compute instead of overriding this method.

Parameters:

  • period

    (DateTimeRange) –

    Period for which the feature will be calculated.

  • save_into

    (Literal['all', 'performance_db'] | None, default: None ) –
    • "all": save in performance_db and bazefield.
    • "performance_db": save only in performance_db.
    • None: do not save.

    By default None.

  • cached_data

    (DataFrame | None, default: None ) –

    Polars DataFrame with features already fetched/calculated. Passed to _compute to enable chained calculations without re-querying performance_db. By default None.

  • **kwargs

    Forwarded to save.

Returns:

  • DataFrame

    Polars DataFrame with a "timestamp" column and one or more feature value columns.

Source code in echo_energycalc/feature_calc_core.py
Python
def calculate(
    self,
    period: DateTimeRange,
    save_into: Literal["all", "performance_db"] | None = None,
    cached_data: pl.DataFrame | None = None,
    **kwargs,
) -> pl.DataFrame:
    """
    Run the calculation for the given period and optionally save the result.

    Calls :meth:`_compute` to get the result, stores it in :attr:`result`,
    then calls :meth:`save`. Subclasses should implement :meth:`_compute` instead
    of overriding this method.

    Parameters
    ----------
    period : DateTimeRange
        Period for which the feature will be calculated.
    save_into : Literal["all", "performance_db"] | None, optional
        - ``"all"``: save in performance_db and bazefield.
        - ``"performance_db"``: save only in performance_db.
        - ``None``: do not save.

        By default None.
    cached_data : pl.DataFrame | None, optional
        Polars DataFrame with features already fetched/calculated. Passed to
        ``_compute`` to enable chained calculations without re-querying
        performance_db. By default None.
    **kwargs
        Forwarded to :meth:`save`.

    Returns
    -------
    pl.DataFrame
        Polars DataFrame with a ``"timestamp"`` column and one or more feature value columns.
    """
    result = self._compute(period, cached_data=cached_data)
    self._result = result
    self.save(save_into=save_into, **kwargs)
    return result

save(save_into=None, **kwargs)

Method to save the calculated feature values in performance_db.

Parameters:

  • save_into

    (Literal['all', 'performance_db'] | None, default: None ) –

    Controls where the result is saved:
    • "all": save in performance_db and bazefield.
    • "performance_db": save only in performance_db.
    • None: do not save.

    By default None.

  • **kwargs

    (dict, default: {} ) –

    Not being used at the moment. Here only for compatibility.

Source code in echo_energycalc/feature_calc_core.py
Python
def save(
    self,
    save_into: Literal["all", "performance_db"] | None = None,
    **kwargs,  # noqa: ARG002
) -> None:
    """
    Method to save the calculated feature values in performance_db.

    Parameters
    ----------
    save_into : Literal["all", "performance_db"] | None, optional
        Controls where the result is saved. The options are:
        - "all": The feature will be saved in performance_db and bazefield.
        - "performance_db": the feature will be saved only in performance_db.
        - None: The feature will not be saved.

        By default None.
    **kwargs : dict, optional
        Not being used at the moment. Here only for compatibility.
    """
    # checking arguments
    if not isinstance(save_into, str | type(None)):
        raise TypeError(f"save_into must be a string or None, not {type(save_into)}")
    if isinstance(save_into, str) and save_into not in ["all", "performance_db"]:
        raise ValueError(f"save_into must be 'all', 'performance_db' or None, not {save_into}")

    # checking if calculation was done
    if self.result is None:
        raise ValueError(
            "The calculation was not done. Please call 'calculate' before calling 'save'.",
        )

    if save_into is None:
        return

    upload_to_bazefield = save_into == "all"

    if not isinstance(self.result, pl.DataFrame):
        raise TypeError(f"result must be a polars DataFrame, not {type(self.result)}.")
    if "timestamp" not in self.result.columns:
        raise ValueError("result DataFrame must contain a 'timestamp' column.")

    # rename feature columns to "object@feature" format expected by perfdb polars insert
    feat_cols = [c for c in self.result.columns if c != "timestamp"]
    result_pl = self.result.rename({col: f"{self.object}@{col}" for col in feat_cols})

    self._perfdb.features.values.series.insert(
        df=result_pl,
        on_conflict="update",
        bazefield_upload=upload_to_bazefield,
    )