
Solar Resource MET Validation

Overview

FeatureCalcSolarResourceMet calculates validated Irradiance (POA) and Module Temperature for simple weather stations (MET masts). It applies physical and operational filters to the raw data and uses spatial redundancy to fill gaps or invalid periods with data from neighboring towers.

The calculation outputs highly reliable solar resource metrics at a 5-minute resolution, ensuring that downstream performance models and energy loss calculators use accurate baseline data.


Calculation Logic

1. Astronomical Calculations

Using the pvlib library, the calculator computes the sun's position (elevation, zenith, azimuth) for each timestamp based on the tower's geographic coordinates and the site's altitude.

It then simulates a theoretical single-axis tracker with backtracking to calculate the ideal target_angle, using the site's Ground Coverage Ratio (gcr) and maximum rotation angle (max_angle).
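The production code relies on pvlib for both steps. As a rough illustration of the first step only, here is a first-order solar-elevation estimate using the classic declination/hour-angle approximation. This is not pvlib's algorithm, and `day_of_year`/`solar_hour` are simplifications of the real timestamp handling:

```python
import math

def solar_elevation_deg(latitude_deg: float, day_of_year: int, solar_hour: float) -> float:
    """First-order solar elevation from the declination/hour-angle
    approximation (the calculator itself uses pvlib's far more
    accurate solar position model)."""
    # Approximate solar declination (Cooper's formula), in degrees
    declination = -23.45 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: 15 degrees per hour away from solar noon
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, dec, ha = map(math.radians, (latitude_deg, declination, hour_angle))
    sin_elev = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)
    return math.degrees(math.asin(sin_elev))

# Near the equator at solar noon on an equinox, the sun is close to the zenith
print(round(solar_elevation_deg(0.0, 80, 12.0), 1))
```

The sign of this elevation is what drives the nighttime treatment described later: timestamps with negative elevation are forced to zero irradiance.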

2. Validation Criteria (Main Tower)

The raw Irradiance and Module Temperature data from the main MET tower are subjected to several validity checks. If any of the following conditions are met, the data for that timestamp is marked as invalid (null):

| Condition | Description |
| --- | --- |
| Flatline (Frozen Data) | The absolute differences between consecutive readings sum to 0 over 3 periods (15 minutes). Indicates a stuck sensor. |
| Physical Limits | Irradiance > 1832 W/m² OR Module Temperature > 85 °C. |
| Tracker Misalignment | The reference tracker's PositionActual deviates from the theoretical target_angle by more than 5°. |
| Nebula State Misalignment | If has_nebula is True and the Nebula system is active (1), data is invalid if the actual position deviates from the target by more than 10°. |
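The first three checks can be sketched as plain functions. This is an illustrative reading of the table, not the calculator's actual implementation; the function names, the `history` argument, and the interpretation of "3 periods" as three consecutive differences (four readings) are assumptions:

```python
IRR_LIMIT = 1832.0   # W/m², physical plausibility limit from the table above
TEMP_LIMIT = 85.0    # °C
TRACKER_TOL = 5.0    # degrees of allowed tracker misalignment

def is_flatline(readings: list[float]) -> bool:
    """Frozen-sensor check: the absolute differences between the last
    consecutive readings sum to 0 over 3 periods (15 minutes at 5-min data)."""
    diffs = [abs(b - a) for a, b in zip(readings, readings[1:])]
    return len(diffs) >= 3 and sum(diffs[-3:]) == 0.0

def is_valid_sample(irr: float, temp: float, position_actual: float,
                    target_angle: float, history: list[float]) -> bool:
    """Apply the physical-limit, misalignment and flatline checks in turn."""
    if irr > IRR_LIMIT or temp > TEMP_LIMIT:
        return False                      # physically implausible reading
    if abs(position_actual - target_angle) > TRACKER_TOL:
        return False                      # tracker misaligned vs. theory
    if is_flatline(history + [irr]):
        return False                      # sensor appears stuck
    return True
```

A timestamp failing any check is nulled out, which is what triggers the spatial-redundancy fallback in the next step.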

3. Spatial Redundancy (Fallback Logic)

To prevent data gaps when the main tower fails validation or loses communication, the calculator searches for valid data from a predefined list of neighboring towers (redundancy_towers).

This is evaluated sequentially per timestamp:

1. Main Tower: Uses the main tower's data if it passes all validation criteria.
2. Neighbor Fallback: If the main tower is invalid, it falls back to the first neighbor in the list, provided the neighbor's raw data falls within safe physical limits (Irr ≤ 1832 W/m², Temp ≤ 85 °C).
3. Subsequent Neighbors: If the first neighbor is also invalid, it checks the second, and so on.
4. Zero Fallback: If all redundancy towers fail or are missing, the value defaults to 0.0.

4. Nighttime Treatment

Regardless of the measured or fallback values, solar irradiance is strictly forced to zero during nighttime periods:

```text
Final_Irradiance = 0.0   [when solar_elevation < 0]
```
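As a one-liner, the rule amounts to (function name assumed for illustration):

```python
def final_irradiance(validated_irr: float, solar_elevation_deg: float) -> float:
    # Irradiance is forced to zero whenever the sun is below the horizon,
    # regardless of what the sensors or the fallback chain reported.
    return 0.0 if solar_elevation_deg < 0 else validated_irr
```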

Outputs

The calculator produces two separate features simultaneously:

| Feature | Description |
| --- | --- |
| ValidIrradiancePOA_5min.AVG | Validated Plane of Array (POA) Irradiance, accounting for tracker alignment and spatial redundancy (W/m²). |
| ValidModuleTemp_5min.AVG | Validated Module Temperature, following the same redundancy and validation logic (°C). |

Database Requirements

Feature Attribute

| Attribute | Value |
| --- | --- |
| server_calc_type | solar_resource_met_validation |

Object Attributes

Target Object (MET Tower):

| Attribute | Required | Description |
| --- | --- | --- |
| latitude | Yes | Geographic latitude (decimal degrees). Used for pvlib. |
| longitude | Yes | Geographic longitude (decimal degrees). Used for pvlib. |
| reference_tracker | Yes | Name of the tracker object used to check alignment (e.g., "BRR-BRR15-TS1-INV20-T01"). |
| redundancy_towers | Yes | Ordered list of neighboring MET towers to use as fallback (e.g., ["BRR-BRR16-MET1", ...]). |

Site Object (e.g., 'BRR'):

| Attribute | Required | Description |
| --- | --- | --- |
| altitude | Yes | Site altitude in meters. Used for pvlib calculations. |
| has_nebula | Yes | Boolean indicating if the site utilizes the Nebula tracker control system. |
| gcr | Yes | Ground Coverage Ratio. Used for pvlib backtracking calculations. |
| max_angle | Yes | Maximum tracker rotation angle (degrees). |
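Putting the two attribute tables together, a configuration for one tower and its site might look like the following sketch. All values are illustrative, not real site data:

```python
# Hypothetical attribute sets matching the tables above (values made up)
met_tower_attributes = {
    "latitude": -10.5,                        # decimal degrees
    "longitude": -37.8,                       # decimal degrees
    "reference_tracker": "BRR-BRR15-TS1-INV20-T01",
    "redundancy_towers": ["BRR-BRR16-MET1", "BRR-BRR17-MET1"],  # ordered fallback list
}

site_attributes = {
    "altitude": 250.0,     # metres, used by pvlib
    "has_nebula": True,    # Nebula tracker control system present
    "gcr": 0.45,           # ground coverage ratio for backtracking
    "max_angle": 60.0,     # maximum tracker rotation, degrees
}
```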

Features (From Bazefield)

| Feature | Object Target | Description |
| --- | --- | --- |
| IrradiancePOA_5min.AVG | Main & Neighbors | Raw Plane of Array Irradiance (W/m²). |
| ModuleTemp_5min.AVG | Main & Neighbors | Raw Module Temperature (°C). |
| PositionActual_5min.AVG | reference_tracker | Actual angular position of the reference tracker. |
| NebulaState_5min.REP | reference_tracker | Operating state of the Nebula system (required only if has_nebula is True). |

Class Definition

FeatureCalcSolarResourceMet(object_name, feature)

Class to calculate validated Irradiance and Module Temperature with spatial redundancy.

object_name : str
    Name of the MET tower (e.g., 'BRR-BRR15-MET1').
feature : str
    Base feature name (can be a dummy, as multiple features are calculated).

Source code in echo_energycalc/feature_calc_solar_resource_met.py
Python
def __init__(self, object_name: str, feature: str) -> None:
    """
    Parameters
    ----------
    object_name : str
        Name of the MET tower (e.g., 'BRR-BRR15-MET1').
    feature : str
        Base feature name (can be dummy, as we calculate multiple).
    """
    super().__init__(object_name, feature)

    # Extracting site name from the object name (assumes format "Site-Object-..."), e.g., "BRR-BRR15-MET1" -> "BRR"
    self.site_name = self.object.split("-")[0]

    # 1. Requiring object attributes for both the MET tower and the site
    self._add_requirement(
        RequiredObjectAttributes(
            {
                self.object: [
                    "latitude",
                    "longitude",
                    "reference_tracker",
                    "redundancy_towers",
                ],
                self.site_name: [
                    "altitude",
                    "has_nebula",
                    "gcr",
                    "max_angle",
                ],
            },
        ),
    )
    self._fetch_requirements()

    # Extracting attributes into class variables
    obj_attrs = self._requirement_data("RequiredObjectAttributes")[self.object]
    site_attrs = self._requirement_data("RequiredObjectAttributes")[self.site_name]

    self.latitude = obj_attrs["latitude"]
    self.longitude = obj_attrs["longitude"]
    self.ref_tracker = obj_attrs["reference_tracker"]
    self.redundancy_towers = obj_attrs["redundancy_towers"]  # Expected to be a list of strings

    self.altitude = site_attrs["altitude"]
    self.has_nebula = site_attrs["has_nebula"]
    self.gcr = site_attrs["gcr"]
    self.max_angle = site_attrs["max_angle"]

    # 2. Defining Features dynamically (Main Tower + Tracker + Neighbors)
    features = {
        self.object: ["IrradiancePOA_5min.AVG_b#", "ModuleTemp_5min.AVG_b#"],
        self.ref_tracker: ["PositionActual_5min.AVG_b#"],
    }

    if self.has_nebula:
        features[self.ref_tracker].append("NebulaState_5min.REP_b#")

    for neighbor in self.redundancy_towers:
        features[neighbor] = ["IrradiancePOA_5min.AVG_b#", "ModuleTemp_5min.AVG_b#"]

    self._add_requirement(RequiredFeatures(features=features))

feature property

Feature that is calculated. This will be defined in the constructor and cannot be changed.

Returns:

  • str

    Name of the feature that is calculated.

name property

Name of the feature calculator. Is defined in child classes of FeatureCalculator.

This must be equal to the "server_calc_type" attribute of the feature in performance_db.

Returns:

  • str

    Name of the feature calculator.

object property

Object for which the feature is calculated. This will be defined in the constructor and cannot be changed.

Returns:

  • str

    Object name for which the feature is calculated.

requirements property

List of requirements of the feature calculator. Is defined in child classes of FeatureCalculator.

Returns:

  • dict[str, list[CalculationRequirement]]

    Dict of requirements.

    The keys are the names of the classes of the requirements and the values are lists of requirements of that class.

    For example: {"RequiredFeatures": [RequiredFeatures(...), RequiredFeatures(...)], "RequiredObjects": [RequiredObjects(...)]}

result property

Result of the calculation. This is None until the method "calculate" is called.

Returns:

  • DataFrame | None

    Polars DataFrame with a "timestamp" column and one or more feature value columns. None until calculate is called.

calculate(period, save_into=None, cached_data=None, **kwargs)

Run the calculation for the given period and optionally save the result.

Calls `_compute` to get the result, stores it in `result`, then calls `save`. Subclasses should implement `_compute` instead of overriding this method.

Parameters:

  • period

    (DateTimeRange) –

    Period for which the feature will be calculated.

  • save_into

    (Literal['all', 'performance_db'] | None, default: None ) –
    • "all": save in performance_db and bazefield.
    • "performance_db": save only in performance_db.
    • None: do not save.

    By default None.

  • cached_data

    (DataFrame | None, default: None ) –

    Polars DataFrame with features already fetched/calculated. Passed to _compute to enable chained calculations without re-querying performance_db. By default None.

Returns:

  • DataFrame

    Polars DataFrame with a "timestamp" column and one or more feature value columns.

Source code in echo_energycalc/feature_calc_core.py
Python
def calculate(
    self,
    period: DateTimeRange,
    save_into: Literal["all", "performance_db"] | None = None,
    cached_data: pl.DataFrame | None = None,
    **kwargs,
) -> pl.DataFrame:
    """
    Run the calculation for the given period and optionally save the result.

    Calls :meth:`_compute` to get the result, stores it in :attr:`result`,
    then calls :meth:`save`. Subclasses should implement :meth:`_compute` instead
    of overriding this method.

    Parameters
    ----------
    period : DateTimeRange
        Period for which the feature will be calculated.
    save_into : Literal["all", "performance_db"] | None, optional
        - ``"all"``: save in performance_db and bazefield.
        - ``"performance_db"``: save only in performance_db.
        - ``None``: do not save.

        By default None.
    cached_data : pl.DataFrame | None, optional
        Polars DataFrame with features already fetched/calculated. Passed to
        ``_compute`` to enable chained calculations without re-querying
        performance_db. By default None.

    Returns
    -------
    pl.DataFrame
        Polars DataFrame with a ``"timestamp"`` column and one or more feature value columns.
    """
    result = self._compute(period, cached_data=cached_data)
    self._result = result
    self.save(save_into=save_into, **kwargs)
    return result

save(save_into=None, **kwargs)

Method to save the calculated feature values in performance_db.

Parameters:

  • save_into

    (Literal['all', 'performance_db'] | None, default: None ) –

    Where to save the result. The options are:
    • "all": the feature will be saved in performance_db and bazefield.
    • "performance_db": the feature will be saved only in performance_db.
    • None: the feature will not be saved.

    By default None.

  • **kwargs

    (dict, default: {} ) –

    Not being used at the moment. Here only for compatibility.

Source code in echo_energycalc/feature_calc_core.py
Python
def save(
    self,
    save_into: Literal["all", "performance_db"] | None = None,
    **kwargs,  # noqa: ARG002
) -> None:
    """
    Method to save the calculated feature values in performance_db.

    Parameters
    ----------
    save_into : Literal["all", "performance_db"] | None, optional
        Argument that will be passed to the method "save". The options are:
        - "all": The feature will be saved in performance_db and bazefield.
        - "performance_db": the feature will be saved only in performance_db.
        - None: The feature will not be saved.

        By default None.
    **kwargs : dict, optional
        Not being used at the moment. Here only for compatibility.
    """
    # checking arguments
    if not isinstance(save_into, str | type(None)):
        raise TypeError(f"save_into must be a string or None, not {type(save_into)}")
    if isinstance(save_into, str) and save_into not in ["all", "performance_db"]:
        raise ValueError(f"save_into must be 'all', 'performance_db' or None, not {save_into}")

    # checking if calculation was done
    if self.result is None:
        raise ValueError(
            "The calculation was not done. Please call 'calculate' before calling 'save'.",
        )

    if save_into is None:
        return

    upload_to_bazefield = save_into == "all"

    if not isinstance(self.result, pl.DataFrame):
        raise TypeError(f"result must be a polars DataFrame, not {type(self.result)}.")
    if "timestamp" not in self.result.columns:
        raise ValueError("result DataFrame must contain a 'timestamp' column.")

    # rename feature columns to "object@feature" format expected by perfdb polars insert
    feat_cols = [c for c in self.result.columns if c != "timestamp"]
    result_pl = self.result.rename({col: f"{self.object}@{col}" for col in feat_cols})

    self._perfdb.features.values.series.insert(
        df=result_pl,
        on_conflict="update",
        bazefield_upload=upload_to_bazefield,
    )