LEAG-COALLOG/notebooks/first_opt_model.ipynb
2026-03-17 12:00:21 +01:00

{
"cells": [
{
"cell_type": "markdown",
"id": "5d518102",
"metadata": {},
"source": [
"# Rohkohle-Optimierungsmodell (Entwurf)\n",
"\n",
"Übersichtliche Struktur für Datenaufbereitung und ersten Modellaufbau. Dieses Notebook ist nun durchgehend kommentiert, sodass jede Code- und Markdown-Zelle den jeweiligen Zweck erklärt.\n",
"\n",
"Die Schritte decken Datenloading, Parametrisierung, Nebenbedingungen, Zielfunktion sowie Ergebnisaufbereitung ab."
]
},
{
"cell_type": "markdown",
"id": "9d194127",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Imports, Anzeigeoptionen und Laden der vorverarbeiteten Parquet-Tabellen."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ace0fee6",
"metadata": {},
"outputs": [],
"source": [
"# Basisimporte für Datenaufbereitung, Optimierung und Visualisierung\n",
"import pandas as pd\n",
"import numpy as np\n",
"import pyomo.environ as pyo\n",
"from pathlib import Path\n",
"import matplotlib.pyplot as plt\n",
"\n",
"pd.set_option('display.max_columns', None) # alle Spalten anzeigen\n",
"pd.set_option('display.width', 0) "
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "df0cf11b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"dict_keys(['Verfuegbarkeiten', 'bounds_power_plants', 'bunker', 'foerderkapaz', 'kohle_mix', 'rohkohlebedarf', 'veredelung_bounds', 'verladungskap', 'zugdurchlass'])"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Alle verfügbaren Parquet-Tabellen aus dem processed-Verzeichnis laden\n",
"tables = {}\n",
"\n",
"data_dir = Path('../data/processed')\n",
"for parquet_file in sorted(data_dir.glob('*.parquet')):\n",
" tables[parquet_file.stem] = pd.read_parquet(parquet_file)\n",
"\n",
"tables.keys()"
]
},
{
"cell_type": "markdown",
"id": "f00d1a9f",
"metadata": {},
"source": [
"## Datenaufbereitung\n",
"\n",
"Bedarf und Bounds aus den geladenen Tabellen vorbereiten und für die Modellparameter formen."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a1c72993",
"metadata": {},
"outputs": [],
"source": [
"bedarf = tables['rohkohlebedarf'][[\n",
" 'datum', 'JW', 'SP', 'BW3', 'BW4',\n",
" 'Veredel_Nochtener', 'Veredel_Welzower'\n",
"]].copy()\n",
"\n",
"# datetime konvertieren und Wochentag ergänzen\n",
"bedarf['datum'] = pd.to_datetime(bedarf['datum'])\n",
"bedarf['weekday'] = bedarf['datum'].dt.weekday\n",
"\n",
"bounds_power_plants = tables['bounds_power_plants']"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ad9461a9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" kraftwerk zeitraum minus_pct plus_pct minus plus\n",
"0 Jänschwalde (JW) pro Tag -0.25 0.25 -20000.0 20000.0\n",
"1 Jänschwalde (JW) pro Woche NaN NaN -2000.0 2000.0\n",
"2 Jänschwalde (JW) pro Monat NaN NaN -2000.0 2000.0\n",
"3 Schwarze Pumpe (SP) pro Tag -0.10 0.10 -5500.0 5500.0\n",
"4 Schwarze Pumpe (SP) pro Woche NaN NaN -2000.0 2000.0\n",
"5 Schwarze Pumpe (SP) pro Monat NaN NaN -2000.0 2000.0\n",
"6 Boxberg Werk 3 (BW3) pro Tag -0.05 0.05 -1000.0 1000.0\n",
"7 Boxberg Werk 3 (BW3) pro Woche NaN NaN -1000.0 1000.0\n",
"8 Boxberg Werk 3 (BW3) pro Monat NaN NaN -500.0 500.0\n",
"9 Boxberg Werk 4 (BW4) pro Tag -0.05 0.05 -1000.0 1000.0\n",
"10 Boxberg Werk 4 (BW4) pro Woche NaN NaN -500.0 500.0\n",
"11 Boxberg Werk 4 (BW4) pro Monat NaN NaN -500.0 500.0\n",
"12 Gesamt (JW+SP+BW3+BW4) pro Woche NaN NaN -2000.0 2000.0\n",
"13 Gesamt (JW+SP+BW3+BW4) pro Monat NaN NaN -500.0 500.0\n"
]
}
],
"source": [
"# Kontrolle der Bound-Tabelle, um Spaltennamen und Kohlesorten zu prüfen\n",
"print(tables['bounds_power_plants'])"
]
},
{
"cell_type": "markdown",
"id": "438032e6",
"metadata": {},
"source": [
"### Kalender- und Bound-Vorbereitung\n",
"\n",
"Kalenderwoche, Modell-Tags sowie Tagesgrenzen aus den Bounds ableiten. Der Abschnitt erklärt, welche Hilfsfunktionen Bounds aus den Tabellen ziehen."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "0469f70c",
"metadata": {},
"outputs": [],
"source": [
"# Kalender- und Toleranzfunktionen vorbereiten\n",
"# Kalenderwoche aus dem Datum ableiten\n",
"bedarf['week'] = bedarf['datum'].dt.isocalendar().week\n",
"\n",
"# Wochentagscode (0=Mo, ..., 6=So) auf Modell-Tags abbilden\n",
"weekday_map = {5: 'Sa', 6: 'So', 0: 'Mo', 1: 'Di', 2: 'Mi', 3: 'Do', 4: 'Fr'}\n",
"bedarf['day'] = bedarf['weekday'].map(weekday_map)\n",
"\n",
"\n",
"# Mapping Woche/Tag -> Datum für spätere Ergebnis-Tabellen\n",
"wd_to_date = (\n",
" bedarf.sort_values(\"datum\")\n",
" .drop_duplicates(subset=[\"week\", \"day\"])\n",
" .set_index([\"week\", \"day\"])[\"datum\"]\n",
" .to_dict()\n",
")\n",
"# Nur Tagesgrenzen verwenden\n",
"bounds_day = bounds_power_plants[bounds_power_plants['zeitraum'] == 'pro Tag'].copy()\n",
"\n",
"# Passende Bounds-Zeile für ein Kraftwerk finden\n",
"def match_kraftwerk_row(df: pd.DataFrame, j_key: str) -> pd.Series:\n",
" if j_key == 'J':\n",
" mask = df['kraftwerk'].str.contains('JW')\n",
" elif j_key == 'SP':\n",
" mask = df['kraftwerk'].str.contains('SP')\n",
" elif j_key == 'B3':\n",
" mask = df['kraftwerk'].str.contains('BW3')\n",
" elif j_key == 'B4':\n",
" mask = df['kraftwerk'].str.contains('BW4')\n",
" else:\n",
" mask = np.zeros(len(df), dtype=bool)\n",
"\n",
" row = df.loc[mask]\n",
" if row.empty:\n",
" raise ValueError(f'Keine Bounds Zeile für Kraftwerk {j_key} gefunden')\n",
" return row.iloc[0]"
]
},
{
"cell_type": "markdown",
"id": "5f72f183",
"metadata": {},
"source": [
"## Modellaufbau\n",
"\n",
"Grundlegende Mengen und Sets definieren, bevor Parameter und Variablen gefüllt werden."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9c0f1d1a",
"metadata": {},
"outputs": [],
"source": [
"model = pyo.ConcreteModel()\n",
"\n",
"# Menge der Verbraucher im Modell (inkl. Veredlung)\n",
"J_POWER = ['J', 'SP', 'B3', 'B4']\n",
"model.J = pyo.Set(initialize=J_POWER + ['V'])\n",
"\n",
"# Wochen aus den Daten ableiten\n",
"weeks = sorted(bedarf['week'].unique().tolist())\n",
"model.W = pyo.Set(initialize=weeks)\n",
"\n",
"# Feste Mengen für Tage, Schichten und Quellen\n",
"model.D = pyo.Set(initialize=['Sa', 'So', 'Mo', 'Di', 'Mi', 'Do', 'Fr'])\n",
"model.S = pyo.Set(initialize=['F', 'S', 'N'])\n",
"model.I = pyo.Set(initialize=['Reichwalde', 'Nochten', 'Welzow'])\n",
"\n",
"# Kohlesorten-Quellen (für Veredlung)\n",
"I_W = ['Welzow'] # Welzow-Kohle\n",
"I_N = ['Nochten'] # Nochtener-Kohle\n"
]
},
{
"cell_type": "markdown",
"id": "d8d613dc",
"metadata": {},
"source": [
"## Parameterdefinition\n",
"\n",
"Bedarfe, Veredlungsanteile und Tagesabweichungsgrenzen. Alle Parameter werden als `mutable` angelegt, damit sie aus den Daten überschrieben werden können."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "6680154b",
"metadata": {},
"outputs": [],
"source": [
"# Tagesbedarf der Kraftwerke d_{j,w,d}\n",
"model.d = pyo.Param(model.J, model.W, model.D, initialize=0.0, mutable=True, within=pyo.NonNegativeReals)\n",
"\n",
"# Veredlung: Bedarfsparameter nach Kohlesorte\n",
"model.dV_N = pyo.Param(model.W, model.D, initialize=0.0, mutable=True)\n",
"model.dV_W = pyo.Param(model.W, model.D, initialize=0.0, mutable=True)\n",
"\n",
"# Tagesabhängige Abweichungsgrenzen\n",
"model.a_min_day = pyo.Param(model.J, model.W, model.D, initialize=0.0, mutable=True)\n",
"model.a_max_day = pyo.Param(model.J, model.W, model.D, initialize=0.0, mutable=True)"
]
},
{
"cell_type": "markdown",
"id": "b01f3f37",
"metadata": {},
"source": [
"### Parameterbefüllung aus den Daten\n",
"\n",
"Bedarfs- und Abweichungswerte je Woche/Tag in das Modell schreiben. Die Kommentare in den folgenden Zellen erläutern jeden Zwischenschritt."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7d550976",
"metadata": {},
"outputs": [],
"source": [
"# Kraftwerk-Bounds (ohne Veredlung)\n",
"column_map = {'JW': 'J', 'SP': 'SP', 'BW3': 'B3', 'BW4': 'B4'}\n",
"bounds_by_j = {j: match_kraftwerk_row(bounds_day, j) for j in model.J if j != 'V'}\n",
"\n",
"# Veredlung-Bounds laden\n",
"ver_bounds = tables['veredelung_bounds']\n",
"v_day = ver_bounds[ver_bounds['zeitraum'] == 'pro Tag']\n",
"v_week = ver_bounds[ver_bounds['zeitraum'] == 'pro Woche']\n",
"v_month = ver_bounds[ver_bounds['zeitraum'] == 'pro Monat']\n",
"\n",
"def match_ver_row(df, label_exact):\n",
" row = df[df['kohlesorte'] == label_exact]\n",
" if row.empty:\n",
" raise ValueError(f'Keine Veredlung-Bounds für {label_exact}')\n",
" return row.iloc[0]\n",
"\n",
"# Vorauswahl der Bounds-Zeilen nach Kohlesorte\n",
"ver_row_day_N = match_ver_row(v_day, 'Nochtener-Kohle')\n",
"ver_row_day_W = match_ver_row(v_day, 'Welzower-Kohle')\n",
"ver_row_day_ALL = match_ver_row(v_day, 'Gesamt (NO+WZ)')\n",
"\n",
"ver_row_week_N = match_ver_row(v_week, 'Nochtener-Kohle')\n",
"ver_row_week_W = match_ver_row(v_week, 'Welzower-Kohle')\n",
"ver_row_week_ALL = match_ver_row(v_week, 'Gesamt (NO+WZ)')\n",
"\n",
"ver_row_month_N = match_ver_row(v_month, 'Nochtener-Kohle')\n",
"ver_row_month_W = match_ver_row(v_month, 'Welzower-Kohle')\n",
"ver_row_month_ALL = match_ver_row(v_month, 'Gesamt (NO+WZ)')\n",
"\n",
"for _, row in bedarf.iterrows():\n",
" w = int(row['week'])\n",
" d = row['day']\n",
"\n",
" # Veredlungsbedarf nach Kohlesorte — prüfen auf NaN\n",
" if (w in model.W) and (d in model.D):\n",
" vN = row.get('Veredel_Nochtener', np.nan)\n",
" if not pd.isna(vN):\n",
" model.dV_N[w, d] = float(vN)\n",
" vW = row.get('Veredel_Welzower', np.nan)\n",
" if not pd.isna(vW):\n",
" model.dV_W[w, d] = float(vW)\n",
"\n",
" # Kraftwerksbedarfe und Toleranzen\n",
" for col, j in column_map.items():\n",
" if not ((j in model.J) and (w in model.W) and (d in model.D)):\n",
" continue\n",
"\n",
" val = row.get(col, np.nan)\n",
" if pd.isna(val):\n",
" continue\n",
"\n",
" d_nom = float(val)\n",
" model.d[j, w, d] = d_nom\n",
"\n",
" b = bounds_by_j[j]\n",
" lower_candidates = []\n",
" upper_candidates = []\n",
"\n",
" minus_abs = b.get('minus')\n",
" plus_abs = b.get('plus')\n",
" minus_pct = b.get('minus_pct')\n",
" plus_pct = b.get('plus_pct')\n",
"\n",
" if not pd.isna(minus_abs):\n",
" lower_candidates.append(float(minus_abs))\n",
" if not pd.isna(plus_abs):\n",
" upper_candidates.append(float(plus_abs))\n",
" if not pd.isna(minus_pct):\n",
" lower_candidates.append(float(minus_pct) * d_nom)\n",
" if not pd.isna(plus_pct):\n",
" upper_candidates.append(float(plus_pct) * d_nom)\n",
"\n",
" if not lower_candidates or not upper_candidates:\n",
" raise ValueError(f'Keine gültigen Toleranzen für {j}, Woche {w}, Tag {d}')\n",
"\n",
" a_min = max(lower_candidates)\n",
" a_max = min(upper_candidates)\n",
"\n",
" model.a_min_day[j, w, d] = a_min\n",
" model.a_max_day[j, w, d] = a_max\n"
]
},
{
"cell_type": "markdown",
"id": "881f5ec0",
"metadata": {},
"source": [
"## Variablen und abgeleitete Größen\n",
"\n",
"Transportentscheidungen als Integer-Variablen sowie Expressions für die abgeleiteten Liefermengen."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "0a4884e8",
"metadata": {},
"outputs": [],
"source": [
"# Integer-Transportvariable k in 1000 t; tatsächliche Tonnage x = 1000 * k\n",
"model.k = pyo.Var(model.I, model.J, model.W, model.D, model.S, domain=pyo.NonNegativeIntegers)\n",
"\n",
"def x_rule(model, i, j, w, d, s):\n",
" return 1000 * model.k[i, j, w, d, s]\n",
"\n",
"# x als Expression statt eigene Entscheidungsvariable\n",
"model.x = pyo.Expression(model.I, model.J, model.W, model.D, model.S, rule=x_rule)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "cb8a05e4",
"metadata": {},
"outputs": [],
"source": [
"# Lieferung je Kraftwerk/Woche/Tag/Schicht\n",
"def y_shift_rule(model, j, w, d, s):\n",
" return sum(model.x[i, j, w, d, s] for i in model.I)\n",
"\n",
"model.y_shift = pyo.Expression(model.J, model.W, model.D, model.S, rule=y_shift_rule)\n"
]
},
{
"cell_type": "markdown",
"id": "d9556e33",
"metadata": {},
"source": [
"## Tageslieferung und Abweichungen\n",
"\n",
"Berechnung der Tageslieferungen pro Kraftwerk inklusive Toleranz-Constraints auf Tagesebene."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "5de353dc",
"metadata": {},
"outputs": [],
"source": [
"# Tageslieferung je Kraftwerk: Summe über alle Quellen und Schichten\n",
"def y_rule(model, j, w, d):\n",
" return sum(model.x[i, j, w, d, s] for i in model.I for s in model.S)\n",
"\n",
"model.y = pyo.Expression(model.J, model.W, model.D, rule=y_rule)\n",
"\n",
"# Lieferung muss innerhalb der zulässigen Toleranz um die Nachfrage liegen (nur Kraftwerke)\n",
"J_delivery = [j for j in model.J if j != 'V']\n",
"\n",
"def delivery_tolerance_rule(model, j, w, d):\n",
" return (\n",
" model.d[j, w, d] + model.a_min_day[j, w, d], # Untergrenze\n",
" model.y[j, w, d], # Lieferung\n",
" model.d[j, w, d] + model.a_max_day[j, w, d], # Obergrenze\n",
" )\n",
"\n",
"model.delivery_tolerance = pyo.Constraint(J_delivery, model.W, model.D, rule=delivery_tolerance_rule)\n"
]
},
{
"cell_type": "markdown",
"id": "59d6fd9f",
"metadata": {},
"source": [
"## Wochen-/Monatslieferung und Abweichung\n",
"\n",
"Aggregation der Tageswerte auf Wochen- und Monatsebene inklusive ihrer Toleranzrestriktionen."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "3166c81d",
"metadata": {},
"outputs": [],
"source": [
"# --- Sets & Mapping Woche -> Monat ---\n",
"week_to_month = (\n",
" bedarf.groupby('week')['datum']\n",
" .min()\n",
" .dt.month\n",
" .to_dict()\n",
")\n",
"model.M = pyo.Set(initialize=sorted(set(week_to_month.values())))\n",
"J_power = [j for j in model.J if j != 'V']\n",
"\n",
"# --- Helper: Bounds-Zeile für Zeitraum + Kraftwerk holen ---\n",
"def match_bounds_row(df: pd.DataFrame, j_key: str, zeitraum: str) -> pd.Series:\n",
" label_map = {\n",
" 'J': 'Jänschwalde',\n",
" 'SP': 'Schwarze Pumpe',\n",
" 'B3': 'Boxberg Werk 3',\n",
" 'B4': 'Boxberg Werk 4',\n",
" 'ALL': 'Gesamt',\n",
" }\n",
" label = label_map[j_key]\n",
" row = df[(df['zeitraum'] == zeitraum) & df['kraftwerk'].str.contains(label)]\n",
" if row.empty:\n",
" raise ValueError(f'Keine Bounds-Zeile für {j_key} ({zeitraum})')\n",
" return row.iloc[0]\n",
"\n",
"STEP = 1000 # 1000-t-Schritt durch k\n",
"MIN_STEPS = 1 # mindestens ein Schritt erlauben (bei ALL evtl. 2)\n",
"\n",
"def tol_from_row(row, d_nom, step=STEP, min_steps=MIN_STEPS):\n",
" lowers, uppers = [], []\n",
" if not pd.isna(row.get('minus')):\n",
" lowers.append(float(row['minus']))\n",
" if not pd.isna(row.get('plus')):\n",
" uppers.append(float(row['plus']))\n",
" if not pd.isna(row.get('minus_pct')):\n",
" lowers.append(float(row['minus_pct']) * d_nom)\n",
" if not pd.isna(row.get('plus_pct')):\n",
" uppers.append(float(row['plus_pct']) * d_nom)\n",
" if not lowers or not uppers:\n",
" raise ValueError('Toleranz nicht definiert')\n",
" \n",
" a_min = max(lowers)\n",
" a_max = min(uppers)\n",
" # mindestens ±step*min_steps zulassen, damit 1000er-Rundung reinpasst\n",
" a_min = min(a_min, -min_steps * step)\n",
" a_max = max(a_max, min_steps * step)\n",
" return a_min, a_max\n",
"\n",
"# --- Aggregierte Bedarfe & Toleranzen (mutable) ---\n",
"model.d_week = pyo.Param(model.J, model.W, initialize=0.0, mutable=True)\n",
"model.d_month = pyo.Param(model.J, model.M, initialize=0.0, mutable=True)\n",
"\n",
"model.a_min_week = pyo.Param(model.J, model.W, initialize=0.0, mutable=True)\n",
"model.a_max_week = pyo.Param(model.J, model.W, initialize=0.0, mutable=True)\n",
"model.a_min_month = pyo.Param(model.J, model.M, initialize=0.0, mutable=True)\n",
"model.a_max_month = pyo.Param(model.J, model.M, initialize=0.0, mutable=True)\n",
"\n",
"# Gesamtbedarf/Toleranz (Summe über Kraftwerke)\n",
"model.d_week_total = pyo.Param(model.W, initialize=0.0, mutable=True)\n",
"model.d_month_total = pyo.Param(model.M, initialize=0.0, mutable=True)\n",
"model.a_min_week_total = pyo.Param(model.W, initialize=0.0, mutable=True)\n",
"model.a_max_week_total = pyo.Param(model.W, initialize=0.0, mutable=True)\n",
"model.a_min_month_total = pyo.Param(model.M, initialize=0.0, mutable=True)\n",
"model.a_max_month_total = pyo.Param(model.M, initialize=0.0, mutable=True)\n",
"\n",
"# --- Aggregierte Lieferungen ---\n",
"def y_week_rule(model, j, w):\n",
" return sum(model.y[j, w, d] for d in model.D)\n",
"model.y_week = pyo.Expression(model.J, model.W, rule=y_week_rule)\n",
"\n",
"def y_month_rule(model, j, m):\n",
" weeks_in_m = [w for w, mm in week_to_month.items() if mm == m and w in model.W]\n",
" return sum(model.y_week[j, w] for w in weeks_in_m)\n",
"model.y_month = pyo.Expression(model.J, model.M, rule=y_month_rule)\n",
"\n",
"model.y_week_total = pyo.Expression(model.W, rule=lambda m, w: sum(m.y_week[j, w] for j in J_power))\n",
"model.y_month_total = pyo.Expression(model.M, rule=lambda m, mm: sum(m.y_month[j, mm] for j in J_power))\n",
"\n",
"# --- Befüllen von Wochen/Monat-Bedarfen und Toleranzen ---\n",
"bounds_week = bounds_power_plants[bounds_power_plants['zeitraum'] == 'pro Woche']\n",
"bounds_month = bounds_power_plants[bounds_power_plants['zeitraum'] == 'pro Monat']\n",
"\n",
"for j in J_power:\n",
" for w in model.W:\n",
" d_w = sum(model.d[j, w, d] for d in model.D)\n",
" model.d_week[j, w] = d_w\n",
" row = match_bounds_row(bounds_week, j, 'pro Woche')\n",
" a_min, a_max = tol_from_row(row, d_w)\n",
" model.a_min_week[j, w] = a_min\n",
" model.a_max_week[j, w] = a_max\n",
"\n",
"for j in J_power:\n",
" for m in model.M:\n",
" weeks_in_m = [w for w, mm in week_to_month.items() if mm == m and w in model.W]\n",
" d_m = sum(model.d_week[j, w] for w in weeks_in_m)\n",
" model.d_month[j, m] = d_m\n",
" row = match_bounds_row(bounds_month, j, 'pro Monat')\n",
" a_min, a_max = tol_from_row(row, d_m)\n",
" model.a_min_month[j, m] = a_min\n",
" model.a_max_month[j, m] = a_max\n",
"\n",
"# Gesamtzeilen aus der Tabelle\n",
"row_week_total = match_bounds_row(bounds_week, 'ALL', 'pro Woche')\n",
"row_month_total = match_bounds_row(bounds_month, 'ALL', 'pro Monat')\n",
"for w in model.W:\n",
" d_tot = sum(model.d_week[j, w] for j in J_power)\n",
" model.d_week_total[w] = d_tot\n",
" a_min, a_max = tol_from_row(row_week_total, d_tot)\n",
" model.a_min_week_total[w] = a_min\n",
" model.a_max_week_total[w] = a_max\n",
"\n",
"for m in model.M:\n",
" weeks_in_m = [w for w, mm in week_to_month.items() if mm == m and w in model.W]\n",
" d_tot = sum(model.d_month[j, m] for j in J_power)\n",
" model.d_month_total[m] = d_tot\n",
" a_min, a_max = tol_from_row(row_month_total, d_tot)\n",
" model.a_min_month_total[m] = a_min\n",
" model.a_max_month_total[m] = a_max\n",
"\n",
"# --- Nebenbedingungen: Wochen- / Monats-Toleranz (nur Kraftwerke) ---\n",
"def week_tolerance_rule(model, j, w):\n",
" return (model.d_week[j, w] + model.a_min_week[j, w],\n",
" model.y_week[j, w],\n",
" model.d_week[j, w] + model.a_max_week[j, w])\n",
"model.week_tolerance = pyo.Constraint(J_power, model.W, rule=week_tolerance_rule)\n",
"\n",
"def month_tolerance_rule(model, j, m):\n",
" return (model.d_month[j, m] + model.a_min_month[j, m],\n",
" model.y_month[j, m],\n",
" model.d_month[j, m] + model.a_max_month[j, m])\n",
"model.month_tolerance = pyo.Constraint(J_power, model.M, rule=month_tolerance_rule)\n",
"\n",
"def week_total_rule(model, w):\n",
" return (model.d_week_total[w] + model.a_min_week_total[w],\n",
" model.y_week_total[w],\n",
" model.d_week_total[w] + model.a_max_week_total[w])\n",
"model.week_total_tolerance = pyo.Constraint(model.W, rule=week_total_rule)\n",
"\n",
"def month_total_rule(model, m):\n",
" return (model.d_month_total[m] + model.a_min_month_total[m],\n",
" model.y_month_total[m],\n",
" model.d_month_total[m] + model.a_max_month_total[m])\n",
"model.month_total_tolerance = pyo.Constraint(model.M, rule=month_total_rule)\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "abd9b467",
"metadata": {},
"outputs": [],
"source": [
"# Veredlung: Toleranzen Tag/Woche/Monat und Nebenbedingungen\n",
"# Veredlung-Toleranz-Parameter\n",
"model.a_min_V_day = pyo.Param(['N','W','ALL'], model.W, model.D, initialize=0.0, mutable=True)\n",
"model.a_max_V_day = pyo.Param(['N','W','ALL'], model.W, model.D, initialize=0.0, mutable=True)\n",
"model.a_min_V_week = pyo.Param(['N','W','ALL'], model.W, initialize=0.0, mutable=True)\n",
"model.a_max_V_week = pyo.Param(['N','W','ALL'], model.W, initialize=0.0, mutable=True)\n",
"model.a_min_V_month = pyo.Param(['N','W','ALL'], model.M, initialize=0.0, mutable=True)\n",
"model.a_max_V_month = pyo.Param(['N','W','ALL'], model.M, initialize=0.0, mutable=True)\n",
"\n",
"# Tages-Toleranzen Veredlung\n",
"for w in model.W:\n",
" for d in model.D:\n",
" dN = model.dV_N[w, d]\n",
" dW = model.dV_W[w, d]\n",
" dA = dN + dW\n",
" model.a_min_V_day['N', w, d], model.a_max_V_day['N', w, d] = tol_from_row(ver_row_day_N, dN)\n",
" model.a_min_V_day['W', w, d], model.a_max_V_day['W', w, d] = tol_from_row(ver_row_day_W, dW)\n",
" model.a_min_V_day['ALL', w, d], model.a_max_V_day['ALL', w, d] = tol_from_row(ver_row_day_ALL, dA)\n",
"\n",
"# Wochen-Toleranzen Veredlung\n",
"for w in model.W:\n",
" dN = sum(model.dV_N[w, dd] for dd in model.D)\n",
" dW = sum(model.dV_W[w, dd] for dd in model.D)\n",
" dA = dN + dW\n",
" model.a_min_V_week['N', w], model.a_max_V_week['N', w] = tol_from_row(ver_row_week_N, dN)\n",
" model.a_min_V_week['W', w], model.a_max_V_week['W', w] = tol_from_row(ver_row_week_W, dW)\n",
" model.a_min_V_week['ALL', w], model.a_max_V_week['ALL', w] = tol_from_row(ver_row_week_ALL, dA)\n",
"\n",
"# Monats-Toleranzen Veredlung\n",
"for m in model.M:\n",
" weeks_in_m = [w for w, mm in week_to_month.items() if mm == m and w in model.W]\n",
" dN = sum(model.dV_N[w, dd] for w in weeks_in_m for dd in model.D)\n",
" dW = sum(model.dV_W[w, dd] for w in weeks_in_m for dd in model.D)\n",
" dA = dN + dW\n",
" model.a_min_V_month['N', m], model.a_max_V_month['N', m] = tol_from_row(ver_row_month_N, dN)\n",
" model.a_min_V_month['W', m], model.a_max_V_month['W', m] = tol_from_row(ver_row_month_W, dW)\n",
" model.a_min_V_month['ALL', m], model.a_max_V_month['ALL', m] = tol_from_row(ver_row_month_ALL, dA)\n",
"\n",
"# Veredlung-Constraints (Summe über Schichten)\n",
"def v_w_day_rule(m, w, d):\n",
" return (m.dV_W[w, d] + m.a_min_V_day['W', w, d],\n",
" sum(m.x[i, 'V', w, d, s] for i in I_W for s in m.S),\n",
" m.dV_W[w, d] + m.a_max_V_day['W', w, d])\n",
"model.v_w_day_tol = pyo.Constraint(model.W, model.D, rule=v_w_day_rule)\n",
"\n",
"def v_n_day_rule(m, w, d):\n",
" return (m.dV_N[w, d] + m.a_min_V_day['N', w, d],\n",
" sum(m.x[i, 'V', w, d, s] for i in I_N for s in m.S),\n",
" m.dV_N[w, d] + m.a_max_V_day['N', w, d])\n",
"model.v_n_day_tol = pyo.Constraint(model.W, model.D, rule=v_n_day_rule)\n",
"\n",
"def v_all_day_rule(m, w, d):\n",
" return (m.dV_W[w, d] + m.dV_N[w, d] + m.a_min_V_day['ALL', w, d],\n",
" sum(m.x[i, 'V', w, d, s] for i in I_W + I_N for s in m.S),\n",
" m.dV_W[w, d] + m.dV_N[w, d] + m.a_max_V_day['ALL', w, d])\n",
"model.v_all_day_tol = pyo.Constraint(model.W, model.D, rule=v_all_day_rule)\n",
"\n",
"# Woche\n",
"def v_w_week_rule(m, w):\n",
" return (sum(m.dV_W[w, d] for d in m.D) + m.a_min_V_week['W', w],\n",
" sum(m.x[i, 'V', w, d, s] for i in I_W for d in m.D for s in m.S),\n",
" sum(m.dV_W[w, d] for d in m.D) + m.a_max_V_week['W', w])\n",
"model.v_w_week_tol = pyo.Constraint(model.W, rule=v_w_week_rule)\n",
"\n",
"def v_n_week_rule(m, w):\n",
" return (sum(m.dV_N[w, d] for d in m.D) + m.a_min_V_week['N', w],\n",
" sum(m.x[i, 'V', w, d, s] for i in I_N for d in m.D for s in m.S),\n",
" sum(m.dV_N[w, d] for d in m.D) + m.a_max_V_week['N', w])\n",
"model.v_n_week_tol = pyo.Constraint(model.W, rule=v_n_week_rule)\n",
"\n",
"def v_all_week_rule(m, w):\n",
" return (sum(m.dV_W[w, d] + m.dV_N[w, d] for d in m.D) + m.a_min_V_week['ALL', w],\n",
" sum(m.x[i, 'V', w, d, s] for i in I_W + I_N for d in m.D for s in m.S),\n",
" sum(m.dV_W[w, d] + m.dV_N[w, d] for d in m.D) + m.a_max_V_week['ALL', w])\n",
"model.v_all_week_tol = pyo.Constraint(model.W, rule=v_all_week_rule)\n",
"\n",
"# Monat\n",
"def v_w_month_rule(m, mm):\n",
" weeks_in_m = [w for w, month in week_to_month.items() if month == mm and w in m.W]\n",
" return (sum(m.dV_W[w, d] for w in weeks_in_m for d in m.D) + m.a_min_V_month['W', mm],\n",
" sum(m.x[i, 'V', w, d, s] for i in I_W for w in weeks_in_m for d in m.D for s in m.S),\n",
" sum(m.dV_W[w, d] for w in weeks_in_m for d in m.D) + m.a_max_V_month['W', mm])\n",
"model.v_w_month_tol = pyo.Constraint(model.M, rule=v_w_month_rule)\n",
"\n",
"def v_n_month_rule(m, mm):\n",
" weeks_in_m = [w for w, month in week_to_month.items() if month == mm and w in m.W]\n",
" return (sum(m.dV_N[w, d] for w in weeks_in_m for d in m.D) + m.a_min_V_month['N', mm],\n",
" sum(m.x[i, 'V', w, d, s] for i in I_N for w in weeks_in_m for d in m.D for s in m.S),\n",
" sum(m.dV_N[w, d] for w in weeks_in_m for d in m.D) + m.a_max_V_month['N', mm])\n",
"model.v_n_month_tol = pyo.Constraint(model.M, rule=v_n_month_rule)\n",
"\n",
"def v_all_month_rule(m, mm):\n",
" weeks_in_m = [w for w, month in week_to_month.items() if month == mm and w in m.W]\n",
" return (sum(m.dV_W[w, d] + m.dV_N[w, d] for w in weeks_in_m for d in m.D) + m.a_min_V_month['ALL', mm],\n",
" sum(m.x[i, 'V', w, d, s] for i in I_W + I_N for w in weeks_in_m for d in m.D for s in m.S),\n",
" sum(m.dV_W[w, d] + m.dV_N[w, d] for w in weeks_in_m for d in m.D) + m.a_max_V_month['ALL', mm])\n",
"model.v_all_month_tol = pyo.Constraint(model.M, rule=v_all_month_rule)\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "ad36ed63",
"metadata": {},
"source": [
"## Kohlensortenmischverhältnis\n",
"\n",
"Eingelesene Mischungsverhältnisse pro Kraftwerk kontrollieren und ins Modell abbilden."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "0b9070a8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" kraftwerk kohlesorte ziel_low ziel_high maximal minimal\n",
"0 Jänschwalde Reichwalder-Kohle 0.15 0.30 0.50 0.0\n",
"1 Jänschwalde Nochtener-Kohle 0.00 0.00 0.15 0.0\n",
"2 Jänschwalde Welzower-Kohle 0.70 1.00 1.00 0.0\n",
"3 Schwarze Pumpe Reichwalder-Kohle 0.00 0.30 0.40 0.0\n",
"4 Schwarze Pumpe Nochtener-Kohle 0.50 1.00 1.00 0.0\n",
"5 Schwarze Pumpe Welzower-Kohle 0.00 0.50 1.00 0.0\n",
"6 Boxberg Werk 3 Reichwalder-Kohle 0.35 0.35 0.35 0.0\n",
"7 Boxberg Werk 3 Nochtener-Kohle 0.65 0.65 0.65 0.0\n",
"8 Boxberg Werk 3 Welzower-Kohle 0.00 0.00 1.00 0.0\n",
"9 Boxberg Werk 4 Reichwalder-Kohle 0.30 0.70 0.80 0.0\n",
"10 Boxberg Werk 4 Nochtener-Kohle 0.30 0.70 1.00 0.2\n",
"11 Boxberg Werk 4 Welzower-Kohle 0.00 0.00 0.00 0.0\n"
]
}
],
"source": [
"# Mischungsverhältnisse je Kohlesorte pro Kraftwerk prüfen\n",
"print(tables[\"kohle_mix\"])"
]
},
{
"cell_type": "markdown",
"id": "520acdb9",
"metadata": {},
"source": [
"### Harte Grenze für die Mischverhältnisse"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "44597b29",
"metadata": {},
"outputs": [],
"source": [
"# Mapping und Setzen der Mischungsparameter im Modell\n",
"# Mapping Kraftwerk-Label -> J-Key\n",
"j_map = {\n",
" 'Jänschwalde': 'J',\n",
" 'Schwarze Pumpe': 'SP',\n",
" 'Boxberg Werk 3': 'B3',\n",
" 'Boxberg Werk 4': 'B4',\n",
"}\n",
"\n",
"# Mapping Kohlesorte -> Quelle i\n",
"i_map = {\n",
" 'Reichwalder-Kohle': 'Reichwalde',\n",
" 'Nochtener-Kohle': 'Nochten',\n",
" 'Welzower-Kohle': 'Welzow',\n",
"}\n",
"\n",
"mix_df = tables[\"kohle_mix\"]\n",
"\n",
"# α-Parameter (als Bruchteil, nicht Prozent)\n",
"model.alpha_min = pyo.Param(model.I, model.J, initialize=0.0, mutable=True)\n",
"model.alpha_max = pyo.Param(model.I, model.J, initialize=1.0, mutable=True)\n",
"\n",
"for _, row in mix_df.iterrows():\n",
" j = j_map.get(row['kraftwerk'])\n",
" i = i_map.get(row['kohlesorte'])\n",
" if (i in model.I) and (j in model.J):\n",
" model.alpha_min[i, j] = float(row['minimal']) # z.B. 0.15\n",
" model.alpha_max[i, j] = float(row['maximal']) # z.B. 0.50\n",
"\n",
"# Tageslieferung Xi,j,w,d (Summe über Schichten)\n",
"def x_day_rule(m, i, j, w, d):\n",
" return sum(m.x[i, j, w, d, s] for s in m.S)\n",
"model.x_day = pyo.Expression(model.I, model.J, model.W, model.D, rule=x_day_rule)\n",
"\n",
"# Tagesgesamte Yj,w,d (Summe über Quellen)\n",
"def y_day_rule(m, j, w, d):\n",
" return sum(m.x_day[i, j, w, d] for i in m.I)\n",
"model.y_day = pyo.Expression(model.J, model.W, model.D, rule=y_day_rule)\n",
"\n",
"# Untergrenze: alpha_min * Y <= X\n",
"def mix_lower_rule(m, i, j, w, d):\n",
" return m.alpha_min[i, j] * m.y_day[j, w, d] <= m.x_day[i, j, w, d]\n",
"\n",
"# Obergrenze: X <= alpha_max * Y\n",
"def mix_upper_rule(m, i, j, w, d):\n",
" return m.x_day[i, j, w, d] <= m.alpha_max[i, j] * m.y_day[j, w, d]\n",
"\n",
"model.mix_lower = pyo.Constraint(model.I, model.J, model.W, model.D, rule=mix_lower_rule)\n",
"model.mix_upper = pyo.Constraint(model.I, model.J, model.W, model.D, rule=mix_upper_rule)\n"
]
},
{
"cell_type": "markdown",
"id": "382358f0",
"metadata": {},
"source": [
"### Zielbereich für das Kohlesortenmischverhältnis"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "b3cc2d07",
"metadata": {},
"outputs": [],
"source": [
"# Mapping und Setzen der Mischungsparameter im Modell\n",
"# Mapping Kraftwerk-Label -> J-Key\n",
"j_map = {\n",
" 'Jänschwalde': 'J',\n",
" 'Schwarze Pumpe': 'SP',\n",
" 'Boxberg Werk 3': 'B3',\n",
" 'Boxberg Werk 4': 'B4',\n",
"}\n",
"\n",
"# Mapping Kohlesorte -> Quelle i\n",
"i_map = {\n",
" 'Reichwalder-Kohle': 'Reichwalde',\n",
" 'Nochtener-Kohle': 'Nochten',\n",
" 'Welzower-Kohle': 'Welzow',\n",
"}\n",
"\n",
"mix_df = tables[\"kohle_mix\"]\n",
"\n",
"# Zielband-Parameter (als Bruchteil, nicht Prozent)\n",
"model.alpha_target_low = pyo.Param(model.I, model.J, initialize=0.0, mutable=True)\n",
"model.alpha_target_high = pyo.Param(model.I, model.J, initialize=1.0, mutable=True)\n",
"\n",
"for _, row in mix_df.iterrows():\n",
" j = j_map.get(row['kraftwerk'])\n",
" i = i_map.get(row['kohlesorte'])\n",
" if (i in model.I) and (j in model.J):\n",
" model.alpha_target_low[i, j] = float(row['ziel_low'])\n",
" model.alpha_target_high[i, j] = float(row['ziel_high'])\n"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "c6dcd7f3",
"metadata": {},
"outputs": [],
"source": [
"# Zielband (soft): Abweichungen werden als Slack-Variablen erfasst\n",
"model.mix_target_low_dev = pyo.Var(model.I, model.J, model.W, model.D, domain=pyo.NonNegativeReals)\n",
"model.mix_target_high_dev = pyo.Var(model.I, model.J, model.W, model.D, domain=pyo.NonNegativeReals)\n",
"\n",
"def mix_target_low_rule(m, i, j, w, d):\n",
" return m.x_day[i, j, w, d] + m.mix_target_low_dev[i, j, w, d] >= m.alpha_target_low[i, j] * m.y_day[j, w, d]\n",
"\n",
"def mix_target_high_rule(m, i, j, w, d):\n",
" return m.x_day[i, j, w, d] <= m.alpha_target_high[i, j] * m.y_day[j, w, d] + m.mix_target_high_dev[i, j, w, d]\n",
"\n",
"model.mix_target_low = pyo.Constraint(model.I, model.J, model.W, model.D, rule=mix_target_low_rule)\n",
"model.mix_target_high = pyo.Constraint(model.I, model.J, model.W, model.D, rule=mix_target_high_rule)\n"
]
},
{
"cell_type": "markdown",
"id": "a1158ce7",
"metadata": {},
"source": [
"## Zielfunktion + Weiche Restriktionen\n",
"\n",
"Alle Zielfunktionskomponenten (Abweichungen, Glättung, Boni) werden hier zusammengesetzt."
]
},
{
"cell_type": "markdown",
"id": "98788dcd",
"metadata": {},
"source": [
"### Aufbau der Zielfunktion\n",
"\n",
"Wie die einzelnen Teilfunktionen zusammenwirken und welche Parameter sie skalieren."
]
},
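{
"cell_type": "markdown",
"id": "9a1b2c3d",
"metadata": {},
"source": [
"The assembly could look like the sketch below: plain-Python arithmetic on sample numbers, meant only to make the weighting transparent. The weights `w_dev`, `w_smooth`, `lambda_J` and all sample values here are illustrative assumptions, not this notebook's calibrated parameters."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7e5f0a21",
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: how the objective components could be weighted and combined.\n",
"# All numbers below are assumed sample values, not model output.\n",
"dev_pos = {('J', 1, 'Mo'): 2.0, ('SP', 1, 'Mo'): 0.0} # over-delivery in t\n",
"dev_neg = {('J', 1, 'Mo'): 0.0, ('SP', 1, 'Mo'): 3.0} # under-delivery in t\n",
"smooth_pen = 1.5 # aggregated shift-smoothing slack\n",
"bonus_welzow_J = 4.0 # extra Welzow -> J tonnage, rewarded via a minus sign\n",
"\n",
"w_dev, w_smooth, lambda_J = 10.0, 1.0, 2.0 # assumed relative weights\n",
"\n",
"objective = (\n",
" w_dev * (sum(dev_pos.values()) + sum(dev_neg.values()))\n",
" + w_smooth * smooth_pen\n",
" - lambda_J * bonus_welzow_J\n",
")\n",
"print(objective) # 43.5 with the sample values above"
]
},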
{
"cell_type": "markdown",
"id": "75567f6a",
"metadata": {},
"source": [
"### Glättung der Schichtlieferungen\n",
"\n",
"Penalisiert große Schichtunterschiede je Kraftwerk, um feinere Verteilungspläne zu bevorzugen."
]
},
{
"cell_type": "markdown",
"id": "5474cbbb",
"metadata": {},
"source": [
"### Bonuskomponente\n",
"\n",
"Belohnt optionale Mehrlieferungen von Welzow nach Jänschwalde und kann parametrisch angepasst werden."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "58086035",
"metadata": {},
"outputs": [],
"source": [
   "# Bonusparameter für Welzow → Jänschwalde (anpassbar)\n",
"model.lambda_J = pyo.Param(initialize=100_000, mutable=True, within=pyo.NonNegativeReals)\n",
"\n",
"# Bonus: Liefermenge Welzow → J\n",
"def welzow_to_J(m):\n",
" return sum(\n",
" m.x['Welzow', 'J', w, d, s]\n",
" for w in m.W for d in m.D for s in m.S\n",
" if ('Welzow', 'J', w, d, s) in m.x\n",
" )\n",
"model.bonus_welzow_J = pyo.Expression(rule=welzow_to_J)\n"
]
},
{
"cell_type": "markdown",
"id": "8f1cf472",
"metadata": {},
"source": [
"## Abweichungen und Hilfsvariablen\n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "80558550",
"metadata": {},
"outputs": [],
"source": [
"# Nachfrage pro j,w,d (V nutzt dV_N + dV_W, Rest d)\n",
"def demand_rule(m, j, w, d):\n",
" return m.dV_N[w, d] + m.dV_W[w, d] if j == 'V' else m.d[j, w, d]\n",
"model.demand = pyo.Expression(model.J, model.W, model.D, rule=demand_rule)\n",
"\n",
"# Abweichung: Lieferung minus Nachfrage (für Reporting)\n",
"model.dev = pyo.Expression(model.J, model.W, model.D,\n",
" rule=lambda m, j, w, d: m.y[j, w, d] - m.demand[j, w, d])\n",
"\n",
   "# Betragszerlegung der Abweichung: nur für Kraftwerke (ohne V)\n",
"J_dev = list(J_power) # ohne V\n",
"model.dev_pos = pyo.Var(J_dev, model.W, model.D, domain=pyo.NonNegativeReals)\n",
"model.dev_neg = pyo.Var(J_dev, model.W, model.D, domain=pyo.NonNegativeReals)\n",
"\n",
"def dev_balance_rule(model, j, w, d):\n",
" return model.dev_pos[j, w, d] - model.dev_neg[j, w, d] == model.dev[j, w, d]\n",
"model.dev_balance = pyo.Constraint(J_dev, model.W, model.D, rule=dev_balance_rule)\n"
]
},
{
"cell_type": "markdown",
"id": "01c033ac",
"metadata": {},
"source": [
"## Maximalabweichung pro Tag\n"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "fa37c381",
"metadata": {},
"outputs": [],
"source": [
"# Binärvariable: Tagesabweichung erreicht das Maximum? (nur Kraftwerke)\n",
"model.z_max = pyo.Var(J_power, model.W, model.D, domain=pyo.Binary)\n",
"\n",
"def max_reached_rule(m, j, w, d):\n",
" return m.dev_pos[j, w, d] <= m.a_max_day[j, w, d] * m.z_max[j, w, d]\n",
"model.max_reached = pyo.Constraint(J_power, model.W, model.D, rule=max_reached_rule)\n",
"\n",
"D_list = list(model.D.ordered_data())\n",
"def no_three_in_a_row_rule(m, j, w, idx):\n",
" d1, d2, d3 = D_list[idx:idx+3]\n",
" return m.z_max[j, w, d1] + m.z_max[j, w, d2] + m.z_max[j, w, d3] <= 2\n",
"model.no_three_in_a_row = pyo.Constraint(J_power, model.W, range(len(D_list) - 2), rule=no_three_in_a_row_rule)\n"
]
},
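 {
  "cell_type": "code",
  "execution_count": null,
  "id": "7e4d0a9b",
  "metadata": {},
  "outputs": [],
  "source": [
   "# Mini-Beispiel (eigenständig, Demotage): das Fenster-Indexing der\n",
   "# no_three_in_a_row-Regel. idx adressiert überlappende 3-Tage-Fenster;\n",
   "# pro Fenster gilt dann: Summe der z_max <= 2.\n",
   "D_demo = ['Mo', 'Di', 'Mi', 'Do', 'Fr', 'Sa', 'So']\n",
   "windows = [tuple(D_demo[idx:idx + 3]) for idx in range(len(D_demo) - 2)]\n",
   "windows  # 5 Fenster von ('Mo','Di','Mi') bis ('Fr','Sa','So')\n"
  ]
 },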
{
"cell_type": "markdown",
"id": "2e9d0b63",
"metadata": {},
"source": [
"## Veredlung je Quelle\n"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "8b2b18b1",
"metadata": {},
"outputs": [],
"source": [
"def y_V_nochten(m, w, d):\n",
" return sum(m.x['Nochten', 'V', w, d, s] for s in m.S)\n",
"\n",
"def y_V_welzow(m, w, d):\n",
" return sum(m.x['Welzow', 'V', w, d, s] for s in m.S)\n",
"\n",
"model.devV_N = pyo.Expression(model.W, model.D, rule=lambda m, w, d: y_V_nochten(m, w, d) - m.dV_N[w, d])\n",
"model.devV_W = pyo.Expression(model.W, model.D, rule=lambda m, w, d: y_V_welzow(m, w, d) - m.dV_W[w, d])\n",
"\n",
"model.devV_N_pos = pyo.Var(model.W, model.D, domain=pyo.NonNegativeReals)\n",
"model.devV_N_neg = pyo.Var(model.W, model.D, domain=pyo.NonNegativeReals)\n",
"model.devV_W_pos = pyo.Var(model.W, model.D, domain=pyo.NonNegativeReals)\n",
"model.devV_W_neg = pyo.Var(model.W, model.D, domain=pyo.NonNegativeReals)\n",
"\n",
"model.devV_N_balance = pyo.Constraint(model.W, model.D,\n",
" rule=lambda m, w, d: m.devV_N_pos[w, d] - m.devV_N_neg[w, d] == m.devV_N[w, d])\n",
"model.devV_W_balance = pyo.Constraint(model.W, model.D,\n",
" rule=lambda m, w, d: m.devV_W_pos[w, d] - m.devV_W_neg[w, d] == m.devV_W[w, d])\n"
]
},
{
"cell_type": "markdown",
"id": "c310d475",
"metadata": {},
"source": [
"## Glättung über Schichten\n"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "9ead659e",
"metadata": {},
"outputs": [],
"source": [
"model.lambda_glatt = pyo.Param(initialize=1e5, mutable=True, within=pyo.NonNegativeReals)\n",
"\n",
"model.m_ijwd = pyo.Expression(\n",
" model.I, model.J, model.W, model.D,\n",
" rule=lambda m, i, j, w, d: (1/len(m.S)) * sum(m.x[i, j, w, d, s] for s in m.S if (i,j,w,d,s) in m.x)\n",
")\n",
"\n",
"model.t_glatt = pyo.Var(model.I, model.J, model.W, model.D, model.S, domain=pyo.NonNegativeReals)\n",
"\n",
"def glatt_hi(m, i, j, w, d, s):\n",
" return m.t_glatt[i,j,w,d,s] >= m.x[i,j,w,d,s] - m.m_ijwd[i,j,w,d]\n",
"\n",
"def glatt_lo(m, i, j, w, d, s):\n",
" return m.t_glatt[i,j,w,d,s] >= -(m.x[i,j,w,d,s] - m.m_ijwd[i,j,w,d])\n",
"\n",
"model.glatt_hi = pyo.Constraint(model.I, model.J, model.W, model.D, model.S, rule=glatt_hi)\n",
"model.glatt_lo = pyo.Constraint(model.I, model.J, model.W, model.D, model.S, rule=glatt_lo)\n"
]
},
{
"cell_type": "markdown",
"id": "2389a64f",
"metadata": {},
"source": [
"## Zielfunktion\n"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "83595c99",
"metadata": {},
"outputs": [],
"source": [
   "# Zielfunktion inkl. Glättung (Abweichungen über SHIFT_TOL werden bestraft)\n",
"\n",
"lambda_dev = 100_000 # Gewicht für Abweichungen an J\n",
"lambda_v = 100_000 # Gewicht für Abweichungen an V\n",
"\n",
"# Smoothness weight/tolerance\n",
"lambda_shift = 100_000_000 # Gewicht für Schicht-Glättung\n",
   "lambda_mix = 100_000_000 # Gewicht für Zielband-Abweichungen (kohle_mix ziel_low/ziel_high)\n",
"SHIFT_TOL = 1000 # Toleranz bis zu der keine Strafe anfällt\n",
"SHIFT_PAIRS = [('F', 'S'), ('S', 'N'), ('F', 'N')]\n",
"model.SHIFT_PAIRS = pyo.Set(initialize=SHIFT_PAIRS, dimen=2)\n",
"\n",
"# Smoothness vars/constraints\n",
"model.shift_dev_soft = pyo.Var(model.I, model.J, model.W, model.D, model.SHIFT_PAIRS, domain=pyo.NonNegativeReals)\n",
"model.shift_excess = pyo.Var(model.I, model.J, model.W, model.D, model.SHIFT_PAIRS, domain=pyo.NonNegativeReals)\n",
"\n",
"model.shift_dev_abs = pyo.ConstraintList()\n",
"model.shift_excess_def = pyo.ConstraintList()\n",
"for i in model.I:\n",
" for j in model.J:\n",
" for w in model.W:\n",
" for d in model.D:\n",
" for s1, s2 in SHIFT_PAIRS:\n",
" if (i, j, w, d, s1) in model.x and (i, j, w, d, s2) in model.x:\n",
" # |x_s1 - x_s2|\n",
" model.shift_dev_abs.add(model.shift_dev_soft[i,j,w,d,s1,s2] >= model.x[i,j,w,d,s1] - model.x[i,j,w,d,s2])\n",
" model.shift_dev_abs.add(model.shift_dev_soft[i,j,w,d,s1,s2] >= model.x[i,j,w,d,s2] - model.x[i,j,w,d,s1])\n",
" # nur Überschreitung über SHIFT_TOL wird bestraft\n",
" model.shift_excess_def.add(model.shift_excess[i,j,w,d,s1,s2] >= model.shift_dev_soft[i,j,w,d,s1,s2] - SHIFT_TOL)\n",
"\n",
"PEN_JW_NOCHTEN = 1_000_000 # Nochten -> J stark bestrafen (verhindern)\n",
"BON_JW_WELZOW = 1_000_000 # Welzow -> J belohnen (fördern)\n",
"\n",
"PEN_SP_WELZOW = 40_000 # Welzow -> SP bestrafen (reduzieren)\n",
"BON_SP_REICH = 30_000 # Reichwalde -> SP belohnen (fördern)\n",
"\n",
"PEN_BOX3_WELZOW = 40_000 # Welzow -> B3 bestrafen (reduzieren)\n",
"BON_BOX3_REICH = 2 * 30_000 # Reichwalde -> B3 belohnen (stark fördern, 2/3)\n",
"BON_BOX3_NOCHTEN = 1 * 30_000 # Nochten -> B3 belohnen (moderat fördern, 1/3)\n",
"\n",
"def objective_rule(model):\n",
" # Abweichungsstrafen (Mengenabweichungen und V-Verteilung)\n",
" deviation_penalty = (\n",
" lambda_dev * sum(model.dev_pos[j, w, d] + model.dev_neg[j, w, d]\n",
" for j in J_dev for w in model.W for d in model.D)\n",
" + lambda_v * sum(model.devV_N_pos[w, d] + model.devV_N_neg[w, d]\n",
" + model.devV_W_pos[w, d] + model.devV_W_neg[w, d]\n",
" for w in model.W for d in model.D)\n",
" )\n",
"\n",
" # Welzow-Bonus (globaler Vorteil für W->J)\n",
" bonus_welzow = model.lambda_J * model.bonus_welzow_J\n",
"\n",
" # nur über existierende Indizes summieren\n",
" def flow_sum(src, dst):\n",
" return sum(model.x[i, j, w, d, s]\n",
" for (i, j, w, d, s) in model.x.index_set()\n",
" if i == src and j == dst)\n",
"\n",
   "    # Straf-/Bonus-Terme für bestimmte Strecken\n",
" penalty_bonus_strecken = (\n",
" PEN_JW_NOCHTEN * flow_sum('Nochten', 'J') # Nochten -> J teuer\n",
" - BON_JW_WELZOW * flow_sum('Welzow', 'J') # Welzow -> J günstig\n",
"\n",
" + PEN_SP_WELZOW * flow_sum('Welzow', 'SP') # Welzow -> SP teuer\n",
" - BON_SP_REICH * flow_sum('Reichwalde', 'SP') # Reichwalde -> SP günstig\n",
"\n",
" + PEN_BOX3_WELZOW * flow_sum('Welzow', 'B3') # Welzow -> B3 teuer\n",
" - BON_BOX3_REICH * flow_sum('Reichwalde', 'B3')# Reichwalde -> B3 günstig\n",
" - BON_BOX3_NOCHTEN * flow_sum('Nochten', 'B3') # Nochten -> B3 günstig\n",
" )\n",
"\n",
" # Smoothness: nur Abweichung über SHIFT_TOL wird bestraft\n",
" smoothness_penalty = lambda_shift * sum(\n",
" model.shift_excess[i, j, w, d, s1, s2]\n",
" for i in model.I for j in model.J for w in model.W for d in model.D\n",
" for s1, s2 in model.SHIFT_PAIRS\n",
" if (i, j, w, d, s1) in model.x and (i, j, w, d, s2) in model.x\n",
" )\n",
"\n",
" # Zielband-Mix: Abweichungen von ziel_low/hoch werden bestraft\n",
" mix_target_penalty = lambda_mix * sum(\n",
" model.mix_target_low_dev[i, j, w, d] + model.mix_target_high_dev[i, j, w, d]\n",
" for i in model.I for j in model.J for w in model.W for d in model.D\n",
" )\n",
"\n",
" return deviation_penalty - bonus_welzow + penalty_bonus_strecken + smoothness_penalty + mix_target_penalty\n",
"\n",
"if hasattr(model, 'obj'):\n",
" model.del_component(model.obj)\n",
"model.obj = pyo.Objective(rule=objective_rule, sense=pyo.minimize)\n"
]
},
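 {
  "cell_type": "code",
  "execution_count": null,
  "id": "5c8b2e1f",
  "metadata": {},
  "outputs": [],
  "source": [
   "# Mini-Beispiel (eigenständig, hypothetische Werte): die Glättungsstrafe\n",
   "# ist eine Hinge-Funktion: bestraft wird nur der Anteil der\n",
   "# Schichtdifferenz, der über SHIFT_TOL hinausgeht.\n",
   "def shift_excess_demo(x_s1, x_s2, tol=1000):\n",
   "    return max(abs(x_s1 - x_s2) - tol, 0)\n",
   "\n",
   "(shift_excess_demo(12_000, 11_500), shift_excess_demo(12_000, 9_000))  # -> (0, 2000)\n"
  ]
 },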
{
"cell_type": "markdown",
"id": "ed37b283",
"metadata": {},
"source": [
"## Reichwalder Kohle geht nicht zur Veredlung\n",
"\n",
"Nebenbedingung, die Transporte aus Reichwalde zur Veredlung explizit ausschließt."
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "1701f6e7",
"metadata": {},
"outputs": [],
"source": [
"def forbid_reichwalde_V_rule(m, w, d, s):\n",
" return m.x['Reichwalde', 'V', w, d, s] == 0\n",
"\n",
"model.forbid_reichwalde_V = pyo.Constraint(model.W, model.D, model.S,\n",
" rule=forbid_reichwalde_V_rule)\n"
]
},
{
"cell_type": "markdown",
"id": "f297b423",
"metadata": {},
"source": [
"## Kohlefördermengenkapazitäten\n",
"\n",
    "Monatliche Kapazitätsgrenzen der Tagebaue ins Modell übertragen (inkl. kombiniertem Limit für RW+NO)."
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "1d2d8239",
"metadata": {},
"outputs": [],
"source": [
   "# Förderkapazität je Tagebau als Monatslimit in Constraintform bringen\n",
"cap_df = tables[\"foerderkapaz\"]\n",
"cap_month = cap_df[cap_df['zeitraum'] == 'pro Monat']\n",
"\n",
"# Mapping für Tagebau-Label -> i-Key\n",
"i_map = {\n",
" 'Reichwalde (RW)': 'Reichwalde',\n",
" 'Nochten (NO)': 'Nochten',\n",
" 'Welzow-Süd': 'Welzow',\n",
"}\n",
"\n",
"# Param für Monatsgrenzen je i\n",
"model.F_max_month = pyo.Param(model.I, initialize=1e12, mutable=True)\n",
"for _, row in cap_month.iterrows():\n",
" i = i_map.get(row['tagebau'])\n",
" if i in model.I:\n",
" model.F_max_month[i] = float(row['maximal'])\n",
"\n",
"# Monatsförderung je i: Summe über alle j,d,s in Wochen des Monats\n",
"def F_month_rule(m, i, mm):\n",
" weeks_in_m = [w for w, mth in week_to_month.items() if mth == mm and w in m.W]\n",
" return sum(m.x[i, j, w, d, s]\n",
" for j in m.J for w in weeks_in_m for d in m.D for s in m.S)\n",
"model.F_month = pyo.Expression(model.I, model.M, rule=F_month_rule)\n",
"\n",
"# Nebenbedingungen: Monatslimit je Tagebau\n",
"model.cap_month = pyo.Constraint(\n",
" model.I, model.M, rule=lambda m, i, mm: m.F_month[i, mm] <= m.F_max_month[i]\n",
")\n",
"\n",
   "# Kombiniertes Monatslimit für Reichwalde + Nochten (3 Mio. t)\n",
"model.cap_month_RWNO = pyo.Constraint(\n",
" model.M,\n",
" rule=lambda m, mm: m.F_month['Reichwalde', mm] + m.F_month['Nochten', mm] <= 3_000_000.0\n",
")\n"
]
},
{
"cell_type": "markdown",
"id": "d0ae92e0",
"metadata": {},
"source": [
"## Verladungskapazität am Kohlelagerplatz Boxberg\n",
"\n",
"Boxberg-KLP-Beladung als Limit für zulässige Lieferungen modellieren."
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "ba2ba375",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"| verladung | zeitraum | maximal |\n",
"|:----------------|:------------|----------:|\n",
"| Welzow-Süd | pro Schicht | 33000 |\n",
"| Welzow-Süd | pro Tag | 60000 |\n",
"| Boxberg (RW+NO) | pro Schicht | 25000 |\n",
"| Boxberg (RW+NO) | pro Tag | 75000 |\n"
]
}
],
"source": [
   "# Verladungskapazitäten (KLP/KUP) zur Übersicht ausgeben\n",
"df = tables[\"verladungskap\"]\n",
"print(df.to_markdown(index=False))"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "1bd7cd46",
"metadata": {},
"outputs": [],
"source": [
   "# Verladungskapazitäten als Schicht- und Tageslimits (Boxberg KLP, Welzow-Süd KUP)\n",
"verladung_cap = tables[\"verladungskap\"]\n",
"\n",
"def cap_lookup(verladung_label: str, zeitraum_label: str) -> float:\n",
" row = verladung_cap[\n",
" (verladung_cap[\"verladung\"] == verladung_label)\n",
" & (verladung_cap[\"zeitraum\"] == zeitraum_label)\n",
" ]\n",
" if row.empty:\n",
" raise ValueError(f\"Keine Verladungskap für {verladung_label} / {zeitraum_label}\")\n",
" return float(row.iloc[0][\"maximal\"])\n",
"\n",
"BOXBERG_SOURCES = [\"Reichwalde\", \"Nochten\"]\n",
"BOXBERG_DEST = [\"J\", \"SP\", \"B3\", \"V\"]\n",
"BOXBERG_SHIFT_CAP = cap_lookup(\"Boxberg (RW+NO)\", \"pro Schicht\")\n",
"BOXBERG_DAY_CAP = cap_lookup(\"Boxberg (RW+NO)\", \"pro Tag\")\n",
"\n",
"WELZOW_SOURCES = [\"Welzow\"]\n",
"WELZOW_DEST = [\"J\", \"SP\", \"B3\", \"V\"]\n",
"WELZOW_SHIFT_CAP = cap_lookup(\"Welzow-Süd\", \"pro Schicht\")\n",
"WELZOW_DAY_CAP = cap_lookup(\"Welzow-Süd\", \"pro Tag\")\n",
"\n",
"def boxberg_shift_cap_rule(m, w, d, s):\n",
" return sum(\n",
" m.x[i, j, w, d, s]\n",
" for i in BOXBERG_SOURCES if i in m.I\n",
" for j in BOXBERG_DEST if j in m.J\n",
" ) <= BOXBERG_SHIFT_CAP\n",
"model.cap_boxberg_shift = pyo.Constraint(model.W, model.D, model.S, rule=boxberg_shift_cap_rule)\n",
"\n",
"def boxberg_day_cap_rule(m, w, d):\n",
" return sum(\n",
" m.x[i, j, w, d, s]\n",
" for i in BOXBERG_SOURCES if i in m.I\n",
" for j in BOXBERG_DEST if j in m.J\n",
" for s in m.S\n",
" ) <= BOXBERG_DAY_CAP\n",
"model.cap_boxberg_day = pyo.Constraint(model.W, model.D, rule=boxberg_day_cap_rule)\n",
"\n",
"def welzow_shift_cap_rule(m, w, d, s):\n",
" return sum(\n",
" m.x[i, j, w, d, s]\n",
" for i in WELZOW_SOURCES if i in m.I\n",
" for j in WELZOW_DEST if j in m.J\n",
" ) <= WELZOW_SHIFT_CAP\n",
"model.cap_welzow_shift = pyo.Constraint(model.W, model.D, model.S, rule=welzow_shift_cap_rule)\n",
"\n",
"def welzow_day_cap_rule(m, w, d):\n",
" return sum(\n",
" m.x[i, j, w, d, s]\n",
" for i in WELZOW_SOURCES if i in m.I\n",
" for j in WELZOW_DEST if j in m.J\n",
" for s in m.S\n",
" ) <= WELZOW_DAY_CAP\n",
"model.cap_welzow_day = pyo.Constraint(model.W, model.D, rule=welzow_day_cap_rule)\n"
]
},
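 {
  "cell_type": "code",
  "execution_count": null,
  "id": "9d1c4a7e",
  "metadata": {},
  "outputs": [],
  "source": [
   "# Mini-Beispiel (eigenständig, hypothetische Minidaten): dasselbe\n",
   "# Lookup-Muster wie cap_lookup, hier als Dict-Lookup auf einem kleinen\n",
   "# Demo-DataFrame statt wiederholter Maskenfilterung.\n",
   "import pandas as pd\n",
   "demo = pd.DataFrame({\n",
   "    'verladung': ['Boxberg (RW+NO)', 'Boxberg (RW+NO)'],\n",
   "    'zeitraum': ['pro Schicht', 'pro Tag'],\n",
   "    'maximal': [25000, 75000],\n",
   "})\n",
   "caps = demo.set_index(['verladung', 'zeitraum'])['maximal'].to_dict()\n",
   "caps[('Boxberg (RW+NO)', 'pro Tag')]  # -> 75000\n"
  ]
 },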
{
"cell_type": "markdown",
"id": "511eee89",
"metadata": {},
"source": [
    "## Kurzfristige Nichtverfügbarkeiten"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "7f032640",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"dict_keys(['Verfuegbarkeiten', 'bounds_power_plants', 'bunker', 'foerderkapaz', 'kohle_mix', 'rohkohlebedarf', 'veredelung_bounds', 'verladungskap', 'zugdurchlass'])"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
    "tables.keys()  # verfügbare Tabellen inspizieren"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "fbc1cf9e",
"metadata": {},
"outputs": [],
"source": [
    "avail = tables[\"Verfuegbarkeiten\"]  # Tabelle der kurzfristigen Nichtverfügbarkeiten\n"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "bc4a5192",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"WARNING: DEPRECATED: Param 'cap_Welzow' declared with an implicit domain of\n",
"'Any'. The default domain for Param objects is 'Any'. However, we will be\n",
"changing that default to 'Reals' in the future. If you really intend the\n",
"domain of this Paramto be 'Any', you can suppress this warning by explicitly\n",
"specifying 'within=Any' to the Param constructor. (deprecated in 5.6.9, will\n",
"be removed in (or after) 6.0) (called from\n",
"/Users/nicolaimacbook/Projects/LEAG/POC1/.venv/lib/python3.13/site-\n",
"packages/pyomo/core/base/indexed_component.py:785)\n",
"WARNING: DEPRECATED: Param 'cap_RW_N' declared with an implicit domain of\n",
"'Any'. The default domain for Param objects is 'Any'. However, we will be\n",
"changing that default to 'Reals' in the future. If you really intend the\n",
"domain of this Paramto be 'Any', you can suppress this warning by explicitly\n",
"specifying 'within=Any' to the Param constructor. (deprecated in 5.6.9, will\n",
"be removed in (or after) 6.0) (called from\n",
"/Users/nicolaimacbook/Projects/LEAG/POC1/.venv/lib/python3.13/site-\n",
"packages/pyomo/core/base/indexed_component.py:785)\n"
]
}
],
"source": [
"import math\n",
"from pyomo.environ import value\n",
"\n",
"# Falls schon vorhanden: löschen, dann neu anlegen\n",
"for name in [\"cap_Welzow\", \"cap_RW_N\", \"cap_welzow_con\", \"cap_rw_n_con\"]:\n",
" if hasattr(model, name):\n",
" model.del_component(name)\n",
"\n",
"day_map = {'Sat': 'Sa', 'Sun': 'So', 'Mon': 'Mo', 'Tue': 'Di', 'Wed': 'Mi', 'Thu': 'Do', 'Fri': 'Fr'}\n",
"schicht_order = ['F', 'S', 'N'] # S1/S2/S3 -> Früh/Spät/Nacht\n",
"\n",
"# Params: Default = nan => kein Limit, wenn nicht überschrieben\n",
   "model.cap_Welzow = pyo.Param(model.W, model.D, model.S, initialize=math.nan, mutable=True, within=pyo.Any)\n",
   "model.cap_RW_N = pyo.Param(model.W, model.D, model.S, initialize=math.nan, mutable=True, within=pyo.Any)\n",
"\n",
"for _, row in avail.iterrows():\n",
" datum = row['datum']\n",
" w = datum.isocalendar().week # anpassen, falls model.W anders ist\n",
" d = day_map[datum.strftime('%a')]\n",
"\n",
" caps_w = [row['Welzow_Sued_S1_t'], row['Welzow_Sued_S2_t'], row['Welzow_Sued_S3_t']]\n",
" caps_rn = [row['Boxberg_NO_RW_S1_t'], row['Boxberg_NO_RW_S2_t'], row['Boxberg_NO_RW_S3_t']]\n",
"\n",
" for idx, s in enumerate(schicht_order): # ['F','S','N']\n",
" val_w = caps_w[idx]\n",
" if not math.isnan(val_w):\n",
" model.cap_Welzow[w, d, s] = val_w\n",
" val_rn = caps_rn[idx]\n",
" if not math.isnan(val_rn):\n",
" model.cap_RW_N[w, d, s] = val_rn\n",
"\n",
"\n",
"# Constraints: skip bei nan, sonst Limit setzen\n",
"def cap_welzow_rule(m, w, d, s):\n",
" v = value(m.cap_Welzow[w, d, s])\n",
" if math.isnan(v):\n",
" return pyo.Constraint.Skip\n",
" return sum(m.x[i, j, w, d, s]\n",
" for i in ['Welzow'] for j in m.J\n",
" if (i, j, w, d, s) in m.x.index_set()) <= v\n",
"model.cap_welzow_con = pyo.Constraint(model.W, model.D, model.S, rule=cap_welzow_rule)\n",
"\n",
"def cap_rw_n_rule(m, w, d, s):\n",
" v = value(m.cap_RW_N[w, d, s])\n",
" if math.isnan(v):\n",
" return pyo.Constraint.Skip\n",
" return sum(m.x[i, j, w, d, s]\n",
" for i in ['Reichwalde', 'Nochten'] for j in m.J\n",
" if (i, j, w, d, s) in m.x.index_set()) <= v\n",
"model.cap_rw_n_con = pyo.Constraint(model.W, model.D, model.S, rule=cap_rw_n_rule)\n"
]
},
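 {
  "cell_type": "code",
  "execution_count": null,
  "id": "2b6f8c3d",
  "metadata": {},
  "outputs": [],
  "source": [
   "# Mini-Beispiel (eigenständig): Datum -> (ISO-Woche, deutsches Tageskürzel),\n",
   "# wie beim Einlesen der Verfügbarkeiten. Statt strftime('%a') (locale-\n",
   "# abhängig) hier deterministisch über weekday() (Montag = 0).\n",
   "from datetime import date\n",
   "DE_DAYS = ['Mo', 'Di', 'Mi', 'Do', 'Fr', 'Sa', 'So']\n",
   "def week_day_key(datum):\n",
   "    return datum.isocalendar().week, DE_DAYS[datum.weekday()]\n",
   "\n",
   "week_day_key(date(2025, 11, 5))  # -> (45, 'Mi')\n"
  ]
 },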
{
"cell_type": "code",
"execution_count": 31,
"id": "e76b4aef",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"| week | day | shift | cap_Welzow | cap_RW_N |\n",
"|-------:|:------|:--------|-------------:|-----------:|\n",
"| 45 | Mi | F | 0 | 5000 |\n",
"| 45 | Mi | S | nan | 15000 |\n",
"| 46 | Mi | F | 0 | nan |\n",
"| 46 | Do | F | nan | 15000 |\n",
"| 47 | Mi | F | 0 | 25000 |\n",
"| 47 | Mi | S | nan | 25000 |\n",
"| 48 | Mi | F | 0 | 25000 |\n",
"| 48 | Mi | S | nan | 25000 |\n"
]
},
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>week</th>\n",
" <th>day</th>\n",
" <th>shift</th>\n",
" <th>cap_Welzow</th>\n",
" <th>cap_RW_N</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>45</td>\n",
" <td>Mi</td>\n",
" <td>F</td>\n",
" <td>0.0</td>\n",
" <td>5000.0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>45</td>\n",
" <td>Mi</td>\n",
" <td>S</td>\n",
" <td>NaN</td>\n",
" <td>15000.0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>46</td>\n",
" <td>Mi</td>\n",
" <td>F</td>\n",
" <td>0.0</td>\n",
" <td>NaN</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>46</td>\n",
" <td>Do</td>\n",
" <td>F</td>\n",
" <td>NaN</td>\n",
" <td>15000.0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>47</td>\n",
" <td>Mi</td>\n",
" <td>F</td>\n",
" <td>0.0</td>\n",
" <td>25000.0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>47</td>\n",
" <td>Mi</td>\n",
" <td>S</td>\n",
" <td>NaN</td>\n",
" <td>25000.0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>48</td>\n",
" <td>Mi</td>\n",
" <td>F</td>\n",
" <td>0.0</td>\n",
" <td>25000.0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>48</td>\n",
" <td>Mi</td>\n",
" <td>S</td>\n",
" <td>NaN</td>\n",
" <td>25000.0</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" week day shift cap_Welzow cap_RW_N\n",
"0 45 Mi F 0.0 5000.0\n",
"1 45 Mi S NaN 15000.0\n",
"2 46 Mi F 0.0 NaN\n",
"3 46 Do F NaN 15000.0\n",
"4 47 Mi F 0.0 25000.0\n",
"5 47 Mi S NaN 25000.0\n",
"6 48 Mi F 0.0 25000.0\n",
"7 48 Mi S NaN 25000.0"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"from pyomo.environ import value\n",
"\n",
"def list_caps(model):\n",
" rows = []\n",
" for w in model.W:\n",
" for d in model.D:\n",
" for s in model.S:\n",
" v_w = value(model.cap_Welzow[w, d, s])\n",
" v_rn = value(model.cap_RW_N[w, d, s])\n",
" if not math.isnan(v_w) or not math.isnan(v_rn):\n",
" rows.append({\"week\": w, \"day\": d, \"shift\": s,\n",
" \"cap_Welzow\": v_w, \"cap_RW_N\": v_rn})\n",
" return pd.DataFrame(rows)\n",
"\n",
"caps_df = list_caps(model)\n",
"print(caps_df.to_markdown(index=False))\n",
"caps_df\n"
]
},
{
"cell_type": "markdown",
"id": "dccc6150",
"metadata": {},
"source": [
"## Zugdurchlasskapazität und Schichtrelation\n",
"\n",
"Aktuelle Annahmen zu maximalen Zugdurchlässen; noch zu validieren, daher separat gekennzeichnet."
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "dd3ddf26",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"| von | start | zum | ziel | zeitraum | maximal | vielfaches_von |\n",
"|:------|:------------------------------|:------|:------------------------------|:------------|----------:|-----------------:|\n",
"| Von | Verladung Boxberg (KLP) | zum | KW Jänschwalde | pro Schicht | 20000 | 1 |\n",
"| Von | Verladung Boxberg (KLP) | zum | KW Schwarze Pumpe | pro Schicht | 25000 | 1 |\n",
"| Von | Verladung Boxberg (KLP) | zum | Veredlung ISP | pro Schicht | 25000 | 1 |\n",
"| Von | Verladung Boxberg (KLP) | zum | KW Boxberg Werk 3 | pro Schicht | 32000 | 1 |\n",
"| Von | Verladung Welzow-Süd (KUP) | zum | KW Jänschwalde | pro Schicht | 26000 | 2 |\n",
"| Von | Verladung Welzow-Süd (KUP) | zum | KW Schwarze Pumpe | pro Schicht | 24000 | 1 |\n",
"| Von | Verladung Welzow-Süd (KUP) | zum | Veredlung ISP | pro Schicht | 24000 | 1 |\n",
"| Von | Verladung Welzow-Süd (KUP) | zum | KW Boxberg Werk 3 | pro Schicht | 20000 | 1 |\n",
"| Von | Verladung Boxberg (KLP) | zum | KW SP + V ISP | pro Schicht | 25000 | nan |\n",
"| Von | Verladung Boxberg (KLP) | zum | KW JW + KW SP + V ISP + KW B3 | pro Schicht | 55000 | nan |\n",
"| Von | Verladung Welzow-Süd (KUP) | zum | KW SP + V ISP | pro Schicht | 24000 | nan |\n",
"| Von | Verladung Welzow-Süd (KUP) | zum | KW SP + V ISP + KW B3 | pro Schicht | 25000 | nan |\n",
"| Von | KUP + KLP | zum | KW Jänschwalde | pro Schicht | 34000 | nan |\n",
"| Von | KUP + KLP | zum | KW Boxberg Werk 3 | pro Schicht | 32000 | nan |\n",
"| Von | KLP zum KW JW | UND | KUP zum KW B3 | pro Schicht | 20000 | nan |\n",
"| Von | KLP zum KW JW + KW SP + V ISP | UND | KUP zum KW B3 | pro Schicht | 32000 | nan |\n",
"| Von | Direktbekohlung Boxberg (KLP) | zum | KW Boxberg Werk 4 | pro Schicht | inf | nan |\n"
]
}
],
"source": [
"# Zugdurchlässe je Schicht als Obergrenze (noch zu validieren)\n",
"df = tables[\"zugdurchlass\"]\n",
"print(df.to_markdown(index=False))"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "489cfe09",
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"df = tables[\"zugdurchlass\"].copy()\n",
"\n",
"def cap_from_table(start_exact, ziel_exact):\n",
" s = df[\"start\"].astype(str).str.strip()\n",
" z = df[\"ziel\"].astype(str).str.strip()\n",
" mask = (s == start_exact) & (z == ziel_exact)\n",
" row = df[mask]\n",
" if row.empty:\n",
" raise ValueError(f\"cap not found for start={start_exact!r}, ziel={ziel_exact!r}\")\n",
" return float(row.iloc[0][\"maximal\"])\n",
"\n",
"# Caps aus Tabelle\n",
"CAP_RW_N_TO_J = cap_from_table(\"Verladung Boxberg (KLP)\", \"KW Jänschwalde\")\n",
"CAP_RW_N_TO_SP = cap_from_table(\"Verladung Boxberg (KLP)\", \"KW Schwarze Pumpe\")\n",
"CAP_RW_N_TO_V = cap_from_table(\"Verladung Boxberg (KLP)\", \"Veredlung ISP\")\n",
"CAP_RW_N_TO_B3 = cap_from_table(\"Verladung Boxberg (KLP)\", \"KW Boxberg Werk 3\")\n",
"\n",
"CAP_W_TO_J = cap_from_table(\"Verladung Welzow-Süd (KUP)\", \"KW Jänschwalde\")\n",
"CAP_W_TO_SP = cap_from_table(\"Verladung Welzow-Süd (KUP)\", \"KW Schwarze Pumpe\")\n",
"CAP_W_TO_V = cap_from_table(\"Verladung Welzow-Süd (KUP)\", \"Veredlung ISP\")\n",
"CAP_W_TO_B3 = cap_from_table(\"Verladung Welzow-Süd (KUP)\", \"KW Boxberg Werk 3\")\n",
"\n",
"CAP_RW_N_TO_SP_V = cap_from_table(\"Verladung Boxberg (KLP)\", \"KW SP + V ISP\")\n",
"CAP_RW_N_TO_ALL = cap_from_table(\"Verladung Boxberg (KLP)\", \"KW JW + KW SP + V ISP + KW B3\")\n",
"\n",
"CAP_W_TO_SP_V = cap_from_table(\"Verladung Welzow-Süd (KUP)\", \"KW SP + V ISP\")\n",
"CAP_W_TO_SP_V_B3 = cap_from_table(\"Verladung Welzow-Süd (KUP)\", \"KW SP + V ISP + KW B3\")\n",
"\n",
"CAP_RW_N_W_TO_J = cap_from_table(\"KUP + KLP\", \"KW Jänschwalde\")\n",
"CAP_RW_N_W_TO_B3 = cap_from_table(\"KUP + KLP\", \"KW Boxberg Werk 3\")\n",
"\n",
"CAP_RW_N_J_AND_W_B3 = cap_from_table(\"KLP zum KW JW\", \"KUP zum KW B3\")\n",
"CAP_RW_N_JSPV_AND_W_B3 = cap_from_table(\"KLP zum KW JW + KW SP + V ISP\", \"KUP zum KW B3\")\n",
"\n",
"# Constraints (wie bisher, nur mit CAP_* statt Zahlen)\n",
"\n",
"def cap_rw_n_to_j_rule(m, w, d, s):\n",
" return (\n",
" m.x['Reichwalde', 'J', w, d, s]\n",
" + m.x['Nochten', 'J', w, d, s]\n",
" <= CAP_RW_N_TO_J\n",
" )\n",
"model.cap_rw_n_to_j = pyo.Constraint(model.W, model.D, model.S, rule=cap_rw_n_to_j_rule)\n",
"\n",
"def cap_rw_n_to_sp_rule(m, w, d, s):\n",
" return (\n",
" m.x['Reichwalde', 'SP', w, d, s]\n",
" + m.x['Nochten', 'SP', w, d, s]\n",
" <= CAP_RW_N_TO_SP\n",
" )\n",
"model.cap_rw_n_to_sp = pyo.Constraint(model.W, model.D, model.S, rule=cap_rw_n_to_sp_rule)\n",
"\n",
"def cap_rw_n_to_v_rule(m, w, d, s):\n",
" return (\n",
" m.x['Reichwalde', 'V', w, d, s]\n",
" + m.x['Nochten', 'V', w, d, s]\n",
" <= CAP_RW_N_TO_V\n",
" )\n",
"model.cap_rw_n_to_v = pyo.Constraint(model.W, model.D, model.S, rule=cap_rw_n_to_v_rule)\n",
"\n",
"def cap_rw_n_to_bw3_rule(m, w, d, s):\n",
" return (\n",
" m.x['Reichwalde', 'B3', w, d, s]\n",
" + m.x['Nochten', 'B3', w, d, s]\n",
" <= CAP_RW_N_TO_B3\n",
" )\n",
"model.cap_rw_n_to_bw3 = pyo.Constraint(model.W, model.D, model.S, rule=cap_rw_n_to_bw3_rule)\n",
"\n",
"def cap_w_to_j_rule(m, w, d, s):\n",
" return m.x['Welzow', 'J', w, d, s] <= CAP_W_TO_J\n",
"model.cap_w_to_j = pyo.Constraint(model.W, model.D, model.S, rule=cap_w_to_j_rule)\n",
"\n",
"model.k_w_to_j_steps = pyo.Var(model.W, model.D, model.S, domain=pyo.NonNegativeIntegers)\n",
"def welzow_j_multiple_rule(m, w, d, s):\n",
" return m.x['Welzow', 'J', w, d, s] == 2_000 * m.k_w_to_j_steps[w, d, s]\n",
"model.welzow_j_multiple_2kt = pyo.Constraint(model.W, model.D, model.S, rule=welzow_j_multiple_rule)\n",
"\n",
"def cap_w_to_sp_rule(m, w, d, s):\n",
" return m.x['Welzow', 'SP', w, d, s] <= CAP_W_TO_SP\n",
"model.cap_w_to_sp = pyo.Constraint(model.W, model.D, model.S, rule=cap_w_to_sp_rule)\n",
"\n",
"def cap_w_to_v_rule(m, w, d, s):\n",
" return m.x['Welzow', 'V', w, d, s] <= CAP_W_TO_V\n",
"model.cap_w_to_v = pyo.Constraint(model.W, model.D, model.S, rule=cap_w_to_v_rule)\n",
"\n",
"def cap_w_to_bw3_rule(m, w, d, s):\n",
" return m.x['Welzow', 'B3', w, d, s] <= CAP_W_TO_B3\n",
"model.cap_w_to_bw3 = pyo.Constraint(model.W, model.D, model.S, rule=cap_w_to_bw3_rule)\n",
"\n",
"def cap_rw_n_to_sp_v_rule(m, w, d, s):\n",
" return (\n",
" m.x['Reichwalde', 'SP', w, d, s]\n",
" + m.x['Nochten', 'SP', w, d, s]\n",
" + m.x['Reichwalde', 'V', w, d, s]\n",
" + m.x['Nochten', 'V', w, d, s]\n",
" <= CAP_RW_N_TO_SP_V\n",
" )\n",
"model.cap_rw_n_to_sp_v = pyo.Constraint(model.W, model.D, model.S, rule=cap_rw_n_to_sp_v_rule)\n",
"\n",
"def cap_rw_n_to_j_sp_v_b3_rule(m, w, d, s):\n",
" return (\n",
" m.x['Reichwalde', 'J', w, d, s]\n",
" + m.x['Reichwalde', 'SP', w, d, s]\n",
" + m.x['Reichwalde', 'V', w, d, s]\n",
" + m.x['Reichwalde', 'B3', w, d, s]\n",
" + m.x['Nochten', 'J', w, d, s]\n",
" + m.x['Nochten', 'SP', w, d, s]\n",
" + m.x['Nochten', 'V', w, d, s]\n",
" + m.x['Nochten', 'B3', w, d, s]\n",
" <= CAP_RW_N_TO_ALL\n",
" )\n",
"model.cap_rw_n_to_j_sp_v_b3 = pyo.Constraint(model.W, model.D, model.S, rule=cap_rw_n_to_j_sp_v_b3_rule)\n",
"\n",
"def cap_w_to_sp_v_rule(m, w, d, s):\n",
" return (\n",
" m.x['Welzow', 'SP', w, d, s]\n",
" + m.x['Welzow', 'V', w, d, s]\n",
" <= CAP_W_TO_SP_V\n",
" )\n",
"model.cap_w_to_sp_v = pyo.Constraint(model.W, model.D, model.S, rule=cap_w_to_sp_v_rule)\n",
"\n",
"def cap_w_to_sp_v_b3_rule(m, w, d, s):\n",
" return (\n",
" m.x['Welzow', 'SP', w, d, s]\n",
" + m.x['Welzow', 'V', w, d, s]\n",
" + m.x['Welzow', 'B3', w, d, s]\n",
" <= CAP_W_TO_SP_V_B3\n",
" )\n",
"model.cap_w_to_sp_v_b3 = pyo.Constraint(model.W, model.D, model.S, rule=cap_w_to_sp_v_b3_rule)\n",
"\n",
"def cap_rw_n_w_to_j_rule(m, w, d, s):\n",
" return (\n",
" m.x['Reichwalde', 'J', w, d, s]\n",
" + m.x['Nochten', 'J', w, d, s]\n",
" + m.x['Welzow', 'J', w, d, s]\n",
" <= CAP_RW_N_W_TO_J\n",
" )\n",
"model.cap_rw_n_w_to_j = pyo.Constraint(model.W, model.D, model.S, rule=cap_rw_n_w_to_j_rule)\n",
"\n",
"def cap_rw_n_w_to_bw3_rule(m, w, d, s):\n",
" return (\n",
" m.x['Reichwalde', 'B3', w, d, s]\n",
" + m.x['Nochten', 'B3', w, d, s]\n",
" + m.x['Welzow', 'B3', w, d, s]\n",
" <= CAP_RW_N_W_TO_B3\n",
" )\n",
"model.cap_rw_n_w_to_bw3 = pyo.Constraint(model.W, model.D, model.S, rule=cap_rw_n_w_to_bw3_rule)\n",
"\n",
"def cap_rw_n_j_and_w_b3_rule(m, w, d, s):\n",
" return (\n",
" m.x['Reichwalde', 'J', w, d, s]\n",
" + m.x['Nochten', 'J', w, d, s]\n",
" + m.x['Welzow', 'B3', w, d, s]\n",
" <= CAP_RW_N_J_AND_W_B3\n",
" )\n",
"model.cap_rw_n_j_and_w_b3 = pyo.Constraint(model.W, model.D, model.S, rule=cap_rw_n_j_and_w_b3_rule)\n",
"\n",
"def cap_rw_n_to_j_sp_v_and_w_to_b3_rule(m, w, d, s):\n",
" return (\n",
" m.x['Reichwalde', 'J', w, d, s]\n",
" + m.x['Reichwalde', 'SP', w, d, s]\n",
" + m.x['Reichwalde', 'V', w, d, s]\n",
" + m.x['Nochten', 'J', w, d, s]\n",
" + m.x['Nochten', 'SP', w, d, s]\n",
" + m.x['Nochten', 'V', w, d, s]\n",
" + m.x['Welzow', 'B3', w, d, s]\n",
" <= CAP_RW_N_JSPV_AND_W_B3\n",
" )\n",
"model.cap_rw_n_to_j_sp_v_and_w_to_b3 = pyo.Constraint(\n",
" model.W, model.D, model.S, rule=cap_rw_n_to_j_sp_v_and_w_to_b3_rule\n",
")\n"
]
},
{
"cell_type": "markdown",
"id": "2ccce321",
"metadata": {},
"source": [
"## Solver aufrufen\n",
"\n",
"Konfiguration des Solvers (HiGHS oder Gurobi) und Start der Optimierung; die zulässigen KLP-Zuordnungen werden anschließend ausgewertet."
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "bbe5e6b0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Read LP format model from file /var/folders/pp/f78_zrb95px1ygnxdpvgtnn40000gn/T/tmpe8harj04.pyomo.lp\n",
"Reading time = 0.03 seconds\n",
"x1: 15801 rows, 9618 columns, 78930 nonzeros\n",
"Set parameter TimeLimit to value 300\n",
"Set parameter MIPGap to value 0.01\n",
"Set parameter LogFile to value \"gurobi_first_opt_model.log\"\n",
"Gurobi Optimizer version 13.0.0 build v13.0.0rc1 (mac64[arm] - Darwin 24.6.0 24G419)\n",
"\n",
"CPU model: Apple M3 Pro\n",
"Thread count: 11 physical cores, 11 logical processors, using up to 11 threads\n",
"\n",
"Non-default parameters:\n",
"TimeLimit 300\n",
"MIPGap 0.01\n",
"\n",
"Optimize a model with 15801 rows, 9618 columns and 78930 nonzeros (Min)\n",
"Model fingerprint: 0x611ed4b3\n",
"Model has 4536 linear objective coefficients\n",
"Variable types: 7434 continuous, 2184 integer (168 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+04]\n",
" Objective range [1e+05, 1e+09]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 4e+06]\n",
"Warning: Model contains large objective coefficients\n",
" Consider reformulating model or setting NumericFocus parameter\n",
" to avoid numerical issues.\n",
"Presolve removed 11578 rows and 6568 columns\n",
"Presolve time: 0.04s\n",
"Presolved: 4223 rows, 3050 columns, 28757 nonzeros\n",
"Variable types: 191 continuous, 2859 integer (120 binary)\n",
"\n",
"Root relaxation: objective 2.866993e+13, 2505 iterations, 0.02 seconds (0.03 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 2.8670e+13 0 912 - 2.8670e+13 - - 0s\n",
"H 0 0 1.074215e+14 2.8670e+13 73.3% - 0s\n",
"H 0 0 8.278702e+13 2.8670e+13 65.4% - 0s\n",
"H 0 0 8.072192e+13 2.8670e+13 64.5% - 0s\n",
"H 0 0 8.072190e+13 2.8670e+13 64.5% - 0s\n",
"H 0 0 8.011886e+13 2.8670e+13 64.2% - 0s\n",
" 0 0 postponed 0 8.0119e+13 3.1101e+13 61.2% - 0s\n",
" 0 0 postponed 0 8.0119e+13 3.1101e+13 61.2% - 0s\n",
" 0 0 postponed 0 8.0119e+13 3.1101e+13 61.2% - 0s\n",
" 0 2 postponed 0 8.0119e+13 3.1101e+13 61.2% - 0s\n",
"H 11 10 7.971886e+13 3.1101e+13 61.0% 1870 1s\n",
"H 11 9 5.066721e+13 3.1101e+13 38.6% 1870 1s\n",
"H 11 9 5.066708e+13 3.1101e+13 38.6% 1870 1s\n",
"H 11 8 5.065198e+13 3.1101e+13 38.6% 1870 1s\n",
"H 11 8 5.058858e+13 3.1101e+13 38.5% 1870 1s\n",
"H 11 7 5.042278e+13 3.1101e+13 38.3% 1870 1s\n",
"H 11 7 5.026894e+13 3.1101e+13 38.1% 1870 1s\n",
"H 11 6 5.018394e+13 3.1101e+13 38.0% 1870 1s\n",
"H 11 6 5.016894e+13 3.1101e+13 38.0% 1870 1s\n",
"H 11 6 4.976373e+13 3.1101e+13 37.5% 1870 1s\n",
" 93 114 postponed 16 4.9764e+13 3.1101e+13 37.5% 501 6s\n",
"H 95 114 4.886373e+13 3.1101e+13 36.4% 496 6s\n",
"H 125 152 4.876379e+13 3.1101e+13 36.2% 453 10s\n",
"H 171 194 4.043707e+13 3.1101e+13 23.1% 426 12s\n",
"H 207 240 4.033196e+13 3.1101e+13 22.9% 711 13s\n",
"H 251 281 4.023696e+13 3.1101e+13 22.7% 881 13s\n",
"H 253 281 3.921172e+13 3.1101e+13 20.7% 874 13s\n",
"H 258 281 3.828594e+13 3.1101e+13 18.8% 857 13s\n",
"H 280 281 3.773034e+13 3.1101e+13 17.6% 796 13s\n",
"H 292 328 3.763024e+13 3.1101e+13 17.4% 767 13s\n",
" 339 340 postponed 24 3.7630e+13 3.1101e+13 17.4% 678 15s\n",
"H 362 354 3.669079e+13 3.1101e+13 15.2% 1058 16s\n",
"H 365 395 3.669068e+13 3.1101e+13 15.2% 1050 19s\n",
" 406 450 3.1101e+13 27 913 3.6691e+13 3.1101e+13 15.2% 953 21s\n",
"H 463 499 3.635349e+13 3.1101e+13 14.4% 1069 23s\n",
"H 471 499 3.633063e+13 3.1101e+13 14.4% 1060 23s\n",
"H 475 499 3.626548e+13 3.1101e+13 14.2% 1052 23s\n",
"H 482 499 3.624065e+13 3.1101e+13 14.2% 1296 23s\n",
"H 486 499 3.328458e+13 3.1101e+13 6.56% 1286 23s\n",
"H 510 511 3.328454e+13 3.1101e+13 6.56% 1226 24s\n",
"H 522 551 3.298718e+13 3.1101e+13 5.72% 1354 26s\n",
"H 562 587 3.298715e+13 3.1101e+13 5.72% 1539 28s\n",
"H 565 587 3.278710e+13 3.1101e+13 5.14% 1555 28s\n",
"H 573 587 3.273717e+13 3.1101e+13 5.00% 1691 28s\n",
" 598 654 3.1101e+13 32 899 3.2737e+13 3.1101e+13 5.00% 1628 30s\n",
"H 673 668 3.273694e+13 3.1101e+13 5.00% 1752 32s\n",
"H 705 722 3.257590e+13 3.1101e+13 4.53% 1772 33s\n",
"H 733 801 3.257582e+13 3.1101e+13 4.53% 1708 37s\n",
"H 812 825 3.254401e+13 3.1101e+13 4.43% 1725 38s\n",
"H 827 825 3.252996e+13 3.1101e+13 4.39% 1846 38s\n",
" 938 1018 postponed 52 3.2530e+13 3.1101e+13 4.39% 1686 41s\n",
" 1127 1154 postponed 57 3.2530e+13 3.1101e+13 4.39% 1770 47s\n",
"H 1129 1154 3.252995e+13 3.1101e+13 4.39% 1769 47s\n",
"H 1132 1154 3.252956e+13 3.1101e+13 4.39% 1830 47s\n",
"H 1175 1218 3.252952e+13 3.1101e+13 4.39% 1836 49s\n",
"H 1243 1272 3.252949e+13 3.1101e+13 4.39% 1868 52s\n",
"H 1289 1272 3.252948e+13 3.1101e+13 4.39% 1993 52s\n",
" 1300 1341 postponed 63 3.2529e+13 3.1101e+13 4.39% 2038 55s\n",
" 1463 1508 postponed 64 3.2529e+13 3.1101e+13 4.39% 2167 60s\n",
" 1686 1700 3.1101e+13 66 887 3.2529e+13 3.1101e+13 4.39% 2541 66s\n",
"H 1760 1739 3.252946e+13 3.1101e+13 4.39% 2892 69s\n",
"H 1760 1739 3.252946e+13 3.1101e+13 4.39% 2892 69s\n",
"H 1764 1739 3.252944e+13 3.1101e+13 4.39% 2886 69s\n",
" 1780 1798 postponed 68 3.2529e+13 3.1101e+13 4.39% 2915 72s\n",
" 1842 1857 postponed 69 3.2529e+13 3.1101e+13 4.39% 3073 75s\n",
"H 1909 1895 3.252942e+13 3.1101e+13 4.39% 3206 78s\n",
"H 1930 1895 3.252942e+13 3.1101e+13 4.39% 3356 78s\n",
"H 1942 1895 3.252939e+13 3.1101e+13 4.39% 3342 78s\n",
" 1943 1920 postponed 73 3.2529e+13 3.1101e+13 4.39% 3343 80s\n",
"H 1945 1920 3.252935e+13 3.1101e+13 4.39% 3340 80s\n",
"H 1967 1920 3.252927e+13 3.1101e+13 4.39% 3517 80s\n",
"\n",
"Interrupt request received\n",
"H 1977 1952 3.252918e+13 3.1101e+13 4.39% 3573 83s\n",
"\n",
"Cutting planes:\n",
" Learned: 1\n",
"\n",
"Explored 2002 nodes (7183125 simplex iterations) in 83.22 seconds (203.33 work units)\n",
"Thread count was 11 (of 11 available processors)\n",
"\n",
"Solution count 10: 3.25292e+13 3.25293e+13 3.25294e+13 ... 3.25295e+13\n",
"\n",
"Solve interrupted\n",
"Best objective 3.252917704080e+13, best bound 3.110115207591e+13, gap 4.3900%\n",
"WARNING: Loading a SolverResults object with an 'aborted' status, but\n",
"containing a solution\n",
"##########################################################\n"
]
}
],
"source": [
    "GUROBI = True  # auf False setzen, wenn kein Gurobi verfügbar ist (dann HiGHS)\n",
    "if not GUROBI:\n",
    "    solver = pyo.SolverFactory(\"highs\")\n",
    "\n",
    "    # Achtung: HiGHS erwartet eigene Optionsnamen; die SCIP-Schreibweise\n",
    "    # ('limits/time' usw.) wird von HiGHS nicht verstanden.\n",
    "    solver.options.update({\n",
    "        \"time_limit\": 600,    # Sekunden\n",
    "        \"mip_rel_gap\": 0.07,  # relative MIP-Gap\n",
    "        \"output_flag\": True,  # Log ausgeben\n",
    "        \"threads\": 0,         # 0 = automatisch\n",
    "    })\n",
    "    print(\"HiGHS\")\n",
"\n",
" results = solver.solve(model, tee=True)\n",
" print(\"Solver-Status:\", results.solver.status)\n",
" print(\"##########################################################\")\n",
" print(\"Termination:\", results.solver.termination_condition)\n",
"\n",
"else:\n",
" opt = pyo.SolverFactory('gurobi')\n",
"\n",
" opt.options['TimeLimit'] = 300 \n",
" opt.options['MIPGap'] = 0.05 # optional, wie bei HiGHS\n",
" opt.options['LogFile'] = 'gurobi_first_opt_model.log'\n",
"\n",
" res = opt.solve(model, tee=True)\n",
"\n",
" print(\"##########################################################\")\n",
"\n",
" if hasattr(opt, \"_solver_model\"):\n",
" m = opt._solver_model\n",
" print(\"Solved as MIP?\", bool(m.IsMIP))\n",
" print(\"Runtime (s):\", np.round(m.Runtime,2))\n",
" print(\"Nodes:\", m.NodeCount)\n",
" print(\"Termination:\", res.solver.termination_condition)\n",
" print(\"Optimal Solution Count:\", m.SolCount)\n"
]
},
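  {
   "cell_type": "markdown",
   "id": "a1f00003",
   "metadata": {},
   "source": [
    "Da der Solve oben per Zeitlimit oder Interrupt enden kann, lohnt eine kurze Statusprüfung, bevor Variablenwerte ausgelesen werden. Skizze unter der Annahme, dass `res` aus der Gurobi-Variante der vorherigen Zelle stammt; der Helfer `solve_status_text` ist frei benannt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f00004",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Skizze: Terminierungsstatus lesbar einordnen, bevor Ergebnisse interpretiert werden\n",
    "def solve_status_text(results):\n",
    "    \"\"\"Kurze Einordnung des Pyomo-Terminierungsstatus als Text.\"\"\"\n",
    "    tc = str(results.solver.termination_condition)\n",
    "    if tc == 'optimal':\n",
    "        return 'optimal gelöst'\n",
    "    if tc in ('maxTimeLimit', 'userInterrupt', 'aborted'):\n",
    "        return 'abgebrochen: bestes gefundenes Inkumbent wurde geladen'\n",
    "    return 'Status: ' + tc\n",
    "\n",
    "# print(solve_status_text(res))"
   ]
  },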
{
"cell_type": "code",
"execution_count": null,
"id": "6196c53d",
"metadata": {},
"outputs": [],
"source": [
    "# Definition zulässiger KLP-Zuordnungen (Veredlung darf nur Nochten/Welzow beziehen).\n",
    "# Hinweis: Die Fixierung von model.k wirkt nur, wenn diese Zelle VOR dem Solver-Aufruf läuft.\n",
"allowed_V = set(I_W + I_N) # ['Welzow', 'Nochten']\n",
"for i in model.I:\n",
" if i in allowed_V:\n",
" continue\n",
" for w in model.W:\n",
" for d in model.D:\n",
" for s in model.S:\n",
" model.k[i, 'V', w, d, s].fix(0)\n",
"\n",
"lieferungen_schicht = []\n",
"for j in model.J:\n",
" for w in model.W:\n",
" for d in model.D:\n",
" for s in model.S:\n",
" if j == 'V':\n",
" nachfrage = pyo.value(model.dV_N[w, d] + model.dV_W[w, d])\n",
" else:\n",
" nachfrage = pyo.value(model.d[j, w, d])\n",
" y_val = pyo.value(model.y_shift[j, w, d, s])\n",
" lieferungen_schicht.append({\n",
" \"kraftwerk\": j,\n",
" \"woche\": w,\n",
" \"tag\": d,\n",
" \"datum\": wd_to_date.get((w, d)),\n",
" \"schicht\": s,\n",
" \"nachfrage_tonnen\": nachfrage,\n",
" \"lieferung_tonnen\": y_val,\n",
" \"abweichung_tonnen\": y_val - nachfrage,\n",
" \"Nochten\": pyo.value(model.x['Nochten', j, w, d, s]),\n",
" \"Reichwalde\": pyo.value(model.x['Reichwalde', j, w, d, s]),\n",
" \"Welzow\": pyo.value(model.x['Welzow', j, w, d, s]),\n",
" })\n",
"\n",
"\n",
"order_k_pw = ['J', 'SP', 'B3', 'B4']\n",
"order_k_v = ['V']\n",
"order_sources = ['Reichwalde', 'Nochten', 'Welzow']\n",
"order_s = ['F', 'S', 'N']\n",
"\n",
"# DataFrame aus der vorherigen Zelle\n",
"df = pd.DataFrame(lieferungen_schicht).copy()\n",
"df['datum'] = pd.to_datetime(df['datum'])\n",
"\n",
"# Nachfrage-Split für V aus dem Modell ableiten (falls nicht schon vorhanden)\n",
"v_demand_map = {\n",
" (int(w), d): {\n",
" \"welzow\": float(pyo.value(model.dV_W[w, d])),\n",
" \"nochten\": float(pyo.value(model.dV_N[w, d])),\n",
" }\n",
" for w in model.W for d in model.D\n",
"}\n",
"df['nachfrage_welzow'] = df.get('nachfrage_welzow', pd.Series(index=df.index))\n",
"df['nachfrage_nochten'] = df.get('nachfrage_nochten', pd.Series(index=df.index))\n",
"df['nachfrage_welzow'] = df.apply(\n",
" lambda r: r['nachfrage_welzow'] if pd.notna(r['nachfrage_welzow']) else v_demand_map.get((int(r['woche']), r['tag']), {}).get('welzow', 0),\n",
" axis=1,\n",
")\n",
"df['nachfrage_nochten'] = df.apply(\n",
" lambda r: r['nachfrage_nochten'] if pd.notna(r['nachfrage_nochten']) else v_demand_map.get((int(r['woche']), r['tag']), {}).get('nochten', 0),\n",
" axis=1,\n",
")\n",
"\n",
"# Quellen-Summe erzeugen, falls nicht vorhanden\n",
"df = df.rename(columns={'lieferungen_tonnen': 'lieferung_tonnen'})\n",
"if 'lieferung_tonnen' not in df.columns:\n",
" df['lieferung_tonnen'] = df[order_sources].sum(axis=1)\n",
"\n",
"present_sources = [c for c in order_sources if c in df.columns]\n",
"\n",
"# Lieferungen nach Quelle/Schicht\n",
"df_src = (\n",
" df.pivot_table(\n",
" index=['datum', 'woche', 'tag'],\n",
" columns=['kraftwerk', 'schicht'],\n",
" values=present_sources,\n",
" aggfunc='sum',\n",
" )\n",
" .fillna(0)\n",
")\n",
"df_src.columns = df_src.columns.reorder_levels([1, 0, 2])\n",
"df_src = df_src.reindex(\n",
" columns=pd.MultiIndex.from_product([order_k_pw + order_k_v, present_sources, order_s]),\n",
" fill_value=0,\n",
")\n",
"\n",
"# Nachfrage je Kraftwerk (gesamt)\n",
"df_demand = (\n",
" df.groupby(['datum', 'woche', 'tag', 'kraftwerk'])['nachfrage_tonnen']\n",
" .first()\n",
" .unstack('kraftwerk')\n",
" .reindex(columns=order_k_pw + order_k_v, fill_value=0)\n",
" .reindex(df_src.index, fill_value=0)\n",
")\n",
"\n",
"# Nachfrage-Split für V (Welzow/Nochten)\n",
"df_v_demand_split = (\n",
" df[df['kraftwerk'] == 'V']\n",
" .groupby(['datum', 'woche', 'tag'])[['nachfrage_welzow', 'nachfrage_nochten']]\n",
" .first()\n",
" .reindex(df_src.index, fill_value=0)\n",
")\n",
"\n",
"# Gesamt je Kraftwerk (Tageslieferung)\n",
"totals_plain = df_src.T.groupby(level=0).sum().T\n",
"totals_plain = totals_plain.reindex(columns=order_k_pw + order_k_v, fill_value=0)\n",
"\n",
"# Tagesabweichung = Gesamt Nachfrage\n",
"day_diff_plain = totals_plain.sub(df_demand, fill_value=0)\n",
"\n",
"# MultiIndex Spalten aufbauen\n",
"totals = totals_plain.copy()\n",
"totals.columns = pd.MultiIndex.from_tuples([(k, 'Gesamt', '') for k in totals.columns])\n",
"\n",
"demand_cols = df_demand.copy()\n",
"demand_cols.columns = pd.MultiIndex.from_tuples([(k, 'Nachfrage', '') for k in demand_cols.columns])\n",
"\n",
"day_diff_cols = day_diff_plain.copy()\n",
"day_diff_cols.columns = pd.MultiIndex.from_tuples([(k, 'Tagesabweichung', '') for k in day_diff_cols.columns])\n",
"\n",
"# V-Nachfrage-Splits als MultiIndex-Spalten\n",
"v_demand_cols = df_v_demand_split.copy()\n",
"v_demand_cols.columns = pd.MultiIndex.from_tuples([\n",
" ('V', 'Nachfrage_Welzow', ''),\n",
" ('V', 'Nachfrage_Nochtener', ''),\n",
"])\n",
"\n",
"# Spalten-Reihenfolge\n",
"col_order = []\n",
"for k in order_k_pw:\n",
" for src in present_sources:\n",
" for sch in order_s:\n",
" col_order.append((k, src, sch))\n",
" col_order.append((k, 'Gesamt', ''))\n",
" col_order.append((k, 'Nachfrage', ''))\n",
" col_order.append((k, 'Tagesabweichung', ''))\n",
"\n",
"# Veredlung: Schichten, Nachfrage-Splits, Standardspalten\n",
"col_order += [\n",
" ('V', src, sch) for src in present_sources for sch in order_s\n",
"] + [\n",
" ('V', 'Nachfrage_Welzow', ''),\n",
" ('V', 'Nachfrage_Nochtener', ''),\n",
" ('V', 'Gesamt', ''),\n",
" ('V', 'Nachfrage', ''),\n",
" ('V', 'Tagesabweichung', ''),\n",
"]\n",
"\n",
"df_out = (\n",
" pd.concat([df_src, v_demand_cols, totals, demand_cols, day_diff_cols], axis=1)\n",
" .reindex(columns=col_order, fill_value=0)\n",
")\n",
"\n",
"weekday_order = ['Mo', 'Di', 'Mi', 'Do', 'Fr', 'Sa', 'So']\n",
"\n",
"idx = df_out.index\n",
"\n",
"df_out = df_out.copy()\n",
"df_out.index = pd.MultiIndex.from_arrays(\n",
" [\n",
" pd.to_datetime(idx.get_level_values('datum')),\n",
" idx.get_level_values('woche'),\n",
" pd.Categorical(\n",
" idx.get_level_values('tag'),\n",
" categories=weekday_order,\n",
" ordered=True,\n",
" ),\n",
" ],\n",
" names=['datum', 'woche', 'tag'],\n",
")\n",
"\n",
"df_out = df_out.sort_index(level=['datum', 'woche', 'tag'])\n",
"\n",
    "# Kopie in Tonnen für den Excel-Export behalten; df_out für die Anzeige in kt umrechnen\n",
    "df = df_out.copy()\n",
    "df_out = df_out / 1000\n",
"\n",
"df.to_excel(\"lieferungen_nach_kraftwerk_woche_tag_schicht.xlsx\")"
]
},
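  {
   "cell_type": "markdown",
   "id": "a1f00005",
   "metadata": {},
   "source": [
    "Die Schichttabelle lässt sich direkt gegen die Modelllogik prüfen: je Schicht sollte die Lieferung `y_shift` der Summe der drei Tagebau-Flüsse entsprechen. Skizze unter der Annahme, dass `lieferungen_schicht` aus der Zelle oben vorliegt; `max_quellen_abweichung` ist ein frei gewählter Name."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f00006",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# Skizze: Konsistenzcheck Schichtlieferung vs. Summe der Quellspalten\n",
    "def max_quellen_abweichung(records, quellen=('Nochten', 'Reichwalde', 'Welzow')):\n",
    "    \"\"\"Maximale absolute Differenz (in t) zwischen Schichtlieferung und Quellsumme.\"\"\"\n",
    "    chk = pd.DataFrame(records)\n",
    "    diff = chk[list(quellen)].sum(axis=1) - chk['lieferung_tonnen']\n",
    "    return float(diff.abs().max())\n",
    "\n",
    "# print('max. Abweichung [t]:', max_quellen_abweichung(lieferungen_schicht))"
   ]
  },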
{
"cell_type": "markdown",
"id": "c5cb34a3",
"metadata": {},
"source": [
"## Plan: formatierte Liefertabellen"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f3067008",
"metadata": {},
"outputs": [],
"source": [
"def style_table(df):\n",
" styler = df.style\n",
"\n",
" styler = styler.format(\"{:.1f}\")\n",
"\n",
" styler = styler.set_table_styles([\n",
" {\"selector\": \"tbody tr:nth-child(odd)\", \"props\": [(\"background-color\", \"#fafafa\"), (\"color\", \"#111\")]},\n",
" {\"selector\": \"tbody tr:nth-child(even)\", \"props\": [(\"background-color\", \"#eaeaea\"), (\"color\", \"#111\")]},\n",
" {\"selector\": \"tbody td\", \"props\": [(\"border-bottom\", \"2px solid #666\")]},\n",
" {\"selector\": \"td, th\", \"props\": [(\"font-size\", \"12px\"), (\"padding\", \"4px 6px\")]},\n",
" ])\n",
"\n",
" # Palette für Top-Level-Gruppen (Kraftwerke); wird zyklisch genutzt\n",
" palette = [\n",
" \"#2c3e50\",\n",
" \"#16a085\",\n",
" \"#8e44ad\",\n",
" \"#d35400\",\n",
" \"#7f8c8d\",\n",
" \"#2980b9\",\n",
" \"#c0392b\",\n",
" \"#27ae60\",\n",
" ]\n",
"\n",
" top_level = df.columns.get_level_values(0)\n",
" mid_level = df.columns.get_level_values(1)\n",
"\n",
" groups = list(dict.fromkeys(top_level))\n",
" for i, grp in enumerate(groups):\n",
" color = palette[i % len(palette)]\n",
" for col_idx, g in enumerate(top_level):\n",
" if g == grp:\n",
" styler = styler.set_table_styles([\n",
" {\"selector\": f\"th.col_heading.level0.col{col_idx}\",\n",
" \"props\": [(\"background-color\", color), (\"color\", \"white\")]}\n",
" ], overwrite=False)\n",
"\n",
" prev_top = top_level[0]\n",
" for idx in range(1, len(top_level)):\n",
" cur_top = top_level[idx]\n",
" if cur_top != prev_top:\n",
" styler = styler.set_table_styles([\n",
" {\"selector\": f\"th.col_heading.col{idx}\", \"props\": [(\"border-left\", \"4px solid #444\")]},\n",
" {\"selector\": f\"th.col_heading.level0.col{idx}\", \"props\": [(\"border-left\", \"4px solid #444\")]},\n",
" {\"selector\": f\"th.col_heading.level1.col{idx}\", \"props\": [(\"border-left\", \"4px solid #444\")]},\n",
" {\"selector\": f\"th.col_heading.level2.col{idx}\", \"props\": [(\"border-left\", \"4px solid #444\")]},\n",
" {\"selector\": f\"td.col{idx}\", \"props\": [(\"border-left\", \"4px solid #444\")]},\n",
" ], overwrite=False)\n",
" prev_top = cur_top\n",
"\n",
" prev_mid = mid_level[0]\n",
" for idx in range(1, len(mid_level)):\n",
" cur_mid = mid_level[idx]\n",
" if top_level[idx] == top_level[idx - 1]:\n",
" if cur_mid == prev_mid:\n",
" width_head = \"1px solid #aaa\"\n",
" width_body = \"1px solid #ddd\"\n",
" else:\n",
" width_head = \"2px solid #444\"\n",
" width_body = \"2px solid #444\"\n",
"\n",
" styler = styler.set_table_styles([\n",
" {\"selector\": f\"th.col_heading.level1.col{idx}\", \"props\": [(\"border-left\", width_head)]},\n",
" {\"selector\": f\"th.col_heading.level2.col{idx}\", \"props\": [(\"border-left\", width_head)]},\n",
" {\"selector\": f\"td.col{idx}\", \"props\": [(\"border-left\", width_body)]},\n",
" ], overwrite=False)\n",
" prev_mid = cur_mid\n",
"\n",
" return styler\n",
"\n",
"styled = style_table(df_out)\n",
"styled\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "12650ec2",
"metadata": {},
"outputs": [],
"source": [
    "# Identische Formatierung wie oben: style_table aus der vorherigen Zelle wiederverwenden,\n",
    "# hier angewendet auf die Tabelle in Tonnen (df).\n",
    "styled = style_table(df)\n",
    "styled\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "59776a72",
"metadata": {},
"outputs": [],
"source": [
"# Anteil je Kraftwerk (in %) nach Kohlequelle, pro Tag\n",
"import pandas as pd\n",
"import pyomo.environ as pyo\n",
"\n",
"j_rev = {'J': 'Jänschwalde', 'SP': 'Schwarze Pumpe', 'B3': 'Boxberg Werk 3', 'B4': 'Boxberg Werk 4'}\n",
"i_rev = {'Reichwalde': 'Reichwalder-Kohle', 'Nochten': 'Nochtener-Kohle', 'Welzow': 'Welzower-Kohle'}\n",
"\n",
"rows = []\n",
"for j in model.J:\n",
" for w in model.W:\n",
" for d in model.D:\n",
" y = pyo.value(model.y_day[j, w, d])\n",
" if y is None or y <= 0:\n",
" continue\n",
" for i in model.I:\n",
" x = pyo.value(model.x_day[i, j, w, d])\n",
" share_pct = 100.0 * x / y if y > 0 else 0.0\n",
" rows.append({\n",
" \"kraftwerk\": j_rev.get(j, j),\n",
" \"woche\": w,\n",
" \"tag\": d,\n",
" \"kohlesorte\": i_rev.get(i, i),\n",
" \"anteil_pct\": share_pct,\n",
" })\n",
"\n",
"mix_day_df = pd.DataFrame(rows).sort_values([\"kraftwerk\", \"woche\", \"tag\", \"kohlesorte\"])\n",
"\n",
"# Pivot: Woche/Tag als Index, doppelte Spaltenebene (kraftwerk, kohlesorte)\n",
"mix_pivot = mix_day_df.pivot_table(\n",
" index=[\"woche\", \"tag\"],\n",
" columns=[\"kraftwerk\", \"kohlesorte\"],\n",
" values=\"anteil_pct\",\n",
" aggfunc=\"sum\",\n",
")\n",
"\n",
"mix_pivot.round(2)\n"
]
},
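  {
   "cell_type": "markdown",
   "id": "a1f00007",
   "metadata": {},
   "source": [
    "Ein schneller Plausibilitätscheck: die Kohlesorten-Anteile sollten sich je Kraftwerk und Tag zu etwa 100 % summieren. Skizze unter der Annahme, dass `mix_pivot` (Spalten-MultiIndex kraftwerk/kohlesorte) aus der Zelle oben stammt; `anteil_summen` ist frei benannt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f00008",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Skizze: Summe der Anteile je Kraftwerk bilden (sollte nahe 100 liegen)\n",
    "def anteil_summen(pivot):\n",
    "    \"\"\"Summiert die Prozentanteile über die Kohlesorten je Kraftwerk (NaN zählt als 0).\"\"\"\n",
    "    return pivot.fillna(0).T.groupby(level=0).sum().T\n",
    "\n",
    "# anteil_summen(mix_pivot).round(1)"
   ]
  },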
{
"cell_type": "code",
"execution_count": null,
"id": "3027e40d",
"metadata": {},
"outputs": [],
"source": [
"# Zielbereich aus Tabelle holen\n",
"target = tables[\"kohle_mix\"][[\"kraftwerk\", \"kohlesorte\", \"ziel_low\", \"ziel_high\"]].copy()\n",
"\n",
"# Ist-Anteile (mix_day_df) mit Zielbereich mergen\n",
"cmp_df = mix_day_df.merge(target, on=[\"kraftwerk\", \"kohlesorte\"], how=\"left\")\n",
"\n",
"# Abweichungen berechnen (in Prozentpunkten)\n",
"cmp_df[\"unter_ziel_pp\"] = (cmp_df[\"ziel_low\"] * 100 - cmp_df[\"anteil_pct\"]).clip(lower=0)\n",
"cmp_df[\"ueber_ziel_pp\"] = (cmp_df[\"anteil_pct\"] - cmp_df[\"ziel_high\"] * 100).clip(lower=0)\n",
"\n",
"# Optional: Flag, ob innerhalb Zielband\n",
"cmp_df[\"im_zielband\"] = (cmp_df[\"anteil_pct\"] >= cmp_df[\"ziel_low\"] * 100) & (cmp_df[\"anteil_pct\"] <= cmp_df[\"ziel_high\"] * 100)\n",
"\n",
"cmp_df = cmp_df.round(2)\n"
]
},
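  {
   "cell_type": "markdown",
   "id": "a1f00009",
   "metadata": {},
   "source": [
    "Zur Verdichtung der Zielband-Auswertung bietet sich eine Gruppierung je Kraftwerk und Kohlesorte an. Skizze unter der Annahme, dass `cmp_df` aus der Zelle oben vorliegt; `zielband_summary` ist ein frei gewählter Name."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f0000a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Skizze: je Kraftwerk/Kohlesorte Trefferquote im Zielband und maximale Abweichungen\n",
    "def zielband_summary(df):\n",
    "    \"\"\"Anzahl Tage, Treffer im Zielband sowie maximale Unter-/Überschreitung in pp.\"\"\"\n",
    "    return df.groupby(['kraftwerk', 'kohlesorte']).agg(\n",
    "        tage=('im_zielband', 'size'),\n",
    "        im_zielband=('im_zielband', 'sum'),\n",
    "        max_unter_pp=('unter_ziel_pp', 'max'),\n",
    "        max_ueber_pp=('ueber_ziel_pp', 'max'),\n",
    "    )\n",
    "\n",
    "# zielband_summary(cmp_df)"
   ]
  },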
{
"cell_type": "markdown",
"id": "027b86d1",
"metadata": {},
"source": [
"# Backup: Visualisierungsentwürfe"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "230530ce",
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import matplotlib.patches as mpatches\n",
"import pandas as pd\n",
"\n",
"idx = pd.IndexSlice\n",
"\n",
    "# Totale je Kraftwerk (Gesamtspalte aus df_out).\n",
    "# Hinweis: df_out wurde in der Aufbereitung bereits durch 1000 geteilt, die Werte liegen also in kt vor.\n",
"values = (\n",
" df_out.loc[:, idx[:, \"Gesamt\", \"\"]]\n",
" .sum() # über alle Wochen/Tag/Schicht\n",
" .droplevel([1, 2]) # nur Kraftwerks-Level behalten\n",
" .reindex(order_k_pw + order_k_v) # Reihenfolge aus der Aufbereitung\n",
")\n",
"\n",
"labels = {\n",
" \"J\": \"KW Jänschwalde\",\n",
" \"SP\": \"KW Schwarze Pumpe\",\n",
" \"B3\": \"KW Boxberg Werk 3\",\n",
" \"B4\": \"KW Boxberg Werk 4\",\n",
" \"V\": \"Veredlung\",\n",
"}\n",
"colors = {\n",
" \"J\": \"#2f9565\",\n",
" \"SP\": \"#46c1ee\",\n",
" \"B3\": \"#e26100\",\n",
" \"B4\": \"#f6b216\",\n",
" \"V\": \"#e4e15a\",\n",
"}\n",
"\n",
"total = values.sum()\n",
"share = values / total\n",
"\n",
"fig = plt.figure(figsize=(16, 7))\n",
"ax = fig.add_axes([0.12, 0.55, 0.8, 0.2]) # Achse für den Balken\n",
"ax.axis(\"off\")\n",
"\n",
"x0 = 0\n",
"for k, val in values.items():\n",
" ax.barh(0, width=val, left=x0, height=0.45,\n",
" color=colors[k], edgecolor=\"white\", linewidth=1.3)\n",
" ax.text(x0 + val / 2, 0, f\"{val:,.0f} ({share[k]*100:.0f}%)\",\n",
" ha=\"center\", va=\"center\", color=\"white\", fontsize=14, fontweight=\"bold\")\n",
" x0 += val\n",
"\n",
"ax.set_xlim(-0.05 * total, 1.05 * total)\n",
    "ax.text(-0.02 * total, 0, \"Rohkohlebedarf [kt]\",\n",
    "        ha=\"right\", va=\"center\", fontsize=15, color=\"#235472\")\n",
    "ax.text(total * 1.01, 0, f\"{total:,.0f} kt\",\n",
    "        va=\"center\", fontsize=15, color=\"#235472\")\n",
"\n",
"# Titelzeilen\n",
"fig.text(0.12, 0.93, \"Rohkohleverteilung November 2025\",\n",
" fontsize=28, fontweight=\"bold\", color=\"#235472\")\n",
"# fig.text(0.12, 0.87, \"Scope 1 Increment 1 Grafische Überlegungen\",\n",
"# fontsize=24, color=\"#235472\")\n",
"\n",
"# Legendengruppe links unterhalb\n",
"legend_ax = fig.add_axes([0.12, 0.25, 0.35, 0.25])\n",
"legend_ax.axis(\"off\")\n",
"for i, (k, txt) in enumerate(labels.items()):\n",
" legend_ax.add_patch(plt.Rectangle((0, -i * 1.05), 0.35, 0.8, color=colors[k]))\n",
" legend_ax.text(0.45, -i * 1.05 + 0.4, txt,\n",
" va=\"center\", fontsize=16, color=\"#235472\")\n",
"legend_ax.set_xlim(0, 3)\n",
"legend_ax.set_ylim(-len(labels) * 1.1, 1)\n",
"\n",
"plt.show()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d7795538",
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import matplotlib.patches as mpatches\n",
"import pandas as pd\n",
"\n",
"idx = pd.IndexSlice\n",
"\n",
"# Farben wie im Beispiel\n",
"colors_src = {\n",
" \"Welzow\": \"#e26100\", # TB Welzow-Süd\n",
" \"Nochten\": \"#f6b216\",\n",
" \"Reichwalde\": \"#e4e15a\",\n",
"}\n",
"\n",
"# Beiträge je Kraftwerk x Quelle aufsummieren (über alle Schichten / Tage / Wochen)\n",
"stack = []\n",
"for kw in order_k_pw + order_k_v:\n",
" row = {}\n",
" for src in order_sources:\n",
" if (kw, src) in {(c[0], c[1]) for c in df_out.columns}:\n",
" row[src] = df_out.loc[:, idx[kw, src, :]].sum().sum()\n",
" else:\n",
" row[src] = 0\n",
" stack.append(pd.Series(row, name=kw))\n",
"stack = pd.DataFrame(stack).reindex(order_k_pw + order_k_v)\n",
"\n",
"totals = stack.sum(axis=1)\n",
"max_total = totals.max()\n",
"height = 0.85\n",
"y_pos = range(len(stack))\n",
"\n",
"fig, ax = plt.subplots(figsize=(16, 8))\n",
"ax.axvline(0, color=\"#235472\", linewidth=2)\n",
"\n",
"for src in order_sources:\n",
" vals = stack[src].values\n",
" pct = vals / totals.replace(0, pd.NA).values\n",
" left = stack.loc[:, order_sources].cumsum(axis=1)[src].values - vals\n",
" bars = ax.barh(y_pos, vals, left=left, height=height,\n",
" color=colors_src[src], edgecolor=\"white\", linewidth=1.4, label=f\"TB {src}\")\n",
" for i, (w, p) in enumerate(zip(vals, pct)):\n",
" if w > 0:\n",
" ax.text(left[i] + w / 2, y_pos[i], f\"{w:,.0f} ({p*100:.0f}%)\",\n",
" ha=\"center\", va=\"center\", color=\"white\", fontsize=14, fontweight=\"bold\")\n",
" else:\n",
" # Optional: Null-Anzeige direkt links neben der Achse\n",
" ax.text(max(left) * 0.01, y_pos[i] + 0.02, \"0 (0%)\",\n",
" ha=\"left\", va=\"center\", color=\"#235472\", fontsize=12)\n",
"\n",
"# Totals rechts anzeigen\n",
"for i, tot in enumerate(totals):\n",
" ax.text(tot + max_total * 0.015, y_pos[i], f\"{tot:,.0f}\",\n",
" va=\"center\", color=\"#235472\", fontsize=15)\n",
"\n",
"ax.set_yticks(list(y_pos))\n",
"ax.set_yticklabels([\n",
" \"KW Jänschwalde\", \"KW Schwarze Pumpe\",\n",
" \"KW Boxberg Werk 3\", \"KW Boxberg Werk 4\",\n",
" \"Veredlung\"\n",
"], fontsize=15, color=\"#235472\")\n",
"\n",
"ax.set_xticks([])\n",
"ax.tick_params(axis=\"y\", length=0)\n",
"ax.spines[:].set_visible(False)\n",
"\n",
"# Titel\n",
"fig.text(0.12, 0.93, \"Rohkohleverteilung\",\n",
" fontsize=28, fontweight=\"bold\", color=\"#235472\")\n",
"# fig.text(0.12, 0.87, \"Scope 1 Increment 1 Grafische Überlegungen\",\n",
"# fontsize=24, color=\"#235472\")\n",
"\n",
"# Legende\n",
"handles = [mpatches.Patch(color=colors_src[s], label=f\"TB {s}\") for s in order_sources]\n",
"ax.legend(handles=handles, loc=\"upper right\", bbox_to_anchor=(1.04, 0.92), frameon=False, fontsize=14)\n",
"\n",
"fig.tight_layout(rect=[0, 0, 1, 0.86])\n",
"plt.show()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45c64f1f",
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import pandas as pd\n",
"\n",
"idx = pd.IndexSlice\n",
"\n",
"sources = [\"Welzow\", \"Nochten\", \"Reichwalde\"] # Tagebaue\n",
"consumers = order_k_pw + order_k_v # ['J','SP','B3','B4','V']\n",
"\n",
"colors_kw = {\n",
" \"J\": \"#2f9565\",\n",
" \"SP\": \"#46c1ee\",\n",
" \"B3\": \"#e26100\",\n",
" \"B4\": \"#f6b216\",\n",
" \"V\": \"#e4e15a\",\n",
"}\n",
"labels_kw = {\n",
" \"J\": \"KW Jänschwalde\",\n",
" \"SP\": \"KW Schwarze Pumpe\",\n",
" \"B3\": \"KW Boxberg Werk 3\",\n",
" \"B4\": \"KW Boxberg Werk 4\",\n",
" \"V\": \"Veredlung\",\n",
"}\n",
"\n",
"# Summen pro Quelle -> Verbraucher (über alle Schichten/Tage/Wochen)\n",
"stack = {}\n",
"for src in sources:\n",
" vals = []\n",
" for kw in consumers:\n",
" try:\n",
" vals.append(df_out.loc[:, idx[kw, src, :]].sum().sum())\n",
" except KeyError:\n",
" vals.append(0)\n",
" stack[src] = pd.Series(vals, index=consumers)\n",
"\n",
"stack_df = pd.DataFrame(stack)\n",
"\n",
    "# Hinweis: df_out liegt bereits in kt vor; für Tonnen wäre stack_df * 1000 nötig\n",
"\n",
"totals = stack_df.sum(axis=0)\n",
"\n",
"def fmt(x):\n",
" return f\"{x:,.0f}\".replace(\",\", \".\")\n",
"\n",
"fig, ax = plt.subplots(figsize=(16, 8))\n",
"\n",
"x_pos = range(len(sources))\n",
"bar_width = 0.55\n",
"\n",
    "for kw in consumers:\n",
    "    # Untergrenze des Segments = Summe der bereits gestapelten Kraftwerke\n",
    "    bottoms = stack_df.loc[consumers].loc[:kw].iloc[:-1].sum(axis=0) if kw != consumers[0] else 0\n",
    "    ax.bar(x_pos, stack_df.loc[kw], bottom=bottoms,\n",
    "           color=colors_kw[kw], width=bar_width, edgecolor=\"white\", label=labels_kw[kw])\n",
"\n",
"# Beschriftungen pro Segment\n",
"cum = pd.DataFrame(0, index=consumers, columns=sources)\n",
"for kw in consumers:\n",
" for i, src in enumerate(sources):\n",
" val = stack_df.loc[kw, src]\n",
" bottom = cum.loc[kw, src]\n",
" if val > 0:\n",
" pct = val / totals[src] if totals[src] > 0 else 0\n",
" ax.text(i, bottom + val/2, f\"{fmt(val)} ({pct*100:.0f}%)\",\n",
" ha=\"center\", va=\"center\", color=\"white\", fontsize=14, fontweight=\"bold\")\n",
" cum.loc[consumers[consumers.index(kw):], src] += val\n",
"\n",
"# Totals oben\n",
"for i, src in enumerate(sources):\n",
    "    ax.text(i, totals[src] + totals.max()*0.02, f\"{fmt(totals[src])} kt\", ha=\"center\", va=\"bottom\",\n",
" fontsize=14, color=\"#235472\")\n",
"\n",
"ax.set_xticks(list(x_pos))\n",
"ax.set_xticklabels([\"TB Welzow-Süd\", \"TB Nochten\", \"TB Reichwalde\"], fontsize=15, color=\"#235472\")\n",
"ax.set_xlim(-0.4, len(sources)-0.6)\n",
"ax.axhline(0, color=\"#235472\", linewidth=2)\n",
"ax.spines[:].set_visible(False)\n",
"ax.set_yticks([])\n",
"\n",
"# Titel\n",
"fig.text(0.08, 0.93, \"Rohkohleverteilung\",\n",
" fontsize=28, fontweight=\"bold\", color=\"#235472\")\n",
"# fig.text(0.08, 0.87, \"Scope 1 Increment 1 Grafische Überlegungen\",\n",
"# fontsize=24, color=\"#235472\")\n",
"\n",
"# Legende links/unten\n",
"handles = [plt.Rectangle((0,0),1,1,color=colors_kw[k]) for k in consumers]\n",
"ax.legend(handles, [labels_kw[k] for k in consumers],\n",
" bbox_to_anchor=(0.02, 0.42), frameon=False, fontsize=14)\n",
"\n",
"fig.tight_layout(rect=[0,0,1,0.84])\n",
"plt.show()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4054c8fa",
"metadata": {},
"outputs": [],
"source": [
"import plotly.graph_objects as go\n",
"import pandas as pd\n",
"\n",
"idx = pd.IndexSlice\n",
"sources = [\"Welzow\", \"Nochten\", \"Reichwalde\"]\n",
"consumers = order_k_pw + order_k_v # ['J','SP','B3','B4','V']\n",
"\n",
"colors_kw = {\n",
" \"J\": \"#2f9565\",\n",
" \"SP\": \"#46c1ee\",\n",
" \"B3\": \"#e26100\",\n",
" \"B4\": \"#f6b216\",\n",
" \"V\": \"#e4e15a\",\n",
"}\n",
"labels_kw = {\n",
" \"J\": \"KW Jänschwalde\",\n",
" \"SP\": \"KW Schwarze Pumpe\",\n",
" \"B3\": \"KW Boxberg Werk 3\",\n",
" \"B4\": \"KW Boxberg Werk 4\",\n",
" \"V\": \"Veredlung\",\n",
"}\n",
"\n",
"# Summen pro Quelle -> Verbraucher\n",
"stack = {}\n",
"for src in sources:\n",
" vals = []\n",
" for kw in consumers:\n",
" try:\n",
" vals.append(df_out.loc[:, idx[kw, src, :]].sum().sum())\n",
" except KeyError:\n",
" vals.append(0)\n",
" stack[src] = pd.Series(vals, index=consumers)\n",
"stack_df = pd.DataFrame(stack)\n",
"\n",
"# Optional: in kt\n",
"# stack_df = stack_df / 1000\n",
"\n",
"totals = stack_df.sum(axis=0)\n",
"\n",
"fig = go.Figure()\n",
"\n",
"for kw in consumers:\n",
" custom_pct = [\n",
" (v / totals[s]) if totals[s] > 0 else 0\n",
" for v, s in zip(stack_df.loc[kw], sources)\n",
" ]\n",
" fig.add_bar(\n",
" x=sources,\n",
" y=stack_df.loc[kw],\n",
" name=labels_kw[kw],\n",
" marker_color=colors_kw[kw],\n",
" text=[\n",
" f\"{v:,.0f} ({(v/totals[s])*100:.0f}%)\" if totals[s] > 0 and v > 0 else \"\"\n",
" for v, s in zip(stack_df.loc[kw], sources)\n",
" ],\n",
" textposition=\"inside\",\n",
" textfont=dict(color=\"white\", size=14),\n",
" customdata=custom_pct, # Prozent pro Säule\n",
" hovertemplate=\"%{y:,.0f} (%{customdata:.0%})<extra>\" + labels_kw[kw] + \"</extra>\",\n",
" )\n",
"\n",
"# Totals oben\n",
"for src in sources:\n",
" fig.add_annotation(\n",
" x=src, y=totals[src],\n",
" text=f\"{totals[src]:,.0f} t\",\n",
" showarrow=False, yshift=18, font=dict(size=14, color=\"#235472\")\n",
" )\n",
"\n",
"fig.update_layout(\n",
" barmode=\"stack\",\n",
" title={\n",
" \"text\": \"PoC 1 Rohkohleverteilung<br><span style='font-size:20px;'>Scope 1 Increment 1 Grafische Überlegungen</span>\",\n",
" \"x\": 0.03, \"xanchor\": \"left\", \"y\": 0.95\n",
" },\n",
" xaxis_title=\"Tagebau\",\n",
" yaxis_title=\"Menge\",\n",
" legend_title=\"Verbraucher\",\n",
" legend=dict(orientation=\"h\", yanchor=\"bottom\", y=-0.15, xanchor=\"left\", x=0),\n",
" plot_bgcolor=\"white\",\n",
" paper_bgcolor=\"white\",\n",
" margin=dict(l=40, r=40, t=90, b=80),\n",
")\n",
"\n",
"fig.update_xaxes(showline=True, linewidth=2, linecolor=\"#235472\", ticks=\"outside\")\n",
"fig.update_yaxes(showgrid=False, zeroline=True, zerolinecolor=\"#235472\", zerolinewidth=2)\n",
"\n",
"fig.show()\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "braunkohle-supply-chain-poc1",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}