Charge power limit: 7kW (set in both ESS and DESS settings)
VenusOS 3.80~4.
Observed behavior:
Every 15 minutes, DESS sends a new SOC target to the inverter. The inverter starts charging at 7kW, reaches that target within ~1.5 minutes, and charging power decreases. This repeats every quarter hour identically.
Confirmed via Node-RED DESS stats node: current SOC 77%, DESS target SOC 79% — only 2% delta, despite the day plan showing a target of 98% for the next hour.
The behaviour was also observed in the 3.70 betas, but I thought it was caused by the Seplos BMS 2.0. I upgraded the batteries with a new BMS, but the behaviour is the same.
Root cause hypothesis:
DESS appears to set an hourly SOC target (e.g. 79% at end of current hour, 98% at end of next hour), but communicates this same target to the inverter every 15 minutes. The inverter interprets this as “reach this SOC now” rather than “reach this SOC by end of hour.”
With a 7kW charge limit on a 48kWh battery, a 2% SOC step (~1kWh) should take roughly 8 minutes at full power, yet the inverter throttles after only ~90 seconds and then waits for the next 15-minute interval, as if the target were already met.
Result: effective charge power averages ~700W instead of 7kW, and the battery never reaches its planned SOC targets.
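For reference, the back-of-the-envelope arithmetic behind the ~700W figure, as a small sketch using only the numbers from this post:

```javascript
// Effective average charge power if the inverter runs at the 7kW limit
// for ~1.5 minutes of every 15-minute window and is near-idle otherwise.
const chargeLimitKw = 7;
const activeMinutes = 1.5;   // observed full-power burst per quarter
const windowMinutes = 15;

const averageKw = chargeLimitKw * (activeMinutes / windowMinutes);
console.log(averageKw); // 0.7 -> ~700W, an order of magnitude below the limit
```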
What rules out other causes:
Disabling DESS and setting minimum SoC to 100% → inverter immediately charges at full 7kW → confirms DESS is the source
SOC is in 70-80% range → no BMS or CV-phase throttling
PV stable, household consumption stable
No scheduled charges conflicting
Day plan looks correct (charge now, discharge later)
Expected behavior:
DESS should either communicate a progressive per-quarter SOC target toward the hourly goal, or communicate the end-of-hour target once and let the inverter pursue it continuously.
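To illustrate the first option, progressive per-quarter targets could be derived like this (a sketch only; `quarterTargets` is a hypothetical helper, not DESS's actual logic):

```javascript
// Split the remaining SOC delta for the hour into four quarter targets,
// so each 15-minute window gets an achievable intermediate goal.
function quarterTargets(currentSoc, hourEndSoc) {
    const step = (hourEndSoc - currentSoc) / 4;
    return [1, 2, 3, 4].map(q => +(currentSoc + step * q).toFixed(2));
}

console.log(quarterTargets(77, 98)); // [82.25, 87.5, 92.75, 98]
```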
There is very little in the base Venus OS that directly affects DESS, certainly from a scheduling and logic perspective. The majority of the logic lives externally in VRM, so beta changes should have little impact on DESS unless a common underlying piece of functionality (DVCC, ESS, etc.) has had structural changes. Such changes are typically documented in the release notes and would affect all users, like the recent peak-shaving issue introduced in 3.70 and addressed in the upcoming 3.71 release.
Hi Nick, thanks for your answer. I thought it was meaningful to mention my VenusOS release, since VenusOS and VRM clearly communicate, and the issue could be in VRM/DESS or VRM/DESS/VenusOS. I also know about the peak-shaving issue; I was (one of) the reporter(s) of it in the 3.70 beta.
Thanks Rob, it certainly is helpful, as is the clear and concise way you stated the issue; that makes it so much easier for people to help and is greatly appreciated.
My note was not for your benefit, but rather for the many other users who don't fully understand the system and opt for betas without appreciating the challenges this can introduce.
The slot at 1:00 is missing due to a vrm-stats request timeout.
Every 15 minutes, the same pattern repeats: charge power briefly spikes to ~7kW at the quarter-hour mark, then throttles back to ~4–5kW, dropping further to ~2.5–3kW in the final minutes before the next quarter. At 04:02–04:15, charging stopped completely for 13 minutes.
The 04:15 moment stands out: slot[0] = 64%, actual SOC = 64%, charging stopped completely.
A possible explanation?
Could DESS be using slot[0] as a hard per-quarter target rather than an end-of-hour goal? If so, and if the system charges slightly faster than DESS expects, the target gets hit early each quarter — causing the throttling and occasional full stops.
Is this known behavior with hourly tariffs, or worth investigating further?
This is likely a large part of the issue here. Are you able to use a BMV (0.1% reported SoC) as the battery monitor? I use a BMV and reconstruct SoC from the much higher-precision consumedAh value: 1 + ( consumedAh / capacity ), then populate a virtual battery with all the BMV's values, except that the SoC is replaced with the improved one. That achieves real 0.01% precision and solved problems that looked much the same as yours. You might not even need a BMV for this, depending on what your Seplos BMS reports.
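For what it's worth, the reconstruction could look like this in a Node-RED function node (a sketch assuming `msg.payload` carries the BMV's consumed-Ah reading, which is negative once energy has been drawn; `BATTERY_CAPACITY_AH` is a value you set yourself):

```javascript
// Rebuild a high-precision SoC from consumedAh: SoC = 1 + consumedAh/capacity.
const BATTERY_CAPACITY_AH = 330;      // assumption: set to your bank's capacity
const consumedAh = msg.payload;       // e.g. -63.4 Ah drawn since full

const soc = (1 + consumedAh / BATTERY_CAPACITY_AH) * 100; // in percent
msg.payload = Math.round(soc * 100) / 100;                // e.g. 80.79, not 81
return msg;
```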
Definitely not intended behaviour, and hourly tariffs should not factor into this issue; the scheduler is 15-minute-window based regardless. VenusOS runs a 5-second loop that interpolates between the TargetSoc at the start and at the end of the 15-minute window to figure out whether the actual SoC is higher or lower than expected, and drives the inverter (power) accordingly. This is very hard to do right with a BMS that only reports integer SoC. On top of that, the scheduler attempts to realign the TargetSoc values every 15 minutes or so. You could disconnect the external internet for one or two hours to exclude the scheduler's realignment effort and get a clearer insight into the schedule-execution process.
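A rough sketch of that interpolation loop as described (not Victron's actual code):

```javascript
// Linear interpolation between the TargetSoc at the start and end of the
// 15-minute window; evaluated every ~5 seconds.
function expectedSoc(startSoc, endSoc, elapsedSec) {
    const WINDOW_SEC = 15 * 60;
    const f = Math.min(elapsedSec / WINDOW_SEC, 1);
    return startSoc + (endSoc - startSoc) * f;
}

// Drive decision: behind the interpolated line -> raise charge power,
// ahead of it -> throttle. With integer SoC the "actual" value moves in
// 1% jumps, so this comparison flip-flops around the line.
function behindSchedule(actualSoc, startSoc, endSoc, elapsedSec) {
    return actualSoc < expectedSoc(startSoc, endSoc, elapsedSec);
}
```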
Yup, if you use the VRM API set to hourly reporting. But 'stats' is not reporting what gets sent to VenusOS. You need to look at what the D-BUS is actually populated with 'under the hood'.
The under-the-hood values are indeed a bit different, but not by much: currently 48% for the next quarter, while the SoC is at 46%, so only a 2% increase is planned. For the inverter to actually charge at the 7kW power limit, an SoC increase of around 3–4% per quarter would have to be planned.
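For context, the 3–4% figure follows directly from the opening post's numbers (sketch):

```javascript
// Energy delivered in one 15-minute window at the 7kW limit, expressed
// as an SOC increase on a 48kWh battery.
const capacityKwh = 48;
const chargeLimitKw = 7;

const energyPerQuarterKwh = chargeLimitKw * 0.25;            // 1.75 kWh
const socStepPct = 100 * energyPerQuarterKwh / capacityKwh;  // ≈ 3.65 %
console.log(socStepPct.toFixed(2)); // "3.65" -> in the quoted 3-4% range
```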
That might actually be a bug (1). Lots of things changed with VRM API during v3.70 beta, including regressions. I only trust what I see on the D-BUS itself.
Here's a trick: use Victron input nodes to subscribe to the D-BUS paths …Schedule/x/Duration, /x/Start and /x/TargetSoc for 0 <= x <= 3, then monitor those values getting populated in your global context.
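A minimal function-node sketch of that trick, assuming each Victron input node passes the subscribed D-Bus path in `msg.topic` and the value in `msg.payload` (the `dessSchedule` context key is just my choice of name):

```javascript
// Collect Schedule/x/{Duration,Start,TargetSoc} for slots 0..3 into
// global context so you can watch them change over time.
const schedule = global.get("dessSchedule") || {};

const m = msg.topic.match(/Schedule\/([0-3])\/(Duration|Start|TargetSoc)$/);
if (m) {
    const [, slot, field] = m;
    schedule[slot] = schedule[slot] || {};
    schedule[slot][field] = msg.payload;
    schedule[slot].seenAt = new Date().toISOString(); // when it last changed
    global.set("dessSchedule", schedule);
}
return null; // nothing forwarded; this node only records
```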
(1) I remember the most reliable way to poll stats via the VRM API was to construct the exact path/value in an (inject) node to override what is set in the VRM API node itself. I'd have to check later if I can find an example you can copy. Better is to use an MQTT input node to subscribe to the exact same broker that VenusOS does; you'd have to ssh into VenusOS to extract the correct credentials, but that is the best way to see exactly what is sent to your D-Bus. It doesn't require enabling the MQTT (republish) broker on VenusOS either.
I am assuming your DESS power values are reasonably rational. But as a quick test to see if there are issues there, you could set your grid limits to the actual absolute maximum achievable (and allowed by the mains fuse) AC system limits, and then set the battery limits at least 10% higher than that (the scheduler takes losses into account reasonably accurately). Then reduce the system efficiency from 90% (probably) to 80%. This should guarantee that the scheduler schedules very near your maximum power limits, in a very well-behaved manner (as long as differences between actual and forecast loads and solar are not throwing a wrench in, that is).
From all the above, I suspect your observed behaviour stems from the BMS's integer SoC reporting keeping VenusOS in legacy mode (using the 1% integer /Schedule/x/Soc instead of the 0.01%-accurate /x/TargetSoc on the D-BUS), aggravated by a recent (undocumented, as far as I know) VRM DESS scheduling change that increased the realignment attempts to a 15-minute frequency instead of hourly (or so I believe to have observed).
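If you want to test the legacy-mode suspicion, one rough check (a sketch, same `msg.topic` assumption as above): route both /Schedule/0/Soc and /Schedule/0/TargetSoc into a function node and see whether TargetSoc ever carries a fractional value.

```javascript
// If /x/TargetSoc never arrives, or only ever holds whole numbers, the
// system is probably falling back to the 1% integer /Schedule/x/Soc path.
const check = global.get("socPrecisionCheck") || {};
check[msg.topic] = msg.payload;

if (msg.topic.endsWith("TargetSoc")
        && typeof msg.payload === "number"
        && msg.payload % 1 !== 0) {
    check.fractionalTargetSeen = true; // high-precision path is in use
}
global.set("socPrecisionCheck", check);
return null;
```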
To give a bit of hope: DESS Trade can work (and still has much room for improvement too).
Update 2 — Root cause identified: VRM scheduler corrections cause throttling
Following a suggestion by @UpCycleElectric to isolate the VRM scheduler’s realignment effort, I blocked all outgoing internet traffic from the Cerbo GX at the firewall level at the start of a charge window.
Result: continuous charging at 7kW for the entire window, no throttling whatsoever.
As soon as internet was restored, the familiar pattern returned immediately.
I also found and monitored the internal “set target SOC for this time slot” value on the Cerbo. Without internet this value remains stable throughout the quarter. With internet connected, it drops by nearly 2% shortly after each quarter-hour transition — consistent with the VRM scheduler sending a corrected (lower) target after receiving the updated SOC reading.
This confirms that the throttling is caused by periodic schedule corrections sent by VRM, not by VenusOS itself, the battery, or DVCC. VenusOS executes the schedule correctly — it is the mid-window corrections from VRM that repeatedly adjust the target SOC downward, causing the inverter to reduce charge power each quarter.
This is consistent with what @dognose described in the related topic Dynamic ESS does not charge fast enough to reach target SOC: VRM sees the integer SOC lagging slightly behind the expected value just before a window transition, corrects the next window target downward, and by the time the SOC flips to the next integer the correction has already been applied — reducing effective charge power for the entire next window.
@dognose @UpCycleElectric I don't know if these values are of use to you. This is what I collected from Venus System | The set target SOC for this time slot (%) in Node-RED.
Time     | Target SOC % | Note
15:45:04 | 83           | Quarter target
15:46:19 | 82           | VRM correction: −1%
16:00:04 | 83           | New hour, quarter target
16:01:19 | 0            | Missing data or reset?
16:15:04 | 85           | Quarter target
16:30:04 | 88           | Quarter target
16:31:15 | 87           | VRM correction: −1%
16:45:04 | 91           | Quarter target
16:46:15 | 89           | VRM correction: −2%
The zero value at 16:01 is also really strange: maybe unrelated, but still obviously wrong.
This is an elegant solution to increase the SoC precision. Mine is already 0.1%, but calculating it myself would in my case increase the precision at least threefold (330Ah battery).
Is there an easy way to copy all D-Bus values into the virtual battery, or do I really have to connect 20+ input nodes to it?
And I assume you then have to select the newly created battery as the battery monitor in the GX GUI.
Update 3 — Switched to Lynx Shunt (decimal SOC precision)
I switched the battery monitor from the Seplos BMS (integer SOC) to the Lynx Shunt, which reports SOC with decimal precision.
Result: the stepping pattern disappears — charge power now ramps down smoothly and continuously over the 15-minute window rather than in discrete steps. However, the fundamental problem remains: every quarter, charge power starts near 7kW and bleeds down to near 0 by the end of the window.
This rules out integer SOC rounding as the root cause. The VRM scheduler is still sending corrections that drive the target SOC down within each window, regardless of SOC precision. Higher precision only changes how the inverter responds — smoothly instead of in steps — but does not prevent the correction itself.