By the way, in VRM and the Remote Console the current SoC is presented with one decimal place.
So only a new Venus OS version is needed to fix the issue.
No, I’m afraid it is the SoC % that is somehow communicated with the ESS assistant. Not sure though; I’ve never been able to find a good source of information on the interaction and division of labour (processes) between the ESS assistant on the VE.Bus device and ESS/DESS on Venus OS. It might be out there somewhere, but time constraints prevent doing a deep dive just for this one issue, with no prospect of actually being able to solve it, or have it solved, once found.
I think this is quite a simple fix.
Make the target SOC a float and only change the target SOC once every 15 minutes.
Want to ping the devs on this one? Beware of the fanboys shooting the messenger.
I’m sure they will fix it.
Only thing is how to report that there is an issue…
Floats are annoying and come with all sorts of problems. Scaled integers are easier.
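To illustrate (my own sketch, nothing Victron-specific): keeping the SoC as an integer number of tenths of a percent gives 0.1% resolution without the float-comparison headaches:

```python
# Sketch only: SoC held as an integer number of tenths of a percent.
soc_raw = 40.04999999              # measured SoC as a float, in %
soc_tenths = round(soc_raw * 10)   # 400 -> 40.0% stored exactly as an int
target_tenths = 405                # 40.5% target, also an int

# Integer comparison is exact; no epsilon or rounding worries.
if soc_tenths >= target_tenths:
    print("target reached")
else:
    print(f"{(target_tenths - soc_tenths) / 10:.1f}% to go")  # 0.5% to go
```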
Hey Guys,
we do read the community forums occasionally, but can’t answer every individual question. Mind, it’s a community forum, not a support forum.
Yet, when there are issues potentially affecting multiple systems, we look into it and provide some feedback that may resolve questions for multiple users.
So, here, let me provide you some insights on everything, because you have some wrong assumptions in your issue-finding and therefore also came to wrong conclusions about the cause and the required fix:
So, first, the schedule is calculated for a whole 15-minute window. Those calculations are “invoked” at every :00, :15, :30 and :45 of the hour. By that time the executor that processes all of this is fed with the last known SoC value of the system, which may be from the prior window’s reporting.
(One thing to make sure of for the best DESS experience: don’t choose a VRM reporting interval greater than 5 minutes on your GX device.)
Calculating an updated schedule for thousands of systems takes some time, so the updated schedule may be sent to a device a bit later. Yet, the schedule data itself is in 15-minute chunks.
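To make the cadence concrete, a small sketch (my own, purely illustrative) of how a timestamp maps to its 15-minute window:

```python
from datetime import datetime, timedelta

# Illustrative only: windows start at :00, :15, :30 and :45 of every hour.
def window_start(now: datetime) -> datetime:
    return now.replace(minute=now.minute - now.minute % 15,
                       second=0, microsecond=0)

now = datetime(2025, 1, 1, 10, 23, 42)
print(window_start(now))                          # 10:15 window
print(window_start(now) + timedelta(minutes=15))  # next calculation at 10:30
```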
Now, whenever new windows are calculated, the updated target SoC is a PREDICTED value, which means it is subject to some slight deviations. The calculation may have assumed your system to be at 40% SoC when the calculations were done, while it was only at 39% or had already reached 41% or so. Therefore a generated target SoC may be a bit too high or a bit too low for the “current SoC”.
Having an integer-based target SoC greatly mitigates these issues, as it basically allows a +/- 0.5% SoC imprecision of your local system to be treated as equal. Switching to a float-based target SoC would raise the bar to only allowing a +/- 0.05% deviation from the expected state.
Hence it would make the overall transition to a new window much more “unstable” than with an integer-based target.
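If it helps, the tolerance argument can be sketched like this (my own illustration, not the actual scheduler code):

```python
# Do a predicted and an actual SoC map to the same target at a given resolution?
def same_target(predicted_soc: float, actual_soc: float, resolution: float) -> bool:
    return round(predicted_soc / resolution) == round(actual_soc / resolution)

# A 0.4% prediction error is harmless with integer targets...
print(same_target(40.0, 40.4, resolution=1.0))  # True  -> smooth transition
# ...but produces a mismatch (and a correction) with 0.1% targets.
print(same_target(40.0, 40.4, resolution=0.1))  # False
```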
When you see “chunked” charging like that, it usually originates from issues with the reported SoC. If your SoC is rising “faster” than the calculation expects, the system will hit the target SoC early and “pause” charging at the target SoC.
We’ve added states to “catch” this within a certain grace period (the “SMOOTH_TRANSITION_STATES”). They essentially say: “If the target SoC is already reached within the last 60 seconds of a window, we just keep up the current charge rate instead of idling.” But if it reaches the target SoC even earlier than that, they don’t kick in, as that would result in a much bigger overshoot of the target SoC, causing issues with the overall schedule.
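A rough sketch of that grace behaviour (names and structure are mine, not the actual SMOOTH_TRANSITION_STATES implementation):

```python
GRACE_SECONDS = 60  # only within the last minute of a window

def next_rate(current_soc, target_soc, seconds_left_in_window, current_rate_w):
    if current_soc >= target_soc:                 # target already reached
        if seconds_left_in_window <= GRACE_SECONDS:
            return current_rate_w                 # keep charging, window ends soon
        return 0.0                                # reached far too early: pause
    return current_rate_w                         # still below target: carry on

print(next_rate(41.0, 41.0, 45, 4000))   # 4000: inside the grace period
print(next_rate(41.0, 41.0, 300, 4000))  # 0.0: paused at target SoC
```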
Now, within a window, the system is also continuously calculating a new charge rate to reach the target SoC “in time”. This is best explained with some real-world data:
In this screenshot, you can see a system whose reported SoC perfectly aligns with the calculation done by VRM. You can see that the charge rate (yellow line) is a bit different for each 15-minute chunk; that is the consequence of one window being, say, +2% while another is +3%. However, within a 15-minute window the charge rate is quite constant, with not many corrections happening:
Here you can see a system whose SoC is rising FASTER than the scheduler calculates. The result is that the charge rate is continuously corrected downwards within a 15-minute window. Now, this still achieves all goals fine and doesn’t hurt anything, it just looks a little jumpy:
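The intra-window correction in both cases can be sketched as energy-still-missing over time-still-left (my own back-of-the-envelope formulation, not the actual scheduler):

```python
# Re-derive the charge rate from remaining energy and remaining window time.
def corrected_rate_w(current_soc, target_soc, capacity_wh, seconds_left):
    energy_missing_wh = (target_soc - current_soc) / 100 * capacity_wh
    hours_left = seconds_left / 3600
    return energy_missing_wh / hours_left if hours_left > 0 else 0.0

# If the SoC rises faster than predicted, less energy is "missing" at each
# re-evaluation, so the rate is corrected down as the window progresses.
print(corrected_rate_w(40.5, 42.0, 14000, 600))  # 1260 W for the last 10 min
```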
Your system apparently is way beyond that: it seems to reach the target SoC so fast that it even enters a complete charge stop instead of just making some minor corrections.
The bottom line is that the root cause of such behaviour is an inaccurate/non-linear SoC being reported by the system. There are many reasons why this may happen, not limited to invalid configuration only:
One thing people quite often forget about is battery degradation: if your battery, for example, is already down to 90% SoH, the initial capacity of 14 kWh would now be only 12.6 kWh. All calculations would still be performed based on 14 kWh (so 140 Wh per %), while the battery’s SoC would actually increase by about 1.11% when charging 140 Wh. And as you can imagine, that error just adds up for every % of SoC the schedule attempts to charge.
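Plugging in the numbers from above (illustration only):

```python
nominal_wh_per_pct = 14000 / 100          # scheduler assumes 140 Wh per 1% SoC
actual_wh_per_pct = 14000 * 0.90 / 100    # at 90% SoH it is really 126 Wh per 1%

charged_wh = 10 * nominal_wh_per_pct      # schedule charges "10%" worth of energy
actual_gain = charged_wh / actual_wh_per_pct
print(f"expected +10.0%, actual +{actual_gain:.1f}%")  # actual +11.1%
```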
Another consideration is to “review” the source of your batteries. We’ve seen systems where people insisted they had a certain battery capacity because they ordered 320 Ah LiFePO4 cells from a foreign express marketing platform, but what they actually received were 280 Ah cells with a fake label.
And of course, in some cases you may even face a combination of all possible factors (settings, measurement precision, degradation), each contributing a certain amount to an inaccurate SoC state which, when summed up, becomes huge.
I’ve sent you ( @MartijnT ) a PM; if you could give me your VRM ID (the numeric one in the URL), I can review it to see if I can figure out the root cause for your system.
Thanks for looking at this issue.
Why do I see the target SOC changing twice every 15 minutes?
The SOC in between the 15 minutes causes my system to stop charging or discharging if the middle SOC is 1% off.
Is this caused by recalculating, and how can I change this SOC recalculation interval?
Would it not be better to implement an adjustable maximum error value between target SOC and real SOC before recalculation is needed?
Calculating with a target SOC that has 1% resolution induces strange behavior even when the real SOC follows the target perfectly.
I would give the target SOC calculation a 0.1% resolution and only recalculate a new target SOC when there is a +/- error of 0.5% (or a user-adjustable error range).
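A minimal sketch of how I imagine that deadband working (parameter names are mine):

```python
DEADBAND_PCT = 0.5  # user-adjustable error range

def needs_recalculation(target_soc: float, real_soc: float) -> bool:
    # Only trigger a recalculation when the real SOC drifts outside the deadband.
    return abs(target_soc - real_soc) > DEADBAND_PCT

print(needs_recalculation(40.3, 40.6))  # False: within +/-0.5%, leave it alone
print(needs_recalculation(40.3, 41.0))  # True: drifted too far, recalculate
```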
Thank you for the technical insights.
Although I agree with most of your assertions, there is one that I believe is a real logical fallacy:
This defies all logic: reducing precision simply cannot increase accuracy, no matter how you look at it.
The use case where we are experiencing the negative effects of the lack of precision of the ‘target SoC for current time slot’ concerns a dedicated DESS trade system consisting of an 88 kWh battery, a MultiPlus-II 5000, and an additional 8000 W HF boost charger, all on a single-phase 40 A mains connection.
This system is set up to charge at a flat AC 40 A by means of the MPII’s capability to adhere to the ‘grid current limit’ setting, with the boost charger located on the MPII’s AC-out.
A simple Node-RED flow detects when the MPII is set to charge at full rate and simply switches on the boost charger bank (~95% efficiency), at which point the MPII down-regulates itself to only charge a few percent extra, up to that 40 A limit (see the sketch below). The only significant variable influencing the actual DC charge power is the mains AC voltage, which stays within a range of 224 to 234 Vac for prolonged periods of time, a power variation of ~5% at most.
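For illustration, the gist of that flow in Python (hypothetical names; the real implementation is a Node-RED flow reading the Cerbo’s values):

```python
# Hypothetical sketch of the detection logic described above.
def boost_bank_should_run(mp2_charge_setpoint_w: float,
                          mp2_max_charge_w: float) -> bool:
    # Boost bank on only while the MPII is asked to charge at full rate; the
    # MPII then down-regulates itself against the 40 A grid current limit.
    return mp2_charge_setpoint_w >= mp2_max_charge_w

print(boost_bank_should_run(4000, 4000))  # True: full rate -> boost bank on
print(boost_bank_should_run(1500, 4000))  # False: partial rate -> boost off
```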
Discharging is done at the full capacity of the MPII, where special precautions have been taken to keep the toroidal transformer cool enough to sustain full power for hours in a row; only on very hot days, and after 4 or more hours, do we ever experience thermal throttling. Generally speaking it simply outputs a flat 4400 W AC (of which it somehow attributes ~50 W to AC-out even without any load attached, and only ~4350 W to feed-in to the grid), but alas, the power curve is as flat as can be.
There are no other significant loads, only a Cerbo, the MPII itself and an internet modem, all in all maybe 100 W at most, all on the DC side.
We have been measuring, calibrating and tuning this system for a year now; please accept at face value that all the common settings are very well known and tuned. We tried all kinds of tweaks, such as nudging the charge or discharge rate a little higher, but in the end getting the settings as close as possible to the actually measured values provides the best overall result.
But we still cannot rely on the DESS execution process not to overshoot or undershoot the target SoC on an hourly, and now 15-minute, basis.
I argue with certainty that the root cause of this is the lack of precision of the ‘target SoC for current time slot’. The most compelling argument I can make towards that end is that a 1% delta SoC corresponds to 0.88 kWh of energy, which is awfully close to the 1.1 kWh of energy the MPII feeds into the grid per 15 minutes (even closer when taking conversion losses into account). For practical purposes it is a 1:1 relation. This means that even the smallest variation in the actual power flow (that 5% in AC voltage, for instance) will always lead to undershoot/overshoot situations. An analogy that comes to mind is that of digital audio: to represent a 22 kHz analog audio signal in the digital domain, the sample rate needs to be at least 44 kilosamples per second (the Nyquist rate). The overshoot and undershoot behaviour is directly comparable to the aliasing artifacts you get when attempting to capture 22 kHz audio with a 22 kHz sample rate (and comparable to aliasing artifacts in digital photography just the same). There is no post-processing solution possible to compensate for missing precision of the key control variable. The information required to do so, no matter how smart and/or complicated the method, is just not available within the bounds of the official DESS execution process implementation.
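The arithmetic behind that 1:1 claim, using our numbers (illustration only):

```python
battery_kwh = 88
kwh_per_soc_pct = battery_kwh / 100      # 0.88 kWh per 1% SoC

feed_in_kw = 4.4
kwh_per_window = feed_in_kw * 0.25       # 1.1 kWh per 15-minute window

print(kwh_per_window / kwh_per_soc_pct)  # ~1.25 SoC-% per window
# The 1% quantization step of the target SoC is the same order of magnitude
# as a whole window's worth of energy, so any small power deviation flips
# the rounded target and shows up as overshoot or undershoot.
```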
PS: our workaround is based on a virtual, higher-precision ‘target SoC for current time slot’ in Node-RED that synchronizes the actual ‘target SoC for current time slot’ to corrected nearest-integer values, based on additional information taken from all other information sources, such as the full scheduling object arrays. Unfortunately we now first need to rebuild this workaround to take the 15-minute pricing schedule into account to make it work as required again. And it is by no means a solution, just another hack.
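In spirit, the workaround does something like this (hypothetical names; the real flow lives in Node-RED and consumes the full schedule arrays):

```python
# Interpolate a fractional "virtual" target between the window endpoints.
def virtual_target_soc(window_start_soc: float, window_end_soc: float,
                       seconds_into_window: float, window_seconds: float = 900):
    frac = min(max(seconds_into_window / window_seconds, 0.0), 1.0)
    return window_start_soc + (window_end_soc - window_start_soc) * frac

print(virtual_target_soc(40.0, 41.0, 450))  # 40.5 halfway through the window
```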
Increase the target SOC resolution (0.1%) and build in a hysteresis of 0.5% (or user adjustable) that triggers recalculation of the target SOC.
I think that would solve everything.
I’d like to share the result of an experimental Grok analysis concerning this target SoC precision issue. I’m by far not an expert in using AI tools, but just for fun and giggles I’d like to share the result of what I’ve been doing for the past few hours:
dess.pdf (1.4 MB)
PS: if I can find some time in the upcoming week or so, and soonest after the release of v3.x of the new VRM-API node, I will refactor our high-precision ‘virtual target SoC for the current time slot’ Node-RED flow, and if I’m satisfied with its performance I will share it here for those interested.
Probably the only way to convince Victron that there is an actual root cause to these issues is to demonstrate a robust, functional workaround that solves it for a wide spectrum of use cases. I’m not saying that’s an ideal process of cooperation, but what else can we do?
I didn’t say it improves accuracy; the opposite is the case: it allows for a bigger error margin between the actual SoC and the estimated SoC. And that is what improves the stability of system behaviour on window transitions, as the local SoC doesn’t have to be within +/- 0.05% to have a smooth transition, but within +/- 0.5%.
Had a look at your document:
I can’t understand this properly. Each window and its slot is a 15-minute interval, even if the system uses hourly prices. How can a system reach the target SoC of a 15-minute interval 20 minutes early?
So, you are literally saying “tiny offsets will cause unwanted charge/discharge” but believe raising the goal’s precision by a factor of 10 (decimal target SoCs) will improve this? The opposite will be the case, as mentioned initially.

Here you would need to provide some examples of “unexpected grid interaction”. When the target SoC is reached (early) and the next window doesn’t indicate the system should continue to charge/discharge in a scheduled way, the system will enter a mode based on what the scheduler “wants to happen”, and should therefore make economic sense. If it wants to idle and feed in, then it does this, because the requirements for consumption are met and feed-in (lossless) provides a higher economic benefit than continuing to charge the battery and discharging it later.
The bottom line is: there is no such thing as reaching the “target SoC early” by about 15 to 20 minutes. If that’s what you are experiencing, then there is an entirely different issue that is not related to the precision of the target SoC levels.
@MartijnT I’ve now looked at your system, and what you are experiencing has a simple reason; we just need to figure out the cause. I hope you don’t mind if I share this screenshot, alongside the two previously provided screenshots of an ideal and a not-so-ideal system:
This is, how the charging looks on your system:
The first thing to note is that your system is heavily “correcting the charge rate down” (yellow line) until it even hits 0. Why does this happen? It’s explainable:
According to your system settings you’ve configured a maximum charge rate of 4000 W. This value is applied by the delegate, but your system instead charges with 6400 W (orange line) all through the window, even though the charge rate is consistently lowered:
So, the core thing to figure out: why is it doing that? Do you run any additional scripts that modify the intended setpoints? If the battery continuously charges with 6400 W, it is a kind of logical thing that a schedule for an hour at 4000 W is reached after about 2/3 of the hour (4000 Wh ÷ 6400 W ≈ 0.63 h).
During discharge, however, your system is perfectly fine sticking to the scheduled discharge rate:
And as a consequence of that, “it’s smooth”; I would rank that as a “perfect” system, with no intra-window correction of the charge rate visible.
So, why does it “overcharge” in charge cases?
I can see you have DVCC disabled. I would need to test this, BUT it seems that the charge-rate control (on forced charges) is performed by DVCC. So, with DVCC disabled, your system will most likely just charge at whatever is possible rather than applying a “certain charge rate” (4000 W in this case).
Without DVCC, the inverters basically receive the “clearance” to pull (in your case) 17 kW from the grid, and there is no control mechanism keeping them within the desired limits. They’ll max out your battery charge, no matter what DESS schedules.
I’ll clarify internally whether this is to be considered a bug, or whether DVCC is a prerequisite for controlled charging. (Then, of course, that needs to be added to the documentation: denying DESS operation as long as DVCC is disabled, or forcing DVCC on when enabling DESS; all to be clarified.)
See pictures posted here:
I do see the target SOC changing every 7.5 minutes and not every 15 minutes.
Sometimes the target SOC within a 15-minute block is 1% off, and this stops charging or discharging.
I don’t understand why you would change the SOC within a 15-minute block, but it happens.
Recalculating the target SOC is only needed if the real SOC deviates from the target SOC by 0.5% (or a user-adjustable value) at the end of a 15-minute time block.
This would really lower your server’s SOC recalculation load.
This is exactly what I’ve been asking.
The current max MP2 setting is 40 A; (3 × 40 =) 120 A is about 6500 W. So I guess that’s where the maximum comes from.
From my understanding, DVCC isn’t a requirement, and given that it discharges with 4000 W, it appears that it should or could work without it.
But I’ll wait for your answer on this one.
Well, I reviewed that, and as of now, it IS a requirement: when DESS is scheduling a charge, it tells DVCC: “Hey, limit the charge to 4000 W” - and DVCC is… well… disabled.
In the case of discharging, the desired limit is passed to hub4 (now known as ESS) directly, so no DVCC is involved.
I will review why the charge case is handled differently; it probably has to do with safety considerations, keeping DVCC exclusively responsible for obeying the charge-current limits of the battery and orchestrating connected MPPTs accordingly. But it may as well just be an oversight, as probably 98% of systems run with DVCC enabled due to the desired BMS charge-rate control.
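My reading of that routing, as a hypothetical sketch (function names are mine, not the Venus OS API):

```python
def set_dvcc_charge_limit(watts):   # stand-in for DVCC's charge-rate control
    print(f"DVCC: limit charge to {watts} W")

def set_ess_setpoint(watts):        # stand-in for the hub4/ESS path
    print(f"ESS: grid setpoint {watts} W")

def apply_dess_limit(action, limit_w, dvcc_enabled):
    if action == "discharge":
        set_ess_setpoint(-limit_w)       # discharge: hub4/ESS, DVCC not involved
    elif action == "charge" and dvcc_enabled:
        set_dvcc_charge_limit(limit_w)   # charge: handed to DVCC
    # with DVCC disabled, the charge limit has no taker: battery charges at max

apply_dess_limit("charge", 4000, dvcc_enabled=False)  # nothing enforces 4000 W
```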
And with this said, here is what we can derive: there are potentially more systems affected by this if the following conditions are met (see the self-check sketch after this list):
- DESS enabled
- DVCC disabled
- a battery charge limit configured in DESS below the maximum possible battery charge rate (all others probably won’t ever notice their battery continuously charging at max power instead of the scheduled rate, as there are only a few situations where DESS would schedule below the maximum configured rate)
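A quick self-check along those three conditions (field names are illustrative, not actual settings paths):

```python
def possibly_affected(dess_enabled: bool, dvcc_enabled: bool,
                      dess_charge_limit_w: float,
                      max_battery_charge_w: float) -> bool:
    return (dess_enabled
            and not dvcc_enabled
            and dess_charge_limit_w < max_battery_charge_w)

print(possibly_affected(True, False, 4000, 6400))  # True, like the system above
```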
EDIT: I made a mistake, see apology below.
Your message to @dognose is regarding this PDF document you shared yourself?
This is the same logical fallacy: there is no need to reduce the hysteresis window to the same order of magnitude as the target SoC precision is increased.
You are correct and I stand corrected. I falsely concluded that @dognose quoted from another document I had shared with @guystewart, for which I offer my public apologies. I do make mistakes at times, this being a pretty embarrassing example thereof.