DESS: charging goes through 15 minute cycles

FYI, I didn’t enable DVCC, because there are currently no DC-coupled MPPT controllers, only the 3x MultiPlus-II and an AC-out-coupled PV inverter (added to VRM/Cerbo as a virtual device).

I do have a Node-RED script, but it only runs when DESS is disabled and under a specific condition (and yes, I’m sure).

Agreed. :slight_smile:

The hysteresis (if applicable) is 1%. Yet I didn’t mean to change the hysteresis. It currently only has an effect when switching from idle to the charge or discharge cases. During a consecutive charge it is set to 0 to allow the system to reach the target SoC “spot on”. (And after reaching the target, the hysteresis is set back to 1 to prevent immediate re-charges if the actual SoC drops a bit during idle, for example.)
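To make that behaviour concrete, here is a minimal sketch of the described hysteresis logic. All names and the state-machine shape are my own illustration, not Victron code; the only facts taken from the post are the 1% band and that it is disabled while actively moving toward a target:

```python
# Illustrative sketch of the hysteresis behaviour described above.
# The 1% band only applies when deciding to LEAVE idle; once charging or
# discharging toward a target, the band is 0 so the target is hit "spot on",
# and it is restored after the target is reached.

HYSTERESIS_PCT = 1.0  # band used in idle, per the post

def next_state(state: str, soc: float, target: float) -> str:
    """Return the next mode: 'idle', 'charge' or 'discharge'."""
    if state == "idle":
        # Small SoC wobbles around the target must not trigger an
        # immediate charge/discharge while idling.
        if soc < target - HYSTERESIS_PCT:
            return "charge"
        if soc > target + HYSTERESIS_PCT:
            return "discharge"
        return "idle"
    if state == "charge":
        # Hysteresis is effectively 0 while charging: stop exactly at target.
        return "idle" if soc >= target else "charge"
    # state == "discharge": likewise, stop exactly at the target.
    return "idle" if soc <= target else "discharge"
```

So a system idling at 49.5% with a 50% target stays idle, while one at 48.9% starts charging and then stops exactly at 50%.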

I’ve thought about this over the past hours, and the +/-0.5 example I provided is probably not the best to understand without explaining it a bit more:

There are two types of systems: those that report SoC values with a decimal digit, i.e. with 0.1% precision, and those that only report whole integers. (That depends on the BMS manufacturer.)

For the integer-only type, having a decimal target SoC would introduce a number of “issues”:

  • Target SoCs to enter idle in the range of .1 - .9 would be impossible; the system would always see a charge or discharge requirement first, because it can only be “below” or “above” that target SoC.
  • When discharging, a target SoC like “30.9” would always require such a system to discharge down to the next whole integer (31->30) before the goal would be achieved, therefore already almost spending the hysteresis value just to reach that target.

For the decimal type of system, these issues won’t exist, so it would be kinda possible to say “Well, then let’s adjust the target SoC to the system type and we are good” - but decimal-based systems wouldn’t see any “gain” from doing this. What is the underlying assumption? That a system that does not reach “48.0%” precisely enough would operate better if the target SoC were “47.7%”?

To think about this “question”, I may add another detail about the mentioned “charge-rate correction”: whenever the local system’s SoC value changes, the system calculates an updated rate that is required to reach the target SoC by the end of the window, because the moment a SoC change is reported is the moment the missing amount of energy is known “precisely”.

So that means an integer-based system receiving a target SoC of +3% will undergo a total of 3 charge-rate corrections within that window (initial rate + corrections at +1 and +2), while a decimal-SoC-based system would undergo 30 charge-rate corrections. In other words, a system reporting a decimal SoC would/should/could already be way more precise in reaching a target SoC “spot on” - no matter if that is “50.0” or “49.7”.

Just compare the last step at a target SoC of 50: the integer-based system would have its last correction at 49%, then eventually miss the 50% by some seconds. A decimal-based system would have 9 more corrections to go, and the last one, done at 49.9%, is such a minor step that it shouldn’t cause any huge mismatch in reaching any SoC level just fine. If it still does - then there might be an entirely different issue involved.
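The correction described above amounts to dividing the remaining energy by the remaining time. A hedged sketch (the function name, 10 kWh capacity and the triggering convention are my assumptions; only the recompute-on-SoC-change behaviour comes from the post):

```python
# Illustrative sketch (not Victron's actual code) of the charge-rate
# correction: each time the BMS reports a new SoC, the power needed to
# reach the target by the end of the window is recomputed.

CAPACITY_WH = 10_000  # assumed battery capacity for the example

def corrected_rate_w(soc_now: float, soc_target: float, seconds_left: float) -> float:
    """Power (W) required to move from soc_now to soc_target in seconds_left."""
    missing_wh = (soc_target - soc_now) / 100.0 * CAPACITY_WH
    return missing_wh / (seconds_left / 3600.0)

# A +3% window on an integer BMS triggers corrections at +1 and +2 only;
# a 0.1%-resolution BMS triggers ~30, so the final step is tiny and the
# target is reached far more precisely.
```

For example, 3% still missing on this battery with 15 minutes left gives 300 Wh / 0.25 h = 1200 W.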

So, therefore I’ll also send you a PM: if you could give me your VRM ID, I may have a look at your system and eventually get a better idea of what’s not working ideally and what’s eventually the root cause of that.

Don’t get me wrong, I’m not a CR refuser; I’m the kind of guy who gladly takes user requests or proposals into internal chats for discussion - but if I can’t see an advantage in it, I’ll have a hard time justifying why it should be changed/improved. If I can outline a problem and somewhat prove that this may solve it, the acceptance probability is an all different story :slight_smile:

(Yet, just me not seeing a benefit in it doesn’t stop you from proposing it, ofc.)


The problem is simple to solve.

You almost never charge for only 15 minutes; these are always blocks of 15 minutes.
Say you charge at a rate of 2%/h and DESS decides to charge for 3 h: increase the target SoC by 6% and don’t update it every 7.5 minutes.
That way there is no need for in-between target points with rounding errors.
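As I read the proposal, it reduces to one addition per charge block rather than per slot. A tiny sketch of that interpretation (function name and numbers are mine):

```python
# Sketch of the block-level proposal above (my interpretation): compute one
# target SoC for the whole multi-hour charge block instead of intermediate
# per-15-minute targets, so no rounding happens in between.

def block_target_soc(start_soc: float, rate_pct_per_h: float, hours: float) -> float:
    """Single end-of-block target: start SoC plus rate times duration."""
    return start_soc + rate_pct_per_h * hours

# 2 %/h for 3 h from 40%: one +6% target (46%), no intermediate rounded steps.
```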

I appreciate the offer, but I decided to take a break from active DESS development until after the release of 3.70 to focus on my core business of circular battery manufacturing.
That said, I do not think there is much to see on our test system: 40 A mains, MPII 5000, 88 kWh battery running DESS trade, no solar, no loads (to speak of).

How hard should it be to get the settings right? 9.2 kW mains power in/out, 8.8 kW battery power in (using a boost charger piggy-backed onto the MPII’s charge rate for its enable signal) and 4.4 kW battery power out (MPII limit).

This results in DESS planning a round 5.0% SoC drop per hour during high-price timeslots. If you take a step back to first-principles engineering, it should be clear as day that not a single one of the 15-minute timeslots will be scheduled with a valid target SoC, because 3 out of 4 timeslots will be dropping 1% and the 4th timeslot will drop the remaining 2% (not necessarily in that order).
The effect thereof is that 3 of 4 timeslots will power-throttle nearing the last couple of their 15 minutes, and the one slot with a 2% SoC drop cannot ‘catch up’ due to the MPII power limits.
The only way to reduce this effect is to increase the battery-out power setting to, let’s say, 4.8 kW, forcing DESS to ‘overschedule’ its discharge planning. But this only shifts the issue to the hourly reschedule process, which then will bump the whole target SoC curve by 1% sometime in the middle of the first quarter of the hour, leading to an instant discharge power throttle there and then. Or even worse, regularly switching back to recharging for the remainder of that timeslot.
And that, my dear Watson, is the ‘aliasing’ effect of not being ‘allowed/able’ to set the SoC drop to the actual 5% / 4 = 1.25% per 15 minutes (for the execution process).
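The aliasing can be shown numerically. The sketch below is mine (the 80% starting SoC is arbitrary; the 5%/h figure is from this thread), and the exact 2/1/1/1 ordering of the drops depends on where the truncation lands, matching the “not necessarily in that order” remark:

```python
# Numeric illustration of the aliasing: a planned 5% SoC drop per hour is
# 1.25% per 15-minute slot, but integer-only targets force unequal steps.

def int_slot_targets(start_soc: float, drop_per_hour: float, slots: int):
    """Return (ideal float targets, truncated INT targets) per slot."""
    per_slot = drop_per_hour / 4.0
    floats = [start_soc - per_slot * (i + 1) for i in range(slots)]
    ints = [int(f) for f in floats]  # truncation, as the executor sees it
    return floats, ints

floats, ints = int_slot_targets(80.0, 5.0, 4)
# floats: [78.75, 77.5, 76.25, 75.0] -> an even 1.25% per slot
# ints:   [78, 77, 76, 75]           -> drops of 2, 1, 1, 1 instead
```

Same 5% total per hour, but no single slot is ever scheduled at the true 1.25% rate.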
By the way, we already use a workaround to raise the precision of the battery’s SoC% (from the BMV Smartshunt) by setting a virtual battery’s SoC from the much higher precision of the consumed Ah: SoC% = 100% * (1 + consumed Ah / Total Ah), noting that consumed Ah is already a negative number. IIRC that gives a SoC% resolution of 100% * (0.1 Ah / 2048 Ah) = 0.00488%.

So I will wait and see what does and what doesn’t make it into the v3.70 release. I’d be a very happy camper to see:

a) a three-digit target SoC% resolution (0.001%).
I wouldn’t mind if the system would still populate the scheduled SoC targets for the … timeslot with 1%-rounded values, as long as I can repopulate those with corrected values (and make them ‘stick’, not getting truncated to INT), or even if it were only made possible to write the ‘target SoC for current (15 m) timeslot’ with a 3-digit corrected value every 15 minutes. I would not even have to calculate those values, because I know with certainty that the scheduler is actually already providing them as float values per hour. I’d have to see the upcoming VRM-API 3.x release to check whether the per-15 m scheduled SoC values (in the scheduler’s return object arrays) are float values as well.
It is only the push process from the DESS scheduler output object to the DESS execution engine where the scheduled SoC values get truncated to INT.

b) a re-enabled ‘scheduled SoC’ function. Because, as said before, a pure DESS-Trade-only system does not include a solar (or any other) secondary power source to provide the required trade bias towards a full battery. Without a scheduledSoC function the scheduler cannot plan for an effective dual-hedging ‘sell to buy profitably later’ AND ‘buy to sell profitably later’ strategy.
And please, pretty please, humor me when I state that the ‘optimized with battery life’ function cannot substitute for the lack of a ‘scheduledSoC’ function. I have spent months trying to get that to work before I understood that it can’t work, and why.

c) 15-minute graph bars in the VRM dashboard. The hourly bars may be great when all works well, but they are horrible for debugging.

You have a lot of big trading systems out in the field. It would be great to have Victron gain some more insight from your analyses, supported by how you solved it in the field. Very, very valuable now that you have an entry point with Dognose. Perhaps all the things that you are describing could get solved by the end of the year that way. And unfortunately you are taking a break after so much effort you put in, all that energy…

We will see; I have been arguing the case for higher SoC% resolution of the DESS execution process extensively. It’s up to Victron now to accept the premise that there is actually a there there. And surely they’ll be able to set up a test system for it, real or even virtual.

Reading about your system sizing (88 kWh on a single MP 5000), I can see the issue arising from this.

So, the issue is actually not that a target SoC won’t be reached precisely; the issue you are facing is that 1%, based on your battery size, translated to a 15-minute charge (so 3520 W), is kind of the “only” charge rate that can be calculated for your system, because 2% won’t be scheduled by DESS as it would exceed your inverter’s capabilities - and 3520 W obviously is less than your MP could do.
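The arithmetic behind that number can be checked directly. The battery size and power limits are from this thread; the helper function is just for illustration:

```python
# Quick check of the figures above: 1% of an 88 kWh battery delivered in one
# 15-minute slot corresponds to a 3520 W rate, while a 2% step would need
# 7040 W and exceed what a single MultiPlus-II 5000 can sustain.

def rate_for_soc_step(capacity_wh: float, soc_step_pct: float, minutes: float) -> float:
    """Average power (W) for a given SoC step within a time slot."""
    return capacity_wh * soc_step_pct / 100.0 / (minutes / 60.0)

one_pct = rate_for_soc_step(88_000, 1, 15)  # 3520.0 W
two_pct = rate_for_soc_step(88_000, 2, 15)  # 7040.0 W
```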

Clear. I wasn’t considering “such a setup” when looking at this topic initially, and probably the scheduler team wasn’t either.


Yes, that is pretty close, with the remark that it is not the scheduler where the issue arises.
When monitoring the output of the scheduler:

  • the (hourly) scheduled SoC% array, as can be read from the VRM-API return object, is returning correct (enough) float SoC% targets.
  • the ‘DESS expected energy to battery flow (Wh)’ and ‘DESS expected energy to grid flow (Wh)’, as can be shown on the VRM Advanced page as 15-minute bar graphs, are perfectly capable of sticking to the target 4.4 kW battery-out (or grid-out, whichever is lower) power setting, 1100 Wh per 15 minutes.

The issue arises in the process step where VenusOS, in tandem with the ESS assistant on the MPII (I am not fully informed how these two interact; I would love to see a good process diagram thereof), pushes INT SoC% targets to the derived 900-second scheduling objects/values (residing in/on the D-Bus, I suppose).

And that’s where things go sour.

And just to make one more thing clear: the truncation of the available float SoC% target to an INT SoC% target effectively removes information from the scheduling data, leading to unrecoverable ‘aliasing’ artifacts in the control process.
In the context of our bare-bones trade setup this becomes ‘evident’ due to the large battery capacity in relation to the limited power performance of a single MPII.
But that is not even the real problem here. The bigger problem is that the same aliasing artifacts will wreak havoc much sooner, and will be even harder to identify, when the control process needs to deal with all the other layers of complexity that come about when adding solar and irregular loads.
I firmly believe that a correct root-cause analysis and fix for the issue we experience on our system will potentially solve a whole slew of other issues for many other DESS use cases, as we all have been seeing widely reported here on the forum. The potential payoff of getting this right is pretty big. (Hence my rather stubborn chasing of this topic for so long already. And when this is done, I will chase the ‘scheduledSoC’ topic for exactly the same reason.)

And I have the problem that the target SoC is updated every 7.5 minutes, and the calculation is sometimes wrong and then jumps 1% up or down, causing charging or discharging to stop.
See pictures:

Not sure this statement is correct. From reverse engineering alone I have come to the conclusion that there are at least four key processes interacting with each other to make DESS ‘tick’. I would love to see corrections and/or better documentation (high-level flow charts, process descriptions) on this from Victron to fill in the blanks and facilitate better understanding:

  1. The scheduler running in the Victron/VRM cloud, which pushes an updated forward-looking schedule object to VenusOS once per hour, roughly 7 minutes past the whole hour. It contains forecasted solar and load tables for some days ahead, and it also contains the (float!) target SoC% table and the hourly or 15-minute prices table up to the end of the known price period (up to midnight today or tomorrow, depending on the moment the new prices roll in). Sometimes, when changing key settings for instance, and as I can imagine also when actual performance values significantly differ from the forecasted schedule, there may be more ‘midterm’ (as in halfway through the 15 minutes) updates in the 2nd, 3rd and 4th quarter, but in general I don’t notice those on our system, just the one in the 1st quarter.
  2. The schedule parser running on VenusOS, which takes this schedule object and parses it into 15-minute schedule objects that reside on the D-Bus (or just VenusOS memory; I’m not sure about that). Every 15 minutes, strictly on the quarter-hour mark, it updates the ‘target SoC for the current time slot’ based on the value stored in schedule 0, 1, 2, 3 and, notably, for the first ~7 minutes of the new hour, schedule 4 (or higher even, when cloud updates are lacking), until the hourly update rolls in; then all schedules roll 4 places to be aligned with the current hour again. (This mechanism allows DESS to continue working up to the end of the provided schedule when cloud updates are missing or late.)
  3. The DESS execution process. This process is (aside from other parameters such as power and time limitations) driven by the INT ‘target SoC for current time slot’ value and the per-15-minute energy in/out values that are derived from it (in combination with your DESS system settings) to calculate the instantaneous DESS charge rate (positive or negative). This is where we first started noticing that even though DESS schedules to deliver 1100 Wh per 15 minutes to the grid, it actually throttles down at the end of the quarter to reach the INT 1%-lower SoC. (It takes about 1250 Wh from the battery to push 1100 Wh to the grid, but the system cannot target a 1.25% delta SoC per 15 minutes.)
  4. The power control process in the MultiPlus + ESS assistant, which then (within the bounds of its own limits) drives the actual charger/inverter power flow.
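Step 3 above can be sketched as follows. This is my reverse-engineered interpretation, not Victron documentation; the 88 kWh capacity, the 1100 Wh export and the ~1250 Wh battery-side figure are taken from this thread, everything else (names, the budget formula) is assumed:

```python
# Sketch of the execution-process bottleneck: the executor derives its
# per-slot energy budget from the INT target SoC, so a 1% step on an
# 88 kWh battery caps the slot at 880 Wh from the battery, while the
# schedule's 1100 Wh grid export needs roughly 1250 Wh battery-side
# (thread figure, i.e. conversion losses included).

CAPACITY_WH = 88_000

def slot_battery_budget_wh(soc_now: float, int_target_soc: int) -> float:
    """Energy the executor allows out of the battery this 15-minute slot."""
    return (soc_now - int_target_soc) / 100.0 * CAPACITY_WH

budget = slot_battery_budget_wh(76.0, 75)  # 1% INT step -> ~880 Wh
needed_wh = 1250.0                         # battery energy for 1100 Wh export
shortfall = needed_wh - budget             # ~370 Wh: the end-of-slot throttle
```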

Just from observation I do see the target SoC changing every 7.5 minutes.
See the picture with the cursor on 14:08: the target SoC dropped 1% and caused charging to stop.

Every 7.5 minutes, are you sure? That might indicate your DESS settings are too far off the mark of the actual system’s performance limits. And that might then lead to skewed schedule forecasts that require more than one update per hour. I have no reason (yet) to think it is by design meant to update every 7.5 minutes. But, well, it’s complicated.

You could try adding some extra graphs to get better insights and/or create some monitoring flows in Node-RED to see what all the schedule objects are doing (the 2nd process in the list above).


zoomed in:

Yes, twice every 15 minutes (see the screenshot in the previous post)!
In Node-RED I do see 900-second time blocks, so VenusOS is recalculating the target SoC, or DESS does this.

I think you are on the right track, but you might need to decide whether you are willing to jump into this rabbit hole ‘formerly known as DESS’ sans forum parachute. All I can say is that there is definitely light at the end of the learning curve. :wink: (I really need to log out of this forum for a while to attend to my core business.)

Do you have any producers in your system of which Venus (and therefore VRM / the DESS scheduler) is not aware?

Updates are calculated every 15 minutes and then pushed to the device about 0 to 7 minutes later, depending on load.

That updated schedule always contains the slots of “this” hour.

So, if you see a SoC change around the 7.5-minute mark, it just means that the target SoC for the :00 to :15 window calculated in the prior update is different from the target SoC calculated for :00 to :15 in THIS update. So, let’s call this a “Ghost-SoC-Update” :wink:

That is very rare and basically means that your actual reported SoC is frequently a bit different from the anticipated SoC at reporting time, causing a corrected window in the next update, because the scheduler concludes that your system is too far off to keep up the target calculated in the prior run (but for the same slot).

This usually happens on systems where there is an “unknown” producer (or maybe also a producer heavily underreporting its production), so that DESS simply always underestimates the SoC progress and revises “active” windows that no longer appear feasible.

The ideal way would be that a reported SoC is about where the scheduler scheduled it to be, so it will kinda say “Yes, we can keep the :00 to :15 window up as-is.” Then you won’t see the mentioned “Ghost-SoC-Update”.
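A minimal sketch of that revision logic, as I understand the description above. The function names, the 1% tolerance and the proportional correction are all my assumptions; the source only says the active window is revised when the actual SoC is too far off the forecast:

```python
# Illustrative sketch of the "Ghost-SoC-Update": a mid-slot reschedule
# keeps the previous target for the current :00-:15 window when the
# actual SoC is near the forecast, and revises it otherwise.

def reschedule_slot_target(forecast_soc: float, actual_soc: float,
                           prev_target: float, tolerance: float = 1.0) -> float:
    """Return the (possibly revised) target for the active window."""
    if abs(actual_soc - forecast_soc) <= tolerance:
        return prev_target  # "keep the window up as-is"
    # Revise by the observed drift (assumed correction rule).
    return prev_target + (actual_soc - forecast_soc)

def is_ghost_update(prev_target: float, new_target: float) -> bool:
    """True when a mid-slot reschedule changed the current slot's target."""
    return prev_target != new_target
```

With an unknown producer adding +2% of unforecast SoC, the revised target differs from the previous run's target and shows up as a mid-slot jump.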

Exactly; just one target SoC every 15 minutes should fix the problem.

I don’t have any ghost consumers in the system; all consumers and PV are on critical loads only, AC-in directly on the grid.
The real SoC is very close to the target SoC, so a difference of 1% stops charging or discharging.
I have a system with 3x MultiPlus-II 48V-5000VA (3-phase) and 97 kWh storage.
For efficiency I limit charging and discharging to 8000 W (2.66 kW for each inverter).
SoC is measured by a Lynx Shunt.

If that Lynx Shunt provides a ‘consumed Ah’ readout, you might be able to create a virtual battery and re-calculate a SoC% with (much?) higher resolution. For our system with a BMV Smartshunt, the (virtual battery’s) SoC% resolution went from the BMV-reported 0.1% to 0.005% that way, all other values kept the same. This won’t solve any issues directly but might be helpful at the overall system level.
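The formula for that workaround was given earlier in the thread; here it is as a small sketch. The function name is illustrative (this is not a real Victron/Node-RED API), and the 2048 Ah bank size is the example figure from the earlier post:

```python
# Sketch of the virtual-battery workaround: derive a higher-resolution SoC%
# from the shunt's 'consumed Ah' reading, which moves in 0.1 Ah steps.
# Per the thread, consumed_ah is a NEGATIVE number (Ah drawn since full).

def high_res_soc_pct(consumed_ah: float, total_ah: float) -> float:
    """SoC% = 100% * (1 + consumed Ah / total Ah)."""
    return 100.0 * (1.0 + consumed_ah / total_ah)

# Resolution for a 2048 Ah bank: one 0.1 Ah step changes SoC by
# 100 * 0.1 / 2048 ≈ 0.00488%, versus the shunt's 0.1% SoC readout.
```

For example, 1024 Ah consumed from a 2048 Ah bank gives 50% SoC, while a single 0.1 Ah step moves the result by under five thousandths of a percent.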