Venus OS v3.70 - MPPT 150/100 VE.Can with DC-side export enabled suddenly stops charging

Hi @tstiller,

You do not by any chance use DynamicESS (for energy arbitrage, buying from and selling to the grid), or perhaps you tried it out briefly?

I ask because this is one place where it can override the feeding in of excess DC coupled PV, and it is somewhat suspicious that this stopped working at exactly 10AM. This is when I see the Multi(s) lowering the charge voltage to 61.25V, which is your float voltage, suggesting that something told it to stop feeding excess DC PV into the grid.

Otherwise, I’ll have to look into it tomorrow, on Venus 3.70.

Hi @iburger

No, I’ve never tried enabling DESS — not even briefly. I have the ESS assistant installed and I’m using the standard ESS, which works properly and hasn’t caused any issues.

Up until now, everything was working correctly on Venus OS v3.67 with the settings I had been using for quite some time. After updating to v3.70, this problem occurred several times — first on the beta version, and later even on the production v3.70. So far, I haven’t actually been able to clearly confirm that Venus OS v3.70 was really the root cause of the problem. During my discussion with Nick, we concluded that some of my settings might not have been entirely correct — that could possibly have contributed to the issue. In fact, I only restored those problematic settings this evening after rolling the system back to v3.67.

Together with Nick we tried to investigate the cause of this behavior. So far, the only thing we managed to identify is that my DVCC settings (manual, not enforced by a BMS) were affecting the issue. Specifically, when DVCC was manually disabled, everything immediately started working correctly again.

Nick also suggested that my AC-coupled PV configuration might not be entirely correct. My system includes several grid-tied microinverters that are connected on the AC-In side of the Multi and measured by a separate VM-3P75CT energy meter. Up until now, the ESS has been correctly using the energy from those microinverters to charge the battery via Multi.

After talking with Nick, I was supposed to change the configuration and move the microinverters to AC-Out, but I haven’t done that yet. I didn’t want to change too many variables at once, in order to better isolate what is actually causing the problem. It’s also possible that the issue is simply related to my overall configuration.

In a system without a managed battery, the DVCC setting doesn’t do a whole lot. It makes available some additional features, such as SVS, STS, SCS (in non-ESS systems), and it always synchronises the voltage reading of the MPPT with the Multi(s).

If there was a very large voltage calibration discrepancy between the solar charger and the Multi (more than 2V), which had to be calibrated away, then it would make sense that turning off DVCC makes a difference. But then the problem would be persistent, not intermittent.

The other aspect DVCC affects is how charging is done when you actively instruct the Multi to charge (e.g., set ESS mode to Keep Batteries Charged), but this difference only applies in systems where excess DC coupled PV is not fed into the grid, which again means it does not apply to your system.

In short, I cannot figure out why turning off DVCC somehow fixes this.

But it might make sense to temporarily disconnect your solar charger from the GX device, wait about a minute for it to return to using its own voltage measurement, and then check how it correlates with what the Multi reports (using VictronConnect). If there is a significant difference in the voltage measurement, it might explain this. It would not really explain why the problem only manifests with Venus 3.70.

In any case, if you want me to dig deeper, just let me know.

Checking the input voltage levels is basically the first thing I verify in this kind of situation. The readings in VictronConnect between the devices were consistent within a few to a dozen millivolts — essentially within a range where I wouldn’t expect any issues. Of course, I also verified this independently with a multimeter, and the measurements matched within reasonable limits. I really don’t think the voltage levels in my system could be causing the problem.

I also couldn’t initially understand why DVCC would have any influence. What I noticed was that during a reboot of the Cerbo GX, when the SmartSolar temporarily lost the connection and started operating based on its own internal settings, it immediately began charging. But as soon as the Cerbo finished rebooting and the SmartSolar switched into External Control mode, it stopped charging again — clearly the system was instructing it to stop.

The same effect also occurred when I unplugged the CAN cable, which you mentioned yourself. As soon as the SmartSolar lost communication with the Cerbo, it immediately started working again.

Based on that, I concluded that the only possible “external control” could be coming from DVCC — which indeed turned out to be the case. Disabling DVCC switched the SmartSolar back into autonomous operation, and as a result it started working again.

Maybe DVCC itself isn’t the actual cause, but rather some incorrect control coming from the Cerbo? It’s hard to say.

In any case, the entire day yesterday the system worked correctly with the same settings I had before updating to v3.70. If everything still looks good today, I’ll assume that v3.67 works properly and then update the system to v3.70 once again.

If you have any additional suggestions on what I could check or change, please let me know.

Yesterday the system was again operating correctly on Venus OS v3.67.

I’m about to update to v3.70 once again and see what happens.

Today around 10:00, with the same settings, the SmartSolar stopped charging the battery. There was plenty of sun available.

Disabling DVCC and rebooting the Cerbo GX restored operation, and it continued working normally for the rest of the day.

So there is clearly some kind of regression in Venus OS v3.70 compared to v3.67 — or alternatively, something that previously wasn’t working correctly has been fixed, and in my setup it results in this particular behavior.

@iburger Could you please take a look at this issue?

Up until now, the system has been running correctly on Venus OS v3.70, but with DVCC completely disabled.

This evening I updated to version 3.71, and starting tomorrow morning I’ll re-enable DVCC for testing to check whether the issue with the charger stopping still occurs.

Today, once again, my MPPT charger stopped working — this time between 10:27 and 11:29.

Interestingly, this happened even with DVCC disabled. The question is: why?

In the meantime, for testing purposes, I switched the battery from my 15S NMC (61.5 V) to a standard 16S LFP pack. Of course, all system voltages were adjusted accordingly for the new battery.

The MPPT stopped working at the moment when the battery was fully charged. DVCC is still not enabled.

Interestingly, after that period, the system for some reason decided to discharge the battery by exporting energy to the grid. During that time, there were no loads running that would justify such battery usage.

@iburger Could you dig deeper?

Hi @iburger could you please take a look at my system?

To be honest, I’ve run out of options in terms of what else I can check or try — I’m not sure what to do next. Is it possible that my solar charger is faulty?

Since our last discussion, I’ve updated Venus OS to version v3.71, switched the battery to a standard 16S LFP pack, connected it to the system, and enabled DVCC — but the problem still occurs.

Today, shortly after 9:15, the charger stopped working and I can’t get it to start again. This time, even disabling DVCC didn’t help.

Hi @tstiller, yes, I will take a look.

That’s why it stops feeding in. The voltage offset disappears. I added this chart to your dashboard to help with diagnosing this. This is not an answer yet, just to let you know that I found something relatively quickly.

@tstiller It looks as if you restarted the Multis, and after that the issue disappeared again.

The issue is that the Multi stops raising the charge voltage. What I am trying to work out is whether it was instructed to do so, or not. I can find no evidence that it would have been instructed to stop feeding in (on your system the only thing that affects that is the “feed in excess DC coupled PV” setting).

That makes it pretty likely that this is not a Venus bug. We have to trace the real issue in your installation.

Yes, I’ve restarted the system several times.

First, I rolled back to Venus OS version 3.67. When that didn’t help, I tried disabling DVCC. However, that caused low state-of-charge alarms on the Multi side and error 67 on the MPPT.

In the end, I re-enabled DVCC and restarted the entire system because I didn’t have more time to deal with the issue — I preferred to have a working system, even without DC-side export.

I’m happy to test anything you think might help identify the issue. Honestly, I’ve run out of ideas at this point. I also have a set of three Multis in the 48/6.5k version on hand, so I could swap out my current 48/5000 setup if that would help in any way.

At this point, I’m even considering getting rid of the MPPT charger altogether and moving all the panels to the AC side using microinverters. If it’s going to cause this kind of trouble, then from my perspective it simply doesn’t make sense to keep it.

@tstiller I see your system started acting up about ten minutes ago. I’ll see what I can find, otherwise I will contact you by email so we can take the issue further.

It is the same time every day it seems.

Yes, also very similar voltage. Always around 54V.

But there are more complications. Shared Voltage Sense is on, and the battery is calibrated a good 0.8V lower. If I turn off all voltage syncing, the Multi and the MPPT agree on the voltage, so it seems the battery is the more likely candidate for being “wrong”. Not that it matters, just an observation, and 0.8V is not too big a deal. It does not affect anything directly.

If I turn off SVS, matters improve. It still happens, but less frequently. The evidence to take note of here: The battery now implicitly runs 0.8V lower.

So this does seem related to battery voltage, but not to the actual measurement itself; rather, something about the way the battery behaves at higher voltages.

I know the DC ripple on batteries is higher when they are very highly charged. So there is a connection between a battery that is full, or almost full, and DC ripple.

I also note that this issue seems to happen during times of higher DC ripple, but cum hoc ergo propter hoc (is there causation)?

This is a fairly large battery at >300Ah, so I would not normally expect to see ripple that is high enough to cause problems on a battery this size, at the low power levels I see it at. It could point to a poor connection somewhere or wiring that can be made thicker, but that is pure speculation.

It is a JK-BMS, so probably self-built?

In any case, I asked some questions about this, and when I know, you will know. In the interim, I added voltage ripple to the custom chart on VRM.

Today I was basically away from the site all day and didn’t interact with the system at all — so I didn’t notice any issues occurring.

I’m not sure if this matters at this point, but just as a reminder: during the diagnostics I completely changed the type of battery connected to the system.

When the problems first appeared, the system was running on a 15S Li-Ion NCM pack with the charge voltage set to 61.50 V. Right now, I have a 16S Li-Ion LFP pack connected, with the voltage set to the nominal 55.2 V.

The only thing that might be relevant is the capacity difference. The original setup consisted of three batteries, each 8 kWh, for a total of 24 kWh. The current setup is a single 16 kWh pack — although I do have another identical pack ready, I just haven’t had time to assemble and connect it yet.

While the capacity is still sufficient for me, I’ve started to wonder about DC ripple — to be honest, I hadn’t paid attention to that before. The question is: what could be causing it?

Each MultiPlus is connected to the main DC bus using 70 mm² cables. These cables are relatively short — about 2 meters total length (positive + negative). The MPPT charger is connected to the DC bus with a 35 mm² cable, also about 2 meters in total length.

The battery pack is connected to the main DC bus with a 70 mm² cable, and the second pack — once I finish assembling it — will be connected in the same way.

So the cable cross-sections shouldn’t really be an issue — unless I’m mistaken?
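For what it's worth, the quoted cable runs can be sanity-checked with a quick resistive voltage-drop estimate. This is a rough sketch of my own, not a Victron tool: the copper resistivity value and the example currents (200 A discharge, 100 A charge, as mentioned earlier in the thread) are assumptions.

```python
# Rough sanity check of the cable sizing described above.
# Assumption: plain DC resistance only, copper at ~20 degC.
RHO_CU = 1.72e-8  # ohm*m, resistivity of copper (assumed)

def cable_resistance(length_m: float, cross_section_mm2: float) -> float:
    """Round-trip DC resistance of a cable run, in ohms."""
    return RHO_CU * length_m / (cross_section_mm2 * 1e-6)

def voltage_drop(length_m: float, cross_section_mm2: float, current_a: float) -> float:
    """Voltage drop across the run at the given current, in volts."""
    return cable_resistance(length_m, cross_section_mm2) * current_a

# Multi: 70 mm^2, ~2 m total (pos + neg), ~200 A worst-case discharge
drop_multi = voltage_drop(2.0, 70, 200)
# MPPT: 35 mm^2, ~2 m total, ~100 A charge
drop_mppt = voltage_drop(2.0, 35, 100)

print(f"Multi cable drop: {drop_multi * 1000:.0f} mV")  # ~98 mV
print(f"MPPT cable drop:  {drop_mppt * 1000:.0f} mV")   # ~98 mV
```

Both come out under 100 mV, which supports the view that the cross-sections themselves are not the problem, assuming the crimped joints are as good as the cable.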

In both cases, these are battery packs that I built myself, but I can assure you they’ve been thoroughly tested and I don’t expect any issues there — although of course I can’t rule it out 100%.

Tomorrow, when I have some time, I’ll go through and check all the connections again. All the cables are crimped using a hydraulic crimper with ring terminals, and I verify the connection resistance afterward.

Yes, you’re right — the current pack is running on a JK-BMS. Both the previous and the current one use it, with the difference that the current pack has communication connected to the Cerbo GX, while the previous ones did not.

Wouldn’t issues with connections — especially on the battery side — typically show up under high discharge loads? During charging I haven’t seen currents above 100A for quite a while, whereas during discharge reaching 200A is not a problem at all.

To be clear, I no longer think this is the cause. But I will explain it anyway.

When a low-frequency topology inverter, like the MultiPlus and Quattro, creates a 50Hz waveform, this causes a corresponding alternating current draw (as the amplitude ramps from zero to peak and back down), which in turn causes a 100Hz ripple on the DC side. The higher the load, the larger the ripple. This is completely normal.

The size of the ripple will depend on the size of the battery and on the cabling.

If the amplitude of this ripple is too large, it will start to cause all sorts of problems. At your site this ripple appears to be about 200mV, which, after some enquiries, would not be enough to cause this issue.
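To make the mechanism concrete, here is a back-of-the-envelope sketch (my own simplification, not Victron's internal model): at unity power factor a 50 Hz sine load draws instantaneous power P·(1 − cos 2ωt), so the DC-side current pulsates at 100 Hz around its mean of P/V, and the ripple amplitude is roughly that current swing times the source loop resistance. The 2 kW load and 1 mΩ loop resistance below are made-up example numbers.

```python
# Back-of-the-envelope 100 Hz ripple estimate for a low-frequency inverter.
# Assumptions: unity power factor, purely resistive battery/cable loop.
def ripple_peak_v(load_w: float, batt_v: float, loop_ohm: float) -> float:
    """Peak deviation of the DC bus voltage from its mean, in volts."""
    mean_current_a = load_w / batt_v   # mean DC current drawn by the inverter
    # the instantaneous current swings roughly +/- the mean at 100 Hz
    return mean_current_a * loop_ohm

# e.g. a 2 kW load on a ~54 V bus through ~1 mOhm of loop resistance
print(round(ripple_peak_v(2000, 54, 0.001) * 1000))  # prints 37 (mV)
```

On this reading, seeing ~200 mV of ripple at modest loads would imply several milliohms of effective loop resistance, which is one reason a poor connection was worth ruling out.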

I can at least confirm, after yesterday, that this is not a bug in Venus. This logic is in the firmware of the Multi.

Thank you for the explanation.

In general, I understand why DC ripple occurs with this kind of inverter topology — the question is whether the level we’re seeing is already problematic. 200 mV seems quite high to me, especially considering that most of the time this system isn’t doing much. So far, the Multi hasn’t reported any DC ripple alarms.

Also, I noticed something strange on the graph — the DC ripple completely disappears at some point during the night. At that time, nothing significant should really be happening, and yet something is clearly affecting the ripple. The question is: what do we do next?

From what I’ve seen, a new firmware v560 for the Multi has been released — should I try installing it on the system for testing?

I don’t know if you noticed, but today the MPPT also briefly stopped working again. It happened around 9:30, and after several minutes it resumed normal operation. The system is still running on Venus OS v3.67, so my initial theory about a bug in v3.70 has essentially fallen apart.

At this point, I’d really like to resolve this issue. As next steps, I’d suggest updating the firmware to v560 on all Multis. If that doesn’t help, I can replace the entire 3×48/5000 setup with 3×48/6.5k units — I already have new ones ready. I could also start by adding the second battery pack, which should help reduce DC ripple.

What do you think? What should we try next?

If you can be patient for a few more days, I’ve booked some time with a firmware developer to look into this. It is better to have an exact answer. Your site is also not the only one; we have another similar one, but yours is quite predictable, which is of great value.