3x MultiPlus + Cerbo GX – System shuts down after grid loss (ESS enabled) – works fine without Cerbo

Hi all,

I’m having an issue with my 3-phase setup and I’ve done quite a bit of testing already, so I’d really appreciate some direction.

Setup

  • 3x MultiPlus 5000 (3-phase)

  • Cerbo GX

  • Fronius Symo 10kW 3-phase (AC-coupled on AC-out)

  • No external grid meter (Multi used as meter in ESS)

  • ESS enabled

  • DVCC disabled

  • Grid monitor disabled

  • Latest stable firmware (also tried latest dev firmware)

  • Node-RED disabled for testing

  • Battery bank ~100 kWh, healthy

What happened originally

There was a real grid outage in my area.
My system went down as well and started cycling on and off.
First I saw “Grid lost”, then a VE.Bus sync error.
It kept rebooting and eventually stopped completely.

Current reproducible behavior

At night (no PV), with no loads connected:

  1. I switch off the grid.

  2. The system islands normally and keeps running.

  3. After about 1 minute, it reports grid lost again and then VE.Bus time sync error.

  4. It restarts.

  5. It goes into a reboot loop and eventually shuts down.

Important test

If I disconnect the Cerbo from VE.Bus:

  • The 3x MultiPlus system islands perfectly.

  • It runs stable indefinitely.

  • No errors, no restart.

So the hardware seems fine. The issue only appears when Cerbo is connected and ESS is active.

I’ve:

  • Disabled Node-RED completely.

  • Tried both stable and development firmware.

  • Tested with no loads.

  • Tested at night (Fronius not producing).

Still the same behavior.

Is there something specific about:

  • ESS without external grid meter

  • 3-phase MultiPlus

  • Grid loss handling

that could explain this?

It feels like the system is capable of islanding (because it does), but something in the GX/ESS logic forces a shutdown after about a minute.

Any ideas on what I should check next?

Thanks.

You don’t mention what your battery is.
This is a critical component.
Does the system cycle 3 times before stopping?

Redo your tests with the Cerbo connected and DVCC set to “No BMS control”.

What do the inverter state and alarms widgets report once it is back up?

Thanks for looking into this.

To answer your questions:

  • Yes, the system cycles approximately 3 times before stopping completely.

  • During the event the inverter state shows “Fault”.

  • The first alarm is “Grid fault / Grid lost”, followed by VE.Bus Error 10 – system time synchronisation.

  • No other alarms are present (no low voltage, no overload, no battery current limit), and the tests are done with AC loads disconnected.
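For anyone following along, the codes involved can be looked up programmatically. A minimal sketch follows; the descriptions are paraphrased from Victron's published VE.Bus error code list, so verify against the official documentation before relying on them:

```python
# Partial VE.Bus error code lookup (paraphrased from Victron's
# published VE.Bus error code documentation; only a few codes shown).
VEBUS_ERRORS = {
    1: "Device switched off because another phase in the system switched off",
    3: "Not all, or more than, the expected devices were found in the system",
    4: "No other device detected",
    10: "System time synchronisation problem occurred",
}

def describe_vebus_error(code: int) -> str:
    """Return a human-readable description for a VE.Bus error code."""
    return VEBUS_ERRORS.get(code, f"Unknown / not in this partial list (code {code})")

print(describe_vebus_error(10))  # the error seen in this thread
```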

The battery is a Seplos system connected via CAN. The Multis are set to “Lithium – no BMS” and DVCC is disabled.

I will repeat the test with the Seplos CAN cable physically disconnected to fully exclude any possible BMS influence and report back.

Error 10 is often a secondary alarm, unrelated to the actual cause.
Three start attempts are common with overload, relay failure, etc.
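The three-attempts pattern described in this thread is effectively a bounded restart counter. A hypothetical sketch of that behaviour (purely an illustration of the observed pattern, not Victron's actual firmware logic):

```python
def attempt_recovery(start_fn, max_attempts: int = 3) -> bool:
    """Try to (re)start up to max_attempts times; give up after that.

    Mirrors the observed behaviour: three restart attempts after a
    fault, then the system stays off. Illustrative only.
    """
    for _ in range(max_attempts):
        if start_fn():
            return True   # system came back up
    return False          # stayed off after max_attempts failures

# Example: a fault that never clears -> three attempts, then off.
attempts = []
result = attempt_recovery(lambda: (attempts.append(1), False)[1])
print(len(attempts), result)  # 3 False
```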

In the ESS assistant, the battery type should be set to lithium with “other type BMS”, and DVCC should be enabled for a managed battery.
I would get the system configured correctly for the type of battery.

Thanks for the clarification.

The configuration (Lithium – no BMS, DVCC off) is intentional due to the way the battery system has been integrated over time. This is not a misconfiguration but a deliberate setup choice.

This behavior is also not completely new. I have seen the same Grid fault + VE.Bus Error 10 shutdown a few times in the past, but it was rare and I did not investigate further at the time. After a recent real grid outage where the system attempted to take over, cycled three times and then stayed off, I now want to resolve it properly.

The key points remain:

  • No AC loads connected during testing

  • Tested at night (no PV contribution)

  • No battery alarms or current limits

  • System runs in inverter mode for ~60 seconds

  • Then enters Fault → Grid fault → VE.Bus Error 10

  • Three restart attempts, then stops

  • With Cerbo disconnected from VE.Bus, the 3× Multi system islands and runs indefinitely

Given that the hardware operates correctly without the Cerbo in the loop, and that the issue is reproducible under controlled conditions, the next logical step would be escalation to Victron support for detailed log-level analysis of the GX / ESS / VE.Bus interaction, no?

That is between you and the distributor who supplied your system; Victron does not directly support end users, and the community is, per the guidelines, not a support forum.
You have unsupported batteries and are using a non-best-practice approach (not using DVCC), so I suspect support will be a challenge.

The GX is a coordinator; it has a role in both DVCC and ESS. Since this is a longstanding issue that is getting worse, there is likely a config or install issue somewhere.

The time error is a non-event; you see it on systems that cannot start, and the triple restart attempt is normal for a system trying to recover from a fault.

This is impossible to diagnose over the internet; it is just too complex.
I would check your dynamic cut-offs in ESS, and I would also seriously reconsider your choices regarding no BMS and not using DVCC.

How are you providing battery SOC and readings to the system? Do you have a shunt?

Obvious things to check: replace the comms cables, and make sure they aren’t bundled with other cabling (AC, etc.).

This just isn’t a problem we typically see, so it is highly unlikely to be a software issue.

Thanks for the reply.

In the past I did run a battery aggregator (dbus-serialbattery and related add-ons), but for debugging this issue I have removed all non-Victron components. The system is now stripped back to a minimal configuration.

SOC is currently provided by the internal Multi battery monitor and the Seplos CAN battery as seen by the GX. There is no external shunt.

The battery bank is significantly oversized relative to the inverter capacity. Even with all three Multis at full output, DC current remains well below the battery’s continuous rating. During testing DC voltage is stable and there are no battery or low-voltage alarms.

Dynamic cut-off is 47 V and is not reached during the event.

VE.Bus cables are routed alongside AC cabling; I can reroute them as a test to exclude interference.

I’ve attached the current VEConfigure file of the three Multis in this simplified setup.

The issue remains reproducible with no load and no PV, and disappears completely when the Cerbo is disconnected from VE.Bus.

My config file: Unique Download Link | WeTransfer

The internal meter is inaccurate. If the system monitor is set to internal, then what the system sees may not be correct. In this configuration, a shunt is needed for accuracy if you are not using the BMS.
That the GX can see the battery doesn’t help if the battery isn’t being used as the monitor or to control the chargers; the system then defaults to the inverter’s own algorithm.

The settings look OK. The biggest issue is the difference in unit ages, but since it isn’t a parallel setup, it shouldn’t be a problem.

Since this is repeatable, I suggest some structured troubleshooting.
Try removing the ESS assistant.
If need be, disconnect the AC-out connections.
Disable the PV inverters.

Pare the system back to basics and remove connections until it, hopefully, starts to work as expected.
You can even reconfigure the system, removing phases, to rule out an individual unit.

It is a pain to go through, but it might help get to the bottom of it.

3-Phase Multi – Middle Unit Drops Out, Others Go to Fault After Grid Loss

I think I may have identified the source of my issue, but I would like confirmation if this behaviour is normal.

I recorded a video just before the fault occurs. The sequence is as follows:

  1. I switch off the grid.

  2. All three Multis continue inverting normally (green inverter LEDs).

  3. Just before the fault:

    • The middle inverter switches off first.

    • Immediately after that, the other two inverters go into fault.

This happens consistently.

Additional Tests Already Performed

To rule out external causes, I have also:

  • Disconnected the communication cable from the Seplos BMS.

  • Turned off the PV inverter outputs completely.

  • Rerouted communication cables away from AC wiring to avoid possible interference.

The behaviour remains exactly the same:

  • Middle unit drops out first.

  • The other two units then enter fault.

Question

In a properly functioning 3-phase setup, is it normal for one unit to switch off first and cause the others to fault?

At this point, it seems likely that:

  • The issue is specific to the middle inverter (hardware, DC supply, or VE.Bus issue),
    rather than PV, BMS communication, or cable interference.

Has anyone seen similar behaviour, or can confirm whether this points to a defective unit?

That is normal.
You can disable that behaviour, but then ESS can’t be used.
Reconfigure the system without that phase and test again, or double-check all the wiring for that unit.
Alternatively, swap two units around and see if the issue moves.
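The reasoning behind the swap test can be made explicit. A small sketch of the inference (a hypothetical helper, just to make the logic concrete):

```python
def swap_test_verdict(fault_before: str, fault_after: str, swap: dict) -> str:
    """Infer whether a fault follows the unit or stays with the position.

    fault_before / fault_after are position labels (e.g. "L2");
    swap maps old position -> new position for the two swapped units.
    Illustrative only.
    """
    moved_to = swap.get(fault_before, fault_before)
    if fault_after == moved_to and moved_to != fault_before:
        return "fault follows the unit -> suspect that inverter"
    if fault_after == fault_before:
        return "fault stays with the position -> suspect wiring/cabling at that position"
    return "inconclusive -> repeat the test"

# Middle unit (L2) swapped with L1; fault now appears on L1:
print(swap_test_verdict("L2", "L1", {"L2": "L1", "L1": "L2"}))
```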

As per the LED definitions in the Victron Toolkit app.
You don’t have an MK3 plugged in at the same time as your GX, do you?