Thanks for the detailed explanation - the surge vs nominal power math itself makes sense, and I agree that an inverter of this class is physically capable of drawing very high short-term current.
What still concerns me, however, is when and why this surge is actually demanded.
In my case:
I can repeatedly disconnect AC manually via the breaker with 5-6 kW of active load, and the inverter transitions cleanly every time.
The shutdown only happens during a slow / unstable grid collapse, and sometimes even with very small loads (300-400 W).
If the inverter genuinely needs a 2×-3× surge from the battery just to transition to inverter mode, then logically this surge should be required every time AC is lost, regardless of how it is lost. But that is not what is observed.
This suggests that the surge demand is not an inherent requirement of inverter takeover itself, but is instead tied to a specific internal operating mode during brownout conditions, where the inverter is still trying to stabilize AC rather than committing immediately to inverter mode.
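To make this hypothesis concrete, here is a minimal sketch of the decision logic I suspect is involved. This is purely my speculation, not the vendor's actual firmware, and every threshold in it is invented for illustration:

```python
# Purely hypothetical sketch of the takeover logic I suspect -- not the
# vendor's firmware. All threshold values are invented for illustration.

def on_ac_sample(ac_voltage: float) -> str:
    AC_DEAD_V = 50.0         # below this, AC is clearly gone (assumed)
    AC_WINDOW_MIN_V = 207.0  # lower edge of the acceptable grid window (assumed)

    if ac_voltage < AC_DEAD_V:
        # Hard break (e.g. breaker trip): commit to island mode at once.
        # The battery only has to carry the actual load, which would match
        # the clean transitions I see even at 5-6 kW.
        return "island_mode"
    if ac_voltage < AC_WINDOW_MIN_V:
        # Brownout: the inverter keeps trying to prop up the sagging grid.
        # Supporting a collapsing grid can demand far more power than the
        # local load needs, and if the battery/BMS cannot deliver that
        # surge the unit shuts down -- even at 300-400 W of load.
        return "grid_support_mode"
    return "grid_parallel"
```

If something like this is happening, the surge only appears on the brownout path, which would explain why breaker trips never trigger it.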
From a system design perspective, this puts me in a real dilemma.
I do not consider it reasonable to increase battery capacity just to satisfy this surge requirement: at my average consumption, the extra capacity would amount to two days of continuous autonomy that I simply do not need. At the same time, I cannot significantly downsize the inverter, because I occasionally need to run high-inrush or high-power appliances (induction cooking, washing machine, and so on), even if only for short periods.
What makes this especially confusing is that cheaper Chinese inverters handle this exact scenario without issues, which strongly suggests that this behaviour stems from control logic and firmware decisions rather than from fundamental power-electronics limits. I fully understand that these systems are primarily designed for PV-based installations rather than as large UPS systems, and that this particular use case may not have been the main design target.
However, if the UPS functionality is advertised and supported, it should also be validated under realistic brownout conditions. Requiring a battery bank capable of delivering 20,000 VA just to support a 300-500 W load during grid collapse feels, at the very least, counterintuitive.
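To put numbers on that, here is a rough sanity check. The 48 V bank voltage and ~94% inverter efficiency are my assumptions for illustration; the 20,000 VA figure is the surge discussed above:

```python
# Rough sanity check: battery-side current implied by the surge spec
# versus the actual load. The 48 V nominal bank and ~94% efficiency
# are assumptions for illustration, not measured figures.

V_BATT = 48.0      # nominal battery voltage [V] (assumed)
EFF = 0.94         # assumed inverter efficiency

surge_va = 20_000  # surge the battery would have to support [VA]
load_w = 400       # the actual load at the moment of grid collapse [W]

i_surge = surge_va / (V_BATT * EFF)  # current needed to back the surge
i_load = load_w / (V_BATT * EFF)     # current the real load actually needs

print(f"Surge-rated battery current: ~{i_surge:.0f} A")  # ~443 A
print(f"Steady current for a {load_w} W load: ~{i_load:.1f} A")  # ~9 A
# -> a roughly 50x gap between what the spec demands and what the load draws.
```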
It also raises an open question about battery chemistry stress: if such extreme current spikes are indeed required, even for very short moments, what does that imply for long-term battery health, especially for LiFePO₄ systems that are otherwise considered very robust?
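As a rough illustration of that concern, the same surge current can be expressed as a C-rate. The 280 Ah bank size and the discharge limits below are typical datasheet values I am assuming, not figures from my actual system:

```python
# Hypothetical C-rate estimate for a LiFePO4 bank facing the surge above.
# The 280 Ah capacity and the 1C / 2C limits are typical datasheet values
# I am assuming, not measurements from my system.

CAPACITY_AH = 280   # assumed bank capacity [Ah]
I_SURGE_A = 443     # battery current implied by the 20 kVA surge (see above)

c_rate = I_SURGE_A / CAPACITY_AH
print(f"Surge C-rate: ~{c_rate:.2f}C")  # ~1.6C on a 280 Ah bank

# Many LiFePO4 cells tolerate ~1C continuous and ~2C for a few seconds,
# so a sub-second 1.6C pulse is probably survivable -- but if the BMS
# current limit is set near 1C, it may simply cut out instead.
```

Even if the cells themselves survive such pulses, repeated excursions near the BMS cutoff do not strike me as a healthy operating regime.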
I am genuinely interested in understanding where the practical boundary lies here - architectural limitation, firmware behaviour, or simply a design assumption that does not fully match this usage pattern.