Possible Bug? - Cerbo GX (V3.66) - VRM Stored Records listed as -5 after network connection lost

We recently returned to our boat after 10 days away, and of course this is when our Starlink PoE injector decided to develop a loose connection. We are back and network connectivity is restored; however, we were not sending VRM report updates for about 7 days. That data has not uploaded to the VRM portal, and looking on the Cerbo under Settings→VRM→Stored Records, I see a value of -5, which I know can’t be right.

Any thoughts on what would cause this? The record update rate is set to 5 minutes, and the data since the internet connection was restored has been uploaded.
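For context on the volume involved: at a 5-minute interval, a 7-day outage works out to about 2,000 buffered records, as a quick shell calculation shows (the variable names here are just for illustration):

```shell
# Expected number of VRM records buffered during the outage:
# 7 days x 24 hours x 60 minutes, divided by the 5-minute interval.
DAYS=7
INTERVAL_MIN=5
RECORDS=$(( DAYS * 24 * 60 / INTERVAL_MIN ))
echo "$RECORDS"   # → 2016
```

So the stored-record counter should have been in the low thousands, not negative.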

Everything else looks in order:

Storage Location: Internal

Free Disk Space: 559.57M

I would love to recover the missing data, since something was going on (shore power issues) that I’d like to understand. Any help would be appreciated.

Mike

You mean “negative 5”? Can you make a screenshot?

Only 559 MB of free disk space?
I just checked a system with a Cerbo, and there it shows almost 5000 MB.
Do you have any 3rd-party software installed?

@M_Lange Hi, yes, it showed as “-5”. I have Venus OS Large installed and use Node-RED (Signal K is not used on the Cerbo). I do have some elaborate Node-RED flows, but I can’t imagine they take much space. After my original post I did some more digging and found that there really isn’t much capacity to hold that many records, so I installed a 32 GB SD card, which will probably prevent this problem from resurfacing. However, I did want to report what I saw because it seemed like a bug (I used to code for a living). When I inserted the SD card, the Stored Records number updated to “0”, so I can’t give you a screenshot, but it really was as simple as it being displayed as -5 (formatting looked normal, nothing weird, just literally a negative number).

I SSH’d in and ran the df command:

Filesystem       Size    Used    Available  Use%  Mounted on
/dev/root        1.1G    1.0G    54.0M      95%   /
devtmpfs         453.9M  0       453.9M     0%    /dev
tmpfs            502.4M  956.0K  501.5M     0%    /run
tmpfs            502.4M  228.0K  502.2M     0%    /var/volatile
/dev/mmcblk1p5   1.1G    570.7M  459.8M     55%   /data
tmpfs            502.4M  956.0K  501.5M     0%    /service
/dev/mmcblk0p1   14.8G   128.0K  14.8G      0%    /run/media/mmcblk0p1

/ (/dev/root) seems pretty full, but /data seems to be okay at 55% usage.
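As an aside, if you want to keep an eye on that /data usage from a script, here is a minimal sketch using POSIX-format df output (the MOUNT variable is an assumption for illustration; substitute /data on the Cerbo):

```shell
# Print the use% of a given mount point.
# -P forces the portable single-line POSIX output format,
# awk grabs the 5th column of the data row and strips the '%'.
MOUNT=/
usage=$(df -P "$MOUNT" | awk 'NR==2 {sub("%","",$5); print $5}')
echo "$usage"
```

That gives you a bare number you can compare against a threshold in Node-RED or a cron job.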

Here is the first page or so of the largest directories on the filesystem:

root@einstein:/# du -d 3 | sort -nr
du: ./proc/4258: No such file or directory
1644800 .
953064 ./usr
718932 ./usr/lib
584504 ./data
458960 ./usr/lib/node_modules
346828 ./data/conf
346680 ./data/conf/signalk
229716 ./data/home
203260 ./data/home/nodered
93984 ./usr/bin
73972 ./usr/libexec
72528 ./usr/libexec/gcc
55608 ./usr/lib/python3.12
37076 ./usr/include
34224 ./opt
34220 ./opt/victronenergy
31440 ./var
31104 ./var/www
31080 ./var/www/venus
26436 ./data/home/root
22588 ./lib
20320 ./usr/lib/opkg
16752 ./usr/share
14624 ./lib/modules
14620 ./lib/modules/6.12.23-venus-5
13980 ./usr/include/c++
12304 ./usr/sbin
11012 ./usr/lib/dri
10940 ./usr/lib/perl5
10268 ./opt/victronenergy/gui-v2
10140 ./usr/lib/arm-ve-linux-gnueabi
9080 ./usr/lib/qml
7804 ./opt/victronenergy/gui
7500 ./data/log
6744 ./usr/include/linux
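One note on interpreting the listing above: du crosses mount points by default, so the ./data entries (a separate partition) inflate the totals shown for /. If your du supports -x (BusyBox and GNU coreutils both do), a refinement like this keeps the scan on the root filesystem only:

```shell
# Largest directories on the root filesystem only:
# -x stays on one filesystem (skips /data, /run, /dev, etc.),
# -d 2 limits the depth so the output stays readable.
du -x -d 2 / 2>/dev/null | sort -nr | head -n 15
```

With that, the 95% usage of / can be attributed without double-counting what lives on /data.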

I would gladly take any advice on directories that seem abnormal.