I've cruised this and other forums for resources but can't seem to get un-stuck. When Venus OS is loaded to the Pi it works fine, but I'm hoping to use the Pi for more than just that purpose, so I'm trying to use the docker container with Grafana.
The containers install fine, but no measurements are picked up:
When I run cat /dev/ttyUSB0 I get this result, so it looks like the MPPT 75|15 is outputting as it should:
I've tried a few options but none seem to achieve the desired effect:
The log isn't reporting any errors, but with debug turned on it looks like it's rapidly opening and closing connections on the USB serial port.
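For context, what `cat /dev/ttyUSB0` shows is the VE.Direct text protocol: tab-separated label/value lines ending in a Checksum record. Here's a minimal Python sketch of parsing one such frame — the sample frame below is illustrative, not real output from my unit:

```python
# Minimal sketch: parse VE.Direct text-protocol lines (tab-separated
# "LABEL<TAB>VALUE" records) such as an MPPT emits over /dev/ttyUSB0.
# The sample frame below is made up for illustration.

def parse_vedirect(text: str) -> dict:
    """Parse label/value lines into a dict, skipping the Checksum record."""
    record = {}
    for line in text.splitlines():
        if "\t" not in line:
            continue
        label, value = line.split("\t", 1)
        if label == "Checksum":
            continue  # checksum byte is binary; ignored in this sketch
        record[label] = value
    return record

sample = "PID\t0xA042\nV\t13180\nI\t500\nVPV\t36000\nCS\t3\nChecksum\t\x12"
frame = parse_vedirect(sample)
print(frame["V"])  # battery voltage in mV
```

If plain parsing like this works on the raw stream but the container still cycles connections, that would point at the container's serial handling rather than the charger.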
Any advice would be greatly appreciated.
Here's my .yaml file:
I have a Batrium controlling my CCGX. It is passing data back to the GX and VRM. However not everything gets passed to VRM. I have installed Docker/Grafana in an effort to track the missing data.
The information in question is WHICH cell of the battery has the high voltage and which has the low voltage. VRM and Grafana readily show the min and max cell voltages, but not which cell each one is.
If I run MQTT Explorer and connect to the CCGX, it reports the min/max cell IDs consistent with what the BMS is reporting:
My Grafana queries are identical, but plots for "MinVoltageCellId" and "MaxVoltageCellId" show "No Data".
Can anybody provide a suggestion?
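One thing I'd check is whether those CellId values arrive as strings rather than numbers: InfluxDB stores string payloads as string fields, and Grafana time-series panels show "No Data" for those. A quick stdlib sketch of that distinction — the payloads below are hypothetical, modeled on the `{"value": ...}` JSON shape Venus OS uses over MQTT:

```python
import json

# Hypothetical MQTT payloads in the Venus OS {"value": ...} JSON shape.
# If the cell ID arrives as a string (e.g. a Batrium cell label), InfluxDB
# stores it as a string field, which a Grafana time-series panel can't plot.
numeric_payload = '{"value": 3}'
string_payload = '{"value": "C3"}'

def is_plottable(payload: str) -> bool:
    """True only if the MQTT value is numeric (int/float, not bool)."""
    value = json.loads(payload)["value"]
    return isinstance(value, (int, float)) and not isinstance(value, bool)

print(is_plottable(numeric_payload))  # True
print(is_plottable(string_payload))   # False
```

If the IDs are indeed strings, a table panel (or a transformation that casts them) might show them even though a graph won't.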
Hello Everyone! :)
Over the last few weeks I have tried to install the service victronenergy/venus-docker-grafana from Victron without Docker on Debian 11.
To do so, I pulled the master branch of victronenergy/venus-docker-grafana-images; see https://github.com/victronenergy/venus-docker-grafana .
Then I installed InfluxDB and Grafana manually and connected Grafana to InfluxDB via the data source.
The next step was to start the server Node.js application.
This also worked as expected with the proper Node.js version via NVM.
The log output said something like this (copied from Victron):
[info] [influxdb] Attempting connection to v-7c242b35567af6a044fcaae432a98bb7f25cdf68-influxdb:8086/venus
[info] [venus-server] running at 0.0.0.0:8088
[info] [influxdb] Connected
[info] [influxdb] Set retention policy to 30d
As the log output says, the server connected to InfluxDB and started itself on 0.0.0.0:8088. The static websites should now also be available on 0.0.0.0:8088.
But this was not the case, so I started searching for the issue.
After hours of reverse engineering I found out that the server application can also be installed via npm with the command "npm install -g --unsafe-perm venus-docker-grafana-server".
The installation is placed in /usr/local/lib/node_modules/venus-docker-grafana-server.
Starting venus-server from the bin folder of /usr/local/lib/node_modules/venus-docker-grafana-server starts the server application correctly AND serves the static websites as expected. With the websites on 0.0.0.0:8088, all the configuration can be set as needed; it is then stored in the config/config.json file.
If the configuration is valid, the Venus OS data is collected by the server.
Shutting down the "npm server" and restarting the "GitHub server" also results in a properly working server that pushes data to InfluxDB; only the static websites are not loaded.
The reason is that there is a difference between the "npm server" and the "GitHub server":
the "GitHub server" only has the folder "public_src" instead of the "public" folder with the static websites.
Now, after the long explanation, my question:
Why is there a difference between the npm install and the server pulled from GitHub?
It may be a dumb question due to my lack of knowledge of Node.js and Docker ;-)
PS: Here are the versions I used:
I now have two Cerbo GX units running, so I have two installations in VRM. I can see data from both installations in VRM.
I use venus-docker-grafana to load the data into a local InfluxDB. All tables have the portalID field, so it should work with different installations (each installation has its own portalID).
But in the dashboard, only data from the first installation shows up.
What am I missing?
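For reference, this is the shape of query I'd expect to need in the Grafana panel so both units appear as separate series — assuming portalID is stored as a tag; the measurement name below is illustrative:

```sql
-- Sketch only: measurement name is illustrative; assumes portalID is a tag.
-- GROUP BY on the tag splits the result into one series per installation.
SELECT mean("value")
FROM "battery/Dc/0/Voltage"
WHERE $timeFilter
GROUP BY time($__interval), "portalId"
```

Without the tag in GROUP BY (or a WHERE filter on it), InfluxDB merges both installations into one series, which could explain seeing only one.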
I use the Victron Docker images to push all data from MQTT to a local InfluxDB.
This is a very nice solution, giving detailed data at six samples per second for all measurements.
The data consumes about 300 MB per day, which is fine for a few months.
But I also want to use this for long-term storage of the data (>10 years).
So I want to use the InfluxDB aggregation and retention features to downsample old data.
Data older than one month should be aggregated into 10-minute slots, reducing the data consumption from 9 GB/month to about 15 MB/month.
Has anyone already done this who can share their downsampling task script? It would be great to share it, so that not everyone needs to write it again and again.
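In case it helps the discussion, this is roughly the shape I'd expect such a setup to take in InfluxQL (InfluxDB 1.x); the database and retention-policy names below are made up:

```sql
-- Sketch only (InfluxDB 1.x); database and policy names are illustrative.
-- Keep the raw high-rate data for 30 days:
CREATE RETENTION POLICY "raw_30d" ON "venus" DURATION 30d REPLICATION 1 DEFAULT

-- Keep 10-minute aggregates indefinitely:
CREATE RETENTION POLICY "agg_forever" ON "venus" DURATION INF REPLICATION 1

-- Continuously downsample every measurement into 10-minute means,
-- preserving all tags (portalId etc.) via GROUP BY *:
CREATE CONTINUOUS QUERY "cq_10m" ON "venus"
BEGIN
  SELECT mean("value") AS "value"
  INTO "venus"."agg_forever".:MEASUREMENT
  FROM "venus"."raw_30d"./.*/
  GROUP BY time(10m), *
END
```

A mean alone loses peaks, so adding max("value")/min("value") columns would roughly triple the aggregate size but keep the extremes visible in Grafana.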