The Node-RED route is not a good solution; it won't be any easier to install.
You would have to install the Large OS image, enable Node-RED, and then copy/paste a working flow.
If you cannot make it work with the default nodes bundled with the Node-RED install, you will also have to install additional packages.
I’m doing this on a single install, but if you intend to do it on multiple installs, it will be much faster to go with the driver solution.
I did not; it requires the TPDOs to be reprogrammed on the controller.
When I checked the default TPDOs that were configured on my controller, none matched what the Oceanvolt driver expected.
It would be more complicated to set up if we all had to reprogram our controllers.
Both ways have their own merit. I'm less knowledgeable about modifying VenusOS directly, and I appreciate the higher level of transparency that building code in Node-RED allows, especially during development. In the end you are probably right that a robust, well-working mod for incorporating the Sevcon data stream into the VenusOS motor-monitoring abilities should best reside on the real-time OS itself. But I'm not there yet.
Understood – you’re right that it’s easier to deal with all this from the Cerbo end instead of reprogramming the controllers (which needs the Sevcon dongle and software), as long as it can be made easy to do, and persistent so it carries on working after reboots and OS upgrades.
Maybe once the driver is debugged Victron can add it onto the list in the standard OS package, like the Oceanvolt one?
With the development of DESS, Victron started out in Node-RED first, only to move core components to the real-time OS on the inverters (assistants) and Cerbo (VenusOS) later. I think that is a good approach to take.
Once a working solution can be demonstrated on Node-RED, including all required exception handling (if any), we might be able to convince Victron to include a modified Oceanvolt driver (implementing the same functions of the Node-RED solution).
But I am generalizing now, perhaps even overcomplicating things due to a lack of hands-on experience with this specific challenge. From reading the Oceanvolt code I got the impression things could be much simpler than I expect them to be. (Although CAN bus and “simple” hardly ever happen at the same time.)
I think you’re looking at this from the wrong direction; requiring the installation of OS-Large and Node-RED and then implementing the motordrive/display flow within this is a much bigger (and not officially supported!) change than just adding the driver that @citolen is working on. It’s a big overhead on the system, as well as being far less efficient for real-time monitoring due to all the layers of indirection, coding and libraries. It’s using a big slow cumbersome sledgehammer to crack a small fast-moving nut…
Node-RED is good for occasional event triggering like turning things on and off with complex functions, it’s not so good for things like displaying multiple real-time parameters which need updating several times per second to be usable – I expect the CPU load to do this would be pretty high.
As I said, both ways have their merits, I am not arguing what is ‘the best’ technical solution. I am arguing that getting anything picked up by Victron to include in the official build is, by far, the most difficult goal to accomplish. The mere fact that the Node-RED virtual motor is supported is the deciding practical factor, to me.
And w.r.t. any possible performance issues:
I have four virtual devices running in Node-RED, all being fed a steady stream of calculated data, including the use of Kalman filters to smooth out the most annoying data-acquisition noise.
All that runs without problems, and with not even so much as a noticeable uptick in the ‘D-Bus Round-trip time (ms)’.
So I don’t worry about any virtualization overhead, we are not talking millisecond response times for the core power path controller here.
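To make the smoothing concrete: a minimal scalar Kalman filter of the kind used for this sort of noise reduction can be sketched in a few lines. This is an illustration only – the constant-value model and the q/r tuning values below are assumptions of mine, not the actual flow’s settings:

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter assuming a roughly constant underlying value.

    q: process variance (how fast the true value may drift)
    r: measurement variance (how noisy the sensor is)
    """
    def __init__(self, q=1e-3, r=0.5, initial=0.0):
        self.q = q
        self.r = r
        self.x = initial   # state estimate
        self.p = 1.0       # estimate variance

    def update(self, z):
        # Predict: estimate variance grows by the process noise
        self.p += self.q
        # Update: blend the prediction with the new measurement
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# Smoothing a noisy battery-voltage reading hovering around 48 V
f = ScalarKalman(initial=48.0)
for z in [48.4, 47.7, 48.9, 47.5, 48.1]:
    smoothed = f.update(z)
```

Each update is a handful of floating-point operations, which is consistent with the observation that a few of these per second are invisible in the D-Bus round-trip time.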
In the end the determining factor is who actually steps up to the plate and does it, and they’ll use whatever method they think is best – so over to you for Node-RED then…
If @citolen goes down the driver route (which looks like working, and being simple) and you (or anyone else) goes down the Node-RED route (because you already have it installed) then users will have a choice, which is always good…
I suggest we stop arguing about what the best way is in theory, and just concentrate on coming up with one way (or two?) that works in practice…
I am looking to buy a Sevcon controller to test with at my lab. Preferably a 48V Gen4 suitable for asynchronous motors (old-fashioned 3-phase, adapted for lower voltages), which are quite common in marine applications. That saves me the trouble of developing and testing on board my customers’ boats.
I’ll see what I can do for you with regards to getting you a product id. Please give me a few days.
What’s the typical traffic on a CANopen bus for motor controllers?
I.e., how many messages per second?
I’m asking because of the Python-induced overhead.
That should be fine though, as long as it’s not too crazy. And everyone is welcome to upgrade to an Ekrano GX.
Just as an example to illustrate: making a Python script handle all CAN messages on a VE.Can / N2K network is quickly too much.
Lastly: YachtDevices sell small CAN-to-CAN converters that could also be used for this. They offer paid engineering services: if you give them a clear spec w.r.t. CANopen, they can make a converter from that into N2K for you.
Don’t get me wrong; I’m not saying to give up on the Node-RED approach. Just let @citolen do his Python one.
W.r.t. making such mods installable and keeping them nice and visible to the user, plus manageable: we’re working on a plan for that – well, actually I have the plan.
But as usual it needs time / alignment / discussion / small changes in lots of places…
Doesn’t the Node-RED approach still need some custom code to get the CANopen data from the Sevcon into the virtual motor driver? It can’t be done with Node-RED alone.
IIRC that’s what you said further up the thread – in which case doesn’t it have the same issue (non-standard code installation) as the driver that @citolen is writing, plus the need to install/configure Node-RED?
Regarding traffic levels, there’s no point reading the motordrive data faster than you want the display to be refreshed – this is a tradeoff between getting the bars to update/move smoothly and being able to read changing numeric values (yes I know Node-RED could filter these differently…), but I wouldn’t have thought more than a few updates per second (two? three?) would be needed which can’t be much traffic.
I can’t say for the others; for the Sevcon Gen4 it’s 5 TPDOs max, but they broadcast at different frequencies depending on how it was configured.
Mine had some torque data broadcasting multiple times per second.
This resulted in much more traffic than what the driver generates by fetching the data we’re interested in once per second.
The driver does at minimum 5 SDO reads/s, and at maximum 5 SDO reads and 2 SDO writes/s (the writes are needed to get the proper access level to read the other values).
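As a rough illustration of that once-per-second poll cycle, the sketch below stubs out the bus and derives the display values from five reads. The voltage/current/temperature object indices are placeholders I made up; only 0x606C (velocity actual value) and 0x6041 (statusword) are standard CiA-402 indices, and the real driver’s object map may well differ:

```python
# Hypothetical sketch of a once-per-second SDO poll cycle.
# Only 0x606C and 0x6041 are standard CiA-402 indices; the
# voltage/current/temperature indices below are placeholders.
POLL_OBJECTS = {
    "rpm":     0x606C,  # velocity actual value (CiA-402)
    "voltage": 0x5100,  # placeholder index
    "current": 0x5101,  # placeholder index
    "temp_c":  0x5102,  # placeholder index
    "status":  0x6041,  # statusword (CiA-402)
}

def poll_cycle(read_sdo):
    """One poll cycle: 5 SDO reads, then derive the display values.

    read_sdo(index) -> raw integer; in a real driver this would be
    an SDO upload over the CAN bus.
    """
    raw = {name: read_sdo(idx) for name, idx in POLL_OBJECTS.items()}
    values = dict(raw)
    # Power is derived locally rather than read from the controller
    values["power_w"] = raw["voltage"] * raw["current"]
    # Direction comes from the sign of the RPM reading
    values["direction"] = "F" if raw["rpm"] >= 0 else "R"
    return values

# Stubbed bus returning canned raw values, for illustration only
fake_bus = {0x606C: 1200, 0x5100: 48, 0x5101: 25, 0x5102: 40, 0x6041: 0x27}
snapshot = poll_cycle(fake_bus.get)
```

The point of the structure is that the poll rate (and therefore the bus traffic) is entirely under the driver’s control, independent of what the controller broadcasts.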
I would be happy not to use Python for this; it wasn’t my preferred choice. But I imagine the C library for velib is closed source?
I didn’t take the time to check if the .so and includes are anywhere in Venus OS.
If there is a possibility for us to access it then I will write the driver in C/C++; otherwise I will stick to Python.
The key point here is that unlike using broadcast TPDOs from the Sevcon (which could swamp the bus/driver), the Cerbo (and the driver running on it) is the master, and controls how many SDOs are read repeatedly and at what rate.
Once per second should be fine for the display, and except at startup (when all the status ones such as ID can be read) only the ones useful for display need to be read out – which I assume as a minimum is just rpm, voltage and current so that power can be calculated, and maybe direction (F/N/R) though this seems a bit pointless…
(I already have temperature sensors on the motor and controller (already connected to the Cerbo) so don’t need to read these over the CANbus)
There is a misunderstanding: using the polling method will not prevent the controller from broadcasting the TPDOs.
The controller will send them regardless; once operational, the controller will start broadcasting.
I doubt your controller has no TPDOs configured; they are probably there, just not carrying the data you would like to display.
Each solution has its pros and cons:
- TPDOs: keeps the CAN bus carrying just the right amount of data (assuming they are configured correctly), and anything on the bus can read the controller’s data. The downside is that you need to reprogram the controller.
- Polling: no reprogramming of the controller. The downside is that the CAN bus carries the default broadcasts from the controller plus the polling requests.
It’s understandable that the Oceanvolt driver relied on the first solution (TPDOs), they would program them correctly before reaching the client.
One problem at a time, let’s see later if the amount of data on the bus is an issue.
We can make the driver support polling or TPDOs if that’s ever an issue.
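For reference, consuming a TPDO is cheap on the receiving side: it is a fixed frame of up to 8 data bytes whose layout is whatever was mapped into it on the controller. With a hypothetical mapping of a 32-bit RPM plus 16-bit voltage and current (CANopen transmits multi-byte values little-endian), decoding is a single struct.unpack:

```python
import struct

def decode_tpdo(payload: bytes) -> dict:
    """Decode a hypothetical TPDO mapping: int32 rpm, int16 voltage, int16 current.

    CANopen sends multi-byte values little-endian; the actual layout
    depends entirely on how the controller's TPDO was programmed.
    """
    rpm, voltage, current = struct.unpack("<ihh", payload)
    return {"rpm": rpm, "voltage": voltage, "current": current}

# Example frame: motor in reverse, regenerating
frame = struct.pack("<ihh", -850, 52, -12)
decoded = decode_tpdo(frame)
```

A driver supporting both modes would use something like this for the broadcast path and SDO reads for the polling path.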
Regarding the motor direction, it’s derived from whether the RPM value is positive or not.
Regarding the motor temperature, it’s fetched from the controller, no need for external sensors for those that do not have them.
The only issue with the motor temperature is that on Venus OS 3.6 the motor temperature gauges are commented out in the code; I guess they may make them available in a later release.
Ian, if the Sevcon controller emits the messages per the Oceanvolt repo, then it will display with a limited set of data. There is no real standard for PDO content in CANopen – just the format. I am in the process of defining a generic CANopen format that will provide more data, and I hope to use that as the basis for a new driver (in Python). I should have the format defined this week and will send it to Victron to see if they have any comments. The idea is to define a slow PDO send rate (2 per sec or so) so Python would work instead of C…
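A back-of-envelope check suggests that such a rate is negligible bus load, and that even the polling scheme discussed above stays well under 1% utilisation. The figures below are assumptions of mine (a 250 kbit/s bus and roughly 130 bits per 8-byte standard frame including worst-case bit stuffing), not numbers from the thread:

```python
# Back-of-envelope CAN bus-load estimate.
# Assumptions: 250 kbit/s CANopen bus; ~130 bits per 11-bit-ID frame
# with 8 data bytes, including worst-case bit stuffing.
BITRATE = 250_000      # bits/s
BITS_PER_FRAME = 130   # approximate, worst case

def bus_load(frames_per_second: float) -> float:
    """Fraction of bus bandwidth consumed by the given frame rate."""
    return frames_per_second * BITS_PER_FRAME / BITRATE

slow_pdo = bus_load(2)           # the proposed 2 PDOs/s
driver_poll = bus_load(2 * 7)    # 5 reads + 2 writes, each SDO = request + response
```

Both figures come out around a tenth of a percent, so at these rates bus load should not be the deciding factor between the approaches.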