


We recently produced a YouTube video to explain the NFS solution for the community.

Hi, thanks! Yes, I tried out the NFS client pattern shown in the VolkovLabs/balena-nfs repo, and it's a reasonable, if heavy, pattern for containers you create yourself, where you have control of the base OS. It's very awkward, though, for third-party containers based on different package managers.
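The package-manager mismatch is concrete at the Dockerfile level: the same "install an NFS client" step has to be rewritten for each third-party image's base OS. A hedged sketch, where the image names are placeholders and not from this thread:

```dockerfile
# Placeholder images: each third-party container needs its own variant
# of the same NFS-client install step in a derived Dockerfile.

# Debian/Ubuntu-based third-party image:
FROM vendor-a/debian-based-app
RUN apt-get update && apt-get install -y --no-install-recommends nfs-common \
    && rm -rf /var/lib/apt/lists/*

# An Alpine-based third-party image needs a different package manager
# and a different package name:
# FROM vendor-b/alpine-based-app
# RUN apk add --no-cache nfs-utils
```

Multiply this by every third-party image in a fleet, plus whatever entrypoint or init system each vendor ships, and the sidecar approach quickly becomes hard to maintain.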

I haven't yet got around to testing this theoretical option, so as to be able to start bringing all the other devs in on a brainstorm around it. If you have ideas on how this functionality could best be added, based on your experience and use cases, I am all for hearing them. And if you're interested in testing it out, here is a PR with a bind-mount label idea: Add bind-mount label by maggie0002. I haven't tested it myself yet, but I think you would need to include the :shared addition to your mounted volume in the docker-compose file. I like the idea of using the bind mount. As always, a shoutout for the solution and for elevating the problem on our forums. I must emphasise, though, that there is no time frame on this, or on whether the bind-mount label solution will in fact be implemented; but anyone who has seen me around the forum knows that I am all about hearing from those with the problem, and I would love more people in the conversation.

The only issue I see so far is that users can overflow resinData with their files, which will break the device's functionality. We recently started using the generic_x86_64 image, which partitions the whole drive for the system volumes, with the rest going to resinData. Unfortunately, there is no way to limit disk space during installation, so we had to repartition the resinData volume to use NFS. In this case, using a 2nd drive is the preferred way to provide NFS storage, and it will have better performance overall. We also recently switched to async mode, which is controlled by an environment variable; it gives a performance boost but can lead to data loss. We accept the risk, and will use sync for specific use cases. NFS4 is another great feature, which was added recently. As an additional benefit, it's possible to mount NFS storage from another device located on the same network when building a cluster, and we are exploring this option now. Overall, after 2 months, we are happy with the solution. We will share our clustering ideas in future articles.
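On a Docker host whose kernel has NFS client support, the "NFS storage from another device on the same network" setup described above can be expressed as a named volume using Docker's local volume driver. A minimal sketch, assuming a hypothetical second device at 192.168.1.50 exporting /exports/shared over NFSv4 (the address, export path, and service names are illustrative, not from the thread):

```yaml
# Sketch only: requires kernel NFS client support on the host.
version: "2"

volumes:
  shared-nfs:
    driver: local
    driver_opts:
      type: nfs
      # "async" trades durability for speed, mirroring the env-var switch
      # described above; use "sync" where data loss is unacceptable.
      o: "addr=192.168.1.50,nfsvers=4,async"
      device: ":/exports/shared"

services:
  app:
    image: alpine  # placeholder for a real service image
    volumes:
      - shared-nfs:/data
```

On balenaOS this exact form does not work out of the box, which is the subject of the rest of the thread, but it shows the target the NFS-server-container workarounds are approximating.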
I've looked at the NFS patterns in VolkovLabs/balena-nfs, and while that's a great pattern for my personal containers, I'm having trouble abstracting it to a 3rd-party container pattern. Normally, I could mount NFS on the Docker host and provide those mounts as volumes to the Docker containers; I'm using a variety of 3rd-party containers, and that just works. I've looked at doing this on balenaOS, but it doesn't have NFS support in the kernel. I tried out the balena-nfs pattern using the 3rd-party containers as a starting point, but there are different base OSes involved, so I've got a mess of different package managers to sort out in my sidecar container, and I'm interfering with the orchestration layer the 3rd-party containers provide, so it's a lot. So, unless I'm missing something, it seems like the right step forward is to build my own balenaOS kernel. Is there an easier or better way to do this?

Generic x86_64 (legacy MBR) - balenaOS 2.98.33 - development

uname -a
Linux 52e711b 5.14.21-yocto-standard #1 SMP PREEMPT Mon Nov 29 01:17: x86_64 x86_64 x86_64

cat /proc/filesystems | grep nfs

I understand the friction, and have been giving some thought to how best to overcome the need for the NFS mounts in the first place. It is more difficult than may be expected, as any solution entails quite a significant change to the ways of working. What seems most likely is including a label in containers that allows a bind mount to an empty folder; from that bind mount, we should be able to pass mounts between containers. Adding labels, though, isn't something we take lightly: once they are added, they are hard for us to change direction on later, and mount naming is also a challenge.
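The kernel-support check quoted in the question can be reproduced on any Linux host. On a stock desktop kernel the grep typically prints lines such as "nodev nfs4" once the NFS modules are registered; on balenaOS it prints nothing, which is the gap being described. A small sketch with a fallback message added for clarity:

```shell
# Look for NFS filesystem support registered in the running kernel.
# /proc/filesystems only lists filesystems the kernel currently knows about,
# so a loadable nfs module may not appear until it has been loaded.
grep nfs /proc/filesystems || echo "no NFS support in this kernel"
```

If the grep comes back empty, mount -t nfs will fail regardless of what userspace tooling is installed in the containers, which is why the thread turns to NFS-server containers and custom kernels.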
