external NFS storage support #94
@cloustone there's work going on to clean up that tight integration, and we should have something out relatively soon. The thought process is that you create a PVC, load all the training data onto that PVC, and in the manifest file provide a PVC reference id/name, similar to the way you provide S3 details in the manifest; the learner can then mount that PVC rather than the S3 storage and use the data.
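As a sketch of that idea, the manifest's `data_stores` section (which today carries S3/object-store connection details) might instead carry a PVC reference. The `type: pvc` and `pvc_name` fields below are purely illustrative and are not an existing FfDL manifest API:

```yaml
# Hypothetical manifest fragment: reference a pre-loaded PVC instead of S3.
data_stores:
  - id: training-data
    type: pvc                      # illustrative only; not a current FfDL type
    training_data:
      container: mnist_data        # directory inside the volume (placeholder)
    connection:
      pvc_name: nfs-training-pvc   # hypothetical field: name of the PVC to mount
```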
@atinsood thanks for your reply. I just used dynamic external NFS storage to deploy model training. It seems to work.
@cloustone would love to get more details about how you did this. We would love to include a PR with a doc stating how to leverage NFS, with the steps you defined above: "The following steps are our adaptations for NFS. Deploy an external NFS server outside of Kubernetes."
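For reference, static NFS provisioning against an external NFS server usually looks like the following pair of Kubernetes objects (a minimal sketch; the server address, export path, sizes, and names are all placeholders):

```yaml
# PersistentVolume backed by an NFS server running outside the cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-training-pv          # placeholder name
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany              # NFS supports simultaneous mounts
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.5             # placeholder: external NFS server address
    path: /exports/training      # placeholder: exported directory
---
# Claim that training pods (or the FfDL learner) can reference by name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-training-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  volumeName: nfs-training-pv    # bind statically to the PV above
```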
@cloustone thinking more about your initial suggestion, you could also have a ConfigMap with a list of PVCs that you have created beforehand, mount it as a volume in the LCM, and then the LCM can just pick one PVC and allocate it to a training job (basically change https://github.com/IBM/FfDL/blob/master/lcm/service/lcm/learner_deployment_helpers.go#L493 and add the volume mount). I wonder if you went this route or a different one.
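The "pick one PVC from a ConfigMap" step could be as simple as the following self-contained sketch. It assumes a hypothetical file format (one PVC name per line, as a ConfigMap key mounted into the LCM pod would appear); the names and allocation tracking are illustrative, not FfDL code:

```go
package main

import (
	"fmt"
	"strings"
)

// pickPVC chooses one PVC name from a newline-separated list, skipping any
// that are already allocated. The list would come from a ConfigMap mounted
// as a volume in the LCM pod (hypothetical format: one name per line).
func pickPVC(list string, inUse map[string]bool) (string, bool) {
	for _, name := range strings.Split(list, "\n") {
		name = strings.TrimSpace(name)
		if name == "" || inUse[name] {
			continue
		}
		return name, true
	}
	return "", false
}

func main() {
	// Contents of the ConfigMap key, e.g. mounted at /etc/pvc-pool/pvcs.
	list := "static-volume-1\nstatic-volume-2\nstatic-volume-3"
	inUse := map[string]bool{"static-volume-1": true} // already allocated
	if name, ok := pickPVC(list, inUse); ok {
		fmt.Println("allocating", name) // prints "allocating static-volume-2"
	}
}
```

The chosen name would then be plugged into the learner pod spec as a `persistentVolumeClaim` volume source at the point in `learner_deployment_helpers.go` linked above.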
@atinsood Yes, the method is almost the same as what you suggested: a ConfigMap with a list of PVCs created beforehand, mounted as a volume in the LCM, and the LCM just picks one PVC and allocates it to the training job.
@cloustone another interesting thing you can try is this: https://ai.intel.com/kubernetes-volume-controller-kvc-data-management-tailored-for-machine-learning-workloads-in-kubernetes/ https://github.com/IntelAI/vck We have been looking into this as well. It can help bring data down to the nodes running the GPUs, and you'd end up accessing the data as you would access local data on those machines. This is an interesting approach and should work well if you don't need isolation of training data for every training job.
@atinsood Thanks, we will try this method according to our requirements.
@cloustone Can you please tell me in detail how to use NFS? I also want to use NFS but I do not know how. Which files did you change, and how? Thank you very much.
@atinsood Have you added this method to FfDL? Or do you have documentation about how to use this method in FfDL? Thank you very much.
@atinsood @Eric-Zhang1990 No, we do not currently have vck integration in FfDL. @cloustone said:
Which I think just implies a host mount, which I think is enabled in the current FfDL. So you could give that a try. @cloustone said:
We do have an internal PR that enables use of generic PVCs for training and result volumes. I don't think we need a ConfigMap? The idea is that PVC allocation is done by some other process, and then we just point to the training data and result data volumes by name, in the manifest. Perhaps we can go ahead and externalize this in the next few days, at least on a branch, and you could give it a try. Let me see what I can do.
@sboagibm Thank you for your kind reply. You say "then we just point to the training data and result data volumes by name, in the manifest." Can you give me an example of a manifest file using a local path on the host? I found a file at https://github.com/IBM/FfDL/blob/vck-patch/etc/examples/vck-integration.md; is that manifest file what you mean? If so, can I add multiple learners in it?
@cloustone @atinsood @sboagibm How do we use NFS to store data and start training jobs? Can you provide more detailed docs?
Hello, @FfDL
We deploy FfDL in a private environment in which S3 and Swift are not available; only NFS external storage is supported. For the model definition file, we can use localstack in the current dev environment; for the training data, we wish to use NFS.
The following steps are our adaptions for NFS.
We are confirming the above method; however, a new question has already come up.
If two models are submitted and both use NFS static external storage at the same mount point, is that a problem?
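On the shared-mount-point question (not an authoritative answer, just how Kubernetes NFS volumes generally behave): a PV binds to exactly one PVC, but a `ReadWriteMany` or `ReadOnlyMany` claim can be mounted by many pods at once, so two training jobs can share a single claim; alternatively, two PVs can point at the same NFS export so each job gets its own claim. A sketch of the latter, with placeholder names and paths:

```yaml
# Two PersistentVolumes backed by the same external NFS export, so two
# independent PVCs (one per training job) can bind without conflict.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-train-pv-a
spec:
  capacity: {storage: 10Gi}
  accessModes: [ReadOnlyMany]   # both jobs only read the training data
  nfs: {server: 10.0.0.5, path: /exports/training}   # placeholders
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-train-pv-b
spec:
  capacity: {storage: 10Gi}
  accessModes: [ReadOnlyMany]
  nfs: {server: 10.0.0.5, path: /exports/training}
```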
Could you please confirm the above method, answer the question, or provide us with the right solution?
Thanks