NFS volume created with nobody:nobody ownership. How to configure correctly?

I have a StorageClass that dynamically provisions NFS volumes. When I create a claim, the volume is created as nobody:nobody on the remote NFS server. From reading around I assume this is related to the claim being forwarded to the NFS server as the root user. What is the correct way to set up the NFS export or the claim?

It depends on your cloud provider.
On Azure, you could create a volumeClaimTemplate in which you specify the storageClassName, and then set up the claim the way you expect.
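A minimal sketch of that idea, assuming a StatefulSet with placeholder names (image, storage class, and size are illustrative, not taken from this thread):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db                    # hypothetical name
spec:
  serviceName: example-db
  replicas: 1
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
        - name: db
          image: postgres:15          # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:               # one PVC per replica, bound via the named StorageClass
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: azurefile-csi   # placeholder; use whatever class your provider exposes
        resources:
          requests:
            storage: 10Gi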

NFS mount setup example:

mount -t nfs -o tcp,rw,async,vers=3 NFS-Server:ServerPath LocalPath

Example PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: customdb-writerpool-volume-rack2-v0-1
  namespace: customdb-system
  labels:
    app.kubernetes.io/component: customdb-installation
    app.kubernetes.io/instance: customdb
    app.kubernetes.io/managed-by: customdb-operator
    app.kubernetes.io/name: customdb-installation
    app.kubernetes.io/namespace: customdb-system
    app.kubernetes.io/part-of: customdb-installation
    app.kubernetes.io/version: 0.0.1
    customdb.datastax.com/rack: rack2
    customdb.datastax.com/writerpool-name: rack2
    customdb.datastax.com/writerpool-revision: '1'
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: local-path
  volumeMode: Filesystem

There are no credentials involved here, though. The PV is created on the NFS server fine; it is just a matter of permissions on the NFS side that turn it into nobody:nobody, probably as a result of the root user being used during provisioning.
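For reference, that squashing behaviour is usually configured on the NFS server in /etc/exports; a sketch of the two common ways to handle it (share path, client range, and uid/gid are placeholders, not taken from this thread):

# /etc/exports on the NFS server
/srv/nfs/share  10.0.0.0/16(rw,sync,no_subtree_check,no_root_squash)
# or keep squashing but map everyone to a uid/gid the workload runs as:
# /srv/nfs/share  10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=1001,anongid=1001)

exportfs -ra    # reload the export table after editing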

I believe you could specify credentials through the NFS mount options, something like the following:

mount -t nfs -o tcp,rw,async,vers=3,username=svc_account,password=password NFS-Server:ServerPath LocalPath

Wait, I think we misunderstand each other.

• I installed csi-nfs-driver as the provisioner.

Created a StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: harbor-database
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.254.254.6
  share: /harbor_repository/database
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
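If the goal is to make the directory that the provisioner creates usable by the workload, csi-nfs-driver also exposes a mountPermissions StorageClass parameter (it chmods the provisioned subdirectory; ownership is still decided by the export). A sketch assuming your driver version supports it, with an illustrative value:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: harbor-database
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.254.254.6
  share: /harbor_repository/database
  mountPermissions: "0770"    # assumed parameter: mode applied to the provisioned subdirectory
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1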

The PVC has the following fields you could play with:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#persistentvolumeclaimspec-v1-core

Created a claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: harbor-repo
spec:
  storageClassName: harbor-database
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi

The PV that is created on the NFS server ends up with nobody:nobody ownership.

OK, I will try to hardcode volumeMode to Filesystem.

Does the Pod fail to read/write?

The Pod fails to create the relevant pgdata directory under the mount, and I suspect it is due to the permissions of the owning directory, since when I changed that manually it was able to write what it needed.
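One common workaround for exactly this symptom is an initContainer that runs as root and chowns the mount before the database starts. A minimal sketch, assuming the claim above and that the database runs as uid/gid 999; the Pod name and images are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pgdata-ownership-example     # hypothetical Pod, just to show the pattern
  namespace: harbor-repo
spec:
  volumes:
    - name: database-data
      persistentVolumeClaim:
        claimName: pvc               # the claim created above
  initContainers:
    - name: fix-ownership
      image: busybox:1.36
      command: ["sh", "-c", "chown -R 999:999 /var/lib/postgresql/data"]
      securityContext:
        runAsUser: 0                 # must be root to chown on the NFS mount
      volumeMounts:
        - name: database-data
          mountPath: /var/lib/postgresql/data
  containers:
    - name: database
      image: postgres:15             # placeholder image; 999 is its default postgres uid
      env:
        - name: POSTGRES_PASSWORD
          value: example-password    # placeholder only
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
      volumeMounts:
        - name: database-data
          mountPath: /var/lib/postgresql/data

Note that this only helps if the export does not squash root: with root_squash in place the chown itself is refused, which is why the export-side options are usually the first thing to check.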

Check the StorageClass dynamic provisioning mechanism on the platform.
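For example, you could inspect what was actually provisioned and with which parameters (object names as used above):

kubectl get storageclass harbor-database -o yaml     # provisioner and parameters in effect
kubectl get pvc pvc -n harbor-repo                   # is the claim Bound, and to which PV
kubectl get pv <pv-name> -o yaml                     # volumeAttributes / mountOptions on the provisioned PV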