### Mirroring

The filesystem can also be mirrored to a peer cluster. The sample below enables mirroring, references a Kubernetes Secret containing the peer token for the secondary cluster, and configures snapshot schedules and retention policies:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPools:
    - failureDomain: host
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
  mirroring:
    enabled: true
    # list of Kubernetes Secrets containing the peer token
    # for more details see the CephFS mirroring documentation
    peers:
      secretNames:
        - secondary-cluster-peer
    # specify the schedule(s) on which snapshots should be taken
    # see the Ceph documentation for the official syntax
    snapshotSchedules:
      - path: /
        interval: 24h # daily snapshots
        startTime: 11:55
    # manage retention policies
    # see the Ceph documentation for the duration syntax
    snapshotRetention:
      - path: /
        duration: "h 24"
```
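The peers.secretNames entry refers to a Kubernetes Secret that must already exist in the cluster and hold the bootstrap token exported from the secondary cluster. Below is a minimal sketch of such a Secret, assuming the token is stored under a `token` key; the exact key and token format expected by Rook should be confirmed against its mirroring documentation.

```yaml
apiVersion: v1
kind: Secret
metadata:
  # the name must match the entry listed under peers.secretNames
  name: secondary-cluster-peer
  namespace: rook-ceph
stringData:
  # assumed key name holding the bootstrap token exported from the
  # secondary (peer) cluster; replace the placeholder with the real value
  token: <peer-bootstrap-token>
```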
### Erasure Coded

Erasure coded pools require the OSDs to use bluestore for the configured storeType. Additionally, erasure coded pools can only be used with dataPools; the metadataPool must use a replicated pool.

NOTE: This sample requires at least 3 bluestore OSDs, with each OSD located on a different node.

The OSDs must be located on different nodes, because the failureDomain will be set to host by default, and the erasureCoded chunk settings require at least 3 different OSDs (2 dataChunks + 1 codingChunks).
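A minimal sketch of a filesystem matching this description, assuming a replicated metadataPool and a single erasure-coded data pool with the chunk counts above (the name myfs-ec is illustrative, and some Rook releases also expect a replicated default data pool listed before the erasure-coded one):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  # illustrative name for the erasure-coded filesystem
  name: myfs-ec
  namespace: rook-ceph
spec:
  # the metadata pool must be replicated, never erasure coded
  metadataPool:
    replicated:
      size: 3
  dataPools:
    # 2 data chunks + 1 coding chunk need at least 3 OSDs, each on a
    # different host when the failureDomain defaults to host
    - erasureCoded:
        dataChunks: 2
        codingChunks: 1
  metadataServer:
    activeCount: 1
    activeStandby: true
```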
### Replicated

The following sample defines a filesystem whose metadata and data pools are both replicated across three hosts. The annotations, placement, and resources sections of the metadataServer are left commented out to show where MDS pod metadata, scheduling constraints, and resource limits can be configured:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPools:
    - failureDomain: host
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    # A key/value list of annotations
    annotations:
    #  key: value
    placement:
    #  nodeAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      nodeSelectorTerms:
    #      - matchExpressions:
    #        - key: role
    #          operator: In
    #          values:
    #          - mds-node
    #  tolerations:
    #  - key: mds-node
    #    operator: Exists
    #  podAffinity:
    #  podAntiAffinity:
    #  topologySpreadConstraints:
    resources:
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
```

(These definitions can also be found in the filesystem.yaml file.)
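As a sketch of how those commented sections might be filled in, the example below assumes the MDS nodes carry a role=mds-node label and, optionally, an mds-node taint (both names are illustrative); it pins the MDS pods to those nodes and sets explicit resource requests and limits:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPools:
    - failureDomain: host
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
      # schedule MDS pods only on nodes labeled role=mds-node
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: role
                  operator: In
                  values:
                    - mds-node
      # tolerate a taint used to reserve those nodes for MDS pods
      tolerations:
        - key: mds-node
          operator: Exists
    resources:
      limits:
        cpu: "500m"
        memory: "1024Mi"
      requests:
        cpu: "500m"
        memory: "1024Mi"
```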