I'm using Ansible v2.10.4 to configure RHEL 8 VMs on Azure IaaS, and I'm trying to use the parted module to partition a new data disk:
- name: Partition the data disk for app use.
  parted:
    device: /dev/sdc
    number: 1
    state: present
    align: optimal
    label: msdos
    part_start: 0%
    part_end: 100%
    part_type: primary
This generally works about half the time; when it doesn't, it fails with this message:
FAILED! => {"changed": false, "err": "Error: Partition(s) on /dev/sdc are being used.
", "msg": "Error while running parted script: /sbin/parted -s -m -a optimal /dev/sdc -- unit KiB mklabel msdos mkpart primary 0% 100%", "out": "", "rc": 1}
While I'm not sure why a newly created, unpartitioned/unformatted disk would be "in use", I added this task before it to remove any existing partitions:
- name: First ensure the data disk is "de-partitioned" (so next call to parted doesn't fail).
  parted:
    device: /dev/sdc
    number: 1
    state: absent
Sometimes this works, but other times, it also fails with a slightly different (but effectively the same) message:
FAILED! => {"changed": false, "err": "Warning: Partition /dev/sdc1 is being used. Are you sure you want to continue?
", "msg": "Error while running parted script: /sbin/parted -s -m -a optimal /dev/sdc -- rm 1", "out": "", "rc": 1}
Is there any way with Ansible to test whether a disk is "in use" before attempting to do anything with it?
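To make the question concrete, here is the kind of probe I have in mind, roughly: wait until the kernel reports no holders for the partition, then show what lsblk sees. The /sys path, the task names, and the variable names are my own guesses, not anything the parted module itself documents:

- name: Wait until nothing in the kernel holds /dev/sdc1 (hypothetical probe).
  ansible.builtin.command: ls /sys/block/sdc/sdc1/holders
  register: sdc1_holders   # hypothetical variable name
  changed_when: false
  failed_when: false       # the directory may not exist yet; that's fine
  retries: 10              # arbitrary
  delay: 5                 # arbitrary, in seconds
  until: sdc1_holders.stdout | trim == ''

- name: Show what lsblk reports for the disk, for debugging.
  ansible.builtin.command: lsblk --output NAME,TYPE,MOUNTPOINT /dev/sdc
  register: sdc_lsblk
  changed_when: false

- name: Print the lsblk output.
  ansible.builtin.debug:
    var: sdc_lsblk.stdout_lines

Whether an empty holders directory (or clean lsblk output) is actually a reliable signal that parted will succeed is exactly what I'm unsure about.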
question from: https://stackoverflow.com/questions/65850971/ansible-parted-command-fails-when-new-disk-being-used