
aws cdk - Unable to attach Volumes to EC2 instance, grantAttachVolumeByResourceTag not clear on usage

I have a bastion EC2 instance that I am trying to mount a Volume to (I want the Volume to persist even when the Bastion is replaced). I have read the docs, but their examples tend not to work as written. I have created both the Bastion and the Volume, but I cannot get the Volume to attach to the EC2 instance.

This is the code I am using currently (note: part of a larger construct I am working on):

import { Size, Stack } from '@aws-cdk/core';
import {
  AmazonLinuxGeneration, BlockDeviceVolume, EbsDeviceVolumeType,
  Instance, MachineImage, UserData, Volume,
} from '@aws-cdk/aws-ec2';
// (with CDK v2, import from 'aws-cdk-lib' and 'aws-cdk-lib/aws-ec2' instead)

// Bastion
this.bastion = new Instance(this, 'Bastion', {
  instanceName: 'BASTION-' + this._vpcName,
  vpc: this.vpc,
  vpcSubnets: {
    subnets: [_bastionSubnet],
    availabilityZones:[
      Stack.of(this).availabilityZones[0] // Force to same as Volume
    ]
  },
  machineImage: MachineImage.latestAmazonLinux({
    generation: AmazonLinuxGeneration.AMAZON_LINUX_2,
  }),
  instanceType: _instanceType,
  role: bastionRole,
  userData: UserData.custom(bootscript),
  userDataCausesReplacement: true,
  securityGroup: this.securityGroup_Bastion,
  keyName: this._props.bastion.keyName,
  blockDevices: [
    {
      deviceName: '/dev/xvda',
      volume: BlockDeviceVolume.ebs(_rootVolumeSize, {
        volumeType: EbsDeviceVolumeType.GP2,
      }),
    },
  ],
});

const _targetDevice = '/dev/xvdz';

// Create Volume
this.volumeBackups = new Volume(this, 'backupsVolume', {
  availabilityZone: Stack.of(this).availabilityZones[0], // Force to same as Bastion
  size: Size.gibibytes(200),
  encrypted: true,
  volumeName: _targetDevice
});

// Add attach access
this.volumeBackups.grantAttachVolumeByResourceTag(this.bastion.grantPrincipal, [this.bastion]);

So far, what I am seeing is that both the Bastion and the Volume get created. They both carry the expected VolumeGrantAttach-<suffix> tag, and the tag values match. However, in the AWS console I do not see the Volume under the Storage tab for the instance, and when I log into the instance and run lsblk, the volume is not available (just my root device):

$ sudo lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1       259:0    0  30G  0 disk 
├─nvme0n1p1   259:1    0  30G  0 part /
└─nvme0n1p128 259:2    0   1M  0 part 

At this point I thought I was missing a device mapping. I tried adding one to the bastion's blockDevices prop, but that does not seem to be the right place, as the types do not fit together. I also tried adding the mount to the first-run script, but the volume is never even visible to the Instance, so there is nothing to mount.

I am still not able to get the Volume to mount. I really have no idea what else to do.

question from: https://stackoverflow.com/questions/65922864/unable-to-attach-volumes-to-ec2-instance-grantattachvolumebyresourcetag-not-cle


1 Reply


Ok, so this is pretty straightforward, but it is not easy to find out how to do it properly. The docs example does not work (it threw an error that I cannot recall right now).

You need to use the AWS CLI to attach the Volume from within the instance. Given that I wanted to automate this task, this was a challenge, but I was able to solve it.

I am using TypeScript in my own construct, so I will only show the commands I used, not the actual construct code that emits them.

const _volumeId = 'vol-somerandomid';
const _targetDevice = '/dev/xvdz';

// I actually added a sleep of 2 minutes here to ensure the volume has been detached from any previous bastion instance. Otherwise you can, and will, get a `VolumeInUse` error

// Attach via the AWS CLI
aws --region ${Stack.of(this).region} ec2 attach-volume --volume-id ${_volumeId} --instance-id $(cat /var/lib/cloud/data/instance-id) --device ${_targetDevice}

// Verify it's attached
while ! test -e ${_targetDevice}; do sleep 1; done
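
For context, this is roughly how I would wire those commands into the construct so the real volume ID and region get interpolated. Treat it as a minimal sketch: it assumes aws-cdk-lib (CDK v2) import paths, user data that supports addCommands (e.g. UserData.forLinux(); with UserData.custom() you may have to bake the lines into the boot script instead), and the helper name addBackupVolumeAttachCommands is mine, not part of the CDK API.

import { Stack } from 'aws-cdk-lib';
import { Instance, IVolume } from 'aws-cdk-lib/aws-ec2';

// Hypothetical helper: appends the attach-and-wait commands to an instance's
// user data, interpolating the volume ID and the stack's region.
function addBackupVolumeAttachCommands(instance: Instance, volume: IVolume, targetDevice: string): void {
  const region = Stack.of(instance).region;
  instance.userData.addCommands(
    // Give a replaced bastion time to release the volume first,
    // otherwise attach-volume can fail with a VolumeInUse error.
    'sleep 120',
    // Attach the volume to *this* instance, using the cloud-init instance id.
    `aws --region ${region} ec2 attach-volume --volume-id ${volume.volumeId} --instance-id $(cat /var/lib/cloud/data/instance-id) --device ${targetDevice}`,
    // Block until the device node shows up.
    `while ! test -e ${targetDevice}; do sleep 1; done`,
  );
}

// Usage inside the construct would look something like:
// addBackupVolumeAttachCommands(this.bastion, this.volumeBackups, '/dev/xvdz');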

Once it's attached, you will still need to format the volume the first time it is created. The problem is: how do I make the first run handle this? This is what I came up with:

First, add to fstab:

const _backupVolumeMountLocation = '/home/ec2-user/backups';

// Must create the mount point first
mkdir -p ${_backupVolumeMountLocation}
 
// Add to fstab
echo "'+`${_targetDevice} ${_backupVolumeMountLocation} ext4 defaults,nofail 0 0`+'" >> /etc/fstab

This is the tricky part: the command below attempts to mount the volume and, if the mount fails, formats it first and then mounts it.

sleep 1 ; mount -a && echo "Backups Volume already formatted and mounted" || (echo "Backups Volume is not formatted" && mkfs -t ext4 ${_targetDevice} && sleep 1 && mount -a && (chown -R ec2-user:ec2-user ${_backupVolumeMountLocation} && echo "Formatted backups volume") || echo "Failed to format and mount the backups volume" )

The above command breaks down to the following logic (a more readable sketch of the same script follows the list):

  • Sleep for 1 second - this ensures /etc/fstab is saved before we attempt to mount
  • Mount the Backups Volume
    • If the mount did not fail:
      • This will echo Backups Volume already formatted and mounted
    • If the mount failed:
      • This will echo Backups Volume is not formatted
      • It will format the volume
      • It will mount the volume
        • If the mount did not fail:
          • It will set ownership for the newly formatted volume
          • This will echo Formatted backups volume
        • If the mount failed:
          • This will echo Failed to format and mount the backups volume.
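
If the one-liner is hard to follow, the same logic can also be emitted as separate user-data lines from the construct. A sketch under the same assumptions as above (aws-cdk-lib paths, addCommands available, helper name is mine); it should behave the same, but I have not run it in this exact form:

import { Instance } from 'aws-cdk-lib/aws-ec2';

// Hypothetical helper: mounts the backups volume via fstab, formatting it
// first when the initial mount fails (i.e. on the very first boot with a
// brand-new, unformatted volume).
function addBackupMountOrFormat(instance: Instance, targetDevice: string, mountLocation: string): void {
  instance.userData.addCommands(
    'sleep 1', // let the fstab write settle before mounting
    'if mount -a; then',
    '  echo "Backups Volume already formatted and mounted"',
    'else',
    '  echo "Backups Volume is not formatted"',
    `  if mkfs -t ext4 ${targetDevice} && sleep 1 && mount -a; then`,
    `    chown -R ec2-user:ec2-user ${mountLocation}`,
    '    echo "Formatted backups volume"',
    '  else',
    '    echo "Failed to format and mount the backups volume"',
    '  fi',
    'fi',
  );
}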

Hopefully this will help others in the future. It was not at all clear to me how to do this properly.

