I've experienced strange behavior in the past when modifying EC2 and Elastic Beanstalk (EBS) environments, so today, while moving my EC2 instance from t2 to t3a, I carefully recorded the sequence of events and the resulting outage.
What could be causing this?
- EC2 (t2) and EBS healthy - Spring Boot app running, API reachable
- EC2 security group includes inbound rules allowing HTTP and HTTPS from all sources (ports 80 and 443) - verified; see the check sketched after this list
- In the EBS console > Configuration > Capacity, selected t3a from the Instance Type dropdown
- Clicked Apply > confirmed the dialog stating charges may be incurred > the environment update began
- Once complete - EC2 and EBS show healthy, app logs show no errors from Spring
- App unreachable :( - API requests time out, access logs show nothing coming through; tried both the EC2 public IP and the EBS app subdomain
- In EBS, redeploy last WAR, app still not reachable
- From EC2 dashboard > Security Groups, change the EC2 inbound rule to allow ALL TRAFFIC (this toggle and the later revert are sketched below)
- App now responds :)
- Revert inbound rule to previous HTTP + HTTPS setting, app still responds
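For reference, this is roughly how I verified the security group rules and reproduced the timeout from the steps above. It's a minimal sketch using boto3 and requests, not exactly what I ran; the security group ID and app URL are placeholders for my real values.

```python
# Minimal sketch of the checks above. Assumes boto3/requests are installed and
# AWS credentials + region are configured. SG_ID and APP_URL are placeholders.
import boto3
import requests

SG_ID = "sg-0123456789abcdef0"   # hypothetical security group ID
APP_URL = "http://my-env.example.elasticbeanstalk.com/api/health"  # hypothetical endpoint

# Confirm the inbound rules really allow HTTP (80) and HTTPS (443) from anywhere
ec2 = boto3.client("ec2")
sg = ec2.describe_security_groups(GroupIds=[SG_ID])["SecurityGroups"][0]
for perm in sg["IpPermissions"]:
    print(perm.get("IpProtocol"), perm.get("FromPort"), perm.get("ToPort"),
          [r["CidrIp"] for r in perm.get("IpRanges", [])])

# Reproduce the symptom: requests to the app simply time out after the upgrade
try:
    resp = requests.get(APP_URL, timeout=5)
    print("reachable:", resp.status_code)
except requests.exceptions.Timeout:
    print("timed out, which matches the outage I observed")
```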
I gave each of these steps time to complete.
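For completeness, the security group change that brought the app back (allow all traffic, then revert) is sketched below. I did this through the console; the boto3 calls here are just the scripted equivalent, using the same placeholder security group ID as above.

```python
# Sketch of the "allow ALL TRAFFIC, then revert" toggle. Assumes boto3 is
# configured; SG_ID is a placeholder. IpProtocol "-1" means all protocols/ports.
import boto3

SG_ID = "sg-0123456789abcdef0"   # hypothetical security group ID
ec2 = boto3.client("ec2")

allow_all = [{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]

# Temporarily open the group to all traffic (the step that made the app respond)
ec2.authorize_security_group_ingress(GroupId=SG_ID, IpPermissions=allow_all)

# Revert: remove the allow-all rule again, leaving the original HTTP/HTTPS rules.
# Oddly, the app kept responding after this.
ec2.revoke_security_group_ingress(GroupId=SG_ID, IpPermissions=allow_all)
```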
I've experienced similar behavior before when adjusting .ebextensions config, EC2 config, etc., but I've always assumed I fat-fingered something. Going into this change, I had a feeling something like this might happen, hence recording each step. The result: I don't think I did anything wrong this time, and this looks like an AWS defect.
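In case it matters, the console change in Configuration > Capacity should be equivalent to setting the instance type through an Elastic Beanstalk option setting, which is the kind of thing I'd otherwise script or put in .ebextensions. A hedged sketch of that route (environment name and target type are placeholders, not my real values):

```python
# Sketch of scripting the same t2 -> t3a change via EBS option settings instead
# of the console dropdown. Assumes boto3 is configured; names are placeholders.
import boto3

eb = boto3.client("elasticbeanstalk")
eb.update_environment(
    EnvironmentName="my-env",    # hypothetical environment name
    OptionSettings=[{
        "Namespace": "aws:autoscaling:launchconfiguration",
        "OptionName": "InstanceType",
        "Value": "t3a.micro",    # hypothetical target instance type
    }],
)
```

This kicks off the same environment update (instance replacement) that clicking Apply in the console does.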
Things like this have stumped me many times, but I figured I'd bring it here and see whether there's something I'm missing that would cause this behavior.
question from:
https://stackoverflow.com/questions/65895515/strange-behavior-outage-in-aws-beanstalk-after-t2-to-t3a-upgrade