Hey folks. I have a weird situation: many thousands of objects in an S3 bucket have ended up in Glacier and nobody on the team has a clue why. I looked through CloudTrail event history, filtering on Resource name = the bucket name. I can see results for lifecycle rules going back 90 days, but none of those rules have anything to do with Glacier. The developer whose code uses this bucket swears all the objects were in standard storage until recently. How do I find out what's going on here?
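For reference, this is roughly the query I've been running, as a boto3 sketch (bucket name is a placeholder). Note that CloudTrail event history only covers the last 90 days, and object-level calls like PutObject only show up if data event logging was enabled:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up management events that reference the bucket by name.
paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "ResourceName", "AttributeValue": "my-bucket"},  # placeholder
    ]
)
for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username"))
```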
Is it possible you’re using intelligent tiering?
Yeah, definitely sounds like intelligent tiering. Might be a blessing in disguise, as it means those objects likely weren't being accessed.
Thanks, both.
Not using intelligent tiering intentionally, no. I don't have access to the code ATM, but I'm pretty sure no storage class is being specified when uploading. Shouldn't it default to standard storage?
STANDARD is the default, yup! Have you also checked for lifecycle policies on the bucket, since intelligent tiering isn't enabled?
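If you want to rule both out from a script, here's a minimal boto3 sketch (bucket name is a placeholder). The archive tiers only kick in if an Intelligent-Tiering configuration on the bucket enables them:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder

# Any lifecycle rules with Glacier transitions?
try:
    lifecycle = s3.get_bucket_lifecycle_configuration(Bucket=bucket)
    for rule in lifecycle["Rules"]:
        print("lifecycle rule:", rule.get("ID"), rule.get("Transitions", []))
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
        print("no lifecycle rules on this bucket")
    else:
        raise

# Any Intelligent-Tiering configurations (these control the archive tiers)?
tiering = s3.list_bucket_intelligent_tiering_configurations(Bucket=bucket)
print(tiering.get("IntelligentTieringConfigurationList", []))
```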
Yup, no lifecycle rules that transition anything to Glacier. BTW, I'd looked at intelligent tiering a few months back and it didn't do Glacier back then, so thanks for the tip. Also, this bit from the docs makes me suspicious:
If the objects are accessed later, S3 Intelligent-Tiering moves the objects back to the Frequent Access tier.
What we're seeing is that GetObject calls are failing with an error along the lines of "this storage class doesn't support this operation".
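Roughly what the failing read looks like from our side, as a sketch (bucket/key are placeholders):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    obj = s3.get_object(Bucket="my-bucket", Key="some/key")  # placeholders
    print(obj["Body"].read()[:100])
except ClientError as err:
    # S3 returns an InvalidObjectState error for archived objects
    # that can't be read directly.
    print(err.response["Error"]["Code"], err.response["Error"]["Message"])
```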
I'm not 100% on this, but I believe S3 Intelligent-Tiering has its own archive tiers that are billed at Glacier rates while the objects technically stay in the separate INTELLIGENT_TIERING storage class, which is what may be happening here.
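You could confirm with HeadObject: for Intelligent-Tiering objects that have been archived, it reports an ArchiveStatus of ARCHIVE_ACCESS or DEEP_ARCHIVE_ACCESS while StorageClass stays INTELLIGENT_TIERING. A boto3 sketch with placeholder names; my understanding (worth double-checking) is that restoring an archived Intelligent-Tiering object takes an empty restore request with no Days element, unlike a true GLACIER restore:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "some/key"  # placeholders

head = s3.head_object(Bucket=bucket, Key=key)
print("StorageClass:", head.get("StorageClass"))    # stays INTELLIGENT_TIERING...
print("ArchiveStatus:", head.get("ArchiveStatus"))  # ...but shows ARCHIVE_ACCESS or DEEP_ARCHIVE_ACCESS once archived

if head.get("ArchiveStatus"):
    # Assumption: an empty RestoreRequest is what kicks off an Intelligent-Tiering
    # restore; once it completes, the object moves back to the Frequent Access tier.
    s3.restore_object(Bucket=bucket, Key=key, RestoreRequest={})
    print("restore initiated; poll head_object()['Restore'] for progress")
```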