What are the best practices for deploying and running scripts from a user-data script without baking them into the AMI?

Hi all, I’m wondering what the best practice is for deploying and running scripts from your user-data script. I have a couple of scripts that I currently bake into an AMI and then run from my user-data script, but I’m trying to move them out of the AMI. We use Terraform to create the launch template with the user-data script. The two options I see are:

  1. Using Terraform’s templatefile function (the older template_file is a data source, not a resource, and templatefile has largely superseded it) to render the scripts/files into the user-data script. Fine for smaller things, but this gets ugly when larger scripts/files are rendered in (see the first sketch after this list).
  2. Storing the files/scripts in S3 and pulling them down from there in the user-data script (second sketch below). That seems like the best option, but then those scripts and files aren’t kept in version control… unless I have a separate project to store and upload them.
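For option 1, a minimal sketch using the built-in templatefile function (the template file name, variable, and resource names here are made up for illustration):

```hcl
# Render a script template straight into the launch template's user data.
# File and variable names are hypothetical.
resource "aws_launch_template" "node" {
  name_prefix   = "k3s-node-"
  image_id      = var.ami_id
  instance_type = "t3.medium"

  # templatefile() is built into Terraform and supersedes the old
  # template_file data source; aws_launch_template expects base64.
  user_data = base64encode(templatefile("${path.module}/bootstrap.sh.tpl", {
    k3s_version = var.k3s_version
  }))
}
```

And for option 2, a sketch of the user-data script itself pulling everything down from S3 (bucket name, paths, and the entry-point script are placeholders; this assumes the instance profile allows s3:GetObject and the AMI has the AWS CLI installed):

```bash
#!/bin/bash
set -euo pipefail

# Hypothetical bucket/prefix; the instance role needs s3:GetObject on it.
BUCKET="my-bootstrap-bucket"
WORKDIR="/opt/bootstrap"

mkdir -p "$WORKDIR"
aws s3 cp "s3://${BUCKET}/scripts/" "$WORKDIR/" --recursive
chmod +x "$WORKDIR"/*.sh

# Run whatever entry point the bucket carries (name is made up).
"$WORKDIR/join-cluster.sh"
```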

What are the scripts doing?

I want to do as much as possible before I even start the instance, like having an image builder pipeline pre-install all of the needed tools and do any configuration that will survive AMI creation. Assuming I’m doing that, hopefully there’s not much left for my user-data script to do, but if I need to manage configuration over time, or there’s just a lot of config that has to happen after the instance starts, I’d use a configuration management tool like Ansible or Puppet in my startup script. I like Ansible better, and IME it’s more compatible with containers and cattle-not-pets management philosophies. YMMV.
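One common way to wire that up (a sketch, not necessarily what this poster does) is ansible-pull from the user-data script; the repo URL and playbook name below are placeholders, and Ansible itself is assumed to be pre-baked into the AMI by the image builder pipeline:

```bash
#!/bin/bash
set -euo pipefail

# Hypothetical repo/playbook; assumes the image builder pipeline already
# installed Ansible, so the instance only has to pull and apply config.
ansible-pull \
  --url https://github.com/example/node-config.git \
  --checkout main \
  local.yml
```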

In this instance it’s a node that’s joining a k3s cluster, so the user-data script runs the k3s install script and does a few other things that can’t happen before the instance launches. I settled on keeping those scripts and the k3s binary in an S3 bucket, with a separate Git repo to store them and push them to S3 when upgrading versions. It works fine, I just try to stay in line with best practices where I can.
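A rough sketch of how that join could look in user-data, assuming the bucket layout, server URL, and SSM parameter below (all hypothetical), and using the k3s install script’s documented INSTALL_K3S_SKIP_DOWNLOAD / K3S_URL / K3S_TOKEN variables:

```bash
#!/bin/bash
set -euo pipefail

# Hypothetical bucket mirroring the setup described above.
BUCKET="my-k3s-artifacts"

# Pull the pinned k3s binary and install script from S3 instead of the internet.
aws s3 cp "s3://${BUCKET}/k3s/k3s" /usr/local/bin/k3s
aws s3 cp "s3://${BUCKET}/k3s/install.sh" /tmp/install.sh
chmod +x /usr/local/bin/k3s /tmp/install.sh

# INSTALL_K3S_SKIP_DOWNLOAD tells the official install script to use the
# binary already on disk; setting K3S_URL makes it install as an agent.
# The join token is read from SSM Parameter Store rather than hardcoded.
INSTALL_K3S_SKIP_DOWNLOAD=true \
K3S_URL="https://k3s-server.example.internal:6443" \
K3S_TOKEN="$(aws ssm get-parameter --name /k3s/token --with-decryption \
  --query Parameter.Value --output text)" \
  /tmp/install.sh
```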