K8s - cluster-autoscaler overwriting changes on deploy

I’m doing some tests with cluster-autoscaler, and when I scale a deployment the change is immediately overwritten.
If TestApp has 10 pods and I scale it to 20, 10 new pods spin up and are immediately terminated.
If TestApp has 10 pods and I scale it to 5, 5 pods are terminated, and then 5 new pods immediately spin up, bringing the count back to 10.
Any idea what might be happening? Ultimately, my goal is to spin up enough new pods to push cluster-autoscaler to add new nodes, but this is preventing that from occurring.
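
For context, I’m scaling with a plain kubectl scale, roughly like this (the deployment name TestApp and the app=testapp label are placeholders for my actual setup):

```
# Scale the deployment up from 10 to 20 replicas
kubectl scale deployment testapp --replicas=20

# Watch the pods: the 10 new pods appear and are terminated almost immediately
kubectl get pods -l app=testapp --watch
```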

Most likely some other controller is reconciling the replica count back to its desired value, probably a HorizontalPodAutoscaler (HPA) targeting that deployment.
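
A quick way to confirm, and to work around it for your test (the HPA name testapp is illustrative):

```
# List HPAs and see if one targets your deployment
kubectl get hpa --all-namespaces

# Check its min/max replicas and current target to confirm it's resetting the count
kubectl describe hpa testapp

# For the test, either raise the HPA's minReplicas...
kubectl patch hpa testapp --patch '{"spec":{"minReplicas":20}}'

# ...or remove the HPA temporarily so your manual scaling sticks
kubectl delete hpa testapp
```

With the HPA out of the way (or its minReplicas raised), the extra pods should stay Pending long enough for cluster-autoscaler to add nodes.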

That appears to be the issue - thank you!