I am seeing an issue with scaling m6i.8xlarge instances using the cluster autoscaler in AWS. Other instance types, like m6i.large and m6i.xlarge, are being launched fine.
Below is my pod definition:
```
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: argo
  labels:
    app: my-app
spec:
  containers:
    - name: my-container
      image: nginx:latest
      resources:
        requests:
          memory: "120Gi"
          cpu: "30"
        limits:
          memory: "128Gi"
          cpu: "32"
  nodeSelector:
    eks.amazonaws.com/nodegroup: default-spot
  tolerations:
    - key: pipeline
      operator: Equal
      value: ""
      effect: NoSchedule
```
Autoscaler pod logs:
```
I0731 02:05:42.399310 1 scale_up.go:93] Pod my-pod can't be scheduled on eks-default-spot-26c51474-c6ce-4ced-a882-5bfbcaf57a06, predicate checking error: Insufficient cpu, Insufficient memory; predicateName=NodeResourcesFit; reasons: Insufficient cpu, Insufficient memory; debugInfo=
I0731 02:05:42.399329 1 scale_up.go:262] No pod can fit to eks-default-spot-26c51474-c6ce-4ced-a882-5bfbcaf57a06
```
Screenshot of the ASG:
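For what it's worth, the arithmetic behind the `NodeResourcesFit` predicate in the log can be sketched as pod requests vs. node allocatable. The m6i.8xlarge capacity (32 vCPU / 128 GiB) is from the AWS specs; the kubelet/system reservation figures below are rough assumptions for illustration only, so the real allocatable values should be checked with `kubectl describe node` on an actual node:

```python
# Sketch of the NodeResourcesFit check the autoscaler log refers to.
# Reservation values are ASSUMPTIONS -- verify against a real node.

GIB = 1024 ** 3

# m6i.8xlarge raw capacity
capacity_cpu_m = 32 * 1000          # millicores
capacity_mem = 128 * GIB            # bytes

# Assumed kube-reserved + system-reserved + eviction threshold
reserved_cpu_m = 150                # assumption
reserved_mem = 3 * GIB              # assumption

allocatable_cpu_m = capacity_cpu_m - reserved_cpu_m
allocatable_mem = capacity_mem - reserved_mem

# Requests from the pod manifest above
request_cpu_m = 30 * 1000
request_mem = 120 * GIB

fits = (request_cpu_m <= allocatable_cpu_m
        and request_mem <= allocatable_mem)
print(f"allocatable: {allocatable_cpu_m}m CPU, "
      f"{allocatable_mem / GIB:.1f}Gi memory")
print(f"pod fits on an empty m6i.8xlarge: {fits}")
```

Under these assumed reservations the pod should fit on an empty m6i.8xlarge, which makes me suspect the autoscaler is simulating against a node-group template with smaller resources (e.g. a mixed-instances ASG whose template lists a smaller type first) rather than a genuine capacity shortfall.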