I am having trouble with AWS Batch when running on EC2. I have the compute environment set to “optimal” for instance selection, but depending on the vCPU and memory I request, the jobs won’t start. If I choose lower values like 20 vCPUs and 307200 of memory, the job runs, but 63 vCPUs and 457764 of memory never starts. I am having trouble finding documentation on any kind of governing principle here. The compute environment has a max vCPUs of 360.
Is that 307200 in… MB? GB? What unit is that?
I don’t see anything about a memory specification in the compute environment’s instance configuration, so do you mean on the job itself? In that case it’s MiB, which puts 307200 at 300 GiB,
which means the other one, 457764, is about 447 GiB.
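A quick sanity check of those conversions, assuming the values really are the MiB figures from the job definition:

```python
# AWS Batch job definitions specify memory in MiB; convert to GiB for comparison.
for mib in (307200, 457764):
    gib = mib / 1024  # MiB -> GiB
    print(f"{mib} MiB = {gib:.0f} GiB")
# 307200 MiB = 300 GiB
# 457764 MiB = 447 GiB
```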
It appears there is only a SINGLE instance type in the r4/m4/c4 families that satisfies that: the r4.16xlarge, with 64 vCPUs and 488 GiB (“optimal” picks from the C4, M4, and R4 families per the docs above). And it’s not even guaranteed that AWS has that instance type in the region/AZ you are trying to run in, or that capacity isn’t starved out.
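To make that concrete, here is a rough fit check against the largest c4/m4/r4 sizes. The figures are the published vCPU/GiB numbers; the memory a container can actually get is somewhat lower once the OS and ECS agent take their share, so treat this as an optimistic sketch:

```python
# Largest sizes in the families that "optimal" draws from (published specs).
CANDIDATES = {
    "c4.8xlarge":  (36, 60),    # (vCPUs, memory in GiB)
    "m4.10xlarge": (40, 160),
    "m4.16xlarge": (64, 256),
    "r4.8xlarge":  (32, 244),
    "r4.16xlarge": (64, 488),
}

def fits(req_vcpus: int, req_mem_mib: int) -> list[str]:
    """Return the candidate types whose published specs cover the request."""
    req_gib = req_mem_mib / 1024
    return [name for name, (vcpus, mem_gib) in CANDIDATES.items()
            if vcpus >= req_vcpus and mem_gib >= req_gib]

print(fits(63, 457764))  # ['r4.16xlarge'] -- the only candidate for the big job
```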
It looks like that was it. I changed the compute environment to specify the r6i family, which has instances with more memory, and it’s picking up the jobs now. I thought “optimal” was designed to do exactly that, but I guess it’s limited in its instance choices. Thank you for the help! Much appreciated!
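For anyone who lands here later, a minimal boto3 sketch of a compute environment pinned to a specific family instead of “optimal”. The environment name, subnets, security group, and roles below are placeholders, not values from this thread:

```python
import boto3

batch = boto3.client("batch")

# Managed EC2 compute environment that only launches r6i instances.
batch.create_compute_environment(
    computeEnvironmentName="batch-r6i",           # placeholder name
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "EC2",
        "allocationStrategy": "BEST_FIT_PROGRESSIVE",
        "minvCpus": 0,
        "maxvCpus": 360,
        "instanceTypes": ["r6i"],                 # a family name is accepted here
        "subnets": ["subnet-xxxxxxxx"],           # placeholder
        "securityGroupIds": ["sg-xxxxxxxx"],      # placeholder
        "instanceRole": "ecsInstanceRole",        # placeholder instance profile
    },
    serviceRole="AWSBatchServiceRole",            # placeholder service role
)
```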