We're trying to configure the EC2 plugin so it will automatically
"spin-up" slave nodes for running tests. The documentation states that
it "enables Hudson to automatically provision new instances on EC2,
based on the system demand. That is, if Hudson notices that your
system is overloaded, it will provision new slaves on EC2". However,
for us, it doesn't.
I've configured the EC2 plugin correctly and builds will run if I
provision instances manually. The plugin will also kill those EC2
instances once they become under-utilised. It just won't start
them automatically.
A quick check-list of our set-up:
- master is set to have 0 executors
- the EC2 config has 1 AMI set, with a "foo" label, and 2 executors
- the EC2 config has a limit of 2 instances
- the job we're interested in has been restricted to only run on a
slave with the "foo" label
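For reference, a rough sketch of what that set-up looks like in
Hudson's config.xml. This is an approximation, not a dump of our
actual config: the access key, AMI ID, and exact element names are
placeholders and may differ between plugin versions, so check your
own serialized config rather than copying this verbatim:

```xml
<clouds>
  <hudson.plugins.ec2.EC2Cloud>
    <accessId><!-- AWS access key --></accessId>
    <!-- cap on the total number of EC2 instances the plugin may run -->
    <instanceCap>2</instanceCap>
    <templates>
      <hudson.plugins.ec2.SlaveTemplate>
        <ami><!-- AMI ID --></ami>
        <!-- the label the job's node restriction must match -->
        <labels>foo</labels>
        <numExecutors>2</numExecutors>
      </hudson.plugins.ec2.SlaveTemplate>
    </templates>
  </hudson.plugins.ec2.EC2Cloud>
</clouds>
```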
If I hit the "build now" link for the job, it enters the build queue
then immediately gets the small black clock icon ("waiting for the
next available executor"), but no EC2 instance is provisioned.
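For completeness, the job-side restriction lives in the job's own
config.xml (this is core Hudson, not the plugin; the fields below
are a sketch, and "foo" must match the label on the EC2 template):

```xml
<project>
  <!-- restrict this job to nodes carrying the "foo" label -->
  <assignedNode>foo</assignedNode>
  <canRoam>false</canRoam>
</project>
```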
Both hudson and the EC2 plugin are at their latest versions.
On Fri, 2010-11-26 at 05:56 -0800, Phillip B Oldham wrote:
> removing this limit, leaving the box blank. Instances spin up now
> without issue.
Have you tried, or been able, to get hudson to start more than one
instance of a given slave type? I.e., if you have 10 build jobs for
the same slave type in the queue, does it queue all 10 builds for a
single slave, or can you get it to start more than one slave of that
type to drain the queue more quickly?
I can't seem to (a) get hudson to do this, or (b) find any tunable
suggesting it's possible, i.e. one that limits the number of
identical slaves to start. You would think that if hudson could
start more than one slave of a given type, it would have a setting
for the maximum.