
Adjust defaults to better handle self-hosting and bootstrapping #19190

Merged: 2 commits, Apr 3, 2018

Conversation

smarterclayton
Contributor

@deads2k as discussed, although I bumped the default for pods-per-core rather than removing it.

@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: smarterclayton

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot added the labels approved (indicates a PR has been approved by an approver from all required OWNERS files) and size/XS (denotes a PR that changes 0-9 lines, ignoring generated files) on Apr 2, 2018.
@smarterclayton
Contributor Author

This blocks landing static pods, because Atomic has only one core and can therefore run only 10 pods on the master.
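The arithmetic behind that claim: the kubelet's effective pod capacity is the smaller of max-pods and pods-per-core multiplied by the core count (when pods-per-core is set). A minimal sketch, assuming the pre-change defaults of pods-per-core=10 and max-pods=250 (the exact defaults are not shown in this thread):

```python
def effective_pod_capacity(cores: int, max_pods: int = 250, pods_per_core: int = 10) -> int:
    """Return the pod capacity the kubelet would enforce.

    When pods-per-core is 0 it is disabled and only max-pods applies;
    otherwise the stricter of the two limits wins. The default values
    here (250 / 10) are assumptions for illustration, not values taken
    from this PR's diff.
    """
    if pods_per_core <= 0:
        return max_pods
    return min(max_pods, pods_per_core * cores)

# A single-core Atomic master under the assumed old defaults is capped at 10 pods:
print(effective_pod_capacity(cores=1))  # -> 10
```

With only one core, the per-core limit dominates long before max-pods is reached, which is why static pods could not land on such a master.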

@smarterclayton
Contributor Author

/test install

@smarterclayton
Contributor Author

/test extended_conformance_install

@deads2k
Contributor

deads2k commented Apr 2, 2018

@deads2k as discussed, although I bumped the default for pods-per-core rather than removing it.

What value does that have over floating like upstream does? We hit a situation where it was better to not have this value. Why is it better to do this than to be consistent with upstream?

Just the question. lgtm otherwise.

@smarterclayton
Contributor Author

I'm thinking about the true pathological cases where we lift the limit and someone suddenly gets max-pods scheduled. With this change and the defaults, that would happen at 6 cores, whereas with no defaults it could happen on 1 core. I'm neither strongly for nor against, so it's your call.
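To make the trade-off concrete: with a raised pods-per-core, max-pods only becomes the binding limit once a node has several cores, whereas with no per-core limit even a 1-core node could be scheduled up to max-pods. A sketch, where pods_per_core=40 and max_pods=250 are assumed illustrative values (the PR's actual bumped number is not quoted in this thread):

```python
def effective_pod_capacity(cores, max_pods=250, pods_per_core=40):
    # pods_per_core=40 and max_pods=250 are illustrative assumptions,
    # not values confirmed by the PR diff.
    if pods_per_core <= 0:
        return max_pods
    return min(max_pods, pods_per_core * cores)

# With the per-core cap in place, capacity scales with cores and only
# hits the max-pods ceiling on larger machines:
print(effective_pod_capacity(1))   # -> 40
print(effective_pod_capacity(6))   # -> 240
print(effective_pod_capacity(7))   # -> 250  (max-pods now binds)

# With no per-core limit, a single-core node gets max-pods immediately:
print(effective_pod_capacity(1, pods_per_core=0))  # -> 250
```

This is the "pathological case" in the comment above: dropping the per-core default entirely trades a gradual, core-proportional ceiling for an immediate jump to max-pods on even the smallest node.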

The two commit messages:

1. A single-core master may run up to 25 pods before too long. This limit was intentionally low in the early days, when stability was an issue. At this point it is no longer helpful and should be increased.
2. We plan on using long durations for when auto-approve is off and short durations (<1d) for when it is on. A month was too far from either extreme.
@smarterclayton
Contributor Author

Removed; David convinced me we should do what upstream does wherever possible.
