
Conversation

@tiffanny29631
Contributor

@google-oss-prow

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@tiffanny29631
Contributor Author

/test-all

@google-oss-prow

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from tiffanny29631. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@gemini-code-assist

Summary of Changes

Hello @tiffanny29631, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses persistent scheduling failures for the test-git-server on GKE Autopilot clusters. It achieves this by introducing conditional logic to the git-server deployment configuration. Specifically, it ensures that when running on Autopilot, the deployment explicitly defines resource requests and limits for its containers and sets an empty node selector, thereby resolving issues related to node affinity and the optimize-utilization-scheduler.

Highlights

  • Resource Management Import: The k8s.io/apimachinery/pkg/api/resource package is now imported to enable the definition of CPU and memory resource requirements for Kubernetes objects.
  • Function Signature Update: The gitServer and gitDeployment functions have been updated to accept the nt *NT parameter, allowing them to access test environment details, specifically whether the cluster is a GKE Autopilot cluster.
  • Autopilot-Specific Configuration: Conditional logic has been introduced within the gitDeployment function to apply specific configurations only when running on GKE Autopilot clusters.
  • Explicit Resource Requests and Limits: For Autopilot clusters, explicit CPU and memory requests and limits (set to equal values) are now defined for both the git-server (100m CPU, 128Mi Memory) and http-git-server (50m CPU, 64Mi Memory) containers.
  • Empty NodeSelector for Autopilot: An empty nodeSelector is explicitly set for the git-server deployment on Autopilot clusters. This prevents conflicts with Autopilot's optimize-utilization-scheduler and resolves node affinity errors.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review — /gemini review — Performs a code review for the current pull request in its current state.
  • Pull Request Summary — /gemini summary — Provides a summary of the current pull request in its current state.
  • Comment — @gemini-code-assist — Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help — /gemini help — Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively addresses test failures on GKE Autopilot clusters by correctly configuring resource requirements and an empty nodeSelector for the test-git-server deployment. The changes are well-structured and the refactoring to accommodate the Autopilot-specific logic is clean. I have one suggestion to further improve the maintainability of the new code by reducing some duplication.

Comment on lines +189 to +214
		// Set resource requests and limits for git-server container
		podSpec.Containers[0].Resources = corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("100m"),
				corev1.ResourceMemory: resource.MustParse("128Mi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("100m"),
				corev1.ResourceMemory: resource.MustParse("128Mi"),
			},
		}
		// Set resource requests and limits for http-git-server container
		podSpec.Containers[1].Resources = corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("50m"),
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("50m"),
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
		}
		// Explicitly set empty nodeSelector to ensure no node affinity/selector constraints.
		// This prevents Autopilot's optimize-utilization-scheduler from applying implicit
		// constraints that could cause "didn't match Pod's node affinity/selector" errors.
		podSpec.NodeSelector = map[string]string{}


Severity: medium

To improve readability and maintainability, you can reduce code duplication by creating a ResourceList variable and reusing it for both Requests and Limits (Autopilot requires requests to equal limits, so the two lists are always identical here).

Additionally, consider defining the magic strings for CPU and memory as constants.

		// Set resource requests and limits for git-server container
		gitServerResources := corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("128Mi"),
		}
		podSpec.Containers[0].Resources = corev1.ResourceRequirements{
			Requests: gitServerResources,
			Limits:   gitServerResources,
		}
		// Set resource requests and limits for http-git-server container
		httpGitServerResources := corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("50m"),
			corev1.ResourceMemory: resource.MustParse("64Mi"),
		}
		podSpec.Containers[1].Resources = corev1.ResourceRequirements{
			Requests: httpGitServerResources,
			Limits:   httpGitServerResources,
		}
		// Explicitly set empty nodeSelector to ensure no node affinity/selector constraints
		// This prevents Autopilot's optimize-utilization-scheduler from applying implicit constraints
		// that could cause "didn't match Pod's node affinity/selector" errors
		podSpec.NodeSelector = map[string]string{}

@Camila-B
Contributor

/test all
