Plugin: Contour #6458
base: main
Conversation
Initial write up
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: iRaindrop. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
Added installing-contour.md to Installing Plugins
Misc edits
Minor edits
term fix
Misc edits
Likewise, mirroring documentation from ingress implementations seems like a red flag. We should be linking out to these websites.

/assign
I agree with Dave that we don't really want to duplicate the Contour installation documentation here. I seem to recall that we have some sort of specialized Contour configuration in net-contour -- @dprotaso , is that contour.yaml still needed, or can users install the upstream Contour at this point? (My recollection was that it installed two Contour installations, one for external addresses and one for internal services.)
This page shows how to install Contour in three ways:

- By using Contour's example YAML.
- By using the Helm chart for Contour.
- By using the Contour gateway provisioner.
We should focus on installing the Knative adapter for Contour, rather than installing Contour itself. I'd make "Contour installed on the cluster" a pre-requisite, and then just talk about installing the net-contour controller.
It then shows how to deploy a sample workload and route traffic to it through Contour.

This guidance uses all default settings. No additional configuration is required.
I liked how the Kourier docs provided information about configuration options. It may be that net-contour does not have any separate configuration options, but if so, we should say that the configuration is done natively through Contour configuration, and people should see the guides at https://projectcontour.io/
- A Kubernetes cluster with the Knative Serving component installed.
- Knative [load balancing](../serving/load-balancing/README.md) is activated.
- HELM installed locally, if selected as the installation method.
"Helm" as project is not an initialism, and shouldn't be in all-caps. In any case, this seems like guidance for installing Contour, and we should simply make that a pre-requisite, rather than duplicating Contour's installation instructions.
## Supported Contour versions

For information about Contour versions, see the Contour [Compatibility Matrix](https://projectcontour.io/resources/compatibility-matrix/).

## Option 1 - YAML installation

1. Use the following command to install Contour:

    ```bash
    kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
    ```

1. Verify the Contour pods are ready:

    ```bash
    kubectl get pods -n projectcontour -o wide
    ```

    You should see the following results:

    - Two Contour pods, each with status Running and 1/1 Ready.
    - One or more Envoy pods, each with status Running and 2/2 Ready.
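If you prefer to block until the rollouts report ready rather than eyeballing the pod list, the check can be scripted. This is a sketch that assumes the quickstart manifest's default object names (a `contour` Deployment and an `envoy` DaemonSet); run `kubectl -n projectcontour get deploy,ds` first if your names differ:

```bash
# Block until the Contour deployment has finished rolling out.
kubectl -n projectcontour rollout status deployment/contour --timeout=120s

# The quickstart runs Envoy as a DaemonSet; wait for it as well.
kubectl -n projectcontour rollout status daemonset/envoy --timeout=120s
```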
## Option 2 - Helm installation

This option requires Helm to be installed locally.

1. Use the following command to add the `bitnami` chart repository that contains the Contour chart:

    ```bash
    helm repo add bitnami https://charts.bitnami.com/bitnami
    ```

1. Install the Contour chart:

    ```bash
    helm install my-release bitnami/contour --namespace projectcontour --create-namespace
    ```

1. Verify Contour is ready:

    ```bash
    kubectl -n projectcontour get po,svc
    ```

    You should see the following results:

    - One instance of `pod/my-release-contour-contour` with status Running and 1/1 Ready.
    - One or more instances of `pod/my-release-contour-envoy`, each with status Running and 2/2 Ready.
    - One instance of `service/my-release-contour`.
    - One instance of `service/my-release-contour-envoy`.

## Option 3 - Contour Gateway Provisioner

The Gateway provisioner watches for the creation of Gateway API Gateway resources, and dynamically provisions Contour and Envoy instances based on the Gateway's spec.

Although the provisioning request itself is made using a Gateway API resource (Gateway), this method of installation still allows you to use any of the supported APIs for defining virtual hosts and routes: Ingress, HTTPProxy, or Gateway API's HTTPRoute and TLSRoute.
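As a concrete illustration of that last point, an instance provisioned through a Gateway can still serve Contour's own HTTPProxy API. The following is a sketch, not part of the original page: the `httpbin` name and `default` namespace are hypothetical, and it assumes a Service named `httpbin` listening on port 80 already exists:

```bash
kubectl apply -f - <<EOF
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpbin            # hypothetical name for this example
  namespace: default
spec:
  virtualhost:
    fqdn: local.projectcontour.io   # public DNS name resolving to 127.0.0.1
  routes:
    - conditions:
        - prefix: /
      services:
        - name: httpbin    # assumes this Service exists on port 80
          port: 80
EOF
```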
1. Use the following command to deploy the Gateway provisioner:

    ```bash
    kubectl apply -f https://projectcontour.io/quickstart/contour-gateway-provisioner.yaml
    ```

1. Verify the Gateway provisioner deployment is available:

    ```bash
    kubectl -n projectcontour get deployments
    NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
    contour-gateway-provisioner   1/1     1            1           1m
    ```

1. Create a GatewayClass:

    ```bash
    kubectl apply -f - <<EOF
    kind: GatewayClass
    apiVersion: gateway.networking.k8s.io/v1
    metadata:
      name: contour
    spec:
      controllerName: projectcontour.io/gateway-controller
    EOF
    ```

1. Create a Gateway:

    ```bash
    kubectl apply -f - <<EOF
    kind: Gateway
    apiVersion: gateway.networking.k8s.io/v1
    metadata:
      name: contour
      namespace: projectcontour
    spec:
      gatewayClassName: contour
      listeners:
        - name: http
          protocol: HTTP
          port: 80
          allowedRoutes:
            namespaces:
              from: All
    EOF
    ```

1. Verify the Gateway is available. It may take up to a minute to become available.

    ```bash
    kubectl -n projectcontour get gateways
    NAME      CLASS     ADDRESS   READY   AGE
    contour   contour             True    27s
    ```

1. Verify the Contour pods are ready:

    ```bash
    kubectl -n projectcontour get pods
    ```

    You should see the following results:

    - Two Contour pods, each with status Running and 1/1 Ready.
    - One or more Envoy pods, each with status Running and 2/2 Ready.
## Test application

Install a web application workload and route traffic to it through Contour.

1. Use the following command to install httpbin:

    ```bash
    kubectl apply -f https://projectcontour.io/examples/httpbin.yaml
    ```

1. Verify the pods and service are ready:

    ```bash
    kubectl get po,svc,ing -l app=httpbin
    ```

    You should see the following:

    - Three instances of `pods/httpbin`, each with status Running and 1/1 Ready.
    - One `service/httpbin`, with a CLUSTER-IP listed on port 80.
    - One Ingress on port 80.
1. The Helm install configures Contour to filter Ingress and HTTPProxy objects based on the `contour` IngressClass name. If using Helm, ensure the Ingress has an ingress class of `contour` with the following command:

    ```bash
    kubectl patch ingress httpbin -p '{"spec":{"ingressClassName": "contour"}}'
    ```

Now you can send some traffic to the sample application, through Contour and Envoy.

For simplicity and compatibility across all platforms, use `kubectl port-forward` to get traffic to Envoy. In a production environment you would typically use the Envoy service's address.

1. Port-forward from your local machine to the Envoy service:

    If using YAML:

    ```bash
    kubectl -n projectcontour port-forward service/envoy 8888:80
    ```

    If using Helm:

    ```bash
    kubectl -n projectcontour port-forward service/my-release-contour-envoy 8888:80
    ```

    If using the Gateway provisioner:

    ```bash
    kubectl -n projectcontour port-forward service/envoy-contour 8888:80
    ```

1. In a browser or via curl, make a request to `http://local.projectcontour.io:8888`. The `local.projectcontour.io` URL is a public DNS record that resolves to `127.0.0.1`, to make use of the forwarded port. You should see the httpbin home page.
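With the port-forward from the previous step running in another terminal, the same check can be done from the command line. This sketch only assumes the forwarded port 8888 from above:

```bash
# Expect an HTTP 200 status code from httpbin through Contour and Envoy.
curl -s -o /dev/null -w "%{http_code}\n" http://local.projectcontour.io:8888/
```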
## See also

Contour [Getting Started](https://projectcontour.io/getting-started/) documentation.
All of these instructions get Contour installed on the cluster, but they don't get net-contour (the Knative component that provides the adapter layer between Knative's abstract Routes and Contour HTTPProxy objects) installed. We have some instructions for that in install-serving-with-yaml.md:

1. Install the Knative Contour controller by running the command:

    ```bash
    kubectl apply -f {{ artifact(repo="net-contour",org="knative-extensions",file="net-contour.yaml")}}
    ```

1. Configure Knative Serving to use Contour by default by running the command:

    ```bash
    kubectl patch configmap/config-network \
      --namespace knative-serving \
      --type merge \
      --patch '{"data":{"ingress-class":"contour.ingress.networking.knative.dev"}}'
    ```

1. Fetch the External IP address or CNAME by running the command:

    ```bash
    kubectl --namespace contour-external get service envoy
    ```

    !!! tip
        Save this to use in the following [Configure DNS](#configure-dns) section.

(I removed the original step 1, which used a contour.yaml published by Knative, because I think the necessary changes have already been contributed upstream.)

You may want to document at least the visibility configuration from https://github.com/knative-extensions/net-contour/blob/main/config/config-contour.yaml#L50 as well, though I think most of the configuration for net-contour is inherited from Contour's defaults described in the Contour documentation.
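For reference, the visibility section of net-contour's `config-contour` ConfigMap looks roughly like the following. This is a sketch based on the commented example in that file; the class and service names shown are the documented defaults and must match your actual Contour installation:

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-contour
  namespace: knative-serving
data:
  # Map each Knative visibility level to a Contour ingress class and the
  # Envoy service that fronts it (default names; adjust to your install).
  visibility: |
    ExternalIP:
      class: contour-external
      service: contour-external/envoy
    ClusterLocal:
      class: contour-internal
      service: contour-internal/envoy
EOF
```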
Yeah, we still need both for external and internal routes.
This content will now be under Configuring Knative->Networking Options->Plugin: Contour |
Proposed Changes

- Need topic to install Contour plugin.
- Content derived from https://projectcontour.io/getting-started/