Commit a4d7b2f

Fixes ISSUE-9: add configuration of web console (#10)
1 parent 3858ca3 commit a4d7b2f

45 files changed (+2037 lines, -1447 lines)


.plano.py

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+subrepos/skewer/config/.plano.py

.planofile

Lines changed: 0 additions & 1 deletion
This file was deleted.

README.md

Lines changed: 35 additions & 10 deletions
@@ -54,31 +54,40 @@ As an example, the three clusters might consist of:
 The `skupper` command-line tool is the entrypoint for installing
 and configuring Skupper. You need to install the `skupper`
 command only once for each development environment.
+
 On Linux or Mac, you can use the install script (inspect it
 [here][install-script]) to download and extract the command:
+
 ~~~ shell
 curl https://skupper.io/install.sh | sh
 ~~~
+
 The script installs the command under your home directory. It
 prompts you to add the command to your path if necessary.
+
 For Windows and other installation options, see [Installing
 Skupper][install-docs].
+
 [install-script]: https://github.com/skupperproject/skupper-website/blob/main/docs/install.sh
 [install-docs]: https://skupper.io/install/index.html

 ## Step 2: Configure separate console sessions

 Skupper is designed for use with multiple namespaces, usually on
-different clusters. The `skupper` command uses your
+different clusters. The `skupper` and `kubectl` commands use your
 [kubeconfig][kubeconfig] and current context to select the
-namespace where it operates.
+namespace where they operate.
+
 [kubeconfig]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
+
 Your kubeconfig is stored in a file in your home directory. The
 `skupper` and `kubectl` commands use the `KUBECONFIG` environment
 variable to locate it.
+
 A single kubeconfig supports only one active context per user.
 Since you will be using multiple contexts at once in this
 exercise, you need to create distinct kubeconfigs.
+
 Start a console session for each of your namespaces. Set the
 `KUBECONFIG` environment variable to a different path in each
 session.
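
The per-session setup that this hunk describes amounts to pointing each terminal at its own kubeconfig file. A minimal sketch, using the same file paths that appear in the skewer.yaml change further down:

~~~ shell
# Console for public1
export KUBECONFIG=~/.kube/config-public1

# Console for public2
export KUBECONFIG=~/.kube/config-public2

# Console for private1
export KUBECONFIG=~/.kube/config-private1
~~~
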
@@ -107,6 +116,7 @@ The procedure for accessing a Kubernetes cluster varies by
 provider. [Find the instructions for your chosen
 provider][kube-providers] and use them to authenticate and
 configure access for each console session.
+
 [kube-providers]: https://skupper.io/start/kubernetes.html

 ## Step 4: Set up your namespaces
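
The namespace setup that follows (Step 4) pairs a namespace creation with a context update in each session. A sketch, using the namespace names from this exercise and the `kubectl config set-context` form that appears in the next hunk's context line:

~~~ shell
# Run in each console session, substituting public1, public2, or private1.
kubectl create namespace private1
kubectl config set-context --current --namespace private1
~~~
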
@@ -138,35 +148,39 @@ kubectl config set-context --current --namespace private1

 ## Step 5: Install Skupper in your namespaces

-The `skupper init` command installs the Skupper router and service
+The `skupper init` command installs the Skupper router and
 controller in the current namespace. Run the `skupper init` command
 in each namespace.
+
 **Note:** If you are using Minikube, [you need to start `minikube
 tunnel`][minikube-tunnel] before you install Skupper.
+
 [minikube-tunnel]: https://skupper.io/start/minikube.html#running-minikube-tunnel

 _**Console for public1:**_

 ~~~ shell
-skupper init --site-name public1
+skupper init --enable-console --enable-flow-collector
 ~~~

 _**Console for public2:**_

 ~~~ shell
-skupper init --site-name public2
+skupper init
 ~~~

 _**Console for private1:**_

 ~~~ shell
-skupper init --site-name private1
+skupper init
 ~~~

 _Sample output:_
+
 ~~~ console
-$ skupper init --site-name <namespace>
+$ skupper init
 Waiting for LoadBalancer IP or hostname...
+Waiting for status...
 Skupper is now installed in namespace '<namespace>'. Use 'skupper status' to get more information.
 ~~~
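
With this change, only the public1 site enables the console and flow collector. If you want to confirm that the extra components started there, a generic check (not part of the changed text; pod names vary by Skupper version) is:

~~~ shell
# In the public1 console session
kubectl get pods
skupper status
~~~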

@@ -194,30 +208,34 @@ skupper status
 ~~~

 _Sample output:_
+
 ~~~ console
 Skupper is enabled for namespace "<namespace>" in interior mode. It is connected to 1 other site. It has 1 exposed service.
 The site console url is: <console-url>
 The credentials for internal console-auth mode are held in secret: 'skupper-console-users'
 ~~~
+
 As you move through the steps below, you can use `skupper status` at
 any time to check your progress.

 ## Step 7: Link your namespaces

 Creating a link requires use of two `skupper` commands in
 conjunction, `skupper token create` and `skupper link create`.
+
 The `skupper token create` command generates a secret token that
 signifies permission to create a link. The token also carries the
 link details. Then, in a remote namespace, The `skupper link
 create` command uses the token to create a link to the namespace
 that generated it.
+
 **Note:** The link token is truly a *secret*. Anyone who has the
 token can link to your namespace. Make sure that only those you
 trust have access to it.
+
 First, use `skupper token create` in one namespace to generate the
 token. Then, use `skupper link create` in the other to create a
 link.
-Continue this pattern until all namespaces are linked.

 _**Console for public1:**_
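
For reference, the token-and-link pairing described above looks roughly like this, assuming private1 consumes a token generated in public1 (the file path follows the convention used elsewhere in this example):

~~~ shell
# Console for public1: generate a single-use token
skupper token create ~/private1-to-public1-token.yaml

# Console for private1: create the link and wait for it to come up
skupper link create ~/private1-to-public1-token.yaml
skupper link status --wait 60
~~~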

@@ -242,6 +260,11 @@ skupper link create ~/private1-to-public2-token.yaml
 skupper link status --wait 60
 ~~~

+If your console sessions are on different machines, you may need
+to use `scp` or a similar tool to transfer the token securely. By
+default, tokens expire after a single use or 15 minutes after
+creation.
+
 ## Step 8: Deploy the iperf3 servers

 After creating the application router network, deploy `iperf3` in each namespace.
@@ -324,6 +347,7 @@ network. To access it, use `skupper status` to look up the URL of
 the web console. Then use `kubectl get
 secret/skupper-console-users` to look up the console admin
 password.
+
 **Note:** The `<console-url>` and `<password>` fields in the
 following output are placeholders. The actual values are specific
 to your environment.
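
The password lookup mentioned here can be done by decoding the secret directly. A sketch, assuming the credentials are stored under the `admin` key as in current Skupper releases:

~~~ shell
kubectl get secret/skupper-console-users -o jsonpath='{.data.admin}' | base64 -d
~~~
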
@@ -339,7 +363,7 @@ _Sample output:_

 ~~~ console
 $ skupper status
-Skupper is enabled for namespace "public1" in interior mode. It is connected to 1 other site. It has 1 exposed service.
+Skupper is enabled for namespace "public1". It is connected to 1 other site. It has 1 exposed service.
 The site console url is: <console-url>
 The credentials for internal console-auth mode are held in secret: 'skupper-console-users'

@@ -352,7 +376,8 @@ in as user `admin` and enter the password.

 ## Cleaning up

-Restore your cluster environment by returning the resources created in the demonstration and delete the skupper network
+To remove Skupper and the other resources from this exercise, use
+the following commands.

 _**Console for private1:**_
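
The commands behind this cleanup step appear in the skewer.yaml change below; in each console session they come down to deleting that namespace's iperf3 deployment and removing Skupper. A sketch for the private1 session (the server name varies per namespace):

~~~ shell
kubectl delete deployment iperf3-server-a
skupper delete
~~~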

skewer.yaml

Lines changed: 17 additions & 26 deletions
@@ -18,37 +18,28 @@ prerequisites: |
   * Two public cloud clusters running in public cloud providers (**public1** and **public2**)
 sites:
   public1:
-    kubeconfig: ~/.kube/config-public1
+    platform: kubernetes
     namespace: public1
+    env:
+      KUBECONFIG: ~/.kube/config-public1
   public2:
-    kubeconfig: ~/.kube/config-public2
+    platform: kubernetes
     namespace: public2
+    env:
+      KUBECONFIG: ~/.kube/config-public2
   private1:
-    kubeconfig: ~/.kube/config-private1
+    platform: kubernetes
     namespace: private1
+    env:
+      KUBECONFIG: ~/.kube/config-private1
 steps:
   - standard: install_the_skupper_command_line_tool
   - standard: configure_separate_console_sessions
   - standard: access_your_clusters
   - standard: set_up_your_namespaces
   - standard: install_skupper_in_your_namespaces
   - standard: check_the_status_of_your_namespaces
-  - title: Link your namespaces
-    preamble: |
-      Creating a link requires use of two `skupper` commands in
-      conjunction, `skupper token create` and `skupper link create`.
-      The `skupper token create` command generates a secret token that
-      signifies permission to create a link. The token also carries the
-      link details. Then, in a remote namespace, The `skupper link
-      create` command uses the token to create a link to the namespace
-      that generated it.
-      **Note:** The link token is truly a *secret*. Anyone who has the
-      token can link to your namespace. Make sure that only those you
-      trust have access to it.
-      First, use `skupper token create` in one namespace to generate the
-      token. Then, use `skupper link create` in the other to create a
-      link.
-      Continue this pattern until all namespaces are linked.
+  - standard: link_your_namespaces
     commands:
       "public1":
         - run: skupper token create ~/private1-to-public1-token.yaml
@@ -77,21 +68,21 @@ steps:
       Before we can test performance, we need access to the `iperf3` from each namespace.
     commands:
       "private1":
-        - await: deployment/iperf3-server-a
+        - await_resource: deployment/iperf3-server-a
         - run: skupper expose deployment/iperf3-server-a --port 5201
-        - await: service/iperf3-server-a
+        - await_resource: service/iperf3-server-a
         - run: skupper service status
           apply: test
       "public1":
-        - await: deployment/iperf3-server-b
+        - await_resource: deployment/iperf3-server-b
         - run: skupper expose deployment/iperf3-server-b --port 5201
-        - await: service/iperf3-server-b
+        - await_resource: service/iperf3-server-b
         - run: skupper service status
           apply: test
       "public2":
-        - await: deployment/iperf3-server-c
+        - await_resource: deployment/iperf3-server-c
         - run: skupper expose deployment/iperf3-server-c --port 5201
-        - await: service/iperf3-server-c
+        - await_resource: service/iperf3-server-c
         - run: skupper service status
           apply: test
   - title: Run benchmark tests across the clusters
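
The `await` → `await_resource` renames above follow a Skewer schema update; what each site actually runs is unchanged. Done by hand, the sequence corresponds to something like the following, where the `kubectl wait` line only approximates what `await_resource` does:

~~~ shell
kubectl wait --for=condition=available --timeout=120s deployment/iperf3-server-a
skupper expose deployment/iperf3-server-a --port 5201
skupper service status
~~~
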
@@ -125,4 +116,4 @@ steps:
         - run: kubectl delete deployment iperf3-server-c
         - run: skupper delete
 next_steps: |
-  - [Find more examples](https://skupper.io/examples/)
+  - [Find more examples](https://skupper.io/examples/)

subrepos/skewer/.gitrepo

Lines changed: 3 additions & 3 deletions
@@ -6,7 +6,7 @@
 [subrepo]
 remote = https://github.com/skupperproject/skewer
 branch = main
-commit = 05cf19ec1b033fab683c29b0618750be7adb4a8e
-parent = 7fb9791badeaa0d1aa7547eb066760bad6fda737
+commit = e22ace4f8eda92222ed6039c676cef0c54312151
+parent = 234126b76cd684604cfdb7ef0e1e4be45e5baf0d
 method = merge
-cmdver = 0.4.5
+cmdver = 0.4.6

Lines changed: 10 additions & 0 deletions
@@ -57,3 +57,13 @@ def clean():
     remove("README.html")
     remove("htmlcov")
     remove(".coverage")
+
+@command
+def update_plano():
+    """
+    Update the embedded Plano repo
+    """
+
+    make_dir("external")
+    remove("external/plano-main")
+    run("curl -sfL https://github.com/ssorj/plano/archive/main.tar.gz | tar -C external -xz", shell=True)
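
The new `update_plano` command vendors the Plano sources under `external/`. Outside of Plano, the same effect can be had with plain shell (a rough equivalent, assuming `make_dir` and `remove` behave like `mkdir -p` and `rm -rf`):

~~~ shell
mkdir -p external
rm -rf external/plano-main
curl -sfL https://github.com/ssorj/plano/archive/main.tar.gz | tar -C external -xz
~~~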
