BinderHub Environment Setup

Setup a Kubernetes Cluster

(I started with Option 1, but later hit a persistent "Connection failed" error when connecting to tiller that I could never resolve, so I uninstalled minikube entirely. But my kubectl still pointed at the old minikube and tiller remained unreachable, so I decided to build a fresh k8s cluster and point kubectl at it instead. This time I resolved to be patient with the reinstall and actually understand what's going on under the hood.)

Option 1: Minikube

The official tutorial assumes you set up Kubernetes on a cloud provider. I originally wanted to follow its GKE guide and create a Google k8s cluster, but that requires credit card details… 😢 Being broke, I gave up and decided to try a local cluster first.

The local cluster setup follows this blog post 👇

https://juejin.im/post/5b62d0356fb9a04fb87767f5

But when building the local cluster with minikube, the network restrictions in mainland China meant the required docker images couldn't be pulled (not even over a VPN 🤢), so I used Alibaba Cloud's modified minikube instead; the download and instructions are here 👇

https://yq.aliyun.com/articles/221687

Then start minikube with:

minikube start --vm-driver hyperkit --registry-mirror=https://registry.docker-cn.com --kubernetes-version v1.12.1

That's it. (The first time, I didn't specify a version and got 1.10.0 by default, which later turned out to be too old for JupyterHub. Close to tears, I had to minikube delete and start over; then v1.13.3 wouldn't install, and after N attempts I found that 1.12.1 works. So much wasted time and effort 😤)
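Given how much time version mismatches cost me here, it's worth checking versions up front before installing anything on top. A quick sanity check (the --short flag is available on kubectl of this era):

```shell
# Which minikube build am I running?
minikube version

# Compare client and server Kubernetes versions;
# the skew should be within one minor version.
kubectl version --short
```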

Option 2: Docker's local Kubernetes

Since version 18.06, Docker for Mac ships with Kubernetes support: you can enable Kubernetes directly in the Docker Desktop app to get a local single-node cluster. (🤔 Is a single node enough? The official GKE tutorial sets num-nodes=2, but it mentions the count can be changed later and gives no other warnings, so a single node should be fine.)

However, on the 18.09 version I was using, enabling Kubernetes got stuck at "Kubernetes is starting…" indefinitely. I also tried the latest Edge release (I had been on Stable) with no luck, possibly because both ship the same Kubernetes version?

After consulting https://docs.docker.com/docker-for-mac/release-notes/ I settled on 18.09.0, the release just before the most recent Kubernetes update.
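Once Docker Desktop's Kubernetes is running, it's worth making sure kubectl actually talks to this cluster (exactly the stale-context problem I had after removing minikube). A sketch, assuming the context name docker-for-desktop used by Docker Desktop releases of that era:

```shell
# List all contexts kubectl knows about; the current one is starred.
kubectl config get-contexts

# Switch to Docker Desktop's local cluster
# (older releases call it "docker-for-desktop", newer ones "docker-desktop").
kubectl config use-context docker-for-desktop

# A single node in Ready state confirms the local cluster is up.
kubectl get nodes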

Setup JupyterHub

Setting up Helm

Helm is the package manager for Kubernetes: it packages an application and publishes/upgrades it on your Kubernetes cluster.

Helm is the client-side tool, while Tiller is the service that actually communicates with the Kubernetes API to manage our Helm packages.

Install it following the official guide:

https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-helm.html
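For reference, the core of that guide (for Helm 2) boils down to creating a service account for Tiller with cluster-admin rights and then initializing Helm against it:

```shell
# Create a service account for Tiller in the kube-system namespace.
kubectl --namespace kube-system create serviceaccount tiller

# Grant it cluster-admin so Tiller can manage resources cluster-wide.
kubectl create clusterrolebinding tiller \
    --clusterrole cluster-admin \
    --serviceaccount=kube-system:tiller

# Install Tiller into the cluster and wait until it is ready.
helm init --service-account tiller --wait
```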

After the installation finished, I verified with

helm version

and got an error:

$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find a ready tiller pod
kubectl -n kube-system get po
NAME READY STATUS RESTARTS AGE
coredns-5bfd87b64b-x79k7 1/1 Running 0 1h
etcd-minikube 1/1 Running 0 1h
kube-addon-manager-minikube 1/1 Running 0 1h
kube-apiserver-minikube 1/1 Running 1 1h
kube-controller-manager-minikube 1/1 Running 0 1h
kube-dns-b4bd9576-t5nhd 3/3 Running 1 1h
kube-proxy-p2g5f 1/1 Running 0 1h
kube-scheduler-minikube 1/1 Running 0 1h
kubernetes-dashboard-866c7586d-bgk8q 1/1 Running 0 1h
storage-provisioner 1/1 Running 0 1h
tiller-deploy-7cf844c44b-k6cnz 0/1 ImagePullBackOff 0 41m

Tiller is in ImagePullBackOff, i.e. the image pull failed (this error is such a demon 😈, it keeps coming back to haunt me).
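To see exactly which image pull is failing and why, the pod's events are usually the quickest signal. A sketch (the pod name below is from my cluster and will differ on yours):

```shell
# Show the failing pod's events, including the exact image name
# and the pull error message.
kubectl -n kube-system describe pod tiller-deploy-7cf844c44b-k6cnz

# Or list recent events in the namespace, newest last.
kubectl -n kube-system get events --sort-by=.lastTimestamp
```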

Going back through my logs, helm init had indeed errored at the end (and I hadn't noticed):

$ helm init --service-account tiller --wait
Creating /Users/oliveds/.helm
Creating /Users/oliveds/.helm/repository
Creating /Users/oliveds/.helm/repository/cache
Creating /Users/oliveds/.helm/repository/local
Creating /Users/oliveds/.helm/plugins
Creating /Users/oliveds/.helm/starters
Creating /Users/oliveds/.helm/cache/archive
Creating /Users/oliveds/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /Users/oliveds/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Error: tiller was not found. polling deadline exceeded
# original command: helm init --service-account tiller --wait
helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts --service-account tiller --wait
# or (I didn't use this; it should work the same): helm init --service-account tiller --wait --tiller-image registry.cn-hangzhou.aliyuncs.com/acs/tiller:v2.9.1 --upgrade

Switching to the command above (note: the image tag must match the client version shown by helm version), helm init succeeded this time.
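An alternative that avoids re-running helm init is to point the already-deployed Tiller at the mirror image directly (same Aliyun image path as above; the container inside the tiller-deploy deployment is named tiller):

```shell
# Swap the image on the running tiller deployment...
kubectl -n kube-system set image deployment/tiller-deploy \
    tiller=registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.0

# ...and wait for the rollout to complete.
kubectl -n kube-system rollout status deployment/tiller-deploy
```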

*Deprecated: Installing JupyterHub

Still following the official guide, but after running

helm upgrade --install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.8.0 \
--values config.yaml

the following error appeared once again:

Error: timed out waiting for the condition
$ kubectl get pods -n jhub
NAME READY STATUS RESTARTS AGE
hook-image-awaiter-lxsnj 1/1 Running 0 11m
hook-image-puller-6vgz2 0/1 ImagePullBackOff 0 11m

So I figured it was an image problem again, but switching to the Aliyun mirror didn't fix it either. The switch was done as follows:

helm repo remove stable
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo update

Later, following this blog post 👇

https://my.oschina.net/u/2306127/blog/1836933

I fetched the chart locally first, then installed it directly:

helm fetch jupyterhub/jupyterhub --version=0.8.0
helm install ./jupyterhub-0.8.0.tgz \
--name=$RELEASE \
--namespace $NAMESPACE \
--version=0.8.0 \
--values config.yaml

Still no luck. Next I tried setting up an NFS client provisioner, following

https://zhuanlan.zhihu.com/p/50407362

$ git clone https://github.com/kubernetes-incubator/external-storage.git
$ cd external-storage/nfs-client/
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml # I added a space after -i
$ kubectl create -f deploy/rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

After repeated defeats I started considering alternatives and re-read the BinderHub installation guide, only to find that a separate JupyterHub install apparently isn't needed 😟 so this whole section can be skipped??? So I tore it all down:

helm delete --purge jhub
kubectl delete namespace jhub

*Deprecated: Setup BinderHub

https://binderhub.readthedocs.io/en/latest/setup-binderhub.html

This is the official guide, and it uses the Docker Hub approach (though I don't actually know whether having Docker installed is all it takes).

I put binderhub under /usr/local/binderhub.

For Docker Hub, just fill in your Docker ID & password.

(The yaml files edited per the official tutorial didn't seem right; I later used the version from this OSChina post ⤵️

https://my.oschina.net/u/2306127/blog/1863719
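For orientation, the two files referenced above looked roughly like this at the time. Treat this purely as a sketch: the exact field names depend on the BinderHub chart version, and every value below is a placeholder.

```yaml
# secret.yaml -- credentials; keep this file out of version control
jupyterhub:
  hub:
    services:
      binder:
        apiToken: "<output of `openssl rand -hex 32`>"
proxy:
  secretToken: "<output of `openssl rand -hex 32`>"
registry:
  username: <docker-id>
  password: <password>

# config.yaml -- where built images get pushed on Docker Hub
config:
  BinderHub:
    use_registry: true
    image_prefix: <docker-id>/binder-dev-
```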

$ helm install jupyterhub/binderhub --version=v0.1.0-85ac189  --name=binder --namespace=binder -f secret.yaml -f config.yaml

Output:
NAME: binder
LAST DEPLOYED: Sat Mar 9 10:47:31 2019
NAMESPACE: binder
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
binder-config 9 1s
hub-config 26 1s
nginx-proxy-config 1 1s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
hub-db-dir Bound pvc-af6c816d-4215-11e9-8709-c69af665fa4b 1Gi RWO standard 1s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
binder-77d7f5cdd6-4gw98 0/1 Pending 0 0s
hub-77cbd48568-l5754 0/1 ContainerCreating 0 0s
proxy-5d56c87798-d8dhw 0/2 Pending 0 0s

==> v1/Secret
NAME TYPE DATA AGE
binder-secret Opaque 2 1s
hub-secret Opaque 2 1s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
binder LoadBalancer 10.96.233.146 <pending> 80:30534/TCP 0s
hub ClusterIP 10.102.22.155 <none> 8081/TCP 1s
proxy-api ClusterIP 10.103.189.60 <none> 8001/TCP 1s
proxy-http ClusterIP 10.100.165.19 <none> 8000/TCP 1s
proxy-public LoadBalancer 10.108.122.145 <pending> 80:30927/TCP,443:32704/TCP 1s

==> v1/ServiceAccount
NAME SECRETS AGE
binderhub 1 1s
hub 1 1s
proxy 1 1s

==> v1beta1/ClusterRole
NAME AGE
nginx-binder 1s

==> v1beta1/ClusterRoleBinding
NAME AGE
nginx-binder 1s

==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
binder 0/1 1 0 0s
hub 0/1 1 0 0s
proxy 0/1 1 0 0s

==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
hub 1 N/A 0 1s
proxy 1 N/A 0 1s

==> v1beta1/Role
NAME AGE
binderhub 1s
hub 1s
kube-lego 1s
nginx 1s

==> v1beta1/RoleBinding
NAME AGE
binderhub 1s
hub 1s
kube-lego 1s
nginx 1s


NOTES:
1. Get the application URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc -w binder-binderhub'
export SERVICE_IP=$(kubectl get svc --namespace binder binder-binderhub -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:

Fork in the Road

With Helm installed, I hit a snag: since my cluster is a locally deployed minikube rather than a real cloud Kubernetes platform, the configuration files from the official guide

https://binderhub.readthedocs.io/en/latest/setup-binderhub.html

cannot be applied directly. Searching online, I found that others had indeed deployed it this way too, and found this tutorial, which seemed fairly reliable.

Reading other people's questions online, I found that most people building BinderHub on minikube install JupyterHub first, but my JupyterHub installs kept failing with Error: timed out waiting for the condition.

……
