2019-08-10T08:32:40 I REPL [replication-2] could not find member to sync from
2019-08-10T08:32:40 E REPL [rsBackgroundSync] too stale to catch up -- entering maintenance mode
2019-08-10T08:32:40 I REPL [rsBackgroundSync] Our newest OpTime: {ts: Timestamp 1503977172000|27, t: 1}
2019-08-10T08:32:40 I REPL [rsBackgroundSync] Earliest OpTime available is {ts: Timestamp 1503998451000|1, t: -1}
2019-08-10T08:32:40 I REPL [rsBackgroundSync] See http://dochub.mongodb.org/core/resyncingaverystalereplicasetmember
2019-08-10T08:32:40 I REPL [rsBackgroundSync] going into maintenance mode with 1820 other maintenance mode tasks in progress
2019-08-10T08:32:40 I REPL [rsBackgroundSync] sync source candidate: 172.16.0.21:28000
2019-08-10T08:32:40 I REPL [replication-2] We are too stale to use 172.16.0.21:28000 as a sync source. Blacklisting this sync source because our last fetched timestamp: 59a4ded4:1b is before their earliest timestamp: 59a524a9:21e for 1min until: 2019-08-10T08:34:19
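This member's newest oplog entry is older than the oldest entry still available on any sync source, so it can no longer catch up by replaying the oplog and has to be resynced from scratch (the dochub link above describes the procedure). A minimal sketch of a manual initial resync, assuming the data directory is /data/db (a placeholder, not a path from this deployment) and mongod runs under systemd:

# Stop the stale member, move its data directory aside, and restart it;
# with an empty dbPath it performs an automatic initial sync from a
# reachable member such as 172.16.0.21:28000.
systemctl stop mongod
mv /data/db /data/db.stale.bak      # keep a backup rather than deleting outright
mkdir -p /data/db
systemctl start mongod
# Watch progress from any member of the replica set:
mongo --port 28000 --eval 'rs.status()'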
If the cluster supports external load balancers, the ingress host and ports can be read from the istio-ingressgateway service:

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
export TCP_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}')
If the environment does not support external load balancers, use the service's node ports instead (the ingress host is then the IP of a cluster node):

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export TCP_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}')
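Once these variables are set they can be combined into a gateway URL and probed with curl; a minimal sketch, assuming some HTTP route (the /productpage path below is only an illustration) is already bound to the ingress gateway:

export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
# Expect an HTTP 200 if a VirtualService routes this path through the gateway.
curl -s -o /dev/null -w "%{http_code}\n" "http://$GATEWAY_URL/productpage"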
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
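With KUBECONFIG pointing at the admin credentials, access can be sanity-checked right away, for example:

kubectl get nodes
kubectl cluster-info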
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
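This setup uses Calico as the pod network (the calico-* pods appear in the listing further down); installing it is a single apply of the upstream manifest. A sketch, assuming the generic manifest URL from the Calico docs (pick the version that matches your cluster):

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml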
You can now join any number of control-plane nodes by running the following command on each as root:
Please note that the certificate-key gives access to cluster sensitive data, keep it secret! As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
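The actual join commands printed by kubeadm init carry a real token and CA-cert hash; their general shape is shown below with placeholder values (the endpoint, token, hash, and key here are not from this cluster):

# Worker node:
kubeadm join <control-plane-endpoint>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>
# Additional control-plane node:
kubeadm join <control-plane-endpoint>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --certificate-key <certificate-key>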
[root@dev-vm1 ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-78d6f96c7b-t2xnr   1/1     Running   0          22m
kube-system   calico-node-4t2pb                          1/1     Running   0          22m
kube-system   coredns-545d6fc579-5rqjz                   0/1     Running   0          28m
kube-system   coredns-545d6fc579-gztfz                   0/1     Running   0          28m
kube-system   etcd-dev-vm1                               1/1     Running   0          28m
Checking the pod logs shows the following error:
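The log below can be pulled from either CoreDNS pod with a command along these lines (pod name taken from the listing above):

kubectl -n kube-system logs coredns-545d6fc579-5rqjz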
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
E0607 08:24:21.215568       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
[INFO] plugin/ready: Still waiting on: "kubernetes"
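This RBAC failure typically shows up when a newer CoreDNS image runs against a system:coredns ClusterRole that predates the discovery.k8s.io EndpointSlice API. One way to fix it, sketched here (check the exact rules required by the CoreDNS version you run), is to grant the missing permission and restart the deployment:

kubectl edit clusterrole system:coredns
# add under rules:
#   - apiGroups: ["discovery.k8s.io"]
#     resources: ["endpointslices"]
#     verbs: ["list", "watch"]
kubectl -n kube-system rollout restart deployment coredns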
Jun 07 22:07:02 dev-vm3 kubelet[43654]: I0607 22:07:02.184088   43654 container_manager_linux.go:995] "CPUAccounting not enabled for process" pid=43654
Jun 07 22:07:02 dev-vm3 kubelet[43654]: I0607 22:07:02.184098   43654 container_manager_linux.go:998] "MemoryAccounting not enabled for process" pid=43654
Jun 07 22:07:03 dev-vm3 kubelet[43654]: E0607 22:07:03.870531   43654 summary_sys_containers.go:47] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice/kubelet.service\": failed to get container info for \"/system.slice/kubelet.service\": unknown container \"/system.slice/kubelet.service\"" containerName="/system.slice/kubelet.service"
Jun 07 22:07:03 dev-vm3 kubelet[43654]: E0607 22:07:03.870594   43654 summary_sys_containers.go:47] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice/docker.service\": failed to get container info for \"/system.slice/docker.service\": unknown container \"/system.slice/docker.service\"" containerName="/system.slice/docker.service"
Jun 07 22:07:13 dev-vm3 kubelet[43654]: E0607 22:07:13.959837   43654 summary_sys_containers.go:47] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice/kubelet.service\": failed to get container info for \"/system.slice/kubelet.service\": unknown container \"/system.slice/kubelet.service\"" containerName="/system.slice/kubelet.service"
Jun 07 22:07:13 dev-vm3 kubelet[43654]: E0607 22:07:13.959873   43654 summary_sys_containers.go:47] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice/docker.service\": failed to get container info for \"/system.slice/docker.service\": unknown container \"/system.slice/docker.service\"" containerName="/system.slice/docker.service"
Jun 07 22:07:24 dev-vm3 kubelet[43654]: E0607 22:07:24.068455   43654 summary_sys_containers.go:47] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice/kubelet.service\": failed to get container info for \"/system.slice/kubelet.service\": unknown container \"/system.slice/kubelet.service\"" containerName="/system.slice/kubelet.service"
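The two INFO lines at the top point at the usual cause: systemd CPU/memory accounting is disabled for the kubelet and docker units, so cAdvisor cannot resolve their cgroups and the stats collection fails. A sketch of one common workaround, assuming both services are systemd-managed under exactly those unit names:

# Enable resource accounting for both units via drop-ins, then restart them.
for svc in kubelet docker; do
  mkdir -p /etc/systemd/system/${svc}.service.d
  cat > /etc/systemd/system/${svc}.service.d/11-accounting.conf <<EOF
[Service]
CPUAccounting=true
MemoryAccounting=true
EOF
done
systemctl daemon-reload
systemctl restart docker kubelet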
Fielddata is disabled on text fields by default. Set fielddata=true on [your_field_name] in order to load fielddata in memory by uninverting the inverted index.
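Enabling fielddata as the message suggests works, but it keeps the uninverted field in heap memory, so it is usually reserved for fields that genuinely must be aggregated or sorted as analyzed text. The mapping change looks roughly like this (my_index and my_field are placeholders, not names from the original error); in many cases aggregating on a keyword sub-field such as my_field.keyword is the cheaper alternative:

curl -X PUT "localhost:9200/my_index/_mapping" -H 'Content-Type: application/json' -d'
{
  "properties": {
    "my_field": {
      "type": "text",
      "fielddata": true
    }
  }
}'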