CreationTimestamp: Sat, 26 Jan 2019 22:23:08 +0800
Reference: Deployment/nginx
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 70%
Min replicas: 1
Max replicas: 5
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedComputeMetricsReplicas 1m (x12 over 3m) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
Warning FailedGetResourceMetric 1m (x13 over 3m) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
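The "the server could not find the requested resource (get pods.metrics.k8s.io)" error means the resource-metrics aggregated API is not registered at all. A quick way to confirm this (a hedged sketch; on a Heapster-era cluster like this one the apiservice may simply be absent, and `kubectl top` will fail for the same reason):

```shell
# List aggregated apiservices and look for the metrics API.
kubectl get apiservices | grep metrics.k8s.io

# If the metrics API is actually serving, this returns per-pod CPU/memory:
kubectl top pods
```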
CreationTimestamp: Sun, 27 Jan 2019 00:18:02 +0800
Reference: Deployment/nginx
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 70%
Min replicas: 1
Max replicas: 5
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: failed to get pod resource metrics: an error on the server ("Error: 'dial tcp 172.30.9.4:8082: getsockopt: connection timed out'\nTrying to reach: 'http://172.30.9.4:8082/apis/metrics/v1alpha1/namespaces/default/pods?labelSelector=app%3Dnginx-hpa'") has prevented the request from succeeding (get services http:heapster:)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedUpdateStatus 2m horizontal-pod-autoscaler Operation cannot be fulfilled on horizontalpodautoscalers.autoscaling "nginx-hpa": the object has been modified; please apply your changes to the latest version and try again
Warning FailedGetResourceMetric 24s (x3 over 4m) horizontal-pod-autoscaler unable to get metrics for resource cpu: failed to get pod resource metrics: an error on the server ("Error: 'dial tcp 172.30.9.4:8082: getsockopt: connection timed out'\nTrying to reach: 'http://172.30.9.4:8082/apis/metrics/v1alpha1/namespaces/default/pods?labelSelector=app%3Dnginx-hpa'") has prevented the request from succeeding (get services http:heapster:)
Warning FailedComputeMetricsReplicas 24s (x3 over 4m) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: an error on the server ("Error: 'dial tcp 172.30.9.4:8082: getsockopt: connection timed out'\nTrying to reach: 'http://172.30.9.4:8082/apis/metrics/v1alpha1/namespaces/default/pods?labelSelector=app%3Dnginx-hpa'") has prevented the request from succeeding (get services http:heapster:)
[root@master ~]#
This means the HPA cannot reach the heapster service, so the next step is to check whether heapster itself is healthy.
[root@master ~]# kubectl get pod -o wide -n kube-system
The HorizontalPodAutoscaler normally fetches metrics from a series of aggregated APIs (metrics.k8s.io, custom.metrics.k8s.io, and external.metrics.k8s.io). The metrics.k8s.io API is usually provided by metrics-server, which needs to be launched separately. See metrics-server for instructions. The HorizontalPodAutoscaler can also fetch metrics directly from Heapster.
Note:
FEATURE STATE: Kubernetes 1.11 deprecated
Fetching metrics from Heapster is deprecated as of Kubernetes 1.11.
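Since Heapster-based metrics are deprecated, the forward-looking fix is to serve metrics.k8s.io from metrics-server instead. A hedged sketch (the manifest URL is the upstream release artifact; verify that it matches your Kubernetes version before applying):

```shell
# Install metrics-server from the upstream release manifest.
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Confirm the aggregated API becomes Available.
kubectl get apiservice v1beta1.metrics.k8s.io
```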
autoscaling/v1
[root@master ~]# cat nginx-hpa-cpu.yml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
[root@master ~]#
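The same HPA can also be created imperatively; this one-liner is equivalent to the manifest above (CPU target 70%, 1 to 5 replicas):

```shell
# Create an HPA for the nginx Deployment without writing a manifest.
kubectl autoscale deployment nginx --cpu-percent=70 --min=1 --max=5
```

Note that either way, the target pods must declare a CPU request; the HPA target is a percentage of request, and a missing request is another common cause of the `<unknown>` reading seen earlier.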
This stress test targets only the CPU-based HPA. The load-generation command:
[root@node1 ~]# cat test.sh
while true
do
    wget -q -O- http://192.168.1.204:30080
done
[root@node1 ~]# sh test.sh
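An in-cluster alternative to the node-side loop above is a throwaway busybox pod (the service name `nginx` here is an assumption; substitute your own Service name or the NodePort address used above):

```shell
# Generate load from inside the cluster; the pod is deleted on exit.
kubectl run load-generator --rm -it --image=busybox --restart=Never \
  -- /bin/sh -c "while true; do wget -q -O- http://nginx; done"
```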
Observe the HPA's current load and the pod count:
[root@master ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-hpa Deployment/nginx 0% / 70% 1 5 1 14h
[root@master ~]#
[root@master ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-hpa Deployment/nginx 14% / 70% 1 5 1 14h
[root@master ~]#
When the load spikes, the HPA starts creating new pod replicas according to the defined rule (the pod CPU threshold is 70%).
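The replica count the HPA converges to follows the standard formula desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). A quick arithmetic check against the 70% target used here (the 350% spike value is hypothetical):

```shell
# ceil(currentReplicas * currentUtilization / targetUtilization)
# computed with integer arithmetic: 1 replica at 350% of request
# against a 70% target should scale to 5 replicas.
current_replicas=1
current_util=350   # percent of request (hypothetical spike)
target_util=70
echo $(( (current_replicas * current_util + target_util - 1) / target_util ))
# prints 5
```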
CreationTimestamp: Sun, 27 Jan 2019 01:04:25 +0800
Reference: Deployment/nginx
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (0) / 70%
Min replicas: 1
Max replicas: 5
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale False BackoffDownscale the time since the previous scale is still within the downscale forbidden window
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited True TooFewReplicas the desired replica count is increasing faster than the maximum scale rate
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 41m (x2 over 1h) horizontal-pod-autoscaler New size: 5; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 29m (x2 over 1h) horizontal-pod-autoscaler New size: 3; reason: All metrics below target
Normal SuccessfulRescale 17m horizontal-pod-autoscaler New size: 2; reason: All metrics below target
Normal SuccessfulRescale 8m (x2 over 1h) horizontal-pod-autoscaler New size: 3; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 3m (x2 over 12m) horizontal-pod-autoscaler New size: 1; reason: All metrics below target