chore(new_chart): Add chart for pod IO stress charts (#320)

* chore(new_chart): Add chart for pod io stress charts

Signed-off-by: Udit Gaurav <udit.gaurav@mayadata.io>
Author: Udit Gaurav
Date: 2020-09-13 21:33:43 +05:30
Committed by: GitHub
Parent: ab7e638a68
Commit: 615f58dfe9
10 changed files with 197 additions and 7 deletions


@@ -33,6 +33,7 @@ spec:
   - k8-pod-delete
   - k8-service-kill
   - node-io-stress
+  - pod-io-stress
   keywords:
   - Kubernetes


@@ -60,3 +60,6 @@ experiments:
 - name: node-io-stress
   CSV: node-io-stress.chartserviceversion.yaml
   desc: "node-io-stress"
+- name: pod-io-stress
+  CSV: pod-io-stress.chartserviceversion.yaml
+  desc: "pod-io-stress"

Binary file not shown (new image added: 12 KiB).


@@ -28,7 +28,7 @@ spec:
           value: '120'
         ## specify the size as percentage of free space on the file system
-        - name: FILESSYSTEM_UTILIZATION_PERCENTAGE
+        - name: FILESYSTEM_UTILIZATION_PERCENTAGE
           value: '10'
         ## enter the name of the desired node


@@ -1,7 +1,7 @@
 apiVersion: litmuschaos.io/v1alpha1
 description:
   message: |
-    Give a memory hog on a node belonging to a deployment
+    Give IO disk stress on a node belonging to a deployment
 kind: ChaosExperiment
 metadata:
   name: node-io-stress
@@ -50,12 +50,12 @@ spec:
     ## specify the size as percentage of free space on the file system
     ## default value 90 (in percentage)
-    - name: FILESSYSTEM_UTILIZATION_PERCENTAGE
+    - name: FILESYSTEM_UTILIZATION_PERCENTAGE
       value: '10'
     ## we can specify the size in Gigabyte (Gb) also in place of percentage of free space
-    ## NOTE: for selecting this option FILESSYSTEM_UTILIZATION_PERCENTAGE should be empty
-    - name: FILESSYSTEM_UTILIZATION_BYTES
+    ## NOTE: for selecting this option FILESYSTEM_UTILIZATION_PERCENTAGE should be empty
+    - name: FILESYSTEM_UTILIZATION_BYTES
       value: ''
     ## Total number of workers default value is 4
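
To make the two sizing options concrete, here is a minimal, hypothetical ChaosEngine env override that selects the Gigabyte-based sizing per the NOTE above (the percentage variable is left empty so the bytes option takes effect; the value '1' is purely illustrative and not from this commit):

# Hypothetical ChaosEngine env override (illustrative, not part of this commit).
# Per the NOTE above, leave the percentage empty so the GB-based option is used.
env:
  - name: FILESYSTEM_UTILIZATION_PERCENTAGE
    value: ''
  # size of the stress, expressed in Gigabytes (GB) despite the _BYTES suffix,
  # per the comment in the experiment manifest above
  - name: FILESYSTEM_UTILIZATION_BYTES
    value: '1'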


@@ -13,8 +13,7 @@ spec:
   categoryDescription: |
     This experiment causes disk stress on the Kubernetes node. The experiment aims to verify the resiliency of applications that share this disk resource for ephemeral or persistent storage purposes.
     - Disk stress on a particular node filesystem where the application deployment is available.
-    - The amount of disk stress can be either specifed as the size in percentage of the total free space on the file system or simply in Gigabytes(Gb)
-    - After the test, the recovery should be manual for the application pod and node in case they are not in an appropriate state.
+    - The amount of disk stress can be specified either as a percentage of the total free space on the file system or simply in Gigabytes (GB)
   keywords:
   - Kubernetes
   - Disk


@@ -0,0 +1,35 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  # It can be true/false
  annotationCheck: 'true'
  # It can be active/stop
  engineState: 'active'
  # ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ''
  appinfo:
    appns: 'default'
    applabel: 'app=nginx'
    appkind: 'deployment'
  chaosServiceAccount: pod-io-stress-sa
  monitoring: false
  # It can be delete/retain
  jobCleanUpPolicy: 'delete'
  experiments:
    - name: pod-io-stress
      spec:
        components:
          env:
            # set chaos duration (in sec) as desired
            - name: TOTAL_CHAOS_DURATION
              value: '120'
            ## specify the size as percentage of free space on the file system
            - name: FILESYSTEM_UTILIZATION_PERCENTAGE
              value: '10'
            - name: TARGET_POD
              value: ''
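
If the chaos should hit one specific pod rather than a randomly selected one, the TARGET_POD variable above can name it. A minimal sketch, assuming TARGET_POD accepts a pod name from the application namespace; the pod name below is hypothetical:

# Hypothetical override (not part of this commit): pin the chaos to one nginx pod
- name: TARGET_POD
  value: 'nginx-6b474476c4-22xv5'  # illustrative pod name, not from the source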


@@ -0,0 +1,74 @@
---
apiVersion: litmuschaos.io/v1alpha1
description:
  message: |
    IO stress on app pods belonging to an app deployment
kind: ChaosExperiment
metadata:
  name: pod-io-stress
spec:
  definition:
    scope: Namespaced
    permissions:
      - apiGroups:
          - ""
          - "batch"
          - "litmuschaos.io"
        resources:
          - "jobs"
          - "pods"
          - "pods/log"
          - "events"
          - "chaosengines"
          - "chaosexperiments"
          - "chaosresults"
        verbs:
          - "create"
          - "list"
          - "get"
          - "patch"
          - "update"
          - "delete"
    image: "litmuschaos/go-runner:latest"
    args:
      - -c
      - ./experiments/pod-io-stress
    command:
      - /bin/bash
    env:
      - name: TOTAL_CHAOS_DURATION
        value: '120'
      ## specify the size as percentage of free space on the file system
      ## default value 90 (in percentage)
      - name: FILESYSTEM_UTILIZATION_PERCENTAGE
        value: '10'
      ## the size can also be specified in Gigabytes (GB) in place of a percentage of free space
      ## NOTE: for selecting this option FILESYSTEM_UTILIZATION_PERCENTAGE should be empty
      - name: FILESYSTEM_UTILIZATION_BYTES
        value: ''
      ## total number of workers (default value is 4)
      - name: NUMBER_OF_WORKERS
        value: '4'
      ## percentage of total pods to target
      - name: PODS_AFFECTED_PERC
        value: ''
      # period to wait before and after injection of chaos (in sec)
      - name: RAMP_TIME
        value: ''
      # provide the LIB here
      # only pumba is supported
      - name: LIB
        value: 'pumba'
      # provide the lib image
      - name: LIB_IMAGE
        value: 'gaiaadm/pumba'
    labels:
      name: pod-io-stress
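
Because the experiment exposes PODS_AFFECTED_PERC, a ChaosEngine can widen the blast radius beyond a single pod. A minimal, hypothetical override sketch; the assumption that an empty value targets a single pod follows the usual Litmus convention and is not stated in this commit:

# Hypothetical ChaosEngine env override (illustrative, not part of this commit):
# stress roughly half of the pods matching the app label instead of just one.
- name: PODS_AFFECTED_PERC
  value: '50'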


@@ -0,0 +1,42 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
  createdAt: 2020-09-13T10:28:08Z
  name: pod-io-stress
  version: 0.1.0
  annotations:
    categories: Kubernetes
    vendor: CNCF
    support: https://slack.kubernetes.io/
spec:
  displayName: pod-io-stress
  categoryDescription: |
    This experiment causes disk stress on the application pod. The experiment aims to verify the resiliency of applications that share this disk resource for ephemeral or persistent storage purposes.
    - Consumes available disk space via filesystem IO stress, sized either as a percentage of free space or as a value in GB
    - The application pod should be healthy once chaos is stopped. The expectation is that service requests are served despite the chaos.
  keywords:
    - Kubernetes
    - Disk
  platforms:
    - GKE
    - Packet(Kubeadm)
    - Minikube
    - EKS
  maturity: alpha
  maintainers:
    - name: Udit Gaurav
      email: udit.gaurav@mayadata.io
  minKubeVersion: 1.12.0
  provider:
    name: Mayadata
  links:
    - name: Source Code
      url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/pod-io-stress
    - name: Documentation
      url: https://docs.litmuschaos.io/docs/pod-io-stress/
    - name: Video
      url:
  icon:
    - base64data: ""
      mediatype: ""
  chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-io-stress/experiment.yaml


@@ -0,0 +1,36 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-io-stress-sa
  namespace: default
  labels:
    name: pod-io-stress-sa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: pod-io-stress-sa
  namespace: default
  labels:
    name: pod-io-stress-sa
rules:
- apiGroups: ["","litmuschaos.io","batch"]
  resources: ["pods","jobs","events","pods/log","pods/exec","chaosengines","chaosexperiments","chaosresults"]
  verbs: ["create","list","get","patch","update","delete","deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: pod-io-stress-sa
  namespace: default
  labels:
    name: pod-io-stress-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-io-stress-sa
subjects:
- kind: ServiceAccount
  name: pod-io-stress-sa
  namespace: default
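
One caveat on the manifest above: rbac.authorization.k8s.io/v1beta1 is deprecated (RBAC v1 has been GA since Kubernetes 1.8, and v1beta1 stopped being served in 1.22). On newer clusters the same Role would be declared as below with identical rules; only the apiVersion changes, and the RoleBinding needs the same one-line change:

# Same Role as above, expressed against the GA RBAC API (sketch, not part of this commit)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-io-stress-sa
  namespace: default
  labels:
    name: pod-io-stress-sa
rules:
- apiGroups: ["","litmuschaos.io","batch"]
  resources: ["pods","jobs","events","pods/log","pods/exec","chaosengines","chaosexperiments","chaosresults"]
  verbs: ["create","list","get","patch","update","delete","deletecollection"]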