[prometheus-kube-stack] Grafana is not persistent #436
There is a persistence setting for prometheus and alertmanager. I think we should have the same for grafana?
This is not a bug, as data persistence is not enabled by default. You can either claim a PersistentVolume in your custom values.yaml file, as @survivant suggested, or export your dashboards as JSON definition files and create a ConfigMap with the JSON-formatted data for each custom dashboard. This way, with each new release of the stack via helm, the modifications within Grafana do not persist, but your exported dashboards get redeployed with everything else.
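The ConfigMap approach described above can be sketched as follows. This is a hedged example: the name, namespace, and dashboard JSON are placeholders, and the `grafana_dashboard` label matches the chart's default dashboard-sidecar configuration:

```yaml
# A ConfigMap carrying one exported dashboard; the Grafana sidecar in
# kube-prometheus-stack watches for ConfigMaps with this label and loads
# their JSON payloads as dashboards.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-custom-dashboard      # placeholder name
  namespace: monitoring          # use the namespace the stack runs in
  labels:
    grafana_dashboard: "1"       # default sidecar label in kube-prometheus-stack
data:
  my-dashboard.json: |
    {"title": "My Dashboard", "panels": []}
```

Because the dashboard now lives in a ConfigMap, it is redeployed with every helm release instead of depending on Grafana's database surviving a pod restart.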
@ofiryy I updated my values.yaml to fix the Grafana persistence problem, adding:

```yaml
grafana:
  persistence:
    enabled: true
```
@blademainer but we still can't choose our storage class
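The bundled grafana subchart does expose a field for this: `persistence.storageClassName`. A hedged sketch, where `fast-ssd` is a placeholder for a StorageClass that actually exists in your cluster:

```yaml
grafana:
  persistence:
    enabled: true
    storageClassName: fast-ssd   # placeholder; pick one from `kubectl get sc`
    size: 10Gi
```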
@survivant This works for me
I'm not sure the workflow of expecting all the Grafana settings to get zapped on the next pod restart has the best interests of the enterprise in mind. I get the argument for exporting the charts as JSON and storing them in ConfigMaps to make them deployment agnostic, but there are other settings not related to charts that we don't want to disappear when a pod crashes either (such as user login information, alerting settings, and so forth). So, unless there is a best practice for storing all of that in ConfigMaps as well (and a good user UI for doing so that doesn't require kubectl and a Kubernetes admin), it seems shortsighted to think that Grafana can live in an enterprise environment as an application that doesn't require persistence. It seems the opposite would be true.

I too am wringing out the kinks of my Prometheus install and ran into this exact same problem of Grafana not supporting persistence out of the box. It was rather alarming to learn that, after I began building out dashboards, I lost that work when I tested the failover scenario of the pod going down. I did not see a persistence piece in the grafana part of values.yaml and didn't know that this would turn Grafana into an app with a temporary persistence layer. In hindsight, I should have done my pod failover test before beginning to "persist" data in Grafana, to learn about this annoying default.

I do wish the helm chart could be upgraded to have a section under grafana that allows defining the persistence layer, even if it's commented out.
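The persistence section this commenter asks for is already reachable through the bundled grafana subchart's values. A minimal sketch, assuming you have created a PersistentVolumeClaim yourself; `my-grafana-pvc` is a placeholder name:

```yaml
grafana:
  persistence:
    enabled: true
    existingClaim: my-grafana-pvc  # placeholder: a PVC you created beforehand
```

Pointing at an existing claim keeps Grafana's SQLite database (users, alert settings, dashboards) on the same volume across helm upgrades and pod restarts.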
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity. |
For anyone who's looking: the kube-prometheus-stack chart takes these values from the bundled grafana chart. Probably this should be included in the docs.
I have used your code snippet, but I'm facing an issue. I wonder how I can fix it.
@darox The issue is that your PVC is already bound to the old pod. A quick fix would be to delete the ReplicaSet for the older deployment revision. A permanent fix would be to make sure that two pods never hold the volume at the same time.

EDIT: the previous strategy was wrong; this one works:

```yaml
grafana:
  deploymentStrategy:
    type: Recreate
```

This strategy will ensure the old pod is terminated before the new one is created, so the volume can be re-bound.
I have applied your recommendations:
@darox Is the PVC actually created and bound? Check with:

```shell
kubectl get pvc -n prometheus
kubectl describe pvc -n prometheus prometheus-grafana  # replace the name if needed
```

Do you in fact have a StorageClass configured?

```shell
kubectl get sc
```
It worked with:
Thanks a lot for your support :)
For some reason it doesn't work for me; I've got such values:
and after doing an upgrade no PVCs are created. I also tried just this for Grafana and still no luck.
Can anyone help me with the dashboard location? I have added the above values.yaml for persistence and the volume is bound, but when I restart the pod, the dashboards won't come up. @kamilgregorczyk
Hi @AwateAkshay, did you solve your problem? I am having the same issue. I can see my dashboards when I get into the grafana container, but they are not present in Grafana itself.
@UrosCvijan exec into the grafana pod; you will see a grafana.db file, which is a SQLite DB. Inside it you can see your dashboards.
@kamilgregorczyk not "persistance" but "persistence":
Thanks for posting your code, it helped me debug how to add environment variables in kube-prometheus-stack. Now I know that syntax.
I tried the above methods and got the PV created, but the pod failed to start because the chownData initContainer failed even after multiple retries. I followed issue 752 and set initChownData to false.
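The workaround above corresponds to the `initChownData` toggle in the bundled grafana subchart; a minimal sketch:

```yaml
grafana:
  persistence:
    enabled: true
  initChownData:
    enabled: false   # skip the chown init container that was crash-looping
```

Disabling it means the chart no longer chowns the data directory at startup, so make sure the volume is writable by Grafana's runtime user.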
Describe the bug
I installed the prometheus-community/kube-prometheus-stack chart and then defined panels and alerts in Grafana.
When I delete the grafana pod, all the data is deleted from Grafana; there is no persistence.
I wanted to use this solution: prometheus-operator/prometheus-operator#2558 (comment)
but to my surprise, no PV or PVC was created by the prometheus-kube-stack chart.
How can I make my Grafana persistent?
Version of Helm and Kubernetes:
Helm Version:
$ helm version
version.BuildInfo{Version:"v3.0.3", GitCommit:"ac925eb7279f4a6955df663a0128044a8a6b7593", GitTreeState:"clean", GoVersion:"go1.13.6"}
Kubernetes Version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.13-eks-2ba888", GitCommit:"2ba888155c7f8093a1bc06e3336333fbdb27b3da", GitTreeState:"clean", BuildDate:"2020-07-17T18:48:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Which chart: prometheus-kube-stack
Which version of the chart: 12.3.0
How to reproduce it (as minimally and precisely as possible): Install prometheus-kube-stack and define a panel in grafana, then delete the grafana pod