{"_id":"5832ff490752650f00eb53ac","parentDoc":null,"__v":0,"project":"56a1f77442dfda0d00046285","user":"56a1f7423845200d0066d71b","version":{"_id":"56a1f77542dfda0d00046288","__v":9,"project":"56a1f77442dfda0d00046285","createdAt":"2016-01-22T09:33:41.397Z","releaseDate":"2016-01-22T09:33:41.397Z","categories":["56a1f77542dfda0d00046289","56a1fdf442dfda0d00046294","56a2079f0067c00d00a2f955","56a20bdf8b2e6f0d0018ea84","56a3e78a94ec0a0d00b39fed","56af19929d32e30d0006d2ce","5721f4e9dcfa860e005bef98","574e870be892bf0e004fde0d","5832fdcdb32d820f0072e12f"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"category":{"_id":"5832fdcdb32d820f0072e12f","project":"56a1f77442dfda0d00046285","__v":0,"version":"56a1f77542dfda0d00046288","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-11-21T13:59:41.977Z","from_sync":false,"order":1,"slug":"container","title":"Container Monitoring"},"updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-11-21T14:06:01.575Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"## Overview\n\nIn **Trace by RisingStack** it's possible to send container level metrics and see them together with your application level metrics. It can be pretty useful when you need to find correlations between lower level metrics like network and your application's behaviour.\n\n## Metrics\n\nSends the following metrics from each `pod`(s) with `trace-service-name` label to Trace By RisingStack metrics.\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Metrics Name\",\n    \"h-1\": \"Description\",\n    \"0-0\": \"cpu/usage_rate\",\n    \"0-1\": \"CPU usage on all cores in millicores.\",\n    \"1-0\": \"memory/major_page_faults_rate\",\n    \"1-1\": \"Number of major page faults per second.\",\n    \"2-0\": \"memory/page_faults_rate\",\n    \"2-1\": \"Number of page faults per second.\",\n    \"3-0\": \"memory/usage\",\n    \"3-1\": \"Total memory usage,  collected as Megabyte.\",\n    \"4-0\": \"memory/working_set\",\n    \"4-1\": \"Total working set usage. 
Working set is the memory being used and not easily dropped by the kernel, collected as Megabyte.\",\n    \"5-0\": \"network/rx_rate\",\n    \"5-1\": \"Number of bytes received over the network per second.\",\n    \"6-0\": \"network/rx_errors_rate\",\n    \"7-0\": \"network/tx_rate\",\n    \"8-0\": \"network/tx_errors_rate\",\n    \"6-1\": \"Number of errors while receiving over the network per second.\",\n    \"7-1\": \"Number of bytes sent over the network per second.\",\n    \"8-1\": \"Number of errors while sending over the network.\"\n  },\n  \"cols\": 2,\n  \"rows\": 9\n}\n[/block]\n### Container CPU Usage\n\nIt can be really handy to check your containers' CPU usage together with some application level metrics like garbage collection time.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/ba83490-kubernetes-cpu.png\",\n        \"kubernetes-cpu.png\",\n        2218,\n        868,\n        \"#e6eaee\"\n      ],\n      \"caption\": \"CPU usage on all cores in millicores.\"\n    }\n  ]\n}\n[/block]\n### Container Memory Usage\n\n**Trace by RisingStack** can provide application level memory usage, but it can be useful to check it on container level as well.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/fbdcc57-kubernetes-memory-usage.png\",\n        \"kubernetes-memory-usage.png\",\n        2216,\n        872,\n        \"#3c8bc3\"\n      ],\n      \"caption\": \"Total memory usage and working set collected as Megabyte.\"\n    }\n  ]\n}\n[/block]\n### Container Memory Faults\n\nIf you have lots of page faults, your service can slow down. It means that your application needs to access a page which is located on the disk instead and not in the memory.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/5ffe6cc-kubernetes-memory-faults.png\",\n        \"kubernetes-memory-faults.png\",\n        2224,\n        874,\n        \"#3b80b1\"\n      ],\n      \"caption\": \"Number of page faults per second.\"\n    }\n  ]\n}\n[/block]\n### Container Incoming Network Metrics\n\nHow much data does your application receives? Trace can tell it - you can check the number of bytes here.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/a90c322-kubernetes-rx.png\",\n        \"kubernetes-rx.png\",\n        2212,\n        866,\n        \"#3d8bc3\"\n      ],\n      \"caption\": \"Number of bytes received over the network per second with error rate.\"\n    }\n  ]\n}\n[/block]\n### Container Outgoing Network Metrics\n\nDoes your outgoing traffic move together with your throughput? It's time to find out.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/0bce332-kubernetes-tx.png\",\n        \"kubernetes-tx.png\",\n        2208,\n        864,\n        \"#e9edf0\"\n      ],\n      \"caption\": \"Number of bytes sent over the network per second with error rate.\"\n    }\n  ]\n}\n[/block]\n## How does it work?\n\nThe **Trace by RisingStack** Kubernetes collector is a Docker image that runs inside your Kubernetes infrastructure. It's easy to setup, takes only 2-3 minutes. 
Our app automatically collects metrics from your containers with `trace-service-name` label and sends it to our servers.\n\n## How to get it?\n\nContact sales at [trace-support:::at:::risingstack.com](mailto:[email protected]).","excerpt":"Kubernetes container monitoring","slug":"kubernetes-monitoring","type":"basic","title":"Kubernetes monitoring"}

# Kubernetes monitoring

Kubernetes container monitoring

## Overview

In **Trace by RisingStack** it's possible to send container-level metrics and see them together with your application-level metrics. This is useful when you need to find correlations between lower-level metrics, such as network traffic, and your application's behaviour.

## Metrics

Trace collects the following metrics from every pod that carries the `trace-service-name` label and sends them to Trace by RisingStack:

| Metric name | Description |
| --- | --- |
| `cpu/usage_rate` | CPU usage on all cores in millicores. |
| `memory/major_page_faults_rate` | Number of major page faults per second. |
| `memory/page_faults_rate` | Number of page faults per second. |
| `memory/usage` | Total memory usage, collected in megabytes. |
| `memory/working_set` | Total working set usage. The working set is memory that is in use and cannot easily be dropped by the kernel, collected in megabytes. |
| `network/rx_rate` | Number of bytes received over the network per second. |
| `network/rx_errors_rate` | Number of errors while receiving over the network per second. |
| `network/tx_rate` | Number of bytes sent over the network per second. |
| `network/tx_errors_rate` | Number of errors while sending over the network per second. |

### Container CPU Usage

It can be really handy to check your containers' CPU usage together with application-level metrics such as garbage collection time.

![CPU usage on all cores in millicores.](https://files.readme.io/ba83490-kubernetes-cpu.png)

### Container Memory Usage

**Trace by RisingStack** can provide application-level memory usage, but it can be useful to check it on the container level as well.

![Total memory usage and working set, collected in megabytes.](https://files.readme.io/fbdcc57-kubernetes-memory-usage.png)

### Container Memory Faults

If you have lots of page faults, your service can slow down: a page fault means your application needs to access a page that is located on disk rather than in memory.

![Number of page faults per second.](https://files.readme.io/5ffe6cc-kubernetes-memory-faults.png)

### Container Incoming Network Metrics

How much data does your application receive? Trace can tell you: check the number of bytes received here.

![Number of bytes received over the network per second, with error rate.](https://files.readme.io/a90c322-kubernetes-rx.png)

### Container Outgoing Network Metrics

Does your outgoing traffic move together with your throughput? It's time to find out.

![Number of bytes sent over the network per second, with error rate.](https://files.readme.io/0bce332-kubernetes-tx.png)

## How does it work?

The **Trace by RisingStack** Kubernetes collector is a Docker image that runs inside your Kubernetes infrastructure. It's easy to set up and takes only 2-3 minutes. The collector automatically gathers metrics from your containers labeled with `trace-service-name` and sends them to our servers.
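As a minimal sketch, this is roughly what adding that label to a Deployment's pod template might look like. The Deployment name, image, and label value below are placeholders (they are not taken from the Trace documentation), and the exact `apiVersion` depends on your cluster version; the only detail the collector relies on is the `trace-service-name` label on the pods.

```yaml
apiVersion: apps/v1                # may differ on older clusters (e.g. extensions/v1beta1)
kind: Deployment
metadata:
  name: my-web-service             # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-web-service
  template:
    metadata:
      labels:
        app: my-web-service
        trace-service-name: my-web-service   # the label the collector looks for
    spec:
      containers:
        - name: my-web-service
          image: example.com/my-web-service:1.0.0   # placeholder image
```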
## How to get it?

Contact sales at [trace-support@risingstack.com](mailto:trace-support@risingstack.com).