{"_id":"56bee06a7ba2830d00f62d62","user":"56a1f7423845200d0066d71b","version":{"_id":"56a1f77542dfda0d00046288","__v":9,"project":"56a1f77442dfda0d00046285","createdAt":"2016-01-22T09:33:41.397Z","releaseDate":"2016-01-22T09:33:41.397Z","categories":["56a1f77542dfda0d00046289","56a1fdf442dfda0d00046294","56a2079f0067c00d00a2f955","56a20bdf8b2e6f0d0018ea84","56a3e78a94ec0a0d00b39fed","56af19929d32e30d0006d2ce","5721f4e9dcfa860e005bef98","574e870be892bf0e004fde0d","5832fdcdb32d820f0072e12f"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"parentDoc":null,"project":"56a1f77442dfda0d00046285","__v":31,"category":{"_id":"56a1f77542dfda0d00046289","__v":3,"version":"56a1f77542dfda0d00046288","pages":["56a1f77642dfda0d0004628b","56a8b39d1bb4420d004cabd3","56bee06a7ba2830d00f62d62"],"project":"56a1f77442dfda0d00046285","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-01-22T09:33:41.956Z","from_sync":false,"order":4,"slug":"documentation","title":"Reference"},"updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-02-13T07:51:06.742Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"## Response time\n\nResponse time is the total amount of time it takes to respond to a request for service.\n\n- **Median:** 50% of a service's requests were completed within less time than the median, and 50% were completed within more. Which means that median is separating the faster half of requests, from the slower half.\n\n- **95th Percentile:** 95% of the requests were completed within less time than the *95th Percentile*, and 5% were completed within more. It gives you a good picture on what most clients experience when using your service.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/qKvNhShQneWFpS8mmPqV_metrics_response_time.png\",\n        \"metrics_response_time.png\",\n        \"2230\",\n        \"728\",\n        \"#225e9b\",\n        \"\"\n      ],\n      \"caption\": \"Response time\"\n    }\n  ]\n}\n[/block]\n## Throughput\n\nThroughput or request per minutes *(rpm)* is the rate of message delivery over a communication channel like HTTP(s).\n\n- **2xx:** The number of requests with status code 200 - 299 per minute. \n\n- **3xx:** The number of requests with status code 300 - 399 per minute. \n\n- **4xx:** The number of requests with status code 400 - 499 per minute. \n\n- **5xx:** The number of requests with status code > 500 per minute. \n\nWant to know more about status codes? 
Check out the [Status Code Definitions](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html).\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/ek1XtNfATjOAW5xxYCXf_metrics_throughput.png\",\n        \"metrics_throughput.png\",\n        \"2202\",\n        \"671\",\n        \"#31b2a2\",\n        \"\"\n      ],\n      \"caption\": \"Throuhput\"\n    }\n  ]\n}\n[/block]\n## System load\n\n- **Max load:** maximum number of processes waiting for resources in the last 5 minutes.\n\n- **Average load:** average number of processes waiting for resources in the last 5 minutes.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/H1I2k1ytTCeh6HhSDFaJ_metrics_load.png\",\n        \"metrics_load.png\",\n        \"2134\",\n        \"638\",\n        \"#2454a4\",\n        \"\"\n      ],\n      \"caption\": \"System load\"\n    }\n  ]\n}\n[/block]\n## Memory usage\n\n- **Used Heap:** Used heap memory by the process in MB.\n\n- **Total Heap:** Total heap memory available for the process in MB.\n\n- **rss:** (resident set size) is the portion of memory occupied by a process that is held in the RAM, this contains: the code itself, the stack and the heap\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/168RrlCGSTGKN0WCAOLo_metrics_memory.png\",\n        \"metrics_memory.png\",\n        \"1119\",\n        \"337\",\n        \"#3e8ac9\",\n        \"\"\n      ],\n      \"caption\": \"Memory usage\"\n    }\n  ]\n}\n[/block]\n## Garbage collection\n\n- **Time:** Indicates how many microseconds your application spent doing garbage collection in the given timeframe.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/qi8B2R1kRJG37kUGNCOT_metrics_garbage_collector_time.png\",\n        \"metrics_garbage_collector_time.png\",\n        \"2276\",\n        \"712\",\n        \"#0856a4\",\n        \"\"\n      ],\n      \"caption\": \"Time spent doing garbage collection\"\n    }\n  ]\n}\n[/block]\n- **Scavenge:** Number of scavenge collections in the given timeframe.\n\n- **Marksweep:** Number of mark-sweep collections in the given timeframe.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/jy6qNs2wSgyJnU0DUAzH_metrics_gc_runs%20.png\",\n        \"metrics_gc_runs .png\",\n        \"2222\",\n        \"750\",\n        \"#164e86\",\n        \"\"\n      ],\n      \"caption\": \"Garbage collector runs\"\n    }\n  ]\n}\n[/block]\nTo hunt down memory leaks and better understand how garbage collection works, we recommend to check out the [Finding a Memory Leak in Node.js](https://blog.risingstack.com/finding-a-memory-leak-in-node-js/) article.\n\n## Node.js specific metrics\n\n### Event loop\n\n- **Max Lag:**: maximum number of milliseconds spent in a single loop\n\n- **Average Lag:**: average number of milliseconds spent in a single loop\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/CL9DTtrRyuzg05Vu8SFH_metrics_eventloop_queue.png\",\n        \"metrics_eventloop_queue.png\",\n        \"2196\",\n        \"698\",\n        \"#83a3cb\",\n        \"\"\n      ],\n      \"caption\": \"Eventloop lag for Node.js\"\n    }\n  ]\n}\n[/block]\n- **Active handlers:**: number of active requests in the eventloop\n\n- **Active requests:**: number of active handlers in the eventloop\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        
\"https://files.readme.io/5WwEbzmQ7CJ2DlESic3O_metrics_eventloop_lag.png\",\n        \"metrics_eventloop_lag.png\",\n        \"2252\",\n        \"692\",\n        \"#83a3cb\",\n        \"\"\n      ],\n      \"caption\": \"Eventloop queue for Node.js\"\n    }\n  ]\n}\n[/block]","excerpt":"Performance and response metrics of your application","slug":"service-metrics","type":"basic","title":"Service metrics"}

# Service metrics

Performance and response metrics of your application

## Response time

Response time is the total amount of time it takes to respond to a request for service.

- **Median:** 50% of a service's requests were completed in less time than the median, and 50% took longer. In other words, the median separates the faster half of requests from the slower half.

- **95th Percentile:** 95% of the requests were completed in less time than the *95th Percentile*, and 5% took longer. It gives you a good picture of what most clients experience when using your service. (A small sketch of how these percentiles can be computed from raw request durations is shown after the Garbage collection section below.)
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/qKvNhShQneWFpS8mmPqV_metrics_response_time.png",
        "metrics_response_time.png",
        "2230",
        "728",
        "#225e9b",
        ""
      ],
      "caption": "Response time"
    }
  ]
}
[/block]
## Throughput

Throughput, or requests per minute *(rpm)*, is the rate of message delivery over a communication channel such as HTTP(S).

- **2xx:** The number of requests with status code 200 - 299 per minute.

- **3xx:** The number of requests with status code 300 - 399 per minute.

- **4xx:** The number of requests with status code 400 - 499 per minute.

- **5xx:** The number of requests with status code 500 - 599 per minute.

Want to know more about status codes? Check out the [Status Code Definitions](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html).
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/ek1XtNfATjOAW5xxYCXf_metrics_throughput.png",
        "metrics_throughput.png",
        "2202",
        "671",
        "#31b2a2",
        ""
      ],
      "caption": "Throughput"
    }
  ]
}
[/block]
## System load

- **Max load:** maximum number of processes waiting for resources in the last 5 minutes.

- **Average load:** average number of processes waiting for resources in the last 5 minutes.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/H1I2k1ytTCeh6HhSDFaJ_metrics_load.png",
        "metrics_load.png",
        "2134",
        "638",
        "#2454a4",
        ""
      ],
      "caption": "System load"
    }
  ]
}
[/block]
## Memory usage

- **Used Heap:** Heap memory used by the process, in MB.

- **Total Heap:** Total heap memory available to the process, in MB.

- **RSS:** Resident set size, the portion of the process's memory held in RAM. It contains the code itself, the stack, and the heap.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/168RrlCGSTGKN0WCAOLo_metrics_memory.png",
        "metrics_memory.png",
        "1119",
        "337",
        "#3e8ac9",
        ""
      ],
      "caption": "Memory usage"
    }
  ]
}
[/block]
## Garbage collection

- **Time:** Indicates how many microseconds your application spent doing garbage collection in the given timeframe.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/qi8B2R1kRJG37kUGNCOT_metrics_garbage_collector_time.png",
        "metrics_garbage_collector_time.png",
        "2276",
        "712",
        "#0856a4",
        ""
      ],
      "caption": "Time spent doing garbage collection"
    }
  ]
}
[/block]
- **Scavenge:** Number of scavenge collections in the given timeframe.

- **Marksweep:** Number of mark-sweep collections in the given timeframe.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/jy6qNs2wSgyJnU0DUAzH_metrics_gc_runs%20.png",
        "metrics_gc_runs .png",
        "2222",
        "750",
        "#164e86",
        ""
      ],
      "caption": "Garbage collector runs"
    }
  ]
}
[/block]
To hunt down memory leaks and better understand how garbage collection works, we recommend checking out the [Finding a Memory Leak in Node.js](https://blog.risingstack.com/finding-a-memory-leak-in-node-js/) article.
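The response-time percentiles described at the top of this page are derived from the durations of individual requests. The snippet below is only a minimal sketch of that idea, using the nearest-rank method on an assumed in-memory window of samples; it makes no claim about how the charts on this page are computed internally.

```javascript
// Minimal sketch: median and 95th percentile of a window of response
// times (in ms), using the nearest-rank method (an assumption here).
function percentile(durations, p) {
  if (durations.length === 0) return 0;
  const sorted = [...durations].sort((a, b) => a - b);
  // nearest rank: smallest value at or above which p% of samples fall
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const responseTimes = [12, 15, 18, 22, 25, 31, 40, 55, 120, 380]; // example data

console.log('median:', percentile(responseTimes, 50), 'ms');
console.log('95th percentile:', percentile(responseTimes, 95), 'ms');
```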
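Many of the metrics above can also be read directly from a running Node.js process with built-in modules: system load via `os.loadavg()`, heap and RSS via `process.memoryUsage()`, and garbage collection activity via `perf_hooks`. The following is a rough sketch of that idea only, with assumed variable names and sampling interval; it is not the actual implementation behind these charts.

```javascript
// Sketch: sample load, memory and GC activity once per minute using
// Node.js built-ins. Names and the interval are illustrative assumptions.
const os = require('os');
const { PerformanceObserver } = require('perf_hooks');

let gcCount = 0;
let gcTimeMs = 0;

// Each 'gc' performance entry describes one collection; duration is in ms.
const gcObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    gcCount += 1;
    gcTimeMs += entry.duration;
  }
});
gcObserver.observe({ entryTypes: ['gc'] });

setInterval(() => {
  const [load1m, load5m] = os.loadavg();                      // 1- and 5-minute load averages
  const { heapUsed, heapTotal, rss } = process.memoryUsage(); // values in bytes
  const toMB = (bytes) => Math.round(bytes / 1024 / 1024);

  console.log({
    load5m,
    usedHeapMB: toMB(heapUsed),
    totalHeapMB: toMB(heapTotal),
    rssMB: toMB(rss),
    gcRuns: gcCount,
    gcTimeMs: Math.round(gcTimeMs),
  });

  // Reset the GC counters for the next window.
  gcCount = 0;
  gcTimeMs = 0;
}, 60 * 1000);
```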
## Node.js specific metrics

### Event loop

- **Max Lag:** maximum number of milliseconds spent in a single loop

- **Average Lag:** average number of milliseconds spent in a single loop
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/CL9DTtrRyuzg05Vu8SFH_metrics_eventloop_queue.png",
        "metrics_eventloop_queue.png",
        "2196",
        "698",
        "#83a3cb",
        ""
      ],
      "caption": "Eventloop lag for Node.js"
    }
  ]
}
[/block]
- **Active handlers:** number of active handlers in the event loop

- **Active requests:** number of active requests in the event loop
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/5WwEbzmQ7CJ2DlESic3O_metrics_eventloop_lag.png",
        "metrics_eventloop_lag.png",
        "2252",
        "692",
        "#83a3cb",
        ""
      ],
      "caption": "Eventloop queue for Node.js"
    }
  ]
}
[/block]
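Event loop lag like the one charted above can be approximated from inside a Node.js process with the built-in `perf_hooks.monitorEventLoopDelay()` histogram (available in newer Node.js versions). The sketch below only illustrates the concept; the sampling interval and resolution are assumptions, not how the agent measures it.

```javascript
// Sketch: observe event loop delay with the built-in perf_hooks histogram.
// Values are reported in nanoseconds; we convert them to milliseconds.
const { monitorEventLoopDelay } = require('perf_hooks');

const histogram = monitorEventLoopDelay({ resolution: 20 }); // sample roughly every 20 ms
histogram.enable();

setInterval(() => {
  const toMs = (ns) => Number((ns / 1e6).toFixed(2));
  console.log({
    maxLagMs: toMs(histogram.max),      // roughly the "Max Lag" idea above
    averageLagMs: toMs(histogram.mean), // roughly the "Average Lag" idea above
  });
  histogram.reset(); // start a fresh window for the next sample
}, 60 * 1000);
```

The active handler and request counts shown in the second chart are not exposed through a documented public API; Node.js only surfaces them through undocumented internals such as `process._getActiveHandles()` and `process._getActiveRequests()`, so treat any direct use of those as unsupported.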