{"_id":"56a20bd98b2e6f0d0018ea82","project":"56a1f77442dfda0d00046285","version":{"_id":"56a1f77542dfda0d00046288","__v":9,"project":"56a1f77442dfda0d00046285","createdAt":"2016-01-22T09:33:41.397Z","releaseDate":"2016-01-22T09:33:41.397Z","categories":["56a1f77542dfda0d00046289","56a1fdf442dfda0d00046294","56a2079f0067c00d00a2f955","56a20bdf8b2e6f0d0018ea84","56a3e78a94ec0a0d00b39fed","56af19929d32e30d0006d2ce","5721f4e9dcfa860e005bef98","574e870be892bf0e004fde0d","5832fdcdb32d820f0072e12f"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"parentDoc":null,"category":{"_id":"56a20bdf8b2e6f0d0018ea84","pages":["56a20e302255370d00ad5ecb"],"project":"56a1f77442dfda0d00046285","__v":1,"version":"56a1f77542dfda0d00046288","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-01-22T11:00:47.207Z","from_sync":false,"order":3,"slug":"features","title":"Features"},"__v":15,"user":"56a1f7423845200d0066d71b","updates":["57e91b3b21c9990e00344535"],"next":{"pages":[],"description":""},"createdAt":"2016-01-22T11:00:41.191Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[]},"auth":"required","params":[],"url":""},"isReference":false,"order":3,"body":"Metrics are the near real-time information of your running system collected by our agent, if you'd like to know what it collects exactly check out our [agent documentation](doc:nodejs-agent).\n\nOur metrics page allows you to keep track of what is currently happening in your application. It makes it easy to spot potential errors during runtime.\n\nThe metrics page shows an overview of a service, that could consist of multiple running instances that are reporting under the same name. Those are aggregated by our servers and shown on the UI. The aggregation method depends on the type of metric. 
(further details below)\n\nThe charts can display deployment data that you can send to our servers, to learn more visit [it's documentation](doc:deployment-hook)\n\n## What kind of metrics does Trace by RisingStack provide?\n\n### Response time: \nThe length of time required to respond to a request. \n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/f94a8fb-responseTime.png\",\n        \"responseTime.png\",\n        594,\n        358,\n        \"#165b9f\"\n      ],\n      \"caption\": \"Response time chart\"\n    }\n  ]\n}\n[/block]\nAggregations used here are 95th and median, 95th means that 95 percent of the requests were server below that time, median is the midpoint of a frequency distribution of observed values.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"body\": \"When monitoring web services it is essential to know how much time did a request spend in our application and when exactly did the server respond. Based on that metric we can spot potential lengthy operations in our application.\"\n}\n[/block]\n### Throughput\n\nIndicates how many requests were served within a given time period. It's broken down by the response status codes, which are grouped by their first digit.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/77a4a65-throughput.png\",\n        \"throughput.png\",\n        516,\n        333,\n        \"#069b84\"\n      ],\n      \"caption\": \"Throughput chart\"\n    }\n  ]\n}\n[/block]\nBy analyzing a throughput chart we can spot 4XX and 5XX response codes. These response codes could mean some serious errors in your app, so keep an eye on them.\n\n### Memory usage\n\n\nMemory usage charts indicate how much memory does the running process use at a given time. We aggregate all of your instance's data by averaging them. 
It's broken down to 3 parts:\n- resident set size: the size of the memory that is currently in your RAM (not swapping)\n- total heap size: total heap size of your application\n- used heap: currently used heap size of your application\n\nBy default, NodeJS sets the max heap size at around 1.8 GB on x86 64bit computers. That is the maximum value of the total heap size (can be modified by [v8-flags](https://nodejs.org/api/v8.html). The used heap size depends on garbage collection and memory allocation within your program.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/12c0e4b-memoryUsage_ok.png\",\n        \"memoryUsage_ok.png\",\n        557,\n        348,\n        \"#2a8ac0\"\n      ]\n    }\n  ]\n}\n[/block]\n\n[block:callout]\n{\n  \"type\": \"danger\",\n  \"title\": \"\",\n  \"body\": \"When you see growing memory patterns on the chart, it could mean memory leaks in your app, as seen on the following chart. Consider creating a memory heapdump and inspecting it by hand.\"\n}\n[/block]\n\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/cc2b579-memoryUsage_warning.png\",\n        \"memoryUsage_warning.png\",\n        563,\n        352,\n        \"#188bc2\"\n      ],\n      \"caption\": \"Memory usage chart.\"\n    }\n  ]\n}\n[/block]\n### Garbage collector\n\nGarbage collector can have a huge impact on the application's performance. If you're allocating big object structures it takes a long time to deallocate them. 
The garbage collector runs autonomously, and you can not predetermine when will it start and stop.\n\nFirst of our garbage collector metrics is **garbage collector runs**.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/82348bb-garbageCollectorRun.png\",\n        \"garbageCollectorRun.png\",\n        531,\n        348,\n        \"#5a8abb\"\n      ]\n    }\n  ]\n}\n[/block]\nIt's broken up into two parts:\n- Scavenge: fast by design, it is suitable for frequently occurring short GC cycles.\n- Mark-sweep: pause time in mark-sweep is found to be high, slowing down overall performance\n\nFor further information about this topic see [this article on garbage collection](https://strongloop.com/strongblog/node-js-performance-garbage-collection/).\n\nOur other chart on the topic is **time spent doing garbage collection**.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/0bdcb10-garbageCollectorTimeSpent.png\",\n        \"garbageCollectorTimeSpent.png\",\n        552,\n        349,\n        \"#bccad4\"\n      ]\n    }\n  ]\n}\n[/block]\n### Event loop\n\nIf you're not familiar with the NodeJS event loop, check out this [video](https://www.youtube.com/watch?v=8aGhZQkoFbQ).\n\nThe key metrics of the event loop are the queue, which is the amount of tasks that are waiting to get processed. 
The other one is the event loop lag, which is the time it takes for a tasks to get processed.\n\nWe're displaying an average event loop stats of your services.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/1ae1cd1-eventLoopQueue.png\",\n        \"eventLoopQueue.png\",\n        534,\n        332,\n        \"#5c8cbc\"\n      ]\n    }\n  ]\n}\n[/block]\n\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/fa5e547-eventLoopLag.png\",\n        \"eventLoopLag.png\",\n        541,\n        342,\n        \"#618fbd\"\n      ]\n    }\n  ]\n}\n[/block]","excerpt":"Performance metrics of your services like system load, memory, response time...","slug":"metrics","type":"basic","title":"Metrics"}

# Metrics

Performance metrics of your services, such as system load, memory, and response time.

Metrics are near real-time information about your running system, collected by our agent. If you'd like to know exactly what it collects, check out our [agent documentation](doc:nodejs-agent).

Our metrics page allows you to keep track of what is currently happening in your application and makes it easy to spot potential errors during runtime.

The metrics page shows an overview of a service, which may consist of multiple running instances reporting under the same name. These are aggregated by our servers and shown on the UI. The aggregation method depends on the type of metric (further details below).

The charts can also display deployment data that you send to our servers; to learn more, visit [its documentation](doc:deployment-hook).

## What kind of metrics does Trace by RisingStack provide?

### Response time

The length of time required to respond to a request.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/f94a8fb-responseTime.png",
        "responseTime.png",
        594,
        358,
        "#165b9f"
      ],
      "caption": "Response time chart"
    }
  ]
}
[/block]
The aggregations used here are the 95th percentile and the median: the 95th percentile means that 95 percent of the requests were served below that time, while the median is the midpoint of the frequency distribution of the observed values.
[block:callout]
{
  "type": "warning",
  "body": "When monitoring web services it is essential to know how much time a request spent in our application and when exactly the server responded. Based on this metric we can spot potentially lengthy operations in our application."
}
[/block]
### Throughput

Indicates how many requests were served within a given time period. It's broken down by response status code, grouped by the first digit.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/77a4a65-throughput.png",
        "throughput.png",
        516,
        333,
        "#069b84"
      ],
      "caption": "Throughput chart"
    }
  ]
}
[/block]
By analyzing a throughput chart we can spot 4XX and 5XX response codes.
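The grouping by first digit can be sketched as follows; `groupByStatusClass` is a hypothetical helper for illustration, not part of Trace:

```javascript
// Group response status codes by their first digit, the way the
// throughput chart buckets them (2xx, 3xx, 4xx, 5xx).
// groupByStatusClass is a hypothetical helper, not part of Trace.
function groupByStatusClass(statusCodes) {
  const buckets = {};
  for (const code of statusCodes) {
    const klass = `${String(code)[0]}xx`;
    buckets[klass] = (buckets[klass] || 0) + 1;
  }
  return buckets;
}

console.log(groupByStatusClass([200, 201, 304, 404, 500, 200]));
// → { '2xx': 3, '3xx': 1, '4xx': 1, '5xx': 1 }
```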
These response codes could indicate serious errors in your app, so keep an eye on them.

### Memory usage

Memory usage charts indicate how much memory the running process uses at a given time. We aggregate the data of all your instances by averaging them. The chart is broken down into three parts:
- resident set size: the amount of memory currently held in RAM (not swapped out)
- total heap size: the total heap size of your application
- used heap: the heap size currently in use by your application

By default, Node.js caps the heap at around 1.8 GB on 64-bit x86 systems; that is the maximum value of the total heap size (it can be modified via [V8 flags](https://nodejs.org/api/v8.html)). The used heap size depends on garbage collection and memory allocation within your program.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/12c0e4b-memoryUsage_ok.png",
        "memoryUsage_ok.png",
        557,
        348,
        "#2a8ac0"
      ]
    }
  ]
}
[/block]

[block:callout]
{
  "type": "danger",
  "title": "",
  "body": "A steadily growing memory pattern on the chart, as seen below, can indicate a memory leak in your app. Consider creating a heapdump and inspecting it by hand."
}
[/block]

[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/cc2b579-memoryUsage_warning.png",
        "memoryUsage_warning.png",
        563,
        352,
        "#188bc2"
      ],
      "caption": "Memory usage chart."
    }
  ]
}
[/block]
### Garbage collector

The garbage collector can have a huge impact on your application's performance: if you allocate large object structures, it takes a long time to deallocate them. The garbage collector runs autonomously, and you cannot predict when it will start or stop.

The first of our garbage collector metrics is **garbage collector runs**.
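Both garbage collector charts boil down to aggregating individual GC pauses. As a sketch, here is how such records could be rolled up into run counts and total pause time per kind; the record shape and the `aggregateGc` helper are illustrative only, not the agent's actual format:

```javascript
// Aggregate hypothetical GC pause records into per-kind run counts and
// total pause time, mirroring the two GC charts. The record shape here
// is illustrative, not the Trace agent's actual wire format.
function aggregateGc(records) {
  const stats = {};
  for (const { kind, pauseMs } of records) {
    if (!stats[kind]) stats[kind] = { runs: 0, totalPauseMs: 0 };
    stats[kind].runs += 1;
    stats[kind].totalPauseMs += pauseMs;
  }
  return stats;
}

const sample = [
  { kind: 'scavenge', pauseMs: 1 },    // short, frequent cycle
  { kind: 'scavenge', pauseMs: 2 },
  { kind: 'mark-sweep', pauseMs: 35 }  // long, full collection
];
console.log(aggregateGc(sample));
// scavenge: 2 runs, 3 ms total; mark-sweep: 1 run, 35 ms total
```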
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/82348bb-garbageCollectorRun.png",
        "garbageCollectorRun.png",
        531,
        348,
        "#5a8abb"
      ]
    }
  ]
}
[/block]
It's broken up into two parts:
- Scavenge: fast by design, suitable for frequently occurring short GC cycles.
- Mark-sweep: pause times tend to be high, slowing down overall performance.

For further information on this topic, see [this article on garbage collection](https://strongloop.com/strongblog/node-js-performance-garbage-collection/).

Our other chart on the topic is **time spent doing garbage collection**.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/0bdcb10-garbageCollectorTimeSpent.png",
        "garbageCollectorTimeSpent.png",
        552,
        349,
        "#bccad4"
      ]
    }
  ]
}
[/block]
### Event loop

If you're not familiar with the Node.js event loop, check out this [video](https://www.youtube.com/watch?v=8aGhZQkoFbQ).

The key metrics of the event loop are the queue, which is the number of tasks waiting to be processed, and the event loop lag, which is the time it takes for a task to get processed.

We display the averaged event loop stats of your services.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/1ae1cd1-eventLoopQueue.png",
        "eventLoopQueue.png",
        534,
        332,
        "#5c8cbc"
      ]
    }
  ]
}
[/block]

[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/fa5e547-eventLoopLag.png",
        "eventLoopLag.png",
        541,
        342,
        "#618fbd"
      ]
    }
  ]
}
[/block]
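To get a feel for event loop lag, you can probe it yourself: schedule a timer and measure how much later than requested it actually fires. The `measureLagOnce` helper below is a minimal sketch, not the agent's implementation:

```javascript
// One-shot event loop lag probe: schedule a timer and measure how much
// later than requested it actually fires. When the loop is busy, queued
// work delays the timer and the measured lag grows.
// measureLagOnce is a hypothetical sketch, not the Trace agent's code.
function measureLagOnce(delayMs, cb) {
  const start = Date.now();
  setTimeout(() => {
    cb(Math.max(0, Date.now() - start - delayMs));
  }, delayMs);
}

measureLagOnce(10, (lag) => console.log(`event loop lag: ~${lag} ms`));

// Busy-wait ~50 ms to starve the event loop and make some lag visible.
const end = Date.now() + 50;
while (Date.now() < end) {}
```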