{"_id":"574e8b2b4be6f22200666761","__v":21,"project":"56a1f77442dfda0d00046285","user":"56a1f912d00f7d0d00c8efd7","category":{"_id":"574e870be892bf0e004fde0d","project":"56a1f77442dfda0d00046285","version":"56a1f77542dfda0d00046288","__v":0,"sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-06-01T06:56:11.733Z","from_sync":false,"order":2,"slug":"use-cases","title":"Use Cases"},"parentDoc":null,"version":{"_id":"56a1f77542dfda0d00046288","__v":9,"project":"56a1f77442dfda0d00046285","createdAt":"2016-01-22T09:33:41.397Z","releaseDate":"2016-01-22T09:33:41.397Z","categories":["56a1f77542dfda0d00046289","56a1fdf442dfda0d00046294","56a2079f0067c00d00a2f955","56a20bdf8b2e6f0d0018ea84","56a3e78a94ec0a0d00b39fed","56af19929d32e30d0006d2ce","5721f4e9dcfa860e005bef98","574e870be892bf0e004fde0d","5832fdcdb32d820f0072e12f"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-06-01T07:13:47.264Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[]},"auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"#1 - Check your applications metrics\"\n}\n[/block]\nFirst of all, head over to **Metrics page**. You will find information on the most important metrics, like\n* response time,\n* throughput,\n* system load,\n* memory usage,\n* garbage collector runs\n* time spent doing garbage collection,\n* eventloop lag,\n* eventloop queue.\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"#2 - Correlate response time with throughput\"\n}\n[/block]\nCheck and correlate your response time and throughput metrics. \n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/skL7dZ7S02X3fDqAQrYg_correlate-response-time-with-throughput-in-trace.png\",\n        \"correlate-response-time-with-throughput-in-trace.png\",\n        \"2412\",\n        \"1098\",\n        \"#059b84\",\n        \"\"\n      ],\n      \"caption\": \"Correlate response time with throughput in Trace\"\n    }\n  ]\n}\n[/block]\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"body\": \"If you see that not just the response time went up, but you have a lot more traffic too, you may only need to scale your services vertically.\"\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"#3 - Correlate response time with system load, memory, and garbage collection\"\n}\n[/block]\nCheck out whether the slowness of your system correlates with system load, memory usage and garbage collection metrics.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/oPW0jjiSQ9CixMlRsQb7_correlate-service-metrics-with-trace.png\",\n        \"correlate-service-metrics-with-trace.png\",\n        \"2368\",\n        \"1120\",\n        \"#5784c5\",\n        \"\"\n      ],\n      \"caption\": \"Correlate service metrics with Trace\"\n    }\n  ]\n}\n[/block]\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"body\": \"In some cases you will see that your process is running out of memory, and the garbage collection starts to do the heavy lifting. 
Since it runs in the same process, it affects the performance of your application, and it can be an indicator of a memory leak.\"\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"#4 - Mark your transactions\"\n}\n[/block]\nIf you are interested in code-level insights on why your application is slowing down, you can use the distributed tracing feature. \n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/vCOV5VXwRkmqOWrfCZeX_trace-list-of-distributed-stack-traces.png\",\n        \"trace-list-of-distributed-stack-traces.png\",\n        \"2476\",\n        \"590\",\n        \"#384e64\",\n        \"\"\n      ],\n      \"caption\": \"List of distributed stack traces\"\n    }\n  ]\n}\n[/block]\nTrace automatically collects stack traces of faulty transactions and puts them in a list on the **Trace list page**.\n[block:callout]\n{\n  \"type\": \"success\",\n  \"body\": \"To track down performance related issues, you can send a custom header to your application that marks the transaction for collection in Trace.\"\n}\n[/block]\n\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/PrpZFGUfR2S92A7YuIt3_x-must-collect-http-header.png\",\n        \"x-must-collect-http-header.png\",\n        \"1276\",\n        \"430\",\n        \"#8ca3d5\",\n        \"\"\n      ],\n      \"caption\": \"X-must-collect http header\"\n    }\n  ]\n}\n[/block]\nOnce you have sent the request, it may take a couple of minutes until it shows up in Trace. When it does, it will be shown using the blue information signal.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/ysBfv4VqSZeOlW04tAfo_custom-data-reported-into-trace.png\",\n        \"custom-data-reported-into-trace.png\",\n        \"1580\",\n        \"842\",\n        \"#9f6757\",\n        \"\"\n      ],\n      \"caption\": \"Custom data reported into Trace\"\n    }\n  ]\n}\n[/block]\nWhat you see here are events that occurred during the given request. You can switch to a visualized version using the **Timeline graph** button. You will end up with a graph similar to this one:\n\n\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/qcQJ5Ww0SbGwmXagWNmH_timeline-graph-in-trace.png\",\n        \"timeline-graph-in-trace.png\",\n        \"2378\",\n        \"1110\",\n        \"#1a354c\",\n        \"\"\n      ],\n      \"caption\": \"Timeline graph in Trace\"\n    }\n  ]\n}\n[/block]\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"body\": \"What you see here are the timings of the events and the microservice boundaries (if you have any).  The vertical lines represent services, while the vertical lines are the calls between them.\"\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"#5 - Add custom instrumentation to your services\"\n}\n[/block]\nThe purple icons on the transaction graphs are showing information that you, as a developer can add to the services.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/cOuaTjCuR8KtbK5xribk_custom_information_in_trace.png\",\n        \"custom_information_in_trace.png\",\n        \"2402\",\n        \"1230\",\n        \"#73140d\",\n        \"\"\n      ],\n      \"caption\": \"Custom information sent into Trace\"\n    }\n  ]\n}\n[/block]\nTo do this, use the `trace.report(‘event_name’, data)` method, by just adding this line to your application. 
This way you can instrument your codebase wherever you want. \n[block:callout]\n{\n  \"type\": \"info\",\n  \"body\": \"If you click on the **purple icons** on the **Timeline graph**, you will be able to see the data sent.\"\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"#6 - Set up alerts\"\n}\n[/block]\nOnce you fixed the issue using the previously mentioned features, make sure that you will be notified immediately when it happens again.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/rD9NIOtiQ7ivJG7V6oR0_alert-list-in-trace.png\",\n        \"alert-list-in-trace.png\",\n        \"2558\",\n        \"670\",\n        \"#2c3f55\",\n        \"\"\n      ],\n      \"caption\": \"Alert list in Trace\"\n    }\n  ]\n}\n[/block]\nYou can do that by going to the **Alert page**, and click the **Alert list** button on the top.\n\nTo create a new alert, click on **Create a new alert**. On the alert creation page you can set up new alerts for slow response times by selecting **Response time** under the conditions. \n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/q2uTViwBSUiysm4QKeWu_set-up-alerting-in-trace.png\",\n        \"set-up-alerting-in-trace.png\",\n        \"1994\",\n        \"750\",\n        \"#434050\",\n        \"\"\n      ],\n      \"caption\": \"Set up alerting in Trace\"\n    }\n  ]\n}\n[/block]\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"body\": \"You will be able to set up warning and critical alert levels, observe multiple services, and get notification through various channels, like Slack, Pagerduty, Webhook or Email.\"\n}\n[/block]","excerpt":"Applications can slow down for no apparent reason. \nThese are the steps you should perform to find the issue at hand.","slug":"investigate-slow-nodejs-apps","type":"basic","title":"Investigate slow Node.js apps"}

Investigate slow Node.js apps

Applications can slow down for no apparent reason. These are the steps you should take to track down the underlying issue.

[block:api-header] { "type": "basic", "title": "#1 - Check your applications metrics" } [/block] First of all, head over to **Metrics page**. You will find information on the most important metrics, like * response time, * throughput, * system load, * memory usage, * garbage collector runs * time spent doing garbage collection, * eventloop lag, * eventloop queue. [block:api-header] { "type": "basic", "title": "#2 - Correlate response time with throughput" } [/block] Check and correlate your response time and throughput metrics. [block:image] { "images": [ { "image": [ "https://files.readme.io/skL7dZ7S02X3fDqAQrYg_correlate-response-time-with-throughput-in-trace.png", "correlate-response-time-with-throughput-in-trace.png", "2412", "1098", "#059b84", "" ], "caption": "Correlate response time with throughput in Trace" } ] } [/block] [block:callout] { "type": "info", "body": "If you see that not just the response time went up, but you have a lot more traffic too, you may only need to scale your services vertically." } [/block] [block:api-header] { "type": "basic", "title": "#3 - Correlate response time with system load, memory, and garbage collection" } [/block] Check out whether the slowness of your system correlates with system load, memory usage and garbage collection metrics. [block:image] { "images": [ { "image": [ "https://files.readme.io/oPW0jjiSQ9CixMlRsQb7_correlate-service-metrics-with-trace.png", "correlate-service-metrics-with-trace.png", "2368", "1120", "#5784c5", "" ], "caption": "Correlate service metrics with Trace" } ] } [/block] [block:callout] { "type": "info", "body": "In some cases you will see that your process is running out of memory, and the garbage collection starts to do the heavy lifting. Since it runs in the same process, it affects the performance of your application, and it can be an indicator of a memory leak." } [/block] [block:api-header] { "type": "basic", "title": "#4 - Mark your transactions" } [/block] If you are interested in code-level insights on why your application is slowing down, you can use the distributed tracing feature. [block:image] { "images": [ { "image": [ "https://files.readme.io/vCOV5VXwRkmqOWrfCZeX_trace-list-of-distributed-stack-traces.png", "trace-list-of-distributed-stack-traces.png", "2476", "590", "#384e64", "" ], "caption": "List of distributed stack traces" } ] } [/block] Trace automatically collects stack traces of faulty transactions and puts them in a list on the **Trace list page**. [block:callout] { "type": "success", "body": "To track down performance related issues, you can send a custom header to your application that marks the transaction for collection in Trace." } [/block] [block:image] { "images": [ { "image": [ "https://files.readme.io/PrpZFGUfR2S92A7YuIt3_x-must-collect-http-header.png", "x-must-collect-http-header.png", "1276", "430", "#8ca3d5", "" ], "caption": "X-must-collect http header" } ] } [/block] Once you have sent the request, it may take a couple of minutes until it shows up in Trace. When it does, it will be shown using the blue information signal. [block:image] { "images": [ { "image": [ "https://files.readme.io/ysBfv4VqSZeOlW04tAfo_custom-data-reported-into-trace.png", "custom-data-reported-into-trace.png", "1580", "842", "#9f6757", "" ], "caption": "Custom data reported into Trace" } ] } [/block] What you see here are events that occurred during the given request. You can switch to a visualized version using the **Timeline graph** button. 
[block:api-header]
{
  "type": "basic",
  "title": "#6 - Set up alerts"
}
[/block]
Once you have fixed the issue using the features mentioned above, make sure that you will be notified immediately if it happens again.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/rD9NIOtiQ7ivJG7V6oR0_alert-list-in-trace.png",
        "alert-list-in-trace.png",
        "2558",
        "670",
        "#2c3f55",
        ""
      ],
      "caption": "Alert list in Trace"
    }
  ]
}
[/block]
You can do that by going to the **Alert page** and clicking the **Alert list** button at the top.

To create a new alert, click **Create a new alert**. On the alert creation page you can set up new alerts for slow response times by selecting **Response time** under the conditions.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/q2uTViwBSUiysm4QKeWu_set-up-alerting-in-trace.png",
        "set-up-alerting-in-trace.png",
        "1994",
        "750",
        "#434050",
        ""
      ],
      "caption": "Set up alerting in Trace"
    }
  ]
}
[/block]

[block:callout]
{
  "type": "info",
  "body": "You will be able to set up warning and critical alert levels, observe multiple services, and get notifications through various channels, like Slack, PagerDuty, webhook, or email."
}
[/block]
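If you go with the webhook channel, the receiving end can be a very small HTTP service. The sketch below assumes an Express app and makes no assumptions about the shape of the alert payload, so the handler simply logs whatever arrives and acknowledges it:

```javascript
// Sketch only: the endpoint path and port are assumptions; point the
// webhook URL you configure in Trace at this service.
const express = require('express');

const app = express();
app.use(express.json());

app.post('/trace-alerts', (req, res) => {
  // Log the incoming alert so you can inspect its structure,
  // then forward it to your own tooling (chat, ticketing, etc.).
  console.log('Alert received from Trace:', JSON.stringify(req.body));
  res.sendStatus(200);
});

app.listen(4000);
```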