From 184a7eb9187025bb43352687b0c0105e1631594d Mon Sep 17 00:00:00 2001 From: Sebastian Beltran Date: Tue, 19 Nov 2024 20:23:33 -0500 Subject: [PATCH] remove some references --- en/advanced/best-practice-performance.md | 38 +++++------------------- 1 file changed, 7 insertions(+), 31 deletions(-) diff --git a/en/advanced/best-practice-performance.md b/en/advanced/best-practice-performance.md index a65eb880c2..7d4c5beef1 100644 --- a/en/advanced/best-practice-performance.md +++ b/en/advanced/best-practice-performance.md @@ -211,7 +211,7 @@ In addition to restarting your app when it crashes, a process manager can enable * Gain insights into runtime performance and resource consumption. * Modify settings dynamically to improve performance. -* Control clustering (StrongLoop PM and pm2). +* Control clustering (pm2). The most popular process managers for Node are as follows: @@ -222,20 +222,9 @@ For a feature-by-feature comparison of the three process managers, see [http://s Using any of these process managers will suffice to keep your application up, even if it does crash from time to time. -However, StrongLoop PM has lots of features that specifically target production deployment. You can use it and the related StrongLoop tools to: - -* Build and package your app locally, then deploy it securely to your production system. -* Automatically restart your app if it crashes for any reason. -* Manage your clusters remotely. -* View CPU profiles and heap snapshots to optimize performance and diagnose memory leaks. -* View performance metrics for your application. -* Easily scale to multiple hosts with integrated control for Nginx load balancer. - -As explained below, when you install StrongLoop PM as an operating system service using your init system, it will automatically restart when the system restarts. Thus, it will keep your application processes and clusters alive forever. 
- #### Use an init system -The next layer of reliability is to ensure that your app restarts when the server restarts. Systems can still go down for a variety of reasons. To ensure that your app restarts if the server crashes, use the init system built into your OS. The two main init systems in use today are [systemd](https://wiki.debian.org/systemd) and [Upstart](http://upstart.ubuntu.com/). +The next layer of reliability is to ensure that your app restarts when the server restarts. Systems can still go down for a variety of reasons. To ensure that your app restarts if the server crashes, use the init system built into your OS. The main init system in use today is [systemd](https://wiki.debian.org/systemd). There are two ways to use init systems with your Express app: @@ -277,6 +266,7 @@ Restart=always [Install] WantedBy=multi-user.target ``` + For more information on systemd, see the [systemd reference (man page)](http://www.freedesktop.org/software/systemd/man/systemd.unit.html). ### Run your app in a cluster @@ -291,25 +281,11 @@ In clustered apps, worker processes can crash individually without affecting the #### Using Node's cluster module -Clustering is made possible with Node's [cluster module](https://nodejs.org/dist/latest-v4.x/docs/api/cluster.html). This enables a master process to spawn worker processes and distribute incoming connections among the workers. However, rather than using this module directly, it's far better to use one of the many tools out there that does it for you automatically; for example [node-pm](https://www.npmjs.com/package/node-pm) or [cluster-service](https://www.npmjs.com/package/cluster-service). - -#### Using StrongLoop PM - -If you deploy your application to StrongLoop Process Manager (PM), then you can take advantage of clustering _without_ modifying your application code.
- -When StrongLoop Process Manager (PM) runs an application, it automatically runs it in a cluster with a number of workers equal to the number of CPU cores on the system. You can manually change the number of worker processes in the cluster using the slc command line tool without stopping the app. - -For example, assuming you've deployed your app to prod.foo.com and StrongLoop PM is listening on port 8701 (the default), then to set the cluster size to eight using slc: - -```console -$ slc ctl -C http://prod.foo.com:8701 set-size my-app 8 -``` - -For more information on clustering with StrongLoop PM, see [Clustering](https://docs.strongloop.com/display/SLC/Clustering) in StrongLoop documentation. +Clustering is made possible with Node's [cluster module](https://nodejs.org/api/cluster.html). This enables a master process to spawn worker processes and distribute incoming connections among the workers. However, rather than using this module directly, it's far better to use one of the many tools out there that does it for you automatically; for example [node-pm](https://www.npmjs.com/package/node-pm) or [cluster-service](https://www.npmjs.com/package/cluster-service). #### Using PM2 -If you deploy your application with PM2, then you can take advantage of clustering _without_ modifying your application code. You should ensure your [application is stateless](http://pm2.keymetrics.io/docs/usage/specifics/#stateless-apps) first, meaning no local data is stored in the process (such as sessions, websocket connections and the like). +If you deploy your application with PM2, then you can take advantage of clustering _without_ modifying your application code. You should ensure your [application is stateless](https://pm2.keymetrics.io/docs/usage/specifics/#stateless-apps) first, meaning no local data is stored in the process (such as sessions, websocket connections and the like). 
When running an application with PM2, you can enable **cluster mode** to run it in a cluster with a number of instances of your choosing, such as matching the number of available CPUs on the machine. You can manually change the number of processes in the cluster using the `pm2` command line tool without stopping the app. @@ -345,7 +321,7 @@ Use a caching server like [Varnish](https://www.varnish-cache.org/) or [Nginx](h No matter how optimized an app is, a single instance can handle only a limited amount of load and traffic. One way to scale an app is to run multiple instances of it and distribute the traffic via a load balancer. Setting up a load balancer can improve your app's performance and speed, and enable it to scale more than is possible with a single instance. -A load balancer is usually a reverse proxy that orchestrates traffic to and from multiple application instances and servers. You can easily set up a load balancer for your app by using [Nginx](http://nginx.org/en/docs/http/load_balancing.html) or [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts). +A load balancer is usually a reverse proxy that orchestrates traffic to and from multiple application instances and servers. You can easily set up a load balancer for your app by using [Nginx](https://nginx.org/en/docs/http/load_balancing.html) or [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts). With load balancing, you might have to ensure that requests that are associated with a particular session ID connect to the process that originated them. This is known as _session affinity_, or _sticky sessions_, and may be addressed by the suggestion above to use a data store such as Redis for session data (depending on your application). For a discussion, see [Using multiple nodes](https://socket.io/docs/v4/using-multiple-nodes/).
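A minimal sketch of such an Nginx load balancer follows (not part of this patch; the upstream name and the local addresses are placeholders, and the linked Nginx docs cover the full range of balancing options):

```nginx
# Round-robin load balancing across two app instances.
upstream express_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    # ip_hash;  # one way to get sticky sessions, if your app needs them
}

server {
    listen 80;

    location / {
        proxy_pass http://express_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```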
@@ -353,4 +329,4 @@ With load balancing, you might have to ensure that requests that are associated A reverse proxy sits in front of a web app and performs supporting operations on the requests, apart from directing requests to the app. It can handle error pages, compression, caching, serving files, and load balancing among other things. -Handing over tasks that do not require knowledge of application state to a reverse proxy frees up Express to perform specialized application tasks. For this reason, it is recommended to run Express behind a reverse proxy like [Nginx](https://www.nginx.com/) or [HAProxy](http://www.haproxy.org/) in production. +Handing over tasks that do not require knowledge of application state to a reverse proxy frees up Express to perform specialized application tasks. For this reason, it is recommended to run Express behind a reverse proxy like [Nginx](https://www.nginx.com/) or [HAProxy](https://www.haproxy.org/) in production.
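To make that division of labor concrete, here is a hypothetical Nginx server block (not part of this patch; paths and ports are placeholders) that takes on compression and static-file serving and forwards everything else to the Express app:

```nginx
server {
    listen 80;

    # Compression handled by the proxy rather than by Express middleware.
    gzip on;

    # Serve static assets directly, without touching the Node process.
    location /static/ {
        root /var/www/my-app;
    }

    # All remaining requests go to the Express app.
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```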