PostgreSQL
PostgreSQL >= 9.5
It works with pg, knex and sequelize.
Since the table is created automatically during limiter creation, a ready callback must be provided as the second argument of new RateLimiterPostgres(opts, ready) to handle errors during table creation. The limiter can't work until the table is created. See the example below.
Rows that expired more than an hour ago are removed from the table every 5 minutes by setTimeout. Note: in AWS Lambda or a GCP function, call rateLimiter.clearExpired(Date.now() - 3600000) manually to remove expired rows.
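In a serverless environment the setTimeout-based cleanup never gets a chance to fire between short-lived invocations. A minimal sketch of the manual approach (the function and key names here are assumptions, not part of the library):

```javascript
// Sketch for a serverless handler: clear rows expired more than an hour ago
// before consuming a point, since the periodic cleanup does not run here.
async function consumeWithCleanup(rateLimiter, userKey) {
  await rateLimiter.clearExpired(Date.now() - 3600000);
  return rateLimiter.consume(userKey);
}
```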
If you want to create the table manually, set the tableCreated option to true. You can find the table structure here. In this case the ready callback can be omitted and there is no need to wrap creation in an async function.
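An options sketch for the manual-table case (the pool placeholder and table name are assumptions; match the tableName to whatever your migration created):

```javascript
// With tableCreated: true the limiter skips table creation,
// so the ready callback can be omitted.
const client = null; // replace with your pg Pool
const opts = {
  storeClient: client,
  tableCreated: true,       // table already exists, created by your migration
  tableName: 'rate_limits', // hypothetical name of the manually created table
  points: 5,
  duration: 1,
};
// const rateLimiter = new RateLimiterPostgres(opts); // no second argument needed
```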
See detailed options description here
Here is an example for a pure Node.js application. You can easily adapt it for any framework.
```javascript
// createRateLimiter.js file
const { RateLimiterPostgres } = require('rate-limiter-flexible');

module.exports = async (opts) => {
  return new Promise((resolve, reject) => {
    let rateLimiter;
    const ready = (err) => {
      if (err) {
        reject(err);
      } else {
        resolve(rateLimiter);
      }
    };
    rateLimiter = new RateLimiterPostgres(opts, ready);
  });
};
```
```javascript
const http = require('http');
const { Pool } = require('pg');
const createRateLimiter = require('./createRateLimiter');

const client = new Pool({
  host: '127.0.0.1',
  port: 5432,
  database: 'root',
  user: 'root',
  password: 'secret',
});

const opts = {
  storeClient: client,
  points: 5, // Number of points
  duration: 1, // Per second(s)
  // Custom options
  tableName: 'mytable', // if not provided, keyPrefix is used as the table name
  keyPrefix: 'myprefix', // must be unique for limiters with different purposes
};

createRateLimiter(opts)
  .then((rateLimiter) => {
    const srv = http.createServer((req, res) => {
      rateLimiter.consume('userId1')
        .then((rateLimiterRes) => {
          // There were enough points to consume
          res.end(rateLimiterRes.toString());
        })
        .catch((rejRes) => {
          if (rejRes instanceof Error) {
            // Some Postgres error
            // Never happens if `insuranceLimiter` is set up
            res.writeHead(500);
          } else {
            // Can't consume
            res.writeHead(429);
          }
          res.end(rejRes.toString());
        });
    });
    srv.listen(3002);
  })
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });
```
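When responding with 429, clients usually expect rate limit headers. A hypothetical helper (not part of the library) built from the documented msBeforeNext and remainingPoints fields of RateLimiterRes:

```javascript
// Maps a RateLimiterRes-like object to conventional rate limit headers.
// The header set and helper name are assumptions, not a library API.
function rateLimitHeaders(rateLimiterRes, points) {
  return {
    'Retry-After': String(Math.ceil(rateLimiterRes.msBeforeNext / 1000)),
    'X-RateLimit-Limit': String(points),
    'X-RateLimit-Remaining': String(rateLimiterRes.remainingPoints),
  };
}
```

It could be used in the rejection branch, e.g. res.writeHead(429, rateLimitHeaders(rejRes, opts.points)).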
By default, RateLimiterPostgres creates a separate table per keyPrefix for every limiter. If the tableName option is set, all limits are stored in one table.
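A sketch of two limiters sharing one table (the limiter purposes, table name, and limits are assumptions); distinct keyPrefix values keep their rows from colliding:

```javascript
// Both limiters write to the same table; keyPrefix separates their keys.
const loginOpts = { tableName: 'rate_limits', keyPrefix: 'login', points: 5, duration: 60 };
const apiOpts = { tableName: 'rate_limits', keyPrefix: 'api', points: 100, duration: 60 };
```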
Note: carefully test performance if your application limits more than 500 requests per second.
RateLimiterPostgres gets an internal connection from Sequelize or Knex to make raw queries. The connection is released after any query or transaction, so the workflow stays clean.
```javascript
const rateLimiter = new RateLimiterPostgres({
  storeClient: sequelizeInstance,
}, ready);
```

```javascript
const rateLimiter = new RateLimiterPostgres({
  storeClient: knexInstance,
  storeType: 'knex', // knex requires this option
}, ready);
```
The benchmark endpoint is a pure Node.js endpoint launched in node:10.5.0-jessie and postgres:latest Docker containers with 4 workers. The user key is a random number from 0 to 300. The endpoint is limited by RateLimiterPostgres with this config:
```javascript
new RateLimiterPostgres({
  storeClient: pgClient,
  points: 5, // Number of points
  duration: 1, // Per second(s)
});
```
```
Statistics    Avg       Stdev     Max
Reqs/sec      995.09    303.79    2010.29
Latency       7.48ms    5.30ms    51.60ms

Latency Distribution
  50%   5.25ms
  75%   8.07ms
  90%  16.36ms
  95%  21.85ms
  99%  29.42ms

HTTP codes:
  1xx - 0, 2xx - 8985, 3xx - 0, 4xx - 21024, 5xx - 0
```