Rate Limiting Endpoints
While Svix can handle however many messages you send us, your customers' endpoints may not be able to. This is why the Svix API supports rate limiting the calls made to your customers' endpoints.
This lets you send as many webhooks per second as you want without having to worry about overloading your customers' systems.
How does it work?
You can set a rate limit either on the application, in which case it applies to all of that application's endpoints, or on an individual endpoint, in which case it applies only to that endpoint.
The rate limit is defined as the maximum number of messages per second to send to the endpoint. Once the limit is reached, requests are throttled to keep the delivery rate at or under the limit.
Due to the nature of distributed systems, the actual rate of messages can sometimes be slightly above the enforced rate limit. For example, if you set a rate limit of 1,000 per second, an endpoint may occasionally receive messages at a rate of 1,050 or even higher.
Setting the limits
You can set the rate limit on either the application or each of its endpoints. The rate limit is enforced per endpoint, so limits set on the application propagate to its endpoints and can be overridden by endpoint-specific settings.
Setting application rate limits
You can set the rate limit on an application when creating or updating it using the rateLimit
property. The value is the number of messages per second.
- JavaScript
- Python
- Go
- Rust
- Java
- Kotlin
- Ruby
- C#
- CLI
- cURL
import { Svix } from "svix";
const svix = new Svix("AUTH_TOKEN");
const app = await svix.application.create({name: "Application name", rateLimit: 1000});
from svix.api import Svix, ApplicationIn
svix = Svix("AUTH_TOKEN")
app = svix.application.create(ApplicationIn(name="Application name", rate_limit=1000))
import (
svix "github.com/svix/svix-webhooks/go"
)
svixClient := svix.New("AUTH_TOKEN", nil)
app, err := svixClient.Application.Create(&svix.ApplicationIn{
Name: "Application name",
RateLimit: 1000,
})
use svix::api::{ApplicationIn, Svix, SvixOptions};
let svix = Svix::new("AUTH_TOKEN".to_string(), None);
let app = svix.application().create(ApplicationIn {
name: "Application name".to_string(),
rate_limit: Some(1000),
..ApplicationIn::default()
}).await?;
import com.svix.models.ApplicationIn;
import com.svix.models.ApplicationOut;
import com.svix.Svix;
Svix svix = new Svix("AUTH_TOKEN");
ApplicationOut app = svix.getApplication().create(new ApplicationIn().name("Application name").rateLimit(1000));
import com.svix.kotlin.models.ApplicationIn
import com.svix.kotlin.Svix
val svix = Svix("AUTH_TOKEN")
val applicationOut = svix.application.create(ApplicationIn(
name = "Application name",
rateLimit = 1000))
require "svix"
svix = Svix::Client.new("AUTH_TOKEN")
application_out = svix.application.create(Svix::ApplicationIn.new({
"name" => "Application name"
"rate_limit" => 1000}))
using Svix;
using Svix.Model;
using Svix.Models;
var svix = new SvixClient("AUTH_TOKEN", new SvixOptions("https://api.svix.com"));
await svix.Application.CreateAsync(new ApplicationIn(
name: "Application name",
rateLimit: 1000
));
export SVIX_AUTH_TOKEN='AUTH_TOKEN'
svix application create '{ "name": "Application name", "rateLimit": 1000 }'
curl -X POST "https://api.svix.com/api/v1/app/" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer AUTH_TOKEN" \
-d '{"name": "Application name", "rateLimit": 1000}'
Setting endpoint rate limits
Similar to application rate limits, you can set the rate limit on an endpoint when creating or updating it using the rateLimit
property. The value is the number of messages per second.
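For example, a minimal sketch using the JavaScript library could look like this (the application ID app_id and the endpoint URL are placeholders; an endpoint-level rateLimit overrides any limit set on the application):
import { Svix } from "svix";
const svix = new Svix("AUTH_TOKEN");
// "app_id" and the URL below are placeholders for your application ID and the customer's endpoint.
const endpoint = await svix.endpoint.create("app_id", {
  url: "https://example.com/webhook/",
  rateLimit: 1000,
});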
Intended use
The rate limit is intended to be used as a sort of fuse to protect your customers' servers from sudden peaks in message traffic that they may not be able to withstand. With rate limiting, such a peak is smoothed out and spread over multiple seconds, sending messages at a rate your customers can handle.
One important thing to remember with rate limiting is that if you limit an endpoint to 1,000 messages per second but consistently send messages above that limit (e.g. 2,000 per second), the queue will get congested and messages will be delivered with ever-increasing delays. For example, sending 2,000 messages per second against a 1,000 per second limit grows the backlog by 1,000 messages every second, so after a minute, deliveries lag by roughly a full minute. We can alert you of such issues and help you address them when they occur, though this is still something to be mindful of.