Setting Up Custom Webhooks for Fleet Events

Stripe popularized webhooks for payment notifications back in the early 2010s, and the concept still feels like one of the cleanest ideas in software: instead of asking a server “did anything happen?” over and over, the server just tells you when something happens. The simplicity is deceptive, though, because in practice setting up webhooks for fleet events can go sideways fast if you do not plan ahead, and I know this because I once made every possible mistake in a single afternoon.

During a proof-of-concept for a field team tracking project, I toggled on webhook notifications for every single event type the platform offered: arrivals, departures, geofence entries and exits, speed changes, connection drops, reconnections. All of them pointed at a single endpoint that forwarded everything to my email inbox. Within an hour, I had over two thousand notifications piling up, the inbox was completely buried, and my phone vibrated so aggressively it walked itself off my desk.

My immediate reaction was to turn everything off: webhooks, notifications, and alerts, all at once. Then I spent an entire weekend building custom filtering rules from scratch. A more measured approach, starting with just the events that actually mattered, would have saved me all that effort. That experience shaped how I think about webhook configuration now, and it is what I want to walk through with you here.

What Kinds of Events Can You Hook Into?

Most fleet and field team tracking platforms expose a handful of core event types through their webhook systems. The exact names vary, but the categories are consistent. You will typically see arrival and departure events (a team member reaches or leaves a designated location), geofence events (someone enters or exits a geographic boundary you have drawn), status change events (a member goes online, offline, or switches between active states), and location update events (periodic coordinate pushes at some configurable interval).

That last category is the one that burned me. Location updates can fire every few seconds depending on how the platform is configured, and if your webhook endpoint receives a POST request for every single GPS ping from every team member, the volume scales with both team size and update frequency. A modest team updating at a typical interval can easily generate hundreds of webhook calls per minute, which is not a notification system anymore but a firehose you cannot drink from.

The events worth subscribing to depend entirely on what you are building. If you run a delivery operation and need to know when drivers reach customer locations, arrival events are your bread and butter. If you manage field technicians spread across a metro area and need to know when someone wanders outside their assigned zone, geofence events make more sense. Platforms like Konvoyage focus on tracking people rather than vehicles, which means the event model maps cleanly to what your team members are actually doing rather than what a piece of hardware bolted to a dashboard is reporting.

There is also a less obvious event type worth knowing about: trip lifecycle events. These fire when a trip or route is created, started, completed, or cancelled. They tend to be low volume and high signal, which makes them excellent candidates for your first webhook subscription. If your dispatch system needs to know when a field tech finishes their current job so it can assign the next one, a trip-completed event is exactly what you want, and you will receive maybe a dozen of them per day instead of thousands.

Start with one or two event types and resist the urge to subscribe to everything at once.

Configuring Your Endpoint and Locking It Down

A webhook is just an HTTP POST request that a server sends to a URL you provide. That means your endpoint needs to be publicly reachable, which immediately raises security questions. Anyone who discovers your webhook URL could start sending fake payloads to it, and if your system trusts those payloads blindly, you have a real problem.

Three layers of protection are worth implementing from the start.

First, always use HTTPS for your webhook endpoint because payloads often contain location data, team member identifiers, and timestamps. Sending that over plain HTTP is broadcasting it to anyone who cares to listen, and there is no scenario where that tradeoff makes sense.

Second, validate a shared secret, which most webhook providers make straightforward through HMAC signature verification. The provider signs each payload with a secret key that only you and the provider know, and your endpoint verifies the signature before processing anything. If the signature does not match, you reject the request. The implementation looks something like this: you extract the signature header from the incoming request, compute your own HMAC-SHA256 digest of the request body using your secret, and compare the two. Matching signatures confirm the payload is legitimate and safe to process. For those curious about the deeper mechanics, HMAC works by running the message through a hash function twice with the key mixed in at each pass, which prevents length extension attacks that would break a naive “hash the key plus the message” approach. That subtlety is why you should never roll your own signature scheme.
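As a concrete sketch of that verification step, assuming the provider sends a hex-encoded HMAC-SHA256 digest of the raw request body (the header name and encoding here are assumptions; check your provider’s docs for the real ones):

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Return True only if the payload was signed with our shared secret.

    `signature_header` is assumed to be a hex-encoded HMAC-SHA256 digest,
    e.g. from an X-Signature header -- adjust to your provider's scheme.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information during the comparison.
    return hmac.compare_digest(expected, signature_header)
```

Note the use of `hmac.compare_digest` rather than `==`: a plain string comparison can leak how many leading characters matched, which an attacker can exploit one byte at a time.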

Third, consider IP allowlisting if your infrastructure supports it. This adds a network-level gate on top of the application-level signature check. Not every provider publishes their outbound IP ranges, but when they do, it is worth configuring.

One thing people often overlook: your endpoint needs to respond quickly. Most webhook providers expect a success response within a few seconds, and if your endpoint takes too long because it is doing heavy processing synchronously, the provider may assume delivery failed and retry, which creates duplicate events. Accept the payload, return a success status immediately, and process the data asynchronously in a background worker or queue. This pattern, sometimes called “acknowledge and defer,” keeps your webhook receiver reliable even when downstream processing is slow or temporarily broken.
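A minimal sketch of that acknowledge-and-defer shape, using a plain in-process queue as a stand-in for whatever worker or message broker you actually run:

```python
import json
import queue

# Stand-in for a real job queue (Celery, SQS, a worker pool, etc.).
jobs = queue.Queue()

def handle_webhook(raw_body: bytes) -> int:
    """The request-path handler: do the bare minimum, then acknowledge."""
    jobs.put(json.loads(raw_body))
    return 200  # success is returned before any heavy processing happens

def process_next() -> None:
    """A background worker loops over this, doing the slow work
    well after the 200 has already been sent."""
    event = jobs.get()
    print("processing", event.get("type"))
    jobs.task_done()
```

The key property is that `handle_webhook` never blocks on downstream systems, so the provider sees a fast success even when your database or CRM is slow.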

It is also worth understanding what your provider’s retry policy looks like. Most providers use exponential backoff, retrying after progressively longer intervals and eventually giving up altogether. If your endpoint goes down for maintenance and comes back online, you might receive a burst of retried payloads all at once, so your handler needs to survive that burst without falling over, and it needs to deduplicate events that were already successfully processed before the outage.

Handling Payloads Without Losing Your Mind

Webhook payloads are typically JSON, and they usually follow a consistent structure within a given platform. A typical fleet event payload contains an event type identifier, a timestamp, the relevant entity (team member ID, trip ID, or location ID), and the event-specific data. For an arrival event, that might include the destination coordinates and the member’s actual arrival coordinates so you can compare precision.
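For concreteness, here is what parsing such a payload might look like. Every field name below is hypothetical, since the actual schema depends entirely on your platform:

```python
import json

# Hypothetical arrival-event payload; real field names vary by provider.
raw = b"""{
  "type": "arrival",
  "event_id": "evt_123",
  "timestamp": "2024-05-01T14:03:22Z",
  "member_id": "tm_42",
  "data": {
    "destination": {"lat": 40.7128, "lng": -74.0060},
    "actual": {"lat": 40.7131, "lng": -74.0059}
  }
}"""

event = json.loads(raw)
# The destination vs. actual coordinates let you gauge arrival precision.
destination = event["data"]["destination"]
actual = event["data"]["actual"]
```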

Have you ever received a webhook payload and realized you had no idea what triggered it? That happens more than people admit, especially during initial setup when you are subscribing to multiple event types and they all land on the same endpoint. The fix is routing, and it is simpler than it sounds.

You can either use separate endpoint URLs for different event types (like /webhooks/arrivals and /webhooks/geofence), or use a single endpoint with a router that inspects the event type field in the payload and dispatches accordingly. I prefer the single-endpoint approach with internal routing because it keeps your webhook configuration in the provider’s dashboard simple, and it centralizes your logging.

Here is a quick comparison of how the two routing approaches stack up:

| Approach | Pros | Cons |
| --- | --- | --- |
| Separate endpoints per event | Clean separation; easy to disable one type | More URLs to manage; harder to correlate cross-event data |
| Single endpoint with routing | Centralized logging; one config in provider | Routing logic lives in your code; slightly more complex handler |
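The single-endpoint dispatch I prefer can be as small as a dictionary lookup. The event-type strings and handler names here are made up for illustration; substitute whatever your provider actually sends:

```python
import json

handled = []  # records which handlers ran, for demonstration

# Hypothetical handlers; real ones would write to your database or CRM.
def handle_arrival(event: dict) -> None:
    handled.append("arrival")

def handle_geofence(event: dict) -> None:
    handled.append("geofence")

# Hypothetical event-type names; check your provider's docs for the real ones.
HANDLERS = {
    "arrival": handle_arrival,
    "geofence.enter": handle_geofence,
    "geofence.exit": handle_geofence,
}

def route(raw_body: bytes) -> bool:
    """Dispatch a payload to its handler; return False for unknown types."""
    event = json.loads(raw_body)
    handler = HANDLERS.get(event.get("type"))
    if handler is None:
        return False  # unknown type: log it, but still acknowledge the POST
    handler(event)
    return True
```

One nice property of the dictionary approach is that subscribing to a new event type later is a one-line change, and unknown types degrade gracefully instead of crashing the handler.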

Whichever path you choose, log every incoming payload before you process it, capturing it raw, unmodified, and timestamped. When something breaks in the middle of the night, those logs are the difference between a quick fix and a multi-hour guessing game. I learned this after an incident where our arrival event handler silently dropped payloads with an unexpected field, and we had no record of what the original data looked like.

Idempotency matters just as much as logging does. Webhooks can and will be delivered more than once, so if you are recording arrival events in a database, make sure your insert logic handles duplicates gracefully, either by using a unique constraint on the event ID or by checking for existence before writing. Processing the same arrival event twice might not seem catastrophic, but if that event triggers a customer notification, your customer just got two “your technician has arrived” messages, and that erodes trust fast.
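The unique-constraint approach is usually a few lines. A sketch using SQLite, where `INSERT OR IGNORE` turns a redelivered event into a no-op instead of an error (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE arrivals (event_id TEXT PRIMARY KEY, member_id TEXT)")

def record_arrival(event_id: str, member_id: str) -> bool:
    """Insert an arrival event; return True only on first delivery."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO arrivals (event_id, member_id) VALUES (?, ?)",
        (event_id, member_id),
    )
    conn.commit()
    # rowcount is 0 when the insert was ignored as a duplicate.
    return cur.rowcount == 1
```

The boolean return is what lets you gate side effects: only send the “your technician has arrived” notification when `record_arrival` reports a first delivery.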

One pattern that has saved me repeatedly: maintain a small in-memory set of recently processed event IDs, backed by a database check for older events. The in-memory check catches the most common duplicate scenario where rapid retries arrive within seconds, and the database check catches the rarer case where a retry arrives hours later. This two-tier approach keeps your handler fast for the common case while still being correct for edge cases.
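A sketch of that two-tier check, with a bounded ordered dict as the fast tier and a plain set standing in for the database lookup:

```python
from collections import OrderedDict

class Deduper:
    """Two-tier duplicate check: fast in-memory tier, then a slower store."""

    def __init__(self, capacity: int = 1000):
        self.recent = OrderedDict()  # insertion-ordered recent event IDs
        self.capacity = capacity
        self.archive = set()         # stand-in for a database existence check

    def seen(self, event_id: str) -> bool:
        if event_id in self.recent:
            return True  # tier 1: rapid retry arriving within seconds
        if event_id in self.archive:
            return True  # tier 2: retry arriving hours later
        self.recent[event_id] = None
        if len(self.recent) > self.capacity:
            # Evict the oldest ID from memory into the durable store.
            evicted, _ = self.recent.popitem(last=False)
            self.archive.add(evicted)
        return False
```

In production you would write every ID to the database up front and treat the in-memory tier purely as a cache, but the shape of the check is the same.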

Testing Without Breaking Production

Confession: the first time I tested webhooks in production, I did not mean to. I had configured the production endpoint URL in what I thought was a staging environment, and suddenly real team member location data was hitting my test handler, which was printing everything to stdout in a terminal window my coworker was watching over my shoulder. Embarrassing does not begin to cover it.

Use a tool like webhook.site or ngrok during development. These give you a publicly reachable URL that you can point your webhook configuration at, and they log every incoming request with full headers and body. You can inspect payloads, verify signatures, and test your parsing logic without touching your actual infrastructure.

Once your handler code is working against test payloads, deploy it to a staging environment with its own webhook configuration. Keep production and staging webhook endpoints completely separate, and never share a URL between environments. The production-in-staging mistake I described above happened specifically because I reused a URL, thinking I would “just change it later.” Later never comes when things are working.

For debugging live webhooks after deployment, build in a replay mechanism. Store every raw payload you receive (you are already logging them, right?), and give yourself the ability to re-process any stored payload through your handler. This turns debugging from “wait for the next event and watch what happens” into “grab the payload that caused the problem and run it through the handler locally.” It is a massive time saver, and it pairs well with writing unit tests because once you have a real payload that exposed a bug, you can turn it into a test fixture so the same class of issue never reaches production again.
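The replay mechanism itself can be tiny, because it reuses the handler you already have. A minimal sketch, with a list standing in for your raw-payload table:

```python
import json

stored = []     # stand-in for a persistent table of raw payloads
processed = []  # records what the handler did, for demonstration

def handle(raw: bytes) -> None:
    """Your normal processing logic; hypothetical field names."""
    event = json.loads(raw)
    processed.append(event["type"])

def receive(raw: bytes) -> None:
    stored.append(raw)  # log the raw payload before parsing anything
    handle(raw)

def replay(index: int) -> None:
    """Re-run a stored payload through the current handler, e.g. after a fix."""
    handle(stored[index])
```

Because `replay` calls the same `handle` as the live path, a payload that exposed a bug yesterday exercises today’s fixed code exactly as a fresh delivery would.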

Something I genuinely enjoy about webhook debugging is that it forces you to think about your system as a receiver rather than a requester. Most of the code we write initiates actions, but webhooks flip that dynamic entirely, and the interesting question becomes “what do I do when the world tells me something happened?” That shift in perspective makes you better at event-driven architecture in general.

Build yourself a simple health dashboard for your webhook receiver, even if it is just a page that shows the last batch of events received, their types, and whether they were processed successfully. When a stakeholder asks “are we receiving fleet events?” you want to answer in seconds, not minutes. A quick glance at timestamps tells you whether events are flowing, and gaps in the timeline tell you when they stopped.
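Even the data model behind such a dashboard can start as a sketch like this, assuming you track little more than a timestamp, the event type, and a success flag per delivery:

```python
from datetime import datetime, timezone

received = []  # (when, event type, processed ok) per delivery

def record(event_type: str, ok: bool) -> None:
    received.append((datetime.now(timezone.utc), event_type, ok))

def health_summary(last_n: int = 20) -> str:
    """One line answering 'are we receiving fleet events?'"""
    recent = received[-last_n:]
    if not recent:
        return "no events received"
    failures = sum(1 for _, _, ok in recent if not ok)
    newest = recent[-1][0].isoformat(timespec="seconds")
    return f"{len(recent)} recent events, {failures} failed, last at {newest}"
```

Render that string on a page and you have the seconds-not-minutes answer; gaps between timestamps are your signal that delivery stopped.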

Whenever you are integrating webhooks with a fleet tracking tool and want to connect arrival events to your dispatch system or your CRM, think carefully about the data flow before you write the glue code. Map out which event types feed which downstream systems, what happens when an event is delayed or duplicated, and what your fallback is if the webhook delivery stops entirely. Having a trip replay capability as a backup means you are not entirely dependent on real-time hooks for operational visibility.

What is the first fleet event you would actually want a webhook for, and what would you do with it when it arrived?
