How to Use Braking and Acceleration Data for Driver Coaching

A sudden brake at highway speed sends a spike through your telematics dashboard before the driver’s foot has even fully left the pedal, routed through a moving-average filter on the accelerometer stream, compared against a g-force threshold the vendor quietly chose for you, and filed as a single red row in a weekly harsh event report that somebody, eventually, will turn into a coaching conversation. The gap between the row showing up and that conversation actually happening is where most fleet coaching programs quietly fall apart. I have been reading these reports on my own driving for years, and I am still honestly figuring out how to separate the rows that describe a habit from the rows that describe the world happening at me.

That ambiguity is the whole problem in miniature, and it is what I want to talk about here.

What follows is how I read a harsh event report now, how I try to tell a pattern from a one-off, what a before-and-after comparison needs to look like to mean anything, and how the coaching conversation itself should be shaped so the driver walks out feeling helped instead of audited.

The habit that made me take this seriously

I drive a personal vehicle with a connected-services app, and after long trips I have a small ritual of opening MyHyundai with Bluelink on my phone and pulling up the driving score summary. The first thing I look at is not the overall score, because the overall score is a vendor composite designed to fit on a watch face and is almost useless as a coaching signal. I scroll past it and look at the harsh brake count broken out by day, because a single bad stretch of road can drag an entire week into the red even when the rest of my driving was clean, and the daily view is the only one that lets me remember which day did the damage. Then I try to reconstruct what actually happened on the day the count spiked, because the reconstruction is the only thing that turns the number into information I can act on, and there is more on that reconstruction habit in our writeup on using trip replay for fleet ops investigations. Sometimes the memory comes back fully formed, and sometimes I cannot remember at all, and which of those two things happens is the most useful signal the whole exercise gives me.

When the memory comes back, it is usually a situation: the rideshare driver who stopped for a pickup with no signal, the cardboard box that appeared under my wheels on a rural two-lane, the panicked deer that made me briefly contemplate the physics of my front bumper. Those are not habits. They are the world throwing things at me, and no amount of coaching is going to make them stop happening.

When I cannot remember, that is when I start to worry, because a forgettable harsh brake is rarely forgettable because it was minor; it is forgettable because it did not feel unusual to me in the moment, which is another way of saying it felt like something I do on autopilot. That is the category worth paying attention to, and it is also the category the data cannot flag for me, because a tailgating-induced brake and a deer-induced brake look identical in the log. If I cannot reliably separate my own habits from my own situational responses with the advantage of having been the person actually sitting in the seat, imagine what it is like for a safety lead reading hundreds of driver rows a week without any of that context.

Reading the report like it matters

A typical harsh event report gives you a driver identifier, a timestamp, a location, a severity bucket, sometimes a video clip, and a delta describing the magnitude of the deceleration, and that is the raw material for a decision you still have to make. The trap almost everybody falls into is treating each row as a self-contained verdict, as if the platform had already done the interpretation for you and your only job was to deliver the bad news to whoever earned the row. The interpretation has not been done for you, and doing it well starts with looking at the report in aggregate before you look at any individual row.

Cluster by location before you cluster by driver

If several of your drivers register harsh braking events at the same intersection, the intersection itself is the problem, and you will not coach that away without burning trust that took months to build. Group events by geohash first and by driver second, because any cluster that pulls in multiple unrelated drivers is a route, signage, or scheduling issue rather than a coaching issue, and the upstream fix is almost always cheaper and more durable than a round of nearly identical conversations with the people who happen to drive through it.
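To make the ordering concrete, here is a minimal sketch of location-first grouping. The event tuples, field layout, and the rounding-based location bucket are all illustrative assumptions, not any vendor's schema; a real pipeline would more likely use a proper geohash or H3 index instead of rounded coordinates.

```python
from collections import defaultdict

# Hypothetical event records: (driver_id, lat, lon). Made-up values for the demo.
events = [
    ("d1", 40.7132, -74.0061),
    ("d2", 40.7128, -74.0059),
    ("d3", 40.7131, -74.0060),
    ("d1", 41.2033, -77.1946),
    ("d1", 41.2031, -77.1947),
]

def cell(lat, lon, precision=3):
    """Coarse location bucket (~100 m); a stand-in for a real geohash/H3 cell."""
    return (round(lat, precision), round(lon, precision))

by_location = defaultdict(set)
for driver, lat, lon in events:
    by_location[cell(lat, lon)].add(driver)

# Cells hit by multiple unrelated drivers point at the road, not the people.
infrastructure_suspects = {loc for loc, drivers in by_location.items() if len(drivers) >= 2}
coaching_candidates = {loc for loc, drivers in by_location.items() if len(drivers) == 1}
```

In this toy data, three different drivers brake hard in the same cell, so that cell lands in the infrastructure bucket and never becomes three separate coaching conversations.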

Normalize, then ask whether this is actually a pattern

Drivers on different route mixes generate wildly different event rates, and a courier on dense urban routes will look worse than an interstate runner every day, not because the courier is worse but because city blocks produce more braking per mile than open highway. Always divide by distance or driving hours before ranking anyone. The deeper question, once you have normalized, is whether a driver’s elevated rate represents a pattern or a cluster of bad luck, and I owe the spirit of my answer to driver coaching research out of the Virginia Tech Transportation Institute. Their studies keep reaching the same conclusion: frequency and clustering of harsh events are far more predictive of crash risk than any individual event, and monitoring on its own does not change behavior unless it is paired with a real accountability conversation afterward. Virginia Tech research associate Andrew Miller, commenting on the institute’s coaching work, said that monitoring systems on their own were not particularly effective, but with the inclusion of some form of accountability, behavior actually changed, and the full VTTI writeup is worth reading if you want the methodology.
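The normalization itself is a one-liner, but it is worth seeing how completely it flips a raw-count ranking. The driver names and tallies below are invented for illustration.

```python
# Hypothetical per-driver tallies; field names are illustrative.
drivers = {
    "urban_courier":  {"harsh_events": 12, "miles": 480},
    "interstate_run": {"harsh_events": 5,  "miles": 2100},
}

def rate_per_100mi(d):
    """Harsh events per 100 miles driven, the unit you can compare across route mixes."""
    return 100 * d["harsh_events"] / d["miles"]

ranked = sorted(drivers, key=lambda name: rate_per_100mi(drivers[name]), reverse=True)
# Raw counts say 12 vs 5; the rates say 2.5 vs ~0.24 per 100 mi. Same ranking
# here, but only the rate tells you *how much* worse, and on other data the
# raw-count ranking can invert entirely.
```

Normalizing by driving hours instead of miles works the same way and is often fairer for fleets that mix idle-heavy urban work with long-haul runs.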

The practical translation for a fleet lead looks roughly like this. A single harsh event, even a severe one, is almost always situational, and treating a one-off as a coaching case makes drivers feel surveilled and defensive, which is the exact posture you do not want them in when you are trying to change behavior.

A pattern is what you coach, and a real pattern has three signatures you should be able to name before you walk into the room with the driver. The first is repetition across different contexts: the same driver generates harsh brakes on different routes, in different weather and traffic, where the environment is not the constant and the person behind the wheel is. The second is co-occurrence of braking and acceleration events in a tight loop, because drivers who tailgate accelerate hard into closing gaps and then brake hard when those gaps close faster than expected, and that cycle is a following-distance problem showing up in two columns. The third is a rolling-average trend line that does not flatten, because weekly variance is not interesting on its own, but a trend climbing for over a month on a consistent route mix is telling you something the individual rows cannot. Anything that does not hit at least two of those signatures is almost certainly a one-off, and you should file it and save your relational capital for the patterns that actually need the conversation.
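The third signature, a rolling average that keeps climbing, is easy to check mechanically. This is a minimal sketch with a made-up window size and made-up weekly rates; the threshold for "climbing" is a judgment call your fleet would tune.

```python
def rolling_mean(xs, window):
    """Trailing moving average over a series of weekly event rates."""
    return [sum(xs[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(xs))]

def trend_is_climbing(weekly_rates, window=4):
    """True if the smoothed rate ends higher than it started; noise-tolerant
    only to the extent the window smooths it out."""
    smoothed = rolling_mean(weekly_rates, window)
    return len(smoothed) >= 2 and smoothed[-1] > smoothed[0]

trend_is_climbing([1.0, 1.2, 1.1, 1.4, 1.5, 1.7, 1.8, 2.0])  # steady climb: True
trend_is_climbing([2.5, 2.0, 2.4, 2.1, 2.3, 2.2, 2.1, 2.0])  # noisy, not climbing: False
```

The point is not the specific arithmetic; it is that "a trend climbing for over a month" can be a defined, repeatable test rather than an impression you form while scrolling.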

Before and after, done honestly

Most fleets run before-and-after comparisons badly, and I say that with affection because I have done them badly too. The usual failure mode is to pull a driver’s raw harsh event count for the month before a coaching conversation, pull it again for the month after, and either celebrate or despair based on the difference, and that comparison is almost entirely noise once you account for route mix changes, weather, schedule churn, vacation days, and the simple regression to the mean that pulls any outlier week back toward the driver’s long-run average whether you intervened or not. The honest version requires normalization to an event rate instead of a raw count, filtering the comparison to roughly comparable route types on both sides of the intervention, and giving both windows enough time to smooth out week-to-week noise.

Here is what a real comparison can look like, kept inline so I do not have to pretend a tabular layout will render consistently in every feed reader. A driver was averaging roughly two and a half harsh brakes per hundred miles in the weeks before a coaching conversation about late braking on descending grades, and afterward, on a similar route mix, that rate dropped to under one per hundred miles, with most of the remaining events clustering around a construction zone that was added to the route mid-period. That is a real improvement, the construction-zone cluster is almost certainly not coachable at all, and knowing when to stop pushing on a driver who has already adjusted is part of the skill the dashboard will never teach you.
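A comparison like that one can be computed honestly in a few lines, provided you restrict both windows to comparable route types. The trip tuples below are invented to mirror the numbers in the example above; the field layout is an assumption, not a real export format.

```python
# Hypothetical trip logs: (days relative to coaching date, route_type,
# harsh_brakes, miles). Values invented for the demo.
trips = [
    (-20, "mountain", 6, 210), (-12, "mountain", 5, 190), (-5, "urban", 3, 60),
    (  6, "mountain", 1, 200), ( 14, "mountain", 2, 220), ( 20, "urban", 2, 55),
]

def rate(window, route_type):
    """Events per 100 miles, restricted to one route type for a fair comparison."""
    subset = [t for t in trips if t[1] == route_type and t[0] in window]
    events = sum(t[2] for t in subset)
    miles = sum(t[3] for t in subset)
    return 100 * events / miles

before = rate(range(-30, 0), "mountain")  # 2.75 per 100 mi
after  = rate(range(0, 31), "mountain")   # ~0.71 per 100 mi
```

Note what the route-type filter quietly does: the urban trips never enter either window, so a schedule change that shifted the driver toward city work cannot masquerade as a coaching failure.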

The technical aside I promised

One thing that trips almost everyone up when they first pull telematics data seriously is that most platforms apply a low-pass filter to the raw accelerometer stream before calling anything a harsh event, because raw accelerometer data is absurdly noisy and every pothole, speed bump, and door slam shows up as a spike that would otherwise flood the report with false positives. The filter is typically a moving average over a few hundred milliseconds, which means the g-force threshold the vendor shows you in the UI is applied to a filtered peak rather than the instantaneous peak that actually occurred, and the instantaneous peak during the same brake can be meaningfully higher than what the dashboard reports. The consequence for coaching is that comparing harsh event counts across two telematics vendors is not apples to apples; it is two different filters over two different window sizes producing two different counts from identical driving, and once I understood that I stopped being surprised when my car’s app flagged events my friend’s aftermarket tool did not flag in the same vehicle on the same road.
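The filtered-versus-instantaneous gap is easy to demonstrate on synthetic data. The trace below is entirely made up: a brief 0.55 g transient riding on a sustained ~0.35 g brake, sampled at a hypothetical 100 Hz, smoothed with a 300 ms moving average of the kind vendors commonly apply.

```python
# Synthetic 100 Hz deceleration trace (in g): a 30 ms spike of 0.55 g
# on top of a sustained 0.35 g brake, with quiet driving on either side.
samples = [0.05] * 10 + [0.35] * 20 + [0.55] * 3 + [0.35] * 20 + [0.05] * 10

def moving_average(xs, window):
    """The kind of low-pass filter applied before thresholding harsh events."""
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]

raw_peak = max(samples)                            # 0.55 g instantaneous
filtered_peak = max(moving_average(samples, 30))   # ~300 ms window at 100 Hz

# filtered_peak comes out around 0.37 g: the 0.55 g transient is diluted
# across the window, so a 0.5 g dashboard threshold would never fire here,
# while a vendor with a shorter window might flag the same brake.
```

Run the same trace through two different window sizes and you get two different "peaks" from identical driving, which is the whole cross-vendor comparison problem in five lines.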

The coaching conversation itself

A harsh event report is a prompt, not a script, and if you walk into a coaching conversation and read data at the driver, you have already lost the room before the first sentence is finished. Drivers carry context you do not have, they remember specific moments you are asking them about, and some of the time they will be correct that the brake you are asking about was the right response to a situation you cannot see from a row in a report. The point of bringing the data is not to convict anyone; it is to start a shared conversation about a trajectory that both of you can see on the same screen.

Open with the trend rather than the incident, because something like “your harsh brake rate has been climbing over the last few weeks and I want to understand what is changing” is neutral, it is about the trajectory rather than the driver’s character, and it invites an explanation instead of triggering defense. Ask before you explain, because half the time the driver already knows what is going on and has been thinking about it longer than you have, so open questions about route changes and tight stops will often hand you the coaching answer for free. If you have video, show exactly one clip rather than a reel of the driver’s worst moments, because the brain can only productively examine one moment at a time and a supercut reads as an attack regardless of what you intended. Watch it together, ask the driver to walk you through what they were seeing in the seconds before the brake, and let memory and telemetry inform each other honestly rather than treating the video as evidence at a hearing. End with exactly one specific change rather than a checklist, because behavior change does not scale by the number of items you list, and a single measurable commitment with a clear follow-up date is what actually moves the trend line.

When that follow-up date arrives, you have to actually look at the data together again, because the driver will remember whether you followed through, and the next conversation’s seriousness is set entirely by whether this one had a real checkpoint or turned out to be bureaucratic theater that nobody ever revisited. Drivers can read theater from across a parking lot, and the moment they decide a coaching program is performance, you have lost the ability to coach them at all. It is also worth noting that the coaching records you produce this way carry value outside of safety outcomes, because insurers will trade better rates for documented evidence of a working monitoring program, which is the argument we made in a separate writeup on using fleet data in insurance negotiations.

I still open the Bluelink app after long drives, and I still cannot always tell, even for myself, which events were my fault and which were the world doing its thing, but what I have learned is that the answer for any individual event does not actually matter all that much, because what matters is whether the rolling trend is flat, climbing, or falling, and whether the clusters line up with routes I drive regularly or with trips that were genuine one-offs for weird reasons. The same thing is true at fleet scale, except the stakes are higher and the relational cost of getting it wrong is measured in driver turnover rather than my own bruised ego. The short version: do not coach the row, do not even coach the week, coach the shape, and when the shape is ambiguous, be willing to say out loud that you are not sure this is a pattern yet and you would like to check back in a couple of weeks rather than inventing a certainty the data has not given you.

Which brings me back to that sudden brake at highway speed from the first sentence. The spike through the dashboard is still just a cluster of accelerometer readings passed through a filter somebody chose for a reason they probably never wrote down, and what it means depends entirely on what happened in the minute before it, who else in your fleet has registered the same spike at the same corner, whether this driver has a history at that spot or is passing through for the first time, and whether the conversation you have about it treats the driver as a collaborator or a suspect. The red row is only the invitation to pay attention, and the coaching is whatever you build around it afterward.
