Perfect timing! ⏰ Asterfusion's latest blog is a game-changer for anyone in broadcast, media, or 5G.
We're talking nanosecond-perfect timing with SONiC-based PTP switches! ⏱️
This guide breaks down everything you need to know about deploying Precision Time Protocol (PTP) to achieve nanosecond-level clock synchronization. It's a massive leap from the millisecond-level precision of traditional NTP.

Our SONiC-based PTP switches are perfect for:
Seamless A/V alignment in broadcast 📺
Coordinating radio units in 5G & O-RAN networks 📶
Ready to achieve perfect timing? See how Asterfusion is making it happen!

🔗 https://cloudswit.ch/blogs/how-to-deploy-ptp-network-switches-with-sonic/

#PTP #SONiC #NetworkSwitch #Timing #Synchronization #5G #Broadcast #O-RAN #Asterfusion #PrecisionTimeProtocol

How To Deploy PTP Network Switches With SONiC

Achieve precise network time with Asterfusion's PTP network switches. SONiC campus switches featuring GNSS Grandmaster, Transparent & Boundary Clocks, and multi-profile support (SMPTE, G.8275.1/.2) ensure nanosecond accuracy for critical telecom, media, & industrial applications.

Asterfusion Data Technologies
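For a sense of what the G.8275.1 telecom profile mentioned above asks of a Linux endpoint, linuxptp ships an example profile along these lines (a sketch adapted from linuxptp's G.8275.1.cfg, not Asterfusion's documented setup):

```
# ptp4l client config in the style of linuxptp's G.8275.1 example profile
[global]
dataset_comparison              G.8275.x   # telecom alternate BMCA
G.8275.defaultDS.localPriority  128
domainNumber                    24         # G.8275.1 uses PTP domains 24-43
network_transport               L2         # Ethernet multicast, full on-path support
logAnnounceInterval             -3         # 8 Announce messages per second
logSyncInterval                 -4         # 16 Sync messages per second
logMinDelayReqInterval          -4
```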

Okay, it's time for the big #ntp and #ptp wrap-up post. My week-long timing project spiraled out of control and turned into a two-month monster, complete with 7 (ish?) GPS timing devices, 14 different test NICs, and a dozen different test systems.

What'd I learn along the way? See https://scottstuff.net/posts/2025/06/10/timing-conclusions/ for the full list (and links to measurements and experimental results), but the top few are:

1. It's *absolutely* possible to get single-digit nanosecond time syncing with NTP between a pair of Linux systems with Chrony in a carefully constructed test environment. Outside of a lab, 100-500 ns is probably more reasonable with NTP on a real network, and even that requires carefully selected NICs. But single-digit nanoseconds *are* possible. NTP isn't just for millisecond-scale time syncing.
2. Generally, PTP on the same hardware shows similar performance to NTP in a lab setting, with a bit less jitter. I'd expect it to scale *much* better in a real network, though. However, PTP mostly requires higher-end hardware (especially switches) and a bit more engineering work (see the sketch after this list). Plus, many older NICs just aren't very good at PTP (especially ConnectX-3s).
3. Intel's NICs, *especially* the E810 and, to a lesser extent, the i210, are very good at time accuracy. Unfortunately, their X710 isn't as good, and the i226 is mixed. Mellanox is less accurate in my tests, with around 200 ns of skew, but still far better than Realtek and other consumer NICs.
4. GPS receivers aren't really *that* accurate. Even good receivers "wander" around 5-30 ns from second to second.
5. Antennas are critical. The cheap, flat window ones aren't a good choice for timing work. (Also, they're not actually supposed to be used in windows; they generally want a ground plane.)
6. Your network probably has more paths with asymmetric timing than you'd expect. ECMP, LACP, and 2.5G/5G/10GBASE-T all tend to hurt your ability to get extremely accurate time.
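That "engineering work" in point 2 usually amounts to two daemons on a Linux client; a minimal sketch (the interface name is a placeholder, and this assumes a NIC with PTP hardware timestamping):

```
# Discipline the NIC's hardware clock (PHC) from the network's PTP grandmaster:
ptp4l -i eth0 -H -s -m    # -H: hardware timestamping, -s: client-only, -m: log to stdout

# Then steer the Linux system clock from that PHC:
phc2sys -s eth0 -w -m     # -w: wait for ptp4l to sync before taking over
```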

Anyway, it's been a fun journey. I had a good #time.

Timing Conclusions

This is the 13th article that I’ve written lately on NTP and PTP timing with Linux. I set out to answer a couple of questions for myself and ended up spending two months swimming in an ocean of nanosecond-scale measurements. When I started, I saw a lot of misinformation about NTP and PTP online. Things like:

Conventional wisdom said that NTP was good for millisecond-scale timing accuracy. I expected that to be rather pessimistic, and expected to see low-microsecond to high-nanosecond-range syncing with Chrony, at least under controlled circumstances. In a lab environment, it’s possible to get single-digit nanosecond time skew out of Chrony. With a less-contrived setup, 500 ns is probably a better goal. In any case, “milliseconds” is grossly underselling what’s possible.

Conventional wisdom also said that PTP was better than NTP when you really cared about time, but that it was more difficult to use and made more demands on hardware. You know, conventional wisdom is actually right sometimes. PTP is somewhat more difficult to set up and really wants hardware support from every switch and every NIC, but once you have that, it’s pretty solid.

Along the way I tested NTP and PTP “in the wild” on my network, built a few new GPS-backed NTP (and PTP) servers, collected a list of all known NICs with timing features (specifically GNSS modules or PPS inputs), built a testing environment for measuring time-syncing accuracy to within a few nanoseconds, tested the impact of various Chrony polling settings, tested 14 different NICs for time accuracy, and tested how much added latency PTP-aware switches add. I ran into problems with PTP on Mellanox/nVidia ConnectX-4 and Intel X710 NICs (weird stuff: the X710 doesn’t seem to like PTP v2.1, and it doesn’t like it when you ask it to timestamp packets too frequently). I fought with Raspberry Pis. I tested NICs until my head hurt. I fought with statistics.

This little project that I’d expected to last most of a week has now dragged on for two months. It’s finally time to summarize what I’ve learned and celebrate The End Of Time.

scottstuff.net

My overnight tests finished!

In my environment, I get the best #NTP accuracy with #Chrony when using `minpoll -2 maxpoll -2` and not applying any filtering. That is, have the client poll the NTP server 4 times per second. Anything between `minpoll -4` (16x/second) and `minpoll 0` (1x/second) should have similar offsets, but the jitter increases with fewer than 4 polls per second.
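In chrony.conf terms, that's something like this (a sketch: the server address is a placeholder, and `hwtimestamp` assumes NICs that support hardware timestamping):

```
# /etc/chrony/chrony.conf (client) -- hypothetical example
server 192.168.1.10 minpoll -2 maxpoll -2  # poll 4x/second; only hammer servers you control
hwtimestamp *                              # use NIC hardware timestamps where available
```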

https://scottstuff.net/posts/2025/06/03/measuring-ntp-accuracy-with-an-oscilloscope-2/

Chrony has a `filter` option that applies a median filter to measurements; the manual claims that it's useful for local servers polled at high rates. I don't see any consistent advantage to `filter` in my testing, and larger filter values (8 or 16) consistently make everything worse.
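For reference, `filter` is set per source; a hypothetical line:

```
server 192.168.1.10 minpoll -2 maxpoll -2 filter 4   # median-filter each batch of 4 samples
```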

When polling 4x/second on a carefully constructed test network, NTP on the client machine is less than 2 ns away from #PTP with 20 ns of jitter. I know that PTP on the client is 4 ns away from PTP on the server (w/ 2 ns of jitter), as measured via oscilloscope.

So, you could argue that this counts as single-digit nanosecond NTP error, although with 20 ns of jitter that's probably a bit optimistic. In any case, that's *well* into the range where cable lengths are a major factor in accuracy: signals travel through copper at very roughly 20 cm per nanosecond, so a 2 ns offset is on the order of one short patch cable. It's a somewhat rigged test environment, but it's still much better than I'd have expected from NTP.

Measuring NTP and PTP Accuracy With An Oscilloscope (part 2: Chrony's poll and filter settings)

In part 1 yesterday, I went through all of the work needed to measure NTP and PTP accuracy between two computers on my desk using an oscilloscope. I demonstrated that PTP was accurate to a mean of 4 ns with 2 ns of standard deviation. Under ideal circumstances, NTP was only slightly worse at 8–10 ns with an SD of 12–20 ns, depending on the test setup. I measured these with extremely high NTP polling rates, thousands of times more frequent than Chrony’s defaults. I ran a few tests with slower polling rates but had a hard time getting stable results due to issues with my test environment.

I went back and rethought a few things, and I was able to do a bunch more testing overnight. I wanted answers to these two questions:

1. What is the best polling rate for Chrony on an extremely low-latency, low-jitter LAN?
2. Does Chrony’s per-source filter setting improve accuracy?

After running tests overnight, I have my answers:

1. For the best accuracy, use something between minpoll -4 and minpoll -1 (please only poll this aggressively when you control the NTP server that you’re polling; don’t DoS public servers). Above 1 second or so, error starts increasing exponentially. Very high polling rates (1/128th and sometimes 1/64th of a second) show added error as well.
2. The filter keyword is never a win in my environment. Small values (filter=2 and filter=4) don’t make a huge difference in results; larger values add increasing amounts of error.

Here’s how the measured time offset varied as the update period and filter settings changed:

[Chart: “NTP clock offset by effective polling rate and filter” — offset in nanoseconds (log scale) vs. effective seconds between polls, including the filtering period (log scale), with one line per setting from filter=1 through filter=16.]

And here’s the measured jitter in the same environment:

[Chart: “NTP clock jitter by effective polling rate and filter” — same axes, showing jitter instead of offset.]

Read on for details on how I measured these.

scottstuff.net

On the other hand, using #PTP to sync time to my web servers is a big win. They're behind a software router, and using PTP with Intel NICs drops the sync error from ~10 µs to ~5 ns, according to #Chrony. This is mostly because time bypasses the firewall entirely and is distributed directly by the switch, so there's far less jitter.

I'm not quite sure what a 5 ns time error means when devices are more than 2.5 feet apart, but let's ignore that for now.
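One plausible way to wire that up (an assumption; the post doesn't spell out the setup): ptp4l keeps the Intel NIC's hardware clock (PHC) synced over PTP, and Chrony reads that PHC as a reference clock:

```
# ptp4l disciplines the NIC's PHC from the switch's PTP clock:
#   ptp4l -i eth0 -H -s -m
# chrony.conf: treat the NIC's hardware clock as a local refclock
refclock PHC /dev/ptp0 poll 0 dpoll -2
```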


DIY PTP Grandmaster Clock with a Raspberry Pi | Jeff Geerling

David Groves' (https://www.fibrecat.org/) talk at #NetMCR about measuring and calculating the current time was really interesting, explaining crazy timezones, how DST has changed over the years, leap years and leap seconds, and then how computers sync time to each other via #NTP and #PTP.

I learnt that you can even get expansion cards that have mini atomic clocks on them! (https://www.opencompute.org/products/319/ocp-time-card-made-by-time-beat)

Sadly I couldn't stay and chat afterwards as I had to get my train ☹️

Fibrecat.org


The newest topic I am trying to understand better: #GNSS

While trying to write about #NTP as well as #PTP, another world opened up: the Global Navigation Satellite System, or #GNSS, which is most often the ultimate source of time. Digging through the different satellite constellations (#GPS, #Galileo, #BeiDou, #GLONASS) and understanding the differences was interesting.

I am still digging deeper. Anyone with knowledge is highly welcome to educate me. ☺️👍
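For anyone else starting here, the usual way a GNSS receiver becomes the time source for NTP is gpsd plus the receiver's PPS pin feeding Chrony. A minimal sketch (assuming gpsd is running, PPS is exposed at /dev/pps0, and the offset value is a placeholder to tune):

```
# chrony.conf: build a stratum-1 server from a GNSS receiver
refclock SHM 0 refid NMEA offset 0.2 noselect  # coarse time-of-day from gpsd's NMEA feed
refclock PPS /dev/pps0 lock NMEA refid PPS     # precise second edges from the PPS pin
```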