Fixed a bunch of bugs in the SGMII block, the QSGMII-SGMII bridge, and even in ngscopeclient.
And the TX eye still isn't very pretty, I need to investigate that more.
But the QSGMII links are now alive! Let's see if I can actually pass traffic...
And it looks like the PHY is able to receive traffic! Haven't tested if it decodes properly in the FPGA etc, but the PHY is sending well formed QSGMII, the FPGA sees the link as up, and the decode in libscopehal is making sense of it.
Not sending anything yet. A lot more work needed on the switch logic in the FPGA to make *that* happen.
Continuing switch bringup work.
All ports (except the four VSC8512 interfaces, which aren't responding over MDIO) have link state/speed working and queryable via the MCU.
Something is wonky with the basic status register: it's reporting the link as half duplex even though it negotiated full duplex (in fact, only full duplex was even advertised). Not sure if this is a bug or what. Might it have something to do with the 8051 microcode patch I haven't yet applied?
Spent a while today debugging on live hardware and finally reproduced the issue in simulation.
Packets more than 32 128-bit words in length will max out the prefetch FIFO, but I never continue fetching traffic after that point. There's a big giant TODO comment I never implemented. Oops.
Did a bunch of timing fixes and added some more pipeline stages. Latency is higher than I'd like now and I'll definitely want to work on reducing it, but it should do for a starting point.
Also did some per-link power estimates: about 13.3W in the current test configuration (management port, SFP+ uplink, and two VSC8512 edge ports active at 1 Gbps, no packet traffic).
This climbs to about 13.8W (+0.5W, so 0.25W per interface) if looping back two DP83867 interfaces, and 14W (+0.7W, so 0.35W per interface) looping back two VSC8512 interfaces.
With all links up, I thus project that the total board power consumption would climb to about 17.3W. This would likely increase a bit further with heavy traffic due to increased toggles on the SRAM bus etc.
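For the record, the projection arithmetic can be sanity-checked in a couple of lines. The port split below (2 remaining DP83867 links and 10 remaining VSC8512 links) is my assumption, chosen because it's the split that makes the measured per-port increments add up to the projected total:

```python
# Back-of-envelope power projection. The split of the 12 remaining edge
# ports (2 DP83867 + 10 VSC8512) is an assumption, not stated measurement.
BASELINE_W = 13.3      # mgmt + SFP+ + 2 VSC8512 ports up, no packet traffic
PER_DP83867_W = 0.25   # measured marginal power per DP83867 link
PER_VSC8512_W = 0.35   # measured marginal power per VSC8512 link

def project_total(extra_dp83867: int, extra_vsc8512: int) -> float:
    """Projected board power with additional links brought up."""
    return BASELINE_W + extra_dp83867 * PER_DP83867_W + extra_vsc8512 * PER_VSC8512_W

print(round(project_total(2, 10), 1))  # 17.3
```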
Not too bad for a ~16 port switch (counting management and uplink ports). I've also put zero effort into optimizing the FPGA design for power to date, so there's probably things I can do to improve there.
Off the top of my head:
* If an entire group of four baseT links is down or disabled, I can shut down the QSGMII SERDES
* If there's no traffic on the read side of the SRAM bus, I can disable the input terminations
* If there's no traffic on the write side of the SRAM bus, I might be able to tristate the bus except for control signals
* It might be possible to consolidate/optimize PLL configuration to use fewer PLLs
* There's definitely work to be done to use fewer long-range, high-fanout clocks on the FPGA
* Improve gating of unused signals on wide buses etc to avoid propagation of toggles that don't do useful work
Always a fun day when you have to write code like this...
Hopefully this will give me a trigger condition that will let me figure out why my switch fabric is deadlocking trying to forward a packet without actually doing anything to it.
Welp. Somehow I'm trying to start forwarding from port #15.
Except I only have 15 ports (14 plus the uplink) and port numbers are zero based.
Looks like I was incrementing the round robin counter but forgot to add the "mod portcount" bit.
And apparently whatever logic Vivado synthesizes for accessing the 16th element of a 15-element vector resulted in the arbiter thinking it had data to send, entering the busy state, but then never getting a done signal.
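The fix boils down to one modular increment. A minimal Python sketch of the idea (the real logic is HDL, and the names here are made up):

```python
PORT_COUNT = 15  # 14 edge ports plus the uplink, zero-based indices 0..14

def next_port(current: int) -> int:
    """Round-robin arbiter increment. The bug was omitting the wraparound,
    so the counter could reach 15 and index past the end of the port vector."""
    return (current + 1) % PORT_COUNT

# With the mod, port 14 wraps back to 0 instead of becoming nonexistent port 15.
print(next_port(14))  # 0
```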
And after a few more fixes, it's working!
Here an ARP frame shows up on port 0 (g0), is received via QSGMII, transferred to the core clock domain, processed through the SRAM FIFO (all offscreen).
Then at T=32 it's looked up in the MAC address table. At T=35 the table returns "not found", which makes sense since the destination is a layer 2 broadcast.
At T=39 a forwarding decision is made: the frame should be broadcast to all of VLAN 99 except for g0, where the frame came from. In this example config that's ports 5 (g5) and 14 (xg0).
Then at T=41 after some pipeline latency, data begins flowing.
It ends up in /dev/null for now because there's no exit queues between the frame_* control signals and the TX-side MAC IPs. But that's the only missing piece to make this a fully functional, if very basic, switch!
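The forwarding decision at T=39 can be modeled in a few lines. This is an illustrative sketch only; `mac_table` and `vlan_members` are stand-ins for the actual FPGA state, with the example config above baked in:

```python
# Hypothetical model of the forwarding decision described above.
mac_table: dict[tuple[int, bytes], int] = {}  # (vlan, dest MAC) -> port
vlan_members = {99: {0, 5, 14}}               # VLAN 99: g0, g5, xg0

def forward_ports(vlan: int, dmac: bytes, ingress: int) -> set[int]:
    """Return the set of egress ports for a frame."""
    hit = mac_table.get((vlan, dmac))
    if hit is not None:
        return {hit} - {ingress}
    # Miss (e.g. a layer 2 broadcast like ARP): flood to the whole VLAN,
    # excluding the port the frame came in on.
    return vlan_members[vlan] - {ingress}

print(forward_ports(99, b"\xff" * 6, ingress=0))  # {5, 14}
```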
FPGA resource usage is growing, but things are still looking good in terms of being able to finish the job - and hopefully fit a full 24 port design in the same FPGA.
Current total fabric usage including the logic analyzer IP is 34% LUT, 23% FF, 39% BRAM, 6% DSP, 100% SERDES (duh), 65% IO, 53% global clocks, 25% MMCM/PLL.
One big unknown is how to scale the architecture up to 24 ports, since the current shared bus architecture is running close to its max performance with 14 ports and assumes a single memory channel. Refactoring this to work with a dual channel RAM controller will be interesting.
One "easy" option is to have essentially two independent sub-switches and a high bandwidth interconnect between them. But that might mean duplicating resources like the MAC address table.
Added exit queues and it's getting fuller. 38% LUT, 25% FF, 48% BRAM, 6% DSP, 100% SERDES, 65% IO, 53% BUFG, 25% MMCM / PLL.
Still missing VLAN tag insertion for outbound trunk ports (and some other logic to propagate VLAN tag information to support that) but in theory it should be capable of switching between access ports now. About to try in hardware, wish me luck!
And no go. My pings aren't being seen and I'm seeing no transmit activity on the QSGMII link.
But at least I have some idea of where to add on-chip debug probes to troubleshoot further.
Ok, turns out there is transmit activity but it's gibberish. Skipping data bytes or something.
Upon closer inspection it seems I had incorrect TX clock configuration (feeding TXUSRCLK with 156.25 MHz instead of 125) due to some confusing GTX configuration. Hopefully this will fix it...
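One plausible reading of where the two frequencies come from (a guess on my part; the log above only says the GTX configuration was confusing): 125 MHz is what a 5 Gbps 8b/10b lane needs with a 4-byte internal datapath, while 156.25 MHz is the usual 10GBASE-R clock, so it may have leaked over from the 10G side of the design.

```python
# Sanity check on the two clock frequencies.

# QSGMII lane: 5 Gbps, 8b/10b, 32-bit (4-byte) fabric datapath
# -> 40 line bits consumed per TXUSRCLK cycle.
qsgmii_txusrclk_mhz = 5e9 / 40 / 1e6
print(qsgmii_txusrclk_mhz)  # 125.0

# 10GBASE-R: 10.3125 Gbps, 64b/66b -> the familiar 156.25 MHz.
tengig_clk_mhz = 10.3125e9 / 66 / 1e6
print(tengig_clk_mhz)  # 156.25
```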
It's alive!! First light on the switch passing packets!
When I ping flooded through it, it locked up and stopped forwarding traffic until I reloaded the FPGA. Probably related to one of the dozens of FIFO-full error handling code paths I haven't tested or fully implemented.
Still lots more work to do: VLAN tag insertion on outbound trunk interfaces, 10/100 support in the SGMII MAC, performance counters, tons of error handling, lots of CLI commands, investigating SI on the QSGMII TX diffpair, figuring out why g8-g11 aren't responding on MDIO, power integrity validation...
Found a few more thermometers on the board. Turns out in addition to the externally pinned out thermal diode on the VSC8512 (which I didn't hook up to anything) there is an (undocumented, but used in some example code I dug up) internal digital temperature sensor.
There's also one on the STM32.
Fixed a bunch of bugs and reduced latency of the QDR-II+ controller. End to end latency from read request to full burst data in hand - including PCB trace delays and clock domain crossing but not the additional pipeline stage for ECC - is now down to nine clocks at 187.5 MHz (48 ns). Probably more room to improve further on that but it's already way better than the 11-17 cycles I was seeing before with a less efficient CDC structure.
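The latency figure is easy to double-check:

```python
# Nine clock cycles at 187.5 MHz, expressed in nanoseconds.
cycles = 9
f_hz = 187.5e6
latency_ns = cycles * 1e9 / f_hz
print(latency_ns)  # 48.0
```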
It no longer falls over instantly when ping flooded, however sustained floods (especially with preload) still make it start corrupting packets. So I've fixed the easiest-to-trigger bug and there's still more.
Debating how much time I want to spend chasing bugs in the current fabric architecture since I know it won't scale to 24 ports and barely makes timing as-is. Might just blow away everything between the input FIFOs and the MAC table and redo it clean slate.
Welp, seems I have a new bug: I'm reading a frame out of the input FIFO that's shifted by one word.
The first word of the packet (src/dest MAC address, ethertype, and first 4 bytes of payload) is gone (sent as part of the previous packet), then there's another word at the end that I assume is the start of the subsequent packet.
Seems to be triggered by heavy traffic like ping floods, but haven't caught it happening on the write side yet.
So far not sure if the FIFO pointers are getting desynced or if I'm writing bad data out of the CDC.
Nope, the SRAM FIFO is fine. Garbage in, garbage out.
So the problem is happening earlier on, in the CDC or maybe as I'm filling buffers to be written to SRAM?
Yeeep, it's something in the CDC FIFO (or the logic interfacing with it).
When the packet that actually goes sideways starts, there's already six words of data in the CDC buffer. But all of the other state - most notably packet metadata with length, vlan ID, etc - is missing, so that data gets ignored and isn't popped until more data shows up, at which point you get a hodgepodge of both packets.
Still don't know which clock domain the actual bug is in so this will be fun...
Oops it's 3:30 AM and I have to be awake for work tomorrow... But I think I found the bug.
If I'm right it's one of those "how did this ever work" moments. Very confused as to how ping flooding makes it fail, it seems like it should *always* fail with packets of a certain length mod 16.
Nope, that wasn't it. But it put me on the trail of the actual bug.
Not one but *two* packets before SHTF, something goes wrong. There's nothing in the metadata fifo, there's nothing visible on the read side of the data fifo, but the *write* side of the data fifo shows 506 free words, out of a capacity of 512.
Meaning something pushed six words into it, then (for at least the few hundred clocks I have data captured for), never asserted the "commit" flag.
This CDC FIFO has a commit/rollback mechanism intended to be used for store-and-forward packet processing; the write side maintains a private write pointer that is only pushed to the read side when you hit "commit". Until then, the available space is decreased but the read side still shows empty.
The intent is to commit on end of packet with valid FCS and roll back on end of packet with invalid FCS, or if the FIFO fills prior to the end of a packet. Having stale data in the buffer that never gets committed/rolled back SHOULD be impossible...
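The commit/rollback semantics can be modeled compactly. This is an illustrative Python sketch only; the real thing is FPGA logic with a clock domain crossing between the write and read pointers:

```python
# Minimal model of the commit/rollback CDC FIFO described above.
class CommitFifo:
    def __init__(self, depth: int):
        self.depth = depth
        self.data: list[int] = []    # committed words, visible to the read side
        self.staged: list[int] = []  # pushed but not yet committed

    def free(self) -> int:
        # Staged words consume space even before commit...
        return self.depth - len(self.data) - len(self.staged)

    def readable(self) -> int:
        # ...but the read side sees nothing until commit.
        return len(self.data)

    def push(self, word: int) -> None:
        assert self.free() > 0
        self.staged.append(word)

    def commit(self) -> None:        # end of packet, FCS good
        self.data += self.staged
        self.staged = []

    def rollback(self) -> None:      # FCS bad, or FIFO filled mid-packet
        self.staged = []

# The stuck state observed above: six words pushed, commit never asserted.
fifo = CommitFifo(512)
for w in range(6):
    fifo.push(w)
print(fifo.free(), fifo.readable())  # 506 0
```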
And here's the root cause: https://github.com/azonenberg/latentpacket/commit/15a9c4359809ae00801205d9f1fa73a02463f06d
The VLAN tag removal logic on the input side, between the MAC and the CDC FIFO, was failing to forward the "drop" flag. So any time a packet had a FCS failure, the metadata would be discarded and the packet content would be prepended to the next valid packet.
This solves the "ping -f" hang; I just did a test of 100K pings with only 25 drops and it was still working fine after that.
This now raises two new questions:
1) Why did I still lose 25 packets? Judging by the previous bug, at least some are getting FCS errors. Is this signal integrity on the QSGMII link, a logic bug in the MAC, or something else?
2) When I ping flood with preload, i.e. ping -f -l 50, the switch still hard locks up pretty quickly. So I have a second, likely unrelated bug caused by a lot of packets in quick succession.
Looks like the incoming data is occasionally (25 of 100K packets in my last test) getting corrupted somewhere between the upstream switch MAC and my 32 bit MAC data bus.
In between:
* Switch PHY
* On rack patch cable
* Plant cable
* Bench patch cable
* Magjack and PCB
* VSC8512
* QSGMII link to 7 series GTX
* My QSGMII to SGMII demux
* My SGMII PCS
* My GMII MAC
Suspecting something in the serdes/QSGMII region, but not sure yet.
Closing in on this bug.
The data coming off the PHY is fine, verified by sniffing and protocol decoding the QSGMII link.
The data entering the decode side of the PCS (after elastic buffer shifting from SERDES clock domain to MAC clock domain) is wrong.
First guess: something in that buffer is borked and it's filling up, rather than dropping idles between packets when it gets too full like it's supposed to. If the remote side of the link has a clock a few ppm faster than the FPGA, the FPGA will have to occasionally drop idles to rate match. If that logic is broken we'll just see random bytes of data not show up when they should.
Hmmmm. It helps if your elastic buffer drops extra idle ordered sets when it's almost *full*.
Not when almost *empty*. 🤦♂️
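The corrected condition is a one-liner. A simplified model (thresholds here are made up; the real buffer operates on 8b/10b ordered sets, not individual words):

```python
# Sketch of the elastic buffer rate-matching decision.
DEPTH = 32
ALMOST_FULL = 24   # hypothetical threshold
ALMOST_EMPTY = 8   # hypothetical threshold

def should_drop_idle(fill_level: int) -> bool:
    """Drop idle ordered sets only when the buffer is close to FULL, i.e.
    the far end's clock is a few ppm faster than ours and data is piling up.
    The bug was testing against ALMOST_EMPTY instead, which throws away
    symbols exactly when the buffer can least afford to lose them."""
    return fill_level >= ALMOST_FULL

print(should_drop_idle(30), should_drop_idle(4))  # True False
```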
OK, this one is interesting.
The switch is forwarding packets that are completely correct except for the first 16 bytes, which at first glance appear to be gibberish.
The 16 byte size is a clue, since most of the fabric and the external packet buffer SRAM are using a 128-bit datapath, while the MAC/PCS blocks are narrower (8-32 bits at various spots).
So the problem here is likely a lot closer to the core than the previous bug.
When your 16K entry FIFO has 16388 free spots in it, that's awesome!
It's a TARDIS or something, bigger on the inside than the outside. ... right?
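A free count greater than the capacity usually means the pointer arithmetic went wrong. Here's one way to get exactly 16388 (a guess at the failure mode, purely illustrative): a read pointer that ran four entries past the write pointer, combined with unmasked pointer subtraction.

```python
# FIFO occupancy math: pointers are kept one bit wider than the address
# so that full and empty are distinguishable.
DEPTH = 16384
PTR_MASK = 2 * DEPTH - 1  # pointers count modulo 2*DEPTH

def free_words(wptr: int, rptr: int) -> int:
    """Correct form: occupancy is computed modulo 2*DEPTH."""
    return DEPTH - ((wptr - rptr) & PTR_MASK)

def free_words_buggy(wptr: int, rptr: int) -> int:
    """Unmasked subtraction: a desynced read pointer makes the FIFO a TARDIS."""
    return DEPTH - (wptr - rptr)

print(free_words_buggy(wptr=96, rptr=100))  # 16388
```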
Switch fabric reliability is improving! I'm now needing heavier and heavier loads and triggering less frequent bugs.
The one I'm chasing now involves a port getting stuck in the PREFETCH state, indicating it's asked for data from external RAM but it got less data than it expected.
I'm actually getting up to a pretty decent link utilization with this ping flood. Far from saturating the pipe, but looks like maybe 20-30% ish?
Pretty sure I have a root cause on this one already. Just took a few P&R runs to get probes on the right signals.
I cleared the prefetch-in-progress flag combinatorially on the last cycle of a prefetch, to allow a gap-free transition to a second prefetch on a different port.
But when I started a prefetch, I'd also issue a read request to the RAM on that same cycle. So when this happened, the second prefetch would steal the bus cycle from the first.
The fix is simply to not do that, and wait until next cycle to fetch the next word. As a bonus, this eliminates a critical path I was worried about.
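A toy cycle model of the contention (illustrative only, not the real RTL; burst lengths and names are made up). The shared RAM bus can issue one read per cycle, and in the buggy version port B's first read lands on the same cycle as port A's last:

```python
def simulate(gap_cycle: bool) -> list[str]:
    """Return the sequence of reads issued on the shared RAM bus."""
    words_a, words_b = 2, 2  # hypothetical burst lengths for ports A and B
    bus = [f"A{i}" for i in range(words_a)]  # port A's burst, one read/cycle
    if not gap_cycle:
        # Buggy: B's prefetch starts (and issues its first read) on A's
        # final cycle, stealing that bus cycle -- A's last word is lost.
        bus[-1] = "B0"
        bus += [f"B{i}" for i in range(1, words_b)]
    else:
        # Fixed: wait one cycle before issuing B's first read.
        bus += [f"B{i}" for i in range(words_b)]
    return bus

print(simulate(gap_cycle=False))  # ['A0', 'B0', 'B1'] -- A1 never fetched
print(simulate(gap_cycle=True))   # ['A0', 'A1', 'B0', 'B1']
```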
Yep, that was the bug. Seemed to fix the other packet corruption problem I had been chasing as well.
So at this point there are no known bugs in the fabric and it's time to work on building other stuff.
I still need a bazillion performance counters to evaluate how things are going as I push the fabric to heavier loads, plus a lot of debug features for things like printing PHY status registers in human readable form.
Adding performance counters and a bunch of other debug features is gradually increasing FPGA resource usage to a concerning level.
Fitting the rest of LATENTPINK is not going to be a problem, but there won't be a whole lot of free space.
I could probably... probably... shoehorn a full 24+1/24+2 port LATENTRED design into the 7k160t if I really squeezed. But I'd have to start cutting features and I'd have no room for e.g. potential layer 3 processing or ACLs in the future.
The question then becomes, what do I replace it with? I want "comfortably more" than the 100K LUTs of the 7k160t, enough high performance IO for two channels of QDR-II+, and at least eight transceivers.
The XC7K325T is out, I want to stay with free Vivado for F/OSS friendliness reasons (and to avoid increasing the already significant project budget by another $3K), so there's no path forward using 7 series.
Assuming I stay Xilinx, that means UltraScale or UltraScale+.
And if I limit myself to parts supported by free Vivado, that leaves five options: XCAU25P, XCKU025, XCKU035, XCKU3P, XCKU5P.
The AU25P is by far the least expensive (XCAU25P-1FFVB676E is $427 at Digikey) and I have two in inventory already. It's got 40% more LUT capacity than the 7k160t, but slightly *less* block RAM, and a lot less IO: 208 HP and 96 HD. I'd need 196 HP for the RAM, leaving 12: enough for clock and Vref and that's about it.
Which leaves me HD pins for interfacing with the MCU, maybe driving some indicator LEDs, and boot flash. But for a 24+2 port design I only need 6 GTs for QSGMII and 2 for 10G, so I'd have four extras.
Which is good because RGMII would really be pushing limits for HD I/O, and free GTs would let me use a SGMII PHY instead.
So as long as I can get by with 300 BRAMs (I'm using 157 in LATENTPINK including the management engine and MAC table which don't scale with interface count, so should be doable?) I think I've got a good shot.
The XCKU025 is a lot pricier (XCKU025-1FFVA1156C is $1288 at Digikey). 45% bigger than the 7k160t, so almost the same size as the au25p, but has 360 BRAMs - a nice increase over the AU25P.
It also has 208 HP IOs, but has 104 HR IOs instead of slow UltraScale+ HD IOs (which should have no trouble doing RGMII for the management port).
Fabric performance might actually be a little slower than the AU+ since it's 20nm rather than 16nm, but both should be comfortably faster than the 28nm 7k160t.
Also, the AU25P is the biggest AU+ device so there's no upgrade path if I outgrow it, while the KU025 FFVA1156 package is pin compatible with the KU035.
Interestingly, though, the KU025 is *not* offered in any of the lower pin count packages like FBVA676. So if I went with the Kintex UltraScale route I'd need a PCB with enough layer capacity to fan out an 1156 ball package.
The XCKU3P is even more expensive (XCKU3P-1FFVB676E is $1491 at Digikey), and 60% larger than the 7k160t, also with 360 BRAMs (same capacity as the KU025), but it also has 48 UltraRAMs so the total usable on-die memory capacity is more than doubled.
Most interestingly, the FFVB676 package is pin compatible with the XCAU25P if I'm reading the docs correctly (but has only 72 HD IOs vs 96, so if I wanted the PCB to be compatible I'd need to avoid the last 24 sites).
But this leaves open the possibility that I could design LATENTRED with the intention of using the AU25P, with potential to scale up to the KU3P or even KU5P if I ran out of fabric resources without having to respin the PCB.
Well that was weird. Something I did apparently resulted in Vivado unplacing all of my I/O pins?? Never had that happen before.
I have all of the old pinout constraints in Git so it's not a huge deal, but wasted a P&R run finding it out after bitstream generation failed.
Thinking more, I might still have a shot at fitting everything into the 7k160t. I have two of them sitting around earmarked for this project and have no other near term use for them, so I'd like to try and make it work if I possibly can. Layer 3 functionality in the edge switch isn't a huge deal, that was going to be a 10G core switch thing anyway.
New plan is to get LATENTPINK closer to completion, then attempt scaling the fabric up to 24+2 ports in simulation and build an FPGA design for a notional pinout of it. See what happens and if I can make timing.
Had a sudden realization while playing with the little one in the backyard: 7 series DSP48 slices have 48 bit counters/accumulators, 48 bits should be big enough for my perf counters, and I have 566 unused DSP slices on this FPGA right now.
Let's see how much space I can save by doing this (and a few other optimizations I had in mind for the counters)!
Here's the baseline: 37831 LUT, 56959 FF.
I'll do the conversion in a couple of stages to compare the area improvements at each step and verify I didn't break anything.
First (running now) is to reduce the 64-bit counters to 48 bits, and remove some redundant pipeline registers in the CDC paths.
In this layout screenshot MAC/PCS logic is dark blue, input buffering and CDCs are green, exit queues are pink, debug logic is light pink, the MAC address table and forwarding decision logic is brown, and the crypto accelerator is cyan. Performance counters, which I'm trying to massively shrink, are yellow.
That initial cleanup cut 2500 flipflops and 700 LUTs off the area.
Now the counters are all 48 bits, which should make them the right size to absorb into DSP48s.
And now almost 4000 FFs and 100 LUTs chopped off that, at the cost of 80 DSPs. Seems like a good deal if you ask me.
Still room for optimization on the readout muxing and CDC. And I can now move the perf counters closer to their parent logic since there's plenty of unused DSP slices in the MAC/PCS area.
After some more tweaking I managed to pack things in even tighter.
Now CLOCKREGION_X1Y0, X1Y4, and X0Y4 are completely empty.
The SGMII logic in the right half of CLOCKREGION_X1Y1 won't be needed if I move to dual VSC8512s for LATENTRED.
The QSGMII logic (blue, top right) will need to be replicated into the vacant space above it to hook up another 12 ports, then the exit queues (magenta at 3 o'clock position) will need to be replicated as well - perhaps into the top left area.
Then I'll need to find somewhere for more ingress queues (green), and a second channel of RAM and controller (red) along the right side of the chip. Moving some of the packet metadata into the external RAM might help free up block RAM.
I think this might actually be doable as 24 ports with dual SFP+ uplinks in a 7k160t.
Here's what I'm thinking. Green X denotes logic to be removed (not needed in 24 port design).
Green arrow(s) denotes logic to be replicated from its current location to the new location.