FPGA logic reports none of the QSGMII links are up.
Not entirely surprising since I've never actually tested the QSGMII block in hardware, but still a bit annoying.
I think that's it for today. Tomorrow I'll decable the whole setup (again), and probably try to bodge one or more of the VSC8512 RJ45s while I have it off the bench.
Then I'll get test leads on the VSC8512 MDIO bus to see if anything funky is happening with timing there (I can still only talk to 8 of the 12 PHYs, though that might be a register misconfiguration), and probably land a high-bandwidth probe on one or more of the QSGMII lanes to see what's happening with that.
Quick handheld probe measurement off the QSGMII TX line from the FPGA.
Definitely some logic bugs, we're supposed to have K28.1 in lane 0 and all I'm seeing is K28.5.
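For reference, the two control characters are easy to tell apart numerically, since all the K28.y commas share the same low five bits (this is standard 8b/10b naming, sketched here as a sanity check):

```python
def k28(y):
    # 8b/10b control characters Kx.y encode x in the low five bits and
    # y in the top three; all the K28.y commas share x = 28 = 0x1C.
    return (y << 5) | 0x1C

print(hex(k28(1)))  # 0x3c -- K28.1, expected on QSGMII lane 0
print(hex(k28(5)))  # 0xbc -- K28.5, the ordinary comma seen instead
```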
The eye (measured at the PHY side of the coupling capacitor) is pretty wide open, but I will definitely want to tweak driver settings given the closure in the right half. Need to check this against the QSGMII eye mask but I don't have the specs for that in ngscopeclient yet (also a job for tomorrow).
Seems like drive on my QSGMII TX is just a little bit over the top. Left eye has the transmitter mask, right has the receiver.
This is a mid-channel measurement (at the AC coupling cap) so we need to be better than the RX mask but don't need to pass the TX.
Back to the lab for the evening and continuing switch bringup.
Double checking pins on the VSC8512 and so far not seeing any issues.
I did notice that the thermal diode is tied off to ground, which is in retrospect a mistake. I should have provided a means to monitor it externally. Now I have no way to tell if the PHY is overheating other than by pointing a FLIR camera at the heatsink and adding a couple of degrees to the reading.
Signal integrity tweaking on the QSGMII.
Took initial measurements with an AKL-PT5 and a D1330, then cross checked the PT5 measurements against a D1605.
After some tweaking, the QSGMII TX waveform isn't overshooting.
But when I soldered an AKL-PT5 on, I saw a huge dip around T=25ps that I don't remember seeing in the handheld probe view (maybe it didn't have enough BW to show it?).
I repeated the same measurement with a D1605 (shown here) just in case it was an artifact of the PT5. Other than a bit less noise, the eye looked identical.
Need to check and see if the remaining QSGMII lanes have similar issues or if this is the only one, or what. It technically passes the QSGMII eye mask so it *should* work but I wouldn't want to field it looking like this!
RX drive strength is a bit higher than spec, but the FPGA will happily eat it so I'm not concerned.
Looking at the QSGMII link state, it seems that the FPGA is sending autonegotiation codeword 0x4001 (SGMII mode, no remote fault etc, no next page).
The PHY is sending K28.5 D16.2 which is IDLE 2, so I think this means it's waiting for the FPGA to go "ok, link is up"?
Reading register 19E3 from the PHY (link partner clause 37 ability) shows 0x4001, the same thing the FPGA is sending. This means that the PHY is seeing my autonegotiation traffic and decoding it correctly.
Register 17E3 is 0x0409: no SGMII alignment error or remote fault, no full duplex advertised by MAC (seems wrong), no half duplex advertised by MAC, link partner AN capable, link not connected, AN not complete, signal present.
But... bit 5 of the AN advertisement (which means full duplex capable) is *reserved, must be zero* in SGMII mode. So I'm not sure if this is a problem or not.
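A quick sketch of how I'm reading that codeword (bit positions per my understanding of the SGMII spec; the field names are my own):

```python
def decode_sgmii_an(word):
    # Decode a 16-bit SGMII autonegotiation codeword (MAC -> PHY side).
    # Bit positions per my reading of the SGMII spec; illustrative only.
    return {
        "sgmii_mode":  bool(word & (1 << 0)),   # must be 1 in SGMII mode
        "full_duplex": bool(word & (1 << 5)),   # reserved/zero MAC->PHY
        "ack":         bool(word & (1 << 14)),  # acknowledge
        "next_page":   bool(word & (1 << 15)),  # next page
    }

print(decode_sgmii_an(0x4001))
# 0x4001 = bits 0 and 14 set: SGMII mode + acknowledge, no next page
```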
Fixed a bunch of bugs in the SGMII block, the QSGMII-SGMII bridge, and even in ngscopeclient.
And the TX eye still isn't very pretty, I need to investigate that more.
But the QSGMII links are now alive! Let's see if I can actually pass traffic...
And it looks like the PHY is able to receive traffic! Haven't tested if it decodes properly in the FPGA etc, but the PHY is sending well formed QSGMII, the FPGA sees the link as up, and the decode in libscopehal is making sense of it.
Not sending anything yet. A lot more work needed on the switch logic in the FPGA to make *that* happen.
Continuing switch bringup work.
All ports (except the four VSC8512 interfaces which aren't responding over MDIO) have link state/speed working and queryable via the MCU.
Something is wonky with the basic status register, it's saying the link is half duplex even though it's negotiated to full duplex (in fact, only advertising full duplex). Not sure if this is a bug or what. Might have something to do with the 8051 microcode patch I haven't yet applied?
Spent a while today debugging on live hardware and finally reproduced the issue in simulation.
Packets more than 32 128-bit words in length will max out the prefetch FIFO but I never continue to fetch traffic after that point. There's a big giant TODO comment I never implemented. Oops.
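A hypothetical model of what the fetch logic should have done (the names and structure are mine; only the 32-word FIFO size comes from the bug):

```python
FIFO_WORDS = 32  # prefetch FIFO capacity, in 128-bit words

def words_to_fetch(packet_words, fetched, fifo_fill):
    # After the initial burst fills the FIFO, fetching has to resume as
    # the reader drains it; the bug was stopping after the first fill.
    return min(packet_words - fetched, FIFO_WORDS - fifo_fill)

# 40-word packet: the first burst fills the FIFO...
print(words_to_fetch(40, 0, 0))    # 32
# ...then once the reader has drained 8 words, fetch the remainder
print(words_to_fetch(40, 32, 24))  # 8
```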
Did a bunch of timing fixes and added some more pipeline stages. Latency is higher than I'd like now and I'll definitely want to work on reducing it, but it should do for a starting point.
Also did some per-link power estimates: about 13.3W in the current test configuration (management port, SFP+ uplink, and two VSC8512 edge ports active at 1 Gbps, no packet traffic).
This climbs to about 13.8W (+0.5W, so 0.25W per interface) if looping back two DP83867 interfaces, and 14W (+0.7W, so 0.35W per interface) looping back two VSC8512 interfaces.
With all links up, I thus project that the total board power consumption would climb to about 17.3W. This would likely increase a bit further with heavy traffic due to increased toggles on the SRAM bus etc.
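The arithmetic behind that projection, assuming the per-interface deltas above hold for the remaining links (10 more VSC8512 ports and 2 DP83867 ports):

```python
baseline_w = 13.3                              # current test configuration
projected = baseline_w + 10 * 0.35 + 2 * 0.25  # remaining VSC8512 + DP83867 ports
print(f"{projected:.1f} W")                    # 17.3 W
```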
Not too bad for a ~16 port switch (counting management and uplink ports). I've also put zero effort into optimizing the FPGA design for power to date, so there's probably things I can do to improve there.
Off the top of my head:
* If an entire group of four baseT links is down or disabled, I can shut down the QSGMII SERDES
* If there's no traffic on the read side of the SRAM bus, I can disable the input terminations
* If there's no traffic on the write side of the SRAM bus, I might be able to tristate the bus except for control signals
* It might be possible to consolidate/optimize PLL configuration to use less PLLs
* There's definitely work to be done to use less long range high fanout clocks on the FPGA
* Improve gating of unused signals on wide buses etc to avoid propagation of toggles that don't do useful work
Always a fun day when you have to write code like this...
Hopefully this will give me a trigger condition that will let me figure out why my switch fabric is deadlocking trying to forward a packet without actually doing anything to it.
Welp. Somehow I'm trying to start forwarding from port #15.
Except I only have 15 ports (14 plus the uplink) and port numbers are zero based.
Looks like I was incrementing the round robin counter but forgot to add the "mod portcount" bit.
And apparently whatever logic Vivado synthesizes for accessing the 16th element of a 15-element vector resulted in the arbiter thinking it had data to send, entering the busy state, but then never getting a done signal.
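A minimal model of the bug and the fix (Python standing in for the RTL):

```python
PORT_COUNT = 15  # 14 ports plus the uplink, zero-based indices 0..14

def next_port_buggy(cur):
    return cur + 1                 # walks off the end after port 14

def next_port_fixed(cur):
    return (cur + 1) % PORT_COUNT  # wraps back to port 0

print(next_port_buggy(14))  # 15 -- indexes a nonexistent 16th element
print(next_port_fixed(14))  # 0
```

In actual RTL this would more likely be a compare-and-reset than a modulo operator, but the effect is the same.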
And after a few more fixes, it's working!
Here an ARP frame shows up on port 0 (g0), is received via QSGMII, transferred to the core clock domain, processed through the SRAM FIFO (all offscreen).
Then at T=32 it's looked up in the MAC address table. At T=35 the table returns "not found", which makes sense since the destination is a layer 2 broadcast.
At T=39 a forwarding decision is made: the frame should be broadcast to all of VLAN 99 except for g0, where the frame came from. In this example config that's ports 5 (g5) and 14 (xg0).
Then at T=41 after some pipeline latency, data begins flowing.
It ends up in /dev/null for now because there's no exit queues between the frame_* control signals and the TX-side MAC IPs. But that's the only missing piece to make this a fully functional, if very basic, switch!
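The forwarding decision in that trace, modeled as a port set (port numbers taken from the example config above, everything else is a sketch):

```python
def broadcast_ports(vlan_members, ingress):
    # Broadcast/unknown destination: flood to every member of the
    # ingress VLAN except the port the frame arrived on.
    return sorted(vlan_members - {ingress})

vlan99 = {0, 5, 14}                 # g0, g5, xg0
print(broadcast_ports(vlan99, 0))   # [5, 14]
```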
FPGA resource usage is growing, but things are still looking good in terms of being able to finish the job - and hopefully fit a full 24 port design in the same FPGA.
Current total fabric usage including the logic analyzer IP is 34% LUT, 23% FF, 39% BRAM, 6% DSP, 100% SERDES (duh), 65% IO, 53% global clocks, 25% MMCM/PLL.
One big unknown is how to scale the architecture up to 24 ports, since the current shared bus architecture is running close to its max performance with 14 ports and assumes a single memory channel. Refactoring this to work with a dual channel RAM controller will be interesting.
One "easy" option is to have essentially two independent sub-switches and a high bandwidth interconnect between them. But that might mean duplicating resources like the MAC address table.
Added exit queues and it's getting fuller. 38% LUT, 25% FF, 48% BRAM, 6% DSP, 100% SERDES, 65% IO, 53% BUFG, 25% MMCM/PLL.
Still missing VLAN tag insertion for outbound trunk ports (and some other logic to propagate VLAN tag information to support that) but in theory it should be capable of switching between access ports now. About to try in hardware, wish me luck!
And no go. My pings aren't being seen and I'm seeing no transmit activity on the QSGMII link.
But at least I have some idea of where to add on-chip debug probes to troubleshoot further.
Ok, turns out there is transmit activity but it's gibberish. Skipping data bytes or something.
Upon closer inspection it seems I had incorrect TX clock configuration (feeding TXUSRCLK with 156.25 MHz instead of 125) due to some confusing GTX configuration. Hopefully this will fix it...
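The clock math, assuming the usual 7 series GTX arrangement of a 5 Gbps QSGMII line rate over a 40-bit internal word (32 data bits expanded by 8b/10b):

```python
line_rate_bps = 5.0e9   # QSGMII line rate
word_bits = 40          # 40-bit internal word (32 data bits x 10/8)
txusrclk_mhz = line_rate_bps / word_bits / 1e6
print(txusrclk_mhz)     # 125.0, not 156.25
```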
It's alive!! First light on the switch passing packets!
When I ping flooded through it, it locked up and stopped forwarding traffic until I reloaded the FPGA. Probably related to one of the dozens of FIFO-full error handling code paths I haven't tested or fully implemented.
Still lots more work to do: VLAN tag insertion on outbound trunk interfaces, 10/100 support in the SGMII MAC, performance counters, tons of error handling, lots of CLI commands, investigating SI on the QSGMII TX diffpair, figuring out why g8-g11 aren't responding on MDIO, power integrity validation...
Found a few more thermometers on the board. Turns out in addition to the externally pinned out thermal diode on the VSC8512 (which I didn't hook up to anything) there is an (undocumented, but used in some example code I dug up) internal digital temperature sensor.
There's also one on the STM32.
Fixed a bunch of bugs and reduced latency of the QDR-II+ controller. End to end latency from read request to full burst data in hand - including PCB trace delays and clock domain crossing but not the additional pipeline stage for ECC - is now down to nine clocks at 187.5 MHz (48 ns). Probably more room to improve further on that but it's already way better than the 11-17 cycles I was seeing before with a less efficient CDC structure.
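Sanity-checking the numbers:

```python
clk_mhz = 187.5
cycles = 9
latency_ns = cycles * 1000 / clk_mhz  # cycles * period in ns
print(latency_ns)  # 48.0
```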
It no longer falls over instantly when ping flooded, however sustained floods (especially with preload) still make it start corrupting packets. So I've fixed the easiest-to-trigger bug and there's still more.
Debating how much time I want to spend chasing bugs in the current fabric architecture since I know it won't scale to 24 ports and barely makes timing as-is. Might just blow away everything between the input FIFOs and the MAC table and redo it clean slate.
Welp, seems I have a new bug: I'm reading a frame out of the input FIFO that's shifted by one word.
The first word of the packet (src/dest MAC address, ethertype, and first 4 bytes of payload) is gone (sent as part of the previous packet), and there's an extra word at the end that I assume is the start of the subsequent packet.
Seems to be triggered by heavy traffic like ping floods, but haven't caught it happening on the write side yet.
So far not sure if the FIFO pointers are getting desynced or if I'm writing bad data out of the CDC.
Nope, the SRAM FIFO is fine. Garbage in, garbage out.
So the problem is happening earlier on, in the CDC or maybe as I'm filling buffers to be written to SRAM?
Yeeep, it's something in the CDC FIFO (or the logic interfacing with it).
When the packet that actually goes sideways starts, there's already six words of data in the CDC buffer. But all of the other state (most notably packet metadata with length, VLAN ID, etc.) is missing, so that data gets ignored and isn't popped until more data shows up, at which point you get a hodgepodge of both packets.
Still don't know which clock domain the actual bug is in so this will be fun...
Oops it's 3:30 AM and I have to be awake for work tomorrow... But I think I found the bug.
If I'm right it's one of those "how did this ever work" moments. Very confused as to how ping flooding makes it fail, it seems like it should *always* fail with packets of a certain length mod 16.
Nope, that wasn't it. But it put me on the trail of the actual bug.
Not one but *two* packets before SHTF, something goes wrong. There's nothing in the metadata fifo, there's nothing visible on the read side of the data fifo, but the *write* side of the data fifo shows 506 free words, out of a capacity of 512.
Meaning something pushed six words into it, then (for at least the few hundred clocks I have data captured for), never asserted the "commit" flag.
This CDC FIFO has a commit/rollback mechanism intended to be used for store-and-forward packet processing; the write side maintains a private write pointer that is only pushed to the read side when you hit "commit". Until then, the available space is decreased but the read side still shows empty.
The intent is to commit on end of packet with valid FCS, and roll back on end of packet with invalid FCS or if the FIFO fills prior to the end of a packet. Having stale data in the buffer that never gets committed/rolled back SHOULD be impossible...
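A toy model of the commit/rollback mechanism (a conceptual sketch, not the actual RTL), showing the stuck state from the capture: pushed words eat into free space but stay invisible to the reader until commit.

```python
class CommitFifo:
    def __init__(self, depth):
        self.depth = depth
        self.committed = []   # visible to the read side
        self.pending = []     # behind the private write pointer

    def push(self, word):
        if len(self.committed) + len(self.pending) < self.depth:
            self.pending.append(word)

    def commit(self):         # end of packet, FCS good
        self.committed += self.pending
        self.pending = []

    def rollback(self):       # end of packet, FCS bad (or overflow)
        self.pending = []

    def free(self):           # write side's view of available space
        return self.depth - len(self.committed) - len(self.pending)

    def empty(self):          # read side's view
        return not self.committed

fifo = CommitFifo(512)
for w in range(6):
    fifo.push(w)
# Six words pushed, never committed: 506 free, yet the reader sees empty
print(fifo.free(), fifo.empty())  # 506 True
```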
And here's the root cause: https://github.com/azonenberg/latentpacket/commit/15a9c4359809ae00801205d9f1fa73a02463f06d
The VLAN tag removal logic on the input side, between the MAC and the CDC FIFO, was failing to forward the "drop" flag. So any time a packet had a FCS failure, the metadata would be discarded and the packet content would be prepended to the next valid packet.
This solves the "ping -f" hang; I just did a test of 100K pings with only 25 drops and it was still working fine after that.
This now raises two new questions:
1) Why did I still lose 25 packets? Judging by the previous bug, at least some are getting FCS errors. Is this signal integrity on the QSGMII link, a logic bug in the MAC, or something else?
2) When I ping flood with preload, i.e. ping -f -l 50, the switch still hard locks up pretty quickly. So I have a second, likely unrelated bug caused by a lot of packets in quick succession.
Looks like the incoming data is occasionally (25 of 100K packets in my last test) getting corrupted somewhere between the upstream switch MAC and my 32 bit MAC data bus.
In between:
* Switch PHY
* On rack patch cable
* Plant cable
* Bench patch cable
* Magjack and PCB
* VSC8512
* QSGMII link to 7 series GTX
* My QSGMII to SGMII demux
* My SGMII PCS
* My GMII MAC
Suspecting something in the serdes/QSGMII region, but not sure yet.
Closing in on this bug.
The data coming off the PHY is fine, verified by sniffing and protocol decoding the QSGMII link.
The data entering the decode side of the PCS (after elastic buffer shifting from SERDES clock domain to MAC clock domain) is wrong.
First guess: something in that buffer is borked and it's filling up, rather than dropping idles between packets when it gets too full like it's supposed to. If the remote side of the link has a clock a few ppm faster than the FPGA, the FPGA will have to occasionally drop idles to rate match. If that logic is broken we'll just see random bytes of data not show up when they should.
Hmmmm. It helps if your elastic buffer drops extra idle ordered sets when it's almost *full*.
Not when almost *empty*. 🤦♂️
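The rule the elastic buffer is supposed to implement, as a one-liner (the thresholds here are made up for illustration):

```python
def drop_idle(occupancy, high_water=12):
    # Rate matching: if the far end's clock is a few ppm fast, the
    # buffer slowly fills, so drop idle ordered sets when occupancy is
    # *high*. The bug was testing against a low watermark instead.
    return occupancy >= high_water

print(drop_idle(13))  # True  -- nearly full: drop an idle to catch up
print(drop_idle(3))   # False -- nearly empty: keep everything
```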
OK, this one is interesting.
The switch is forwarding packets that are completely correct except for the first 16 bytes, which at first glance appear to be gibberish.
The 16 byte size is a clue, since most of the fabric and the external packet buffer SRAM are using a 128-bit datapath, while the MAC/PCS blocks are narrower (8-32 bits at various spots).
So the problem here is likely a lot closer to the core than the previous bug.
When your 16K entry FIFO has 16388 free spots in it, that's awesome!
It's a TARDIS or something, bigger on the inside than the outside. ... right?
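One way to get exactly that impossible number (a hypothetical reconstruction of the bug class, not the actual RTL): if the occupancy subtraction is allowed to go negative, say the read pointer ends up 4 ahead of the write pointer, the reported free space overshoots the physical depth by that same amount.

```python
DEPTH = 16384  # 16K entries

def free_space(wptr, rptr):
    used = wptr - rptr   # can go negative if the pointers desync
    return DEPTH - used

print(free_space(wptr=100, rptr=104))  # 16388 -- bigger on the inside
```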
Switch fabric reliability is improving! I'm now needing heavier and heavier loads and triggering less frequent bugs.
The one I'm chasing now involves a port getting stuck in the PREFETCH state, indicating it's asked for data from external RAM but it got less data than it expected.
I'm actually getting up to a pretty decent link utilization with this ping flood. Far from saturating the pipe, but looks like maybe 20-30% ish?
@jpm Iperf will happen once I'm ready to stress it to the max.
Ping is easier for debug since the packets are serialized and I get nice feedback as to which ones didn't make it, which I can cross-check against scope/LA captures to figure out where things went bad.
@jpm This is also a single port pair test (upstream -> g2, laptop -> g0) with no other ports participating.
For a more proper stress test I'll make a bunch of vlans and add daisy-chain cables so a frame might come in g0, out g2, in g4, out g6, in g8, out g10, in g12, out g14. This will create a lot more load on the fabric without me having to hook up a dozen separate machines running separate iperf servers etc.
But I still can't run the fabric beyond 50% load until I finish reworking all of the odd-numbered ports (in the upper row) to fix the pin swaps. I did one as a test to confirm that this was the only problem, but still have to do the other six.