@david my first guess is that you have a lot of small files and something is causing zfs to insert a lot of padding.
Is ashift the same on both pools (zpool get ashift, I think)? My guess is the source may be ashift=9 (512-byte minimum allocation) and the destination ashift=12 (4k minimum).
Is the source not raidz while the destination is?
How are you looking at total space? Note that zpool and zfs commands report different things.
@david 8k recordsize + compression can interact poorly with ashift=12 as well. Suppose an 8k block compresses to 4200 bytes. With ashift=12, that compressed block consumes 2 x 4k sectors (8k total, so compression saves nothing). With ashift=9, it consumes 9 x 512b sectors (4.5k total).
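To make that padding math concrete, here's a quick sketch (plain Python, no ZFS involved) of rounding a compressed block up to whole sectors at a given ashift:

```python
def allocated_bytes(compressed_size, ashift):
    """Round a compressed block up to whole sectors of 2**ashift bytes."""
    sector = 1 << ashift
    nsectors = -(-compressed_size // sector)  # ceiling division
    return nsectors * sector

# An 8k block that compresses to 4200 bytes:
print(allocated_bytes(4200, 12))  # 2 x 4k sectors  = 8192 bytes
print(allocated_bytes(4200, 9))   # 9 x 512b sectors = 4608 bytes
```

So at ashift=12, any block that compresses to more than 4k still burns a full 8k.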
With raidz, the overhead also varies with the number of drives in the vdev. See my explanation here:
https://github.com/openzfs/zfs/blob/master/lib/libzfs/libzfs_dataset.c#L5340-L5426
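The gist of the raidz accounting, as I understand it from the asize logic in OpenZFS (this is my own Python sketch of vdev_raidz_asize(), not code from the tree): data sectors get one parity sector per parity level per row of data, and the total is then padded up to a multiple of (nparity + 1) so freed space stays allocatable.

```python
def raidz_asize(psize, ashift, ndisks, nparity):
    """Bytes actually allocated for a psize-byte block on a raidz vdev
    (a sketch mirroring OpenZFS's vdev_raidz_asize, per my reading)."""
    sectors = ((psize - 1) >> ashift) + 1                 # data sectors
    # one parity sector per parity level per row of (ndisks - nparity) data
    sectors += nparity * -(-sectors // (ndisks - nparity))
    # pad to a multiple of (nparity + 1)
    sectors = -(-sectors // (nparity + 1)) * (nparity + 1)
    return sectors << ashift

# An 8k block on a 6-wide raidz2:
print(raidz_asize(8192, 12, 6, 2))  # 24576 bytes (2 data + 2 parity + 2 pad sectors)
print(raidz_asize(8192, 9, 6, 2))   # 12288 bytes (16 data + 8 parity sectors)
```

Same 8k block, same vdev shape, 2x difference in allocated space just from ashift.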
@david while we have concluded raidz is not to blame here, I figured it may be worth mentioning that I did a talk on this work while at #Joyent.
Slides: https://us-east.manta.joyent.com/Joyent_Dev/public/docs/2019-06-RAIDZ_on_small_blocks.pdf
Video: https://youtu.be/sTvVIF5v2dw
Contrary to what I predicted back then, today’s NVMe SSDs pretty much all present as 512n, not as 4Kn.