I have two directories that must contain identical files. I can use jdupes or fdupes to find & list the files that are duplicates between these directories. How can I do the opposite & find the files that differ from each other? My original idea was to hash all the files, sort by file name so that lines for the same file only differ by hash, then use diff to pick out the names of files with wrong hashes, but I'm not sure how to process diff's output to get that.

#Linux
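
A minimal sketch of that hash-sort-diff idea, assuming both trees contain the same relative paths (the directory names a/ and b/ and the sample files are placeholders for illustration):

```shell
# Placeholder trees: keep.txt matches in both, bad.txt differs.
mkdir -p a b
printf 'same\n' > a/keep.txt;  printf 'same\n' > b/keep.txt
printf 'one\n'  > a/bad.txt;   printf 'two\n'  > b/bad.txt

# Hash every file by its path relative to each directory,
# then sort each listing by the file-name field so the two
# files line up and only differ where the hashes differ.
( cd a && find . -type f -exec sha256sum {} + | sort -k2 ) > a.sums
( cd b && find . -type f -exec sha256sum {} + | sort -k2 ) > b.sums

# diff marks changed lines with "<" / ">"; field 3 of those lines is
# the file name, & the final sort -u collapses the <-and-> pair per file.
diff a.sums b.sums | awk '/^[<>]/ {print $3}' | sort -u
```

With the placeholder trees above this prints only ./bad.txt. One caveat: a file that exists in only one directory also shows up, which may or may not be what you want.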

I've been trying to create an archive of some old family photos, but the NTFS partition they're on has some metadata values too large for any archiving utility to store, & they all refuse to add those files. So I copied everything to a BTRFS partition with rsync (which also complained about the same files but copied them anyway), & then I was able to actually archive the files. I'm double-checking everything though, & found that some files did not actually copy correctly, despite rsync supposedly verifying files to ensure a correct copy. I don't know which files, just that there are over 300 of them. I want to retry just those files, because rsync seems to update timestamps on all files even when I tell it not to, & I want to avoid that. Also because the copy takes three hours.
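
One way to retry only the bad copies without rsync touching every file's timestamp is to compare the trees with cmp and re-copy just the mismatches. A sketch under the assumption that both trees share relative paths; src/ and dst/ and the sample files are hypothetical stand-ins for the NTFS and BTRFS trees:

```shell
# Hypothetical trees: ok.jpg copied correctly, bad.jpg got corrupted.
mkdir -p src dst
printf 'photo-ok\n' > src/ok.jpg;  cp src/ok.jpg dst/ok.jpg
printf 'original\n' > src/bad.jpg; printf 'corrupt\n' > dst/bad.jpg

# cmp -s exits non-zero when file contents differ (or dst copy is
# missing), so only those files get re-copied; -p keeps the source
# timestamps & leaves every already-correct file in dst/ untouched.
( cd src && find . -type f -print ) | while read -r f; do
  if ! cmp -s "src/$f" "dst/$f"; then
    cp -p "src/$f" "dst/$f"
  fi
done
```

This assumes reasonably tame file names (the read loop splits on newlines); for paths with leading/trailing whitespace you'd want find -print0 with a null-delimited read instead.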
@jackemled yeah, hashing all the files and comparing hashes is how I've done this kind of thing in the past. You could do a sort and uniq on a full list of all hashes to get a list of hashes that have changed.
@chrisbier Is that the same as cat FILES… | sort -u? That might work, but it would leave duplicate lines for each differing file. I guess I could run it twice, keeping unique lines by the second key instead of by the whole line.
@chrisbier No, that includes the correct files too: it keeps the first of each group of duplicate lines instead of removing all instances.
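
For what it's worth, the behavior this exchange is reaching for is uniq -u rather than sort -u: uniq -u drops every line that occurs more than once, instead of keeping one copy. A sketch assuming both trees share relative paths (the x/ and y/ directories and files are placeholders):

```shell
# Placeholder trees: keep.txt matches, bad.txt differs.
mkdir -p x y
printf 'same\n' > x/keep.txt; printf 'same\n' > y/keep.txt
printf 'one\n'  > x/bad.txt;  printf 'two\n'  > y/bad.txt

# Matching files yield two identical "hash  name" lines, which
# uniq -u removes entirely; differing files survive as two lines
# with different hashes, & the final sort -u collapses the pair
# of surviving lines down to one name per file.
{ ( cd x && find . -type f -exec sha256sum {} + )
  ( cd y && find . -type f -exec sha256sum {} + ); } \
  | sort | uniq -u | awk '{print $2}' | sort -u
```

With the placeholder trees this prints only ./bad.txt, with no diff parsing needed.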