The RFC distinguishes subscription feeds from archive feeds: subscription feed documents are expected to be unstable, while archive feed documents are expected to be immutable -- so once you encounter a URL in the chain that you have already visited, you're done fetching.
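That termination rule can be sketched as a simple walk with a visited set. This is a minimal illustration, not code from the RFC: `fetch_links` is a hypothetical stand-in for fetching and parsing a feed document (here it just looks the document up in a dict), and I'm assuming the `prev-archive` link relation that RFC 5005 defines for archived feeds.

```python
def walk_archive(start_url, fetch_links):
    """Follow rel="prev-archive" links, never visiting a URL twice.

    Returns the list of archive document URLs in the order fetched.
    """
    seen = set()
    order = []
    url = start_url
    while url and url not in seen:
        seen.add(url)
        order.append(url)
        links = fetch_links(url)  # list of (rel, href) pairs
        url = next((href for rel, href in links if rel == "prev-archive"), None)
    return order

# Toy chain: current feed -> archive 2 -> archive 1, with a link
# pointing back at an already-visited document, which ends the walk.
docs = {
    "/feed": [("prev-archive", "/archive/2")],
    "/archive/2": [("prev-archive", "/archive/1")],
    "/archive/1": [("prev-archive", "/archive/2")],
}
print(walk_archive("/feed", docs.get))  # → ['/feed', '/archive/2', '/archive/1']
```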
Whether (and how) people actually use subscription feeds and archive feeds in practice, I don't know. If an entire logical feed -- all of its page documents -- has to be either subscription or archive, does that mean an archive feed can't have a current head? And if I sort the document chain in reverse chronological order, is a document describing older entries "next" or "previous"?
It seems to me a good HTTP citizen would make any document immutable when possible, but a conforming Atom client still needs to check every document in a subscription feed to be sure. Time stamps and ETags would make it unnecessary to literally fetch all the document contents -- the client could get 304s when revalidating -- but not having to check at all would be even better. HTTP cache headers make this even less of an issue: if you have a subscription feed where most pages are in practice immutable, you could serve them with `Cache-Control` and a high `max-age`, or, if you have hashes in your URLs, you could even be brave and use `immutable`.
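Both halves of that story are small enough to sketch. A rough illustration under my own assumptions (the cache entry shape and the concrete `max-age` values are made up for the example, not prescribed anywhere): the client side builds a conditional request so the server can answer 304 instead of resending the body, and the server side picks a `Cache-Control` value depending on whether the document is an effectively-immutable archive page.

```python
def conditional_headers(cached):
    """Request headers for revalidating a cached feed document.

    `cached` is a hypothetical local cache entry with optional
    "etag" and "last_modified" fields saved from earlier responses.
    """
    headers = {}
    if cached.get("etag"):
        headers["If-None-Match"] = cached["etag"]
    if cached.get("last_modified"):
        headers["If-Modified-Since"] = cached["last_modified"]
    return headers

def cache_control(is_archive, hashed_url=False):
    """Server-side Cache-Control: short-lived for the subscription
    head, long max-age for archive pages, `immutable` only when the
    URL itself carries a content hash."""
    if not is_archive:
        return "max-age=300"       # head document: revalidate often
    value = "max-age=31536000"     # archive page: stable in practice
    if hashed_url:
        value += ", immutable"
    return value

print(conditional_headers({"etag": '"abc123"'}))
print(cache_control(True, hashed_url=True))
```

The `immutable` directive tells caches not to revalidate at all within `max-age`, which is exactly the "not having to even check" case -- safe only when a changed document is guaranteed to get a new URL.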
Sounds like all of this is fodder for an #HPREp. I seem to remember there is someone on the Fediverse who scours feed-related projects out there and files issues asking them to please handle paged feeds correctly -- it would be cool to interview that person.