The more I publish, the more I am frustrated by the review process. Papers get rejected by reviewers who clearly have not paid attention while reading them. I can understand making mistakes, but I cannot understand:
* Asking for information that a Ctrl+F in the document would have found. "Please provide key training and inference details such as optimizer, learning rate schedule, number of epochs/iterations, weight decay, batch size, data augmentation, input preprocessing, and inference speed." All of these are in the "Implementation" section.
* Criticizing the lack of "visual figures" without saying which figures they expect, beyond a broad statement. Just: "The paper lacks enough visual experimental figures that clearly demonstrate the effectiveness of the proposed method. Please add additional qualitative results (e.g., more visual comparisons with baselines, failure cases, and ablation visualizations) to better support your claims". I'm sorry, do you not like means, standard deviations, and statistical tests? Mind you, this is a paper about IMU data, not images. I have more than 100 samples cut into smaller windows, so you can do the math.
Also, because I am going crazy, can anyone tell me what this means: "add ablations (e.g., diagonal vs. correlated uncertainty) if possible", in the context of a method that predicts a single scalar uncertainty for an IMU data sample?
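My best guess at the reviewer's jargon: for a multivariate output, a "diagonal" uncertainty model predicts one variance per axis (cross-axis correlations assumed zero), while a "correlated" model predicts a full covariance matrix with off-diagonal terms. With a single scalar uncertainty per sample there are no off-diagonals, so there is nothing to ablate. A minimal sketch of the distinction (all names and numbers below are hypothetical, not from my paper):

```python
import numpy as np

# Hypothetical 6-D IMU residual (3 accel + 3 gyro axes); numbers are made up.
resid = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 0.02])

def nll_diag(resid, var):
    """Gaussian NLL with a diagonal covariance: one variance per axis,
    cross-axis correlations assumed to be zero."""
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

def nll_full(resid, cov):
    """Gaussian NLL with a full ("correlated") covariance matrix."""
    sign, logdet = np.linalg.slogdet(2 * np.pi * cov)
    return 0.5 * (logdet + resid @ np.linalg.solve(cov, resid))

var = np.full(6, 0.04)   # diagonal model: 6 predicted variances
cov = np.diag(var)       # same model written as a full covariance, zero correlations
# With zero off-diagonal terms the two losses coincide:
assert np.isclose(nll_diag(resid, var), nll_full(resid, cov))

# A "correlated" model would predict nonzero off-diagonals,
# e.g. coupling between an accel axis and a gyro axis:
cov_corr = cov.copy()
cov_corr[0, 3] = cov_corr[3, 0] = 0.01
print(nll_diag(resid, var), nll_full(resid, cov_corr))  # the two now differ
```

With a scalar uncertainty the "covariance" is a 1x1 matrix, so diagonal and correlated are literally the same thing.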
All this after the paper sat with the editor for 4 months before even being sent out for review.