The more I publish, the more frustrated I am with the review process. Papers get rejected by reviewers who clearly have not paid attention while reading them. I can understand making mistakes, but I cannot understand:

* Asking for information that a Ctrl+F through the document would have found. "Please provide key training and inference details such as optimizer, learning rate schedule, number of epochs/iterations, weight decay, batch size, data augmentation, input preprocessing, and inference speed." All of these are in the "Implementation" section.
* Criticizing the lack of "visual figures" without saying which figures are expected, beyond a broad statement. Just "The paper lacks enough visual experimental figures that clearly demonstrate the effectiveness of the proposed method. Please add additional qualitative results (e.g., more visual comparisons with baselines, failure cases, and ablation visualizations) to better support your claims". I'm sorry, do you not like means, standard deviations, and statistical tests? Mind you, this is a paper about IMU data, not images. I have more than 100 samples cut into smaller windows, so you can do the math.

Also, because I am going crazy, can anyone tell me what this means: "add ablations (e.g., diagonal vs. correlated uncertainty) if possible", in the context of a method that predicts a single scalar uncertainty per IMU data sample?
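(My best guess, for anyone else puzzled: in multivariate regression, "diagonal" usually means predicting one independent variance per output dimension, while "correlated" means predicting a full covariance matrix. Here is a minimal sketch of that distinction, with hypothetical shapes and names of my own choosing; the point is that there is nothing to ablate when the model outputs one scalar per sample.)

```python
import torch

# Hypothetical multivariate-regression setup the reviewer may have in mind.
D = 6  # e.g., 3-axis accelerometer + 3-axis gyroscope channels
mean = torch.zeros(D)

# "Diagonal" uncertainty: one independent variance per output dimension.
diag_std = torch.ones(D)
diag_dist = torch.distributions.Normal(mean, diag_std)

# "Correlated" uncertainty: a full D x D covariance matrix, here built
# from its Cholesky factor L (Sigma = L @ L.T).
L = torch.eye(D)
full_dist = torch.distributions.MultivariateNormal(mean, scale_tril=L)

# A method that predicts a single scalar uncertainty per sample has no
# covariance structure at all, so there is no diagonal-vs-correlated
# ablation to run.
scalar_uncertainty = torch.tensor(0.3)
```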

All this after the paper sat with the editor for 4 months before even being sent out for review.

#review #peerReview #science

Complained to the AE, and was sent back the exact comments from the #reviewers that I demonstrated are incorrect or already addressed. The AE says they read the reviewers' comments again and stand by their judgement, but _did they read our paper, which already answers those comments_? I guess not.

Why be an associate editor when you clearly don’t want to do the job…?

#PeerReview #science

Like, this is what counts as grounds for rejection according to them:

"I believe rejection is appropriate.

1. Lack of novelty and comparison with related methods:

* Reviewer 1: The paper suggests there is no prior work on uncertainty-aware calibration for IMU-based gesture detection. But uncertainty and calibration have many related studies in time-series and other tasks.

> Yes, but not on IMU data for gesture recognition...

* Reviewer 2: The review of related work in the interdisciplinary field of IMU gesture recognition and model calibration is not in-depth enough. It fails to clearly distinguish the technical differences between UAC and existing uncertainty-aware methods (such as Monte Carlo Dropout and Deep Ensembles), and does not highlight the unique innovative value of the "two-stage calibration + entropy-weighted aggregation" strategy.

> "Monte Carlo Dropout and Deep Ensembles" are *not even* uncertainty aware methods. They are methods to improve calibration.

To be continued...

2. Lack of further results like feature visualization, downstream results and so on.

* Reviewer 1: The paper lacks enough visual experimental figures that clearly demonstrate the effectiveness of the proposed method.

> I don't have pretty pictures to impress you in this paper. Just numbers and stats over large datasets. Does that mean the science is wrong?

* Reviewer 2: Only the accuracy and calibration of the fusion model itself are evaluated, without verifying its performance improvement for downstream safety-critical tasks, resulting in a lack of practicality validation."

> That is a whole-ass new paper you are asking for, because this data is not available as far as I know. There are very few datasets of **checks notes** people wearing IMU sensors while in a situation where they might hurt themselves and die.

#science #peerreview