If you exercise enough researcher degrees of freedom in conducting your experiment, you can make almost any stupid hypothesis (appear to) satisfy p<0.05.

False positive rates can run as high as 60% — against the nominal 5% that p<0.05 is supposed to guarantee.

These researchers (appeared to) prove, to p<0.05, that listening to kids' music made participants 1.5 years younger.

You can make anything "statistically significant".

Searching for the statistically significant effect.

But why?

So you have something to publish.

The authors conducted a real study of whether listening to kids' music made participants feel younger.

It did.

They then tested whether it MADE them younger ...

...

It did.

By adjusting the degrees of freedom in the experiment, you can generate frighteningly high false positive rates.

That is, you can adjust the methods to get a positive result that is NOT TRUE.

Just collect some data, and if the test fails to reach significance, collect a bit more and test again — repeat until you get a positive.
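This "optional stopping" trick is easy to simulate. The sketch below is my own illustration, not the paper's code: it repeatedly runs experiments where there is NO real effect, peeks at the p-value after every new batch of data, and stops as soon as p<0.05. The peeking version declares "significance" far more often than the nominal 5%.

```python
import math
import random

def z_test_p(xs):
    """Two-sided p-value for H0: mean = 0, with known sd = 1 (simple z-test)."""
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    # Normal CDF via erf; p = 2 * (1 - Phi(|z|))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def one_experiment(rng, peek=True, n_start=20, n_step=10, n_max=100):
    """Return True if the experiment ever reports p < 0.05 under the null."""
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_start)]  # no real effect
    while True:
        if peek and z_test_p(xs) < 0.05:
            return True  # stop early and declare 'significance'
        if len(xs) >= n_max:
            return z_test_p(xs) < 0.05  # final (or only) test
        xs += [rng.gauss(0.0, 1.0) for _ in range(n_step)]

rng = random.Random(0)
sims = 5000
optional_rate = sum(one_experiment(rng, peek=True) for _ in range(sims)) / sims
fixed_rate = sum(one_experiment(rng, peek=False) for _ in range(sims)) / sims
print(f"fixed n=100:       false positive rate = {fixed_rate:.3f}")
print(f"optional stopping: false positive rate = {optional_rate:.3f}")
```

The fixed-sample version hovers near the promised 5%; testing after every batch roughly triples it, even though every individual test is perfectly valid.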
Here's how they suggest researchers avoid this bias.
The same applies to RCTs. Besides being the wrong tool for testing masks, etc., this is no doubt why so many fail — especially the meta-studies, where authors have huge degrees of freedom to exclude whatever "biased" or "methodologically wrong" study they want.
Here is an excellent demonstration of the effect. Results:
- NOT significant at the beginning,
- BECOME statistically significant at a certain point, but
- MORE data shows the effect is in fact not significant.
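You can reproduce that exact picture with pure-null data. This sketch (again my own illustration, stdlib only) tracks the running p-value of a z-test as observations accumulate; to make the pattern visible it scans random seeds for a run that dips under 0.05 mid-stream yet ends non-significant — significant at some n, not significant with more data.

```python
import math
import random

def z_test_p(xs):
    """Two-sided p-value for H0: mean = 0, with known sd = 1."""
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def trajectory(seed, checkpoints=range(20, 201, 10)):
    """Running p-values of a no-effect experiment at each sample size."""
    rng = random.Random(seed)
    xs, ps = [], []
    for n in checkpoints:
        xs += [rng.gauss(0.0, 1.0) for _ in range(n - len(xs))]
        ps.append(z_test_p(xs))
    return ps

# Scan seeds for an illustrative run: 'significant' somewhere in the
# middle, but not significant once all the data are in.
found_seed = None
for seed in range(1000):
    ps = trajectory(seed)
    if min(ps) < 0.05 and ps[-1] >= 0.05:
        found_seed = seed
        break

ps = trajectory(found_seed)
for n, p in zip(range(20, 201, 10), ps):
    flag = "  <-- 'significant'" if p < 0.05 else ""
    print(f"n={n:3d}  p={p:.3f}{flag}")
```

A researcher who stopped at the dip would publish a "real" effect; the full run shows there never was one.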
The authors conclude with my point: science is the search for truth, yet it yields to everyday pressures such as publish or perish.

Which is why the World Health Organization tossing money at stupid meta-studies to keep the "droplet" tradition alive is just egregious patronage, propping up the status quo of the system.

This money could go to real science.

What a waste.

What bad science.

What a blow to credibility.