De omnibus dubitandum

https://lemmy.zip/post/6072022


Academic philosophy is mostly concerned with the Greeks and Germans. The Romans had their philosophers, but they did not have the same influence on modern thought.

Also, philosophers often keep an original word or phrase because it cannot be translated well into English. Language evolves over time, and concepts as they were originally understood can be lost or muddled by the modern uses of a substitute word. Etymology is also increasingly important in philosophy.

OP confused philosophers with lawyers, probably.
Random Number Simulations Reveal How Random Noise Affects the Measurements and Graphical Portrayals of Self-Assessed Competency

Self-assessment measures of competency are blends of an authentic self-assessment signal that researchers seek to measure and random disorder or "noise" that accompanies that signal. In this study, we use random number simulations to explore how random noise affects critical aspects of self-assessment investigations: reliability, correlation, critical sample size, and the graphical representations of self-assessment data. We show that graphical conventions common in the self-assessment literature introduce artifacts that invite misinterpretation. Troublesome conventions include: (y − x) vs. (x) scatterplots; (y − x) vs. (x) column graphs aggregated as quantiles; line charts that display data aggregated as quantiles; and some histograms. Graphical conventions that generate minimal artifacts include scatterplots with a best-fit line that depict (y) vs. (x) measures (self-assessed competence vs. measured competence) plotted by individual participant scores, and (y) vs. (x) scatterplots of collective average measures of all participants plotted item-by-item. This last graphic convention attenuates noise and improves the definition of the signal. To provide relevant comparisons across varied graphical conventions, we use a single dataset derived from paired measures of 1154 participants' self-assessed competence and demonstrated competence in science literacy. Our results show that different numerical approaches employed in investigating and describing self-assessment accuracy are not equally valid. By modeling this dataset with random numbers, we show how recognizing the varied expressions of randomness in self-assessment data can improve the validity of numeracy-based descriptions of self-assessment.
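A minimal sketch of the kind of random number simulation the abstract describes (not the authors' actual code; the sample size matches the study, but the signal and noise parameters are illustrative assumptions). It shows how plotting (y − x) against (x) produces a spurious negative trend, because the noise in x appears, with opposite sign, on both axes:

```python
import random
import statistics

random.seed(42)

# Model each participant's paired scores as a shared true-competence
# signal plus independent random noise on each measure (assumed parameters).
n = 1154                                                  # study's sample size
true = [random.gauss(50, 10) for _ in range(n)]           # latent competence
measured = [t + random.gauss(0, 8) for t in true]         # demonstrated competence
self_assessed = [t + random.gauss(0, 8) for t in true]    # self-assessed competence

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

# (y - x) vs. (x): the measurement noise in x enters both axes with
# opposite signs, so even here, where y and x track the same true signal
# equally well, the difference score correlates negatively with x.
diff = [y - x for x, y in zip(measured, self_assessed)]
r_artifact = pearson(diff, measured)       # clearly negative (about -0.44 in theory)
r_direct = pearson(self_assessed, measured)  # positive, reflecting the shared signal

print(f"(y - x) vs. x: r = {r_artifact:.2f}")
print(f"y vs. x:       r = {r_direct:.2f}")
```

With these assumed variances, the theoretical artifact correlation is −var(noise)/sqrt(var(y − x)·var(x)) ≈ −0.44, even though self-assessment and measurement track the true signal equally well, which is the kind of graphical misinterpretation the paper warns about.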

Digital Commons @ University of South Florida