New Research from VSP Vision Care and Workplace Intelligence Finds Desk Workers Spend Nearly 100 Hours a Week on Screens; 71% Say Screen-Related Visual Discomfort Is Reducing Productivity
Oh, joy! Researchers have finally cracked the code to power our future with the same fungi that make our overpriced stir-fry taste earthy. Because who wouldn't want their high-frequency
#bioelectronics to double as a garnish?
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0328965 #fungiinnovation #sustainablefuture #foodtech #researchhumor #HackerNews #ngated
Sustainable memristors from shiitake mycelium for high-frequency bioelectronics
Neuromorphic computing, inspired by the structure of the brain, offers advantages in parallel processing, memory storage, and energy efficiency. However, current semiconductor-based neuromorphic chips require rare-earth materials and costly fabrication processes, whereas neural organoids need complex bioreactor maintenance. In this study, we explored the shiitake fungus (Lentinula edodes) as a robust, sustainable alternative, exploiting its adaptive electrical signaling, which is akin to neuronal spiking. We demonstrate fungal computing via mycelial networks interfaced with electrodes, showing that fungal memristors can be grown, trained, and preserved through dehydration, retaining functionality at frequencies up to 5.85 kHz, with an accuracy of 90 ± 1%. Notably, shiitake has exhibited radiation resistance, suggesting its viability for aerospace applications. Our findings show that fungal computers can provide scalable, eco-friendly platforms for neuromorphic tasks, bridging bioelectronics and unconventional computing.
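The abstract doesn't specify an electrical model for the fungal device, but the memristive behavior it relies on can be illustrated with a generic linear-drift sketch. All constants here are assumed round numbers for illustration, not measurements from shiitake mycelium:

```python
# Generic linear-drift memristor sketch (in the spirit of Strukov et al.),
# NOT a model of the paper's fungal device: resistance depends on the
# charge that has flowed through the device, which is what lets a
# memristor act as an analog memory element.
R_ON, R_OFF = 100.0, 16_000.0  # limiting resistances in ohms (assumed)
MU = 5e4                       # drift constant (assumed, sized to show the effect)

def simulate(voltages, dt=1e-4, w=0.0):
    """Apply a voltage sequence; return (currents, final internal state w)."""
    currents = []
    for v in voltages:
        r = R_ON * w + R_OFF * (1.0 - w)          # resistance tracks state w
        i = v / r
        currents.append(i)
        w = min(1.0, max(0.0, w + MU * i * dt))   # charge drives the state
    return currents, w
```

Driving the device with positive pulses lowers its resistance; reversing the polarity raises it again, so the device "remembers" its stimulation history, which is the property a trained mycelial network would exploit.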
In the latest snoozer from the ivory tower, we're treated to a riveting tale of how machine learning is somehow the new yardstick for "order" in aperiodic sequences. Because clearly, the universe was just waiting on edge for this groundbreaking revelation to explain why your WiFi is still terrible. Keep those donations coming,
#academia needs more coffee.
https://arxiv.org/abs/2509.18103 #machinelearning #aperiodicsequences #technews #researchhumor #WiFiissues #HackerNews #ngated
Machine Learnability as a Measure of Order in Aperiodic Sequences
Research on the distribution of prime numbers has revealed a dual character: deterministic in definition yet exhibiting statistical behavior reminiscent of random processes. In this paper we show that it is possible to use an image-focused machine learning model to measure the comparative regularity of prime number fields at specific regions of an Ulam spiral. Specifically, we demonstrate that in pure accuracy terms, models trained on blocks extracted from regions of the spiral in the vicinity of 500m outperform models trained on blocks extracted from the region representing integers lower than 25m. This implies the existence of more easily learnable order in the former region than in the latter. Moreover, a detailed breakdown of precision and recall scores seems to imply that the model favours a different approach to classification in different regions of the spiral, focusing more on identifying prime patterns at lower numbers and more on eliminating composites at higher numbers. This aligns with number theory conjectures suggesting that at higher orders of magnitude we should see diminishing noise in prime number distributions, with averages (density, AP equidistribution) coming to dominate, while local randomness regularises after scaling by log x. Taken together, these findings point toward an interesting possibility: that machine learning can serve as a new experimental instrument for number theory. Notably, the method shows potential for investigating the patterns in strong and weak primes for cryptographic purposes.
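The paper's training pipeline isn't reproduced here, but the kind of input it describes, binary primality images cut from an Ulam spiral, can be sketched in a few lines. `spiral_coords` and `prime_block` are illustrative helpers, not the authors' code:

```python
def is_prime(n):
    """Trial-division primality test; fine for small illustrative blocks."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def spiral_coords(n_max):
    """Map the integers 1..n_max to Ulam-spiral (x, y) coordinates."""
    coords = {1: (0, 0)}
    x = y = 0
    dx, dy = 1, 0
    step, n = 1, 1
    while n < n_max:
        for _ in range(2):                 # two legs per ring before growing
            for _ in range(step):
                x, y = x + dx, y + dy
                n += 1
                coords[n] = (x, y)
                if n >= n_max:
                    return coords
            dx, dy = -dy, dx               # 90-degree turn
        step += 1
    return coords

def prime_block(size):
    """size x size binary image of the spiral's centre: 1 marks a prime."""
    coords = spiral_coords(size * size)
    half = size // 2
    img = [[0] * size for _ in range(size)]
    for n, (x, y) in coords.items():
        r, c = half - y, half + x          # image rows grow downward
        if 0 <= r < size and 0 <= c < size and is_prime(n):
            img[r][c] = 1
    return img
```

Blocks taken near 500m versus below 25m would be built the same way, just centred on different integer ranges, and then fed to an image classifier.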
arXiv.org
"Hello, is this Anna?" No, it's yet another mind-numbing academic paper on
#scams that everyone with a brain already understands without a PhD in
#PigButchering. Grateful to the Simons Foundation for funding this "groundbreaking" revelation.
https://arxiv.org/abs/2503.20821 #academicpapers #SimonsFoundation #researchhumor #HackerNews #ngated
"Hello, is this Anna?": Unpacking the Lifecycle of Pig-Butchering Scams
Pig-butchering scams have emerged as a complex form of fraud that combines elements of romance, investment fraud, and advanced social engineering tactics to systematically exploit victims. In this paper, we present the first qualitative analysis of pig-butchering scams, informed by in-depth semi-structured interviews with N=26 victims. We capture nuanced, first-hand accounts from victims, providing insight into the lifecycle of pig-butchering scams and the complex emotional and financial manipulation involved. We systematically analyze each phase of the scam, revealing that perpetrators employ tactics such as staged trust-building, fraudulent financial platforms, fabricated investment returns, and repeated high-pressure tactics, all designed to exploit victims' trust and financial resources over extended periods. Our findings reveal an organized scam lifecycle characterized by emotional manipulation, staged financial exploitation, and persistent re-engagement efforts that amplify victim losses. We also find complex psychological and financial impacts on victims, including heightened vulnerability to secondary scams. Finally, we propose actionable intervention points for social media and financial platforms to curb the prevalence of these scams and highlight the need for non-stigmatizing terminology to encourage victims to report and seek assistance.
OMG, someone wrote *another* thesis on parabolic microphones! Surprise, surprise: big things capture more sound. Who knew?! Expect *riveting* revelations like "size matters" and "mirrors are shiny."
https://legallyblindbirding.net/2023/10/13/frequency-dependence-of-parabolic-microphone-gain/ #parabolicmicrophones #soundengineering #researchhumor #audioinnovation #thesisanalysis #HackerNews #ngated
The Physics of Parabolic Microphones: Frequency Dependence of Gain - Physics, Birding and Blindness
Introduction
Parabolic microphones are known for their extreme sensitivity, and the origin of their acuity isn't difficult to guess: it is the most obvious thing about them, namely their considerable size, which can also make them a liability for field use. Just as a large amount of weak light is captured by a telescope's ...
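The excerpt cuts off, but the frequency dependence the post analyzes follows from the standard aperture-gain estimate G ≈ η(πd/λ)², which rises about 6 dB per octave. A small sketch, with the speed of sound and efficiency as assumed round numbers rather than the post's own figures:

```python
import math

C_SOUND = 343.0  # speed of sound in air, m/s (assumed, roughly 20 C)

def parabolic_gain_db(diameter_m, freq_hz, efficiency=0.5):
    """Idealized on-axis gain of a parabolic dish over a bare microphone,
    using the aperture estimate G = efficiency * (pi * d / wavelength)^2.
    Only meaningful where the dish is large compared to the wavelength."""
    wavelength = C_SOUND / freq_hz
    g = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10.0 * math.log10(g)
```

For a half-metre dish this estimate is strongly positive at songbird frequencies but drops below zero under a few hundred hertz, which is why parabolic recordings sound thin in the bass.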
Physics, Birding and Blindness - Michael Hurben, PhD
Ah, yet another groundbreaking piece of "research" on squeezing 3D Gaussian blobs into even tinier spaces, as if that's exactly what the world needed right now! Meanwhile,
#arXiv is on a hiring spree for a DevOps engineer because someone actually needs to maintain the website where these highly crucial findings reside. ๐๐
https://arxiv.org/abs/2505.05587 #3DGaussianBlobs #HiringDevOps #ResearchHumor #TechNews #HackerNews #ngated
Steepest Descent Density Control for Compact 3D Gaussian Splatting
3D Gaussian Splatting (3DGS) has emerged as a powerful technique for real-time, high-resolution novel view synthesis. By representing scenes as a mixture of Gaussian primitives, 3DGS leverages GPU rasterization pipelines for efficient rendering and reconstruction. To optimize scene coverage and capture fine details, 3DGS employs a densification algorithm to generate additional points. However, this process often leads to redundant point clouds, resulting in excessive memory usage, slower performance, and substantial storage demands - posing significant challenges for deployment on resource-constrained devices. To address this limitation, we propose a theoretical framework that demystifies and improves density control in 3DGS. Our analysis reveals that splitting is crucial for escaping saddle points. Through an optimization-theoretic approach, we establish the necessary conditions for densification, determine the minimal number of offspring Gaussians, identify the optimal parameter update direction, and provide an analytical solution for normalizing offspring opacity. Building on these insights, we introduce SteepGS, incorporating steepest density control, a principled strategy that minimizes loss while maintaining a compact point cloud. SteepGS achieves a ~50% reduction in Gaussian points without compromising rendering quality, significantly enhancing both efficiency and scalability.
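SteepGS's analytical split rules aren't given in the abstract, but the baseline 3DGS-style split it improves on (sample offspring from the parent, shrink the covariance, renormalize opacity) can be sketched as follows. The 1.6 scale divisor and the 1-(1-a)^(1/n) opacity rule are common heuristic choices, not necessarily the paper's analytical solution:

```python
import numpy as np

def split_gaussian(mean, cov, opacity, n_offspring=2, scale_div=1.6, rng=None):
    """Toy densification-by-splitting for one Gaussian primitive:
    offspring positions are sampled from the parent, the covariance is
    shrunk, and opacity is renormalized so n stacked offspring roughly
    reproduce the parent's alpha: (1 - a_child)^n == 1 - a_parent."""
    rng = rng or np.random.default_rng(0)
    child_means = rng.multivariate_normal(mean, cov, size=n_offspring)
    child_cov = cov / scale_div**2                  # offspring are smaller
    child_opacity = 1.0 - (1.0 - opacity) ** (1.0 / n_offspring)
    return child_means, child_cov, child_opacity
```

The paper's contribution is making each of these choices (when to split, how many offspring, which update direction, what opacity) principled rather than heuristic, which is what allows the point count to shrink without hurting quality.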
Oh no, Columbia! The NIH has put your research on ice, but don't worry, just sprinkle some JavaScript and cookies on top to melt the freeze. Because who needs groundbreaking research when you can just debug your browser settings instead?
https://www.science.org/content/article/nih-freezes-all-research-grants-columbia-university #NIHfreeze #JavaScriptCookies #ResearchHumor #DebuggingIssues #Columbia #HackerNews #ngated
Ah, the groundbreaking revelation that you can still achieve low-bit quantization of
#LLMs without a GPU, because clearly, everyone has a spare supercomputer lying around. We humbly thank the Simons Foundation for this earth-shattering news that no one asked for. And let's not forget to tip our hats to the brave souls who dared to write this fanfic of a research paper.
https://arxiv.org/abs/2503.07657 #lowbitquantization #noGPU #SimonsFoundation #researchhumor #techsatire #HackerNews #ngated
SplitQuantV2: Enhancing Low-Bit Quantization of LLMs Without GPUs
The quantization of large language models (LLMs) is crucial for deploying them on devices with limited computational resources. While advanced quantization algorithms offer improved performance compared to basic linear quantization, they typically require high-end graphics processing units (GPUs), are often restricted to specific deep neural network (DNN) frameworks, and require calibration datasets. This limitation poses challenges for using such algorithms on various neural processing units (NPUs) and edge AI devices, which have diverse model formats and frameworks. In this paper, we show that SplitQuantV2, an innovative algorithm designed to enhance low-bit linear quantization of LLMs, can achieve results comparable to those of advanced algorithms. SplitQuantV2 preprocesses models by splitting linear and convolution layers into functionally equivalent, quantization-friendly structures. The algorithm's platform-agnostic, concise, and efficient nature allows for implementation without the need for GPUs. Our evaluation on the Llama 3.2 1B Instruct model using the AI2's Reasoning Challenge (ARC) dataset demonstrates that SplitQuantV2 improves the accuracy of the INT4 quantization model by 11.76 percentage points, matching the performance of the original floating-point model. Remarkably, SplitQuantV2 took only 2 minutes 6 seconds to preprocess the 1B model and perform linear INT4 quantization using only an Apple M4 CPU. SplitQuantV2 provides a practical solution for low-bit quantization on LLMs, especially when complex, computation-intensive algorithms are inaccessible due to hardware limitations or framework incompatibilities.
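The paper's layer-splitting transform isn't detailed in the abstract, but the underlying intuition (splitting gives each piece its own quantization scale, so plain linear quantization becomes competitive) can be sketched with basic symmetric INT4 quantization over row groups. This is generic illustrative code, not SplitQuantV2 itself:

```python
import numpy as np

def linear_quantize_int4(w):
    """Basic symmetric linear quantization to the 4-bit range [-8, 7]."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def split_quantize(w, n_groups):
    """Quantize row groups independently. Splitting a layer so each piece
    gets its own scale is the rough intuition behind quantization-friendly
    restructuring: outlier rows stop inflating everyone else's scale."""
    out = np.empty(w.shape, dtype=np.float32)
    for rows in np.array_split(np.arange(len(w)), n_groups):
        q, s = linear_quantize_int4(w[rows])
        out[rows] = q.astype(np.float32) * s   # dequantize for comparison
    return out
```

Everything here is plain CPU NumPy, which mirrors the paper's point: no GPU, no framework-specific kernels, and no calibration data are needed for this style of preprocessing.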
Behold, an article so desperately chasing after JIT compilers, it needed 2502.20547 words just to say "we're not there yet." Meanwhile, the Simons Foundation probably wonders if their donations fund research... or just a never-ending spiral of developer existential crises.
https://arxiv.org/abs/2502.20547 #JITcompilers #DeveloperCrisis #SimonsFoundation #ResearchHumor #TechSatire #HackerNews #ngated
An Attempt to Catch Up with JIT Compilers: The False Lead of Optimizing Inline Caches
Context: Just-in-Time (JIT) compilers are able to specialize the code they generate according to a continuous profiling of the running programs. This gives them an advantage over Ahead-of-Time (AoT) compilers, which must choose the code to generate once and for all.
Inquiry: Is it possible to improve the performance of AoT compilers by adding Dynamic Binary Modification (DBM) to the executions?
Approach: We added to the Hopc AoT JavaScript compiler a new optimization based on DBM to the inline cache (IC), a classical optimization dynamic languages use to implement object property accesses efficiently.
Knowledge: Reducing the number of memory accesses, as the new optimization does, does not shorten execution times on contemporary architectures.
Grounding: The DBM optimization we have implemented is fully operational on x86_64 architectures. We have conducted several experiments to evaluate its impact on performance and to study the reasons for the lack of acceleration.
Importance: The (negative) result we present in this paper sheds new light on the best strategy for implementing dynamic languages. It shows that the days when removing instructions or removing memory reads always yielded a speedup are over. Nowadays, implementing sophisticated compiler optimizations is only worth the effort if the processor cannot accelerate the code by itself. This result applies to AoT compilers as well as JIT compilers.
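The inline cache the paper targets is easiest to see in a toy sketch (illustrative code, not the Hopc implementation): a monomorphic IC remembers the last receiver type so a hit skips the generic lookup. In a real VM the hit path is a direct read at a fixed object offset, which is exactly the kind of memory access the DBM optimization tried to eliminate.

```python
import operator

class InlineCache:
    """Toy monomorphic inline cache for attribute reads: remember the
    last receiver type; on a hit, reuse the cached accessor instead of
    taking the generic lookup path."""
    def __init__(self, name):
        self.name = name
        self.cached_type = None
        self.cached_getter = None

    def load(self, obj):
        if type(obj) is self.cached_type:      # hit: a single type check
            return self.cached_getter(obj)
        value = getattr(obj, self.name)        # miss: full generic lookup
        self.cached_type = type(obj)           # cache for the next call
        self.cached_getter = operator.attrgetter(self.name)
        return value
```

The paper's finding is that shaving a memory read off the hit path of such caches no longer pays: modern out-of-order processors already hide that latency.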