CorridorKey Is What You Get When Artists Make AI Tools

You may not have noticed, but so-called “artificial intelligence” is slightly controversial in the arts world. Illustrators, graphics artists, visual effects (VFX) professionals —…

Hackaday

@deborahh I agree.

Artificial neural networks model networks of neurons, but the connections between cells are just simple mathematical functions, without all the DOSE chemicals (dopamine, oxytocin, serotonin, and endorphins) our brains are filled with. And there are probably other differences I'm not even aware of.

I see this as a fundamental difference between AI and human intelligence.

We are just so much more than those artificial networks.

#AI #coaching #artificialneuralnetwork #machinelearning #ML

"How fast is the Greenland ice sheet warming? In this study, we compiled 4500+ temperature measurements at 10 m below the ice sheet surface (T10m) from 1912 to 2022. … After a slight cooling during 1950–1985, the ice sheet warmed at a rate of 0.7 °C per decade until 2022. Climate models showed mixed results compared to our observations and underestimated the warming in key regions."

Nifty neural network: Section 2.2.

#ArtificialNeuralNetwork
#Cryosphere
@Ruth_Mottram
https://tc.copernicus.org/articles/18/609/2024/

Recent warming trends of the Greenland ice sheet documented by historical firn and ice temperature observations and machine learning

Abstract. Surface melt on the Greenland ice sheet has been increasing in intensity and extent over the last decades due to Arctic atmospheric warming. Surface melt depends on the surface energy balance, which includes the atmospheric forcing but also the thermal budget of the snow, firn and ice near the ice sheet surface. The temperature of the ice sheet subsurface has been used as an indicator of the thermal state of the ice sheet's surface. Here, we present a compilation of 4612 measurements of firn and ice temperature at 10 m below the surface (T10 m) across the ice sheet, spanning from 1912 to 2022. The measurements are either instantaneous or monthly averages. We train an artificial neural network model (ANN) on 4597 of these point observations, weighted by their relative representativity, and use it to reconstruct T10 m over the entire Greenland ice sheet for the period 1950–2022 at a monthly timescale. We use 10-year averages and mean annual values of air temperature and snowfall from the ERA5 reanalysis dataset as model input. The ANN indicates a Greenland-wide positive trend of T10 m at 0.2 °C per decade during the 1950–2022 period, with a cooling during 1950–1985 (−0.4 °C per decade) followed by a warming during 1985–2022 (+0.7 °C per decade). Regional climate models HIRHAM5, RACMO2.3p2 and MARv3.12 show mixed results compared to the observational T10 m dataset, with mean differences ranging from −0.4 °C (HIRHAM) to 1.2 °C (MAR) and root mean squared differences ranging from 2.8 °C (HIRHAM) to 4.7 °C (MAR). The observation-based ANN also reveals an underestimation of the subsurface warming trends in climate models for the bare-ice and dry-snow areas. The subsurface warming brings the Greenland ice sheet surface closer to the melting point, reducing the amount of energy input required for melting.
Our compilation documents the response of the ice sheet subsurface to atmospheric warming and will enable further improvements of models used for ice sheet mass loss assessment and reduce the uncertainty in projections.
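The abstract's "weighted by their relative representativity" step can be made concrete with a much simpler stand-in for the ANN: a weighted least-squares fit in which each observation's squared residual is scaled by its weight. Everything below (data, coefficients, weights) is invented for illustration; only the weighting idea mirrors the paper, not its actual model or inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for the paper's setup: each row is one observation
# site, with a bias column plus (decadal air temperature, annual
# snowfall) predicting T10m. Weights down-rank sites from densely
# sampled regions so they don't dominate the fit.
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
t10m = X @ np.array([-10.0, 0.8, -0.3]) + rng.normal(0, 0.1, 200)
w = rng.uniform(0.2, 1.0, 200)  # relative representativity weights

# Weighted least squares: minimize sum_i w_i * (x_i @ beta - y_i)^2,
# solved in closed form via the weighted normal equations.
Xw = X * w[:, None]
beta = np.linalg.solve(Xw.T @ X, Xw.T @ t10m)
print(beta)  # close to the true coefficients [-10, 0.8, -0.3]
```

An ANN replaces the linear model with a nonlinear one, but the per-observation weighting enters the loss in exactly the same multiplicative way.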

New NIOO publication: Temporal modelling of long-term heavy metal concentrations in #AquaticEcosystems. #timeseriesanalysis #artificialneuralnetwork #metalconcentration
https://doi.org/10.2166/hydro.2023.151
Temporal modelling of long-term heavy metal concentrations in aquatic ecosystems | Journal of Hydroinformatics | IWA Publishing

HIGHLIGHTS. Heavy metal contamination in aquatic ecosystems is a significant environmental concern. A series of connected and isolated lakes were examined as a m

I found watching this #AI simulation of predators and prey using an #artificialNeuralNetwork pretty mesmerizing: https://www.youtube.com/watch?v=tVNoetVLuQg. At least check out the video after the 19th minute.
Predators VS Preys - Much bigger simulation

YouTube
Neural Network Identifies Bird Calls, Even On Your Pi

Recently, we’ve stumbled upon the extensive effort that is the BirdNET research platform. BirdNET uses a neural network to identify birds by the sounds they make, and is a joint project betwe…

Hackaday

Avoid Repetitive Strain Injury With Machine Learning – And Pikachu

The humble mouse has been an essential part of the desktop computing experience ever since the original Apple Macintosh popularized it in 1984. While mice enabled user-friendly GUIs, thus making computers accessible to more people than ever, they also caused a significant increase in repetitive strain injuries (RSI). Mainly caused by poor posture and stress, RSI can lead to pain, numbness and tingling sensations in the hand and arm, which the user might only notice when it's too late.

Hoping to catch signs of RSI before it manifests itself, [kutluhan_aktar] built a device that allows him to track mouse fatigue. It does so through two sensors: one that measures galvanic skin response (GSR) and another that performs electromyography (EMG). Together, these two measurements should give an indication of the amount of muscle soreness. The sensor readout circuits are connected to a Wio Terminal, a small ARM Cortex-M4 development board with a 2.4″ LCD.

However, calculating muscle soreness is not as simple as just adding a few numbers together; in fact, the link between the sensor data and the muscles' state of health is complicated enough that [kutluhan] decided to train a TensorFlow artificial neural network (ANN) on stress levels he observed and logged in real life. The network ran on the Wio while he used the mouse, pressing buttons to indicate the amount of stress he experienced. After a few rounds of training he ended up with a network that reached an accuracy of more than 80%.
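The shape of that training problem (two sensor channels in, a stress label out) can be sketched from scratch in a few lines. This is not [kutluhan]'s TensorFlow model; the data, the labeling rule, and the network size below are all invented, and plain NumPy stands in for TensorFlow so the sketch is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in data: rows are (GSR, EMG) readings scaled to
# [0, 1]; label 1 means "fatigued". The rule generating the labels
# is made up -- only the two-sensors-in, stress-level-out shape of
# the problem matches the project described above.
X = rng.random((400, 2))
y = ((X[:, 0] + X[:, 1]) > 1.1).astype(float).reshape(-1, 1)

# A tiny 2-8-1 network trained with full-batch gradient descent on
# binary cross-entropy.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)         # hidden activations
    p = sigmoid(h @ W2 + b2)         # predicted fatigue probability
    d2 = (p - y) / len(X)            # dLoss/dlogits for cross-entropy
    d1 = (d2 @ W2.T) * (1 - h ** 2)  # backprop through tanh
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)

acc = float(((p > 0.5) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

On this toy data the network comfortably clears the 80% mark the article mentions, though real GSR/EMG signals are far noisier than a clean synthetic threshold.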

[kutluhan] also designed a rather neat 3D printed enclosure to house the sensor readout boards as well as a battery to power the Wio Terminal. Naturally, the case was graced by a 3D rendition of Pikachu on top (get it? a mouse Pokémon that can paralyze its opponents!). We've seen [kutluhan]'s fondness for Pokémon-themed projects in his earlier Jigglypuff CO2 sensor.

Although the setup with multiple sensors doesn't seem too practical for everyday use, the Mouse Fatigue Estimator might be a useful tool to train yourself to keep good posture and avoid stress while using a mouse. If you also use a keyboard (and who doesn't?), make sure you're using that correctly as well.

#computerhacks #peripheralshacks #artificialneuralnetwork #repetitivestraininjury #tensorflow #wioterminal


Researchers Build Neural Networks With Actual Neurons

Neural networks have become a hot topic over the last decade, put to work on jobs from recognizing image content to generating text and even playing video games. However, these artificial neural networks are essentially just piles of maths inside a computer, and while they are capable of great things, the technology hasn't yet shown the capability to produce genuine intelligence.

Cortical Labs, based down in Melbourne, Australia, has a different approach. Rather than rely solely on silicon, their work involves growing real biological neurons on electrode arrays, allowing them to be interfaced with digital systems. Their latest work has shown promise that these real biological neural networks can be made to learn, according to a pre-print paper that is yet to go through peer review.

Wetware

Scanning electron microscope pictures of neurons grown on a microelectrode array. Credit: Cortical Labs

The broad aim of the work is to harness biological neurons for their computational power, in an attempt to create "synthetic biological intelligence". The general idea is that biological neurons have far more complexity and capability than any neural network simulated in software. Thus, if one wishes to create a viable intelligence from scratch, it makes more sense to use biological neurons than to mess about with human-created simulations of them.

The team behind the project investigated neural networks grown from both mouse and human cells. Mouse cortical cells were harvested from embryos for the purpose, while in the human cell case, pluripotent stem cells were used and differentiated into cortical neurons for the purpose of testing. These cells were plated onto a high-density multielectrode array from Maxwell Biosystems.

Once deposited and properly cultured in the lab, the cells formed "densely-interconnected dendritic networks" across the surface of the electrode array. These could then be stimulated electronically via the electrode array, and the responses of the neurons read back in turn. The result was a system nicknamed DishBrain, for the simple fact that it consists of neural matter essentially living in a petri dish.

DishBrain was put to the test in a simulated game environment reminiscent of the game Pong. The biological neural network (BNN) had a series of electrodes that were stimulated based on the game state, providing the cells with sensory input. Other electrodes were then assigned to control the up and down movement of the paddle in the game.

A variety of feedback approaches were then used to see if the neural network could be taught to control the game intelligently. The primary idea was based around the Free Energy Principle, in which biological systems aim to act to maintain a world state that matches their own internal models. Thus, the "Stimulus" condition feedback loop was designed to provide unpredictable random feedback when the ball was missed by the paddle, and predictable feedback when the paddle hit the ball properly. This method was then contrasted against a silent mode where stimulus was entirely cut when the paddle hit the ball, and a no-feedback mode where no special stimulus was provided relative to the gamestate. A rest mode was also used to get a baseline reading of activity when unstimulated.
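The contrast between those conditions can be sketched as a small event handler. The function name, pulse lengths, and amplitude values below are all invented (the paper describes the regimes conceptually, not this code); only the hit/miss logic follows the description above, and what the silent mode does on a miss is a guess.

```python
import random

def feedback(outcome, mode):
    """Return a stimulation pattern for one game event.

    outcome: "hit" or "miss"
    mode:    "stimulus", "silent", or "no_feedback"
    """
    if mode == "stimulus":
        if outcome == "hit":
            # Predictable feedback: the same fixed pulse train each time.
            return [1.0, 1.0, 1.0, 1.0]
        # Unpredictable feedback: random amplitudes, so a miss makes the
        # world harder for the network to model (the Free Energy idea).
        return [random.uniform(-1.0, 1.0) for _ in range(4)]
    if mode == "silent":
        # Stimulation is cut entirely when the paddle hits the ball;
        # what a miss triggers here is a guess, not from the paper.
        return [] if outcome == "hit" else [1.0, 1.0, 1.0, 1.0]
    return []  # "no_feedback": nothing is tied to the game state
```

Under the Free Energy Principle, the network is expected to steer toward hits simply because hits make its sensory input predictable, while misses flood it with noise it cannot model.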

Analysis of data showing the performance of the neural network in various stimulus modes when interacting with a Pong-like environment. Credit: Cortical Labs

The results showed that, initially, there was little difference in game performance between the different modes, with the Stimulus condition performing slightly worse. However, after the first five minutes, statistics showed that under the Stimulus condition, the neural network maintained longer rallies of hitting the ball repeatedly, and was less likely to miss the initial serve, compared to the silent and no-feedback modes. In fact, the Stimulus condition was also the only condition in which the network showed improved performance over time, suggesting evidence of a learning effect. In comparison, the silent and no-feedback modes maintained a relatively flat performance level throughout a full 20-minute test.

The research, yet to be peer reviewed, shows much promise in several areas. Not only is it more evidence that we can successfully grow and interface with neuronal cells, it also provides a platform for a better understanding of how our brains work, on both a conceptual and physical level. If the results are confirmed to be valid, it suggests that the research team essentially managed to grow a very simple brain in a vat, and trained it to control a video game. Professional e-sports players should be on warning! (OK, maybe not yet.)

The paper makes for dense reading, but it shows that there is real potential for biological neurons to be trained to intelligently complete tasks in concert with digital interfaces. While it's early days yet, in a few decades, you might be topping up your self-driving car with a vial of neuronal growth medium to ensure you can safely make it across the country on your roadtrip without it accidentally merging into traffic. Humanity is just learning how to interface with real biological brains, and it may be that we master that before we succeed in creating our own from scratch!

#featured #interest #originalart #science #artificialintelligence #artificialneuralnetwork #biologicalneuralnetwork #neuralnetwork
