English – The Conversation | Self-driving cars struggle to see at night or in fog – but imitating the human brain can make them safe, by Pablo Hernández Cámara, Professor and Researcher, Department of Electronic Engineering & Image Processing Laboratory, Universitat de València

AI-generated summary. Read the full article for complete information.

Self‑driving cars work well in clear daylight but become almost blind in darkness, rain or fog, because current AI vision systems lack the adaptive mechanisms that human eyes use. Researchers at the University of Valencia mimicked the brain’s “divisive normalisation”—a neuronal “volume‑control” that amplifies weak signals in dark scenes and attenuates bright ones—to modify standard AI models. Tests with real‑world European driving data, night‑time images from Switzerland and virtual simulators showed that the brain‑inspired models retained accurate object detection under fog and complete darkness, outperforming unmodified AI by more than 20 %. The study suggests that improving autonomous‑vehicle safety does not require larger computers or massive datasets, but rather can be achieved by borrowing evolution‑tested strategies from human vision, making AI systems more robust, adaptable, and trustworthy in all weather conditions.
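
The article does not give the exact formulation the Valencia team used; as a generic illustration, the classic divisive-normalisation equation divides each response by a pooled measure of overall activity, so weak responses in dark scenes are amplified relative to strong ones. A minimal NumPy sketch (the parameter values and the global pooling are illustrative assumptions, not from the study):

```python
import numpy as np

def divisive_normalization(x, sigma=0.1, n=2.0):
    """Classic divisive normalisation: y_i = x_i^n / (sigma^n + mean_j x_j^n).
    The pooled denominator acts as a 'volume control', boosting weak
    signals and compressing strong ones."""
    xn = np.abs(x) ** n
    return xn / (sigma ** n + xn.mean())

dim = np.array([0.01, 0.02, 0.03])   # responses to a dark scene
bright = dim * 100.0                 # same scene under bright light
y_dim = divisive_normalization(dim)
y_bright = divisive_normalization(bright)
```

The raw responses differ by a factor of 100 (10,000 after squaring), but after normalisation the two outputs land within roughly a factor of 20 of each other: the dark-scene signal is effectively amplified, which is the adaptive behaviour the article describes.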

Read more: https://theconversation.com/self-driving-cars-struggle-to-see-at-night-or-in-fog-but-imitating-the-human-brain-can-make-them-safe-282284

#UniversityofValencia #Selfdrivingcars #AIvision #Neuralnetwork #Humanbrain #Divisivenormalisation #Switzerland #Europeandatasets #Autonomousvehicles #Braininspired

Self-driving cars struggle to see at night or in fog – but imitating the human brain can make them safe

AI models that power self-driving cars work well in clear conditions, but go nearly blind in fog or at night.

The Conversation

WTR (@wtry1102)

Explains that when generating facial expressions in AI images and video based on FACS (Facial Action Coding System), it is better to structurally reflect facial muscle movements than to insert emotion words directly. Presents a useful technical approach to generating emotional expressions.

https://x.com/wtry1102/status/2053723016752959928

#facs #facialexpression #aivision #imagegeneration #videogen

WTR (@wtry1102) on X

FACS is a system for describing the visible actions that appear on the face surface through the movements of the facial muscles around the head and neck. So when creating expressions in AI images/video, rather than entering emotion words like "smile" or "sad face" directly, the pipeline is: emotion word → FACS (AU) decomposition → convert the visible movements of facial parts back into natural language → turn that into a prompt.

X (formerly Twitter)
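
The pipeline the post describes (emotion word → AU decomposition → visible movement description → prompt) can be sketched in a few lines. The AU-to-movement mappings below follow standard FACS conventions (e.g. happiness ≈ AU6 + AU12); the function and table names are illustrative, not from any particular library:

```python
# Standard FACS Action Units and their visible facial movements.
AU_DESCRIPTIONS = {
    "AU1": "inner brow raiser",
    "AU4": "brow lowerer",
    "AU6": "cheek raiser",
    "AU12": "lip corner puller",
    "AU15": "lip corner depressor",
}

# Common emotion-to-AU decompositions from the FACS literature.
EMOTION_TO_AUS = {
    "happiness": ["AU6", "AU12"],
    "sadness": ["AU1", "AU4", "AU15"],
}

def emotion_to_prompt(emotion: str) -> str:
    """Decompose an emotion word into AUs, then render the visible
    muscle movements as natural-language prompt text."""
    aus = EMOTION_TO_AUS[emotion]
    movements = ", ".join(AU_DESCRIPTIONS[au] for au in aus)
    return f"a face with {movements}"

print(emotion_to_prompt("happiness"))
# → a face with cheek raiser, lip corner puller
```

Prompting with the concrete muscle movements, rather than the emotion label, is the structural approach the post advocates.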

田中義弘 | taziku CEO / AI × Creative (@taziku_co)

Meta's TRIBE v2 is a model that predicts viewers' fMRI brain responses from video, audio, and text. It is introduced as a UI/analysis tool that helps editing before publication by locating weak hooks, dragging pacing, and stretches of low information density.

https://x.com/taziku_co/status/2049715779881578795

#meta #tribe #fmri #videoanalysis #aivision

田中義弘 | taziku CEO / AI × Creative (@taziku_co) on X

This is not video analysis but a UI that peeks at "viewers' brain responses" before posting. Meta's TRIBE v2 predicts fMRI responses from video, audio, and text. A clear use case emerges: finding weak hooks, dragging stretches, and valleys of information density before editing. Is this creative support, or optimisation taken too far?


el.cine (@EHuanglu)

The verdict is that Gemini 3.1 Flash Live has become much faster and smarter. The emphasis is on its multimodal live AI capability: watching and listening to the user's screen and audio in real time and teaching on the spot.

https://x.com/EHuanglu/status/2037219331785056288

#gemini #google #multimodal #aivision #realtime

el.cine (@EHuanglu) on X

Gemini 3.1 Flash Live is crazy faster and smarter.. it can see and hear what you’re doing.. teach you anything in real time

Retail security is moving from "review footage later" to "prevent incidents in real time".
Know more: https://zurl.co/O6iPL
#Smidmart #AIVision #RetailSecurity #TheftPrevention #LossPrevention #ComputerVision #VideoAnalytics #SmartRetail #AnomalyDetection #StoreOperations

Renewables are scaling fast — and the next big challenge is maintenance at scale. ☀️🌬️👁️
Know more: https://zurl.co/451iw
#Smidmart #AIVision #RenewableEnergy #SolarInspection #WindTurbineInspection #PredictiveMaintenance #ComputerVision #ThermalImaging #DroneInspection

Cities are moving from fixed garbage routes to data-driven waste collection. 🗑️🤖
Know more: https://zurl.co/I4YfO
#Smidmart #AIVision #SmartCity #SmartWasteManagement #ComputerVision #VisionInspection #IoT #RouteOptimization #UrbanTech #Sustainability #Recycling