#starthilfe #powerbank #autozubehör #amazondeal #astroai With up to 3000A peak current, the AstroAI P10 jump-starter power bank restarts even large gasoline engines up to 10.0L and diesel engines up to 8.0L, and doubles as a power bank with an LED emergency light – currently on sale on Amazon

https://www.harris-blog.de/astroai-p10-starthilfe-powerbank-kraftvolle-auto-starthilfe-fuer-bis-zu-100l-benzin-und-80l-diesel-im-amazon-angebot/

AstroAI P10 Jump-Starter Power Bank: Powerful car jump starter for gasoline engines up to 10.0L and diesel engines up to 8.0L, on sale on Amazon - Harris Blog – The best online deals, guides & recommendations

If your car won't start exactly when you need it most, the AstroAI P10 jump-starter power bank is a real lifesaver:

Harris Blog

📄 An Integer-Only Resource-Minimized RNN on FPGA for Low-Frequency Sens…

Quicklook:
Bartels, Jim et al. (2023) · Zenodo
Reads: 0 · Citations: 1
DOI: 10.5281/zenodo.7800728

🔗 https://ui.adsabs.harvard.edu/abs/2023zndo...7800728B/abstract

#Astronomy #Astrophysics #AstroAI #InternetOfThingsIot #FieldProgrammableGateArrayFpga

An Integer-Only Resource-Minimized RNN on FPGA for Low-Frequency Sensors in Edge-AI

This repository offers the code for a Recurrent Neural Network implementation on FPGA, referred to as the Integer-Only Resource-Minimized Recurrent Neural Network (RNN), along with a comprehensive guide on its usage in a few easy steps, making it easy to use in sensor applications. A scientific work disclosing the full details of this RNN is currently under review and will be added as a reference at a future date.

The RNN is built from one or more simple RNN layers followed by a linear layer, using Tensorflow 2.0 (Keras layers). The guide consists of two parts:

1. Python: Tensorflow 2.0 model to integer-only shared-scale RNN conversion and memory extraction.
2. HDL: Synthesis in Lattice Radiant and FPGA implementation.

! -- Caution -- ! All RNN layers have to be Keras-based "simple RNN" layers of equal width, i.e., the weight dimensions have to be equivalent for each layer. Only a single linear layer following the simple RNN layers is supported.

The following is an example model description from Tensorflow 2.0 (cow behavior estimation, see model description in Ref. [1]; dataset openly available at Ref. [2]), with parameters {N, L, I, O, q, Ts, fs} = {13, 4, 3, 4, 8, 35, 25}:

Layer (type)               Output Shape     Param #
=================================================================
simple_rnn_8 (SimpleRNN)   (None, 35, 13)   221
simple_rnn_9 (SimpleRNN)   (None, 35, 13)   351
simple_rnn_10 (SimpleRNN)  (None, 35, 13)   351
simple_rnn_11 (SimpleRNN)  (None, 13)       351
dense_2 (Dense)            (None, 4)        56
=================================================================
Total params: 1,330
Trainable params: 1,330
Non-trainable params: 0
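The parameter counts in the summary above follow directly from the Keras formulas for SimpleRNN and Dense layers. A quick pure-Python sanity check (a sketch of the arithmetic only, not code from the repository):

```python
# Verify the Keras parameter counts in the model summary above.
# SimpleRNN params = N * (inputs + N + 1); Dense params = (N + 1) * O.
N, L, I, O = 13, 4, 3, 4  # layer width, RNN layers, inputs, classes

# The first layer sees the I raw inputs; later layers see the N hidden units.
rnn_params = [N * (I + N + 1)] + [N * (N + N + 1) for _ in range(L - 1)]
dense_params = (N + 1) * O

print(rnn_params)                       # per-layer SimpleRNN counts
print(dense_params)                     # linear layer count
print(sum(rnn_params) + dense_params)   # total
```

Running this reproduces the 221 / 351 / 56 counts and the 1,330 total from the summary.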
1. Python

Install pip requirements

Before you use any of the Python code, make sure you install requirements.txt in the Python project folder:

$ cd ./python
$ pip install -r requirements.txt

Tensorflow 2.0 SimpleRNN --> integer-only shared-scale RNN and ".mem" files for FPGA implementation:

First, set the parameters of your SimpleRNN in the "parameters.csv" file, located in the top "python" folder. Please check the "Parameter setting" section for which parameters you have to set. After setting the parameters, make sure that the x dataset and y dataset (input data and labels) are saved as .npy files (x_test.npy, y_test.npy). Then you can run the code from the "python" directory:

$ python3 ./src/convert_and_extract.py --parameter_file parameters.csv --model_directory TF_model_directory --x_data directory_to_x_test.npy --y_data directory_to_y_test.npy

Example for cow behavior estimation (default setting): change directory to "python" and run:

$ python3 ./src/convert_and_extract.py --parameter_file ./cow/parameters.csv --model_directory ./cow/model --x_data ./cow/x_test.npy --y_data ./cow/y_test.npy

Output:

50/50 [==============================] - 1s 6ms/step
100%|██████████████████████████████████████| 1594/1594 [00:09<00:00, 167.04it/s]
pre-quantized model (TF2.0) --- Misclassified labels: 77, top-1 accuracy: 95.17 %
post-quantized model (shared-scale RNN) --- Misclassified labels: 78, top-1 accuracy: 95.11 %

If you do not have the test data at hand, you can leave out --x_data and --y_data and the Python code will use random data to convert the model. This does not change the conversion; however, you will not be able to see how much the accuracy drops when converting to 8-bit integers on the shared-scale RNN. By running the above code, the model weights converted to 8-bit integers are automatically copied to the "HDL" folder as "RNN_8b.mem".
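The small accuracy drop in the output above (95.17 % to 95.11 %) comes from mapping float weights to 8-bit integers with one shared scale. A minimal pure-Python sketch of symmetric shared-scale quantization (this is an illustration of the general technique, NOT the repository's convert_and_extract.py; the scale formula is a common symmetric-quantization choice and an assumption on my part):

```python
# Minimal symmetric 8-bit quantization with one shared scale per tensor.
# Illustrative sketch only -- not the repository's implementation.

def quantize_shared_scale(weights, bits=8):
    """Map float weights to signed integers using one shared scale."""
    qmax = 2 ** (bits - 1) - 1                       # 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the scale."""
    return [v * scale for v in q]

w = [0.51, -1.27, 0.003, 0.98]
q, s = quantize_shared_scale(w)
print(q)                  # integers in [-128, 127]
print(dequantize(q, s))   # close to the original weights
```

Because every weight in the tensor shares one scale, the FPGA only needs integer arithmetic plus a single rescaling constant.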
Parameter setting

The following parameters have to be defined by you, based on your own pre-trained model:

N: layer width
L: number of RNN layers
I: number of inputs
O: number of classes
Ts: number of timesteps
fs: sensor frequency --> necessary for setting the minimum clock frequency of the FPGA

At the moment only 8-bit bitwidth is supported! Adjust these parameters according to your own model in "parameters.csv", in the above order, before running convert_and_extract.py (see "parameters.csv" in the cow folder for an example). For the HDL, you need to adjust the "RNNCommon.sv" file (only 6 lines at the top): localparam LAYERS (L), LAYER_WIDTH (N), INPUT_WIDTH (I), TIMESTEPS (Ts), CLASS_COUNT (O), INPUT_FREQ (fs)

2. HDL and implementation on Lattice FPGA

Module tree:

├── top - top.sv
│   ├── IOController - IOController.sv --> SPI controller (slave)
│   │   └── SPI_Slave - SPI_Slave.sv --> please see the SPI_Slave.v section
│   ├── RNN - RNN.sv --> top module of the RNN
│   │   ├── VecRAM - VecRAM.sv --> RAM wrapper for storing recurrent vectors (h_t)
│   │   │   └── pmi_ram_dq - pmi_ram_dq.v --> Lattice RAM IP
│   │   ├── TimestepController - TimestepController.sv --> FSM of the RNN (central module)
│   │   ├── Tanh - Tanh.sv --> 1-to-1 hyperbolic tangent mapping
│   │   │   └── pmi_rom - pmi_rom.v --> Lattice ROM IP
│   │   ├── RNNParam - RNNParam.sv --> contains RNN model weights and parameters
│   │   │   └── ParamROM - pmi_rom.v --> model weight wrapper
│   │   │       └── pmi_rom - pmi_rom.v --> Lattice ROM IP
│   │   ├── ALU - ALU.sv --> performs 3 operations relevant to the RNN
│   │   │   ├── 2x pmi_sub - pmi_sub.v --> Lattice subtracter IP
│   │   │   ├── 2x pmi_mult - pmi_mult.v --> Lattice multiplier IP
│   │   │   └── 3x pmi_add - pmi_add.v --> Lattice adder IP

Information about the IPs and how to use them can be found here (if a hyperlink does not work, please copy it into your browser):
Arithmetic modules: http://www.latticesemi.com/view_document?document_id=52684
Memory modules: http://www.latticesemi.com/view_document?document_id=52685
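For the cow model above ({N, L, I, O, q, Ts, fs} = {13, 4, 3, 4, 8, 35, 25}), a "parameters.csv" in the stated order might look like the fragment below. This layout is an assumption on my part; check the actual "parameters.csv" in the cow folder for the real header names and whether the bitwidth q appears at all (only 8-bit is supported):

```csv
N,L,I,O,Ts,fs
13,4,3,4,35,25
```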
Synthesis

After setting the parameters according to your own model in "RNNCommon.sv", perform the following two steps for FPGA implementation:

1. Open the project with "RNN_f.rdf" in Lattice Radiant and set the Lattice FPGA that you need (the default is the Lattice ICE40UP5K), or check the "Adaptation for other FPGAs" section below. Lattice Radiant Ver. 3.2.0.18.0 was used.
2. Adjust the constraint files. Currently the following signals are mapped to these pins on the Lattice ICE40UP5K; these pins are located on bank 2 of the ICE40UP5K-B-EVN breakout board (SG48 package). Adjust according to your own setup:

clk_ext (external clock) --> 35 (12 MHz clock of the breakout board)
sclk --> 44 (Bank2, SPI)
mosi --> 47 (Bank2, SPI)
ss --> 46 (Bank2, SPI)
miso --> 45 (Bank2, SPI)
rst_ext --> 23 (button on the breakout board)

Running the model on the FPGA with a data stream

After implementing on the FPGA, first reset the FPGA with rst_ext. The RNN then starts automatically when you send I single-byte SPI messages, where each byte contains one data point of the input data. Repeat the transmission of I bytes over SPI for the set number of timesteps, Ts, and make sure to leave enough time in between (equivalent to the actual sensor sampling period). When you complete transmission for the set number of timesteps, send O individual single-byte dummy messages; the slave FPGA will then encode the classification results onto these messages.

3. Adaptation for other FPGAs

Only Lattice FPGAs (Lattice Semiconductor Inc., Hillsboro, OR) are supported, because IP modules from this vendor are used. However, if you replace the following IPs, Xilinx, Intel, and other FPGAs are compatible:

- pmi_rom --> "RNNParam.sv", "Tanh.sv"
- pmi_ram_dq --> "VecRAM.sv"
- pmi_add, pmi_sub, pmi_mult --> "ALU.sv"

We cannot guarantee that the implementation will work for other vendors. However, please contact us if you have any ideas for collaboration or any questions!
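The data-stream protocol above (I bytes per timestep, repeated for Ts timesteps at the sensor rate, then O dummy bytes to read back the result) can be sketched from the SPI master's side. Here `spi_transfer` is a hypothetical one-byte full-duplex transfer function standing in for whatever SPI master driver you use; it is not part of this repository:

```python
import time

def classify_over_spi(spi_transfer, sample_window, num_classes, fs):
    """Stream one window of sensor data to the FPGA RNN and read the result.

    spi_transfer(byte) -> byte is a hypothetical one-byte full-duplex
    SPI transaction; sample_window holds Ts rows of I input bytes each.
    """
    period = 1.0 / fs                      # match the sensor sampling rate
    for sample in sample_window:           # Ts timesteps
        for value in sample:               # I single-byte messages
            spi_transfer(value & 0xFF)
        time.sleep(period)                 # leave time between timesteps
    # After Ts timesteps, clock out O dummy bytes; the slave FPGA
    # encodes the classification results onto these replies.
    return [spi_transfer(0x00) for _ in range(num_classes)]
```

For the cow model this would be called with Ts=35 rows of I=3 bytes, num_classes=4, and fs=25 Hz.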
SPI_Slave.v (MIT License, https://github.com/nandland/spi-slave)

A softcore SPI_Slave module has been used to process the transmission of messages from the master. This module, written in Verilog, was retrieved from https://github.com/nandland/spi-slave. It is under a separate license, the MIT License; the license and copyright details have been added as a header of SPI_Slave.v as necessary and as a separate file ("LICENSE") under the HDL/source/SPI_Slave directory. Please note that the other source files of this project (excluding the Lattice IPs) are copyrighted and owned by the authors of this repository and licensed under the GNU GENERAL PUBLIC LICENSE; see the "LICENSE" file in the root directory for more details.

References

[1]: J. Bartels, K. K. Tokgoz, S. A, M. Fukawa, S. Otsubo, C. Li, I. Rachi, K.-I. Takeda, L. Minati, and H. Ito, "TinyCowNet: Memory- and power-minimized RNNs implementable on tiny edge devices for lifelong cow behavior distribution estimation," IEEE Access, vol. 10, pp. 32706–32727, 2022.
[2]: H. Ito, K. Takeda, K. Tokgoz, L. Minati, M. Fukawa, L. Chao, J. Bartels, I. Rachi, and A. Sihan, "Japanese black beef cow behavior classification dataset," 2022.

ADS

📄 pyclustering 0.8.1

Quicklook:
Novikov, Andrei et al. (2018) · Zenodo
Reads: 0 · Citations: 1
DOI: 10.5281/zenodo.1254845

🔗 https://ui.adsabs.harvard.edu/abs/2018zndo...1254845N/abstract

#Astronomy #Astrophysics #AstroAI #ClusteringDataminingClusteranalysisAiMachinelearningOscillatoryn

pyclustering 0.8.1

The pyclustering 0.8.1 library is a collection of clustering algorithms, oscillatory networks, neural networks, etc.

GENERAL CHANGES:
- Implemented feature to use a specific metric for distance calculation in the K-Means algorithm (pyclustering.cluster.kmeans, ccore.clst.kmeans). See: https://github.com/annoviko/pyclustering/issues/434
- Implemented BANG-clustering algorithm with result visualizer (pyclustering.cluster.bang). See: https://github.com/annoviko/pyclustering/issues/424
- Implemented feature to use a specific metric for distance calculation in the K-Medians algorithm (pyclustering.cluster.kmedians, ccore.clst.kmedians). See: https://github.com/annoviko/pyclustering/issues/429
- Supported new type of input data for K-Medoids - distance matrix (pyclustering.cluster.kmedoids, ccore.clst.kmedoids). See: https://github.com/annoviko/pyclustering/issues/418
- Implemented TTSAS algorithm (pyclustering.cluster.ttsas, ccore.clst.ttsas). See: https://github.com/annoviko/pyclustering/issues/398
- Implemented MBSAS algorithm (pyclustering.cluster.mbsas, ccore.clst.mbsas). See: https://github.com/annoviko/pyclustering/issues/398
- Implemented BSAS algorithm (pyclustering.cluster.bsas, ccore.clst.bsas). See: https://github.com/annoviko/pyclustering/issues/398
- Implemented feature to use a specific metric for distance calculation in the K-Medoids algorithm (pyclustering.cluster.kmedoids, ccore.clst.kmedoids). See: https://github.com/annoviko/pyclustering/issues/417
- Implemented distance metric collection (pyclustering.utils.metric, ccore.utils.metric). See: no reference.
- Supported new type of input data for OPTICS - distance matrix (pyclustering.cluster.optics, ccore.clst.optics). See: https://github.com/annoviko/pyclustering/issues/412
- Supported new type of input data for DBSCAN - distance matrix (pyclustering.cluster.dbscan, ccore.clst.dbscan). See: no reference.
- Implemented K-Means observer and visualizer to visualize and animate clustering results (pyclustering.cluster.kmeans, ccore.clst.kmeans). See: no reference.

CORRECTED MAJOR BUGS:
- Bug with out-of-range access in K-Medians (pyclustering.cluster.kmedians, ccore.clst.kmedians). See: https://github.com/annoviko/pyclustering/issues/428
- Bug with fast linking in PCNN (Python implementation only) that wasn't used despite the corresponding option (pyclustering.nnet.pcnn). See: https://github.com/annoviko/pyclustering/issues/419
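Several entries above add a "specific metric for distance calculation" to K-Means and friends, i.e. a pluggable distance function. The idea can be sketched in plain Python (this toy is an illustration of the concept, not the pyclustering API, whose entry points are pyclustering.cluster.kmeans and pyclustering.utils.metric):

```python
# Toy k-means with a pluggable distance metric -- a sketch of the idea
# behind the "specific metric" feature, NOT the pyclustering API.

def manhattan(a, b):
    """Example metric: city-block distance."""
    return sum(abs(x - y) for x, y in zip(a, b))

def kmeans(points, centers, metric, iterations=10):
    """Lloyd's algorithm; the metric is a plain callable, so swapping
    Euclidean for Manhattan (or anything else) is a one-argument change."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: metric(p, centers[i]))
            clusters[i].append(p)
        centers = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else ctr
            for cl, ctr in zip(clusters, centers)
        ]
    return centers, clusters

pts = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
centers, clusters = kmeans(pts, [(0.0, 0.0), (5.0, 5.0)], manhattan)
print(centers)
```

In pyclustering itself the same swap is done by passing a metric object into the algorithm's constructor rather than a bare function.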

ADS

📄 Code for the Fanoos Multi-Resolution, Multi-Strength, Interactive XAI…

Quicklook:
Bayani, David et al. (2021) · Zenodo
Reads: 0 · Citations: 2
DOI: 10.5281/zenodo.5513079

🔗 https://ui.adsabs.harvard.edu/abs/2021zndo...5513079B/abstract

#Astronomy #Astrophysics #AstroAI #NeuralNetworkVerification #NeuralNetworkCertification

Code for the Fanoos Multi-Resolution, Multi-Strength, Interactive XAI System

This upload contains a tarred and compressed copy of the code and git history available at https://github.com/DBay-ani/Fanoos as of hour 3 day 16 month 5 year 2021 UTC. See the following paper for a description:

@inproceedings{DBLP:conf/vmcai/BayaniM22,
  author    = {David Bayani and Stefan Mitsch},
  editor    = {Bernd Finkbeiner and Thomas Wies},
  title     = {Fanoos: Multi-resolution, Multi-strength, Interactive Explanations for Learned Systems},
  booktitle = {Verification, Model Checking, and Abstract Interpretation - 23rd International Conference, {VMCAI} 2022, Philadelphia, PA, USA, January 16-18, 2022, Proceedings},
  series    = {Lecture Notes in Computer Science},
  volume    = {13182},
  pages     = {43--68},
  publisher = {Springer},
  year      = {2022},
  url       = {https://doi.org/10.1007/978-3-030-94583-1\_3},
  doi       = {10.1007/978-3-030-94583-1\_3},
  timestamp = {Fri, 21 Jan 2022 22:02:46 +0100},
  biburl    = {https://dblp.org/rec/conf/vmcai/BayaniM22.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

or see the extended write-up at:

@article{bayani2020fanoos,
  title   = {Fanoos: Multi-Resolution, Multi-Strength, Interactive Explanations for Learned Systems},
  author  = {Bayani, David and Mitsch, Stefan},
  journal = {arXiv preprint arXiv:2006.12453},
  year    = {2020},
  url     = {https://arxiv.org/abs/2006.12453}
}

This upload is related to the additional written materials available in the following Zenodo item:

@misc{david_bayani_2022_6069468,
  author    = {David Bayani},
  title     = {{Further Materials (Additional Slides, Write-ups, Results, etc.) for Fanoos: Multi-Resolution, Multi-Strength, Interactive XAI System}},
  month     = feb,
  year      = 2022,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.6069468},
  url       = {https://doi.org/10.5281/zenodo.6069468}
}

We note that this upload ("Code for Fanoos [...]"), in contrast to the material cited immediately above, contains the source related to Fanoos, as opposed to additional write-ups, slides, etc.

ADS

📄 Deep Residual Learning for Image Recognition

Quicklook:
He, Kaiming et al. (2016) · 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Reads: 45017 · Citations: 27779
DOI: 10.1109/CVPR.2016.90

🔗 https://ui.adsabs.harvard.edu/abs/2016cvpr.confE...1H/abstract

#Astronomy #Astrophysics #AstroAI #ComputerScienceComputerVisionAndPatternRecognition

📄 Random Forests.

Quicklook:
Breiman, Leo et al. (2001) · Machine Learning
Reads: 14 · Citations: 31413
DOI: 10.1023/A:1010933404324

🔗 https://ui.adsabs.harvard.edu/abs/2001MachL..45....5B/abstract

#Astronomy #Astrophysics #AstroAI #MachineLearning

Random Forests.

Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, ***, 148-156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
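The abstract's core mechanism, many tree predictors each trained on an independently sampled random vector (a bootstrap resample plus random feature selection) and combined by majority vote, can be shown in miniature. The "trees" below are depth-1 stumps on one randomly chosen feature; this is a toy illustration of bagging with random feature selection, not Breiman's full algorithm:

```python
import random

def train_stump(data, n_features):
    """Fit a depth-1 'tree': threshold on one randomly chosen feature."""
    f = random.randrange(n_features)
    threshold = sum(x[f] for x, _ in data) / len(data)
    left = [y for x, y in data if x[f] <= threshold]
    right = [y for x, y in data if x[f] > threshold]
    return f, threshold, majority(left), majority(right)

def majority(labels):
    """Most common label, defaulting to 0 for an empty side."""
    return max(set(labels), key=labels.count) if labels else 0

def forest_predict(forest, x):
    """Combine the stumps by majority vote, as the abstract describes."""
    votes = [(l if x[f] <= t else r) for f, t, l, r in forest]
    return max(set(votes), key=votes.count)

random.seed(0)
data = [((0.0, 1.0), 0), ((0.1, 0.9), 0), ((1.0, 0.0), 1), ((0.9, 0.2), 1)]
# Each stump sees an i.i.d. bootstrap resample of the training data.
forest = [train_stump(random.choices(data, k=len(data)), 2) for _ in range(25)]
print(forest_predict(forest, (0.05, 0.95)), forest_predict(forest, (0.95, 0.1)))
```

Individual stumps are weak and some bootstrap samples are degenerate, but the vote over 25 of them classifies both probe points correctly, which is the abstract's point about strength and low correlation.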

ADS

📄 Rapid inversion method for parameters of contact binaries based on in…

Quicklook:
Zeng, Xiangyun et al. (2026) · New Astronomy
Reads: 0 · Citations: 0
DOI: 10.1016/j.newast.2025.102511

#Astronomy #Astrophysics #AstroAI #NeuralNetwork #DifferentialEvolution

AstroAI DT132A Digital Multimeter unbox and first thoughts

https://makertube.net/w/93puqWs7xxvi2wfcFjot8E

AstroAI DT132A Digital Multimeter unbox and first thoughts

PeerTube
AstroAI 4-Liter/6-Can minifridges have electrical switches that short circuit, leading to fires and burns. The affected units are model number LY0204A. #astroai #minifridge #overheating #fires #burns #recall
https://www.instagram.com/p/DME454-pBKD/
Howard G. Smith MD, AM on Instagram: "AstroAI Minifridges Overheat AstroAI 4-Liter/6-Can Minifridges have electrical switches that short circuit leading to fires and burns. The affected units are model number LY0204A with serial numbers starting with 19, 20, 21, 2201, 2202, or 2203. About 249,100 of these fridges were sold nationwide through Amazon.com and AstroAI.com from June 2019 through June 2022. Turn off these minifridges immediately. Contact AstroAI for a free replacement by writing the word “Recalled” on the fridge in permanent marker and sending a photo showing both the model and serial number to [email protected] or upload it at astroai.com/product-recall. Then discard the unit at your local recycling center. For additional information, call AstroAI at 1-877-278-7624. https://www.cpsc.gov/Recalls/2025/AstroAI-Recalls-Minifridges-Due-to-Fire-and-Burn-Hazards-Two-Fires-Resulted-in-More-Than-360000-in-Reported-Property-Damages #astroai #inifridge #overheating #fires #burns #recall"

0 likes, 0 comments - drhowardsmithreports on July 13, 2025.

Instagram
Bought an #AstroAI multimeter. I don't have enough experience to judge if it really is good, but it seems great to me... nice packaging, came with batteries plus a set of batteries installed, has a flashlight and a bright backlight, feels nice and rugged, and shipping was unexpectedly fast.