Note to self: #NVIDIA has an open-source inference server for machine learning models. (They mostly sell SaaS on top of it.)
Supports #TensorFlow, #PyTorch, #ONNX, #TensorRT, #mxnet.
Runs on #k8s. Features queue control and monitoring.
Triton Inference Server https://github.com/triton-inference-server
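Triton exposes the KServe v2 inference protocol over HTTP, so a client request is just a JSON body POSTed to the model's `infer` endpoint. A minimal sketch of building such a request with only the standard library; the model name and the input tensor name `input__0` are hypothetical, assumed for illustration:

```python
import json

def build_infer_request(input_name, values, shape, datatype="FP32"):
    """Build a KServe-v2-style inference request body, the JSON/HTTP
    protocol Triton exposes. Tensor names here are illustrative."""
    return json.dumps({
        "inputs": [{
            "name": input_name,
            "shape": shape,
            "datatype": datatype,
            "data": values,
        }]
    })

# Hypothetical model with a 1x4 float input named "input__0".
body = build_infer_request("input__0", [0.1, 0.2, 0.3, 0.4], [1, 4])
# The request would be POSTed to something like:
#   http://<triton-host>:8000/v2/models/<model-name>/infer
print(body)
```

The same payload shape works for any of the supported backends (TensorFlow, PyTorch, ONNX, TensorRT, MXNet), which is part of the server's appeal.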
MXNet doesn't look difficult... but I see two major issues:
a) The community is still small.
b) Lack of pretrained models for transfer learning. Support for Hugging Face models and ImageNet-pretrained models would be a big win.
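Why pretrained models matter: transfer learning means keeping a pretrained backbone frozen and training only a small head on the new task. A framework-free sketch of that split, with a fixed random linear map standing in for the frozen backbone (in MXNet/Gluon this would be e.g. a model-zoo network loaded with `pretrained=True`); all the data and dimensions below are toy values:

```python
import random

random.seed(0)

# Frozen "pretrained" feature extractor: a fixed linear map standing in
# for an ImageNet-pretrained backbone. Its weights are never updated.
D_IN, D_FEAT = 3, 4
W_frozen = [[random.uniform(-1, 1) for _ in range(D_IN)] for _ in range(D_FEAT)]

def features(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_frozen]

# Trainable head: the only part that learns on the new task.
head = [0.0] * D_FEAT

def predict(x):
    return sum(h * f for h, f in zip(head, features(x)))

# Toy regression data for the "new task": a linear target plus noise.
data = []
for _ in range(20):
    x = [random.uniform(-1, 1) for _ in range(D_IN)]
    y = x[0] - 0.5 * x[1] + 0.1 * random.uniform(-1, 1)
    data.append((x, y))

def mse():
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
lr = 0.1
for _ in range(100):
    # Gradient descent on the head only; the backbone stays frozen.
    grads = [0.0] * D_FEAT
    for x, y in data:
        err = predict(x) - y
        for j, f in enumerate(features(x)):
            grads[j] += 2 * err * f / len(data)
    head = [h - lr * g for h, g in zip(head, grads)]
loss_after = mse()
print(loss_before, loss_after)
```

Without a zoo of strong pretrained backbones, this recipe has nothing to start from, which is exactly issue (b).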
(1/n) New release for the Dive into Deep Learning book!
The Dive into Deep Learning book is an educational project by Amazon's scientists and other contributors focusing on deep learning models and their applications. It is interactive, available online for free, and implemented with multiple deep learning frameworks: PyTorch, NumPy/MXNet, Google JAX, and TensorFlow 2.
#deeplearning #neuralnetworks #machinelearning #pytorch #python #nlp #computervision #tensorflow #mxnet #jax
Alright! I got #mxnet for #cplusplus to compile from scratch without any trouble.
```
$ mkdir -p build && cd build
# Build the C++ package, CPU-only (no CUDA).
$ cmake -DUSE_CPP_PACKAGE=1 -DUSE_CUDA=OFF ..
$ make -j4
```
Wait almost 45 minutes, and it's done.
Next step is to try out their tutorial examples.
Machine Learning with MXNet.cr
https://diode.zone/videos/watch/e2b0ef27-ca55-40c8-bd54-306c2fbc451a
