Can you please add some performance numbers to the main project docs indicating inference latency on some common hardware options, e.g. AWS P2, a GCP GPU instance, CPU-only inference, Raspberry Pi, etc.?
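For reference, below is a minimal Python sketch of how such latency numbers could be collected; the `infer` callable and input batch are placeholders for whatever model the project actually ships, and percentiles are reported alongside the mean because single-run timings are noisy, especially on shared cloud hardware.

```python
import time
import statistics

def benchmark_latency(infer, batch, warmup=10, runs=100):
    """Time single-inference latency of the callable `infer` on `batch`.

    Warmup iterations are discarded so one-time costs (graph building,
    CUDA context creation, cache warming) do not skew the numbers.
    """
    for _ in range(warmup):
        infer(batch)
    samples_ms = []
    for _ in range(runs):
        start = time.perf_counter()
        infer(batch)
        samples_ms.append((time.perf_counter() - start) * 1000.0)
    samples_ms.sort()
    return {
        "mean_ms": statistics.fmean(samples_ms),
        "p50_ms": statistics.median(samples_ms),
        # nearest-rank 95th percentile of the sorted samples
        "p95_ms": samples_ms[int(0.95 * len(samples_ms)) - 1],
    }

# Hypothetical usage -- `model` stands in for the project's real inference fn:
# stats = benchmark_latency(model, example_batch)
# print(stats)
```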
This repository lets you get started with GUI-based training of a state-of-the-art Deep Learning model with little to no configuration needed! No-code training with TensorFlow has never been so easy.
MATE Desktop container for NVIDIA GPUs that accesses the GPU directly with EGL via VirtualGL, TurboVNC, and Guacamole, without using an X server. Does not require /tmp/.X11-unix host sockets. Designed for Kubernetes, with audio forwarding also supported.
MATE Desktop container supporting GLX/Vulkan on NVIDIA GPUs by spawning its own X server and Guacamole interface instead of using the host X server. Does not require /tmp/.X11-unix host sockets or host configuration. Designed for Kubernetes, with audio forwarding also supported.
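As a rough illustration of how either container above might be launched headlessly (no host X server, no /tmp/.X11-unix mount), here is a hedged Python sketch; the image name and published port are placeholders, not the projects' actual values.

```python
import subprocess

# Placeholder launch of a headless GPU desktop container. Note that no
# /tmp/.X11-unix socket is mounted: the container reaches the GPU via EGL
# (or spawns its own X server) rather than relying on the host's X server.
subprocess.run([
    "docker", "run", "--rm",
    "--gpus", "all",            # expose NVIDIA GPUs (Docker 19.03+ syntax)
    "-p", "8080:8080",          # hypothetical port for the Guacamole web UI
    "mate-desktop-egl:latest",  # placeholder image name, not the real image
], check=True)
```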
⭐🐧 GPU SKU usage for Ubuntu 16.04-LTS and CentOS 7.3, plus standard open-source scheduler deployments for HPC SKUs on CentOS 7.1-HPC with OMS. This presently covers the generally available CentOS-HPC A9/H16R/H16MR and GPU NC6/NC12/NC24 SKUs, to be expanded later to other SKUs such as NV and NC24R on Linux. The latest Docker CE and nvidia-docker are present in all images.
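Since the images ship Docker CE and nvidia-docker, a quick way to confirm GPU visibility inside a container is a smoke test like the sketch below; the CUDA image tag is illustrative, and the invocation assumes the nvidia-docker2 runtime is installed.

```python
import subprocess

# Run nvidia-smi inside a CUDA base image and fail loudly if the GPU is not
# visible. The image tag is era-appropriate but illustrative only.
result = subprocess.run(
    ["docker", "run", "--rm", "--runtime=nvidia",
     "nvidia/cuda:9.0-base", "nvidia-smi"],
    capture_output=True, text=True,
)
if result.returncode != 0:
    raise RuntimeError("GPU not visible inside the container:\n" + result.stderr)
print(result.stdout)
```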