model-serving
Here are 54 public repositories matching this topic...
/kind feature
Describe the solution you'd like
In pkg/apis/serving/v1beta1/inference_service_defaults.go, the default InferenceService resource requests and limits are hard-coded to 1 CPU and 2Gi of memory. These are reasonable defaults, but it should also be possible to disable them entirely. Moreover, administrators should be able to quickly adjust the defaults globally via t…
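A minimal sketch of what the requested behaviour could look like, assuming hypothetical ResourceSpec and DefaultConfig types and an applyDefaults helper (none of these are KServe's actual API): defaults are filled in only when the user left the fields empty, and an administrator-controlled flag can switch defaulting off altogether instead of relying on hard-coded values.

```go
package main

import "fmt"

// Hypothetical resource spec mirroring the request/limit fields the issue refers to.
type ResourceSpec struct {
	CPURequest    string
	MemoryRequest string
	CPULimit      string
	MemoryLimit   string
}

// Hypothetical admin-configurable defaults; today the equivalent values
// are hard coded in inference_service_defaults.go.
type DefaultConfig struct {
	Enabled bool
	CPU     string
	Memory  string
}

// applyDefaults fills in resources only when the user left them empty and
// defaulting has not been disabled by the administrator.
func applyDefaults(spec *ResourceSpec, cfg DefaultConfig) {
	if !cfg.Enabled {
		return // administrators can turn defaulting off entirely
	}
	if spec.CPURequest == "" {
		spec.CPURequest = cfg.CPU
	}
	if spec.MemoryRequest == "" {
		spec.MemoryRequest = cfg.Memory
	}
	if spec.CPULimit == "" {
		spec.CPULimit = cfg.CPU
	}
	if spec.MemoryLimit == "" {
		spec.MemoryLimit = cfg.Memory
	}
}

func main() {
	spec := ResourceSpec{CPURequest: "500m"} // user set the CPU request only
	cfg := DefaultConfig{Enabled: true, CPU: "1", Memory: "2Gi"}
	applyDefaults(&spec, cfg)
	fmt.Printf("%+v\n", spec)
}
```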
@zacbrannelly Not sure if this is possible in the current released version, so creating this just to keep track.
Pretty neat if the documentation can be translated into multiple languages: Mandarin, Spanish, Portuguese, Vietnamese, etc.
P1 items, after the 1.0 release