
· 2 min read
MLflow maintainers

MLflow 2.22.0 brings important bug fixes and improvements to the UI and tracking capabilities.

Features:

  • [Tracking] Supported tracing for the OpenAI Responses API.
    (#15240, @B-Step62)
  • [Tracking] Introduced get_last_active_trace, which affects model serving/monitoring logic.
    (#15233, @B-Step62)
  • [Tracking] Introduced async export for Databricks traces (default behavior).
    (#15163, @B-Step62)
  • [AI Gateway] Added Gemini embeddings support with corresponding unit tests.
    (#15017, @joelrobin18)
  • [Tracking / SQLAlchemy] MySQL SSL connections are now supported with client certs.
    (#14839, @aksylumoed)
  • [Models] Added Optuna storage utility for enabling parallel hyperparameter tuning.
    (#15243, @XiaohanZhangCMU)
  • [Artifacts] Added support for Azure Data Lake Storage (ADLS) artifact repositories.
    (#14723, @serena-ruan)
  • [UI] Artifact views for text now auto-refresh in the UI.
    (#14939, @joelrobin18)

Bug Fixes:

  • [Tracking / UI] Fixed serialization for structured output in langchain_tracer + added unit tests.
    (#14971, @joelrobin18)
  • [Server-infra] Enforced password validation for authentication (min. 8 characters).
    (#15287, @WeichenXu123)
  • [Deployments] Resolved an issue with the OpenAI Gateway adapter.
    (#15286, @WeichenXu123)
  • [Artifacts / Tracking / Server-infra] Normalized paths by stripping trailing slashes.
    (#15016, @tarek7669)
  • [Tags] Fixed a bug where tag values containing ": " were being truncated.
    (#14896, @harupy)
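The ": " truncation fix above comes down to splitting a tag line on the first delimiter only, so any further ": " occurrences stay inside the value. A minimal sketch of that parsing rule (the function name is hypothetical, not MLflow's actual code):

```python
def parse_tag(line: str) -> tuple[str, str]:
    # Split on the FIRST ": " only, so values that themselves
    # contain ": " survive intact instead of being truncated.
    key, sep, value = line.partition(": ")
    if not sep:
        raise ValueError(f"not a tag line: {line!r}")
    return key, value

print(parse_tag("env: stage: prod"))  # ('env', 'stage: prod')
```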

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

· One min read
MLflow maintainers

MLflow 2.21.1 is a patch release that introduces minor features and addresses some minor bugs.

Features:

  • Introduce support for logging evaluations within DSPy (#14962, @TomeHirata)
  • Add support for run creation when DSPy compile is executed (#14949, @TomeHirata)
  • Add support for building a SageMaker serving container that does not contain Java via the --install-java option (#14868, @rgangopadhya)

Bug fixes:

  • Fix an issue with trace ordering due to a timestamp conversion timezone bug (#15094, @orm011)
  • Fix a typo in the environment variable OTEL_EXPORTER_OTLP_PROTOCOL definition (#15008, @gabrielfu)
  • Fix an issue in shared and serverless clusters on Databricks when logging Spark Datasources when using the evaluate API (#15077, @WeichenXu123)
  • Fix a rendering issue with displaying images from within the metric tab in the UI (#15034, @TomeHirata)
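The OTEL_EXPORTER_OTLP_PROTOCOL fix above is a reminder that exporter settings are read verbatim from the environment, so a misspelled variable name silently falls back to the default. A small illustrative sketch (the default value here is an assumption for the example, not any SDK's actual fallback):

```python
import os

def get_otlp_protocol(default: str = "grpc") -> str:
    # The variable name must match exactly; any typo means the default wins.
    return os.environ.get("OTEL_EXPORTER_OTLP_PROTOCOL", default)

os.environ["OTEL_EXPORTER_OTLP_PROTOCOL"] = "http/protobuf"
print(get_otlp_protocol())  # http/protobuf
```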

Documentation updates:

  • Add additional contextual information within the set_retriever_schema API docs (#15099, @smurching)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

· 3 min read
MLflow maintainers

We are excited to announce the release of MLflow 2.21.0! This release includes a number of significant features, enhancements, and bug fixes.

Major New Features

Features:

Bug fixes:

  • [Models] Fix infinite recursion error with warning handler module (#14954, @BenWilson2)
  • [Model Registry] Fix invalid type issue for ModelRegistry RestStore (#14980, @B-Step62)
  • [Tracking] Fix: ExperimentViewRunsControlsActionsSelectTags doesn't set loading state to false when set-tag request fails. (#14907, @harupy)
  • [Tracking] Fix a bug in tag creation where tag values containing ": " get truncated (#14896, @harupy)
  • [Tracking] Fix false alert from AMD GPU monitor (#14884, @B-Step62)
  • [Tracking] Fix mlflow.doctor to fall back to mlflow-skinny when mlflow is not found (#14782, @harupy)
  • [Models] Handle LangGraph breaking change (#14794, @B-Step62)

Documentation updates:

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

· One min read
MLflow maintainers

MLflow 2.20.3 is a patch release that includes several major features and improvements.

Features:

Bug fixes:

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

· One min read
MLflow maintainers

MLflow 2.20.2 is a patch release that includes several bug fixes and features.

Features:

Bug fixes:

Documentation updates:

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

· One min read
MLflow maintainers

MLflow 2.20.1 is a patch release that includes several bug fixes and features:

Features:

  • spark_udf support for model signatures based on type hints (#14265, @serena-ruan)
  • Helper connectors to use ChatAgent with LangChain and LangGraph (#14215, @bbqiu)
  • Update classifier evaluator to draw ROC/Lift curves for CatBoost models by default (#14333, @singh-kristian)
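For context on what the evaluator draws: an ROC curve is just a sequence of (false-positive-rate, true-positive-rate) points swept over score thresholds. A self-contained sketch of that computation (illustrative only, not MLflow's evaluator code):

```python
def roc_points(scores, labels):
    # Sort predictions by descending score, then sweep thresholds,
    # recording (FPR, TPR) after each prediction is admitted as positive.
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# A perfect ranking passes through (0, 1): every positive scored above every negative.
print(roc_points([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```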

Bug fixes:

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

· 2 min read
MLflow maintainers

Major New Features

  • 💡Type Hint-Based Model Signature: Define your model's signature in the most Pythonic way. MLflow now supports defining a model signature based on the type hints in your PythonModel's predict function, and validating input data payloads against it. (#14182, #14168, #14130, #14100, #14099, @serena-ruan)

  • 🧠 Bedrock / Groq Tracing Support: MLflow Tracing now offers a one-line auto-tracing experience for Amazon Bedrock and Groq LLMs. Track LLM invocations within your model by simply adding a mlflow.bedrock.autolog() or mlflow.groq.autolog() call to your code. (#14018, @B-Step62, #14006, @anumita0203)

  • 🗒️ Inline Trace Rendering in Jupyter Notebook: MLflow now supports rendering a trace UI within the notebook where you are running models. This eliminates the need to frequently switch between the notebook and browser, creating a seamless local model debugging experience. (#13955, @daniellok-db)

  • ⚡️Faster Model Validation with uv Package Manager: MLflow has adopted uv, a new Rust-based, super-fast Python package manager. This release adds support for the new package manager in the mlflow.models.predict API, enabling faster model environment validation. Stay tuned for more updates! (#13824, @serena-ruan)

  • 🖥️ New Chat Panel in Trace UI: The MLflow Trace UI now shows a unified chat panel for LLM invocations. The update allows you to view chat messages and function calls in a rich and consistent UI across LLM providers, as well as inspect the raw input and output payloads. (#14211, @TomeHirata)
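The type-hint-based signature feature above rests on introspecting the annotations of a model's predict method. A simplified, standard-library-only sketch of that mechanism (MLflow's real signature inference and payload validation are considerably more involved):

```python
from typing import get_type_hints

class EchoModel:
    # With type-hint-based signatures, annotations like these drive
    # signature inference and input validation.
    def predict(self, model_input: list[str]) -> list[str]:
        return [text.upper() for text in model_input]

# Inspect the annotations the way a type-hint-based inference step might.
hints = get_type_hints(EchoModel.predict)
print(hints["model_input"], hints["return"])  # list[str] list[str]
```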

Other Features:

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

· 2 min read
MLflow maintainers

MLflow 2.20.0rc0 is a release candidate for 2.20.0. To install, run the following command:

pip install mlflow==2.20.0rc0

Major New Features

  • 💡Type Hint-Based Model Signature: Define your model's signature in the most Pythonic way. MLflow now supports defining a model signature based on the type hints in your PythonModel's predict function, and validating input data payloads against it. (#14182, #14168, #14130, #14100, #14099, @serena-ruan)

  • 🧠 Bedrock / Groq Tracing Support: MLflow Tracing now offers a one-line auto-tracing experience for Amazon Bedrock and Groq LLMs. Track LLM invocations within your model by simply adding a mlflow.bedrock.autolog() or mlflow.groq.autolog() call to your code. (#14018, @B-Step62, #14006, @anumita0203)

  • 🗒️ Inline Trace Rendering in Jupyter Notebook: MLflow now supports rendering a trace UI within the notebook where you are running models. This eliminates the need to frequently switch between the notebook and browser, creating a seamless local model debugging experience. (#13955, @daniellok-db)

  • ⚡️Faster Model Validation with uv Package Manager: MLflow has adopted uv, a new Rust-based, super-fast Python package manager. This release adds support for the new package manager in the mlflow.models.predict API, enabling faster model environment validation. Stay tuned for more updates! (#13824, @serena-ruan)

  • 🖥️ New Chat Panel in Trace UI: The MLflow Trace UI now shows a unified chat panel for LLM invocations. The update allows you to view chat messages and function calls in a rich and consistent UI across LLM providers, as well as inspect the raw input and output payloads. (#14211, @TomeHirata)

Other Features:

Please try it out and report any issues on the issue tracker!