Release 2.4.3
This release introduces several vulnerability fixes:
- Fixes a heap out of bounds access in sparse reduction operations (CVE-2021-37635)
- Fixes a floating point exception in `SparseDenseCwiseDiv` (CVE-2021-37636)
- Fixes a null pointer dereference in `CompressElement` (CVE-2021-37637)
- Fixes a null pointer dereference in `RaggedTensorToTensor` (CVE-2021-37638)
- Fixes a null pointer dereference and a heap OOB read arising from operations restoring tensors (CVE-2021-37639)
- Fixes an integer division by 0 in sparse reshaping (CVE-2021-37640)
- Fixes a division by 0 in `ResourceScatterDiv` (CVE-2021-37642)
- Fixes a heap OOB in `RaggedGather` (CVE-2021-37641)
- Fixes a `std::abort` raised from `TensorListReserve` (CVE-2021-37644)
- Fixes a null pointer dereference in `MatrixDiagPartOp` (CVE-2021-37643)
- Fixes an integer overflow due to conversion to unsigned (CVE-2021-37645)
- Fixes a bad allocation error in `StringNGrams` caused by integer conversion (CVE-2021-37646)
- Fixes a null pointer dereference in `SparseTensorSliceDataset` (CVE-2021-37647)
- Fixes an incorrect validation of `SaveV2` inputs (CVE-2021-37648)
- Fixes a null pointer dereference in `UncompressElement` (CVE-2021-37649)
- Fixes a segfault and a heap buffer overflow in `{Experimental,}DatasetToTFRecord` (CVE-2021-37650)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-37651)
- Fixes a use after free in boosted trees creation (CVE-2021-37652)
- Fixes a division by 0 in `ResourceGather` (CVE-2021-37653)
- Fixes a heap OOB and a `CHECK` fail in `ResourceGather` (CVE-2021-37654)
- Fixes a heap OOB in `ResourceScatterUpdate` (CVE-2021-37655)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToSparse` (CVE-2021-37656)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixDiagV*` ops (CVE-2021-37657)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixSetDiagV*` ops (CVE-2021-37658)
- Fixes an undefined behavior arising from reference binding to nullptr and heap OOB in binary cwise ops (CVE-2021-37659)
- Fixes a division by 0 in inplace operations (CVE-2021-37660)
- Fixes a crash caused by integer conversion to unsigned (CVE-2021-37661)
- Fixes an undefined behavior arising from reference binding to nullptr in boosted trees (CVE-2021-37662)
- Fixes a heap OOB in boosted trees (CVE-2021-37664)
- Fixes vulnerabilities arising from incomplete validation in `QuantizeV2` (CVE-2021-37663)
- Fixes vulnerabilities arising from incomplete validation in MKL requantization (CVE-2021-37665)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToVariant` (CVE-2021-37666)
- Fixes an undefined behavior arising from reference binding to nullptr in unicode encoding (CVE-2021-37667)
- Fixes an FPE in `tf.raw_ops.UnravelIndex` (CVE-2021-37668)
- Fixes a crash in NMS ops caused by integer conversion to unsigned (CVE-2021-37669)
- Fixes a heap OOB in `UpperBound` and `LowerBound` (CVE-2021-37670)
- Fixes an undefined behavior arising from reference binding to nullptr in map operations (CVE-2021-37671)
- Fixes a heap OOB in `SdcaOptimizerV2` (CVE-2021-37672)
- Fixes a `CHECK`-fail in `MapStage` (CVE-2021-37673)
- Fixes a vulnerability arising from incomplete validation in `MaxPoolGrad` (CVE-2021-37674)
- Fixes an undefined behavior arising from reference binding to nullptr in shape inference (CVE-2021-37676)
- Fixes a division by 0 in most convolution operators (CVE-2021-37675)
- Fixes vulnerabilities arising from missing validation in shape inference for `Dequantize` (CVE-2021-37677)
- Fixes an arbitrary code execution due to YAML deserialization (CVE-2021-37678)
- Fixes a heap OOB in nested `tf.map_fn` with `RaggedTensor`s (CVE-2021-37679)
- Fixes a division by zero in TFLite (CVE-2021-37680)
- Fixes an NPE in TFLite (CVE-2021-37681)
- Fixes a vulnerability arising from use of an uninitialized value in TFLite (CVE-2021-37682)
- Fixes an FPE in TFLite division operations (CVE-2021-37683)
- Fixes an FPE in TFLite pooling operations (CVE-2021-37684)
- Fixes an infinite loop in TFLite (CVE-2021-37686)
- Fixes a heap OOB in TFLite (CVE-2021-37685)
- Fixes a heap OOB in TFLite's `Gather*` implementations (CVE-2021-37687)
- Fixes an undefined behavior arising from null pointer dereference in TFLite (CVE-2021-37688)
- Fixes an undefined behavior arising from null pointer dereference in TFLite MLIR optimizations (CVE-2021-37689)
- Fixes an FPE in LSH in TFLite (CVE-2021-37691)
- Fixes a segfault on string tensors with mismatched dimensions, arising in Go code (CVE-2021-37692)
- Fixes a use after free and a potential segfault in shape inference functions (CVE-2021-37690)
- Updates `curl` to `7.77.0` to handle CVE-2021-22876, CVE-2021-22897, CVE-2021-22898, and CVE-2021-22901.
Release 2.3.4
NOTE: This is the last release in the 2.3.x line.
This release introduces several vulnerability fixes:
- Fixes a heap out of bounds access in sparse reduction operations (CVE-2021-37635)
- Fixes a floating point exception in `SparseDenseCwiseDiv` (CVE-2021-37636)
- Fixes a null pointer dereference in `CompressElement` (CVE-2021-37637)
- Fixes a null pointer dereference in `RaggedTensorToTensor` (CVE-2021-37638)
- Fixes a null pointer dereference and a heap OOB read arising from operations restoring tensors (CVE-2021-37639)
- Fixes an integer division by 0 in sparse reshaping (CVE-2021-37640)
- Fixes a division by 0 in `ResourceScatterDiv` (CVE-2021-37642)
- Fixes a heap OOB in `RaggedGather` (CVE-2021-37641)
- Fixes a `std::abort` raised from `TensorListReserve` (CVE-2021-37644)
- Fixes a null pointer dereference in `MatrixDiagPartOp` (CVE-2021-37643)
- Fixes an integer overflow due to conversion to unsigned (CVE-2021-37645)
- Fixes a bad allocation error in `StringNGrams` caused by integer conversion (CVE-2021-37646)
- Fixes a null pointer dereference in `SparseTensorSliceDataset` (CVE-2021-37647)
- Fixes an incorrect validation of `SaveV2` inputs (CVE-2021-37648)
- Fixes a null pointer dereference in `UncompressElement` (CVE-2021-37649)
- Fixes a segfault and a heap buffer overflow in `{Experimental,}DatasetToTFRecord` (CVE-2021-37650)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-37651)
- Fixes a use after free in boosted trees creation (CVE-2021-37652)
- Fixes a division by 0 in `ResourceGather` (CVE-2021-37653)
- Fixes a heap OOB and a `CHECK` fail in `ResourceGather` (CVE-2021-37654)
- Fixes a heap OOB in `ResourceScatterUpdate` (CVE-2021-37655)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToSparse` (CVE-2021-37656)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixDiagV*` ops (CVE-2021-37657)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixSetDiagV*` ops (CVE-2021-37658)
- Fixes an undefined behavior arising from reference binding to nullptr and heap OOB in binary cwise ops (CVE-2021-37659)
- Fixes a division by 0 in inplace operations (CVE-2021-37660)
- Fixes a crash caused by integer conversion to unsigned (CVE-2021-37661)
- Fixes an undefined behavior arising from reference binding to nullptr in boosted trees (CVE-2021-37662)
- Fixes a heap OOB in boosted trees (CVE-2021-37664)
- Fixes vulnerabilities arising from incomplete validation in `QuantizeV2` (CVE-2021-37663)
- Fixes vulnerabilities arising from incomplete validation in MKL requantization (CVE-2021-37665)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToVariant` (CVE-2021-37666)
- Fixes an undefined behavior arising from reference binding to nullptr in unicode encoding (CVE-2021-37667)
- Fixes an FPE in `tf.raw_ops.UnravelIndex` (CVE-2021-37668)
- Fixes a crash in NMS ops caused by integer conversion to unsigned (CVE-2021-37669)
- Fixes a heap OOB in `UpperBound` and `LowerBound` (CVE-2021-37670)
- Fixes an undefined behavior arising from reference binding to nullptr in map operations (CVE-2021-37671)
- Fixes a heap OOB in `SdcaOptimizerV2` (CVE-2021-37672)
- Fixes a `CHECK`-fail in `MapStage` (CVE-2021-37673)
- Fixes a vulnerability arising from incomplete validation in `MaxPoolGrad` (CVE-2021-37674)
- Fixes an undefined behavior arising from reference binding to nullptr in shape inference (CVE-2021-37676)
- Fixes a division by 0 in most convolution operators (CVE-2021-37675)
- Fixes vulnerabilities arising from missing validation in shape inference for `Dequantize` (CVE-2021-37677)
- Fixes an arbitrary code execution due to YAML deserialization (CVE-2021-37678)
- Fixes a heap OOB in nested `tf.map_fn` with `RaggedTensor`s (CVE-2021-37679)
- Fixes a division by zero in TFLite (CVE-2021-37680)
- Fixes an NPE in TFLite (CVE-2021-37681)
- Fixes a vulnerability arising from use of an uninitialized value in TFLite (CVE-2021-37682)
- Fixes an FPE in TFLite division operations (CVE-2021-37683)
- Fixes an FPE in TFLite pooling operations (CVE-2021-37684)
- Fixes an infinite loop in TFLite (CVE-2021-37686)
- Fixes a heap OOB in TFLite (CVE-2021-37685)
- Fixes a heap OOB in TFLite's `Gather*` implementations (CVE-2021-37687)
- Fixes an undefined behavior arising from null pointer dereference in TFLite (CVE-2021-37688)
- Fixes an undefined behavior arising from null pointer dereference in TFLite MLIR optimizations (CVE-2021-37689)
- Fixes an FPE in LSH in TFLite (CVE-2021-37691)
- Fixes a segfault on string tensors with mismatched dimensions, arising in Go code (CVE-2021-37692)
- Fixes a use after free and a potential segfault in shape inference functions (CVE-2021-37690)
- Updates `curl` to `7.77.0` to handle CVE-2021-22876, CVE-2021-22897, CVE-2021-22898, and CVE-2021-22901.
Release 2.6.0
Breaking Changes
- `tf.train.experimental.enable_mixed_precision_graph_rewrite` is removed, as the API only works in graph mode and is not customizable. The function is still accessible under `tf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite`, but it is recommended to use the Keras mixed precision API instead.
- `tf.lite`:
  - Remove `experimental.nn.dynamic_rnn`, `experimental.nn.TfLiteRNNCell` and `experimental.nn.TfLiteLSTMCell` since they are no longer supported. It is recommended to use the Keras LSTM layers instead.
- `tf.keras`:
  - Keras has been split into a separate PIP package (`keras`), and its code has been moved to the GitHub repository keras-team/keras. The API endpoints for `tf.keras` stay unchanged, but are now backed by the `keras` PIP package. The existing code in tensorflow/python/keras is a stale copy and will be removed in a future release (2.7). Please remove any imports of `tensorflow.python.keras` and replace them with the public `tf.keras` API instead.
  - The methods `Model.to_yaml()` and `keras.models.model_from_yaml` have been replaced to raise a `RuntimeError` as they can be abused to cause arbitrary code execution. It is recommended to use JSON serialization instead of YAML, or, a better alternative, serialize to H5.
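The reason JSON is the safer replacement can be illustrated with the standard library alone: JSON deserialization can only produce plain data structures, whereas unsafe YAML loaders can instantiate arbitrary Python objects. A minimal stdlib sketch (plain `json`, not the Keras API; the `config` dict is a made-up stand-in for a layer config):

```python
import json

# A model "config" round-trips through JSON as inert data.
config = {"name": "dense_1", "units": 64, "activation": "relu"}
payload = json.dumps(config)

# json.loads can only yield dicts, lists, strings, numbers, bools and None;
# it has no mechanism for constructing arbitrary objects, which is what
# makes YAML deserialization exploitable (CVE-2021-37678).
restored = json.loads(payload)
assert restored == config
```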
Known Caveats
- TF Core:
  - A longstanding bug in `tf.while_loop`, which caused it to execute sequentially even when `parallel_iterations > 1`, has now been fixed. However, the increased parallelism may result in increased memory use. Users who experience unwanted regressions should reset their `while_loop`'s `parallel_iterations` value to 1, which is consistent with prior behavior.
Major Features and Improvements
- `tf.keras`:
  - Keras has been split into a separate PIP package (`keras`), and its code has been moved to the GitHub repository keras-team/keras. The API endpoints for `tf.keras` stay unchanged, but are now backed by the `keras` PIP package. All Keras-related PRs and issues should now be directed to the GitHub repository keras-team/keras.
  - `tf.keras.utils.experimental.DatasetCreator` now takes an optional `tf.distribute.InputOptions` for specific options when used with distribution.
  - `tf.keras.experimental.SidecarEvaluator` is now available for a program intended to be run on an evaluator task, which is commonly used to supplement a training cluster running with `tf.distribute.experimental.ParameterServerStrategy` (see https://www.tensorflow.org/tutorials/distribute/parameter_server_training). It can also be used with single-worker training or other strategies. See docstring for more info.
  - Preprocessing layers moved from experimental to core.
    - Import paths moved from `tf.keras.layers.preprocessing.experimental` to `tf.keras.layers`.
  - Updates to Preprocessing layers API for consistency and clarity:
    - `StringLookup` and `IntegerLookup` default for `mask_token` changed to `None`. This matches the default masking behavior of `Hashing` and `Embedding` layers. To keep existing behavior, pass `mask_token=""` during layer creation.
    - Renamed `"binary"` output mode to `"multi_hot"` for `CategoryEncoding`, `StringLookup`, `IntegerLookup`, and `TextVectorization`. Multi-hot encoding will no longer automatically uprank rank 1 inputs, so these layers can now multi-hot encode unbatched multi-dimensional samples.
    - Added a new output mode `"one_hot"` for `CategoryEncoding`, `StringLookup`, `IntegerLookup`, which will encode each element in an input batch individually, and automatically append a new output dimension if necessary. Use this mode on rank 1 inputs for the old `"binary"` behavior of one-hot encoding a batch of scalars.
    - `Normalization` will no longer automatically uprank rank 1 inputs, allowing normalization of unbatched multi-dimensional samples.
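The difference between the two output modes above can be sketched in plain Python (this models only the encoding semantics, not the Keras layer API; `num_tokens` stands in for the layer's vocabulary size):

```python
def multi_hot(indices, num_tokens):
    """Collapse one sample's token indices into a single num_tokens vector."""
    vec = [0] * num_tokens
    for i in indices:
        vec[i] = 1
    return vec

def one_hot(batch, num_tokens):
    """Encode each element individually, appending a new output dimension."""
    return [[1 if i == idx else 0 for i in range(num_tokens)] for idx in batch]

# "multi_hot": one vector per sample, however many tokens the sample has.
assert multi_hot([0, 2], num_tokens=4) == [1, 0, 1, 0]

# "one_hot": a rank-1 batch of scalars becomes a rank-2 batch of vectors,
# matching the old "binary" behavior on rank 1 inputs.
assert one_hot([0, 2], num_tokens=4) == [[1, 0, 0, 0], [0, 0, 1, 0]]
```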
- `tf.lite`:
  - The recommended Android NDK version for building TensorFlow Lite has been changed from r18b to r19c.
  - Supports int64 for mul.
  - Supports native variable builtin ops - ReadVariable, AssignVariable.
  - Converter:
    - Experimental support for variables in TFLite. To enable through conversion, set `experimental_enable_resource_variables` on `tf.lite.TFLiteConverter` to `True`. Note: mutable variables are only available using `from_saved_model` in this release; support for other methods is coming soon.
    - The old converter (TOCO) is being removed in the next release. It has been deprecated for a few releases already.
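The converter flag above can be set as in the following configuration sketch (not run here; it assumes TensorFlow 2.6 is installed and a SavedModel containing resource variables exists at the hypothetical path `./my_saved_model`):

```python
import tensorflow as tf

# Variables support currently requires the from_saved_model entry point;
# "./my_saved_model" is a hypothetical SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model("./my_saved_model")

# Opt in to the experimental resource-variable support described above.
converter.experimental_enable_resource_variables = True

tflite_model = converter.convert()
```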
- `tf.saved_model`:
  - SavedModels can now save custom gradients. Use the option `tf.saved_model.SaveOptions(experimental_custom_gradients=True)` to enable this feature. The documentation in Advanced autodiff has been updated.
  - Object metadata has now been deprecated and is no longer saved to the SavedModel.
- TF Core:
  - Added `tf.config.experimental.reset_memory_stats` to reset the tracked peak memory returned by `tf.config.experimental.get_memory_info`.
- `tf.data`:
  - Added `target_workers` param to `data_service_ops.from_dataset_id` and `data_service_ops.distribute`. Users can specify `"AUTO"`, `"ANY"`, or `"LOCAL"` (case insensitive). If `"AUTO"`, the tf.data service runtime decides which workers to read from. If `"ANY"`, TF workers read from any tf.data service workers. If `"LOCAL"`, TF workers will only read from local in-process tf.data service workers. `"AUTO"` works well for most cases, while users can specify other targets. For example, `"LOCAL"` would help avoid RPCs and data copies if every TF worker is colocated with a tf.data service worker. Currently, `"AUTO"` reads from any tf.data service workers to preserve existing behavior. The default value is `"AUTO"`.
Bug Fixes and Other Changes
- TF Core:
  - Added `tf.lookup.experimental.MutableHashTable`, which provides a generic mutable hash table implementation.
    - Compared to `tf.lookup.experimental.DenseHashTable`, this offers lower overall memory usage and a cleaner API. It does not require specifying a `delete_key` and `empty_key` that cannot be inserted into the table.
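The API difference above can be sketched with two toy Python classes (a model of the two lookup-table styles, not the TF implementation): a `DenseHashTable`-style table reserves sentinel keys that callers may never insert, while a generic mutable table has no such restriction.

```python
class DenseStyleTable:
    """Toy model: open-addressing tables reserve sentinel keys."""
    def __init__(self, empty_key, deleted_key):
        self._empty_key = empty_key
        self._deleted_key = deleted_key
        self._data = {}

    def insert(self, key, value):
        # The sentinels mark empty/deleted slots, so real data may not use them.
        if key in (self._empty_key, self._deleted_key):
            raise ValueError(f"key {key!r} is reserved and cannot be inserted")
        self._data[key] = value

class MutableStyleTable:
    """Toy model: a generic mutable table accepts any key."""
    def __init__(self):
        self._data = {}

    def insert(self, key, value):
        self._data[key] = value

dense = DenseStyleTable(empty_key=-1, deleted_key=-2)
dense.insert(7, "ok")
try:
    dense.insert(-1, "boom")   # sentinel key: rejected
except ValueError:
    pass

mutable = MutableStyleTable()
mutable.insert(-1, "fine")     # no reserved keys to avoid
```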
  - Added support for specifying a number of subdivisions in all-reduce host collectives. This parallelizes work on CPU and speeds up the collective performance. Default behavior is unchanged.
  - Added an option `perturb_singular` to `tf.linalg.tridiagonal_solve` that allows solving linear systems with a numerically singular tridiagonal matrix, e.g. for use in inverse iteration.
  - Added `tf.linalg.eigh_tridiagonal`, which computes the eigenvalues of a Hermitian tridiagonal matrix.
  - `tf.constant` now places its output on the current default device.
  - SavedModel:
    - Added `tf.saved_model.experimental.TrackableResource`, which allows the creation of custom wrapper objects for resource tensors.
    - Added a SavedModel load option to allow restoring partial checkpoints into the SavedModel. See [`tf.saved_model.LoadOptions`](https://www.tensorflow.org/api_docs/python/tf/saved_model/LoadOptions) for details.
  - Added a new op `SparseSegmentSumGrad` to match the other sparse segment gradient ops and avoid an extra gather operation that was in the previous gradient implementation.
  - Added a new session config setting `internal_fragmentation_fraction`, which controls when the BFC Allocator needs to split an oversized chunk to satisfy an allocation request.
  - Added `tf.get_current_name_scope()`, which returns the current full name scope string that will be prepended to op names.
- `tf.data`:
  - Promoting `tf.data.experimental.bucket_by_sequence_length` API to `tf.data.Dataset.bucket_by_sequence_length` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.get_single_element` API to `tf.data.Dataset.get_single_element` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.group_by_window` API to `tf.data.Dataset.group_by_window` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.RandomDataset` API to `tf.data.Dataset.random` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.scan` API to `tf.data.Dataset.scan` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.snapshot` API to `tf.data.Dataset.snapshot` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.take_while` API to `tf.data.Dataset.take_while` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.ThreadingOptions` API to `tf.data.ThreadingOptions` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.unique` API to `tf.data.Dataset.unique` and deprecating the experimental endpoint.
  - Added `stop_on_empty_dataset` parameter to `sample_from_datasets` and `choose_from_datasets`. Setting `stop_on_empty_dataset=True` will stop sampling if it encounters an empty dataset. This preserves the sampling ratio throughout training. The prior behavior was to continue sampling, skipping over exhausted datasets, until all datasets are exhausted. By default, the original behavior (`stop_on_empty_dataset=False`) is preserved.
  - Removed previously deprecated tf.data statistics related APIs:
    - `tf.data.Options.experimental_stats`
    - `tf.data.experimental.StatsAggregator`
    - `tf.data.experimental.StatsOptions.*`
    - `tf.data.experimental.bytes_produced_stats`
    - `tf.data.experimental.latency_stats`
  - Removed the following experimental tf.data optimization APIs:
    - `tf.data.experimental.MapVectorizationOptions.*`
    - `tf.data.experimental.OptimizationOptions.filter_with_random_uniform_fusion`
    - `tf.data.experimental.OptimizationOptions.hoist_random_uniform`
    - `tf.data.experimental.OptimizationOptions.map_vectorization`
    - `tf.data.experimental.OptimizationOptions.reorder_data_discarding_ops`
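The two `stop_on_empty_dataset` behaviors described above can be sketched with plain Python iterators (a deterministic round-robin toy model of the semantics, not the tf.data implementation, which samples randomly):

```python
def sample(datasets, stop_on_empty_dataset):
    """Round-robin stand-in for the sampling in sample_from_datasets."""
    iterators = [iter(d) for d in datasets]
    out = []
    while iterators:
        next_round = []
        for it in iterators:
            try:
                out.append(next(it))
                next_round.append(it)
            except StopIteration:
                if stop_on_empty_dataset:
                    return out  # stop as soon as any dataset is exhausted
                # else: drop the exhausted dataset and keep sampling
        iterators = next_round
    return out

a, b = [1, 2, 3, 4], ["x"]
# Prior behavior: keep going past the exhausted dataset b.
assert sample([a, b], stop_on_empty_dataset=False) == [1, "x", 2, 3, 4]
# New behavior: stop when b runs out, preserving the 1:1 sampling ratio.
assert sample([a, b], stop_on_empty_dataset=True) == [1, "x", 2]
```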
- `tf.keras`:
  - Fix usage of `__getitem__` slicing in Keras Functional APIs when the inputs are `RaggedTensor` objects.
  - Add `keepdims` argument to all `GlobalPooling` layers.
  - Add `include_preprocessing` argument to `MobileNetV3` architectures to control the inclusion of a `Rescaling` layer in the model.
  - Add optional argument (`force`) to `make_(train|test|predict)_function` methods to skip the cached function and generate a new one. This is useful to regenerate, in a single call, the compiled training function when any `.trainable` attribute of any model layer has changed.
  - Models now have a `save_spec` property which contains the `TensorSpec` specs for calling the model. This spec is automatically saved when the model is called for the first time.
- `tf.linalg`:
  - Add `CompositeTensor` as a base class to `LinearOperator`.
- `tf.lite`:
  - Fix mean op reference quantization rounding issue.
  - Added `framework_stable` BUILD target, which links in only the non-experimental TF Lite APIs.
  - Remove deprecated Java `Interpreter` methods:
    - `modifyGraphWithDelegate` - use `Interpreter.Options.addDelegate`
    - `setNumThreads` - use `Interpreter.Options.setNumThreads`
  - Add `Conv3DTranspose` as a builtin op.
- `tf.summary`:
  - Fix `tf.summary.should_record_summaries()` so it correctly reflects when summaries will be written, even when `tf.summary.record_if()` is not in effect, by returning a True tensor if a default writer is present.
- Grappler:
  - Disable the default Grappler optimization timeout to make the optimization pipeline deterministic. This may lead to increased model loading time, because time spent in graph optimizations is now unbounded (it was 20 minutes).
- Deterministic Op Functionality (enabled by setting `TF_DETERMINISTIC_OPS` to `"true"` or `"1"`):
  - Add a deterministic GPU implementation of `tf.nn.softmax_cross_entropy_with_logits`. See PR 49178.
  - Add a deterministic CPU implementation of `tf.image.crop_and_resize`. See PR 48905.
  - Add determinism-unimplemented exception-throwing to the following ops. When op-determinism is expected, an attempt to use the specified paths through the following ops on a GPU will cause `tf.errors.UnimplementedError` (with an understandable message) to be thrown.
Security
- Fixes a heap out of bounds access in sparse reduction operations (CVE-2021-37635)
- Fixes a floating point exception in `SparseDenseCwiseDiv` (CVE-2021-37636)
- Fixes a null pointer dereference in `CompressElement` (CVE-2021-37637)
- Fixes a null pointer dereference in `RaggedTensorToTensor` (CVE-2021-37638)
- Fixes a null pointer dereference and a heap OOB read arising from operations restoring tensors (CVE-2021-37639)
- Fixes an integer division by 0 in sparse reshaping (CVE-2021-37640)
- Fixes a division by 0 in `ResourceScatterDiv` (CVE-2021-37642)
- Fixes a heap OOB in `RaggedGather` (CVE-2021-37641)
- Fixes a `std::abort` raised from `TensorListReserve` (CVE-2021-37644)
- Fixes a null pointer dereference in `MatrixDiagPartOp` (CVE-2021-37643)
- Fixes an integer overflow due to conversion to unsigned (CVE-2021-37645)
- Fixes a bad allocation error in `StringNGrams` caused by integer conversion (CVE-2021-37646)
- Fixes a null pointer dereference in `SparseTensorSliceDataset` (CVE-2021-37647)
- Fixes an incorrect validation of `SaveV2` inputs (CVE-2021-37648)
- Fixes a null pointer dereference in `UncompressElement` (CVE-2021-37649)
- Fixes a segfault and a heap buffer overflow in `{Experimental,}DatasetToTFRecord` (CVE-2021-37650)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-37651)
- Fixes a use after free in boosted trees creation (CVE-2021-37652)
- Fixes a division by 0 in `ResourceGather` (CVE-2021-37653)
- Fixes a heap OOB and a `CHECK` fail in `ResourceGather` (CVE-2021-37654)
- Fixes a heap OOB in `ResourceScatterUpdate` (CVE-2021-37655)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToSparse` (CVE-2021-37656)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixDiagV*` ops (CVE-2021-37657)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixSetDiagV*` ops (CVE-2021-37658)
- Fixes an undefined behavior arising from reference binding to nullptr and heap OOB in binary cwise ops (CVE-2021-37659)
- Fixes a division by 0 in inplace operations (CVE-2021-37660)
- Fixes a crash caused by integer conversion to unsigned (CVE-2021-37661)
- Fixes an undefined behavior arising from reference binding to nullptr in boosted trees (CVE-2021-37662)
- Fixes a heap OOB in boosted trees (CVE-2021-37664)
- Fixes vulnerabilities arising from incomplete validation in `QuantizeV2` (CVE-2021-37663)
- Fixes vulnerabilities arising from incomplete validation in MKL requantization (CVE-2021-37665)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToVariant` (CVE-2021-37666)
- Fixes an undefined behavior arising from reference binding to nullptr in unicode encoding (CVE-2021-37667)
- Fixes an FPE in `tf.raw_ops.UnravelIndex` (CVE-2021-37668)
- Fixes a crash in NMS ops caused by integer conversion to unsigned (CVE-2021-37669)
- Fixes a heap OOB in `UpperBound` and `LowerBound` (CVE-2021-37670)
- Fixes an undefined behavior arising from reference binding to nullptr in map operations (CVE-2021-37671)
- Fixes a heap OOB in `SdcaOptimizerV2` (CVE-2021-37672)
- Fixes a `CHECK`-fail in `MapStage` (CVE-2021-37673)
- Fixes a vulnerability arising from incomplete validation in `MaxPoolGrad` (CVE-2021-37674)
- Fixes an undefined behavior arising from reference binding to nullptr in shape inference (CVE-2021-37676)
- Fixes a division by 0 in most convolution operators (CVE-2021-37675)
- Fixes vulnerabilities arising from missing validation in shape inference for `Dequantize` (CVE-2021-37677)
- Fixes an arbitrary code execution due to YAML deserialization (CVE-2021-37678)
- Fixes a heap OOB in nested `tf.map_fn` with `RaggedTensor`s (CVE-2021-37679)
- Fixes a division by zero in TFLite (CVE-2021-37680)
- Fixes an NPE in TFLite (CVE-2021-37681)
- Fixes a vulnerability arising from use of an uninitialized value in TFLite (CVE-2021-37682)
- Fixes an FPE in TFLite division operations (CVE-2021-37683)
- Fixes an FPE in TFLite pooling operations (CVE-2021-37684)
- Fixes an infinite loop in TFLite (CVE-2021-37686)
- Fixes a heap OOB in TFLite (CVE-2021-37685)
- Fixes a heap OOB in TFLite's `Gather*` implementations (CVE-2021-37687)
- Fixes an undefined behavior arising from null pointer dereference in TFLite (CVE-2021-37688)
- Fixes an undefined behavior arising from null pointer dereference in TFLite MLIR optimizations (CVE-2021-37689)
- Fixes an FPE in LSH in TFLite (CVE-2021-37691)
- Fixes a segfault on string tensors with mismatched dimensions, arising in Go code (CVE-2021-37692)
- Fixes a use after free and a potential segfault in shape inference functions (CVE-2021-37690)
- Updates `curl` to `7.77.0` to handle CVE-2021-22876, CVE-2021-22897, CVE-2021-22898, and CVE-2021-22901.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aadhitya A, Abhilash Mahendrakar, Abhishek Varma, Abin Shahab, Adam Hillier, Aditya Kane, AdityaKane2001, ag.ramesh, Amogh Joshi, Armen Poghosov, armkevincheng, Avrosh K, Ayan Moitra, azazhu, Banikumar Maiti, Bas Aarts, bhack, Bhanu Prakash Bandaru Venkata, Billy Cao, Bohumir Zamecnik, Bradley Reece, CyanXu, Daniel Situnayake, David Pal, Ddavis-2015, DEKHTIARJonathan, Deven Desai, Duncan Riach, Edward, Eli Osherovich, Eugene Kuznetsov, europeanplaice, evelynmitchell, Evgeniy Polyakov, Felix Vollmer, Florentin Hennecker, François Chollet, Frederic Bastien, Fredrik Knutsson, Gabriele Macchi, Gaurav Shukla, Gauri1 Deshpande, geetachavan1, Georgiy Manuilov, H, Hengwen Tong, Henri Woodcock, Hiran Sarkar, Ilya Arzhannikov, Janghoo Lee, jdematos, Jens Meder, Jerry Shih, jgehw, Jim Fisher, Jingbei Li, Jiri Podivin, Joachim Gehweiler, Johannes Lade, Jonas I. Liechti, Jonas Liechti, Jonas Ohlsson, Jonathan Dekhtiar, Julian Gross, Kaixi Hou, Kevin Cheng, Koan-Sin Tan, Kulin Seth, linzewen, Liubov Batanina, luisleee, Lukas Geiger, Mahmoud Abuzaina, mathgaming, Matt Conley, Max H. Gerlach, mdfaijul, Mh Kwon, Michael Martis, Michal Szutenberg, Måns Nilsson, nammbash, Neil Girdhar, Nicholas Vadivelu, Nick Kreeger, Nirjas Jakilim, okyanusoz, Patrice Vignola, Patrik Laurell, Pedro Marques, Philipp Hack, Phillip Cloud, Piergiacomo De Marchi, Prashant Kumar, puneeshkhanna, pvarouktsis, QQ喵, Rajeshwar Reddy T, Rama Ketineni, Reza Rahimi, Robert Kalmar, rsun, Ryan Kuester, Saduf2019, Sean Morgan, Sean Moriarity, Shaochen Shi, Sheng, Yang, Shu Wang, Shuai Zhang, Soojeong, Stanley-Nod, Steven I Reeves, stevenireeves, Suraj Sudhir, Sven Mayer, Tamas Bela Feher, tashuang.zk, tcervi, Teng Lu, Thales Elero Cervi, Thibaut Goetghebuer-Planchon, Thomas Walther, Till Brychcy, Trent Lo, Uday Bondhugula, vishakha.agrawal, Vishnuvardhan Janapati, wamuir, Wenwen Ouyang, wenwu, Williard Joshua Jose, xiaohong1031, Xiaoming (Jason) Cui, Xinan Jiang, Yasir Modak, Yi Li, Yong Tang, zilinzhu, 박상준, 이장
Release 2.5.1
This release introduces several vulnerability fixes:
- Fixes a heap out of bounds access in sparse reduction operations (CVE-2021-37635)
- Fixes a floating point exception in `SparseDenseCwiseDiv` (CVE-2021-37636)
- Fixes a null pointer dereference in `CompressElement` (CVE-2021-37637)
- Fixes a null pointer dereference in `RaggedTensorToTensor` (CVE-2021-37638)
- Fixes a null pointer dereference and a heap OOB read arising from operations restoring tensors (CVE-2021-37639)
- Fixes an integer division by 0 in sparse reshaping (CVE-2021-37640)
- Fixes a division by 0 in `ResourceScatterDiv` (CVE-2021-37642)
- Fixes a heap OOB in `RaggedGather` (CVE-2021-37641)
- Fixes a `std::abort` raised from `TensorListReserve` (CVE-2021-37644)
- Fixes a null pointer dereference in `MatrixDiagPartOp` (CVE-2021-37643)
- Fixes an integer overflow due to conversion to unsigned (CVE-2021-37645)
- Fixes a bad allocation error in `StringNGrams` caused by integer conversion (CVE-2021-37646)
- Fixes a null pointer dereference in `SparseTensorSliceDataset` (CVE-2021-37647)
- Fixes an incorrect validation of `SaveV2` inputs (CVE-2021-37648)
- Fixes a null pointer dereference in `UncompressElement` (CVE-2021-37649)
- Fixes a segfault and a heap buffer overflow in `{Experimental,}DatasetToTFRecord` (CVE-2021-37650)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-37651)
- Fixes a use after free in boosted trees creation (CVE-2021-37652)
- Fixes a division by 0 in `ResourceGather` (CVE-2021-37653)
- Fixes a heap OOB and a `CHECK` fail in `ResourceGather` (CVE-2021-37654)
- Fixes a heap OOB in `ResourceScatterUpdate` (CVE-2021-37655)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToSparse` (CVE-2021-37656)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixDiagV*` ops (CVE-2021-37657)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixSetDiagV*` ops (CVE-2021-37658)
- Fixes an undefined behavior arising from reference binding to nullptr and heap OOB in binary cwise ops (CVE-2021-37659)
- Fixes a division by 0 in inplace operations (CVE-2021-37660)
- Fixes a crash caused by integer conversion to unsigned (CVE-2021-37661)
- Fixes an undefined behavior arising from reference binding to nullptr in boosted trees (CVE-2021-37662)
- Fixes a heap OOB in boosted trees (CVE-2021-37664)
- Fixes vulnerabilities arising from incomplete validation in `QuantizeV2` (CVE-2021-37663)
- Fixes vulnerabilities arising from incomplete validation in MKL requantization (CVE-2021-37665)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToVariant` (CVE-2021-37666)
- Fixes an undefined behavior arising from reference binding to nullptr in unicode encoding (CVE-2021-37667)
- Fixes an FPE in `tf.raw_ops.UnravelIndex` (CVE-2021-37668)
- Fixes a crash in NMS ops caused by integer conversion to unsigned (CVE-2021-37669)
- Fixes a heap OOB in `UpperBound` and `LowerBound` (CVE-2021-37670)
- Fixes an undefined behavior arising from reference binding to nullptr in map operations (CVE-2021-37671)
- Fixes a heap OOB in `SdcaOptimizerV2` (CVE-2021-37672)
- Fixes a `CHECK`-fail in `MapStage` (CVE-2021-37673)
- Fixes a vulnerability arising from incomplete validation in `MaxPoolGrad` (CVE-2021-37674)
- Fixes an undefined behavior arising from reference binding to nullptr in shape inference (CVE-2021-37676)
- Fixes a division by 0 in most convolution operators (CVE-2021-37675)
- Fixes vulnerabilities arising from missing validation in shape inference for `Dequantize` (CVE-2021-37677)
- Fixes an arbitrary code execution due to YAML deserialization (CVE-2021-37678)
- Fixes a heap OOB in nested `tf.map_fn` with `RaggedTensor`s (CVE-2021-37679)
- Fixes a division by zero in TFLite (CVE-2021-37680)
- Fixes an NPE in TFLite (CVE-2021-37681)
- Fixes a vulnerability arising from use of an uninitialized value in TFLite (CVE-2021-37682)
- Fixes an FPE in TFLite division operations (CVE-2021-37683)
- Fixes an FPE in TFLite pooling operations (CVE-2021-37684)
- Fixes an infinite loop in TFLite (CVE-2021-37686)
- Fixes a heap OOB in TFLite (CVE-2021-37685)
- Fixes a heap OOB in TFLite's `Gather*` implementations (CVE-2021-37687)
- Fixes an undefined behavior arising from null pointer dereference in TFLite (CVE-2021-37688)
- Fixes an undefined behavior arising from null pointer dereference in TFLite MLIR optimizations (CVE-2021-37689)
- Fixes an FPE in LSH in TFLite (CVE-2021-37691)
- Fixes a segfault on string tensors with mismatched dimensions, arising in Go code (CVE-2021-37692)
- Fixes a use after free and a potential segfault in shape inference functions (CVE-2021-37690)
- Updates `curl` to `7.77.0` to handle CVE-2021-22876, CVE-2021-22897, CVE-2021-22898, and CVE-2021-22901.
Release 2.6.0
Breaking Changes
- `tf.train.experimental.enable_mixed_precision_graph_rewrite` is removed, as the API only works in graph mode and is not customizable. The function is still accessible under `tf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite`, but it is recommended to use the Keras mixed precision API instead.
- `tf.lite`:
  - Remove `experimental.nn.dynamic_rnn`, `experimental.nn.TfLiteRNNCell`, and `experimental.nn.TfLiteLSTMCell` since they are no longer supported. It is recommended to use the Keras LSTM layers instead.
- Keras has been split into a separate PIP package (`keras`), and its code has been moved to the GitHub repository keras-team/keras. The API endpoints for `tf.keras` stay unchanged, but are now backed by the `keras` PIP package. The existing code in tensorflow/python/keras is a stale copy and will be removed in a future release (2.7). Please remove any imports of `tensorflow.python.keras` and replace them with the public `tf.keras` API instead.
Known Caveats
- TF Core:
  - A longstanding bug in `tf.while_loop`, which caused it to execute sequentially even when `parallel_iterations>1`, has now been fixed. However, the increased parallelism may result in increased memory use. Users who experience unwanted regressions should reset their `while_loop`'s `parallel_iterations` value to 1, which is consistent with prior behavior.
Major Features and Improvements
- `tf.keras`:
  - Keras has been split into a separate PIP package (`keras`), and its code has been moved to the GitHub repository keras-team/keras. The API endpoints for `tf.keras` stay unchanged, but are now backed by the `keras` PIP package. All Keras-related PRs and issues should now be directed to the GitHub repository keras-team/keras.
  - `tf.keras.utils.experimental.DatasetCreator` now takes an optional `tf.distribute.InputOptions` for specific options when used with distribution.
  - `tf.keras.experimental.SidecarEvaluator` is now available for a program intended to be run on an evaluator task, which is commonly used to supplement a training cluster running with `tf.distribute.experimental.ParameterServerStrategy` (see https://www.tensorflow.org/tutorials/distribute/parameter_server_training). It can also be used with single-worker training or other strategies. See the docstring for more info.
  - Preprocessing layers moved from experimental to core.
    - Import paths moved from `tf.keras.layers.preprocessing.experimental` to `tf.keras.layers`.
  - Updates to the preprocessing layers API for consistency and clarity:
    - The default `mask_token` for `StringLookup` and `IntegerLookup` changed to `None`. This matches the default masking behavior of the `Hashing` and `Embedding` layers. To keep existing behavior, pass `mask_token=""` during layer creation.
    - Renamed the `"binary"` output mode to `"multi_hot"` for `CategoryEncoding`, `StringLookup`, `IntegerLookup`, and `TextVectorization`. Multi-hot encoding will no longer automatically uprank rank 1 inputs, so these layers can now multi-hot encode unbatched multi-dimensional samples.
    - Added a new output mode `"one_hot"` for `CategoryEncoding`, `StringLookup`, and `IntegerLookup`, which will encode each element in an input batch individually, and automatically append a new output dimension if necessary. Use this mode on rank 1 inputs for the old `"binary"` behavior of one-hot encoding a batch of scalars.
    - `Normalization` will no longer automatically uprank rank 1 inputs, allowing normalization of unbatched multi-dimensional samples.
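A minimal sketch of the renamed and new output modes, using `CategoryEncoding` (the token ids and `num_tokens=4` below are illustrative, not from the release notes):

```python
import tensorflow as tf

# "multi_hot" (formerly "binary"): one row per sample, with a 1 for each
# token id present in that sample. Batched rank-2 input -> shape (2, 4).
multi_hot = tf.keras.layers.CategoryEncoding(num_tokens=4, output_mode="multi_hot")
encoded = multi_hot([[0, 1], [2, 3]])
print(encoded.numpy())  # [[1. 1. 0. 0.] [0. 0. 1. 1.]]

# "one_hot": each element of a rank-1 batch is encoded individually and a
# new output dimension is appended -> shape (3, 4). This recovers the old
# "binary" behavior of one-hot encoding a batch of scalars.
one_hot = tf.keras.layers.CategoryEncoding(num_tokens=4, output_mode="one_hot")
encoded_one_hot = one_hot([0, 1, 3])
print(encoded_one_hot.shape)  # (3, 4)
```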
- `tf.lite`:
  - The recommended Android NDK version for building TensorFlow Lite has been changed from r18b to r19c.
  - Supports int64 for mul.
  - Supports native variable builtin ops - ReadVariable, AssignVariable.
  - Converter:
    - Experimental support for variables in TFLite. To enable through conversion, users need to set `experimental_enable_resource_variables` on `tf.lite.TFLiteConverter` to `True`.
      Note: mutable variables are only available using `from_saved_model` in this release; support for other methods is coming soon.
    - The old converter (TOCO) will be removed in the next release. It has been deprecated for a few releases already.
- `tf.saved_model`:
  - SavedModels can now save custom gradients. Use the option `tf.saved_model.SaveOptions(experimental_custom_gradients=True)` to enable this feature. The documentation in Advanced autodiff has been updated.
  - Object metadata has now been deprecated and is no longer saved to the SavedModel.
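A sketch of saving a custom gradient into a SavedModel; the `clip_gradient` function and the save path are illustrative assumptions, not part of the release notes:

```python
import tensorflow as tf

@tf.custom_gradient
def clip_gradient(x):
    def grad(upstream):
        # The custom part we want preserved in the SavedModel.
        return tf.clip_by_norm(upstream, 0.5)
    return tf.identity(x), grad

class Model(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def __call__(self, x):
        return clip_gradient(x) ** 2

# Without this option, only the forward computation is saved.
options = tf.saved_model.SaveOptions(experimental_custom_gradients=True)
tf.saved_model.save(Model(), "/tmp/custom_grad_model", options=options)
```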
- TF Core:
  - Added `tf.config.experimental.reset_memory_stats` to reset the tracked peak memory returned by `tf.config.experimental.get_memory_info`.
- `tf.data`:
  - Added a `target_workers` parameter to `data_service_ops.from_dataset_id` and `data_service_ops.distribute`. Users can specify `"AUTO"`, `"ANY"`, or `"LOCAL"` (case insensitive). With `"AUTO"`, the tf.data service runtime decides which workers to read from. With `"ANY"`, TF workers read from any tf.data service workers. With `"LOCAL"`, TF workers only read from local in-process tf.data service workers. `"AUTO"` works well for most cases, while users can specify other targets; for example, `"LOCAL"` helps avoid RPCs and data copies when every TF worker is colocated with a tf.data service worker. Currently, `"AUTO"` reads from any tf.data service workers to preserve existing behavior. The default value is `"AUTO"`.
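The `target_workers` option can be exercised with an in-process tf.data service cluster; this sketch (the dispatcher/worker setup follows the usual tf.data service testing pattern) uses `"LOCAL"`:

```python
import tensorflow as tf

# Start a single-process tf.data service cluster: one dispatcher plus one
# worker that registers with it.
dispatcher = tf.data.experimental.service.DispatchServer()
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher.target.split("://")[1]))

dataset = tf.data.Dataset.range(5)
dataset = dataset.apply(
    tf.data.experimental.service.distribute(
        processing_mode="parallel_epochs",
        service=dispatcher.target,
        target_workers="LOCAL"))  # read only from local in-process workers

result = sorted(int(x) for x in dataset)
print(result)  # [0, 1, 2, 3, 4]
```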
Bug Fixes and Other Changes
- TF Core:
  - Added `tf.lookup.experimental.MutableHashTable`, which provides a generic mutable hash table implementation.
    - Compared to `tf.lookup.experimental.DenseHashTable`, this offers lower overall memory usage and a cleaner API. It does not require specifying a `delete_key` and `empty_key` that cannot be inserted into the table.
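A small sketch of the `MutableHashTable` API (the keys and values are illustrative):

```python
import tensorflow as tf

# A generic mutable hash table: unlike DenseHashTable, no reserved
# empty_key/delete_key values are needed.
table = tf.lookup.experimental.MutableHashTable(
    key_dtype=tf.string, value_dtype=tf.int64, default_value=-1)

table.insert(tf.constant(["alpha", "beta"]),
             tf.constant([1, 2], dtype=tf.int64))
values = table.lookup(tf.constant(["alpha", "gamma"]))
print(values.numpy())  # [ 1 -1]  (missing keys map to default_value)

table.remove(tf.constant(["alpha"]))
after_remove = table.lookup(tf.constant(["alpha"]))
print(after_remove.numpy())  # [-1]
```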
  - Added support for specifying the number of subdivisions in an all-reduce host collective. This parallelizes work on the CPU and speeds up collective performance. Default behavior is unchanged.
  - Added an option `perturb_singular` to `tf.linalg.tridiagonal_solve` that allows solving linear systems with a numerically singular tridiagonal matrix, e.g. for use in inverse iteration.
  - Added `tf.linalg.eigh_tridiagonal`, which computes the eigenvalues of a Hermitian tridiagonal matrix.
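For example, the eigenvalues of the symmetric tridiagonal matrix with diagonal `[2, 2, 2]` and off-diagonal `[-1, -1]` are 2 - sqrt(2), 2, and 2 + sqrt(2):

```python
import numpy as np
import tensorflow as tf

# Pass the diagonal and off-diagonal instead of the full dense matrix.
diag = tf.constant([2.0, 2.0, 2.0], dtype=tf.float64)
offdiag = tf.constant([-1.0, -1.0], dtype=tf.float64)
eigvals = tf.linalg.eigh_tridiagonal(diag, offdiag)
print(eigvals.numpy())  # approximately [0.5858, 2.0, 3.4142]
```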
  - `tf.constant` now places its output on the current default device.
  - SavedModel:
    - Added `tf.saved_model.experimental.TrackableResource`, which allows the creation of custom wrapper objects for resource tensors.
    - Added a SavedModel load option to allow restoring partial checkpoints into the SavedModel. See [`tf.saved_model.LoadOptions`](https://www.tensorflow.org/api_docs/python/tf/saved_model/LoadOptions) for details.
  - Added a new op `SparseSegmentSumGrad` to match the other sparse segment gradient ops and avoid an extra gather operation that was in the previous gradient implementation.
  - Added a new session config setting `internal_fragmentation_fraction`, which controls when the BFC allocator needs to split an oversized chunk to satisfy an allocation request.
  - Added `tf.get_current_name_scope()`, which returns the current full name scope string that will be prepended to op names.
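A quick example of `tf.get_current_name_scope()` with nested scopes:

```python
import tensorflow as tf

with tf.name_scope("outer"):
    with tf.name_scope("inner"):
        # The full scope string that would be prepended to op names here.
        scope = tf.get_current_name_scope()
print(scope)  # outer/inner
```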
- `tf.data`:
  - Promoting the `tf.data.experimental.bucket_by_sequence_length` API to `tf.data.Dataset.bucket_by_sequence_length` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.get_single_element` API to `tf.data.Dataset.get_single_element` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.group_by_window` API to `tf.data.Dataset.group_by_window` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.RandomDataset` API to `tf.data.Dataset.random` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.scan` API to `tf.data.Dataset.scan` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.snapshot` API to `tf.data.Dataset.snapshot` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.take_while` API to `tf.data.Dataset.take_while` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.ThreadingOptions` API to `tf.data.ThreadingOptions` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.unique` API to `tf.data.Dataset.unique` and deprecating the experimental endpoint.
  - Added a `stop_on_empty_dataset` parameter to `sample_from_datasets` and `choose_from_datasets`. Setting `stop_on_empty_dataset=True` will stop sampling if it encounters an empty dataset. This preserves the sampling ratio throughout training. The prior behavior was to continue sampling, skipping over exhausted datasets, until all datasets are exhausted. By default, the original behavior (`stop_on_empty_dataset=False`) is preserved.
  - Removed previously deprecated tf.data statistics related APIs:
    - `tf.data.Options.experimental_stats`
    - `tf.data.experimental.StatsAggregator`
    - `tf.data.experimental.StatsOptions.*`
    - `tf.data.experimental.bytes_produced_stats`
    - `tf.data.experimental.latency_stats`
  - Removed the following experimental tf.data optimization APIs:
    - `tf.data.experimental.MapVectorizationOptions.*`
    - `tf.data.experimental.OptimizationOptions.filter_with_random_uniform_fusion`
    - `tf.data.experimental.OptimizationOptions.hoist_random_uniform`
    - `tf.data.experimental.OptimizationOptions.map_vectorization`
    - `tf.data.experimental.OptimizationOptions.reorder_data_discarding_ops`
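The promoted endpoints are plain `Dataset` methods; a sketch using `group_by_window` (grouping by parity is just an illustration):

```python
import tensorflow as tf

# Group elements by key (here: parity) and emit one batch per full window.
ds = tf.data.Dataset.range(10).group_by_window(
    key_func=lambda x: x % 2,
    reduce_func=lambda key, window: window.batch(5),
    window_size=5)
batches = [batch.numpy().tolist() for batch in ds]
print(batches)  # [[0, 2, 4, 6, 8], [1, 3, 5, 7, 9]]
```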
- `tf.keras`:
  - Fix usage of `__getitem__` slicing in Keras Functional APIs when the inputs are `RaggedTensor` objects.
  - Add a `keepdims` argument to all `GlobalPooling` layers.
  - Add an `include_preprocessing` argument to the `MobileNetV3` architectures to control the inclusion of a `Rescaling` layer in the model.
  - Add an optional argument (`force`) to the `make_(train|test|predict)_function` methods to skip the cached function and generate a new one. This is useful for regenerating the compiled training function in a single call when any `.trainable` attribute of any of the model's layers has changed.
  - Models now have a `save_spec` property which contains the `TensorSpec` specs for calling the model. This spec is automatically saved when the model is called for the first time.
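The new `keepdims` argument preserves the pooled spatial dimensions as size 1; a minimal sketch with `GlobalAveragePooling2D` (the input shape is illustrative):

```python
import tensorflow as tf

x = tf.random.normal([8, 32, 32, 3])  # (batch, height, width, channels)

# Default behavior collapses the spatial dimensions entirely.
flat = tf.keras.layers.GlobalAveragePooling2D()(x)
print(flat.shape)  # (8, 3)

# keepdims=True retains them as size-1 axes, convenient for broadcasting.
kept = tf.keras.layers.GlobalAveragePooling2D(keepdims=True)(x)
print(kept.shape)  # (8, 1, 1, 3)
```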
- `tf.linalg`:
  - Add `CompositeTensor` as a base class to `LinearOperator`.
- `tf.lite`:
  - Fix mean op reference quantization rounding issue.
  - Added a `framework_stable` BUILD target, which links in only the non-experimental TF Lite APIs.
  - Remove deprecated Java `Interpreter` methods:
    - `modifyGraphWithDelegate` - use `Interpreter.Options.addDelegate`
    - `setNumThreads` - use `Interpreter.Options.setNumThreads`
  - Add Conv3DTranspose as a builtin op.
- `tf.summary`:
  - Fix `tf.summary.should_record_summaries()` so it correctly reflects when summaries will be written, even when `tf.summary.record_if()` is not in effect, by returning a True tensor if a default writer is present.
- Grappler:
- Disable default Grappler optimization timeout to make the optimization pipeline deterministic. This may lead to increased model loading time, because time spent in graph optimizations is now unbounded (was 20 minutes).
- Deterministic Op Functionality (enabled by setting `TF_DETERMINISTIC_OPS` to `"true"` or `"1"`):
  - Add a deterministic GPU implementation of `tf.nn.softmax_cross_entropy_with_logits`. See PR 49178.
  - Add a deterministic CPU implementation of `tf.image.crop_and_resize`. See PR 48905.
  - Add determinism-unimplemented exception-throwing to the following ops. When op-determinism is expected, an attempt to use the specified paths through the following ops on a GPU will cause a `tf.errors.UnimplementedError` (with an understandable message) to be thrown.
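Enabling the deterministic implementations is an environment-variable switch; a minimal sketch (note the variable must be set before TensorFlow executes any ops):

```python
import os

# Must be set before TensorFlow runs any ops for it to take effect.
os.environ["TF_DETERMINISTIC_OPS"] = "1"

import tensorflow as tf  # noqa: E402

# Ops with deterministic implementations (e.g. the GPU kernel of
# tf.nn.softmax_cross_entropy_with_logits) now produce bit-identical
# results across runs; ops without them raise tf.errors.UnimplementedError
# when run on a GPU.
labels = tf.constant([[0.0, 1.0]])
logits = tf.constant([[0.3, 0.7]])
loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
print(loss.shape)  # (1,)
```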
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aadhitya A, Abhilash Mahendrakar, Abhishek Varma, Abin Shahab, Adam Hillier, Aditya Kane, AdityaKane2001, ag.ramesh, Amogh Joshi, Armen Poghosov, armkevincheng, Avrosh K, Ayan Moitra, azazhu, Banikumar Maiti, Bas Aarts, bhack, Bhanu Prakash Bandaru Venkata, Billy Cao, Bohumir Zamecnik, Bradley Reece, CyanXu, Daniel Situnayake, David Pal, Ddavis-2015, DEKHTIARJonathan, Deven Desai, Duncan Riach, Edward, Eli Osherovich, Eugene Kuznetsov, europeanplaice, evelynmitchell, Evgeniy Polyakov, Felix Vollmer, Florentin Hennecker, François Chollet, Frederic Bastien, Fredrik Knutsson, Gabriele Macchi, Gaurav Shukla, Gauri1 Deshpande, geetachavan1, Georgiy Manuilov, H, Hengwen Tong, Henri Woodcock, Hiran Sarkar, Ilya Arzhannikov, Janghoo Lee, jdematos, Jens Meder, Jerry Shih, jgehw, Jim Fisher, Jingbei Li, Jiri Podivin, Joachim Gehweiler, Johannes Lade, Jonas I. Liechti, Jonas Liechti, Jonas Ohlsson, Jonathan Dekhtiar, Julian Gross, Kaixi Hou, Kevin Cheng, Koan-Sin Tan, Kulin Seth, linzewen, Liubov Batanina, luisleee, Lukas Geiger, Mahmoud Abuzaina, mathgaming, Matt Conley, Max H. Gerlach, mdfaijul, Mh Kwon, Michael Martis, Michal Szutenberg, Måns Nilsson, nammbash, Neil Girdhar, Nicholas Vadivelu, Nick Kreeger, Nirjas Jakilim, okyanusoz, Patrice Vignola, Patrik Laurell, Pedro Marques, Philipp Hack, Phillip Cloud, Piergiacomo De Marchi, Prashant Kumar, puneeshkhanna, pvarouktsis, QQ喵, Rajeshwar Reddy T, Rama Ketineni, Reza Rahimi, Robert Kalmar, rsun, Ryan Kuester, Saduf2019, Sean Morgan, Sean Moriarity, Shaochen Shi, Sheng, Yang, Shu Wang, Shuai Zhang, Soojeong, Stanley-Nod, Steven I Reeves, stevenireeves, Suraj Sudhir, Sven Mayer, Tamas Bela Feher, tashuang.zk, tcervi, Teng Lu, Thales Elero Cervi, Thibaut Goetghebuer-Planchon, Thomas Walther, Till Brychcy, Trent Lo, Uday Bondhugula, vishakha.agrawal, Vishnuvardhan Janapati, wamuir, Wenwen Ouyang, wenwu, Williard Joshua Jose, xiaohong1031, Xiaoming (Jason) Cui, Xinan Jiang, Yasir Modak, Yi Li, Yong Tang, zilinzhu, 박상준, 이장
Assets
2
Release 2.6.0
Breaking Changes
-
tf.train.experimental.enable_mixed_precision_graph_rewrite
is removed, as the API only works in graph mode and is not customizable. The function is still accessible undertf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite
, but it is recommended to use the Keras mixed precision API instead. -
tf.lite
:- Remove
experimental.nn.dynamic_rnn
,experimental.nn.TfLiteRNNCell
andexperimental.nn.TfLiteLSTMCell
since they're no longersupported. It's recommended to just use keras lstm instead.
- Remove
-
Keras been split into a separate PIP package (
keras
), and its code has been moved to the GitHub repositorykeras-team/keras. The API endpoints fortf.keras
stay unchanged, but are now backed by thekeras
PIP package. The existing code in tensorflow/python/keras is a staled copy and will be removed in future release (2.7). Please remove any imports totensorflow.python.keras
and replace them with public tf.keras API instead.
Known Caveats
- TF Core:
- A longstanding bug in
tf.while_loop
, which caused it to execute sequentially, even whenparallel_iterations>1
, has now been fixed. However, the increased parallelism may result in increased memory use. Users who experience unwanted regressions should reset theirwhile_loop
'sparallel_iterations
value to 1, which is consistent with prior behavior.
- A longstanding bug in
Major Features and Improvements
-
tf.keras
:- Keras has been split into a separate PIP package (
keras
), and its code has been moved to the GitHub repository keras-team/keras.
The API endpoints fortf.keras
stay unchanged, but are now backed by thekeras
PIP package. All Keras-related PRs and issues should now be directed to the GitHub repository keras-team/keras. tf.keras.utils.experimental.DatasetCreator
now takes an optionaltf.distribute.InputOptions
for specific options when used with distribution.tf.keras.experimental.SidecarEvaluator
is now available for a program intended to be run on an evaluator task, which is commonly used to supplement a training cluster running withtf.distribute.experimental.ParameterServerStrategy
(see `https://www.tensorflow.org/tutorials/distribute/parameter_server_training). It can also be used with single-worker training or other strategies. See docstring for more info.- Preprocessing layers moved from experimental to core.
- Import paths moved from
tf.keras.layers.preprocessing.experimental
totf.keras.layers
.
- Import paths moved from
- Updates to Preprocessing layers API for consistency and clarity:
StringLookup
andIntegerLookup
default formask_token
changed toNone
. This matches the default masking behavior ofHashing
andEmbedding
layers. To keep existing behavior, passmask_token=""
during layer creation.- Renamed
"binary"
output mode to"multi_hot"
forCategoryEncoding
,StringLookup
,IntegerLookup
, andTextVectorization
. Multi-hot encoding will no longer automatically uprank rank 1 inputs, so these layers can now multi-hot encode unbatched multi-dimensional samples. - Added a new output mode
"one_hot"
forCategoryEncoding
,StringLookup
,IntegerLookup
, which will encode each element in an input batch individually, and automatically append a new output dimension if necessary. Use this mode on rank 1 inputs for the old"binary"
behavior of one-hot encoding a batch of scalars. Normalization
will no longer automatically uprank rank 1 inputs, allowing normalization of unbatched multi-dimensional samples.
- Keras has been split into a separate PIP package (
-
tf.lite
:- The recommended Android NDK version for building TensorFlow Lite has been changed from r18b to r19c.
- Supports int64 for mul.
- Supports native variable builtin ops - ReadVariable, AssignVariable.
- Converter:
- Experimental support for variables in TFLite. To enable through conversion, users need to set
experimental_enable_resource_variables
on tf.lite.TFLiteConverter to True.
Note: mutable variables is only available usingfrom_saved_model
in this release, support for other methods is coming soon. - Old Converter (TOCO) is getting removed from next release. It's been deprecated for few releases already.
- Experimental support for variables in TFLite. To enable through conversion, users need to set
-
tf.saved_model
:- SavedModels can now save custom gradients. Use the option
tf.saved_model.SaveOption(experimental_custom_gradients=True)
to enable this feature. The documentation in Advanced autodiff has been updated. - Object metadata has now been deprecated and no longer saved to the SavedModel.
- SavedModels can now save custom gradients. Use the option
-
TF Core:
- Added
tf.config.experimental.reset_memory_stats
to reset the tracked peak memory returned bytf.config.experimental.get_memory_info
.
- Added
-
tf.data
:- Added
target_workers
param todata_service_ops.from_dataset_id
anddata_service_ops.distribute
. Users can specify"AUTO"
,"ANY"
, or"LOCAL"
(case insensitive). If"AUTO"
, tf.data service runtime decides which workers to read from. If"ANY"
, TF workers read from any tf.data service workers. If"LOCAL"
, TF workers will only read from local in-processs tf.data service workers."AUTO"
works well for most cases, while users can specify other targets. For example,"LOCAL"
would help avoid RPCs and data copy if every TF worker colocates with a tf.data service worker. Currently,"AUTO"
reads from any tf.data service workers to preserve existing behavior. The default value is"AUTO"
.
- Added
Bug Fixes and Other Changes
- TF Core:
- Added
tf.lookup.experimental.MutableHashTable
, which provides a generic mutable hash table implementation.- Compared to
tf.lookup.experimental.DenseHashTable
this offers lower overall memory usage, and a cleaner API. It does not require specifying adelete_key
andempty_key
that cannot be inserted into the table.
- Compared to
- Added support for specifying number of subdivisions in all reduce host collective. This parallelizes work on CPU and speeds up the collective performance. Default behavior is unchanged.
- Add an option
perturb_singular
totf.linalg.tridiagonal_solve
that allows solving linear systems with a numerically singular tridiagonal matrix, e.g. for use in inverse iteration. - Added
tf.linalg.eigh_tridiagonal
that computes the eigenvalues of a Hermitian tridiagonal matrix. tf.constant
now places its output on the current default device.- SavedModel
- Added
tf.saved_model.experimental.TrackableResource
, which allows the creation of custom wrapper objects for resource tensors. - Added a SavedModel load option to allow restoring partial checkpoints into the SavedModel. See [
tf.saved_model.LoadOptions
]
(https://www.tensorflow.org/api_docs/python/tf/saved_model/LoadOptions) for details.
- Added
- Added a new op
SparseSegmentSumGrad
to match the other sparse segment gradient ops and avoid an extra gather operation that was in the previous gradient implementation. - Added a new session config setting
internal_fragmentation_fraction
, which controls when the BFC Allocator needs to split an oversized chunk to satisfy an allocation request. - Added
tf.get_current_name_scope()
which returns the current full name scope string that will be prepended to op names.
- Added
tf.data
:- Promoting
tf.data.experimental.bucket_by_sequence_length
API totf.data.Dataset.bucket_by_sequence_length
and deprecating the experimental endpoint. - Promoting
tf.data.experimental.get_single_element
API totf.data.Dataset.get_single_element
and deprecating the experimental endpoint. - Promoting
tf.data.experimental.group_by_window
API totf.data.Dataset.group_by_window
and deprecating the experimental endpoint. - Promoting
tf.data.experimental.RandomDataset
API totf.data.Dataset.random
and deprecating the experimental endpoint. - Promoting
tf.data.experimental.scan
API totf.data.Dataset.scan
and deprecating the experimental endpoint. - Promoting
tf.data.experimental.snapshot
API totf.data.Dataset.shapshot
and deprecating the experimental endpoint. - Promoting
tf.data.experimental.take_while
API totf.data.Dataset.take_while
and deprecating the experimental endpoint. - Promoting
tf.data.experimental.ThreadingOptions
API totf.data.ThreadingOptions
and deprecating the experimental endpoint. - Promoting
tf.data.experimental.unique
API totf.data.Dataset.unique
and deprecating the experimental endpoint. - Added
stop_on_empty_dataset
parameter tosample_from_datasets
andchoose_from_datasets
. Settingstop_on_empty_dataset=True
will stop sampling if it encounters an empty dataset. This preserves the sampling ratio throughout training. The prior behavior was to continue sampling, skipping over exhausted datasets, until all datasets are exhausted. By default, the original behavior (stop_on_empty_dataset=False
) is preserved. - Removed previously deprecated tf.data statistics related APIs:
tf.data.Options.experimental_stats
tf.data.experimental.StatsAggregator
tf.data.experimental.StatsOptions.*
tf.data.experimental.bytes_produced_stats
tf.data.experimental.latency_stats
- Removed the following experimental tf.data optimization APIs:
tf.data.experimental.MapVectorizationOptions.*
tf.data.experimental.OptimizationOptions.filter_with_random_uniform_fusion
tf.data.experimental.OptimizationOptions.hoist_random_uniform
tf.data.experimental.OptimizationOptions.map_vectorization
*tf.data.experimental.OptimizationOptions.reorder_data_discarding_ops
- Promoting
tf.keras
:- Fix usage of
__getitem__
slicing in Keras Functional APIs when the inputs areRaggedTensor
objects. - Add
keepdims
argument to allGlobalPooling
layers. - Add
include_preprocessing
argument toMobileNetV3
architectures to control the inclusion ofRescaling
layer in the model. - Add optional argument (
force
) tomake_(train|test|predict)_funtion
methods to skip the cached function and generate a new one. This is useful to regenerate in a single call the compiled training function when any.trainable
attribute of any model's layer has changed. - Models now have a
save_spec
property which contains theTensorSpec
specs for calling the model. This spec is automatically saved when the model is called for the first time.
- Fix usage of
tf.linalg
:- Add
CompositeTensor
as a base class toLinearOperator
.
- Add
tf.lite
:- Fix mean op reference quantization rounding issue.
- Added
framework_stable
BUILD target, which links in only the non-experimental TF Lite APIs. - Remove deprecated Java
Interpreter
methods:modifyGraphWithDelegate
- UseInterpreter.Options.addDelegate
setNumThreads
- UseInterpreter.Options.setNumThreads
- Add Conv3DTranspose as a builtin op.
tf.summary
:- Fix
tf.summary.should_record_summaries()
so it correctly reflects when summaries will be written, even whentf.summary.record_if()
is not n effect, by returning True tensor if default writer is present.
- Fix
- Grappler:
- Disable default Grappler optimization timeout to make the optimization pipeline deterministic. This may lead to increased model loading time, because time spent in graph optimizations is now unbounded (was 20 minutes).
- Deterministic Op Functionality (enabled by setting
TF_DETERMINISTIC_OPS
to
"true"
or"1"
):- Add a deterministic GPU implementation of
tf.nn.softmax_cross_entropy_with_logits
. See PR
49178. - Add a deterministic CPU implementation of
tf.image.crop_and_resize
.
See PR 48905. - Add determinism-unimplemented exception-throwing to the following ops.
When op-determinism is expected, an attempt to use the
specified paths through the following ops on a GPU will cause
tf.errors.UnimplementedError
(with an understandable message) to be
thrown.
- Add a deterministic GPU implementation of
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aadhitya A, Abhilash Mahendrakar, Abhishek Varma, Abin Shahab, Adam Hillier, Aditya Kane, AdityaKane2001, ag.ramesh, Amogh Joshi, Armen Poghosov, armkevincheng, Avrosh K, Ayan Moitra, azazhu, Banikumar Maiti, Bas Aarts, bhack, Bhanu Prakash Bandaru Venkata, Billy Cao, Bohumir Zamecnik, Bradley Reece, CyanXu, Daniel Situnayake, David Pal, Ddavis-2015, DEKHTIARJonathan, Deven Desai, Duncan Riach, Edward, Eli Osherovich, Eugene Kuznetsov, europeanplaice, evelynmitchell, Evgeniy Polyakov, Felix Vollmer, Florentin Hennecker, François Chollet, Frederic Bastien, Fredrik Knutsson, Gabriele Macchi, Gaurav Shukla, Gauri1 Deshpande, geetachavan1, Georgiy Manuilov, H, Hengwen Tong, Henri Woodcock, Hiran Sarkar, Ilya Arzhannikov, Janghoo Lee, jdematos, Jens Meder, Jerry Shih, jgehw, Jim Fisher, Jingbei Li, Jiri Podivin, Joachim Gehweiler, Johannes Lade, Jonas I. Liechti, Jonas Liechti, Jonas Ohlsson, Jonathan Dekhtiar, Julian Gross, Kaixi Hou, Kevin Cheng, Koan-Sin Tan, Kulin Seth, linzewen, Liubov Batanina, luisleee, Lukas Geiger, Mahmoud Abuzaina, mathgaming, Matt Conley, Max H. Gerlach, mdfaijul, Mh Kwon, Michael Martis, Michal Szutenberg, Måns Nilsson, nammbash, Neil Girdhar, Nicholas Vadivelu, Nick Kreeger, Nirjas Jakilim, okyanusoz, Patrice Vignola, Patrik Laurell, Pedro Marques, Philipp Hack, Phillip Cloud, Piergiacomo De Marchi, Prashant Kumar, puneeshkhanna, pvarouktsis, QQ喵, Rajeshwar Reddy T, Rama Ketineni, Reza Rahimi, Robert Kalmar, rsun, Ryan Kuester, Saduf2019, Sean Morgan, Sean Moriarity, Shaochen Shi, Sheng, Yang, Shu Wang, Shuai Zhang, Soojeong, Stanley-Nod, Steven I Reeves, stevenireeves, Suraj Sudhir, Sven Mayer, Tamas Bela Feher, tashuang.zk, tcervi, Teng Lu, Thales Elero Cervi, Thibaut Goetghebuer-Planchon, Thomas Walther, Till Brychcy, Trent Lo, Uday Bondhugula, vishakha.agrawal, Vishnuvardhan Janapati, wamuir, Wenwen Ouyang, wenwu, Williard Joshua Jose, xiaohong1031, Xiaoming (Jason) Cui, Xinan Jiang, Yasir Modak, Yi Li, Yong Tang, zilinzhu, 박상준, 이장
Assets
2
Release 2.6.0
Breaking Changes
-
tf.train.experimental.enable_mixed_precision_graph_rewrite
is removed, as the API only works in graph mode and is not customizable. The function is still accessible undertf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite
, but it is recommended to use the Keras mixed precision API instead. -
tf.lite
:- Remove
experimental.nn.dynamic_rnn
,experimental.nn.TfLiteRNNCell
andexperimental.nn.TfLiteLSTMCell
since they're no longersupported. It's recommended to just use keras lstm instead.
- Remove
Known Caveats
- TF Core:
- A longstanding bug in
tf.while_loop
, which caused it to execute sequentially, even whenparallel_iterations>1
, has now been fixed. However, the increased parallelism may result in increased memory use. Users who experience unwanted regressions should reset theirwhile_loop
'sparallel_iterations
value to 1, which is consistent with prior behavior.
- A longstanding bug in
Major Features and Improvements
-
tf.keras
:- Keras has been split into a separate PIP package (
keras
), and its code has been moved to the GitHub repository keras-team/keras.
The API endpoints fortf.keras
stay unchanged, but are now backed by thekeras
PIP package. All Keras-related PRs and issues should now be directed to the GitHub repository keras-team/keras. tf.keras.utils.experimental.DatasetCreator
now takes an optionaltf.distribute.InputOptions
for specific options when used with distribution.tf.keras.experimental.SidecarEvaluator
is now available for a program intended to be run on an evaluator task, which is commonly used to supplement a training cluster running withtf.distribute.experimental.ParameterServerStrategy
(see `https://www.tensorflow.org/tutorials/distribute/parameter_server_training). It can also be used with single-worker training or other strategies. See docstring for more info.- Preprocessing layers moved from experimental to core.
- Import paths moved from
tf.keras.layers.preprocessing.experimental
totf.keras.layers
.
- Import paths moved from
- Updates to Preprocessing layers API for consistency and clarity:
StringLookup
andIntegerLookup
default formask_token
changed toNone
. This matches the default masking behavior ofHashing
andEmbedding
layers. To keep existing behavior, passmask_token=""
during layer creation.- Renamed
"binary"
output mode to"multi_hot"
forCategoryEncoding
,StringLookup
,IntegerLookup
, andTextVectorization
. Multi-hot encoding will no longer automatically uprank rank 1 inputs, so these layers can now multi-hot encode unbatched multi-dimensional samples. - Added a new output mode
"one_hot"
forCategoryEncoding
,StringLookup
,IntegerLookup
, which will encode each element in an input batch individually, and automatically append a new output dimension if necessary. Use this mode on rank 1 inputs for the old"binary"
behavior of one-hot encoding a batch of scalars. Normalization
will no longer automatically uprank rank 1 inputs, allowing normalization of unbatched multi-dimensional samples.
- Keras has been split into a separate PIP package (
- `tf.lite`:
  - The recommended Android NDK version for building TensorFlow Lite has been changed from r18b to r19c.
  - Supports int64 for mul.
  - Supports native variable builtin ops - ReadVariable, AssignVariable.
  - Converter:
    - Experimental support for variables in TFLite. To enable through conversion, users need to set `experimental_enable_resource_variables` on `tf.lite.TFLiteConverter` to True. Note: mutable variables are only available using `from_saved_model` in this release; support for other methods is coming soon.
    - The old converter (TOCO) is being removed in the next release. It has been deprecated for a few releases already.
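A hedged sketch of enabling the experimental resource-variable support during conversion (assumes TF >= 2.6; the toy `Dense` model is illustrative, and only the `from_saved_model` path supports variables in this release):

```python
import tempfile
import tensorflow as tf

# Save a small model in SavedModel format, then convert with the
# experimental resource-variable flag enabled.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
saved_dir = tempfile.mkdtemp()
model.save(saved_dir)

converter = tf.lite.TFLiteConverter.from_saved_model(saved_dir)
converter.experimental_enable_resource_variables = True
tflite_bytes = converter.convert()  # serialized FlatBuffer model
```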
- `tf.saved_model`:
  - SavedModels can now save custom gradients. Use the option `tf.saved_model.SaveOptions(experimental_custom_gradients=True)` to enable this feature. The documentation in Advanced autodiff has been updated.
  - Object metadata has now been deprecated and is no longer saved to the SavedModel.
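Saving a custom gradient can be sketched like this (assumes TF >= 2.6; the straight-through gradient and the `Clipper` module are illustrative, not from the release notes):

```python
import tempfile
import tensorflow as tf

# A function with a custom (straight-through) gradient.
@tf.custom_gradient
def clip_pass_through(x):
    def grad(upstream):
        return upstream  # pass gradients through the clip unchanged
    return tf.clip_by_value(x, 0.0, 1.0), grad

class Clipper(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return clip_pass_through(x)

# Opt in to saving the custom gradient via SaveOptions.
export_dir = tempfile.mkdtemp()
tf.saved_model.save(
    Clipper(), export_dir,
    options=tf.saved_model.SaveOptions(experimental_custom_gradients=True))
```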
- TF Core:
  - Added `tf.config.experimental.reset_memory_stats` to reset the tracked peak memory returned by `tf.config.experimental.get_memory_info`.
- `tf.data`:
  - Added `target_workers` param to `data_service_ops.from_dataset_id` and `data_service_ops.distribute`. Users can specify `"AUTO"`, `"ANY"`, or `"LOCAL"` (case insensitive). If `"AUTO"`, the tf.data service runtime decides which workers to read from. If `"ANY"`, TF workers read from any tf.data service workers. If `"LOCAL"`, TF workers will only read from local in-process tf.data service workers. `"AUTO"` works well for most cases, while users can specify other targets. For example, `"LOCAL"` helps avoid RPCs and data copies if every TF worker is colocated with a tf.data service worker. Currently, `"AUTO"` reads from any tf.data service workers to preserve existing behavior. The default value is `"AUTO"`.
Bug Fixes and Other Changes
- TF Core:
  - Added `tf.lookup.experimental.MutableHashTable`, which provides a generic mutable hash table implementation.
    - Compared to `tf.lookup.experimental.DenseHashTable`, this offers lower overall memory usage and a cleaner API. It does not require specifying a `delete_key` and `empty_key` that cannot be inserted into the table.
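A minimal sketch of the new table (assumes TF >= 2.6; the keys and values are illustrative). Unlike `DenseHashTable`, no `empty_key`/`deleted_key` sentinels need to be reserved:

```python
import tensorflow as tf

# Generic mutable hash table: string keys, int64 values, -1 for misses.
table = tf.lookup.experimental.MutableHashTable(
    key_dtype=tf.string, value_dtype=tf.int64, default_value=-1)
table.insert(tf.constant(["apple", "pear"]),
             tf.constant([5, 7], dtype=tf.int64))
found = table.lookup(tf.constant(["apple", "missing"]))  # [5, -1]
```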
  - Added support for specifying number of subdivisions in all reduce host collective. This parallelizes work on CPU and speeds up the collective performance. Default behavior is unchanged.
  - Add an option `perturb_singular` to `tf.linalg.tridiagonal_solve` that allows solving linear systems with a numerically singular tridiagonal matrix, e.g. for use in inverse iteration.
  - Added `tf.linalg.eigh_tridiagonal` that computes the eigenvalues of a Hermitian tridiagonal matrix.
  - `tf.constant` now places its output on the current default device.
  - SavedModel:
    - Added `tf.saved_model.experimental.TrackableResource`, which allows the creation of custom wrapper objects for resource tensors.
    - Added a SavedModel load option to allow restoring partial checkpoints into the SavedModel. See [`tf.saved_model.LoadOptions`](https://www.tensorflow.org/api_docs/python/tf/saved_model/LoadOptions) for details.
  - Added a new op `SparseSegmentSumGrad` to match the other sparse segment gradient ops and avoid an extra gather operation that was in the previous gradient implementation.
  - Added a new session config setting `internal_fragmentation_fraction`, which controls when the BFC Allocator needs to split an oversized chunk to satisfy an allocation request.
  - Added `tf.get_current_name_scope()`, which returns the current full name scope string that will be prepended to op names.
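The name-scope helper can be sketched as follows (a minimal illustration, assuming TF >= 2.6):

```python
import tensorflow as tf

# tf.get_current_name_scope() reports the full scope that will be
# prepended to names of ops created at this point.
with tf.name_scope("outer"):
    with tf.name_scope("inner"):
        current = tf.get_current_name_scope()  # "outer/inner"
```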
- `tf.data`:
  - Promoting `tf.data.experimental.bucket_by_sequence_length` API to `tf.data.Dataset.bucket_by_sequence_length` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.get_single_element` API to `tf.data.Dataset.get_single_element` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.group_by_window` API to `tf.data.Dataset.group_by_window` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.RandomDataset` API to `tf.data.Dataset.random` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.scan` API to `tf.data.Dataset.scan` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.snapshot` API to `tf.data.Dataset.snapshot` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.take_while` API to `tf.data.Dataset.take_while` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.ThreadingOptions` API to `tf.data.ThreadingOptions` and deprecating the experimental endpoint.
  - Promoting `tf.data.experimental.unique` API to `tf.data.Dataset.unique` and deprecating the experimental endpoint.
  - Added `stop_on_empty_dataset` parameter to `sample_from_datasets` and `choose_from_datasets`. Setting `stop_on_empty_dataset=True` will stop sampling if it encounters an empty dataset. This preserves the sampling ratio throughout training. The prior behavior was to continue sampling, skipping over exhausted datasets, until all datasets are exhausted. By default, the original behavior (`stop_on_empty_dataset=False`) is preserved.
  - Removed previously deprecated tf.data statistics related APIs:
    - `tf.data.Options.experimental_stats`
    - `tf.data.experimental.StatsAggregator`
    - `tf.data.experimental.StatsOptions.*`
    - `tf.data.experimental.bytes_produced_stats`
    - `tf.data.experimental.latency_stats`
  - Removed the following experimental tf.data optimization APIs:
    - `tf.data.experimental.MapVectorizationOptions.*`
    - `tf.data.experimental.OptimizationOptions.filter_with_random_uniform_fusion`
    - `tf.data.experimental.OptimizationOptions.hoist_random_uniform`
    - `tf.data.experimental.OptimizationOptions.map_vectorization`
    - `tf.data.experimental.OptimizationOptions.reorder_data_discarding_ops`
- `tf.keras`:
  - Fix usage of `__getitem__` slicing in Keras Functional APIs when the inputs are `RaggedTensor` objects.
  - Add `keepdims` argument to all `GlobalPooling` layers.
  - Add `include_preprocessing` argument to `MobileNetV3` architectures to control the inclusion of the `Rescaling` layer in the model.
  - Add optional argument (`force`) to `make_(train|test|predict)_function` methods to skip the cached function and generate a new one. This is useful for regenerating, in a single call, the compiled training function when any `.trainable` attribute of any of the model's layers has changed.
  - Models now have a `save_spec` property which contains the `TensorSpec` specs for calling the model. This spec is automatically saved when the model is called for the first time.
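The `keepdims` argument can be sketched as follows (a minimal illustration, assuming TF >= 2.6):

```python
import tensorflow as tf

# keepdims=True preserves the pooled spatial axes with length 1
# instead of dropping them.
pool = tf.keras.layers.GlobalAveragePooling2D(keepdims=True)
out = pool(tf.zeros([8, 32, 32, 3]))  # shape (8, 1, 1, 3) rather than (8, 3)
```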
- `tf.linalg`:
  - Add `CompositeTensor` as a base class to `LinearOperator`.
- `tf.lite`:
  - Fix mean op reference quantization rounding issue.
  - Added `framework_stable` BUILD target, which links in only the non-experimental TF Lite APIs.
  - Remove deprecated Java `Interpreter` methods:
    - `modifyGraphWithDelegate` - use `Interpreter.Options.addDelegate`
    - `setNumThreads` - use `Interpreter.Options.setNumThreads`
  - Add `Conv3DTranspose` as a builtin op.
- `tf.summary`:
  - Fix `tf.summary.should_record_summaries()` so it correctly reflects when summaries will be written, even when `tf.summary.record_if()` is not in effect, by returning a True tensor if a default writer is present.
- Grappler:
- Disable default Grappler optimization timeout to make the optimization pipeline deterministic. This may lead to increased model loading time, because time spent in graph optimizations is now unbounded (was 20 minutes).
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aadhitya A, Abhilash Mahendrakar, Abhishek Varma, Abin Shahab, Adam Hillier, Aditya Kane, AdityaKane2001, ag.ramesh, Amogh Joshi, Armen Poghosov, armkevincheng, Avrosh K, Ayan Moitra, azazhu, Banikumar Maiti, Bas Aarts, bhack, Bhanu Prakash Bandaru Venkata, Billy Cao, Bohumir Zamecnik, Bradley Reece, CyanXu, Daniel Situnayake, David Pal, Ddavis-2015, DEKHTIARJonathan, Deven Desai, Duncan Riach, Edward, Eli Osherovich, Eugene Kuznetsov, europeanplaice, evelynmitchell, Evgeniy Polyakov, Felix Vollmer, Florentin Hennecker, François Chollet, Frederic Bastien, Fredrik Knutsson, Gabriele Macchi, Gaurav Shukla, Gauri1 Deshpande, geetachavan1, Georgiy Manuilov, H, Hengwen Tong, Henri Woodcock, Hiran Sarkar, Ilya Arzhannikov, Janghoo Lee, jdematos, Jens Meder, Jerry Shih, jgehw, Jim Fisher, Jingbei Li, Jiri Podivin, Joachim Gehweiler, Johannes Lade, Jonas I. Liechti, Jonas Liechti, Jonas Ohlsson, Jonathan Dekhtiar, Julian Gross, Kaixi Hou, Kevin Cheng, Koan-Sin Tan, Kulin Seth, linzewen, Liubov Batanina, luisleee, Lukas Geiger, Mahmoud Abuzaina, mathgaming, Matt Conley, Max H. Gerlach, mdfaijul, Mh Kwon, Michael Martis, Michal Szutenberg, Måns Nilsson, nammbash, Neil Girdhar, Nicholas Vadivelu, Nick Kreeger, Nirjas Jakilim, okyanusoz, Patrice Vignola, Patrik Laurell, Pedro Marques, Philipp Hack, Phillip Cloud, Piergiacomo De Marchi, Prashant Kumar, puneeshkhanna, pvarouktsis, QQ喵, Rajeshwar Reddy T, Rama Ketineni, Reza Rahimi, Robert Kalmar, rsun, Ryan Kuester, Saduf2019, Sean Morgan, Sean Moriarity, Shaochen Shi, Sheng, Yang, Shu Wang, Shuai Zhang, Soojeong, Stanley-Nod, Steven I Reeves, stevenireeves, Suraj Sudhir, Sven Mayer, Tamas Bela Feher, tashuang.zk, tcervi, Teng Lu, Thales Elero Cervi, Thibaut Goetghebuer-Planchon, Thomas Walther, Till Brychcy, Trent Lo, Uday Bondhugula, vishakha.agrawal, Vishnuvardhan Janapati, wamuir, Wenwen Ouyang, wenwu, Williard Joshua Jose, xiaohong1031, Xiaoming (Jason) Cui, Xinan Jiang, Yasir Modak, Yi Li, Yong Tang, zilinzhu, 박상준, 이장
Release 2.4.2
This release introduces several vulnerability fixes:
- Fixes a heap buffer overflow in `RaggedBinCount` (CVE-2021-29512)
- Fixes a heap out of bounds write in `RaggedBinCount` (CVE-2021-29514)
- Fixes a type confusion during tensor casts which leads to dereferencing null pointers (CVE-2021-29513)
- Fixes a reference binding to null pointer in `MatrixDiag*` ops (CVE-2021-29515)
- Fixes a null pointer dereference via invalid Ragged Tensors (CVE-2021-29516)
- Fixes a division by zero in `Conv3D` (CVE-2021-29517)
- Fixes vulnerabilities where session operations in eager mode lead to null pointer dereferences (CVE-2021-29518)
- Fixes a `CHECK`-fail in `SparseCross` caused by type confusion (CVE-2021-29519)
- Fixes a segfault in `SparseCountSparseOutput` (CVE-2021-29521)
- Fixes a heap buffer overflow in `Conv3DBackprop*` (CVE-2021-29520)
- Fixes a division by 0 in `Conv3DBackprop*` (CVE-2021-29522)
- Fixes a `CHECK`-fail in `AddManySparseToTensorsMap` (CVE-2021-29523)
- Fixes a division by 0 in `Conv2DBackpropFilter` (CVE-2021-29524)
- Fixes a division by 0 in `Conv2DBackpropInput` (CVE-2021-29525)
- Fixes a division by 0 in `Conv2D` (CVE-2021-29526)
- Fixes a division by 0 in `QuantizedConv2D` (CVE-2021-29527)
- Fixes a division by 0 in `QuantizedMul` (CVE-2021-29528)
- Fixes vulnerabilities caused by invalid validation in `SparseMatrixSparseCholesky` (CVE-2021-29530)
- Fixes a heap buffer overflow caused by rounding (CVE-2021-29529)
- Fixes a `CHECK`-fail in `tf.raw_ops.EncodePng` (CVE-2021-29531)
- Fixes a heap out of bounds read in `RaggedCross` (CVE-2021-29532)
- Fixes a `CHECK`-fail in `DrawBoundingBoxes` (CVE-2021-29533)
- Fixes a heap buffer overflow in `QuantizedMul` (CVE-2021-29535)
- Fixes a `CHECK`-fail in `SparseConcat` (CVE-2021-29534)
- Fixes a heap buffer overflow in `QuantizedResizeBilinear` (CVE-2021-29537)
- Fixes a heap buffer overflow in `QuantizedReshape` (CVE-2021-29536)
- Fixes a division by zero in `Conv2DBackpropFilter` (CVE-2021-29538)
- Fixes a heap buffer overflow in `Conv2DBackpropFilter` (CVE-2021-29540)
- Fixes a heap buffer overflow in `StringNGrams` (CVE-2021-29542)
- Fixes a null pointer dereference in `StringNGrams` (CVE-2021-29541)
- Fixes a `CHECK`-fail in `QuantizeAndDequantizeV4Grad` (CVE-2021-29544)
- Fixes a `CHECK`-fail in `CTCGreedyDecoder` (CVE-2021-29543)
- Fixes a heap buffer overflow in `SparseTensorToCSRSparseMatrix` (CVE-2021-29545)
- Fixes a division by 0 in `QuantizedBiasAdd` (CVE-2021-29546)
- Fixes a heap out of bounds in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29547)
- Fixes a division by 0 in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29548)
- Fixes a division by 0 in `QuantizedAdd` (CVE-2021-29549)
- Fixes a division by 0 in `FractionalAvgPool` (CVE-2021-29550)
- Fixes an OOB read in `MatrixTriangularSolve` (CVE-2021-29551)
- Fixes a heap OOB in `QuantizeAndDequantizeV3` (CVE-2021-29553)
- Fixes a `CHECK`-failure in `UnsortedSegmentJoin` (CVE-2021-29552)
- Fixes a division by 0 in `DenseCountSparseOutput` (CVE-2021-29554)
- Fixes a division by 0 in `FusedBatchNorm` (CVE-2021-29555)
- Fixes a division by 0 in `SparseMatMul` (CVE-2021-29557)
- Fixes a division by 0 in `Reverse` (CVE-2021-29556)
- Fixes a heap buffer overflow in `SparseSplit` (CVE-2021-29558)
- Fixes a heap OOB access in unicode ops (CVE-2021-29559)
- Fixes a heap buffer overflow in `RaggedTensorToTensor` (CVE-2021-29560)
- Fixes a `CHECK`-fail in `LoadAndRemapMatrix` (CVE-2021-29561)
- Fixes a `CHECK`-fail in `tf.raw_ops.IRFFT` (CVE-2021-29562)
- Fixes a `CHECK`-fail in `tf.raw_ops.RFFT` (CVE-2021-29563)
- Fixes a null pointer dereference in `EditDistance` (CVE-2021-29564)
- Fixes a null pointer dereference in `SparseFillEmptyRows` (CVE-2021-29565)
- Fixes a heap OOB access in `Dilation2DBackpropInput` (CVE-2021-29566)
- Fixes a reference binding to null in `ParameterizedTruncatedNormal` (CVE-2021-29568)
- Fixes a set of vulnerabilities caused by lack of validation in `SparseDenseCwiseMul` (CVE-2021-29567)
- Fixes a heap out of bounds read in `MaxPoolGradWithArgmax` (CVE-2021-29570)
- Fixes a heap out of bounds read in `RequantizationRange` (CVE-2021-29569)
- Fixes a memory corruption in `DrawBoundingBoxesV2` (CVE-2021-29571)
- Fixes a reference binding to nullptr in `SdcaOptimizer` (CVE-2021-29572)
- Fixes an overflow and a denial of service in `tf.raw_ops.ReverseSequence` (CVE-2021-29575)
- Fixes a division by 0 in `MaxPoolGradWithArgmax` (CVE-2021-29573)
- Fixes an undefined behavior in `MaxPool3DGradGrad` (CVE-2021-29574)
- Fixes a heap buffer overflow in `MaxPool3DGradGrad` (CVE-2021-29576)
- Fixes a heap buffer overflow in `AvgPool3DGrad` (CVE-2021-29577)
- Fixes an undefined behavior and a `CHECK`-fail in `FractionalMaxPoolGrad` (CVE-2021-29580)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-29578)
- Fixes a heap buffer overflow in `MaxPoolGrad` (CVE-2021-29579)
- Fixes a segfault in `CTCBeamSearchDecoder` (CVE-2021-29581)
- Fixes a heap OOB read in `tf.raw_ops.Dequantize` (CVE-2021-29582)
- Fixes a `CHECK`-fail due to integer overflow (CVE-2021-29584)
- Fixes a heap buffer overflow and undefined behavior in `FusedBatchNorm` (CVE-2021-29583)
- Fixes a division by zero in padding computation in TFLite (CVE-2021-29585)
- Fixes a division by zero in optimized pooling implementations in TFLite (CVE-2021-29586)
- Fixes a division by zero in TFLite's implementation of `SpaceToDepth` (CVE-2021-29587)
- Fixes a division by zero in TFLite's implementation of `GatherNd` (CVE-2021-29589)
- Fixes a division by zero in TFLite's implementation of `TransposeConv` (CVE-2021-29588)
- Fixes a heap OOB read in TFLite's implementation of `Minimum` or `Maximum` (CVE-2021-29590)
- Fixes a null pointer dereference in TFLite's `Reshape` operator (CVE-2021-29592)
- Fixes a stack overflow due to looping TFLite subgraph (CVE-2021-29591)
- Fixes a division by zero in TFLite's implementation of `DepthToSpace` (CVE-2021-29595)
- Fixes a division by zero in TFLite's convolution code (CVE-2021-29594)
- Fixes a division by zero in TFLite's implementation of `EmbeddingLookup` (CVE-2021-29596)
- Fixes a division by zero in TFLite's implementation of `BatchToSpaceNd` (CVE-2021-29593)
- Fixes a division by zero in TFLite's implementation of `SpaceToBatchNd` (CVE-2021-29597)
- Fixes a division by zero in TFLite's implementation of `SVDF` (CVE-2021-29598)
- Fixes a division by zero in TFLite's implementation of `Split` (CVE-2021-29599)
- Fixes a division by zero in TFLite's implementation of `OneHot` (CVE-2021-29600)
- Fixes a division by zero in TFLite's implementation of `DepthwiseConv` (CVE-2021-29602)
- Fixes a division by zero in TFLite's implementation of hashtable lookup (CVE-2021-29604)
- Fixes an integer overflow in TFLite concatenation (CVE-2021-29601)
- Fixes an integer overflow in TFLite memory allocation (CVE-2021-29605)
- Fixes a heap OOB write in TFLite (CVE-2021-29603)
- Fixes a heap OOB read in TFLite (CVE-2021-29606)
- Fixes a heap OOB and null pointer dereference in `RaggedTensorToTensor` (CVE-2021-29608)
- Fixes vulnerabilities caused by incomplete validation in `SparseAdd` (CVE-2021-29609)
- Fixes vulnerabilities caused by incomplete validation in `SparseSparseMinimum` (CVE-2021-29607)
- Fixes vulnerabilities caused by incomplete validation in `SparseReshape` (CVE-2021-29611)
- Fixes vulnerabilities caused by invalid validation in `QuantizeAndDequantizeV2` (CVE-2021-29610)
- Fixes a heap buffer overflow in `BandedTriangularSolve` (CVE-2021-29612)
- Fixes vulnerabilities caused by incomplete validation in `tf.raw_ops.CTCLoss` (CVE-2021-29613)
- Fixes an interpreter crash from vulnerabilities in `tf.io.decode_raw` (CVE-2021-29614)
- Fixes a stack overflow in `ParseAttrValue` with nested tensors (CVE-2021-29615)
- Fixes a null dereference in Grappler's `TrySimplify` (CVE-2021-29616)
- Fixes a crash in `tf.transpose` with complex inputs (CVE-2021-29618)
- Fixes a crash in `tf.strings.substr` due to `CHECK`-fail (CVE-2021-29617)
- Fixes a segfault in `tf.raw_ops.SparseCountSparseOutput` (CVE-2021-29619)
- Fixes a segfault in `tf.raw_ops.ImmutableConst` (CVE-2021-29539)
- Updates `curl` to `7.76.0` to handle CVE-2020-8169, CVE-2020-8177, CVE-2020-8231, CVE-2020-8284, CVE-2020-8285 and CVE-2020-8286.
Release 2.2.3
Note that this is the last patch release for the TensorFlow 2.2.x series.
This release introduces several vulnerability fixes:
- Fixes a heap buffer overflow in `RaggedBinCount` (CVE-2021-29512)
- Fixes a heap out of bounds write in `RaggedBinCount` (CVE-2021-29514)
- Fixes a type confusion during tensor casts which leads to dereferencing null pointers (CVE-2021-29513)
- Fixes a reference binding to null pointer in `MatrixDiag*` ops (CVE-2021-29515)
- Fixes a null pointer dereference via invalid Ragged Tensors (CVE-2021-29516)
- Fixes a division by zero in `Conv3D` (CVE-2021-29517)
- Fixes vulnerabilities where session operations in eager mode lead to null pointer dereferences (CVE-2021-29518)
- Fixes a `CHECK`-fail in `SparseCross` caused by type confusion (CVE-2021-29519)
- Fixes a segfault in `SparseCountSparseOutput` (CVE-2021-29521)
- Fixes a heap buffer overflow in `Conv3DBackprop*` (CVE-2021-29520)
- Fixes a division by 0 in `Conv3DBackprop*` (CVE-2021-29522)
- Fixes a `CHECK`-fail in `AddManySparseToTensorsMap` (CVE-2021-29523)
- Fixes a division by 0 in `Conv2DBackpropFilter` (CVE-2021-29524)
- Fixes a division by 0 in `Conv2DBackpropInput` (CVE-2021-29525)
- Fixes a division by 0 in `Conv2D` (CVE-2021-29526)
- Fixes a division by 0 in `QuantizedConv2D` (CVE-2021-29527)
- Fixes a division by 0 in `QuantizedMul` (CVE-2021-29528)
- Fixes vulnerabilities caused by invalid validation in `SparseMatrixSparseCholesky` (CVE-2021-29530)
- Fixes a heap buffer overflow caused by rounding (CVE-2021-29529)
- Fixes a `CHECK`-fail in `tf.raw_ops.EncodePng` (CVE-2021-29531)
- Fixes a heap out of bounds read in `RaggedCross` (CVE-2021-29532)
- Fixes a `CHECK`-fail in `DrawBoundingBoxes` (CVE-2021-29533)
- Fixes a heap buffer overflow in `QuantizedMul` (CVE-2021-29535)
- Fixes a `CHECK`-fail in `SparseConcat` (CVE-2021-29534)
- Fixes a heap buffer overflow in `QuantizedResizeBilinear` (CVE-2021-29537)
- Fixes a heap buffer overflow in `QuantizedReshape` (CVE-2021-29536)
- Fixes a division by zero in `Conv2DBackpropFilter` (CVE-2021-29538)
- Fixes a heap buffer overflow in `Conv2DBackpropFilter` (CVE-2021-29540)
- Fixes a heap buffer overflow in `StringNGrams` (CVE-2021-29542)
- Fixes a null pointer dereference in `StringNGrams` (CVE-2021-29541)
- Fixes a `CHECK`-fail in `QuantizeAndDequantizeV4Grad` (CVE-2021-29544)
- Fixes a `CHECK`-fail in `CTCGreedyDecoder` (CVE-2021-29543)
- Fixes a heap buffer overflow in `SparseTensorToCSRSparseMatrix` (CVE-2021-29545)
- Fixes a division by 0 in `QuantizedBiasAdd` (CVE-2021-29546)
- Fixes a heap out of bounds in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29547)
- Fixes a division by 0 in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29548)
- Fixes a division by 0 in `QuantizedAdd` (CVE-2021-29549)
- Fixes a division by 0 in `FractionalAvgPool` (CVE-2021-29550)
- Fixes an OOB read in `MatrixTriangularSolve` (CVE-2021-29551)
- Fixes a heap OOB in `QuantizeAndDequantizeV3` (CVE-2021-29553)
- Fixes a `CHECK`-failure in `UnsortedSegmentJoin` (CVE-2021-29552)
- Fixes a division by 0 in `DenseCountSparseOutput` (CVE-2021-29554)
- Fixes a division by 0 in `FusedBatchNorm` (CVE-2021-29555)
- Fixes a division by 0 in `SparseMatMul` (CVE-2021-29557)
- Fixes a division by 0 in `Reverse` (CVE-2021-29556)
- Fixes a heap buffer overflow in `SparseSplit` (CVE-2021-29558)
- Fixes a heap OOB access in unicode ops (CVE-2021-29559)
- Fixes a heap buffer overflow in `RaggedTensorToTensor` (CVE-2021-29560)
- Fixes a `CHECK`-fail in `LoadAndRemapMatrix` (CVE-2021-29561)
- Fixes a `CHECK`-fail in `tf.raw_ops.IRFFT` (CVE-2021-29562)
- Fixes a `CHECK`-fail in `tf.raw_ops.RFFT` (CVE-2021-29563)
- Fixes a null pointer dereference in `EditDistance` (CVE-2021-29564)
- Fixes a null pointer dereference in `SparseFillEmptyRows` (CVE-2021-29565)
- Fixes a heap OOB access in `Dilation2DBackpropInput` (CVE-2021-29566)
- Fixes a reference binding to null in `ParameterizedTruncatedNormal` (CVE-2021-29568)
- Fixes a set of vulnerabilities caused by lack of validation in `SparseDenseCwiseMul` (CVE-2021-29567)
- Fixes a heap out of bounds read in `MaxPoolGradWithArgmax` (CVE-2021-29570)
- Fixes a heap out of bounds read in `RequantizationRange` (CVE-2021-29569)
- Fixes a memory corruption in `DrawBoundingBoxesV2` (CVE-2021-29571)
- Fixes a reference binding to nullptr in `SdcaOptimizer` (CVE-2021-29572)
- Fixes an overflow and a denial of service in `tf.raw_ops.ReverseSequence` (CVE-2021-29575)
- Fixes a division by 0 in `MaxPoolGradWithArgmax` (CVE-2021-29573)
- Fixes an undefined behavior in `MaxPool3DGradGrad` (CVE-2021-29574)
- Fixes a heap buffer overflow in `MaxPool3DGradGrad` (CVE-2021-29576)
- Fixes a heap buffer overflow in `AvgPool3DGrad` (CVE-2021-29577)
- Fixes an undefined behavior and a `CHECK`-fail in `FractionalMaxPoolGrad` (CVE-2021-29580)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-29578)
- Fixes a heap buffer overflow in `MaxPoolGrad` (CVE-2021-29579)
- Fixes a segfault in `CTCBeamSearchDecoder` (CVE-2021-29581)
- Fixes a heap OOB read in `tf.raw_ops.Dequantize` (CVE-2021-29582)
- Fixes a `CHECK`-fail due to integer overflow (CVE-2021-29584)
- Fixes a heap buffer overflow and undefined behavior in `FusedBatchNorm` (CVE-2021-29583)
- Fixes a division by zero in padding computation in TFLite (CVE-2021-29585)
- Fixes a division by zero in optimized pooling implementations in TFLite (CVE-2021-29586)
- Fixes a division by zero in TFLite's implementation of `SpaceToDepth` (CVE-2021-29587)
- Fixes a division by zero in TFLite's implementation of `GatherNd` (CVE-2021-29589)
- Fixes a division by zero in TFLite's implementation of `TransposeConv` (CVE-2021-29588)
- Fixes a heap OOB read in TFLite's implementation of `Minimum` or `Maximum` (CVE-2021-29590)
- Fixes a null pointer dereference in TFLite's `Reshape` operator (CVE-2021-29592)
- Fixes a stack overflow due to looping TFLite subgraph (CVE-2021-29591)
- Fixes a division by zero in TFLite's implementation of `DepthToSpace` (CVE-2021-29595)
- Fixes a division by zero in TFLite's convolution code (CVE-2021-29594)
- Fixes a division by zero in TFLite's implementation of `EmbeddingLookup` (CVE-2021-29596)
- Fixes a division by zero in TFLite's implementation of `BatchToSpaceNd` (CVE-2021-29593)
- Fixes a division by zero in TFLite's implementation of `SpaceToBatchNd` (CVE-2021-29597)
- Fixes a division by zero in TFLite's implementation of `SVDF` (CVE-2021-29598)
- Fixes a division by zero in TFLite's implementation of `Split` (CVE-2021-29599)
- Fixes a division by zero in TFLite's implementation of `OneHot` (CVE-2021-29600)
- Fixes a division by zero in TFLite's implementation of `DepthwiseConv` (CVE-2021-29602)
- Fixes a division by zero in TFLite's implementation of hashtable lookup (CVE-2021-29604)
- Fixes an integer overflow in TFLite concatenation (CVE-2021-29601)
- Fixes an integer overflow in TFLite memory allocation (CVE-2021-29605)
- Fixes a heap OOB write in TFLite (CVE-2021-29603)
- Fixes a heap OOB read in TFLite (CVE-2021-29606)
- Fixes a heap OOB and null pointer dereference in `RaggedTensorToTensor` (CVE-2021-29608)
- Fixes vulnerabilities caused by incomplete validation in `SparseAdd` (CVE-2021-29609)
- Fixes vulnerabilities caused by incomplete validation in `SparseSparseMinimum` (CVE-2021-29607)
- Fixes vulnerabilities caused by incomplete validation in `SparseReshape` (CVE-2021-29611)
- Fixes vulnerabilities caused by invalid validation in `QuantizeAndDequantizeV2` (CVE-2021-29610)
- Fixes a heap buffer overflow in `BandedTriangularSolve` (CVE-2021-29612)
- Fixes vulnerabilities caused by incomplete validation in `tf.raw_ops.CTCLoss` (CVE-2021-29613)
- Fixes an interpreter crash from vulnerabilities in `tf.io.decode_raw` (CVE-2021-29614)
- Fixes a stack overflow in `ParseAttrValue` with nested tensors (CVE-2021-29615)
- Fixes a null dereference in Grappler's `TrySimplify` (CVE-2021-29616)
- Fixes a crash in `tf.transpose` with complex inputs (CVE-2021-29618)
- Fixes a crash in `tf.strings.substr` due to `CHECK`-fail (CVE-2021-29617)
- Fixes a segfault in `tf.raw_ops.SparseCountSparseOutput` (CVE-2021-29619)
- Fixes a segfault in `tf.raw_ops.ImmutableConst` (CVE-2021-29539)
- Updates `curl` to `7.76.0` to handle CVE-2020-8169, CVE-2020-8177, CVE-2020-8231, CVE-2020-8284, CVE-2020-8285 and CVE-2020-8286.
Release 2.1.4
Note that this is the last patch release for the TensorFlow 2.1.x series.
This release introduces several vulnerability fixes:
- Fixes a heap buffer overflow in `RaggedBinCount` (CVE-2021-29512)
- Fixes a heap out of bounds write in `RaggedBinCount` (CVE-2021-29514)
- Fixes a type confusion during tensor casts which leads to dereferencing null pointers (CVE-2021-29513)
- Fixes a reference binding to null pointer in `MatrixDiag*` ops (CVE-2021-29515)
- Fixes a null pointer dereference via invalid Ragged Tensors (CVE-2021-29516)
- Fixes a division by zero in `Conv3D` (CVE-2021-29517)
- Fixes vulnerabilities where session operations in eager mode lead to null pointer dereferences (CVE-2021-29518)
- Fixes a `CHECK`-fail in `SparseCross` caused by type confusion (CVE-2021-29519)
- Fixes a segfault in `SparseCountSparseOutput` (CVE-2021-29521)
- Fixes a heap buffer overflow in `Conv3DBackprop*` (CVE-2021-29520)
- Fixes a division by 0 in `Conv3DBackprop*` (CVE-2021-29522)
- Fixes a `CHECK`-fail in `AddManySparseToTensorsMap` (CVE-2021-29523)
- Fixes a division by 0 in `Conv2DBackpropFilter` (CVE-2021-29524)
- Fixes a division by 0 in `Conv2DBackpropInput` (CVE-2021-29525)
- Fixes a division by 0 in `Conv2D` (CVE-2021-29526)
- Fixes a division by 0 in `QuantizedConv2D` (CVE-2021-29527)
- Fixes a division by 0 in `QuantizedMul` (CVE-2021-29528)
- Fixes vulnerabilities caused by invalid validation in `SparseMatrixSparseCholesky` (CVE-2021-29530)
- Fixes a heap buffer overflow caused by rounding (CVE-2021-29529)
- Fixes a `CHECK`-fail in `tf.raw_ops.EncodePng` (CVE-2021-29531)
- Fixes a heap out of bounds read in `RaggedCross` (CVE-2021-29532)
- Fixes a `CHECK`-fail in `DrawBoundingBoxes` (CVE-2021-29533)
- Fixes a heap buffer overflow in `QuantizedMul` (CVE-2021-29535)
- Fixes a `CHECK`-fail in `SparseConcat` (CVE-2021-29534)
- Fixes a heap buffer overflow in `QuantizedResizeBilinear` (CVE-2021-29537)
- Fixes a heap buffer overflow in `QuantizedReshape` (CVE-2021-29536)
- Fixes a division by zero in `Conv2DBackpropFilter` (CVE-2021-29538)
- Fixes a heap buffer overflow in `Conv2DBackpropFilter` (CVE-2021-29540)
- Fixes a heap buffer overflow in `StringNGrams` (CVE-2021-29542)
- Fixes a null pointer dereference in `StringNGrams` (CVE-2021-29541)
- Fixes a `CHECK`-fail in `QuantizeAndDequantizeV4Grad` (CVE-2021-29544)
- Fixes a `CHECK`-fail in `CTCGreedyDecoder` (CVE-2021-29543)
- Fixes a heap buffer overflow in `SparseTensorToCSRSparseMatrix` (CVE-2021-29545)
- Fixes a division by 0 in `QuantizedBiasAdd` (CVE-2021-29546)
- Fixes a heap out of bounds in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29547)
- Fixes a division by 0 in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29548)
- Fixes a division by 0 in `QuantizedAdd` (CVE-2021-29549)
- Fixes a division by 0 in `FractionalAvgPool` (CVE-2021-29550)
- Fixes an OOB read in `MatrixTriangularSolve` (CVE-2021-29551)
- Fixes a heap OOB in `QuantizeAndDequantizeV3` (CVE-2021-29553)
- Fixes a `CHECK`-failure in `UnsortedSegmentJoin` (CVE-2021-29552)
- Fixes a division by 0 in `DenseCountSparseOutput` (CVE-2021-29554)
- Fixes a division by 0 in `FusedBatchNorm` (CVE-2021-29555)
- Fixes a division by 0 in `SparseMatMul` (CVE-2021-29557)
- Fixes a division by 0 in `Reverse` (CVE-2021-29556)
- Fixes a heap buffer overflow in `SparseSplit` (CVE-2021-29558)
- Fixes a heap OOB access in unicode ops (CVE-2021-29559)
- Fixes a heap buffer overflow in `RaggedTensorToTensor` (CVE-2021-29560)
- Fixes a `CHECK`-fail in `LoadAndRemapMatrix` (CVE-2021-29561)
- Fixes a `CHECK`-fail in `tf.raw_ops.IRFFT` (CVE-2021-29562)
- Fixes a `CHECK`-fail in `tf.raw_ops.RFFT` (CVE-2021-29563)
- Fixes a null pointer dereference in `EditDistance` (CVE-2021-29564)
- Fixes a null pointer dereference in `SparseFillEmptyRows` (CVE-2021-29565)
- Fixes a heap OOB access in `Dilation2DBackpropInput` (CVE-2021-29566)
- Fixes a reference binding to null in `ParameterizedTruncatedNormal` (CVE-2021-29568)
- Fixes a set of vulnerabilities caused by lack of validation in `SparseDenseCwiseMul` (CVE-2021-29567)
- Fixes a heap out of bounds read in `MaxPoolGradWithArgmax` (CVE-2021-29570)
- Fixes a heap out of bounds read in `RequantizationRange` (CVE-2021-29569)
- Fixes a memory corruption in `DrawBoundingBoxesV2` (CVE-2021-29571)
- Fixes a reference binding to nullptr in `SdcaOptimizer` (CVE-2021-29572)
- Fixes an overflow and a denial of service in `tf.raw_ops.ReverseSequence` (CVE-2021-29575)
- Fixes a division by 0 in `MaxPoolGradWithArgmax` (CVE-2021-29573)
- Fixes an undefined behavior in `MaxPool3DGradGrad` (CVE-2021-29574)
- Fixes a heap buffer overflow in `MaxPool3DGradGrad` (CVE-2021-29576)
- Fixes a heap buffer overflow in `AvgPool3DGrad` (CVE-2021-29577)
- Fixes an undefined behavior and a `CHECK`-fail in `FractionalMaxPoolGrad` (CVE-2021-29580)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-29578)
- Fixes a heap buffer overflow in `MaxPoolGrad` (CVE-2021-29579)
- Fixes a segfault in `CTCBeamSearchDecoder` (CVE-2021-29581)
- Fixes a heap OOB read in `tf.raw_ops.Dequantize` (CVE-2021-29582)
- Fixes a `CHECK`-fail due to integer overflow (CVE-2021-29584)
- Fixes a heap buffer overflow and undefined behavior in `FusedBatchNorm` (CVE-2021-29583)
- Fixes a division by zero in padding computation in TFLite (CVE-2021-29585)
- Fixes a division by zero in optimized pooling implementations in TFLite (CVE-2021-29586)
- Fixes a division by zero in TFLite's implementation of `SpaceToDepth` (CVE-2021-29587)
- Fixes a division by zero in TFLite's implementation of `GatherNd` (CVE-2021-29589)
- Fixes a division by zero in TFLite's implementation of `TransposeConv` (CVE-2021-29588)
- Fixes a heap OOB read in TFLite's implementation of `Minimum` or `Maximum` (CVE-2021-29590)
- Fixes a null pointer dereference in TFLite's `Reshape` operator (CVE-2021-29592)
- Fixes a stack overflow due to looping TFLite subgraph (CVE-2021-29591)
- Fixes a division by zero in TFLite's implementation of `DepthToSpace` (CVE-2021-29595)
- Fixes a division by zero in TFLite's convolution code (CVE-2021-29594)
- Fixes a division by zero in TFLite's implementation of `EmbeddingLookup` (CVE-2021-29596)
- Fixes a division by zero in TFLite's implementation of `BatchToSpaceNd` (CVE-2021-29593)
- Fixes a division by zero in TFLite's implementation of `SpaceToBatchNd` (CVE-2021-29597)
- Fixes a division by zero in TFLite's implementation of `SVDF` (CVE-2021-29598)
- Fixes a division by zero in TFLite's implementation of `Split` (CVE-2021-29599)
- Fixes a division by zero in TFLite's implementation of `OneHot` (CVE-2021-29600)
- Fixes a division by zero in TFLite's implementation of `DepthwiseConv` (CVE-2021-29602)
- Fixes a division by zero in TFLite's implementation of hashtable lookup (CVE-2021-29604)
- Fixes an integer overflow in TFLite concatenation (CVE-2021-29601)
- Fixes an integer overflow in TFLite memory allocation (CVE-2021-29605)
- Fixes a heap OOB write in TFLite (CVE-2021-29603)
- Fixes a heap OOB read in TFLite (CVE-2021-29606)
- Fixes a heap OOB and null pointer dereference in `RaggedTensorToTensor` (CVE-2021-29608)
- Fixes vulnerabilities caused by incomplete validation in `SparseAdd` (CVE-2021-29609)
- Fixes vulnerabilities caused by incomplete validation in `SparseSparseMinimum` (CVE-2021-29607)
- Fixes vulnerabilities caused by incomplete validation in `SparseReshape` (CVE-2021-29611)
- Fixes vulnerabilities caused by invalid validation in `QuantizeAndDequantizeV2` (CVE-2021-29610)
- Fixes a heap buffer overflow in `BandedTriangularSolve` (CVE-2021-29612)
- Fixes vulnerabilities caused by incomplete validation in `tf.raw_ops.CTCLoss` (CVE-2021-29613)
- Fixes an interpreter crash from vulnerabilities in `tf.io.decode_raw` (CVE-2021-29614)
- Fixes a stack overflow in `ParseAttrValue` with nested tensors (CVE-2021-29615)
- Fixes a null dereference in Grappler's `TrySimplify` (CVE-2021-29616)
- Fixes a crash in `tf.transpose` with complex inputs (CVE-2021-29618)
- Fixes a crash in `tf.strings.substr` due to `CHECK`-fail (CVE-2021-29617)
- Fixes a segfault in `tf.raw_ops.SparseCountSparseOutput` (CVE-2021-29619)
- Fixes a segfault in `tf.raw_ops.ImmutableConst` (CVE-2021-29539)
- Updates `curl` to `7.76.0` to handle CVE-2020-8169, CVE-2020-8177, CVE-2020-8231, CVE-2020-8284, CVE-2020-8285 and CVE-2020-8286.