Source: ghsa
### Impact
The [`TensorKey` hash function](https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/framework/tensor_key.h#L53-L64) used the total estimated `AllocatedBytes()`, which (a) is an estimate per tensor, and (b) is a very poor hash function for constants (e.g. `int32_t`). It also tried to access individual tensor bytes through `tensor.data()` of size `AllocatedBytes()`. This led to ASAN failures because `AllocatedBytes()` is an estimate of the total bytes allocated by a tensor, including any pointed-to constructs (e.g. strings), and does not refer to contiguous bytes in the `.data()` buffer. We couldn't use this byte vector for hashing anyway, since types like `tstring` include pointers, whereas we need to hash the string values themselves.

### Patches
We have patched the issue in GitHub commit [1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6](https://github.com/tensorflow/tensorflow/commit/1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6). The fix will b...

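As a standalone illustration of the hashing problem described in the advisory above (this is not TensorFlow code, and `hash_raw_bytes` is a hypothetical helper), the sketch below shows why hashing an object's raw bytes is wrong for string-like types such as `tstring`: the object representation contains heap pointers and capacity fields rather than the character data, so two equal strings usually hash differently.

```cc
#include <cstddef>
#include <functional>
#include <iostream>
#include <string>

// Hypothetical helper: hash an object's in-memory representation, roughly
// the way hashing `tensor.data()` over `AllocatedBytes()` effectively would.
template <typename T>
std::size_t hash_raw_bytes(const T& value) {
  std::string bytes(reinterpret_cast<const char*>(&value), sizeof(value));
  return std::hash<std::string>{}(bytes);
}

int main() {
  std::string a(64, 'x');  // heap-allocated, equal content
  std::string b(64, 'x');
  // Raw-byte hashes cover the pointer/size/capacity fields, so equal
  // strings usually hash differently (prints 0 on typical implementations).
  std::cout << (hash_raw_bytes(a) == hash_raw_bytes(b)) << "\n";
  // Hashing the string values themselves always agrees (prints 1).
  std::cout << (std::hash<std::string>{}(a) == std::hash<std::string>{}(b))
            << "\n";
  return 0;
}
```
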
### Impact
The [macros that TensorFlow uses for writing assertions (e.g., `CHECK_LT`, `CHECK_GT`, etc.)](https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/platform/default/logging.h) have incorrect logic when comparing `size_t` and `int` values. Due to implicit type conversion rules, several of the macros would trigger incorrectly.

### Patches
We have patched the issue in GitHub commit [b917181c29b50cb83399ba41f4d938dc369109a1](https://github.com/tensorflow/tensorflow/commit/b917181c29b50cb83399ba41f4d938dc369109a1) (merging GitHub PR [#55730](https://github.com/tensorflow/tensorflow/pull/55730)). The fix will be included in TensorFlow 2.9.0. We will also cherrypick this commit on TensorFlow 2.8.1, TensorFlow 2.7.2, and TensorFlow 2.6.4, as these are also affected and still in supported range.

### For more information
Please consult [our security guide](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md) for more informati...

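As a standalone illustration of the `size_t` vs. `int` conversion issue described above (this is not the `logging.h` macro code), the comparison below silently converts the negative `int` to an unsigned value and so evaluates the "wrong" way; compilers typically flag it with `-Wsign-compare`, which is exactly the class of bug the patch addresses.

```cc
#include <cstddef>
#include <iostream>

int main() {
  std::size_t buffer_size = 16;  // unsigned
  int offset = -1;               // signed and negative

  // The usual arithmetic conversions turn `offset` into a huge unsigned
  // value before comparing, so this prints "false" even though -1 < 16
  // mathematically. An assertion macro built on such a comparison can
  // therefore pass or trigger at the wrong times.
  std::cout << std::boolalpha << (offset < buffer_size) << "\n";
  return 0;
}
```
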
### Impact
The implementation of [`tf.raw_ops.EditDistance`]() has incomplete validation. Users can pass negative values to cause a segmentation-fault-based denial of service:

```python
import tensorflow as tf

hypothesis_indices = tf.constant(-1250999896764, shape=[3, 3], dtype=tf.int64)
hypothesis_values = tf.constant(0, shape=[3], dtype=tf.int64)
hypothesis_shape = tf.constant(0, shape=[3], dtype=tf.int64)
truth_indices = tf.constant(-1250999896764, shape=[3, 3], dtype=tf.int64)
truth_values = tf.constant(2, shape=[3], dtype=tf.int64)
truth_shape = tf.constant(2, shape=[3], dtype=tf.int64)

tf.raw_ops.EditDistance(
    hypothesis_indices=hypothesis_indices,
    hypothesis_values=hypothesis_values,
    hypothesis_shape=hypothesis_shape,
    truth_indices=truth_indices,
    truth_values=truth_values,
    truth_shape=truth_shape)
```

In multiple places throughout the code, we are computing an index for a write operation:

```cc
if (g_truth == g_hypothesis) {
  auto loc = std::inner_product(g_...
```

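To make the write-offset computation above concrete, here is a standalone sketch (not the actual `EditDistance` kernel; the stride values are made up) showing how an unvalidated negative sparse coordinate flows through `std::inner_product` into an out-of-range write location:

```cc
#include <cstdint>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
  // Hypothetical row-major strides for a small 3-D output tensor.
  std::vector<int64_t> output_strides = {12, 4, 1};
  // Attacker-controlled sparse coordinate, as in the PoC above.
  std::vector<int64_t> coordinate = {-1250999896764, 0, 0};

  // The flat write location is a dot product of the coordinate with the
  // strides; nothing forces the result into range.
  int64_t loc = std::inner_product(coordinate.begin(), coordinate.end(),
                                   output_strides.begin(), int64_t{0});
  std::cout << "computed write offset: " << loc << "\n";  // hugely negative

  // Writing output[loc] without first checking 0 <= loc < output_size is
  // an out-of-bounds access, hence the segmentation fault.
  return 0;
}
```
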
### Impact
Multiple TensorFlow operations misbehave in eager mode when the resource handle provided to them is invalid:

```python
import tensorflow as tf

tf.raw_ops.QueueIsClosedV2(handle=[])
```

```python
import tensorflow as tf

tf.summary.flush(writer=())
```

In graph mode, it would have been impossible to perform these API calls, but the migration to TF 2.x eager mode opened up this vulnerability. If the resource handle is empty, then a reference is bound to a null pointer inside the TensorFlow codebase (in various codepaths). This is undefined behavior.

### Patches
We have patched the issue in GitHub commit [a5b89cd68c02329d793356bda85d079e9e69b4e7](https://github.com/tensorflow/tensorflow/commit/a5b89cd68c02329d793356bda85d079e9e69b4e7) and GitHub commit [dbdd98c37bc25249e8f288bd30d01e118a7b4498](https://github.com/tensorflow/tensorflow/commit/dbdd98c37bc25249e8f288bd30d01e118a7b4498). The fix will be included in TensorFlow 2.9.0. We will also cherrypick this commit on TensorFlow 2....

### Impact
The implementation of [`tf.raw_ops.SparseTensorDenseAdd`](https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/sparse_tensor_dense_add_op.cc) does not fully validate the input arguments:

```python
import tensorflow as tf

a_indices = tf.constant(0, shape=[17, 2], dtype=tf.int64)
a_values = tf.constant([], shape=[0], dtype=tf.float32)
a_shape = tf.constant([6, 12], shape=[2], dtype=tf.int64)
b = tf.constant(-0.223668531, shape=[6, 12], dtype=tf.float32)

tf.raw_ops.SparseTensorDenseAdd(
    a_indices=a_indices, a_values=a_values, a_shape=a_shape, b=b)
```

In this case, a reference gets bound to a `nullptr` during kernel execution. This is undefined behavior.

### Patches
We have patched the issue in GitHub commit [11ced8467eccad9c7cb94867708be8fa5c66c730](https://github.com/tensorflow/tensorflow/commit/11ced8467eccad9c7cb94867708be8fa5c66c730). The fix will be included in TensorFlow 2.9.0. We will also cherrypick this commit on Te...

### Impact
There is a potential for a segfault / denial of service in TensorFlow by calling `tf.compat.v1.*` ops which don't yet have support for quantized types (support was added after the migration to TF 2.x):

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.placeholder_with_default(
    input=np.array([2]),
    shape=tf.constant(dtype=tf.qint8, value=np.array([1])))
```

In these scenarios, since the kernel is missing, a [`nullptr` value is passed](https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/python/eager/pywrap_tfe_src.cc#L480-L482) to [`ParseDimensionValue`](https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/python/eager/pywrap_tfe_src.cc#L296-L320) for the `py_value` argument. This pointer is then dereferenced, resulting in a segfault.

### Patches
We have patched the issue in GitHub commit [237822b59fc504dda2c564787f5d3ad9c4aa62d9](https://github.com/tensorflow/tensorflow/commit/237822b59fc504dda2...

### Impact
The implementation of [`tf.raw_ops.UnsortedSegmentJoin`](https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/unsorted_segment_join_op.cc#L83-L148) does not fully validate the input arguments. This results in a `CHECK`-failure which can be used to trigger a denial of service attack:

```python
import tensorflow as tf

tf.strings.unsorted_segment_join(
    inputs=['123'],
    segment_ids=[0],
    num_segments=-1)
```

The code assumes `num_segments` is a positive scalar but there is no validation:

```cc
const Tensor& num_segments_tensor = context->input(2);
auto num_segments = num_segments_tensor.scalar<NUM_SEGMENTS_TYPE>()();
// ...
Tensor* output_tensor = nullptr;
TensorShape output_shape =
    GetOutputShape(input_shape, segment_id_shape, num_segments);
```

Since this value is used to allocate the output tensor, a negative value would result in a `CHECK`-failure (assertion failure), as per [TFSA-2021-198](https://github...

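The fix amounts to rejecting non-positive `num_segments` before the output is allocated. Below is a minimal standalone sketch of that check (not the actual TensorFlow patch; `make_output_shape` is a simplified stand-in for `GetOutputShape`):

```cc
#include <cstdint>
#include <stdexcept>
#include <vector>

// Simplified stand-in for GetOutputShape, assuming 1-D segment_ids: the
// output is [num_segments] followed by the input dimensions past the first.
std::vector<int64_t> make_output_shape(const std::vector<int64_t>& input_shape,
                                       int64_t num_segments) {
  // Validate before sizing the output, instead of letting the allocator
  // CHECK-fail and abort the process on a negative dimension.
  if (num_segments <= 0) {
    throw std::invalid_argument("num_segments must be a positive scalar");
  }
  std::vector<int64_t> output_shape = {num_segments};
  if (!input_shape.empty()) {
    output_shape.insert(output_shape.end(), input_shape.begin() + 1,
                        input_shape.end());
  }
  return output_shape;
}
```
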
### Impact
The implementation of `tf.raw_ops.SpaceToBatchND` (in all backends, such as XLA and the handwritten kernels) is vulnerable to an integer overflow:

```python
import tensorflow as tf

input = tf.constant(-3.5e+35, shape=[10, 19, 22], dtype=tf.float32)
block_shape = tf.constant(-1879048192, shape=[2], dtype=tf.int64)
paddings = tf.constant(0, shape=[2, 2], dtype=tf.int32)

tf.raw_ops.SpaceToBatchND(input=input, block_shape=block_shape, paddings=paddings)
```

The result of this integer overflow is used to allocate the output tensor, hence we get a denial of service via a `CHECK`-failure (assertion failure), as in [TFSA-2021-198](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2021-198.md).

### Patches
We have patched the issue in GitHub commit [acd56b8bcb72b163c834ae4f18469047b001fadf](https://github.com/tensorflow/tensorflow/commit/acd56b8bcb72b163c834ae4f18469047b001fadf). The fix will be included in TensorFlow 2.9.0. We will also cherrypick...

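A sketch of the kind of defensive arithmetic this requires (not the actual TensorFlow patch; it assumes the GCC/Clang `__builtin_mul_overflow` builtin): reject non-positive `block_shape` entries and check each multiplication used to build the output shape instead of letting it wrap.

```cc
#include <cstdint>
#include <iostream>

// Returns false if a * b overflows int64_t. __builtin_mul_overflow is a
// GCC/Clang builtin; other toolchains need an equivalent check.
bool checked_mul(int64_t a, int64_t b, int64_t* out) {
  return !__builtin_mul_overflow(a, b, out);
}

int main() {
  // Values from the PoC above: batch dimension 10, block_shape filled with
  // -1879048192 in both entries.
  int64_t batch = 10;
  int64_t block0 = -1879048192, block1 = -1879048192;

  if (block0 <= 0 || block1 <= 0) {
    std::cout << "block_shape entries must be positive\n";
    return 1;
  }
  int64_t block_prod = 0, out_batch = 0;
  if (!checked_mul(block0, block1, &block_prod) ||
      !checked_mul(batch, block_prod, &out_batch)) {
    std::cout << "overflow while computing the output batch size\n";
    return 1;
  }
  std::cout << "output batch: " << out_batch << "\n";
  return 0;
}
```
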
### Impact
The implementation of [`tf.ragged.constant`](https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/python/ops/ragged/ragged_factory_ops.py#L146-L239) does not fully validate the input arguments. This results in a denial of service by consuming all available memory:

```python
import tensorflow as tf

tf.ragged.constant(pylist=[], ragged_rank=8968073515812833920)
```

### Patches
We have patched the issue in GitHub commit [bd4d5583ff9c8df26d47a23e508208844297310e](https://github.com/tensorflow/tensorflow/commit/bd4d5583ff9c8df26d47a23e508208844297310e). The fix will be included in TensorFlow 2.9.0. We will also cherrypick this commit on TensorFlow 2.8.1, TensorFlow 2.7.2, and TensorFlow 2.6.4, as these are also affected and still in supported range.

### For more information
Please consult [our security guide](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md) for more information regarding the security model and how...

### Impact
The implementation of [`tf.raw_ops.QuantizedConv2D`](https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc) does not fully validate the input arguments:

```python
import tensorflow as tf

input = tf.constant(1, shape=[1, 2, 3, 3], dtype=tf.quint8)
filter = tf.constant(1, shape=[1, 2, 3, 3], dtype=tf.quint8)

# bad args
min_input = tf.constant([], shape=[0], dtype=tf.float32)
max_input = tf.constant(0, shape=[], dtype=tf.float32)
min_filter = tf.constant(0, shape=[], dtype=tf.float32)
max_filter = tf.constant(0, shape=[], dtype=tf.float32)

tf.raw_ops.QuantizedConv2D(
    input=input,
    filter=filter,
    min_input=min_input,
    max_input=max_input,
    min_filter=min_filter,
    max_filter=max_filter,
    strides=[1, 1, 1, 1],
    padding="SAME")
```

In this case, references get bound to `nullptr` for each argument that is empty (in the example, all arguments in the `bad args` section).

### Patches
We have...

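A minimal standalone sketch of the missing shape check (not the actual TensorFlow patch; `require_scalar` is a hypothetical helper): the min/max range arguments must be rank-0 scalars, so an empty tensor has to be rejected before its single element is read, which is where the `nullptr` reference binding happens.

```cc
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

// A tensor shape represented as its dimension sizes; rank 0 means scalar.
void require_scalar(const std::vector<long long>& dims,
                    const std::string& name) {
  if (!dims.empty()) {
    throw std::invalid_argument(name + " must be a rank-0 scalar tensor");
  }
}

int main() {
  try {
    require_scalar({}, "max_input");   // OK: rank-0 scalar
    require_scalar({0}, "min_input");  // rejected: empty 1-D tensor from the PoC
  } catch (const std::invalid_argument& e) {
    std::cerr << e.what() << "\n";
  }
  return 0;
}
```
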