Headline
CVE-2022-29212: Core dumped when invoking TFLite model converted using latest nightly TFLite converter (2.4.0.dev20200929) · Issue #43661 · tensorflow/tensorflow
TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, certain TFLite models created with the TFLite model converter would crash when loaded in the TFLite interpreter. The root cause is that during quantization the scale of values could be greater than 1, but the code always assumed sub-unit scaling. Because the code called QuantizeMultiplierSmallerThanOneExp, the TFLITE_CHECK_LT assertion would trigger and abort the process. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.
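To make the failure mode concrete, here is a minimal, illustrative Python sketch of the fixed-point multiplier decomposition TFLite performs, with the sub-unit check modeled as a plain assert. The function names mirror the TFLite internals, but the implementation below is an assumption for illustration, not the library's actual C++ code.

import math

def quantize_multiplier(real_multiplier):
    # Decompose real_multiplier into a Q31 fixed-point value and a power-of-two
    # exponent, in the spirit of TFLite's QuantizeMultiplier (illustrative only).
    if real_multiplier == 0.0:
        return 0, 0
    q, shift = math.frexp(real_multiplier)  # q in [0.5, 1), real = q * 2**shift
    q_fixed = int(round(q * (1 << 31)))
    if q_fixed == (1 << 31):  # rounding overflowed the Q31 range
        q_fixed //= 2
        shift += 1
    return q_fixed, shift

def quantize_multiplier_smaller_than_one_exp(real_multiplier):
    # Models the vulnerable path: TFLITE_CHECK_LT(real_multiplier, 1.0) assumes
    # sub-unit scaling, so a scale >= 1 aborts the whole process.
    assert 0.0 < real_multiplier < 1.0, "scale >= 1 triggers the abort"
    return quantize_multiplier(real_multiplier)

print(quantize_multiplier_smaller_than_one_exp(0.5))   # OK: sub-unit scale
print(quantize_multiplier_smaller_than_one_exp(1.25))  # AssertionError, mirroring the crash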
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or github SHA if from source): 2.4.0.dev20200929
Command used to run the converter or code if you’re using the Python API
import tensorflow as tf
import numpy as np

def wrap_frozen_graph(graph_def, inputs, outputs):
    # Wrap the frozen GraphDef as a callable TF2 concrete function.
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")
    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
    import_graph = wrapped_import.graph
    return wrapped_import.prune(
        tf.nest.map_structure(import_graph.as_graph_element, inputs),
        tf.nest.map_structure(import_graph.as_graph_element, outputs))

# Load the frozen graph and expose it as a concrete function.
graph_def = tf.compat.v1.GraphDef()
_ = graph_def.ParseFromString(open('minimal_093011.pb', 'rb').read())
dnn_function = wrap_frozen_graph(
    graph_def,
    inputs='import/first_graph_input:0',
    outputs='import_1/second_graph_output/Mean:0')

# Configure full-integer quantization with uint8 input/output.
converter = tf.lite.TFLiteConverter.from_concrete_functions([dnn_function])
converter.experimental_enable_mlir_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]

def representative_dataset_gen():
    # Random calibration sample matching the model's input shape.
    image = np.random.randint(low=0, high=255, size=(1, 480, 640, 3), dtype='uint8')
    yield [image]

converter.representative_dataset = representative_dataset_gen
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
model = converter.convert()
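For completeness, the converted flatbuffer then needs to be written to disk before the interpreter can load it. A minimal sketch, assuming the models/ directory and file name used by the invocation code below:

# Serialize the converted flatbuffer so the interpreter can load it.
with open('models/minimal_093011.tflite', 'wb') as f:
    f.write(model)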
Link to Google Colab Notebook
https://colab.research.google.com/drive/1U8UVDl6lIs1zKjfpFc7hrr3jAo-0eh_i?usp=sharing
Also, please include a link to the saved model or GraphDef
https://drive.google.com/file/d/1Hvr9hfvaxj3sBi0D0U0iAAe1kEaiJJWB/view?usp=sharing
Failure details
The conversion succeeds in that it generates a .tflite graph. However, when I invoke the graph, the process aborts with a core dump:
[1] 511859 abort (core dumped) python src/reproduce_minimal_tflite_test.py
Code used to invoke the graph (also included in the Colab notebook linked above):
import numpy as np
import tensorflow as tf

# Random uint8 input matching the model's expected shape.
image = np.random.randint(low=0, high=255, size=(1, 480, 640, 3), dtype='uint8')
tflite_model = tf.lite.Interpreter('models/minimal_093011.tflite')
tflite_model.allocate_tensors()
input_details = tflite_model.get_input_details()
tflite_model.set_tensor(input_details[0]['index'], image)
tflite_model.invoke()  # aborts here (SIGABRT)
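Note that a TFLITE_CHECK_* failure calls abort() in native code, so it cannot be caught as a Python exception. On unpatched versions, one way to exercise an untrusted model without taking down the parent process is to run the invocation in a subprocess; a minimal sketch, using the script path from this report:

import subprocess
import sys

# Run the repro in a child process; a CHECK failure kills only the child.
result = subprocess.run([sys.executable, 'src/reproduce_minimal_tflite_test.py'])
if result.returncode != 0:
    print('interpreter aborted, return code', result.returncode)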
Traceback
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff7dc0859 in __GI_abort () at abort.c:79
#2 0x00007fffb9386e42 in tflite::QuantizeMultiplierSmallerThanOneExp(double, int*, int*) ()
from /home/yousef/miniconda3/envs/tf2.3/lib/python3.7/site-packages/tensorflow/lite/python/interpreter_wrapper/_pywrap_tensorflow_interpreter_wrapper.so
#3 0x00007fffb9158090 in void tflite::ops::builtin::comparisons::(anonymous namespace)::ComparisonQuantized<signed char, &(bool tflite::reference_ops::GreaterFn<int>(int, int))>(TfLiteTensor const*, TfLiteTensor const*, TfLiteTensor*, bool) () from /home/yousef/miniconda3/envs/tf2.3/lib/python3.7/site-packages/tensorflow/lite/python/interpreter_wrapper/_pywrap_tensorflow_interpreter_wrapper.so
#4 0x00007fffb9158b7e in tflite::ops::builtin::comparisons::(anonymous namespace)::GreaterEval(TfLiteContext*, TfLiteNode*) ()
from /home/yousef/miniconda3/envs/tf2.3/lib/python3.7/site-packages/tensorflow/lite/python/interpreter_wrapper/_pywrap_tensorflow_interpreter_wrapper.so
#5 0x00007fffb9369713 in tflite::Subgraph::Invoke() () from /home/yousef/miniconda3/envs/tf2.3/lib/python3.7/site-packages/tensorflow/lite/python/interpreter_wrapper/_pywrap_tensorflow_interpreter_wrapper.so
#6 0x00007fffb936c1f0 in tflite::Interpreter::Invoke() () from /home/yousef/miniconda3/envs/tf2.3/lib/python3.7/site-packages/tensorflow/lite/python/interpreter_wrapper/_pywrap_tensorflow_interpreter_wrapper.so
#7 0x00007fffb90f7548 in tflite::interpreter_wrapper::InterpreterWrapper::Invoke() ()
from /home/yousef/miniconda3/envs/tf2.3/lib/python3.7/site-packages/tensorflow/lite/python/interpreter_wrapper/_pywrap_tensorflow_interpreter_wrapper.so
#8 0x00007fffb90eb6ee in pybind11::cpp_function::initialize<pybind11_init__pywrap_tensorflow_interpreter_wrapper(pybind11::module&)::{lambda(tflite::interpreter_wrapper::InterpreterWrapper&)#6}, pybind11::object, tflite::interpreter_wrapper::InterpreterWrapper&, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11_init__pywrap_tensorflow_interpreter_wrapper(pybind11::module&)::{lambda(tflite::interpreter_wrapper::InterpreterWrapper&)#6}&&, pybind11::object (*)(tflite::interpreter_wrapper::InterpreterWrapper&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call) () from /home/yousef/miniconda3/envs/tf2.3/lib/python3.7/site-packages/tensorflow/lite/python/interpreter_wrapper/_pywrap_tensorflow_interpreter_wrapper.so
#9 0x00007fffb90ecb39 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) ()
from /home/yousef/miniconda3/envs/tf2.3/lib/python3.7/site-packages/tensorflow/lite/python/interpreter_wrapper/_pywrap_tensorflow_interpreter_wrapper.so
#10 0x00005555556b9914 in _PyMethodDef_RawFastCallKeywords (method=0x55555694b100, self=0x7fffbb8c9270, args=0x7fffaf04dd98, nargs=<optimised out>, kwnames=<optimised out>)
at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:693
#11 0x00005555556b9a31 in _PyCFunction_FastCallKeywords (func=0x7fffc08de460, args=<optimised out>, nargs=<optimised out>, kwnames=<optimised out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:732
#12 0x000055555572639e in call_function (kwnames=0x0, oparg=<optimised out>, pp_stack=<synthetic pointer>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:4619
#13 _PyEval_EvalFrameDefault (f=<optimised out>, throwflag=<optimised out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3093
#14 0x00005555556b8e7b in function_code_fastcall (globals=<optimised out>, nargs=1, args=<optimised out>, co=<optimised out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:283
#15 _PyFunction_FastCallKeywords (func=<optimised out>, stack=0x7ffff6d615c0, nargs=1, kwnames=<optimised out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:408
#16 0x0000555555721740 in call_function (kwnames=0x0, oparg=<optimised out>, pp_stack=<synthetic pointer>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:4616
#17 _PyEval_EvalFrameDefault (f=<optimised out>, throwflag=<optimised out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3110
#18 0x0000555555668829 in _PyEval_EvalCodeWithName (_co=0x7ffff6cfa1e0, globals=<optimised out>, locals=<optimised out>, args=<optimised out>, argcount=<optimised out>, kwnames=0x0, kwargs=0x0, kwcount=0, kwstep=2, defs=0x0, defcount=0,
kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3930
#19 0x0000555555669714 in PyEval_EvalCodeEx (_co=<optimised out>, globals=<optimised out>, locals=<optimised out>, args=<optimised out>, argcount=<optimised out>, kws=<optimised out>, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0,
closure=0x0) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3959
#20 0x000055555566973c in PyEval_EvalCode (co=<optimised out>, globals=<optimised out>, locals=<optimised out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:524
#21 0x0000555555780f14 in run_mod (mod=<optimised out>, filename=<optimised out>, globals=0x7ffff6dcac30, locals=0x7ffff6dcac30, flags=<optimised out>, arena=<optimised out>)
at /tmp/build/80754af9/python_1598874792229/work/Python/pythonrun.c:1035
#22 0x000055555578b331 in PyRun_FileExFlags (fp=0x5555558c3100, filename_str=<optimised out>, start=<optimised out>, globals=0x7ffff6dcac30, locals=0x7ffff6dcac30, closeit=1, flags=0x7fffffffdd80)
at /tmp/build/80754af9/python_1598874792229/work/Python/pythonrun.c:988
#23 0x000055555578b523 in PyRun_SimpleFileExFlags (fp=0x5555558c3100, filename=<optimised out>, closeit=1, flags=0x7fffffffdd80) at /tmp/build/80754af9/python_1598874792229/work/Python/pythonrun.c:429
#24 0x000055555578c655 in pymain_run_file (p_cf=0x7fffffffdd80, filename=0x5555558c2870 L"src/reproduce_minimal_tflite_test.py", fp=0x5555558c3100) at /tmp/build/80754af9/python_1598874792229/work/Modules/main.c:462
#25 pymain_run_filename (cf=0x7fffffffdd80, pymain=0x7fffffffde90) at /tmp/build/80754af9/python_1598874792229/work/Modules/main.c:1652
#26 pymain_run_python (pymain=0x7fffffffde90) at /tmp/build/80754af9/python_1598874792229/work/Modules/main.c:2913
#27 pymain_main (pymain=0x7fffffffde90) at /tmp/build/80754af9/python_1598874792229/work/Modules/main.c:3460
#28 0x000055555578c77c in _Py_UnixMain (argc=<optimised out>, argv=<optimised out>) at /tmp/build/80754af9/python_1598874792229/work/Modules/main.c:3495
#29 0x00007ffff7dc20b3 in __libc_start_main (main=0x555555649c90 <main>, argc=2, argv=0x7fffffffdff8, init=<optimised out>, fini=<optimised out>, rtld_fini=<optimised out>, stack_end=0x7fffffffdfe8) at ../csu/libc-start.c:308
#30 0x0000555555730ff0 in _start () at ../sysdeps/x86_64/elf/start.S:103
Related news
### Impact
Certain TFLite models that were created using TFLite model converter would crash when loaded in the TFLite interpreter. The culprit is that during quantization the scale of values could be greater than 1 but code was always assuming sub-unit scaling. Thus, since code was calling [`QuantizeMultiplierSmallerThanOneExp`](https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/lite/kernels/internal/quantization_util.cc#L114-L123), the `TFLITE_CHECK_LT` assertion would trigger and abort the process.
### Patches
We have patched the issue in GitHub commit [a989426ee1346693cc015792f11d715f6944f2b8](https://github.com/tensorflow/tensorflow/commit/a989426ee1346693cc015792f11d715f6944f2b8). The fix will be included in TensorFlow 2.9.0. We will also cherrypick this commit on TensorFlow 2.8.1, TensorFlow 2.7.2, and TensorFlow 2.6.4, as these are also affected and still in supported range.
### For more information
Please consult [our security ...
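Given the patched releases listed above, a downstream consumer can gate model loading on the installed TensorFlow version. The sketch below is a convenience check; only the version numbers come from the advisory, while the parsing and comparison logic is an assumption for illustration:

import tensorflow as tf

# First patched point release on each affected 2.x minor line, per the advisory.
PATCHED = {6: (2, 6, 4), 7: (2, 7, 2), 8: (2, 8, 1)}

# Parse the leading numeric components of tf.__version__ (e.g. "2.8.0").
v = tuple(int(p) for p in tf.__version__.split('.')[:3] if p.isdigit())

if v >= (2, 9, 0):
    fixed = True                    # 2.9.0 and later carry the fix
elif len(v) == 3 and v[1] in PATCHED:
    fixed = v >= PATCHED[v[1]]      # patched point release on 2.6/2.7/2.8
else:
    fixed = False                   # older or unsupported release line
print('patched' if fixed else 'potentially vulnerable')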
TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, multiple TensorFlow operations misbehave in eager mode when the resource handle provided to them is invalid. In graph mode, it would have been impossible to perform these API calls, but migration to TF 2.x eager mode opened up this vulnerability. If the resource handle is empty, then a reference is bound to a null pointer inside TensorFlow codebase (various codepaths). This is undefined behavior. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.