Uint8 quantized model throws "struct.error" #120

Open
@robinvanemden

Description

Using WinMLTools to quantize a model from 32-bit floating point to 8-bit integers results in the following error:

```
Traceback (most recent call last):
  File "/usr/local/bin/onnx-cpp", line 11, in <module>
    load_entry_point('deepC==0.13', 'console_scripts', 'onnx-cpp')()
  File "/usr/local/lib/python3.6/dist-packages/deepC/scripts/onnx2cpp.py", line 65, in main
    dcGraph = parser.main(onnx_file, bundle_dir, optimize=False, checker=False)
  File "/usr/local/lib/python3.6/dist-packages/deepC/scripts/read_onnx.py", line 489, in main
    self.addParams(param);
  File "/usr/local/lib/python3.6/dist-packages/deepC/scripts/read_onnx.py", line 129, in addParams
    param_vals = struct.unpack(pack_format*param_len, param.raw_data) ;
struct.error: unpack requires a buffer of 432 bytes
```

The traceback seems to indicate that deepC ought to be able to convert the model but runs into a minor issue; would you agree? Attached is the uint8-optimized ResNet CIFAR model we used to test the 8-bit integer quantized model.

model.zip
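For what it's worth, the byte count in the error would be consistent with `read_onnx.py` unpacking the tensor with a 4-byte format (e.g. `'f'` for float32) while the quantized tensor's `raw_data` holds 1-byte uint8 values: 432 bytes is exactly 108 four-byte elements. A minimal sketch of that mismatch (the element count of 108 is hypothetical, chosen only to reproduce the same message):

```python
import struct

# Hypothetical quantized parameter: 108 uint8 values -> 108 bytes of raw_data.
param_len = 108
raw_data = bytes(range(param_len))

# Unpacking with a float32 format asks for 108 * 4 = 432 bytes,
# which raises the same struct.error as in the traceback above.
try:
    struct.unpack('f' * param_len, raw_data)
except struct.error as e:
    print(e)  # unpack requires a buffer of 432 bytes

# Unpacking with the matching uint8 format ('B', 1 byte each) succeeds.
vals = struct.unpack('B' * param_len, raw_data)
print(len(vals))  # 108
```

If that guess is right, the parser would just need to pick the pack format (and element size) from the tensor's declared data type rather than assuming float32.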
