Age | Commit message | Author |
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
According to [1], Keras 2 remains available as the `tf-keras` package;
keep using TF-Keras, the pure-TensorFlow implementation of Keras [2], as usual
[1] https://github.com/keras-team/keras/commit/d7268f3b32312cacd79d12b872ebb51ee98e6354
[2] https://github.com/keras-team/tf-keras
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
The demos are obsolete and not maintained
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
The -dl (download) layers are not used anywhere but in Wind River products
so drop the line that was added in:
5044996 layer.conf: add dl layer to LAYERRECOMMENDS
Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
|
|
For cross compiling, running tf_python_api_gen at build time fails
because it loads the target library
Build api_gen_binary_target as a host tool and load the native library at build time instead
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
...
root@intel-x86-64:~# label_image.lite
INFO: Loaded model /usr/share/label_image/mobilenet_v1_1.0_224_quant.tflite
INFO: resolved reporter
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
ERROR: failed to get XNNPACK profile information
...
Build with option --define tflite_with_xnnpack=false to skip the XNNPACK error
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
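As a hedged sketch of how the option above might be wired in (the variable
name TF_BAZEL_EXTRA_OPTS is hypothetical; only the --define flag itself comes
from the commit message):

```shell
# Append the define to the bazel options so TFLite is built without XNNPACK.
# TF_BAZEL_EXTRA_OPTS is an illustrative variable name, not the recipe's.
TF_BAZEL_EXTRA_OPTS="${TF_BAZEL_EXTRA_OPTS:-} --define tflite_with_xnnpack=false"
echo "${TF_BAZEL_EXTRA_OPTS}"
```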
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
It is required by tensorflow 2.14.0 and keras 2.14.0
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
- Drop obsolete patches:
0001-fix-compile-error-for-gcc-13.1.patch
0001-Revert-set-distinct_host_configuration-false-by-defa.patch
- Add TF_NEED_CLANG=0 and TF_NEED_ROCM=0 to explicitly disable
clang and rocm
- Set CROSSTOOL_PYTHON_INCLUDE_PATH for tensorflow-native
- Revert hermetic Python in Tensorflow, and use host python
  (python3-native) as a dependency of tensorflow
- Fix build failure on gcc 13 (missing #include <cstdint>)
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
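A minimal sketch of the configure-time environment described above.
TF_NEED_CLANG and TF_NEED_ROCM are read by tensorflow's ./configure script;
where exactly the recipe exports them is an assumption:

```shell
# Answer the configure script's clang/ROCm questions non-interactively.
export TF_NEED_CLANG=0   # build with gcc, do not probe for clang
export TF_NEED_ROCM=0    # disable the ROCm (AMD GPU) backend
echo "clang=${TF_NEED_CLANG} rocm=${TF_NEED_ROCM}"
```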
|
|
Fix build failure on gcc 13
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
...
/usr/include/bits/string_fortified.h:59:10: error: '__builtin_memset' may write between 16 and 2147483647 bytes into a region of size 15 [-Werror=stringop-overflow=]
...
Add -Wno-stringop-overflow to the host gcc flags
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
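A sketch of where such a flag could be added; whether the recipe uses OE's
BUILD_CFLAGS (the host compiler flags) or bazel's --host_copt is an
assumption:

```shell
# Silence the fortify stringop-overflow false positive in host-tool builds.
# BUILD_CFLAGS is OE's variable for host (build machine) compiler flags.
BUILD_CFLAGS="${BUILD_CFLAGS:-} -Wno-stringop-overflow"
echo "${BUILD_CFLAGS}"
```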
|
|
...
FATAL: $USER is not set, and unable to look up name of current user: (error: 0): Success
...
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
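One plausible fix for the error above, sketched under the assumption that
exporting USER before invoking bazel is sufficient (bazel only looks up the
current user when $USER is unset):

```shell
# Make sure $USER is set in the build environment, falling back to the
# current uid's name; 'builder' is an arbitrary last-resort placeholder.
export USER="${USER:-$(id -un 2>/dev/null || echo builder)}"
echo "USER=${USER}"
```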
|
|
external/com_google_absl/absl/strings/internal/str_format/extension.h:34:33: error: found ':' in nested-name-specifier, expected '::'
34 | enum class FormatConversionChar : uint8_t;
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
...
tensorflow/tsl/platform/denormal.cc:20:1: note: 'uint32_t' is defined in header '<cstdint>'; did you forget to '#include <cstdint>'?
19 | #include "tensorflow/tsl/platform/platform.h"
+++ |+#include <cstdint>
...
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
The bazel command is a self-extracting zip binary;
calling patchelf-uninative on it may occasionally break it:
...
bazel/bazel.real: Input/output error
...
Since commit [89bb76d bazel.bbclass: Fix build with
bazel from sstate-cache] calls patchelf-uninative on
the extracted binaries, disable uninative for the bazel command
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
After gcc was upgraded to 13.1, a compile error appeared:
|external/boringssl/src/crypto/bytestring/cbb.c:388:14: error: storing
the address of local variable 'child' in '*cbb.child' [-Werror=dangling-pointer=]
| 388 | cbb->child = out_contents;
| | ~~~~~~~~~~~^~~~~~~~~~~~~~
Add -Wno-dangling-pointer to work around the error
...
|tensorflow/lite/kernels/internal/spectrogram.cc:46:22: error: 'uint32_t' was not declared in this scope
| 46 | inline int Log2Floor(uint32_t n) {
| ^~~~~~~~
|tensorflow/lite/kernels/internal/spectrogram.cc:20:1: note: 'uint32_t' is defined in header '<cstdint>'; did you forget to '#include <cstdint>'?
...
Add '#include <cstdint>'
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Random errors can occur when bazel is taken from sstate-cache and the
dynamic loader is no longer available. Setting DYNAMIC_LOADER and
letting populate_sysroot_setscene modify the UNINATIVE_LOADER is not
enough. That way we just modify the loader of bazel binary, but bazel is
also a self-extracting zip file with built-in binaries that need
dynamic loader modification to work.
Example error; as you can see, it is quite misleading:
| An error occurred during the fetch of repository 'local_config_cuda'
| Cuda Configuration Error: Invalid cpu_value
| Skipping 'tensorflow/lite/tools/benchmark:benchmark_model': no such package '@local_config_cuda//cuda
To fix this, execute 'bazel version' in do_configure to unpack
output_user_root, and run patchelf-uninative on the ELF executables. Then
change the modification time to a future date so that bazel does not see
the modification.
Signed-off-by: Tomasz Dziendzielski <tomasz.dziendzielski@gmail.com>
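The steps above can be sketched as a shell helper (the paths, and the use of
`file` to spot ELF executables, are assumptions; the real change lives in the
bazel recipe's do_configure):

```shell
# Fix the interpreter on bazel's extracted ELF executables and push their
# mtimes into the future so bazel does not treat them as modified.
patch_unpacked_bazel() {
    root="$1"     # bazel output_user_root (assumed location)
    loader="$2"   # uninative dynamic loader path
    find "$root" -type f 2>/dev/null | while read -r f; do
        if file "$f" 2>/dev/null | grep -q 'ELF .*executable'; then
            patchelf-uninative --set-interpreter "$loader" "$f"
            touch -d '2099-01-01' "$f"   # future mtime hides the change
        fi
    done
}
```

In the recipe this would run right after 'bazel version', which is what
unpacks output_user_root in the first place.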
|
|
Using the TOPDIR variable breaks the sstate-cache every time the build
directory changes, totally breaking builds from sstate-cache across
different workspaces. Change it to TMPDIR, which is included in
BB_HASHEXCLUDE_COMMON.
Another problem is disabling UNINATIVE_LOADER, which causes sstate-cache
artifacts not to work in different workspaces. On
populate_sysroot_setscene, patchelf-uninative --set-interpreter is run
with an empty argument, which does not change the interpreter path; the
bazel binary then ends up with a path to an interpreter that might not
exist, since bazel was taken from the sstate-cache.
Remove the UNINATIVE_LOADER = "" so that uninative.bbclass can
correctly replace the interpreter path and make the bazel binary usable.
One might think this will reintroduce the original issue behind
disabling uninative, which was a corrupted java file (see commit
dd7642b), but that problem no longer seems to occur, and it was arguably
not the correct solution anyway: the loader is included in the binary
regardless, so the setting did not really disable it, only the yocto
functionality around uninative. If the error re-occurs, a different
solution should be found.
Signed-off-by: Tomasz Dziendzielski <tomasz.dziendzielski@gmail.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Replace the qemuarm override with arm, applying the fix to all 32-bit arm BSPs
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Drop the revert patch, and fix the error directly
|tensorflow/compiler/mlir/quantization/tensorflow/debugging/mlir_dump.cc:93:10:
error: could not convert 'dump_file' from 'std::unique_ptr<llvm::raw_fd_ostream>'
to 'absl::lts_20220623::StatusOr<std::unique_ptr<llvm::raw_fd_ostream> >'
| return dump_file;
^~~~~~~~~
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
The official tensorflow 2.12.0 has been released [1]; upgrade to it
[1] https://github.com/tensorflow/tensorflow/releases
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
The recipe tensorflow does not support qemuarm, so drop
0001-fix-XNNPACK-build-failure-for-qemuarm.patch
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
$ find packages-split/ -name libtensorflow_cc.so*
packages-split/tensorflow-dbg/usr/lib64/.debug/libtensorflow_cc.so
packages-split/tensorflow-dbg/usr/lib64/python3.11/site-packages/tensorflow/.debug/libtensorflow_cc.so.2
packages-split/libtensorflow-c/usr/lib64/libtensorflow_cc.so
packages-split/python3-tensorflow/usr/lib64/python3.11/site-packages/tensorflow/libtensorflow_cc.so.2
Package libtensorflow-c provides /usr/lib64/libtensorflow_cc.so
and package python3-tensorflow provides /usr/lib64/python3.11/
site-packages/tensorflow/libtensorflow_cc.so.2
The OE build system's shared library resolver incorrectly reports an
error about multiple shlib providers:
|ERROR: python3-tensorflow: Multiple shlib providers for libtensorflow_cc.so.2:
libtensorflow-c, python3-tensorflow (used by files: tmp-glibc/work/
corei7-64-wrs-linux/tensorflow/2.12.0rc1-r0/packages-split/python3-tensorflow/
usr/lib64/python3.11/site-packages/tensorflow/python/_pywrap_tfe.so)
Ignore the OE shared library resolver
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|
|
It does not make sense to run tensorflow on 32-bit arm BSPs.
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
2.12.0rc1
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
|
|
According to [1][2], adjust the default RAM and CPU for bazel,
using 25% of each for the bazel build
[1] https://bazel.build/reference/command-line-reference#flag--local_ram_resources
[2] https://bazel.build/reference/command-line-reference#flag--local_cpu_resources
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
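A sketch of computing the 25% values on the build host. The flag names come
from the bazel command-line reference linked above; deriving the totals from
nproc and /proc/meminfo is an assumption about how the recipe does it:

```shell
# Use a quarter of the host's CPUs and RAM (in MB) for bazel's local
# resource limits, with a floor of 1 for each.
cpus=$(nproc)
mem_mb=$(( $(awk '/^MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
bazel_cpus=$(( cpus / 4 ));  [ "$bazel_cpus" -ge 1 ] || bazel_cpus=1
bazel_ram=$(( mem_mb / 4 )); [ "$bazel_ram" -ge 1 ] || bazel_ram=1
echo "--local_cpu_resources=${bazel_cpus} --local_ram_resources=${bazel_ram}"
```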
|
|
For arm64 BSPs, running 'bitbake lib32-tensorflow lib32-tensorflow-lite' causes
multiple compile failures. It does not make sense to build lib32-tensorflow,
so exclude the tensorflow recipe from multilib builds
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|
|
Commit [8235508 tensorflow: split sub packages] split the sub-package
python3-tensorflow out of the tensorflow package.
This commit corrects the python3 runtime dependencies of the
python3-tensorflow package
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|
|
tensorflow 2.12 supports python 3.11, so remove the old python3 support
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|
|
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|
|
Drop obsolete patch:
- 0001-support-to-compat-python-3.11.patch
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|
|
Remove the python 3.10 requirement check: tensorflow 2.12 supports
python 3.11, so drop the old python 3.10 requirement
Refresh patches:
- 0001-add-yocto-toolchain-to-support-cross-compiling.patch
- 0001-support-32-bit-x64-and-arm-for-yocto.patch
Drop obsolete patches:
- 0003-Support-python-3.11-changes.patch
- 0004-protobuf-fix-build-with-Python-3.11.patch
- 0005-cpython-support-Python-3.11.patch
- 0006-wrapt-support-Python-3.11.patch
Add -Wno-stringop-overflow to CFLAGS to work around a gcc error
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|
|
Backport patches from python 3.11 to add a new standard library module,
`tomllib`, for parsing TOML.
The implementation is based on Tomli (https://github.com/hukkin/tomli).
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|
|
Improve 0001-HttpDownloader-save-download-tarball-to-distdir.patch
to fix download failure of tensorflow 2.11.0
Issue: LINUXEXEC-24074
(LOCAL REV: NOT UPSTREAM) -- will send to upstream
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|
|
During the tensorflow build, Bazel downloads a full list of files and
does not cache the downloaded archives.
For Yocto, in order to support offline builds, we need to cache the
downloaded archives, so hardcode the go SDK list files rather than
fetching them from the internet
Issue: LINUXEXEC-24074
(LOCAL REV: NOT UPSTREAM) -- will send to upstream
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|
|
Issue: LINUXEXEC-24074
(LOCAL REV: NOT UPSTREAM) -- will send to upstream
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|
|
Issue: LINUXEXEC-24074
(LOCAL REV: NOT UPSTREAM) -- will send to upstream
Signed-off-by: Hongxu Jia <hongxu.jia@eng.windriver.com>
|